
Hackers’ self-selection in crowdsourced bug bounty programs

Auto-sélection des hackers dans les programmes « crowdsourcés » de chasse aux bogues
Arrah-Marie Jo
p. 83-132

Abstract

A "bug bounty program" rewards individuals when they find security flaws in a software product or system. It is a form of crowdsourcing increasingly used by companies that want to improve the security of their systems, and a representative mechanism of the vulnerability market, which capitalizes on third-party contributions. Through the analysis of panel data on 156 bug bounty programs run on the HackerOne platform, we show how hackers' perceived uncertainty about being rewarded, as defined by the level of detail of the contractual terms, affects their decision to participate and, consequently, the effectiveness of the program.


Author's notes

I thank Marc Bourreau for his support and the three anonymous reviewers for their valuable comments, which were of great help in improving the paper. I also thank Maya Bacache, Thierry Pénard, Christine Zulehner, and the seminar audiences at 3EN 2018, EARIE 2018, DIF Lyon 2018, ZEW summer Workshop 2018, and at the 2019 TSE Digital Seminar for their useful comments on a previous version of the paper.

Full text

1. Introduction

1Five years ago, a 17-year-old in India discovered a serious vulnerability in several airline booking systems that allowed travelers to get free plane tickets. Despite his efforts to contact the airline companies and alert them about the flaw, only one company took him seriously and reacted.2 This is less true nowadays: an increasing number of companies seek to collaborate with benevolent vulnerability researchers—the so-called white-hat hackers—to improve their systems’ security. Methodical approaches have been developed to work with independent researchers, such as offering incentives in the form of a “bounty”. Organizations started running Vulnerability Research Programs (VRPs)—also commonly called Bug Bounty Programs—which give monetary compensation to crowdsourced contributors in exchange for information about vulnerabilities. Along with them, web platforms called “bug bounty platforms” have emerged, hosting and managing these VRPs as a third party (e.g., HackerOne, BugCrowd, Yeswehack).

2Launching a VRP has become a typical way to improve software and system security and is becoming accepted as a normal part of the software development lifecycle. However, managing a successful VRP is not an easy task. One of the key challenges companies face is to strike the right balance between attracting enough participants and setting sufficiently high standards for participation in order to limit the proportion of low-value participations.3 Indeed, a VRP aims to benefit from the diversity of participants, so it is important to let a large pool of individuals participate. Yet, each participation induces a cost, as it requires dedicated resources to sort the relevant participations from those which are invalid, and to communicate with participants.4

  • 5 For instance, HackerOne allows hackers to submit a report only when they meet a given Level of vali (...)

3One method applied by bug bounty platforms to reduce the rate of invalid participations is to allow participation only by individuals who have been sufficiently efficient in the past.5 Unfortunately, one can also lose potentially valuable participations by applying such restrictive policies (Zhao, Laszka, and Grossklags, 2017). Beyond setting such a minimum quality standard, the contest’s policy—i.e., the terms and rules of the contest—may shape the outcome of the contest.

4The policy of a crowdsourced contest such as a VRP is comparable to an employment contract, as it defines the contractual relationship between the contest owner and participants, especially by specifying the compensation scheme the contest offers to participants and what it expects in exchange. The relationship between workers’ performance and the attributes of an employment contract has been extensively studied by economists. The literature distinguishes at least two types of effects. First, a compensation scheme can affect a worker’s performance—for instance, through a pay-for-performance scheme—by inducing a certain level of effort from the worker, which may be referred to as an effort effect. Secondly, it can sort workers according to their personal attributes. For example, a number of empirical works show that more productive workers systematically prefer variable-pay to fixed-pay schemes (e.g., Dohmen and Falk, 2011). The possibility that agents with different individual characteristics are attracted by different pay schemes and therefore self-select into particular forms of contracts may be referred to as a self-selection effect.

5In the same way as an employment contract, the characteristics of a contest’s policy may affect both the level of effort provided by participants and the type of individuals who choose to participate in the contest. In this paper, we focus on the second aspect, that is, how the attributes of a VRP affect an individual’s decision to participate. In particular, we are interested in how the completeness of the contract offered by a VRP affects a hacker’s choice to participate. By completeness of the contract, we mean how much information the VRP’s policy provides about the compensation scheme and about what it expects as an outcome.

6Crowdsourced innovation contests like VRPs present several important specificities that make it difficult to derive the answer to our question directly from the case of a standard employment contract. First, the number of participants—the number of brains that work on a problem—is an important factor in the effectiveness of a crowdsourced innovation contest (Terwiesch and Xu, 2008; Boudreau, Lacetera, and Lakhani, 2011). A contest policy should thus be applicable and attractive to a wide panel of individuals. At the same time, it has to be specific enough to provide adequate incentives and streamline the search process. Secondly, in VRPs, participants are asked to find new security flaws that were previously unknown. That is, they are asked to find an innovative way to penetrate a system rather than to carry out predefined tasks, as is the case on other crowdsourcing platforms such as Amazon Mechanical Turk. Certain types of individuals may be more qualified to innovate. Solvers who have extensive knowledge and experience in the problem domain could be most effective. Or, on the contrary, technical and social marginality could be an advantage in successfully solving the problem (Jeppesen and Lakhani, 2010). All in all, the success of an innovation contest largely depends on sorting and attracting the right kind of solvers. Third, a VRP offers only variable pay: participants are compensated only if their contributions are relevant enough, regardless of the effort they have actually provided. Variable-pay schemes are likely to attract more productive workers than fixed-pay schemes, but they also induce a sorting effect on other attributes, such as relative self-assessment or risk preference (Dohmen and Falk, 2011), that might alter the effectiveness of a contest.

7The purpose of this study is to examine how the completeness of the contract proposed by a VRP affects its effectiveness. As the goal for a bug bounty program is to find as many relevant vulnerabilities as possible (i.e., it is interested in maximizing the sum across all outcomes, as opposed to a one-prize contest in which the goal is to maximize the value of the highest outcome), we examine the effect of contract type both on quantity (the number of participations) and on quality (whether it attracts more skilled and more experienced participants).

8Our analysis is based on publicly available data from the web platform HackerOne. We use an unbalanced panel data set on 156 bug bounty programs run from January 2015 to February 2019. VRPs on this platform are free to choose the level of information they provide about the contractual terms through a written policy. They can provide more or less detail about their compensation scheme, from fully specifying the payouts for each task to retaining a large degree of discretion about the monetary rewards and the targeted scope. They can also modify their policy over time. We consider that the level of information provided in the written policy reflects the degree of completeness of the contract the contest offers. We find that the more precise and detailed the policy is, the more participants it attracts. However, it also attracts participants with more heterogeneous performance and reduces the average quality of participants. In particular, top hackers are not necessarily attracted by VRPs providing more information. On the contrary, leaving more uncertainty about the monetary rewards and the targeted scope generates fewer, more homogeneous participants, but of higher quality on average.

9This paper proceeds as follows. Section 2 reviews the relevant literature, Section 3 develops our hypotheses, and Section 4 describes the data and estimation strategy. Section 5 reports the results, and Section 6 concludes.

2. Related literature

10Our paper is closely related to three streams of research. The first is the economics of information security. This literature studies the potential market failures causing information systems insecurity. Various questions are addressed, from modeling the interaction between attackers and defenders (Varian, 2004; Bohme and Moore, 2010) to examining the role of different liability and vulnerability information disclosure regimes (Kannan and Telang, 2005; Arora, Caulkins, and Telang, 2006; Kim, Chen, and Mukhopadhyay, 2011; August and Tunca, 2011; Lam, 2016) and the risk-sharing and coordination possibilities between vendors and users (August and Tunca, 2006; Cavusoglu, Cavusoglu, and Zhang, 2008; Kim, Chen, and Mukhopadhyay, 2009). Although there are far more theoretical studies than empirical ones, this literature contributes to a large number of applied areas related to information security, such as software vulnerability discovery and patching, security investment decisions and market insurance, network security, and payment system security. We contribute empirically to this literature, first by studying a marketplace—a market for software vulnerabilities, where hackers sell vulnerability information to software vendors and companies—that has barely been studied so far because it is so new, and secondly by studying how the design of a bug bounty program affects the contributions of individual researchers.

  • 6 They suggest that in VRPs, a penalty system for invalid reports is more efficient than applying a m (...)
  • 7 See definition and description of a bug bounty platform in Subsection 4.1.

11Among the few papers that focus on Vulnerability Research Programs, Finifter, Akhawe, and Wagner (2013) analyze two programs run by big pioneers in the vulnerability research community (Google and Mozilla) and examine whether running a VRP is economically profitable for a firm. Zhao et al. (2017) develop an analytical framework that compares different policies aimed at reducing the number of invalid reports.6 To our knowledge, there are only two empirical studies that analyze data from bug bounty platforms.7 Zhao, Grossklags, and Liu (2015) compare the currently largest bug bounty platform, HackerOne, run by a US-based company, to the well-known Chinese bug disclosure platform Wooyun. They compare the trend in the vulnerabilities discovered on the two platforms, the different reward structures VRPs offer, and how offering monetary incentives attracts more participants to a VRP. Maillart, Zhao, Grossklags, and Chuang (2017) also use data from HackerOne and show that the number of participations in a VRP decreases considerably over its duration and that hackers strategically switch to new programs when these become available.

12Along with these two papers, our paper is one of the very few to provide an empirical analysis of bug bounty platforms. Beyond the principal characteristics of VRPs already examined by existing papers—such as the effects of monetary incentives or the decreasing probability of finding new vulnerabilities—we identify an important mechanism that affects the effectiveness of a VRP. We are the first to focus on how the amount of information provided by a VRP about its contractual terms affects its effectiveness. We liken a VRP’s policy to an employment contract proposed by a firm to workers and study how the perceived uncertainty about obtaining a reward affects a worker’s choice to participate in the contest. Our data set is also unique, in that it is a recent and large panel data set on VRPs run by diverse types of organizations and managed on a single platform. As we have a panel data set, we were able to account for the different fixed effects and reliably identify the effect attributable to a change in a VRP’s policy. Moreover, we use both quantitative (the number of participations) and qualitative (the number of top hackers, participants’ average quality and its variance) information that defines the outcome of a contest.

13Our paper is also related to the literature on innovation contests and tournaments. VRPs are a type of contest in which the organizer commits to rewarding the participants according to the rules and terms it defines, and participants spend resources in order to win the compensation. For each new security flaw, only the first person to find it is rewarded. A VRP is thus close to an innovation contest in that its goal is to find a new idea—an innovative way to penetrate the system and secure it. Economists have studied the optimal design of contests from various angles, mainly how to allocate the prizes (Moldovanu and Sela, 2001; Archak and Sundararajan, 2009; Liu, Yang, Adamic, and Chen, 2014) and whether free entry or a restricted number of participants yields better outcomes (Fullerton and McAfee, 1999; Terwiesch and Xu, 2008; Boudreau et al., 2011).

  • 8 They show empirically that greater rivalry between the participants in a contest reduces the incent (...)

14Like Boudreau et al. (2011), we are interested in how the degree of uncertainty faced by participants affects the outcome of the contest.8 However, our scope and approach differ in several respects. First, we study a type of contest increasingly used with the rise of crowdsourcing but barely studied so far. As mentioned earlier, in contests like VRPs, the goal is to maximize the sum of the outcomes, while “traditional” innovation contests like those launched on the well-known web platform InnoCentive aim at selecting one best solution. Secondly, in Boudreau et al. (2011), the degree of uncertainty is measured by the number of problem domains on which a given solution draws. They focus on the fact that participants exert less effort when they face more uncertainty about solving the problem. In our case, the uncertainty comes from the level of information provided by the contest, and we are interested in how preferences over this uncertainty determine which types of participants are attracted. Lastly, the results we obtain differ from the findings of Boudreau et al. (2011). In our work, we find that uncertainty attracts more skilled participants, which has a positive effect on the overall outcome, while in their case, it is the number of participants that compensates for the reduced effort of each individual due to problem uncertainty.

15Our analysis also draws on results from the rich body of literature on employment contracts. Specifically, we are interested in the incentive schemes used by firms to attract specific types of workers, namely the self-selection effect as defined in Salop and Salop (1976) or Chow (1983). Analytical works show that individuals with higher skills are more likely to choose a performance-based pay scheme than low-skilled workers (e.g., Salop and Salop, 1976; Demski and Feltham, 1978; Lazear, 2000b; Jensen, 2003). The basic idea is that a worker evaluates the match between his self-perceived personal attributes and the perceived attributes of available employment contracts and selects the contract that maximizes his expected utility. This theory is supported by a number of empirical papers. Most of them are laboratory experiments (Chow, 1983; Waller and Chow, 1985; Cadsby, Song, and Tapon, 2007; Eriksson, Teyssier, and Villeval, 2009; Dohmen and Falk, 2011), except for a field experiment by Fehrenbacher, Kaplan, and Pedell (2017) and the studies of Lazear (2000a,b) based on a large data set covering an auto glass company’s workforce.

16The originality of our work also comes from the fact that we investigate the mechanism of a crowdsourcing contest by referring to research applied to a standard employment framework. In particular, our focus is on participants’ self-selection, while studies on contests are more concerned with the relationship between the design of the contest and the effort exerted by participants.

17To our knowledge, Eriksson et al. (2009) is the only paper that studies the self-selection effect in the context of tournaments. In a laboratory experiment, they show that when workers are allowed to choose between performance-based pay and a tournament, there is a considerable reduction in effort variance among contestants in the tournament. They suggest that this is because subjects self-select their payment scheme according to their degree of risk aversion. We rely on their findings and examine whether their arguments can be applied to a more general case where the degree of uncertainty perceived by the participants differs according to the level of information the contest provides about the contractual terms.

3. Hypothesis development

18The literature on tournaments and innovation contests identifies two opposing effects of the number of contestants on a contest’s efficiency. The first is how competition between participants affects the amount of effort exerted by each participant. Related to this effect, analytical works (e.g., Fullerton and McAfee, 1999; Moldovanu and Sela, 2001; Terwiesch and Xu, 2008) suggest that having many contributors working on a single issue leads to a lower equilibrium effort for each contributor, because each faces a greater risk of not being rewarded for the effort exerted. Empirical works have also provided evidence of the effort-reducing impact of an increased number of participants (Casas-Arce and Martínez-Jerez, 2009; Garcia and Tor, 2009).

19The second effect is on the probability of finding a solution by increasing the number of “brains” working on a subject. Terwiesch and Xu (2008) argue that a larger number of participants is not necessarily inefficient because it brings a more diverse set of solutions, which can outweigh the negative effect from the underinvestment of each individual. Boudreau et al. (2011) empirically show that in the case of problems with high uncertainty, adding additional participants increases the overall contest performance. In fact, this second effect—the positive effect of having a large number of solvers—is particularly applicable to innovation contests, in which the discovery of a solution and thus obtaining a reward presents a high degree of uncertainty.

20Regarding how uncertainty affects an individual’s decision to participate, Eriksson et al. (2009) suggest that when participants are free to choose whether or not to participate in a contest, the uncertainty of success sorts the participants before they enter the contest. The idea is that, facing uncertainty, some individuals choose the minimum effort, that is, they drop out of the competition, securing the loser’s prize without bearing any cost. In sum, if there is too much uncertainty about winning a reward, or if it is perceived as such by potential participants, the contest may lose some contributors who would otherwise have exerted an effort, however small.

21In this paper, we consider that the uncertainty about being rewarded is induced by contractual uncertainty. In fact, the level of information provided by a VRP, which determines contractual uncertainty, reflects the extent to which the contest owner commits to internalizing the reward uncertainty borne by participants: the more detailed and accurate the information the contest provides about the compensation terms and the scope of work, the better participants can target their efforts and the lower their uncertainty about being rewarded.

22Based on this discussion, we predict the following about the relationship between the degree of completeness of a VRP’s policy and the number of participants.

23Hypothesis 1. A VRP that provides more detailed information about the contractual terms, i.e., less uncertainty about winning a reward, attracts a larger number of participants.

24According to Terwiesch and Xu (2008) and Boudreau et al. (2011), the uncertainty about being rewarded may negatively affect the overall outcome, as it reduces the number of participants who work on the problem. However, Eriksson et al. (2009) argue that the degree of uncertainty also limits the type of participants who actually enter the contest, hence leading to an ex-ante selection process. They show that when workers are given the possibility to choose the degree of uncertainty of their payment scheme, they are sorted by their attributes. This results in a pool of participants whose performance varies less under contracts with more uncertainty, such as tournaments (contests), than under contracts with more certainty, such as piece-rate payment schemes.

25Based on this reasoning, we predict the following:

26Hypothesis 2. More uncertainty about the contractual terms attracts participants who are more homogeneous in their skills and experience.

27Finally, these authors argue that when individuals are offered the possibility of choosing whether or not to participate in a contest, those who have a higher probability of winning choose to enter, which actually has a positive effect on the overall efficiency of tournaments.

28More generally, the main idea suggested by studies on the self-selection effect in employment is that ex-ante sorting effects contribute significantly to output differences between incentive systems. For instance, Dohmen and Falk (2011) show in a laboratory experiment that output under all variable-payment schemes (piece rate, tournament, or revenue-sharing schemes) is higher than under the fixed-wage regime and that this output difference is mainly attributable to productivity sorting. That is, when facing the alternative between more or less uncertain payments, more productive workers systematically prefer a payment scheme linked to their own performance even though it is riskier. Additionally, in payment schemes like tournaments, in which compensation depends on relative performance, they show that relative self-assessment plays an important role in sorting into tournaments. Considering that hackers who have participated more often in other VRPs in the past (i.e., who have more experience on the platform) are more likely to have a better relative self-assessment, we hypothesize the following:

29Hypothesis 3. A VRP with more uncertainty about the contractual terms attracts more skilled and more experienced participants.

4. Data and empirical framework

4.1. Data

30For the purpose of our analysis, we collected publicly available data from HackerOne. HackerOne is a well-known web platform, created in November 2013, which matches companies that want to run Vulnerability Research Programs (VRPs) with white-hat hackers who want to participate in such programs. HackerOne is currently the dominant bug bounty platform on the market. Bug bounty platforms such as HackerOne differ from innovation crowdsourcing platforms such as InnoCentive, as VRPs do not set out to choose one winner with the best solution but rather look for as many valuable contributions as they can get. That is, in VRPs, all relevant submissions are rewarded. VRPs also differ from crowdsourcing platforms such as Amazon Mechanical Turk, on which workers are usually asked to carry out very simple tasks.

31In order to launch a VRP, the organizing company first needs to define and publish its policy on the platform. The policy provides information about the rules and the contractual terms of the contest: the vulnerability types the VRP is looking for, the scope it is targeting, the reward structure, a vulnerability disclosure agreement, etc. On HackerOne, it falls to each VRP to define its policy and the level of information it provides within this policy. Some VRPs provide a minimal description of their expectations, while others develop detailed sections describing the targeted scope of the VRP, the requirements for being eligible for a bounty, or the detailed reward structure and amounts. In Figure 3 in the Appendix, we provide an example of a VRP on HackerOne that gives only basic information about the compensation and what it is looking for, while Figure 4 is an example of a VRP providing a greater amount of information about the compensation and about what it expects in exchange. A VRP can also modify its policy after the program has been launched: it can add new details, change the reward conditions, add bonus rewards, etc.

32In our study, we only consider VRPs that offer monetary rewards, that are publicly accessible, and that do not pre-select their participants. Any hacker who has an account on the platform can participate in these public VRPs. Hackers can browse information about active VRPs in several ways. They can visit a VRP’s webpage, where the VRP presents its policy and some program statistics (number of closed reports, average response time, and, if the VRP discloses them, the rewards paid in the past, etc.). They can also look at the titles of recent reports, who reported them, and the payment amount if the VRP discloses it. Hackers can also sort VRPs by their launch date, the number of closed reports, or the average amount of reward they offer. However, reward information is not systematically disclosed, and this is one of the criteria we use to measure the level of information provided by VRPs. Hackers can also look at other hackers’ profiles, where information about their past performance is available (statistics on the vulnerabilities they have found, Reputation, Signal and Impact scores, etc.).

  • 9 When a report is rejected, the submitter does not receive any financial reward. Moreover, the signa (...)

33Participating in a contest involves submitting a report about a vulnerability and then fixing the reported vulnerability in collaboration with the VRP’s coordinators. The flowchart in Figure 1 provides an overview of how report submissions are processed. The submitted report has to respect the rules and guidelines defined by the VRP. After a report is submitted, the VRP evaluates whether the submission is relevant and how valuable it is (i.e., how critical and how important the discovered vulnerability is). If the report is assessed as irrelevant or as a duplicate, it is rejected.9 For reports that are considered relevant, the VRP validates the submission and starts working with the participant to fix the vulnerability. When the vulnerability is fixed, the report becomes “resolved” and the VRP pays a reward to the participant. Finally, the submitted report is closed.

Figure 1. Flowchart of report submission process on HackerOne

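To make the steps above concrete, the following minimal Python sketch encodes the report lifecycle described in the text. It is only an illustration of the flowchart; the state names and the function are hypothetical simplifications and are not HackerOne’s API.

from enum import Enum, auto

class ReportState(Enum):
    SUBMITTED = auto()
    REJECTED = auto()   # irrelevant or duplicate: no reward for the submitter
    TRIAGED = auto()    # relevant: the VRP works with the hacker to fix the flaw
    RESOLVED = auto()   # vulnerability fixed: the bounty is paid
    CLOSED = auto()

def process_report(is_relevant: bool, is_duplicate: bool) -> ReportState:
    # Simplified version of the decision shown in Figure 1
    if is_duplicate or not is_relevant:
        return ReportState.REJECTED
    # ...collaboration between hacker and VRP to fix the vulnerability...
    return ReportState.RESOLVED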

34The data we collected concern 156 VRPs active on HackerOne during the period from January 2015 to February 2019. All VRPs in our data set are public (anyone can submit a report) and offer financial compensation. The pie charts in Figure 2 in the Appendix present some information about the VRP owners. More than two thirds of them are run by US companies. The main industry sectors the companies belong to are IT, media and entertainment, hospitality and transportation, and financial services. As VRP policies evolve over time, we were able to build a panel data set. It is an unbalanced panel, as not all VRPs were active throughout the period. We collected information about each VRP each month, including the number of valid submissions, the performance indicators of each hacker at the date their submissions were accepted, the number of words used in the policy, whether a VRP presents a dedicated section describing the scope of the contest, and whether it specifies the reward structure. We also collected data on how old a program is—i.e., the number of months since its launch—and whether the program is managed by HackerOne or by the company itself. We also counted the total number of active VRPs on the platform each month. Additionally, information about the number of employees, the year the company was founded, and the location of its headquarters was collected from Crunchbase, Owler.com and Wikipedia. We used the number of employees as a proxy for the size of the company and we categorized the companies into 11 different industries. We also accounted for how old the companies are, whether they are based in the United States, and whether they were acquired by another firm during the studied period.

4.2. Empirical specifications

35Our objective is to analyze how the level of information given by the contest through its written policy affects the outcome of the contest. As the goal of a VRP is to maximize the sum of the values of all participations, we consider two complementary measurable aspects that define the outcome of a VRP: the quantity of participations and the quality of the participants, i.e., how successful and experienced the participants were in the past.

36First, to analyze the effect of the level of information on the quantity of participations, we use the following baseline specification:

Nb_participation_it = β0 + β1 Information_Level_it + β2 Prog_Age_it + β3 Total_Nb_participation_i(t–1) + FE_i + FE_t + ε_it,

  • 10 A list of 422 words that are considered as the most commonly used in English are excluded. For inst (...)

where Nb_participation, our dependent variable, is the number of submissions received by VRP i and validated in period t. Our explanatory variable of interest is Information_Level, which measures how detailed the policy of VRP i is in period t. Three alternative variables are used as Information_Level. The first is Nb_words, which counts how many words are used in a given VRP’s policy in a given period. The second variable is Reward, a dummy which identifies whether the VRP provides detailed information about the structure and the amount of the rewards at period t. The third variable we use for Information_Level is the dummy Scope, which identifies whether the VRP’s policy includes a dedicated section that describes in detail the scope targeted by the contest. Additionally, we use two alternative measures that reflect the Information_Level: the first is Vocabulary_Diversity, which measures the vocabulary diversity of the written policy by counting the number of unique words it uses while excluding all “common words”10 in English, and the second is Contract_Completeness, an aggregate measure of Nb_words, Reward and Scope equal to ln(Nb_words + 1) + Reward + Scope.
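As an illustration of how these text-based measures can be computed, here is a minimal Python sketch. It is not the authors’ code: the small stop-word set stands in for the 422-word list mentioned in the note, and the function names are ours.

import math
import re

COMMON_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for", "on"}  # stand-in for the 422-word list

def nb_words(policy_text: str) -> int:
    # Nb_words: total number of words in the written policy
    return len(re.findall(r"\w+", policy_text))

def vocabulary_diversity(policy_text: str) -> int:
    # Vocabulary_Diversity: number of unique non-common words
    tokens = set(re.findall(r"\w+", policy_text.lower()))
    return len(tokens - COMMON_WORDS)

def contract_completeness(policy_text: str, reward: bool, scope: bool) -> float:
    # Contract_Completeness = ln(Nb_words + 1) + Reward + Scope, with Reward and Scope coded 0/1
    return math.log(nb_words(policy_text) + 1) + int(reward) + int(scope)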

  • 11 We do not account for other information about the contests’ statistics in t – 1 such as the average(...)

37According to Hypothesis 1, we expect to obtain a positive coefficient for β1. That is, we expect that the more information a VRP provides in its policy, the less contractual uncertainty participants will perceive, and the more participations it will generate. Prog_Age measures how long the VRP has been running (in months). Total_Nb_participation is the total number of valid reports submitted to the contest at t – 1. As Total_Nb_participation is observable by any hacker who visits the VRP’s webpage, one might expect the decision to participate to be affected by this information.11 We expect the coefficients for Prog_Age and Total_Nb_participation to be negative, because the probability of finding a new vulnerability decreases over time and also decreases with the number of vulnerabilities that have already been discovered (Maillart et al., 2017).

38FE_i and FE_t represent VRP and time fixed effects, respectively. In this specification, as we account for program and time fixed effects, we do not include variables at the program level that do not vary over time. Lastly, ε_it is an error term.
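As a rough illustration of how Specification 1 can be estimated, here is a hedged Python sketch using statsmodels. The DataFrame df and its column names (nb_participation, nb_words, prog_age, lagged_tot_nb_participation, program_id, month) are hypothetical placeholders; the paper’s own estimation may differ in its details.

import statsmodels.formula.api as smf

# Negative Binomial with program and time fixed effects implemented as dummies,
# here using Nb_Words as the Information_Level measure.
spec1 = smf.negativebinomial(
    "nb_participation ~ nb_words + prog_age + lagged_tot_nb_participation"
    " + C(program_id) + C(month)",
    data=df,
).fit(maxiter=200)

# Average marginal effects, as reported in the tables below.
print(spec1.get_margeff(at="overall").summary())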

39Next, we use the following alternative specification:

Nb_participation_it = β0 + β1 Information_Level_it + β2 Prog_Age_it + β3 Total_Nb_participation_i(t–1) + β4 Platform_Growth_t + β5 Managed_by_HO_i + β6 Firm_Size_i + β7 Based_in_US_i + Founded_period_FE_i + Industry_FE_i + ε_it

  • 12 Theoretically, a company can run more than one VRP but this is not the case in our data set.

40In this specification, we include a range of VRP-specific variables instead of accounting for VRP fixed effects. On the platform HackerOne, each VRP is run by a distinct company.12 Thus each VRP is associated with one company. We control for Prog_Age and Total_Nb_participation in the same way as in Specification 1. Platform_Growth measures the number of active VRPs on the platform at period t; it accounts for the time trend. Managed_by_HO is a dummy which identifies whether the VRP is fully managed by HackerOne. For Firm_Size, we use the definitions of small, medium-sized and large companies used by the World Bank. Specifically, we classify the companies into 4 categories according to the number of employees they have during the observed period. As the platform is based in the US, contests are subject to US law, so we control for whether the company’s headquarters are located in the United States (Based_in_US). We also control for the type of industry the company belongs to (Industry_FE). Lastly, ε_it is an error term. As in Specification 1, we expect β1 to be positive according to Hypothesis 1.
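A similar sketch can be written for Specification 2, replacing the program fixed effects with the VRP-level controls listed above. Again, df and all column names (reward, platform_growth, managed_by_ho, firm_size_cat, based_in_us, founded_period, industry) are hypothetical placeholders, not the authors’ code.

import statsmodels.formula.api as smf

# Specification 2: no program fixed effects, but firm-level controls instead.
spec2 = smf.negativebinomial(
    "nb_participation ~ reward + prog_age + lagged_tot_nb_participation"
    " + platform_growth + managed_by_ho + C(firm_size_cat) + based_in_us"
    " + C(founded_period) + C(industry)",
    data=df,
).fit(maxiter=200)
print(spec2.get_margeff(at="overall").summary())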

41It is important to mention that the dependent variable Nb_participation counts only the participations that have been evaluated as “valid”. In other words, we do not observe the number of rejected participations. This can be an issue for our specification, as the number of relevant participations could also reflect some qualitative effects: valid participations may come from participants with higher Signal scores (i.e., participants who are more likely to submit valid reports; see the following paragraphs, which describe the performance indicators we use to measure participant quality). That is, the number of valid participations can be represented as the product of the number of actual participations (rejected participations included) and the probability of each participant submitting a valid report. We perform additional regressions using this definition for the dependent variable, in order to test the robustness of our principal results.

  • 13 In Zhao et al. (2017), the percentage of valid reports in public VPRs remains relatively stable at (...)

42It is also worth noting that in the baseline specifications, we ignore the possibility that the number of participations in period t – k (k > 0) affects the decision of a VRP to modify its policy in period t. In other words, we might ignore a potential simultaneity between hackers’ choice to participate in a VRP (the supply) and the demand of the VRP. Regarding this issue, statistics show that the share of invalid submissions on HackerOne remains relatively stable over time (Zhao et al., 2017).13 We could thus consider that ignoring the rejected participations does not affect our estimations. Moreover, according to our data set, VRPs do not change their policy very frequently, which suggests that this “demand effect” is limited: among the 156 VRPs considered in our study, 27.6% maintained their choice of providing (or not providing) detailed information about the rewards and 28.2% kept their choice of providing (or not) a scope section. On average, VRPs changed their policy 5 times and removed or added detailed information about the rewards and the scope only once during their whole lifetime. Nonetheless, we additionally instrument our main regressor Information_Level in order to account for the potential reverse causality. Specifically, we use two variables as our instruments. The first, Acquired, is a dummy which identifies whether the company that owns the VRP (and finances it) is acquired by another company at period t. The acquisition of the company could cause a change in the VRP’s demand and thus in its policy, while it does not directly affect a hacker’s decision to participate in the VRP. We thus consider that the exclusion restriction is satisfied. The second instrument we use is PSD2, which accounts for the effect of the second European Directive on Payment Services on cybersecurity investments. This directive came into force in January 2018 and one of its main goals is to improve the security of online and mobile payments and cross-border payment services. The variable PSD2 identifies whether the core activity of the company that owns VRP i concerns the online or mobile payment sector, whether it is a European company, and whether period t is after January 2018. We consider that PSD2 satisfies the exclusion restriction, as it may directly impact the investment decision of a VRP, especially for companies that are regulated (and thus the choice of the Information_Level of a VRP’s policy), but will not impact hackers’ participation.
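The control-function logic described above can be sketched as follows. This is illustrative only, with hypothetical column names; in the paper the second-stage standard errors are bootstrapped, which is omitted here.

import statsmodels.formula.api as smf

# First stage: regress the potentially endogenous policy variable (here Reward)
# on the two instruments and the exogenous controls.
first_stage = smf.ols(
    "reward ~ acquired + psd2 + prog_age + C(program_id) + C(month)", data=df
).fit()
df["first_stage_resid"] = first_stage.resid

# Second stage: Poisson regression that includes the first-stage residuals
# as a control function for the endogeneity of Reward.
second_stage = smf.poisson(
    "nb_participation ~ reward + prog_age + first_stage_resid"
    " + C(program_id) + C(month)",
    data=df,
).fit()
print(second_stage.get_margeff(at="overall").summary())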

43Next, in order to analyze the effect of the level of information on the quality of participants, three specifications are used.

44First, we use Specification 1 with a dependent variable that considers a subgroup of the participations: the number of participations by top hackers (Nb_Top_Hackers). As suggested in Hypothesis 3, we expect the coefficient for our main explanatory variable Information_Level to be negative, i.e., the less information the VRP provides about the contractual terms, the more it attracts “top hackers”, that is, hackers who are more skilled and experienced.

45Secondly, we use a set of dependent variables that reflect the average quality of the participants (Av_Reputation, Av_Signal, and Av_Impact). HackerOne provides three performance indicators that reflect the quality of a hacker over time: the Reputation score, the Signal score and the Impact score. These are the indicators we use to build our dependent variables for the quality of the participants. The Reputation score is a measure of a hacker’s level of experience on the platform. It is an aggregate score of the hacker’s contributions since joining the platform (i.e., since creating an account on HackerOne). It takes into account both the number of valid reports the hacker has submitted and how valuable these contributions are. The Signal score is the average relevance of the reports submitted by the hacker: the lower the Signal score, the higher the probability that the hacker will submit an invalid report (an invalid report is a report that is considered not relevant enough or that another participant has already submitted to the VRP). The Impact score is a measure of the average severity of the vulnerabilities reported by the hacker; it represents the average value of the relevant reports submitted by the hacker. We compute the mean value of each indicator across participants in each VRP each month to construct our dependent variables Av_Reputation, Av_Signal, and Av_Impact.

46The same specification as for Nb_Participation (Specification 1) is used to examine the effect of the Information_Level on the average quality of the participants. That is:

Av_Quality_it = β0 + β1 Information_Level_it + β2 Prog_Age_it + β3 Total_Nb_participation_i(t–1) + FE_i + FE_t + ε_it,

where we use Av_Reputation, Av_Signal, and Av_Impact as measures of the average quality (Av_Quality). Following Hypothesis 3, we expect β1 to be negative, i.e., the more detailed a VRP’s policy, the lower the participants’ average level of experience and performance.

47Lastly, we compute the standard deviation of each quality indicator to define the last series of dependent variables (Sd_Reputation, Sd_Signal, and Sd_Impact). These variables allow us to examine the effect of the level of contractual information provided by a VRP on the variance of participant quality. Again, the same specification (Specification 1) is used. As Hypothesis 2 specifies, we expect the coefficient of our main explanatory variable, β1, to be negative, that is, a less detailed policy attracts participants who are more homogeneous in their levels of both experience and performance.
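A minimal sketch of how the average and dispersion measures can be built from report-level data is given below; the DataFrame reports and its columns (program_id, month, reputation, signal, impact) are hypothetical placeholders.

import pandas as pd

# One row per validated report; aggregate participant quality by VRP and month.
quality = (
    reports.groupby(["program_id", "month"])[["reputation", "signal", "impact"]]
    .agg(["mean", "std"])
)
# Flatten the column index: e.g. mean_reputation ~ Av_Reputation, std_signal ~ Sd_Signal
quality.columns = [f"{stat}_{col}" for col, stat in quality.columns]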

4.3. Summary statistics

48Tables 6 to 10 in the Appendix provide a range of information about the data set we used. Table 6 provides a description of the variables and Table 7 reports the summary statistics. In Table 8, we estimate the mean value of each of our dependent variables (Nb_Participation, Nb_Top_Hackers, Av_Reputation, Av_Signal, and Av_Impact) according to the total duration of the VRPs and Table 9 presents statistics on the number of times VRPs modified their policies. Lastly, Table 10 reports a correlation matrix for a list of variables.

  • 14 Allison and Waterman (2002) shows that the conditional maximum likelihood method for a fixed effect (...)

49Our data set is composed of 156 VRPs. A VRP is active for 27 months on average, with a large standard deviation of 17 months. The per-month number of participations varies considerably across VRPs, with a large proportion of VRP-month pairs with 0 participations (25% of the observations are equal to 0). Because of this over-dispersion, we used Negative Binomial regressions for the specifications using the number of participations as the dependent variable.14
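A quick, hedged illustration of the over-dispersion check that motivates the Negative Binomial model (df and the column name are hypothetical placeholders):

# If the variance of the count is much larger than its mean, a Poisson model is
# too restrictive and a Negative Binomial model is preferred.
mean_y = df["nb_participation"].mean()
var_y = df["nb_participation"].var()
print(f"mean = {mean_y:.2f}, variance = {var_y:.2f}, ratio = {var_y / mean_y:.1f}")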

50VRPs received an average of 184 valid reports over the total duration of the program and an average of 6.4 valid reports per month. Overall, for a given VRP, the number of participations decreases over time. Interestingly, the number of participations is not correlated with the average amount of reward a VRP offers. We also observe that VRPs which ran for a longer period of time had a higher per-month participation rate. On the other hand, running a VRP for a longer period does not imply that its participants are more successful on average (see Table 8).

51Regarding our main explanatory variables, a VRP’s policy was composed of 494 words on average, with a large disparity across programs. The average value of Vocabulary_Diversity—measured by the number of unique “non-common” words used in the written policy—was 273. Policies evolved over time, with an average difference of 654 words between the shortest and the longest version of a given policy. On average, VRPs changed their policy 5 times during their whole lifetime and removed or added a Reward or Scope section only once (see Table 9). From the correlation matrix (see Table 10) we can observe that Reward and Scope have a positive relation with Nb_Words, i.e., a longer policy is more likely to include a reward and a scope section. However, Reward and Scope are not highly correlated with Nb_Words. Lastly, we observe that the number of participations and the average quality of participants are only very weakly (though positively) correlated.

5. Results

5.1. The effect of the level of information of a VRP’s policy on the number of participations

  • 15 As Wooldridge (1999) has raised some concerns about using Negative Binomial regressions with fixed (...)

52Table 1 reports the regression results for Specifications 1 and 2, using the number of participations as the dependent variable. We use Negative Binomial regressions with fixed effects, implemented by including dummy variables for all individuals.15 In each pair of columns, we report the results using the three different measures for our main regressor of interest Information_Level, namely Nb_Words, Reward, and Scope. The first column of each pair reports the estimation results for Specification 1 and the second for Specification 2. As expected, the coefficients for Nb_Words, Reward, and Scope—i.e., β1 in Specifications 1 and 2—are all positive and statistically significant. That is, the more detailed the information in a VRP’s policy, the more participations it generates. For example, according to the average marginal effects reported in columns (3) and (5), providing detailed information about the rewards increases participation by 1.6 participations per month, while giving detailed information about the scope of the contest generates 2.2 more participations per month.

Table 1. Estimation results for the effect of Information_Level on Nb_Participation—Regressions without IVs

                        (1) NB FE    (2) NB       (3) NB FE    (4) NB       (5) NB FE    (6) NB
Nb_Words                0.00263***   0.00274***
                        (0.000457)   (0.000395)
Reward                                            1.620***     0.887**
                                                  (0.552)      (0.347)
Scope                                                                       2.203***     0.822**
                                                                            (0.509)      (0.349)
Prog_Age                -0.139***    0.0469***    -0.187***    0.0429***    -0.154***    0.0463***
                        (0.0508)     (0.0120)     (0.0505)     (0.0120)     (0.0512)     (0.0120)
Platform_Growth                      -0.0443***                -0.0154                   -0.0187
                                     (0.0123)                  (0.0114)                  (0.0118)
Firm_Size                            1.843***                  1.853***                  1.896***
                                     (0.202)                   (0.201)                   (0.201)
Managed_byHO                         -0.197                    0.0589                    -0.114
                                     (0.404)                   (0.405)                   (0.405)
Created_in_90s                       7.265***                  6.431***                  6.116***
                                     (1.353)                   (1.312)                   (1.382)
Created_in2000to2007                 3.835***                  3.915***                  3.431***
                                     (0.916)                   (0.969)                   (1.072)
Created_in2008to2010                 1.661**                   1.146                     0.766
                                     (0.754)                   (0.792)                   (0.877)
Created_after_2011                   1.611**                   1.132                     0.786
                                     (0.788)                   (0.825)                   (0.912)
Based_in_US                          2.228***                  2.724***                  2.665***
                                     (0.425)                   (0.418)                   (0.421)
VRP FE                  Yes                       Yes                       Yes
Time FE                 Yes                       Yes                       Yes
Industry dummies                     Yes                       Yes                       Yes
Observations            4,177        3,230        4,177        3,230        4,177        3,230
LR χ2                   2575.07      833.22       2549.38      783.67       2559.78      782.61
Number of VRPs          156                       156                       156

Note: the dependent variable is Nb_Participation. Negative Binomial Fixed Effects (using dummies for individuals) and Negative Binomial regressions with robust standard errors. Coefficients are Average Marginal Effects. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1. The number of observations in columns (2), (4), and (6) is reduced because we do not have the Firm_Size information for every VRP. Baseline value for Created_Period is “Created before 1990”.

53Regarding the control variables, we observe that Prog_Age has a negative coefficient when we account for individual fixed effects, meaning that, everything else remaining fixed, the number of participations decreases over the duration of a VRP. We explain the different sign of the coefficients for Prog_Age in columns (2), (4) and (6) by the fact that Prog_Age also captures the attractiveness specific to each VRP when we do not control for individual fixed effects, and, as we have seen in the summary statistics, VRPs that run for a longer period of time are likely to generate more participations overall. We also observe that the larger the number of active VRPs on the platform (Platform_Growth), the fewer participations a VRP receives. This may be because new hackers are not entering the platform as fast as new VRPs are created, and the amount of effort a hacker dedicates to a single VRP decreases with the number of contests launched on the platform. The coefficients for Firm_Size are positive and always significant, showing that VRPs owned by larger firms generate a larger number of participations. We do not identify any particular trend regarding the industry sector of the organizing company (see Table 12 in the Appendix) or the age of the company, except that companies created after the 90s generate more participations than older companies. US-based companies are likely to be more attractive. Lastly, Table 13 in the Appendix presents regression results in which the control variable Lagged_tot_nb_participation is included. We do not present these regressions as our main results because Lagged_tot_nb_participation is highly correlated with a number of other variables, including our dependent variable (see correlation matrix, Table 10). However, we observe that VRPs that generated a larger total number of participations in the past attract more participations.

  • 16 We consider that there are no multicollinearity concerns. For instance, Variance inflation factor ( (...)

54Table 14 in the Appendix reports a set of regressions using alternative measures for the level of information provided by the contest. Columns (1) and (2) report estimation results when we include the three main regressors—Nb_Words, Reward and Scope—together in the same regression.16 Columns (3) to (6) report estimations using two alternative measures for Information_Level. The first is a measure of the Vocabulary_Diversity of the written policy; in comparison to Nb_Words, this variable counts the number of unique words used in the policy while excluding words that are commonly used in English. The second is an aggregate measure of Nb_Words, Reward and Scope, which we named Contract_Completeness; this variable is equal to ln(Nb_Words + 1) + Reward + Scope. Estimation results using Nb_Words, Reward and Scope together are similar to the main results in Table 1, with some minor differences: all coefficients remain positive, the effect of Nb_Words keeps both its magnitude and statistical significance, while the coefficients for Reward and Scope keep their magnitude but are statistically less significant. Using the variables Vocabulary_Diversity and Contract_Completeness, we obtain results very similar to those in Table 1.

In order to check whether using the number of valid participations as a measure of the number of participations introduces a bias, we performed the same regressions (Specifications 1 and 2) using an alternative measure of Nb_Participation (Nb_Participation_2), constructed from the Signal scores of all valid participations in a given month. As the Signal score reflects the probability of a hacker submitting a valid report, Nb_Participation_2 represents an estimated value of the total number of participations (both valid and rejected). Table 15 in the Appendix reports the regression results. We observe that a greater Information_Level still positively affects the number of participations. Moreover, neither the magnitude of the effect nor its significance varies significantly with this alternative definition. We thus conclude that ignoring the rejected participations does not have an identifiable effect on our main findings.

55Note that the preceding regressions used Negative Binomial regressions without instruments to deal with the potential reverse causality between the number of participations and the level of information of a VRP’s policy. The next table reports the estimation results from Poisson IV regressions using a control function approach.

  • 17 We have two instruments and F statistics for the excluded instruments are much greater than 19.93 w (...)

56Table 2 reports the estimation results when we include the instrumental variables in our regressions, using Specification 1. As specified in Subsection 4.2, we use two instruments: Acquired and PSD2. The estimation results for the first stage are reported in Table 16 in the Appendix. The first-stage regressions show that the excluded instruments are correlated with the endogenous variable and the Stock-Yogo test is satisfied.17 Table 17 in the Appendix shows that the over-identification restrictions are also satisfied; we thus consider our instruments valid.

Table 2. Estimation results of the effect of Information_Level on Nb_Participation—Regressions with IVs.

                            (1)           (2)           (3)
Nb_Words                    0.00353*
                            (0.00188)
Reward                                    0.912***
                                          (0.252)
Scope                                                   1.595***
                                                        (0.455)
Prog_Age                    -0.245***     -0.319***     -0.570***
                            (0.0939)      (0.0698)      (0.148)
Residuals of First Stage    -0.00120***   -0.706***     -1.404***
                            (0.00177)     (0.225)       (0.457)
Time FE                     Yes           Yes           Yes
Program FE                  Yes           Yes           Yes
Observations                4,177         4,177         4,177
F-stat (excluded instr.)    187.67        124.59        44.46
Number of VRPs              156           156           156

Note: the dependent variable is Nb_Participation. Poisson IV regressions with robust standard errors, using a control function approach. Coefficients are Average Marginal Effects. Bootstrapped standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1

57As in Table 1, we report the results for the three different measures of Information_Level, one per column. We observe that the coefficients for our main explanatory variables all remain positive and statistically significant. Regarding the coefficients for Reward and Scope, the magnitude of the effect is smaller when we correct for the simultaneity between hackers’ participations and the VRP’s demand for participation. That is, providing more detailed information about the reward and the scope generates more participations, but part of the effect is due to reverse causality. Nevertheless, we do not observe a large effect from reverse causality.

58All in all, our results confirm Hypothesis 1, showing that more information about the VRP’s contractual terms, or in other words, less contractual uncertainty, generates more participations.

5.2. The effect of the level of information of a VRP’s policy on the performance and experience levels of participants

59Table 3 reports the results for estimations that examine the effect of the level of information of a VRP’s policy on the performance and experience levels of participants, using as dependent variables Av_Reputation, Av_Signal, and Av_Impact.

Table 3. Estimation results for the effect of Information_Level on participant quality

Dependent variable:          Av_Reputation   Av_Signal      Av_Impact

                             (1)             (2)            (3)
Nb_Words                     -0.809***       -0.000312***   -0.000942***
                             (0.176)         (0.000109)     (0.000344)
Prog_Age                     32.46*          0.0665***      0.202***
                             (18.03)         (0.0137)       (0.0426)
Lagged_tot_nb_participation  0.588*          0.000682**     0.00459***
                             (0.350)         (0.000297)     (0.000887)
Time FE                      Yes             Yes            Yes
Program FE                   Yes             Yes            Yes
Observations                 3,057           3,057          3,057
Number of VRPs               156             156            156

                             (4)             (5)            (6)
Reward                       -500.3**        -0.576***      -1.327***
                             (236.5)         (0.134)        (0.446)
Prog_Age                     44.27**         0.0680***      0.212***
                             (18.29)         (0.0134)       (0.0419)
Lagged_tot_nb_participation  0.441           0.000720**     0.00458***
                             (0.361)         (0.000296)     (0.000892)
Time FE                      Yes             Yes            Yes
Program FE                   Yes             Yes            Yes
Observations                 3,057           3,057          3,057

                             (7)             (8)            (9)
Scope                        -289.1          -0.0560        -0.134
                             (182.8)         (0.123)        (0.408)
Prog_Age                     41.64**         0.0744***      0.227***
                             (18.31)         (0.0137)       (0.0423)
Lagged_tot_nb_participation  0.268           0.000569*      0.00424***
                             (0.360)         (0.000295)     (0.000876)
Time FE                      Yes             Yes            Yes
Program FE                   Yes             Yes            Yes
Observations                 3,057           3,057          3,057
Number of VRPs               156             156            156

Note: We use a Poisson regression for Av_Reputation and OLS regressions for the Av_Signal and Av_Impact scores. For all regressions, VRP and time fixed effects are included. Coefficients are Average Marginal Effects for the Poisson regressions. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1

60Each triplet of columns in Table 3 reports regression results using one of the three measures of Information_Level (Nb_Words, Reward, and Scope), and within each triplet, each column uses one of the three measures of participant quality: Av_Reputation, Av_Signal, and Av_Impact. As a reminder, Reputation represents the experience level of a participant, Signal reflects the probability that the participant will submit a valid solution, and Impact reflects the average severity of the vulnerabilities found by the hacker, i.e., how valuable their participations are on average. We use a Poisson regression for Av_Reputation and linear regressions for Av_Signal and Av_Impact. Additionally, Table 18 in the Appendix reports the same regressions using standardized values for the dependent variables (i.e., values rescaled from 0 to 1), so that one can compare the effect of the explanatory variables across the different measures of participant quality.

61. As expected, the coefficients for Information_Level are negative, meaning that less information about the contractual terms, i.e., more uncertainty about getting rewarded, attracts hackers who are more experienced, who find more critical vulnerabilities, and who make fewer mistakes on average. We also observe that the effects of the amount of information provided in the written policy and of detailing the reward amounts are statistically significant, while the effect of detailing the scope is not. On the other hand, we observe from Table 18 in the Appendix (where the values of the dependent variables are rescaled from 0 to 1) that the magnitude of the effect does not vary much across the different quality indicators. We also observe that the coefficients for Prog_Age are all positive, showing that, over time, more skilled and more experienced participants participate on average.


62. Next, in Table 4, we report the regression results for Specifications 1 and 2 using Nb_Top_Hackers as the dependent variable. "Top hackers" are those whose Reputation, Signal, and Impact scores are at the 30th percentile.18 In Table 4, the coefficients for Information_Level are mostly negative and not statistically significant. The sign of the effect corroborates the idea that providing more information attracts less successful hackers. This result also rejects the idea that the negative effect of a higher level of information on average quality comes from a long-tail distribution of low-quality hackers. At the same time, the effect is statistically less significant as we focus on the behavior of the more skilled and more experienced participants (the "top hackers"). This difference in statistical significance could be interpreted as meaning that less successful hackers are more sensitive to the degree of information provided by a contest, while top hackers are less affected. This has an important practical implication: changing the level of information will influence the number of low-quality participants rather than that of high-quality participants.
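
To make the construction of this variable concrete, here is a minimal pandas sketch of how Nb_Top_Hackers could be computed from report-level data, following the note to Table 4 and the variable description in Table 6 (platform-wide monthly 30th-percentile thresholds on the three scores, then counting the valid reports submitted by hackers reaching those thresholds). The file and column names are assumptions.

```python
import pandas as pd

# Hypothetical report-level data: one row per valid report; column names are assumptions
# (expected columns: month, vrp_id, hacker_id, Reputation, Signal, Impact).
reports = pd.read_csv("reports.csv")
scores = ["Reputation", "Signal", "Impact"]

# Platform-wide monthly thresholds: 30th percentile of each score among that month's participants.
thresholds = reports.groupby("month")[scores].transform(lambda s: s.quantile(0.30))

# A report counts as coming from a "top hacker" if all three scores reach the thresholds.
is_top = (reports[scores] >= thresholds).all(axis=1)

# Nb_Top_Hackers: valid reports submitted by top hackers, per VRP and month (cf. Table 6).
nb_top_hackers = (
    reports[is_top].groupby(["vrp_id", "month"]).size().rename("Nb_Top_Hackers")
)
```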

Table 4. Estimation results for the effect of Information_Level on Nb_Top_Hackers

| | (1) NB FE | (2) NB | (3) NB FE | (4) NB | (5) NB FE | (6) NB |
| Nb_Words | -3.08e-05 | 9.57e-06 | | | | |
| | (0) | (9.21e-05) | | | | |
| Reward | | | -0.175 | -0.0436 | | |
| | | | (0) | (0.0919) | | |
| Scope | | | | | 0.0633 | -0.112 |
| | | | | | (0) | (0.0968) |
| Prog_Age | 0.0164 | 0.0168*** | 0.0140 | 0.0170*** | 0.0194 | 0.0172*** |
| | (0) | (0.00364) | (0) | (0.00365) | (0) | (0.00367) |
| Platform_Growth | | -0.00136 | | -0.000955 | | -0.000139 |
| | | (0.00343) | | (0.00325) | | (0.00334) |
| Firm_Size | | 0.461*** | | 0.463*** | | 0.463*** |
| | | (0.0629) | | (0.0630) | | (0.0631) |
| Managed_byHO | | 0.541*** | | 0.537*** | | 0.562*** |
| | | (0.130) | | (0.129) | | (0.131) |
| Created_in_90s | | 1.146*** | | 1.133*** | | 1.171*** |
| | | (0.287) | | (0.288) | | (0.286) |
| Created_in2000to2007 | | 1.022*** | | 1.032*** | | 1.120*** |
| | | (0.226) | | (0.225) | | (0.242) |
| Created_in2008to2010 | | 0.615*** | | 0.604*** | | 0.646*** |
| | | (0.146) | | (0.149) | | (0.145) |
| Created_after_2011 | | 0.0612 | | 0.0518 | | 0.0827 |
| | | (0.120) | | (0.122) | | (0.114) |
| Based_in_US | | 0.0433 | | 0.0536 | | 0.0689 |
| | | (0.107) | | (0.104) | | (0.105) |
| VRP FE, Time FE, Industry dummies | | | | | | |
| Observations | 4,177 | 3,230 | 4,177 | 3,230 | 4,177 | 3,230 |
| LR χ2 | 1408.74 | 557.27 | 1409.02 | 583.23 | 1408.91 | 582.57 |
| Number of VRPs | 156 | | 156 | | 156 | |

Note: the dependent variable is Nb_Top_Hackers. Nb_Top_Hackers counts the reports of hackers who have Reputation, Signal and Impact scores at the 30th percentile of all participants on the platform during the month. Negative Binomial FE (using dummies) and Negative Binomial regressions with robust standard errors. Coefficients are Average Marginal Effects. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1. The number of observations in columns (2), (4), and (6) is reduced because we do not have information on Firm_Size for every VRP.

63. While the dependent variable in Table 3 is the measure of quality itself, Table 5 reports the regression results for the effect of Information_Level on the variance of participant quality. In Table 5, each triplet of columns reports the results using the standard deviations of Av_Reputation, Av_Signal, and Av_Impact as dependent variables. The coefficients for Information_Level are all positive, corroborating Hypothesis 2. That is, providing more information about the contract increases the variance of participants' performance and experience, while providing less information attracts participants that are more homogeneous in their attributes. The coefficients are also always statistically significant, except when we use Scope as the measure of the level of information. Additionally, we observe that the coefficients for Prog_Age are always negative, meaning that participant quality becomes more homogeneous over the duration of a VRP.
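
For clarity, the dependent variables of Tables 3 and 5 (monthly averages and standard deviations of participant scores per VRP) could be built from the same report-level data as in the previous sketch; the file and column names below are assumptions, not the paper's code.

```python
import pandas as pd

# Hypothetical report-level data; column names are assumptions.
reports = pd.read_csv("reports.csv")
scores = ["Reputation", "Signal", "Impact"]

# Per VRP and month: mean scores (Av_*) for Table 3 and standard deviations (Sd_*) for Table 5.
quality = reports.groupby(["vrp_id", "month"])[scores].agg(["mean", "std"])
quality.columns = [
    ("Av_" if stat == "mean" else "Sd_") + score for score, stat in quality.columns
]
```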

Table 5. Estimation results for the effect of Information_Level on the variance of participant quality

| Dependent variable: | Sd_Reputation | Sd_Signal | Sd_Impact |
| | (1) | (2) | (3) |
| Nb_Words | 0.137*** | 0.000302*** | 0.0981 |
| | (0.00299) | (7.42e-05) | (0.155) |
| Prog_Age | -0.454 | -0.00771 | 0.805 |
| | (0.343) | (0.00931) | (19.50) |
| Lagged_tot_nb_participation | 1.155*** | 0.000288 | 1.375*** |
| | (0.00710) | (0.000203) | (0.424) |
| Time FE, program FE | Yes | Yes | Yes |
| Observations | 3,057 | 3,057 | 3,057 |
| VRPs | 156 | 156 | 156 |

| | (4) | (5) | (6) |
| Reward | 81.14*** | 0.233** | 143.7 |
| | (3.393) | (0.0916) | (191.7) |
| Prog_Age | -2.836*** | -0.0136 | -0.189 |
| | (0.337) | (0.00914) | (19.11) |
| Lagged_tot_nb_participation | 1.175*** | 0.000343* | 1.373*** |
| | (0.00711) | (0.000202) | (0.423) |
| Time FE, program FE | Yes | Yes | Yes |
| Observations | 3,057 | 3,057 | 3,057 |
| VRPs | 156 | 156 | 156 |

| | (7) | (8) | (9) |
| Scope | 152.3*** | 0.152* | 108.2 |
| | (3.177) | (0.0841) | (175.8) |
| Prog_Age | 0.526 | -0.0127 | 0.743 |
| | (0.347) | (0.00933) | (19.50) |
| Lagged_tot_nb_participation | 1.184*** | 0.000378* | 1.392*** |
| | (0.00703) | (0.000202) | (0.421) |
| Time FE, program FE | Yes | Yes | Yes |
| Observations | 3,057 | 3,057 | 3,057 |
| VRPs | 156 | 156 | 156 |

Note: the dependent variable is the Standard Deviation of participant quality (Av_Reputation, Av_Signal, Av_Impact scores) during the month. Poisson regression for Sd_Reputation, OLS regressions for Sd_Signal and Sd_Impact scores. For all regressions, VRP and time fixed effects are included. Coefficients are Average Marginal Effects for Poisson regression. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1

6. Discussion and conclusion

64. The objective of this paper was to examine how the main communication tool of a Vulnerability Research Program, its written policy, may affect its efficiency. We focused in particular on the level of information it provides about the contractual terms. We consider that the level of information determines the uncertainty perceived by participants about getting compensated, and we used three variables that capture different aspects of this level of information: the length of the VRP's policy, whether it provides detailed information about the reward, and whether it describes the scope of the contest in a dedicated section.

65. Using a data set covering four years of activity, we found that providing more information about the contest generates more participations but also reduces the overall quality of participants and increases the variance of their quality. In particular, participants with a top ranking, i.e., the most skilled and experienced hackers, are not strongly affected by the level of contractual information. On the contrary, leaving more uncertainty about the rewards and about what is expected in exchange is likely to attract more homogeneous, more successful, and more experienced participants.


66. As with any empirical work, this paper has some limitations that present opportunities for future research. First, the main dependent variable we use as a measure of the outcome of a VRP is the number of valid participations. As we only observe participations that are rewarded, we do not know the number of rejected participations. Although we consider that ignoring the rejected participations does not affect our estimation19, we also estimate a variety of models accounting for the Signal score in order to compensate for this limitation. Secondly, the number of valid participations equates to the number of vulnerabilities that are found and fixed; it therefore captures the quantitative aspect of the outcome. As for qualitative measures, it would also have been relevant to use the amount of the rewards or the severity of the vulnerabilities. However, this information is not systematically disclosed to the public, and analyzing only reports that are fully disclosed would introduce a significant selection bias.20 Instead, we used a variety of models that estimate the effect of the information level on the performance and experience levels of participants.


67. Our work confirms, in a more robust way, the findings of Eriksson et al. (2009), which are based on a laboratory experiment with 120 students: we use a "natural setting" of 156 contests involving 184 participants on average. Our approach is unique in that the uncertainty of being compensated is reflected through several measures of the completeness of a contract.21 This gives us greater granularity in the degree of uncertainty a contract represents. We show that the self-selection effect also exists and is significant when a worker has to choose among contests with different degrees of contractual uncertainty. Our understanding is that more experienced and more skilled participants are more inclined to take risks (i.e., to select a more uncertain contract) because they are more certain of their ability to succeed and to get a reward. Another interpretation would be that more experienced hackers are better able to assess the needs of the VRP owner even when the VRP does not provide sufficient information, whether voluntarily or because it lacks experience in managing such contests. This self-selection process is all the more important for contests like VRPs since it helps reduce the triage cost by limiting the number of non-relevant participations.

68. Our findings also provide interesting insights in terms of managerial implications for companies organizing a VRP and for understanding a bug bounty platform's strategy.

6.1. Managerial implications for VRP owners

69. We show that the choice of a VRP to reveal more or less information to potential participants leads to a trade-off between generating a larger number of participations while attracting less skilled participants, and attracting higher-quality participants while generating fewer participations. The VRP should therefore strike a balance according to the type of outcome it is looking for. If the VRP aims at fixing a maximum number of vulnerabilities and is not specifically looking for scarce or original vulnerabilities, giving more detailed information and offering more complete contractual terms encourages a maximum number of hackers to participate. Although they would not necessarily be the most skilled, each hacker would provide a certain level of effort, while the number of participations is maximized. However, if the objective of the VRP is to find the most complex and severe vulnerabilities, our results suggest that leaving more uncertainty helps select the best profiles, since only the most skilled and experienced hackers would choose VRPs that provide less information.

70. VRPs can modify their policy at any time. Thus, they can start by providing very little information in order to attract only the best participants, then provide progressively more details in order to attract additional participants. Besides, our regressions show that older VRPs attract fewer participants; providing more information could alleviate this effect.

71. VRP owners also learn over time: they learn how to formulate their needs, how to better evaluate the financial value of a vulnerability, and how to better communicate with hackers. They thus become more efficient over time, reducing the marginal cost of handling a report from beginning to end. As the VRP becomes more efficient, it can afford to handle a larger number of participations and may prefer to lower the level of screening by providing more information.

72. VRPs can also combine the use of a minimum quality standard (by managing a private VRP) with the self-selection effect of contractual uncertainty. For instance, a VRP can first run publicly and manage the screening level by controlling the information it provides; then, after having processed sufficient participations, it can turn into a private VRP that invites only hackers with a minimum level of past performance.

73. It is interesting to note that the platform advises companies to do the opposite: it suggests first launching a private VRP with preselected participants (for which it receives an additional service fee) and then going public in a second phase. Indeed, the platform considers that applying a minimum quality standard is the only way to reduce the triage cost. Our findings offer a different perspective.

74. This raises further questions about whether the platform's goals are aligned with those of a VRP.

6.2. The bug bounty platform

75. Bug bounty platforms take full advantage of their two-sidedness. They benefit from cross-side network effects between VRPs and hackers: firms launch their VRP on the platform because they get access to a large pool of hackers, while hackers benefit from accessing a large number of contests on a single platform, which additionally allows them to maintain a "hacker" profile that accumulates experience from the different VRPs run on the platform. As is common in this type of market, the platform uses an asymmetric pricing strategy, where hackers get free access to the platform while companies pay a usage fee. Besides, hackers can multi-home, while companies usually launch their VRP on a single platform, as it is costly for them to manage multiple VRPs on distinct platforms. Regarding the usage fee, HackerOne requests a fee proportional to the amount of rewards a VRP pays to hackers (around 20% of each reward, which also includes taxes). The basic service a company can subscribe to is access to the web platform and a customized web interface to manage its VRP. The platform also offers a range of additional services related to the management of a VRP: it can help sort the submitted reports, manage the whole program, pre-select skilled hackers for private VRPs, etc.

76. As an intermediary between VRPs and participants, the platform stores information that neither the VRPs nor the hackers have access to. The platform is thus best placed to analyze hackers' behavior and to create the right incentives so that they work efficiently. It also advises VRP owners. Although we do not have much information about the advice offered to VRPs, publicly disclosed information shows that the platform recommends providing detailed information to participants: it points out that being transparent and providing accurate and detailed information helps create a trusting relationship with hackers.

77. If we consider that the platform earns mainly from VRP hosting (by receiving a fee proportional to the rewards offered to hackers), the goal of the platform is aligned with that of the VRPs: maximizing the sum of the values of fixed vulnerabilities, or in other words, generating the largest value of transactions. It is then in the platform's interest to reduce the transaction cost and to limit the rate of non-relevant participations. In this sense, our findings, which show an alternative to running a private VRP in order to reduce the triage cost, are worthy of interest for the platform. However, if the platform gains more from services that complement the VRP hosting service, such as managing a VRP from beginning to end or preselecting skilled hackers, its advice and decisions could go against the actual interest of a VRP owner.


Bibliography

P. D. ALLISON and R. P. WATERMAN. Fixed–effects negative binomial regression models. Sociological methodology, 32(1):247–265, 2002.

N. ARCHAK and A. SUNDARARAJAN. Optimal design of crowdsourcing contests. ICIS 2009 proceedings, page 200, 2009.

A. ARORA, J. P. CAULKINS, and R. TELANG. Research note-sell first, fix later: Impact of patching on software quality. Management Science, 52(3):465–471, 2006.

T. AUGUST and T. I. TUNCA. Network software security and user incentives. Management Science, 52 (11):1703–1720, 2006.

T. AUGUST and T. I. TUNCA. Who should be responsible for software security? A comparative analysis of liability policies in network environments. Management Science, 57(5):934–959, 2011.

R. BÖHME and T. MOORE. The iterated weakest link. IEEE Security & Privacy, 8(1):53–55, 2010.

K. J. BOUDREAU, N. LACETERA, and K. R. LAKHANI. Incentives and problem uncertainty in innovation contests: An empirical analysis. Management science, 57(5):843–863, 2011.

C. B. CADSBY, F. SONG, and F. TAPON. Sorting and incentive effects of pay for performance: An experimental investigation. Academy of management journal, 50(2):387–405, 2007.

P. CASAS-ARCE and F. A. MARTÍNEZ-JEREZ. Relative performance compensation, contests, and dynamic incentives. Management Science, 55(8):1306–1320, 2009.

H. CAVUSOGLU, H. CAVUSOGLU, and J. ZHANG. Security patch management: Share the burden or share the damage? Management Science, 54(4):657–670, 2008.

C. W. CHOW. The effects of job standard tightness and compensation scheme on performance: An exploration of linkages. The Accounting Review, 58(4):667, 1983.

J. S. DEMSKI and G. A. FELTHAM. Economic incentives in budgetary control systems. Accounting Review, pages 336–359, 1978.

T. DOHMEN and A. FALK. Performance pay and multidimensional sorting: Productivity, preferences, and gender. American Economic Review, 101(2):556–90, 2011.

T. ERIKSSON, S. TEYSSIER, and M.-C. VILLEVAL. Self-selection and the efficiency of tournaments. Economic Inquiry, 47(3):530–548, 2009.

D. D. FEHRENBACHER, S. E. KAPLAN, and B. PEDELL. The relation between individual characteristics and compensation contract selection. Management Accounting Research, 34:1–18, 2017.

M. FINIFTER, D. AKHAWE, and D. WAGNER. An empirical study of vulnerability rewards programs. In Presented as part of the 22nd {USENIX} Security Symposium ({USENIX} Security 13), pages 273–288, 2013.

L. FULLERTON and R. P. McAFEE. Auctioning entry into tournaments. Journal of Political Economy, 107(3):573–605, 1999.

M. GARCIA and A. TOR. The n-effect: More competitors, less competition. Psychological Science, 20 (7):871–877, 2009.

M. C. JENSEN. Paying people to lie: The truth about the budgeting process. European Financial Management, 9(3):379–406, 2003.

L. B. JEPPESEN and K. R. LAKHANI. Marginality and problem-solving effectiveness in broadcast search. Organization science, 21(5):1016–1033, 2010.

K. KANNAN and R. TELANG. Market for software vulnerabilities? Think again. Management Science, 51 (5):726–740, 2005.

B. C. KIM, P.-Y. CHEN, and T. MUKHOPADHYAY. An economic analysis of the software market with a risk-sharing mechanism. International Journal of Electronic Commerce, 14(2):7–40, 2009.

B. C. KIM, P.-Y. CHEN, and T. MUKHOPADHYAY. The effect of liability and patch release on software security: The monopoly case. Production and Operations Management, 20(4):603–617, 2011.

W. M. W. LAM. Attack-prevention and damage-control investments in cybersecurity. Information Economics and Policy, 37:42–51, 2016.

E. P. LAZEAR. Performance pay and productivity. American Economic Review, 90(5):1346–1361, 2000a.

E. P. LAZEAR. The power of incentives. American Economic Review, 90(2):410–414, 2000b.

T. X. LIU, J. YANG, L. A. ADAMIC, and Y. CHEN. Crowdsourcing with all-pay auctions: A field experiment on taskcn. Management Science, 60(8):2020–2037, 2014.

T. MAILLART, M. ZHAO, J. GROSSKLAGS, and J. CHUANG. Given enough eyeballs, all bugs are shallow? Revisiting Eric Raymond with bug bounty programs. Journal of Cybersecurity, 3(2):81–90, 2017.

B. MOLDOVANU and A. SELA. The optimal allocation of prizes in contests. American Economic Review, 91(3):542–558, 2001.

J. SALOP and S. SALOP. Self-selection and turnover in the labor market. The Quarterly Journal of Economics, pages 619–627, 1976.

C. TERWIESCH and Y. XU. Innovation contests, open innovation, and multiagent problem solving. Management science, 54(9):1529–1543, 2008.

H. VARIAN. System reliability and free riding. pages 1–15, 2004.

W. S. WALLER and C. W. CHOW. The self-selection and effort effects of standard-based employment contracts: A framework and some empirical evidence. Accounting Review, pages 458–476, 1985.

J. M. WOOLDRIDGE. Distribution-free estimation of some nonlinear panel data models. Journal of Econometrics, 90(1):77–97, 1999.

M. ZHAO, J. GROSSKLAGS, and P. LIU. An empirical study of web vulnerability discovery ecosystems. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pages 1105–1117. ACM, 2015.

M. ZHAO, A. LASZKA, and J. GROSSKLAGS. Devising effective policies for bug-bounty platforms and security vulnerability discovery. Journal of Information Policy, 7:372–418, 2017.


Appendix

Figure 2. Distribution of VRP owners


Table 6. Description of variables

| Variable | Description |
| Nb_Participation | Total number of valid reports a VRP processed during the month |
| Nb_Top_Hackers | Total number of valid reports submitted by "top hackers", i.e., hackers whose Reputation, Impact, and Signal scores are at the 30th percentile |
| Av_Reputation | Average Reputation score of the participants in a given VRP in a given month. The Reputation score is an aggregate score of a hacker's contribution in terms of vulnerability severity, relevance, and quantity |
| Av_Signal | Average Signal score of the participants in a given VRP in a given month. The Signal score is the average relevance of the reports submitted by a hacker |
| Av_Impact | Average Impact score of the participants in a given VRP in a given month. Impact measures the average severity of the vulnerabilities reported by a hacker |
| Sd_Reputation | Standard deviation of Av_Reputation |
| Sd_Signal | Standard deviation of Av_Signal |
| Sd_Impact | Standard deviation of Av_Impact |
| Nb_Words | Number of words in the written policy in the observed period |
| Reward | Dummy identifying whether the VRP provides detailed information about the structure and the amount of the rewards during period t |
| Scope | Dummy identifying whether the policy includes a dedicated section describing in detail the scope targeted by the contest |
| Vocabulary_Diversity | Number of unique words used in the written policy, excluding the 422 most commonly used English words |
| Contract_Completeness | Equal to ln(Nb_Words + 1) + Reward + Scope |
| Prog_Age | Number of months since the VRP was launched |
| Lagged_tot_nb_participation | Total number of participations in the VRP in the past |
| Firm_Size | Categorization of the size of the company that owns the VRP, by number of employees (small, medium-size, big company) |
| Founded_period dummies | Dummies indicating whether the VRP owner (the organizing company) was created in the 90s, from 2000 to 2007, from 2008 to 2010, or after 2011 |
| Industry dummies | The type of industry the VRP owner company belongs to |
| Managed_by_HO | Dummy identifying whether the VRP is fully managed by HackerOne |
| Platform_Growth | Number of active VRPs on the platform in a given period |
| Based_in_US | Dummy equal to 1 if the organizing company of the VRP is based in the US (headquarters located in the US) |
| Acquired | Dummy identifying whether the company that owns the VRP (and thus finances it) is being acquired by another company during period t |
| PSD2 | Dummy identifying whether the core activity of the company that owns the VRP concerns the online or mobile payment sector, the company is European, and period t is after January 2018 |
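
As an illustration of how the textual measures listed in Table 6 could be computed from a policy text, here is a minimal sketch; the stop-word list is only a stand-in for the 422 most common English words mentioned in footnote 10, and the two policy dummies are taken as given rather than detected automatically.

```python
import math
import re

# Stand-in for the 422 most common English words excluded by the paper (see footnote 10).
STOPWORDS = {"a", "each", "only", "work"}  # truncated for illustration

def policy_text_measures(policy_text: str, reward: int, scope: int) -> dict:
    """Compute Nb_Words, Vocabulary_Diversity and Contract_Completeness for one policy."""
    words = re.findall(r"[a-z']+", policy_text.lower())
    nb_words = len(words)
    vocabulary_diversity = len({w for w in words if w not in STOPWORDS})
    # Contract_Completeness = ln(Nb_Words + 1) + Reward + Scope (Table 6).
    contract_completeness = math.log(nb_words + 1) + reward + scope
    return {
        "Nb_Words": nb_words,
        "Vocabulary_Diversity": vocabulary_diversity,
        "Contract_Completeness": contract_completeness,
    }

print(policy_text_measures("Only reports in scope are rewarded.", reward=1, scope=1))
```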

Figure 3. Example of VRP on HackerOne, providing minimal information about the contractual terms: General Motor’s VRP


Figure 4. Example of VRP on HackerOne, providing very detailed information about the contractual terms: Slack's VRP


Figure 4 (continued). Rest of Slack's VRP policy


Table 7. Summary statistics

| Variable | Obs | Mean | Std. Dev. | Min | Max |
| Nb_Participation | 4177 | 6.38 | 10.58 | 0 | 140 |
| Nb_Participation_2 | 4177 | 7.845 | 14.18 | 0 | 191.21 |
| Av_Reputation | 3057 | 2174.74 | 2909.83 | 8 | 36120 |
| Av_Signal | 3057 | 2.46 | 1.80 | -4 | 7 |
| Av_Impact | 3057 | 12.45 | 6.11 | 0 | 37.5 |
| Rescaled Av_Reputation | 3057 | .06 | .081 | 0 | 1 |
| Rescaled Av_Signal | 3057 | .587 | .163 | 0 | 1 |
| Rescaled Av_Impact | 3057 | .332 | .163 | 0 | 1 |
| Nb_Top_Hackers | 4177 | .906 | 2.904 | 0 | 53 |
| Sd_Reputation | 3057 | 1854.95 | 2427.54 | 0 | 25142.6 |
| Sd_Signal | 3057 | 1.59 | 1.16 | 0 | 7.21 |
| Sd_Impact | 3057 | 5.65 | 3.91 | 0 | 25 |
| Nb_Words | 4177 | 493.84 | 457.31 | 0 | 2509 |
| Vocabulary_Diversity | 4177 | 272.66 | 257.01 | 0 | 1292 |
| Reward | 4177 | .349 | .477 | 0 | 1 |
| Scope | 4177 | .430 | .495 | 0 | 1 |
| Contract_Completeness | 4177 | 5.31 | 3.51 | 0 | 9.69 |
| Prog_Age | 4177 | 16.98 | 14.3 | 0 | 64 |
| Lagged_tot_nb_participation | 4177 | 146.49 | 228.28 | 0 | 2420 |
| Managed_by_HO | 4177 | .245 | .43 | 0 | 1 |
| Based_in_US | 4177 | .682 | .466 | 0 | 1 |
| Firm_Size | 3260 | 2.89 | 1.03 | 1 | 4 |
| Platform_Growth | 4177 | 66 | 15.51 | 19 | 89 |
| Acquired | 4177 | .028 | .17 | 0 | 1 |
| PSD2 | 4177 | .052 | .22 | 0 | 1 |

Table 8. Mean values of the dependent variables according to the duration of a VRP

| Duration of a VRP (in months) | Mean Nb_Participation | Mean Nb_Top_Hackers | Mean Av_Reputation | Mean Av_Signal | Mean Av_Impact |
| 0-5 | 5.15 | 0.59 | 1722 | 2.70 | 13.37 |
| 5-15 | 5.39 | 0.59 | 1743 | 1.95 | 10.19 |
| 15-25 | 3.49 | 0.34 | 2048 | 2.33 | 11.51 |
| 25-35 | 6.03 | 0.84 | 1815 | 2.37 | 12.02 |
| 35-45 | 4.69 | 0.60 | 2184 | 2.30 | 12.28 |
| 45-55 | 8.98 | 1.40 | 2275 | 2.39 | 12.74 |

Table 9. Statistics on the number of times a VRP has modified its policy

| Number of times a VRP added or removed: | Detailed information about rewards | The scope section | Words | Vocabulary Diversity |
| On average | 1.01 | 0.97 | 5.04 | 5.00 |
| Minimum | 0 | 0 | 0 | 0 |
| Maximum | 3 | 2 | 25 | 25 |

Table 10. Correlation matrix of the principal variables

| | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11) | (12) | (13) | (14) |
| (1) Nb_Participation | 1.0000 |
| (2) Nb_Top_Hackers | 0.6422 | 1.0000 |
| (3) Nb_Participation_2 | 0.9928 | 0.5915 | 1.0000 |
| (4) Av_Reputation | 0.0459 | 0.1799 | 0.0238 | 1.0000 |
| (5) Av_Signal | 0.1412 | 0.2571 | 0.0785 | 0.3542 | 1.0000 |
| (6) Av_Impact | 0.0890 | 0.1559 | 0.0537 | 0.2997 | 0.5904 | 1.0000 |
| (7) Nb_Words | 0.0954 | 0.0234 | 0.0964 | -0.0983 | 0.0136 | -0.0027 | 1.0000 |
| (8) Reward | 0.0790 | 0.0240 | 0.0814 | -0.0809 | -0.0267 | -0.0429 | 0.5482 | 1.0000 |
| (9) Scope | 0.0559 | 0.0574 | 0.0482 | -0.0246 | 0.0342 | -0.0115 | 0.5186 | 0.2770 | 1.0000 |
| (10) Contract_Completeness | 0.0446 | -0.0178 | 0.0502 | -0.1455 | -0.0393 | -0.0691 | 0.8077 | 0.5951 | 0.6561 | 1.0000 |
| (11) Vocabulary_Diversity | 0.0992 | 0.0283 | 0.0994 | -0.0936 | 0.0178 | 0.0030 | 0.9895 | 0.5221 | 0.5324 | 0.7954 | 1.0000 |
| (12) Prog_Age | 0.0647 | 0.0598 | 0.0527 | -0.0871 | 0.0587 | 0.0415 | 0.2441 | 0.2141 | 0.1969 | 0.3226 | 0.2418 | 1.0000 |
| (13) Lagged_tot_nb_participation | 0.4511 | 0.3350 | 0.4228 | 0.0057 | 0.1734 | 0.1030 | 0.1692 | 0.1515 | 0.1217 | 0.1155 | 0.1796 | 0.5629 | 1.0000 |
| (14) Platform_Growth | -0.0125 | 0.0078 | -0.0305 | -0.1059 | 0.1510 | 0.0802 | 0.3831 | 0.1807 | 0.2499 | 0.4011 | 0.3770 | 0.4120 | 0.2012 | 1.0000 |

Table 11. Estimation results for the effect of Information_Level on Nb_Participation—Using Poisson FE

| | (1) Poisson FE | (2) Poisson | (3) Poisson FE | (4) Poisson | (5) Poisson FE | (6) Poisson |
| Nb_Words | 0.00220*** | 0.00221*** | | | | |
| | (0.000147) | (0.000105) | | | | |
| Reward | | | 1.801*** | 0.589*** | | |
| | | | (0.177) | (0.101) | | |
| Scope | | | | | 1.486*** | 0.232** |
| | | | | | (0.164) | (0.105) |
| Prog_Age | -0.120*** | 0.0660*** | -0.166*** | 0.0637*** | -0.160*** | 0.0677*** |
| | (0.0213) | (0.00376) | (0.0211) | (0.00381) | (0.0214) | (0.00375) |
| Platform_Growth | | -0.0468*** | | -0.0184*** | | -0.0177*** |
| | | (0.00388) | | (0.00361) | | (0.00375) |
| Firm_Size | | 1.908*** | | 2.056*** | | 2.101*** |
| | | (0.0677) | | (0.0688) | | (0.0688) |
| Managed_byHO | | -0.931*** | | -1.003*** | | -1.054*** |
| | | (0.121) | | (0.123) | | (0.122) |
| Created_in_90s | | 8.165*** | | 7.402*** | | 7.066*** |
| | | (0.430) | | (0.421) | | (0.423) |
| Created_in2000to2007 | | 1.267*** | | 1.461*** | | 1.328*** |
| | | (0.251) | | (0.260) | | (0.277) |
| Created_in2008to2010 | | 0.312 | | 0.0948 | | -0.157 |
| | | (0.219) | | (0.224) | | (0.233) |
| Created_after_2011 | | -0.272 | | -0.363 | | -0.504** |
| | | (0.235) | | (0.240) | | (0.252) |
| Based_in_US | | 1.908*** | | 2.273*** | | 2.240*** |
| | | (0.126) | | (0.127) | | (0.129) |
| VRP FE, Time FE, Industry dummies | | | | | | |
| Observations | 4,177 | 3,230 | 4,177 | 3,230 | 4,177 | 3,230 |
| LR χ2 | 24369 | 11429 | 24242 | 11020 | 24219 | 10991 |
| Number of VRPs | 156 | | 156 | | 156 | |

Note: the dependent variable is Nb_Participation. Poisson FE and Poisson regressions with robust standard errors. Coefficients are Average Marginal Effects. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1. The number of observations in columns (2), (4), and (6) is reduced because we do not have information on Firm_Size for every VRP.

Table 12. Estimation results for the effect of Information_Level on Nb_Participation—Regressions without IVs—Focus on Industry Sector dummies

| | (1) NB | (2) NB | (3) NB |
| Nb_Words | 0.00290*** | | |
| | (0.000430) | | |
| Reward | | 1.314*** | |
| | | (0.392) | |
| Scope | | | 1.009*** |
| | | | (0.382) |
| Cybersecurity | -5.862*** | -7.641*** | -7.560*** |
| | (2.264) | (2.804) | (2.709) |
| Ecommerce&Retail | -4.313* | -6.502** | -6.031** |
| | (2.316) | (2.847) | (2.742) |
| Education | -7.714*** | -10.39*** | -9.939*** |
| | (2.621) | (3.036) | (2.936) |
| Financial_services | -5.508** | -7.644*** | -6.999*** |
| | (2.256) | (2.805) | (2.690) |
| Food&Beverage | 2.517 | 0.459 | 0.742 |
| | (3.035) | (3.477) | (3.414) |
| Gaming | -1.722 | -4.597 | -4.143 |
| | (2.390) | (2.860) | (2.757) |
| Hospitality&Transport | 1.745 | 0.0463 | 0.259 |
| | (2.726) | (3.254) | (3.160) |
| IT_General | -3.925* | -5.983** | -5.385** |
| | (2.279) | (2.822) | (2.711) |
| IT_IoT | -6.327*** | -8.773*** | -8.560*** |
| | (2.350) | (2.861) | (2.756) |
| IT_infrastructure | -8.193*** | -10.47*** | -9.787*** |
| | (2.333) | (2.880) | (2.776) |
| Media&Entertainment | -3.437 | -5.613** | -5.240* |
| | (2.276) | (2.813) | (2.709) |
| Observations | 4,177 | 3,230 | 4,177 |

Note: the dependent variable is Nb_Participation. Negative Binomial regressions with robust standard errors. Coefficients are Average Marginal Effects. Baseline value for Industry dummies is "Other industry sector". All control variables are included as in Table 1. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1

Table 13. Estimation results for the effect of Information_Level on Nb_Participation—Including Lagged_tot_nb_participation variable

| | (1) NB FE | (2) NB | (3) NB FE | (4) NB | (5) NB FE | (6) NB |
| Nb_Words | 0.00253*** | 0.00290*** | | | | |
| | (0.000458) | (0.000430) | | | | |
| Reward | | | 1.487*** | 1.314*** | | |
| | | | (0.552) | (0.392) | | |
| Scope | | | | | 2.087*** | 1.009*** |
| | | | | | (0.507) | (0.382) |
| Prog_Age | -0.150*** | -0.108*** | -0.199*** | -0.123*** | -0.167*** | -0.115*** |
| | (0.0510) | (0.0189) | (0.0505) | (0.0205) | (0.0512) | (0.0202) |
| Lagged_tot_nb_participation | 0.00201* | 0.0142*** | 0.00252** | 0.0153*** | 0.00235* | 0.0152*** |
| | (0.00121) | (0.00167) | (0.00122) | (0.00191) | (0.00121) | (0.00191) |
| Platform_Growth | | -0.0256* | | 0.00412 | | 0.000682 |
| | | (0.0132) | | (0.0128) | | (0.0132) |
| Firm_Size | | 1.323*** | | 1.357*** | | 1.419*** |
| | | (0.210) | | (0.217) | | (0.216) |
| Managed_byHO | | 0.488 | | 0.904** | | 0.605 |
| | | (0.434) | | (0.451) | | (0.446) |
| Created_in_90s | | 3.084*** | | 2.428** | | 2.048 |
| | | (1.116) | | (1.163) | | (1.259) |
| Created_in2000to2007 | | 4.661*** | | 4.674*** | | 4.088*** |
| | | (1.058) | | (1.144) | | (1.252) |
| Created_in2008to2010 | | 1.801** | | 1.238 | | 0.712 |
| | | (0.863) | | (0.939) | | (1.027) |
| Created_after_2011 | | 1.707* | | 1.175 | | 0.754 |
| | | (0.895) | | (0.970) | | (1.064) |
| Based_in_US | | 2.726*** | | 3.304*** | | 3.209*** |
| | | (0.466) | | (0.479) | | (0.480) |
| VRP FE, Time FE, Industry dummies | | | | | | |
| Observations | 4,177 | 3,230 | 4,177 | 3,230 | 4,177 | 3,230 |
| LR χ2 | 2577.84 | 1179.60 | 2553.69 | 1113.47 | 2563.54 | 1109.06 |
| Number of VRPs | 156 | | 156 | | 156 | |

Note: the dependent variable is Nb_Participation. Negative Binomial Fixed Effects (using dummies for individuals) and Negative Binomial regressions with robust standard errors. Coefficients are Average Marginal Effects. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1. The number of observations in columns (2), (4), and (6) is reduced because we do not have information on Firm_Size for every VRP. Baseline value for Created_Period is Before 1990.

Table 14. Estimation results for the effect of Information_Level on Nb_Participation—Using alternative measures for Information_Level

| | (1) NB FE | (2) NB | (3) NB FE | (4) NB | (5) NB FE | (6) NB |
| Nb_Words | 0.00230*** | 0.00358*** | | | | |
| | (0.000598) | (0.000542) | | | | |
| Reward | 0.217 | 0.856** | | | | |
| | (0.650) | (0.416) | | | | |
| Scope | 0.933 | 0.595 | | | | |
| | (0.586) | (0.404) | | | | |
| Vocabulary_Diversity | | | 0.00460*** | 0.00492*** | | |
| | | | (0.000815) | (0.000702) | | |
| Contract_Completeness | | | | | 0.424*** | 0.240*** |
| | | | | | (0.0627) | (0.0494) |
| Prog_Age | -0.128** | 0.0485*** | -0.140*** | 0.0468*** | -0.0909* | 0.0456*** |
| | (0.0512) | (0.0121) | (0.0508) | (0.0120) | (0.0522) | (0.0120) |
| Platform_Growth | | -0.0427*** | | -0.0435*** | | -0.0343*** |
| | | (0.0124) | | (0.0123) | | (0.0124) |
| Firm_Size | | 1.846*** | | 1.900*** | | 1.933*** |
| | | (0.201) | | (0.204) | | (0.203) |
| Managed_byHO | | -0.338 | | -0.275 | | 0.0124 |
| | | (0.411) | | (0.405) | | (0.406) |
| Created_in_90s | | 7.523*** | | 7.166*** | | 7.090*** |
| | | (1.364) | | (1.324) | | (1.414) |
| Created_in2000to2007 | | 4.200*** | | 3.962*** | | 3.542*** |
| | | (0.945) | | (0.905) | | (0.991) |
| Created_in2008to2010 | | 1.760** | | 1.725** | | 1.062 |
| | | (0.744) | | (0.740) | | (0.819) |
| Created_after_2011 | | 1.748** | | 1.733** | | 1.030 |
| | | (0.777) | | (0.776) | | (0.853) |
| Based_in_US | | 2.236*** | | 2.102*** | | 2.526*** |
| | | (0.426) | | (0.426) | | (0.423) |
| VRP FE, Time FE, Industry dummies | | | | | | |
| Observations | 4,177 | 3,230 | 4,177 | 3,230 | 4,177 | 3,230 |
| Number of VRPs | 156 | | 156 | | 156 | |

Note: the dependent variable is Nb_Participation. Negative Binomial FE and Negative Binomial regressions with robust standard errors. Coefficients are Average Marginal Effects. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1. The number of observations in columns (2), (4), and (6) is reduced because we do not have information on Firm_Size for every VRP.

Table 15. Estimation results for the effect of Information_Level on Nb_Participation—Alternative measure of Nb_Participation accounting for Signal score

| | (1) NB FE | (2) NB | (3) NB FE | (4) NB | (5) NB FE | (6) NB |
| Nb_Words | 0.00387*** | 0.00424*** | | | | |
| | (0.000671) | (0.000489) | | | | |
| Reward | | | 2.551*** | 1.456*** | | |
| | | | (0.808) | (0.543) | | |
| Scope | | | | | 3.276*** | 1.374** |
| | | | | | (0.747) | (0.547) |
| Prog_Age | -0.254*** | 0.0592*** | -0.322*** | 0.0528*** | -0.275*** | 0.0585*** |
| | (0.0742) | (0.0180) | (0.0743) | (0.0184) | (0.0749) | (0.0181) |
| Platform_Growth | | -0.0715*** | | -0.0290* | | -0.0345* |
| | | (0.0181) | | (0.0171) | | (0.0178) |
| Firm_Size | | 2.457*** | | 2.469*** | | 2.542*** |
| | | (0.333) | | (0.334) | | (0.330) |
| Managed_byHO | | -0.557 | | -0.196 | | -0.484 |
| | | (0.598) | | (0.602) | | (0.595) |
| Created_in_90s | | 9.795*** | | 8.355*** | | 7.887*** |
| | | (1.931) | | (1.941) | | (2.004) |
| Created_in2000to2007 | | 4.917*** | | 4.871*** | | 4.111*** |
| | | (1.208) | | (1.265) | | (1.379) |
| Created_in2008to2010 | | 2.071** | | 1.189 | | 0.580 |
| | | (1.009) | | (1.021) | | (1.131) |
| Created_after_2011 | | 2.554** | | 1.722 | | 1.154 |
| | | (1.086) | | (1.093) | | (1.201) |
| Based_in_US | | 3.237*** | | 3.946*** | | 3.868*** |
| | | (0.620) | | (0.640) | | (0.643) |
| VRP FE, Time FE, Industry dummies | | | | | | |
| Observations | 4,177 | 3,230 | 4,177 | 3,230 | 4,177 | 3,230 |
| Number of VRPs | 156 | | 156 | | 156 | |

Note: the dependent variable, Nb_Participation_2, is the sum of all valid participations in a given month. Negative Binomial FE and Negative Binomial regressions with robust standard errors. Coefficients are Average Marginal Effects. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1. The number of observations in columns (2), (4), and (6) is reduced because we do not have information on Firm_Size for every VRP.

Table 16. First Stage regressions for Table 2

| Dependent variable: | (1) Nb_Words | (2) Reward | (3) Scope |
| PSD2 | 1,613*** | 0.723*** | 0.292*** |
| | (89.13) | (0.0761) | (0.0850) |
| Acquired | 508.7*** | 0.827*** | 0.631*** |
| | (75.21) | (0.0642) | (0.0718) |
| Prog_Age | -19.49*** | -0.00675*** | -0.0206*** |
| | (1.665) | (0.00142) | (0.00159) |
| VRP FE, Time FE | Yes | Yes | Yes |
| Observations | 4,177 | 4,177 | 4,177 |
| R-squared | 0.732 | 0.818 | 0.787 |
| Number of VRPs | 156 | 156 | 156 |

Note: Linear FE regressions. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1

Table 17. Validity of over-identification restrictions

| Included IVs: | PSD2 & Acquired | Only PSD2 | Only Acquired | PSD2 & Acquired | Only PSD2 | Only Acquired | PSD2 & Acquired | Only PSD2 | Only Acquired |
| | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) |
| Nb_Words | 0.00353 | 0.00251 | 0.0222 | | | | | | |
| Reward | | | | 0.912 | 0.541 | 1.335 | | | |
| Scope | | | | | | | 1.595 | 1.361 | 1.622 |
| Prog_Age | -0.245 | -0.146 | -0.762 | -0.319 | -0.156 | -0.379 | -0.570 | -0.503 | -0.578 |
| Residuals from 1st stage | -0.00120 | -0.000374 | -0.0198 | -0.706 | -0.384 | -1.124 | -1.404 | -1.201 | -1.434 |
| VRP FE, Time FE | | | | | | | | | |
| Observations | 4,177 | 4,177 | 4,177 | 4,177 | 4,177 | 4,177 | 4,177 | 4,177 | 4,177 |

Note: the dependent variable is Nb_Participation. Poisson IV regressions with robust standard errors, using a control function approach. Coefficients are Average Marginal Effects. Standard errors and p-values are omitted because bootstrapping would be needed to compute them.

Table 18. Estimation results for the effect of Information_Level on participant quality—Dependent variables rescaled from 0 to 1

| Dependent variable: | Av_Reputation | Av_Signal | Av_Impact |
| | (1) | (2) | (3) |
| Nb_Words | -2.19e-05*** | -2.83e-05*** | -2.51e-05*** |
| | (5.36e-06) | (9.89e-06) | (9.16e-06) |
| Prog_Age | 0.00196*** | 0.00604*** | 0.00539*** |
| | (0.000752) | (0.00124) | (0.00114) |
| Lagged_tot_nb_participation | 1.12e-05 | 6.20e-05** | 0.000122*** |
| | (1.18e-05) | (2.70e-05) | (2.37e-05) |
| Time FE, program FE | Yes | Yes | Yes |
| Observations | 3,057 | 3,057 | 3,057 |
| VRPs | 156 | 156 | 156 |

| | (4) | (5) | (6) |
| Reward | -0.0141 | -0.0524*** | -0.0354*** |
| | (0.00869) | (0.0122) | (0.0119) |
| Prog_Age | 0.00243*** | 0.00618*** | 0.00566*** |
| | (0.000742) | (0.00121) | (0.00112) |
| Lagged_tot_nb_participation | 6.41e-06 | 6.54e-05** | 0.000122*** |
| | (1.19e-05) | (2.69e-05) | (2.38e-05) |
| Time FE, program FE | Yes | Yes | Yes |
| Observations | 3,057 | 3,057 | 3,057 |
| VRPs | 156 | 156 | 156 |

| | (7) | (8) | (9) |
| Scope | -0.0107* | -0.00509 | -0.00356 |
| | (0.00628) | (0.0112) | (0.0109) |
| Prog_Age | 0.00233*** | 0.00676*** | 0.00605*** |
| | (0.000736) | (0.00124) | (0.00113) |
| Lagged_tot_nb_participation | 4.56e-06 | 5.17e-05* | 0.000113*** |
| | (1.20e-05) | (2.69e-05) | (2.34e-05) |
| Time FE, program FE | Yes | Yes | Yes |
| Observations | 3,057 | 3,057 | 3,057 |
| VRPs | 156 | 156 | 156 |

Note: the dependent variables were standardized (values rescaled from 0 to 1) in order to compare the magnitudes of the effects. We use OLS regressions. For all regressions, VRP and time fixed effects are included. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1


Notes

2 Source: https://medium.com/@kanishksajnani/how-i-could-have-travelled-the-world-for-free-5bb10ac46ae5.

3 Source: https://www.hackerone.com/blog/signal-requirements.

4 Specifically, VRPs in our data set are public, i.e., any hacker on the platform can submit a report. The submitted report has to be examined and evaluated by the VRP to decide whether they accept the report as a relevant one, so it incurs a “triage cost”.

5 For instance, HackerOne allows hackers to submit a report only when they meet a given Level of valid to invalid submission ratio.

6 They suggest that in VRPs, a penalty system for invalid reports is more efficient than applying a minimum quality standard. Their model also shows that an increasing number of participants may decrease both the organization’s and the hackers’ utilities.

7 See definition and description of a bug bounty platform in Subsection 4.1.

8 They show empirically that greater rivalry between the participants in a contest reduces the incentive to exert effort, but adding competitors increases the probability of obtaining an extreme value solution and this effect prevails for problems with more uncertainty.

9 When a report is rejected, the submitter does not receive any financial reward. Moreover, the signal score is reduced except for reports that are considered informative.

10 A list of 422 words considered to be the most commonly used in English is excluded. For instance, words such as "a", "each", "only", and "work" are excluded.

11 We do not account for other information about the contests’ statistics in t – 1 such as the average quality of participants in the past because this information is not easily computable by participants and we therefore consider that they do not take it into account when deciding whether to participate.

12 Theoretically, a company can run more than one VRP but this is not the case in our data set.

13 In Zhao et al. (2017), the percentage of valid reports in public VRPs remains relatively stable at around 25% of total submissions over the two-year observation period.

14 Allison and Waterman (2002) show that the conditional maximum likelihood method for a fixed-effects negative binomial regression model produces coefficients that are wrongly statistically significant. We thus use a Negative Binomial regression including dummy variables for all individuals, as they suggest (also see https://statisticalhorizons.com/fe-nbreg). However, Wooldridge (1999) recommends using Poisson FE regressions rather than Negative Binomial regressions with fixed effects, so we also ran robustness checks using Poisson FE regressions.

15 As Wooldridge (1999) has raised some concerns about using Negative Binomial regressions with fixed effects, we also report regression results using Poisson FE in Table 11 in the Appendix.

16 We consider that there are no multicollinearity concerns. For instance, the Variance Inflation Factors (VIF) for Nb_Words, Reward, and Scope are respectively 2.36, 1.78, and 1.62 for the regression in column (2).

17 We have two instruments, and the F statistics for the excluded instruments are much greater than 19.93, the critical value of the nominal 5% Wald test for 2SLS.

18 Other definitions of top hackers were tested, such as Reputation, Signal, and Impact scores at the 25th percentile, or Reputation and Signal scores at the 25th or 30th percentiles. We obtain similar results for these definitions. Otherwise, we find a positive sign and no statistical significance for the coefficients of the main explanatory variables when the scores are taken at a larger percentile.

19 Statistics on the platform show that invalid submissions are relatively constant over time (Zhao et al., 2017).

20 Disclosed reports are usually examples of good practice or an advertisement for the VRP, showing how well it pays or communicates.

21 Most research that studies the self-selection effect, including Eriksson et al. (2009), compares variable-pay schemes to “less variable” pay schemes.


To cite this article

Print reference

Arrah-Marie Jo, "Hackers' self-selection in crowdsourced bug bounty programs", Revue d'économie industrielle, 172 | 2020, 83-132.

Electronic reference

Arrah-Marie Jo, "Hackers' self-selection in crowdsourced bug bounty programs", Revue d'économie industrielle [Online], 172 | 4th quarter 2020, online since 02 January 2024, accessed 23 February 2024. URL: http://journals.openedition.org/rei/9519; DOI: https://doi.org/10.4000/rei.9519


Author

Arrah-Marie Jo

IMT Atlantique, arrahmarie.jo[at]gmail.com
December 2020


Copyright

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 licence. All other elements (illustrations, imported files) are "All rights reserved", unless otherwise stated.
