
Bug Bounty Program Tactics


For information on bug bounty strategy and keys to success, take a look at the first article in the series here.

Even a well-strategized bug bounty program can face obstacles. How to execute consistently and with excellence varies between organizations and is tricky to get right. For day-to-day bounty operations, organizations should focus on three areas: hacker engagement, automation, and actionable program insights.

Hacker Engagement

First impressions matter! Win hackers over early and create “anchor” hackers - program stalwarts who learn all about the target organization and keep coming back to hack more. Make their end-to-end experience great: present a simple test plan, respond quickly and clearly, and award bonuses for extraordinary work.

A solid test plan considers how much time it takes to set up or access the testing environment and what obstacles might present themselves there. The setup process should be as quick and painless as possible - time spent on setup is time during which hackers can lose interest, check out other programs, and stop hacking. Obstacles take the form of features that require spending real money, sign-up processes that require real hacker PII, data integrity/confidentiality concerns, and much more. Every scope requires a slightly different approach, which is why HackerOne includes advisory services to help answer questions like these. Ask: if I were a hacker, would I want to participate in this program? Is the setup worth my time and energy for the potential reward?

Keeping all types of hackers engaged over time is important too. Returning hackers should enjoy consistency, clarity, and transparency in all interactions and bounty decisions, but how can a program attract fresh hacker eyes and skills? There are a lot of ways to keep hacker engagement steady:

  • Scope changes
  • Limited-time incentive boosts (i.e. bounty multipliers)
  • New software releases or other significant updates
  • New technologies in use
  • Additional features or expanded access
  • New/updated credentials (for authenticated testing)

Payment is of course central to a bug bounty program. Going above and beyond for hackers who make an exceptional effort in their bug reports is always appreciated. Situations may arise where a report’s severity could arguably be rated High or Critical (such as one demonstrating access to PII) - in these cases, a bonus is a great tool for finding middle ground in the reward decision.

To quote HackerOne’s CISO and Chief Hacking Officer Chris Evans: “pay for value!”

Automation Opportunities

Bug bounty programs involve their share of tedium, but fortunately they also offer ample opportunity for automation. Robust vulnerability management processes can help even the largest bug bounty programs stay on top of vulnerability report submissions.

Routing reports to internal stakeholders is crucial to ensuring bugs are remediated in a timely fashion. Every organization has a different workflow, mixing software and processes to accomplish this. Areas that apply universally and may benefit from automation include (a brief sketch follows the list):

  • Automatic responses to hackers based on keywords (at HackerOne we call these triggers)
  • Labeling of the reports by product line, business unit, geographic region, etc.
  • Escalation paths based on severity
  • SLA reminders

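As a rough illustration of how these rules might hang together, here is a minimal sketch in Python. Everything in it is a placeholder: the report fields, label rules, queues, and SLA targets are assumptions for the example, not HackerOne’s actual trigger format or API.

```python
from datetime import datetime, timedelta

# Hypothetical severity-based SLA targets (program-specific; adjust as needed)
SLA_TARGETS = {
    "critical": timedelta(days=7),
    "high": timedelta(days=30),
    "medium": timedelta(days=60),
    "low": timedelta(days=90),
}

# Hypothetical keyword-to-label rules, similar in spirit to HackerOne "triggers"
LABEL_RULES = {
    "checkout": "product:payments",
    "mobile": "product:mobile-app",
    "api.example.com": "business-unit:platform",
}

def route_report(report: dict) -> dict:
    """Label a report, pick an escalation path, and flag SLA breaches.

    `report` is a hypothetical dict with keys: title, severity, received_at.
    """
    labels = [
        label
        for keyword, label in LABEL_RULES.items()
        if keyword in report["title"].lower()
    ]

    # Escalate critical/high findings to a faster queue
    queue = "urgent" if report["severity"] in ("critical", "high") else "standard"

    # SLA reminder: compare elapsed time against the target for this severity
    elapsed = datetime.utcnow() - report["received_at"]
    sla_breached = elapsed > SLA_TARGETS[report["severity"]]

    return {"labels": labels, "queue": queue, "sla_breached": sla_breached}

report = {
    "title": "IDOR in checkout flow exposes order history",
    "severity": "high",
    "received_at": datetime.utcnow() - timedelta(days=35),
}
print(route_report(report))
# e.g. {'labels': ['product:payments'], 'queue': 'urgent', 'sla_breached': True}
```
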
HackerOne can help reduce workloads on security teams not just with services like Triage, but also through simple ticketing and notification integrations. Consider who needs to know about vulnerability reports, where they need to go, and how to reduce manual processes along the way.
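On the notification side, even a plain chat webhook goes a long way. The sketch below posts an alert for a new report to a Slack-style incoming webhook; the report fields and webhook URL are placeholders, not a real integration.

```python
import json
import urllib.request

def notify_channel(report: dict, webhook_url: str) -> None:
    """Post a short alert about a new report to a Slack-style incoming webhook."""
    message = {"text": f"New {report['severity']} report: {report['title']}"}
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget; add error handling in practice

# Placeholder usage - the URL below is not a real endpoint
notify_channel(
    {"title": "IDOR in checkout flow", "severity": "high"},
    "https://hooks.slack.com/services/T000/B000/XXXX",
)
```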

Program Insights

A security team that isn’t learning from its bug bounty program is missing out on valuable information. The content of reports, hacker engagement statistics, and time to final remediation all offer valuable insights into the health of a bug bounty program (and therefore its effectiveness at reducing organizational risk).

There are plenty of ways to analyze bug reports: by volume, by weakness type, by severity, and more. Low volume can occur for a variety of reasons, such as low incentives, a small attack surface, or an onerous setup; hacker surveys offer a chance to gather quality insights into why. But going a level deeper than surface statistics can show what’s really going on (a brief code sketch follows the list):

  • Lots of duplicate reports might indicate a broken feedback mechanism
  • High CVE-based report count could mean an ineffective scanning setup
  • Inapplicable reports could be the result of poor hacker instructions

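As a quick illustration, the Python sketch below slices a made-up report export by weakness, severity, and closure state, and computes the duplicate and not-applicable rates flagged above. The field names and records are assumptions for the example.

```python
from collections import Counter

# Hypothetical report records exported from a bounty platform
reports = [
    {"weakness": "XSS", "severity": "medium", "state": "resolved"},
    {"weakness": "XSS", "severity": "medium", "state": "duplicate"},
    {"weakness": "IDOR", "severity": "high", "state": "resolved"},
    {"weakness": "SQLi", "severity": "critical", "state": "triaged"},
    {"weakness": "XSS", "severity": "low", "state": "not-applicable"},
]

by_weakness = Counter(r["weakness"] for r in reports)
by_severity = Counter(r["severity"] for r in reports)
by_state = Counter(r["state"] for r in reports)

# Warning signs called out above: duplicates and inapplicable reports
duplicate_rate = by_state["duplicate"] / len(reports)
na_rate = by_state["not-applicable"] / len(reports)

print(by_weakness.most_common(3))  # [('XSS', 3), ('IDOR', 1), ('SQLi', 1)]
print(f"duplicate rate: {duplicate_rate:.0%}, N/A rate: {na_rate:.0%}")
```
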
Issues like these might mean going back to the bounty strategy drawing board and improving cross-functional alignment.

Better hacker engagement means better results, so understanding success and program health in this area is a must. A program should know how many unique hackers are submitting reports, how many have submitted reports that were resolved, how many are being paid bounties, how many return in subsequent weeks or months, and of course who the top hackers are. Metrics like these ensure that a program is taking full advantage of the bug bounty model’s strengths by receiving consistent, diverse talent influxes and keeping high-performing hackers interested.
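
A minimal sketch of computing those engagement numbers from a hypothetical submission log (hacker handle, month, bounty paid); the data shape is an assumption for the example:

```python
from collections import Counter

# Hypothetical submission log: (hacker, month, bounty_paid)
submissions = [
    ("alice", "2024-01", 500),
    ("alice", "2024-02", 1500),
    ("bob",   "2024-01", 0),
    ("carol", "2024-02", 3000),
    ("alice", "2024-03", 0),
]

unique_hackers = {h for h, _, _ in submissions}
paid_hackers = {h for h, _, bounty in submissions if bounty > 0}

# Returning hackers: anyone active in more than one month
months_active = Counter(h for h, _ in {(h, m) for h, m, _ in submissions})
returning = {h for h, n in months_active.items() if n > 1}

# Top hackers by total bounty earned
totals = Counter()
for h, _, bounty in submissions:
    totals[h] += bounty

print(f"unique: {len(unique_hackers)}, paid: {len(paid_hackers)}, returning: {returning}")
print("top hackers:", totals.most_common(2))
```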

Mean Time to Resolve (MTTR) is a key metric in any bug bounty program. It’s great to find bugs; it’s even better to fix them on time! Along the way to remediation are steps such as initial acceptance, full validation/replication, correct labeling/routing, implementation of the fix, and retesting - any one of which may be the culprit behind a poor MTTR. Frequently missed SLAs may be a sign of lingering risk and a lack of resources. Setting reasonable SLAs and sticking to them can be tough, but the risk reduced is well worth the effort.
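
As an illustration, both overall MTTR and a per-stage breakdown fall out of timestamped lifecycle events. The event names and dates below are placeholders for whatever an organization’s workflow actually records:

```python
from datetime import datetime
from statistics import mean

# Hypothetical lifecycle timestamps for two resolved reports
reports = [
    {
        "received": datetime(2024, 1, 1),
        "triaged": datetime(2024, 1, 3),
        "fixed": datetime(2024, 1, 20),
        "retested": datetime(2024, 1, 25),
    },
    {
        "received": datetime(2024, 2, 1),
        "triaged": datetime(2024, 2, 2),
        "fixed": datetime(2024, 3, 10),
        "retested": datetime(2024, 3, 12),
    },
]

# Overall MTTR: received -> retested
mttr_days = mean((r["retested"] - r["received"]).days for r in reports)

# Per-stage breakdown to spot where time is actually lost
stages = [("received", "triaged"), ("triaged", "fixed"), ("fixed", "retested")]
for start, end in stages:
    avg = mean((r[end] - r[start]).days for r in reports)
    print(f"{start} -> {end}: {avg:.1f} days on average")

print(f"MTTR: {mttr_days:.1f} days")
```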

Driving Improvement

Not all bug bounty programs are equally effective at reducing risk and managing attack resistance. What sets the best programs apart is consistent execution across these three critical areas: hacker engagement, automation, and program insights.


The above post was originally published on the HackerOne Blog. Special thanks to Michiel Prins and Dane Sherrets for their review.

Go back and read part one.