Bug bounties are great, aren’t they? A company decides that instead of just testing their product in-house, it will also open the challenge to outside developers, and compensate them for their effort. Crowdsourced security at its best, with more of the attack surface covered than anyone could’ve done on their own.

Only that’s not how bug bounties work out. Not really, anyway. While seeming prudent and community-friendly, with positive PR implications if run well, their net security impact is questionable, and the problems associated with them should not be ignored.

Let’s preface by saying that this isn’t a rant against security researchers. We’re not going to claim that researchers who make their living from bug bounties are susceptible to the influence of attackers who pay more for top finds – that fear is often unfounded. There’s also little reason for concern about the professionalism of the researchers involved. A well-run bug bounty program puts an emphasis on ethics and good work, weeding out bad actors.

The truly disconcerting issue with this industry is the gap between a bounty program’s perceived effect and its actual purpose and optimal scope.

Two main types of companies turn to bug bounty programs: those too small to have their own security teams, and established organizations that use bounties to supplement the security teams they already have.

For the smaller outfits, bug bounties are about necessity. They recognize their weakness but lack the resources and technology to make security-related decisions, so they crowdsource the solution. They want whatever the security researchers find, and treat the bug bounty as a cost-saving measure.

Don’t Alienate Your Researchers

In large companies, bug bounties are a way to show that they’re security-focused – another base to cover. But look deeper, and more often than not these programs are structured mainly to let the company say it did the best it could if something goes wrong.

This is well reflected in the scope of their programs, which are narrowly tailored to pay only for certain vulnerabilities and to get away with acknowledgements for everything else. And as any security researcher who has submitted a vulnerability to one of these programs knows, submission is usually just the beginning of a conversation about proof of exploitation, small print in the contract, and arbitrary decisions about the vulnerability’s value.

But most of these disputes, which can get unpleasant, happen behind the scenes. All the while, the company enjoys the perceived protection and prestige of a bug bounty program. The researchers are the ones left with a bad taste in their mouths – for them it’s often freelance work, not charity. And when they look around, they find that offensive outfits ask fewer questions, argue less, and pay much more.

Bug Bounty Mismanagement Is An Industry Problem

This kind of friction benefits neither the security industry nor product security; it mostly helps PR efforts, which isn’t the purpose of bug bounty programs. To become truly beneficial, these efforts need to be rethought, starting with the recognition that behind every vulnerability disclosure there’s a person.

For some companies, that would mean eliminating the bug bounty altogether, because they are simply not built to work well with researchers. Others should recognize that crowdsourced security pays off only when it’s an actual security initiative rather than a PR tool – used in conjunction with in-house tools to provide meaningful, tangible benefit beyond “someone will report vulnerabilities to us”.

At Cybellum, we’re big proponents of using bug bounties to supplement in-house tools such as ours, which automatically detect vulnerabilities without access to source code, in order to provide actionable intelligence to both developers and decision-makers. While we’re in the business of automating the vulnerability detection process, there’s always space for dedicated security researchers who can do things no machine currently can.

We’re not about to let our solution become a fig leaf for inadequate security measures, and the same should go for bug bounties. At their best, they are an invaluable tool – but only if you know how to deploy them thoughtfully.
