In cybersecurity, we’re obsessed with false positives, whether we’re identifying or avoiding them. As with so much industry jargon, the term is wearing thin, to the point that vendor claims of lower false positive rates are taken for granted and often get lost in the marketing din. Time to go back to square one and reset the buzzwords to the current realities of web application security and development.
Not every false alarm is a false positive
Whenever you look at results for any type of test, you need to factor in the risk of errors in the testing process. Each result can be a negative (test says no) or a positive (test says yes), and each can also be true (test was right) or false (test was wrong). Of the four possible combinations, false positives are the most troublesome. In this case, you’ve got a positive result, so you need to check what is wrong – but in reality, nothing is wrong, apart from the test itself.
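To make the four combinations concrete, here is a minimal sketch that labels a single test outcome. The function name and sample data are illustrative only and don't come from any particular security tool:

```python
# Hypothetical illustration: labeling test outcomes against ground truth.
# "prediction" is what the test reported; "actual" is whether something
# is really wrong.

def classify(prediction: bool, actual: bool) -> str:
    """Label one test outcome as a true/false positive/negative."""
    if prediction and actual:
        return "true positive"    # test says yes, and it's right
    if prediction and not actual:
        return "false positive"   # test says yes, but nothing is wrong
    if not prediction and actual:
        return "false negative"   # test says no, but a real issue was missed
    return "true negative"        # test says no, and it's right

# Example: four results, only some of which reflect reality.
outcomes = [(True, True), (True, False), (False, False), (False, True)]
labels = [classify(p, a) for p, a in outcomes]
# A high share of "false positive" labels means wasted triage effort.
```

The second pair above is the troublesome case: the test says yes, so someone has to investigate, even though nothing was actually wrong.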
In security testing, false positives have been the bane of automated tools since day one, bringing the term into widespread use. Over time, the narrow technical definition of false positives as erroneous test results has become diluted. It’s now common to apply the “false positive” label to all test results that don’t need action, whether or not they are technically correct. All these false alarms can accumulate quickly, and once they reach a certain level, you lose sight of individual issues in the sea of noise – and that’s when security issues can slip through the net.
Positive thinking
In the realm of application security, different approaches to testing all come with their own flavors of false alarm woes. Static analysis tools, in particular, are notorious for flooding users with results that, even when technically valid, are often irrelevant in a specific context. Early vulnerability scanners, on the other hand, tended to err on the side of caution and report even slightly suspicious behaviors as vulnerabilities, leading to high false positive rates. In both cases, the raw results alone told you nothing about what actually needed to be done, because someone still had to verify them manually – meaning that you couldn’t automate the process.
We’ve said it before, and we’ll say it again: in dynamic application security testing (DAST), false positives are especially troublesome. Unlike static analysis, which only flags potentially insecure patterns in the code, DAST probes the running application for vulnerabilities just like real attackers would. Acting on a DAST report could make the difference between preventing a breach and remaining vulnerable. That’s why, in the DAST world, technical accuracy that early scanners could only dream of is now the bare minimum required for a usable tool.
More signal, less noise
At Invicti, we’ve taken over a decade of continuous security research and development as our accuracy baseline and added Proof-Based Scanning to go beyond probabilities and get certainty for the most serious issues. Vulnerabilities marked as confirmed in an Invicti scan have been safely and provably exploited by the scanner – and if an automated tool can exploit them, then so can determined attackers. Combined with a technical severity rating, this immediately shows you which issues to prioritize and which can wait until you have the resources to address them.
This is what cybersecurity solutions of the future should focus on: delivering actionable and pre-triaged reports with zero noise that security teams and developers can immediately act upon. Whether in application security testing or other cybersecurity areas, there will always be a long tail of issues that don’t require urgent action and of reports that need manual verification. The crucial thing is to deliver results that support quick and accurate decision-making, allowing team leaders to say, “this needs to be done today, this can wait until next week, the rest we can ignore for now.”
Focus mode engaged
Getting reliable information out of raw data can make the difference between efficiently fixing dangerous security defects and wasting hours, if not days, on sifting through false alarms – whether they’re false positives or not. Alert overload is bad enough for security engineers (being a leading cause of professional burnout), but as you start building security testing into the development pipeline, you also risk flooding your developers with distractions that pull them out of their well-oiled workflows.
Human nature dictates that after the first few false alarms, subsequent warnings will likely be ignored. Especially for developers, who need to focus first and foremost on building software, every single security issue needs to be clear and actionable so they can fix it like any other bug and get on with work that fuels innovation. Ensuring that they don’t see any false positives should already be a given for enterprise-quality tools and workflows. Now, the challenge is to optimize and prioritize how all the true positive results are delivered. Done right, this allows everyone to work on issues that make the biggest difference without losing focus.
Power to the people
While it’s easy to get lost in the industry jargon and vendor claims, cybersecurity is, ultimately, about people, and the tools we use and develop should deliver the right information to the right people at the right time. False positives are only a small part of this significant challenge – your security engineers and developers should be addressing vital security issues, not counting how many false positives they got this time. At Invicti, we’ve combined Proof-Based Scanning with automatic prioritization and workflow integrations to get your teams working on what really matters: eliminating real vulnerabilities to improve web application security.
The post Let’s stop the noise around false positives appeared first on Invicti.