The application security testing market is highly fragmented. From SAST to DAST to SCA to IAST to RASP, the current state of the market is a byproduct of various assertions on what is believed to be the best way to address application security testing.
Yet, year after year, we hear of zero-days not only found but also exploited in the wild, reinforcing the painful fact that we still haven't answered the age-old question: "What's the best way to address application security testing?"
If we continue to rely on the same assumptions and apply simplified approaches to this complex problem, we risk adding yet another technique to the mix, forcing organizations to adopt, and then maintain, one more tool as part of their larger application security testing program.
This is undesirable. Time and time again we hear it from clients: "We've had enough. When can I stop adding more tools into the mix? When can I be confident enough to stop worrying? When can I stop testing?" They don't want more. They want less. Or, put another way, they want more with less.
At ForAllSecure, our aim isn’t to add yet another technique to the market. Our aim is to finally tackle the question of “what’s the best way to address application security testing” once and for all. Our answer? Autonomous testing through fuzz testing and symbolic execution.
The latest evolution of fuzz testing, the progression from protocol fuzzing to guided fuzzing, has significantly shifted the perception of this technique. Previously, it was considered a dark art that only security researchers could harness. But things have changed. Guided fuzzing draws on the strengths of existing AST styles while leaving their weaknesses behind.
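To make "guided" concrete, here is a minimal toy sketch of the coverage-guided loop in Python. Everything here is invented for illustration (the target function, its magic bytes, and the branch ids); a real guided fuzzer like the ones discussed in this post instruments compiled code rather than calling a hand-written target, but the feedback loop is the same: mutate an input, observe which branches it reaches, and keep any input that reaches new code.

```python
import random

def target(data: bytes) -> set:
    """Toy target standing in for an instrumented program.
    Returns the set of branch ids the input reaches; raises on the
    (invented) magic input, simulating a crash."""
    branches = set()
    if len(data) > 0 and data[0] == ord('F'):
        branches.add(1)
        if len(data) > 1 and data[1] == ord('U'):
            branches.add(2)
            if len(data) > 2 and data[2] == ord('Z'):
                raise RuntimeError("crash: magic input found")
    return branches

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Randomly overwrite, insert, or delete one byte."""
    buf = bytearray(data or b"\x00")
    choice = rng.randrange(3)
    pos = rng.randrange(len(buf))
    if choice == 0:
        buf[pos] = rng.randrange(256)
    elif choice == 1:
        buf.insert(pos, rng.randrange(256))
    elif len(buf) > 1:
        del buf[pos]
    return bytes(buf)

def fuzz(max_iters: int = 100_000, seed: int = 0):
    """Coverage-guided loop: promote any mutant that reaches a new
    branch into the corpus; return the first crashing input found."""
    rng = random.Random(seed)
    corpus = [b"AAAA"]   # initial seed input
    seen = set()         # branches covered so far
    for _ in range(max_iters):
        parent = rng.choice(corpus)
        child = mutate(parent, rng)
        try:
            cov = target(child)
        except RuntimeError:
            return child          # crashing input found
        if cov - seen:            # new coverage: keep this input
            seen |= cov
            corpus.append(child)
    return None
```

A blind fuzzer would have to guess all three magic bytes at once; the coverage feedback lets this loop solve them one byte at a time, which is the essence of why guided fuzzing reaches deep program states that black-box techniques miss.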
Mayhem, for example, combines guided fuzzing with symbolic execution to do exactly this.
Fuzz testing's ability to draw on the strengths of existing tooling while leaving behind its weaknesses is why we believe fuzzing will redefine the way we approach application security. It brings efficiency and simplicity to today's lean, continuous models, which can't afford wasted time and resources.
We're making bold statements, but they are less delusional than they may seem on the surface. In 2016, DARPA held the Cyber Grand Challenge (CGC), challenging the world to build an autonomous testing machine able to compete in a capture-the-flag (CTF) competition without any human intervention. Machines from seven teams competed. ForAllSecure's Mayhem was the last machine standing.
Articles often highlight what made the difference: Mayhem's accurate analysis allowed it to make complex strategic decisions that it could not have made with inaccurate information.
They rarely highlight the commonality. Not a single research team leveraged SAST. In fact, they all used fuzz testing, because it was the technique that would allow them to find and fix vulnerabilities before their attackers (or, in this case, their competitors) could exploit them. This approach took away offense's inherent advantage of time.
“The slow reaction time of humans has created a permanent offensive advantage.” -- DARPA
I want to emphasize this. Seven of the top cybersecurity research teams unanimously and independently came to the conclusion that fuzz testing was the technique needed to gain the best advantage in the competition. This isn’t just research conducted in a lab. We’ve also seen validation in real world use cases.
Google has been open about its use of fuzz testing for its Chrome browser. The fuzzing team has reported that 80% of all bugs are found via fuzzing, while the remaining 20% are found with linters or in production -- meaning they're taking a fuzzing-first strategy. Fuzzing findings are so critical that up to 98.6% of bugs found by fuzzing are fixed. The benefits of fuzzing don't end there. Google further claims that fuzzing has prevented 40% more bugs from being introduced via new commits that broke previously working code (regressions).
In conclusion, it’s possible to address the majority of application security testing needs with one solution, if we dare to think outside the box as DARPA and Google have.
However, not all organizations have $60M to host a competition as DARPA did, or Google's resources to build their own fuzz testing solution from the ground up. But one thing is certain: the future of application security testing is here. And ForAllSecure's mission is to make it a reality for all organizations. You don't need to be DARPA or Google to leverage the future of application security.
The market is taking notice, which convinces us that we may be on to something.
On June 11, 2020, GitLab acquired not one but two fuzz testing technologies: Peach Tech, a protocol fuzzer, and Fuzzit, a guided fuzzer.
On August 19, 2020, Robert Lemos at DarkReading asserted that 2020 would be the year things change for fuzz testing. Despite fuzz testing sitting largely outside the SDLC and being the last technique adopted within appsec programs, he placed his bet on it, passionately arguing that its move into the mainstream is inevitable.
On May 28, 2021, Gartner released the Magic Quadrant for Application Security Testing. Gartner defines the application security testing (AST) market as sellers of products and services that analyze and test applications for security vulnerabilities. They identify three main styles of AST: Static AST, Dynamic AST, and Interactive AST. They also recognize software composition analysis (SCA). This year, Gartner expanded their scope to address emerging trends.
With continuous approaches, DevOps disciplines, and digital transformation strategies on the rise, fuzz testing is the natural fit to address the analysis speed, scale, and accuracy needed to conduct layers of automated testing in a continuous model.