Fuzz testing, or fuzzing, is a dynamic application security testing technique for negative testing. Fuzzing aims to detect known, unknown, and zero-day vulnerabilities. A fuzzer can be used to create and send malformed or random inputs to fuzz targets. The objective is to trigger bad behaviors, such as crashes, infinite loops, or memory leaks. These anomalous behaviors are often a sign of an underlying vulnerability. Fuzz testing should be a part of every SDLC: it examines the runtime behavior of the code and provides more code coverage than SAST or SCA.
Ten years ago fuzzing could only be conducted by security experts, but the technology has matured to the point that even novice developers can get up to speed quickly. Test and evaluation teams that have a basic understanding of Linux can also use fuzzers.
A seed corpus is a set of valid inputs that serves as a starting point for fuzzing a target; the fuzzer derives its malformed inputs from these seeds.
In order to fuzz test, a fuzzer needs a way to interact with the application. Unit tests and integration tests both typically involve running the software under test with a specific input and asserting that a specific output was observed. Fuzzing extends this form of testing by parameterizing the test over an array of bytes and then searching for byte strings that trigger bugs. Fortunately, developers can write a fuzz test harness in far less time than it takes to write individual unit tests. Better yet, these harnesses typically only need to be written once for a given application.
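To make the harness idea concrete, here is a minimal sketch in Python. The `parse_header` function is a hypothetical target (not from the source); `fuzz_target` is the harness: the entire test is parameterized by a single byte array, expected parse errors are swallowed, and anything else that escapes is a real bug.

```python
import struct

def parse_header(data: bytes) -> dict:
    # Hypothetical function under test: parses a 6-byte big-endian header.
    if len(data) < 6:
        raise ValueError("too short")
    magic, length = struct.unpack(">HI", data[:6])
    if magic != 0xCAFE:
        raise ValueError("bad magic")
    return {"magic": magic, "length": length}

def fuzz_target(data: bytes) -> None:
    # The harness: one byte-array parameter, written once per application.
    # A fuzzing engine calls this millions of times with mutated inputs.
    try:
        parse_header(data)
    except ValueError:
        pass  # malformed input rejected cleanly -- not a bug
```

An engine such as AFL or libFuzzer would drive `fuzz_target` for you; the harness itself never changes as the engine's input-generation strategy evolves.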
The rise in fuzzing has resulted in security vulnerabilities getting found and fixed at an amazing rate. But it has raised some new questions: how do we find good fuzz targets quickly, and what is left to fuzz? These questions require tools and workflows that remain uncommon among software developers and security researchers alike, and one potential solution is automated coverage analysis.
Fuzz testing is a technique that has been around for nearly four decades. With each generation of fuzzing software, we’re seeing evolution at play, adapting to the needs of its time.
Random fuzzing sends purely random inputs to an application. There is no systematic method behind the input generation, so the inputs might not even penetrate the application's outermost input checks! This approach is sometimes compared to monkeys typing on a keyboard.
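A tiny experiment illustrates why. The `toy_parser` below is a hypothetical target whose very first check rejects anything that doesn't begin with `b"GET "`; nearly every random blob dies at that front door without exercising any deeper code.

```python
import random

def toy_parser(data: bytes) -> None:
    # Hypothetical target: only inputs beginning with b"GET " reach any
    # deeper logic; everything else is rejected immediately.
    if data[:4] != b"GET ":
        raise ValueError("not a request")
    # ... deeper parsing would happen here ...

random.seed(1)  # fixed seed so the experiment is repeatable
rejected = 0
for _ in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
    try:
        toy_parser(blob)
    except ValueError:
        rejected += 1
# Essentially all 10,000 random blobs are rejected at the first check --
# the odds of randomly producing the 4-byte prefix are about 1 in 4 billion.
```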
Grammar-based fuzzers rely on a manually created template that informs the fuzzing engine how to generate each input. Typically, the individuals creating these templates understand how the protocol is built, so there is intelligence behind how those inputs are generated. However, these templates can inadvertently constrain which areas of the application are explored, and they take a one-size-fits-all approach.
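As a sketch of the idea, here is a hypothetical mini-grammar for an HTTP-like request line (the grammar and its tokens are illustrative, not any vendor's format). The generator recursively expands non-terminals, so every output is well-formed enough to get past basic parsing, but anything the template author didn't model will never be generated.

```python
import random

# Hypothetical grammar: each key is a non-terminal, each value lists
# its alternative expansions.
GRAMMAR = {
    "<request>": [["<method>", " ", "<path>", " HTTP/1.1\r\n"]],
    "<method>": [["GET"], ["POST"], ["HEAD"]],
    "<path>": [["/"], ["/", "<segment>"], ["/", "<segment>", "/", "<segment>"]],
    "<segment>": [["index"], ["login"], ["A" * 1024]],  # oversized token on purpose
}

def generate(symbol: str = "<request>") -> str:
    # Recursively expand non-terminals; terminals pass through unchanged.
    if symbol not in GRAMMAR:
        return symbol
    expansion = random.choice(GRAMMAR[symbol])
    return "".join(generate(part) for part in expansion)
```

Every generated input follows the template exactly, which is both the strength and the blind spot of the approach.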
Guided fuzzers rely solely on the target application's behavior to inform how they generate inputs. The fuzzing engine generates one input, observes how the application behaves, learns from it, then generates the next input. These fuzzers are effective because they custom-generate tests specific to each application. This is the category ForAllSecure Mayhem sits in.
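The generate-observe-learn loop can be sketched in a few dozen lines. Everything here is a toy under stated assumptions: the target and its planted bug are hypothetical, and coverage is tracked with hand-placed branch IDs rather than real instrumentation. Inputs that reach new branches are kept in the corpus and mutated further, which is what lets the loop climb past checks that defeat purely random inputs.

```python
import random

def target(data: bytes, coverage: set) -> None:
    # Hypothetical target; each branch records an ID into `coverage`.
    coverage.add(0)
    if len(data) > 0 and data[0] == ord("F"):
        coverage.add(1)
        if len(data) > 1 and data[1] == ord("U"):
            coverage.add(2)
            if len(data) > 2 and data[2] == ord("Z"):
                coverage.add(3)
                raise RuntimeError("crash!")  # the planted bug

def mutate(data: bytes) -> bytes:
    # Minimal mutation strategy: flip one random byte or append one.
    b = bytearray(data or b"\x00")
    if random.random() < 0.5:
        b[random.randrange(len(b))] = random.randrange(256)
    else:
        b.append(random.randrange(256))
    return bytes(b)

def fuzz(seed, iterations=200_000):
    corpus = [seed]
    seen = set()
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        coverage = set()
        try:
            target(candidate, coverage)
        except RuntimeError:
            return candidate            # crashing input found
        if not coverage <= seen:        # new branches reached? keep the input
            seen |= coverage
            corpus.append(candidate)
    return None
```

Starting from the seed `b"A"`, the loop typically discovers the three-byte prefix in a few thousand iterations, whereas random fuzzing would need on the order of billions of tries.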
We can’t definitively say that one fuzzing technique is better than another, because it really does depend on what you’re trying to address. Regardless of whether you want to go with Mayhem, we can help you navigate those waters to find the right type of fuzzer for you.
"Symbolic execution" usually means "static symbolic execution," in which you analyze a non-executing program to determine how it might behave when it does execute for real. Symbolic execution is a program analysis technique that uses formal computer science methods to determine an input that causes a given node in the application to execute. Once determined, that valid input can be used to derive invalid inputs for negative testing.
Concolic execution is a form of dynamic symbolic execution: the analysis runs on a trace of a program that actually executed on a specific concrete input. At ForAllSecure, we technically perform only concolic execution, not static symbolic execution. So when we say "symbolic execution," we mean it in the general sense that covers both the static and dynamic varieties, of which we do the latter.
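The core concolic loop, run the program concretely, record the path condition, negate the last branch, and solve for a new input, can be sketched as a toy. All names here are illustrative, and the "solver" handles only constraints of the form `var + offset == constant`; a real engine like Mayhem works on binaries and uses an SMT solver.

```python
def traced_target(x: int, y: int):
    # Hypothetical target. Each branch compares an input plus an offset to a
    # constant; we record (variable, offset, constant, taken) per branch.
    # Note the second constraint records x's *concrete* value -- that mixing
    # of concrete and symbolic is what makes the execution "concolic".
    path = []
    c1 = (x + 3 == 10)
    path.append(("x", 3, 10, c1))
    if not c1:
        return "shallow", path
    c2 = (y + 5 == x)
    path.append(("y", 5, x, c2))
    if not c2:
        return "deeper", path
    raise RuntimeError("bug reached")

def concolic_search(max_runs: int = 10):
    inputs = {"x": 0, "y": 0}              # arbitrary concrete starting input
    for _ in range(max_runs):
        try:
            _, path = traced_target(**inputs)
        except RuntimeError:
            return dict(inputs)            # input that reaches the bug
        # Negate the last (untaken) branch and solve it:
        # var + off == const  =>  var = const - off
        var, off, const, taken = path[-1]
        assert not taken                   # the run stopped at this branch
        inputs[var] = const - off
    return None
```

Each run peels one branch deeper: the first run fails `x + 3 == 10`, solving gives `x = 7`; the second fails `y + 5 == 7`, giving `y = 2`; the third run reaches the planted bug.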
Academic research shows that in-depth dynamic software analysis requires both fuzzing and symbolic execution. We compared Mayhem with AFL, a state-of-the-art guided fuzzer, across three dimensions: analysis speed, code coverage, and defect detection.
A review of software security investments reveals that a majority of spending goes to application testing solutions, such as static analysis, software composition analysis, and scanners. These conventional testing approaches, however, test known or common attack patterns, only addressing CVEs or CWEs. But what about the unknown vulnerabilities -- the weaknesses attackers often exploit?
Fuzzing is a proven tool that maximizes defect detection with the least amount of time and resources. As a result, it not only saves organizations time and money, it also frees scarce technical resources from manual, mundane tasks and allows them to focus on strategic initiatives that require true expertise.
“Fuzzing seems like black magic and it seems impossible to bring into a company. At the same time [the fuzzing industry] just [doesn’t] know how else to make it easier and more accessible. Who can be right in that conversation? The answer is: both sides. Because, the future is always already here, it’s just unevenly distributed”
- Mike Walker, Sr. Director of Microsoft Research NExT Special Projects
Time and time again, fuzzing has been proven an effective technique for uncovering zero-day vulnerabilities. If it’s so effective, why don’t more people use it, much less know about it?
Fuzzing has been deemed a dark art that only security researchers and hackers knew how to harness. As demand for fuzzing increases, the market has been shifting to make it more accessible, approachable, and easier to understand.
While each vendor has their own philosophy on how to make fuzzing more approachable and easier to understand, the collective industry’s dedication to accessibility is undeniable. Within the last two years alone, we’ve seen more vendors enter the market, including Fuzzbuzz, FuzzIt (now GitLab), and Microsoft, to name a few. Remember, this was a market that used to consist of only a handful of offerings: AFL, libFuzzer, and Google OSS-Fuzz.
You’ve invested quite a bit of time, money, and effort into your SAST solution. So, you may not be warm to the idea of breaking up with it. Great news! You don’t have to.
When it comes to product security, best practices call for layering. Much like defense-in-depth, think of it as testing-in-depth. Assess the gaps left behind by your SAST solution, then identify additional solutions to augment it.
Our research showed that these three application security testing techniques offer the greatest coverage: static analysis, software composition, and fuzz testing.