Categories: Python

Collision: the story of the random bug

So here I was, trying to write some Django server-side code, when every once in a while, some test would fail.
Now, it is important to know that we are using any_model, a cute little library that allows you to specify only the fields you need when creating objects, and randomizes the rest (to help uncover more bugs).

In this particular instance, the test that was failing stored objects on the server using an API, and then checked that the new objects exist in the DB. Every once in a while, an object didn’t exist. It should be noted that the table with the missing rows had a Django-ORM URLField.

So first things first, I changed the code to print the random seed it was using on every failure. Now the next time it failed (a day later), I had the random seed in hand.

I then proceeded to use that random seed – and now I had a reproducible bug – it failed every time, consistently.
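
For illustration, the seed bookkeeping can be as simple as the following sketch. This is not the project's actual test code; the FAILURE_SEED environment variable and the do_the_actual_test helper are made up for the example:

import os
import random

def seed_randomness():
    # Reuse a previously logged seed if one was supplied, otherwise pick a fresh one.
    seed = os.environ.get("FAILURE_SEED") or str(random.random())
    random.seed(seed)
    return seed

def run_test():
    seed = seed_randomness()
    try:
        do_the_actual_test()  # hypothetical test body
    except AssertionError:
        # Log the seed on failure, so the exact run can be reproduced later.
        print("test failed with random seed %r" % seed)
        raise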

The next step was finding the cause of the bug. To cut a long story short – it turned out that the test was looking for an object with a specific URL. Which URL? The URL created for the first object (we had two).

The bug was that the second object was getting the same URL as the first. I remind you, these URLs are generated randomly. The troublesome URL was http://72.14.221.99

I leave you now to guess or check the chances of a collision here
(the correct way to do that would be to check any_model’s code for generating URLs, and not just say 1 in 2^32… :)
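
If you do want to play with the naive estimate, here is a tiny back-of-the-envelope sketch. It assumes every URL's host part is drawn uniformly from 2^32 values, which is exactly the simplification the parenthetical above warns against; any_model's actual generator may behave quite differently:

def naive_collision_probability(num_urls, space_size=2 ** 32):
    # Birthday-problem estimate: probability that at least two of
    # num_urls independent uniform draws from space_size values collide.
    p_no_collision = 1.0
    for i in range(num_urls):
        p_no_collision *= (space_size - i) / float(space_size)
    return 1.0 - p_no_collision

print(naive_collision_probability(2))  # ~2.3e-10 under the uniform assumption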

So I made sure the second object got a new URL, and all was well, and the land had rest for forty years. (or less).

Categories: Python, Testing

Fuzz-Testing With Nose

A few days ago, I found a bug in my website, plnnr.com. The bug was in a new feature I added to the algorithm. The first thing I did was write a small unit-test to reproduce the bug. With that unit-test in hand, I then worked to fix the bug, and got this unit-test to pass.

As I had previously presumed this feature to be (relatively :) bug-free, I decided that more testing was in order. This time however, a single test-case would not be enough – I needed to make sure that the trip-generation algorithm works in many cases. Enter fuzzing.

Plnnr.com generates trips according to trip preferences. Why not generate the trip preferences with a fuzzer, and then check if the planning algorithm chokes on them? While fuzzing is usually used to generate invalid input with the goal of causing the program to crash, in this case I’m generating valid input with the goal of causing the planning algorithm to fail.

Usually fuzzing is done with one of two techniques – exhaustive fuzzing, which goes systematically (possibly selectively) over the input space, and random fuzzing, which picks inputs at random – or “somewhat” randomly. In my case, the input space consists of “world data” – locations of attractions, restaurants, etc., and trip preferences – intensity, required attractions, and so on. Since the input space is so large and “unstructured”, I found it much easier to go with random fuzzing.

In each test-case, I will generate a “random world”, and random trip preferences for that world.
Here is some sample code that shows how this might look:

trip_prefs.num_days = random.randint(0, 5)
trip_prefs.intensity = random.randint(0, 5)
if randbit():
    trip_prefs.schedule_lunch = True

Where randbit is defined like so:

def randbit(prob=0.5):
    return random.random() < prob

This is all very well, but tests need to be reproducible. If a fuzzer-generated test case fails and I can’t recreate it to analyze the error and later verify that it is fixed, it isn’t of much use. To solve this issue, the input generation function receives a seed value and uses it to set the random seed. Now, generating test cases is just a matter of generating a sequence of random values. Here is my code to do that:

import random

NUM_FUZZ_TESTS = 50  # however many fuzz cases you want per run

class FuzzTestBase(object):
    __test__ = False  # tell nose not to collect the base class itself
    def run_single_fuzz(self, random_seed):
        pass  # overridden by concrete test classes
    def fuzz_test(self):
        # Generate the seeds up front, so that seeding inside the
        # test cases doesn't affect the seed sequence itself.
        random.seed()
        random_seeds = [str(random.random()) for i in range(NUM_FUZZ_TESTS)]
        for seed in random_seeds:
            yield self.run_single_fuzz, seed

FuzzTestBase is a base-class for actual test classes. Each test class just needs to define its own version of run_single_fuzz, and in it call random.seed(random_seed) and log random_seed.

This code uses nose's ability to test generators: it assumes that a test generator yields test functions and their parameters.
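
For example, a concrete test class might look roughly like this. It is only a sketch with made-up names: generate_random_world, generate_trip_prefs, plan_trip and trip_is_valid stand in for whatever your project actually calls these:

import logging
import random

class PlannerFuzzTest(FuzzTestBase):
    __test__ = True  # re-enable collection; the base class turned it off
    def run_single_fuzz(self, random_seed):
        logging.info("fuzz case with seed %s", random_seed)
        random.seed(random_seed)
        world = generate_random_world()      # hypothetical world generator
        prefs = generate_trip_prefs(world)   # hypothetical preference fuzzer
        trip = plan_trip(world, prefs)       # the algorithm under test
        assert trip_is_valid(trip, prefs)    # whatever checks make sense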

A few interesting issues:
* I generate the random seeds beforehand, so that calling random.seed() in the actual test case doesn’t affect the seed sequence.
* Originally I used just random.random() as a seed instead of str(random.random()). The problem is that it isn't reproducible that way: random.random() returns a floating point value x for which, usually, x != eval(str(x)):

In [10]: x = random.random()
In [11]: x == eval(str(x))
Out[11]: False

Even though x == eval(repr(x)) for that case, there’s still room for error. Unlike floating point numbers, it’s harder to go wrong with string equality. So str(random.random()) is just a cheap way to generate random strings.

If your testing mostly consists of hand-picked test cases based on what you think users are likely to do, I'd recommend adding some fuzzed inputs. I originally started the fuzz-testing described in this blog-post to better test for a specific bug. After adding the fuzz-testing, I found another bug I didn’t know was there. This just goes to show how useful fuzzing is as a testing tool. The fact that it’s so easy to implement is just a bonus.

Categories: Assembly, computer science, Programming, Projects, Testing

Issues in writing a VM – Part 1

Arkon and I decided to write a VM for vial. First though, a short explanation of what vial is:
vial is a project aimed at writing a general disassembler that outputs expression trees instead of text. On top of vial, we intend to write various code-analysis tools. The expression trees in the output should be an accurate description of all of the code’s actions.
(Note: the x86 disassembler behind vial is Arkon’s diStorm.)

So why do we need a VM? Apart from it being ‘nice and all’, it is critical for testing.

Some time ago, I described writing a VM to test a compiler I wrote as university homework. It is a similar issue here.
The disassembler is written according to the x86 specification. If we just check its output against this specification, we are not doing much to verify the code’s correctness. This is evident when you try to implement such a testing module – you end up writing another disassembler, and testing it against the original one. There has to be a different test method, one that does not directly rely on the specification.

Enter the VM. If you write a program, you can disassemble it, and then try to execute the disassembly. If it yields the same output as the original program – your test passed.
This is a good testing method, because it can be easily automated, it can reach good code coverage, and it tests against known values.
Consider the following illustration:

[Figure: Testing Process]

We are testing a complete process on the left-hand side against a known valid value, the original program’s output, on the right-hand side. All of the boxes on the left-hand side are tested along the way. Of course, a single test may miss bugs. For example, both the VM and the disassembler may generate wrong output for register overflows. We can try to cover as many such cases as possible by writing good tests for this testing framework. In this case, good tests are either C programs or binary programs. This is essentially what I was doing when I manually fuzzed my own compiler.
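
To make the comparison concrete, the harness boils down to something like the following sketch. The disassemble function and the VM class are stand-ins, not vial's actual API, and the check is simplified to comparing stdout only:

import subprocess

def check_program(binary_path, test_input=b""):
    # Run the original program and record its output (the known-good value).
    expected = subprocess.run([binary_path], input=test_input,
                              capture_output=True).stdout

    # Disassemble the binary into expression trees and execute them in the VM.
    trees = disassemble(binary_path)                 # hypothetical vial call
    actual = VM().execute(trees, stdin=test_input)   # hypothetical VM interface

    assert actual == expected, "VM output differs from the real program"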

Once the VM is finished, we can start writing various optimizations for the disassembler’s generated output. We can test these optimizations by checking the VM’s output on the optimized code against the output on the original code. This makes the VM a critical milestone on the road ahead.

Categories: Compilation, computer science, Programming, Projects, Python

Manually fuzzing my own compiler

As I mentioned before, I had to write a compiler for simplified CPL. An obvious requirement was that the compiler generate correct code. A less obvious, but nonetheless important, requirement was that after a syntax error, the compiler should keep parsing the source program.

Now, the default behavior of a parser generated by Bison for a syntax error is to return from the parsing function, yyparse. You may of course call yyparse again, but this would be mostly meaningless – you have lost all your context. A simple example would be a language that must have ‘program’ as its first word. Once you are past that token, you will not be able to parse the source program again, because your first production (which might look like this):

program: TOK_PROGRAM declarations statements TOK_END

won’t parse.

This is solved in Bison by adding error productions. For example:

expression: '(' error ')'

This production means that an error encountered within parenthesis may be considered a valid expression for the purposes of parsing. A proper semantic action for that error (the code that runs when the production is parsed) will add an error message to some error list, and maybe do some general book-keeping.

So where does the fuzzing come in?
Well, my compiler was mostly working, but it still had no error recovery. That means that any syntax error would cause it to exit, with just that one error. Consider your favorite compiler (such as gcc) exiting on the first missing semicolon. This is just not acceptable. So I added my basic error recovery and was pretty satisfied.
Then, I had to test the newly written error handling. So I wrote a CPL program in which my goal was to try to kill my own compiler. That's a fun way to test your code. This also happens to be a ‘fuzzing mindset’. I actually managed to find some holes I hadn't thought about, and closed them. Of course, these were not security holes, just ‘compilation holes’.
Here is an excerpt from one of the programs I wrote:
x=x++++++++++++++100;
x=x=3;
}
for (x=1; x<100; x=x+1; { x = x; } else () { for (x=1; x<100; x=x+1;) { x = -1; } }

It goes on like this for a bit, so I'll save you the trouble.

Later I did some deep testing for the code generation. I wrote a test case for every possible operator in the language (there aren't that many) and for each type (real and int. Did I mention it was simplified CPL?). Since the language doesn't support strings, each of these test cases printed 1 for success and 0 for failure. I ran the compiled output with my VM script and then had a simple Python script collect all the results and return failure if any of them failed. I later also tested control flow structures using the same method.
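
That collection script really is trivial; roughly along these lines (a sketch only; the tests/ directory layout and the cpl-vm command name are made up for the example):

import glob
import subprocess
import sys

failed = []
for compiled in glob.glob("tests/*.out"):          # assumed location of compiled test cases
    result = subprocess.run(["cpl-vm", compiled],  # hypothetical VM script
                            capture_output=True, text=True)
    if result.stdout.strip() != "1":               # each test prints 1 on success, 0 on failure
        failed.append(compiled)

if failed:
    print("failed:", ", ".join(failed))
    sys.exit(1)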
I had all of these tests available in my makefile, and after each change all I had to do was ‘make test’. Which I did, often.

Looking back after finishing the assignment, it occurred to me that I could have also written an actual ‘fuzzer’ for source programs. However, in the context of such a university assignment this seemed like overkill. Had I been writing a real compiler, there’s a very good chance I’d have written a ‘source fuzzer’.
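
For the curious, such a ‘source fuzzer’ doesn't have to be fancy. Here is a rough sketch of the idea: a recursive random-expression generator for a made-up C-like grammar, not tied to the actual CPL syntax:

import random

OPERATORS = ["+", "-", "*", "/", "<", "=="]

def random_expression(depth=0):
    # Occasionally emit deliberate garbage, to exercise error recovery too.
    if random.random() < 0.05:
        return "+++))(("
    if depth > 3 or random.random() < 0.4:
        return random.choice(["x", "y", str(random.randint(0, 100))])
    op = random.choice(OPERATORS)
    return "(%s %s %s)" % (random_expression(depth + 1), op,
                           random_expression(depth + 1))

def random_program(num_statements=10):
    body = "\n".join("x = %s;" % random_expression() for _ in range(num_statements))
    return "program\n%s\nend" % body  # made-up program skeleton

print(random_program())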

All in all, after I got over the nastiness which is Bison, it was pretty fun. It has been a long time since I wrote C or C++, and it’s good to clean some of the rust away from time to time.