Categories
Python Testing

Fuzz-Testing With Nose

A few days ago, I found a bug in my website, plnnr.com. The bug was in a new feature I added to the algorithm. The first thing I did was write a small unit-test to reproduce the bug. With that unit-test in hand, I then worked to fix the bug, and got this unit-test to pass.

As I previously presumed this feature to be (relatively :) bug free, I decided that more testing was in order. This time, however, a single test-case would not be enough – I needed to make sure that the trip-generation algorithm works in many cases. Enter fuzzing.

Plnnr.com generates trips according to trip preferences. Why not generate the trip preferences with a fuzzer, and then check if the planning algorithm chokes on them? While fuzzing is usually used to generate invalid input with the goal of causing the program to crash, in this case I’m generating valid input with the goal of causing the planning algorithm to fail.

Fuzzing is usually done with one of two techniques: exhaustive fuzzing, which goes systematically (and possibly selectively) over the input space, and random fuzzing, which picks inputs at random – or “somewhat” randomly. In my case, the input space consists of “world data” (locations of attractions, restaurants, etc.) and trip preferences (intensity, required attractions, and so on). Since the input space is so large and “unstructured”, I found it much easier to go with random fuzzing.

In each test-case, I will generate a “random world”, and random trip preferences for that world.
Here is some sample code that shows how this might look:

trip_prefs.num_days = random.randint(0, 5)
trip_prefs.intensity = random.randint(0, 5)  # assuming an integer intensity scale
if randbit():
    trip_prefs.schedule_lunch = True

Where randbit is defined like so:

def randbit(prob=0.5):
    return random.random() < prob
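
For the “random world” side of things, the same approach applies. Here is a rough sketch of what such a generator might look like – the Attraction fields and the value ranges here are made up for illustration, not plnnr.com’s actual data model:

import random

class Attraction(object):
    """Hypothetical world object - the real plnnr.com model surely differs."""
    def __init__(self, name, lat, lon, duration_hours):
        self.name = name
        self.lat = lat
        self.lon = lon
        self.duration_hours = duration_hours

def generate_random_world(num_attractions=30):
    # Scatter attractions inside a small bounding box around some city center.
    return [Attraction(name="attraction_%d" % i,
                       lat=32.0 + random.uniform(-0.1, 0.1),
                       lon=34.8 + random.uniform(-0.1, 0.1),
                       duration_hours=random.choice([0.5, 1, 2, 3]))
            for i in range(num_attractions)]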

This is all very well, but tests need to be reproducible. If a fuzzer-generated test case fails and I can’t recreate it to analyze the error and later verify that it is fixed, it isn’t of much use. To solve this issue, the input generation function receives a seed value and calls random.seed() with it. Now, generating test cases is just a matter of generating a sequence of random seed values. Here is my code to do that:

import random

NUM_FUZZ_TESTS = 100  # number of cases per test class; the value here is arbitrary

class FuzzTestBase(object):
    __test__ = False  # tell nose not to collect the base class itself
    def run_single_fuzz(self, random_seed):
        pass
    def fuzz_test(self):
        random.seed()
        random_seeds = [str(random.random()) for i in range(NUM_FUZZ_TESTS)]
        for seed in random_seeds:
            yield self.run_single_fuzz, seed

FuzzTestBase is a base-class for actual test classes. Each test class just needs to define its own version of run_single_fuzz, call random.seed(random_seed) in it, and log random_seed.
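
For illustration, a concrete test class might look roughly like this – generate_random_prefs, generate_trip and trip_is_valid are hypothetical stand-ins for the real plnnr.com code:

import logging
import random

class TripFuzzTest(FuzzTestBase):
    __test__ = True  # re-enable collection for the concrete test class
    def run_single_fuzz(self, random_seed):
        # Log the seed first, so a failing case can be re-run later.
        logging.info("fuzz seed: %s", random_seed)
        random.seed(random_seed)
        world = generate_random_world()        # hypothetical helper
        prefs = generate_random_prefs(world)   # hypothetical helper
        trip = generate_trip(world, prefs)     # the code under test
        assert trip_is_valid(trip, prefs)      # whatever invariants apply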

This code uses nose's ability to test generators: it assumes that a test generator yields test functions and their parameters.

A few interesting issues:
* I generate the random seeds beforehand, so that calling random.seed() in the actual test case doesn’t affect the seed sequence.
* Originally I used just random.random() as a seed, instead of str(random.random()). The problem with that is that it isn’t reproducible: random.random() returns a floating point value x for which, usually, x != eval(str(x)):

In [10]: x = random.random()
In [11]: x == eval(str(x))
Out[11]: False

Even though x == eval(repr(x)) for that case, there’s still room for error. Unlike floating point numbers, it’s harder to go wrong with string equality. So str(random.random()) is just a cheap way to generate random strings.

If your testing mostly consists of hand-picked test cases based on what you think is possible user behavior, I’d recommend adding some fuzzed inputs as well. I originally started the fuzz-testing described in this blog post to better test for a specific bug. After adding the fuzz-testing, I found another bug I didn’t know was there. This just goes to show how useful fuzzing is as a testing tool. The fact that it’s so easy to implement is just a bonus.

Categories
Personal Programming Python Testing

My Bad Memory, High Load, and Python

About a month ago the new Ubuntu 8.04 was released and I wanted a clean install. I downloaded an image and burned it to a CD. Just before installing, I tried “check CD for defects” and found a few. Turns out (*) this was because of bad memory – and memtest confirmed it.
So I went to the shop, replaced the bad memory, and also bought two new sticks. I went home to install the new Ubuntu, and after the installation, Firefox crashed. After rebooting back to memtest, I saw this:

[Image: memory errors in memtest]

Back at the computer shop, they asked me to reproduce the errors. Just firing up the computer and booting directly into memtest didn’t seem to do the trick, so I suspected that I had to overwork my computer a bit to reproduce this.

Since I was at the lab, I didn’t want to muck around too much.
So I thought, “what’s the quickest way to give your CPU a run around the block?”
That’s right – a tight loop:

while True:
    pass

However, this snippet doesn’t really play with the memory.

The next simplest thing to do that also jiggles some RAM is the following (and one of my favorites):

In [1]: x = 4**(4**4)
In [2]: y = 4**x

I will talk about this peculiar piece of code in a later post.

In any case, this snippet didn’t reproduce the error either. It is also quite unwieldy, as it raises a MemoryError after some time. Later, at home, I tried two more scripts.
The first is a variation on the one above:

x = 4**(4**4)
while True:
    try:
        y = 4**x
    except MemoryError:
        pass

I ran a few of those in parallel. However, my Ubuntu machine actually killed the processes running it, one by one.

The second is smarter. It allocates some memory and then just copies it around:

import sys
import copy

megabytes = int(sys.argv[1])

# Each inner list holds roughly one megabyte of string data.
x1 = [["a"*1000 + str(i) for i in range(1000)] for j in range(megabytes)]
while True:
    x2 = copy.deepcopy(x1)

After neither of these scripts reproduced the problem, which still showed up sporadically, I returned the computer to the lab. It turned out that the two replacement sticks and the two new sticks weren’t exactly identical, and that was the cause of the problem. So now my memory is well again.

As for the scripts above, I once wrote a similar script at work, when I was asked to help stress-test some software. The goal was to simulate a heavily used computer. A few lines of Python later, and the testing environment was ready.
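
For the curious, here is a minimal sketch of what such a load generator could look like. This is not the actual script, just a reconstruction of the idea – a few CPU-spinning workers plus a few memory-copying workers, running until interrupted:

import copy
import multiprocessing

def burn_cpu():
    # Tight loop: keeps one core fully busy.
    while True:
        pass

def churn_memory(megabytes=64):
    # Allocate a few dozen MB of strings and copy them around forever.
    data = [["a" * 1000 + str(i) for i in range(1000)] for j in range(megabytes)]
    while True:
        copy.deepcopy(data)

if __name__ == '__main__':
    workers = []
    for _ in range(multiprocessing.cpu_count()):
        workers.append(multiprocessing.Process(target=burn_cpu))
        workers.append(multiprocessing.Process(target=churn_memory))
    for w in workers:
        w.start()
    for w in workers:
        w.join()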

Footnotes:
(*) – Finding out that it was a memory issue wasn’t as easy as it sounds. I didn’t think of running memtest. I checked the image on my HD with md5, and the hash didn’t match. I downloaded a second image, and again the hash didn’t match. I checked twice.
At this point I was really surprised: not only did the second check not match the published md5, it also didn’t match the first check. Some hours and plenty of voodoo later, a friend suggested running memtest, and the culprit was found.

Categories
Assembly Challenges Programming Testing

Some Assembly Required No. 1

I’ve been working on some of the instruction tests in vial, and I wanted to test the implementation of LOOP variants. My objective was to make sure the vial version is identical to the real CPU version (as discussed here). To achieve this, I had to cover all of the essential behaviors of LOOP.

Well, using the framework Gil and I wrote, I hacked up some code that should cover the relevant cases:

code_template = """
mov edx, ecx ; control the start zf
mov ecx, eax ; number of iterations
mov eax, 0 ; will hold the result, also an iteration counter
loop_start:
 
    cmp eax, ebx    ; check if we need to change zf
    setz dh
    xor dh, dl      ; if required, invert zf
    inc eax         ; count the iteration
    cmp dh, 0       ; set zf
 
    loop%s loop_start
"""
for loop_kind in ['','z','nz']:
    code_text = code_template % loop_kind
    c = FuncObject(code_text)
    for start_zf_value in [0,1]:
        for num_iters in [1,4,10]:
            for when_zf_changes in [1,2,15]:
                c(num_iters, when_zf_changes, start_zf_value)
                c.check()

Note that c(…) executes the code both on vial’s VM and on the real CPU. c.check() compares their return value (EAX) and flags after the execution. I also wanted to avoid other kinds of jumps in this test.

To check that the code ran the same number of times, I returned EAX as the number of iterations.
All the games with edx are there to make sure that I’m testing different zf conditions.
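
In other words, each check boils down to comparing two (EAX, flags) pairs. A minimal sketch of that comparison – assuming the framework hands you the two result pairs – might be:

def compare_results(vm_result, cpu_result):
    # Each result is assumed to be an (eax, flags) pair captured after the run.
    vm_eax, vm_flags = vm_result
    cpu_eax, cpu_flags = cpu_result
    assert vm_eax == cpu_eax, "EAX mismatch: %r != %r" % (vm_eax, cpu_eax)
    assert vm_flags == cpu_flags, "flags mismatch: %r != %r" % (vm_flags, cpu_flags)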

The challenge for today:
Can you write a shorter assembly snippet that tests the same thing?

Categories
Assembly computer science Programming Projects Testing

Issues in writing a VM – Part 1

Arkon and I decided to write a VM for vial. First though, a short explanation of what vial is:
vial is a project aimed at writing a general disassembler that outputs expression trees instead of text. On top of vial, we intend to write various code-analysis tools. The expression trees in the output should be an accurate description of all of the code’s actions.
(Note: the x86 disassembler behind vial is Arkon’s diStorm.)

So why do we need a VM? Apart from it being ‘nice and all’, it is critical for testing.

Some time ago, I described writing a VM to test a compiler I wrote as university homework. The issue here is similar.
The disassembler is written according to the x86 specification. If we just check its output against this specification, we are not doing much to verify the code’s correctness. This becomes evident when you try to implement such a testing module – you end up writing another disassembler and testing it against the original one. There has to be a different test method, one that does not rely directly on the specification.

Enter the VM. If you take a program, you can disassemble it, and then try to execute the disassembly. If it yields the same output as the original program – your test passed.
This is a good testing method because it can be easily automated, it can reach good code coverage, and it tests against known values.
Consider the following illustration:

[Diagram: Testing Process]

Here we are testing a complete process on the left-hand side against a known-valid value – the original program’s output – on the right-hand side. All of the boxes on the left-hand side are tested along the way. Of course, a test may still miss. For example, both the VM and the disassembler may generate wrong output for register overflows. We can try to cover as many such cases as possible by writing good tests for this testing framework. In this case, good tests are either C programs or binary programs. This is essentially what I was doing when I manually fuzzed my own compiler.
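
As a rough sketch of the idea (disassemble_to_trees and execute_on_vm are hypothetical names, not vial’s actual API), the test harness amounts to something like this:

import subprocess

def run_native(binary_path, args=()):
    # Run the original compiled program and capture its output.
    return subprocess.check_output([binary_path] + list(args))

def run_through_vm(binary_path, args=()):
    # Hypothetical pipeline: disassemble the binary into expression trees,
    # then execute the trees on the VM and capture the output.
    trees = disassemble_to_trees(binary_path)   # hypothetical
    return execute_on_vm(trees, args)           # hypothetical

def test_program(binary_path, args=()):
    # The disassemble-and-execute path must produce the same output
    # as running the original program directly.
    assert run_native(binary_path, args) == run_through_vm(binary_path, args)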

Once the VM is finished, we can start writing various optimizations for the disassembler’s generated output. We can test these optimizations by checking the VM’s output on the optimized code against its output on the original code. This makes the VM a critical milestone on the road ahead.
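
In code, that optimization check might look roughly like this (again with hypothetical helper names):

def test_optimization(binary_path, args=()):
    trees = disassemble_to_trees(binary_path)   # hypothetical
    optimized = optimize_trees(trees)           # hypothetical optimization pass
    # The optimized expression trees must behave exactly like the originals.
    assert execute_on_vm(trees, args) == execute_on_vm(optimized, args)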