Categories
Programming

Writing a pandemic simulation

Over the last weekend I felt like programming something fun and easy, so I thought: why not write yet another pandemic/epidemic simulation?

A quick demo of simpandemic

So between helping a crying child and preparing lunch, I created simpandemic. It’s small and simplistic, but easy to play with and to change parameters in. As a toy project, it’s far from perfect. I implemented infection based on distance rather than collision detection, as some other simulations do, and optimized it using a grid rather than a tree structure (e.g. an rtree). Still, it works, it’s playable, and it’s very much tweakable.
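
The grid idea can be sketched independently of simpandemic’s actual code (everything below is illustrative, not taken from the repository): bucket each person into a cell the size of the infection radius, and then a proximity query only has to scan the 3×3 block of cells around a person instead of every person on the board.

```python
from collections import defaultdict

def build_grid(positions, cell):
    """Bucket point indices into square cells of side `cell`."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // cell), int(y // cell))].append(i)
    return grid

def neighbors_within(positions, grid, cell, idx, radius):
    """Indices of points within `radius` of point `idx`, scanning only the
    3x3 neighborhood of cells (valid as long as radius <= cell)."""
    x, y = positions[idx]
    cx, cy = int(x // cell), int(y // cell)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                if j != idx and (positions[j][0] - x) ** 2 + (positions[j][1] - y) ** 2 <= radius ** 2:
                    found.append(j)
    return found

positions = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
grid = build_grid(positions, cell=2.0)
print(neighbors_within(positions, grid, 2.0, 0, radius=2.0))  # [1]
```

Compared to the naive all-pairs check, this turns each infection step from O(n²) into roughly O(n) for sparsely populated boards.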

Right now it depends on pygame, which is great fun, but a bit of a pain to get working on a Mac these days.

Feel free to download it, fork it, play with it, whatever. I’ll accept fun pull requests in case these actually come.

Stay healthy, stay safe, stay home!

Categories
Python

Collision: the story of the random bug

So here I was, trying to write some Django server-side code, when every once in a while, some test would fail.
Now, it is important to know that we are using any_model, a cute little library that allows you to specify only the fields you need when creating objects, and randomizes the rest (to help uncover more bugs).

In this particular instance, the failing test stored objects on the server through an API, and then checked that the new objects existed in the DB. Every once in a while, an object didn’t exist. It should be noted that the table with the missing rows had a Django-ORM URLField.

So first things first, I changed the code to print the random seed it was using on every failure. Now the next time it failed (a day later), I had the random seed in hand.

I then proceeded to use that random seed – and now I had a reproducible bug – it failed every time, consistently.

The next step was finding the cause of the bug. To cut a long story short, it turned out the test was looking for an object with a specific URL. Which URL? The URL created for the first object (we had two).

The bug was that the second object was getting the same URL as the first. I remind you, these URLs are generated randomly. The troublesome URL was http://72.14.221.99

I leave you now to guess/check what the chances of a collision are here.
(The correct way to do that would be to check any_model’s code for generating URLs, and not just say 1 in 2^32… :)
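
As a back-of-the-envelope check only: *if* URLs were drawn uniformly from the 2^32 possible IPv4-style addresses (an assumption; any_model’s real generator may well behave differently, which is the whole point of the parenthetical above), the birthday bound gives the collision chance:

```python
# Assumes uniform draws from n equally likely URLs -- illustrative only.
def collision_prob(k, n=2 ** 32):
    """Birthday-bound probability of at least one collision among k draws."""
    p_distinct = 1.0
    for i in range(k):
        p_distinct *= (n - i) / n
    return 1 - p_distinct

print(collision_prob(2))  # a single pair: 1/2**32, about 2.3e-10
```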

So I made sure the second object got a new url, and all was well, and the land had rest for forty years. (or less).

Categories
Python

Cheap language detection using NLTK

Some months ago, I was facing the problem of having to deal with large amounts of textual data from an external source. One of the problems was that I wanted only the English elements, but was getting tons of non-English ones. To solve that I needed some quick way of getting rid of non-English texts. A few days later, while in the shower, the idea came to me: use NLTK stopwords!

What I did was, for each language in NLTK, count the number of stopwords in the given text. The nice thing about this is that it usually gives a pretty strong signal about the language of the text. Originally I used it only for English/non-English detection, but after a little bit of work I made it specify which language it detected. Now, I needed a quick hack for my issue, so this code is not very rigorously tested, but I figured it would still be interesting. Without further ado, here’s the code:

import nltk
 
ENGLISH_STOPWORDS = set(nltk.corpus.stopwords.words('english'))
NON_ENGLISH_STOPWORDS = set(nltk.corpus.stopwords.words()) - ENGLISH_STOPWORDS
 
STOPWORDS_DICT = {lang: set(nltk.corpus.stopwords.words(lang)) for lang in nltk.corpus.stopwords.fileids()}
 
def get_language(text):
    words = set(nltk.wordpunct_tokenize(text.lower()))
    return max(((lang, len(words & stopwords)) for lang, stopwords in STOPWORDS_DICT.items()), key=lambda x: x[1])[0]
 
 
def is_english(text):
    text = text.lower()
    words = set(nltk.wordpunct_tokenize(text))
    return len(words & ENGLISH_STOPWORDS) > len(words & NON_ENGLISH_STOPWORDS)
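
The overlap-counting trick doesn’t depend on NLTK itself. Here is a self-contained toy version with tiny hand-picked stopword sets (illustrative only; NLTK’s real lists are far larger), which shows why even a handful of stopwords is usually a strong enough signal:

```python
# Toy stand-in for the NLTK-based code above: same counting idea,
# hand-made stopword sets instead of nltk.corpus.stopwords.
STOPWORDS = {
    'english': {'the', 'is', 'and', 'of', 'to', 'in'},
    'french': {'le', 'la', 'et', 'de', 'est', 'dans'},
    'german': {'der', 'die', 'und', 'ist', 'von', 'das'},
}

def guess_language(text):
    words = set(text.lower().split())
    return max(STOPWORDS, key=lambda lang: len(words & STOPWORDS[lang]))

print(guess_language("the cat is in the garden"))    # english
print(guess_language("le chat est dans le jardin"))  # french
```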

The question to you: what other quick NLTK, or NLP hacks did you write?

Categories
Python

Python Module Usage Stats – Feb 2011

Here are the top 30 “base modules”, ordered by the number of PyPI projects importing them. These results are based on 11,204 packages downloaded from PyPI. Explanations, full results and the code to generate them are available below.

Results



Full results are available (see Methodology to understand what they mean exactly).

Discussion

Some interesting tidbits and comparisons:

  • It seems django has gained “some popularity”. Zope is very high up on the list, and plone is at 42 with 907 projects importing it.
  • The number of projects importing unittest is somewhat depressing, especially relative to setuptools, which is impressive. That might be because setuptools is somewhat of a prerequisite to appearing on PyPI (practically speaking), while unittest is not. (Edit: corrected by Michael Foord in a comment)
  • optparse with 1875 vs. getopt with 515.
  • cPickle with 690 vs. pickle with 598.
  • simplejson with 760 vs. json with 593.

I invite you all to find out more interesting pieces of information by going over the results. I bet there’s a lot more knowledge to be gained from this.

Background

Back in 2007 I wrote a small script that counted module imports in python code. I used it to generate statistics for Python modules. A week or two ago I had an idea to repeat that experiment – and see the difference between 2007 and 2011. I also thought of a small hypothesis to test: since django became very popular, I’d expect it to be very high up on the list.

I started working with my old code, and decided that I should update it. Looking for imports in Python code is not as simple as it seems. I considered using the tokenize and parser modules, but decided against that. Using parser would make my code version dependent and by the time I thought of tokenize, I had the complicated part already worked out. By the complicated part I mean of course the big regexps I used ;)

Methodology

Input: PyPI and a source distribution of the Python 2.7 standard library. I wrote a small script (cheese_getter.py) to fetch Python modules. It does so by reading the PyPI index page, and then using easy_install to fetch each module. Since currently there are a bit less than 13k modules in PyPI, this took some time.

Parsing: I wrote a relatively simple piece of code to find “import x” and “from x import y” statements in code. This is much trickier than it seems: statements such as “from x import a,b”, “from . import bla” and

from bla import \
               some_module, \
               some_module2

should all be supported. In order to achieve uniformity, I converted each import statement to a series of dotted modules. So for example, “import a.b” will yield “a” and “a.b”, and “from b import c,d” will yield “b”, “b.c”, and “b.d”.
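
The post used regexps; purely for illustration, the same “dotted expansion” can be sketched with the stdlib ast module instead (note this sketch skips relative imports like “from . import bla”, which the original handled):

```python
import ast

def extract_imports(source):
    """'import a.b' -> {'a', 'a.b'}; 'from b import c, d' -> {'b', 'b.c', 'b.d'}."""
    names = set()

    def expand(dotted):
        # "a.b.c" contributes "a", "a.b" and "a.b.c"
        parts = dotted.split('.')
        for i in range(1, len(parts) + 1):
            names.add('.'.join(parts[:i]))

    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                expand(alias.name)
        elif isinstance(node, ast.ImportFrom) and node.module:
            for alias in node.names:
                expand(node.module + '.' + alias.name)
    return names

print(sorted(extract_imports("import a.b\nfrom b import c, d")))
# ['a', 'a.b', 'b', 'b.c', 'b.d']
```

Using ast does tie the parser to the Python version it runs under, which is exactly the version-dependence concern raised above, so the regexp choice was a reasonable trade-off.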

Processing: I created three result types:

  1. total number of imports
  2. total number of packages importing the module
  3. total number of packages importing the module, only for the first module mentioned in a dotted module name, e.g. not “a.b”, only “a”.

I believe the third is the most informative, although there are interesting things to learn from the others as well.

Code: Full code is available. Peer reviews and independent reports are welcome :)

Categories
Programming Philosophy

Beautiful Code

A few days ago, @edensh mentioned beautiful code on Facebook, and many people gave examples of assembly, while I was thinking of Python.

That got me thinking: what is beautiful code for me?

So here are my criteria for beautiful code:

  1. Readable (also visually pretty)
  2. Concise
  3. Does something non-trivial (usually in an unexpectedly short manner)
  4. Good (solves the problem, efficiently)

If we consider code to be an implementation of a solution to a problem, then 3 & 4 usually apply to the solution, while 1 & 2 apply to the code itself. This brings me to why I like Python:

  1. Code is more readable. Specifically, I can still easily understand code I wrote years ago. Also, Python’s zen encourages you to write readable code. For example, “explicit is better than implicit” directly applies to readability.
  2. Python is visually appealing, although I guess that’s a matter of opinion :)
  3. Python almost always allows me to express my solutions easily & succinctly, whereas with other languages (C, C++, Java) I have to fight to “get my point across”.
  4. Python almost always has the right data structures to implement my solutions efficiently.

With that in mind, it’s clear to me now how assembly code can be beautiful.

Note that I didn’t mention C#, Ruby or Haskell. I don’t have much experience with these languages, but from what I’ve seen so far, it seems to me that these languages may help you write beautiful code. Of these, Haskell is probably going to be the first language I’ll learn – I think it will be the most educational experience, although I’m pretty sure others will argue with me regarding Haskell’s readability :)

Now, My question to you is: what do you think makes code beautiful?

Categories
Challenges Programming Python Statistics

Small programming challenge No. 6 – nblocks

I came up with this challenge when I had to write a function to divide a sequence into percentiles. I needed this to calculate some statistics over trips for plnnr.com. “This sounds trivial,” I thought, and reached for my simple blocks function:

def blocks(seq, block_len):
    """blocks(range(5), 2) -> [[0, 1], [2, 3], [4]]"""
    seq_len = len(seq)
    if seq_len % block_len == 0:
        num_blocks = seq_len // block_len
    else:
        num_blocks = 1 + (seq_len // block_len)

    result = [[] for _ in range(num_blocks)]
    for idx, obj in enumerate(seq):
        result[idx // block_len].append(obj)
    return result

So I set block_len to len(seq)/10 and called blocks(seq, block_len). Unfortunately, according to the docs of blocks (which I wrote…), when there is extra data, a “partial” block is added – which is exactly what we don’t want when calculating percentiles.
Instead, the behavior we want is nblocks(seq, number_of_blocks), for example: nblocks(range(10), 3) -> [0, 1, 2], [3, 4, 5], [6, 7, 8, 9].

This function is surprisingly hard to write, or rather, harder than you’d expect. I’ll be especially glad if someone writes it elegantly. My solution works well enough, but isn’t the prettiest.

So, you have the definition – let’s see if you can do better than me. Once enough solutions are presented, I will present my own.

IMPORTANT EDIT: the required signature is nblocks(seq, num_blocks). So for nblocks(range(10), 3), the return value should be 3 blocks, with the last one having an extra item. As a general rule, the extra items should be spread as evenly as possible.
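
(Not the author’s solution, and no claim of elegance; just one possible sketch of the spec above, using divmod and handing the leftover items to the trailing blocks:)

```python
def nblocks(seq, num_blocks):
    """nblocks(range(10), 3) -> [[0, 1, 2], [3, 4, 5], [6, 7, 8, 9]]"""
    base, extra = divmod(len(seq), num_blocks)
    result, start = [], 0
    for i in range(num_blocks):
        # the last `extra` blocks each absorb one leftover item
        size = base + (1 if i >= num_blocks - extra else 0)
        result.append(list(seq[start:start + size]))
        start += size
    return result

print(nblocks(list(range(10)), 3))  # [[0, 1, 2], [3, 4, 5], [6, 7, 8, 9]]
```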

Categories
Python Sound

One-liner Guitar Tuner in Python


On windows, assuming imports are free:

import winsound
winsound.Beep(int(round(220 * (2 ** (1 / 12.0)) ** 7)), 2000)  # Beep needs an int frequency

But that’s just because I like to tune to E. If you prefer a more “natural looking” note, you can use A:

winsound.Beep(110, 1000)

But why choose at all when you can go for all of them?

[winsound.Beep(int(round(220 * (2 ** (1 / 12.0)) ** i)), 500) for i in [7, 2, -2, -7, -12, -17]]
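
Since winsound is Windows-only, here is the same equal-temperament math on its own, runnable anywhere. The semitone offsets [7, 2, -2, -7, -12, -17] relative to A3 = 220 Hz are the standard-tuning strings from high E down to low E (the note names below are my annotation, not from the one-liner):

```python
# Each semitone multiplies the frequency by 2**(1/12).
SEMITONE = 2 ** (1 / 12.0)
A3 = 220.0
strings = {'E4': 7, 'B3': 2, 'G3': -2, 'D3': -7, 'A2': -12, 'E2': -17}
freqs = {name: A3 * SEMITONE ** n for name, n in strings.items()}
for name, f in freqs.items():
    print('%s: %.2f Hz' % (name, f))  # e.g. E4: 329.63 Hz, E2: 82.41 Hz
```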

Image by Keela84

Categories
Fractals Graphics Math Programming Python

Visualizing Data Using the Hilbert Curve

Some time ago, a coworker asked me to help him visualize some data. He had a very long series (many millions) of data points, and he thought that plotting a pixel for each one would visualize it well, so he asked for my help.
I installed Python & PIL on his machine, and not too long after, he had the image plotted. The script looked something like:

data_points = get_data_points()
n = int((len(data_points) ** 0.5) + 0.5)

image = Image.new('1', (n, n))
for idx, pt in enumerate(data_points):
    # putpixel takes the (x, y) position first, then the pixel value
    image.putpixel((idx % n, idx // n), pt)
image.save('bla.png', 'png')

Easy enough to do. Well, easy enough if you have enough memory to handle very large data sets. Luckily, we had just enough memory for this script & data series, and we were happy. The image was generated, and everything worked fine.

Still, we wanted to improve on that. One problem with this visualization is that two horizontally adjacent pixels don’t have anything to do with each other. Remembering xkcd’s “Map of the Internet”, I decided to use the Hilbert Curve. I started with Wikipedia’s version of the code for the Python turtle and changed it to generate a string of instructions for where to put pixels. On the way I improved the time complexity by changing it to have only two recursion calls instead of four. (It can probably be taken down to one by the way, I leave that as a challenge to the reader :)

Unfortunately, at this point we didn’t have enough memory to hold all of those instructions, so I changed it into a generator. Now it was too slow. I cached the lower levels of the recursion, and now it worked in reasonable time (about 3-5 minutes), with reasonable memory requirements (no OutOfMemory exceptions). Of course, I’m skipping a bit of exceptions and debugging along the way. Still, it was relatively straightforward.

Writing the generator wasn’t enough – there were still pixels to draw! It took a few more minutes to write a simple “turtle”, that walks the generated hilbert curve.
Now, we were ready:

hilbert = Hilbert(int(math.log(len(data_points), 4) + 0.5))
for pt in data_points:
    x, y = next(hilbert)
    image.putpixel((x, y), pt)

A few minutes later, the image was generated.
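
The Hilbert class itself isn’t shown in the post. As a sketch of the underlying index-to-coordinate conversion, here is the standard iterative d2xy routine (adapted from the well-known C version on Wikipedia’s Hilbert curve article, not from the post’s code):

```python
def d2xy(n, d):
    """Map index d along a Hilbert curve to (x, y) on an n-by-n grid
    (n must be a power of two)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant so sub-curves connect
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

curve = [d2xy(4, d) for d in range(16)]
# every grid cell is visited once, and consecutive points are always adjacent
print(len(set(curve)), all(abs(x1 - x2) + abs(y1 - y2) == 1
                           for (x1, y1), (x2, y2) in zip(curve, curve[1:])))
```

The adjacency property is exactly what makes the visualization better: points that are close in the data series stay close in the image.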

Categories
Python Testing

Fuzz-Testing With Nose

A few days ago, I found a bug in my website, plnnr.com. The bug was in a new feature I added to the algorithm. The first thing I did was write a small unit-test to reproduce the bug. With that unit-test in hand, I then worked to fix the bug, and got this unit-test to pass.

As I had previously presumed this feature to be (relatively :) bug-free, I decided that more testing was in order. This time however, a single test-case would not be enough – I needed to make sure that the trip-generation algorithm works in many cases. Enter fuzzing.

Plnnr.com generates trips according to trip preferences. Why not generate the trip preferences with a fuzzer, and then check if the planning algorithm chokes on them? While fuzzing is usually used to generate invalid input with the goal of causing the program to crash, in this case I’m generating valid input with the goal of causing the planning algorithm to fail.

Usually fuzzing is done with one of two techniques – exhaustive fuzzing, that goes systematically (possibly selectively) over the input space and random fuzzing, which picks inputs at random – or “somewhat” randomly. In my case, the input space consists of “world data” – locations of attractions, restaurants, etc, and trip preferences – intensity, required attractions, and so on. Since the input space is so large and “unstructured”, I found it much easier to go with random fuzzing.

In each test-case, I will generate a “random world”, and random trip preferences for that world.
Here is some sample code that shows how this might look:

trip_prefs.num_days = random.randint(0, 5)
trip_prefs.intensity = random.randint(0, 5)
if randbit():
    trip_prefs.schedule_lunch = True

Where randbit is defined like so:

def randbit(prob=0.5):
    return random.random() < prob

This is all very well, but tests need to be reproducible. If a fuzzer-generated test case fails and I can’t recreate it to analyze the error and later verify that it is fixed, it isn’t of much use. To solve this issue, the input generation function receives some value, and sets the random seed with this parameter. Now, generating test cases is just a matter of generating a sequence of random values. Here is my code to do that:

class FuzzTestBase(object):
    __test__ = False
    def run_single_fuzz(self, random_seed):
        pass
    def fuzz_test(self):
        random.seed()
        random_seeds = [str(random.random()) for i in range(NUM_FUZZ_TESTS)]
        for seed in random_seeds:
            yield self.run_single_fuzz, seed

FuzzTestBase is a base-class for actual test classes. Each test class just needs to define its own version of run_single_fuzz, and in it call random.seed(random_seed) and log random_seed.

This code uses nose‘s ability to test generators: it assumes that a test generator yields test functions and their parameters.

A few interesting issues:
* I generate the random seeds beforehand, so that calling random.seed() in the actual test case doesn’t affect the seed sequence.
* Originally I used just random.random() as a seed instead of str(random.random()). The problem is that this isn’t reproducible: random.random() returns a floating point value x for which, usually, x != eval(str(x)):

In [10]: x = random.random()
In [11]: x == eval(str(x))
Out[11]: False

Even though x == eval(repr(x)) in that case, there’s still room for error. It’s much harder to go wrong with string equality than with floating point equality. So str(random.random()) is just a cheap way to generate random strings.
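
The string-seed trick is easy to demonstrate in isolation: seeding with the same string always reproduces the same sequence, and no float round-tripping is involved.

```python
import random

# A fixed string stands in here for str(random.random()); equal strings
# always reproduce equal sequences.
seed = str(0.123456789)
random.seed(seed)
first = [random.random() for _ in range(3)]
random.seed(seed)
second = [random.random() for _ in range(3)]
print(first == second)  # True
```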

If your testing mostly consists of test cases selected according to what you think is possible user behavior, I’d recommend adding some fuzzed inputs. I originally started the fuzz-testing described in this blog-post to better test for a specific bug. After adding the fuzz-testing, I found another bug I didn’t know was there. This just goes to show how useful fuzzing is as a testing tool. The fact that it’s so easy to implement is just a bonus.

Categories
Optimization Programming Python

Pyweb-il Presentation on Optimization Slides

Last Monday I gave a presentation in pywebil on optimization, that’s loosely based on my blog post on the same subject. Here are the slides for that presentation.