Categories
Programming

Wikipedia Images

A few days ago a friend (x) of a friend (y) showed me and my friend (y) a small app he was developing that pulled photos from Flickr and Picasa. We suggested adding photos from Wikipedia as well, but he (x) said the photos were too big, and resizing them was too much trouble.
Luckily for him I knew of Wikipedia’s “hidden” image resizing feature, and as it was useful to me and to someone else, I thought I’d share it here.

Let’s say you are looking to resize the following image of the Eiffel Tower: http://en.wikipedia.org/wiki/File:Tour_Eiffel_Wikimedia_Commons.jpg. Then the url to the image itself is:

http://upload.wikimedia.org/wikipedia/commons/a/a8/Tour_Eiffel_Wikimedia_Commons.jpg

To get the url to a resized image, just add ‘thumb/’ after ‘commons/’ and then add ‘/[%d]px-[filename]’ at the end, where %d is the new width in pixels. So for our image, the new url would be:

http://upload.wikimedia.org/wikipedia/commons/thumb/a/a8/Tour_Eiffel_Wikimedia_Commons.jpg/300px-Tour_Eiffel_Wikimedia_Commons.jpg
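
If you’re adding this to an app, the transformation is easy to script. Here’s a minimal sketch in Python (the helper name is mine, not any official API; it assumes the standard Commons url layout shown above):

def wikimedia_thumb_url(image_url, width):
    """Turn a full Commons image url into a resized-thumbnail url.

    Assumes urls of the form
    http://upload.wikimedia.org/wikipedia/commons/<x>/<xy>/<filename>
    """
    prefix, filename = image_url.rsplit('/', 1)
    prefix = prefix.replace('/commons/', '/commons/thumb/', 1)
    return '%s/%s/%dpx-%s' % (prefix, filename, width, filename)

print(wikimedia_thumb_url(
    'http://upload.wikimedia.org/wikipedia/commons/a/a8/Tour_Eiffel_Wikimedia_Commons.jpg',
    300))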

That’s it, simple and quick. Have fun adding Wikipedia to your content!

Categories
Python

Python Module Usage Stats – Feb 2011

Here are the top 30 “base modules”, ordered by the number of PyPI projects importing them. These results are based on 11,204 packages downloaded from PyPI. Explanations, full results and the code used to generate them are available below.

Results


[Chart: top 30 base modules, ordered by number of importing PyPI projects]

Full results are available (see Methodology to understand what they mean exactly).

Discussion

Some interesting tidbits and comparisons:

  • It seems django has gained “some popularity”. Zope is very high up on the list, and plone is at 42 with 907 projects importing it.
  • The number of projects importing unittest is somewhat depressing, especially relative to setuptools, which is impressive. That might be because setuptools is something of a prerequisite for appearing on PyPI (practically speaking), while unittest is not. (Edit: corrected by Michael Foord in a comment)
  • optparse with 1875 vs. getopt with 515.
  • cPickle with 690 vs. pickle with 598.
  • simplejson with 760 vs. json with 593.

I invite you all to find out more interesting pieces of information by going over the results. I bet there’s a lot more knowledge to be gained from this.

Background

Back in 2007 I wrote a small script that counted module imports in python code. I used it to generate statistics for Python modules. A week or two ago I had an idea to repeat that experiment – and see the difference between 2007 and 2011. I also thought of a small hypothesis to test: since django became very popular, I’d expect it to be very high up on the list.

I started working with my old code, and decided that I should update it. Looking for imports in Python code is not as simple as it seems. I considered using the tokenize and parser modules, but decided against both: using parser would make my code version-dependent, and by the time I thought of tokenize, I had the complicated part already worked out. By the complicated part I mean, of course, the big regexps I used ;)

Methodology

Input: PyPI and a source distribution of the Python 2.7 standard library. I wrote a small script (cheese_getter.py) to fetch python packages. It does this by reading the PyPI index page, and then using easy_install to fetch each package. Since there are currently a bit less than 13k packages on PyPI, this took some time.
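
The script itself isn’t reproduced here, but the gist of it might look something like this rough sketch (the index url and regex are illustrative; -zmaxd tells easy_install to download distributions into a directory instead of installing them):

import re
import subprocess
import urllib2

index = urllib2.urlopen('http://pypi.python.org/simple/').read()
packages = re.findall(r'<a href=[^>]*>([^<]+)</a>', index)
for package in packages:
    # -zmaxd: zip-ok, multi-version, always-copy, exclude-scripts,
    # download into the given directory rather than installing.
    subprocess.call(['easy_install', '-zmaxd', 'packages/', package])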

Parsing: I wrote a relatively simple piece of code to find “import x” and “from x import y” statements in code. This is trickier than it seems: statements such as “from x import a,b”, “from . import bla” and

from bla import \
               some_module, \
               some_module2

should all be supported. In order to achieve uniformity, I converted each import statement to a series of dotted modules. So for example, “import a.b” will yield “a” and “a.b”, and “from b import c,d” will yield “b”, “b.c”, and “b.d”.
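
I used regexps, but for comparison, here’s roughly how the same normalization could be done with the ast module (available since Python 2.6). This is not the code I used – among other differences, ast only handles files that actually parse:

import ast

def iter_imports(source):
    """Yield each dotted prefix of every imported name in source.

    Relative imports ("from . import bla") are skipped for brevity.
    """
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = ['%s.%s' % (node.module, alias.name)
                     for alias in node.names]
        else:
            continue
        seen = set()
        for name in names:
            parts = name.split('.')
            for i in range(1, len(parts) + 1):
                prefix = '.'.join(parts[:i])
                if prefix not in seen:
                    seen.add(prefix)
                    yield prefix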

Processing: I created three result types:

  1. total number of imports
  2. total number of packages importing the module
  3. total number of packages importing the module, only for the first module mentioned in a dotted module name, e.g. not “a.b”, only “a”.

I believe the third is the most informative, although there are interesting things to learn from the others as well.
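
Given an iter_imports-style helper (like the ast sketch above) and some way to iterate over (project, source) pairs – the names here are hypothetical – the counting itself is the easy part:

from collections import Counter  # Python 2.7+

total_imports = Counter()       # result type 1
importing_projects = {}         # result type 2
base_importing_projects = {}    # result type 3

for project, source in iter_project_sources():  # assumed helper
    for module in iter_imports(source):
        total_imports[module] += 1
        importing_projects.setdefault(module, set()).add(project)
        if '.' not in module:   # base module only, e.g. "a" but not "a.b"
            base_importing_projects.setdefault(module, set()).add(project)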

Code: Full code is available. Peer reviews and independent reports are welcome :)

Categories
Challenges Programming Python Statistics

Small programming challenge No. 6 – nblocks

I came up with this challenge when I had to write a function to divide a sequence into percentiles. I needed this to calculate some statistics over trips for plnnr.com. “This sounds trivial” I thought, and reached for my simple blocks function:

def blocks(seq, block_len):
    """blocks(range(5),2) -> [[0, 1], [2, 3], [4]]"""
    seq_len = len(seq)
    if seq_len%block_len == 0:
        num_blocks = seq_len/block_len
    else:
        num_blocks = 1 + (seq_len/block_len)
 
    result = [[] for i in xrange(num_blocks)]
    for idx, obj in enumerate(seq):
        result[idx/block_len].append(obj)
    return result

So I set block_len to len(seq)/10 and called blocks(seq, block_len). Unfortunately, according to the docs of blocks (which I wrote…), when there is extra data, a “partial” block is added – which is exactly what we don’t want when calculating percentiles.
Instead, the behavior we want is nblocks(seq, number_of_blocks), for example: nblocks(range(10), 3) -> [[0, 1, 2], [3, 4, 5], [6, 7, 8, 9]].

This function is surprisingly hard to write – or rather, harder than you’d expect. I’ll be especially glad if someone writes it elegantly. My solution works well enough, but isn’t the prettiest.

So, you have the definition – let’s see if you can do better than me. Once enough solutions are presented, I will present my own.

IMPORTANT EDIT: the required signature is nblocks(seq, num_blocks). So for nblocks(range(10), 3), the return value should be 3 blocks, with the last one having an extra item. As a general rule, the extra items should be spread as evenly as possible. To make the expected behavior concrete, there’s a reference sketch below.
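
Here’s one possible sketch – spreading the extras via the integer endpoints of an even partition. Treat it as a reference for the spec, not as my solution:

def nblocks(seq, num_blocks):
    """nblocks(range(10), 3) -> [[0, 1, 2], [3, 4, 5], [6, 7, 8, 9]]"""
    result = []
    start = 0
    for i in range(1, num_blocks + 1):
        # the i-th endpoint of an even partition of len(seq) items
        end = i * len(seq) // num_blocks
        result.append(seq[start:end])
        start = end
    return result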

Categories
Python Sound

One-liner Guitar Tuner in Python

[Image: guitar]

On Windows, assuming imports are free:

import winsound
winsound.Beep(int(220*((2**(1/12.0))**7)), 2000)

But that’s just because I like to tune to E. If you prefer a more “natural looking” note, you can use A:

winsound.Beep(110, 1000)

But why choose at all when you can go for all of them?

[winsound.Beep(int(220*((2**(1/12.0))**i)), 500) for i in [7, 2, -2, -7, -12, -17]]
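
If the magic intervals look opaque: a note i semitones away from A3 (220 Hz) has frequency 220·2^(i/12), and the offsets above map to standard tuning, high string to low. A quick sanity check:

# Semitone offsets from A3 (220 Hz), high E string to low E string.
for name, i in [('E4', 7), ('B3', 2), ('G3', -2),
                ('D3', -7), ('A2', -12), ('E2', -17)]:
    print('%s: %.2f Hz' % (name, 220 * 2 ** (i / 12.0)))
# E4: 329.63, B3: 246.94, G3: 196.00, D3: 146.83, A2: 110.00, E2: 82.41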

Image by Keela84

Categories
Fractals Graphics Math Programming Python

Visualizing Data Using the Hilbert Curve

Some time ago, a coworker asked me to help him visualize some data. He had a very long series (many millions) of data points, and he thought that plotting a pixel for each one would visualize it well, so he asked for my help.
I installed Python & PIL on his machine, and not too long after, he had the image plotted. The script looked something like:

from PIL import Image

data_points = get_data_points()
n = int(len(data_points) ** 0.5 + 0.5)

image = Image.new('1', (n, n))  # 1-bit pixels
for idx, value in enumerate(data_points):
    # putpixel takes the (x, y) position first, then the pixel value
    image.putpixel((idx % n, idx // n), value)
image.save('bla.png', 'png')

Easy enough to do. Well, easy enough if you have enough memory to handle very large data sets. Luckily enough, we had just enough memory for this script & data series, and we were happy. The image was generated, and everything worked fine.

Still, we wanted to improve on that. One problem with this visualization is that two horizontally adjacent pixels don’t have anything to do with each other. Remembering xkcd’s “Map of the Internet”, I decided to use the Hilbert Curve. I started with wikipedia’s version of the code for the Python turtle, and changed it to generate a stream of instructions for where to put pixels. Along the way I improved the time complexity by making only two recursive calls instead of four. (It can probably be taken down to one, by the way – I leave that as a challenge to the reader :)

Unfortunately, at this point we didn’t have enough memory to hold all of those instructions, so I changed it into a generator. Now it was too slow. I cached the lower levels of the recursion, and now it worked in reasonable time (about 3-5 minutes), with reasonable memory requirements (no OutOfMemory exceptions). Of course, I’m skipping a bit of exceptions and debugging along the way. Still, it was relatively straightforward.
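
My cached-generator code isn’t reproduced here, but for the core idea, here is the standard iterative distance-to-coordinates conversion (essentially Wikipedia’s d2xy algorithm) wrapped as a generator – a sketch, without the caching described above:

def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve of 4**order points to (x, y)."""
    x = y = 0
    s = 1
    side = 2 ** order
    while s < side:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:
            if rx == 1:             # rotate the quadrant
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x             # swap x and y
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

def hilbert_walk(order):
    """Yield successive (x, y) positions along the curve."""
    for d in xrange(4 ** order):    # xrange: don't materialize millions of points
        yield hilbert_d2xy(order, d)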

Writing the generator wasn’t enough – there were still pixels to draw! It took a few more minutes to write a simple “turtle” that walks the generated hilbert curve.
Now we were ready:

import math

order = int(math.log(len(data_points), 4) + 0.5)
hilbert = Hilbert(order)
for value in data_points:
    x, y = hilbert.next()
    # again, (x, y) position first, pixel value second
    image.putpixel((x, y), value)

A few minutes later, the image was generated.

Categories
Python Testing

Fuzz-Testing With Nose

A few days ago, I found a bug in my website, plnnr.com. The bug was in a new feature I added to the algorithm. The first thing I did was write a small unit-test to reproduce the bug. With that unit-test in hand, I worked to fix the bug, and got the unit-test to pass.

As I had previously presumed this feature to be (relatively :) bug-free, I decided that more testing was in order. This time, however, a single test-case would not be enough – I needed to make sure that the trip-generation algorithm works in many cases. Enter fuzzing.

Plnnr.com generates trips according to trip preferences. Why not generate the trip preferences with a fuzzer, and then check if the planning algorithm chokes on them? While fuzzing is usually used to generate invalid input with the goal of causing the program to crash, in this case I’m generating valid input with the goal of causing the planning algorithm to fail.

Usually fuzzing is done with one of two techniques: exhaustive fuzzing, which goes systematically (possibly selectively) over the input space, and random fuzzing, which picks inputs at random – or “somewhat” randomly. In my case, the input space consists of “world data” – locations of attractions, restaurants, etc. – and trip preferences – intensity, required attractions, and so on. Since the input space is so large and “unstructured”, I found it much easier to go with random fuzzing.

In each test-case, I will generate a “random world”, and random trip preferences for that world.
Here is some sample code that shows how this might look:

trip_prefs.num_days = random.randint(0, 5)
trip_prefs.intensity = random.randint(0, 5)
if randbit():
    trip_prefs.schedule_lunch = True

Where randbit is defined like so:

def randbit(prob=0.5):
    return random.random() < prob

This is all very well, but tests need to be reproducible. If a fuzzer-generated test case fails and I can’t recreate it to analyze the error and later verify that it is fixed, it isn’t of much use. To solve this issue, the input generation function receives some value, and sets the random seed with this parameter. Now, generating test cases is just a matter of generating a sequence of random values. Here is my code to do that:

import random

NUM_FUZZ_TESTS = 50  # illustrative value; pick how many fuzz cases to run

class FuzzTestBase(object):
    __test__ = False
    def run_single_fuzz(self, random_seed):
        pass
    def fuzz_test(self):
        random.seed()
        random_seeds = [str(random.random()) for i in range(NUM_FUZZ_TESTS)]
        for seed in random_seeds:
            yield self.run_single_fuzz, seed

FuzzTestBase is a base-class for actual test classes. Each test class just needs to define its own version of run_single_fuzz, and in it call random.seed(random_seed) and log random_seed.
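
For illustration, a concrete test class might look like this (the helpers are hypothetical stand-ins for my actual generation code):

class TripFuzzTest(FuzzTestBase):
    __test__ = True
    def run_single_fuzz(self, random_seed):
        # Log the seed first, so a failing case can be reproduced exactly.
        print('fuzz seed: %s' % random_seed)
        random.seed(random_seed)
        world = generate_random_world()        # hypothetical helper
        prefs = generate_random_prefs(world)   # hypothetical helper
        plan_trip(world, prefs)                # the code under test; should not raise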

This code uses nose’s ability to test generators: it assumes that a test generator yields test functions and their parameters.

A few interesting issues:
* I generate the random seeds beforehand, so that calling random.seed() in the actual test case doesn’t affect the seed sequence.
* Originally I used just random.random() as a seed, instead of str(random.random()). The problem is that this isn’t reproducible: random.random() returns a floating point value x for which, usually, x != eval(str(x)):

In [10]: x = random.random()
In [11]: x == eval(str(x))
Out[11]: False

Even though x == eval(repr(x)) for that case, there’s still room for error. Unlike floating point numbers, it’s harder to go wrong with string equality. So str(random.random()) is just a cheap way to generate random strings.

If your testing mostly consists of hand-picked test cases based on what you think users will do, I’d recommend adding some fuzzed inputs as well. I originally started the fuzz-testing described in this blog-post to better test for a specific bug. After adding the fuzz-testing, I found another bug I didn’t know was there. This just goes to show how useful fuzzing is as a testing tool. The fact that it’s so easy to implement is just a bonus.

Categories
Programming Security web-design

Open Redirects

In this post I’ll discuss an issue I tackled a short while ago – open redirects. But first, the story of how I got to it. Feel free to skip ahead to the technical discussion.

Background

Our analytics for plnnr.com – the website for trip planning – weren’t working as well as we wanted. We’re using Google Analytics, and it was hard to generate the specific report we wanted; when we did get it, it seemed to show inaccurate numbers. To partially alleviate the issue, I was asked to add tracking pixels for Facebook & AdWords, so we could better track conversions.
For us, an “internal” conversion is when a user clicks on a link to a booking url (for a hotel, or any other “bookable” attraction).
After reviewing the options, I decided that the best course of action would be to create an intermediate page that receives the url to redirect to as a parameter, loads the appropriate tracking pixels, and then redirects.

Description of problem

Let’s say we build the url for the page like so:

/redirect/?url=X

This page will load the appropriate tracking pixels, and then redirect to the given url (the X).
The problems are:
1. We are potentially exposing ourselves to cross-site scripting (XSS) if we don’t filter the redirect url correctly. A malicious website could create links to our page that run scripts in our context.

2. A malicious webmaster could steal search engine authority. Let’s say he has two domains: a.com and b.com, of which he cares about b.com. He creates links on a.com to:

ourdomain.com/redirect/?url=b.com

A search engine crawls his page, sees the links through our domain, and passes some of ourdomain.com’s authority on to b.com. Not nice.

3. A malicious website could create links to ourdomain.com that redirect to some malware site, thereby harming ourdomain.com’s reputation, or creating more convincing phishing links for ourdomain.com.

Possible solutions

Before we handle the open-redirect issues, it’s important to block cross-site scripting attacks. Since an attack might be possible by injecting code into the url string, this is doable by correctly filtering the urls and by using existing solutions for XSS.

As for the open redirect:

1. Non-solution: cookies. We can pass the url we want in a cookie. Since cookies may only be set by our domain, other websites would not be able to set the redirect url. This doesn’t work well if you want more than one redirect link, or with multiple pages open, etc.

2. Checking the referrer (“referer”), and allowing redirects only when it comes from our domain. This will break for all users whose browsers hide referrer information – for example, those using ZoneAlarm. Google suggests a permissive variant: if the referrer information is available, block when it’s external; that way, clients that hide it are not locked out.

3. Whitelisting redirect urls. This solution actually comes in two flavors – one is keeping a list of all allowed urls and checking urls against it; the other is keeping a list of allowed url parts, for example domains. While keeping track of all allowed urls may be impractical, keeping track of allowed domains is quite doable. The downside is that you have to update that piece of code each time you want to allow another domain.

4. Signing urls. Let the server keep a secret, and generate a (sha1) hash for each url of “url + secret”. In the redirect page, require the hash, and if it doesn’t match the desired hash, don’t redirect to that url. This solution is quite elegant, but it means that the client code (the javascript) can’t generate redirect urls. In my case this incurs a design change, a bandwidth cost, and a general complication of the design. (A sketch of this option appears after the list.)

5. Robots.txt. Use the robots.txt file to prevent search engines from indexing the redirect page, thereby mitigating at least risk number 2.

6. Generating a token for the entire session, much like CSRF protection. The session token is added to all links, and is later checked by the redirect page (on the server side). This is especially easy to implement if you already have an existing anti-csrf mechanism in place.

7. A combination of the above.
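
For concreteness, here is a minimal sketch of option 4 – using HMAC rather than a bare sha1 of “url + secret”, since HMAC avoids length-extension issues:

import hashlib
import hmac

SECRET = b'kept-on-the-server-only'  # illustrative; load this from config

def sign_url(url):
    """Return the signature the redirect page should require."""
    return hmac.new(SECRET, url.encode('utf-8'), hashlib.sha1).hexdigest()

def redirect_allowed(url, signature):
    # hmac.compare_digest (Python 2.7.7+ / 3.3+) avoids timing attacks.
    return hmac.compare_digest(sign_url(url), signature)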

Discussion and my thoughts

It seems to me that blocking real users is unacceptable. Therefore, filtering on referrer information alone is unacceptable if it means blocking users who send no referrer information.
At first I started to implement the url-signing mechanism, but then I saw the cost associated with it, and reassessed the risks. Given that cross-site scripting is solved, the biggest remaining risk is stolen search-engine authority. Right now I don’t consider the last risk (harming our reputation) important enough, but it will become more acute in the future.

Handling this in a robots.txt file is very easy, and that was the solution I chose. I will probably add more defense mechanisms in the future; when I do, permissive referrer filtering plus the existing anti-csrf code seems the easiest to implement. A whitelist of domains might also be acceptable down the road.

If you think I missed a possible risk, or a possible solution, or you have differing opinions regarding my assessments, I’ll be happy to hear about it.

My thanks go to Rafel, who discussed this issue with me.

Further reading

* http://www.owasp.org/index.php/Open_redirect
* http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=171297
* Open Redirects and Phishing Vectors

Categories
Optimization Programming Python

Pyweb-il Presentation on Optimization Slides

Last Monday I gave a presentation at pyweb-il on optimization, loosely based on my blog post on the same subject. Here are the slides from that presentation.

Categories
Javascript Optimization Programming Projects startup web-design

Javascript Element Creator

Some time ago I was working on optimizing the client side code of my website, plnnr.com, an online trip planner.
This website does automatic trip planning, and the problem was that recalculating trips was slow. After profiling, I found out that most of the time wasn’t actually taken up by the algorithm, but by the UI. Rendering the trip to html was the costly part. The process was like so:

Client-side Javascript code generates new trip prefs -> application calculates new trip -> Client-side Javascript gets the new trip, and creates new html.

It’s important to note that the app is “ajax based”, so the actual trip html was generated by the Javascript code, and not the server. At the time I was using Mochikit to generate the new html. Mochikit has a pretty nifty API for generating html, but it’s quite a bit on the slow side. Basically, this API is a wrapper around createElement.

Well, first I did a little test, and found out that generating html with cloneNode and innerHTML is much faster than createElement. Still, there was a problem – I needed to generate many similar elements – similar but not identical. Consider entries on a trip itinerary – they all look the same, yet each one has a different name, a different time string, and a different onclick event.

What I needed was a Javascript based html template library. My requirements:
1. Speed. Html had to be generated quickly.
2. Expressiveness. It had to be able to create pretty arbitrary html with a given context. For example, an anchor element (<a> tag) with a given href property, and a given text content.
3. References to inner elements: Many elements inside the generated html need various events attached to them, or various code processing. This should be easy to achieve.
4. The library has to allow the template html to be written as html, and not only as javascript strings.

So, I sat down with Benny, and we wrote the Javascript Element Creator, which we are now releasing under the BSD license. I originally wrote it to work with Mochikit and the Sizzle library, and Benny changed his version to work with jquery.

After adding the code to my project, I got two things: first, everything worked much, much faster. Second, it became much easier to change the generated html, now that it was defined in a template rather than directly in code.

Instructions

1. Write your html somewhere visible to the javascript code. Add the “template” class to the top node; its id will be the name of the template. For example:

<body>
    <div id="some_div" class="template">
    </div>
</body>

2. Similarly to other template engines, add double brackets to signify where text should be inserted:

<body>
    <div id="some_div" class="template">
        <a href="[[link_url]]">[[link_text]]</a>
    </div>
</body>

3. Create a creator object. It will “collect” your template, and will make it available to your code.

var creator = new ElementCreator();

4. Generate your DOM object, and add it to the document:

var obj = creator.generate("some_div",
                           {link_url: '/url/',
                            link_text: 'hello world'});
appendChildNodes(foo, obj);

The code

For now, we decided to publish only the jquery version; I might publish the mochikit version at a later date. Since Benny wrote the jquery version, he also wrote the tests for it.

All in all, the final code is pretty short, and could probably be even shorter. Still, it’s good enough, and gave me a very real performance boost.

Here is the code, have fun with it.

Categories
Programming Programming Philosophy

Ethics in Programming

Some time ago I was bothered by the issue of ethics in programming.
The question was raised best during a “game unconference” I attended: there was a panel about monetary systems for games, and people talked about the issues faced when adding money to an online game.
At one point someone from the audience said about ingame monetary systems (such as in WoW) “it’s like gambling and drugs!”, to which one panelist jokingly replied “so we have a proven business model”, and another said “except it’s legal”.

This was all in good spirit, but it got me thinking:

What are the programming jobs I will not take?