Solution for the Random Selection Challenge

A few days ago, I wrote up two small Python Challenges. Several people have presented solutions for the first challenge, and I also posted my solution in the comments there.

However, the second challenge remained unsolved, and I will present a solution for it in this post.

First, a quick reminder:

Given a continuous, solvable bell-shaped function p(x) in [a,b], and a uniform random selection function (such as random.random()), write a function that will yield a distribution proportional to p(x)

The requirement that p(x) be solvable is critical for the solution.
As an example of such a function, consider y = e^(-x**2) in [-1, 1]. It is bell shaped, and for a given y the solutions are [-sqrt(-log(y)), sqrt(-log(y))].

I’ll start with the code for the solution, as it might be clearer for the less mathematically inclined:

import random

def random_dist(dist_func, dist_solve_func):
    # Pick a random height y, find the interval where
    # p(x) >= y, then pick a uniform x inside it.
    y = random.random()
    a, b = dist_solve_func(y)
    x = a + random.random()*(b - a)
    return x
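To make this concrete, here is a self-contained sketch for the post's example function y = e^(-x**2) on [-1, 1]. The names p and solve_p mirror the random_select module used later in the post, but they are assumptions, since that module's code is not shown; the solver clamps to the domain because for any y below p(1) every x in [-1, 1] qualifies.

```python
import math
import random

def random_dist(dist_func, dist_solve_func):
    y = random.random()
    a, b = dist_solve_func(y)
    return a + random.random() * (b - a)

# p(x) = e^(-x**2), the example bell function from the post.
def p(x):
    return math.exp(-x ** 2)

# Solve p(x) = y, clamping to the domain [-1, 1]: for any y
# below p(1) = e^-1, every x in [-1, 1] satisfies p(x) >= y.
def solve_p(y):
    r = math.sqrt(-math.log(y))
    return (max(-r, -1.0), min(r, 1.0))

sample = random_dist(p, solve_p)  # a value in [-1, 1]
```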

Why does this work? (warning: math ahead)

We need to generate a probability distribution proportional to p(x).

(Note: the distribution will be exact (and not proportional) only if the integral of p(x) in [a,b] is 1. This is equivalent to requiring that the probabilities sum to 1 in the discrete case. We’ll call such a function p(x) a standard distribution function.)

To generate the distribution, we note that if we select a y in [0,1] uniformly at random, then for each x, the probability that p(x) >= y is exactly p(x). For example, if p(x) = 0.5, then given a random number 0 <= y <= 1, the condition p(x) >= y will hold exactly half the time.
(If p(x) is a non-standard function, then the probability is instead only proportional to p(x))
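This claim is easy to check empirically. The sketch below fixes p(x) = 0.5 and counts how often a uniform y falls below it (the variable names are illustrative, not from the post):

```python
import random

random.seed(42)  # fixed seed for a reproducible run
px = 0.5         # a fixed value of p(x)
trials = 100000

# Count how often a uniform y in [0, 1) satisfies y <= p(x).
hits = sum(1 for _ in range(trials) if random.random() <= px)
frequency = hits / trials  # should come out close to 0.5
```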

Now solving the problem is easy: we select a y at random, find all x such that p(x) >= y, and select one of these x uniformly at random; the resulting x is distributed proportionally to p(x). Selecting such an x becomes easy because of our requirements that p(x) be bell shaped and solvable: given x0, x1 such that y = p(x0) and y = p(x1), all x with p(x) >= y lie between these two solutions.
Hence, the second random() call chooses an x in the range [x0, x1].

In the following illustration of a bell function, two areas are marked. One is the result of selecting y=0.3, and the other of selecting y=0.2. The second area is indeed larger than the first, and contains the first area, as expected of a bell function. Thus, each x will be selected proportionally to p(x).

Illustration of a bell function

Empirical Proof of the solution:
Again, code speaks louder (but more tersely) than words.

import numpy
import pylab
import random_select  # the module holding random_dist, p and solve_p

#Choose a lot of random numbers
#distributed proportionally to p
result = [random_select.random_dist(random_select.p,
                                    random_select.solve_p)
                for i in range(10000)]
#divide the numbers into buckets:
#truncate to 2 digits after the point
d = [numpy.floor(x*100) for x in result]
h = {}

#count the fraction of numbers that ended up in each bucket
for x in d:
    h[x] = h.get(x, 0) + 1.0/len(result)

#plot the 'height' of each bucket
xvals = result
yvals = [h[numpy.floor(x*100)] for x in result]

#plot our original distribution, scaled to
#align with the graph (but not exactly).
m = max(yvals)
xvals2 = numpy.arange(-3, 3, 0.1)
yvals2 = [random_select.p(x)*m for x in xvals2]

pylab.plot(xvals, yvals, "r+")
pylab.plot(xvals2, yvals2, "b-")
pylab.show()

This code generates the following graph, which demonstrates that the solution works:
Proof of the random selection solution

A few thoughts
This problem may be seen as a standard computer science 'computation with an oracle' problem. Here, our oracle is random.random(), and we want to compute a function. In these kinds of cases, it is customary to ask:
1. What may be generally computed with this oracle?
2. Given a specific problem, what is the minimum number of oracle calls required to solve it?

The answer to the first question is the easier one: we can generalize this solution to any continuous, solvable distribution p(x) in [a,b], since the same reasoning applies – we just need to find all x such that p(x) >= y for a given y. If the solutions are [x0, x1, x2, x3] and p(a) < y, then we need to select an x from [x0, x1] U [x2, x3]. This is easily achieved in any number of ways. For example, select one of the ranges with the discrete selection technique (the first challenge), weighting each range by its length, and then select an x uniformly from the selected range.
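A minimal sketch of that generalization (the function name is mine, not the post's): pick one of the disjoint solution ranges with probability proportional to its length, then pick uniformly inside it.

```python
import random

def random_from_intervals(intervals):
    # intervals: disjoint (low, high) pairs, e.g. the solution
    # ranges [x0, x1] and [x2, x3] described above.
    lengths = [b - a for a, b in intervals]
    total = sum(lengths)
    # Discrete selection (the first challenge): choose an
    # interval with probability proportional to its length...
    r = random.random() * total
    for (a, b), length in zip(intervals, lengths):
        if r < length:
            # ...then choose a uniform x inside it.
            return a + random.random() * (b - a)
        r -= length
    return intervals[-1][1]  # guard against float round-off
```

With intervals [(0, 1), (2, 4)], about two thirds of the results land in [2, 4], since that interval is twice as long.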

The answer to the second question is a bit more complicated. The solution described above for bell-shaped curves solves the problem with just two calls to random(). To beat it, a solution with just one call is required. This seems daunting at first, but it is possible, even if complicated. What we want is a function that converts the uniform density of random()'s output into our distribution's density.
Consider the graph of a function as a path. If we select a random point on the path and look at its x-coordinate, the probability density of the resulting x's is proportional to the absolute slope of the function at that point. Since slope is the derivative, applying this process to the integral of our distribution function gives back the required distribution.
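This one-call construction is the classic inverse-transform method. As a minimal sketch under my own choice of example (the function name and distribution are not from the post): if F is the normalized integral of p, then F^-1(random()) has density p, using a single oracle call. For p(x) = 2x on [0, 1], F(x) = x**2 and F^-1(u) = sqrt(u):

```python
import math
import random

# One oracle call: for p(x) = 2x on [0, 1] the integral (CDF)
# is F(x) = x**2, so F_inv(u) = sqrt(u) converts the uniform
# output of random() into the target density.
def one_call_sample():
    return math.sqrt(random.random())
```

The mean of this distribution is 2/3 (the integral of x * 2x over [0, 1]), which a quick simulation confirms.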
