Optimising Mandelbrot

Let’s start by reproducing our mandel1 function and its output, data1, from the previous notebook.

xmin = -1.5
ymin = -1.0
xmax = 0.5
ymax = 1.0
resolution = 300
xstep = (xmax - xmin) / resolution
ystep = (ymax - ymin) / resolution
xs = [(xmin + xstep * i) for i in range(resolution)]
ys = [(ymin + ystep * i) for i in range(resolution)]
def mandel1(position, limit=50):
    value = position
    while abs(value) < 2:
        limit -= 1
        value = value ** 2 + position
        if limit < 0:
            return 0
    return limit
data1 = [[mandel1(complex(x, y)) for x in xs] for y in ys]
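
As a quick sanity check of the function’s behaviour (both values follow directly from the definition above): a point that never escapes within the iteration budget returns 0, while a point that starts outside the escape radius returns the full limit.

assert mandel1(complex(0, 0)) == 0   # never escapes, so the iteration budget is exhausted
assert mandel1(complex(2, 0)) == 50  # |2| >= 2 already, so the loop body never runs
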
from matplotlib import pyplot as plt

%matplotlib inline
plt.imshow(data1, interpolation="none", extent=[xmin, xmax, ymin, ymax])
[figure: the Mandelbrot set rendered from data1]

Many Mandelbrots

Let’s compare our naive Python implementation, which used a list comprehension and took around 500 ms, with the following version that builds up the grid by appending to lists:

def mandel_append():
    data = []
    for y in ys:
        row = []
        for x in xs:
            row.append(mandel1(complex(x, y)))
        data.append(row)
    return data
%%timeit
data2 = mandel_append()
686 ms ± 6.59 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
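
If you want to re-check the roughly 500 ms baseline quoted above on your own machine (timings vary with hardware and Python version), you can wrap the list comprehension in a function and time it in the same way:

def mandel_comprehension():
    return [[mandel1(complex(x, y)) for x in xs] for y in ys]
%%timeit
mandel_comprehension()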

Interestingly, not much difference. I would have expected this to be slower, because of the extra work of growing each list by repeated appends.
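
In fact, list.append is amortised constant time in CPython, so its overhead is small compared with the cost of each mandel1 call. A standalone sketch using only the standard library (exact numbers will vary) that compares appending to a growing list with filling a pre-sized one:

import timeit

# Append a million integers to a growing list.
grow = timeit.timeit("l = []\nfor i in range(10**6): l.append(i)", number=10)
# Write a million integers into a pre-sized list.
fill = timeit.timeit("l = [0] * 10**6\nfor i in range(10**6): l[i] = i", number=10)
print(grow, fill)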

data2 = mandel_append()
plt.imshow(data2, interpolation="none", extent=[xmin, xmax, ymin, ymax])
[figure: the Mandelbrot set rendered from data2]

We ought to check whether these results are the same by comparing the values in a test, rather than by re-plotting. This is cumbersome in pure Python, but easy with NumPy, so we’ll do it properly later.
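
In the meantime, here is a minimal sketch of such a check in pure Python; it works here because the values are plain integers, and it is exactly the kind of boilerplate loop that NumPy will let us replace later:

for row1, row2 in zip(data1, data2):
    for value1, value2 in zip(row1, row2):
        assert value1 == value2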

Let’s try a pre-allocated data structure:

data3 = [[0 for i in range(resolution)] for j in range(resolution)]
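
A side note on the preallocation itself: the superficially similar [[0] * resolution] * resolution would create resolution references to a single shared row, so writing to one row would appear to change them all. A tiny demonstration of that pitfall, separate from the timing comparison:

shared_rows = [[0] * 3] * 3
shared_rows[0][0] = 99
print(shared_rows)  # [[99, 0, 0], [99, 0, 0], [99, 0, 0]] -- every row is the same list
independent_rows = [[0] * 3 for _ in range(3)]
independent_rows[0][0] = 99
print(independent_rows)  # [[99, 0, 0], [0, 0, 0], [0, 0, 0]]
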
def mandel_preallocated(data_structure):
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            data_structure[j][i] = mandel1(complex(x, y))
%%timeit
mandel_preallocated(data3)
694 ms ± 4.21 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
mandel_preallocated(data3)
plt.imshow(data3, interpolation="none", extent=[xmin, xmax, ymin, ymax])
[figure: the Mandelbrot set rendered from data3]

Nope, no gain there.
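
One way to confirm why the choice of data structure barely matters is to profile a run: almost all of the time should show up inside mandel1 itself rather than in building the lists. A quick sketch using the standard-library cProfile module (the exact output will vary):

import cProfile

cProfile.run("mandel_append()", sort="cumulative")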

Let’s try using functional programming approaches:

def mandel_functional():
    data = []
    for y in ys:
        bind_mandel = lambda x: mandel1(complex(x, y))
        data.append(list(map(bind_mandel, xs)))
    return data
%%timeit
data4 = mandel_functional()
685 ms ± 5.97 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
data4 = mandel_functional()
plt.imshow(data4, interpolation="none", extent=[xmin, xmax, ymin, ymax])
[figure: the Mandelbrot set rendered from data4]

Essentially the same again; any difference is within the run-to-run noise.
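
For completeness, the same row binding can be done with functools.partial instead of a lambda; this sketch computes identical values and can be timed with %%timeit in the same way if you are curious:

from functools import partial

def mandel_bound(y, x):
    return mandel1(complex(x, y))

def mandel_partial():
    # Bind each row's y value up front, then map over xs as before.
    return [list(map(partial(mandel_bound, y), xs)) for y in ys]

assert mandel_partial() == data4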

So, what do we learn from this? Our intuition about which code should be faster or slower is often wrong, or the difference turns out to be negligible. The only way to really improve code performance is empirically, through measurement.