I came upon a very interesting and cryptic snippet of code somewhere nameless, and I can’t decide if it is brilliant or completely insane. It is a very obscure way of accomplishing the required task, but it’s around four times faster than the alternatives I’ve tried, so I have to admit that it’s not completely without merit. Still, I cringe a bit at seeing it, since it packs around four unusual Python concepts in almost as many characters.
This is the snippet in question:
def GetContourPoints(self, array):
    """Parses an array of xyz points and returns a array of point dictionaries."""
    return zip(*[iter(array)]*3)
As the docstring says, this function parses an iterable of (x, y, z) points and returns an array of point dictionaries. Only, it doesn’t really. It takes an iterable of points like so:
(x1, y1, z1, x2, y2, z2, ...)
and returns an iterable of 3-tuples of grouped points:
((x1, y1, z1), (x2, y2, z2), ...)
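Here is the trick in action with concrete numbers (a Python 3 sketch; in Python 3, zip returns a lazy iterator, so a list() call is needed to materialize the result, whereas the original Python 2 code gets a list directly):

```python
flat = [1, 2, 3, 4, 5, 6, 7, 8, 9]

# Group the flat sequence into consecutive 3-tuples.
points = list(zip(*[iter(flat)] * 3))
print(points)  # [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
```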
So how does it do this? Let’s analyze it. I just started from the outermost part and progressed inwards, like a worm burrowing in a delicious chocolate cake, only less disturbing. Since the function returns an iterable of 3-tuples,
zip must accept three iterables in its argument list, like so:
zip((x1, x2, x3, ...), (y1, y2, y3, ...), (z1, z2, z3, ...))

# So that must be what this is:
>>> [iter(array)] * 3
(x1, x2, x3, ...), (y1, y2, y3, ...), (z1, z2, z3, ...)
The asterisk in the function call unpacks the iterable (the list, in this case), so we’re pretty much at the meat of this curious function. However,
array, our input, is just an iterable of
(x, y, z) points, so how can it be transformed to three iterables of one coordinate each?
Well, the magic is here:
[iter(array)] * 3
What does this produce? One’s first thought would be that it produces a list of three iterators, which, when evaluated, would return something like:
(x1, y1, z1, x2, ...), (x1, y1, z1, x2, ...), (x1, y1, z1, x2, ...)
i.e. the original sequence three times, which is nothing like what we need. The keen eye, however, will notice that this is not three iterators, but it is the same iterator, three times:
>>> print repr([iter(array)] * 3)
[<listiterator object at 0x7fd2db258f90>, <listiterator object at 0x7fd2db258f90>, <listiterator object at 0x7fd2db258f90>]
As you can see, all the iterators have the same address, which means they are the same object. Thus, when
zip pulls one element from each of its arguments in turn, it is actually advancing the same iterator every time, so each element of the original sequence gets consumed exactly once, in order, and what actually gets returned is what we needed (an iterable of 3-tuples).
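The difference between one shared iterator and three independent ones is easy to see in a small Python 3 sketch (my own illustration, not from the original post):

```python
seq = [1, 2, 3, 4, 5, 6]

# Three references to the SAME iterator: zip advances it on every pull.
shared = [iter(seq)] * 3
print(list(zip(*shared)))  # [(1, 2, 3), (4, 5, 6)]

# Three INDEPENDENT iterators: each one yields the whole sequence.
independent = [iter(seq) for _ in range(3)]
print(list(zip(*independent)))  # [(1, 1, 1), (2, 2, 2), ..., (6, 6, 6)]
```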
This is a fantastic abuse of the… well… everything, and I am very impressed that someone managed to come up with this. I don’t really like the fact that it relies on an implementation detail (the order in which the
zip function iterates over the arrays) to work, but it does, and it’s much faster than the usual alternatives I tried, so I think I like it. I definitely would have expected a comment on it, though, rather than leaving people to read the bones in an attempt to divine what it does.
Here are some timings, which my friend Marc graciously provided:
In : %timeit [(arr[3*x], arr[3*x+1], arr[3*x+2]) for x in range(len(arr)/3)]
10000 loops, best of 3: 42.8 us per loop

In : %timeit numpy.reshape(arr, (-1, 3))
10000 loops, best of 3: 59.2 us per loop

In : %timeit zip(*([iter(arr)]*3))
100000 loops, best of 3: 11.2 us per loop
As you can see, this way of doing things is four to five times faster than the alternatives, for reasons unknown and unknowable (although I suspect that what takes numpy that long to do it is transferring the data in and out of it).
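For what it’s worth, here is a quick sanity check (my own Python 3 sketch, separate from the timings above) confirming that the fast version and the index-arithmetic version agree:

```python
arr = list(range(30))

# Index-arithmetic version (// for integer division in Python 3).
by_index = [(arr[3*x], arr[3*x+1], arr[3*x+2]) for x in range(len(arr) // 3)]

# The shared-iterator trick; list() is needed in Python 3.
by_zip = list(zip(*[iter(arr)] * 3))

assert by_index == by_zip
print(by_zip[:2])  # [(0, 1, 2), (3, 4, 5)]
```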
If you have any such tricks of your own, I’d appreciate if you could post them in the comments, as I’m always interested in reading and thinking about them. Thanks!
UPDATE: In JuanManuel Gimeno Illa’s comment below, he mentions that this is actually the way the Python documentation on zip() officially sanctions for clustering an iterable into n-length groups. That’s very interesting, and clarifies this whole method.
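For general n, the same shared-iterator trick is the basis of the "grouper" recipe in the itertools documentation (written here for Python 3, where Python 2's izip_longest became zip_longest):

```python
from itertools import zip_longest

def grouper(iterable, n, fillvalue=None):
    """Collect data into fixed-length chunks, padding the last one."""
    args = [iter(iterable)] * n  # n references to one shared iterator
    return zip_longest(*args, fillvalue=fillvalue)

print(list(grouper("ABCDEFG", 3, "x")))
# [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]
```

Unlike the bare zip version, which silently drops a trailing partial group, zip_longest pads the last chunk with the fill value.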
UPDATE 2: There is excellent discussion in this Hacker News thread.