
What’s the point? [5]

March 9, 2013

Welcome to the —for now— last post in this series on algorithmic optimization. We talked a bit about optimization in general, then we introduced a specific problem and explored different ways of speeding up the search for its solution. We talked about brute-force searching and why it’s not an option when we’re dealing with large point collections. We discussed one-dimensional sorting and spatial bucketing and the merits and drawbacks of each approach. It is now time to discuss the ultimate closest-point-search algorithm, the weapon of choice for the informed and discriminating programmer: spatial trees.

But let us start by briefly rehashing the logic of the square-grid-search algorithm and why it’s not a particularly good one. By creating a grid of adjacent cells and assigning each point p in a collection C to whatever cell contains it, we gain the ability to explore C in a localized fashion. Although we may not know exactly which points are direct neighbours, we do know which cells are direct neighbours, and therefore all the points in those cells can be treated as being in the same neighbourhood. This works splendidly as long as individual cells do not contain too many points and there aren’t too many completely empty cells. For balanced point collections we just need to pick a clever grid size, but for unbalanced collections we are well and truly screwed:

Large cells vs. small cells in unbalanced point collections.

If we pick a large grid cell size then too many points end up in the same bucket and we’ve converted a smart search algorithm into a brute-force algorithm. If we pick a small grid cell size then there will be loads of empty cells and iterating over them will retard the search process.

If at this point you’re thinking “If brute-force search is a prohibiting factor in the case of grid cells with many points, why not put a smaller grid inside those cells?” you can feel real good about yourself. In fact, once we establish that it’s allowed to put a smaller grid inside the cell of a bigger grid, we can simplify the whole grid notion and only support grids that are composed of 2×2 cells. To create a nested grid structure like this, we only need a very short algorithm:

  1. Draw a rectangle around all points in C.
  2. If the number of points in this rectangle exceeds N, then subdivide the rectangle into four smaller rectangles.
  3. Assign each of the points in the parent rectangle to the appropriate child rectangle, and then repeat steps 2 & 3 for each child rectangle.
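
Just to make the recursion concrete, here is roughly what those three steps could look like in code. This is only a minimal Python sketch, not taken from the post itself or from any particular library: the QuadTreeNode class, the CAPACITY constant (the N from step 2) and the depth guard are all illustrative choices.

```python
CAPACITY = 4    # assumed maximum number of points per leaf cell (the N from step 2)
MAX_DEPTH = 16  # guard so a pile of coincident points cannot trigger endless subdivision

class QuadTreeNode:
    def __init__(self, x0, y0, x1, y1, depth=0):
        self.x0, self.y0, self.x1, self.y1 = x0, y0, x1, y1  # cell rectangle
        self.depth = depth
        self.points = []       # points stored in this cell (leaves only)
        self.children = None   # four child cells once subdivided

    def insert(self, p):
        if self.children is not None:          # not a leaf: hand the point down
            self._child_for(p).insert(p)
            return
        self.points.append(p)
        if len(self.points) > CAPACITY and self.depth < MAX_DEPTH:
            self._subdivide()                  # step 2: too many points, split into quadrants

    def _subdivide(self):
        xm, ym = 0.5 * (self.x0 + self.x1), 0.5 * (self.y0 + self.y1)
        d = self.depth + 1
        self.children = [QuadTreeNode(self.x0, self.y0, xm, ym, d),
                         QuadTreeNode(xm, self.y0, self.x1, ym, d),
                         QuadTreeNode(self.x0, ym, xm, self.y1, d),
                         QuadTreeNode(xm, ym, self.x1, self.y1, d)]
        for q in self.points:                  # step 3: re-assign points to the children
            self._child_for(q).insert(q)
        self.points = []

    def _child_for(self, p):
        xm, ym = 0.5 * (self.x0 + self.x1), 0.5 * (self.y0 + self.y1)
        return self.children[(1 if p[0] > xm else 0) + (2 if p[1] > ym else 0)]

def build_tree(points):
    # Step 1: draw a rectangle around all points in C.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    root = QuadTreeNode(min(xs), min(ys), max(xs), max(ys))
    for p in points:                           # steps 2 and 3 happen recursively inside insert()
        root.insert(p)
    return root
```

A real implementation would also worry about degenerate bounding rectangles and points landing exactly on a cell edge, but the sketch captures the idea.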

The above algorithm is recursive and the grid structure it generates is no longer one where all cells have the same size. If points are more densely packed in one region of space, the grid-structure will subdivide more often and it will result in smaller cells. If points are completely absent from another region of space the grid-structure will not subdivide at all and we won’t be plagued by a plethora of empty grid cells. If we apply this logic to an unbalanced point collection, we should get something like this:

The quadtree structure resulting from recursive subdivision of an unbalanced point collection.

Level 0 represents the top-most level of the grid. The number of points inside the outer boundary is too high and the region is split into four quadrants. The lower right quadrant still has too many points, so it too is split into four quadrants. Then the process repeats one more time, giving us a third level of subdivision. We end up with a grid of ten cells of varying size which together contain the entire point collection. In fact two of the cells contain no points and can be removed from the grid structure in order to reduce memory overhead. The entire structure also contains three cells which host nested grids.

A spatial data structure like this is typically referred to as a tree in programming because the recursive and fractal nature of the algorithm resembles the branching patterns of biological trees. There are many different flavours of spatial trees, some more suited to specific problems, some easier to implement, some optimized for point insertion or deletion. What I’ve drawn above is usually called a Quadtree (or Q-tree) because it always subdivides into quadrants. If we extend this notion to 3D space we have to subdivide a single box volume into eight sub-volumes instead, and we call the resulting structure an Octree.

So how does searching in a Quadtree work? Remember that in order to perform a fast closest point search, we need to reject as many points as possible, limiting the search to only those points which are serious candidates. In some ways searching a Quadtree is easier than searching a fixed grid, one of the reasons being that we don’t actually have any empty cells. The search algorithm might work roughly like this:

  1. Build a tree T from the point collection C.
  2. Find the cell L in T which is closest to the search point h.
  3. Find the point p in L which is closest to h using —once again— brute-force search.
  4. Start recursing to sibling and parent cells to see if they contain points closer still.
  5. Every cell that has a minimum distance to h which is already larger than the smallest distance we found so far can instantly be skipped, as can all child-cells within it.

Step 5 is where the real magic happens. Spatial trees make it very easy to weed out enormous portions of C that are not near h. Adding points to an existing tree structure is also very easy, even if they fall outside the outer bounds of the level 0 cell. All we need to do in those cases is wrap level 0 in a new, larger root cell (the old level 0 then by definition becomes level 1) in order to grow the reach of the tree. Every time we add such an enclosing level we double the reach, so even inserting points which are really far away will not take intolerably long (unlike fixed grids, which grow in a linear rather than exponential fashion).
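
To show what the pruning in step 5 looks like in code, here is another illustrative Python sketch, reusing the hypothetical QuadTreeNode from the construction sketch above. Rather than walking from the closest leaf up through siblings and parents as steps 2–4 describe, it applies the same rejection test top-down, visiting the most promising child cell first so that a small best distance is established early and most other cells can be skipped outright.

```python
import math

def min_dist_to_cell(node, h):
    # Smallest possible distance from the search point h to the cell rectangle;
    # if this already exceeds the best distance found so far, the whole cell
    # (and every child cell inside it) can be skipped — this is step 5.
    dx = max(node.x0 - h[0], 0.0, h[0] - node.x1)
    dy = max(node.y0 - h[1], 0.0, h[1] - node.y1)
    return math.hypot(dx, dy)

def closest_point(node, h, best=None, best_dist=float("inf")):
    # Prune: nothing inside this cell can beat the current best candidate.
    if min_dist_to_cell(node, h) >= best_dist:
        return best, best_dist

    if node.children is None:
        # Leaf cell: brute-force over the handful of points it contains (step 3).
        for p in node.points:
            d = math.hypot(p[0] - h[0], p[1] - h[1])
            if d < best_dist:
                best, best_dist = p, d
        return best, best_dist

    # Visit the child nearest to h first, so a tight best_dist is found early
    # and the remaining children can often be rejected by the test above.
    for child in sorted(node.children, key=lambda c: min_dist_to_cell(c, h)):
        best, best_dist = closest_point(child, h, best, best_dist)
    return best, best_dist

# Usage: tree = build_tree(C); nearest, distance = closest_point(tree, (x, y))
```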

As I mentioned in post #1, optimization is often a difficult process, and optimized code often gains so much complexity as to become unreadable —at least without significant effort— to third parties. Only start optimizing code once you’ve determined beyond a shadow of a doubt where the bottleneck in your program is.

5 Responses to “What’s the point? [5]”

  1. heumanndesigntech Says:

    Thanks for this fantastic series! Learned a lot; you are a gifted explainer of things.

  2. svendersar Says:

    Great stuff David, well explained. After reading the series I decided to make an octree implementation for MATLAB (http://www.mathworks.com/matlabcentral/fileexchange/40732-octree-partitioning-3d-points-into-spatial-subvolumes).

    MATLAB is a bit weird in that it’s highly optimised for processing matrices (or arrays) of points at a time, so in some instances (as long as memory permits) it’s actually faster to query all points in one command rather than, say, process half of the points but do so in 10 iterative chunks.

    Anyway, such space decomposition can greatly speed things up, so thanks for the clear descriptions of the how and why.


  3. Amazing series and amazing explanation. It would be awesome to compile it into a PDF to read offline.

    Proposal: how about a workshop (this summer?) at McNeel Europe (BCN = amazing weather + Spanish wines + Mediterranean coast :P) to talk about this stuff related to code/optimization/geometry and Grasshopper/Rhino plugin development? If it is not very expensive I’m sure all the tickets would sell :P

    • David Rutten Says:

      I’ve been toying with the idea of a developer only Grasshopper conference (maybe include teachers if there’s room). However my impending move out of Slovakia to (probably) Austria has totally sidelined this idea. If it’s a general Rhino Plug-in Developer meet-up then that would also be very interesting. I’m all for it.

