Evolutionary Principles applied to Problem Solving

March 4, 2011

This series of blog posts is a rough approximation of the lecture I gave at the AAG10 conference in Vienna on September 21st 2010. Naturally it will be a rather different experience as the medium is different, but it is my hope that the basic premise of the lecture remains intact. These posts deal with Evolutionary Solvers in general, but I use Rhino, Grasshopper and Galapagos to demonstrate the topics.

September 24th 2010, David Rutten

There is nothing particularly new about Evolutionary Solvers or Genetic Algorithms. The first references to this field of computation stem from the early 60s, when Lawrence J. Fogel published the landmark paper “On the Organization of Intellect” which sparked the first endeavours into evolutionary computing. The early 70s witnessed further forays, with seminal work produced by, among others, Ingo Rechenberg and John Henry Holland. Evolutionary Computation didn’t gain popularity beyond the programmer world until Richard Dawkins’ book “The Blind Watchmaker” in 1986, which came with a small program that generated a seemingly endless stream of body-plans called Biomorphs based on human (i.e. artificial) selection. Since the 80s, the advent of the personal computer has made it possible for individuals without government funding to apply evolutionary principles to personal projects, and these methods have since entered common parlance.

The term “Evolutionary Computing” may very well be widely known at this point in time, but these algorithms are still very much a programmer’s tool. ‘By programmers, for programmers’ if you will. The applications out there that apply evolutionary logic are either aimed at solving specific problems, or they are generic libraries that allow other programmers to piggyback along. It is my hope that Galapagos will provide a generic platform for the application of Evolutionary Algorithms to be used on a wide variety of problems by non-programmers.


Pros and Cons

Before we dive into the subject matter too deeply, though, I feel it is important to highlight some of the (dis)advantages of this particular type of solver, just so you know what to expect. Since we are not living in the best of all possible worlds, there is often no such thing as the perfect solution. Every approach has drawbacks and limitations. In the case of Evolutionary Algorithms these are luckily well known and easily understood drawbacks, even though they are not trivial. Indeed, they may well prove prohibitive for many a problem.

Firstly, Evolutionary Algorithms are slow. Dead slow. It is not unheard of for a single process to run for days or even weeks. Especially complicated set-ups that require a long time to solve a single iteration will quickly get out of hand. A light/shadow or acoustic computation, for example, may easily take a minute per iteration. If we assume we’ll need at least 50 generations of 50 individuals each (which is almost certainly an underestimate unless the problem has a very obvious solution), we’re already looking at a two-day runtime.
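The back-of-the-envelope arithmetic above is worth spelling out. A minimal sketch, using the assumed figures from the text (one minute per iteration, 50 generations of 50 individuals):

```python
# Rough runtime estimate for an evolutionary run.
# All figures are the assumptions from the text, not measurements.
seconds_per_iteration = 60       # e.g. one light/shadow or acoustic evaluation
generations = 50                 # almost certainly an underestimate
individuals_per_generation = 50

total_seconds = seconds_per_iteration * generations * individuals_per_generation
total_days = total_seconds / (60 * 60 * 24)
print(f"{total_days:.1f} days")  # roughly 1.7 days of continuous computation
```

And that is before accounting for the overhead of the solver itself, or for runs that need far more than 50 generations.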

Secondly, Evolutionary Algorithms do not guarantee a solution. Unless a predefined ‘good-enough’ value is specified, the process will tend to run on indefinitely, never reaching The Answer, or, having reached it, not recognizing it for what it is.

All is not bleak and dismal, however; Evolutionary Algorithms have strong benefits as well, some of them rather unique amongst the plethora of computational methods. They are remarkably flexible, for example, able to tackle a wide variety of problems. There are classes of problems which are by definition beyond the reach of even the best solver implementation, and other classes that are very difficult to solve, but these are typically rare in the province of the human meso-world. By and large, the problems we encounter on a daily basis fall into the ‘evolutionary solvable’ category.

Evolutionary Algorithms are also quite forgiving. They will happily chew on problems that have been under- or over-constrained or otherwise poorly formulated. Furthermore, because the run-time process is progressive, intermediate answers can be harvested at practically any time. Unlike many dedicated algorithms, Evolutionary Solvers spew forth a never-ending stream of answers, where newer answers are generally of a higher quality than older answers. So even a prematurely aborted run will yield something which could be called a result. It might not be a very good result, but it will be a result nonetheless.

Finally, Evolutionary Solvers allow, in principle, for a high degree of interaction with the user. This too is a fairly unique feature, especially given the broad range of possible applications. The run-time process is highly transparent and browsable, and there is a lot of opportunity for a dialogue between algorithm and human. The solver can be coached across barriers with the aid of human intelligence, or it can be goaded into exploring sub-optimal branches and superficial dead-ends.

The Process

In this section I shall briefly outline the process of an Evolutionary Solver run. It is a highly simplified version of the remainder of this blog post series, and I’ll skip over many interesting and even important details. I’ll show the process as a series of image frames, where each frame shows the state of the ‘population’ at a given moment in time. Before I can start, however, I need to explain what the image below means.

What you see here is the Fitness Landscape of a particular model. The model contains two variables, meaning two values which are allowed to change. In Evolutionary Computing we refer to variables as genes. As we change Gene A, the state of the model changes and it either becomes better or worse (depending on what we’re looking for). So as Gene A changes, the fitness of the entire model goes up or down. But for every value of A, we can also vary Gene B, resulting in better or worse combinations of A and B. Every combination of A and B results in a particular fitness, and this fitness is expressed as the height of the Fitness Landscape. It is the job of the solver to find the highest peak in this landscape.
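A fitness landscape like the one described can be sketched as a function of the two genes. The three-peak shape below is entirely made up for illustration; a real model would derive fitness from its own geometry or analysis:

```python
import math

# A hypothetical two-gene fitness landscape. Genes A and B each range over
# [0, 1], and the "height" at (A, B) is the fitness. The peak positions and
# amplitudes are invented for this sketch.
def fitness(a, b):
    peaks = [(0.2, 0.3, 1.0),   # (A-position, B-position, peak height)
             (0.7, 0.8, 0.8),
             (0.8, 0.2, 0.6)]
    height = 0.0
    for (pa, pb, amp) in peaks:
        height += amp * math.exp(-((a - pa) ** 2 + (b - pb) ** 2) / 0.02)
    return height

genome = {"A": 0.2, "B": 0.5}
print(fitness(genome["A"], genome["B"]))
```

The solver never sees this function as a formula; it only ever samples the height at specific (A, B) combinations, which is exactly why finding the highest peak is hard.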

Of course a lot of problems are defined by not just two but many genes, in which case we can no longer speak of a ‘landscape’ in the traditional sense. A model with 12 genes would be a 12-dimensional fitness volume deformed in 13 dimensions instead of a two-dimensional fitness plane deformed in 3 dimensions. As this is impossible to visualize I shall only use one and two-dimensional models, but note that when we speak of a “landscape”, it might mean something terribly more complex than the above image shows.

As the solver starts it has no idea about the actual shape of the fitness landscape. Indeed, if we knew the shape we wouldn’t need to bother with all this messy evolutionary stuff in the first place. So the initial step of the solver is to populate the landscape (or “model-space”) with a random collection of individuals (or “genomes”). A genome is nothing more than a specific value for each and every gene. In the above case, a genome could for example be {A=0.2 B=0.5}. The solver will then evaluate the fitness for each and every one of these random genomes, giving us the following distribution:
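Populating the model-space with random genomes is the simplest step of all. A minimal sketch, assuming genes normalized to the range [0, 1]:

```python
import random

random.seed(1)  # seeded only so this illustration is reproducible

# Generation Zero: a genome is nothing more than a specific value for each
# and every gene, so a random genome is just a random value per gene.
GENES = ["A", "B"]

def random_genome():
    return {gene: random.random() for gene in GENES}

population = [random_genome() for _ in range(50)]
print(population[0])  # e.g. something like {'A': 0.13, 'B': 0.85}
```

Each of these 50 dicts corresponds to one red dot scattered across the landscape.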

Once we know how fit every genome is (i.e., the elevation of the red dots), we can make a hierarchy from fittest to lamest. We are looking for high-ground in the landscape and it is a reasonable assumption that the higher genomes are closer to potential high-ground than the low ones. Therefore we can kill off the worst performing ones and focus on the remainder:
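Building that hierarchy and culling the stragglers is a sort followed by a slice. A sketch, with a stand-in fitness function (in Galapagos the score would of course come from the model itself):

```python
# Rank a population from fittest to lamest and discard the worst half.
def fitness(genome):
    return genome["A"] * genome["B"]  # hypothetical stand-in: higher is better

population = [{"A": 0.9, "B": 0.8}, {"A": 0.1, "B": 0.2},
              {"A": 0.5, "B": 0.6}, {"A": 0.3, "B": 0.1}]

ranked = sorted(population, key=fitness, reverse=True)   # fittest first
survivors = ranked[:len(ranked) // 2]                    # kill off the worst half
print([round(fitness(g), 2) for g in survivors])         # → [0.72, 0.3]
```

The fraction culled (here one half) is a tunable choice; the principle is only that the higher-scoring genomes get to carry on.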

It is not good enough to simply pick the best performing genome from the initial population and call it quits. Since all the genomes in Generation Zero were picked at random, it is actually quite unlikely that any of them will have hit the jackpot. What we need to do is breed the best performing genomes in Generation Zero to create Generation One. When we breed two genomes, their offspring will end up somewhere in the intermediate model-space, thus exploring fresh ground:
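One simple way to land offspring in the intermediate model-space is to blend each gene between the parents. This is just one possible coalescence scheme, sketched here for illustration:

```python
import random

random.seed(2)  # seeded only for reproducibility of this illustration

# Breed two genomes: each gene of the child is a random blend of the
# corresponding parental genes, so the child always falls somewhere
# between its parents in model-space.
def breed(mum, dad):
    child = {}
    for gene in mum:
        t = random.random()                          # blend factor for this gene
        child[gene] = mum[gene] * t + dad[gene] * (1 - t)
    return child

mum = {"A": 0.2, "B": 0.5}
dad = {"A": 0.8, "B": 0.1}
child = breed(mum, dad)
print(child)  # each gene lies between the parents' values
```

Real solvers offer several such coalescence strategies; blending is merely the easiest to picture.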

We now have a new population, which is no longer completely random and which is already starting to cluster around the three fitness ‘peaks’. All we have to do is repeat the above steps (kill off the worst performing genomes, breed the best-performing genomes) until we have reached the highest peak:
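The whole cycle described above, evaluate, cull, breed, repeat, can be strung together in a few lines. A minimal end-to-end sketch, with a hypothetical single-peak landscape and a deliberately naive breeding rule:

```python
import math
import random

random.seed(3)  # seeded only for reproducibility of this illustration

# Hypothetical fitness landscape with its peak at A=0.7, B=0.4.
def fitness(g):
    return math.exp(-((g["A"] - 0.7) ** 2 + (g["B"] - 0.4) ** 2) / 0.05)

# Naive breeding: average the parents, then nudge each gene a little so
# offspring do not collapse onto a single point.
def breed(mum, dad):
    return {k: (mum[k] + dad[k]) / 2 + random.uniform(-0.05, 0.05) for k in mum}

# Generation Zero: random genomes scattered across the landscape.
population = [{"A": random.random(), "B": random.random()} for _ in range(50)]

for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]                      # kill the worst half
    children = [breed(random.choice(survivors), random.choice(survivors))
                for _ in range(25)]                  # breed the best half
    population = survivors + children

best = max(population, key=fitness)
print(best, fitness(best))  # the population has clustered near the peak
```

Because the survivors are carried over unchanged, the best fitness never decreases from one generation to the next; a real solver adds mutation and smarter selection on top of this skeleton.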

In order to perform this process, an Evolutionary Solver requires five interlocking parts, which I’ll discuss in something resembling detail in upcoming blog posts. We could call this the anatomy of the Solver.

  1. Fitness Function
  2. Selection Mechanism
  3. Coupling Algorithm
  4. Coalescence Algorithm
  5. Mutation Factory
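Of these five parts, the Mutation Factory is the only one the image frames above do not show. A sketch of the idea, with an invented mutation rate and nudge size (not the values Galapagos uses):

```python
import random

random.seed(4)  # seeded only for reproducibility of this illustration

# Mutation: occasionally nudge a gene at random, so the population keeps
# exploring ground that breeding between existing genomes could never reach.
# The 10% rate and 0.25 nudge are illustrative choices.
def mutate(genome, rate=0.1, amount=0.25):
    mutant = dict(genome)
    for gene in mutant:
        if random.random() < rate:
            nudged = mutant[gene] + random.uniform(-amount, amount)
            mutant[gene] = min(1.0, max(0.0, nudged))  # clamp to the gene's range
    return mutant

result = mutate({"A": 0.2, "B": 0.5})
print(result)  # usually unchanged; occasionally a gene has been nudged
```

Without mutation the gene pool can only ever shrink towards combinations of the initial random values, which is precisely why every serious solver includes some version of this part.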


Galapagos is still a very young product and hasn’t really had time to position itself firmly in any workflow, assuming it ever will. It seems capable of solving relatively small problems quite quickly, but it certainly needs a lot of work to become more robust and usable. The most effective applications for a solver of this type and capability are likely small or partial problems. Trying to evolve anything complicated will almost certainly result in frustration.

3 Responses to “Evolutionary Principles applied to Problem Solving”

  1. […] hope that alternative perspectives will help reveal new potentials. Does analyzing a problem using evolutionary principles lead to different solutions than the typical best of […]

  2. Bill Says:

    I wrote my dissertation on the genetic optimisation of steel framed structures 6 years ago and was shown the potential of these algorithms. Thanks for providing the algorithm and the opportunity to begin using them again

  3. […] Optimisation Algorithm– the form-generating rule(s) that converts and assigns the parameters into geometry and material specifications, and connects the parent software to an environment analysis tool. This process is set on loop by varying the parameters to automate exhaustive exploration. When the looping is concluded, the results are sorted according to their performance indices (or fitness) to filter out the best performing options. For more on optimisation, click here. […]
