Genetic algorithms are a fantastic way to quickly find a *near*-optimal solution to a given problem. Rather than expand on the excellent descriptive writeups above, I thought I would give a simple example of a genetic algorithm at work.

Let's say we have a pool of random letters and "space" characters, which we can string together into sequences of length 44. We are attempting to assemble the following sentence out of this primordial soup of letters and spaces:

the quick brown dog jumped over the lazy fox

All we know about this sentence is that it is 44 characters long, is composed of the letters a-z and spaces, and that we have a function called fitness(string), which accepts a test string and returns a fitness value; in other words, it tells us how many of the characters are correct.
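A minimal sketch of such a fitness function, in Python (the name fitness(string) comes from the description above; hard-coding the target inside it is just for this toy example, since the algorithm itself never peeks at the answer):

```python
TARGET = "the quick brown dog jumped over the lazy fox"

def fitness(candidate: str) -> int:
    """Return the number of positions where the candidate matches the target.

    The genetic algorithm treats this as a black box: it never looks inside,
    it only compares the scores of different candidates.
    """
    return sum(1 for have, want in zip(candidate, TARGET) if have == want)
```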

We then make a number of random strings of letters of length 44. For this example, we'll just make two of them; in a real environment, we would make hundreds or thousands of them. Here are the two random strings, matched up against the actual sequence, as well as their fitness values:

```
tzag uinapd oxnavng asrqmdalv evxaempzno yce
|    ||     | |   ||     |  |     |     |      fitness = 11
the quick brown dog jumped over the lazy fox

hgfboaiey afdoanvsdar ayrebnonas od ahdsflga
      |  |                         |           fitness = 3
the quick brown dog jumped over the lazy fox
```
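Producing such random strings is only a few lines of code; a sketch under the same assumptions (the names ALPHABET and random_candidate are mine):

```python
import random
import string

ALPHABET = string.ascii_lowercase + " "  # the letters a-z plus the space character

def random_candidate(length: int = 44) -> str:
    """String together one random sequence from the allowed character pool."""
    return "".join(random.choice(ALPHABET) for _ in range(length))

# Two candidates, as in this walkthrough; a real run would create
# hundreds or thousands of them.
population = [random_candidate() for _ in range(2)]
```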

We can clearly see that the first string has the better fitness, so we toss out the other one and keep the first; a real algorithm would save several of the best. For the next round, we make a number of mutations to the saved string: we pick some of its characters and substitute a completely random letter for each. If we had several strings to choose from, we might also take a prefix of one string and combine it with the suffix of another to produce a third string of the same length; this is called crossing over and is another way to generate good candidates.
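Both operators are short; a sketch under the same assumptions (helper names are mine; the default of ten mutations mirrors the mutants in this example):

```python
import random
import string

ALPHABET = string.ascii_lowercase + " "

def mutate(parent: str, n_mutations: int = 10) -> str:
    """Substitute a completely random character at n distinct positions."""
    chars = list(parent)
    for i in random.sample(range(len(chars)), n_mutations):
        chars[i] = random.choice(ALPHABET)  # may occasionally re-pick the same char
    return "".join(chars)

def crossover(a: str, b: str) -> str:
    """Prefix of one parent plus the suffix of the other, same length."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]
```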

Let's make a couple of mutants...

```
original: tzag uinapd oxnavng asrqmdalv evxaempzno yce
            |    |   |   |     |    |    |    |   |  |
mutant:   tzeg uioapdxoxnzvng amrqmdxlv ekxaemlznolycx

original: tzag uinapd oxnavng asrqmdalv evxaempzno yce
           |    |   |   |     |    |    |    |   |  |
mutant:   thag ubnape oxxavng lsrqmpalv zvxaekpzne yve
```

... and then see how fit these mutants are compared to the best of the previous batch!

```
tzag uinapd oxnavng asrqmdalv evxaempzno yce
|    ||     | |   ||     |  |     |     |      fitness = 11
the quick brown dog jumped over the lazy fox

tzeg uioapdxoxnzvng amrqmdxlv ekxaemlznolycx
| |  ||     | |   ||     |  |     | |      |   fitness = 13
the quick brown dog jumped over the lazy fox

thag ubnape oxxavng lsrqmpalv zvxaekpzne yve
||   |      |     ||        |     |     |      fitness = 9
the quick brown dog jumped over the lazy fox
```

The first mutant is better than the original, so we base the next round of mutations on it. Generation by generation, the fitness keeps improving until the string becomes an exact match for the target.
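Putting the pieces together, the loop from this walkthrough can be sketched as follows (still Python; I keep just one survivor per round, as above, and use a single mutation per mutant so that late generations don't keep undoing characters that are already correct):

```python
import random
import string

TARGET = "the quick brown dog jumped over the lazy fox"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate: str) -> int:
    """Count how many characters are correct (in the right position)."""
    return sum(1 for have, want in zip(candidate, TARGET) if have == want)

def mutate(parent: str) -> str:
    """Substitute a completely random character at one random position."""
    chars = list(parent)
    chars[random.randrange(len(chars))] = random.choice(ALPHABET)
    return "".join(chars)

# Start from one random string; each round, keep the fittest of the
# current best plus a batch of its mutants.
best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
while fitness(best) < len(TARGET):
    best = max([best] + [mutate(best) for _ in range(100)], key=fitness)

print(best)  # prints: the quick brown dog jumped over the lazy fox
```

Keeping the current best in the candidate pool means the fitness can never go backwards, so the loop is guaranteed to terminate once every character matches.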

Given the simplicity of the algorithm and this example, it's easy to imagine a better way of solving this particular problem. But in extremely complex situations, where the string is merely a model of something else, where you generate and conserve a much larger number of mutants, and where you use additional techniques such as crossing over, a genetic algorithm is an effective way of solving the problem.