Ever since I learned about evolutionary algorithms, I’ve been fascinated by them. This class of algorithms uses concepts from evolution, such as diversity and selection, to learn how to solve problems it knows nothing about. It seems both magical and too good to be true; and at the same time, being based on evolution, it makes complete sense.

There’s also something strangely satisfying in grooming new AI breeds and savagely killing them because they’re not “fit enough”.

NEAT (NeuroEvolution of Augmenting Topologies) is a neuroevolution genetic machine learning algorithm. The paper presenting it is clear enough to serve as a good guideline for implementing it from scratch.

In this post, I will build a game of snake, then groom an AI that masters it without ever even knowing it’s playing snake. The entirety of the code is in the post itself1, so feel free to look at the source to figure out what’s happening. If you don’t want the details, you can skip directly to the conclusion, which demonstrates the learning itself.

The game

We will start with a simple game of snake. You can play it with the IJKL keys of your keyboard:

Now that’s all well and fun, but we need to simplify the problem for the algorithm. We center the game on the player and rotate it so that straight forward is up. That way the problem becomes relative to the snake, rather than absolute to the board.

Since directions are now relative, commands should also be: we’ll use U and O to turn left and right:

It gets harder, but the problem we are looking at is now standardized: apples are passing by, and you want to turn to hit them, without eating your tail2.

Now we’ve got something working that we can feed the computer. Time to switch to the AI.


We will build a neural network that will learn how to play this game. In fact, we will build a bunch of these, kill the worst, keep the best, make them mate together, take their progeny, expose them to radiation for accelerated mutations, and keep iterating till we find one that actually works. Pretty atrocious algorithm if you push the metaphor. But efficient.

We’ll loosely follow NEAT. There are a few bits we need to build:

  • Two entities: genomes and networks. Networks take inputs and transform them into outputs. As in biology, genomes are the encoding of the network.
  • Convert genetic material to a functional network.
  • Handle mating and mutations.
  • Handle our collection of genomes, notably ablation (removing the least fit).
  • A loop to orchestrate all that, iterate, run the game, etc.

We will start with the whole genome and network.

We’ll use this structure:

interface Link {
  innovationId: string,
  from: Node,
  to: Node,
  weight: number, // 0 .. 1
  enabled: boolean
}

interface Node {
  type: "in" | "out" | "hidden",
  pos?: {                   // when type = "in"
    row: number,
    col: number
  },
  dir?: "left" | "right",   // when type = "out"
  nodeId?: number           // when type = "hidden"
}

interface Genome {
  links: Link[],
  genomeId: number,
  speciesId: number,
  nodes: number // that's the count
}
First let’s look at an intelligently designed organism’s genome:

Each gene indicates a link from one node to another. The nodes formatted like “1x2” are input nodes. Nodes indicating directions are output nodes. The number on top is called the “innovation number”. We’ll come back to it in a moment.

Here is a view of the topology it generates:

Let’s see what it can do. We’ll transform the genome into a set of functions: one function per output, taking the board as input and determining how much the output is activated, by applying an activation function over the sum of the incoming signals.
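As a sketch (with simplified stand-ins for the interfaces above, and my own naming), the evaluation could be a simple recursion over the links, assuming the network is acyclic. The steepened sigmoid constant (-4.9) is the one used in the NEAT paper:

```typescript
// Simplified node and link types, stand-ins for the interfaces above.
type N =
  | { type: "in"; row: number; col: number }
  | { type: "out"; dir: "left" | "right" }
  | { type: "hidden"; nodeId: number };

interface L { from: N; to: N; weight: number; enabled: boolean }

const sigmoid = (x: number) => 1 / (1 + Math.exp(-4.9 * x));

// Compute a node's activation by recursing over the enabled links feeding it.
// `board` returns the input signal at a given cell (e.g. 1 if an apple is there).
function activation(node: N, links: L[], board: (row: number, col: number) => number): number {
  if (node.type === "in") return board(node.row, node.col);
  const sum = links
    .filter(l => l.enabled && l.to === node) // incoming links (reference equality)
    .reduce((acc, l) => acc + l.weight * activation(l.from, links, board), 0);
  return sigmoid(sum);
}
```

Calling `activation` once per output node gives us exactly the per-output functions described above.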

For example, if we simulate an apple at various positions relative to our snake (use ^ > < v to change its direction), the outputs receive the following activation:

We now have a functional transformation of the genome into an activation function. The last step necessary is to build a player from that and let it play snake.
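The player's decision rule can be as small as this (the threshold value and names are my assumptions, not the post's exact code):

```typescript
// Turn toward the most activated output, if it clears a threshold;
// otherwise keep going straight.
function decide(left: number, right: number, threshold = 0.5): "left" | "right" | "straight" {
  if (left < threshold && right < threshold) return "straight";
  return left >= right ? "left" : "right";
}
```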

Now this is a pretty simple genome, but it works ok. It turns whenever it sees an apple or when it’s gonna eat itself. Running it a few times already teaches us a few things. A) it’s much faster than you at snake. B) it’s not perfect, despite being intelligently designed. C) scores are not completely deterministic. In fact, let’s try to understand how many runs settle on a good average, Monte Carlo style. (It’s mostly going to give us an indication for this species, but it should give an order of magnitude.)


This yields interesting results. First, we see a significant spread between min and max. Second, we see that the game is not terribly optimized, and iterations take a few ms each. Third, we observe that the average results are consistent to +/- 10% over 5 iterations, +/- 5% over 10 iterations, and about +/- 2% over 100 iterations.

Figuring out the right number of trials per species is a balancing game between obtaining precise results each round and having more rounds. 10% fidelity seems enough for a variety of reasons.
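The averaging itself is the trivial part (names are mine):

```typescript
// Average a noisy fitness function over n trials, Monte Carlo style.
function averageFitness(play: () => number, trials = 10): number {
  let total = 0;
  for (let i = 0; i < trials; i++) total += play();
  return total / trials;
}
```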


Let’s see if we can build species which can evolve genes that play the game, and maybe beat our simple intelligent design genome.

There are two aspects to evolution: variation (changing the species) and selection. Let’s focus on variation first.


Variation is the set of mechanisms changing the genome of a species. It introduces entropy in the system so that better fitted (and worse…) species appear.


The first mechanism of variation is mutation, introducing random changes within a genome. Since we can’t actually use radiation to modify our virtual genome, we’ll have to help with that.

We can mutate a genome by changing the existing genes: activating deactivated genes, or randomizing the weight of existing links. We can also create new genes, i.e. create new connections and new nodes.

with probabilities (each in [0, 1]) for mutation on weight: enable: disable: split link: new link:
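A sketch of the per-gene part of that mutation pass (the probabilities and names here are illustrative; splitting a link and adding a brand new link are handled separately with their own probabilities, since they need fresh innovation ids from the dictionary introduced later):

```typescript
interface MLink { innovationId: number; weight: number; enabled: boolean }

// Mutate existing genes: re-randomize weights and flip enabled flags,
// each with its own probability.
function mutateGenes(
  links: MLink[],
  rates = { weight: 0.8, enable: 0.05, disable: 0.05 }
): MLink[] {
  return links.map(l => {
    const m = { ...l };
    if (Math.random() < rates.weight) m.weight = Math.random();
    if (!m.enabled && Math.random() < rates.enable) m.enabled = true;
    else if (m.enabled && Math.random() < rates.disable) m.enabled = false;
    return m;
  });
}
```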


The second mechanism of variation is mating. It allows traits to be propagated between generations by crossing the genomes of two individuals. Randomly merging topologies is hard: how do you know if a node with the same id is the same between two genomes? How do you know if a given topology will make sense?

NEAT introduces the notion of innovation to handle this. Every time a new node or a new link is inserted, we give it an innovation id. The innovation id is global: if a new link is created between two given nodes, that link’s innovation id will always be the same across individuals and species. The same goes for nodes. This is how NEAT traces ancestry and allows simple mating between genomes: a given gene’s innovation id is the same throughout genomes.

We implement a dictionary that will generate innovation ids accordingly. If the node or the link gene is indeed new, it gets a new id. If it creates a topology that we already encountered, we reuse the same innovation id.
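That dictionary can be as simple as a map keyed by the kind of change and the endpoints involved (a sketch; the key format is my choice):

```typescript
// Global innovation registry: the same structural change always yields the same id.
const innovations = new Map<string, number>();
let nextInnovation = 0;

function innovationId(kind: "link" | "split", from: number, to: number): number {
  const key = `${kind}:${from}->${to}`;
  if (!innovations.has(key)) innovations.set(key, nextInnovation++);
  return innovations.get(key)!;
}
```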

Splitting a link between and . Result: <div id="innovation-id-demo-split-result" ></div>

Inserting a link between and . Result: <div id="innovation-id-demo-insert-result" ></div>

Now we can proceed with mating, which happens by matching two genomes. Matching genes are selected randomly. Excess genes are picked from the fittest parent. If fitness is equal, all are selected.


Fitness delta: (> 0 means genome on the left is fittest)

Viewed as graph:


<div id="mating-genome-res-graph" style="border: 1pt solid; display: inline-block; width: 45%; height: 300px; margin: auto"></div>
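The matching rule described above can be sketched like so (simplified gene type, my naming):

```typescript
interface CGene { innovationId: number; weight: number }
interface CGenome { fitness: number; genes: CGene[] }

// Cross two genomes: genes with matching innovation ids are picked at random;
// excess/disjoint genes come from the fitter parent (from both on a tie).
function crossover(a: CGenome, b: CGenome): CGene[] {
  const ma = new Map(a.genes.map(g => [g.innovationId, g] as [number, CGene]));
  const mb = new Map(b.genes.map(g => [g.innovationId, g] as [number, CGene]));
  const child: CGene[] = [];
  for (const id of new Set([...ma.keys(), ...mb.keys()])) {
    const ga = ma.get(id), gb = mb.get(id);
    if (ga && gb) child.push(Math.random() < 0.5 ? ga : gb);
    else if (ga && a.fitness >= b.fitness) child.push(ga);
    else if (gb && b.fitness >= a.fitness) child.push(gb);
  }
  return child;
}
```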


We have the basics of evolution at an individual level; now we’ll tackle the selection part. This will happen by having generation after generation compete in our game, eliminate the lowest performers, breed the rest, rinse and repeat.

NEAT additionally introduces the concept of speciation, so that new genes that could be beneficial compete “locally” and have more chances to survive their introduction. The idea here is that we group genomes that are similar, operate selection and ablation within a species, mate, reassemble species, then carry on.


Compatibility distance is calculated based on the number of genes that differ, and the distance of weights.

Distance using coefficients: weight: excess genes count: disjoint genes count:
Distance: 0
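Concretely, the NEAT paper computes the distance as c1·E/N + c2·D/N + c3·W̄, where E counts excess genes, D disjoint genes, W̄ is the average weight difference of matching genes, and N is the size of the larger genome. A sketch (gene type simplified, coefficient defaults are the paper's common choices):

```typescript
interface DG { innovationId: number; weight: number }

// NEAT-style compatibility distance between two (non-empty) genomes.
function compatibility(a: DG[], b: DG[], c1 = 1, c2 = 1, c3 = 0.4): number {
  const ma = new Map(a.map(g => [g.innovationId, g] as [number, DG]));
  const mb = new Map(b.map(g => [g.innovationId, g] as [number, DG]));
  // Genes beyond the other parent's highest innovation id are "excess",
  // other unmatched genes are "disjoint".
  const cutoff = Math.min(
    Math.max(...a.map(g => g.innovationId)),
    Math.max(...b.map(g => g.innovationId))
  );
  let excess = 0, disjoint = 0, wDiff = 0, matching = 0;
  for (const id of new Set([...ma.keys(), ...mb.keys()])) {
    const ga = ma.get(id), gb = mb.get(id);
    if (ga && gb) { matching++; wDiff += Math.abs(ga.weight - gb.weight); }
    else if (id > cutoff) excess++;
    else disjoint++;
  }
  const n = Math.max(a.length, b.length);
  return (c1 * excess) / n + (c2 * disjoint) / n + (matching ? (c3 * wDiff) / matching : 0);
}
```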

Calculating distance allows us to group genomes by species, letting variations develop in a semi-protected environment. This happens by maintaining a list of species from one generation to the next. For each species, an individual genome is picked randomly as a representative. Each individual from the new generation is placed in the first species with which it is compatible (i.e. distance below a threshold). If it ain’t compatible with any, a new species is created, with that genome as representative.

interface Species {
  id: number,
  representative: Genome,
  genomes: Genome[],
  speciesAge: number
}
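A sketch of that assignment, generic over the genome type, with the distance function and threshold injected (all names are mine):

```typescript
interface Sp<T> { id: number; representative: T; genomes: T[] }

// Assign each genome of a new generation to the first compatible species,
// or create a new species with that genome as representative.
function speciate<T>(
  genomes: T[],
  prev: Sp<T>[],
  dist: (a: T, b: T) => number,
  threshold: number
): Sp<T>[] {
  // Carry species over: pick a random member as the new representative, empty the roster.
  const species: Sp<T>[] = prev.map(s => ({
    ...s,
    representative: s.genomes[Math.floor(Math.random() * s.genomes.length)] ?? s.representative,
    genomes: []
  }));
  let nextId = Math.max(0, ...species.map(s => s.id)) + 1;
  for (const g of genomes) {
    const home = species.find(s => dist(g, s.representative) < threshold);
    if (home) home.genomes.push(g);
    else species.push({ id: nextId++, representative: g, genomes: [g] });
  }
  return species.filter(s => s.genomes.length > 0); // drop species that died out
}
```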

Now we’ll switch to executing all of that, starting with an initial population in which all gene links between input and output exist, but are disabled. We introduce a new entity: the population.

We make the population compete at snake and track their fitness:

We then prune the lowest half of each species, sorted by fitness. We also remove under-performing species older than a given number of generations.


Now we repopulate the remaining half through random breeding, sprinkling some mutations into the mix.


We now have all the ingredients for breeding snake players. Let’s now plug all of that into an evolutionary loop. We will first generate a base population and group it into species. Then iterate on:

  • Compete all genomes
  • Ditch the lowest half of each species
  • Ditch any species lower than the average and older than X generations
  • Repopulate with offspring of the remaining species, introduce mutations
  • Re-sort into species
  • Loop
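Stripped of speciation for brevity, the skeleton of that loop might look like this (all names are mine; `breed` is expected to include crossover and mutation):

```typescript
// Generic generational loop: compete, cull the bottom half, repopulate.
function evolve<T>(
  init: () => T,
  fitness: (g: T) => number,
  breed: (a: T, b: T) => T,
  popSize = 20,
  generations = 50
): T {
  const half = Math.floor(popSize / 2);
  let pop = Array.from({ length: popSize }, init);
  for (let gen = 0; gen < generations; gen++) {
    pop.sort((x, y) => fitness(y) - fitness(x)); // compete all genomes
    pop = pop.slice(0, half);                    // ditch the lowest half
    while (pop.length < popSize) {               // repopulate with offspring
      const a = pop[Math.floor(Math.random() * half)];
      const b = pop[Math.floor(Math.random() * half)];
      pop.push(breed(a, b));
    }
  }
  pop.sort((x, y) => fitness(y) - fitness(x));
  return pop[0]; // fittest individual found
}
```

Because the top half always survives, the best fitness can only go up from one generation to the next.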

Pause for demos every 10 generations. Pause when a new record is achieved.

Generation Max fitness Number of species


Maximum fitness over generations:


1. And lesson learned, not necessarily the best way of doing things. But I wanted to see what it would do.

2. In truth we don’t rotate the entire board but really just map the coordinates.