Biologically Inspired Computing:
Evolutionary Algorithms: Some
Details and Examples
This is lecture 4 of
'Biologically Inspired Computing'
Contents:
Steady state and generational EAs, basic ingredients,
example run-throughs on the TSP, example encodings,
hillclimbing, landscapes
A Standard Evolutionary Algorithm
The algorithm whose pseudocode is on the
next slide is a steady state, replace-worst
EA with tournament selection, using
mutation, but no crossover.
Parameters are popsize, tournament size,
mutation-rate.
It can be applied to any problem; the details
glossed over are all problem-specific.
A steady state, mutation-only, replace-worst EA with tournament selection
0. Initialise: generate a population of popsize random
solutions, evaluate their fitnesses.
1. Run Select to obtain a parent solution X.
2. With probability mute_rate, mutate a copy of X to
obtain a mutant M (otherwise M = X).
3. Evaluate the fitness of M.
4. Let W be the current worst in the population (BTR: break ties randomly).
If M is not less fit than W, then replace W with M.
(otherwise do nothing)
5. If a termination condition is met (e.g. we have
done 10,000 evaluations) then stop. Otherwise go
to 1.
Select: randomly choose tsize individuals from the
population. Let c be the one with best fitness
(BTR); return c.
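The algorithm above can be sketched in Python. This is a minimal sketch on a toy "count the ones" bit-string problem; the problem, the fitness/mutation/selection helpers, and the parameter defaults are all illustrative assumptions, while the parameter names (popsize, tsize, mute_rate) follow the slides.

```python
import random

def fitness(sol):
    return sum(sol)                        # toy "onemax": count the 1 bits

def mutate(sol):
    m = sol[:]                             # mutate a copy of the parent
    i = random.randrange(len(m))
    m[i] = 1 - m[i]                        # flip one randomly chosen bit
    return m

def select(pop, tsize):
    entrants = random.sample(pop, tsize)   # tournament of tsize individuals
    return max(entrants, key=fitness)      # return the fittest entrant

def steady_state_ea(popsize=20, tsize=3, mute_rate=0.9, length=30, evals=10000):
    # 0. Initialise a random population
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(popsize)]
    for _ in range(evals):                 # 5. stop after `evals` evaluations
        parent = select(pop, tsize)        # 1. Select a parent X
        # 2./3. mutate (with probability mute_rate) and evaluate
        child = mutate(parent) if random.random() < mute_rate else parent[:]
        worst = min(pop, key=fitness)      # 4. current worst in the population
        if fitness(child) >= fitness(worst):     # "not less fit than W"
            pop[pop.index(worst)] = child        # replace-worst
    return max(pop, key=fitness)

best = steady_state_ea()
print(fitness(best))
```

Note how replace-worst makes the population change only slightly per iteration, which is exactly what "steady state" means on the later slide.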
A generational, elitist,
crossover+mutation EA
with Rank-Based selection
0. Initialise: generate a population G of popsize
random solutions, evaluate their fitnesses.
1. Run Select 2*(popsize – 1) times to obtain a
collection I of 2*(popsize – 1) parents.
2. Randomly pair up the parents in I (into popsize – 1
pairs) and apply Vary to produce a child from each
pair. Let the set of children be C.
3. Evaluate the fitness of each child.
4. Keep the best in the population G (BTR) and delete
the rest.
5. Add all the children to G.
6. If a termination condition is met (e.g. we have
done 100 or more generations, i.e. runs through steps
1–5) then stop. Otherwise go to 1.
A generational, elitist, crossover+mutation EA
with Rank-Based selection, continued …
Select: sort the contents of G from best to worst,
assigning rank popsize to the best, popsize-1 to
the next best, etc …, and rank 1 to the worst.
The ranks sum to F = popsize(popsize+1)/2
Associate a probability Rank_i/F with each
individual i.
Using these probabilities, choose one individual
X, and return X.
Vary:
1. With probability cross_rate, do a crossover:
i.e. produce a child by applying a crossover
operator to the two parents. Otherwise, let the
child be a randomly chosen one of the parents.
2. Apply mutation to the child.
3. Return the mutated child.
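The rank-based Select above can be sketched directly in Python. The ranks sum to F = popsize(popsize+1)/2, and individual i is drawn with probability rank_i/F; the toy population and sum-based fitness are illustrative assumptions.

```python
import random

def rank_select(pop, fitness):
    ordered = sorted(pop, key=fitness)      # worst ... best
    popsize = len(ordered)
    ranks = list(range(1, popsize + 1))     # rank 1 = worst, popsize = best
    # random.choices normalises the weights itself, so the ranks can be
    # passed directly: individual i gets probability rank_i / F
    return random.choices(ordered, weights=ranks, k=1)[0]

pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(8)]
parent = rank_select(pop, fitness=sum)
print(parent)
```

Unlike tournament selection, every individual (even the worst, with rank 1) keeps a nonzero chance of being picked.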
The basic ingredients
A running EA can be seen as a sequence of generations.
A generation is simply characterised by a population.
An EA goes from generation t to generation t+1 in 3 steps:
1. Select parents from Gen t.
2. Produce children from the parents.
3. Merge the parent and child populations, and remove some
individuals so that popsize is maintained; this gives Gen t+1.
The basic ingredients, continued …
Repeating the same three steps takes Gen t+1 to Gen t+2, and so on.
Steady State vs Generational
There are two extremes:
Steady state: population only changes slightly in each
generation. (e.g. select 1 parent, produce 1 child, add that
child to pop and remove the worst)
Generational: population changes completely in each
generation. (Select some [could still be 1] parent(s), produce
popsize children; they become the next generation.)
Although, in practice, if we are using a generational EA, we
usually use elitism, which means the new generation
always contains the best solution from the previous
generation, the remaining popsize-1 individuals being new.
Selection and Variation
A selection method is a way to choose a parent from
the population, in such a way that the fitter an
individual is, the more likely it is to be selected.
A variation operator or genetic operator is any
method that takes in a (set of) parent(s), and
produces a new individual (called child). If the
input is a single parent, it is called a mutation
operator. If two parents, it is called crossover or
recombination.
Replacement
A replacement method is a way to decide
how to produce the next generation from the
merged previous generation and children.
E.g. we might simply sort them in order of
fitness, and take the best popsize of them.
What else might we do instead?
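The sort-and-truncate replacement just described can be sketched as follows; the toy bit-list individuals and sum-based fitness are illustrative assumptions.

```python
# Truncation replacement: merge the previous generation with the
# children, sort by fitness (best first), keep the best popsize.
def replace_truncate(parents, children, fitness, popsize):
    merged = parents + children
    merged.sort(key=fitness, reverse=True)   # best first (sort is stable)
    return merged[:popsize]                  # keep the best popsize

pop = replace_truncate([[0, 0], [1, 0]], [[1, 1], [0, 1]],
                       fitness=sum, popsize=2)
print(pop)   # → [[1, 1], [1, 0]]
```

Other answers to the slide's question include tournament-based removal, random removal, or age-based removal (always delete the oldest).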
Encodings
We want to evolve schedules, networks,
routes, coffee percolators, drug designs –
how do we encode or represent solutions?
How you encode dictates what your
operators can be, and imposes constraints
that the operators must meet.
E.g. Bin-Packing
The Problem: You have items of different sizes or weights:
e.g: 1 (30) 2 (30) 3 (30) 4 (20) 5 (10)
And you have several 'bins' with a max capacity, say 40. Find a way to pack them
into the smallest number of bins. This is a hard problem.
We will look at the equalize-bins version, also hard: pack these items into N bins
(we will choose N = 3), with arbitrary capacity, so that the weights of the 3 bins
are as equal as possible.
Example Encoding: have nitems numbers, each between 1 and 3. So:
1,2,1,3,3 means: items 1, 3 are in bin 1, item 2 is in bin 2, items 4, 5 are in bin 3.
Fitness: minimise difference between heaviest and lightest bin. In this case what is
the fitness?
Mutation: choose a gene at random, and change it to a random new value.
Crossover: choose a random crossover point c (one of genes 1, 2, 3, or 4); the child is a
copy of parent 1 up to and including gene c, and a copy of parent 2 thereafter. E.g.
Crossover of 3,1,2,3,3 and 2,2,1,2,1 at gene 2: 3,1,1,2,1
Crossover of 1,1,3,2,2 and 2,3,1,1,1 at gene 2: 1,1,1,1,1
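The equalize-bins encoding, fitness, and operators above can be sketched in Python; the weights are the slide's example, and the function names are illustrative.

```python
import random

weights = [30, 30, 30, 20, 10]     # items 1..5 from the slide
N = 3                              # number of bins

def fitness(genome):
    # genome[i] is the bin (1..N) holding item i+1; minimise the
    # difference between the heaviest and lightest bin
    loads = [0] * N
    for item, b in enumerate(genome):
        loads[b - 1] += weights[item]
    return max(loads) - min(loads)   # 0 = perfectly balanced

def mutate(genome):
    # change one randomly chosen gene to a random *new* value
    child = genome[:]
    i = random.randrange(len(child))
    child[i] = random.choice([b for b in range(1, N + 1) if b != child[i]])
    return child

def crossover(p1, p2):
    # one-point crossover: copy p1 up to and including gene c, p2 after
    c = random.randrange(1, len(p1))     # c is one of genes 1..4
    return p1[:c] + p2[c:]

print(fitness([1, 2, 1, 3, 3]))    # bins weigh 60, 30, 30 → 30
print(crossover([3, 1, 2, 3, 3], [2, 2, 1, 2, 1]))
```

With c = 2 the crossover call reproduces the slide's example, 3,1,1,2,1.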
Back to Basics
With your thirst for seeing example EAs
temporarily quenched, the story now skips
to simpler algorithms.
This will help to explain what it is about the
previous algorithms that makes them work.
The Travelling Salesperson Problem
(also see http://francisshanahan.com/tsa/tsaGAworkers.htm )
An example (hard) problem, for illustration
The Travelling Salesperson Problem
Find the shortest tour through the cities.
The one below is length: 33
[Diagram: five cities A–E with the length-33 tour drawn on the graph.]
Distance matrix:
     A   B   C   D   E
A    -   5   7   4  15
B    5   -   3   4  10
C    7   3   -   2   7
D    4   4   2   -   9
E   15  10   7   9   -
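As a sketch, the distance matrix above can be used to evaluate tours in code. A tour visits each city once and returns to its start, so the closing edge back to the first city is included; the function names are illustrative.

```python
# Symmetric distance matrix from the slide, stored as one entry per pair.
DIST = {
    ('A', 'B'): 5,  ('A', 'C'): 7,  ('A', 'D'): 4,  ('A', 'E'): 15,
    ('B', 'C'): 3,  ('B', 'D'): 4,  ('B', 'E'): 10,
    ('C', 'D'): 2,  ('C', 'E'): 7,
    ('D', 'E'): 9,
}

def dist(a, b):
    return DIST.get((a, b)) or DIST[(b, a)]   # look up either orientation

def tour_length(tour):
    # sum of consecutive edges, including the edge from last back to first
    return sum(dist(tour[i], tour[(i + 1) % len(tour)])
               for i in range(len(tour)))

print(tour_length('ACEDB'))   # → 32, as on the later hillclimbing slides
print(tour_length('DCEBA'))   # → 28
```

Lower is better here, so "fitness 28" beats "fitness 32" on the slides that follow.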
Simplest possible EA: Hillclimbing
0. Initialise: generate a random solution c; evaluate its
fitness, f(c). Call c the current solution.
1. Mutate a copy of the current solution – call the mutant m.
Evaluate the fitness of m, f(m).
2. If f(m) is no worse than f(c), then replace c with m,
otherwise do nothing (effectively discarding m).
3. If a termination condition has been reached, stop.
Otherwise, go to 1.
Note. No population (well, population of 1). This is a very simple
version of an EA, although it has been around for much longer.
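The hillclimber above can be sketched in a few lines. This toy example climbs a single-peaked integer landscape (fitness is highest at x = 17, mutation nudges the solution by ±1); the problem and all names are illustrative assumptions.

```python
import random

def fitness(x):
    return -(x - 17) ** 2              # single peak at x = 17

def mutate(x):
    return x + random.choice([-1, 1])  # small step to a nearby solution

def hillclimb(steps=1000):
    current = random.randint(0, 100)       # 0. random initial solution
    for _ in range(steps):                 # 3. stop after `steps` iterations
        m = mutate(current)                # 1. mutate a copy
        if fitness(m) >= fitness(current): # 2. no worse than current?
            current = m                    #    then accept the mutant
    return current

print(hillclimb())   # → 17 (a run this long reaches the single peak)
```

On this smooth single-peak landscape the climber always reaches the optimum; the next slides show why rugged landscapes make life harder.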
Why “Hillclimbing”?
Suppose that solutions are lined up along the x axis, and that mutation
always gives you a nearby solution. Fitness is on the y axis; this
is a landscape.
[Figure: a landscape curve with a hillclimbing trajectory marked by points 1–10.]
1. Initial solution; 2. rejected mutant; 3. new current solution;
4. new current solution; 5. new current solution; 6. new current solution;
7. rejected mutant; 8. rejected mutant; 9. new current solution;
10. rejected mutant, …
Example: HC on the TSP
We can encode a candidate solution to the TSP as a permutation.
Here is our initial random solution ACEDB, with fitness 32.
[Diagram: the tour ACEDB as Current solution; distance matrix as before.]
Example: HC on the TSP
Here is our initial random solution ACEDB, with fitness 32. We are going
to mutate it – first make Mutant a copy of Current.
[Diagram: Current solution ACEDB and its copy Mutant; distance matrix as before.]
HC on the TSP
We randomly mutate it (swap randomly chosen adjacent nodes) from ACEDB
to ACDEB, which has fitness 33 – so Current stays the same, because we
reject this mutant.
[Diagram: Current solution ACEDB and rejected Mutant ACDEB; distance matrix as before.]
HC on the TSP
We now try another mutation of Current (swap randomly chosen adjacent
nodes), from ACEDB to CAEDB. Fitness is 38, so reject that too.
[Diagram: Current solution ACEDB and rejected Mutant CAEDB; distance matrix as before.]
HC on the TSP
Our next mutant of Current is from ACEDB to AECDB. Fitness 33; reject
this too.
[Diagram: Current solution ACEDB and rejected Mutant AECDB; distance matrix as before.]
HC on the TSP
Our next mutant of Current is from ACEDB to ACDEB. Fitness 33; reject
this too.
[Diagram: Current solution ACEDB and rejected Mutant ACDEB; distance matrix as before.]
HC on the TSP
Our next mutant of Current is from ACEDB to ACEBD. Fitness is 32, equal
to Current, so this becomes the new Current.
[Diagram: new Current solution ACEBD; distance matrix as before.]
HC on the TSP
ACEBD is our Current solution, with fitness 32.
[Diagram: Current solution ACEBD; distance matrix as before.]
HC on the TSP
ACEBD is our Current solution, with fitness 32. We mutate it to DCEBA
(note that the first and last cities count as adjacent nodes); fitness
is 28, so this becomes our new Current solution.
[Diagram: Current solution ACEBD and accepted Mutant DCEBA; distance matrix as before.]
HC on the TSP
Our new Current, DCEBA, with fitness 28.
[Diagram: Current solution DCEBA; distance matrix as before.]
HC on the TSP
Our new Current is DCEBA, with fitness 28. We mutate it, this time
getting DCEAB, with fitness 33 – so we reject that, and DCEBA is still
our Current solution.
[Diagram: Current solution DCEBA and rejected Mutant DCEAB; distance matrix as before.]
And so on …
Landscapes
Recall S, the search space, and f(s), the fitness of a candidate in S
[Figure: fitness f(s) plotted over the members of S lined up along the x axis.]
The structure we get by imposing f(s) on S is called a landscape
What does the landscape look like if f(s) is a random number generator?
What kinds of problems would have very smooth landscapes?
What is the importance of the mutation operator in all this?
Neighbourhoods
Let s be an individual in S, f(s) is our fitness function, and M is
our mutation operator, so that M(s1) → s2, where s2 is a
mutant of s1.
Given M, we can usually work out the neighbourhood of an individual
point s – the neighbourhood of s is the set of all possible mutants of s
E.g. Encoding: permutations of k objects (e.g. for k-city TSP)
Mutation: swap any adjacent pair of objects.
Neighbourhood: Each individual has k neighbours. E.g.
neighbours of EABDC are: {AEBDC, EBADC, EADBC, EABCD, CABDE}
Encoding: binary strings of length L (e.g. for L-item bin-packing)
Mutation: choose a bit randomly and flip it.
Neighbourhood: Each individual has L neighbours. E.g.
neighbours of 00110 are: {10110, 01110, 00010, 00100, 00111}
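Both neighbourhoods above can be enumerated directly. A sketch, representing permutations and bit strings as Python strings (the first and last positions of a permutation are treated as adjacent, matching the EABDC example):

```python
def swap_neighbours(perm):
    # all mutants reachable by swapping one adjacent pair, with wraparound
    out = []
    for i in range(len(perm)):
        j = (i + 1) % len(perm)            # last position is adjacent to first
        p = list(perm)
        p[i], p[j] = p[j], p[i]
        out.append(''.join(p))
    return out

def flip_neighbours(bits):
    # all mutants reachable by flipping exactly one bit
    return [bits[:i] + ('1' if b == '0' else '0') + bits[i + 1:]
            for i, b in enumerate(bits)]

print(swap_neighbours('EABDC'))
# → ['AEBDC', 'EBADC', 'EADBC', 'EABCD', 'CABDE']
print(flip_neighbours('00110'))
# → ['10110', '01110', '00010', '00100', '00111']
```

In both cases the neighbourhood size equals the string length, as the slide states (k for the permutation, L for the bit string).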
Landscape Topology
Mutation operators lead to slight changes in
the solution, which tend to lead to slight
changes in fitness.
I.e. the fitnesses in the neighbourhood of s are
often similar to the fitness of s.
Landscapes tend to be locally smooth
What about big mutations ??
It turns out that ….
Typical Landscapes
[Figure: fitness f(s) over the members of S, mostly flat and poor with a few narrow high-fitness regions.]
Typically, with large (realistic) problems, the huge majority of the
landscape has very poor fitness – there are tiny areas where the decent
solutions lurk.
So, big random changes are very likely to take us outside the nice areas.
Quiz Questions
Here is an altered distance matrix for the TSP problem in these
slides. Illustrate five steps of Hillclimbing, in the same way done
in these slides (inventing your own random mutations etc…),
using this altered matrix.
Consider the example encoding and mutation operator given in
these slides for the equalize-bins problem, and imagine nitems =
100 and N (number of bins) is 10. What is the neighbourhood
size?
     A   B   C   D   E
A    -  15  17   4  15
B   15   -   3  14  10
C   17   3   -   2  17
D    4  14   2   -  19
E   15  10  17  19   -