Neural Optimization of Evolutionary Algorithm Strategy Parameters
Hiral Patel
Outline
Why optimize parameters of an EA?
Why use neural networks?
What has been done so far in this field?
Experimental Model
Preliminary Results and Conclusion
Questions
Why optimize parameters of an EA?
Faster convergence
Better overall results
Avoid premature convergence
Why use neural networks?
Ability to learn
Adaptability
Pattern recognition
Faster than using another EA
What has been done so far in this field?
Machine learning primarily used to optimize ES (evolution strategies) and EP (evolutionary programming)
Optimized mutation operators
Little has been done to optimize GA parameters
Experimental Model Outline
Neural Network Basics
Hebbian Learning
Parameters of the Genetic Algorithm to be optimized
Neural Network Inputs
Neural Network Basics
[Diagram: single-neuron model. A vector input signal x(k) ∈ R^(n×1), weighted by synaptic weights wq1(k), ..., wqn(k) and offset by a bias bq(k), is summed to vq(k); a sigmoid activation function f(•) produces the neuron response (output) yq(k). The derivative of the activation function, g(•) = f'(•), together with the desired neuron response dq(k) and the error e(k), drives the weight update algorithm.]
Adapted from: Ham, F. M., Kostanic, I., Principles of Neurocomputing for Science and Engineering, McGraw-Hill, NY, 2001
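As a concrete illustration of the neuron model above, here is a minimal Python sketch of one supervised update step, assuming a sigmoid activation and an error-driven (delta-rule) weight update; the function and variable names are mine, not from the slides:

    import numpy as np

    def sigmoid(v):
        # Sigmoid activation function f(v)
        return 1.0 / (1.0 + np.exp(-v))

    def sigmoid_deriv(v):
        # Derivative of the activation, g(v) = f'(v)
        s = sigmoid(v)
        return s * (1.0 - s)

    def neuron_step(x, w, b, d, eta=0.1):
        # One supervised update for a single neuron.
        # x: input vector x(k), w: synaptic weights, b: bias bq(k),
        # d: desired response dq(k), eta: learning rate (assumed value).
        v = w @ x + b                      # weighted sum vq(k)
        y = sigmoid(v)                     # neuron response yq(k)
        e = d - y                          # error e(k) = dq(k) - yq(k)
        delta = eta * e * sigmoid_deriv(v)
        return w + delta * x, b + delta, y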
Hebbian Learning
Unsupervised learning
Time-dependent
Learning signal and forgetting factor
[Diagram: Hebb learning for a single neuron. Inputs x0, x1, ..., xn with weights w0, w1, ..., wn sum to the activation v; the output is y = f(v), with derivative g(v) = df(v)/dv.]
Standard Hebbian learning rule, with learning signal η and forgetting factor γ: w(k+1) = w(k) + η y(k) x(k) - γ w(k)
Adapted from: Ham, F. M., Kostanic, I., Principles of Neurocomputing for Science and Engineering, McGraw-Hill, NY, 2001
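A minimal Python sketch of the rule above, under the common assumption that the learning signal for plain Hebbian learning is the neuron's own output y (names are mine, not from the slides):

    import numpy as np

    def hebb_update(x, w, eta=0.01, gamma=0.001):
        # One unsupervised Hebbian step with a forgetting factor.
        # eta: learning signal strength; gamma: forgetting factor that
        # decays old weights so they stay bounded over time.
        v = w @ x                          # activation v = w^T x
        y = 1.0 / (1.0 + np.exp(-v))       # output y = f(v), sigmoid assumed
        return w + eta * y * x - gamma * w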
Parameters of the Genetic Algorithm to be optimized
Crossover Probability
Crossover Cell Divider
Cell Crossover Probability
Mutation Probability
Mutation Cell Divider
Cell Mutation Probability
Bit Mutation Probability
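For concreteness, the seven strategy parameters could be grouped as in this sketch; the field names and comments are my reading of the list above, not definitions from the slides. The abbreviations match the series labels in the plots later (CP, CCP, MP, CMP, BMP, CCD, MCD):

    from dataclasses import dataclass

    @dataclass
    class GAParams:
        # Strategy parameters the neural network adjusts each generation.
        crossover_prob: float        # CP:  chance a selected pair is crossed over
        crossover_cell_divider: int  # CCD: splits the chromosome into cells for crossover
        cell_crossover_prob: float   # CCP: chance a given cell is exchanged
        mutation_prob: float         # MP:  chance an individual is mutated
        mutation_cell_divider: int   # MCD: splits the chromosome into cells for mutation
        cell_mutation_prob: float    # CMP: chance a given cell is mutated
        bit_mutation_prob: float     # BMP: chance a bit within a mutated cell flips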
Neural Network Inputs
Current Parameter Values
Variance
Mean
Max fitness
Average bit changes for crossover
Constant parameters of the GA
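One plausible way to assemble these inputs into a single feature vector each generation (a sketch only; the slides do not specify the exact encoding):

    import numpy as np

    def nn_inputs(params, fitnesses, avg_bit_changes, ga_constants):
        # params: current strategy-parameter values (flattened),
        # fitnesses: fitness of every individual this generation,
        # avg_bit_changes: mean bits changed by crossover,
        # ga_constants: fixed GA settings (e.g. population size).
        stats = [np.var(fitnesses), np.mean(fitnesses), np.max(fitnesses)]
        return np.concatenate([params, stats, [avg_bit_changes], ga_constants])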
Preliminary Results
Tests run with Knapsack problem with
dataset 3, pop. size 800, rep. size 1600
Learning Signal and Forgetting factor
are not yet optimal enough to suggest
better performance with NN
Output for 1600 generations
[Plot: fitness (y-axis, -100 to 500) vs. generation (x-axis, 0 to 2000); series: Mean, Variance, CCD, MCD.]
Probabilities for 1600 generations
[Plot: probability (y-axis, 0 to 1.2) vs. generation (x-axis, 0 to 2000); series: CP, CCP, MP, CMP, BMP.]
Conclusion
It may be possible to get better performance out of a neurally optimized EA, as long as the (unsupervised) neural network is able to adapt to the changes quickly and to recognize local optima.
Possible Future Work
Use an ES to optimize the parameters
Use a SOM to do feature extraction on the optimized parameter values
Use the SOM output as codebook vectors for an LVQ network, then classify the output of the original ES
Use the classifications to perform supervised training of a Levenberg-Marquardt backpropagation network to form a rule set
Questions?