Algorithm 6: The Competitive Learning Algorithm
(Self-Organizing Map (SOM) - Kohonen)
Step 1: Initialization
Set the initial synaptic weights wij to small random values, say in the interval [0, 1], and assign
a small positive value to the learning rate parameter α. // α = 0.1
The synaptic weights form the n × m matrix W, whose column Wj is the weight vector of output neuron j:

W = \begin{pmatrix} w_{11} & \cdots & w_{1j} & \cdots & w_{1m} \\ \vdots & & \vdots & & \vdots \\ w_{i1} & \cdots & w_{ij} & \cdots & w_{im} \\ \vdots & & \vdots & & \vdots \\ w_{n1} & \cdots & w_{nj} & \cdots & w_{nm} \end{pmatrix}, \quad W_j = \begin{pmatrix} w_{1j} \\ \vdots \\ w_{ij} \\ \vdots \\ w_{nj} \end{pmatrix}
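For illustration, a minimal NumPy sketch of Step 1 (the variable names and the seed are assumptions for this sketch, not from the source):

```python
import numpy as np

n, m = 2, 3          # n input neurons, m output (Kohonen) neurons
alpha = 0.1          # learning rate parameter α

rng = np.random.default_rng(seed=42)     # arbitrary seed, for reproducibility
W = rng.uniform(0.0, 1.0, size=(n, m))   # column W[:, j] is weight vector Wj
```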
Step 2: Activation and similarity matching
Activate the Kohonen network by applying the input vector X, and find the winner-takes-all (best matching) neuron jX at iteration p, using the minimum-distance Euclidean criterion

j_X = \min_j \|X - W_j(p)\| = \min_j \left\{ \sum_{i=1}^{n} [x_i - w_{ij}(p)]^2 \right\}^{1/2}, \quad j = 1, 2, ..., m    (6.38)
where n is the number of neurons in the input layer, and m is the number of neurons in the
output or Kohonen layer.
// the smallest distance dj identifies the winning neuron j, and hence its weight vector Wj
// Wj = (w1j, w2j, ..., wnj)T
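A sketch of this similarity-matching step, continuing the snippet above (find_winner is a hypothetical helper, not from the source):

```python
def find_winner(X, W):
    """Return the index jX of the column of W nearest to X (Eq. 6.38)."""
    distances = np.linalg.norm(X[:, None] - W, axis=0)  # Euclidean d_j per column
    return int(np.argmin(distances))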
Step 3: Learning
Update the synaptic weights
wij(p + 1) = wij(p) + Δwij(p), // j is fixed from Step 2, i = 1, 2, ..., n
// Wj(p + 1) = Wj(p) + ΔWj(p)
where Δwij(p) is the weight correction at iteration p. // initially set p = 0
The weight correction is determined by the competitive learning rule

\Delta w_{ij} = \begin{cases} \alpha \, [x_i - w_{ij}(p)], & j \in \Lambda_j(p) \\ 0, & j \notin \Lambda_j(p) \end{cases}    (6.39)
where α is the learning rate parameter, and Λj(p) is the neighborhood function centered
around the winner-takes-all neuron jX at iteration p.
The neighborhood function Λj usually has a constant amplitude. This implies that all the
neurons located inside the topological neighborhood are activated simultaneously, and that the
relationship among those neurons is independent of their distance from the winner-takes-all neuron jX. This simple form of neighborhood function is shown in Figure 6.27.
Figure 6.27 Rectangular neighborhood function
The rectangular neighborhood function Λj takes on a binary character. Thus,
identifying the neuron outputs, we may write

y_j = \begin{cases} 1, & j \in \Lambda_j(p) \\ 0, & j \notin \Lambda_j(p) \end{cases}    (6.40)
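Step 3 might look as follows in the same sketch, assuming the simplest rectangular neighborhood, reduced to the winner alone as in the worked example below (update_weights is a hypothetical helper):

```python
def update_weights(X, W, jX, alpha=0.1):
    """Competitive learning rule (Eq. 6.39): move only the winner's
    weight vector toward X; all other columns are left unchanged."""
    W = W.copy()
    W[:, jX] += alpha * (X - W[:, jX])   # Δwij = α (xi - wij) for j = jX only
    return W
```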
Step 4: Iteration
Increase iteration p by one, go back to Step 2, and continue until the minimum-distance
Euclidean criterion is satisfied or no noticeable changes occur in the feature map.
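Putting Steps 2-4 together, a sketch of the full training loop (the stopping tolerance and the iteration cap are assumptions of this sketch):

```python
def train_som(inputs, W, alpha=0.1, max_iter=1000, tol=1e-4):
    """Steps 2-4: present each input vector, update the winner, and stop
    once the map shows no noticeable change between iterations."""
    for p in range(max_iter):
        W_prev = W.copy()
        for X in inputs:
            jX = find_winner(X, W)                # Step 2: similarity matching
            W = update_weights(X, W, jX, alpha)   # Step 3: learning
        if np.max(np.abs(W - W_prev)) < tol:      # Step 4: stopping test
            break
    return W
```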
Figure 6.23 Feature-mapping Kohonen model (n input neurons, m output neurons with outputs yj; weight matrix W = [wij] of size n × m, column Wj per output neuron)
Figure 6.24 Architecture of the Kohonen network (input vector X = (x1, x2, ..., xi, ..., xn)T)
Figure 6.25 The Mexican hat function of lateral connection
Example (pages 208-209):
• Suppose that the two-dimensional input vector X is presented to the three-neuron Kohonen network,

X = \begin{pmatrix} 0.52 \\ 0.12 \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}

// in general, X = (x1, x2, ..., xi, ..., xn)T
• The initial weight vectors Wj (i.e., the columns of the weight matrix W of size n × m) are given by

W_1 = \begin{pmatrix} 0.27 \\ 0.81 \end{pmatrix} = \begin{pmatrix} w_{11} \\ w_{21} \end{pmatrix}, \quad W_2 = \begin{pmatrix} 0.42 \\ 0.70 \end{pmatrix} = \begin{pmatrix} w_{12} \\ w_{22} \end{pmatrix}, \quad W_3 = \begin{pmatrix} 0.43 \\ 0.21 \end{pmatrix} = \begin{pmatrix} w_{13} \\ w_{23} \end{pmatrix}

// in general, Wj = (w1j, ..., wij, ..., wnj)T for j = 1, 2, ..., m, and W = \begin{pmatrix} w_{11} & \cdots & w_{1j} & \cdots & w_{1m} \\ \vdots & & \vdots & & \vdots \\ w_{n1} & \cdots & w_{nj} & \cdots & w_{nm} \end{pmatrix}
• We find the winning neuron jX (i.e., the best-matching neuron of X) using the minimum-distance
Euclidean criterion:
d_1 = \sqrt{(x_1 - w_{11})^2 + (x_2 - w_{21})^2} = \sqrt{(0.52 - 0.27)^2 + (0.12 - 0.81)^2} = 0.73
d_2 = \sqrt{(x_1 - w_{12})^2 + (x_2 - w_{22})^2} = \sqrt{(0.52 - 0.42)^2 + (0.12 - 0.70)^2} = 0.59
d_3 = \sqrt{(x_1 - w_{13})^2 + (x_2 - w_{23})^2} = \sqrt{(0.52 - 0.43)^2 + (0.12 - 0.21)^2} = 0.13
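The three distances can be checked numerically; a sketch using the vectors above:

```python
X = np.array([0.52, 0.12])
W = np.array([[0.27, 0.42, 0.43],
              [0.81, 0.70, 0.21]])         # columns are W1, W2, W3
d = np.linalg.norm(X[:, None] - W, axis=0)
print(np.round(d, 2))                      # [0.73 0.59 0.13] -> neuron 3 wins
```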
// recall Eq. (6.38): j_X = \min_j \|X - W_j(p)\| = \min_j \left\{ \sum_{i=1}^{n} [x_i - w_{ij}(p)]^2 \right\}^{1/2}, j = 1, 2, ..., m
• Thus, neuron j = 3 is the winner, and its weight vector W3 is to be updated according to the competitive learning rule described in Eq. (6.36). Assuming that the learning rate parameter α is equal to 0.1, we obtain

Δw13 = α (x1 - w13) = 0.1 (0.52 - 0.43) = 0.01
Δw23 = α (x2 - w23) = 0.1 (0.12 - 0.21) = -0.01

// \Delta w_{ij} = \begin{cases} \alpha (x_i - w_{ij}), & \text{if neuron } j \text{ wins the competition} \\ 0, & \text{if neuron } j \text{ loses the competition} \end{cases}    (6.36)
• The updated weight vector W3 at iteration (p + 1) is determined as:

W_3(p + 1) = W_3(p) + \Delta W_3(p) = \begin{pmatrix} 0.43 \\ 0.21 \end{pmatrix} + \begin{pmatrix} 0.01 \\ -0.01 \end{pmatrix} = \begin{pmatrix} 0.44 \\ 0.20 \end{pmatrix}

// in general, Wj(p + 1) = Wj(p) + ΔWj(p), applied componentwise: wij(p + 1) = wij(p) + Δwij(p) for i = 1, 2, ..., n
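The same update in code, continuing the snippet above (rounding to two decimals mirrors the figures in the text):

```python
j = 2                                  # zero-based index of the winner (neuron 3)
dW = alpha * (X - W[:, j])             # (0.009, -0.009), i.e. ~(0.01, -0.01)
W[:, j] = np.round(W[:, j] + dW, 2)    # W3(1) = (0.44, 0.20)
```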
• The weight vector Wj = W3 of the winning neuron j = 3 becomes closer to the input vector X with each iteration.
• Continue for the next iterations p = 2, 3, ... until the minimum Euclidean distance dj is small enough (or unchanged), or until no noticeable changes occur in the feature map.
0.45
W3(2) = 
 , W3(3) =
0.19
0.46
0.18 , W3(4) =


0.47
0.17 , W3(5) =


0.48
0.17 , W3(6) =


0.48
0.17 → Stop.


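A sketch reproducing this iteration sequence, assuming the text rounds the weights to two decimals after each update:

```python
X3 = np.array([0.52, 0.12])
W3 = np.array([0.43, 0.21])
for p in range(1, 7):
    W3 = np.round(W3 + 0.1 * (X3 - W3), 2)   # update the winner, round to 2 d.p.
    print(p, W3)
# prints 1 [0.44 0.2 ], 2 [0.45 0.19], 3 [0.46 0.18], 4 [0.47 0.17], ...
# the text reports (0.48, 0.17) from p = 5 on; binary rounding of the
# half-case 0.475 may print 0.47 instead
```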