
Course  : H0434/Jaringan Syaraf Tiruan (Artificial Neural Networks)
Year    : 2005
Version : 1
Session 9
LEARNING VECTOR QUANTIZATION NETWORKS
Learning Outcomes
At the end of this session, students are expected to be able to:
• Demonstrate the Learning Vector Quantization network
Outline
• Network Architecture
• Learning Rule
Learning Vector Quantization
The net input is not computed by taking an inner product of the prototype vectors with the input. Instead, the net input is the negative of the distance between each prototype vector and the input:

$$n^1_i = -\|{}_i w^1 - p\|, \qquad a^1 = \operatorname{compet}(n^1)$$
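As a minimal sketch of this first layer (not from the original slides; the names `W1`, `p`, and `compet` are illustrative), the response can be computed with NumPy:

```python
import numpy as np

def compet(n):
    """Competitive transfer function: 1 for the largest net input, 0 elsewhere."""
    a = np.zeros_like(n)
    a[np.argmax(n)] = 1.0
    return a

def lvq_layer1(W1, p):
    """Net input is the NEGATIVE Euclidean distance from each prototype
    (row of W1) to the input p, not an inner product."""
    n1 = -np.linalg.norm(W1 - p, axis=1)
    return compet(n1)
```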
Subclass
For the LVQ network, the winning neuron in the first layer indicates the subclass to which the input vector belongs. There may be several different neurons (subclasses) that make up each class.

The second layer of the LVQ network combines subclasses into a single class. The columns of $W^2$ represent subclasses, and the rows represent classes. $W^2$ has a single 1 in each column, with the other elements set to zero. The row in which the 1 occurs indicates which class the corresponding subclass belongs to.
$$w^2_{ki} = 1 \quad\Rightarrow\quad \text{subclass } i \text{ is a part of class } k$$
Example
$$W^2 = \begin{bmatrix} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{bmatrix}$$
• Subclasses 1, 3 and 4 belong to class 1.
• Subclass 2 belongs to class 2.
• Subclasses 5 and 6 belong to class 3.
A single-layer competitive network can create convex classification
regions. The second layer of the LVQ network can combine the
convex regions to create more complex categories.
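To make the subclass-to-class combination concrete, here is a small sketch (reusing the example $W^2$ above; the winning subclass is an assumed input):

```python
import numpy as np

# W2 from the example: 6 subclasses combined into 3 classes.
W2 = np.array([[1, 0, 1, 1, 0, 0],
               [0, 1, 0, 0, 0, 0],
               [0, 0, 0, 0, 1, 1]], dtype=float)

# Suppose subclass 4 (index 3) wins in the first layer.
a1 = np.zeros(6)
a1[3] = 1.0

a2 = W2 @ a1   # picks out column 4 of W2
print(a2)      # [1. 0. 0.] -> class 1, as the bullet list states
```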
LVQ Learning
LVQ learning combines competitive learning with supervision. It requires a training set of examples of proper network behavior:

$$\{p_1, t_1\},\ \{p_2, t_2\},\ \ldots,\ \{p_Q, t_Q\}$$
If the input pattern is classified correctly ($a^2_{k^*} = t_{k^*} = 1$), then move the winning weight toward the input vector according to the Kohonen rule:

$${}_{i^*}w^1(q) = {}_{i^*}w^1(q-1) + \alpha\,\bigl(p(q) - {}_{i^*}w^1(q-1)\bigr)$$
If the input pattern is classified incorrectly ($a^2_{k^*} = 1 \ne t_{k^*} = 0$), then move the winning weight away from the input vector:

$${}_{i^*}w^1(q) = {}_{i^*}w^1(q-1) - \alpha\,\bigl(p(q) - {}_{i^*}w^1(q-1)\bigr)$$
Example
$$\left\{ p_1 = \begin{bmatrix} 0 \\ 1 \end{bmatrix},\ t_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\}\quad
\left\{ p_2 = \begin{bmatrix} 1 \\ 0 \end{bmatrix},\ t_2 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\}\quad
\left\{ p_3 = \begin{bmatrix} 1 \\ 1 \end{bmatrix},\ t_3 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\}\quad
\left\{ p_4 = \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ t_4 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\}$$

$$W^1(0) = \begin{bmatrix} {}_1w^{1T} \\ {}_2w^{1T} \\ {}_3w^{1T} \\ {}_4w^{1T} \end{bmatrix}
= \begin{bmatrix} 0.25 & 0.75 \\ 0.75 & 0.75 \\ 1 & 0.25 \\ 0.5 & 0.25 \end{bmatrix}, \qquad
W^2 = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix}$$
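Written out in NumPy (a sketch; the array names are mine), the same setup is:

```python
import numpy as np

# Training set: inputs with one-hot class targets.
P = np.array([[0, 1], [1, 0], [1, 1], [0, 0]], dtype=float)
T = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)

# Initial prototypes (one row per subclass) and the fixed class map.
W1 = np.array([[0.25, 0.75],
               [0.75, 0.75],
               [1.00, 0.25],
               [0.50, 0.25]])
W2 = np.array([[1, 1, 0, 0],
               [0, 0, 1, 1]], dtype=float)
```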
First Iteration
$$a^1 = \operatorname{compet}(n^1) = \operatorname{compet}\left( \begin{bmatrix}
-\|{}_1w^1 - p_1\| \\ -\|{}_2w^1 - p_1\| \\ -\|{}_3w^1 - p_1\| \\ -\|{}_4w^1 - p_1\|
\end{bmatrix} \right)$$

$$a^1 = \operatorname{compet}\left( \begin{bmatrix}
-\|[0.25\ \ 0.75]^T - [0\ \ 1]^T\| \\
-\|[0.75\ \ 0.75]^T - [0\ \ 1]^T\| \\
-\|[1.00\ \ 0.25]^T - [0\ \ 1]^T\| \\
-\|[0.50\ \ 0.25]^T - [0\ \ 1]^T\|
\end{bmatrix} \right)
= \operatorname{compet}\left( \begin{bmatrix} -0.354 \\ -0.791 \\ -1.25 \\ -0.901 \end{bmatrix} \right)
= \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}$$
Second Layer
$$a^2 = W^2 a^1 = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix}
\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$
This is the correct class, therefore the weight vector is moved
toward the input vector.
$${}_1w^1(1) = {}_1w^1(0) + \alpha\,\bigl(p_1 - {}_1w^1(0)\bigr)$$

$${}_1w^1(1) = \begin{bmatrix} 0.25 \\ 0.75 \end{bmatrix} + 0.5\left( \begin{bmatrix} 0 \\ 1 \end{bmatrix} - \begin{bmatrix} 0.25 \\ 0.75 \end{bmatrix} \right) = \begin{bmatrix} 0.125 \\ 0.875 \end{bmatrix}$$
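For readers who want to check the arithmetic, here is a self-contained NumPy sketch (variable names assumed) that reproduces this first iteration:

```python
import numpy as np

W1 = np.array([[0.25, 0.75],
               [0.75, 0.75],
               [1.00, 0.25],
               [0.50, 0.25]])
W2 = np.array([[1, 1, 0, 0],
               [0, 0, 1, 1]], dtype=float)
p1 = np.array([0.0, 1.0])
t1 = np.array([1.0, 0.0])
alpha = 0.5

# First layer: negative distances, then competition.
n1 = -np.linalg.norm(W1 - p1, axis=1)
print(np.round(n1, 3))    # [-0.354 -0.791 -1.25  -0.901]
i_star = np.argmax(n1)    # neuron 1 wins (index 0)

# Second layer: map the winning subclass to its class.
a1 = np.zeros(4); a1[i_star] = 1.0
a2 = W2 @ a1
print(a2)                 # [1. 0.] -> matches t1, classification is correct

# Correct class, so move the winning prototype toward p1.
W1[i_star] += alpha * (p1 - W1[i_star])
print(W1[i_star])         # [0.125 0.875]
```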
Final Decision Regions

[Figure: final decision regions of the trained LVQ network; not reproduced in this extraction.]