Perceptron Learning Rule
Learning Rules
• Supervised Learning
  The network is provided with a set of examples of proper network behavior (input/target pairs):
  $\{p_1, t_1\}, \{p_2, t_2\}, \dots, \{p_Q, t_Q\}$
• Reinforcement Learning
  The network is only provided with a grade, or score, which indicates network performance.
• Unsupervised Learning
  Only the network inputs are available to the learning algorithm. The network learns to categorize (cluster) the inputs.
Perceptron Architecture
w 1 1 w 1 2 w 1 R
w
w
w
W = 2 1 2 2 2 R
w S 1 w S 2 w S R
T
1w
w i 1
iw
=
w i 2
T
2w
W =
w i R
T
Sw
T
ai = har dlim n i = hardlim iw p + b i
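To make the notation concrete, here is a minimal NumPy sketch of this forward pass; the specific W, b, and p values are made up for illustration, and hardlim follows the usual convention that hardlim(0) = 1:

```python
import numpy as np

def hardlim(n):
    """Hard-limit transfer function: 1 where n >= 0, else 0."""
    return (n >= 0).astype(int)

def perceptron_output(W, b, p):
    """a = hardlim(Wp + b) for an S-neuron, R-input perceptron."""
    return hardlim(W @ p + b)

# Illustrative values only (S = 2 neurons, R = 3 inputs).
W = np.array([[1.0, -1.0, 0.5],
              [0.0,  2.0, -1.0]])   # row i holds iw^T
b = np.array([0.5, -1.0])
p = np.array([1.0, 1.0, 1.0])
print(perceptron_output(W, b, p))   # -> [1 1]
```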
Single-Neuron Perceptron

$$w_{1,1} = 1, \qquad w_{1,2} = 1, \qquad b = -1$$

$$a = \mathrm{hardlim}({}_{1}w^{T} p + b) = \mathrm{hardlim}(w_{1,1} p_1 + w_{1,2} p_2 + b)$$
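Evaluating this particular neuron directly (the two test inputs are my own choice, one on each side of the boundary):

```python
import numpy as np

def hardlim(n):
    return 1 if n >= 0 else 0

w = np.array([1.0, 1.0])   # 1w = [w_{1,1}; w_{1,2}]
b = -1.0

for p in (np.array([0.0, 0.0]), np.array([2.0, 2.0])):
    print(p, "->", hardlim(w @ p + b))   # 0, then 1
```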
Decision Boundary

$${}_{1}w^{T} p + b = 0 \qquad\Longleftrightarrow\qquad {}_{1}w^{T} p = -b$$

• All points on the decision boundary have the same inner product with the weight vector.
• Therefore they have the same projection onto the weight vector, and they must lie on a line orthogonal to the weight vector.
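A quick numeric check of this property, reusing the single-neuron weights from the previous slide (the two boundary points are my own choice):

```python
import numpy as np

w = np.array([1.0, 1.0])    # 1w from the single-neuron example above
b = -1.0

# Two distinct points satisfying 1w^T p + b = 0.
p_a = np.array([1.0, 0.0])
p_b = np.array([0.0, 1.0])

# Both inner products equal -b, i.e. the same projection onto 1w.
print(w @ p_a, w @ p_b)     # -> 1.0 1.0
```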
Example - OR

$$\left\{ p_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ t_1 = 0 \right\} \quad \left\{ p_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix},\ t_2 = 1 \right\} \quad \left\{ p_3 = \begin{bmatrix} 1 \\ 0 \end{bmatrix},\ t_3 = 1 \right\} \quad \left\{ p_4 = \begin{bmatrix} 1 \\ 1 \end{bmatrix},\ t_4 = 1 \right\}$$
OR Solution

The weight vector should be orthogonal to the decision boundary:

$${}_{1}w = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}$$

Pick a point on the decision boundary to find the bias:

$${}_{1}w^{T} p + b = \begin{bmatrix} 0.5 & 0.5 \end{bmatrix} \begin{bmatrix} 0 \\ 0.5 \end{bmatrix} + b = 0.25 + b = 0 \quad\Rightarrow\quad b = -0.25$$
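A short sketch confirming that this weight/bias pair classifies all four OR patterns correctly:

```python
import numpy as np

def hardlim(n):
    return 1 if n >= 0 else 0

w = np.array([0.5, 0.5])
b = -0.25

patterns = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
for p, t in patterns:
    a = hardlim(w @ np.asarray(p, dtype=float) + b)
    print(p, "target:", t, "output:", a)   # output matches target in all cases
```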
Multiple-Neuron Perceptron

Each neuron will have its own decision boundary:

$${}_{i}w^{T} p + b_i = 0$$

A single neuron can classify input vectors into two categories. A multi-neuron perceptron with $S$ neurons can classify input vectors into $2^S$ categories.
Learning Rule Test Problem
{p1, t1} { p2, t 2} {pQ, tQ }
1t =
p1 =
1
2
1
–1 t = 0
p
=
2
2
2
0 t =
p3 =
3
–1
0
Starting Point

Random initial weight:

$${}_{1}w = \begin{bmatrix} 1.0 \\ -0.8 \end{bmatrix}$$

Present $p_1$ to the network:

$$a = \mathrm{hardlim}({}_{1}w^{T} p_1) = \mathrm{hardlim}\left( \begin{bmatrix} 1.0 & -0.8 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right) = \mathrm{hardlim}(-0.6) = 0$$

Incorrect classification.
Tentative Learning Rule

• Set ${}_{1}w$ to $p_1$
  – Not stable
• Add $p_1$ to ${}_{1}w$

Tentative Rule: If $t = 1$ and $a = 0$, then ${}_{1}w^{new} = {}_{1}w^{old} + p$.

$${}_{1}w^{new} = {}_{1}w^{old} + p_1 = \begin{bmatrix} 1.0 \\ -0.8 \end{bmatrix} + \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 2.0 \\ 1.2 \end{bmatrix}$$
Second Input Vector

$$a = \mathrm{hardlim}({}_{1}w^{T} p_2) = \mathrm{hardlim}\left( \begin{bmatrix} 2.0 & 1.2 \end{bmatrix} \begin{bmatrix} -1 \\ 2 \end{bmatrix} \right) = \mathrm{hardlim}(0.4) = 1 \quad \text{(Incorrect Classification)}$$

Modification to Rule: If $t = 0$ and $a = 1$, then ${}_{1}w^{new} = {}_{1}w^{old} - p$.

$${}_{1}w^{new} = {}_{1}w^{old} - p_2 = \begin{bmatrix} 2.0 \\ 1.2 \end{bmatrix} - \begin{bmatrix} -1 \\ 2 \end{bmatrix} = \begin{bmatrix} 3.0 \\ -0.8 \end{bmatrix}$$
Third Input Vector

$$a = \mathrm{hardlim}({}_{1}w^{T} p_3) = \mathrm{hardlim}\left( \begin{bmatrix} 3.0 & -0.8 \end{bmatrix} \begin{bmatrix} 0 \\ -1 \end{bmatrix} \right) = \mathrm{hardlim}(0.8) = 1 \quad \text{(Incorrect Classification)}$$

$${}_{1}w^{new} = {}_{1}w^{old} - p_3 = \begin{bmatrix} 3.0 \\ -0.8 \end{bmatrix} - \begin{bmatrix} 0 \\ -1 \end{bmatrix} = \begin{bmatrix} 3.0 \\ 0.2 \end{bmatrix}$$

Patterns are now correctly classified.

If $t = a$, then ${}_{1}w^{new} = {}_{1}w^{old}$.
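The three hand-computed updates above can be reproduced with a short sketch; each pattern is presented once, in the order shown:

```python
import numpy as np

def hardlim(n):
    return 1 if n >= 0 else 0

# Test problem patterns and targets; no bias is used in this example.
P = [np.array([1.0, 2.0]), np.array([-1.0, 2.0]), np.array([0.0, -1.0])]
T = [1, 0, 0]

w = np.array([1.0, -0.8])      # the random initial weight from above
for p, t in zip(P, T):
    a = hardlim(w @ p)
    if t == 1 and a == 0:      # tentative rule
        w = w + p
    elif t == 0 and a == 1:    # modified rule
        w = w - p
    print(w)
# -> [2.0 1.2], [3.0 -0.8], [3.0 0.2], matching the hand calculation
```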
Unified Learning Rule

If $t = 1$ and $a = 0$, then ${}_{1}w^{new} = {}_{1}w^{old} + p$.
If $t = 0$ and $a = 1$, then ${}_{1}w^{new} = {}_{1}w^{old} - p$.
If $t = a$, then ${}_{1}w^{new} = {}_{1}w^{old}$.

Define the error $e = t - a$. Then:

If $e = 1$, then ${}_{1}w^{new} = {}_{1}w^{old} + p$.
If $e = -1$, then ${}_{1}w^{new} = {}_{1}w^{old} - p$.
If $e = 0$, then ${}_{1}w^{new} = {}_{1}w^{old}$.

All three cases collapse into one update:

$${}_{1}w^{new} = {}_{1}w^{old} + e\,p = {}_{1}w^{old} + (t - a)\,p$$

$$b^{new} = b^{old} + e$$

(A bias is a weight with an input of 1.)
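A minimal single-neuron implementation of the unified rule (the epoch loop, initial values, and stopping condition are my additions; the update itself is exactly the rule above):

```python
import numpy as np

def hardlim(n):
    return 1 if n >= 0 else 0

def train(P, T, w, b, max_epochs=100):
    """Unified perceptron rule: e = t - a, w <- w + e*p, b <- b + e.
    Loops over the training set until every pattern is classified
    correctly (or max_epochs is reached)."""
    for _ in range(max_epochs):
        total_error = 0
        for p, t in zip(P, T):
            e = t - hardlim(w @ p + b)
            w, b = w + e * p, b + e
            total_error += abs(e)
        if total_error == 0:
            break
    return w, b

# Example: the OR problem from earlier in the chapter.
P = [np.array(v, dtype=float) for v in ([0, 0], [0, 1], [1, 0], [1, 1])]
T = [0, 1, 1, 1]
print(train(P, T, np.zeros(2), 0.0))   # converges to a separating w, b
```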
Multiple-Neuron Perceptrons

To update the $i$th row of the weight matrix:

$${}_{i}w^{new} = {}_{i}w^{old} + e_i\,p \qquad b_i^{new} = b_i^{old} + e_i$$

Matrix form:

$$W^{new} = W^{old} + e\,p^{T} \qquad b^{new} = b^{old} + e$$
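In matrix form the whole update is a single outer product per presented pattern; a minimal sketch (np.outer(e, p) computes $e\,p^T$; the example values are made up):

```python
import numpy as np

def hardlim(n):
    return (n >= 0).astype(int)

def update(W, b, p, t):
    """One matrix-form step: e = t - a, W <- W + e p^T, b <- b + e.
    W is S x R; b, t, and e are length-S vectors."""
    e = t - hardlim(W @ p + b)
    return W + np.outer(e, p), b + e

# Illustrative call with S = 2 neurons, R = 3 inputs.
W, b = np.zeros((2, 3)), np.zeros(2)
W, b = update(W, b, np.array([1.0, 0.0, -1.0]), np.array([1, 0]))
print(W, b)
```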
Apple/Banana Example

Training Set:

$$\left\{ p_1 = \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix},\ t_1 = 1 \right\} \quad \left\{ p_2 = \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix},\ t_2 = 0 \right\}$$

Initial Weights:

$$W = \begin{bmatrix} 0.5 & -1 & -0.5 \end{bmatrix}, \qquad b = 0.5$$

First Iteration:

$$a = \mathrm{hardlim}(W p_1 + b) = \mathrm{hardlim}\left( \begin{bmatrix} 0.5 & -1 & -0.5 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix} + 0.5 \right) = \mathrm{hardlim}(-0.5) = 0$$

$$e = t_1 - a = 1 - 0 = 1$$

$$W^{new} = W^{old} + e\,p^{T} = \begin{bmatrix} 0.5 & -1 & -0.5 \end{bmatrix} + (1)\begin{bmatrix} -1 & 1 & -1 \end{bmatrix} = \begin{bmatrix} -0.5 & 0 & -1.5 \end{bmatrix}$$

$$b^{new} = b^{old} + e = 0.5 + 1 = 1.5$$
Second Iteration

$$a = \mathrm{hardlim}(W p_2 + b) = \mathrm{hardlim}\left( \begin{bmatrix} -0.5 & 0 & -1.5 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} + 1.5 \right) = \mathrm{hardlim}(2.5) = 1$$

$$e = t_2 - a = 0 - 1 = -1$$

$$W^{new} = W^{old} + e\,p^{T} = \begin{bmatrix} -0.5 & 0 & -1.5 \end{bmatrix} + (-1)\begin{bmatrix} 1 & 1 & -1 \end{bmatrix} = \begin{bmatrix} -1.5 & -1 & -0.5 \end{bmatrix}$$

$$b^{new} = b^{old} + e = 1.5 + (-1) = 0.5$$
Check

$$a = \mathrm{hardlim}(W p_1 + b) = \mathrm{hardlim}\left( \begin{bmatrix} -1.5 & -1 & -0.5 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix} + 0.5 \right) = \mathrm{hardlim}(1.5) = 1 = t_1$$

$$a = \mathrm{hardlim}(W p_2 + b) = \mathrm{hardlim}\left( \begin{bmatrix} -1.5 & -1 & -0.5 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} + 0.5 \right) = \mathrm{hardlim}(-1.5) = 0 = t_2$$
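Both iterations and the final check can be reproduced in a few lines (this sketch hard-codes the two training steps in the order shown on the slides):

```python
import numpy as np

def hardlim(n):
    return 1 if n >= 0 else 0

P = [np.array([-1.0, 1.0, -1.0]), np.array([1.0, 1.0, -1.0])]
T = [1, 0]
W = np.array([0.5, -1.0, -0.5])
b = 0.5

for p, t in zip(P, T):            # the two iterations shown above
    e = t - hardlim(W @ p + b)
    W, b = W + e * p, b + e       # e p^T reduces to e*p for one neuron
    print(W, b)                   # [-0.5 0 -1.5] 1.5, then [-1.5 -1 -0.5] 0.5

for p, t in zip(P, T):            # the final check
    assert hardlim(W @ p + b) == t
```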
Perceptron Rule Capability

The perceptron rule will always converge to weights that accomplish the desired classification, assuming that such weights exist.
Perceptron Limitations

Linear Decision Boundary:

$${}_{1}w^{T} p + b = 0$$

Linearly Inseparable Problems
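As an illustration (XOR is the classic linearly inseparable example; this sketch and its arbitrary epoch cap are my additions), the unified rule never reaches zero errors on such a problem:

```python
import numpy as np

def hardlim(n):
    return 1 if n >= 0 else 0

# XOR targets: no single line separates the 1s from the 0s.
P = [np.array(v, dtype=float) for v in ([0, 0], [0, 1], [1, 0], [1, 1])]
T = [0, 1, 1, 0]

w, b = np.zeros(2), 0.0
for epoch in range(1000):          # arbitrary cap; the rule never settles
    errors = 0
    for p, t in zip(P, T):
        e = t - hardlim(w @ p + b)
        w, b = w + e * p, b + e
        errors += abs(e)
    if errors == 0:
        break
print("errors in final epoch:", errors)   # stays > 0: XOR is inseparable
```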