
Artificial Neural Network: Basic Model
Chaoyang University of Technology
Department of Information Management
Prof. 李麗華
Revised by 陳榮昌 (2013/02/27)
The Basic Model of ANN
1. Input layer
2. Hidden layer
3. Weights
4. Output layer
5. Processing Element (PE)
6. Learning
7. Recalling
8. Energy function
ANN Components (1/4)
1. Input layer:
   X = [X1, X2, …, Xn]^t, where t denotes vector transpose.
2. Hidden layer (i.e., PE):
   I_j => net_j => f(net_j)
[Figure: a fully connected network with inputs X1…Xn, hidden nodes H1…Hh, outputs Y1…Yj, and connection weights Wij.]
ANN Components (1/4)
3. Weights: Wij, the connection value between layers.
4. Output layer: Yj.
   Three common ways of generating output:
   - normalized output
   - competitive output
   - competitive learning
[Figure: the same network with weight labels W11, W21, …, Wih, …, Wnh on the connections between layers.]
※ Note the indexing convention for the weight subscripts.
ANN Components (2/4)
5. Processing Element (PE)
(A) Summation Function:
    I_j = Σ_i W_ij · X_i          (supervised)
    or
    I_j = Σ_i (X_i − W_ij)^2      (unsupervised)
(B) Activity Function:
    net_j = I_j
    or  net_j^(n) = I_j^(n) + C · I_j^(n−1)
    (*) Usually this is merged with the summation function above, i.e.:
    net_j = Σ_i W_ij · X_i
(C) Transfer Function:
    (a) Discrete type (hard limiter)
    (b) Linear type
    (c) Non-linear type
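The summation and transfer steps above can be sketched in code. This is a minimal illustration of a single PE, not part of the lecture; the function names are chosen for clarity.

```python
# Minimal sketch of a single processing element (PE).
# Names and example values are illustrative assumptions.

def summation_supervised(x, w):
    """Summation function, supervised form: I_j = sum_i W_ij * X_i."""
    return sum(wi * xi for wi, xi in zip(w, x))

def summation_unsupervised(x, w):
    """Summation function, unsupervised form: I_j = sum_i (X_i - W_ij)^2."""
    return sum((xi - wi) ** 2 for wi, xi in zip(w, x))

def hard_limiter(net):
    """Discrete (hard limiter) transfer: 1 if net > 0, else 0."""
    return 1 if net > 0 else 0

x = [1.0, 0.0, 1.0]
w = [0.5, -0.3, 0.2]
net = summation_supervised(x, w)  # weighted sum I_j, used here as net_j
y = hard_limiter(net)             # y == 1 since net > 0
```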
Transfer Functions (1/4)
(a) Discrete type (Hard Limiter) transfer function:

Step function (or perceptron fc.):
    Y_j = 1   if net_j > 0
    Y_j = 0   if net_j <= 0

Signum fc.:
    Y_j = 1   if net_j > 0
    Y_j = −1  if net_j <= 0

Hopfield-Tank fc.:
    Y_j^(n) = 1            if net_j > 0
    Y_j^(n) = Y_j^(n−1)    if net_j = 0
    Y_j^(n) = 0            if net_j < 0
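The discrete transfer functions above can be written directly. A minimal sketch, assuming `y_prev` holds the PE's previous output for the Hopfield-Tank case:

```python
# Discrete (hard limiter) transfer functions; illustrative sketch.

def step(net):
    """Step / perceptron fc.: 1 if net > 0, else 0."""
    return 1 if net > 0 else 0

def signum(net):
    """Signum fc.: 1 if net > 0, else -1."""
    return 1 if net > 0 else -1

def hopfield_tank(net, y_prev):
    """Hopfield-Tank fc.: keep the previous output when net == 0."""
    if net > 0:
        return 1
    if net == 0:
        return y_prev
    return 0
```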
Transfer Functions (2/4)
(a) Discrete type transfer function (cont.):

Signum0 fc.:
    Y_j = 1   if net_j > 0
    Y_j = 0   if net_j = 0
    Y_j = −1  if net_j < 0

BAM fc.:
    Y_j^(n) = 1            if net_j > 0
    Y_j^(n) = Y_j^(n−1)    if net_j = 0
    Y_j^(n) = −1           if net_j < 0
Transfer Functions (3/4)
(b) Linear type:

Draw:
    Y_j = net_j

Draw:
    Y_j = net_j   if net_j > 0
    Y_j = 0       if net_j <= 0
Transfer Functions (4/4)
(c) Nonlinear type transfer function:

Sigmoid function (Draw:):
    Y_j = 1 / (1 + e^(−net_j))

Hyperbolic Tangent function (Draw:):
    Y_j = tanh(net_j) = (e^(net_j) − e^(−net_j)) / (e^(net_j) + e^(−net_j))

Bell function (Draw:):
    Y_j = exp(−net_j^2)
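The three nonlinear transfer functions can be sketched with the standard library; this is an illustrative implementation, not code from the lecture:

```python
import math

def sigmoid(net):
    """Y = 1 / (1 + e^(-net)); output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

def hyperbolic_tangent(net):
    """Y = tanh(net) = (e^net - e^-net) / (e^net + e^-net); output in (-1, 1)."""
    return math.tanh(net)

def bell(net):
    """Y = exp(-net^2); peaks at 1 when net = 0."""
    return math.exp(-net ** 2)
```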
Learning & Recalling
6. Learning:
Based on the ANN model used, learning adjusts the weights so that the network accommodates a set of training patterns.
Notes:
7. Recalling:
Based on the ANN model used, recalling applies real data patterns to the trained network so that its outputs are generated and examined.
Notes:
Energy Function (1/2)
8. Energy function:
The energy function is a verification function that determines whether the network energy has converged to its minimum. As the energy function approaches zero, the network approaches its optimum solution.
Notes:
Energy Function (2/2)
(a) The energy function for supervised network learning:

    E = (1/2) Σ_j (T_j − Y_j)^2        where E is the energy value

    ΔW = −η · ∂E/∂W_ij                 This is the general form of the weight
                                        update for weight W_ij

(b) The energy function for unsupervised network learning:

    E = (1/2) Σ_i (X_i − W_ij)^2

    ΔW = −η · ∂E/∂W_ij                 This is the value for adjusting weight W_ij
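The supervised energy function and its weight update can be sketched as follows. This assumes a linear output Y_j = Σ_i W_ij X_i so that the gradient has the closed form ∂E/∂W_ij = −(T_j − Y_j) X_i (the delta rule); the learning rate `eta` is an assumed hyperparameter, not given on the slide.

```python
# Sketch of the supervised energy function and the resulting weight
# update, under the assumption of a linear output unit.

def energy(targets, outputs):
    """E = 1/2 * sum_j (T_j - Y_j)^2."""
    return 0.5 * sum((t - y) ** 2 for t, y in zip(targets, outputs))

def delta_w(eta, t_j, y_j, x_i):
    """dW_ij = -eta * dE/dW_ij = eta * (T_j - Y_j) * X_i for a linear output."""
    return eta * (t_j - y_j) * x_i
```

As the outputs approach the targets, E approaches zero and the updates ΔW vanish, which is exactly the convergence criterion described on the previous slide.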
A Simple ANN Formula
• We can also summarize the ANN computation into a single function: the output Y is obtained by applying the transfer function to the summation of the weighted inputs, as shown below.

    Y_j = f( Σ_i W_ij X_i − θ_j )

    Y    = the output of the ANN
    f    = the transfer function of the ANN
    W_ij = the weights, representing the connection strength between neurons
    X_i  = the input of the ANN
    θ_j  = the bias (threshold) of the ANN
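The summary formula can be sketched as a single forward pass. The weights, threshold, and the choice of the step function as f are illustrative assumptions; with these particular values the unit computes logical AND on binary inputs.

```python
# Sketch of Y_j = f(sum_i W_ij * X_i - theta_j) with a step transfer
# function. Weight and threshold values are illustrative.

def forward(x, w, theta, f):
    """One PE: subtract the bias theta from the weighted sum, then transfer."""
    net = sum(wi * xi for wi, xi in zip(w, x)) - theta
    return f(net)

step = lambda net: 1 if net > 0 else 0

# 0.6 + 0.6 - 1.0 > 0 only when both inputs are 1: logical AND.
y = forward([1.0, 1.0], [0.6, 0.6], theta=1.0, f=step)  # y == 1
```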
Basic Model Q&A
What you should learn in this lecture:
(1) The basic model of ANN
(2) The ANN terminology for network structure
(3) The various types of transfer functions
(4) The mathematical concepts of the supervised and unsupervised models
(5) The concept of the energy function