
Chapter 8
Fuzzy Associative Memories
Li Lin
2004-11-24
CONTENTS

Review
Fuzzy systems as between-cube mappings
Fuzzy and neural function estimators
Fuzzy Hebb FAMs
Adaptive FAMs
Review

In Chapter 2 we mentioned the BAM theorem.
Chapter 7 discussed fuzzy sets as points in the unit hypercube.
What are associative memories?
Fuzzy systems as between-cube mappings

Kosko: fuzzy systems are between-cube mappings $S : I^n \to I^p$, from the input universe of discourse to the output universe of discourse.

[Fig.1 A fuzzy system: the input universe of discourse $I^n$ is mapped to the output universe of discourse $I^p$]

The continuous fuzzy system behaves as an associative memory, or fuzzy associative memory (FAM).
Fuzzy and neural function estimators

Fuzzy and neural systems estimate sampled functions and behave as associative memories.

Similarities:
1. They are model-free estimators.
2. They learn from samples.
3. They are numerical, unlike symbolic AI.

Differences: they differ in how they estimate the sampled function, namely in
1. what happens during system construction;
2. the kind of samples used;
3. application;
4. how they represent and store those samples;
5. how they perform associative inference.

[Fig.2 Function f maps domain X to range Y]
Neural vs. fuzzy representation of structured knowledge

Neural-network problems:
1. the computational burden of training;
2. system inscrutability: there is no natural inferential audit trail; the trained network behaves as a computational black box;
3. sample generation.
Neural vs. fuzzy representation of structured knowledge

Fuzzy systems:
1. directly encode the linguistic sample (HEAVY, LONGER) in a matrix;
2. combine the numerical approach with the symbolic one.

The fuzzy approach does not abandon neural networks; it limits them to unstructured parameter and state estimation, pattern recognition, and cluster formation.
FAMs as mappings

Fuzzy associative memories are transformations: a FAM maps fuzzy sets to fuzzy sets, unit cube to unit cube.

The system accesses the associative matrices in parallel and stores them separately.

Numerical point inputs permit a simplification: binary input-output FAMs, or BIOFAMs.
FAMs as mappings

[Fig.3 Three possible fuzzy subsets of traffic-density space X (Light, Medium, Heavy on 0–200) and green-light-duration space Y (Short, Medium, Long on 0–40)]
Fuzzy vector-matrix multiplication: max-min composition

Max-min composition "$\circ$": $A \circ M = B$, where $A = (a_1, \ldots, a_n)$, $B = (b_1, \ldots, b_p)$, and $M$ is a fuzzy $n$-by-$p$ matrix (a point in $I^{np}$):

$$b_j = \max_{1 \le i \le n} \min(a_i, m_{ij})$$
Fuzzy vector-matrix multiplication: max-min composition

Example: suppose $A = (.3\;.4\;.8\;1)$ and

$$M = \begin{pmatrix} .2 & .8 & .7 \\ .7 & .6 & .6 \\ .8 & .1 & .5 \\ 0 & .2 & .3 \end{pmatrix}$$

Then $B = A \circ M = (.8\;.4\;.5)$.

Max-product composition replaces min with multiplication:

$$b_j = \max_{1 \le i \le n} a_i\, m_{ij}$$
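Both compositions are easy to check in code. Here is a minimal sketch in plain Python (the helper names `max_min` and `max_product` are illustrative) that reproduces the example above:

```python
def max_min(A, M):
    """Max-min composition: b_j = max_i min(a_i, m_ij)."""
    n, p = len(M), len(M[0])
    return [max(min(A[i], M[i][j]) for i in range(n)) for j in range(p)]

def max_product(A, M):
    """Max-product composition: b_j = max_i a_i * m_ij."""
    n, p = len(M), len(M[0])
    return [max(A[i] * M[i][j] for i in range(n)) for j in range(p)]

A = [.3, .4, .8, 1]
M = [[.2, .8, .7],
     [.7, .6, .6],
     [.8, .1, .5],
     [0,  .2, .3]]

print(max_min(A, M))      # [0.8, 0.4, 0.5] = B, as in the slide
print(max_product(A, M))  # [0.64, 0.24, 0.4]
```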
Fuzzy Hebb FAMs

Classical Hebbian learning law:

$$\dot{m}_{ij} = -m_{ij} + S_i(x_i)\, S_j(y_j)$$

Correlation-minimum encoding:

$$m_{ij} = \min(a_i, b_j), \qquad M = A^T \circ B = \begin{pmatrix} a_1 \wedge B \\ \vdots \\ a_n \wedge B \end{pmatrix}$$

Example: with $A = (.3\;.4\;.8\;1)$ and $B = (.8\;.4\;.5)$,

$$M = A^T \circ B = \begin{pmatrix} .3 \\ .4 \\ .8 \\ 1 \end{pmatrix} \circ (.8\;.4\;.5) = \begin{pmatrix} .3 & .3 & .3 \\ .4 & .4 & .4 \\ .8 & .4 & .5 \\ .8 & .4 & .5 \end{pmatrix}$$

The $i$th row of $M$ equals $a_i \wedge B$ and the $j$th column equals $b_j \wedge A^T$.
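A minimal sketch of correlation-minimum encoding in plain Python (the helper name `correlation_minimum` is illustrative; `max_min` is the composition defined earlier):

```python
def max_min(A, M):
    """Max-min composition, as defined above."""
    n, p = len(M), len(M[0])
    return [max(min(A[i], M[i][j]) for i in range(n)) for j in range(p)]

def correlation_minimum(A, B):
    """Fuzzy Hebb matrix with m_ij = min(a_i, b_j)."""
    return [[min(a, b) for b in B] for a in A]

A = [.3, .4, .8, 1]
B = [.8, .4, .5]
M = correlation_minimum(A, B)
# M == [[.3, .3, .3], [.4, .4, .4], [.8, .4, .5], [.8, .4, .5]]

print(max_min(A, M))  # [0.8, 0.4, 0.5]: A o M recovers B
```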

The bidirectional FAM theorem for correlation-minimum encoding

The height of fuzzy set $A$:

$$H(A) = \max_{1 \le i \le n} a_i$$

Fuzzy set $A$ is normal if $H(A) = 1$.

Correlation-minimum bidirectional FAM theorem: if $M = A^T \circ B$, then
(i) $A \circ M = B$ iff $H(A) \ge H(B)$;
(ii) $B \circ M^T = A$ iff $H(B) \ge H(A)$;
(iii) $A' \circ M \subset B$ for any $A'$;
(iv) $B' \circ M^T \subset A$ for any $B'$.
The bidirectional FAM theorem for correlation-minimum encoding

Proof:

$$A \circ A^T = \max_{1 \le i \le n} \min(a_i, a_i) = \max_{1 \le i \le n} a_i = H(A)$$

Then

$$A \circ M = A \circ (A^T \circ B) = (A \circ A^T) \circ B = H(A) \circ B = H(A) \wedge B$$

So $H(A) \wedge B = B$ iff $H(A) \ge H(B)$.
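The theorem is easy to verify numerically. A small sketch reusing the earlier helpers (the subnormal input is an invented illustration, not from the text):

```python
def max_min(A, M):
    n, p = len(M), len(M[0])
    return [max(min(A[i], M[i][j]) for i in range(n)) for j in range(p)]

def correlation_minimum(A, B):
    return [[min(a, b) for b in B] for a in A]

B = [.8, .4, .5]

A1 = [.3, .4, .8, 1]    # H(A1) = 1 >= H(B) = .8
M1 = correlation_minimum(A1, B)
print(max_min(A1, M1))  # [0.8, 0.4, 0.5]: perfect recall

A2 = [.3, .4, .6, .7]   # H(A2) = .7 < H(B) = .8
M2 = correlation_minimum(A2, B)
print(max_min(A2, M2))  # [0.7, 0.4, 0.5] = H(A2) ^ B: recall clipped at H(A2)
```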
Correlation-product encoding

Correlation-product encoding provides an alternative fuzzy Hebbian encoding scheme:

$$M = A^T B, \qquad m_{ij} = a_i b_j$$

Example: with $A = (.3\;.4\;.8\;1)$ and $B = (.8\;.4\;.5)$,

$$M = A^T B = \begin{pmatrix} .3 \\ .4 \\ .8 \\ 1 \end{pmatrix} (.8\;.4\;.5) = \begin{pmatrix} .24 & .12 & .15 \\ .32 & .16 & .2 \\ .64 & .32 & .4 \\ .8 & .4 & .5 \end{pmatrix}$$

Correlation-product encoding preserves more information than correlation-minimum encoding.
Correlation-product encoding

Correlation-product bidirectional FAM theorem: if $M = A^T B$ and $A$ and $B$ are nonnull fit vectors, then
(i) $A \circ M = B$ iff $H(A) = 1$;
(ii) $B \circ M^T = A$ iff $H(B) = 1$;
(iii) $A' \circ M \subset B$ for any $A'$;
(iv) $B' \circ M^T \subset A$ for any $B'$.
FAM system architecture

[Figure: FAM system. An input fit vector $A$ feeds the FAM rules $(A_1, B_1), (A_2, B_2), \ldots, (A_m, B_m)$ in parallel; each rule $k$ recalls $B_k'$, the weighted recalled vectors are summed into $B$, and a defuzzifier maps $B$ to the output $y_j$]
Superimposing FAM rules

Suppose there are m FAM rules or associations $(A_k, B_k)$. The natural neural-network approach maximizes or adds the m associative matrices $M_k$ into a single matrix $M$:

$$M = \max_{1 \le k \le m} M_k \qquad \text{or} \qquad M = \sum_k M_k$$

This superimposition scheme fails for fuzzy Hebbian encoding.

The fuzzy approach to the superimposition problem additively superimposes the m recalled vectors $B_k'$ instead of the fuzzy Hebb matrices $M_k$:

$$B_k' = A \circ M_k = A \circ (A_k^T \circ B_k) \subset B_k$$

The sketch after this slide illustrates the crosstalk that the fuzzy approach avoids.
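A minimal sketch with two invented rules (the numbers are illustrative, not from the text), reusing the earlier helpers:

```python
def max_min(A, M):
    n, p = len(M), len(M[0])
    return [max(min(A[i], M[i][j]) for i in range(n)) for j in range(p)]

def correlation_minimum(A, B):
    return [[min(a, b) for b in B] for a in A]

rules = [([1, .5, 0], [.9, .3]),   # (A1, B1)
         ([0, .5, 1], [.2, .8])]   # (A2, B2)
Ms = [correlation_minimum(Ak, Bk) for Ak, Bk in rules]

# Superimposing the matrices themselves corrupts recall (crosstalk):
M_max = [[max(M[i][j] for M in Ms) for j in range(2)] for i in range(3)]
A1, B1 = rules[0]
print(max_min(A1, M_max))               # [0.9, 0.5] != B1 = [0.9, 0.3]

# Storing the rules separately and superimposing the recalled vectors
# B_k' instead preserves each rule's contribution:
recalled = [max_min(A1, M) for M in Ms]
print(recalled[0])                      # [0.9, 0.3] == B1
B = [sum(Bk[j] for Bk in recalled) for j in range(2)]  # then defuzzify B
```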
Superimposing FAM rules

Disadvantages:
Separate storage of FAM associations consumes space.

Advantages:
1. provides an "audit trail" of the FAM inference procedure;
2. avoids crosstalk;
3. provides knowledge-base modularity;
4. a fit-vector input A activates all the FAM rules in parallel, but to different degrees.
Recalled outputs and “defuzzification”

The recalled output B equals a weighted sum of the individual recalled vectors $B_k'$:

$$B = \sum_{k=1}^{m} w_k B_k'$$

How to defuzzify?

1. Maximum-membership defuzzification:

$$m_B(y_{\max}) = \max_{1 \le j \le p} m_B(y_j)$$

Simple, but it has two fundamental problems:
① the mode of the B distribution is not unique;
② it ignores the information in the waveform B.
Recalled outputs and “defuzzification”

2. Fuzzy centroid defuzzification:

$$\bar{B} = \frac{\sum_{j=1}^{p} y_j\, m_B(y_j)}{\sum_{j=1}^{p} m_B(y_j)}$$

The fuzzy centroid is unique and uses all the information in the output distribution B.
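Both defuzzifiers fit in a few lines of plain Python (a sketch; the sample waveform is invented to show the non-unique mode):

```python
def max_membership(ys, mB):
    """Maximum-membership defuzzification (ties: first mode wins)."""
    return max(zip(ys, mB), key=lambda pair: pair[1])[0]

def centroid(ys, mB):
    """Fuzzy centroid: sum(y_j * m_B(y_j)) / sum(m_B(y_j))."""
    return sum(y * m for y, m in zip(ys, mB)) / sum(mB)

ys = [0, 10, 20, 30, 40]    # output values y_j (e.g. green-light durations)
mB = [.1, .6, .6, .3, .1]   # recalled output fit vector B

print(max_membership(ys, mB))  # 10, though 20 is an equally valid mode
print(centroid(ys, mB))        # ~18.2: unique, uses the whole waveform
```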
Thank you!