
FUZZY SETS AND FUZZY LOGIC
Theory and Applications

PART 7
Constructing Fuzzy Sets
1. Direct/one-expert
2. Direct/multi-expert
3. Indirect/one-expert
4. Indirect/multi-expert
5. Construction from samples
Direct/one-expert
• An expert is expected to assign to each given element x a membership grade A(x) that, according to his or her opinion, best captures the meaning of the linguistic term represented by the fuzzy set A.
This can be done either by
1. defining the membership function completely in terms of a justifiable mathematical formula, or
2. exemplifying it for some selected elements of X.
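Both options can be made concrete with a small sketch; the linguistic term ("close to 25") and the formula below are illustrative assumptions, not taken from the slides.

```python
# Illustrative sketch (assumed example): two ways a single expert might
# specify the membership function A directly.

# 1. A complete, justifiable mathematical formula, e.g. for the linguistic
#    term "close to 25" on X = real numbers:
def A_formula(x: float) -> float:
    return 1.0 / (1.0 + ((x - 25.0) / 5.0) ** 2)

# 2. Exemplification: grades assigned only for selected elements of X.
A_examples = {10: 0.0, 20: 0.5, 25: 1.0, 30: 0.5, 40: 0.0}

print(A_formula(25.0), A_formula(30.0))   # 1.0 0.5
```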
Direct/multi-expert
• When a direct method is extended from one
expert to multiple experts, the opinions of
individual experts must be appropriately
aggregated.
One of the most common methods is based on a
probabilistic interpretation of membership
functions.
Direct/multi-expert
"x belongs to A" is either true or false, where A is
a fuzzy set on X that represent a linguistic term
associated with a given linguistic variable.
let ai (x) denote the answer of expert i. Assume
that ai (x) = 1 when the proposition is valued by
expert i as true, and ai (x) = 0 when it is valued
as false.
n: number of expects
(i  Nn)
Direct/multi-expert
This interpretation of A(x) can be generalized by allowing one to distinguish degrees of competence, c_i, of the individual experts:

A(x) = Σ_{i=1}^{n} c_i a_i(x),

where c_i ≥ 0 and Σ_{i=1}^{n} c_i = 1.
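As a minimal sketch of the two aggregation formulas above, with made-up expert answers and competence weights:

```python
# Made-up expert answers a_i(x) for one fixed element x (1 = "x belongs to A").
answers = [1, 1, 0, 1, 0]
n = len(answers)

# Unweighted aggregation: A(x) = (1/n) * sum_i a_i(x)
A_x = sum(answers) / n

# Competence-weighted aggregation: A(x) = sum_i c_i * a_i(x), with sum_i c_i = 1
competences = [0.3, 0.3, 0.2, 0.1, 0.1]
assert abs(sum(competences) - 1.0) < 1e-9
A_x_weighted = sum(c * a for c, a in zip(competences, answers))

print(A_x, A_x_weighted)   # 0.6 0.7
```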
Direct/multi-expert
Let A and B be two fuzzy sets defined on the same universal set X.
We can calculate A(x) and B(x) for each x ∈ X, and then choose appropriate fuzzy operators to calculate the complements of A and B, the union A ∪ B, the intersection A ∩ B, and so forth. For example, with the standard fuzzy operators,

(A ∪ B)(x) = max[A(x), B(x)]

and

(A ∩ B)(x) = min[A(x), B(x)].
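A short sketch of this step, applying the standard operators pointwise to illustrative (made-up) aggregated grades:

```python
# A(x) and B(x) already aggregated from the experts' answers (values made up).
A = {1: 0.2, 2: 0.8, 3: 1.0}
B = {1: 0.5, 2: 0.4, 3: 0.9}

complement_A = {x: 1.0 - a for x, a in A.items()}     # standard complement of A
union        = {x: max(A[x], B[x]) for x in A}        # (A ∪ B)(x) = max[A(x), B(x)]
intersection = {x: min(A[x], B[x]) for x in A}        # (A ∩ B)(x) = min[A(x), B(x)]

print(union[2], intersection[2])                      # 0.8 0.4
```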
Indirect/one-expert
• Given a linguistic term in a particular context, let
A denote a fuzzy set that is supposed to capture
the meaning of this term.
Let x_1, …, x_n be elements of the universal set X for
which we want to estimate the grades of
membership in A.
Indirect/one-expert
Our problem is to determine the values a_i = A(x_i). Instead of asking the expert to estimate the values a_i directly, we ask him or her to compare the elements x_1, …, x_n in pairs according to their relative weights of belonging to A.
Indirect/one-expert
• The pairwise comparisons are collected in a square matrix P = [p_ij], i, j ∈ N_n, which has positive entries everywhere.
Assume first that it is possible to obtain perfect values p_ij. In this case, p_ij = a_i / a_j, and matrix P is consistent in the sense that

p_ij p_jk = p_ik

for all i, j, k ∈ N_n, which implies that p_ii = 1 and p_ij = 1/p_ji.
Indirect/one-expert
Furthermore,

Σ_{j=1}^{n} p_ij a_j = n a_i

for all i ∈ N_n or, in matrix form,

Pa = na,

where a = [a_1, a_2, …, a_n]^T.
Indirect/one-expert
Pa = na means that n is an eigenvalue of P and a is the corresponding eigenvector. It can also be rewritten in the form

(P − nI)a = 0,

where I is the identity matrix.
Indirect/one-expert
If we assume that

Σ_{i=1}^{n} a_i = 1,

then a_j for any j ∈ N_n can be determined by the following simple procedure:

Σ_{i=1}^{n} p_ij = Σ_{i=1}^{n} a_i / a_j = 1 / a_j,

hence

a_j = 1 / Σ_{i=1}^{n} p_ij.
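A tiny sketch of this procedure for a perfectly consistent matrix P built from hypothetical grades:

```python
# Hypothetical "true" grades a_i (summing to 1) and the consistent matrix p_ij = a_i / a_j.
a_true = [0.4, 0.4, 0.2]
P = [[ai / aj for aj in a_true] for ai in a_true]

# a_j = 1 / sum_i p_ij recovers the grades from the comparison matrix alone.
n = len(a_true)
a_est = [1.0 / sum(P[i][j] for i in range(n)) for j in range(n)]

print([round(v, 3) for v in a_est])   # [0.4, 0.4, 0.2]
```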
Indirect/one-expert
The problem of estimating vector a from matrix P
now becomes the problem of finding the largest
eigenvalue λmax and the associated eigenvector.
That is, the estimated vector a must satisfy the
equation
Pa = λmax a,
where λmax is usually close to n.
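In practice the eigenvector can be computed numerically; the following sketch (with a made-up, slightly inconsistent comparison matrix) uses NumPy:

```python
import numpy as np

# A slightly inconsistent pairwise-comparison matrix (made-up values).
P = np.array([[1.0,  1.2,  2.1],
              [0.8,  1.0,  1.9],
              [0.5,  0.55, 1.0]])

eigvals, eigvecs = np.linalg.eig(P)
k = np.argmax(eigvals.real)            # index of the largest eigenvalue
lam_max = eigvals.real[k]
a = np.abs(eigvecs[:, k].real)
a = a / a.sum()                        # normalize so that the grades sum to 1

print(round(lam_max, 3), np.round(a, 3))   # lambda_max is close to n = 3
```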
Indirect/multi-expert
• Let us illustrate methods in this category by
describing an interesting method, which enables
us to determine degrees of competence of
participating experts.
It is based on the assumption that, in general,
the concept in question is n-dimensional (based
on n distinct features), each defined on R.
Hence, the universal set on which the concept is
defined is R^n.
Indirect/multi-expert
The full opinion of expert i regarding the relevance of elements (n-tuples) of R^n to the concept is expressed by the hyperparallelepiped

H_i = [α_i1, β_i1] × [α_i2, β_i2] × … × [α_in, β_in],

where [α_ij, β_ij] denotes the interval of values of feature j that, in the opinion of expert i, relate to the concept in question (i ∈ N_m, j ∈ N_n).
Indirect/multi-expert
We obtain m hyperparallelepipeds of this form, one for each of the m experts.
The membership function of the fuzzy set by which the concept is to be represented is then constructed by an algorithmic procedure; its detailed steps are not reproduced here, but a simplified illustrative sketch follows.
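The slides' own procedure (which also derives the experts' degrees of competence) is not reproduced above. The following is only a simplified, assumed illustration of how such expert boxes might be aggregated: A(x) is taken as the competence-weighted fraction of experts whose hyperparallelepiped contains x.

```python
from typing import List, Tuple

Box = List[Tuple[float, float]]   # one (alpha_ij, beta_ij) interval per feature j

def contains(box: Box, x: List[float]) -> bool:
    """True if the point x lies inside the expert's hyperparallelepiped."""
    return all(lo <= xi <= hi for (lo, hi), xi in zip(box, x))

def A(x: List[float], boxes: List[Box], c: List[float]) -> float:
    """Competence-weighted fraction of experts whose box contains x."""
    return sum(ci for box, ci in zip(boxes, c) if contains(box, x))

# Three experts, two features, equal competences (all numbers made up).
boxes = [[(0, 10), (5, 15)], [(2, 8), (4, 12)], [(1, 9), (6, 14)]]
c = [1 / 3, 1 / 3, 1 / 3]

print(round(A([5.0, 7.0], boxes, c), 2))    # 1.0: inside every expert's box
print(round(A([9.5, 13.0], boxes, c), 2))   # 0.33: inside only the first box
```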
Construction from samples
• Lagrange interpolation
A curve-fitting method in which the constructed function is assumed to be expressed by a suitable polynomial form.
The function f employed for the interpolation of given sample data <x_i, a_i> has, for all x ∈ R, the form

f(x) = a_1 L_1(x) + a_2 L_2(x) + … + a_n L_n(x),

where
Construction from samples
L_i(x) = [(x − x_1) … (x − x_{i−1})(x − x_{i+1}) … (x − x_n)] / [(x_i − x_1) … (x_i − x_{i−1})(x_i − x_{i+1}) … (x_i − x_n)]

for all i ∈ N_n. Since the values f(x) need not be in [0, 1] for some x ∈ R, the function f cannot be directly considered as the sought membership function A. We may convert f to A for each x by the formula

A(x) = max[0, min(1, f(x))].
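A compact sketch of this construction, with made-up sample data <x_i, a_i>:

```python
# Made-up sample data <x_i, a_i>.
xs = [0.0, 1.0, 2.0, 3.0]
grades = [0.0, 0.8, 1.0, 0.2]

def f(x: float) -> float:
    """Lagrange interpolation f(x) = sum_i a_i * L_i(x)."""
    total = 0.0
    for i, (xi, ai) in enumerate(zip(xs, grades)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)   # Lagrange basis polynomial L_i(x)
        total += ai * Li
    return total

def A(x: float) -> float:
    """Clip f into [0, 1] to obtain a valid membership function."""
    return max(0.0, min(1.0, f(x)))

print(A(1.0), round(A(2.5), 3))   # exact at the sample points, clipped elsewhere
```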
Construction from samples
An advantage of this method is that the
membership function matches the sample data
exactly.
Its disadvantage is that the complexity of the
resulting function (expressed by the degree of
the polynomial involved) increases with the
number of data samples.
Construction from samples
• Least-squares curve fitting
The method of least-squares curve fitting selects the function f(x; α_0, β_0, …) from a given parametrized class of functions f(x; α, β, …) for which the total squared error

E(α, β, …) = Σ_{i=1}^{n} [f(x_i; α, β, …) − a_i]²

reaches its minimum. Then

A(x) = max[0, min(1, f(x; α_0, β_0, …))]

for all x ∈ R.
Construction from samples
The class of bell-shaped functions

f(x; α, β, γ) = γ e^{−(x − α)² / β}

is frequently used for this purpose, where α controls the position of the center of the bell, (β/2)^{1/2} defines the inflection points, and γ controls the height of the bell (Fig. 10.4a).
Construction from samples
Given sample data <x_i, a_i>, we determine (by any effective optimization method) the values α_0, β_0, γ_0 of the parameters α, β, γ, respectively, for which

E(α, β, γ) = Σ_{i=1}^{n} [γ e^{−(x_i − α)² / β} − a_i]²

reaches its minimum. The bell-shaped membership function A that best conforms to the sample data is then given by the formula

A(x) = max[0, min(1, γ_0 e^{−(x − α_0)² / β_0})]

for all x ∈ R.
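A hedged sketch of this fit, assuming the bell-shaped class above and using scipy.optimize.curve_fit for the minimization (the sample data and initial guess are made up):

```python
import numpy as np
from scipy.optimize import curve_fit

def bell(x, alpha, beta, gamma):
    """Bell-shaped class: gamma * exp(-(x - alpha)^2 / beta)."""
    return gamma * np.exp(-((x - alpha) ** 2) / beta)

# Made-up sample data <x_i, a_i>.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
grades = np.array([0.1, 0.5, 1.0, 0.5, 0.1])

# Least-squares estimation of alpha_0, beta_0, gamma_0.
(alpha0, beta0, gamma0), _ = curve_fit(bell, xs, grades, p0=[2.0, 1.0, 1.0])

def A(x):
    """Membership function: the fitted bell, clipped into [0, 1]."""
    return np.clip(bell(x, alpha0, beta0, gamma0), 0.0, 1.0)

print(np.round([alpha0, beta0, gamma0], 3))
print(np.round(A(xs), 2))
```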
Construction from samples
Another class of functions that is frequently used for representing linguistic terms is the class of trapezoidal-shaped functions. The meaning of the five parameters of these functions is illustrated in Fig. 10.4b (a sketch of one common parameterization is given below).
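Since the formula itself is not reproduced on the slide, the following is an assumed but common five-parameter trapezoidal form (four breakpoints α ≤ β ≤ γ ≤ δ plus a height h), offered only as an illustration:

```python
def trapezoid(x: float, alpha: float, beta: float, gamma: float,
              delta: float, h: float = 1.0) -> float:
    """Assumed five-parameter trapezoidal membership function."""
    if x <= alpha or x >= delta:
        return 0.0
    if beta <= x <= gamma:
        return h                                   # flat top of height h
    if x < beta:
        return h * (x - alpha) / (beta - alpha)    # rising edge
    return h * (delta - x) / (delta - gamma)       # falling edge

print(trapezoid(2.5, 1, 2, 4, 5), trapezoid(1.5, 1, 2, 4, 5))   # 1.0 0.5
```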
Construction from samples
• Neural networks
Construction from samples
Following the backpropagation learning algorithm, we first initialize the weights in the network. This means that we assign a small random number to each weight. Then, we apply the pairs ⟨x_p, t_p⟩ of the training set to the learning algorithm in some order.
Construction from samples
For each x_p, we calculate the actual output y_p and the square error

E_p = ½ (y_p − t_p)².

Using E_p, we update the weights in the network according to the backpropagation algorithm described in Appendix A. We also calculate a cumulative cycle error,

E = Σ_p E_p.
Construction from samples
At the end of each cycle, we compare the cumulative error E with the largest acceptable error, E_max, specified by the user.
– E ≤ E_max: the neural network represents the desired membership function.
– E > E_max: we initiate a new cycle.
The algorithm terminates when either we obtain a solution or the number of cycles exceeds a number specified by the user.
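The network architecture and the exact update rules (Appendix A of the book) are not given on these slides, so the following is only an assumed, minimal training-loop sketch that follows the cycle structure described above: per-pair square error, weight updates, cumulative cycle error, and the E ≤ E_max stopping test.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up training pairs <x_p, t_p>: samples of the membership function to learn.
xs = np.linspace(0.0, 4.0, 9)
ts = np.exp(-((xs - 2.0) ** 2))

H = 6                                           # hidden units (assumed architecture)
W1 = rng.normal(0.0, 0.5, H); b1 = np.zeros(H)  # small random initial weights
W2 = rng.normal(0.0, 0.5, H); b2 = 0.0
eta, E_max, max_cycles = 0.1, 0.01, 10000

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for cycle in range(max_cycles):
    E = 0.0                                     # cumulative cycle error
    for x, t in zip(xs, ts):
        h = sigmoid(W1 * x + b1)                # forward pass
        y = sigmoid(W2 @ h + b2)
        E += 0.5 * (y - t) ** 2                 # E_p added to the cycle error
        dy = (y - t) * y * (1 - y)              # backward pass (gradients)
        dh = dy * W2 * h * (1 - h)
        W2 -= eta * dy * h; b2 -= eta * dy
        W1 -= eta * dh * x; b1 -= eta * dh
    if E <= E_max:                              # stop when the cycle error is acceptable
        break

print(cycle, round(E, 4))
```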
Exercise 7
• 7.1
• 7.2
• 7.3
• 7.4