SDA 5: Fuzzy linear programming
The conventional model of Linear Programming (LP) can be stated as
⟨c, x⟩ → min; subject to Ax ≤ b,
where
• ⟨c, x⟩ = c1x1 + · · · + cnxn is the objective function,
• x = (x1, . . . , xn) is the decision variable,
• (Ax)i = ⟨ai, x⟩ = ai1x1 + · · · + ainxn, where ai denotes the i-th row of A,
• X = {x|Ax ≤ b} is the set of feasible alternatives.
• xopt ∈ X is called an optimal solution to the LP problem if
⟨c, xopt⟩ ≤ ⟨c, x⟩
for all x ∈ X.
In many real-world problems it may be
sufficient to determine an x such that
c1x1+· · ·+cnxn ≤ b0, subject to Ax ≤ b,
where b0 is a predetermined aspiration
level.
The fuzzy objective function is characterized by its membership function, and
so are the constraints.
Since we want to satisfy (optimize) the
objective function as well as the constraints,
a decision in a fuzzy environment is defined in analogy to nonfuzzy environment
as the selection of activities which simultaneously satisfy objective function(s)
and constraints.
In fuzzy set theory the intersection of
sets normally corresponds to the logical
and. The decision in a fuzzy environment
can therefore be viewed as the intersection
of fuzzy constraints and fuzzy objective function(s).
The relationship between constraints and
objective functions in a fuzzy environment is therefore fully symmetric, i.e.
there is no longer a difference between the
former and the latter.
Starting from the problem
⟨c, x⟩ → min; subject to Ax ≤ b,
the adopted fuzzy version is
⟨c, x⟩ ≲ b0; subject to Ax ≲ b,
where ≲ denotes a fuzzified (relaxed) version of ≤.
That is,
c1x1 + · · · + cnxn ≲ b0
ai1x1 + · · · + ainxn ≲ bi, i = 1, . . . , m.
Here b0 denotes an aspiration level of the
decision maker.
We now define the membership functions
for the constraints:
µi(x) =
  1                          if ⟨ai, x⟩ ≤ bi
  1 − (⟨ai, x⟩ − bi)/di      if bi < ⟨ai, x⟩ ≤ bi + di
  0                          if ⟨ai, x⟩ > bi + di,
where di > 0 are subjectively chosen
constants of admissible violations, i =
1, . . . , m.
[Figure: µi(x) equals 1 up to ⟨ai, x⟩ = bi, then decreases linearly to 0 at ⟨ai, x⟩ = bi + di.]
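As a minimal sketch, the piecewise-linear constraint membership can be coded directly (the function name and the numbers in the example are illustrative, not from the text):

```python
def constraint_membership(a, x, b, d):
    """Degree to which x satisfies the fuzzified constraint <a, x> <= b,
    where d > 0 is the admissible violation."""
    s = sum(ai * xi for ai, xi in zip(a, x))  # inner product <a, x>
    if s <= b:
        return 1.0
    if s <= b + d:
        return 1.0 - (s - b) / d  # linear decrease on (b, b + d]
    return 0.0

# <a, x> = 5 lies halfway into the tolerance band (b = 4, d = 2)
print(constraint_membership([1, 1], [2, 3], 4.0, 2.0))  # -> 0.5
```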
µi(x) is the degree to which x satisfies
the i-th constraint.
We define the membership function for
the objective function:
µ0(x) =
  1                         if ⟨c, x⟩ ≤ b0
  1 − (⟨c, x⟩ − b0)/d0      if b0 < ⟨c, x⟩ ≤ b0 + d0
  0                         if ⟨c, x⟩ > b0 + d0,
where d0 > 0 is a subjectively chosen
constant of admissible violation.
[Figure: µ0(x) equals 1 up to ⟨c, x⟩ = b0, then decreases linearly to 0 at ⟨c, x⟩ = b0 + d0.]
µ0(x) is the degree to which x satisfies
the fuzzy goal function.
The fuzzy decision is defined by Bellman and Zadeh’s principle as
D(x) = min{µ0(x), µ1(x), . . . , µm(x)}
where x can be any element of the n-dimensional space, because any element
has a degree of feasibility, which is between zero and one.
An optimal solution to the fuzzy LP
is determined from the relationship
D(x∗) = max{D(x) : x ∈ Rn}.
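A small sketch of the max-min decision on a made-up one-dimensional problem (the two membership functions and the grid search are illustrative assumptions, not from the text):

```python
def clamp01(t):
    """Clip a value to the unit interval [0, 1]."""
    return max(0.0, min(1.0, t))

def fuzzy_decision(memberships, x):
    """Bellman-Zadeh decision: the minimum of all membership degrees at x."""
    return min(mu(x) for mu in memberships)

# made-up memberships: goal rises on [1.5, 2.5], constraint falls on [2, 3]
mu0 = lambda x: clamp01(x - 1.5)  # fuzzy goal
mu1 = lambda x: clamp01(3.0 - x)  # fuzzy constraint

# crude grid search for x* = argmax D(x); a real solver would use an LP
grid = [i / 100.0 for i in range(0, 501)]
x_star = max(grid, key=lambda x: fuzzy_decision([mu0, mu1], x))
print(x_star, fuzzy_decision([mu0, mu1], x_star))  # -> 2.25 0.75
```

In practice the max-min problem is usually rewritten as an ordinary LP: maximize an auxiliary variable λ subject to λ ≤ µi(x) for all i, a reformulation going back to Zimmermann.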
Triangular norms (t-norms for short) were
introduced by Schweizer and Sklar (1963)
to model distances in probabilistic metric spaces.
In fuzzy set theory triangular norms are
extensively used to model the logical connective and.
Definition 1. A mapping
T : [0, 1] × [0, 1] → [0, 1]
is a triangular norm (t-norm for short)
iff it is symmetric, associative, non-decreasing
in each argument and T (a, 1) =
a, for all a ∈ [0, 1]. In other words, any
t-norm T satisfies the properties:
Symmetricity:
T (x, y) = T (y, x), ∀x, y ∈ [0, 1].
Associativity:
T (x, T (y, z)) = T (T (x, y), z), ∀x, y, z ∈ [0, 1].
Monotonicity:
T (x, y) ≤ T (x′, y′) if x ≤ x′ and y ≤ y′.
One identity: T (x, 1) = x, ∀x ∈ [0, 1].
These axioms attempt to capture the basic properties of set intersection.
The basic t-norms are:
• minimum: min(a, b) = min{a, b},
• Łukasiewicz: TL(a, b) = max{a + b − 1, 0},
• product: TP (a, b) = ab.
All t-norms may be extended, through
associativity, to n > 2 arguments.
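The three basic t-norms, and their associative extension to n arguments, can be sketched as follows (function names are mine):

```python
from functools import reduce

def t_min(a, b):
    """Minimum t-norm."""
    return min(a, b)

def t_lukasiewicz(a, b):
    """Lukasiewicz t-norm TL(a, b) = max{a + b - 1, 0}."""
    return max(a + b - 1.0, 0.0)

def t_product(a, b):
    """Product t-norm TP(a, b) = ab."""
    return a * b

def t_extend(T, values):
    """Extend a binary t-norm to n > 2 arguments via associativity."""
    return reduce(T, values)

print(t_extend(t_lukasiewicz, [0.9, 0.8, 0.7]))  # -> 0.4 (up to float rounding)
```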
Triangular conorms are extensively used
to model the logical connective or.
Definition 2. (Triangular conorm.) A mapping
S : [0, 1] × [0, 1] → [0, 1],
is a triangular co-norm (t-conorm) if it
is symmetric, associative, non-decreasing
in each argument and S(a, 0) = a, for
all a ∈ [0, 1].
In other words, any t-conorm S satisfies
the properties:
Symmetricity: S(x, y) = S(y, x), ∀x, y ∈ [0, 1].
Associativity: S(x, S(y, z)) = S(S(x, y), z), ∀x, y, z ∈ [0, 1].
Monotonicity: S(x, y) ≤ S(x′, y′) if x ≤ x′ and y ≤ y′.
Zero identity: S(x, 0) = x, ∀x ∈ [0, 1].
The basic t-conorms are:
• maximum: max(a, b) = max{a, b}
• Łukasiewicz: SL(a, b) = min{a + b, 1},
• probabilistic: SP (a, b) = a + b − ab
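The basic t-conorms admit the same kind of sketch (function names are mine):

```python
def s_max(a, b):
    """Maximum t-conorm."""
    return max(a, b)

def s_lukasiewicz(a, b):
    """Lukasiewicz t-conorm SL(a, b) = min{a + b, 1} (bounded sum)."""
    return min(a + b, 1.0)

def s_prob(a, b):
    """Probabilistic sum SP(a, b) = a + b - ab."""
    return a + b - a * b

print(s_prob(0.5, 0.5))  # -> 0.75
```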
Lemma 1. Let T be a t-norm. Then the
following statement holds
T (x, y) ≤ min{x, y}, ∀x, y ∈ [0, 1].
Proof. From monotonicity, symmetricity
and the extremal condition we get
T (x, y) ≤ T (x, 1) = x,
T (x, y) = T (y, x) ≤ T (y, 1) = y.
This means that T (x, y) ≤ min{x, y}.
Lemma 2. Let S be a t-conorm. Then
the following statement holds
max{x, y} ≤ S(x, y), ∀x, y ∈ [0, 1].
Proof. From monotonicity, symmetricity
and the extremal condition we get
S(x, y) ≥ S(x, 0) = x,
S(x, y) = S(y, x) ≥ S(y, 0) = y.
This means that S(x, y) ≥ max{x, y}.
The intersection operation can be defined
with the help of triangular norms.
Definition 3. (t-norm-based intersection)
Let T be a t-norm. The T -intersection of
A and B is defined as
(A ∩ B)(t) = T (A(t), B(t)), ∀t ∈ X.
Let T (x, y) = TL(x, y) = max{x + y − 1, 0}
be the Łukasiewicz t-norm.
Then we have
(A ∩ B)(t) = max{A(t) + B(t) − 1, 0},
for all t ∈ X.
The union operation can be defined with
the help of triangular conorms.
Definition 4. (t-conorm-based union) Let
S be a t-conorm. The S-union of A and
B is defined as
(A ∪ B)(t) = S(A(t), B(t)), ∀t ∈ X.
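Both pointwise operations can be sketched over a finite universe, representing fuzzy sets as dicts from elements to membership degrees (the representation and the Łukasiewicz choice are illustrative):

```python
def t_intersection(A, B, T):
    """T-intersection: (A n B)(t) = T(A(t), B(t)) for every t."""
    return {t: T(A[t], B[t]) for t in A}

def s_union(A, B, S):
    """S-union: (A u B)(t) = S(A(t), B(t)) for every t."""
    return {t: S(A[t], B[t]) for t in A}

T_L = lambda a, b: max(a + b - 1.0, 0.0)  # Lukasiewicz t-norm
S_L = lambda a, b: min(a + b, 1.0)        # Lukasiewicz t-conorm

A = {"t1": 0.6, "t2": 0.3}
B = {"t1": 0.7, "t2": 0.5}
print(t_intersection(A, B, T_L))  # t1: 0.6 + 0.7 - 1 = 0.3, t2: 0.0
print(s_union(A, B, S_L))         # t1: 1.0, t2: 0.8
```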
Weighted min and max aggregations
• R. R. Yager, On weighted median aggregation, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 1(1994) 101–113.
In many applications of fuzzy sets, such as multicriteria decision making, pattern recognition, diagnosis and fuzzy logic control,
one faces the problem of weighted aggregation.
Assume that associated with each fuzzy set
Ai (characterizing the i-th attribute or
criterion) is a weight wi ∈ [0, 1] indicating its importance in the aggregation
procedure, i = 1, . . . , n.
The general process for the inclusion of
importance in the aggregation involves
the transformation of the fuzzy sets under the importance.
Let Agg indicate an aggregation operator, max or min, to find the weighted aggregation.
Let x be an alternative and let
(a1, a2, . . . , an)
denote the degrees to which x satisfies
the criteria, i.e.
ai = Ai(x), i = 1, . . . , n.
We first transform each of the membership grades using the weights
g(wi, ai) = âi, i = 1, . . . , n
and then obtain the weighted aggregate
A(x) = Agg(â1, . . . , ân).
The form of g depends upon the type of
aggregation being performed, the operation Agg.
As discussed by Yager, in incorporating
the effect of the importances in the max
aggregation operator: since it is the large
values that play the most important role
in the aggregation, we desire to transform
the low-importance elements into small
values and thus have them not play a significant
role in the max aggregation.
Yager suggested a class of functions which
can be used for importance transformation in max aggregation
h(wi, ai) = T (wi, ai)
where T is a t-norm.
The three corresponding manifestations
are
h(wi, ai) = min{wi, ai}
h(wi, ai) = wiai
h(wi, ai) = max{0, wi + ai − 1}.
We see that if wi = 0 then T (wi, ai) = 0
and the element plays no role in the max.
Furthermore, as wi decreases then T (wi, ai)
decreases.
Example 1. Let
w = (0.3, 0.2, 0.1, 0.4, 0.0)
be the vector of weights and let
a = (0.4, 0.6, 0.6, 0.4, 0.8)
be the vector of aggregates. If h(wi, ai) =
min{wi, ai} then
h(w1, a1) = h(0.3, 0.4) = 0.3∧0.4 = 0.3
h(w2, a2) = h(0.2, 0.6) = 0.2∧0.6 = 0.2
h(w3, a3) = h(0.1, 0.6) = 0.1∧0.6 = 0.1
h(w4, a4) = h(0.4, 0.4) = 0.4∧0.4 = 0.4
h(w5, a5) = h(0.0, 0.8) = 0.0∧0.8 = 0.0
That is,
max{â1, . . . , ân}
= max{0.3, 0.2, 0.1, 0.4, 0.0}
= 0.4.
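Example 1 can be checked mechanically (a direct transcription of the numbers above, nothing new assumed):

```python
w = [0.3, 0.2, 0.1, 0.4, 0.0]  # importance weights
a = [0.4, 0.6, 0.6, 0.4, 0.8]  # satisfaction degrees

# transform with h(w_i, a_i) = min{w_i, a_i}, then aggregate with max
a_hat = [min(wi, ai) for wi, ai in zip(w, a)]
print(a_hat)       # -> [0.3, 0.2, 0.1, 0.4, 0.0]
print(max(a_hat))  # -> 0.4
```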
As discussed by Yager, in incorporating
the effect of the importances in the min
operation we are interested in reducing
the effect of the elements which have
low importance.
In performing the min aggregation it is
the elements with low values that play
the most significant role. One way to
reduce the effect of elements with low
importance is therefore to transform them
into large values, values closer to one.
Yager introduced a class of functions which
can be used for the inclusion of importances in the min aggregation
g(wi, ai) = S(1 − wi, ai)
where S is a t-conorm.
Three significant manifestations of this
operation are
g(wi, ai) = max{1 − wi, ai},
with S(a, b) = max{a, b}, the max conorm,
g(wi, ai) = 1 − wi + aiwi,
with S(a, b) = a + b − ab,
g(wi, ai) = min{1, 1 − wi + ai},
with S(a, b) = min{1, a + b}.
We first note that if wi = 0 then from
the basic property of t-conorms it follows that
S(1 − wi, ai) = S(1, ai) = 1.
Thus, zero importance gives us one.
Example 2. Let
w = (0.3, 0.2, 0.1, 0.4, 0.0),
be the vector of weights and let
a = (0.4, 0.6, 0.6, 0.4, 0.8),
be the vector of aggregates. If
g(wi, ai) = max{1 − wi, ai},
then we get
g(w1, a1) = g(0.3, 0.4) = (1−0.3)∨0.4 = 0.7
g(w2, a2) = g(0.2, 0.6) = (1−0.2)∨0.6 = 0.8
g(w3, a3) = g(0.1, 0.6) = (1−0.1)∨0.6 = 0.9
g(w4, a4) = g(0.4, 0.4) = (1−0.4)∨0.4 = 0.6
g(w5, a5) = g(0.0, 0.8) = (1−0.0)∨0.8 = 1.0
That is
min{â1, . . . , ân}
= min{0.7, 0.8, 0.9, 0.6, 1} = 0.6
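Example 2 admits the same mechanical check:

```python
w = [0.3, 0.2, 0.1, 0.4, 0.0]  # importance weights
a = [0.4, 0.6, 0.6, 0.4, 0.8]  # satisfaction degrees

# transform with g(w_i, a_i) = max{1 - w_i, a_i}, then aggregate with min
a_hat = [max(1.0 - wi, ai) for wi, ai in zip(w, a)]
print(a_hat)       # -> [0.7, 0.8, 0.9, 0.6, 1.0]
print(min(a_hat))  # -> 0.6
```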