
Topic 4: Decision Analysis
Module 4.3: Economic Theory of Information
Bayes Rule and Bayes Strategies
The economic theory of information is most easily stated for the risk-neutral decision
maker. In what follows, we return to the maximum expected value (MEV) decision rule
of Pascal covered earlier.
The economic theory of information, due primarily to Jacob Marschak (1974), is a
decision theory wherein the objects of choice are information systems. An information
system for the decision problem <S,A> is a pair <Z,Pz/s> where Z is a set of signals
(related to the states in S), and Pz/s is a matrix of conditional probabilities on the signals z
in Z given states s in S. A probability of the form P(z/s) is called a reliability probability,
and the matrix Pz/s represents the quality of the information system <Z,Pz/s>.
Reliability probabilities and the decision maker’s own prior probabilities determine, via
Bayes Rule, the decision maker’s posterior probabilities Ps/z. Hence, this variation on
decision theory is often referred to as Bayesian decision theory. Bayes Rule is as
follows:
P(si/zk) = P(si)P(zk/si) / P(zk) = P(si)P(zk/si) / Σj P(sj)P(zk/sj)

The first equality is known as the basic form of Bayes Rule and is the theorem proved by
Bayes (1763); the second equality is known as the extended form of Bayes Rule. The
latter is the former with the theorem on total probability used to generate P(zk) in terms of
its component parts.
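As an illustrative sketch (not part of the original notes), the extended form of Bayes Rule can be written as a short Python function; the function and variable names here are our own:

```python
def posterior(prior, reliability, k):
    """Extended form of Bayes Rule: P(s_i / z_k) for every state i.

    prior[i]          = P(s_i), the prior probability of state i
    reliability[i][k] = P(z_k / s_i), the reliability probability
    """
    # Theorem on total probability: P(z_k) = sum_j P(s_j) P(z_k / s_j)
    p_zk = sum(p * row[k] for p, row in zip(prior, reliability))
    # Basic form: P(s_i / z_k) = P(s_i) P(z_k / s_i) / P(z_k)
    return [p * row[k] / p_zk for p, row in zip(prior, reliability)]
```

For the example below, `posterior([.5, .5], [[.8, .2], [.4, .6]], 0)` returns the posterior distribution given signal z1, namely [.4/.6, .2/.6].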
Consider the following example. Suppose we have a problem with two states where the
prior probabilities are P(s1) = .5 and P(s2) = .5, and an information system with two
signals where the matrix of reliability probabilities is
P(z/s)    z1    z2
s1        .8    .2
s2        .4    .6
In words, the information system is 80% reliable in detecting that the first state will occur
and 60% reliable in detecting that the second state will occur. The prior and reliability
probabilities yield the following posterior probabilities:
P(s)
s1    .5
s2    .5

P(z/s)    z1    z2
s1        .8    .2
s2        .4    .6

P(s/z)    z1       z2
s1        .4/.6    .1/.4
s2        .2/.6    .3/.4
The posterior probabilities are calculated via Bayes Rule as follows:
.4/.6 = (.5)(.8) / [(.5)(.8) + (.5)(.4)] = .4 / (.4 + .2),
.2/.6 = (.5)(.4) / [(.5)(.8) + (.5)(.4)] = .2 / (.4 + .2),

and

.1/.4 = (.5)(.2) / [(.5)(.2) + (.5)(.6)] = .1 / (.1 + .3),
.3/.4 = (.5)(.6) / [(.5)(.2) + (.5)(.6)] = .3 / (.1 + .3).
Note that if the decision maker receives the signal z1, then the probabilities .4/.6 and .2/.6
replace the prior probabilities .5 and .5, respectively, whereas if the decision maker
receives the signal z2, then the probabilities .1/.4 and .3/.4 replace the prior probabilities
.5 and .5, respectively. Therefore, to resolve the decision problem with the information
system the decision maker simply resolves the original decision problem twice, once for
the world where the decision maker receives signal z1 and once for the world where the
decision maker receives signal z2.
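In code, "resolving the problem twice" amounts to computing one posterior distribution per signal; a minimal Python sketch using the example's numbers (the variable names are ours):

```python
prior = [.5, .5]          # P(s1), P(s2)
rel = [[.8, .2],          # P(z/s): rows are states s1, s2;
       [.4, .6]]          #         columns are signals z1, z2

posteriors = {}
for k, z in enumerate(["z1", "z2"]):
    # Theorem on total probability gives P(z_k).
    p_z = sum(prior[i] * rel[i][k] for i in range(2))
    # Bayes Rule replaces the priors with posteriors given z_k.
    posteriors[z] = [prior[i] * rel[i][k] / p_z for i in range(2)]

print(posteriors)
# z1 -> [.4/.6, .2/.6]; z2 -> [.1/.4, .3/.4]
```

Each posterior distribution then takes the place of P(s) in its own copy of the original decision problem.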
To resolve the decision problem with information we follow the same two-step procedure
we followed earlier. The first step is to calculate the (conditional) expected values for the
acts given each of the signals. The second step is to choose, for each signal, the act with
the greater expected value. The first step produces the table of conditional expected
values E[V(a)/z], as follows:
V(a,s)    s1     s2     EV    Max EV
a1        100    -50    25    25 (Do a1)
a2        0      20     10

P(s)      .5     .5

P(z/s)    z1    z2
s1        .8    .2
s2        .4    .6

P(s/z)    z1       z2
s1        .4/.6    .1/.4
s2        .2/.6    .3/.4

E[V(a)/z]    z1       z2
a1           30/.6    -5/.4
a2           4/.6     6/.4
The entries in the lower table are generated from the original payoff table and the
posterior probabilities as follows:
E[V(a1)/z1] = (.4/.6)(100) + (.2/.6)(-50) = 30/.6
E[V(a2)/z1] = (.4/.6)(0) + (.2/.6)(20) = 4/.6
E[V(a1)/z2] = (.1/.4)(100) + (.3/.4)(-50) = -5/.4
E[V(a2)/z2] = (.1/.4)(0) + (.3/.4)(20) = 6/.4
The second step is to choose the optimal act for each signal. If the decision maker
receives signal z1, then the optimal act is a1 because 30/.6 is greater than 4/.6; and if the
decision maker receives signal z2, then the optimal act is a2 because 6/.4 is greater than
-5/.4.
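The entire two-step procedure can be sketched end to end in Python; this is a sketch under the example's numbers, with variable names of our own choosing:

```python
# Payoffs V(a, s) for the two-act, two-state example.
V = [[100, -50],   # act a1
     [0,   20]]    # act a2
prior = [.5, .5]   # P(s)
rel = [[.8, .2],   # P(z/s): rows = states, columns = signals
       [.4, .6]]

bayes_strategy = []
for k in range(2):                     # for each signal z_k
    # Posterior probabilities P(s/z_k) via Bayes Rule.
    p_zk = sum(prior[i] * rel[i][k] for i in range(2))
    post = [prior[i] * rel[i][k] / p_zk for i in range(2)]
    # Step 1: conditional expected value of each act given z_k.
    cev = [sum(post[i] * V[a][i] for i in range(2)) for a in range(2)]
    # Step 2: choose the act with the greater conditional expected value.
    bayes_strategy.append(max(range(2), key=lambda a: cev[a]))

print(bayes_strategy)   # [0, 1], i.e. <a1, a2>: if z1 then a1, if z2 then a2
```

Note that 30/.6 = 50 and 4/.6 ≈ 6.67, so act a1 wins under z1, while -5/.4 = -12.5 and 6/.4 = 15, so act a2 wins under z2, reproducing the strategy <a1,a2>.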
The pair of propositions “if z1, then a1” and “if z2, then a2,” usually abbreviated <a1,a2>, is
called the Bayes Strategy. Such a strategy constitutes a contingency plan since the
decision maker knows the proper reaction to each signal before any signal is received.
The booklet that came with your car includes various Bayes strategies. For example,
most such booklets tell you that “if the warning light for oil pressure is on, then stop the
car,” and “if the warning light for oil pressure is not on, then keep driving.” Similarly,
the same booklets tell you that "if the warning light for coolant is on, then drive the
car to the nearest service station,” and “if the warning light for coolant is not on, then
keep driving.”
In tabular form, we have the following:
V(a,s)    s1     s2     EV    Max EV
a1        100    -50    25    25
a2        0      20     10

P(s)      .5     .5

P(z/s)    z1    z2
s1        .8    .2
s2        .4    .6

P(s/z)    z1       z2
s1        .4/.6    .1/.4
s2        .2/.6    .3/.4

E[V(a)/z]    a1       a2      Max E[V(a)/z]    Bayes Strategy
z1           30/.6    4/.6    30/.6            a1
z2           -5/.4    6/.4    6/.4             a2
The decision tree, drawn in TreePlan, is as follows:
Receive z1 (do act a1):
    a1 (E[V] = 30/.6):  s1 (.4/.6) → 100;  s2 (.2/.6) → -50
    a2 (E[V] = 4/.6):   s1 (.4/.6) → 0;    s2 (.2/.6) → 20
Receive z2 (do act a2):
    a1 (E[V] = -5/.4):  s1 (.1/.4) → 100;  s2 (.3/.4) → -50
    a2 (E[V] = 6/.4):   s1 (.1/.4) → 0;    s2 (.3/.4) → 20
PROBLEM:
Suppose we have the following information:
VALUE     state 1    state 2    state 3
act 1     650        900        -800
act 2     650        600        -650

P(s)      .2         .6         .2

P(z/s)    state 1    state 2    state 3
z1        .8         .2         .0
z2        .1         .6         .2
z3        .1         .2         .8
Find the Bayes strategy for this decision problem using the maximum expected value
(MEV) rule.