Review of "Strategic Information Transmission"

Hanjoon Michael Jung [1]
The Institute of Economics, Academia Sinica

October 2010

JEL Classification Number: C72
Keywords: Nash equilibrium, Signaling Game, Cheap Talk

[1] The Institute of Economics, Academia Sinica, 128, Sec. 2, Academia Road, Nangang, Taipei 115, Taiwan. Email address: [email protected]

Crawford and Sobel (1982) developed a model of strategic information transmission in which a better-informed sender sends a possibly informative signal to a decision-making receiver, and studied how the strategically transmitted information is related to the degree of alignment between the two players' interests. They adopted the Bayesian Nash equilibrium as their equilibrium concept and showed that the sender's signal, i.e., the transmitted information, is more informative in the Pareto-superior equilibrium when the players' interests are more closely aligned. Their analyses, however, are not complete in two respects. First, they analyzed the model based on only partial consideration of the players' behavior: mixed behavior for the sender and pure behavior for the receiver. Second, their proofs for the results were not complete. In the present study, therefore, we review their model to complete their analyses. We adopt the Nash equilibrium as our equilibrium concept and show that, in this model, the Nash equilibrium is equivalent to the perfect regular equilibrium introduced by Jung (2010b). Finally, we conclude that the results of our complete analyses are similar to those in Crawford and Sobel (1982).

We first review the setting of the model in Crawford and Sobel (1982), hereinafter referred to as CS. There are two players, a sender and a receiver, and they play a signaling game. That is, only the sender observes her type $m \in [0,1]$ and sends a signal $n \in N$ to the receiver, where $N$ is an uncountable Borel set of feasible signals. Then, after observing the signal $n$, the receiver chooses his action $y \in \mathbb{R}$. Here, the sender's type $m$ is a random variable drawn from a probability distribution function $F(m)$ with probability density function $f(m)$. Regarding preferences, the sender has a twice continuously differentiable von Neumann-Morgenstern utility function $U^S(y, m, b)$, where $b$ is a scalar parameter, and the receiver has another twice continuously differentiable von Neumann-Morgenstern utility function $U^R(y, m)$. In addition, CS assumed that, for each $m$ and $i \in \{R, S\}$: 1) $U_1^i(y, m) = 0$ for some $y \in \mathbb{R}$; 2) $U_{11}^i(\cdot) < 0$; and 3) $U_{12}^i(\cdot) > 0$.

Under this setting of the model, CS considered only partial behavior of the players. CS defined the sender's strategy as a function $q : N \times [0,1] \to \mathbb{R}_+$ such that for each $m \in [0,1]$, $\int_N q(n|m)\,dn = 1$. In this equation, the left-hand side is an integral of $q$ with respect to the Lebesgue measure. Hence, this definition requires $q$ to be integrable with respect to the Lebesgue measure and its integral over $N$ to equal one for each fixed $m$, which in turn implies that $q(n|m)$ is a conditional probability density function in $n$ given $m$. That is, the sender's strategy was defined as a conditional probability density function over signals given her type. Next, the receiver's strategy was defined as a function $y : N \to \mathbb{R}$; that is, it was defined as an action plan $y(\cdot)$ such that, given a signal $n$, $y(n)$ specifies a single action for the receiver. These definitions of the strategies, however, limit the players' behavior, and as a result they do not allow a full characterization of equilibrium behavior, which in turn leaves the analyses in CS incomplete.
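Before examining these limitations in detail, it may help to fix a concrete specification of the model. The display below gives the uniform-quadratic specification commonly used to illustrate the CS model; the particular functional forms are an illustration satisfying the three assumptions above, not part of the review's argument, and they are reused in the numerical sketch further below.

$F(m) = m$ (so $m$ is uniform on $[0,1]$), $\quad U^R(y, m) = -(y - m)^2$, $\quad U^S(y, m, b) = -\bigl(y - (m + b)\bigr)^2$ with $b > 0$.

Under this specification, for the receiver, $U_1^R(y, m) = -2(y - m) = 0$ at $y = m$, $U_{11}^R = -2 < 0$, and $U_{12}^R = 2 > 0$; for the sender, $U_1^S(y, m, b) = -2\bigl(y - (m + b)\bigr) = 0$ at $y = m + b$, $U_{11}^S = -2 < 0$, and $U_{12}^S = 2 > 0$. Hence all three assumptions hold.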
In the sender's case, a conditional probability density function $q$ can represent only mixed behavior that assigns zero probability to every individual signal $n \in N$. Accordingly, it cannot express any behavior that assigns positive probability to a single signal $n$. Thus, this definition excludes all pure-strategy behavior under which each type $m$ of the sender would send only one signal $n(m) \in N$, and therefore it cannot characterize the sender's pure-strategy equilibrium behavior. For example, consider a partition equilibrium [2] in which the receiver takes $N(b)$ distinct actions $y \in \mathbb{R}$ in equilibrium, and let $[0, a_1), \ldots, [a_{N(b)-1}, 1]$ be the partition of the interval $[0,1]$ associated with this equilibrium. Then the CS definition excludes the sender's pure-strategy equilibrium behavior in which all sender types $m$ in the same interval $[a_{j-1}, a_j)$ send the same signal $n \in N$, that is, if $m, m' \in [a_{j-1}, a_j)$, then $n(m) = n(m') \in N$. In particular, the CS definition cannot represent the pure-strategy uninformative equilibrium in which the sender sends only one signal regardless of her type and the receiver takes only one action regardless of the sender's signal.

[2] A partition equilibrium in CS is an equilibrium in which i) the signals sent by the sender partition her type space $[0,1]$ into a finite number of intervals, and ii) the actions chosen by the receiver in equilibrium are monotonically associated with those intervals. Here, monotonicity means that higher actions are associated with higher intervals in the partition.

In the receiver's case, on the other hand, an action plan $y(\cdot)$ can describe pure behavior but not mixed behavior, because $y(\cdot)$ specifies only one action for each signal $n$. [3]

[3] In Footnote 2 of CS, CS mentioned that the receiver would choose pure behavior with probability one in equilibrium. Thus, it is possible that the receiver chooses mixed behavior in equilibrium, but all such equilibrium outcomes would occur with probability zero.

Consequently, CS considered only part of the players' behavior: the mixed behavior of the sender and the pure behavior of the receiver. The problem with this limited consideration of the players' behavior is that it also limits the players' equilibrium behavior, and therefore equilibrium behavior cannot be fully characterized under their definitions of the strategies. Since the analyses in CS are based on these definitions, their analyses are not complete.
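To see concretely the pure-strategy partition behavior described above, the following short sketch computes partition equilibria in the uniform-quadratic specification introduced earlier. It is an illustration only: the indifference (arbitrage) condition $a_{j+1} = 2a_j - a_{j-1} + 4b$ used here is the standard one for that specification, and the function names and the bias value $b = 0.05$ are assumptions made for the example rather than anything taken from the review.

```python
# Illustrative sketch (not from the review): partition equilibria of the
# uniform-quadratic Crawford-Sobel example, with U^R = -(y - m)^2,
# U^S = -(y - (m + b))^2, and m uniform on [0, 1].
# Assumed indifference condition at each boundary: a_{j+1} = 2*a_j - a_{j-1} + 4*b,
# whose solution is a_j = j*a_1 + 2*j*(j-1)*b with a_0 = 0 and a_N = 1.


def max_partition_size(b: float) -> int:
    """Largest N admitting an N-interval partition equilibrium (needs 2*N*(N-1)*b < 1)."""
    n = 1
    while 2 * (n + 1) * n * b < 1:
        n += 1
    return n


def partition_boundaries(b: float, n: int) -> list[float]:
    """Boundaries 0 = a_0 < a_1 < ... < a_n = 1 of an n-interval equilibrium."""
    a1 = (1 - 2 * n * (n - 1) * b) / n  # pinned down by the requirement a_n = 1
    return [j * a1 + 2 * j * (j - 1) * b for j in range(n + 1)]


def receiver_actions(bounds: list[float]) -> list[float]:
    """Receiver's best reply to each interval: its midpoint (uniform prior, quadratic loss)."""
    return [(lo + hi) / 2 for lo, hi in zip(bounds, bounds[1:])]


if __name__ == "__main__":
    b = 0.05                              # illustrative bias value
    n_max = max_partition_size(b)         # N(b) in CS's notation
    bounds = partition_boundaries(b, n_max)
    print("N(b) =", n_max)
    print("partition boundaries:", [round(a, 4) for a in bounds])
    print("receiver actions:    ", [round(y, 4) for y in receiver_actions(bounds)])
    # Every sender type in [a_{j-1}, a_j) sends one fixed signal: a point mass
    # that cannot be written as a conditional density q(n | m).
```

Each type in a given interval sends the same signal with probability one, which is precisely the point-mass behavior that no conditional density $q(n|m)$, and hence no CS sender strategy, can represent.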
In addition, CS did not provide complete proofs for their results, because CS left Lemma 1 unproven and all of their other results were proven on the basis of Lemma 1. Concretely, in the proof of Lemma 1, CS claimed the inequalities (8), which are $u \leq y^R(\bar m) \leq v$, where $y^R(\bar m)$ is an optimal action for the receiver when $\bar m$ is the sender's type and $u$ and $v$ are receiver actions induced in equilibrium with $u < v$. These inequalities, however, were not verified completely. CS derived them from the inequalities (5), which are $u < y^S(\bar m, b) < v$, where $y^S(\bar m, b)$ is an optimal action for the sender when $\bar m$ is her own type. From the inequalities (5), CS first derived the facts that, in equilibrium, the action $u$ would not be induced by any sender type $m > \bar m$, which is fact (6), and that the action $v$ would not be induced by any sender type $m < \bar m$, which is fact (7). Then, without further clarification, CS asserted that the inequalities (5) and the facts (6) and (7) imply the inequalities (8).

This assertion presupposes the existence of a well-defined conditional probability rule according to which the receiver can update his beliefs about the sender's types. In the CS model, the receiver observes only the sender's signals. So, based on what he observes, the receiver forms his beliefs about the sender's types according to his conditional probability rule. If the receiver could always identify the sender's actual type according to this conditional probability rule, then the receiver could reason that any signal inducing $u$ implies that the sender's actual type is less than or equal to $\bar m$, and thus we would have $u \leq \sup_{m \leq \bar m} y^R(m) = y^R(\bar m)$ in equilibrium. Similarly, we would have $y^R(\bar m) = \inf_{m \geq \bar m} y^R(m) \leq v$ in equilibrium. In the CS model, however, a conditional probability rule might not be well-defined for some signals $n \in N$, and therefore the inequalities (5) and the facts (6) and (7) might not imply the inequalities (8). In condition (2) of their equilibrium definition, CS defined a conditional probability rule as a function $p : [0,1] \times N \to \mathbb{R}_+$ such that, for each $m \in [0,1]$ and $n \in N$, $p(m|n) = q(n|m) f(m) / \int_0^1 q(n|t) f(t)\,dt$. That is, $p(m|n)$ is a conditional probability density function over types $m$ given signals $n$. Since $p(m|n)$ is defined as a fraction, if its denominator $\int_0^1 q(n|t) f(t)\,dt$ is zero, then the conditional probability density function $p(m|n)$ is not well-defined. In that case, the receiver cannot correctly infer the sender's type from the sender's signal. As a result, the cases $y^R(\bar m) < u$ and $v < y^R(\bar m)$ cannot be excluded in equilibrium even when $u$ would not be induced by any sender type $m > \bar m$ and $v$ would not be induced by any sender type $m < \bar m$. Since Lemma 1 did not consider these cases, it was not verified completely. Moreover, since all other results were proven on the basis of Lemma 1, they were not proven completely either.

To complete the analyses in CS, we first propose new definitions of the players' strategies. First, the sender's strategy is defined as a function $\sigma : [0,1] \times \mathcal{B}(N) \to [0,1]$, where $\mathcal{B}(N)$ is the class of Borel subsets [4] of $N$, such that i) for each $m \in [0,1]$, $\sigma(m, \cdot)$ is a probability measure on $\mathcal{B}(N)$, and ii) for each $A \in \mathcal{B}(N)$, $\sigma(\cdot, A)$ is $\mathcal{B}([0,1])$-measurable. Here, $\sigma(m, A)$ denotes the probability assigned to a set of signals $A$ given a type $m$. Next, the receiver's strategy is defined as another function $\rho : N \times \mathcal{B}(\mathbb{R}) \to [0,1]$, where $\mathcal{B}(\mathbb{R})$ is the class of Borel subsets of $\mathbb{R}$, such that i) for each $n \in N$, $\rho(n, \cdot)$ is a probability measure on $\mathcal{B}(\mathbb{R})$, and ii) for each $Y \in \mathcal{B}(\mathbb{R})$, $\rho(\cdot, Y)$ is $\mathcal{B}(N)$-measurable. [5] Again, $\rho(n, Y)$ denotes the probability assigned to a set of actions $Y$ given a signal $n$. In these definitions, conditions i) require $\sigma(m, \cdot)$ and $\rho(n, \cdot)$ to specify what to play at every information set $m$ in $[0,1]$ and $n$ in $N$, respectively. Conditions ii) require the strategies $\sigma$ and $\rho$ to yield well-defined expected utilities. It is then easy to see that these definitions of the players' strategies can properly describe both pure and mixed behavior, and as a result the analyses in CS can be completed through them. Let $\Sigma^S$ be the set of strategies for the sender and let $\Sigma^R$ be the set of strategies for the receiver. These definitions originate from Balder (1988) and Jung (2010a) and are adapted here to the CS model.

[4] Given a Borel set $X$, the class of Borel subsets $\mathcal{B}(X)$ is the smallest class of subsets of $X$ such that i) $\mathcal{B}(X)$ contains all open subsets of $X$, and ii) $\mathcal{B}(X)$ is closed under countable unions and complements.

[5] These strategies $\sigma$ and $\rho$ are also known as transition probabilities. For information on transition probabilities, please refer to Neveu (1965, III) and Ash (1972, 2.6).
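To confirm the claim that these definitions cover both pure and mixed behavior, the following sketch, written in the notation just introduced (the pure plans $n(\cdot)$ and $y(\cdot)$ are assumed to be Borel measurable), records how pure strategies and CS's density strategies are embedded in $\Sigma^S$ and $\Sigma^R$:

$\sigma(m, A) = \mathbf{1}\{n(m) \in A\}$ and $\rho(n, Y) = \mathbf{1}\{y(n) \in Y\}$ for all $A \in \mathcal{B}(N)$ and $Y \in \mathcal{B}(\mathbb{R})$, for pure plans $n(\cdot)$ and $y(\cdot)$;

$\sigma(m, A) = \int_A q(n|m)\,dn$ for all $A \in \mathcal{B}(N)$, for any CS density strategy $q$.

The first line is exactly the point-mass behavior that a conditional density cannot express, while the second shows that every CS strategy remains available, so the new strategy sets enlarge rather than replace the original ones.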
The solution concept in CS also needs to be changed for the complete analyses, because it is not compatible with our new definitions of the strategies. CS adopted the Bayesian Nash equilibrium, under which the players form conditional probabilistic beliefs about the other's actions and types and maximize their expected utilities with respect to those beliefs. Here, the receiver was assumed to use Bayes' rule to update his beliefs. Bayes' rule, however, would not properly condition the receiver's beliefs with respect to our strategies, because Bayes' rule defines a belief as a fraction and our strategies do not always admit such a fraction. Consequently, we adopt the Nash equilibrium concept, which does not rely on Bayes' rule.

The Nash equilibrium concept is known to be weak in that it does not properly condition actions off the equilibrium path, and the equilibrium concept used by CS shares this weakness. A Nash equilibrium is a profile of strategies consisting of rationalizable actions on the equilibrium path; it might therefore contain irrational actions off the equilibrium path. The problem with such irrational actions is that they could newly rationalize actions on the equilibrium path that could not be rationalized by rationalizable actions alone. In that case, since the irrational actions off the equilibrium path cannot themselves be rationalized, the actions on the equilibrium path that they support cannot be properly rationalized either. Accordingly, some Nash equilibrium outcomes might not be properly rationalized. The equilibrium concept in CS exhibits the same weakness as the Nash equilibrium: in their equilibrium, the receiver's conditional expected utility functions are not well-defined off the equilibrium path. [6] Since the receiver's rationalizable actions are based on his conditional expected utility functions, they are not well-defined off the equilibrium path either. As a result, the equilibrium concept in CS does not properly condition actions off the equilibrium path.

[6] The receiver's conditional expected utility functions are defined in terms of his conditional probability density functions, which are in turn based on Bayes' rule. Off the equilibrium path, however, Bayes' rule cannot be employed. Thus, the receiver's conditional probability density functions are not well-defined off the equilibrium path, and therefore neither are his conditional expected utility functions.

This weakness of the Nash equilibrium, however, is innocuous in the CS model because it almost surely does not affect equilibrium outcomes. In the CS model, the sender's signals are irrelevant to the players' utilities; they function only as a means of information transmission. Thus, each individual signal can be rationalized by the receiver's rationalizable actions in equilibrium, whether off or on the equilibrium path. That is, no signal is newly rationalized by the receiver's irrational actions off the equilibrium path, which means that these irrational actions do not affect equilibrium outcomes. Therefore, the weakness of the Nash equilibrium, its inability to condition actions off the equilibrium path, has no effect on the equilibrium outcomes. Theorem 1 below formalizes this innocuousness of the Nash equilibrium in the CS model: it states that the Nash equilibrium outcomes are almost surely the same as the perfect regular equilibrium outcomes.
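As a minimal worked illustration of this innocuousness, consider again the uniform-quadratic specification; the particular numbers below are illustrative assumptions, not taken from CS or from this review. Let every sender type send the same signal $n^*$ and let the receiver respond to every signal with $y^* = E[m] = 1/2$; this uninformative profile is a Nash equilibrium. Now modify the receiver's strategy at a single unused signal $\hat{n} \neq n^*$ so that it assigns probability one to the action $5$. The action $5$ is irrational in the sense used above: it is optimal against no type in $[0,1]$, since the receiver's optimal action satisfies $y^R(m) = m$. Yet the modified profile is still a Nash equilibrium, because for every type $m$,

$-\bigl(\tfrac{1}{2} - (m + b)\bigr)^2 > -\bigl(5 - (m + b)\bigr)^2$ whenever $b$ is small,

so no type gains by deviating to $\hat{n}$, and the distribution over outcomes is unchanged. A perfect regular equilibrium, by contrast, would require a rationalizable action at $\hat{n}$; replacing $5$ with, say, $y^*$ is exactly the kind of substitution carried out in the proof of Theorem 1 below.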
The perfect regular equilibrium introduced by Jung (2010b) is a revised version of the perfect Bayesian equilibrium, and it is designed for general games that allow a continuum of types and strategies. The perfect regular equilibrium [7] requires strategies to satisfy sequential rationality off the equilibrium path as well as on the equilibrium path. As a result, it can properly condition actions off the equilibrium path as well as on the equilibrium path, and thus the perfect regular equilibrium outcomes can be rationalized only by rationalizable actions. According to Theorem 1, since the Nash equilibrium outcomes are equivalent to the perfect regular equilibrium outcomes, they can almost surely be rationalized only by rationalizable actions as well.

[7] For a formal definition of the perfect regular equilibrium, please refer to Jung (2010b, Section 4).

We first formally define the Nash equilibrium in the CS model. For simplicity, let a probability measure $P : \mathcal{B}([0,1]) \to [0,1]$ satisfy $P(M) = \int_M f(m)\,dm$ for each $M \in \mathcal{B}([0,1])$. In the definition of the Nash equilibrium below, the integrals are well-defined for each strategy profile $(\sigma, \rho) \in \Sigma^S \times \Sigma^R$ according to Ash (1972, 2.6) because the utility functions $U^S$ and $U^R$ are assumed to be continuous and bounded above. [8]

[8] CS assumed that for each $m \in [0,1]$ there are $a$ and $a'$ in $\mathbb{R}$ such that $\partial U^S(a, m, b)/\partial a = 0$ and $\partial U^R(a', m)/\partial a' = 0$. Also, they assumed that $U^S$ and $U^R$ are strictly concave in $a$. These two assumptions imply that $U^S$ and $U^R$ are bounded above.

Definition 1 (Nash equilibrium) A strategy profile $(\sigma, \rho)$ is a Nash equilibrium if they solve
$\max_{\sigma \in \Sigma^S} \int_{[0,1]} \int_N \int_{\mathbb{R}} U^S(y, m, b)\, \rho(n, dy)\, \sigma(m, dn)\, P(dm)$ and
$\max_{\rho \in \Sigma^R} \int_{[0,1]} \int_N \int_{\mathbb{R}} U^R(y, m)\, \rho(n, dy)\, \sigma(m, dn)\, P(dm)$.

Theorem 1 In the CS model, if a strategy profile $(\sigma, \rho)$ is a Nash equilibrium, then there exists a perfect regular equilibrium strategy profile $(\sigma', \rho')$ such that the product measures $\rho\,\sigma\,P$ and $\rho'\,\sigma'\,P$ on $\mathcal{B}(\mathbb{R}) \times \mathcal{B}(N) \times \mathcal{B}([0,1])$ coincide.

The main difference between Nash equilibria and perfect regular equilibria occurs off the equilibrium path. That is, while Nash equilibria can include irrational actions off the equilibrium path, perfect regular equilibria cannot. Note that, in the CS model, since only the receiver's actions are relevant to the players' utilities, only the receiver can have irrational actions. Hence, the proof of Theorem 1 shows that, given any Nash equilibrium, we can replace the receiver's irrational actions off the equilibrium path with rationalizable actions without changing the sender's incentives. For example, suppose that the receiver's irrational actions off the equilibrium path are replaced with one of the rationalizable actions that the receiver actually plays in equilibrium. Then, in responding to the receiver's new strategy, the sender has no incentive to deviate from her original strategy, because the replacement action was already available to the sender, so inducing it would not increase the sender's expected utility.

Proof. Suppose that $(\sigma, \rho)$ is a Nash equilibrium. We first construct a perfect regular equilibrium strategy $\rho'$ of the receiver that almost surely induces the same equilibrium outcomes as $\rho$. Let $\underline{y} \in \arg\max_{y \in \mathbb{R}} U^R(y, m = 0)$ and $\hat{y} \in \arg\max_{y \in \mathbb{R}} U^R(y, m = 1)$.
Then $\underline{y}$ and $\hat{y}$ are uniquely determined according to the assumptions on the receiver's utility function, which are, given each $m$, $U_1^R(y, m) = 0$ for some $y \in \mathbb{R}$ and $U_{11}^R(\cdot) < 0$. In addition, we have $\underline{y} < \hat{y}$ since $U_{11}^R < 0$ and $U_{12}^R > 0$. Note that, since $U^R$ is twice continuously differentiable, we have $[\underline{y}, \hat{y}] = \{\dot{y} \in \mathbb{R} : \dot{y} \in \arg\max_y U^R(y, m) \text{ for some } m \in [0,1]\}$. So the interval $[\underline{y}, \hat{y}]$ is the set of rationalizable actions, those which maximize the receiver's utility function $U^R$ given some $m \in [0,1]$.

Let $N'$ be the union of the supports of $\sigma(m, \cdot)$ over $m \in [0,1]$. That is, $N'$ is the smallest closed set of signals such that the sender actually signals only within $N'$ according to her strategy $\sigma$. Then, responding to almost every signal $n \in N'$, the receiver would almost surely play a single action $y \in [\underline{y}, \hat{y}]$ in equilibrium because of the assumption $U_{11}^R(\cdot) < 0$. So, given almost every signal $n \in N'$, we have $\rho(n, \{y\}) = 1$ for some $y \in [\underline{y}, \hat{y}]$, since $\rho$ is a Nash equilibrium strategy of the receiver. In this theorem, we consider only outcomes in $\mathcal{B}(\mathbb{R}) \times \mathcal{B}(N) \times \mathcal{B}([0,1])$ that occur with probability one. Thus, by changing $\sigma(m, \cdot)$ within a set of types of probability zero if necessary, we can assume that $N \setminus N'$ is an uncountable Borel subset of $N$ without loss of generality. Furthermore, let the set of actions $Y = \{y \in [\underline{y}, \hat{y}] : \rho(n, \{y\}) = 1 \text{ for some } n \in N'\}$, and let $N''$ be the set of signals such that, if $n'' \in N''$, then $\rho(n'', \{y\}) < 1$ for every $y \in [\underline{y}, \hat{y}]$.

Now we can define the receiver's new strategy $\rho' : N \times \mathcal{B}(\mathbb{R}) \to [0,1]$ as follows. If $Y$ is closed, then pick one signal $n_0 \in N'$ such that $\rho(n_0, \{y\}) = 1$ for some $y \in [\underline{y}, \hat{y}]$, and define $\rho'$ as: when $n \in N' \setminus N''$, $\rho'(n, \cdot) = \rho(n, \cdot)$; and, when $n \in N \setminus (N' \setminus N'')$, $\rho'(n, \cdot) = \rho(n_0, \cdot)$. If $Y$ is not closed, then define $\rho'$ as: when $n \in N' \setminus N''$, $\rho'(n, \cdot) = \rho(n, \cdot)$; and, when $n \in N \setminus (N' \setminus N'')$, $\rho'(n, \{y\}) = 1$ for some $y \in [\underline{y}, \hat{y}]$ chosen so that $Y' = \{y \in [\underline{y}, \hat{y}] : \rho'(n, \{y\}) = 1 \text{ for some } n \in N\}$ is the closure of $Y$, that is, $Y' = \bar{Y}$. Note that, in this second case, we can find a $\rho'$ that satisfies the condition $Y' = \bar{Y}$ since the set $N \setminus N'$ is uncountable. Therefore, we define $\rho'$ from $\rho$ by eliminating irrational actions, including incredible threats, namely actions $y \notin [\underline{y}, \hat{y}]$, and by adding the limit points of $Y$. The requirement $Y' = \bar{Y}$ allows us to well-define a perfect regular equilibrium strategy of the sender in responding to $\rho'$.

Next, we construct a perfect regular equilibrium strategy $\sigma'$ of the sender that almost surely induces the same equilibrium outcomes as $\sigma$. In the Nash equilibrium $(\sigma, \rho)$, the sender plays signals in $N \setminus (N' \setminus N'')$ with probability zero. Thus, given almost every sender type $m \in [0,1]$, the sender has no incentive to play signals in $N \setminus (N' \setminus N'')$ in responding to $\rho$. Note that when we construct the receiver's new strategy $\rho'$, we replace the receiver's actions contingent on the signals in $N \setminus (N' \setminus N'')$ either with the action contingent on the signal $n_0 \in N' \setminus N''$ or with the actions in $\bar{Y} \setminus Y$. As a result, the receiver's strategy $\rho'$ excludes those actions in which the sender almost surely has no interest. Moreover, $\rho'$ introduces only new actions in $\bar{Y} \setminus Y$, in which the sender almost surely has no interest either. Therefore $\sigma$, responding to $\rho'$, is still the sender's best response. For this reason, given almost every $m \in [0,1]$, we have

$\sigma(m, \cdot) \in \arg\max_{\sigma(m, \cdot)} \int_N \int_{\mathbb{R}} U^S(y, m, b)\, \rho'(n, dy)\, \sigma(m, dn)$.   (A)

Let $M = \{m \in [0,1] : m \text{ satisfies condition (A)}\}$; then we have $P(M) = 1$.
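Before turning to the sender's strategy, it may help to trace the construction of $\rho'$ through the simplest case, the uninformative equilibrium of the uniform-quadratic specification used earlier; this is an illustration only and not part of the proof. Suppose every type sends the same signal $n^*$, the receiver plays $\rho(n^*, \{1/2\}) = 1$, and at unused signals $\rho$ prescribes possibly irrational actions that no sender type prefers to $1/2$ (such as the action $5$ from the illustration above), so that $(\sigma, \rho)$ is a Nash equilibrium. Then $N' = \{n^*\}$, $N'' = \emptyset$, and $Y = \{1/2\}$, which is closed, so the construction picks $n_0 = n^*$ and sets $\rho'(n, \cdot) = \rho(n^*, \cdot)$ for every $n \notin N' \setminus N''$; that is, the receiver now answers every signal with $1/2$. Under $\rho'$, every signal gives a type-$m$ sender the same payoff $-(1/2 - (m + b))^2$, so condition (A) holds for every $m$, $M = [0,1]$, and in the next step $\sigma' = \sigma$, leaving the induced outcome distribution unchanged.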
Now we can define the sender's new strategy $\sigma' : [0,1] \times \mathcal{B}(N) \to [0,1]$ as $\sigma'(m, \cdot) = \sigma(m, \cdot)$ when $m \in M$, and $\sigma'(m, \cdot) \in \arg\max_{\sigma(m, \cdot)} \int_N \int_{\mathbb{R}} U^S(y, m, b)\, \rho'(n, dy)\, \sigma(m, dn)$ when $m \in [0,1] \setminus M$. Here, $\arg\max_{\sigma(m, \cdot)} \int_N \int_{\mathbb{R}} U^S(y, m, b)\, \rho'(n, dy)\, \sigma(m, dn)$ is not an empty set for each $m \in [0,1] \setminus M$, since the set $Y' = \{y \in [\underline{y}, \hat{y}] : \rho'(n, \{y\}) = 1 \text{ for some } n \in N\}$ is compact. Finally, it is easy to see that the new pair $(\sigma', \rho')$ is a perfect regular equilibrium strategy profile. Note that $\sigma'(m, \cdot)$ is the same as $\sigma(m, \cdot)$ for almost every $m \in [0,1]$ and $\rho'(n, \cdot)$ is the same as $\rho(n, \cdot)$ for almost every $n \in N'$. As a result, the product measures $\rho\,\sigma\,P$ and $\rho'\,\sigma'\,P$ on $\mathcal{B}(\mathbb{R}) \times \mathcal{B}(N) \times \mathcal{B}([0,1])$ coincide. This final observation completes the proof.

Remark 1 Theorem 1 means that, in the CS model, outcomes in a Nash equilibrium are almost surely perfect regular equilibrium outcomes. Note that all outcomes in a perfect regular equilibrium are Nash equilibrium outcomes according to Jung (2010b, Proposition 2). In the CS model, therefore, Nash equilibria are equivalent to perfect regular equilibria in the sense that both almost surely induce the same equilibrium outcomes.

Finally, our analyses, based on the full consideration of the players' behavior, lead to results similar to those in CS. Concretely, if the players' interests differ, then the Nash equilibria induced under the complete definitions of the strategies are all partition equilibria of the kind that Crawford and Sobel (1982) derived under the partial definitions of the strategies, with the exception of an action set that occurs with probability zero. Therefore, every theorem in Crawford and Sobel (1982) remains true almost surely under the complete definitions of the players' strategies, and consequently the sender's signals are more informative in the Pareto-superior equilibrium when the players' interests are more closely aligned. [9] The exception in equilibrium arises because the Nash equilibrium concept cannot condition actions that do not affect expected utilities. In a Nash equilibrium $(\sigma, \rho) \in \Sigma^S \times \Sigma^R$, a set of actions $Y$ occurs with probability zero if and only if $\int_{[0,1]} \int_N \int_Y \rho(n, dy)\, \sigma(m, dn)\, P(dm) = 0$. Hence, the actions in such a set $Y$ do not affect expected utilities, and thus they can be part of a Nash equilibrium regardless of whether they are rationalizable.

[9] If the players' interests coincide, then one of the Nash equilibria contains perfectly informative signals by the sender. However, this Nash equilibrium is not an equilibrium in CS because perfectly informative signals cannot be represented under the partial definition of the sender's strategies in CS. Therefore, if the players' interests coincide, the results in CS differ from the results in the current study.

Acknowledgments

I am grateful to Joel Sobel and an anonymous referee for their valuable comments that have improved this draft. I would also like to thank Wei-Ting Liao for his suggestions.

References

1. Ash, Robert B. (1972): "Real Analysis and Probability," Academic Press, New York.
2. Balder, Erik J. (1988): "Generalized Equilibrium Results for Games with Incomplete Information," Mathematics of Operations Research, 13, 265–276.
3. Crawford, Vincent P. and Sobel, Joel (1982): "Strategic Information Transmission," Econometrica, 50, 1431–1451.
4. Jung, Hanjoon M. (2010a): "Complete Sequential Equilibrium," Working paper.
5. Jung, Hanjoon M. (2010b): "Perfect Regular Equilibrium," Working paper.
6. Neveu, J. (1965): "Mathematical Foundations of the Calculus of Probability," Holden-Day, San Francisco.