The interrogation game: Using coercion and rewards to elicit information from groups

David Blake Johnson, Department of Economics, University of Central Missouri
John Barry Ryan, Department of Political Science, Stony Brook University

Forthcoming at Journal of Peace Research

Abstract

While there is widespread debate about techniques for obtaining information from noncooperative sources, there is little research concerning the efficacy of different methods. We introduce the ‘Interrogation Game’ as a simple model of the information a source will provide when faced with either: (1) an interrogator using coercive techniques or (2) an interrogator offering rewards. The model demonstrates that coercive interrogation results in a very slight increase in accurate information from knowledgeable sources, but much more inaccurate information from ignorant sources. In short, when ignorant group members refuse to provide information, this increases the inequality of payoffs among group members under coercion while the same behavior—truthfully revealing ignorance—decreases the inequality of payoffs under reward. If ignorant detainees have concern for the payoffs of their other group members, they will admit their ignorance in reward, but not coercion. We test the model with a group-based experiment. As the model predicts, coercion leads subjects to provide more inaccurate information. Contrary to the model’s expectations, however, accurate information is almost equally likely in the coercion and reward treatments.

Keywords: interrogation, social dilemmas, cooperation, experiments.

Corresponding Author: [email protected]

Introduction

The 2014 Senate Select Committee on Intelligence report on the steps the C.I.A. took to obtain information under the guise of protecting national security renewed the debate about how to weigh the need for information with the requirements of basic human rights.
This debate has taken place with almost no scholarly research on what makes for an effective interrogation. The difficulties of studying this issue while upholding the standards of ethical research, as well as the understandable emotion tied to it, can hinder scientific inquiry. As a result, both sides have relied on anecdotal and often contradictory evidence. Both sides point to examples where coercion supposedly did or did not ‘work’, but a systematic analysis of interrogation methods is largely lacking (Armshaw, 2011; Borum, 2006).

There has been a great deal of research on the conditions under which governments will violate human rights to protect security or to maintain control over dissidents (e.g., Conrad & Moore, 2010; Conrad & Ritter, 2013; Davenport, 2007; Hill, 2010; Sullivan, 2014). This research, however, has not looked specifically at the efficacy of interrogation. Even Rejali (2009) admits his evidence, which comes from many countries across several centuries, is ‘too fragmentary to allow for precise, validated causal claims’ (p. 7). In fact, the Intelligence Science Board’s 2006 report on interrogation techniques, Educing Information, concluded that there was almost no actual research concerning interrogation.1 Further, when scientists do involve themselves in the issue, they often abandon rigor and nuance in favor of moral absolutes (Suedfeld, 2007). We argue that this is unfortunate because, if research demonstrated coercive techniques were ineffective, then there would be no justification for their use regardless of differing views as to their morality (Schiemann, 2012).

1 For example, of the techniques outlined in the Army Field Manual 34-52: Intelligence Interrogation, the report states: ‘Even people intimately familiar with 34-52 are unaware of any studies or systematic analyses that support their effectiveness, or of any clear historical record about how the techniques were initially selected for inclusion’ (Borum, 2006, pp. 17-18). The New York Times reports the techniques used at Guantanamo Bay were developed from a 1957 Air Force report on how Chinese soldiers obtained confessions during the Korean War (Shane, 2008). The story notes that the techniques were designed to obtain confessions regardless of their veracity.

This paper serves as a first attempt to test experimentally how individuals respond to coercion and rewards as interrogation techniques. Of primary concern is demonstrating when individuals will provide accurate information to interrogators—including truthfully telling the interrogator they do not know the information the interrogator desires. Ethical concerns obviously prevent research on the harshest methods, but it is difficult to evaluate any hypotheses through the observation of real-world interrogations–even in a setting as inconsequential as a high school principal’s office. The researcher must know a priori the information the source possesses to evaluate whether a source speaks truthfully (which would mean the interrogation was pointless). As a result, we examine interrogation with an incentivized, group-based experiment. This particular research method is best suited to answering a limited set of questions, but it can reveal the importance of the potential source’s knowledge state, as well as the source’s attitudes towards the other group members and the interrogator.

In the experiment, subjects, in groups of four, are assigned a playing card, but only two of the subjects know what that card is. The computer prompts subjects to reveal the group’s card. If a subject reveals the true card, the other group members receive no payment. In the coercion treatment, subjects lose money if they fail to reveal the card. In the reward treatment, subjects receive money in exchange for the information.
The results show that in coercion more subjects claim to know what the card is–including subjects who actually are ignorant. We will show that under coercion ignorant subjects will lie and act as if they know the card because this results in payoffs being similar among all group members. In reward, ignorant subjects who care about the payoffs of the other members of their group should truthfully state that they ‘do not know’–this leads to equal payoffs. This difference results in subjects providing lower quality information under coercion than under reward.

Interrogators are poor lie detectors

When an interrogator questions an individual, the interrogator reveals his or her ignorance. The interrogator is not only ignorant about the information that he or she is asking for, but also does not know whether the target under questioning knows the answer. Clearly, if interrogators were human lie detectors, the process of questioning would be easier. In short, when a source says, ‘I do not know’, the interrogator needs to determine whether or not the source is truly ignorant. If the source is ignorant, then further interrogation could only result in the source providing bad information. Unfortunately for the interrogator, techniques for determining the truthfulness of sources are not well-developed (Mann, Vrij & Bull, 2002). Consider Vrij et al.’s (2007) experiments in which police officers observed interrogators using different techniques to interview mock suspects. Afterwards, the officers were more likely to falsely accuse people of lying under harsher questioning techniques and were more likely to believe that they were accurate in those accusations.
Even though this research shows that mildly coercive techniques could lead to inaccurate information, the use of more extreme techniques is prevalent even in countries that claim to protect human rights.2 For example, Parry (2010) details the frequent use of torture after World War II by democracies, and by other countries acting on a democracy’s behalf, that are signatories to the United Nations Convention Against Torture (CAT). These countries claim that exigent circumstances compel them to use extraordinary means of obtaining information. Democracies are constrained in ways that make the open use of physical coercion difficult. They do, however, practice techniques, such as sleep deprivation, that physically and mentally stress the potential source without leaving bruises and scars (Ron, 1997; Rejali, 2009).

2 Physically coercive techniques, especially instances of torture, raise additional concerns outside the purview of this study. The trauma resulting from such interrogation could even cause temporary memory loss, making a knowledgeable source temporarily ignorant. In some cases, torture can be so harsh that the detainees die before they have a chance to provide information (Arrigo, 2004).

To understand the prevalence of coercive interrogation, Wantchekon & Healy (1999) model a ‘game’ between an interrogator and a non-cooperative source. In their model, the potential sources have varying levels of resolve and information. They find that once a state selects coercion as a means of obtaining information, it will continue to do so. This occurs because all types of interrogators in their model wish to test the mettle of the victim (i.e., to determine what type of source they have). Unless all potential sources are of the ‘strong’ type—that is, they will never reveal any information—then all interrogators will use and continue to use some level of coercive interrogation.
Wantchekon & Healy (1999) ‘assume that the state has the means to verify the truthfulness of the information provided by the victim’ (p. 600). Without this assumption, states may be less willing to use coercive techniques. We argue that both knowledgeable and ignorant sources can provide inaccurate information and that the state is unable to tell the difference at the time of the interrogation. If states act on information of dubious quality, it could prove quite costly for the state. Knocking down one door when the bomb is behind another does not prevent the attack; it embarrasses and monetarily costs the state, and punishes the innocents whose door the state destroyed.

Comparing coercion and reward

In developing our argument, we rely on a simple model that allows us to build hypotheses which we can test with an incentivized experiment. The full model is available in the online appendix; in this section we present the intuition behind the model’s predictions. We use the model to generate hypotheses for the experiment, but the model we describe is general enough to allow for predictions based on other design choices.

We are not the first to consider a formal model of interrogation. In addition to the Wantchekon & Healy (1999) model discussed earlier, Schiemann (2012) considers the efficacy of torture as an interrogation technique. Our model differs from these previous arguments in several important ways. First, we develop a means of evaluating the efficacy of interrogation techniques. In evaluating efficacy, we consider primarily the accuracy of the information gleaned. Schiemann (2012) discusses the reliability of information in his model, but, even if coercive interrogation led to reliable information, its use could only be justified if the same (or more reliable) information could not have been gathered in a different way.
For this reason, we compare the quality of information received under coercive interrogation to the quality of information received under a rewarding interrogator. In our model, a coercive interrogator is one who reduces the detainee’s utility for not providing accurate information. A rewarding interrogator is one who provides the detainee with utility gains in exchange for providing accurate information.3

3 It is important to note that a detainee can have an overall utility gain for not providing information in coercion. A detainee, for example, might provide information to an interrogator to avoid physical harm, but this may still result in a loss of utility because the detainee suffers a psychological cost from hurting the other members of the detainee’s group and from helping an interrogator he or she does not like.

The second difference is how our model treats the detainee’s willingness to provide information. Both Wantchekon & Healy (1999) and Schiemann (2012) separate detainees into ‘weak’ and ‘strong’ types–at the extremes, those who always provide accurate information and those who never provide accurate information. We argue that a detainee’s willingness to provide information is driven by (1) their attachment to other members of the detainee’s group and (2) his or her willingness to help the interrogator. Hence, in the language of the previous models, an extreme ‘weak’ type in our model is someone who does not care about the other group members and wants to provide accurate information. Extreme ‘strong’ types are detainees who care about the other group members as much as they care about themselves and feel no psychological pain from lying to the interrogator. In fact, they may view lying to the interrogator as a benefit. Finally, and most importantly, we argue that the detainee’s knowledge state (i.e., whether they possess the information the interrogator desires) interacts with the interrogator type (coercive or rewarding).
The model predicts knowledgeable detainees will provide slightly more accurate information under coercion, but ignorant detainees will provide more inaccurate information under coercion (ceteris paribus). We will explain the reason for this in more detail later. In short, when ignorant group members refuse to provide information, this increases the inequality of payoffs among group members under coercion, while the same behavior decreases the inequality of payoffs under reward. If any ignorant detainees have concern for the payoffs of their other group members, they will admit their ignorance in reward, but not coercion.

Payoffs in coercion and reward

In the language of formal models, we refer to the detainee as a ‘player.’ There are two elements of the interrogation context that we consider. First, the player’s knowledge state (K ∈ {k, u})–i.e., the player knows the information the interrogator desires (k) or they are uninformed (u). Second, the interrogator type: coercive or rewarding (T ∈ {c, r}).

In the model, player i can take three actions: (1) provide accurate information; (2) say that they ‘do not know’; (3) provide inaccurate information. For knowledgeable players, either providing inaccurate information or saying that they ‘do not know’ is lying. For ignorant players, providing any information is a lie, but they could accidentally provide accurate information. The interrogator type (coercive or rewarding) determines the payoffs the player (i) receives for each type of action. We refer to the payoffs as monetary payoffs because that is what we use in the experiment, but outside of the lab interrogators would often impose (or relieve) physical or psychological pain as payoffs. All players begin with a status quo endowment (x_i) prior to interrogation.

Under Coercion (interrogator type c), we make the following payoff assumptions:

1. If accurate information is revealed, i gains no money while the other group members lose money;
2. If the player says they ‘do not know’, i pays a cost equal to the size of their endowment while the other group members lose nothing;
3. If inaccurate information is revealed, i loses some money while the other group members lose nothing. The amount of money player i loses is denoted β·x_i, where 0 < β < 1.

Under Reward (interrogator type r), we make these payoff assumptions:

1. If accurate information is revealed, i gains money equal to the size of their endowment while the other group members lose money;
2. If the player says they ‘do not know’, i gains nothing while the other group members lose nothing;
3. If inaccurate information is revealed, i gains some money (β·x_i, where 0 < β < 1) while the other group members lose nothing.

Hence, in both coercion and reward, monetary payoffs for player i are ranked as follows: accurate information > inaccurate information > saying, ‘I do not know.’ Players receive a greater payoff for inaccurate information than for stating ‘do not know’ because providing inaccurate information may make it appear that the player is cooperating–e.g., the information the player provides may have been accurate at one time. Finally, we also assume the punishment the coercive interrogator uses for not providing information is the same size as the gain the rewarding interrogator provides for providing information. This keeps the payoffs symmetric and allows us to develop expectations for treatment effects in Coercion and Reward.

The psychological benefits and costs

The previous section outlined the monetary payoffs the interrogator offers based on the player’s decision. Player i’s utility, however, is not determined by the monetary payoffs alone. We must also consider the player’s attitudes towards the other group members and the interrogator. Some players will be selfish and will take the largest payoff without worrying about harming the other members of their group.
Other players will sacrifice their own monetary gains to ensure that their group’s overall payoff is not hurt. Similarly, some players will view the interrogator much more negatively than others. In short, there are psychological benefits and costs that we must consider in addition to the monetary payoffs. Hence, when we develop a full utility function for player i, there are three components:

1. The player’s own payoff (ln(x_i)), which is logged to indicate diminishing returns.
2. The player’s preference for the average earnings of the other group members (α·ln(x̄_{j≠i})) with α < 1. The closer α is to 1, the more the player cares about the payoffs of others. If α equals zero, the player is selfish. The α parameter is less than 1 to indicate that all individuals care more about their own payoffs than the payoffs of the other group members–a common assumption in models which incorporate other players’ payoffs into player i’s utility function (Fehr & Schmidt, 1999).
3. The player’s willingness to provide information to the interrogator (L). This combines both the player’s affect towards the interrogator and their preference for telling the truth (Gneezy, 2005).4 If L > 0, then the player pays a cost to mislead the interrogator, either because the player likes the interrogator or does not want to tell a lie. If L < 0, then the player receives a benefit from misleading the interrogator.

Regardless of the interrogator type and knowledge state, i has the additive three-component utility function seen in equation 1. The knowledgeable and ignorant players’ exact utility functions for different decisions with both interrogator types are summarized in Table I.
U_i^{KT} = ln(x_i) + α ln(x̄_{j≠i}) − L   (1)

Obviously, it is possible to build other considerations into the model, but this simple utility function is able to consider five key elements of any interrogation situation: (1) the detainee’s position in the group–i.e., knowledgeable or ignorant; (2) the interrogator type; (3) the detainee’s payoffs from interrogation; (4) the detainee’s feelings toward the other group members; (5) the detainee’s feelings towards the interrogator. In the following sections, we explain the intuitions behind the model’s expectations.

4 Hurkens & Kartik (2009) argue that there are two types of individuals: people who will lie whenever it would be beneficial to them, even if the lie hurts others, and people who will never lie.

[TABLE I ABOUT HERE]

Behavior of ignorant players

When confronted with an ignorant player (detainee), an interrogator is in a difficult situation. The interrogator wants the player to truthfully reveal his or her ignorance, but the interrogator has no way of knowing whether or not the player is telling the truth. Decisions on the part of the interrogator affect whether an ignorant player will admit that they do not have the information the interrogator desires. For example, if a rewarding (coercive) interrogator offers a very large reward (punishment), more ignorant individuals will provide false information in the hopes of receiving a payment for accidentally stumbling on useful information.
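The decisions of both player types are governed by equation (1). As a rough illustration, it can be written as a small function. This is a sketch, not the authors’ code, and it adopts one convention of our own: a payoff of zero contributes nothing to utility, mirroring the dropped own-payoff term in equation (3) below.

```python
import math

def log0(x):
    # Assumption: a zero payoff simply drops its log term,
    # as in the own-payoff term omitted from equation (3).
    return math.log(x) if x > 0 else 0.0

def utility(own, others_avg, alpha, L, lied):
    """Equation (1): log of own payoff, plus the alpha-weighted log of the
    other group members' average payoff, minus the lying cost L when i lies."""
    return log0(own) + alpha * log0(others_avg) - (L if lied else 0.0)

# A group-minded player (alpha = 0.8) with an endowment of 5: truthfully
# admitting ignorance keeps the full own-payoff term under Reward but
# forfeits it under Coercion.
print(utility(5.0, 5.0, 0.8, 0.5, lied=False))  # Reward, truth
print(utility(0.0, 5.0, 0.8, 0.5, lied=False))  # Coercion, truth
```

Even in this toy form, the gap between the two printed values is the wedge the model exploits: the truthful ignorant player is strictly better off under a rewarding interrogator.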
Still, regardless of the size of the stakes, ignorant subjects will reveal more inaccurate information under coercion than under reward.5 We can see that an ignorant player receives a greater benefit for telling the truth under reward by looking at the payoffs a player receives for a truthful revelation of ignorance. First, consider the player’s utility in Reward when he or she says, ‘I don’t know’:

U_i^{ur}(truth) = ln(x_i) + α ln(x̄_{j≠i}) − L·0   (2)

Under reward, the player keeps their endowment and does not harm the payoffs of the other group members if he or she says, ‘I don’t know’. Now, look at the utility in Coercion:

U_i^{uc}(truth) = α ln(x̄_{j≠i}) − L·0   (3)

The player receives no monetary payoff. The only benefit they receive is the psychological benefit of seeing that their other group members are unharmed–assuming they care about the monetary payoffs of the other members of their group at all. Hence, an ignorant player receives a larger payoff for telling the truth under reward than under coercion.

5 To determine how the size of the stakes affects player behavior, we ran 30 million simulations varying the size of the stakes, the probability the ignorant subject could ‘guess’ useful information, and the size of the punishment (reward) the interrogator offers for false information in coercion (reward). The expectation presented here (that lying is more common under coercion) held in 99.92% of the simulated combinations; the expectation reversed only when the stakes were extremely small (i.e., less than a nickel in the experiment).

One way to think about the difference between Coercion and Reward for ignorant detainees is that lying under coercion is the fairer option while lying for a reward is unfair. Our conception of fairness borrows from Fehr & Schmidt (1999): an outcome is fair when all group members’ payoffs are equal.
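The truth-versus-guess comparison behind equations (2) and (3) can be sketched numerically. The parameter values below are illustrative assumptions on our part (stakes of 5, β = 0.5, a one-in-four guessing chance, a group-minded player with a mild truth-telling preference), not the paper’s simulation code, and zero payoffs are assumed to drop their log term as in equation (3).

```python
import math

def log0(x):
    # Assumption: a zero payoff drops its log term, as in equation (3).
    return math.log(x) if x > 0 else 0.0

def eu(lottery, alpha, L, lied):
    """Expected utility over (probability, own payoff, others' avg payoff)."""
    u = sum(p * (log0(own) + alpha * log0(others)) for p, own, others in lottery)
    return u - (L if lied else 0.0)

x, beta, p_guess = 5.0, 0.5, 0.25   # illustrative stakes and guessing odds
alpha, L = 0.95, 0.3                # group-minded, mildly truth-preferring

# Coercion: truth -> i gets 0, others keep x; a guess is correct w.p. 1/4.
truth_c = eu([(1.0, 0.0, x)], alpha, L, lied=False)
guess_c = eu([(p_guess, x, 0.0), (1 - p_guess, x - beta * x, x)], alpha, L, lied=True)

# Reward: truth -> everyone keeps x; a guess earns x (correct) or beta*x (not).
truth_r = eu([(1.0, x, x)], alpha, L, lied=False)
guess_r = eu([(p_guess, 2 * x, 0.0), (1 - p_guess, x + beta * x, x)], alpha, L, lied=True)

print(guess_c > truth_c)  # the ignorant player lies under Coercion
print(truth_r > guess_r)  # ...but admits ignorance under Reward
```

Under these assumed parameters the same player guesses under Coercion and truthfully admits ignorance under Reward, the asymmetry the model predicts.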
In the case of Coercion, truthful revelation of ignorance is unfair to the detainee because they receive less money than the other members of their group. If the player guesses what the card is, then his or her expected payoff increases while the expected payoffs of the other group members decrease. Hence, if the ignorant player guesses, all group members have similar expected payoffs, which is fairer. In short, even if the player is concerned about the payoffs of others (i.e., they have an α near 1), the player will lie under coercion because lying decreases the inequality of payoffs.

Under Reward, the reverse is true. If the player lies and acts as if they know the information, then this increases the inequality of payoffs. If the player says, ‘I don’t know’, then all group members receive the same payoffs (ln(x_i)). This is because the rewarding interrogator takes no action without information. Thus, if the player cares about the payoffs of the other group members, the ignorant player will truthfully state, ‘I don’t know’ under reward. If the player does not care about fairness (i.e., α is near 0), they will generally lie under Reward, but they would lie under Coercion as well. Hence, we expect ignorant players to provide more truthful information under Reward.

Behavior of knowledgeable players

It is easy to see that a knowledgeable player will never say, ‘I do not know’ because purposefully revealing inaccurate information provides the player with a larger payoff without harming other group members. There is a much less straightforward answer to the question of whether knowledgeable players will provide more accurate information in Coercion or Reward. The answer depends on the particular payoffs the interrogators provide for each type of information. For knowledgeable players, the model is similar to a Dictator Game in that the player can set payoffs for him or herself and all the other group members.
As concern for the other group members’ payoffs increases, the likelihood that he or she provides accurate information decreases. Similarly, as disdain for the interrogator and willingness to lie increase, the likelihood of accurate information decreases. Both of these expectations are the same in Reward and Coercion. As such, they do not tell us whether the knowledgeable player will provide more information in either treatment.

Recall that the punishment the interrogator imposes for inaccurate information in coercion is the same size as the reward the interrogator offers for inaccurate information in reward. That size is represented by β in the complete treatment of the model. In Reward, the monetary payoffs for accurate and inaccurate information are as follows:

Accurate = ln(x_i + x_i);  Inaccurate = ln(x_i + βx_i)   (4)

In Coercion, the monetary payoffs for accurate and inaccurate information are as follows:

Accurate = ln(x_i);  Inaccurate = ln(x_i − βx_i)   (5)

When β = 0, we would expect more accurate information in reward than in coercion. In this case, the player receives twice the payoff for accurate information in Reward and receives almost no punishment for false information in Coercion. Hence, someone willing to cooperate with the interrogator will provide accurate information in Reward, but lie in Coercion. As the size of the β parameter increases, the probability that the knowledgeable player will provide accurate information in Coercion increases because the punishment for false information increases. At the same time, in Reward, the probability that the knowledgeable player will provide inaccurate information also increases, because the difference in monetary payoffs for accurate and inaccurate information shrinks at larger values of β.
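The role of β in equations (4) and (5) can be sketched by tracking the utility gap between accurate and inaccurate information in each treatment. This is a sketch that isolates the β effect from α and L; the endowment of 5 and β = 0.5 match the experiment’s stakes described later.

```python
import math

x = 5.0  # endowment, matching the $5 stakes used in the experiment

def gap_reward(beta):
    # Equation (4): utility advantage of accurate over inaccurate information
    return math.log(x + x) - math.log(x + beta * x)

def gap_coercion(beta):
    # Equation (5): the same advantage under coercion
    return math.log(x) - math.log(x - beta * x)

# At beta = 0, only Reward differentiates accurate from inaccurate information.
print(gap_reward(0.0), gap_coercion(0.0))

# As beta grows, Coercion's gap overtakes Reward's: the punishment for false
# information bites while Reward's payoff difference shrinks.
print(gap_coercion(0.5) > gap_reward(0.5))
```

At β = 0 the Reward gap is ln 2 while the Coercion gap is zero; at the experiment’s β = 0.5 the ordering has flipped, which is why the model expects slightly more accurate information from knowledgeable players under coercion.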
For the particular parameters in the experiment we use to test our model (and most values of β), we expect knowledgeable players to avoid the punishment for false information in coercion and thus to provide more truthful information in coercion than in reward–the opposite of what we expect from the ignorant subjects. We expect, however, that the difference between coercion and reward among knowledgeable players will be smaller than the difference between coercion and reward among ignorant players.

To understand the logic behind this expectation, one needs to consider that the effect of the interrogator type is larger for ignorant players than for knowledgeable players. Recall that for knowledgeable players this is essentially an n-player dictator game in that they can determine everyone’s payoffs with certainty. This is true for both coercion and reward. For knowledgeable players, what determines their decision, therefore, is their feelings towards the other group members and the interrogator–the α and L parameters. As α decreases or L increases, a knowledgeable player is more likely to tell the truth in either coercion or reward. The only difference is that a player with a slightly smaller L parameter will tell the truth in coercion instead of reward.6

6 In fact, if we do not assume that an individual’s payoffs offer diminishing returns, Coercion and Reward are exactly the same for the knowledgeable player.

For ignorant players, the type of interrogator matters much more. If the interrogator offers rewards, a player who is indifferent about lying to the interrogator (L = 0) might truthfully admit ignorance if they want to protect the payoffs of the other group members (i.e., α is close to 1). The same ignorant player, however, will lie under Coercion. In coercion, telling the truth protects the payoffs of the other group members, but hurts the individual’s own payoff. This result is unfair to the player, and they will lie to avoid it.
In sum, this suggests that rewarding interrogators will receive superior information (more truth and fewer lies) because ignorant players lie less in reward while the interrogator type matters little to knowledgeable players. We have three hypotheses for the experiment:

Hypothesis 1: The quality of information will be higher in reward than coercion—correct information is slightly less likely in reward, but incorrect information is much less likely in reward;

Hypothesis 2: Ignorant subjects will truthfully admit they are ignorant more frequently in reward compared to coercion;

Hypothesis 3: Knowledgeable subjects will provide accurate information more frequently in coercion compared to reward.

Experimental design

We call our experiment ‘The Interrogation Game’.7 In the experiment, subjects are placed in groups and play against an outside party who requires a piece of information in order to take an action. Only some of the group members have the information. To ascertain the piece of information, the outside party ‘interrogates’ one randomly chosen group member, who may or may not know the required information. The interrogator takes an action based upon what that source says.8 The interrogator may punish sources for not revealing the information (Coercion) or reward them for revealing accurate information (Reward). That is, unlike most models of group behavior in which third parties punish or reward to encourage cooperation (e.g., Fehr & Fischbacher, 2004), the third party in this experiment encourages defection among the group members. If the source provides accurate information, the other group members receive a payoff of zero.

7 Like the Dictator Game, what we call the Interrogation Game is not a game in the strict sense because there is no direct interaction between player behaviors and payoffs.

Subjects and groups

For each session, we recruited sixteen Florida State University students using the ORSEE recruitment system (Greiner, 2004).
Upon being randomly seated, subjects were given instructions and anonymously assigned to groups of four.9 Subjects then participated in either a Coercion treatment or a Reward treatment for ten periods. We explain the treatments in the following section. Following those ten periods, subjects were assigned to new groups and participated in the other treatment for ten periods. After completing these main experimental treatments, subjects participated in a modified Dictator Game and a Holt & Laury (2002) risk aversion task. Subjects participated in these final two tasks to increase their earnings because some subjects might receive zero earnings in both of the main treatments. Each session lasted approximately 60 minutes. Subjects earned $6.79 on average in the main experimental treatments, in addition to the $10 show-up fee and whatever they earned in the other tasks. In all, 160 subjects participated in ten sessions.

8 If we had a subject play the role of the interrogator, then we would not know if the subjects attempted to achieve equal payoffs among just the members of their group or among all subjects–the members of the group plus the interrogator.

9 The exact instructions and screenshots can be found in the online appendix. The experiment is programmed using zTree, software for running economic experiments (Fischbacher, 2007).

In half of the sessions, subjects are randomly assigned to their groups. After the first ten periods, they are re-matched so that they do not play with the same subjects as they did in the previous ten periods. This is public knowledge. In the other sessions, subjects are assigned to groups using a standard minimal group paradigm (MGP) procedure (Tajfel, 1970).
Subjects are placed into groups based on their preferences for different abstract paintings (Chen & Li, 2009).10 As we find no statistically significant differences between the two types of group assignment, we present only the pooled results here.11 The results, broken down by group type, are available in the online appendix. In all cases, subjects are accurately told that their group members in one treatment may not be their group members in the other treatment.

10 Prior to the first treatment, subjects are divided into groups based on their preferences between paintings by Paul Klee and Wassily Kandinsky. In the second treatment, subjects are divided into groups based on their preferences between paintings by Koko the gorilla and Congo the chimpanzee. As with subjects’ preferences for artwork by Klee and Kandinsky, we have no reason to believe preferences for either primate are correlated with other subject characteristics. Subjects are told they were placed into those groups because they ‘liked the same paintings.’

11 The instructions may have activated the minimal group identity in both types of groups. Specifically, subjects are told that they ‘are playing as a group against the computer.’

The treatments

At the start of each round, each group is randomly assigned a playing card–one of the four aces. The subjects are told that this is their group’s card and that they are playing against a computer that is trying to determine what the card is. Half of the group members see the playing card while the other half simply see the text ‘unknown’. Subjects are randomly assigned to see the card or the text ‘unknown’ in each period. Subjects, therefore, could be ignorant in one period and knowledgeable about the card in another.

All subjects are then prompted by the computer to reveal their group’s card. Subjects have the option of revealing a card, with no requirement that they truthfully reveal the card, or of indicating that they do not know the card.
The computer then randomly selects one of the group members to 'listen to.' Payoffs are assigned based upon the randomly selected subject's decision. Knowledgeable subjects (those who know what the group's card is) have three possible actions: (1) truthfully reveal the card, (2) reveal an incorrect card, or (3) claim not to know the card. Subjects who do not know the card have only two possible actions: (1) truthfully state that they do not know the card or (2) reveal a card at random. If a subject does not know the card and reveals a random card, there is a one in four chance the subject will randomly select the correct card. Table II displays the payoffs the subjects receive for each of the different possible decisions in the two treatments.12 These payoffs are common knowledge to all subjects. Therefore, subjects know that they may make a decision that raises their monetary payoffs but harms the monetary payoffs of others.

[TABLE II ABOUT HERE]

In the Coercion treatment, the subject receives less money than the other group members unless he or she reveals the true card. The subject chosen to act as the source earns $5 if he or she reveals the correct card, and the other group members receive nothing. The source receives nothing if he or she says, 'I don't know' and only $2.50 if he or she tells the computer the wrong card. In both cases, the other group members receive $5.

12 Subjects are paid based on two randomly chosen periods—one in each treatment. Subjects do not know their actual earnings until after both treatments are complete.

In the Reward treatment, the subject receives more money than the other group members if he or she provides information. The source receives $10 if he or she reveals the correct card and $7.50 if he or she reveals an incorrect card.13 The payoffs for the other group members are the same in both treatments. When knowledgeable subjects take an action, they know that everybody's payoffs will be exactly what is displayed in Table II.
When ignorant subjects reveal a card, however, they do not know whether it is the accurate card or an inaccurate card. For this reason, the final row of the table displays the expected value of guessing for ignorant subjects.

All subjects make a decision, but the computer randomly selects one subject to use as its source. The computer then uses the card the chosen subject stated to see whether the computer now knows the group's card. If the chosen subject claims not to know, then the computer tells subjects it was incorrect. The computer reveals whether or not it was able to determine the card. Subjects do not know who was chosen or the decisions of their fellow subjects; they only know the outcome of the period. Subjects participate in each treatment for ten periods. All of the analyses use clustered standard errors to account for the repeated observations of the same subjects.

Receiving accurate (and inaccurate) information
Our first analysis looks at the overall quality of information from subjects regardless of the subject's knowledge state. Figure 1 displays the proportion of the time the computer could have used the information obtained from subjects to correctly determine the card, compared to how frequently the computer would have stated the wrong card or been told the chosen subject did not know the card. It is important to remember that often when subjects reveal the correct card, they are guessing because they did not know it. They happened to be correct, but they did not have any more information than the computer.

13 Subjects in Reward receive $7.50 even for inaccurate information to keep the two treatments parallel and because the subjects are behaving as if they are trying to defect from the group. In fact, ignorant subjects who provide information are defecting from the group since they run the risk of leaving group members with the zero payoff in order to gain more money for themselves.
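The expected value row of Table II follows from the one in four chance that a random guess hits the correct card. A minimal sketch in Python illustrates the arithmetic, using the Table II payoffs (which appear to be denominated in cents, e.g. 500 = $5):

```python
# Expected payoff when an ignorant subject reveals a random card.
# A guess is correct with probability 1/4 and incorrect with probability 3/4.
P_HIT = 0.25

def guess_ev(pay_if_correct, pay_if_incorrect):
    """Expected value of revealing a random card."""
    return P_HIT * pay_if_correct + (1 - P_HIT) * pay_if_incorrect

coercion_source = guess_ev(500, 250)   # chosen subject under Coercion
reward_source = guess_ev(1000, 750)    # chosen subject under Reward
others = guess_ev(0, 500)              # other group members, both treatments

print(coercion_source, reward_source, others)  # 312.5 812.5 375.0
```

These values match the final row of Table II.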
[FIGURE 1 ABOUT HERE]

Subjects reveal the correct card 48.4% of the time in Coercion compared to 44.8% in Reward.14 This difference is significant at the 0.1 level in an F-test of equal proportions with clustered standard errors (F = 3.47 with 150 degrees of freedom; p = 0.06). As importantly, subjects are significantly more likely to reveal the incorrect card in Coercion (F = 11.80; p = 0.00). Subjects reveal the incorrect card 43.8% of the time in Coercion, but are 7.1 percentage points less likely to reveal an incorrect card in Reward. The difference is largely the result of how much more frequently subjects clicked the 'I don't know' button in Reward (F = 39.58; p = 0.00). In Coercion, subjects said they would take the zero payoff for claiming not to know the card only 7.8% of the time. In Reward, subjects said that they did not know the card 18.6% of the time. In short, Figure 1 demonstrates a clear problem with coercion: much of the 'information' gained is incorrect.

14 Remember, all subjects made a decision, but only one subject's decision is randomly chosen to set the payoffs. The results, therefore, show the decisions all subjects made and not just those of the chosen subject.
15 In the online appendix, we examine whether participating in either treatment first affected subject behavior.

We get a clearer picture of what causes these results in Figure 2, which displays the frequency of the decisions by period for knowledgeable and ignorant subjects in each treatment. The figure also divides the subjects by whether their session participated in Coercion or Reward first.15 The decisions for subjects who participated in a treatment first are displayed in periods 1 through 10, while the decisions for subjects who participated in a treatment second are displayed in periods 11 through 20.

[FIGURE 2 ABOUT HERE]

The top graphs display the results for knowledgeable subjects, first in Coercion then in Reward.
The figure shows knowledgeable subjects do not alter their behavior on the basis of the treatment. In both treatments, subjects revealed the correct card about 75% of the time, the wrong card about 22% of the time, and claimed not to know about 3% of the time. It appears on the basis of these results that subjects took on one of two types when they were knowledgeable. Most maximized their payoffs, but some took the greatest benefit they could without harming the other group members. These results support the model's prediction that knowledgeable subjects would not say, 'I don't know,' but the second hypothesis is not supported, as the interrogator type had no effect on knowledgeable subjects. This is not that surprising, however, as we expected the difference between reward and coercion to be small.

When subjects are ignorant, however, the treatment has the hypothesized effect: ignorant subjects provide more inaccurate information when faced with a coercive interrogator. In Coercion, ignorant subjects truthfully state that they do not know the card about 12.5% of the time. In Reward, ignorant subjects honestly state that they do not know the card 32.5% of the time.

Why do we see this difference among ignorant subjects? Ignorant subjects truthfully claim ignorance more frequently under Reward than Coercion because the truth leads to the fair outcome in Reward, but an unfair outcome in Coercion. We know that fairness—i.e., the equality of payoffs—is important because if all subjects were simply maximizing their own self-interest they would always take the highest payoff in both treatments. If they only cared about the payoffs of others, they would take the lowest payoffs in both treatments. Because there are some subjects who care about the equality of monetary payments, we observe different results in Reward and Coercion.

Who tells the truth?
The preferred outcome for an interrogator is that a source who has the information reveals it and a source who does not know says 'I don't know'. The first probit model in Table III predicts who tells the truth based on the treatment and the subject's knowledge state. Model 2 includes an interaction between these two variables because we argue that knowledge conditions the effect of the interrogator type. We also present Model 3 with five control variables. First, we include a period counter to account for potential period effects. We control for risk acceptance by including the results from the Holt & Laury (2002) task. We also include one-period lags of the subject's behavior and of whether the computer determined the correct card. Last, we estimate the effect of the previous period's payoff on the subject's behavior in the current period. Respectively, the lagged behavior, the risk task, and the previous outcome variables control for the potential stickiness of subject behavior, the subject's personality, and whether subjects change their behavior based on how they believe other group members are behaving.

[TABLE III ABOUT HERE]

The dependent variable in the probit models in Table III is coded one if the subject told the truth and zero if the subject lied. Knowledgeable subjects tell the truth if they reveal the correct card, and ignorant subjects tell the truth if they click the 'don't know' button. The first model includes only the treatment and knowledge state variables. Both variables are statistically significant and in the expected direction. As expected, subjects tell the truth less frequently under Coercion, and ignorant subjects are less likely to reveal their ignorance. When we include the interaction effect in the second model, we see that the effect of the treatment on truth telling holds only for the ignorant subjects, as we expected. The interaction between the treatment and the knowledge states holds when we include other control variables in the third model.
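To see how the probit coefficients in Table III translate into predicted probabilities, the linear index can be passed through the standard normal CDF. A minimal sketch in Python using the Model 2 coefficients (Model 3's predictions additionally depend on the control variables, so the simpler model is used here only for illustration):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Model 2 coefficients from Table III.
CONST, B_COERCION, B_IGNORANT, B_INTERACT = 0.620, 0.035, -1.067, -0.738

def p_truth(coercion, ignorant):
    """Predicted probability of truth telling for 0/1 treatment and knowledge state."""
    index = (CONST + B_COERCION * coercion + B_IGNORANT * ignorant
             + B_INTERACT * coercion * ignorant)
    return norm_cdf(index)

print(round(p_truth(0, 1), 2))  # ignorant subject in Reward: about 0.33
print(round(p_truth(1, 1), 2))  # ignorant subject in Coercion: about 0.13
```

The resulting probabilities, roughly 0.33 for ignorant subjects in Reward and 0.13 in Coercion, line up with the raw rates reported above.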
The predicted probabilities that a subject will tell the truth, based on Model 3, are shown in Figure 3. Remember, when knowledgeable subjects tell the truth, they are revealing the correct card, while an ignorant subject tells the truth if he or she says, 'I don't know.' As the figure shows, knowledgeable subjects tell the truth more frequently than ignorant subjects, but the behavior of knowledgeable subjects is not affected by the treatment. Treatment has a large effect on ignorant subjects, however, as they are 2.5 times more likely to tell the truth in Reward than in Coercion.

[FIGURE 3 ABOUT HERE]

The results from the models confirm the problem with Coercion observed in the previous analysis. Even if we ignore all ethical concerns about interrogators using coercive techniques, Coercion is suboptimal because it increases the risk of receiving false information from ignorant sources. For ignorant subjects in Reward, telling the truth keeps everyone's payoffs the same. On the other hand, admitting their ignorance in Coercion lowers their payoffs while increasing the difference in payoffs among the group members. Hence, ignorant subjects lie more frequently in Coercion because the selfish behavior and the fair behavior are the same: guess a card. False statements from ignorant sources are often suspected of being what makes Coercion an ineffective mechanism for educing information (e.g., Blakeley, 2007). This is clearly the case in our study.

Behavior with no cost to lying
The results in the previous section offered support for our expectation that ignorant subjects are more truthful under Reward. We also expected knowledgeable subjects to provide more accurate information in Coercion, but we saw no difference between the treatments. It is possible the lack of a difference is the result of the high level of accurate information in both treatments.
Many subjects may have provided accurate information because they could justify this selfish behavior as 'truth telling.' For this reason, we replicate the experiment removing the possibility for subjects to lie by removing the experiment's frame. Unlike the previous experiment, there is no playing card that the computer is attempting to determine. Instead, subjects are offered the same payoff table as in the previous experiment and make decisions that will assign payoffs. In effect, this removes the model's L parameter for all subjects because it eliminates the penalty for lying that lie-averse subjects feel, as well as eliminating the desire to cooperate (or not cooperate) with the interrogator.

Half the group members had three choices (BLUE, TEAL, and GREEN). These subjects are the equivalent of the knowledgeable subjects in the framed version of the experiment in that they can determine all payoffs with certainty. GREEN's monetary payoff is identical to that of a subject selecting 'don't know' in the framed experiment; BLUE is equivalent to a subject truthfully revealing the card; TEAL is equivalent to a subject lying about the card. The other half of the group had two choices (PURPLE and GREEN). These subjects are the equivalent of the ignorant subjects in the framed experiment. They could choose GREEN (the payoff equivalent of 'don't know'), taking a lower payoff for themselves while maintaining the payoffs of the other group members. Alternatively, they could choose to gamble with everyone's payoff by choosing PURPLE—which has the same payoffs as an ignorant subject guessing a card in the framed experiment. PURPLE pays the same as the 'knowledgeable' subjects choosing BLUE (the right card) 25% of the time and the same as the 'knowledgeable' subjects choosing TEAL (the wrong card) the other 75% of the time.

The left side of Figure 4 shows the decisions made by subjects in the framed experiment by treatment and knowledge state.
The right side displays the decisions made by subjects in the unframed version of the experiment. As before, subjects who can set payoffs with certainty (i.e. the subjects who knew the card in the framed treatment) do not alter their behavior by treatment. They make the same decisions regardless of whether they are choosing within the Coercion payoff scheme or the Reward payoff scheme. Subjects who have uncertain payoffs are affected by the treatment, and in the same manner as in the framed experiment. It is unfair of them to take the lowest payoff under Coercion. Hence, they take the lowest payoff only 16% of the time. In Reward, taking the lowest payoff keeps the payoffs equal, while inequality increases if they attempt to earn more money.

[FIGURE 4 ABOUT HERE]

The frame affects the decisions only of those subjects who can set payoffs like a player in the Dictator Game. In the framed condition, these subjects take the highest payoff, leaving the other group members with the lowest possible payoff, nearly 75% of the time. They can justify this action as not selfish but truthful, as responding with a different card or 'I don't know' would be a lie. In the unframed version, this justification is removed. Now, taking the largest payoff is only a selfish decision. Accordingly, subjects who can set the payoffs are much less likely to take the highest payoff in the unframed version. Instead, more subjects take the TEAL option in the unframed version. This is the equivalent of revealing the wrong card in the framed version. The subject takes a lower individual payoff, but the group ends up with the socially optimal outcome. In fact, the percentage of subjects taking the lowest payoff doubles in the unframed version.

What do these results mean for the quality of interrogation? Remember, this unframed treatment sets every subject's L parameter at 0.
This is a situation, like many hostile interrogations, in which an individual does not pay a cost for lying to the interrogator. With an L of 0, ignorant subjects lie just as frequently as they do when they pay a cost for lying. At the same time, knowledgeable subjects provide lower-quality information. Hence, the amount of poor information the interrogator would receive in Coercion is even greater when L = 0 than what we observed in the main experiment.

Discussion
Our theoretical model makes three predictions: (1) overall information quality will be greater in reward than in coercion; (2) ignorant subjects will truthfully admit they are ignorant more frequently in reward than in coercion; (3) knowledgeable subjects will provide accurate information more frequently in coercion than in reward. The results support the first two predictions, but not the third—though we predicted the difference there would be small. The results, therefore, suggest that it might not matter whether an interrogator uses Coercion or Reward to obtain information from a knowledgeable source, but Reward is superior because ignorant detainees will truthfully admit ignorance more frequently under reward.

Our theory improves upon previous models of interrogation (Wantchekon & Healy, 1999; Schiemann, 2012) by including a rewarding interrogator as a means of measuring the efficacy of coercive interrogation. Also, we systematically incorporate the detainee's feelings towards other members of his or her group and towards the interrogator.

The results also provide a theoretical explanation for, and direct empirical evidence of, Rejali's (2009) claim regarding the most extreme form of coercive interrogation: torture. 'The problem of torture does not lie with the prisoner who has information,' Rejali writes. 'It lies with the prisoner with no information' (461, emphasis in original). Consider the results in Figure 1 in light of this statement.
Within a group in a period, Coercion produced more correct cards on average (Coercion mean = 1.94 vs. Reward mean = 1.79; t = 2.42; p < .01), but the percentage of revealed cards that were correct was higher in Reward (Coercion mean = 52.6% vs. Reward mean = 56.4%; t = 2.12; p < .04). The key difference is the amount of poor information the ignorant subjects provide. In Coercion, ignorant subjects guess at what the card is and provide the interrogator with poor information. Further, the unframed experiment suggests that coercion leads to even more false information if groups are willing to lie to their interrogators.

The experiment almost certainly underestimates how much trouble prisoners with no information cause. In the experiment, half the subjects had the desired information. Given the secretive nature of terrorist organizations, our experimental groups likely have too many knowledgeable individuals (Enders & Jindapon, 2010). If only one group member (25%) had been knowledgeable, then the computer would have been 1.5 times more likely to guess an incorrect card than a correct card, assuming subject behavior did not change. These results cast doubt on a process of using Coercion following dragnet-style sweeps. While rounding up a series of people connected with a group might yield some useful information, coercing them to talk will cause investigators to chase false leads.

We have argued that a combination of concerns for fairness and a desire to avoid lying can explain the pattern of behavior exhibited by subjects in our experiment. A potential alternative explanation for the decisions would rely on prospect theory (Kahneman & Tversky, 1979), or simply loss aversion for the knowledgeable subjects, as there is no uncertainty for those subjects. In this case, the argument would be that the subjects react differently under Coercion because they do not want to take a loss in payoffs.
If this were the case, then we would not need to consider concerns about the equality of payoffs among the group members. Subjects never leave the lab with less money than they entered with, but they would still have to cope with missing out on potential gains. These foregone gains do not seem to carry the same psychological burden as actual losses (Kahneman, Knetsch & Thaler, 1986). Loss aversion alone, however, cannot explain the actions of the knowledgeable subjects or of the subjects who can act as dictator in the unframed experiment. There is no treatment effect for these subjects at all. If they simply wanted to avoid losses, they would take the higher payoffs in the unframed version of Coercion, but not in Reward.

As with all experimental results, we must proceed with caution concerning the external validity of the results. In interrogation settings outside the lab, whether in a principal's office or a police station, the incentives are often much larger. However, the rewards an interrogator can offer are similarly much larger and go beyond simple monetary compensation for information. The 30 million simulations we performed suggest that the direction of the treatment effect should be similar even with larger stakes. Further, behavior in other experiments is often (but not always) similar regardless of the size of the payoffs. Behavior in Dictator Games and Ultimatum Games, for example, is remarkably stable even when the monetary stakes are raised to levels greater than weekly wages (Slonim & Roth, 1998). Still, further experiments with higher stakes are necessary to show definitively whether the size of the treatment effect is conditional on the size of the stakes.

Our study suggests that offering rewards for information dominates coercive interrogation techniques, but there are several reasons to expect that coercive interrogation will remain common. Coercive techniques are used for reasons other than simply the acquisition of information.
For example, many states use coercion during ethnic strife (Poe & Tate, 1994; Parry, 2010). In those cases, the visible effects of torture are required to demonstrate ruthlessness (Rejali, 2009). Further, our results only suggest coercive efforts are less effective if the interrogator cares about receiving poor information. During a crisis, the state may worry more about missing a critical piece of information than about acting on false information (Gartner, 2013). In this situation, interrogators may resort to coercive interrogation because it will provide more information quickly (Conrad et al., Forthcoming). Also, we might see more coercion for the same reason states often do not use positive inducements to influence other states during hostilities (Milburn & Christie, 1989). The detainee may doubt that the interrogator will actually provide the reward after the detainee provides the desired information. At the same time, the state may not want to offer a reward for information because the public does not want to reward people it considers the 'enemy.' Hence, coercive interrogation could continue even if everyone agreed it was less effective.

Data replication
The dataset, codebook, and do-files for the empirical analysis in this article as well as the online appendix can be found at http://www.prio.org/jpr/datasets

Acknowledgements
The authors wish to thank the members of Florida State University's xs/fs research cluster as well as the editors and the anonymous reviewers for their comments. Previous versions of this paper were presented at the 2012 Southern Political Science Association and American Political Science Association meetings.

References
Armshaw, Patrick (2011) The torturer's dilemma: Analyzing the logic of torture for information. PhD thesis, Florida State University. http://diginole.lib.fsu.edu/cgi/viewcontent.cgi?article=4177&context=etd.
Arrigo, Jean Maria (2004) A utilitarian argument against torture interrogation of terrorists.
Science and Engineering Ethics 10(3): 543–572.
Blakeley, Ruth (2007) Why torture? Review of International Studies 33(3): 373–394.
Borum, Randy (2006) Approaching truth: Behavioral science lessons on educing information from human sources. In: Robert Fein (ed.) Educing Information: Interrogation, Science, and Art. Foundations for the Future, Phase I Report. Washington, DC: National Defense Intelligence College Press, 17–43.
Chen, Yan & Sherry Xin Li (2009) Group identity and social preferences. American Economic Review 99(1): 431–457.
Conrad, Courtenay R; Justin Conrad, James A Piazza & James Igoe Walsh (Forthcoming) Who tortures the terrorists? Transnational terrorism and military torture. Foreign Policy Analysis. DOI: 10.1111/fpa.12066.
Conrad, Courtenay R & Emily Ritter (2013) Treaties, tenure, and torture: The conflicting domestic effects of international law. Journal of Politics 75(2): 397–409.
Conrad, Courtenay Ryals & Will H Moore (2010) What stops the torture? American Journal of Political Science 54(2): 459–476.
Davenport, Christian (2007) State repression and political order. Annual Review of Political Science 10: 1–27.
Enders, Walter & Paan Jindapon (2010) Network externalities and the structure of terror networks. Journal of Conflict Resolution 54(2): 262–280.
Fehr, Ernst & Klaus M Schmidt (1999) A theory of fairness, competition, and cooperation. Quarterly Journal of Economics 114(3): 817–868.
Fehr, Ernst & Urs Fischbacher (2004) Third-party punishment and social norms. Evolution and Human Behavior 25(2): 63–87.
Fischbacher, Urs (2007) z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics 10(2): 171–178.
Gartner, Scott Sigmund (2013) All mistakes are not equal: Intelligence errors and national security. Intelligence and National Security 28(5): 634–654.
Gneezy, Uri (2005) Deception: The role of consequences. American Economic Review 95(1): 384–394.
Greiner, Ben (2004) An online recruitment system for economic experiments. In: Kurt Kremer & Volker Macho (eds) Forschung und Wissenschaftliches Rechnen [Research and Scientific Computing]. Göttingen: Datenverarbeitung, 79–93.
Hill, Daniel (2010) Estimating the effects of human rights treaties on state behavior. Journal of Politics 72(4): 1161–1174.
Holt, Charles A & Susan K Laury (2002) Risk aversion and incentive effects. American Economic Review 92(5): 1644–1655.
Hurkens, Sjaak & Navin Kartik (2009) Would I lie to you? On social preferences and lying aversion. Experimental Economics 12(2): 180–192.
Kahneman, Daniel & Amos Tversky (1979) Prospect theory: An analysis of decision under risk. Econometrica 47(2): 263–291.
Kahneman, Daniel; Jack L Knetsch & Richard Thaler (1986) Fairness as a constraint on profit seeking: Entitlements in the market. American Economic Review 76(4): 728–741.
Mann, Samantha; Aldert Vrij & Ray Bull (2002) Suspects, lies, and videotape: An analysis of authentic high-stake liars. Law and Human Behavior 26(3): 365–376.
Milburn, Thomas W & Daniel J Christie (1989) Rewarding in international politics. Political Psychology 10(4): 625–645.
Parry, John T (2010) Understanding Torture: Law, Violence, and Political Identity. Ann Arbor, MI: University of Michigan Press.
Poe, Steven C & C Neal Tate (1994) Repression of human rights to personal integrity in the 1980s: A global analysis. American Political Science Review 88(4): 853–872.
Rejali, Darius (2009) Torture and Democracy. Princeton, NJ: Princeton University Press.
Ron, James (1997) Varying methods of state violence. International Organization 51(2): 275–300.
Schiemann, John W (2012) Interrogational torture or how good guys get bad information with ugly methods. Political Research Quarterly 65(1): 3–19.
Shane, Scott (2008) China-inspired interrogations at Guantanamo. New York Times, 2 July (www.nytimes.com/2008/07/02/us/02detain.html).
Slonim, Robert & Alvin E Roth (1998) Learning in high stakes ultimatum games: An experiment in the Slovak Republic. Econometrica 66(3): 569–596.
Suedfeld, Peter (2007) Torture, interrogation, security, and psychology: Absolutistic versus complex thinking. Analyses of Social Issues and Public Policy 7(1): 55–63.
Sullivan, Christopher Michael (2014) The (in)effectiveness of torture for combating insurgency. Journal of Peace Research 51(3): 388–404.
Tajfel, Henri (1970) Experiments in intergroup discrimination. Scientific American 223(5): 96–102.
Vrij, Aldert; Samantha Mann, Susanne Kristen & Ronald P Fisher (2007) Cues to deception and ability to detect lies as a function of police interview styles. Law and Human Behavior 31(5): 499–518.
Wantchekon, Leonard & Andrew Healy (1999) The 'Game' of torture. Journal of Conflict Resolution 43(5): 596–609.

DAVID BLAKE JOHNSON, b. 1983, PhD in Economics (Florida State University, 2013); Assistant Professor, University of Central Missouri (2015– ); Post-Doctoral Scholar, University of Calgary (2013–2015); research interests: experimental methods, behavioral economics, other-regarding preferences, punishment, and altruism.

JOHN BARRY RYAN, b. 1979, PhD in Political Science (University of California, Davis, 2009); Assistant Professor, Stony Brook University (2014– ); Assistant Professor, Florida State University (2009–2014); research interests: social networks, campaigns and elections, and experimental economics; most recent book: Experts, Activists, and Democratic Politics: Are Electorates Self-Educating? (Cambridge University Press, 2014).

Table I. Utility functions in the experiment for knowledgeable and ignorant subjects by their decision.
Coercion
                  Knowledgeable                             Ignorant
Correct card      ln(x_i)                                   ln(x_i) − L
Incorrect card    ln(x_i − βx_i) + α·ln(x̄_{j≠i}) − L       ln(x_i − βx_i) + α·ln(x̄_{j≠i}) − L
'I don't know'    α·ln(x̄_{j≠i}) − L                         α·ln(x̄_{j≠i})

Reward
                  Knowledgeable                             Ignorant
Correct card      ln(x_i + x_i)                             ln(x_i + x_i) − L
Incorrect card    ln(x_i + βx_i) + α·ln(x̄_{j≠i}) − L       ln(x_i + βx_i) + α·ln(x̄_{j≠i}) − L
'I don't know'    ln(x_i) + α·ln(x̄_{j≠i}) − L               ln(x_i) + α·ln(x̄_{j≠i})

x_i: Player's utility from his or her own earnings. L: The player's willingness to provide information to the interrogator (−∞ < L < ∞). α: The weight the player places on the payoffs of other players (0 < α < 1). β: The adjustment in monetary payments for suggesting the incorrect card rather than the correct card (0 < β < 1). x̄_{j≠i}: Average payment to other players.

Table II. Payoffs by treatment for the subject that the computer chooses and the other group members.

                              Coercion    Reward    Others in both
Accurate card                    500       1,000           0
Inaccurate card                  250         750         500
'I don't know'                     0         500         500
Expected value of guessing     312.5       812.5         375

Table III. Predicting subject truth telling. Probit models with standard errors adjusted for clustering on subjects.

                            (1)                 (2)                 (3)
                        Coef.   Z-value     Coef.   Z-value     Coef.     Z-value
Coercion               -0.312   -4.99**     0.035    0.40       0.095      1.07
Ignorant               -1.407  -15.94**    -1.067  -10.32**    -1.083    -10.08**
Coercion × Ignorant       --               -0.738   -5.39**    -0.763     -5.26**
Risk acceptance           --                  --                0.041      0.79
Previous truth teller     --                  --                0.518      7.50**
Previous correct card     --                  --                0.016      0.26
Previous period pay       --                  --               -0.00003   -0.30
Period                    --                  --               -0.022     -2.34*
Constant                0.797   10.29**     0.620    7.91**     0.514      4.20**
N (Subjects)           3,200 (160)         3,200 (160)         2,880 (160)
A.I.C.                 3516.16             3462.43             3041.79
**p < .01; *p < .05; two-tailed tests.

Figure 1. Proportion of the time subjects selected the right card, the wrong card, or said, 'I don't know', regardless of knowledge state. Error bars represent 95% confidence intervals.

Figure 2.
Subject decisions by period for ignorant and knowledgeable subjects.

Figure 3. Predicted probability of truth telling by treatment. Probabilities calculated from Model 3 in Table III. Error bars represent 95% confidence intervals.

Figure 4. Comparing behaviors in the framed and unframed versions of the experiment. Error bars represent 95% confidence intervals.