Causal Models of Decision Making: Choice as Intervention

York Hagmayer ([email protected])
Department of Psychology, University of Göttingen, Gosslerstr. 14, 37073 Göttingen, Germany

Steven A. Sloman ([email protected])
Cognitive & Linguistic Sciences, Brown University, Box 1978, Providence, RI

Abstract

Causal considerations must be relevant to making decisions. Nevertheless, traditional decision theories like evidential expected utility theory do not have the means to distinguish causal from merely evidential relations. As a result, they fail to distinguish cases where a choice influences consequences from cases where a choice does not affect consequences even though they are correlated. Therefore a causal model theory of choice is introduced, built on the causal Bayes net framework. The theory claims that people decide using causal models of the decision situation. Choice is represented as an intervention. Two experiments are presented testing predictions of the theory.

Introduction

People who shovel a lot of snow have higher heating bills than those who shovel less. Would you therefore recommend to a friend who wants to reduce his heating bill that he should stop shoveling snow? Probably not.
Shoveling less snow would make sense if it reduced heating costs. But the reason that shoveling snow and heating costs are correlated is presumably not that one is a cause of the other; it's rather that both are effects of the climate. Stopping would merely reduce the effort, without affecting the underlying cause, and thus would do nothing to change the target effect, the amount paid. Therefore you would probably recommend moving south to someone worried about heating expenses.

The example shows that it is not evidential relations but causal considerations that are crucial to good decision making. Yet the study of decision making has been dominated by evidential expected utility theory, a theory that fails to account for causal considerations. Evidential expected utility theory is the primary gold standard for good decision making. The theory is built on the gambling metaphor; the rational decision maker is conceived of as someone playing a game like roulette, which has a set of possible outcomes, each with some probability of occurring and each with some value or utility for the decision maker. The best options are the ones that have the highest likelihood of delivering the most goods, those with the highest expected utility. Whether these likelihoods reflect causal relations is not considered. In fact, in games of chance, the probabilities are often independent of the choice made. The probability of red in roulette is determined by the number of red, black, and green fields, not (unfortunately) by what you bet on. But in real-world contexts the probabilities are fixed by underlying causal structures. An evidential relation may reflect a causal relation (climate → heating costs) or just a spurious relation due to a common cause (shoveling snow ↔ heating costs). This distinction is critical for good decision making. Whereas moving southward reduces your heating costs, failing to shovel does not. Evidential expected utility theory has no means to represent this distinction.

Recent philosophical and computational theories are able to distinguish causal and spurious relations by explicitly representing causal structure (e.g., Pearl, 2000; Woodward, 2003). A spurious relation is a statistical relation between two events that is not due to a direct causal connection between them but rather to a common cause. A causal relation from A to B differs from a spurious relation in that it allows A to influence B by means of an intervention on A. Interventions on causes increase the probability of their effects; interventions on spuriously related events do not change the probability of those other events. Causal Bayes net theories (e.g., Pearl, 2000; Spirtes, Glymour & Scheines, 2000) therefore distinguish observational probabilities, reflecting the statistical relation between observed events, from interventional probabilities, which represent the effect of intervention. This distinction is critical to decision making, because choice entails action, i.e., intervention. Therefore, if the goal of a decision is to increase the probability of getting a desired outcome, then interventional probabilities, not evidential probabilities, have to be used to calculate expected utilities. For example, the evidential probability relating shoveling snow and heating costs is fairly high. Despite this fact, moving should be preferred, because the interventional probability of low heating costs after moving is higher than the interventional probability of low heating costs after reducing shoveling. Note that in games of pure chance, evidential and interventional probabilities coincide, because causal structure is irrelevant to the outcome. This might help explain why evidential expected utility theory has ignored this distinction.
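To make the contrast explicit, the two ways of scoring an option can be written side by side. The notation below is our own shorthand rather than a formula from the paper: a ranges over options, o over outcomes, U is the decision maker's utility function, and do(a) is Pearl's (2000) intervention operator.

```latex
% Evidential expected utility: the chosen option is treated as an observation
EU_{ev}(a) = \sum_{o} P(o \mid a)\, U(o)

% Causal expected utility: the chosen option is treated as an intervention
EU_{causal}(a) = \sum_{o} P(o \mid do(a))\, U(o)
```

In the shoveling example the two quantities come apart: P(low costs | no shoveling) is high, whereas P(low costs | do(no shoveling)) is just the base rate of low heating costs.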
Two major conclusions can be drawn: (i) decisions require causal models, and (ii) choices involve interventions. Based on these insights we will propose a causal model theory of choice. First we offer a cursory review of previous work on causal decision making and an informal introduction to causal Bayes nets. After outlining our theory, we will report two preliminary experiments testing some predictions of our account.

Causal Decision Making

Philosophers have proposed several theories that are sensitive to causal structure (e.g., Nozick, 1995). These theories all use something other than evidential conditional utility to determine expected utilities. Instead they take the causal consequences of choice into account (cf. Meek & Glymour, 1994). For example, Nozick (1995) suggests calculating causal expected utilities based on probabilities that reflect the causal impact of the choices made.

Despite the fact that issues of causality have framed debates about decision making in philosophy for 35 years, little work has been done on this topic in psychology. The influence of causality has been recognized in judgments of probability (e.g., Tversky & Kahneman, 1980) and has been the central issue in studies of attribution and explanation (e.g., Ahn, Kalish, Medin & Gelman, 1995; Kelley, 1967). Related work has also been done in the study of reasoning (e.g., Mandel, 2003; Sloman & Lagnado, 2005) and learning (e.g., Lagnado & Sloman, 2004; Waldmann & Hagmayer, 2005). In the domain of decision making per se, there is some evidence that people are persuaded by causal considerations. Pennington and Hastie (1993) showed that juries can be swayed by presenting evidence in an order consistent with a causal story. Studies on natural decision making (Klein, 1998) have also shown that people tend to build a causal model to simulate the consequences of a potential action and proceed if the results turn out to be satisfying. Several studies have found that there is an asymmetry between acts of omission and commission (e.g., Baron, 1992). People seem to prefer omission to commission given identical negative consequences, because they regard actions as more causally efficacious and therefore experience more regret when acting. Research has also found that people tend to neglect alternative causal models when making decisions, and sometimes rely on only one (e.g., Dougherty, Gettys & Thomas, 1997). Finally there is some evidence that people sometimes deviate from the optimal use of causal knowledge in their decisions and deceive themselves in a self-serving manner (Quattrone & Tversky, 1984). Our theory accommodates these findings and specifies boundary conditions for the use of causal models in decision making.

A causal modeling framework for decision making

Causal Bayes nets (e.g., Pearl, 2000) offer a formal framework for representing and reasoning about causal systems using causal models, a form of graphical representation of both deterministic and probabilistic causal systems. They also allow distinguishing observation from intervention, which is critical for modeling choice. Furthermore, they specify when and how interventional probabilities, i.e., the probabilities of events resulting from intervention, can be derived from observational evidence. Observations are represented by a conventional conditional probability (e.g., the probability that an individual has low heating costs given that he doesn't shovel snow). To represent interventions, Pearl (2000) proposes a special operator called do. An intervention do(X=x) has the effect of setting the variable X to the value x and, less obviously, removes any links from other variables to X in the causal model. In other words, an intervention disconnects the manipulated event from its usual causes. For example, preventing someone from shoveling snow is an action by an agent that constitutes an intervention. Its relevance to heating costs is represented by the interventional probability that someone has low heating costs given that we stop them. To represent the intervention in a causal model, we add a node for the intervention and set shoveling to no. Critically, we also disconnect shoveling from its normal cause (climate) because we have determined shoveling directly, so other causes become irrelevant (see Figure 1).

Figure 1: Causal models representing observation and intervention (observation: Climate → Shovel Snow, Climate → Heating Costs; intervention: Stop Shoveling sets Shovel Snow = No and cuts the link from Climate)

The observation model also shows that the evidential relation between shoveling and heating costs is merely correlational. It is a result of the common cause structure of the model. The intervention model shows that by virtue of removing the link, shoveling snow is rendered independent of climate and heating costs. This inferential procedure of "undoing" a causal link captures the intuition that shoveling snow is no longer indicative of the climatic conditions because shoveling is no longer influenced by the climatic conditions. Therefore, it is also no longer related to heating costs. Based on the causal model and its parameters, i.e., the conditional probabilities reflecting the causal links in the model, evidential relations between the events in the model can be inferred. More important, based on the modified interventional model the interventional probabilities that would result from an intervention can be computed (see Pearl, 2000, for a detailed description of how to do so in the general case).
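As a concrete illustration of this computation, the sketch below parameterizes the shoveling example as a small common-cause network and contrasts the observational probability P(low costs | no shoveling) with the interventional probability P(low costs | do(no shoveling)). The graph surgery (dropping the Climate → Shovel link) follows Pearl's (2000) do operator, but all numerical values and variable encodings are invented for illustration.

```python
# A minimal common-cause model for the shoveling example (illustrative numbers only):
#   Climate -> Shovel Snow,  Climate -> Heating Costs
P_climate = {"cold": 0.5, "mild": 0.5}                 # prior over climates
P_shovel_given = {"cold": 0.9, "mild": 0.2}            # P(Shovel = yes | Climate)
P_lowcost_given = {"cold": 0.1, "mild": 0.8}           # P(Heating costs = low | Climate)

# Observational probability: P(low costs | Shovel = no), obtained by conditioning.
# Not shoveling is evidence for a mild climate, which in turn predicts low costs.
p_no_shovel = sum(P_climate[c] * (1 - P_shovel_given[c]) for c in P_climate)
obs = sum(P_climate[c] * (1 - P_shovel_given[c]) * P_lowcost_given[c]
          for c in P_climate) / p_no_shovel

# Interventional probability: P(low costs | do(Shovel = no)).
# The do-operator removes the Climate -> Shovel link, so the climate distribution
# is unchanged and shoveling carries no information about heating costs.
interv = sum(P_climate[c] * P_lowcost_given[c] for c in P_climate)

print(f"P(low costs | Shovel = no)     = {obs:.2f}")    # high: ~0.72
print(f"P(low costs | do(Shovel = no)) = {interv:.2f}") # just the base rate: 0.45
```

Enumerating over the two climate states suffices here; for larger models the same computation would be delegated to a Bayes net inference routine.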
A causal model theory of choice

The theory described here offers an analysis of the activity of a decision maker. It rests on two basic assumptions: (i) decisions involve deliberation concerning the outcomes of causal processes and (ii) choice can be conceived as an intervention on a causal structure. The decision maker is hypothesized to go through three phases of decision making.

Phase 1. World model construction. In this phase the decision maker first instantiates her goals as a distribution of preferred causal consequences (e.g., low heating costs). She then identifies the causal factors that are relevant to determining those consequences (e.g., climate and heat loss). Next these factors are separated into those that are determined by the decision maker's choice options (e.g., heat loss) and other factors that are not (e.g., climate). These other factors comprise any other relevant variable in the context. Next the decision maker constructs a causal model of the decision environment describing how these factors bring about causal consequences. The construction of the world model may be facilitated if the given information already conforms to a causal model (see Pennington & Hastie, 1993). Finally she updates her model of the world by assigning all known values of factors and letting probability propagate to obtain a posteriori probability values for all states (methods for doing this can be found in, e.g., Halpern, 2003). For example, if the house is situated in the northern part of the United States, the factor climate is set to cold and the probability of low heating costs is decreased. Figure 2 shows such a partly instantiated causal model of the world for our heating example. In accordance with causal Bayes net theories, we assume that graphs are acyclic: Effects do not affect their causes.

Figure 2: World model constructed by the decision maker (nodes: Climate: Cold, Heat Loss, Shoveling Snow: Yes, Heating Costs)

Phase 2. Choice model construction. In order to evaluate the consequences of a potential action an intervention model must be set up. In an intervention model, the variable being intervened on is disconnected from its normal causes. This is because a state variable set by choice is no longer diagnostic of its other natural causes, given that the choice is made deliberately. In other words, the logic of intervention applies to choice. The choice model differs from the decision maker's model in three ways: (i) a choice-intervention node is added to represent the intervention, (ii) the manipulated variable is given the value assigned by choice, and (iii) undoing is implemented, i.e., the manipulated variable is cut off from its normal causes. Figure 3 shows the choice model for insulating. Insulating would cut the connection between climate and amount of heat lost. Therefore the link connecting climate and heat loss has been deleted.

Figure 3: Choice model constructed by the decision maker (nodes: Climate: Cold, Choice Intervention: Insulate, Heat Loss: Low; the link from Climate to Heat Loss is removed)

If more than one choice is available, a separate model for each choice has to be constructed. However, previous research (Klein, 1998) points out that people tend to construct a single causal model and consider one choice option at a time.

Phase 3. Choice. The probability of relevant consequences is calculated for each option based on the choice model. Thus interventional probabilities are used to infer the likelihood of consequences given each choice option. The choice is made by maximizing the likelihood of getting the most favorable causal consequences. First, the decision maker compares the probability distribution over the causal consequences that would obtain if no action were taken with the consequences resulting from action. When expected outcomes of omission and commission are equal, the default choice is to not act, because any kind of action incurs some cost. This prediction conforms to the omission bias found in previous research (e.g., Baron, 1992). Second, if different actions are available the consequences of these options are compared to each other. For the sake of simplicity, we assume that choice involves utility maximization: The action with the highest causal expected utility is chosen. The critical claim of the theory is that expected utilities are calculated using interventional probabilities and that utilities are determined by causal consequences. Computing both interventional probabilities and causal consequences requires a causal model. It is this dependence on a causal model that we deem crucial; the assumption of utility maximization might be substituted with a more psychologically realistic choice rule. As it stands, the framework assumes causal as opposed to evidential expected utility maximization. The experiments reported in this paper will investigate this crucial prediction.
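The three phases can be summarized as a small decision procedure: build a world model, derive one choice model per option via graph surgery, and pick the option with the highest causal expected utility, with inaction as the default when it ties the best action. The sketch below is our own schematic rendering of that procedure, not code from the paper; the dictionary-based representation, the `action_cost` parameter, the option names, and all numbers are illustrative assumptions.

```python
# Schematic rendering of the three decision phases (illustrative only).
# Each option is summarized by the outcome distribution P(outcome | do(option))
# that its choice model yields after graph surgery (Phase 2).

def causal_expected_utility(p_outcome_given_do, utility, action_cost=0.0):
    """Phase 3: sum over outcomes of P(o | do(a)) * U(o), minus any cost of acting."""
    return sum(p * utility[o] for o, p in p_outcome_given_do.items()) - action_cost

def choose(options, utility):
    """Pick the option with the highest causal expected utility.
    'no action' is the default whenever it is at least as good as the best action."""
    scores = {name: causal_expected_utility(dist, utility, cost)
              for name, (dist, cost) in options.items()}
    best_action = max((n for n in scores if n != "no action"), key=scores.get)
    return "no action" if scores["no action"] >= scores[best_action] else best_action

# Hypothetical numbers for the heating example (outcome = heating costs).
utility = {"low costs": 100, "high costs": 0}
options = {
    # option: (P(outcome | do(option)), cost of taking the action)
    "no action":      ({"low costs": 0.45, "high costs": 0.55}, 0.0),
    "stop shoveling": ({"low costs": 0.45, "high costs": 0.55}, 1.0),   # undoing: no effect
    "move south":     ({"low costs": 0.90, "high costs": 0.10}, 30.0),
}
print(choose(options, utility))  # -> "move south"
```

With these illustrative numbers, "stop shoveling" is never chosen: after undoing, its outcome distribution matches inaction but it incurs a cost, whereas "move south" changes the distribution enough to offset its larger cost.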
Boundary conditions

A critical assumption made by the proposed theory is that choice equals an imaginary intervention in a causal model. When a choice variable is set by intervention, this variable has no diagnostic value for its other (normal) causes. Thus the theory implies undoing effects. In order to obtain such undoing effects several conditions must be met. First, the decision maker has to fully consider the causal structure of the environment before making a choice. There is evidence, however, that people do not always engage in elaborated reasoning (e.g., Petty & Cacioppo, 1986). Unmotivated or stressed participants may rely on evidential relations and neglect causal structure. Second, the decision maker has to assume that her interventions are strong.
Only interventions that are made deliberately and that completely determine the value of the choice variable are strong. Otherwise the variable intervened on is not completely disconnected from its causes and remains diagnostic. The cases of self-deception described by Quattrone and Tversky (1984) might serve as an example: voters may assume that their decision to vote was not a fully deliberate choice but was caused by some factor that also affects other voters' decisions to go to the polls. Third, undoing requires that the decision maker does not engage in analogical reasoning when predicting the actions taken by other persons. For example, an assumption made by some participants in games offering identical payoffs to all players is that other players will make the same choice. Reasoning of this sort has been described as the Stackelberg heuristic (Colman & Bacharach, 1997).
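To illustrate the second condition, the sketch below (our own toy example with invented numbers) contrasts a strong intervention, which fully determines the choice variable and so removes its dependence on the climate, with a weak intervention that merely makes not shoveling more likely. The 50/50 mixture used to model the weak intervention is our own simplification; the point is only that a partially determined choice variable stays partially diagnostic of its causes.

```python
# Toy common-cause model (same structure as before, invented numbers):
#   Climate -> Shovel, Climate -> Heating costs
P_climate = {"cold": 0.5, "mild": 0.5}
P_noshovel_given = {"cold": 0.1, "mild": 0.8}   # P(Shovel = no | Climate), observational

def p_cold_given_no_shovel(p_noshovel_given_climate):
    """P(Climate = cold | Shovel = no) for a given mechanism behind not shoveling."""
    joint = {c: P_climate[c] * p_noshovel_given_climate[c] for c in P_climate}
    return joint["cold"] / sum(joint.values())

# No intervention: not shoveling is strong evidence for a mild climate.
print(p_cold_given_no_shovel(P_noshovel_given))             # ~0.11

# Strong intervention do(Shovel = no): shoveling no longer depends on climate,
# so observing it is uninformative -- the posterior equals the prior (0.5).
print(p_cold_given_no_shovel({"cold": 1.0, "mild": 1.0}))   # 0.50

# Weak intervention: half the time the nudge forces Shovel = no, otherwise the
# natural mechanism operates, so the variable stays partially diagnostic.
weak = {c: 0.5 * 1.0 + 0.5 * P_noshovel_given[c] for c in P_climate}
print(p_cold_given_no_shovel(weak))                          # ~0.38
```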
Testing the theory

As an initial test of the theory, we focus on whether choices are construed as imaginary interventions on causal models. If so, undoing effects should result. We therefore confronted participants with an evidential relation between a variable and a desired outcome and asked whether they would recommend manipulating the variable in order to achieve the outcome. Different causal models explained the given relation, either a common-cause structure or a direct causal link. Given a common-cause model, an intervention would imply undoing and therefore independence of the variable intervened on and the desired outcome. In contrast, a direct causal link implies that an intervention would activate the causal relation and therefore increase the probability of the outcome in accordance with the evidential relation. Thus despite the fact that the evidential relation is identical in both cases, action should only be recommended given a direct causal relation.

To test the generality of these predictions we constructed two experiments using different methodologies to investigate undoing effects. Experiment 1 was conducted on the World Wide Web, while Experiment 2 was conducted in the lab. New scenarios were constructed for each experiment. In addition, further factors that may affect participants' answers were investigated. In Experiment 1 the way the evidential relation was presented was manipulated. As outlined earlier, deriving interventional expected utilities requires Bayesian reasoning based on causal models. While some researchers have found that Bayesian reasoning is impaired with probabilities but facilitated by frequencies (Gigerenzer & Hoffrage, 1995), others have found that such inferences are possible based on qualitative information alone (Sloman & Lagnado, 2005). Therefore the evidential relation was either described in a frequency format, in a probability format, or just qualitatively. We expected participants to generate undoing effects regardless of the way the evidential relation was presented. In Experiment 2 we manipulated the strength of the evidential relation. The strength of the evidential relation affects the causal expected utility given a direct causal link structure but not given a common-cause structure. Therefore we expected participants to be sensitive to this manipulation only if they assumed a direct causal influence of the given variable upon the desired effect.

The common core of both experiments was that participants were informed about the existence of an unknown albeit plausible evidential relation. They were also informed about the causal model underlying the evidential relation. The relation could be traced back either to a causal link or to a common cause. Note that both causal models imply the existence of an evidential relation. Thus neither of the models challenged the existence or validity of the evidential relation. Participants in both experiments had to decide whether an action should be pursued to achieve a desired outcome. Based on our theory we expected to observe undoing effects in both experiments.

Experiment 1

The goal of Experiment 1 was to provide evidence that participants are sensitive to causal structure and to the implications of the intervention resulting from their choice. Participants received four scenarios about familiar everyday activities and their evidential relation to desired outcomes. For example, participants read the following story:

Recent research has shown that of 100 men who help with the chores, 82 are in good health whereas only 32 of 100 men who do not help with the chores are. Imagine a friend of yours is married and is concerned about his health. He read about the research and asks for your advice on whether he should start to do chores or not to improve his health. What is your recommendation? Should he start to do the chores or not?

In two experimental conditions we added a causal explanation for the evidential relation to the scenario. Participants were either informed that

Research discovered that the cause of this finding was that doing the chores is an additional exercise every day (Direct cause condition).

or that

Research discovered that the cause of this finding was that men who are concerned about equality issues are also concerned about health issues and therefore both help to do the chores and eat healthier food (Common cause condition).

Participants were given a forced choice either to recommend acting or to recommend not acting. The other three scenarios concerned the relation between exercise and caloric consumption, high-risk sports and drug abuse, and chess and academic achievement. The evidential relations in all four scenarios were very strong. The probability of the desired outcome was 40-50% higher for persons taking the action than for persons who did not. We also manipulated the way the evidential relation was presented to investigate whether presentation format affects participants' decisions. Participants either received information about conditional frequencies as in the example given above, or they were informed about the increase in probability ("men doing the chores are 50% more likely to be in good health than men who do not"), or they were just informed that there is an evidential relation without any specific numbers ("men doing the chores are substantially more likely to be in good health than men who do not"). This factor was manipulated between participants.

The study was run online. Participants were recruited at various psychology websites and through university newspaper advertisements. They were rewarded for participation with a small chance (about 1/50) to win a small amount of money (about $50). Each participant received the four scenarios with either a common-cause model or a direct-cause model. The order of models and scenarios was completely counterbalanced. Eighty-one participants made a total of 324 decisions. Because there were no apparent differences between the four scenarios, the results were aggregated for further analysis. The results for the six experimental conditions are depicted in Figure 4.

Figure 4: Results of Experiment 1. Percentage of recommendations to act, by causal model (common cause vs. direct cause) and presentation format (frequency, percent, qualitative)

Overall, only 23% of the participants given a common-cause model recommended acting, in comparison to 69% of the participants given a direct causal link. The difference between causal models turned out to be significant in all three presentation conditions, χ²(frequency) = 18.8, p < .01; χ²(percent) = 16.3, p < .01; χ²(qualitative) = 16.9, p < .01. These results show that most participants took the causal structure into account when making their decisions. They did not simply base them on the evidential relation.
Experiment 2

The causal model theory of choice claims that people base their decisions upon causal expected utilities calculated using interventional instead of evidential relations. Experiment 1 showed that participants are sensitive to the causal structure underlying an evidential relation and to undoing effects. However, it did not test whether participants are sensitive to the magnitude of the interventional probability implied by the causal model and the observed evidential relation. In Experiment 2 we therefore manipulated the strength of the evidential relation as well as the structure of the causal model generating the data and again looked for undoing effects.

Two scenarios describing biological processes were used. In the first scenario participants read that soils containing a certain substance have a higher crop yield than other soils. The yield was either 65% higher (strong evidential relation) or only 6.5% higher than with other soils (weak evidential relation). It was explained that the relation was either due to soil fungi that released the substance and also provided nutrients for the crops (common-cause model) or that the substance stimulates the growth of fungi that release nutrients for the crops (causal-chain model). Note that in contrast to Experiment 1 the same events were part of both causal models. We also added a control condition in which no causal model explaining the evidential relation was provided. Participants in all conditions were asked to give advice to an agrochemical company contemplating whether to add the substance to a fertilizer. Answers were given on a rating scale ranging from 0 ("substance should definitely not be added") to 100 ("substance should definitely be added"). The second scenario concerned the relation between the presence of certain molecules and the amount of insulin produced in bioreactors.

Based on our theory we expected participants to recommend not adding the substances when the evidential relation could be traced back to a common-cause model, regardless of the strength of the evidential relation. In contrast, we expected participants to consider the strength of the evidential relation when presented with a causal-chain model. In this case the evidential probabilities conform to the interventional probabilities. Thus the causal expected utilities are higher when the evidential relation is strong than when it is weak. Therefore participants should recommend acting, but more emphatically given a strong relation. For the control condition, we speculated that participants would take the evidential relation into account, because conversational maxims (Grice, 1975) imply that given information is relevant for subsequent questions. Therefore the results in this condition should mirror the ones in the causal-chain condition.

Seventy-two students from the University of Göttingen responded to the two scenarios. The order of scenarios and conditions was completely counterbalanced. As the answers to both scenarios were highly similar, they were combined for further analysis. The results are depicted in Figure 5.

Figure 5: Mean recommendations in Experiment 2 (0 = action not recommended, 100 = action strongly recommended), by causal model (common cause, causal chain, control) and strength of the evidential relation (weak vs. strong)

An analysis of variance with 'causal model' and 'evidential relation' as between-participants factors yielded a significant main effect for causal model, F(2, 138) = 28.5, p < .01, MSE = 725.9. No other effect was significant. Participants recommended action in the causal-chain and in the control condition, but advised omission in the common-cause condition. Contrary to our prediction, no effect of the strength of the evidential relation was observed in the causal-chain condition. This finding may be due to the scenarios chosen. A 6.5% increase in crop yield or insulin production is already a big effect in real terms. Therefore the results may reflect a ceiling effect.
The written justifications also pointed in this direction. Almost all participants assuming a causal-chain model strongly recommended adding the substance. Those who did not referred to other reasons, such as unknown side effects, to justify their advice. Therefore we expect to find the predicted effect in future experiments using a more sensitive task.

Concluding Remarks

Traditional evidential expected utility theory does not have the means to distinguish between evidential relations reflecting causal relations and spurious relations implied by common causes. However, the results of previous research have shown that participants distinguish between different causal models and their implications (e.g., Sloman & Lagnado, 2005; Waldmann & Hagmayer, 2005). Furthermore, the results of the two experiments presented in this paper show that this distinction also affects participants' decisions. The causal model theory of choice aims to integrate causal Bayes nets with an assessment of preference in order to develop an understanding of human choice.
Its central claim is that people make decisions based on causal models, which they use to infer the probability of desired consequences that result from interventions in the form of choices. Causal Bayes net theories provide the formal tools to model this process and to calculate causal expected utilities. The results of these two experiments add to the existing support in the literature for these claims. Further research will reveal the generality of these conclusions and afford further specification of the process of choice.

References

Ahn, W.-K., Kalish, C. W., Medin, D. L., & Gelman, S. A. (1995). The role of covariation versus mechanism information in causal attribution. Cognition, 54, 299-352.
Baron, J. (1992). The effect of normative beliefs on anticipated emotions. Journal of Personality and Social Psychology, 63, 320-330.
Colman, A. M., & Bacharach, M. (1997). Payoff dominance and the Stackelberg heuristic. Theory and Decision, 43, 1-19.
Dougherty, M. R. P., Gettys, C. F., & Thomas, R. P. (1997). The role of mental simulation in judgments of likelihood. Organizational Behavior and Human Decision Processes, 70, 135-148.
Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102, 684-704.
Grice, H. P. (1975). Logic and conversation. Cambridge: Harvard University Press.
Halpern, J. (2003). Reasoning about uncertainty. Cambridge: MIT Press.
Kelley, H. H. (1967). Attribution theory in social psychology. In D. Levine (Ed.), Nebraska Symposium on Motivation, 15 (pp. 192-238). Lincoln: University of Nebraska Press.
Klein, G. (1998). Sources of power: How people make decisions. Cambridge: MIT Press.
Lagnado, D. A., & Sloman, S. A. (2004). The advantage of timely intervention. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 856-876.
Mandel, D. R. (2003). Judgment dissociation theory: An analysis of differences in causal, counterfactual, and covariational reasoning. Journal of Experimental Psychology: General, 132, 419-434.
Meek, C., & Glymour, C. (1994). Conditioning and intervening. British Journal for the Philosophy of Science, 45, 1001-1021.
Nozick, R. (1995). The nature of rationality. Princeton: Princeton University Press.
Pearl, J. (2000). Causality. Cambridge: Cambridge University Press.
Pennington, N., & Hastie, R. (1993). Reasoning in explanation-based decision making. Cognition, 49, 123-163.
Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. In L. Berkowitz (Ed.), Advances in experimental social psychology, 19 (pp. 123-205). New York: Academic Press.
Quattrone, G., & Tversky, A. (1984). Causal versus diagnostic contingencies: On self-deception and on the voter's illusion. Journal of Personality and Social Psychology, 46, 237-248.
Sloman, S. A., & Lagnado, D. A. (2005). Do we "do"? Cognitive Science, 29, 5-39.
Spirtes, P., Glymour, C., & Scheines, R. (2000). Causation, prediction, and search. New York: Springer.
Tversky, A., & Kahneman, D. (1980). Causal schemas in judgments under uncertainty. In M. Fishbein (Ed.), Progress in social psychology. Hillsdale, NJ: Lawrence Erlbaum.
Waldmann, M. R., & Hagmayer, Y. (2005). Seeing versus doing: Two modes of accessing causal knowledge. Journal of Experimental Psychology: Learning, Memory, and Cognition.
Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford: Oxford University Press.