
Journal of Personality and Social Psychology
2009, Vol. 96, No. 1, 104 –118
© 2009 American Psychological Association
0022-3514/09/$12.00 DOI: 10.1037/a0013266
Reactions to Decisions With Uncertain Consequences: Reliance on
Perceived Fairness Versus Predicted Outcomes Depends on Knowledge
Kelly E. See
New York University
Many decisions made by authorities pose uncertain consequences for the individuals affected by them,
yet people must determine the extent to which they will support the change. Integrating the social justice
and behavioral decision theory literatures, the article argues that individuals determine their support for
proposed initiatives by assessing how knowledgeable they feel and using 2 main sources of information
more or less heavily: their prediction of how the outcome of the initiative is likely to affect them or the
perceived fairness of the decision maker. Three studies (2 experiments, 1 longitudinal field survey)
assessing support for proposed public policies reveal that when individuals feel very knowledgeable they
rely more on their prediction of how the outcome will affect them, whereas when they feel less
knowledgeable they rely more on an overall impression of procedural fairness. The theoretical account
and findings shed interdisciplinary insights into how people use process and outcome cues in reacting to
decisions under uncertainty and ambiguity.
Keywords: justice–fairness, decision-making process, uncertainty–ambiguity, multimethod, policy–political
Data collection for Study 3 was supported by National Science Foundation Grant SES-9906771 awarded to Allan Lind and Lynn Maguire, who are gratefully acknowledged for their generosity and guidance. Thanks also to Steve Blader, Chris Bell, Kristin Diehl, Craig Fox, Rick Larrick, Yuval Rottenstreich, Sim Sitkin, Kees van den Bos, and seminar participants at Duke, INSEAD, New York University, the University of Arizona, the University of Chicago, and the Wharton School for their helpful comments and suggestions. This article is based on the author’s dissertation written at Duke University.

Correspondence concerning this article should be addressed to Kelly E. See, Stern School of Business, New York University, 40 West 4th Street, New York, NY 10012. E-mail: [email protected]

Leaders, arbiters, corporations, governments, and other decision-making entities often propose policies, initiatives, or rules. Whether it is a new governmental or consumer policy, a reallocation of resources, or an announcement of a merger, many decisions made by authorities pose uncertain consequences for the individuals who are affected by them, yet people must still look to the future and determine the extent to which they will support the change. Under such conditions of ambiguity, what is the basis for how people react to decisions with uncertain consequences and determine their support for initiatives? This investigation integrates the fields of social justice and behavioral decision theory to assert that people determine their support by assessing how knowledgeable they feel and drawing on two main sources of information more or less heavily: their prediction of how the consequences of the initiative are likely to affect them or the perceived procedural fairness of the decision maker. The theoretical account takes an interdisciplinary perspective of how people react to decisions with uncertain consequences, introducing subjective knowledge as a critical moderator of the extent to which people draw on process versus outcome cues.

When reacting to proposed initiatives, it stands to reason that
people base their support in part on an uncertain judgment, or
prediction, of whether the outcome is likely to be favorable or
unfavorable. Indeed, fundamental social psychological theories of
reasoned action and planned behavior (e.g., Ajzen & Fishbein,
1980) rely on people being able to form clear beliefs or expectations about the positive or negative outcomes that will result from
particular actions. Moreover, classic decision theory and the field
of behavioral decision making, which have their roots in expected
utility theory (Savage, 1954; von Neumann & Morgenstern, 1944),
have strongly emphasized the importance of anticipated outcomes
in determining choices and behaviors. Decision theory approaches
to the study of judgment under uncertainty rest on people being
able to engage in a calculation of expected consequences by
considering the probability and valence of a particular outcome.
Some decision research has also suggested that people’s willingness to act on their forecasts can be affected by how knowledgeable or competent they feel (Fox & Tversky, 1995; Fox & Weber,
2002; Heath & Tversky, 1991). Thus, to the extent that people feel
sufficiently knowledgeable when making a prediction about the
consequences of a proposed initiative, they should presumably
treat their prediction as a useful source of information in determining their support.
However, in many decision contexts individuals must act on the
basis of predicted outcomes without feeling sufficiently knowledgeable concerning that prediction. A perceived lack of knowledge would thus create ambiguity and render one’s prediction as a
less reliable cue, necessitating the use of other cues to determine
reactions and support. Social psychologists have long demonstrated that a robust predictor of reactions to decisions made by
authorities is perceived procedural justice (Lind & Tyler, 1988;
Thibaut & Walker, 1975). Across a wide variety of contexts,
justice studies have shown that individuals respond more positively if they believe that the outcomes they receive are determined
with fair decision-making processes and procedures (e.g., Colquitt,
Conlon, Wesson, Porter, & Ng, 2001; Folger, 1977; Folger &
Cropanzano, 1998; Lind & Tyler, 1988; Thibaut & Walker, 1975).
Moreover, people are especially sensitive to process fairness when
they are unable to evaluate aspects of their social context, such as
the trustworthiness of the decision maker (e.g., Van den Bos, Lind,
Vermunt, & Wilke, 1997; Van den Bos, Wilke, & Lind, 1998).
This stream of findings suggests that people draw on procedural
fairness judgments as a heuristic substitute for missing information; and in this way, fairness can serve a general function of
managing informational or personal uncertainty (Lind & Van den
Bos, 2002; Van den Bos, 2001; Van den Bos & Lind, 2002; Van
den Bos & Lind, in press).
Building on the complementary yet distinct insights of the social
justice and behavioral decision theory literatures, this article provides a unified theoretical account examining how people’s subjective sense of their own knowledge dictates reliance on the two
sources of information discussed above: their prediction of how
the consequences of the initiative are likely to affect them and their
heuristic assessments of the procedural fairness of the decision
maker. In particular, the present account asserts that people rely on
their own uncertain forecasts if they feel relatively knowledgeable.
Otherwise they draw on different, more social or process-oriented
cues, such as procedural fairness, which has not typically been
considered in prior decision-making research. The current investigation also departs markedly from prior social justice investigations, which have usually assessed the influence of process and
outcome judgments on individuals’ reactions after both the process
and outcome are experienced (e.g., Brockner & Wiesenfeld, 1996).
Whereas prior justice accounts have provided a largely retrospective analysis of the way people react to processes and outcomes
that have already transpired, the present account offers a prospective psychological analysis of the way in which people draw on
procedural fairness judgments versus forecasted outcomes when
decisions are proposed and consequences are anticipated.
Understanding the way subjective knowledge moderates the
effects of forecasted outcomes and fairness perceptions can identify important boundary conditions for both the justice and
decision-making literatures, yielding new psychological insights
into the way people react to decisions under uncertainty and
ambiguity. The account is tested in three studies providing evidence from diverse public policy contexts that include both controlled laboratory experiments and longitudinal field survey data.
Low Subjective Knowledge Increases Reliance on
Procedural Fairness
When reacting to the actions of authorities or institutions, people
are sensitive to the decision-making processes that result in particular outcomes. One robust example of the importance of process
perceptions has been found in the social justice literature. Specifically, the fair process effect, in which individuals are more accepting and satisfied with an outcome if they believe the process
used to determine the outcome is fair, has been demonstrated in a
wide variety of policy, legal, and organizational contexts (for
reviews, see Colquitt et al., 2001; Folger, 1977; Folger & Cropanzano, 1998; Greenberg & Folger, 1983; Lind & Tyler, 1988;
Thibaut & Walker, 1975). In addition to the capacity for fair
procedures to increase satisfaction with received outcomes, fairness judgments have been shown to affect a variety of other
dependent measures, such as trust, favorable attitudes toward
authorities and leaders, and organizational citizenship behavior
(e.g., Brockner & Wiesenfeld, 1996; Colquitt et al., 2001; Huo,
Smith, Tyler, & Lind, 1996; Lind, Kulik, Ambrose, & de Vera
Park, 1993).
Several investigations of fairness heuristic theory provide evidence that fairness judgments are more predictive of reactions
when people are missing information or unsure of the social
context (e.g., Van den Bos et al., 1997, 1998). For instance, Van
den Bos et al. (1998) found that procedurally fair treatment had an
impact on satisfaction with an outcome received from an authority
when participants were given no information on the trustworthiness of the authority, whereas fairness had no effect on those who
were given either positive or negative trustworthiness information.
Other empirical tests have documented that procedural fairness is
weighed more heavily when individuals lack social comparison
information about others’ outcomes relative to when they are
provided with such information (Van den Bos et al., 1997).
Furthermore, an experimental investigation of primacy effects in
the formation of justice judgments revealed that fair treatment
encountered early in a relationship was more influential than the
same information encountered later (Lind, Kray, & Thompson,
2001). In a field study of promotion and tenure decisions, Ambrose
and Cropanzano (2003) similarly found that procedural justice (relative to distributive justice) was more influential
prior to and just after outcome decisions were made. These findings imply that people draw on procedural fairness at points in
time when they presumably have the least information or certainty
about another party or the ultimate outcome. Interestingly, simply
asking people to think about nonspecific uncertainty seems to
heighten emotional sensitivity to procedural fairness interventions.
Van den Bos (2001) asked people to write about how uncertainty
makes them feel physically and emotionally and found that individuals who were primed in this general way exhibited more
extreme emotional reactions to subsequent fair and unfair treatment relative to those who had not been primed.
Recent theoretical work has attempted to account for the range
of findings discussed above by providing the general explanation
that people rely on fairness information more when they are
relatively uncertain, and as such fairness serves as a way to buffer
or manage uncertainty (Lind & Van den Bos, 2002; Van den Bos,
2001; Van den Bos & Lind, 2002). Van den Bos and Lind (in
press) assert that fairness is a tool for resolving both informational
uncertainty (when one is missing information) and personal self-uncertainty (arising from doubt or instability in self-views, world
views, or the interrelation between the two). Yet overall, the notion
of uncertainty has been conceptualized and interpreted quite
broadly in justice research.
As discussed in more detail in the subsequent section and the
general discussion, the current article draws on a specific conceptualization of uncertainty that is grounded in behavioral decision
theory. Because the focus of the investigation is on situations in
which individuals must attempt to determine their support for
initiatives with uncertain consequences, it involves a forecast or
prediction about how they might be affected by the outcome. In
addition, the present account presumes that, when making these
predictions, individuals will sometimes feel they are not sufficiently knowledgeable, which creates uncertainty or ambiguity.
For example, research on political choice indicates that many
people lack knowledge about the personal or broader social implications of policies or leaders and hence may rely on ideological
cues as a basis for making decisions (e.g., see Bartels, 1996, 2005;
Conover & Feldman, 1989; Sniderman, Brody, & Tetlock, 1991).
Employees in organizations may similarly feel far removed from
strategic decisions with complex and unclear consequences, such
as layoffs or resource allocations. A perceived lack of knowledge
in such situations would constitute a source of uncertainty or
ambiguity for people.
It is important to note that the knowledge variable in this article
is an inherently subjective, internal assessment that is malleable
(e.g., Fox & Weber, 2002; Kagan, 1972; Van den Bos & Lind,
2002). Therefore, the focus here is not on how an actual objective
level of information or knowledge (as could be measured on a test)
might drive reliance on procedural fairness, but rather it is on a
subjective assessment of how knowledgeable people feel in the
particular domain at the time they try to forecast what the consequences will be.
On the basis of the fairness heuristic theory findings reviewed
above, perceived procedural fairness is hypothesized to have more
influence on support for an initiative for individuals who perceive
that they have low knowledge in a particular domain relative to
people who feel they are very knowledgeable. Feeling devoid of
knowledge when considering a decision made by an authority or
institution should render one’s prediction about the outcome as a
less reliable piece of information and in turn heighten sensitivity to
other, more social or procedural cues, such as fairness perceptions.
Although other cues may be available and used by people when
they lack knowledge, it is conceivable that procedural fairness is a
particularly salient and useful cue because it is a multidimensional
construct. Procedural fairness signals the quality of the decision-making system (Blader & Tyler, 2003; Leventhal, Karuza, & Fry,
1980) as well as socially rich information about group inclusion,
identity, and adherence to moral and social rules (Folger, 1998;
Tyler & Lind, 1992). In other words, if individuals must respond
to an initiative and do not feel knowledgeable when trying to
assess how it might unfold, they can try to resolve this ambiguity
by considering their perception of the soundness of the decision-making process or system that generated the initiative.
High Subjective Knowledge Increases Reliance on
Outcome Predictions
Forming expectations about the positive or negative outcomes
that will result from particular actions is a basic aspect of prominent social psychological theories (e.g., Ajzen & Fishbein, 1980)
and cognitive decision theories (e.g., Kahneman & Tversky, 1979).
As such, when people assess proposed institutional decisions, one
of the pieces of information they might consider is the way they
could be affected. Making an outcome prediction and a subsequent decision about whether to support the proposal is akin to a naturalistic
judgment under uncertainty.1 The decision theory and behavioral
decision-making literatures provide many insights into how people
act under uncertainty. The section draws on research on judgment
under uncertainty and ambiguity to conceptualize knowledge as a
moderator and explain how it is hypothesized to influence reliance
on predicted outcomes.
In the field of decision theory, both normative and descriptive
decision theoretic approaches were founded on principles of expected utility theory (Savage, 1954; von Neumann & Morgenstern,
1944). These approaches emphasize the importance of people’s preferences over particular outcomes, their assessment of how likely those
outcomes are to occur, and the psychological means by which they
evaluate outcomes (e.g., Kahneman, Slovic, & Tversky, 1982;
Kahneman & Tversky, 1979). In the present article, reacting to
institutional decisions with uncertain consequences entails a prediction about an outcome (i.e., “How will this affect me?”), as
well as an assessment of one’s knowledge concerning the
domain of prediction (i.e., “How knowledgeable do I feel
making this prediction?”).
Although it is reasonable to expect that people’s reactions might
be influenced by the valence of their forecasted outcomes, the
descriptive study of judgment under uncertainty and ambiguity
suggests that people are reluctant to act on their forecasts unless
they feel relatively competent or knowledgeable in the domain or
subject area they are assessing (Chow & Sarin, 2002; Ellsberg,
1961; Fox & Tversky, 1995; Fox & Weber, 2002; Frisch & Baron,
1988; Heath & Tversky, 1991; Kunreuther, Meszaros, Hogarth, &
Spranca, 1995). As such, subjective knowledge in this article is
most similar to psychological interpretations of how ambiguity is
experienced in the behavioral decision-making literature. With
some exceptions (e.g., Kunreuther et al., 1995), much work on
ambiguity has used basic gambling paradigms without a social
context, in which participants have the choice of betting on their
uncertain judgments of the outcome of events, such as upcoming
sporting events or changes in economic indicators (e.g., Fox &
Weber, 2002; Heath & Tversky, 1991). Thus, the present article
extrapolates from the basic line of reasoning in the area of judgment under uncertainty and ambiguity to assert that subjective
knowledge determines the extent to which individuals draw on
outcome predictions to determine their support for institutional
initiatives.
Specifically, predicted outcome favorability should have more
influence for individuals who perceive that they have high knowledge in a particular domain, relative to those who feel less knowledgeable. Note that this interaction between knowledge and outcome prediction is in the opposite direction from the hypothesized
interaction between knowledge and fairness. That is, if individuals
feel relatively confident or certain in their knowledge, it renders
their forecasted outcome as a relatively reliable piece of information and heightens sensitivity to that prediction. But when people
feel they have less knowledge, it renders their outcome prediction
as a noisier cue, and they weigh their procedural fairness judgment
more heavily.
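To make the hypothesized pattern concrete, the two predictions can be written as a single moderated regression of support on the two cues and their products with subjective knowledge (an illustrative formalization of the hypotheses, not an equation taken from the studies reported below):

    Support = b0 + b1(Fairness) + b2(Prediction) + b3(Knowledge) + b4(Knowledge × Fairness) + b5(Knowledge × Prediction) + e,

where b4 is hypothesized to be negative (the weight given to perceived procedural fairness shrinks as subjective knowledge rises) and b5 is hypothesized to be positive (the weight given to the predicted outcome grows as subjective knowledge rises).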
It is important to highlight that a great deal of social justice
research has considered the role of outcome favorability judgments
in conjunction with procedural fairness (for a review, see Brockner
& Wiesenfeld, 1996). However, in the justice literature the dependent measures are typically assessed after both the process and
outcome are realized. These are largely empirical settings in which
people have already received the outcome, and hence the consequences are known and there is no uncertain forecast or assessment
of one’s knowledge to be made. Brockner and Wiesenfeld (1996)
reviewed and synthesized a wide variety of independent samples
showing an interaction between procedural justice and (experienced, or certain) outcome favorability when the dependent variable was support for a decision or entity, documenting that procedural justice has a stronger effect when outcome favorability is
low. The authors posited that the interaction occurs because encountering an outcome that is unfavorable elicits sense-making
and attribution processes that direct people’s attention toward
procedural fairness information as a way to explain the basis for
the outcome.

1 Following Knight (1921), a decision under risk is one for which objective probabilistic information is known (e.g., a chance device), whereas a decision under uncertainty is one for which probabilistic information is not known (i.e., most natural events).
Theoretically, such an attribution account suggests a retrospective analysis that occurs once the outcome is reasonably determined and assimilated. In other words, when making sense of an
unfavorable outcome that is already known or transpired, people
look to both process and outcome evaluations and weigh one more
or less heavily depending on the level of the other (Brockner &
Wiesenfeld, 1996). The present account, in contrast, offers a
prospective analysis of how people make sense of an outcome that
is anticipated in which uncertainty has been highlighted, and the
article uniquely argues that people assess their internal feeling of
knowledge and subsequently rely more on fairness information
(when they feel less knowledgeable) or their prediction about the
outcome (when they feel more knowledgeable) to determine their
support. The hypothesized moderating effects of knowledge are
tested in the following three studies.
Study 1
The first study asked undergraduate students at a private southeastern university to read a brief description of a real environmental policy being proposed by the state government. This policy was
selected because participants would presumably perceive its subject, the issue of water quality in a nearby part of the state, to be
important and potentially relevant to them. However, it was not
likely to be a policy about which people had a great deal of
information in advance of the study, thus providing an opportunity
to (a) capture natural variation in people’s predictions about the
policy outcome and (b) manipulate their subjective knowledge in
the context of the experiment. Procedural fairness of the state
government and predicted consequences of the policy were measured variables, and the dependent variable was their degree of
support for the proposed policy. The central hypotheses concern
the statistical interaction between knowledge and procedural fairness and the interaction between knowledge and predicted outcome favorability. Procedural fairness should have more influence
on policy support for those made to feel less (relative to more)
knowledgeable, whereas predicted outcome favorability should
have more influence on policy support for those made to feel more
(relative to less) knowledgeable.
Method
Participants. Participants were 119 undergraduate students at
a small private southeastern university. Participants were recruited
at the student union and were paid a total of $3 for a bundle of
unrelated tasks that took approximately 15 min to complete.
107
Design. Participants were randomly assigned to one of three
knowledge conditions: high-subjective knowledge, low-subjective
knowledge, or a control condition.
Procedure. The experiment was administered using paper and
pencil. All participants were asked to read the following short
description of the proposed public policy:
For this study you will be assessing a proposed environmental policy
that affects [your state], potentially including [your university]. Specifically, the state government is proposing a plan to improve water
quality in the [region] of [state]. The coastal rivers and estuaries of
[state] have been showing signs of problems associated with nutrient
pollution, including a growing number of fish deaths and an increase
in algae growth clouding the water. Several sources of nutrient pollution affect the water in [state], including the rapid growth of cities,
agricultural pesticides and fertilizers, and pollution from industrial
waste. If these problems are not adequately addressed, they could
have human health effects for the state. To address these potential
human health risks, the [state law making agency] is considering a
policy that would attempt to reduce the amount of pollution by X%
over the next X years.
After reading about the policy, participants encountered the
knowledge manipulation. The knowledge manipulation drew on
past research showing that people’s willingness to bet on their own
forecasts can be affected by comparisons to others who might be
more knowledgeable (Fox & Tversky, 1995), presumably because
such comparisons alter people’s confidence in their own knowledge. In the present study, participants’ attention was drawn to
other groups of people evaluating the policy who would automatically be perceived by undergraduates as either more or less knowledgeable than themselves. Depending on the references
made, implicit social comparisons should either enhance or undermine participants’ subjective sense of their own knowledge. Participants assigned to the low-knowledge condition were told of a
relatively more knowledgeable group of respondents: “Please indicate your responses to the short survey below. We will also be
administering this survey to experts in environmental science and
chemistry.” Participants assigned to the high-knowledge condition
were told of a relatively less knowledgeable group of respondents:
“Please indicate your responses to the short survey below. We will
also be administering this survey to [local city] high school students taking a political science course.” Finally, there was a
control condition in which no referent group was mentioned. Using
a subtle social comparison to manipulate knowledge is advantageous in this context because references to other respondents
should have no logical impact on evaluations of the decision-making authority, the policy itself, or predictions of the outcome.
The manipulation should affect only participants’ confidence in
their own knowledge, thus providing a pure manipulation of subjective knowledge.
After the knowledge manipulation, participants answered the
dependent variable and two additional measured independent variables in counterbalanced order. All variables had a 7-point ordinal
response scale (1 = not at all, 7 = extremely) and an “I don’t
know” response option, with the exception of the attitude control
variable discussed below. The dependent variable follows previous
justice research (e.g., Brockner & Wiesenfeld, 1996; Colquitt,
2001) and was an item assessing support for the proposed initiative. Specifically, the item asked participants to what extent they
would “support having the proposed policy become part of the
state’s laws to control water pollution in the [region].” The measured independent variables included a global procedural fairness
judgment about the entity proposing the initiative and a prediction
of the consequences of the policy. The fairness variable asked
participants to indicate their agreement with the statement: “The
procedures used by the state government to make laws are fair.”
Higher ratings for the fairness variable reflect the perception that
the government is procedurally fair. The measure of predicted
outcome favorability asked participants to indicate whether “the
proposed policy is likely to be very favorable” to people like them.
Higher ratings for the outcome prediction indicate a higher perceived likelihood that the outcome of the policy would be favorable to the participant.
Finally, a control variable and manipulation check were included. The control variable assessed general environmental attitude, as such attitudes are often embedded in values or factual
information that can influence how people respond (Eagly &
Kulesa, 1997). The wording of the item was as follows: “There are
differing opinions about how far we’ve gone with environmental
protection laws and regulations. At the present time, do you think
environmental protection laws and regulations have gone too far,
not far enough, or have struck the right balance?” (1 = gone too far, 2 = struck the right balance, 3 = not far enough). Participants
were asked to indicate how “knowledgeable” they felt about “environmental issues and problems” on a 7-point scale to check the
knowledge manipulation. Higher values on this item reflect greater
confidence in one’s knowledge in the domain of assessment.
Two interaction terms were used to test the hypotheses: the
interaction of the fairness item and the knowledge factor, and the
interaction of the predicted outcome item and the knowledge
factor. Before creating the interactions, the knowledge factor was
coded as –1 for the low-knowledge condition, 1 for the high-knowledge condition, and 0 for the control condition. The procedural fairness and predicted outcome variables were mean-centered to reduce correlation among interaction terms and their
associated variables in the analysis (Aiken & West, 1991; Jaccard
& Wan, 1996). The two interaction terms were created by multiplying the knowledge factor by each of the mean-centered measured variables.
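As a concrete illustration of this variable construction, a minimal sketch in Python follows (the data file and column names are hypothetical, and this is not the original analysis code):

    import pandas as pd

    df = pd.read_csv("study1.csv")  # hypothetical file of Study 1 responses

    # Code the manipulated knowledge factor: -1 (low), 0 (control), 1 (high).
    df["know"] = df["condition"].map({"low": -1, "control": 0, "high": 1})

    # Mean-center the two measured predictors before forming products.
    df["fair_c"] = df["fairness"] - df["fairness"].mean()
    df["pred_c"] = df["prediction"] - df["prediction"].mean()

    # The two interaction terms described above.
    df["know_x_fair"] = df["know"] * df["fair_c"]
    df["know_x_pred"] = df["know"] * df["pred_c"]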
Results
Manipulation check. Mean self-reported ratings of knowledge
were compared between conditions to assess whether the knowledge
manipulation was successful. As intended, participants in the high-knowledge condition rated themselves as significantly more knowledgeable (M = 3.97) than those in the low-knowledge condition (M = 3.30), F(1, 77) = 4.27, p = .042, η² = .05. The control group gave knowledge ratings that were in the middle of the other two conditions (M = 3.57). Results of analysis of variance (ANOVA) polynomial contrasts revealed a significant linear trend for mean knowledge ratings for the three conditions, F(1, 116) = 4.8, p = .031, and a nonsignificant quadratic trend (F < 1).2 No significant effects of the condition variable were
found for the two measured independent variables at either the
multivariate or univariate level. In addition, no order effects were
found at the multivariate or univariate level.3
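A minimal sketch of this manipulation check and the trend contrasts, in Python with statsmodels, is shown below (file, variable, and condition labels are hypothetical; this is not the original analysis code):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.read_csv("study1.csv")  # hypothetical file with a 'know_check' item

    # Omnibus one-way ANOVA on the self-rated knowledge check.
    print(anova_lm(smf.ols("know_check ~ C(condition)", data=df).fit()))

    # Linear (-1, 0, 1) and quadratic (-1, 2, -1) trend estimates computed over
    # the ordered condition means (low, control, high).
    means = df.groupby("condition")["know_check"].mean().loc[["low", "control", "high"]]
    print(np.dot([-1, 0, 1], means), np.dot([-1, 2, -1], means))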
Analysis. The dependent variable is a one-item ordinal measure, thus all hypotheses were tested using both ordered logit and
multiple regression analysis. The results were virtually identical,
and only the regression results are reported here because of ease of
interpretation. The dependent variable (policy support) was run in
the model first with the attitude control variable and the independent variables: the knowledge factor, procedural fairness, and
outcome prediction. In Step 2, the two-way interaction terms
(Knowledge × Fairness and Knowledge × Outcome Prediction)
were added.
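A minimal sketch of these two steps, using the hypothetical variables constructed in the earlier sketch, follows; the ordered logit is included only to illustrate the robustness check mentioned above, and none of this is the original analysis code:

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    df = pd.read_csv("study1_coded.csv")  # hypothetical file with the coded variables

    step1 = smf.ols("support ~ attitude + know + fair_c + pred_c", data=df).fit()
    step2 = smf.ols("support ~ attitude + know + fair_c + pred_c"
                    " + know_x_fair + know_x_pred", data=df).fit()
    print(step2.rsquared - step1.rsquared)  # increment in R-squared from Step 2

    # Ordered logit on the 7-point support item as a robustness check; the
    # estimated thresholds replace the intercept, so the constant is suppressed.
    ologit = OrderedModel.from_formula(
        "support ~ 0 + attitude + know + fair_c + pred_c + know_x_fair + know_x_pred",
        data=df, distr="logit").fit(method="bfgs")
    print(ologit.summary())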
As shown in Step 1 of Table 1, the procedural fairness judgment (B = 0.20, SE = 0.09, p = .04) and outcome prediction (B = 0.30, SE = 0.08, p < .001) variables had significant main effects and were positively related to support for the proposed policy. More critically, Step 2 of Table 1 provides the test of the hypotheses and reveals that the main effects were qualified by two significant interactions: the Knowledge × Fairness term (B = –0.30, SE = 0.13, p = .02) and the Knowledge × Outcome Prediction term (B = 0.35, SE = 0.12, p = .004).4 Note that the two interaction
coefficients have opposite signs, indicating that they have the
expected opposite effects as a function of knowledge.
2 Coding the knowledge factor as –1, 0, 1 (low, control, and high, respectively) assumes that the variable falls along the knowledge dimension in a linear fashion, which appears to be supported by the significant
linear trend for the knowledge manipulation check item. As a more direct
test of linearity, a second orthogonal nonlinear term coded –1, 2, –1 (low,
control, and high, respectively) was added to the model at various stages to
test whether it accounted for additional variance beyond the original linear
factor. The nonlinear contrast had no significant effect when included at
any step of the analysis and did not result in any additional explained
variance (ΔR² = –.004, ns, for Step 1; ΔR² = .016, ns, for Step 2 of Table
1). Moreover, none of the results materially change (i.e., both hypotheses
remain supported) when the nonlinear contrast is accounted for in the
model.
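A short sketch of this check, continuing the hypothetical variables and fitted Step 2 model from the earlier sketches (illustrative only, not the original analysis code):

    import statsmodels.formula.api as smf

    # Orthogonal quadratic contrast over the conditions: -1 (low), 2 (control), -1 (high).
    df["know_quad"] = df["condition"].map({"low": -1, "control": 2, "high": -1})

    with_quad = smf.ols("support ~ attitude + know + know_quad + fair_c + pred_c"
                        " + know_x_fair + know_x_pred", data=df).fit()
    print(with_quad.rsquared - step2.rsquared)  # variance explained beyond the linear factor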
3 The randomized order of variable measurement resulted in some participants answering the dependent variable (policy support) before the two
measured independent variables (procedural fairness and outcome prediction). Three dummy variables were created to code for each of the four
possible orderings to ensure that there was no untoward effect of this
procedure. When the order variables were run along with the condition
variable in a MANCOVA, none of the order variables significantly affected
the dependent or independent variables at the multivariate or univariate
level. Also, when added to both steps of the analysis in Table 1, none of the
order variables were significant (ps > .28), and their presence in the models resulted in a nominal reduction in explained variance (ΔR² = –.10, ns, for Step 1; ΔR² = –.15, ns, for Step 2). As a separate test, the orderings
were collapsed into a single dummy variable to code for whether the
dependent variable was administered before or after the independent variables. This dummy variable also had no effect on the dependent or
independent variables, was not significant when included in either step of
the analysis in Table 1 (ps > .8), and did not explain additional variance (ΔR² = –.09, ns, for both Step 1 and Step 2). Moreover, none of the results
materially change (i.e., both hypotheses remain supported) when order is
accounted for in the model.
4 When all possible two-way interactions were included in the regression model for Study 1 as a test of the stability of the results, the Knowledge × Fairness interaction (B = –0.28, SE = 0.13, p = .035) and Knowledge × Predicted Outcome interaction (B = 0.30, SE = 0.12, p = .014) remained statistically significant and virtually unchanged, whereas the Fairness × Predicted Outcome interaction falls short of significance (B = –0.11, SE = 0.07, p = .10).
Table 1
Results of Regression Predicting Policy Support, Study 1 (N = 119)

Variable                              B      SE B       β
Step 1
  Constant                          4.60     0.51
  Environmental attitude            0.32     0.20     .17†
  Knowledge condition               0.15     0.14     .10
  Procedural fairness               0.20     0.09     .20*
  Outcome prediction                0.30     0.08     .35**
Step 2
  Constant                          4.16     0.51
  Environmental attitude            0.47     0.19     .24*
  Knowledge condition               0.18     0.13     .13
  Procedural fairness               0.30     0.10     .31**
  Outcome prediction                0.18     0.09     .21*
  Knowledge × Fairness             –0.30     0.13    –.23*
  Knowledge × Outcome prediction    0.35     0.12     .31**

Note. R² = .27 for Step 1; ΔR² = .09 for Step 2 (ps < .01).
* p < .05. ** p < .01. † p < .10.
Separate regressions were used on the low- and high-knowledge conditions using only the environmental control variable, procedural fairness, and outcome prediction to probe the nature of the interactions. For low-knowledge individuals, neither the environmental attitude control variable (B = 0.68, SE = 0.47, p > .17) nor the predicted outcome variable (B = –0.17, SE = 0.35, p > .62) was significant, but the fairness coefficient (B = 0.71, SE = 0.31, p = .034) was positive and significant. For the high-knowledge individuals, the environmental attitude control variable was marginally significant (B = 0.61, SE = 0.32, p = .066) and the procedural fairness coefficient was not significant (B = 0.06, SE = 0.14, p > .66), whereas the prediction of outcome favorability was highly significant (B = 0.61, SE = 0.12, p < .001). This pattern of results shows strong support for the hypotheses.

The hypothesized interactions are displayed in Figure 1. Following Aiken and West (1991), the interactions were graphed using the slopes from the regression on the overall sample (Step 2 in Table 1, including covariates), plugging in values for the x-axis at the mean, one standard deviation below the mean, and one standard deviation above for procedural fairness or predicted outcome. As can be seen in the top panel of Figure 1, procedural fairness had a much greater effect on policy support for those who reported low knowledge relative to the high-knowledge group. Figure 1 (bottom panel) also shows that outcome favorability had a much greater effect on policy support for those who are more knowledgeable relative to those who are less knowledgeable.

Figure 1. Top panel: Graph of the interaction of knowledge (manipulated) and procedural fairness (measured) in Study 1, illustrating the effects of fairness on water policy support as a function of subjective knowledge. The x-axis plots the level of procedural fairness at the mean and one standard deviation (St Dev) above and below the mean. Bottom panel: Graph of the interaction of knowledge (manipulated) and outcome prediction (measured) in Study 1, illustrating the effects of predicted outcome on water policy support as a function of subjective knowledge. The x-axis plots the level of outcome prediction at the mean and one standard deviation above and below the mean.

Discussion

The findings from this experiment provide support for the hypothesized moderating role of knowledge in determining the extent to which people rely on their procedural fairness judgment or their outcome prediction to determine their support for a proposed policy. Although the hypothesized pattern is that the effect of fairness decreases and the effect of predicted outcome increases at higher levels of knowledge, this is not to say that fairness and predicted outcome favorability should necessarily lose significance at high and low levels of knowledge. However, the results happened to demonstrate a particularly strong pattern, such that low-knowledge individuals relied only on their procedural fairness judgment, whereas high-knowledge individuals relied only on their predictions of outcome favorability in this study.

The interaction between knowledge and predicted outcome extends research on competence effects in judgment under uncertainty and ambiguity (e.g., Fox & Tversky, 1995; Fox & Weber, 2002; Heath & Tversky, 1991) by demonstrating that when people do not feel competent they will shift away from their prediction and instead draw on social or procedural information cues, such as fairness or other aspects of the decision process. The significant interaction between knowledge and fairness also extends research on fairness heuristic theory by showing that in addition to substituting for missing trust (Van den Bos, Wilke, & Lind, 1998) or social comparison information (Van den Bos et al., 1997), fairness can be used when domain knowledge is perceived to be lacking. This pattern of results emerged despite a very subtle manipulation of uncertainty that drew on implicit social comparisons to alter
subjective feelings of knowledge. Finally, these results extend
research in social justice on the joint effects of process and
outcome by considering those sources of information in a context
in which outcomes are unknown and consequences are anticipated,
which is further elaborated in the general discussion.
Despite the strong support for the hypotheses, one consideration
of the current study is that the design included both manipulated
and measured independent variables, thus providing less control
relative to experiments with all variables manipulated. The following experiment seeks to improve on the first study by manipulating
all of the factors to establish greater causal evidence of the hypothesized relationships.
Study 2
The second study asked undergraduate and graduate students at
a large private eastern university to read a brief description of a
proposed bank lending policy that could affect personal and student loans in their geographic region. The stimulus was chosen
because student participants presumably find it relevant to them
both at the time they participate in the study and in the foreseeable
future. The dependent variable was their degree of support for the
proposed policy. All three independent variables of interest (subjective knowledge, procedural fairness judgments, and predicted
outcome) were manipulated. As before, the hypotheses concern the
statistical interaction between knowledge and procedural fairness
and between knowledge and outcome favorability prediction.
Method
Participants. Participants were 156 students at a large private
eastern university. Participants were recruited to come to the
behavioral lab and paid a total of $6 for a bundle of unrelated tasks
that took approximately 25 min to complete.
Design. Participants were randomly assigned to a 2 (high vs. low knowledge) × 2 (procedural fairness vs. unfairness) × 2 (favorable prediction vs. unfavorable prediction) between-participants factorial design.
Procedure. The experiment was administered on a computer.
All participants were asked to read on screen the following short
introduction to a proposed lending policy:
For this study you will be assessing proposed legislation that could
affect many kinds of loans granted to people in [your state] (including
some personal and student loans). Specifically, the [State] Assembly
is proposing a plan that would influence loan underwriting guidelines.
The amount of debt carried by the average American has increased by
more than 50 percent since 1990, due in large part to increasing
mortgage, credit card, and education debt. In order to maintain consumer financial integrity, the [state] government is considering legislation that would attempt to reduce the amount of household debt over
the next five years.
Participants then advanced to a new screen, which provided
more information related to evaluation of the policy. The information they encountered constituted either the manipulation of
procedural fairness or the manipulation of predicted outcome
favorability. The information for each manipulation was presented
on a separate screen in an order that was randomized within the
computer program for each participant.
The manipulation of procedural fairness involved reading a
short blurb on citizen reactions to the decision-making process
used by the entity to determine the policy. The information included language indicating whether or not the decision processes
were in line with established antecedents of procedural justice
perceptions, such as consistent procedures, bias suppression, accuracy, and stakeholder representation (Blader & Tyler, 2003;
Colquitt, 2001; Leventhal et al., 1980). For example, participants
randomly assigned to the procedural fairness condition read that
the policy-making process was positively viewed by citizens as
“unbiased and accurate” and included “the input of people who
could be affected by the legislation.” Participants in the procedural
unfairness condition were given the opposite information that
reflected citizen criticisms indicating that the policy-making process was biased, inaccurate, and did not include stakeholder input.
The outcome prediction manipulation was designed to subtly
highlight to participants the potential for the policy to have either
favorable or unfavorable effects on people like them (i.e., students
and recent graduates), while conveying uncertainty. Specifically,
the only difference between the high and low predicted outcome
conditions was whether participants were prompted to consider
“the potential degree of positive effects” (favorable prediction
condition) or “the potential degree of negative effects” (unfavorable prediction condition) that the policy might have on them. It
was important to alter people’s uncertain forecasts (or predictions)
about future outcomes, rather than provide information that they
treated as known (certain) outcome favorability, because past
research (e.g., Schroth & Shah, 2000) has speculated that particularly strong or clear outcome expectancies can be interpreted by
participants as an actual certain outcome. Thus, to remind people
of the uncertainty inherent in any forecast, all participants were
first told that “it is difficult to say for certain what effects the
proposed policy could have on college students and recent graduates” and that “economic indicators like inflation rates, unemployment, and the current yield on 10-year treasury bonds” were
among the many factors that could be relevant in predicting
potential effects of the policy.
After encountering the procedural fairness and outcome prediction manipulations, subjective knowledge was manipulated in a
manner similar to Study 1. Participants assigned to the low-knowledge condition encountered a screen with the following
statement, which was designed to remind them of individuals who
would be perceived as relatively more knowledgeable about the
lending policy in question: “Next please answer the following
questions. We will also be administering this survey to Wall Street
experts in economics and financial analysis.” Participants assigned
to the high-knowledge condition read a statement that was designed to remind them of individuals who would be relatively less
knowledgeable about the lending policy in question: “Next please
answer the following questions. We will also be administering this
survey online to freshman high school students taking a political
science course.” Participants clicked on a button on the screen to
continue on to the questions.
Participants were then asked questions pertaining to the dependent variable and the manipulation checks. Unless otherwise
noted, all ratings for these items were made on 7-point scales, with
higher ratings indicating greater agreement with the question. The
dependent variable assessed support for the proposed policy and
was similar to that of Study 1. Specifically, participants indicated
KNOWLEDGE, FAIRNESS, AND PREDICTIONS
the extent to which they would “support having the proposed
policy become part of the state’s laws to control bank lending
practices” in their geographic location. Then to assess the effectiveness of the manipulations, participants answered a series of
items that were presented in a random order for each participant.
The procedural fairness manipulation was checked by asking participants to respond to the statement: “The decision making procedures used by the state government are fair.” The manipulation
of predicted outcome favorability was assessed by asking participants to respond to the statement: “The proposed policy is likely to
be very favorable to people like me.” Participants were asked to
respond to the following question to check the knowledge manipulation: “How knowledgeable do you feel about finance and economic issues” (1 = not at all knowledgeable, 7 = extremely
knowledgeable).
Several control variables were included. General attitudes toward financial legislation was assessed by asking participants
whether they thought that “financial laws and regulations have
gone too far, not far enough, or have struck the right balance” (1 = gone too far, 2 = struck the right balance, 3 = not far enough).
The remaining control variables were age, gender (1 = female, 0 = male), and whether the participant was a resident of the state currently considering the policy (1 = resident, 0 = nonresident).
When the participants had answered all of these questions and had
completed the other experiments in the lab session, they were
debriefed and paid for their participation.
Results
Manipulation checks. A 2 × 2 × 2 multivariate analysis of variance (MANOVA) was run on the three manipulation check items. For the procedural fairness item, the analysis yielded only a main effect of the fairness manipulation, F(1, 148) = 6.90, p = .01, ηp² = .05. As expected, participants in the fairness condition agreed more with the statement that the government uses fair decision-making procedures (M = 3.9) than did participants in the unfairness condition (M = 3.4). For the outcome prediction item, there was only a significant main effect of the outcome manipulation, F(1, 148) = 4.24, p = .041, ηp² = .03. Participants who had been prompted to focus on the uncertain, potentially positive effects of the policy agreed more that the outcome was likely to be favorable to them (M = 4.2) than did participants who were prompted to focus on uncertain, potentially negative effects (M = 3.6). Finally, for the knowledge item, the analysis yielded only a main effect of the knowledge manipulation, F(1, 148) = 5.00, p = .027, ηp² = .03. Participants in the high-knowledge condition rated themselves as significantly more knowledgeable about finance and economics (M = 3.9) than those in the low-knowledge condition (M = 3.4). These results show that the three manipulations were
successful at influencing the independent variables as intended.
Analysis. The raw means for policy support by each condition
are presented in Table 2. A 2 × 2 × 2 univariate ANOVA was conducted on support for the proposed policy, which included age, gender, residency, and attitude as covariates. As shown in Table 3, the analysis revealed a main effect of outcome prediction, F(1, 144) = 8.16, p = .005, ηp² = .05, a marginal main effect of procedural fairness, F(1, 144) = 2.77, p = .098, ηp² = .02, and no main effect of knowledge (F < 1). It is important to note that these effects were qualified by two significant interactions: the interaction of
knowledge and procedural fairness, F(1, 144) = 4.16, p = .043, ηp² = .03, and the interaction of knowledge and outcome prediction, F(1, 144) = 4.52, p = .035, ηp² = .03. ANOVAs were used separately on the low- and high-knowledge conditions to examine these interactions. As hypothesized, the effect of procedural fairness was stronger in the low-knowledge condition, F(1, 70) = 6.63, p = .012, ηp² = .09, than in the high-knowledge condition (F < 1). In contrast, the effect of outcome prediction was stronger in the high-knowledge condition, F(1, 72) = 4.52, p = .037, ηp² = .06, than in the low-knowledge condition (F < 1). These findings show strong support for the hypotheses and are graphically depicted in Figure 2.

Table 2
Mean Policy Support by Condition, Study 2

                               Procedural fairness              Procedural unfairness
Condition                      Favorable      Unfavorable       Favorable      Unfavorable
                               prediction     prediction        prediction     prediction
Low-subjective knowledge         4.57            4.35             4.06            3.74
High-subjective knowledge        4.72            3.67             4.63            4.00

Note. Raw cell means are reported.

Table 3
Results of Full Factorial Analysis of Variance Predicting Policy Support, Study 2 (N = 156)

Variable                                        dfs        F        ηp²
Attitude                                       1, 144    24.28**    .14
Residency                                      1, 144     0.54      .00
Age                                            1, 144     5.01*     .03
Gender                                         1, 144     1.06      .01
Subjective knowledge factor                    1, 144     0.04      .00
Procedural fairness factor                     1, 144     2.77†     .02
Outcome prediction factor                      1, 144     8.16**    .05
Knowledge × Fairness                           1, 144     4.16*     .03
Knowledge × Outcome prediction                 1, 144     4.52*     .03
Fairness × Outcome prediction                  1, 144     0.24      .00
Knowledge × Fairness × Outcome prediction      1, 144     1.22      .01

* p < .05. ** p < .01. † p < .10.

Figure 2. The moderating effects of knowledge in Study 2. Marginal means adjusted for covariates are displayed. Top panel: Effects of procedural fairness on loan policy support as a function of knowledge. Bottom panel: Effects of outcome prediction on loan policy support as a function of knowledge.
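A minimal sketch of the full factorial analysis with covariates summarized above and in Table 3, in Python with statsmodels, is given below (column names are hypothetical; sum-to-zero coding is used so that the Type III tests are interpretable; this is not the original analysis code):

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.read_csv("study2.csv")  # hypothetical file of Study 2 responses

    model = smf.ols(
        "support ~ attitude + residency + age + gender"
        " + C(knowledge, Sum) * C(fairness, Sum) * C(outcome, Sum)",
        data=df).fit()
    print(anova_lm(model, typ=3))  # F tests for covariates, factors, and interactions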
Discussion
The results of this experiment show strong support for the
hypotheses in a controlled lab environment with all variables
manipulated. The findings mirror the previous study and indicate
that when people feel they lack knowledge their support for
proposed initiatives is more strongly influenced by variations in
procedural fairness, relative to when people feel they are knowledgeable, in which case their support is more strongly influenced
by their predicted outcomes. The following study seeks to further
enhance the relevance and implications of the findings from the
first two experiments by testing the hypotheses in a field setting in
which there are real consequences. Specifically, a panel survey
was administered twice to individuals in the general population
about to be directly impacted by a new public policy under
consideration.
Study 3
This study uses a two-wave panel survey from a sample of
households in a region of a southeastern state that was targeted by
the real proposed water quality policy described in Study 1. As
with repeated measures designs in experimental research, the panel
nature of the survey controls for intra-individual variance (the
tendency for a given participant to respond in a consistent way
across time) and measurement error, thus providing greater reliability of results than purely cross-sectional survey data. In combination with the causal evidence from the previous two experiments, the current study provides an opportunity to test the
generalizability of the results in a real-world setting, yielding
richer evidence for the theoretical account advanced in this article.
Method
Participants. The dataset comprised 332 individuals in the
general population living within three counties in the affected
region of the state. The sample was identified using randomly
generated lists of telephone numbers in selected urban, rural, and
suburban zip codes within the region. Demographics of the sample
mirrored census statistics for this region of the state. Fifty-three
percent of respondents were women, and 70% of respondents
were Caucasian, 27% were African American, and 3% identified another racial or ethnic group. Respondents’ ages ranged
from 18 to 87 years (median = 44). Annual household income
ranged from under $10,000 (12%) to over $100,000 (5%), with the
median falling in the upper end of the $25,000 to $35,000 category.
Design. The study uses a longitudinal panel survey design, in
which the identical survey instrument was administered on two
occasions, approximately 15 months apart, to the exact same panel
of individuals. At both times the survey was administered, the
policy was proposed and still under development, thus the outcome
was never realized within the time frame of the study. The variables for the present investigation were measured using items
embedded in a longitudinal survey instrument intended for a
different investigation, which queried respondents on their environmental protection attitudes and self-reported behaviors (e.g.,
recycling), political activity, past experience with government
officials, and preferred sources of public policy and scientific
information.
Procedure. The surveys were conducted by telephone, and the
total interview time for each survey was approximately 30 min.
Most of the survey items use a 4-point response scale with a
separate “I don’t know” response option. Participants were mailed
a check for $15 for completing the phone survey the first time and
an additional $30 for completing the phone survey the second time.
Variables. The dependent variable comprised the mean of the
responses to three specific items measuring support for different
components of the policy. The components of the policy pertained
to reducing pollution from three different sources: agricultural
sources, city and industrial waste, and housing development. For
each of these components of the overall policy, participants rated
how strongly they would “support having this become part of the
state’s laws to control pollution in the [region]” (1 = disagree strongly, 4 = agree strongly). The reliability for the three items
was .70, thus they were averaged into one overall measure of
support for the policy.
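A minimal sketch of forming this composite follows (the reliability index is assumed here to be Cronbach's alpha, consistent with the .70 reported above; item and file names are hypothetical, and this is not the original analysis code):

    import pandas as pd

    items = pd.read_csv("study3.csv")[["support_agri", "support_waste", "support_housing"]]

    # Cronbach's alpha: k/(k - 1) * (1 - sum of item variances / variance of the total score).
    k = items.shape[1]
    alpha = (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

    # The dependent measure: the mean of the three items.
    support_scale = items.mean(axis=1)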
The wording of the independent variables was similar to the
measured variables used in Study 1 and the manipulation checks of
Study 2. Participants were asked to indicate their agreement with
the following statement to measure global procedural fairness
perceptions: “Overall, the way the state legislature makes laws is
fair.” The outcome prediction measure asked participants to indicate the extent to which they agreed with the statement: “The
outcome of the policy is likely to be very favorable to people like
me.” Both of these items were rated on a 4-point scale (1 = disagree strongly, 4 = agree strongly). Knowledge was measured using a self-report item: “How much do you feel you yourself know about environmental issues and problems?” (1 = nothing, 4 = a lot). Finally, a time variable was included as a control in all analyses to indicate the first and second survey responses (1 = first response, 2 = second response).
Results
Descriptive statistics. Table 4 presents the means, standard
deviations, and bivariate correlations among the variables pooled
across the two time periods.
Analysis. Before analysis, the procedural fairness, predicted outcome favorability, and environmental knowledge variables were mean-centered, and the two interaction terms were created by multiplying the mean-centered variables. Pooled cross-sectional time-series linear regression was used to take advantage of the panel nature of these data (Hsiao, 1986).5 Each respondent answered all independent and dependent variables on two occasions, and this analysis pools both waves of data, treating each observation on each respondent as a separate case. The individual respondent is then included as a separate panel (i.e., repeated group) variable to take into account variation caused by the same individual responding on two occasions. A mixed effects model was
used in which the respondent variable was specified as a random
effect, and all of the independent variables (knowledge, procedural
fairness, and predicted outcome) were specified as fixed effects.
Treating participant as a random effect assumes that our random
sample of the population can be used to make inferences about
people in general. Note that the random effect for individual
participant is controlled for in the analysis but not estimated
directly or reported.
Treating the independent variables as fixed effects controls for
measurement error and all stable characteristics of an individual
respondent that might affect responses on those items, including
characteristics that are not observed or measured. Thus, panel
survey data analyzed with fixed or mixed effects modeling shares
many properties and benefits with repeated measures designs in
experimental research. The mixed effects analysis yields standard
regression coefficients for the independent variables that are estimated directly and presented in Table 5.
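A minimal sketch of one way to specify this kind of random-intercept model in Python's statsmodels is shown below. It assumes a long-format data frame with one row per respondent per wave and hypothetical column names (respondent_id, wave, knowledge, fairness, prediction, policy_support); it illustrates the general structure rather than the article's exact estimation.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_support_model(panel: pd.DataFrame):
    """Random intercept per respondent; mean-centered predictors and their
    interactions with knowledge entered as fixed effects.
    Expects long-format data with (hypothetical) columns respondent_id, wave,
    knowledge, fairness, prediction, and policy_support."""
    d = panel.copy()
    for col in ("knowledge", "fairness", "prediction"):
        d[col + "_c"] = d[col] - d[col].mean()            # mean-center before forming products
    d["know_x_fair"] = d["knowledge_c"] * d["fairness_c"]
    d["know_x_pred"] = d["knowledge_c"] * d["prediction_c"]

    model = smf.mixedlm(
        "policy_support ~ wave + knowledge_c + fairness_c + prediction_c"
        " + know_x_fair + know_x_pred",
        data=d,
        groups=d["respondent_id"],   # respondent enters as a random effect (random intercept)
    )
    return model.fit(reml=False)     # ML estimation so nested steps can be compared

# result = fit_support_model(panel_df)   # panel_df: the two-wave survey data
# print(result.summary())
```

Fitting this model with and without the two interaction terms corresponds to the two-step, chi-square-change comparison reported next.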
The dependent variable (the policy support scale) was regressed on the independent variables in two steps. The time variable and three independent variables were entered first, followed by the focal interaction terms. The chi-square change was significant at each step, as was the full model, χ2(6, N = 640) = 115.8, p < .001. As shown in Table 5, procedural fairness (B = 0.08, SE = 0.03, p = .012) and predicted outcome (B = 0.22, SE = 0.03, p < .001) were positively related to support for the initiative. More important, the Knowledge × Fairness interaction coefficient (B = –0.10, SE = 0.04, p = .013) and the Knowledge × Prediction interaction coefficient (B = 0.10, SE = 0.05, p = .04) were statistically significant, in support of both hypotheses.6
The two-way interactions were probed using the method and associated Internet utility discussed in Preacher, Curran, and Bauer (2006), which calculates simple slopes for the effects of the procedural fairness and outcome prediction variables at specified values of knowledge as well as regions of significance tests. In this case, the simple slopes were analyzed at the mean level of knowledge and one standard deviation above and below the mean. The analysis of the interaction between knowledge and procedural fairness revealed that the effect of fairness on policy support is significant at the mean level of knowledge (B = 0.08, SE = 0.03, p = .013) but has the strongest effect when knowledge is a standard deviation or more below the mean (B = 0.14, SE = 0.04, p < .001) and is not significant when knowledge is one standard deviation above mean knowledge ratings (B = 0.01, SE = 0.04, p > .79). The interaction between knowledge and outcome prediction revealed that the effect of predicted outcomes on policy support is significant at the mean level of knowledge (B = 0.22, SE = 0.03, p < .001) but has the strongest effect when knowledge is a standard deviation or more above the mean (B = 0.29, SE = 0.05, p < .001) and becomes nonsignificant when knowledge is more than one standard deviation below mean knowledge ratings (B = 0.11, SE = 0.06, p = .06). The simple slope analysis supports the hypothesized pattern, such that as knowledge increases the effect of fairness on policy support decreases while the effect of outcome prediction increases.
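As an illustration of the simple-slope computation behind such estimates (Aiken & West, 1991; Preacher et al., 2006), the sketch below assumes the focal coefficient, the interaction coefficient, and their sampling variances and covariance are available from a fitted model; the variance and covariance inputs shown are placeholders, not values from the article.

```python
import numpy as np

def simple_slope(b_focal, b_inter, var_focal, var_inter, cov_fi, z):
    """Slope of the focal predictor, and its standard error, at moderator value z
    (moderator is mean-centered, so z = 0 is the mean and z = +/- SD is one SD away)."""
    slope = b_focal + b_inter * z
    se = np.sqrt(var_focal + 2 * z * cov_fi + z**2 * var_inter)
    return slope, se

# Example in the spirit of the Study 3 fairness effect (B = 0.08, Knowledge x Fairness
# B = -0.10), probed one SD below, at, and above mean knowledge (SD of knowledge = 0.66).
# The variance and covariance inputs are placeholder values, not figures from the article.
for z in (-0.66, 0.0, 0.66):
    slope, se = simple_slope(0.08, -0.10, 0.03**2, 0.04**2, 0.0, z)
    print(f"knowledge {z:+.2f}: slope = {slope:.2f}, SE = {se:.2f}")
```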
The regions of significance tests specify the entire range of the
values of the knowledge moderator for which the slopes of fairness
and predicted outcome are statistically significant. The region of
significance test indicated that fairness had a significant effect on
support when the mean-centered value of knowledge fell outside
of the positive range (i.e., values below zero), whereas outcome
prediction had a significant effect on support when the mean-centered value of knowledge fell outside of the negative range (i.e., values above zero). Because the mean-centered knowledge variable has four levels, ranging from –1.66 to 1.31 (low to high),
these results translate into fairness influencing people who rated
their knowledge as 1 or 2 (which are negative values when mean
centered), and outcome prediction influencing people who rated
their knowledge as 3 or 4 (which are positive values when mean
centered).
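The boundaries of such regions can be obtained from the same quantities by solving for the moderator values at which the simple slope's t statistic equals the critical value (the Johnson-Neyman approach implemented by the Preacher et al., 2006, utility). The sketch below uses placeholder variance and covariance inputs and does not reproduce the article's exact computation.

```python
import numpy as np

def jn_boundaries(b_focal, b_inter, var_focal, var_inter, cov_fi, t_crit=1.96):
    """Moderator values at which the simple slope's |t| equals t_crit
    (roots of a quadratic in the mean-centered moderator)."""
    a = b_inter**2 - t_crit**2 * var_inter
    b = 2 * (b_focal * b_inter - t_crit**2 * cov_fi)
    c = b_focal**2 - t_crit**2 * var_focal
    disc = b**2 - 4 * a * c
    if disc < 0:
        return ()                      # no real boundary within the moderator's range
    roots = np.roots([a, b, c])
    return tuple(sorted(roots.real))

# Placeholder inputs in the spirit of the Study 3 fairness slope; not the article's values.
print(jn_boundaries(b_focal=0.08, b_inter=-0.10, var_focal=0.03**2,
                    var_inter=0.04**2, cov_fi=0.0))
```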
Figure 3 graphically depicts the interactions by using the full
model from Step 2 of Table 5 (including the time control variable),
plugging in the mean and one standard deviation above and below
the mean for all variables. The figure clearly illustrates that the
effects of fairness on policy support are greater at lower levels of
knowledge (top panel), whereas the effects of predicted outcome
favorability are greater at higher levels of knowledge (bottom
panel).
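To illustrate how such panels can be generated, the short sketch below plugs the Step 2 coefficients from Table 5 into the regression equation at the mean and one standard deviation above and below the mean of the centered predictors, holding the remaining predictor at its mean; fixing the wave control at its midpoint is an assumption made only for this illustration.

```python
# Step 2 coefficients from Table 5 (constant, wave, knowledge, fairness, prediction,
# Knowledge x Fairness, Knowledge x Outcome prediction).
b0, b_wave, b_know, b_fair, b_pred, b_kf, b_kp = 3.16, 0.18, 0.10, 0.08, 0.22, -0.10, 0.10

sd_know, sd_fair = 0.66, 0.76    # standard deviations from Table 4

for know in (-sd_know, 0.0, sd_know):          # low, mean, high knowledge (centered)
    for fair in (-sd_fair, 0.0, sd_fair):      # low, mean, high fairness (centered)
        # Outcome prediction held at its mean (0), so b_pred and b_kp drop out here;
        # wave fixed at 1.5 (midpoint of the 1/2 coding) as a stand-in for the control.
        yhat = (b0 + b_wave * 1.5 + b_know * know + b_fair * fair
                + b_kf * know * fair)
        print(f"knowledge {know:+.2f}, fairness {fair:+.2f}: predicted support = {yhat:.2f}")
```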
Discussion
The panel survey results provide strong support for the hypotheses. Because the survey was administered in the field to a representative sample of the general population facing a real proposed policy initiative, the findings do not suffer from some of the common criticisms of experimental research, such as limited generalizability. Moreover, including two waves of panel data controls for measurement error and unobserved heterogeneity of individual respondents, rendering the results more stable and reliable than cross-sectional survey data. Thus, the current study adds additional support for the present theoretical account and adds external validity to the causal evidence of the first two experiments by showing that the findings hold across a range of settings, including a field setting with real and important consequences.

5 Researchers have argued that ordinal Likert scale variables with sufficient levels can be used in regression analyses without severe departures from normality that would alter Type I and Type II errors (Jaccard & Wan, 1996; Kim, 1975). Because the ordinal categorical response scales of the current dataset had only four levels, as a conservative test all analyses were rerun using the general linear and latent mixed models command in STATA specifying a link to the ordered logit function (Rabe-Hesketh, Skrondal, & Pickles, 2004). Results did not differ materially from the regression analysis reported here.

6 When all possible two-way interactions were included in the regression model for Study 3 as a test of the stability of the results, the Knowledge × Fairness interaction (B = –0.10, SE = 0.04, p = .014) and Knowledge × Predicted Outcome interaction (B = 0.10, SE = 0.05, p = .04) remained statistically significant and virtually unchanged, whereas the Fairness × Predicted Outcome interaction was far from significance (B = 0.004, SE = 0.04, p > .91).

Table 4
Means, Standard Deviations, and Correlations for Panel Data Set, Study 3

Variable                   M     SD     1      2      3      4      5
1. Support for policy      3.4   0.60   —
2. Subjective knowledge    2.7   0.66   .16*   —
3. Procedural fairness     3.0   0.76   .18*   .03    —
4. Outcome prediction      3.4   0.69   .31*   .11*   .17*   —
5. Time/wave               —     —      .17*   .01    .12*   .05    —

Note. * p < .05.
General Discussion
There are many instances in which individuals must react to a
proposed decision, change, or new initiative. Even though the
proposal might have uncertain consequences for those affected by
it, people must still determine whether they support the initiative.
Three studies using public policy contexts that included both
laboratory experiments and longitudinal field survey data provide
consistent evidence that the extent to which people rely on their
outcome predictions or their procedural fairness judgments depends on their subjective feelings of their own knowledge in a
particular domain. Procedural fairness influenced policy support
primarily for people experiencing ambiguity surrounding their
knowledge (see top of Figures 1, 2, and 3). In contrast, when
individuals feel relatively knowledgeable in the domain they are assessing, they seem to be more influenced by their predictions of how the consequences would affect them (see bottom of Figures 1, 2, and 3).
Table 5
Results of Mixed Effects Time Series Regression Predicting Policy Support, Study 3

Variable                           B        SE B
Step 1
  Constant                         3.17
  Time/wave                        0.17**   0.04
  Subjective knowledge             0.10**   0.04
  Procedural fairness              0.08*    0.03
  Outcome prediction               0.22**   0.03
Step 2
  Constant                         3.16
  Time/wave                        0.18**   0.04
  Subjective knowledge             0.10**   0.04
  Procedural fairness              0.08*    0.03
  Outcome prediction               0.22**   0.03
  Knowledge × Fairness            –0.10*    0.04
  Knowledge × Outcome prediction   0.10*    0.05

Note. χ2(4, N = 640) = 105.9 for Step 1; Δχ2(6, N = 640) = 9.1 for Step 2 (ps < .05).
* p < .05. ** p < .01.
Figure 3. Top panel: The interaction of knowledge and procedural fairness in Study 3. The x-axis plots the level of procedural fairness at the mean and one standard deviation (St Dev) above and below the mean, grouped by knowledge at the mean and one standard deviation above and below the mean. Bottom panel: The interaction of knowledge and outcome prediction in Study 3. The x-axis plots the level of predicted outcome at the mean and one standard deviation above and below the mean, grouped by knowledge at the mean and one standard deviation above and below the mean.

This article contributes to the understanding of the cognitive underpinnings of the fair process effect and provides a specific decision theoretic conceptualization of uncertainty that has not been studied in prior justice research. As noted earlier, the notion
of uncertainty has been conceptualized and interpreted quite
broadly in the justice literature, and future research building on
uncertainty management views must be clear about the source of
uncertainty being considered. Recently, Van den Bos and Lind (in
press) have distinguished between informational and personal
(self) uncertainty. Using this very general dichotomy, the subjective knowledge construct in the present article might be broadly
construed as a type of subjective informational uncertainty (that
might also have implications for personal uncertainty). But more
precisely, in the present article the uncertainty people experience
stems from having to decide a course of action (e.g., support for a
new initiative) on the basis of a prediction or forecast of consequences, taking into account how knowledgeable they feel when
making that prediction. Because we are unable to know the future,
a prediction of consequences entails some uncertainty, and the
subjective knowledge moderator is akin to the psychological experience of ambiguity as discussed in the judgment under uncertainty literature (e.g., Ellsberg, 1961; Fox & Weber, 2002; Heath
& Tversky, 1991).
The findings from the present investigation suggest that people
selectively attend to both procedural fairness and forecasted outcomes as a function of confidence in their knowledge. These
findings have implications for the literature on human judgment
under uncertainty and ambiguity, which has typically emphasized
people’s explicit consideration of forecasted outcomes and consequences (but see Rottenstreich & Kivetz, 2006) often to the exclusion of other factors. It seems that people rely on their own
predictions if they feel competent or certain about their knowledge,
but when they feel less knowledgeable they may be reliably drawn
to different, more social or procedural sources of information, such
as fairness of procedures or trust (or perhaps even moral heuristics,
e.g., see Sunstein, 2004).
It is worth noting that the dependent measure in the current
investigation, support for initiatives, is intentional or attitudinal,
rather than behavioral. Although this measure is commonly used in
the social justice literature and was thus chosen to yield comparable insights to past work, future research incorporating behavioral measures would add further intriguing and convincing evidence. Despite the limitation of not including a behavioral
dependent variable, the present investigation has other empirical
advantages gained by the uncommon multimethod approach using
both experimental and survey data, which demonstrates the robustness and generalizability of the hypothesized relationships across
both lab and field settings.
Global Versus Specific Procedural Fairness Judgments
In some of the studies reported here, the measure of perceived
fairness is a general overall, or global, procedural fairness judgment, rather than a judgment pertaining to the fairness of the
specific decision-making procedure in question. The use of global
fairness judgments to predict reactions follows a great deal of
empirical and theoretical work in fairness heuristic theory (e.g.,
Lind, 2001; Lind et al., 1993). For instance, Lind et al. (1993)
demonstrated that an overall procedural fairness heuristic judgment mediated the effects of specific process perceptions on outcome satisfaction. More recently, justice researchers have provided evidence that global justice judgments are predictive of
people’s reactions and attitudes beyond facet justice dimensions
(e.g., Ambrose & Schminke, in press; Holtz & Harold, 2007; Jones
& Martens, 2007). These findings support the use of global fairness judgments as reliable predictors of people’s support for new
initiatives.
The prominence of global fairness judgments in people’s minds
prompts the important empirical question of how such fairness heuristics are formed. Although we know a great deal about the
antecedents to specific procedural justice judgments within a defined decision context (see Colquitt et al., 2001), we know very
little about the formation of global fairness heuristic judgments
over time. It is conceivable that people form heuristic fairness
judgments composed of a host of past experiences, and additional
data from the panel survey (Study 3) provides some suggestive
evidence of this. In particular, one section of the survey instrument
asked respondents to answer questions about a recent experience
they had with a government official. In later sections of the instrument, participants were asked the items used in the Study
3 analyses that concerned their global fairness judgments of state
government, their outcome prediction, and their policy support.
Bivariate correlations revealed that participants’ past experiences
with procedures and outcomes were unrelated to predictions of
future outcomes or support for the policy. But nearly all past
experience variables affected the global procedural fairness judgment, which in turn affected policy support. The correlations
provide some modest evidence for the notion that past experiences
might mostly influence responses as filtered by a global fairness
heuristic. Future research should use repeated measures experimental designs to examine how these past experiences are weighed
in the formation of fairness heuristics, when such judgments are
updated by new information, and how consciously people draw on
this information.
Connections to Heuristic Processing Accounts
Prior work has examined the use of procedural fairness as a
substitute for missing information that is inherently social, such as
trustworthiness of an authority (Van den Bos et al., 1998) or social
comparison information regarding one’s own and others’ outcomes (Van den Bos et al., 1997). The findings here show that
fairness can be used to substitute for more cognitive information,
namely perceived subject matter knowledge. Weighing fairness
more heavily when one is missing information is consistent with
the concept of attribute substitution, the psychological process
proposed to underlie heuristic judgment in behavioral decision
research (Kahneman & Frederick, 2002). Attribute substitution
occurs when individuals attempt to assess one attribute by substituting a different heuristic property that comes to mind more
readily, a process that is presumed to occur nonconsciously. In
recalling past experiences, it is conceivable that people find procedural and interpersonal incidents more accessible, and such
heightened accessibility in memory would lead people to treat
those incidents as diagnostic of future events (Feldman & Lynch,
1988). It is possible that experiencing ambiguity surrounding one’s
knowledge limits the capacity for individuals to deliberately reason through the meaning and predicted consequences of a particular decision or initiative, thus shifting their attention to fairness or
other heuristics.
Alternatively, it may not necessarily be the case that people are
processing information superficially when they draw on their
process fairness judgments. Tiedens and Linton (2001), for example, found that individuals who experienced uncertainty induced
through affect tended to evaluate information more deeply relative
to those who were induced with certainty-associated emotions.
Because procedural justice is a multidimensional construct that
incorporates both the structural quality of the decision-making
system and some aspects of group inclusion (i.e., voice) and social
treatment (Blader & Tyler, 2003; Colquitt et al., 2001; Leventhal
et al., 1980), it might be viewed as a reasonably diagnostic cue that
actually signals systematic processing of available information.
This article can only generally characterize the current set of
findings as a cue substitution process. Future research should
attempt to tease apart these various underlying heuristic processing
accounts with respect to the current set of findings.
Relationship to Other Process and Outcome Effects
Because the current article is focused on situations in which
outcomes have not yet transpired, no hypothesis was made regarding whether an interaction between procedural fairness and predicted outcomes would occur and mimic the often replicated
interaction between procedural fairness and (known, or certain)
outcome judgments (Brockner & Wiesenfeld, 1996). Results of the full factorial in Table 3, however, indicate that there was no
significant interaction of fairness and predicted outcome and no
three-way interaction of fairness, prediction, and knowledge for
Study 2 (for findings regarding Studies 1 and 3, refer to Footnotes
4 and 6).
The absence of the process by outcome interaction in this case
might imply that different psychological processes are in play
when people are looking toward a forecasted outcome versus
making sense of an outcome that has already been realized. In
other words, when making sense of an outcome that is known,
individuals retrospectively look to both process and outcome and
weigh one more or less heavily depending on the level of the other,
as has been previously shown (Brockner & Wiesenfeld, 1996). But
when making sense of an outcome that is anticipated, the present
theoretical account uniquely argues that people assess their internal feeling of knowledge and weigh either process cues (as knowledge decreases) or outcome cues (as knowledge increases) more
heavily. Future research should attempt to address the full nature
of the interaction of process and outcome under various subjective
knowledge conditions in the context of a study that includes both
forecasted and realized outcomes.
The moderating effect of knowledge on both process and outcome evaluations demonstrated in this article resembles conceptually similar effects documented in other fields. In fields as
diverse as political science and consumer behavior, several studies
suggest that the moderating effect of subjective knowledge on
sensitivity to procedure versus outcome predictions might be a
more general phenomenon that (a) extends to procedural characteristics beyond justice and fairness and (b) holds in situations in
which people are making their own decisions. For example, in the
context of political decision making, Rahn, Aldrich, and Borgida
(1994) found that voters with less political sophistication (which
might in part reflect subjective knowledge) were more sensitive to
structural differences in the way candidates were presented to them
(a process characteristic). In a marketing context, Godek, Brown,
and Yates (2002) tested the prediction that aspects of a consumer
purchase decision process would affect overall consumption satisfaction. They found that process evaluations only affected satisfaction for consumers who reported low knowledge of the product.
The authors concluded that process evaluations do not have an
impact when consumers can reach a reasonable value assessment
of the selected product on their own and that it is only when
consumers have little basis for assessing value that they turn to
process evaluations. These findings suggest that the moderating
effect of subjective knowledge on sensitivity to procedure versus
predictions is a more general phenomenon that extends beyond
justice and fairness effects.
Furthermore, research on individual decision making has revealed that people experience more satisfaction with their own
decisions when they use a process that fits with their own regulatory focus, which is a value system or orientation to the world
(Higgins, 2000). Brockner (2003) noted that a process that fits
one’s regulatory focus might affect satisfaction with outcomes in a
manner that mimics the fair process effect, and hence procedural
fairness and fit might both be manifestations of a more general
phenomenon. From the perspective of the current article, one
might expect that individuals are most affected by value from fit
when they lack knowledge to assess the outcome. Future research
should attempt to verify whether knowledge moderates reliance on
process versus outcome judgments across many individual and
group decision contexts. The hope is that the current investigation
will prompt additional interdisciplinary research into these questions.
References
Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Newbury Park, CA: Sage.
Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting
social behavior. Englewood Cliffs, NJ: Prentice-Hall.
Ambrose, M. L., & Cropanzano, R. (2003). A longitudinal analysis of
organizational fairness: An examination of reactions to tenure and promotion decisions. Journal of Applied Psychology, 88, 266 –275.
Ambrose, M. L., & Schminke, M. (in press). The role of overall justice
judgments in organizational justice research: A test of mediation. Journal of Applied Psychology.
Bartels, L. M. (1996). Uninformed votes: Information effects in presidential elections. American Journal of Political Science, 40, 194 –230.
Bartels, L. M. (2005). Homer gets a tax cut: Inequality and public policy
in the American mind. Perspectives on Politics, 3, 15–31.
Blader, S. L., & Tyler, T. R. (2003). A four-component model of procedural justice: Defining the meaning of a “fair” process. Personality and
Social Psychology Bulletin, 29, 747–758.
Brockner, J. (2003, August). Casting the Outcome × Process interaction
under a wider nomological net. Paper presented at the annual meeting of
the Academy of Management, Seattle, WA.
Brockner, J., & Wiesenfeld, B. M. (1996). An integrative framework for
explaining reactions to decisions: Interactive effects of outcomes and
procedures. Psychological Bulletin, 120, 189 –208.
Chow, C. C., & Sarin, R. K. (2002). Known, unknown, and unknowable
uncertainties. Theory and Decision, 52, 127–138.
Colquitt, J. A. (2001). On the dimensionality of organizational justice: A
construct validation of a measure. Journal of Applied Psychology, 86,
386 – 400.
Colquitt, J. A., Conlon, D. E., Wesson, M. J., Porter, C. O. L. H., & Ng,
K. Y. (2001). Justice at the millennium: A meta-analytic review of 25
years of organizational justice research. Journal of Applied Psychology,
86, 425– 445.
Conover, P. J., & Feldman, S. (1989). Candidate perception in an ambiguous world: Campaigns, cues, and inference processes. American Journal of Political Science, 33, 912–940.
Eagly, A. H., & Kulesa, P. (1997). Attitudes, attitude structure, and
resistance to change: Implications for persuasion on environmental
issues. In M. H. Bazerman, D. M. Messick, A. E. Tenbrunsel, & K. A.
Wade-Benzoni (Eds.), Environment, ethics, and behavior (pp. 122–153).
San Francisco: Jossey-Bass.
Ellsberg, D. (1961). Risk, ambiguity, and the Savage axioms. Quarterly
Journal of Economics, 75, 643– 669.
Feldman, J. M., & Lynch, J. G. (1988). Self-generated validity and other
effects of measurement on belief, attitude, intention, and behavior.
Journal of Applied Psychology, 73, 421– 435.
Folger, R. (1977). Distributive and procedural justice: Combined impact of
“voice” and improvement of experienced inequity. Journal of Personality and Social Psychology, 35, 108 –119.
Folger, R. (1998). Fairness as a moral virtue. In M. Schminke (Ed.),
Managerial ethics: Moral management of people and processes (pp.
12–34). Mahwah, NJ: Erlbaum.
Folger, R., & Cropanzano, R. (1998). Organizational justice and human
resource management. Thousand Oaks, CA: Sage.
Fox, C. R., & Tversky, A. (1995). Ambiguity aversion and comparative
ignorance. Quarterly Journal of Economics, 110, 585– 603.
Fox, C. R., & Weber, M. (2002). Ambiguity aversion, comparative ignorance, and decision context. Organizational Behavior and Human Decision Processes, 88, 476 – 498.
Frisch, D., & Baron, J. (1988). Ambiguity and rationality. Journal of
Behavioral Decision Making, 1, 149 –257.
Godek, J., Brown, C. L., & Yates, J. F. (2002, November). Customization
decisions: The effects of task decomposition on process and product
evaluations. Paper presented at the annual meeting of the Society for
Judgment and Decision Making, Kansas City, MO.
Greenberg, J., & Folger, R. (1983). Procedural justice, participation, and
the fair process effect in groups and organizations. In P. B. Paulus (Ed.),
Basic group processes (pp. 235–256). New York: Springer-Verlag.
Heath, C., & Tversky, A. (1991). Preference and belief: Ambiguity and
competence in choice under uncertainty. Journal of Risk and Uncertainty, 4, 5–28.
Higgins, E. T. (2000). Making a good decision: Value from fit. American
Psychologist, 55, 1217–1230.
Holtz, B. C., & Harold, C. M. (2007, August). Global justice perceptions
as mediators of specific justice dimension effects. Paper presented at the
annual meeting of the Academy of Management, Philadelphia.
Hsiao, C. (1986). Analysis of panel data. New York: Cambridge University
Press.
Huo, Y. J., Smith, H., Tyler, T. R., & Lind, E. A. (1996). Superordinate
identification, subgroup identification, and justice concerns: Is separatism the problem, is assimilation the answer? Psychological Science, 7,
40 – 45.
Jaccard, J., & Wan, C. K. (1996). LISREL approaches to interaction effects
in multiple regression. Thousand Oaks, CA: Sage Publications.
Jones, D. A., & Martens, M. L. (2007). The mediating role of overall
fairness and the moderating role of trust certainty in justice-criteria
relationships: Testing fundamental tenets of fairness heuristic theory. In
G. T. Solomon (Ed.), Proceedings of the 67th annual meeting of the
Academy of Management [CD].
Kagan, J. (1972). Motives and development. Journal of Personality and
Social Psychology, 22, 51– 66.
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment. New York: Cambridge University Press.
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under
uncertainty: Heuristics and biases. Cambridge: Cambridge University
Press.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of
decision under risk. Econometrica, 4, 263–291.
Kim, J. O. (1975). Multivariate analysis of ordinal variables. American
Journal of Sociology, 81, 261–298.
Knight, F. H. (1921). Risk, uncertainty, and profit. Boston: Houghton Mifflin.
Kunreuther, H., Meszaros, J., Hogarth, R. M., & Spranca, M. (1995).
Ambiguity and underwriter decision processes. Journal of Economic
Behavior and Organization, 26, 337–352.
Leventhal, G. S., Karuza, J., & Fry, W. R. (1980). Beyond fairness: A theory of allocation preferences. In G. Mikula (Ed.), Justice and social interaction (pp. 167–218). New York: Springer-Verlag.
Lind, E. A. (2001). Fairness heuristic theory: Justice judgments as pivotal
cognitions in organizational relations. In J. Greenberg & R. Cropanzano
(Eds.), Advances in organizational behavior (pp. 56 – 88). San Francisco:
New Lexington Press.
Lind, E. A., Kray, L., & Thompson, L. (2001). Primacy effects in justice
judgments: Testing predictions from fairness heuristic theory. Organizational Behavior and Human Decision Processes, 85, 189 –210.
Lind, E. A., Kulik, C. T., Ambrose, M., & de Vera Park, M. V. (1993).
Individual and corporate dispute resolution: Using procedural fairness as
a decision heuristic. Administrative Science Quarterly, 38, 224 –251.
Lind, E. A., & Tyler, T. R. (1988). The social psychology of procedural
justice. New York: Plenum Press.
Lind, E. A., & Van den Bos, K. (2002). When fairness works: Toward a
general theory of uncertainty management. In B. M. Staw & R. M.
Kramer (Eds.), Research in organizational behavior (Vol. 24, pp. 181–223). Greenwich, CT: JAI Press.
Preacher, K. J., Curran, P. J., & Bauer, D. J. (2006). Computational tools
for probing interactions in multiple linear regression, multilevel modeling, and latent curve analysis. Journal of Educational and Behavioral
Statistics, 31, 437– 448.
Rabe-Hesketh, S., Skrondal, A., & Pickles, A. (2004). Generalized multilevel structural equation modeling. Psychometrika, 69(2), 167–190.
Rahn, W. M., Aldrich, J. H., & Borgida, E. (1994). Individual and contextual variations in political candidate appraisal. American Political
Science Review, 88, 193–199.
Rottenstreich, Y., & Kivetz, R. (2006). On decision making without
likelihood judgment. Organizational Behavior and Human Decision
Processes, 101, 74 – 88.
Savage, L. J. (1954). The foundations of statistics. New York: Wiley.
Schroth, H. A., & Shah, P. P. (2000). Procedures: Do we really want to
know them? An examination of the effects of procedural justice on
self-esteem. Journal of Applied Psychology, 85, 462– 471.
Sniderman, P. M., Brody, R., & Tetlock, P. E. (1991). Reasoning about
politics: Explorations in political psychology. Cambridge: Cambridge
University Press.
Sunstein, C. R. (2004). Moral heuristics and moral framing. Minnesota
Law Review, 88, 1556 –1597.
Thibault, J., & Walker, L. (1975). Procedural justice: A psychological
analysis. Hillsdale, NJ: Erlbaum.
Tiedens, L. Z., & Linton, S. (2001). Judgment under emotional certainty
and uncertainty: The effects of specific emotions on information processing. Journal of Personality and Social Psychology, 81, 973–988.
Tyler, T. R., & Lind, E. A. (1992). A relational model of authority in
groups. In M. Zanna (Ed.), Advances in experimental social psychology
(Vol. 25, pp. 115–191). New York: Academic Press.
Van den Bos, K. (2001). Uncertainty management: The influence of
uncertainty salience on reactions to perceived procedural fairness. Journal of Personality and Social Psychology, 80, 931–941.
Van den Bos, K., & Lind, E. A. (2002). Uncertainty management by means
of fairness judgments. Advances in experimental social psychology (Vol.
34, pp. 1– 60). New York: Academic Press.
Van den Bos, K., & Lind, E. A. (in press). The social psychology of
fairness and the regulation of personal uncertainty. In R. M. Arkin, K. C.
Oleson, & P. J. Carroll (Eds.), The uncertain self: A handbook of
perspectives from social and personality psychology. Mahwah, NJ:
Erlbaum.
Van den Bos, K., Lind, E. A., Vermunt, R., & Wilke, H. (1997). How do
I judge my outcomes when I don’t know the outcomes of others? The
psychology of the fair process effect. Journal of Personality and Social
Psychology, 72, 1034 –1046.
Van den Bos, K., Wilke, H. A. M., & Lind, E. A. (1998). When do we need
procedural fairness? The role of trust in authority. Journal of Personality
and Social Psychology, 75, 1449 –1458.
von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton, NJ: Princeton University Press.
Received December 30, 2007
Revision received May 11, 2008
Accepted May 16, 2008