IMAGE THEORY: AN EXPERIMENTAL STUDY OF THE EFFECT OF FEEDBACK ON
DECISION MAKING
Erin N. Gerbec
A Dissertation
Submitted to the Graduate College of Bowling Green
State University in partial fulfillment of
the requirements for the degree of
DOCTOR OF PHILOSOPHY
May 2012
Committee:
Milton D. Hakel, Advisor
William E. Knight
Graduate Faculty Representative
William K. Balzer
Dryw O. Dworsky
ABSTRACT
Milton D. Hakel, Advisor
Image theory claims that one’s images, namely one’s values, goals, and plans to achieve
those goals, are key aspects in decision making. Decision options that are incongruent with one
or more of the images can lead to a “shock” to the status quo, resulting in abandoning the goal,
revising the goal, or changing the strategy to attain the goal. However, the characteristics of the
feedback that cause this shock have not been evaluated independently within the decision making
scenario. The goal of the current study is to better understand how people respond to positive
and negative feedback through the principles of image theory. To test the effect of feedback, 434
students set a performance goal on a number task and reported their plans to achieve the goal,
their goal-setting strategy, and how they valued goal performance. Subjects received randomly
assigned false feedback at positive, moderately negative, or extremely negative levels after
completing the number task. Then, they indicated their plans, goal-setting strategy, and goal
value for a second chance to meet their goal on the number task. Results indicated that those in
the negative feedback conditions experienced the feedback as a “shock” at a significantly higher
rate than those in the positive feedback condition. However, participants in the extremely
negative feedback condition were not more likely than those in the other two conditions to
change their plans to achieve their goal, goal-setting strategy, or value of the goal. Overall, the
results suggest that feedback valence plays a significant role in how one responds to adversity in
meeting a goal and can produce a shock. This study provides a significant first step toward
understanding the role of feedback in shocks and image violations.
This dissertation is dedicated to my parents and grandparents for teaching me the importance of
hard work and education.
ACKNOWLEDGMENTS
I am grateful to Milt Hakel, my dissertation advisor, for his unwavering support,
guidance, and mentorship throughout both my dissertation and graduate education. I would also
like to thank my dissertation committee members, William Balzer, Dryw Dworsky, and William
Knight, for their helpful feedback on this study. I am grateful to Carissa Wott and Dev Dalal for
allowing me to collect data in their classes. I am also grateful to my husband, Dan Gerbec, and
my parents, Gary and Joanne, for their continuous support and encouragement.
TABLE OF CONTENTS

INTRODUCTION
    Review of Image Theory Research
    Research on Shocks and Image Theory
    Research on Feedback Acceptance
    Current Study
METHOD
    Participants and Design
    Measures and Manipulations
    Procedure
RESULTS
    Data Cleaning
    Data Analysis
DISCUSSION
    Applications to Image Theory Research
    Limitations
    Conclusions
REFERENCES
APPENDIX A. STIMULUS MATERIALS
APPENDIX B. PARTICIPANT DEBRIEFING LETTER
APPENDIX C. INFORMED CONSENT STATEMENT AND HUMAN SUBJECTS REVIEW BOARD APPROVAL LETTER
LIST OF FIGURES/TABLES

Figure/Table
1   Descriptive Statistics and Pearson Correlations for Variables at Time 1 and Time 2
2   Mean Goal Percentiles of Participants in Different Feedback Conditions
3   Analysis of Variance Summary for Time 1 Number Reduction Task Accuracy Percentage by Feedback Condition
4   Analysis of Variance Summary for Time 1 Number Reduction Task Time (s Per Item) by Feedback Condition
5   Analysis of Variance Summary for Manipulation Check of Speed by Feedback Condition
6   Analysis of Variance Summary for Manipulation Check of Accuracy by Feedback Condition
7   Analysis of Variance Summary for Manipulation Check of Letter vs. Number by Feedback Condition
8   Analysis of Variance Summary for Expectedness by Feedback Condition
9   Analysis of Variance Summary for Positivity by Feedback Condition
10  Analysis of Variance Summary for Personal Issues by Feedback Condition
11  Analysis of Variance Summary for Task Issues by Feedback Condition
12  Analysis of Variance Summary for Accuracy Forecast by Feedback Condition
13  Analysis of Variance Summary for Speed Forecast by Feedback Condition
14  Analysis of Variance Summary for Percentile Forecast by Feedback Condition
15  Descriptive Statistics for Goal Percentiles, Accuracy, and Speed Variables across Time by Feedback Condition
16  Mixed Measures Analysis of Variance Summary for Tests of Within-Subjects Effects of Goal Percentile
17  Mixed Measures Analysis of Variance Summary for Tests of Within-Subjects Effects of Accuracy Confidence
18  Mixed Measures Analysis of Variance Summary for Tests of Within-Subjects Effects of Accuracy Comparison
19  Mixed Measures Analysis of Variance Summary for Tests of Within-Subjects Effects of Speed Confidence
20  Mixed Measures Analysis of Variance Summary for Tests of Within-Subjects Effects of Speed Comparison
21  Analysis of Variance Summary for Time 1 Goal Percentile by Feedback Condition
22  Analysis of Variance Summary for Time 1 Accuracy Confidence by Feedback Condition
23  Analysis of Variance Summary for Time 1 Accuracy Comparison by Feedback Condition
24  Analysis of Variance Summary for Time 1 Speed Confidence by Feedback Condition
25  Analysis of Variance Summary for Time 1 Speed Comparison by Feedback Condition
26  Analysis of Variance Summary for Time 2 Goal Percentile by Feedback Condition
27  Analysis of Variance Summary for Time 2 Accuracy Confidence by Feedback Condition
28  Analysis of Variance Summary for Time 2 Accuracy Comparison by Feedback Condition
29  Analysis of Variance Summary for Time 2 Speed Confidence by Feedback Condition
30  Analysis of Variance Summary for Time 2 Speed Comparison by Feedback Condition
31  McNemar's Chi-Square Test for Correlated Proportions for Plans by Feedback Conditions
32  McNemar's Chi-Square Test for Correlated Proportions for Goals by Feedback Conditions
33  McNemar's Chi-Square Test for Correlated Proportions for Values by Feedback Conditions
34  McNemar's Chi-Square Test for Correlated Proportions for Most Important Plan, Goal, or Value by Feedback Conditions
35  Image Theory Model Divided into Two Types of Decisions
INTRODUCTION
Image theory is a decision making model intended to represent actual decision making
processes. Image theory started as a theorized alternative explanation to popular economic
decision models, such as linear weight-and-add approaches and utility analyses, and is a viable
theory to describe one’s automatic screening process when there are multiple options from which
to choose. Probabilistic decision making has been explained through many different models,
such as a Bayesian interpretation to account for changing conditional probabilities, or
Brunswik’s probabilistic functionalism that provides an analytic framework for describing the
behavioral decisions needed to achieve a target goal in a probabilistic environment. Image
theory differentiates itself as a “second generation behavioral decision theory” (Beach &
Connolly, 2005, p. 108) by describing how decision makers actually make decisions in response
to dynamic environmental factors, such as organizational constraints and others’ preferences, and
variable internal factors, such as modified professional goals or personal life changes. In this way,
image theory is sensitive to the flow of events over time and may be more dynamic than earlier
probabilistic decision making theories. Therefore, image theory provides the focus for this
investigation.
Research on image theory has been conducted in many applied psychology fields and
now spans the entire range of employment decisions, including applicant attraction (e.g., Ehrhart
& Ziegert, 2005), job choice (e.g., Ryan, Sacco, McFarland, & Kriska, 2000), promotion (e.g.,
Pesta, Kass, & Dunegan, 2005), employee turnover (e.g., Lee & Mitchell, 1994), and retirement
(e.g., Feldman, 1994). It has also been examined in the context of consumer marketing related to
pro-environment attitudes and behaviors (Nelson, 2004) and help-seeking behaviors to improve
future performance in one’s role (Smith, 2009).
Image theory states that a decision maker is influenced by three images, that is, one’s
values, goals, and plans to achieve the goals. Beach (1997) named these the value image,
trajectory image, and strategy image, respectively. The central tenet of image theory is that
people compare decision alternatives to these images when making a decision, and alternatives
that violate any of these images are automatically screened out. Based on image theory research,
researchers have developed prediction models of different behavioral paths one might take given
certain image violations (e.g., the unfolding model of employee turnover, Lee & Mitchell, 1994)
and have hypothesized what other psychological predictors will result in strong value, trajectory,
and strategy images (Nelson, 2004). Even with such varied and rigorous research applying
concepts of image theory to real-life situations, key components of image theory have not yet
been evaluated in experimental studies. Therefore, the causes of documented changes in value,
trajectory, and strategy images in previous research are still unknown and need to be studied in
true experiments to better understand image theory.
The catalyst to initiate a decision making process in many previous studies has been
feedback. Any event that can disrupt one’s status quo and initiate a change in value, trajectory, or
strategy images is a psychological shock to the system (Lee & Mitchell, 1994), and performance
feedback can be a key aspect of these shocks. However, in studies of image theory, feedback has
not been studied in isolation in a laboratory. Therefore, it is still unknown whether feedback
itself is causing the shocks, or if another agent is acting on one’s decision making processes. The
present study is a laboratory experiment that manipulates feedback, designed to investigate the
consequences of feedback on image theory constructs.
Review of Image Theory Research
Image theory is a descriptive decision making theory intended to explain decision making
as it actually happens. It is based on individuals’ decision making strategies observed in both
laboratory and applied settings (for a review, see Beach, 1996, 1997). Image theory is so named
because a decision maker’s choices are hypothesized to be largely determined by three distinct,
although interdependent, factors: value images, trajectory images, and strategy images (Beach &
Mitchell, 1998). Beach and Mitchell argue that the value image is the primary influence on
decisions. The value image is a reflection of the decision maker’s ethics, morals, and principles.
When faced with a decision, the decision maker views these values as the underlying motivation
for how a person should act in the given situation, or the decision that ought to be made by a
person with the same morals and ethics (Beach & Mitchell, 1998). Image theory does not assume
that people always make ethical choices, and, due to its nebulous boundaries, the value image of
a morally flexible decision maker can quickly change. However, Beach and Mitchell state that
large shifts in a person’s value image are uncommon and a person’s principles that influence
decision making are usually consistent over time. The choice one makes from all of the possible
alternatives, and ultimately the behavior which expresses that decision, is motivated by finding
compatibility between one’s values and behaviors.
Image theory’s second and third images, trajectory and strategy, are closely related. The
trajectory image portrays the decision maker’s goals, and the decision maker’s strategy image
encompasses the plan to achieve the goals. The scope of the decision maker’s goals can vary
widely, from short-term task completion of preparing and delivering a successful presentation to
clients, to long-term and complex goals of being promoted to the vice president level while still
maintaining one’s desired work-life balance. Goals are represented by the trajectory image
because it conveys how one views the future and the eventual end-state of one's efforts (Beach &
Connolly, 2005).
If a decision maker has shown commitment to reaching a goal, a plan or strategy is
needed to reach that goal. The strategy image has tactical aspects, including concrete, observable
behaviors with a direct connection between the goal and explicit plans that are believed to
achieve the goal (Beach & Connolly, 2005). Strategy images also have forecast aspects in which
the decision maker predicts success or failure of the goal, taking into account both the planned
tactical behaviors thought to be sufficient to meet the goal and the environmental conditions that
might enable or counteract it (Beach & Connolly, 2005). Plans to achieve multiple goals must
not interfere with one another at the tactical or forecasting levels so as not to reduce the chances
of attaining a goal; trajectory images rely on quality strategy images. Weak plans likewise
reduce the likelihood that the goal will be met. Therefore, trajectory images and strategy images
are necessarily related, as trajectory images cannot exist without the strategy image.
Shocks are another important aspect of image theory. A shock is a jarring event
experienced by the employee that can lead to an image violation, such as news that the
organization has been secretly polluting the environment. This may violate the employee’s
values of working for an environmentally-friendly organization and may result in the employee
deciding to leave the organization voluntarily. An event is only a shock if the individual
experiences it as a jarring event that disrupts the momentum of one's plans to meet a goal, or the
status quo that one believes to be true. Therefore, the same event or knowledge of new
information may be a shock to one employee and not to another. Shocks occur after images are
well-defined, so more will be said about shocks after describing the process to set goals and
achieve them.
In image theory, a decision maker’s goals and the plans to achieve them are affected by
two kinds of decisions: adoption decisions and progress decisions. As shown in Figure 1,
adoption decisions are the decision maker’s choices to set additional goals to the trajectory image
and to add new plans to the strategy image (Beach & Connolly, 2005) and are heavily influenced
by the compatibility with the decision maker’s principles, or the value image. Progress decisions
are assessments of the compatibility between one’s goal and the plan to achieve it, or the
trajectory and strategy images, and occur in the middle of the goal attainment timeline. The
middle of the goal timeline is defined as the period between goal implementation and goal
attainment. This compatibility assessment is an evaluation of whether or not the present plan will
achieve the goal if its implementation is continued, or if the plan is no longer perceived to be
effective. If it is determined that the plan will no longer meet the goal, the plan must be revised
or replaced with a different plan, or the goal must be revised or discarded (Beach & Mitchell,
1998). Failure to revise a plan when it is perceived that a goal will not be met will often result in
abandoning the goal (Beach & Mitchell, 1998).
Progress decisions are an example of what Beach & Mitchell (1998) termed the
“compatibility test” because decisions are driven by “fit” with one’s standards. This fit is the
perceived compatibility among the decision alternatives and one’s values, goals, and plans to
achieve the goals. A decision maker evaluates the various options and ultimately decides whether
those alternatives will help meet the goal by screening in good options that meet one’s decision
criteria and screening out options that violate one of the images.
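
For illustration only, the screening logic described above can be expressed as a short sketch. The data structure and function names below are hypothetical and simply restate the rule that an alternative violating any image is screened out; image theory is a descriptive theory, not an algorithm, so this is a paraphrase rather than an implementation.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """A decision alternative and the images it would violate (hypothetical structure)."""
    name: str
    violates_value: bool = False
    violates_trajectory: bool = False
    violates_strategy: bool = False

def compatibility_screen(options):
    """Screen out any alternative that violates one or more images (value, trajectory, strategy)."""
    survivors = []
    for opt in options:
        violations = [opt.violates_value, opt.violates_trajectory, opt.violates_strategy]
        if not any(violations):          # no image violated -> option passes the compatibility test
            survivors.append(opt)
    return survivors

# Example: only the option compatible with all three images survives screening.
choices = [Option("keep current plan"),
           Option("cut corners", violates_value=True),
           Option("abandon the goal", violates_trajectory=True)]
print([o.name for o in compatibility_screen(choices)])   # ['keep current plan']
```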
As mentioned above, progress decisions are important in determining if the plan will
achieve the goal. Progress decisions occur after a plan has been implemented but before the goal
has been achieved, as the decision maker forecasts whether the existing plan (strategy image)
will meet the goal (trajectory image). In a series of laboratory studies, Dunegan and colleagues
(Dunegan, 1995; Dunegan, Duchon, & Ashmos, 1995) showed that the decision maker’s
perception of the compatibility between the goal and the forecast to meet the goal is critical to
progress decisions. In a 1995 study, Dunegan found that when both the goal and the forecast of
meeting the goal were viewed as compatible, subjects were more likely to continue with the
given plan and to commit new resources to it. The effect of the compatibility between the goal
and the forecast to meet the goal was found to be above and beyond the effect of feedback
(positively or negatively framed) or one’s dissatisfaction with performance, suggesting that
compatibility between the goal and the forecast can predict, on the average, whether one will
revise or abandon a given plan, or reenergize it with added resources and effort. In a second
study, Dunegan, Duchon, and Ashmos (1995) found that the degree of incompatibility between
the goal and the forecast to meet the goal in a progress decision functions as feedback in its own right. This
feedback has been shown to motivate the decision maker to make changes to the plan, put forth
greater effort, or find other ways to make progress toward the goal (Benson & Beach, 1996).
Together, these studies by Dunegan and colleagues support image theory’s tenet that
compatibility tests between goals (trajectory images) and plans to achieve the goals (strategy
images) are key aspects of the decision making process. Research by Schepers & Beach (1998)
supports this by finding that low compatibility between the goal and the forecast, which is
inherent in progress decisions, increases the likelihood of changing a behavior to make progress
toward that goal.
Together, these laboratory studies support the importance and further examination of
compatibility tests when studying progress decisions and the decision maker’s values, goals, and
plans to achieve the goals. Research consistently supports that progress decisions are heavily
influenced by compatibility tests. Perceptions of compatibility have been affected by discrepancy
magnitude between the goal and one’s forecasted likelihood of meeting the goal, which is
informed by the feedback received before the progress decision. In progress decisions,
compatibility between the goal (trajectory image) and perceived ability to achieve that goal
through the plan (strategy image) strongly influences whether people continue with the stated
plan, commit more resources to it, revise the plan, or abandon the goal (Beach & Mitchell, 1998).
These research findings on progress decisions within image theory are important to the
present study. The progress decision made is largely based on the perceived compatibility
between the goal and one’s current forecast of attaining the goal given the plan’s progress at that
point. Feedback about one’s performance before completing a task or between a series of trials
can provide a realistic forecast of future performance and suggest to the decision maker whether
or not it is likely the goal will be met as planned. Using the explicit relationships of image theory,
feedback can be examined more closely to investigate its effect on a decision maker’s images
(value, trajectory, and strategy) before ultimately affecting a behavioral decision. Because
progress decisions are based on one’s perceived compatibility between the goal and the
forecasted goal attainment, and the compatibility is driven by current performance feedback, it is
expected that performance feedback can initiate a progress decision to continue with the plan,
revise the plan (e.g., commit more resources), or abandon the goal altogether. To date, no known
image theory studies have examined the feedback one receives in a progress decision as a
predictor of the decision to continue, revise, or abandon one’s goals or plans to achieve them. An
experimental manipulation is necessary to understand whether a shock and its image violation
can be created with feedback, leading to a progress decision that elicits a change in goals or plans.
Image theory recognizes that a single event can disrupt one’s psychological view of the
goal and one’s stated plans to achieve it. This event is a “shock” (Lee & Mitchell, 1994). Shocks
can take on many characteristics: positive, neutral, or negative; unexpected or expected; and
internal or external to the decision maker (Lee, Mitchell, Wise, & Fireman, 1996; Lee, Mitchell,
Holtom, McDaniel, & Hill, 1999). Shocks are thought to be discrete, meaning shocks are
defined by the presence or absence of a jarring event that leads to comparisons against the status
quo, not the extremity of the event or its consequences. Seminal decision making research on
shocks utilized image theory constructs to explain employee turnover in what Lee and Mitchell
(1994) called the unfolding model of voluntary employee turnover. Many theories of turnover
were heavily influenced by economic models and utility approaches to predict whether an
employee would leave the organization. Lee and Mitchell (1994) applied image theory constructs
to the turnover paradigm to better explain why employees were actually leaving the organization
instead of focusing on the forecast of prediction models. In their unfolding model, these
unexpected and jarring events, or shocks, such as being passed over for promotion, activate a
comparison between the current state and the ideal future state. If the decision maker determines
that the ideal future state is not attainable, the decision maker engages in a preexisting behavioral
plan called a script (Lee & Mitchell, 1994). Behavioral scripts, such as leaving the organization
or starting an aggressive search for a new job, are not outlined in image theory, but are key
components in explaining reactions to shocks within the unfolding model of employee turnover
(Lee & Mitchell, 1994). If a script does not exist, the shock violates the decision maker’s values,
goals, or plans to achieve the goals, which may be resolved by abandoning the goal (the ideal
future state). Abandoning the goal may mean changing one’s value image (e.g., “I don’t care if I
advance my career right now”), trajectory image (e.g., “I will work to be a project leader instead
of manager”), or strategy image (e.g., “I will avoid situations that show my weaknesses instead
of finding situations to show my strengths”). Voluntary turnover decisions have been shown to
fit into the four theorized decision paths of the unfolding model (see Lee &
Mitchell, 1994), with Paths 1, 2, and 3 being initiated by a shock, and no shock in Path 4. Key
classification criteria for each of the four decision paths include the presence of an experienced
shock, the presence of an existing behavioral script when the shock is experienced, and whether
the shock leads to an image violation of one of the value, trajectory, or strategy images (Lee et
al., 1999). Classification of the quitting decision into the four paths has been found at rates as
high as 92.6% among employees who left an organization (Lee et al., 1999).
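
For readers who prefer to see these classification criteria laid out procedurally, the following sketch restates them in simplified form. It is an illustrative paraphrase of the criteria just listed (experienced shock, preexisting script, image violation), not an implementation of Lee and Mitchell's model; the return labels are mine, and Paths 2 and 3 are collapsed into a single branch because they differ only in whether alternatives are evaluated before leaving.

```python
def classify_turnover_path(shock: bool, script: bool, image_violation: bool) -> str:
    """Rough classification into the unfolding model's four decision paths.

    Simplified from the criteria described in the text (Lee et al., 1999):
    Paths 1-3 are initiated by a shock; Path 4 involves no shock.
    """
    if not shock:
        return "Path 4: no shock; turnover follows accumulated dissatisfaction"
    if script:
        return "Path 1: shock matches a preexisting behavioral script; the script is enacted"
    if image_violation:
        # Paths 2 and 3 both involve a shock and an image violation; they differ
        # in whether alternatives or job offers are evaluated before leaving.
        return "Path 2 or 3: shock violates an image; leave with or without evaluating alternatives"
    return "Shock experienced but no script or image violation; no turnover path initiated"

print(classify_turnover_path(shock=True, script=False, image_violation=True))
```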
Shocks can be positive (e.g., winning the lottery), neutral (e.g., a spouse’s unanticipated
job offer), or negative (e.g., being passed over for promotion), but research has shown that
negative shocks are more likely to result in an avoidable quitting decision (Morrell, Loan-Clarke,
& Wilkinson, 2004). Avoidable or controllable quitting decisions, such as quitting because of a
work-related incident without another job offer in hand, are based on a taxonomy of avoidable
and unavoidable turnover by Dalton, Krackhardt, and Porter (1981). The distinction between
avoidable and unavoidable leavers on a variety of turnover variables was empirically supported
by Abelson (1987), and then applied to shocks in the study by Morrell et al. (2004). Shocks are
believed to jar “an employee toward deliberate judgments about his/her job and may lead the
employee to voluntarily quit” (Holtom, Mitchell, Lee, & Inderrieden, 2005, p. 341). Holtom and
colleagues analyzed exit interview data from employees leaving organizations and
dichotomously coded whether a shock was experienced, such as an unsolicited job offer, a clash
with a coworker over business ethics, or a merger, then categorized the experienced shocks to
investigate the unfolding model of employee turnover. The results show that the majority of
those who experienced an image violation and left the organization without another job offer,
which means they followed the second path of the unfolding model, did so shortly after
experiencing an unexpected (89%) and negative (58%) shock. This group, however, only
accounted for 9% (n = 19) of the total sample. The majority of participants in the exit interviews,
83% (n = 169), were categorized in the third path of the unfolding model. These participants
reported experiencing a shock, such as an unsolicited job offer from a different company or the
announcement of a company merger, which led to an image violation that initiated a comparison
between the current position and new alternatives. These employees evaluated the options and
were more likely to secure an offer before leaving the organization. Of these participants, 91% (n
= 153) reported experiencing an unexpected shock, but the reported valence of the shock was
equally positive (47%) and negative (45%). Shocks can also be expected, such as being laid off
several months after a company merger, and neutral (e.g., a spouse’s unanticipated job offer), but
these shock characteristics are rarely endorsed in the second and third paths of the unfolding
model, in which a shock and an image violation lead to employee turnover (Holtom et al., 2005).
Due to the low base rate of expected shocks (9% to 11%) and neutral shocks (5% to 8%)
in Holtom et al.’s (2005) data, more attention will be given to unexpected shocks that are both
positive and negative.
At the aggregate level in Holtom et al.’s (2005) data, unexpected and negative shocks are
not the most prevalent characteristics reported. Unexpected shocks are reported in only 20% of
all the cases (n = 94), and negative shocks are reported in 16% (n = 77). However, unexpected
shocks are the overwhelming majority reported (90.4%, n = 170) when a decision maker
experiences an image violation and is categorized in the second or third path of the unfolding
model. Image violations occur when an employee’s values, goals, and strategies to achieve the
goals no longer align with their company or with their personal situation. Examples of image
violations may include conflicting business ethics (value image), a new company policy to focus
on exclusively promoting from outside of the organization (trajectory image), or a new company
policy of mandatory overtime that competes with one’s plans to spend more time with family
(strategy image). Therefore, shocks, both positive and negative, are fundamental to studying
image theory. The strong relationship between unexpected shocks and image violations suggests
that one may be able to predict the behavioral reaction an employee will have when an
unexpected shock is experienced. Knowledge of these relationships may be beneficial to
organizations so the probability of the employee’s decision to quit can be predicted and then
minimized. Within the extant image theory literature, no known studies to date have examined
the likelihood of one’s behavioral reaction to an unexpected shock, and to do so would require an
experimental manipulation of the shock. This shock would come through performance feedback,
or by introducing an ecologically valid specific event to the participants. Positive and negative
valence is reported nearly equally when an unexpected shock is also reported, so any
manipulation of the unexpected shock would need to include both positive and negative feedback
to evaluate valence’s role in predicting a behavioral reaction to the unexpected shock.
Research on Shocks and Image Theory
Two notable studies examined shocks, or constructs similar to shocks, and will be
described in greater detail because they help inform the research questions for the current study.
First, Campion and Lord (1982) conceptualized the dynamic goal-setting process through a
control systems model of motivation and feedback. They tested this model in a longitudinal (one
semester long) study of college students and measured students’ grade goals, scholastic ability,
effort, and performance to determine the effect of test feedback on the students’ academic
performance and studying decisions. Namely, they measured students’ future grade goals,
perceived effort spent preparing for the previous test, hours studied for the previous test, and
expected effort on the next test. Results showed that student goals, behaviors, and strategies
changed in response to the exam feedback as the semester unfolded. Large discrepancies
between the goal grade and actual grade led to increased effort, which the authors suggested was
caused by poor performance that created a discrepancy signal between one’s goals and actual
performance. Overall, negative feedback on test scores was more likely to result in students
lowering their long-term goals to be in line with actual performance, and consecutive failures in
meeting their test goals were more likely to lead to reduced course goals. Negative feedback,
however, had little effect on short-term goals for the next test.
Campion and Lord’s (1982) study conceptually tested many image theory components,
though it was “not [their] intention to investigate decision-making processes” (p. 284). The
control theory model they tested is based on goals, behaviors, and strategies, which translates to
image theory’s designated trajectory images and strategy images. Students’ course grade goals
were the trajectory image, and feedback received after every exam prompted a progress decision
to continue with the existing plan, modify the plan or course grade goal, or abandon the course
goal altogether. Also, Campion and Lord’s discrepancy signal caused by poor performance
parallels image theory’s “shock” terminology and elicited a behavioral response, including
increased effort and altering one’s stated goals. When Campion and Lord’s results are interpreted
through image theory terminology, it could be stated that discrepancy signals between expected
and actual exam grades were experienced as a shock, and that a large discrepancy led to a change
in the plans to achieve the goal (i.e., increasing effort) that were portrayed in the strategy image.
Progress decisions later in the semester, however, resulted in a decision to modify the trajectory
image (i.e., the course grade goal) because it was no longer reasonable to forecast high
performance on subsequent trials.
A similar study explicitly used image theory to examine a question related to Campion
and Lord's (1982) longitudinal study of college students' goal-setting in response to feedback,
and found similar results. Smith (2009) examined an unfolding model of help-seeking across the
academic semester. College students responded to behavioral measures of help-seeking to
determine if more help was sought out in response to negative exam scores. Students also
responded to image theory questions regarding the most recent exam, stating whether the exam
score was expected or unexpected, and positive, neutral, or negative. Data from students who
classified their exam score as unexpected and negative were analyzed separately to determine the
effect on subsequent grade goals for upcoming exams. Much like data from Campion and Lord,
results showed that those who reported being unexpectedly and negatively shocked by their first
exam score did not significantly lower their goals for the upcoming exam to be in line with past
performance, even though the exam average was two letter grades lower than the average of the
stated exam goals. Performance on the second exam improved slightly for those saying they
were shocked by the first exam, but exam goals for the final exam again did not decrease to
match their lower performance. Over time, however, the course goal grade was modified to be in
line with actual performance in the class. Unlike Campion and Lord’s results, students in the
Smith study did not show more effort on the next exam, as measured by help-seeking, when
shocked by previous exam performance. Shocked students did not seek out more help to meet
their goal than they reported seeking earlier in the semester, even though across all students,
help-seeking increased as the semester progressed in response to an increased perceived need for
help.
The studies by Campion and Lord (1982) and Smith (2009) describe students’ reactions
to shocking performance feedback throughout a semester as they work toward their course goals.
Every student made a progress decision when feedback was received in which the student
evaluated his or her goals and forecasted whether it was reasonable to attain them. This led to
reenergizing resources toward an existing plan, modifying the plan or goal, or abandoning the
goal altogether. In both studies, unexpected and negative shocks, or discrepancy signals of poor
performance, led to changes in one or more images, such as the goal (trajectory image) or plan to
achieve the goal (strategy image), especially later in the semester.
Exam feedback produced measurable changes in students’ decisions, yet each study’s
non-experimental methodology limits our understanding of feedback in the decision process. The
exam feedback is necessarily tied to a student’s actual performance on the exam, so this feedback
cannot be analyzed independently to understand the role of feedback in the progress decision
process. Actual performance in the classroom or other tasks used to study feedback and shocks
may covary with other variables that affect how someone responds to feedback. Also, little
attention is given in the studies by Campion and Lord (1982) or Smith (2009) to explain shocks
or discrepancy signals caused by positive performance, or any resulting change in goals or effort
that occurred in response to positive feedback. Therefore, despite two similar studies that
measured image theory constructs with similar results, the role of feedback in this process is still
unknown. In addition, it is unknown whether feedback in a realistic condition could produce
predictable and consistent behavioral changes across participant responses. An experimental
manipulation of feedback is necessary to better understand the relationships between feedback,
goals, and plans to achieve those goals as they occur in a progress decision.
Based on a review of the image theory research on employee turnover and students’
classroom performance, it is believed that experimental research in this area will need to follow
two guidelines. First, the incentive for good performance must be standardized across all
participants. Without this standardization, a decision maker’s response to feedback, such as
whether a student changes study plans for the next exam, will necessarily covary with external
standards of success (e.g., pass or fail). Individuals will still hold their own standards of success,
but the external incentive for good performance must be held constant to remove this confound.
Participants may still set their own goals, but the reward level should be set at a strict cutoff to
motivate everyone to the same level of performance. This approach will eliminate some of the
variability in participant-set goals and the difficulty of defining “good” performance. Second, the
feedback must be bogus and randomly assigned from two or more levels of positive and negative
feedback. Randomly assigning the feedback and manipulating the valence allows researchers to
study feedback in isolation and measure how the trajectory and strategy images change in
response to the feedback, without being confounded by environmental or personal factors. When
the reward for good performance is set to a specific level and feedback is manipulated, the
feedback can be studied to determine its role on changing values, goals, and plans to achieve the
goals, especially when unexpected and negative feedback is received.
Because randomly assigned false feedback may be easily detected, it is imperative to
design this experimental research to maximize feedback acceptance and the apparent ecological
validity of the feedback. The next section briefly and selectively summarizes the feedback
literature to identify key aspects of feedback that lead to feedback acceptance and to hypothesize
the effect of feedback on progress decisions as it relates to image theory decision making.
Research on Feedback Acceptance
Research on feedback interventions has shown inconsistent effects on performance,
despite a near universal acceptance among most feedback intervention researchers that these
interventions regularly improve performance (Kluger & DeNisi, 1996). Results of Kluger and
DeNisi’s meta-analysis on feedback intervention effects showed that, on the average, feedback
interventions improve performance by nearly half a standard deviation (d = .41), but with large
variability. In approximately one third of cases, feedback interventions actually reduced
performance rather than improving it, and in many other cases had no effect on performance.
Inclusion in the meta-analysis was not restricted to a certain type of feedback intervention,
though the primary studies must have reported relevant feedback variables, such as the presence
of a treatment group, a control group, and objective performance measures. Therefore, this
meta-analysis is believed to be a meaningful representation of the feedback intervention literature
and applicable to the present study. There is a large body of literature on the feedback–performance
relationship, but Kluger and DeNisi’s meta-analysis shows that there is still much to understand
about the effect of feedback on performance. One can extrapolate from these results that a
decision maker’s psychological reaction to that feedback is also difficult to predict, which limits
the predictions one can make about a decision maker’s performance that results from the
behavioral choices of that psychological reaction.
Kluger and DeNisi (1996) also contributed to the feedback intervention literature by proposing
four strategies to eliminate the gap between feedback received and personal or externally set
standards for performance. The four strategies represent a self-regulation process to accept the
feedback and align future performance with the performance expectations, and have many
parallels to image theory, specifically compatibility tests and progress decisions. Kluger and
DeNisi’s first strategy to close the gap between the feedback received and the performance
standards is to set a new goal to improve behavior. Similarly, Beach and Mitchell (1998) stated
that progress decisions are based on compatibility between a goal and the forecast for success,
with larger gaps increasing the likelihood of abandoning the goal and setting a new trajectory
image. The second strategy is to abandon the performance standard when the gap between
performance and standard is not likely to shrink through a behavior change. Again, this is a stated
consequence of incompatibility in progress decisions: along with setting a new goal, one
may abandon the original goal altogether and alter the trajectory image (Beach & Mitchell, 1998).
Kluger and DeNisi’s third strategy is to change the performance standard to something more
attainable, which in Beach and Mitchell’s words, is to revise the goal that is deemed
incompatible with forecasted performance during a progress decision. The fourth strategy to
shrink the gap between the feedback and performance standard is to reject the feedback message,
which is more likely to happen if the feedback is negative rather than positive. Ilgen, Fisher, and
Taylor (1979), in their summary of behavioral consequences of individual feedback, identified
the sign of the feedback, whether positive or negative, to be an important determining factor in
how feedback was perceived. Research has shown that negative feedback is generally perceived
less accurately than positive feedback (Ilgen, 1971; Ilgen & Hamstra, 1972), which may lead to
the negative feedback being denied as inaccurate or perceived as not reflective of the real
progress being made toward the goal. Beach and Mitchell do not suggest an aspect of image
theory that parallels rejecting feedback during a progress decision. However, it seems reasonable
that a decision maker may reject negative feedback information in a progress decision if
motivated to persistently believe that the plan will work, and will forecast successful goal
attainment even when there is feedback to the contrary.
To predict how a decision maker will react to feedback in a progress decision, it is
helpful to begin with the aspects of feedback that have been shown to have the greatest impact
on feedback recipients’ behaviors. Ilgen et al.’s (1979) feedback process model identifies the
effects of feedback on recipients and describes four stages in the feedback process: perceived
feedback, acceptance of feedback, desire to respond to feedback, and the intended response.
Each stage is further broken down to meaningful components that affect the feedback recipient’s
response to the feedback. Common themes at every stage of their feedback model include the
source of the feedback, the feedback message, and individual characteristics of the feedback
recipient that ultimately influence the final behavioral response to the feedback. Specifically,
credibility of the source and the perceived power of the feedback source over the feedback
recipient, such as a supervisor with the authority to fire subordinates based on poor performance,
are common themes across multiple stages. As stated above, research has also shown that the
sign of the feedback message, whether positive or negative, often affects whether or not the
feedback recipient will perceive the feedback as it is intended and to accept the feedback as a
true reflection of one’s performance (Ilgen, 1971; Ilgen & Hamstra, 1972). Kluger & DeNisi’s
(1996) meta-analysis on feedback interventions, however, showed no moderating effect for
feedback sign on the feedback-performance relationship.
According to these guidelines from the feedback literature, an experimental manipulation
of bogus feedback must be concerned with participants’ perceptions of the credibility of the
feedback source. If a typical high performer receives bogus negative feedback, he or she will be
less likely to believe the feedback is genuine, which may affect the resulting progress decision.
Negative feedback, especially, is more susceptible than positive feedback to being
denied as inaccurate (Ilgen, 1971; Ilgen & Hamstra, 1972), so false negative feedback must be
relayed in a way that appears to be a genuine reflection of one’s performance.
A goal of this research is to better understand how decision making processes change in
relation to feedback under the specifics of image theory. Image theory does not specify the
relative strength or flexibility of the three images when faced with a progress decision. That is, it
is unknown if value, trajectory, and strategy images are equally considered in the decision
making process when responding to feedback, or if one image (value, trajectory, or strategy) is
more susceptible to change when a psychological shock is experienced. Hence, that is one
exploratory question addressed in this research. It is also unknown if negative feedback alone
can trigger a shock, or if positive feedback can alter one’s images and influence the decision
maker’s progress decision. These and other questions will be addressed in the present study.
Current Study
In the current study, the goal is to better understand through the tenets of image theory
how people respond to positive and negative feedback. Feedback has been identified as a cause
for psychological shocks that can disrupt one’s status quo and initiate a change in one’s images,
but the aspects of feedback that are driving the shock are still unknown. This study uses an
experimental laboratory setting (with random assignment) in which participants receive false
negative or false positive feedback to examine the role of feedback in experienced shocks.
Previous research on feedback acceptance suggests several strategies to increase the
participants’ acceptance and internalization of the feedback results, which is important in this
study because the false feedback received is randomly assigned and in no way related to one’s
actual performance on the task. The task chosen is a number reduction task (adapted from Bell,
Gardner, & Woltz, 1997), in which specific rules are applied to reduce a four-digit string of
numbers to a single digit. Because this task exclusively uses numbers in the question and as the
answer, participants with a history of poor performance in math and numbers may doubt
their ability to perform well on the task, even before trying. Mikulincer (1994) found that people
learn to accept their bad performance as unavoidable when repeated negative feedback follows
performance, no matter their efforts to fix it. This leads to a reduction in task effort and lower
performance on subsequent tasks. To combat this learned helplessness of participants with prior
failure in math or numbers, the Number Reduction Task instructions will include a statement that
there is no known relationship between math skill and the task. The statement emphasizes that
those who do well in math have been known to fail the task, and those who perform poorly in
math have scored very high. To do well on the task, one simply has to follow the rules, not be
good in math. These statements about the task’s relationship to math ability are not supported by
research or any other anecdotal evidence, but instead are included to assuage participants’ fear of
failure in a math-related task if they have a past history of failure or struggles in math or
numbers-related activities. A manipulation check will be conducted to confirm the effectiveness
of this statement.
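
To give a concrete sense of the task format, the sketch below reduces a four-digit string to a single digit by repeatedly applying a rule to the leftmost pair of digits. The modulo-10 rule shown is a hypothetical stand-in used only to illustrate the structure of the task; the actual rules used in the study, adapted from Bell, Gardner, and Woltz (1997), are not reproduced in this section.

```python
def reduce_string(digits: str) -> str:
    """Reduce a four-digit string to a single digit by repeatedly applying a rule
    to the leftmost pair of digits.

    The rule below ((a + b) mod 10) is hypothetical; the study used rules adapted
    from Bell, Gardner, and Woltz (1997), which are not specified in this section.
    """
    while len(digits) > 1:
        a, b = int(digits[0]), int(digits[1])
        digits = str((a + b) % 10) + digits[2:]   # replace the leftmost pair with the rule's result
    return digits

# Each item has exactly one correct answer, as noted in the text.
print(reduce_string("4729"))   # -> "2"  (4+7 -> 1, 1+2 -> 3, 3+9 -> 2)
```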
Another strategy to increase participants’ feedback acceptance is in how the feedback is
reported. Studies by Ilgen and colleagues showed that satisfaction with performance is based on
comparing the feedback to a norm or reference group and the deviation from prior expectations,
such as a preset goal before beginning a task (Ilgen, 1971; Ilgen & Hamstra, 1972). Therefore,
performance will be reported in percentiles and is said to be calculated from student data on the
same task during a previous semester. Participants will be told that both accuracy and speed on
the task are combined to produce one score, which is then converted into a percentile ranking of
all student data on the task. This strategy is beneficial in two ways. First, pilot testing of the
Number Reduction Task (n=37) resulted in 56.7% (n=21) of respondents completing the task
with at least 95% accuracy within the 30 minutes provided. Because there is only one right
answer to each four-digit string of numbers in the task, high-performing participants who are
very confident in their accuracy and randomly assigned to receive negative feedback may be
suspicious of the feedback and guess the main research questions of the study. Reporting
performance as a combination of accuracy and speed on the task, therefore, is expected to
produce a feedback percentile that is more believable to participants than reporting performance
as accuracy alone. Participants who are highly accurate on the task are expected to recognize that
other highly accurate responders may have been faster, which would drive down comparison
percentiles for the target participant. Second, by stating that the percentiles are based on student
data from a previous semester, a legitimate peer comparison is created for the participant. This is
believed to increase the value they place in the percentile feedback as a meaningful comparison
to similar others, and in turn to increase feedback acceptance. Again, manipulation checks will
be conducted to assess the believability of the percentile feedback.
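
As an illustration of this cover story only, the sketch below combines accuracy and speed into a single composite and expresses it as a percentile of a reference sample. The equal weighting and standardization are assumptions made for the example, and the reference data are invented; in the study the reported percentile was randomly assigned rather than computed from performance.

```python
import numpy as np

def composite_percentile(accuracy, seconds_per_item, ref_accuracy, ref_seconds):
    """Illustrative composite score: standardized accuracy plus standardized speed
    (faster is better), reported as a percentile of a reference sample.

    The equal weighting is an assumption; the study only described the score to
    participants as accuracy and speed combined, and the feedback itself was false.
    """
    ref_speed = -np.asarray(ref_seconds)                     # negate so higher = faster
    ref_comp = ((ref_accuracy - np.mean(ref_accuracy)) / np.std(ref_accuracy)
                + (ref_speed - np.mean(ref_speed)) / np.std(ref_speed))
    own_comp = ((accuracy - np.mean(ref_accuracy)) / np.std(ref_accuracy)
                + (-seconds_per_item - np.mean(ref_speed)) / np.std(ref_speed))
    return 100.0 * np.mean(ref_comp < own_comp)              # percent of reference scores below

# Hypothetical reference data standing in for the "previous semester" sample
rng = np.random.default_rng(0)
ref_acc = rng.uniform(0.6, 1.0, 200)
ref_sec = rng.uniform(5, 30, 200)
print(round(composite_percentile(0.95, 12.0, ref_acc, ref_sec), 1))
```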
The present study was designed to investigate the effects of feedback on a decision
maker’s strategy to meet a pre-set goal. This takes place in what image theory describes as a
progress decision – a time at which feedback for current performance is available and a decision
has to be made whether to continue with the stated plan, or to abandon or revise the plan because
it is believed not to meet the goal in a satisfactory manner. To study how different levels of
feedback affect whether or not a decision maker experiences a shock, feedback will be given at
three levels: extremely negative feedback, moderately negative feedback, and positive feedback.
Previous image theory research on shocks suggests that experiencing a large discrepancy
between the expected feedback and actual feedback during a progress decision is more likely to
result in a shock. Therefore, the first hypothesis states:
Hypothesis 1: Those receiving extremely negative feedback will experience shocks at a
higher frequency than those receiving moderately negative or positive
feedback.
Image theory research also has shown that those who experience a negative shock will
abandon the current strategy and adopt a new strategy at a higher rate than those not
experiencing a negative shock (e.g., Morrell et al., 2004). Therefore, the second hypothesis states:
Hypothesis 2: Those receiving extremely negative feedback will change their plans to
achieve their goal from the first trial to the second trial at a higher rate
than those receiving moderately negative or positive feedback.
These hypotheses have not been previously tested in a true experiment. Additional
exploratory research questions for the proposed study include:
(1) Does extremely negative feedback on a task lead to a goal change on the next trial?
(2) Does positive feedback on a task reduce strategy changes on the next trial?
(3) Do moderately negative feedback and extremely negative feedback produce distinctly
different goal changes and strategy changes for the next trial?
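
As a minimal illustration of how Hypothesis 1 could be tested, assuming each participant's experience of a shock is coded dichotomously, a chi-square test of independence can be run on the 3 (feedback condition) x 2 (shock) table. The counts below are hypothetical placeholders, not study results, and the analysis reported in the Results section may differ.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of participants reporting a shock (yes, no) in each condition;
# the real frequencies appear in the Results section, not here.
table = [[60, 85],   # extremely negative feedback (n = 145)
         [45, 92],   # moderately negative feedback (n = 137)
         [15, 137]]  # positive feedback (n = 152)
chi2, p, df, expected = chi2_contingency(table)
print(f"chi2({df}) = {chi2:.2f}, p = {p:.4f}")
```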
METHOD
Participants and Design
Participants were 434 undergraduate psychology students who took part in
exchange for course credit. These students were recruited from select psychology classes as well
as the psychology department research subject pool. The sample was mostly female (66.3%)
with an average age of M = 19.2 years (SD = 1.83).
Participants were randomly assigned to receive false feedback at one of three levels. Each
level was chosen to represent a category of feedback – extremely negative, somewhat or
moderately negative, and positive. However, the valence attached by the recipient of the
feedback at any percentile varies widely. For some, performance at the 97th percentile may
represent failure. For others, performance at the 25th percentile may be a success. The valence
distribution for negative, neutral, and positive performance in this population is unknown.
Therefore, percentile values that represent the extremely negative, moderately negative, and
positive feedback conditions are widely distributed across the range and anchored around a
definition of “success” at the 51st percentile. Participants’ valences of the feedback were
assessed to determine if the feedback percentile was interpreted as intended.
Participants were randomly assigned to receive extremely negative feedback
(performance at the 28th percentile), moderately negative feedback (performance at the 45th
percentile), or positive feedback (performance at the 73rd percentile), regardless of actual
performance on the task. The final sample yielded balanced random assignment to each of the
three conditions, with n=145 receiving extremely negative feedback, n=137 receiving
moderately negative feedback, and n=152 receiving positive feedback.
Power analyses were conducted to examine this study’s potential to detect small to
medium effect sizes with the given sample size of n=434. Statistical significance was set to α
= .05 and power was set to 0.95. The sample size of n=434 was found to be sufficiently sensitive
to detect small to medium effect sizes in each of the core planned data analysis techniques,
including chi-square “goodness of fit” tests (effect size w = 0.173; small to medium), one-way
ANOVA (effect size f = 0.189; small to medium), and mixed measures ANOVA with a within-between interaction (effect size f = 0.095; small).
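To illustrate the sensitivity analysis described above, the sketch below computes the minimum detectable effect sizes with the statsmodels power module. The use of statsmodels (rather than a dedicated power program such as G*Power, which was presumably used for the original calculations) and the df = 1 specification for the chi-square test are assumptions for illustration only.

```python
from statsmodels.stats.power import FTestAnovaPower, GofChisquarePower

N, ALPHA, POWER = 434, 0.05, 0.95

# Minimum detectable Cohen's w for a chi-square test with df = 1 (n_bins = 2);
# this yields a value near the reported w = 0.173.
w = GofChisquarePower().solve_power(effect_size=None, nobs=N,
                                    alpha=ALPHA, power=POWER, n_bins=2)

# Minimum detectable Cohen's f for a one-way ANOVA with three groups;
# this yields a value near the reported f = 0.189.
f = FTestAnovaPower().solve_power(effect_size=None, nobs=N,
                                  alpha=ALPHA, power=POWER, k_groups=3)

print(f"w = {w:.3f}, f = {f:.3f}")
```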
Measures and Manipulations
Motivation manipulation
Participants were told at the beginning of the study that those who scored in the top 50%
of all participants who completed the task, or between the 51st and 99th percentiles, would be
entered into a raffle for one of four $50 gas gift cards. The 51st percentile was chosen as the
lower boundary of success because it was believed that nearly all participants would see it as an
attainable goal for which to strive. Likewise, the monetary amount was expected to be a sizable
prize and therefore motivating to participants to strive to meet the goal of scoring in the top 50%
of participants. Participants were initially told that those who did not score above the 50th percentile would not be rewarded for their effort beyond the usual course credit. This was intended to motivate
performance on the first trial of the task.
After false feedback was given on the first trial, all participants were provided with an
incentive to perform well on the second trial: those in the extremely negative (28th percentile)
and moderately negative (45th percentile) feedback conditions would have a second opportunity
to qualify for the raffle, and those in the positive feedback (73rd percentile) condition could
double their chances of winning a gift card by earning a second entry into the raffle. For
participants in the negative feedback conditions, maintaining the reward level at the 51st
percentile may have seemed too large of a gap to surmount in only one additional trial, thereby
decreasing motivation to try for a goal that was perceived to be impossible. This may be
especially true for those in the extremely negative feedback condition who believed they
performed at the 28th percentile. Therefore, to qualify for the raffle during the second trial, the
goal percentile was self-set by the participant and had to be equal to or greater than the
percentile score earned during the first trial. For example, participants in the extremely negative
feedback condition were told that their performance was at the 28th percentile. These participants
could qualify for the gift card raffle in the second trial of the task by setting and meeting their
new goal at or above the 28th percentile.
Participants who received false positive feedback were told that they qualified for the gift
card raffle and were being rewarded with an opportunity to earn a second chance in the raffle due
to their high performance. Like the negative conditions, the positive condition must have also set
and met a new goal that was at or above their feedback percentile on the first trial to win another
raffle entry. In the positive condition, this value was at or above the 73rd percentile. This was a
more difficult goal compared to the first trial’s experimenter-set 51st percentile and was expected
to motivate those who already believed they were high performers to give their best effort on the
second trial, while not rewarding them for allowing their percentile to slip between the 51st and
72nd percentiles.
Requiring all participants to set their own goal based on their believed performance on
the first trial and not an arbitrary cutoff at the 51st percentile was believed to motivate participant
effort through the second trial of the task. Those who set a goal substantially higher than their
feedback percentile did so voluntarily, and those who set a goal below their previous feedback
percentile did so knowing that they would not be rewarded with a chance in the raffle for
meeting their lower goal. Thus, setting the goal for the second trial serves as a manipulation
check (did the participant attend to the rules?) and as a dependent variable.
Task
A number reduction task (adapted from Bell et al., 1997) was used in this study. In the
task, specific rules are applied to reduce a four-digit string of numbers to a single digit. Working
left to right, each pair of consecutive digits is reduced to a single digit by following a rule, and the result is then combined with the remaining digits to repeat the process. For each four-digit string of
numbers, there is only one correct answer. The four rules are: the same rule (if two digits are the
same, the answer is the same digit); the contiguous rule (if two digits begin an ascending or
descending series, the answer is the next digit in the series); the midpoint rule (if two digits differ
by two, the answer is the digit midway between them); and the last rule (if two digits differ by
more than two, the answer is the latter of the two digits). Any combination of the four rules is
permitted. For example, the four-digit number string 5435 would first be reduced to 335, because
the first two numbers, (54), reduce to 3 by applying the contiguous rule. Then, 335 is reduced to
35, because the first two numbers, (33), equal 3 by applying the same rule. Finally, 35 is reduced
to 4, because the two numbers (35), reduce to 4 by applying the midpoint rule. Therefore, the
final answer is 4. In the task, rules and examples were presented first, followed by a set of 16
two-digit number combinations for training the rules. Then, in the first trial, participants answered a set of 30 four-digit number strings. In the second trial, 20 unique four-digit number
strings were used. Every two-digit and four-digit number combination had only one correct
single digit answer.
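As an illustration of how the four rules combine, the short Python sketch below (not part of the original study materials, which were administered through an online survey) reduces a digit string from left to right and reproduces the worked examples above.

```python
def reduce_pair(a: int, b: int) -> int:
    """Apply the Number Reduction Task rules to one pair of digits."""
    if a == b:              # same rule: identical digits keep that digit
        return a
    if abs(a - b) == 1:     # contiguous rule: continue the ascending/descending series
        return b + (b - a)
    if abs(a - b) == 2:     # midpoint rule: the digit midway between the two
        return (a + b) // 2
    return b                # last rule: digits differing by more than two keep the latter

def reduce_string(digits: str) -> int:
    """Reduce a digit string to a single digit, working left to right.

    Task items are constructed so that every reduction yields a single digit.
    """
    result = int(digits[0])
    for d in digits[1:]:
        result = reduce_pair(result, int(d))
    return result

# Worked examples from the text
assert reduce_string("5435") == 4   # contiguous, same, midpoint
assert reduce_string("7244") == 5   # last, midpoint, contiguous
assert reduce_string("3347") == 6   # same, contiguous, midpoint
```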
The set of 16 two-digit number combinations included four training items for each of the
four rules. Participants saw the four rules at the top of the page and were told to practice the rules
on the 16 pairs of numbers by typing in the correct digit next to each two-digit item. Participants
were also told that the practice section would not count toward the percentile score, and to
memorize the rules as they went so they could work as quickly as possible. The next survey page
showed the correct answer for each of the 16 items and listed the rule that solved the item, but
did not score the participant’s responses on the previous page against the answer key.
In keeping with the task details outlined in Bell et al. (1997), each four-digit string of
numbers used in the task had three non-repeating rules. For example, to correctly solve the item
7244 one must use the last, midpoint, and contiguous rules to reduce the number string to 5,
whereas 3347 uses the same, contiguous, and midpoint rules to correctly reduce the number
string to 6. There are twenty-four combinations of three non-repeating rules. To control for item
difficulty, the 30 items in the first trial and 20 items in the second trial were randomly sampled
from a larger database that equally represented all twenty-four possible combinations of the non-repeating rules.
A separate sample of thirty-seven (n=37) participants completed a pilot test of the
Number Reduction Task. Participants were told that the Number Reduction Task was a 60-item
measure of mental processing speed and to work as quickly as possible while striving for 100%
accuracy. The pilot test was administered as a paper and pencil task with 20 two-digit training
items, and then 32 four-digit practice items, followed by the “official” 60 four-digit items. The
maximum time allowed to complete the pilot test was 30 minutes; ninety-two percent (n=34)
finished in the time allowed.
While working to solve a four-digit item in the pilot test, many participants wrote down
each reduced digit when a rule was applied until a final answer was produced, even though every
item can be solved by holding the three rules’ digit reductions in working memory before writing
down the final answer. The online survey administration used in this study was necessary to
provide immediate and false feedback. However, it also required participants to hold in working
memory each rule application’s reduced digit to combine with the next digit and apply the next
rule. Removing the option to write down each rule’s answer before a final answer is reached may
increase the time it takes to complete each item for some participants, thereby increasing the total
time to complete the survey. Some participants may not finish the study within the expected 60
minutes, or may omit responses for questions asked later in the survey. Therefore, the procedure was revised from the pilot version: the four-digit practice items were eliminated, and the 60-item task was reduced to 30 items for the first trial. Twenty unique items were used in the
second trial.
Pilot test participants who completed the paper and pencil version of the task in under 30
minutes had an average accuracy of 79% correct (SD = 31%), with scores ranging from a low of
11% accuracy to a high of 100% accuracy (n=9). Participants in the pilot group completed the
task in an average of 18.4 minutes (SD = 4.8 minutes), with completion times ranging from 7.5
to 30 minutes, and 8% (n=3) of participants failing to complete the task in the 30 minutes
allowed. The wide ranges of accuracy and speed suggest that performance on the Number
Reduction Task is likely to produce sufficient variance between subjects, whereby calculating a
percentile that is relative to other students and based on both accuracy and speed is believably
representative of actual between-student percentile comparisons.
Feedback manipulation
Participants received false feedback about their percentile ranking on the Number
Reduction Task compared to other students who had completed the task, independent of their
actual performance. Values that approximate the first, second, and third quartiles of the
percentile distribution from 1 to 99 and that could be generally interpreted as extremely negative,
moderately negative, and positive feedback values were chosen to cover the wide range of the
valence distribution. Again, those who were randomly assigned to receive extremely negative
feedback were told that their performance was in the 28th percentile, which disqualified them for
the raffle. Those randomly assigned to receive moderately negative feedback were told that their
performance was in the 45th percentile, also disqualifying them for the raffle. In the third
condition, participants randomly assigned to receive positive feedback were told that their
performance was in the 73rd percentile, which is a qualifying score for the gift card raffle. The
gap between the false feedback percentile and the qualifying score is standardized for both
extremely negative and positive feedback conditions; a 28th percentile performance is 23 points
away from qualifying for the gift card at the 51st percentile, and a 73rd percentile performance is
23 points away from disqualifying for the gift card at the 50th percentile. Though this difference
is mathematically standardized, it is unknown if values at the 28th and 73rd percentiles are
perceived to be psychologically equidistant from the experimenter-set criterion at the 51st
percentile. The moderately negative feedback condition at the 45th percentile was selected to fall
below the gift card eligibility criterion of the 51st percentile, but near enough to the
experimenter-set goal to motivate high effort on subsequent trials. Participants’ open-ended
comments after receiving the false feedback suggest the feedback levels were generally
interpreted as they were intended. Participants were never told their true performance on the task
during the experiment.
Participants used their false feedback percentile from the first trial of the task to set a new
performance goal on the second trial. In the second trial, participants could set any goal, but were
told they would only earn a gift card raffle entry if their second trial percentile score was equal to
or higher than the percentile score they believed they earned (the 28th, 45th, or 73rd) on the first
trial.
Shock characteristics
Participants’ reaction to the feedback was assessed with image theory questions
previously used to determine if a shock occurred (Lee et al., 1999) and adapted to the feedback
context. Feedback can only be classified as a shock if the decision maker experienced it as a
jarring event that led to comparisons between the current and ideal future states. Therefore, after
the false feedback was delivered, participants responded “Yes,” “No,” or “Unsure” to the
question: “When you received your score at the [28th / 45th / 73rd] percentile, did you think that
your plans to meet your goal were the reason for your score?” Responding “yes” indicated that
the feedback itself may have been experienced as a shock. Participants also responded “Yes,”
“No,” or “Unsure” to the question: “You will have one more chance to meet your goal on this
task to qualify for the raffle. Does your score at the [28th / 45th / 73rd] percentile make you think
that your plans to meet your goal will affect your score on the next trial?” Responding “yes” to
the second question indicated the feedback caused an image violation, which was expected to
lead to changes in the stated plan for the second trial. Each question was followed by an open-ended text box for participants to explain their response. An additional open-ended question
stated, “Please briefly describe your reaction to your percentile score.” The open-ended format
provided qualitative description to better understand how the participant reacted to the feedback
received.
To further describe the reaction to the feedback, participants responded on a 5-point
Likert scale to the following questions: “To what extent was your percentile score negative or
positive? (1 – negative; 2 – somewhat negative; 3 – neither negative nor positive; 4 – somewhat
positive; 5 – positive),” and “To what extent was your percentile score unexpected or expected?
(1 – totally unexpected; 2 – somewhat unexpected; 3 – neither unexpected nor expected; 4 –
somewhat expected; 5 – totally expected).” Also, the extent to which personal issues (e.g., skill
at solving number puzzles) and task issues (e.g., unclear instructions) were believed to influence
performance was measured by having participants respond on a 5-point Likert scale to the
following questions: “To what extent does your percentile score reflect personal issues?” and
“To what extent does your percentile score reflect task issues? (1 – not at all; 2 – somewhat; 3 –
moderately; 4 – significantly; 5 – completely).” Each question was followed by an open-ended
space for participants to elaborate on their response.
Participants also predicted their future success on the task by responding to how they
believed they would perform if given another chance to perform the task immediately, without
reviewing the rules for accuracy or practicing to be faster. Participants indicated their expected
percentile relative to their peers, and then answered two questions regarding accuracy and speed.
The questions were: “With respect to accuracy, would your responses be: More accurate; The
same percentage of accuracy as before; Less accurate,” and “With respect to speed, would your
responses be: Faster; The same speed as before; Slower.” Responses to these questions were
intended to forecast one’s confidence in improving on the next trial.
Dependent measures
The primary dependent measures in this study were the participants’ stated goals on the
task and plans to achieve those goals. Participants stated their performance goal in percentiles
(1st to 99th), then responded on a 5-point Likert scale to the following two questions: “How
confident are you that you can complete this task very accurately? (1 – not at all confident; 2 –
somewhat confident; 3 – moderately confident; 4 – very confident; 5 – completely confident)”
and “How confident are you that you can complete this task very quickly? (1 – not at all
confident; 2 – somewhat confident; 3 – moderately confident; 4 – very confident; 5 – completely
confident).” Participants also made comparisons to others for both accuracy and speed,
responding with comparative percentiles on a 7-point Likert scale (1 – Bottom 5%; 2 – Bottom
20%; 3 – Bottom 35%; 4 – Middle 50%; 5 – Top 35%; 6 – Top 20%; and 7 – Top 5%).
To measure one’s plans to achieve the goal, participants checked as many as applied to
three separate lists of behaviors and attitudes that correspond to image theory components.
Responses in each list were designed to measure one’s (1) strategy image (e.g., “I am going to
emphasize accuracy” and “I am going to emphasize speed”), (2) trajectory image (e.g., “I set my
performance goal low on purpose so it would be easy to meet” and “I set my performance goal at
a challenging level to motivate me to perform well”), and (3) value image (e.g., “I don’t care
whether or not I meet my goal” and “I will be disappointed in myself if I do not meet my goal”).
Participants were told to check as many as apply and could also check an “Other” option and
write in a unique response if the listed behaviors and values did not represent their own. Open-ended responses followed the list to allow participants to explain their selections. If more than
one option was checked within a list, a follow up question appeared in the survey that forced a
single response from the participant’s multiple responses. For the strategy image, participants
saw: “Of the items you checked to state your plans to meet your performance goal, which is most
important to you?” Options to this question were copied from the participant’s multiple checked
responses of plans to achieve the goal on the previous question. Identical questions appeared for
the trajectory image and value image lists, if necessary. Allowing the participant to first check all
that apply before identifying the most important option gathered more information about the
participant’s priorities regarding the possible behaviors and attitudes about the goal.
The participant’s stated plans to achieve the goal from the strategy image checklist were
validated immediately after completing the first trial of the task. Participants responded to the
statement, “Check the box next to each plan that you actually did during the Number Reduction
Task.” The statement was followed by all of the options previously selected by the participant in
the strategy image behavior list (including “Other” options with write-in plans). The options
“None of these” and “Something else,” with a write-in response, were added to every list.
Identical dependent measures for the goal percentile and plans to meet the goal, as
measured by the three lists of behaviors and attitudes, were administered twice: once before the
first trial, and again before the second trial.
Manipulation Checks
After completing the second trial of the Number Reduction Task, participants responded
to several manipulation checks designed to gauge their effort, acceptance of false feedback, and
belief that the task medium of numbers affected their performance. First, participants were told
that the researchers would like to update the database of percentile scores, and that the
researchers need the participant’s permission to include their data. It was stated that only data
from responders who gave a “good effort” on the task was desired, to prevent contamination of the percentiles. Participants were encouraged to select “yes” only if they were confident they gave a good effort on the task, and were told they would not be penalized for selecting “no.” Second,
participants responded to a manipulation check for feedback acceptance of their percentile score
with respect to accuracy and speed. Participants were told to think back to how well they
believed they were doing on the task while completing it, without comparisons to others or using
percentiles. They responded to two questions with a 5-point Likert response. The first was, “How
would YOU describe your accuracy while you were completing the task? (5-point Likert
response: 1 – not at all accurate; 2 – somewhat accurate; 3 – moderately accurate; 4 – very
accurate; 5 – completely accurate),” and the second was, “How would YOU describe your speed
while you were completing the task? (5-point Likert response: 1 – not at all quick; 2 – somewhat quick; 3 – moderately quick; 4 – very quick; 5 – extremely quick).” Finally, participants were
asked to imagine that the task they just completed was an Alphabet Reduction Task using letters
instead of numbers. Participants responded on a 5-point Likert scale to the question:
“How do you think your performance on an Alphabet Reduction Task would compare to your
performance on the Number Reduction Task? (1 – significantly lower scores with letters; 2 –
moderately lower scores with letters; 3 – no difference in score; 4 – moderately higher scores
with letters; 5 – significantly higher scores with letters).”
Procedure
Participants were provided a link to individually access the online Number Reduction
Task and survey questions. It was explained that performance on the task is measured as a
combination of accuracy and speed, and that those who perform in the top 50% of all participants
will be entered into a raffle for one of four $50 gift cards. The rules of the task were then
explained, including examples, and participants completed the 16 two-digit training items. Then,
participants wrote in their percentile goal for the task and selected the behaviors and attitudes
representing their plans to achieve the goal (strategy image), how the goal was set (trajectory
image), and value placed on the performance (value image). The 30-item Number Reduction
Task followed for the first trial. After the participant clicked “Next” to submit their answers, a
new page appeared for five seconds that stated the computer was scoring their responses.
Based on the condition to which each participant was randomly assigned, the participant
saw feedback stating their performance score fell in the 28th percentile (extremely negative
condition), 45th percentile (moderately negative condition), or the 73rd percentile (positive
condition). Next, every participant reported the actual behaviors they performed during the
Number Reduction Task to meet their percentile goal, and then completed the shock
characteristic questions. Participants were then told the conditions for earning an entry into the
gift card raffle on a second trial and to set a performance goal they believed they could achieve
that was equal to or higher than their percentile feedback from the first trial. Dependent measures
of goal percentile and plans to meet the goal were repeated, followed by the 20-item second trial
of the Number Reduction Task.
Participants completed the manipulation checks of effort, feedback acceptance, and letter
vs. number preference, reported their age and gender, and were then directed to a debriefing page.
In the debriefing, participants were told the true purpose of the study to examine whether people
changed their goals or plans to achieve the goals based on positive or negative feedback, and that
the feedback received was completely false. It stated that performance on the task did not affect
one’s chances to win a gift card, and participants could provide an e-mail address if they wished to
be included in the gift card raffle (see Appendix for complete debriefing script).
RESULTS
Data Cleaning
The online survey was accessed by n=689 unique responders, n=581 (84.3%) of whom
were recruited through their psychology course in exchange for credit and n=108 (15.7%) who
were recruited through the psychology department research subject pool. Incomplete cases with
missing responses to items of shock characteristics, goals, or plans to meet the goals were
automatically eliminated because the data would not contribute to the statistical analyses
comparing changes between Time 1 and Time 2. In sum, n=184 (26.7%) cases were incomplete
through the beginning of the second trial of the Number Reduction Task and were deleted.
However, participants exited the online survey at varying times: 21 individuals (11.4% of the
n=184 leavers) exited the online survey without giving consent to participate in the study; 99
participants (53.8% of leavers) gave their consent to participate and read through the stated
purpose of the study, but exited before attempting the practice items for the Number Reduction
Task; 45 participants (24.5% of leavers) completed the practice items and stated their
performance goal and plans to achieve the goal for Time 1, then exited without attempting the
first trial of the Number Reduction Task; 4 participants (2.2% of leavers) completed the first trial
of the Number Reduction Task and exited immediately without viewing the randomly assigned
feedback percentile score; 3 participants (1.6% of leavers) exited the survey immediately after
receiving the false feedback on the first trial of the Number Reduction Task; and 12 participants
(6.5% of leavers) answered the image theory questions and set their performance goal for Time 2
before exiting the survey, having not completed the plans to meet the goal or attempted the
second trial of the Number Reduction Task. Of the three participants who immediately exited
the survey after receiving their false feedback on the first trial, each was in a different feedback
condition: the first participant set a performance goal at the 52nd percentile and exited when told
he or she scored at the 28th percentile, the second participant set a performance goal at the 70th
percentile and exited when told he or she scored at the 45th percentile, and the third participant
set a performance goal at the 60th percentile and exited after learning he or she scored at the 73rd
percentile. Because participants who left after viewing their feedback percentile came from all
three conditions, and because of the low overall base rate of this occurring, exiting the survey
because of disbelief or anger about a negative feedback percentile is not suspected as being a
meaningful factor for leaving the study.
Removal of the incomplete cases resulted in a data set of n=505 cases, which is a
response rate of 73.3% of those who originally accessed the survey. This reduced sample comprised n=443 participants (87.7% of n=505) who were recruited through their psychology
course and n=62 (12.3% of n=505) recruited through the psychology department research subject
pool. Random assignment to feedback condition was balanced in this data set, with n=169
(33.5%) receiving feedback at the 28th percentile, n=160 (31.7%) receiving feedback at the 45th
percentile, and n=176 (34.9%) receiving feedback at the 73rd percentile.
Data cleaning for the n=505 complete cases
The n=505 complete cases were cleaned through the following steps. Case deletion was
planned for participants who clearly did not follow the instructions, or who completed the task
unrealistically fast. All Number Reduction Task data for the first trial were scored for accuracy
and assigned a percentile rank for speed. Twenty-eight cases (5.5%) were automatically deleted
because the participant scored at or below the 10th percentile for accuracy and at or above the
90th percentile for speed, indicating low attention to the task. Any case with less than 50%
accuracy on the task and not among the n=28 cases that had been automatically deleted was also
individually checked for suspicious guessing patterns. An additional n=75 cases scored less than
50% accuracy on the task, but all were retained because closer examination of the pattern of
errors and open-ended responses suggested a genuine effort on the task.
Planned data cleaning procedures also included a thorough examination of factors that
could potentially contaminate the dependent measures or Number Reduction Task performance,
including time spent on the survey, inconsistent responding, suspicion of the research hypotheses,
and low effort. Time (in seconds) spent on each online survey page was recorded by the survey
software. Thirteen participants whose total time on the survey exceeded three hours (three times the expected one-hour time limit) were deleted from the data set because they spent
unnecessarily long periods of time on single survey pages containing minimal information (as
long as ten hours on a single page). This suggested the participant was not actively engaged in
the survey for extended periods of time. When these most extreme cases were deleted, the
median total time on the survey was 27.7 minutes (M = 32.5 minutes, SD = 19.8 minutes). These
descriptive statistics were used to create a rule for the remaining outliers such that all cases with
a total survey time exceeding three standard deviations above the mean were flagged for further
examination. This rule identified ten additional cases, all of which were found to have
unnecessarily long periods of time on single survey pages. Not knowing the reason for the
excessive pauses within the survey would make it difficult to decipher if significant changes in
the dependent measures, or null results, were due to genuine reactions to the study design or
environmental distractions. All n=23 participants (4.6% of the n=505 complete cases) were omitted from the sample, resulting
in a sample median of 27.6 minutes (M = 30.3 minutes, SD = 12.5 minutes) for the total time
spent on the survey.
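A minimal pandas sketch of the automatic cleaning rules and the outlier flag described above is shown below. The data frame layout and the column names (accuracy_pctile, speed_pctile, total_minutes) are hypothetical, since the study's actual data files and analysis software are not described.

```python
import pandas as pd

def clean_cases(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the automatic cleaning rules described in the text to a data frame
    with one row per complete case (hypothetical column names)."""
    # Rule 1: drop cases at or below the 10th percentile for accuracy AND
    # at or above the 90th percentile for speed (low attention to the task).
    low_attention = (df["accuracy_pctile"] <= 10) & (df["speed_pctile"] >= 90)
    df = df.loc[~low_attention].copy()

    # Rule 2: drop cases whose total survey time exceeded three hours.
    df = df.loc[df["total_minutes"] <= 180].copy()

    # Rule 3: flag remaining cases more than three standard deviations above the
    # mean survey time for individual review before any further deletion.
    cutoff = df["total_minutes"].mean() + 3 * df["total_minutes"].std()
    df["time_outlier_flag"] = df["total_minutes"] > cutoff
    return df
```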
The data were also cleaned for inconsistencies in setting and describing the performance
goal percentile at Time 1 and Time 2. Participants’ performance goals were expected to relate
consistently with their reported accuracy confidence, accuracy comparison to others, speed
confidence, and speed comparison to others. Discrepancies were individually examined. For
example, one participant set his goal at the 99th percentile but reported being “not at all confident”
in his accuracy or speed, and expected to fall in the “bottom 20%” compared to his peers on
accuracy and speed. These inconsistencies may indicate a misunderstanding of how the
percentile scores for the task would be calculated, confusion regarding how the participants
would be scored compared to others on the task, or a lack of effort in responding genuinely about
one’s performance goal. All cases in which the participant set a goal above the 80th percentile or
below the 20th percentile were examined for these inconsistencies because more extreme
responses on the goal percentile would require similarly extreme responses on the additional
measures of accuracy and speed to be consistent; n=13 cases fit these criteria and were removed.
Finally, examination of the open-ended response fields throughout the survey discovered
a small number of participants whose data may have been compromised by suspicion or
extremely low effort on the study. Four cases were deleted because the participant guessed the
research hypothesis after receiving the false feedback score for the first trial. These four
participants wrote phrases such as, “I feel like you are lying” and “[this] study is how I react to bad
percentile score[s].” An additional three cases were deleted for apparently low effort, such as
typing nonsense words into all of the open-ended response fields to advance the survey.
In total, n=43 cases (8.5%) were individually eliminated because of the total time spent
on the survey (n=23), inconsistencies between the performance goal accuracy and speed
descriptions of the performance (n=13), suspicion of the research hypotheses (n=4), and apparent
low effort (n=3). Combined with the n=28 cases (5.5%) automatically deleted for scoring at or
below the 10th percentile in accuracy and at or above the 90th percentile for speed, n=71 cases
(14.1%) were removed from the complete data set.
The final sample size is n=434 and represents 85.9% retention of the n=505 complete
cases. Random assignment to conditions remained balanced, with n=145 (33.4%) in the 28th
percentile feedback condition, n=137 (31.6%) in the 45th percentile feedback condition, and
n=152 (35.0%) in the 73rd percentile feedback condition. All reported analyses from this point
forward were conducted using the retained sample of n=434.
Data Analysis
Data reporting summary
Planned statistical tests and exploratory analyses are reported next to test the stated hypotheses and address the research questions. The results section begins by reporting notable
descriptive statistics and intercorrelations of the dependent measures, image theory questions,
and manipulation checks. Second, chi-square tests of independence are used to assess whether a
negative shock was experienced at a higher frequency in one feedback condition (namely, the
extremely negative feedback condition) over another (see Fitzgerald, Dimitrov, & Rumrill, 2001,
for justification). Third, mean comparisons across feedback conditions using an analysis of
variance (ANOVA) are reported for the image theory questions. Fourth, mean comparisons
across time and between the feedback conditions are analyzed using mixed measures ANOVA
tests to assess changes in the dependent measures. Finally, changes in the plans to meet the goal
(strategy image), how the goal was set (trajectory image), and how performance is valued (value
image) are analyzed with McNemar’s test for correlated proportions. Thirty-nine tests are
reported (from thirteen options in the lists for each of the three feedback conditions), as well as
nine tests of the “most important” plan, goal, and value endorsed by each feedback condition.
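For the McNemar tests of correlated proportions, a minimal sketch using statsmodels is shown below. The paired endorsement data are invented solely for illustration and do not reflect the study's results.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired data: whether each participant checked a given strategy
# image option before Trial 1 (t1) and again before Trial 2 (t2).
t1 = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1])
t2 = np.array([1, 0, 0, 1, 1, 1, 0, 0, 1, 1])

# 2x2 table of correlated proportions (rows: Time 1 yes/no; columns: Time 2 yes/no)
table = np.array([
    [np.sum((t1 == 1) & (t2 == 1)), np.sum((t1 == 1) & (t2 == 0))],
    [np.sum((t1 == 0) & (t2 == 1)), np.sum((t1 == 0) & (t2 == 0))],
])

result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs
print(result.statistic, result.pvalue)
```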
Descriptive statistics & variable relationships
Means, standard deviations, and intercorrelations among the variables are presented in
Table 1. The average goal percentile at Time 1 was M = 64.33 (SD = 18.46) and above the
minimum 51st percentile score that was required for entry into the gift card raffle. When
responding to the performance goal percentile items, participants used the full range of the scale
from the 1st to the 99th percentile, including n=72 participants (16.6%) who set their performance
goal at or below the 50th percentile. Of these n=72 participants, n=18 were in the extremely
negative feedback condition, n=28 were in the moderately negative feedback condition, and
n=26 were in the positive feedback condition, although this goal was set before random
assignment to feedback was administered. When the goal percentiles at Time 1 are examined by
feedback condition, the average goal percentile was M = 65.75 (SD = 18.25) in the extremely
negative feedback condition, M = 62.65 (SD = 19.21) in the moderately negative feedback
condition, and M = 64.49 (SD = 17.97) in the positive feedback condition, as shown in Table 2.
After false feedback was randomly assigned to participants, the average goal percentile for the
second trial of the task reduced to M = 55.72 (SD = 18.23). When examined by feedback
condition, those receiving extremely negative feedback reported significantly lower goal
percentiles at Time 2 compared to Time 1 (t(144) = -19.56, p < .001; M = 40.62, SD = 15.47), as
did those receiving moderately negative feedback (t(136) = -14.04, p < .001; M = 51.37, SD =
9.41). Participants receiving positive feedback reported significantly higher goal percentiles at
Time 2 compared to their goals at Time 1 (t(151) = 14.02, p < .001; M = 74.06, SD = 8.42).
Participants again used the full range of the percentile scale at Time 2, even though Time 2
performance goals less than the false feedback percentile received at Time 1 would not be
rewarded. Four participants (2.8% of n=145) in the 28th percentile condition reported a Time 2
goal less than the 28th percentile, n=4 participants (2.9% of n=137) in the 45th percentile
condition reported a Time 2 goal less than the 45th percentile, and n=16 participants (10.5% of
n=152) in the 73rd percentile condition reported a Time 2 goal less than the 73rd percentile.
Image theory literature has emphasized the use of one-item measures for determining
whether a shock has occurred. Following the precedent set in that literature, participants
responded to one-item measures that assess confidence in one’s accuracy and speed and
comparisons to others’ accuracy and speed. When responding to these items, participants used
the full range of five-point and seven-point Likert-response options, respectively, at Time 1 and
Time 2. Participants reported higher confidence in their accuracy on the task (Time 1: M = 3.20,
SD = 1.01; Time 2: M = 2.90, SD = 0.99) than their speed on the task (Time 1: M = 2.78, SD =
0.88; Time 2: M = 2.55, SD = 0.92) before both trials of the task. A similar pattern shows higher
expected performance for accuracy (Time 1: M = 4.58, SD = 1.21; Time 2: M = 4.14, SD = 1.19)
than for speed (Time 1: M = 4.00, SD = 1.17; Time 2: M = 3.75, SD = 1.18) when participants
compared themselves to their peers. At Time 1, participants rated themselves near the middle of
the scales for all four of the items (accuracy confidence, accuracy comparison to peers, speed
confidence, and speed comparison to peers), suggesting moderate confidence in one’s accuracy
and quickness and ability to succeed compared to others on the task. Mean scores were lower for
all four questions at Time 2, but still in the middle range of each scale.
Each accuracy and speed item is designed to facilitate an understanding of how the
participant came to his or her goal percentile. It is expected that accuracy confidence and
accuracy comparisons to peers are positively correlated and show convergent relations between
two distinct accuracy measures. This was supported, as the zero-order correlation between
accuracy confidence and accuracy comparison to peers is r = .67, p < .001 at Time 1 and r = .59,
p < .001 at Time 2, as shown in Table 1. A positive, convergent relationship was also found
between speed confidence and speed comparison to peers, r = .74, p < .001 at Time 1 and r = .74,
p < .001 at Time 2. Convergent relationships within the two accuracy items and within the two
speed items would suggest that discriminant tendencies, or smaller correlations, might be found
when relating accuracy and speed items together. The data do not strongly support these
discriminant tendencies, as the zero-order correlations for accuracy and speed together were
large and ranged from r = .56 to r = .73 (p < .001) at Time 1 and r = .43 to r = .64 (p < .001) at
Time 2. However, as mentioned above, each accuracy and speed item was included to
understand how the participant’s goal percentile was affected by one’s confidence and
comparisons to others on accuracy and speed. Instructions for setting the goal percentile
emphasized that one must be both accurate and quick to do well on the task. Therefore, it is
sensible that a participant would rate oneself similarly on each measure of accuracy and speed,
because a believable goal percentile requires consistent expectations on both. Due to this
reasoning, the lack of support for discriminant tendencies between the accuracy and speed
measures is not considered a limitation in interpreting the results.
The performance goal set before each trial is also expected to positively correlate with
each measure of accuracy confidence, accuracy comparison to peers, speed confidence, and
speed comparison to peers, such that higher performance goals are associated with higher
confidence and percentile comparisons to others. This relationship was found at Time 1 and
correlations ranged from r = .41 (p < .001) to r = .61 (p < .001). The positive relationship
between the performance goal percentile and the four measures of accuracy and speed at Time 2
were lower than the values at Time 1, but were still statistically significant and ranged from r
= .27 (p < .001) to r = .54 (p < .001). See Table 1.
For the image theory questions, means are calculated for the one-item measures that
assess the dimensions of experienced shocks, which were scaled on a five-point Likert scale.
Overall, participants reported the feedback received (which was false feedback, but believed by
participants to be genuine at the time) to be slightly unexpected (M = 2.42, SD = 1.14) and
negative (M = 2.60, SD = 1.35). Participants in the present study are believed to have
experienced a shock when they answer “yes” to the question, “When you received your score at
the [28th / 45th / 73rd] percentile, did you think that your plans to meet your goal were the reason
for your score?” because this question was adapted from Lee et al.’s (1999) original shock
question. For the n=137 who answered “yes,” the relationship between expectedness and
positivity is r = .24 (p < .01). Morrell et al. (2004) found that shocks which are reported as being
expected are more likely to be positive (r = 0.36, p < .001), which is consistent with the results
of the present study (r = .24, p < .01). Morrell and colleagues also reported a significant relationship between negative shocks and work-related shocks (r = .49, p < .001), but this
relationship did not replicate in the present study for the negative and task-related dimensions of
shocks.
All task data were scored for accuracy. On average, participants made few errors on
the two-digit practice items (M = 90.9% correct, SD = 17.0%), and had an average accuracy of M
= 78.9% (SD = 28.0%) on the first trial of the Number Reduction Task. Accuracy on the second
trial of the task increased slightly to M = 81.2% (SD = 27.3%). Participants’ actual performance
on the Number Reduction Task was very consistent from Time 1 to Time 2. As shown in Table
1, the relationship between the percentage correct on the Number Reduction Task at Time 1 and
at Time 2 is r = .87, p < .001. This value can be interpreted as the test-retest reliability of the
Number Reduction Task, and is satisfactorily high. The first trial of the Number Reduction Task
had 30 items with an average speed of M = 8.28 seconds per item (SD = 4.01), while the second
trial had 20 items with an average speed of M = 6.16 seconds per item (SD = 3.75). Time per
item was significantly faster on the second trial than on the first trial, t(433) = -11.90, p < .001.
The relationship between the speed on the Number Reduction task at Time 1 and Time 2 is also
strong (r = .54, p < .001). As shown in Tables 3 and 4, actual accuracy percentages at Time 1 of
the Number Reduction Task did not vary by feedback condition (F(2, 431) = .684, p > .05), nor
did actual speed on the task (F(2, 431) = 1.524, p > .05).
There is a significant relationship between the goal percentile set by the participant at
Time 1 and his or her actual accuracy on the first trial of the Number Reduction Task (r = .37, p
< .001). The participant’s Time 2 goal percentile also correlates with accuracy on the second
trial of the task, r = .14, p < .001, suggesting that those who set a higher goal, on average,
achieved higher accuracy. Although the magnitude of this correlation is significantly smaller at
Time 2 than at Time 1 (z = 3.68, p < .001), the significant relationship at Time 2 was not washed
out by the false feedback.
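The comparison of the Time 1 and Time 2 goal-accuracy correlations can be illustrated with a Fisher r-to-z sketch such as the one below. This simple version treats the two correlations as independent estimates, whereas the reported z = 3.68 may have been computed with a formula for dependent correlations, so the values will differ slightly.

```python
import math
from scipy.stats import norm

def fisher_z_compare(r1: float, n1: int, r2: float, n2: int):
    """Compare two correlations via Fisher's r-to-z transformation, treating
    them as independent estimates (a simplifying assumption)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    return z, p

# Goal-accuracy correlation at Time 1 (r = .37) vs. Time 2 (r = .14), n = 434
z, p = fisher_z_compare(0.37, 434, 0.14, 434)
print(f"z = {z:.2f}, p = {p:.4f}")  # approximately z = 3.6, p < .001
```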
Manipulation Checks
To assess the efficacy of the feedback manipulation, participants’ responses to several
items are examined. Participants were led to believe that the researchers needed permission to
use their scores to update the percentile database, and n = 421 (97.0%) checked “yes” to indicate
their permission and that they gave a good effort on the task. A second question asked
participants to rate their own accuracy and speed without consideration to the feedback they
received earlier in the study. Actual accuracy on the first trial of the Number Reduction Task did
not vary by feedback condition (F(2, 431) = .684, p > .05), nor did actual speed (F(2, 431) = 1.524, p > .05), so perceived accuracy and speed should not vary by feedback condition
unless one believed strongly that the feedback percentile given earlier in the study was genuine.
A significant difference was found between feedback conditions when participants rated their
own accuracy (F(2, 431) = 6.801, p < .01). Participants in the 73rd percentile condition rated their
accuracy significantly higher (M = 3.63, SD = .84) than those in the 45th percentile condition (M
= 3.32, SD = .91) or 28th percentile condition (M = 3.26, SD = 1.06). The mean difference
between the 45th and 28th percentile conditions was not significant. A significant difference was
also found between feedback conditions when participants rated their own speed (F(2, 431) =
10.254, p < .001). Again, participants in the 73rd percentile condition rated their speed
significantly higher (M = 3.19, SD = .85) than those in the 45th percentile condition (M = 2.91,
SD = .80) or 28th percentile condition (M = 2.72, SD = 1.02), with no significant mean difference
between the 45th and 28th percentile conditions. Finally, a third question was asked to assess the
impact of a math phobia or fear of numbers on participants’ task performance. Overall,
participants responded on a five-point Likert scale to a hypothetical scenario of substituting
numbers with letters and believed their scores would be lower with letters (M = 2.49, SD = .98).
This response varied by feedback condition (F(2, 431) = 3.669, p < .05), with participants in the
45th percentile condition (M = 2.67, SD = 1.02) expecting significantly less of a score decrement with letters than participants in the 73rd percentile condition (M = 2.38, SD
= .90); no differences were found between the 28th (M = 2.43, SD = 1.01) and 45th percentile
conditions, or the 28th and 73rd percentile conditions. See Tables 5-7 and the supporting
descriptive statistics and multiple comparisons for complete statistical reporting.
Frequency of Shocks
In image theory literature on turnover, a shock is said to have occurred if participants
respond “yes” to the item, “Was there a single, particular event that caused you to think about
leaving?” (Morrell et al., 2004). For the present study, the question was adapted to read: “When
you received your score at the [28th / 45th / 73rd] percentile, did you think that your plans to meet
your goal were the reason for your score?” Responding “yes” indicates that the plans to meet the
goal are perceived to have influenced the score, perhaps to the point of experiencing a shock.
Characterizing the feedback as negative and unexpected supports that a negative shock occurred.
It was expected that participants in the extremely negative feedback condition (those
who received feedback at the 28th percentile) would endorse this item at a higher rate than the other
two groups and would report their feedback as being negative and unexpected. A 3×2 chi-square
test of independence was conducted to assess whether a negative shock was experienced at a
higher frequency in one feedback condition over another by counting the number of respondents
who (1) responded “yes” to the “plans” question, (2) indicated their score was “negative” or
“somewhat negative” in response to the question, “To what extent was your percentile score
negative or positive,” and (3) indicated their score was “totally unexpected” or “somewhat
unexpected” in response to the question, “To what extent was your percentile score unexpected
or expected?” Participants who endorsed all three items were compared to those who did not
endorse all of the criteria for experiencing a negative shock. Results show significant differences
in the frequency with which negative, unexpected shocks were experienced between feedback
conditions, χ2(2, n = 434) = 15.62, p < .001, with higher endorsement from participants
receiving extremely negative feedback (n = 16) compared to those in the moderately negative (n
= 9) or positive (n = 1) feedback conditions. Post-hoc 2×2 chi-square tests of independence
showed significant differences between the 28th and 73rd percentile feedback conditions (χ2(1, n
= 297) = 15.962, p < .001) and between the 45th and 73rd percentile feedback conditions (χ2(1, n =
289) = 7.538, p < .01). However, no significant difference in experiencing a negative shock was
found between the 28th and 45th percentile feedback conditions (χ2(1, n = 282) = 2.236, p > .05).
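A minimal scipy sketch of the 3×2 shock-frequency test is shown below, using the cell counts reported above. Because the exact classification of borderline cases is not reported, the resulting statistic is illustrative and may differ slightly from the reported χ2 = 15.62.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: feedback condition; columns: met all three negative-shock criteria vs. not.
# Shock counts (16, 9, 1) and condition sizes (145, 137, 152) are taken from the text.
table = np.array([
    [16, 145 - 16],   # extremely negative (28th percentile)
    [ 9, 137 -  9],   # moderately negative (45th percentile)
    [ 1, 152 -  1],   # positive (73rd percentile)
])

chi2, p, df, expected = chi2_contingency(table)
print(f"chi2({df}) = {chi2:.2f}, p = {p:.4f}")
```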
Responding “yes” to the question, “Does your score at the [28th / 45th / 73rd] percentile
make you think that your plans to meet your goal will affect your score on the next trial?” is a
secondary assessment of whether a shock was experienced and was modeled after turnover
questions used in the image theory literature. Again, to determine the frequency with which
negative, unexpected shocks were experienced between the conditions, a 3×2 chi-square test of
independence was conducted for participants who responded “yes” to this question, reported that
their score was “negative” or “somewhat negative,” and reported that their score was “totally
unexpected” or “somewhat unexpected.” Results show a significant difference between feedback
conditions, χ2(2, n = 434) = 51.978, p < .001, such that participants receiving extremely negative
feedback (n = 53) and moderately negative feedback (n = 40) endorsed the item at a higher rate
than participants receiving positive feedback (n = 5). Post-hoc 2×2 chi-square tests of
independence again showed significant differences between the 28th and 73rd percentile feedback
conditions (χ2(1, n = 297) = 52.245, p < .001) and between the 45th and 73rd percentile feedback
conditions (χ2(1, n = 289) = 36.789, p < .001). No significant differences were detected between
the 28th and 45th percentile feedback conditions (χ2(1, n = 282) = 1.724, p > .05).
One subgroup of participants reported unexpected reactions to their randomly assigned
feedback percentiles. Twenty participants in the 73rd percentile feedback condition reported they
perceived the feedback as being negative. Within this subset, the average performance goal for
Time 1 (M = 79.25%, SD = 18.77%) was significantly higher than the average performance goal
for the rest of the positive feedback condition cell of n = 132 at Time 1 (M = 62.38, SD = 16.82),
t(19) = 4.019, p < .001. Average responses to accuracy confidence (M = 3.65, SD = .93),
accuracy comparison to peers (M = 5.35, SD = .99), speed confidence (M = 3.25, SD = .79), and
speed comparison to peers (M = 4.65, SD = .99) were all above the midpoints of their respective Likert-response scales. These twenty participants, however, did
not score significantly higher on the Number Reduction Task than the rest of the 73rd percentile
condition cell at Time 1, t(19) = 0.539, p > .05 (n=20: M = 78.67%, SD = 29.78%; n=132: M =
82.26%, SD = 25.22%), so their inflated goals and high accuracy and speed confidence were not backed by superior ability compared to their peers.
The hypothesis that negative shocks would be experienced at a higher rate in the
extremely negative feedback condition was partially supported. Negative shocks were measured
by responding “yes” to the adapted shock questions from the turnover literature and describing
the feedback as both negative and unexpected. Negative shocks were reported at significantly
higher rates in the 28th percentile and 45th percentile feedback conditions compared to the
positive 73rd percentile feedback condition, but there was not a significant difference in negative
shocks reported between the 28th percentile and 45th percentile conditions. Negative
interpretations in the 73rd percentile feedback condition, however, show that there is still more variability in perceived shocks than can be predicted by feedback condition alone.
Mean Comparisons Across Feedback Conditions
It was hypothesized that participants in the three feedback conditions would report
differences in the expectedness and valence of the feedback received. One-way analysis of
variance (ANOVA) was used to analyze responses to the questions, “To what extent was your
percentile score unexpected or expected?” and “To what extent was your percentile score
negative or positive?” Results indicated significant differences in reported expectedness, F(2,
431) = 22.94, p < .001, η2 = 0.10. Post-hoc tests using the Bonferroni correction showed that
participants in the positive feedback condition reported significantly higher average expectedness
of their scores (M = 2.88, SD = 1.18) than either the extremely negative (M = 2.05, SD = 1.03) or
moderately negative (M = 2.30, SD = 1.05) feedback conditions. No significant differences were
found between the 28th and 45th percentile conditions. Results of the analysis of variance for
feedback valence, or positivity, also indicated significant differences between feedback
conditions, F(2, 431) = 238.05, p < .001, η2 = 0.53. Post-hoc tests using the Bonferroni
correction revealed significant differences between all of the conditions, such that the positive
feedback condition reported the most positive scores (M = 3.90, SD = 1.05), followed by the
moderately negative feedback (M = 2.16, SD = .91) and extremely negative feedback (M = 1.66,
SD = .81) conditions. The hypothesis that perceived expectedness and valence of the feedback
would differ by feedback condition is supported.
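The one-way ANOVAs with Bonferroni follow-ups can be illustrated with the scipy sketch below. The ratings are simulated around the reported condition means purely for illustration, so the resulting statistics will not match the values reported above.

```python
import numpy as np
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(0)
# Simulated 1-5 expectedness ratings generated around the reported condition
# means and SDs; illustrative only, not the study data.
extreme  = np.clip(np.rint(rng.normal(2.05, 1.03, 145)), 1, 5)
moderate = np.clip(np.rint(rng.normal(2.30, 1.05, 137)), 1, 5)
positive = np.clip(np.rint(rng.normal(2.88, 1.18, 152)), 1, 5)

# Omnibus one-way ANOVA across the three feedback conditions
F, p = f_oneway(extreme, moderate, positive)
print(f"F(2, 431) = {F:.2f}, p = {p:.4f}")

# Bonferroni-corrected pairwise comparisons (three tests, so p is multiplied by 3)
pairs = [("28th vs 45th", extreme, moderate),
         ("28th vs 73rd", extreme, positive),
         ("45th vs 73rd", moderate, positive)]
for label, a, b in pairs:
    t, p_raw = ttest_ind(a, b)
    print(label, round(t, 2), min(p_raw * 3, 1.0))
```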
Mean comparisons across feedback conditions were also used to examine responses to
the questions, “To what extent did personal issues affect your score?” and “To what extent did
task issues affect your score?” The one-way analysis of variance showed no difference between
feedback conditions for personal issues, F(2, 431) = 1.37, p > .05, or task issues, F(2, 431) = .74,
p > .05, suggesting that participants reported similar situational and task-related experiences
during the study. No known image theory studies report the prevalence of personal issues or task
issues when making a progress decision, so it is unknown whether these ratings are representative of typical decision-making experiences.
One-way analysis of variance tests were also conducted to examine differences in
forecasted changes in accuracy, speed, and percentile performance on the second trial. Results of
the ANOVA revealed a significant mean difference for accuracy forecast, F (2,431) = 3.81, p
< .05, η2 = 0.02. Post-hoc tests using the Bonferroni correction detected a significant difference
between the extremely negative (M = 2.08, SD = .68) and moderately negative (M = 2.28, SD = .64) feedback conditions, but neither negative feedback condition differed significantly from the positive feedback condition (M = 2.10, SD = .69). Results of the
ANOVA found no differences by feedback condition for speed forecast F(2, 431) = .55, p > .05.
When participants forecasted their future percentile compared to their peers, a significant mean
difference was detected, F(2, 431) = 135.29, p < .001, η2 = 0.39. Post-hoc tests using the
Bonferroni correction revealed significant differences among all three feedback conditions, such
that those receiving positive feedback forecasted higher percentiles on the next trial (M = 66.92%,
SD = 15.15) than those receiving moderately negative feedback (M = 50.11%, SD = 12.16), or
those receiving extremely negative feedback (M = 38.69%, SD = 16.83). See Tables 8-14 and the
supporting descriptive statistics and multiple comparisons for complete statistical reporting of
the mean comparisons across feedback conditions.
Mean Comparisons Across Trials
A mixed measures ANOVA was performed to detect changes in the stated goal percentile
across two measurements (before the first trial and before the second trial) and changes between
the feedback conditions. A between-subjects by within-subjects interaction effect was
hypothesized, such that the influence of trial (recorded before the first trial and before the second
trial) on the stated goal percentile depended on the feedback condition. Results indicated a
significant interaction effect, F(2, 431) = 144.61, p < .001, partial η2 = 0.40, supporting the
hypothesis that stated goal percentile depends on the interaction of feedback condition and trial.
Main effects were also detected for the within-subjects effect (comparing Time 1 goal percentiles to Time 2 goal percentiles), F(1, 431) = 110.57, p < .001, partial η2 = .20, and for the between-subjects effect (random assignment to feedback condition), F(2, 431) = 66.44, p < .001, partial η2 = .24. For the significant main effect of feedback condition, post-hoc comparisons using the
Bonferroni correction showed significant mean differences between all of the feedback
conditions, such that the extremely negative feedback condition (M = 53.19%, S.E. = 1.05) was
significantly lower than the moderately negative feedback condition (M = 57.01%, S.E. = 1.08), which was significantly lower than the positive feedback condition (M = 69.28%, S.E. = 1.02).
See Table 16 for full reporting of the mixed measures ANOVA results and post-hoc tests.
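As an illustration of this type of analysis, the sketch below runs a 3 (feedback condition, between subjects) × 2 (trial, within subjects) mixed ANOVA on long-format data using the open-source pingouin package. The file name and column names (goal_percentile_long.csv, subject, condition, trial, goal_percentile) are hypothetical placeholders, and the call assumes pingouin's mixed_anova interface; it is a sketch of the general approach, not the study's actual analysis script.

    # Sketch only: hypothetical long-format data, one row per participant per trial
    import pandas as pd
    import pingouin as pg   # assumes the pingouin statistics package is installed

    long_df = pd.read_csv("goal_percentile_long.csv")   # hypothetical placeholder file

    aov = pg.mixed_anova(data=long_df,
                         dv="goal_percentile",   # stated goal percentile
                         within="trial",         # Time 1 vs. Time 2
                         subject="subject",      # participant identifier
                         between="condition")    # 28th, 45th, or 73rd percentile feedback
    print(aov.round(3))   # ANOVA table with F, p, and effect size columns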
After stating the goal percentile, participants indicated their attitudes toward their
accuracy on the task by responding to the questions, “How confident are you that you can
complete this task very accurately?” and “How do you think your accuracy will compare to
others?” A mixed measures ANOVA was performed to test for between-subjects by within-subjects interaction effects, such that the influence of trial (first v. second) on the reported attitudes toward accuracy depended on the feedback condition. Results revealed a significant interaction effect for the first question measuring accuracy confidence, F(2, 431) = 6.55, p < .01, partial η2 = 0.03, with main effects for both the within-subjects effect (comparing Time 1 accuracy confidence to Time 2 accuracy confidence), F(1, 431) = 38.74, p < .001, partial η2 = .08, and the between-subjects effect (random assignment to feedback condition), F(2, 431) = 7.08, p < .01, partial η2 = .03. Post-hoc tests of the feedback condition main effect revealed that the accuracy confidence of the extremely negative feedback condition (M = 2.87, S.E. = .07) differed significantly only from the positive feedback condition (M = 3.24, S.E. = .07); no differences were found when either condition was compared to the moderately negative feedback group (M = 3.04, S.E. = .07). A significant interaction effect was also detected for the second question measuring accuracy comparisons to peers, F(2, 431) = 38.86, p < .001, partial η2 = 0.15, along with a significant within-subjects main
effect (comparing Time 1 accuracy comparison to peers with Time 2 accuracy comparison to
peers), F(1, 431) = 73.31, p < .001, partial η2 = .15, and a significant between-subjects main effect (feedback condition), F(2, 431) = 16.98, p < .001, partial η2 = .07. Significant differences between feedback conditions were detected with post-hoc tests using the Bonferroni correction, such that the positive feedback condition (M = 4.74, S.E. = .08) was significantly higher than the extremely negative (M = 4.10, S.E. = .08) or moderately negative (M = 4.22, S.E. = .09) feedback conditions, though the extremely negative and moderately negative feedback conditions were not significantly different from each other; see Tables 17-18 for full reporting of the accuracy variables.
Participants also indicated their attitudes toward their speed on the task by responding to
the questions, “How confident are you that you can complete this task very quickly?” and “How
do you think your quickness will compare to others?” A mixed measures ANOVA was
performed to test for between-subjects by within-subjects interaction effects, such that the
influence of trial (first v. second) on the reported attitudes toward speed depended on the
feedback condition. Results showed a significant interaction effect for the first question
measuring speed confidence, F(2, 431) = 13.17, p < .001, partial η2 = 0.06, with significant main effects for both within-subjects (comparing Time 1 speed confidence to Time 2 speed confidence), F(1, 431) = 26.26, p < .001, partial η2 = .06, and between-subjects (feedback condition), F(2, 431) = 9.42, p < .001, partial η2 = .04. Post-hoc tests of the between-subjects main effect using the Bonferroni correction revealed that the positive feedback condition (M = 2.85, S.E. = .06) was significantly higher than the extremely negative (M = 2.50, S.E. = .06) and
moderately negative (M = 2.63, S.E. = .06) feedback conditions. The difference between the
extremely negative and moderately negative feedback conditions was not significant. A
significant interaction effect for the second question measuring speed comparisons to peers was
also detected, F(2, 431) = 51.87, p < .001, partial η2 = 0.19, with significant within-subjects main effects (comparing Time 1 speed comparison to peers to Time 2 speed comparison to peers), F(1, 431) = 26.87, p < .001, partial η2 = .06, and between-subjects main effects (feedback condition), F(2, 431) = 28.05, p < .001, partial η2 = .06. Post-hoc tests using the Bonferroni correction again
revealed the positive feedback condition (M = 4.22, S.E. = .08) to be significantly higher than the
extremely negative (M = 3.65, S.E. = .08) or moderately negative (M = 3.73, S.E. = .09) feedback
conditions, with no significant difference between the extremely or moderately negative
conditions. See Tables 19-20 for full reporting of the mixed measures ANOVA for speed
confidence and speed comparison to peers.
Planned comparisons following up the interaction effects from the mixed measures ANOVAs show a consistent pattern from the first trial to the second trial. A one-way ANOVA was
performed for each of the five questions (i.e., goal percentile, accuracy confidence, accuracy
comparison to peers, speed confidence, speed comparison to peers) at the first trial to determine
if responses varied by feedback condition. Randomly assigned feedback had not been delivered
to the participants, so as expected, no differences were detected between feedback conditions
before the first trial (Goal percentile: F (2, 431) = 1.00, p > .05; Accuracy confidence: F (2, 431)
= 1.12, p > .05; Accuracy comparison to peers: F(2, 431) = 0.76, p > .05; Speed confidence: F
(2, 431) = 0.29, p > .05; Speed comparison to peers: F (2, 431) = 0.79, p > .05; see Tables 21-25).
One-way analysis of variance tests for each of the five questions at the second trial, however,
showed significant differences between feedback conditions. For Time 2 goal percentile, F (2,
431) = 326.97, p < .001, η2 = 0.60, post-hoc tests with the Bonferroni correction showed
significant mean differences between all of the feedback conditions, with the 73rd percentile
condition reporting a mean goal of 74.06% (SD = 8.42%), the 45th percentile condition reporting
a mean goal of 51.37% (SD = 9.41%), and the 28th percentile condition reporting a mean goal of
40.62% (SD = 15.47%). Accuracy confidence at Time 2 also varied significantly by condition,
F (2, 431) = 13.42, p < .001, η2 = 0.06. Significant mean differences were detected with the
Bonferroni correction between the 28th percentile (M =2.60, SD = 1.03) and 45th percentile (M =
2.91, SD = .95) conditions, as well as between the 28th percentile and the 73rd percentile (M =
3.18, SD = .90) conditions. The mean difference between the 45th and 73rd percentile conditions
was not significant. Accuracy comparison to peers at Time 2 was significant, F (2, 431) = 48.43,
p < .001, η2 = 0.18, with all feedback conditions reporting significantly different means from
each other in the post-hoc tests (28th percentile condition: M = 3.63, SD = 1.24; 45th percentile
condition: M = 3.94, SD = .91; 73rd percentile condition: M = 4.82, SD = 1.06). Likewise, the
overall ANOVA test of speed confidence between the conditions at Time 2 was significant, F (2,
431) = 19.19, p < .001, η2 = 0.08. Post-hoc tests detected significant differences between all of
the conditions for speed confidence, such that those in the 73rd percentile condition had
significantly greater average speed confidence (M = 2.88, SD = .88) than those in the 45th
percentile condition (M = 2.50, SD = .86), who had significantly greater average speed
confidence than those in the 28th percentile condition (M = 2.25, SD = .92). Time 2 speed
comparison to peers was also significant, F(2, 431) = 48.49, p < .001, η2 = 0.18, with post-hoc
tests showing significantly higher mean peer comparisons in the 73rd percentile condition (M =
4.41, SD = 1.10) than in the 45th percentile (M = 3.55, SD = .95) or 28th percentile (M = 3.23, SD
= 1.14) conditions. The 45th percentile and 28th percentile conditions were also significantly
different from each other. The full set of descriptive statistics and significant differences by condition is presented in Tables 26-30.
Changes in Plans, Goals, and Values
It was hypothesized that those receiving extremely negative feedback would change their
plans to achieve their goal at a higher rate than those in the other two conditions. To measure this
change in one’s plans to achieve the goal from the first trial to the second trial, McNemar’s test
for correlated proportions was conducted (Fitzgerald et al., 2001). This test assesses the
significance of the difference between two correlated proportions. A 2 × 2 contingency table was
created for each option in the lists of behaviors and attitudes for plans to meet the goal (strategy
image), how the goal was set (trajectory image), and how one values performance (value image),
with Variable A being Time 1 (before the first trial) and Variable B being Time 2 (before the
second trial). There are a total of thirteen options in the lists (5 for strategy, 4 for trajectory, and
4 for value). This is further broken down by feedback condition, such that thirteen contingency
tables were created for each of the three feedback conditions, for a total of 39 tests, shown in Tables 31, 32, and 33. Participants also selected the most important plan, goal, and value, which were analyzed by condition in an additional nine tests. With this many tests, the probability of making a Type I error, that is, erroneously rejecting a null hypothesis that is actually true, is inflated. To correct for this, the adjusted alpha level was calculated by dividing α = .05 by the total number of McNemar tests (48). The resulting adjusted alpha value for all McNemar tests in this section is α = .001.
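The sketch below illustrates one such McNemar test for a single item within a single feedback condition, along with the adjusted alpha described above. The endorsement vectors are hypothetical placeholders (1 = option endorsed, 0 = not endorsed), and statsmodels' chi-square form of the test is assumed; it is a sketch of the procedure rather than the study's actual analysis.

    # Sketch only: hypothetical endorsement data for one plan/goal/value option
    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    time1 = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])   # endorsed before Trial 1
    time2 = np.array([0, 1, 0, 0, 0, 1, 1, 0, 0, 1])   # endorsed before Trial 2

    # 2 x 2 contingency table: Time 1 endorsement (rows) by Time 2 endorsement (columns)
    table = np.array([[np.sum((time1 == 1) & (time2 == 1)), np.sum((time1 == 1) & (time2 == 0))],
                      [np.sum((time1 == 0) & (time2 == 1)), np.sum((time1 == 0) & (time2 == 0))]])

    result = mcnemar(table, exact=False, correction=True)   # chi-square version of McNemar's test
    print(f"chi2 = {result.statistic:.2f}, p = {result.pvalue:.3f}")

    # Adjusted alpha: .05 divided by the 48 McNemar tests (39 item tests + 9 "most important" tests)
    adjusted_alpha = .05 / 48
    print(f"adjusted alpha = {adjusted_alpha:.4f}")   # approximately .001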
As shown in Table 31, changes in one’s plans to achieve the goal percentile show mixed
patterns across the three feedback conditions. Those receiving extremely negative feedback
were hypothesized to change their plans at a higher rate than the other two conditions, but that
hypothesis is not supported by the data. Overall, significant changes were detected three times in
the extremely negative feedback condition, four times in the moderately negative feedback
condition, and two times in the positive feedback condition. Of the five stated plans from which
to choose, the three options regarding accuracy and speed (“I am going to emphasize accuracy,”
“I am going to emphasize speed,” “I am going to emphasize accuracy and speed equally”) provide
mixed results. The first plan (emphasizing accuracy) produced significant changes in the
moderately negative (χ2(1, n=137) = 14.23, p < .001) and positive (χ2(1, n=152) = 14.44, p
< .001) feedback conditions, but no change in the extremely negative feedback condition (χ2(1,
n=145) = 8.80, p > .001). Participants in both the 45th and 73rd percentile conditions endorsed
this plan at a significantly higher rate at Time 1 than at Time 2. The second plan (emphasizing
speed), however, did have a significant change in the extremely negative (χ2(1, n=145) = 27.22,
p < .001) and moderately negative (χ2(1, n=137) = 11.76, p = .001) feedback conditions.
Participants receiving feedback at the 28th and 45th percentiles endorsed this plan significantly
less at Time 1 than at Time 2. No significant differences were detected in the positive feedback
condition (χ2(1, n=152) = 0.25, p > .001). The third plan (emphasizing accuracy and speed equally)
showed significant changes in the extremely negative (χ2(1, n=145) = 13.07, p < .001) and
moderately negative (χ2(1, n=137) = 19.57, p < .001) feedback conditions, with participants in
both conditions endorsing the plan at a higher rate at Time 1 than at Time 2.
The fourth plan (“I will reduce distractions to increase my overall focus”) elicited
significant changes across all three conditions (extremely negative feedback condition: χ2(1,
n=145) = 20.57, p < .001; moderately negative feedback condition: χ2(1, n=137) = 13.37, p
< .001; positive feedback condition: χ2(1, n=152) = 16.67, p < .001), which does not support the
hypothesis that changes will occur more frequently in the extremely negative feedback condition
than in the other two conditions. Participants in all three feedback conditions endorsed the fourth plan at a higher rate at Time 1 than at Time 2. McNemar’s test of
correlated proportions for the fifth plan (“I will not focus on anything so I don’t “over think” it”)
did not detect a significant change from Time 1 to Time 2 in any condition (extremely negative
feedback condition: χ2(1, n=145) = .36, p > .001; moderately negative feedback condition: χ2(1,
n=137) = 4.84, p > .001; positive feedback condition: χ2(1, n=152) = 8.33, p > .001). Significant
plan changes from Time 1 to Time 2 were less likely to occur in the positive feedback condition,
as expected, but the extremely negative feedback condition did not change plans more frequently
than the other conditions. Therefore, this hypothesis was not supported.
Table 32 lists results for McNemar’s test of correlated proportions to assess changes in
how the goal was set from the first trial to the second trial. Notably, participants in the positive
feedback condition at the 73rd percentile and participants in the moderately negative feedback
condition at the 45th percentile did not significantly change their goal-setting strategy on any of
the four stated goal-setting behaviors. Significant changes were found, however, in the extremely
negative feedback condition for the goal-setting behavior, “I set my performance goal at a
challenging level to motivate me to perform well” (χ2(1, n=145) = 15.00, p < .001). Participants
receiving feedback at the 28th percentile endorsed this goal-setting behavior at a significantly
higher rate at Time 1 than at Time 2. No significant changes from Time 1 to Time 2 were found
in the extremely negative feedback condition for the other three goal-setting behaviors (“I set my
performance goal low on purpose so it would be easy to meet,” “I set my performance goal
where I did because achieving it would reward me with a chance at the gift card,” or “I set my
performance goal without a reason in mind”).
McNemar’s test of correlated proportions indicated changes in one of the four statements
about how one values the goal, as shown in Table 33. To the statement, “I will be happy with
my performance if I give my best effort, even if I do not meet my goal,” participants in the
extremely negative feedback condition and those in the moderately negative feedback condition
both indicated a change from the first trial to the second trial (extremely negative feedback
condition: χ2(1, n=145) = 15.13, p < .001; moderately negative feedback condition: χ2(1, n=137)
= 12.50, p < .001). Participants receiving feedback at the 28th and 45th percentiles endorsed this
value at a significantly higher rate at Time 1 than at Time 2. Participants in the positive
feedback condition did not significantly change their frequency of endorsing this item, χ2(1,
n=152) = 2.29, p > .001. No significant changes from Time 1 to Time 2 were detected for the
three other value items (“I don’t care whether or not I meet my goal,” “It is important to me to
meet my goal,” or “I will be disappointed in myself if I do not meet my goal”) in any condition.
Participants in all conditions indicated which plan, goal, and value from the behavioral lists they believed to be “most important.” Nine additional McNemar tests of correlated
proportions were conducted to analyze the change in importance for each of the three lists (i.e.,
plans, goals, and values) from the first trial to the second trial within the three feedback
conditions, and are shown in Table 34. To guard against the inflated probability of making Type
I errors on these nine McNemar tests, the adjusted alpha level of α = .001 was used again.
Participants in all three feedback conditions endorsed the option “I am going to emphasize
accuracy and speed equally” as the most important plan to meet the performance goal in the first
trial. Using the adjusted alpha level, McNemar’s test did not detect a significant change in any of
the feedback conditions for this plan, meaning that no significant number of participants in any
of the conditions changed their most important plan to meet the performance goal to another plan
during the second trial (28th percentile condition: χ2(1, n=145) = 7.37, p > .001; 45th percentile
condition: χ2(1, n=137) = 6.08, p > .001; 73rd percentile condition: χ2(1, n=152) = 2.33, p > .001).
Participants in all three feedback conditions also endorsed the option, “I set my
performance goal at a challenging level to motivate me to perform well,” as the most important
goal-setting option in the first trial. McNemar’s test revealed a significant change in the most
important goal-setting option endorsed by the extremely negative feedback condition in the
second trial, χ2(1, n=145) = 11.66, p < .001. However, no change was detected in the moderately
negative feedback condition, χ2(1, n=137) = 1.14, p > .001, or positive feedback condition, χ2(1,
n=152) = 1.53, p > .001.
All three feedback conditions were in agreement again when endorsing the most
important statement about how one values performance. All of the conditions chose the
statement, “I will be happy with my performance if I give my best effort, even if I do not meet
my goal” as the most important statement at the first trial. McNemar’s tests indicated significant
changes in the importance of this statement for the extremely negative and moderately negative
feedback conditions (χ2(1, n=145) = 9.14, p < .001, and χ2(1, n=137) = 17.89, p < .001,
respectively). Participants in the positive feedback condition continued to endorse this statement
as the most important way to value one’s performance in the second trial, χ2(1, n=152) = 0.36, p
> .001.
This concludes the reporting of results.
DISCUSSION
The results of this study extend image theory literature beyond the unfolding model of
employee turnover to performance feedback. Specifically, this study empirically tested whether
feedback valence alone could produce a systematic change when participants are faced with what
image theory has termed a progress decision (Beach & Connolly, 2005). Results partially support
the hypothesis that negative shocks occur more frequently in response to extremely negative
feedback, since the prevalence of negative shocks was highest in the extremely negative
feedback condition, but not significantly different from the reported negative shocks in the
moderately negative feedback condition. The subset of participants in the positive feedback
condition who reported their feedback at the 73rd percentile as “negative” suggests that
individual goals still heavily influence short-term perceptions of that feedback. The results
showed consistent effects for feedback condition, including significantly different mean
comparisons for the expectedness and valence of the shock, with greater endorsement of
“unexpected” and “negative” feedback in the extremely negative feedback condition. Therefore,
it is believed that the feedback conditions had the intended effects on average, lending some support to the hypothesis that those receiving extremely negative feedback would experience more negative shocks than those in the other feedback conditions.
Consistent effects of the feedback manipulation were also found across time, showing
that the change in a participant’s goal percentile from Time 1 to Time 2 depended on the
randomly assigned feedback condition. Feedback conditions had a significant effect on changes
in the dependent measures from Time 1 to Time 2, as evidenced by the five significant mixed
measures ANOVAs for goal percentile, accuracy confidence, accuracy comparison to peers,
speed confidence, and speed comparison to peers. No differences between the groups were
detected on these variables at Time 1, but significant and consistent differences arose
immediately after false feedback was received, and in the expected direction, with participants at
the 28th percentile reporting lower goals, accuracy, and speed than those in the 45th percentile,
who reported lower scores than those at the 73rd percentile.
Image theory describes a progress decision as occurring at a time when feedback for
current performance is available and a decision must be made concerning whether to continue
with the stated plan, to abandon or revise the plan, or to abandon or revise the goal (Beach &
Mitchell, 1998). Those receiving extremely negative feedback were hypothesized to change
their plans at a higher rate than those in the other two conditions, but that hypothesis is not
supported by the data. Significant changes in the plans to meet the goal were found for all of the
feedback conditions, and the extremely negative feedback condition was not more likely to
change their plans than the other two groups. Only one significant change was detected in the
goal-setting strategy from Time 1 to Time 2, and it was as expected: participants in the 28th
percentile condition endorsed the item “I set my performance goal at a challenging level to
motivate me to perform well” far more at Time 1 than at Time 2. Since none of the other goal-setting approaches were significantly changed from Time 1 to Time 2 in any of the conditions, it cannot be assumed that participants receiving extremely negative feedback were systematically revising their goal-setting strategy at a higher rate than the other two feedback conditions. As could be
expected, the 73rd percentile group did not feel a need to change their current approach to
pursuing the goal, likely because participants believed their abilities had qualified them for the
gift card raffle. However, the goal-setting approaches of the participants in the 45th percentile
condition are less easy to interpret because these participants did not change their goal-setting
strategy even though they believed that their previous performance had disqualified them from the gift
card raffle. Future research is necessary to better understand progress decisions when
performance feedback moderately misses the target performance level.
Changes in how one values the goal were not explicitly hypothesized, yet provide an
opportunity for a post-hoc exploration of why specific items were endorsed by certain feedback
conditions. The only statement that elicited changes from the first trial to the second trial was “I
will be happy with my performance if I give my best effort, even if I do not meet my goal” from
the 28th percentile group and 45th percentile group. A significant number of participants
receiving extremely negative and moderately negative feedback who initially stated that doing
one’s best is more important than meeting the goal no longer endorsed this belief at Time 2. This
change in value may suggest that participants in the 28th percentile and 45th percentile conditions
did not experience personal satisfaction in their performance feedback at Time 1, and therefore
shifted their values at Time 2 to be more in line with goal attainment instead of striving for one’s
personal best.
The most important plans, goals, and values also showed interesting patterns. First,
participants in all of the conditions chose the same plan, goal, and value as the most important at
Time 1. The most important plan to meet the goal endorsed by all conditions at Time 1 (“I am
going to emphasize accuracy and speed equally”) did not significantly change at Time 2 for any
of the feedback conditions. Participants in the 28th percentile feedback condition significantly changed their most important goal-setting strategy from Time 1 to Time 2, while participants in both the 28th and 45th percentile feedback conditions changed how they valued their goal at the second trial. Participants
in the 73rd percentile feedback condition continued to endorse these items at Time 2 with no
significant change in what they believed to be most important to meet their stated plan, goal, or
value.
Applications to Image Theory Research
The goal of the current study was to better understand how people respond to positive
and negative feedback through the principles of image theory. This is the first known study on
image theory to manipulate feedback through random assignment to study the effects of
feedback on experienced shocks and stated beliefs (value images), goals (trajectory images), and
plans to meet those goals (strategy images). Although some of the hypotheses were supported by
the results, many more questions arise about why participants in the three feedback conditions
responded as they did.
This study contributes to image theory research by directly measuring the impact of
feedback on changes in value images, trajectory images, and strategy images. Image theory
researchers have postulated that the feedback present in progress decisions leads one to modify
the plans to achieve the goal, revise the strategy or goal, or abandon the goal (Beach & Mitchell,
1998). The results of this study suggest that plans are more frequently changed than goals across
all feedback conditions, and goals are more frequently changed than values. Over more trials,
however, this may change as the decision maker realizes that the goal is no longer compatible
with the plans to meet the goal, making it unattainable in its current state. Future experimental
research on progress decisions would benefit from extending the number of feedback trials to
three or more to be able to assess changes in plans, goals, and values across longer periods of
time.
The most important result of this study is that it demonstrated how feedback alone can
cause a shock and influence image theory’s stated dimensions of that shock, such as
expectedness and valence. This was done by studying feedback in isolation with a true
experiment and random assignment to conditions, allowing causal inferences between the
feedback condition and changes in the dependent measures. There are many other aspects of
image theory, however, that this study does not address. For example, the unfolding model of
turnover identifies four distinct decision paths for leavers in an organization, based on behavioral
scripts, image violations, job satisfaction, and searching for alternatives before making the
decision to leave one’s job (Lee & Mitchell, 1994). Significant experimental and field research
into all of these aspects of the unfolding model of turnover needs to be conducted to understand
the role of these variables in the decision to leave an organization.
Limitations
When critically examining a study and the potential impact of its results, it is useful to
examine the study design through the four interrelated types of validity identified in Cook,
Campbell, & Perrachio (1990): statistical conclusion validity, internal validity, external validity,
and construct validity. Statistical conclusion validity refers to one’s confidence in the strength of
the observed relationships between variables. Threats to statistical conclusion validity can
include low statistical power, range restriction, and inflated error rates. The final sample size
exceeded the minimum sample size calculated with power analyses in the design phase of the
study, none of the independent or dependent variables were affected by range restriction, and
multiple McNemar chi-square tests were corrected with an adjusted alpha level. Therefore,
statistical conclusion validity is likely not a serious concern in this study.
Likewise, this study’s true laboratory experimentation with random assignment to
feedback conditions increases the internal validity. Internal validity is associated with the
confidence that the treatment alone is responsible for causing the observed changes in the
dependent measures. All other potential confounds were held constant across conditions, so any
differences between conditions can confidently be attributed to the feedback delivered at the 73rd, 45th, or 28th percentile level.
External validity is the extent to which study results generalize to other populations and
settings. Cook et al. (1990) explain that internal validity and external validity are often at odds
with one another, so increased internal validity, such as in this true laboratory experiment, may
reduce external validity. The generalizability of these findings to other settings, such as the
workplace or a high-stakes scenario with real consequences for poor performance, may be
limited due to the simplicity of the Number Reduction Task and its inability to represent a
realistic job task. Likewise, participant motivation is unlikely to generalize to an incumbent employee population because the study had no real consequences for the subjects, even if they were motivated to earn entry into a gift card raffle. However, the psychological process of
reacting to feedback and deciding how to proceed within a progress decision that was utilized in
this study may generalize to real-life settings. Further research is needed to determine if this
study’s feedback reactions within a progress decision are similar to feedback reactions that
employees experience when faced with an important decision to continue, revise, or abandon a
work goal. Cook et al. (1990) acknowledge that it is important for researchers to design studies that maximize the type of validity most relevant to their research questions, and thus this study emphasized internal validity.
Finally, construct validity is present when the operational definitions of the variables appropriately measure the intended constructs. Construct validity is problematic in this study. In an effort to stay true to the one-item measures in the image theory turnover literature, one-item measures were adapted and used throughout the study. Construct validity could be
demonstrated by developing and validating scale measures of the image theory constructs, such
as shocks, expectedness, and valence, for future use in image theory research. Future research
also would benefit from validating the statements provided as plans, goals, and values to
determine if the statements should be revised, deleted, or combined with new statements for each
of the image categories. For example, participants may view their past performance as a strong
indicator of future performance and believe that little can be done to improve on subsequent
trials. This belief provides little incentive to change one’s plan to achieve the goal, goal-setting
strategy, or value of the goal from Time 1 to Time 2. Participants whose feedback fell short of
their original goal may have arbitrarily selected a different plan to achieve the goal or a different goal-setting strategy in response to being told that their first strategy was not effective. Construct-valid measures of these statements about plans, goals, and values would provide greater confidence that changes from the first trial to the second trial were intentional and
directed toward the new goal set at Time 2. This study, and image theory research in general,
would benefit greatly by using reliable, construct-valid measures instead of the one-item
measures traditionally used in the turnover literature.
There are several limitations specific to this study’s methodology. Credibility of the
source is a known moderator of feedback acceptance (Ilgen, Fisher, & Taylor, 1979), and great lengths were taken in the survey construction to make the participants believe the feedback was genuine. Because of the complexity of the research, however, this study was a first pass at operationalizing some image theory claims into a testable framework. The perceived credibility of the source was not directly measured in
this study, so it is unknown if participants internalized the feedback as believable when setting
new goals and changing their plans at the second trial.
It is also unknown whether the gift card raffle was sufficiently motivating for
participants. In the first trial, the incentive for good performance was standardized across all
participants such that all participants were told that performance above the 51st percentile would
yield one chance in the gift card raffle. For the second trial, the extremely negative feedback
condition and moderately negative feedback condition were told that they would have one more
chance to be entered into the raffle, while the positive feedback condition was offered a second
chance. It is not known whether the second chance at the raffle was sufficient motivation for the positive feedback condition to perform, or whether it had no effect. Additional research in this area would
benefit from examining the motivating properties of the incentive structure to standardize the
external motivation of all participants.
In addition, the valence distribution of the feedback percentiles for this population is
unknown. Percentile values were anchored around the experimenter-imposed performance
criterion at the 51st percentile for inclusion in the gift card raffle. The moderately negative
feedback condition at the 45th percentile was arbitrarily chosen to be a few points below the
cutoff for inclusion in the raffle. The extremely negative (28th percentile) and positive (73rd
percentile) feedback conditions were chosen to span a wide range of the valence distribution while being approximately equidistant from the performance criterion. However, it is unknown
if participants perceived these values to be psychologically equidistant. Additional research on
the valence distribution and the feedback recipient’s interpretation of the feedback values would
strengthen this study’s feedback manipulation and interpretation of the results.
Another limitation to this study is the lack of a control group. Many significant
differences were found between feedback conditions in experienced negative shocks, changes in
goal percentile, accuracy, and speed measures from Time 1 to Time 2, and changes in the stated
plans, goals, and values. All participants received false feedback, so it is unknown if the
provision of feedback itself was the cause of the changes in the groups, or if the level of the
feedback percentile received (28th vs. 45th vs. 73rd) independently produced the changes. The
simplest explanation may be that the provision of feedback itself caused the differences from Time 1 to Time 2 and that the feedback levels had no independent effect. The true relationship between
feedback and these image theory measures would be better understood by comparing the effects
to a control group.
Consistencies across the feedback conditions, such as all three feedback conditions
reporting significant changes from Time 1 to Time 2 in the same plan to achieve the goal, further
confound the role of feedback itself compared to the levels of the feedback provided. More
specifically, an alternative explanation for the pattern of results in the plans to achieve the goal
may be that there are significant practice effects for accuracy on the task. If participants believe
their accuracy cannot improve on later trials, they may shift their attention to “speed” or another
plan (like “reducing distractions”) simply because focusing on accuracy alone is no longer a
viable option to attain one’s goals. A control group that did not receive any feedback throughout
the course of the study may help to clarify these relationships.
The online format presented both benefits and challenges for data collection.
Features of the survey software, such as time (in seconds) spent on each page, resulted in better
data cleaning by eliminating those participants who were distracted for extended periods of time
on the survey. Participants who were extremely fast on the Number Reduction Task, signifying
low effort in conjunction with their low accuracy scores, were also easily detected and removed
from the data set. The final sample used to analyze the hypotheses contained less error as a result of removing cases flagged by the online survey software’s timing features. The online
survey interface was ideal for implementing false feedback as well, such that participants were
led to believe the survey software scored their actual performance and could provide immediate
and legitimate feedback on which to base decisions at the second trial. This increased the credibility of one’s feedback percentile, thereby increasing the likelihood that it would be believed to be genuine and producing a stronger manipulation of feedback condition. However, one
particular concern with data quality using the online survey method is the inability to monitor
participant effort, in comparison to administering the survey with paper and pencil in the
presence of an experimenter. Three of the n = 505 complete cases were deleted for obvious low effort, identified because the participant typed nonsense words into the open-ended response fields to advance the survey; more participants may have given low effort on the task without being flagged in that particular manner.
Finally, it could be argued that the most important limitation of the present study is image
theory itself. Although its very name implies otherwise, image theory lacks the critical elements
of good theory. Namely, none of the known image theory research has been integrated into a
comprehensive model of testable hypotheses and causal pathways between critical elements of
the decision making process. The tenets of image theory are therefore unfalsifiable, restricting its
usefulness to science or practice. Due to these limitations and image theory’s misleading name,
consumers of image theory research may benefit from thinking of the value, trajectory, and
strategy images as part of an image “heuristic” rather than an image “theory.” In response to this
known limitation, the objective of this study was to identify and merge concepts from various
image theory studies into a meaningful operationalization of a progress decision within the
laboratory setting. Within this decision, standardized feedback was found to elicit a shock that
initiates the decision making process and results in changes to the value, trajectory, and strategy
images. This study contributes to image theory literature by providing an experimental
manipulation of image theory’s concepts; however, numerous replications, extensions, and
refinements of this and other image theory studies are essential before image theory can be
considered a good theory of decision making that will significantly influence science and
practice.
Conclusions
This study provides a significant first step into understanding the role of feedback on
shocks and image violations. False feedback delivered under realistic conditions produced some of the hypothesized behavioral changes in participant responses, but also many inconsistencies within
the feedback conditions. Future research will need to be conducted to understand how feedback
affects progress decisions and how image theory contributes to decision making research.
REFERENCES
Abelson, M. A. (1987). Examination of avoidable and unavoidable turnover. Journal of Applied
Psychology, 72, 382-386.
Beach, L. R. (1996). Decision making in the workplace: A unified perspective. Hillsdale, NJ:
Lawrence Erlbaum.
Beach, L. R. (1997). The psychology of decision making: People in organizations. Thousand
Oaks, CA: Sage Publications, Inc.
Beach, L. R., & Connolly, T. (2005). The psychology of decision making: People in organizations (2nd ed.). Thousand Oaks, CA: Sage Publications, Inc.
Beach, L. R., & Mitchell, T. R. (1998). A contingency model for the selection of decision
strategies. In L. R. Beach (Ed.), Image theory: Theoretical and empirical foundations (pp.
145-158). Mahwah, NJ: Lawrence Erlbaum.
Bell, B. G., Gardner, M. K., & Woltz, D. J. (1997). Individual differences in undetected errors in
skilled cognitive performance. Learning and Individual Differences, 9, 43-61.
Benson, L., & Beach, L. R. (1996). The effects of time constraints on the prechoice screening of
decision options. Organizational Behavior and Human Decision Processes, 67, 222-228.
Campion, M. A., & Lord, R. G. (1982). A control systems conceptualization of the goal-setting
and changing process. Organizational Behavior & Human Performance, 30, 265-287.
Cook, T. D., Campbell, D. T., & Perrachio, L. (1990). Quasi experimentation. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (2nd ed., Vol. 1). Palo Alto, CA: Consulting Psychologists Press.
Dalton, D. R., Krackhardt, D. M., & Porter, L. W. (1981). Functional turnover: An empirical
assessment. Journal of Applied Psychology, 66, 716-721.
Dunegan, K. J. (1995). Image theory: Testing the role of image compatibility in progress
decisions. Organizational Behavior and Human Decision Processes, 62, 79-86.
Dunegan, K. J., Duchon, D., & Ashmos, D. (1995). Image compatibility and the use of problem space information in resource allocation decisions: Testing a moderating effects model. Organizational Behavior and Human Decision Processes, 64, 31-37.
Ehrhart, K. H., & Ziegert, J. C. (2005). Why are individuals attracted to organizations? Journal
of Management, 31, 901-919.
Feldman, D. C. (1994). The decision to retire early: A review and conceptualization. The
Academy of Management Review, 19, 285-311.
Fitzgerald, S., Dimitrov, D., & Rumrill, P. (2001). The basics of nonparametric statistics. Work,
16, 287-292.
Holtom, B. C., Mitchell, T. R., Lee, T. W., & Inderrieden, E. J. (2005). Shocks as causes of
turnover: What they are and how organizations can manage them. Human Resource
Management, 44, 337-352.
Ilgen, D. R. (1971). Satisfaction with performance as a function of the initial level of expected
performance and the deviation from expectations. Organizational Behavior & Human
Performance, 6, 345-361.
Ilgen, D. R., Fisher, C. D., & Taylor, M. S. (1979). Consequences of individual feedback on
behavior in organizations. Journal of Applied Psychology, 64, 349-371.
Ilgen, D. R., & Hamstra, B. W. (1972). Performance satisfaction as a function of the difference
between expected and reported performance at five levels of reported performance.
Organizational Behavior & Human Performance, 7, 359-370.
Kluger, A. N., & DeNisi, A. (1996). Effects of feedback intervention on performance: A
historical review, a meta-analysis, and a preliminary feedback intervention theory.
Psychological Bulletin, 119, 254-284.
Lee, T. W., & Mitchell, T. R. (1994). An alternative approach: The unfolding model of voluntary
employee turnover. The Academy of Management Review, 19, 51-89.
Lee, T. W., Mitchell, T. R., Holtom, B. C., McDaniel, L. S., & Hill, J. W. (1999). The unfolding
model of voluntary turnover: A replication and extension. Academy of Management
Journal, 42, 450-462.
Lee, T. W., Mitchell, T. R., Wise, L., & Fireman, S. (1996). An unfolding model of voluntary
employee turnover. Academy of Management Journal, 39, 5-36.
Mikulincer, M. (1994). Human learned helplessness: A coping perspective. New York, NY:
Plenum Press.
Morrell, K., Loan-Clarke, J., & Wilkinson, A. (2004). The role of shocks in employee turnover.
British Journal of Management, 15, 335-349.
Nelson, K. A. (2004). Consumer decision making and image theory: Understanding value-laden
decisions. Journal of Consumer Psychology, 14, 28-40.
Pesta, B. J., Kass, D. S., & Dunegan, K. J. (2005). Image theory and the appraisal of employee performance: To screen or not to screen? Journal of Business and Psychology, 19, 341-360.
Ryan, A. M., Sacco, J. M., McFarland, L. A., & Kriska, S. D. (2000). Applicant self-selection:
Correlates of withdrawal from a multiple hurdle process. Journal of Applied Psychology,
85, 163-179.
Schepers, D. H., & Beach, L. R. (1998). An image theory view of worker motivation. In L. R.
Beach (Ed.), Image theory: Theoretical and empirical foundations (pp. 125-131).
Mahwah, NJ: Lawrence Erlbaum.
Smith, E. N. (2009). Longitudinal accounts of help-seeking behavior: An image theory
alternative (Unpublished master’s thesis). Bowling Green State University, Bowling
Green, OH.
Table 1
Descriptive Statistics and Pearson Correlations for Variables at Time 1 and Time 2

Variable                                                 Min     Max       M      SD
1.  Time 1 Goal percentile                                 1      99    64.33   18.46
2.  Time 1 Accuracy confidence                             1       5     3.20    1.01
3.  Time 1 Accuracy comparison to others                   1       7     4.58    1.21
4.  Time 1 Speed confidence                                1       5     2.78     .88
5.  Time 1 Speed comparison to others                      1       7     4.00    1.17
6.  Image theory – expectedness                            1       5     2.42    1.14
7.  Image theory – positivity                              1       5     2.60    1.35
8.  Image theory – personal issues                         1       5     2.26    1.12
9.  Image theory – task issues                             1       5     1.81    1.01
10. Accuracy forecast                                      1       3     2.15     .67
11. Speed forecast                                         1       3     2.30     .71
12. Percentile forecast                                    2      99    52.18   18.96
13. Time 2 Goal percentile                                 1      99    55.72   18.23
14. Time 2 Accuracy confidence                             1       5     2.90     .99
15. Time 2 Accuracy comparison to others                   1       7     4.14    1.19
16. Time 2 Speed confidence                                1       5     2.55     .92
17. Time 2 Speed comparison to others                      1       7     3.75    1.18
18. Number Reduction Task – practice correct %           0.0     100    90.86   17.03
19. Number Reduction Task – trial 1 correct %            0.0     100    78.93   27.97
20. Number Reduction Task – trial 2 correct %            0.0     100    81.20   22.72
21. Number Reduction Task – trial 1 time (s per item)   0.97   29.03     8.28    4.01
22. Number Reduction Task – trial 2 time (s per item)   0.95   59.85     6.16    3.75

Pearson correlations (each variable with all higher-numbered variables, in order):
1 with 2-22:    .44  .61  .41  .53 -.06 -.29 -.01 -.04  .10  .07  .31  .23  .24  .34  .10  .23  .23  .37  .31  .08 -.05
2 with 3-22:    .67  .60  .56 -.07 -.05  .05 -.07  .12  .18  .27  .18  .49  .42  .25  .30  .30  .45  .43  .05 -.00
3 with 4-22:    .58  .73 -.05 -.18  .00 -.08  .14  .20  .30  .20  .42  .51  .18  .32  .33  .45  .39  .02 -.03
4 with 5-22:    .74 -.03 -.09 -.07 -.03  .17  .18  .24  .18  .39  .40  .42  .43  .21  .27  .24 -.01 -.04
5 with 6-22:   -.00 -.17 -.06 -.04  .16  .16  .24  .17  .36  .47  .38  .51  .23  .31  .27 -.04 -.07
6 with 7-22:    .31  .10  .08 -.08  .01  .15  .18  .08  .10  .14  .20  .06 -.00  .04  .12  .04
7 with 8-22:   -.05 -.01 -.07 -.11  .35  .50  .15  .20  .28  .33 -.09 -.12 -.12  .02  .04
8 with 9-22:    .24 -.10  .07 -.05 -.08  .11 -.04 -.04 -.05  .02  .02  .04  .08 -.01
9 with 10-22:   .03  .02  .03 -.01 -.06 -.05  .08  .06 -.06 -.16 -.11 -.02 -.03
10 with 11-22:  .21  .25  .11  .14  .20  .10  .14  .15  .13  .07 -.02  .00
11 with 12-22:  .19  .05  .18  .27  .18  .24  .14  .20  .17 -.04 -.09
12 with 13-22:  .75  .38  .61  .37  .54  .15  .24  .20  .09  .02
13 with 14-22:  .27  .54  .33  .53  .10  .16  .14  .12  .06
14 with 15-22:  .59  .48  .43  .18  .36  .32  .03 -.01
15 with 16-22:  .43  .64  .19  .35  .31  .03  .00
16 with 17-22:  .74 -.09  .01  .01 -.09 -.06
17 with 18-22:  .06  .12  .12 -.01 -.03
18 with 19-22:  .49  .48  .18  .12
19 with 20-22:  .87  .20  .10
20 with 21-22:  .24  .17
21 with 22:     .54

Note. Correlations ≥ .10 are significant at p < .05. Correlations ≥ .13 are significant at p < .01.
Table 2
Mean Goal Percentiles of Participants in Different Feedback Conditions

                               Time 1              Time 2
Feedback Condition       n      M       SD        M        SD
All Participants        434   64.33   18.46     55.72    18.23
28th Percentile         145   65.75   18.25     40.62a   15.47
45th Percentile         137   62.65   19.21     51.37a    9.40
73rd Percentile         152   64.49   17.97     74.06a    8.42

a Significant (p < .001) change from Time 1 to Time 2, within feedback condition
Table 3
Analysis of Variance Summary for Time 1 Number Reduction Task Accuracy Percentage by Feedback Condition

                 Sum of Squares    df    Mean Square      F
Between Groups          .107        2        .054       0.684
Within Groups         33.774      431        .078
Total                 33.881      433

p > .05

Table 3a
Descriptive Statistics of Time 1 Number Reduction Task Accuracy Percentage by Feedback Condition

Feedback Condition      n       M         SD
28th Percentile        145    77.13%    28.71%
45th Percentile        137    78.64%    27.40%
73rd Percentile        152    80.90%    27.83%

Table 3b
Multiple Comparisons of Time 1 Number Reduction Task Accuracy Percentage by Feedback Condition

Feedback Condition    Comparison          Mean Difference    Std. Error    Sig.
28th Percentile       45th percentile          -.015            .033      1.000
                      73rd percentile          -.038            .033       .739
45th Percentile       28th percentile           .015            .033      1.000
                      73rd percentile          -.023            .033      1.000
73rd Percentile       28th percentile           .038            .033       .739
                      45th percentile           .023            .033      1.000

p > .05 with Bonferroni correction
Table 4
Analysis of Variance Summary for Time 1 Number Reduction Task Time (s Per Item) by Feedback Condition

                 Sum of Squares    df    Mean Square      F
Between Groups        48.967        2      24.484       1.524
Within Groups       6922.202      431      16.061
Total               6971.169      433

p > .05

Table 4a
Descriptive Statistics of Time 1 Number Reduction Task Time (s Per Item) by Feedback Condition

Feedback Condition      n      M       SD
28th Percentile        145    7.81    3.90
45th Percentile        137    8.54    4.28
73rd Percentile        152    8.51    3.85

Table 4b
Multiple Comparisons of Time 1 Number Reduction Task Time (s Per Item) by Feedback Condition

Feedback Condition    Comparison          Mean Difference    Std. Error    Sig.
28th Percentile       45th percentile          -.727            .477       .385
                      73rd percentile          -.698            .465       .403
45th Percentile       28th percentile           .727            .477       .385
                      73rd percentile           .030            .472      1.000
73rd Percentile       28th percentile           .698            .465       .403
                      45th percentile          -.030            .472      1.000

p > .05 with Bonferroni correction
Table 5
Analysis of Variance Summary for Manipulation Check of Accuracy by Feedback Condition

                 Sum of Squares    df    Mean Square      F        η2
Between Groups        12.018        2       6.009      6.801**    .030
Within Groups        380.796      431        .884
Total                392.813      433

**p < 0.01

Table 5a
Descriptive Statistics of Manipulation Check of Accuracy by Feedback Condition

Feedback Condition      n      M       SD
28th Percentile        145    3.26    1.06
45th Percentile        137    3.32     .91
73rd Percentile        152    3.63     .84

Table 5b
Multiple Comparisons of Manipulation Check of Accuracy Between Feedback Conditions

Feedback Condition    Comparison          Mean Difference    Std. Error    Sig.
28th Percentile       45th percentile          -.066            .112      1.000
                      73rd percentile          -.376*           .109       .002
45th Percentile       28th percentile           .066            .112      1.000
                      73rd percentile          -.310*           .111       .016
73rd Percentile       28th percentile           .376*           .109       .002
                      45th percentile           .310*           .111       .016

*p < .05 with Bonferroni correction
Table 6
Analysis of Variance Summary for Manipulation Check of Speed by Feedback Condition

                 Sum of Squares    df    Mean Square      F          η2
Between Groups        16.474        2       8.237      10.254***    .045
Within Groups        346.199      431        .803
Total                362.673      433

***p < 0.001

Table 6a
Descriptive Statistics of Manipulation Check of Speed by Feedback Condition

Feedback Condition      n      M       SD
28th Percentile        145    2.72    1.02
45th Percentile        137    2.91     .80
73rd Percentile        152    3.19     .85

Table 6b
Multiple Comparisons of Manipulation Check of Speed Between Feedback Conditions

Feedback Condition    Comparison          Mean Difference    Std. Error    Sig.
28th Percentile       45th percentile          -.181            .107       .273
                      73rd percentile          -.467*           .104       .000
45th Percentile       28th percentile           .181            .107       .273
                      73rd percentile          -.286*           .106       .021
73rd Percentile       28th percentile           .467*           .104       .000
                      45th percentile           .286*           .106       .021

*p < .05 with Bonferroni correction
Table 7
Analysis of Variance Summary for Manipulation Check of Letter vs. Number by Feedback Condition

                 Sum of Squares    df    Mean Square      F        η2
Between Groups         6.971        2       3.485      3.669*     .017
Within Groups        409.472      431        .950
Total                416.442      433

*p < 0.05

Table 7a
Descriptive Statistics of Manipulation Check of Letter vs. Number by Feedback Condition

Feedback Condition      n      M       SD
28th Percentile        145    2.43    1.01
45th Percentile        137    2.67    1.02
73rd Percentile        152    2.38     .90

Table 7b
Multiple Comparisons of Manipulation Check of Letter vs. Number Between Feedback Conditions

Feedback Condition    Comparison          Mean Difference    Std. Error    Sig.
28th Percentile       45th percentile          -.237            .116       .126
                      73rd percentile          -.059            .113      1.000
45th Percentile       28th percentile           .237            .116       .126
                      73rd percentile           .297*           .115       .030
73rd Percentile       28th percentile          -.059            .113      1.000
                      45th percentile          -.297*           .115       .030

*p < .05 with Bonferroni correction
Table 8
Analysis of Variance Summary for Expectedness by Feedback Condition

                 Sum of Squares    df    Mean Square      F        η2
Between Groups        54.417        2      27.30       22.94***   0.01
Within Groups        511.260      431       1.186
Total                565.677      433

***p < 0.001

Table 8a
Descriptive Statistics of Expectedness by Feedback Condition

Feedback Condition      n      M       SD
28th Percentile        145    2.05    1.03
45th Percentile        137    2.30    1.05
73rd Percentile        152    2.88    1.18

Table 8b
Multiple Comparisons of Expectedness Between Feedback Conditions

Feedback Condition    Comparison          Mean Difference    Std. Error    Sig.
28th Percentile       45th percentile          -.251            .130       .161
                      73rd percentile          -.833*           .126       .000
45th Percentile       28th percentile           .251            .130       .161
                      73rd percentile          -.582*           .128       .000
73rd Percentile       28th percentile           .833*           .126       .000
                      45th percentile           .582*           .128       .000

*p < .05 with Bonferroni correction
Table 9
Analysis of Variance Summary for Positivity by Feedback Condition

                 Sum of Squares    df    Mean Square      F          η2
Between Groups       411.406        2     205.71      238.05***     0.53
Within Groups        372.428      431        .864
Total                783.834      433

***p < 0.001

Table 9a
Descriptive Statistics of Positivity by Feedback Condition

Feedback Condition      n      M       SD
28th Percentile        145    1.66    .810
45th Percentile        137    2.16    .909
73rd Percentile        152    3.90    1.05

Table 9b
Multiple Comparisons of Positivity Between Feedback Conditions

Feedback Condition    Comparison          Mean Difference    Std. Error    Sig.
28th Percentile       45th percentile          -.499*           .111       .000
                      73rd percentile         -2.239*           .108       .000
45th Percentile       28th percentile           .499*           .111       .000
                      73rd percentile         -1.741*           .110       .000
73rd Percentile       28th percentile          2.239*           .108       .000
                      45th percentile          1.741*           .110       .000

*p < .05 with Bonferroni correction
Table 10
Analysis of Variance Summary for Personal Issues by Feedback Condition

                 Sum of Squares    df    Mean Square      F
Between Groups         3.455        2       1.727      1.37
Within Groups        542.601      431       1.259
Total                546.055      433

p > .05

Table 10a
Descriptive Statistics of Personal Issues by Feedback Condition

Feedback Condition      n      M       SD
28th Percentile        145    2.24    1.18
45th Percentile        137    2.39    1.9
73rd Percentile        152    2.17    1.10

Table 10b
Multiple Comparisons of Personal Issues Between Feedback Conditions

Feedback Condition    Comparison          Mean Difference    Std. Error    Sig.
28th Percentile       45th percentile          -.145            .134       .831
                      73rd percentile           .070            .130      1.000
45th Percentile       28th percentile           .145            .134       .831
                      73rd percentile           .216            .132       .310
73rd Percentile       28th percentile          -.070            .130      1.000
                      45th percentile          -.216            .132       .310

p > .05 with Bonferroni correction
Table 11
Analysis of Variance Summary for Task Issues by Feedback Condition

                 Sum of Squares    df    Mean Square      F
Between Groups         1.515        2        .758       .740
Within Groups        440.992      431       1.023
Total                442.507      433

p > .05

Table 11a
Descriptive Statistics of Task Issues by Feedback Condition

Feedback Condition      n      M       SD
28th Percentile        145    1.77    .972
45th Percentile        137    1.90    1.01
73rd Percentile        152    1.78    1.05

Table 11b
Multiple Comparisons of Task Issues Between Feedback Conditions

Feedback Condition    Comparison          Mean Difference    Std. Error    Sig.
28th Percentile       45th percentile          -.132            .121       .819
                      73rd percentile          -.011            .117      1.000
45th Percentile       28th percentile           .132            .121       .819
                      73rd percentile           .121            .119       .926
73rd Percentile       28th percentile           .011            .117      1.000
                      45th percentile          -.121            .119       .926

p > .05 with Bonferroni correction
Table 12
Analysis of Variance Summary for Accuracy Forecast by Feedback Condition

                 Sum of Squares    df    Mean Square      F        η2
Between Groups         3.417        2       1.709      3.813*     0.02
Within Groups        193.145      431        .448
Total                196.562      433

*p < 0.05

Table 12a
Descriptive Statistics of Accuracy Forecast by Feedback Condition

Feedback Condition      n      M      SD
28th Percentile        145    2.08    .68
45th Percentile        137    2.28    .64
73rd Percentile        152    2.10    .69

Table 12b
Multiple Comparisons of Accuracy Forecast Between Feedback Conditions

Feedback Condition    Comparison          Mean Difference    Std. Error    Sig.
28th Percentile       45th percentile          -.202*           .080       .036
                      73rd percentile          -.023            .078      1.000
45th Percentile       28th percentile           .202*           .080       .036
                      73rd percentile           .179            .079       .072
73rd Percentile       28th percentile           .023            .078      1.000
                      45th percentile          -.179            .079       .072

*p < .05 with Bonferroni correction
89
Table 13.
Analysis of Variance Summary for Speed Forecast by Feedback Condition
                   Sum of Squares   df    Mean Square   F
Between Groups     .555             2     .277          .546
Within Groups      218.904          431   .508
Total              219.459          433
p > .05

Table 13a.
Descriptive Statistics of Speed Forecast by Feedback Condition
Feedback Condition    n      M      SD
28th Percentile       145    2.31   .77
45th Percentile       137    2.34   .67
73rd Percentile       152    2.26   .70

Table 13b.
Multiple Comparisons of Speed Forecast Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     -.033             .085         1.000
                    73rd percentile     .054              .083         1.000
45th Percentile     28th percentile     .033              .085         1.000
                    73rd percentile     .086              .084         .911
73rd Percentile     28th percentile     -.054             .083         1.000
                    45th percentile     -.086             .084         .911
p > .05 with Bonferroni Correction
Table 14.
Analysis of Variance Summary for Percentile Forecast by Feedback Condition
                   Sum of Squares   df    Mean Square   F           η2
Between Groups     60005.175        2     30002.588     135.29***   0.39
Within Groups      95579.445        431   221.762
Total              155584.620       433
***p < 0.001

Table 14a.
Descriptive Statistics of Percentile Forecast by Feedback Condition
Feedback Condition    n      M       SD
28th Percentile       145    38.69   16.83
45th Percentile       137    50.11   12.16
73rd Percentile       152    66.92   15.15

Table 14b.
Multiple Comparisons of Percentile Forecast Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     -11.420*          1.77         .000
                    73rd percentile     -28.231*          1.73         .000
45th Percentile     28th percentile     11.420*           1.77         .000
                    73rd percentile     -16.812*          1.75         .000
73rd Percentile     28th percentile     28.231*           1.73         .000
                    45th percentile     16.812*           1.75         .000
*p < .05 with Bonferroni Correction
Table 15.
Descriptive Statistics for Goal Percentiles, Accuracy, and Speed Variables Across Time by Feedback Condition
                                                               Time 1           Time 2           Average
Variable                        Feedback Condition   n         M      SD        M      SD        M       S.E.
Goal percentile                 28th Percentile      145       65.75  18.25     40.62  15.47     53.19   1.05
                                45th Percentile      137       62.65  19.21     51.37  9.41      57.01   1.08
                                73rd Percentile      152       64.49  17.97     74.06  8.42      69.28   1.02
                                All                  434       64.33  18.46     55.72  18.23
Accuracy confidence             28th Percentile      145       3.13   1.05      2.60   1.03      2.87    .07
                                45th Percentile      137       3.16   1.01      2.91   .95       3.04    .07
                                73rd Percentile      152       3.30   .98       3.18   .90       3.24    .07
                                All                  434       3.20   1.01      2.90   .99
Accuracy comparison to others   28th Percentile      145       4.58   1.20      3.63   1.24      4.10    .08
                                45th Percentile      137       4.49   1.30      3.94   .91       4.22    .09
                                73rd Percentile      152       4.66   1.13      4.82   1.06      4.74    .08
                                All                  434       4.58   1.21      4.14   1.19
Speed confidence                28th Percentile      145       2.75   .87       2.25   .92       2.50    .06
                                45th Percentile      137       2.76   .89       2.50   .86       2.63    .06
                                73rd Percentile      152       2.82   .89       2.88   .88       2.85    .06
                                All                  434       2.78   .88       2.55   .92
Speed comparison to others      28th Percentile      145       4.07   1.19      3.23   1.14      3.65    .08
                                45th Percentile      137       3.90   1.11      3.55   .95       3.73    .09
                                73rd Percentile      152       4.02   1.20      4.41   1.10      4.22    .08
                                All                  434       4.00   1.17      3.75   1.18
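The cell means and standard deviations laid out in Table 15 can be tabulated directly from long-format data. The sketch below is illustrative only; the data frame, column names, and values are hypothetical, and pandas is assumed purely for convenience.

```python
# Minimal sketch (hypothetical data and column names): per-condition descriptive
# statistics by time point, in the layout of Table 15.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "condition": rng.choice(["28th", "45th", "73rd"], size=868),
    "time": np.tile(["Time 1", "Time 2"], 434),
    "goal_percentile": rng.normal(60, 18, size=868),
})

# Mean and SD of the outcome for each feedback condition at each time point.
summary = (df.groupby(["condition", "time"])["goal_percentile"]
             .agg(["count", "mean", "std"])
             .round(2))
print(summary)
```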
Table 16.
Mixed Measures Analysis of Variance Summary for Tests of Within-Subjects Effects of Goal Percentile
                              Sum of Squares   df    Mean Square   F           η2
Goal Percentile               17350.81         1     17350.81      110.57***   0.20
Goal × Feedback Condition     45385.83         2     22692.92      144.61***   0.40
Error (goal percentile)       67633.88         431   156.92
***p < 0.001

Table 16a.
Mixed Measures Analysis of Variance Summary for Tests of Between-Subjects Effects
                      Sum of Squares   df    Mean Square   F            η2
Intercept             3100820.69       1     3100820.69    9792.54***   .96
Feedback Condition    42076.62         2     21038.31      66.44***     .24
Error                 136476.72        431   316.65
***p < 0.001

Table 16b.
Pairwise Comparisons of Goal Percentile Across Time
Within-subjects Measure        Mean Difference   Std. Error   Sig.
Time 1        Time 2           8.95*             .85          .000
Time 2        Time 1           -8.95*            .85          .000
*p < .05 with Least Significant Difference (equivalent to no adjustments)

Table 16c.
Multiple Comparisons of Goal Percentile Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     -3.82*            1.50         .033
                    73rd percentile     -16.09*           1.46         .000
45th Percentile     28th percentile     3.82*             1.50         .033
                    73rd percentile     -12.27*           1.48         .000
73rd Percentile     28th percentile     16.09*            1.46         .000
                    45th percentile     12.27*            1.48         .000
*p < .05 with Bonferroni Correction
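Tables 16 through 20 summarize 2 (time) × 3 (feedback condition) mixed-measures ANOVAs. The sketch below shows one way such a model could be fit, assuming hypothetical long-format data and the pingouin package; it is not the software or script used to produce the reported values.

```python
# Minimal sketch (hypothetical data, assumed pingouin package): a 2 (time) x 3
# (feedback condition) mixed-measures ANOVA like those summarized in Tables 16-20.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
conditions = ["28th", "45th", "73rd"]
rows = []
for subject in range(434):
    condition = conditions[subject % 3]
    for time in ("Time 1", "Time 2"):
        rows.append({
            "subject": subject,
            "condition": condition,
            "time": time,
            "goal_percentile": rng.normal(60, 15),   # placeholder outcome
        })
df = pd.DataFrame(rows)

# Within-subjects factor: time; between-subjects factor: feedback condition.
aov = pg.mixed_anova(data=df, dv="goal_percentile", within="time",
                     subject="subject", between="condition")
print(aov.round(3))
```

The output includes the time × condition interaction term and partial eta-squared, paralleling the Goal × Feedback Condition rows in the tables above.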
Table 17.
Mixed Measures Analysis of Variance Summary for Tests of Within-Subjects Effects of Accuracy Confidence
                                             Sum of Squares   df    Mean Square   F          η2
Accuracy confidence                          19.39            1     19.39         38.74***   .08
Accuracy confidence × Feedback Condition     6.56             2     3.28          6.55**     .03
Error (accuracy confidence)                  215.77           431   .50
**p < 0.01, ***p < 0.001

Table 17a.
Mixed Measures Analysis of Variance Summary for Tests of Between-Subjects Effects
                      Sum of Squares   df    Mean Square   F            η2
Intercept             8040.40          1     8040.40       5546.10***   .93
Feedback Condition    20.53            2     10.27         7.08**       .03
Error                 624.84           431   1.45
**p < 0.01, ***p < 0.001

Table 17b.
Pairwise Comparisons of Accuracy Confidence Across Time
Within-subjects Measure        Mean Difference   Std. Error   Sig.
Time 1        Time 2           .299*             .058         .000
Time 2        Time 1           -.299*            .048         .000
*p < .05 with Least Significant Difference (equivalent to no adjustments)

Table 17c.
Multiple Comparisons of Accuracy Confidence Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     -.17              .101         .278
                    73rd percentile     -.37*             .099         .001
45th Percentile     28th percentile     .17               .101         .278
                    73rd percentile     -.20              .100         .139
73rd Percentile     28th percentile     .37*              .099         .001
                    45th percentile     .20               .100         .139
*p < .05 with Bonferroni Correction
Table 18.
Mixed Measures Analysis of Variance Summary for Tests of Within-Subjects Effects of Accuracy Comparison
                                             Sum of Squares   df    Mean Square   F          η2
Accuracy comparison                          43.72            1     43.72         73.31***   .15
Accuracy comparison × Feedback Condition     46.35            2     23.18         38.86***   .15
Error (accuracy comparison)                  257.06           431   .606
***p < 0.001

Table 18a.
Mixed Measures Analysis of Variance Summary for Tests of Between-Subjects Effects
                      Sum of Squares   df    Mean Square   F            η2
Intercept             16417.52         1     16417.52      8108.46***   .95
Feedback Condition    68.75            2     34.37         16.98***     .07
Error                 872.66           431   2.03
***p < 0.001

Table 18b.
Pairwise Comparisons of Accuracy Comparison to Peers Across Time
Within-subjects Measure        Mean Difference   Std. Error   Sig.
Time 1        Time 2           .449*             .052         .000
Time 2        Time 1           -.449             .052         .000
*p < .05 with Least Significant Difference (equivalent to no adjustments)

Table 18c.
Multiple Comparisons of Accuracy Comparison to Peers Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     -.11              .120         1.000
                    73rd percentile     -.64*             .117         .000
45th Percentile     28th percentile     .11               .120         1.000
                    73rd percentile     -.52              .119         .000
73rd Percentile     28th percentile     .64*              .117         .000
                    45th percentile     .52*              .119         .000
*p < .05 with Bonferroni Correction
Table 19.
Mixed Measures Analysis of Variance Summary for Tests of Within-Subjects Effects of Speed Confidence
                                           Sum of Squares   df    Mean Square   F          η2
Speed confidence                           11.78            1     11.78         26.26***   .06
Speed confidence × Feedback Condition      11.82            2     5.91          13.17***   .06
Error (speed confidence)                   193.39           431   .45
***p < 0.001

Table 19a.
Mixed Measures Analysis of Variance Summary for Tests of Between-Subjects Effects
                      Sum of Squares   df    Mean Square   F            η2
Intercept             6135.72          1     6135.72       5485.26***   .93
Feedback Condition    18.83            2     9.42          8.42***      .04
Error                 482.11           431   1.12
***p < 0.001

Table 19b.
Pairwise Comparisons of Speed Confidence Across Time
Within-subjects Measure        Mean Difference   Std. Error   Sig.
Time 1        Time 2           .233*             .046         .000
Time 2        Time 1           -.233*            .046         .000
*p < .05 with Least Significant Difference (equivalent to no adjustments)

Table 19c.
Multiple Comparisons of Speed Confidence Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     -.13              .089         .423
                    73rd percentile     -.35*             .087         .000
45th Percentile     28th percentile     .13               .089         .423
                    73rd percentile     -.22*             .088         .038
73rd Percentile     28th percentile     .35*              .087         .000
                    45th percentile     .22*              .088         .038
*p < .05 with Bonferroni Correction
Table 20.
Mixed Measures Analysis of Variance Summary for Tests of Within-Subjects Effects of Speed Comparison
                                           Sum of Squares   df    Mean Square   F             η2
Speed comparison                           17350.807        1     17350.807     110.569***    0.204
Speed comparison × Feedback Condition      45385.830        2     22692.915     144.612***    0.402
Error (speed comparison)                   67633.875        431   156.923
***p < 0.001

Table 20a.
Mixed Measures Analysis of Variance Summary for Tests of Between-Subjects Effects
                      Sum of Squares   df    Mean Square    F             η2
Intercept             3100820.687      1     3100820.687    9792.540***   .958
Feedback Condition    42076.617        2     21038.308      66.440***     .236
Error                 136476.720       431   316.651
***p < 0.001

Table 20b.
Pairwise Comparisons of Speed Comparison to Peers Across Time
Within-subjects Measure        Mean Difference   Std. Error   Sig.
Time 1        Time 2           8.95*             .85          .000
Time 2        Time 1           -8.95*            .85          .000
*p < .05 with Least Significant Difference (equivalent to no adjustments)

Table 20c.
Multiple Comparisons of Speed Comparison to Peers Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     -3.82*            1.50         .033
                    73rd percentile     -16.09*           1.46         .000
45th Percentile     28th percentile     3.82*             1.50         .033
                    73rd percentile     -12.27*           1.48         .000
73rd Percentile     28th percentile     16.09*            1.46         .000
                    45th percentile     12.27*            1.48         .000
*p < .05 with Bonferroni Correction
Table 21.
Analysis of Variance Summary for Time 1 Goal Percentile by Feedback Condition
                   Sum of Squares   df    Mean Square   F
Between Groups     683.983          2     341.992       1.003
Within Groups      146916.238       431   340.873
Total              147600.221       433
p > .05

Table 21a.
Descriptive Statistics of Time 1 Goal Percentile by Feedback Condition
Feedback Condition    n      M       SD
28th Percentile       145    65.75   18.25
45th Percentile       137    62.65   19.21
73rd Percentile       152    64.49   17.97

Table 21b.
Multiple Comparisons of Time 1 Goal Percentile Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     3.102             2.200        .478
                    73rd percentile     1.258             2.143        1.000
45th Percentile     28th percentile     -3.102            2.200        .478
                    73rd percentile     -1.844            2.175        1.000
73rd Percentile     28th percentile     -1.258            2.143        1.000
                    45th percentile     1.844             2.175        1.000
p > .05 with Bonferroni Correction
Table 22.
Analysis of Variance Summary for Time 1 Accuracy Confidence by Feedback Condition
                   Sum of Squares   df    Mean Square   F
Between Groups     2.303            2     1.152         1.121
Within Groups      442.655          431   1.027
Total              444.959          433
p > .05

Table 22a.
Descriptive Statistics of Time 1 Accuracy Confidence by Feedback Condition
Feedback Condition    n      M      SD
28th Percentile       145    3.13   1.05
45th Percentile       137    3.16   1.01
73rd Percentile       152    3.30   .98

Table 22b.
Multiple Comparisons of Time 1 Accuracy Confidence Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     -.030             .121         1.000
                    73rd percentile     -.165             .118         .484
45th Percentile     28th percentile     .030              .121         1.000
                    73rd percentile     -.135             .119         .771
73rd Percentile     28th percentile     .165              .118         .484
                    45th percentile     .135              .119         .771
p > .05 with Bonferroni Correction
Table 23.
Analysis of Variance Summary for Time 1 Accuracy Comparison to Others by Feedback Condition
                   Sum of Squares   df    Mean Square   F
Between Groups     2.218            2     1.109         .762
Within Groups      627.460          431   1.456
Total              629.677          433
p > .05

Table 23a.
Descriptive Statistics of Time 1 Accuracy Comparison to Others by Feedback Condition
Feedback Condition    n      M      SD
28th Percentile       145    4.58   1.20
45th Percentile       137    4.49   1.30
73rd Percentile       152    4.66   1.13

Table 23b.
Multiple Comparisons of Time 1 Accuracy Comparison to Others Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     .090              .144         1.000
                    73rd percentile     -.085             .140         1.000
45th Percentile     28th percentile     -.090             .144         1.000
                    73rd percentile     -.175             .142         .653
73rd Percentile     28th percentile     .085              .140         1.000
                    45th percentile     .175              .142         .653
p > .05 with Bonferroni Correction
Table 24.
Analysis of Variance Summary for Time 1 Speed Confidence by Feedback Condition
                   Sum of Squares   df    Mean Square   F
Between Groups     .448             2     .224          .287
Within Groups      336.317          431   .780
Total              336.765          433
p > .05

Table 24a.
Descriptive Statistics of Time 1 Speed Confidence by Feedback Condition
Feedback Condition    n      M      SD
28th Percentile       145    2.75   .87
45th Percentile       137    2.76   .89
73rd Percentile       152    2.82   .89

Table 24b.
Multiple Comparisons of Time 1 Speed Confidence Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     -.007             .105         1.000
                    73rd percentile     -.071             .103         1.000
45th Percentile     28th percentile     .007              .105         1.000
                    73rd percentile     -.063             .104         1.000
73rd Percentile     28th percentile     .071              .103         1.000
                    45th percentile     .063              .104         1.000
p > .05 with Bonferroni Correction
Table 25.
Analysis of Variance Summary for Time 1 Speed Comparison to Others by Feedback Condition
                   Sum of Squares   df    Mean Square   F
Between Groups     2.177            2     1.089         .794
Within Groups      590.820          431   1.371
Total              592.998          433
p > .05

Table 25a.
Descriptive Statistics of Time 1 Speed Comparison to Others by Feedback Condition
Feedback Condition    n      M      SD
28th Percentile       145    4.07   1.19
45th Percentile       137    3.90   1.11
73rd Percentile       152    4.02   1.20

Table 25b.
Multiple Comparisons of Time 1 Speed Comparison to Others Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     .171              .139         .662
                    73rd percentile     .049              .136         1.000
45th Percentile     28th percentile     -.171             .139         .662
                    73rd percentile     -.122             .138         1.000
73rd Percentile     28th percentile     -.049             .136         1.000
                    45th percentile     .122              .138         1.000
p > .05 with Bonferroni Correction
Table 26.
Analysis of Variance Summary for Time 2 Goal Percentile by Feedback Condition
                   Sum of Squares   df    Mean Square   F            η2
Between Groups     86778.463        2     43389.232     326.969***   0.60
Within Groups      57194.357        431   132.702
Total              143972.820       433
***p < 0.001

Table 26a.
Descriptive Statistics of Time 2 Goal Percentile by Feedback Condition
Feedback Condition    n      M       SD
28th Percentile       145    40.62   15.47
45th Percentile       137    51.37   9.41
73rd Percentile       152    74.06   8.42

Table 26b.
Multiple Comparisons of Time 2 Goal Percentile Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     -10.744*          1.373        .000
                    73rd percentile     -33.439*          1.337        .000
45th Percentile     28th percentile     10.744*           1.373        .000
                    73rd percentile     -22.694*          1.357        .000
73rd Percentile     28th percentile     33.439*           1.337        .000
                    45th percentile     22.694*           1.357        .000
*p < .05 with Bonferroni Correction
Table 27.
Analysis of Variance Summary for Time 2 Accuracy Confidence by Feedback Condition
                   Sum of Squares   df    Mean Square   F           η2
Between Groups     24.787           2     12.393        13.423***   0.05
Within Groups      397.953          431   .923
Total              422.740          433
***p < 0.001

Table 27a.
Descriptive Statistics of Time 2 Accuracy Confidence by Feedback Condition
Feedback Condition    n      M      SD
28th Percentile       145    2.60   1.03
45th Percentile       137    2.91   .95
73rd Percentile       152    3.18   .90

Table 27b.
Multiple Comparisons of Time 2 Accuracy Confidence Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     -.312*            .114         .020
                    73rd percentile     -.578*            .112         .000
45th Percentile     28th percentile     .312*             .114         .020
                    73rd percentile     -.265             .113         .059
73rd Percentile     28th percentile     .578*             .112         .000
                    45th percentile     .265              .113         .059
*p < .05 with Bonferroni Correction
Table 28.
Analysis of Variance Summary for Time 2 Accuracy Comparison to Others by Feedback Condition
                   Sum of Squares   df    Mean Square   F           η2
Between Groups     112.878          2     56.439        48.431***   0.18
Within Groups      502.265          431   1.165
Total              615.143          433
***p < 0.001

Table 28a.
Descriptive Statistics of Time 2 Accuracy Comparison to Others by Feedback Condition
Feedback Condition    n      M      SD
28th Percentile       145    3.63   1.24
45th Percentile       137    3.94   .91
73rd Percentile       152    4.82   1.06

Table 28b.
Multiple Comparisons of Time 2 Accuracy Comparison to Others Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     -.314*            .129         .045
                    73rd percentile     -1.188*           .125         .000
45th Percentile     28th percentile     .314*             .129         .045
                    73rd percentile     -.874*            .127         .000
73rd Percentile     28th percentile     1.188*            .125         .000
                    45th percentile     .874*             .127         .000
*p < .05 with Bonferroni Correction
Table 29.
Analysis of Variance Summary for Time 2 Speed Confidence by Feedback Condition
                   Sum of Squares   df    Mean Square   F           η2
Between Groups     30.206           2     15.103        19.192***   0.08
Within Groups      339.179          431   .787
Total              369.385          433
***p < 0.001

Table 29a.
Descriptive Statistics of Time 2 Speed Confidence by Feedback Condition
Feedback Condition    n      M      SD
28th Percentile       145    2.25   .92
45th Percentile       137    2.50   .86
73rd Percentile       152    2.88   .88

Table 29b.
Multiple Comparisons of Time 2 Speed Confidence Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     -.255*            .106         .048
                    73rd percentile     -.633*            .103         .000
45th Percentile     28th percentile     .255*             .106         .048
                    73rd percentile     -.378*            .105         .001
73rd Percentile     28th percentile     .633*             .103         .000
                    45th percentile     .378*             .105         .001
*p < .05 with Bonferroni Correction
Table 30.
Analysis of Variance Summary for Time 2 Speed Comparison to Others by Feedback Condition
                   Sum of Squares   df    Mean Square   F           η2
Between Groups     110.869          2     55.435        48.487***   0.18
Within Groups      492.755          431   1.143
Total              603.624          433
***p < 0.001

Table 30a.
Descriptive Statistics of Time 2 Speed Comparison to Others by Feedback Condition
Feedback Condition    n      M      SD
28th Percentile       145    3.23   1.14
45th Percentile       137    3.55   .95
73rd Percentile       152    4.41   1.10

Table 30b.
Multiple Comparisons of Time 2 Speed Comparison to Others Between Feedback Conditions
Feedback Condition                      Mean Difference   Std. Error   Sig.
28th Percentile     45th percentile     -.320*            .127         .037
                    73rd percentile     -1.180*           .124         .000
45th Percentile     28th percentile     .320*             .127         .037
                    73rd percentile     -.860*            .126         .000
73rd Percentile     28th percentile     1.180*            .124         .000
                    45th percentile     .860*             .126         .000
*p < .05 with Bonferroni Correction
Table 31.
McNemar’s Chi-Square Test for Correlated Proportions for Plans by Feedback Conditions
                                                               Feedback Condition
Plans                                                          28th Percentile   45th Percentile   73rd Percentile
“I am going to emphasize accuracy.”                            8.80              14.23*            14.44*
“I am going to emphasize speed.”                               27.22*            11.76*            0.25
“I am going to emphasize accuracy and speed equally.”          13.07*            19.57*            0.14
“I will not focus on anything so I don’t ‘over think’ it.”     0.36              4.84              8.33
“I will reduce distractions to increase my overall focus.”     20.57*            13.37*            16.67*
Alpha = .05 / 48 tests
*p < 0.001
Table 32.
McNemar’s Chi-Square Test for Correlated Proportions for Goals by Feedback Conditions
                                                                                  Feedback Condition
Goals                                                                             28th Percentile   45th Percentile   73rd Percentile
“I set my performance goal low on purpose so it would be easy to meet.”           2.94              4.12              1.00
“I set my performance goal at a challenging level to motivate me to perform
well.”                                                                            15.00*            4.59              0.67
“I set my performance goal where I did because achieving it would reward me
with a chance at the gift card.”                                                  6.10              1.78              0.68
“I set my performance goal without a reason in mind.”                             1.06              5.54              1.81
Alpha = .05 / 48 tests
*p < 0.001
Table 33.
McNemar’s Chi-Square Test for Correlated Proportions for Values by Feedback Conditions
                                                                                  Feedback Condition
Values                                                                            28th Percentile   45th Percentile   73rd Percentile
“I don’t care whether or not I meet my goal.”                                     3.56              1.09              4.57
“It is important to me to meet my goal.”                                          0.00              0.22              3.13
“I will be disappointed in myself if I do not meet my goal.”                      3.13              0.23              9.76
“I will be happy with my performance if I give my best effort, even if I do
not meet my goal.”                                                                15.13*            12.50*            2.29
Alpha = .05 / 48 tests
*p < 0.001
Table 34.
McNemar’s Chi-Square Test for Correlated Proportions for Most Important Plan, Goal, or Value by Feedback Conditions
                                                                                  Feedback Condition
Most Important                                                                    28th Percentile   45th Percentile   73rd Percentile
Plan: “I am going to emphasize accuracy and speed equally.”                       7.37              6.08              2.33
Goal: “I set my performance goal at a challenging level to motivate me to
perform well.”                                                                    11.66*            1.14              1.53
Value: “I will be happy with my performance if I give my best effort, even if
I do not meet my goal.”                                                           9.14              17.86*            0.36
Alpha = .05 / 48 tests
*p < 0.001
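Tables 31 through 34 use McNemar’s chi-square test for correlated proportions, which asks whether the proportion of participants selecting a given checkbox changed from the first to the second administration. As a rough illustration only (the counts below are invented, and the statsmodels package is an assumption rather than the software used for these analyses), a single test looks like this; with 48 tests, the corrected alpha of .05/48 means only p-values below roughly .001 are flagged, matching the *p < 0.001 notation above.

```python
# Minimal sketch (invented counts): McNemar's test for a change in whether one
# checkbox (e.g., a stated plan) was selected before vs. after feedback.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired 2x2 table:          Time 2 checked   Time 2 not checked
# Time 1 checked                   40                 25
# Time 1 not checked                8                 72
table = np.array([[40, 25],
                  [ 8, 72]])

result = mcnemar(table, exact=False, correction=True)   # chi-square version
print(f"chi-square = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```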
[Figure 1: Image Theory Model flowchart. Adoption Decisions — Compatibility Test between Value Image and potential goal. Progress Decisions — Compatibility Test between Trajectory Image and Strategy Image, followed by a Profitability Test (decision rule to select the best remaining alternative).]
Figure 1. Image Theory Model divided into two types of decisions, as identified by Beach and Connolly (2005). Adoption decisions require a compatibility test using the value image, while progress decisions consist of both compatibility tests and profitability tests to maximize the best outcome from the alternatives.
APPENDIX A: STIMULUS MATERIALS
Introduction
Thank you for participating in this study. The purpose of this study is to find out how people set
goals and make plans to meet those goals.
To study your goals and your plans to meet those goals, you will complete a test called the
Number Reduction Task. To do well on the task, just correctly follow the rules and work
quickly.
That is key – to do well, you need to be ACCURATE and QUICK.
Here's why…
Last year, Bowling Green State University Psyc 1010 students participated in the task you are
about to complete.
First, their responses were scored for ACCURACY. Next, their responses were scored for
SPEED.
Then, ACCURACY and SPEED were combined with a formula to create one percentile ranking
for every student. The student who was most accurate and the quickest is the top ranking student,
at the 99th percentile. The student with the second most accurate and quickest responses
according to the formula was at the 98th percentile, and so on.
This was done for every student's scores until all students were assigned a percentile for
ACCURACY and SPEED.
In this study, you will complete the same number task and be compared to last year's Psyc 1010
students.
If you score in the top 50% of all students, which is the 51st to 99th percentiles for accuracy and
speed, you will be entered into a raffle for one of four $50 gas gift cards!
Maybe you're thinking: "But you said this is a number task. What if I'm not good at math?"
No problem. Even though the task looks like a math exercise, there is no known relationship
between math skill and doing well on the task - those who do well in math have failed the task,
and those who perform poorly in math have scored very high.
So everyone has a chance of doing well and scoring in the top 50%, even if you don't like
numbers or don't excel in math!
Number Reduction Task Rules
The objective of the Number Reduction Task is to reduce each four-digit number string to a
single digit response. Nine digits, 1 through 9, are used in this test. They are combined in various
ways, based on four rules.
The rules are:
Same rule: If two digits are the same, the answer is the same digit.
Examples: 77 = 7, 22 = 2
Contiguous rule: If two digits begin an ascending or descending series, the answer is the next
digit in that series.
Examples: 67 = 8, 54 = 3
Midpoint rule: If two digits differ by two, the answer is the digit midway between them.
Examples: 46 = 5, 31 = 2
Last rule: If two digits differ by more than two, the answer is the latter of the two digits.
Examples: 27 = 7, 85 = 5
Example of Rules
Remember the rules:
Same rule
Contiguous rule
Midpoint rule
Last rule
Apply each rule to pairs of digits, from left to right. Each rule reduces two digits to a single digit
answer. This digit becomes the first digit in the next pair, and another rule is applied to this pair
of digits, from left to right. Continue this pattern until only a single digit remains.
There is only one correct answer for each four-digit number string.
The rules are best understood by an example.
5435 = _____
First, 5435 is reduced to 335, because the first two numbers, (54), reduce to 3 by applying the
contiguous rule.
Then, 335 is reduced to 35, because the first two numbers, (33), equal 3 by applying the same
rule.
Finally, 35 is reduced to 4, because the two numbers (35) reduce to 4 by applying the midpoint
rule.
Therefore, the final answer is 4.
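The four rules amount to a left-to-right fold over the digit string: each rule reduces a pair to a single digit, which then pairs with the next digit. The short Python sketch below illustrates the rules exactly as stated above; it was not part of the study materials, and it assumes (as the task does) that intermediate digits stay within 1 through 9.

```python
# Minimal sketch of the Number Reduction Task rules (not part of the study materials).
def reduce_pair(a: int, b: int) -> int:
    """Reduce a pair of digits (1-9) to a single digit using the four rules."""
    if a == b:              # Same rule: two identical digits give that digit.
        return a
    if abs(a - b) == 1:     # Contiguous rule: next digit in the ascending/descending series.
        return b + 1 if b > a else b - 1
    if abs(a - b) == 2:     # Midpoint rule: the digit midway between the two.
        return (a + b) // 2
    return b                # Last rule: digits differ by more than two, keep the latter digit.

def reduce_string(digits: str) -> int:
    """Apply the rules left to right until a single digit remains."""
    result = int(digits[0])
    for ch in digits[1:]:
        result = reduce_pair(result, int(ch))
    return result

# Worked examples from the instructions: 5435 -> 4, 3757 -> 8, 2249 -> 9.
assert reduce_string("5435") == 4
assert reduce_string("3757") == 8
assert reduce_string("2249") == 9
```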
Practice
Practice using the 4 rules on the following pairs of numbers. Each response should be a single
digit between 1 and 9. Type your answer in the blank.
This practice section will not count toward your percentile score. Try to memorize the rules as
you go so you can work as quickly as possible while still achieving 100% accuracy.
44 =: _______
68 =: _______
51 =: _______
65 =: _______
11 =: _______
37 =: _______
42 =: _______
48 =: _______
46 =: _______
43 =: _______
98 =: _______
33 =: _______
78 =: _______
95 =: _______
75 =: _______
99 =: _______
Answers to Practice Items
Check your answers below. *YOUR RESPONSE* is listed first.
Practice
Now, apply the rules from left to right to reduce each four-digit number string to a single digit.
Work the following example and type the answer in the box.
3757 = 8
First, 3757 is reduced to 757, because the first two numbers, (37), reduce to 7 by applying the
last rule.
Then, 757 is reduced to 67, because the first two numbers, (75), reduce to 6 by applying the
midpoint rule.
Finally, 67 is reduced to 8, because the two numbers (67) reduce to 8 by applying the contiguous
rule.
Therefore, the answer is 8.
Practice: Answer
2249 = 9
First, 2249 is reduced to 249, because the first two numbers, (22), reduce to 2 by applying the
same rule.
Then, 249 is reduced to 39, because the first two numbers, (24), reduce to 3 by applying the
midpoint rule.
Finally, 39 is reduced to 9, because the two numbers (39) reduce to 9 by applying the last rule.
Therefore, the answer is 9.
Set Your Goal
Now that you are familiar with the task, set your performance goal in percentiles.
Remember - higher percentiles are better than lower percentiles. For example, at the 80th
percentile, you scored higher than 80% of everyone else who took the test. The 25th percentile
means you scored better than only 25% of people, and 75% of people scored higher than you.
The highest percentile is the 99th percentile, and the lowest percentile is the 1st percentile.
Remember that percentiles are based on both how ACCURATELY and how QUICKLY you
complete the task.
You have to score in the 51st percentile or higher to be eligible for the gas gift card raffle, but
you can set any performance goal you want.
What is your performance goal?* _________
Set Your Goal
You said your goal is [question("value"), id="69"]%.
Let's break it down:
How confident are you that you can complete this task very accurately?
( ) Not at all confident
( ) Somewhat confident
( ) Moderately confident
( ) Very confident
( ) Completely confident
How do you think your accuracy will compare to others?
( ) Bottom 5%
( ) Bottom 20%
( ) Bottom 35%
( ) Middle 50%
( ) Top 35%
( ) Top 20%
( ) Top 5%
How confident are you that you can complete this task very quickly?
( ) Not at all confident
( ) Somewhat confident
( ) Moderately confident
( ) Very confident
( ) Completely confident
How do you think your quickness will compare to others?
( ) Bottom 5%
( ) Bottom 20%
( ) Bottom 35%
( ) Middle 50%
( ) Top 35%
( ) Top 20%
( ) Top 5%
Describe Your Plans to Meet Your Goal
Please respond honestly to the following questions - there are no right or wrong answers (check
as many as apply).
Now that you know your goal, state your plans to meet your performance goal of
[question("value"), id="69"]%.
[ ] I am going to emphasize accuracy.
[ ] I am going to emphasize speed.
[ ] I am going to emphasize accuracy and speed equally.
[ ] I will not focus on anything so I don't "over think" it.
[ ] I will reduce distractions to increase my overall focus (e.g., turn off cell phone or TV, log off
of e-mail, ask roommate to leave).
[ ] Other
[ ] Other
In the box below, briefly explain why you chose this plan to meet your performance goal of
[question("value"), id="69"]%.
Describe How You Set Your Goal
Please respond honestly to the following questions - there are no right or wrong answers (check
as many as apply).
Check the box(es) that describe(s) how you set your goal of [question("value"), id="69"]%.
[ ] I set my performance goal low on purpose so it would be easy to meet.
[ ] I set my performance goal at a challenging level to motivate me to perform well.
[ ] I set my performance goal where I did because achieving it would reward me with a chance at
the gift card.
[ ] I set my performance goal without a reason in mind.
[ ] Other
[ ] Other
Briefly explain why you chose to set your performance goal of [question("value"), id="69"]%
this way.
Describe How You Value Your Performance
Please respond honestly to the following questions - there are no right or wrong answers (check
as many as apply).
Check the box(es) that describe(s) how you value your performance on this task.
[ ] I don't care whether or not I meet my goal.
[ ] It is important to me to meet my goal.
[ ] I will be disappointed in myself if I do not meet my goal.
[ ] I will be happy with my performance if I give my best effort, even if I do not meet my goal.
[ ] Other
[ ] Other
Briefly explain why you value your performance on this task this way.
Select the Most Important
Of the items you checked to state your plans to meet your performance goal, which is most
important to you? (choose only 1)*
Of the items you checked to describe how you set your goal for the task, which is most important
to you? (choose only 1)*
Of the items you checked to describe how you value your performance on the task, which is most
important to you? (choose only 1)*
Number Reduction Task
You are now ready to begin the Number Reduction Task. Remember, the Number Reduction
Task you are about to complete is NOT a measure of math skill.
Try your best to achieve your performance goal of [question("value"), id="69"]% while working
ACCURATELY and QUICKLY.
Tip: If you notice an error in your work, do NOT go back to fix it.
Complete every item on the task before submitting your answers at the bottom of the page.
Click NEXT to continue to the Number Reduction Task and start the timer.
Number Reduction Task
Complete the following Number Reduction Task ACCURATELY and QUICKLY.
2369 =: _______
4667 =: _______
4355 =: _______
4656 =: _______
2244 =: _______
4497 =: _______
6753 =: _______
4464 =: _______
8733 =: _______
7783 =: ________
7895 =: ________
4468 =: ________
8354 =: ________
9856 =: ________
3376 =: ________
9553 =: ________
2646 =: ________
1235 =: ________
4125 =: ________
3155 =: ________
3313 =: ________
4687 =: ________
4643 =: ________
8132 =: ________
2323 =: ________
1366 =: ________
5675 =: ________
6648 =: ________
9844 =: ________
5611 =: ________
Click NEXT to submit your responses and receive your score.
Scoring
Please wait while the system scores your performance for accuracy and speed.
When the progress bar is full, please click NEXT to receive your score.
Results
Your accuracy and speed combined fall in the 28th percentile of all participants.
Unfortunately, you did not perform in the top 50% of participants. This means you do NOT
qualify for the gift card raffle.
The researchers would like to give everyone a fair chance to qualify for the gift card raffle.
You will have one more chance to qualify.
Relating Performance to Your Goal and Plans
Before finding out how you can increase your chances in the $50 gas gift card raffle, please
answer these questions about your performance at the 28th percentile.
This will help the researchers understand how the goal you set and your plans to achieve that
goal relate to your performance.
Before completing the Number Reduction Task, you stated your plans to achieve your goal of
achieving [question("value"), id="69"]%. Your plans are listed below.
Check the box next to each plan that you ACTUALLY DID during the Number Reduction Task.
[ ] Something else
[ ] None of these
Relating Performance to Your Goal and Plans
When you received your score at the 28th percentile, did you think that your plans to meet your
goal were the reason for your score?
( ) Yes
( ) No
( ) Unsure
Please briefly explain your answer:
You will have one more chance to meet your goal on this task to qualify for the raffle.
Does your score at the 28th percentile make you think that your plans to meet your goal will
affect your score on the next trial?
( ) Yes
( ) No
( ) Unsure
Please briefly explain your answer:
Relating Performance to Your Goal and Plans
Please briefly describe your reaction to your percentile score.
To what extent was your percentile score unexpected or expected?
( ) Totally unexpected
( ) Somewhat unexpected
( ) Neither unexpected nor expected
( ) Somewhat expected
( ) Totally expected
To what extent was your percentile score negative or positive?
( ) Negative
( ) Somewhat negative
( ) Neither negative nor positive
( ) Somewhat positive
( ) Positive
Relating Performance to Your Goal and Plans
Personal issues and task issues have been known to affect performance in some people on the
Number Reduction Task.
Personal issues can be positive or negative. Examples of personal issues include: your skill at
solving number puzzles, or focus under time pressure.
To what extent did personal issues affect your percentile score?
( ) Not at all
( ) Somewhat
( ) Moderately
( ) Significantly
( ) Completely
Briefly explain any personal issue(s) that affected your score.
Task issues can be positive or negative. Examples of task issues include: unclear instructions, or
practice at filling in online surveys.
To what extent did task issues affect your percentile score?
( ) Not at all
( ) Somewhat
( ) Moderately
( ) Significantly
( ) Completely
Briefly explain any task issue(s) that affected your score.
Relating Performance to Your Goal and Plans
Past research on the Number Reduction Task suggests performance on the task tends to change
over time.
Think about how YOU would do on the task if you completed another Number Reduction Task
RIGHT NOW, without any time to look over the rules or practice to be faster.
With respect to ACCURACY, would your responses be (choose only one):
( ) More accurate
( ) The same percentage of accuracy as before
( ) Less accurate
With respect to SPEED, would your responses be (choose only one):
( ) Faster
( ) The same speed as before
( ) Slower
In what percentile do you think you would fall compared to your peers?
____________________________________________
Instructions to Qualify
Again, your accuracy and speed combined fall in the 28th percentile of all participants. You did
not perform in the top 50% of participants, but you will have a second chance to qualify for the
gift card raffle.
This time, set a performance goal that you think you can achieve that is equal to or higher than
the 28th percentile. To qualify for the gift card raffle this time, you just need to meet the
new goal you set for yourself, even if it is lower than the 51st percentile.
Research shows that those who state their goal and describe how they plan to meet the goal in
advance of starting it are more likely to meet their goal. Please answer the following questions.
Set Your New Goal
State your NEW performance goal in percentiles for the second trial of the Number Reduction
Task.
Remember that percentiles are based on both how ACCURATELY and how QUICKLY you
complete the task.
You can set any goal you want, but to be entered into the raffle for one of the $50 gas gift cards,
you must:
(1) Set your performance goal equal to or higher than the 28th percentile
(2) Score equal to or above your performance goal
What is your new performance goal?* ____________
Set Your New Goal
You said your goal is [question("value"), id="151"]%.
Let's break it down:
How confident are you that you can complete this task very accurately?
( ) Not at all confident
( ) Somewhat confident
( ) Moderately confident
( ) Very confident
( ) Completely confident
How do you think your accuracy will compare to others?
( ) Bottom 5%
( ) Bottom 20%
( ) Bottom 35%
( ) Middle 50%
( ) Top 35%
( ) Top 20%
( ) Top 5%
How confident are you that you can complete this task very quickly?
( ) Not at all confident
( ) Somewhat confident
( ) Moderately confident
( ) Very confident
( ) Completely confident
How do you think your quickness will compare to others?
( ) Bottom 5%
( ) Bottom 20%
( ) Bottom 35%
( ) Middle 50%
( ) Top 35%
( ) Top 20%
( ) Top 5%
Describe Your New Plans
Please respond honestly to the following questions - there are no right or wrong answers (check
as many as apply).
Now that you have set your new goal, state your plans to meet your performance goal of
[question("value"), id="151"]% on the next trial.
[ ] I won't change my strategy - I will do what I did last time.
[ ] I am going to emphasize accuracy.
[ ] I am going to emphasize speed.
[ ] I am going to emphasize accuracy and speed equally.
[ ] I will not focus on anything so I don't "over think" it.
[ ] I will reduce distractions to increase my overall focus (e.g., turn off cell phone or TV, log off
of e-mail, ask roommate to leave).
[ ] Other
[ ] Other
In the box below, briefly explain why you chose this plan to meet your new performance goal of
[question("value"), id="151"]%.
Describe How You Set Your New Goal
Please respond honestly to the following questions - there are no right or wrong answers (check
as many as apply).
Check the box(es) that describe(s) how you set your new goal of [question("value"), id="151"]%.
[ ] I set my performance goal low on purpose so it would be easy to meet.
[ ] I set my performance goal at a challenging level to motivate me to perform well.
[ ] I set my performance goal where I did because achieving it would reward me with a chance at
the gift card.
[ ] I set my performance goal without a reason in mind.
[ ] Other
[ ] Other
Briefly explain why you chose to set your new performance goal of [question("value"),
id="151"]% this way.
Describe How You Value Your Performance
Please respond honestly to the following questions - there are no right or wrong answers (check
as many as apply).
Check the box(es) that describe(s) how you value your performance on this task.
[ ] I don't care whether or not I meet my goal.
[ ] It is important to me to meet my goal.
[ ] I will be disappointed in myself if I do not meet my goal.
[ ] I will be happy with my performance if I give my best effort, even if I do not meet my goal.
[ ] Other
[ ] Other
Briefly explain why you value your performance on this task this way.
Select the Most Important
Of the items you checked to state your plans to meet your new performance goal, which is most
important to you? (choose only 1)*
Of the items you checked to describe how you set your new goal for the task, which is most
important to you? (choose only 1)*
Of the items you checked to describe how you value your performance on the task, which is most
important to you? (choose only 1)*
Number Reduction Task
You are now ready to begin the second trial of the Number Reduction Task.
Try your best to achieve your new performance goal of [question("value"), id="151"]% while
working ACCURATELY and QUICKLY.
Tip: If you notice an error in your work, do NOT go back to fix it.
Complete every item on the task before submitting your answers.
Do you want to review the rules of the task before starting the second trial?
( ) YES, show me the rules
( ) NO, take me to the Number Reduction Task and start the timer
Number Reduction Task
Complete the following Number Reduction Task ACCURATELY and QUICKLY.
3215 =: _______
2557 =: _______
5343 =: _______
3134 =: _______
1142 =: _______
1251 =: _______
1126 =: _______
7652 =: _______
7872 =: _______
9331 =: _______
8678 =: _______
3364 =: _______
7895 =: _______
6454 =: _______
8857 =: _______
6523 =: _______
3123 =: _______
9898 =: _______
5224 =: _______
2323 =: _______
Click NEXT to submit your responses.
Permission to Use Scores
Your percentile score will be available shortly. First, please answer these questions.
The researchers would like to continue using the Number Reduction Task for future research
studies on goals and a person’s plans to achieve those goals. To do so, we need additional
Number Reduction Task scores to continually update the percentiles in the database.
We are asking your permission to use your Number Reduction Task scores and combine them
with the existing data set to calculate updated percentile scores.
It is important, however, that the ONLY data included in the percentile scores come from
responders who gave a good effort on the task. A good effort means trying to work
ACCURATELY and QUICKLY throughout the task.
If low effort scores are included, future percentiles will be skewed and we will be unable to
interpret true comparisons of ACCURACY and SPEED. Examples of low effort may be guessing
the answer instead of following the rules, or allowing distractions to significantly slow down
one’s speed.
Note: receiving a low percentile score on the task does NOT necessarily suggest low
effort. Percentiles are calculated from last year’s Psyc 1010 students, and even those who gave a
good effort may receive lower scores compared to last year’s students.
Please only select YES if you are confident you gave a good effort on the task.
Please respond honestly - there is no penalty for selecting NO.
Do you give the researchers permission to use your ACCURACY and SPEED scores to update
the database of percentile scores?
( ) Yes
( ) No
Your Perceptions of Performance
Now, try to forget about the percentile score you received on the first trial, or the score you think
you are about to receive for your performance on the second trial of the Number Reduction Task.
Think back to how well YOU believed you were doing on the task while you were completing
it. Don’t compare yourself to others, or use percentiles for comparison.
How would YOU describe your ACCURACY while you were completing the task?
( ) Not at all accurate
( ) Somewhat accurate
( ) Moderately accurate
( ) Very accurate
( ) Completely accurate
How would YOU describe your SPEED while you were completing the task?
( ) Not at all quick
( ) Somewhat quick
( ) Moderately quick
( ) Very quick
( ) Extremely quick
Imagine this study was an Alphabet Reduction Task that reduced a string of letters down to a
single letter. There would be no numbers in the task.
How do you think your performance on an Alphabet Reduction Task would compare to your
performance on the Number Reduction Task?
( ) Significantly lower scores with letters
( ) Moderately lower scores with letters
( ) No difference in scores
( ) Moderately higher scores with letters
( ) Significantly higher scores with letters
Briefly explain your response.
Demographic Information
What is your age in years?*
What is your sex?*
( ) Male
( ) Female
( ) Prefer not to answer
Click NEXT to submit your responses on this page and to view your percentile score for the
second trial of the Number Reduction Task.
Thank you for Participating
APPENDIX B: PARTICIPANT DEBRIEFING LETTER
Your percentile score on the second trial of the Number Reduction Task is not available, and the
study is over. Please read on.
The research study you just completed examines a decision making theory that takes into
account the decision maker’s values, goals, and plans to achieve the goals as they make
decisions. This theory was studied by asking you to state your goals and plans, and then
measuring how your goals and plans were affected by your performance feedback on a
challenging task.
On the Sona website and the first page of the survey, you were told that the purpose of the study
is to understand how people respond to setting goals and how they make plans to achieve their
goals. In addition, the purpose of this study is to understand how people respond to positive and
negative feedback when setting goals and making plans. Specifically, I am interested in whether
people change their goals or their plans to achieve their goals based on the type of feedback.
Therefore, your feedback on the Number Reduction Task was completely bogus – those were
NOT your real percentiles compared to others who have completed the task.
Some research questions require deception, in this case false feedback, because the participants
may behave differently if they know the true and complete purpose of the study. It is unfortunate
that you had to be deceived to complete this research study, and I appreciate your participation in
this study.
Your performance on the task does not affect your chances to win a gas gift card. You are
eligible to win one of four $50 gas gift cards simply for participating in the study. This is in
addition to the 1.0 Sona credits you will receive.
PLEASE do not discuss the deceptive nature of this study with your classmates enrolled in Sona.
Doing so will contaminate my data and harm the integrity of the study. It may also increase the
number of people who participate, which would lower your odds of winning one of the four gift
cards.
If you would like to be eligible for the raffle of one of four $50 gift cards to the BGSU Bookstore,
please include your e-mail address in this box:
You will be contacted by e-mail if you are one of the four gift card winners.
If you would like a copy of your REAL accuracy and speed percentiles on the Number
Reduction Task, please select YES below:
____ YES, please email my results to: [email address required if YES selected]
____ NO
If you have any questions about this survey, you may contact the Principal Investigator, Erin
Smith, at (419) 372-4396 and [email protected]. You may also contact the advising faculty,
Milton Hakel, at (419) 372-8144 or [email protected]. If you have any questions about this
study or your rights as a research participant, you may contact the Chair of Bowling Green State
University’s Human Subjects Review Board at (419) 372-7716 or [email protected].
Click SUBMIT to complete this study.
APPENDIX C: INFORMED CONSENT STATEMENT AND HUMAN SUBJECTS REVIEW
BOARD APPROVAL LETTER
Dear participant,
You are invited to participate in a research study investigating the effect of goal
setting on performance.
Purpose of the Study
By completing this task and survey, you will be contributing to research about how
people respond to setting goals and making plans to achieve them. The results of
this study will be used to help researchers at Bowling Green State University
understand people's tendencies toward setting a short-term goal when performance
is based on comparisons to their peers.
Procedures
In this study, you will be asked to complete a number task and answer several
questions about your task goals. If you do well, you will have a chance to win one of
four $50 gas gift cards! It is expected to take approximately 60 minutes, for which
you will receive extra credit in this course for your participation. Individuals must
be at least 18 years old to participate in this study.
Risks
The anticipated risks are no greater than those normally encountered in daily life.
Benefits
If you do well, you may qualify to win one $50 gas card! If you do not do well, there
may be no direct benefit to you, aside from receiving extra credit in this course. You
may benefit from learning about your own goal setting process throughout the
study. You will be provided with a copy of this document.
Confidentiality
Information you provide will remain confidential. Information linking you and your
survey responses will be kept in a secure location and will only be available to
members of the research team. Your name will not be tied to any published results.
Voluntary Nature
Your participation is entirely voluntary. You are free to withdraw at any time. You
may decide to skip questions (or not do a particular task) or discontinue
participation at any time without penalty. Deciding to participate or not will not
affect your grades, class standing, or your relationship with Bowling Green State
University. Stopping before the survey is complete, however, will forfeit your
chances for a gift card.
Contact Information
If you have any questions about this survey, you may contact the Principal
Investigator, Erin Smith, at (419) 372-4396 and [email protected]. You may also
contact the advising faculty, Milton Hakel, at (419) 372-8144 or [email protected].
If you have any questions about this study or your rights as a research participant,
you may contact the Chair of Bowling Green State University's Human Subjects
Review Board at (419) 372-7716 or [email protected].
By clicking NEXT, you confirm that you have been informed of the purposes,
procedures, risks, and benefits of this study. You have had the opportunity to have
all your questions answered and you have been informed that your participation is
completely voluntary. You agree to participate in this research.
HSRB # H11D173GFB
Effective Date: 10-14-11
Expiration Date: 3-01-12
Clicking NEXT constitutes your consent to participate in this study.