
Applied Economic Perspectives and Policy (2015) volume 37, number 3, pp. 524–536.
doi:10.1093/aepp/ppv002
Advance Access publication on 4 March 2015.
Submitted Article
Deception in Experiments: Towards Guidelines
on use in Applied Economics Research
Matthew C. Rousu*, Gregory Colson, Jay R. Corrigan,
Carola Grebitus, and Maria L. Loureiro
Submitted 11 March 2014; accepted 5 November 2014.
Abstract Many applied economics journals ban the use of deception in experiments, which contrasts with the policies in other academic disciplines. We examine
the cases for and against deception, and describe the ways deception can be employed
in applied economics experiments. We create a general ranking of harms from deception in experiments and present evidence from a survey (conducted in summer
2014) of agricultural and applied economists eliciting attitudes towards ten different deceptive practices. Survey respondents view inflicting physical or psychological
harm on participants and not making promised payments as the most severe forms
of deception. Less severe forms of deception include providing participants with incomplete product information and conducting an experiment using participants
who are not aware they are part of an experiment. Finally, we provide recommendations for policies addressing deception in experiments.
Key words: Experimental economics, deception.
JEL codes: C80, C81, C90.
Introduction
Deception (noun): An act or statement intended to make people believe
something that is not true. (Merriam-Webster Dictionary 2014)
The use of deception has become one of the defining characteristics of psychological and sociological experiments, where subjects may be deceived
about the purpose, design, or setting of the experiments they are participating in (Bortolotti and Mameli 2006). For example, Hertwig and Ortmann’s
(2008a) survey of experimental social psychology research shows that 54%
of the studies published in the Journal of Experimental Social Psychology
© The Author 2015. Published by Oxford University Press on behalf of the Agricultural and Applied Economics Association. All rights reserved. For permissions, please e-mail: [email protected]
Matthew C. Rousu is the Warehime Chair and professor of economics at
Susquehanna University, Gregory Colson is an assistant professor of agricultural
and applied economics at the University of Georgia, Jay R. Corrigan is a professor of
economics at Kenyon College, Carola Grebitus is an assistant professor of food
industry management at Arizona State University, and Maria Loureiro is an
associate professor in the Department of Economic Theory at University of Santiago
de Compostela, Spain.
*Correspondence may be sent to: [email protected].
1 Another potential cost of deception occurs if it increases legal liability (Hey 1998). If legal risks are a concern, one could argue this should not affect the profession's or a journal's decision to ban deception, since it is a private risk. This is especially true if the journal requires IRB approval of experimental research, as the risk would more clearly fall on the researcher and/or the IRB regulating body.
in 2002 involved deception of one kind or another. Remarkably, this
number is down from 85% of the studies published in the same journal in
1965 that involved deception (Gross and Fleming 1982). This is in stark contrast with the economics discipline, where journals such as the American
Economic Review, the American Journal of Agricultural Economics, and
Experimental Economics refuse to publish the results of studies that involve
deception.
The case for allowing deception is quite simple. Its proponents argue that
there are certain research questions that can only feasibly be answered
using deception. Further, these proponents argue, under certain circumstances there is little to no cost to participants, society, or other researchers
from deception in experiments, or at least the benefits outweigh such costs
(Weiss 2001; Barrera and Simpson 2012).
At the same time, opponents (mainly economists) cite several reasons for
defending a prohibition on deception in experiments. The first is that deception is ethically wrong (Barrera and Simpson 2012). Methodological deception
might entail the risk of psychological harm to study participants and, opponents argue, could violate their human rights (Bortolotti and Mameli 2006).
The second argument is the possibility that participants previously
exposed to deception become suspicious or will even expect deception in
future experiments. In this case, non-deception is a public good, and those
who deceive are contaminating participants for other researchers (Bonetti
1998; Cooper 2014). This line of argument is most relevant for research conducted with student subjects who might participate in multiple experiments.
It does seem plausible that if subjects are deceived once, they may behave
differently in future experiments. The evidence is mixed, however. Jamison,
Karlan, and Schechter (2008) find that some participants (those who were
previously deceived) altered their behavior after repeated exposure to deception. The authors do qualify their findings by noting a selection bias.
Barrera and Simpson (2012) replicate the experiments by Jamison, Karlan,
and Schechter (2008) and correct for selection bias. They find no statistically
significant effects of deception.
According to Hertwig and Ortmann (2008b), conformity experiments are
the only area where social psychologists have systematically studied the
impact of deception on subjects’ behavior. These authors found fourteen
studies (Hertwig and Ortmann 2008a) where experimenters tried to identify
suspicious subjects with post-experiment questions such as the following, taken from Geller and Endler (1973): “Do you feel this experiment was deceptive (involved lying) in any way?” In ten of these fourteen studies,
Hertwig and Ortmann (2008a) found that suspicious subjects displayed significantly less conformity than unsuspicious subjects.
Potential spillover effects are not the only problem caused by repeated draws from research pools. For example, Matthey and Regner (2013) find that subjects’ decisions in laboratory experiments are not independent of previous experiment participation even in the absence of deception.1
Many economics journals will not publish articles reporting results of experiments that used deception (except for the explicit purpose of studying the effects of deception) (Brian Roe, personal communication). The top-rated journal in agricultural economics, the American Journal of Agricultural Economics, implemented this publication policy in 2011. This puts AJAE in the company of general economics journals such as the American Economic Review and more discipline-specific journals such as the Journal of Economic Behavior and Organization and Experimental Economics. The new submission guidelines for the AJAE require researchers to state that “the data collection efforts do not involve deception of human subjects even if the protocol was approved by all appropriate Institutional Review Boards” (American Journal of Agricultural Economics 2014).

This dichotomy between the ban on deception in the economics literature and the pervasive use of deception in psychological research raises the important question of whether either discipline has the optimal academic policy towards deception. While there are clear and reasonable questions regarding the benefits and ultimate consequences of the extent to which social psychology research employs deception in experiments, the converse can be asked of economic research.

Opinions and arguments supporting answers to these questions tend to fall under two competing ethical views: deontological ethics and consequentialism. Deontologists (also referred to as prohibitionists by Bonetti (1998)) contend that the human rights of participants cannot be violated under any circumstances, and stress that economists should be able to test any theory without being deceptive (Ortmann and Hertwig 2002). In contrast, consequentialists follow the cost-benefit calculus of the American Psychological Association, and contend that there are research topics of sufficient importance to society that the benefits of using deception outweigh the costs of violating participants’ rights.

We attempt to add to the literature by providing an overview of the arguments for banning deception, along with the arguments for allowing it. We provide a list of potentially deceptive practices one may see in agricultural and applied economics research, and rank these practices in terms of the potential harms they may cause. In addition to our general taxonomy, we present evidence from a survey of agricultural and applied economists eliciting attitudes towards ten different deceptive practices. Finally, we provide recommendations that could be used to support crafting research policies with regard to deception in economic experiments.

Defining Deception and Deceptive Practices

Krawczyk (2013) provides a definition of deception that distinguishes between “explicit deception” and “deception by omission”: an experiment is explicitly deceptive if the researcher intentionally provides false information, and deceptive by omission if the researcher intentionally withholds information. Further, to be deceptive by omission, Krawczyk states that the omission must either be likely to change participant behavior or to change the probability of the participant being willing to participate in the experiment. Hey (1998) makes a similar distinction, but believes it is only deception if the deception is explicit, saying “There is a world of difference between not telling subjects things and telling them the wrong things. The latter is deception, the former is not” (Hey’s emphasis).
Sieber, Iannuzzo, and Rodriguez (1995) divide deceptive practices into the following eight categories: 1) providing subjects with false information about a study’s main purpose; 2) exposing subjects to a “bogus device” that does not function the way subjects are led to believe it does; 3) providing subjects with false feedback about their own performance in the study; 4) providing subjects with false feedback about other subjects’ performance in the study; 5) using confederates who appear to be bystanders or other participants in a study but are actually following the experimenter’s instructions; 6) observing subjects without their consent, so that they are unaware they are participating in a study; 7) letting subjects know they are part of a study without revealing how and when they are being observed; and 8) inviting subjects to participate in two related studies while telling them the studies are unrelated.

These separate definitions and lists of potentially deceptive practices are useful. However, given the many ways deception can be implemented, we have created a general ranking of what we think are more severe and less severe forms of deception based upon the degree of harm to individuals, future research, and the profession (see table 1). The most severe form is any deception that could cause actual physical, emotional, or other injuries or harm to participants. Bortolotti and Mameli (2006) discuss harm by saying:

A sensible view is that, if the psychological harm inflicted to the research participant goes beyond a certain threshold, then the experiment is not permissible, independently of how great its potential benefits to society are. When the threshold is surpassed, the experiment becomes an act of injustice against the research participant. On this proposal, one can decide whether to conduct an experiment involving deception by using the following procedure. The first step is to ask whether the experiment is likely to cause significant psychological harm to the participant. If it is, then the experiment is not morally permissible. If it is not, then one should ask whether the harm to the participant (if any) is outweighed by the potential benefits of the research. Only if it is, then the experiment can be justified.

This view leaves open the question of how participants could be harmed by deception. In psychology experiments, there are many examples. The best example of a study that potentially caused significant psychological harm is Milgram’s (1963) study on obedience. Milgram’s forty subjects believed they were participating in a study on memory and learning. Each subject was paired with a confederate. As a first example of a bogus device, the subject and the confederate drew slips of paper to determine who would play the role of teacher and who would play the role of learner in the experiment. This drawing was rigged so that the subject would always be assigned the role of teacher. The experimenter then strapped the confederate into what appeared to be an electrified chair—another bogus device—under the pretense that the confederate would be shocked each time he failed to remember the second word in a series of word-pair memory tasks. The experimenter seated the subject at an instrument panel with thirty levers marked with voltages ranging from 15 to 450 volts. The levers also included text descriptions of the shock they purportedly delivered to the confederate, such as “Slight Shock”, “Strong Shock”, “Danger: Severe Shock”, and finally “XXX”. This instrument panel was a third bogus device. The panel delivered no current, but the confederate was trained to act as if he was suffering as a result of electrical shocks. All forty subjects administered at least twenty shocks to the confederate. After the twentieth shock the confederate kicked the walls and refused to answer further word-pair questions. Twenty-six subjects delivered all thirty increasingly powerful shocks, continuing to the final 450-volt lever marked “XXX”.

This is obviously an extreme example, and the potential for significant psychological harm stemming from deception would generally be much lower for agricultural and applied economics experiments.

Table 1 Suggested Deception Taxonomy for Agricultural and Applied Economics Journals

1. (Most severe) Potential harm to subjects: psychological trauma, financial consequences, selling a product that is not as described, not making payments that were promised.
2. Potential spillover effects: future subjects from a university subject pool know that there has been deception in a previous experiment and act differently.
3. (Least severe) Neither harm to subjects nor spillover effects: misleading subjects about the purpose of the study, misleading subjects about products available (as long as subjects do not consume or purchase the product), use of confederates.

What Types of Deception Do We See in Agricultural and Applied Economics?

Three primary categories of experiments emerge when reviewing the articles employing experimental methods in the agricultural and applied economics literature: consumer willingness-to-pay (WTP) studies (e.g., auctions, choice experiments, and natural experiments for food products); computer-based environmental studies (e.g., resource management experiments, public good contribution games); and tests of experimental methods (e.g., auctions vs. choice experiments, hypothetical bias, posted-price bias).

For each of these categories, it is possible that deception could harm experimental participants. In WTP studies where actual products are sold to research participants, the potential for harm exists if participants are deceived about the true nature of a product. For example, consider an auction exploring consumers’ WTP for genetically modified (GM) foods. If the researcher mislabels products such that participants think they are purchasing a non-GM food when the food is in fact genetically modified, there is the potential for harm even if the participant is unaware of the deception. With credence quality attributes (Nelson 1970), where participants cannot know whether the non-GM claims are authentic even after consumption, it is not possible for participants to uncover whether they were deceived (unless revealed by the researcher). Because participants are purchasing and consuming a product counter to the claims of the researcher, we would consider the participant as being harmed—even if they never discovered the deception.

In computer-based environmental behavior studies and tests of experimental methods, researchers might expose subjects to a “bogus device”, provide false feedback about a subject’s own performance or other subjects’ performance, or use confederates. For example, in order to study willingness to contribute to a public good (e.g., environmental quality) in a dynamic framework where subjects observe the past behavior of other potential contributors, a researcher could provide false information about other participants’ contributions. In this situation there is clear potential for harm if participants earn monetary rewards based on their behavior in the experiment. Even in the absence of potential monetary losses, this practice could harm other researchers if it creates a perception that economists routinely use deception. This is especially true on university campuses where students might take part in multiple experiments with different researchers over time as part of experimental laboratories.

When testing the relative effectiveness of different experimental methods, a researcher might provide false feedback about other participants’ performances. For example, a researcher might post artificially high winning prices after early auction rounds in order to test whether this causes an increase in the bids that auction participants submit in later auction rounds. If these artificially high prices do indeed lead to increased bids, the winners in later rounds would suffer financially in that they would pay higher prices for the goods they purchase.

Table 2 presents a non-comprehensive list of studies that have used some form of deception and were published by agricultural and applied economists. Most of the deceptive techniques would fall into the least severe category of deception from table 1. These techniques pose no personal risks for subjects. Also, in most cases there were no negative spillover concerns because the studies from table 2 used subjects from the general population, not from a university student body.

Perceptions of Deception by Academic Researchers

The Case of Social Psychologists

According to Korn (1997), the first psychological study to use deception was likely Solomons’s 1897 study on sensory perception. Solomons was interested in subjects’ ability to distinguish a single touch from a dull compass point from two simultaneous touches. He originally found that untrained subjects could reliably distinguish one touch from two only when the two simultaneous touches were at least 1.5 inches apart. In a second experiment, he told subjects whether they were about to be touched in one or two places. Naturally, this information improved subjects’ accuracy. In a third experiment, Solomons again told subjects whether they were about to be touched in one or two places but warned them that they might be deceived (e.g., told they would receive one touch when they would actually receive two). According to Solomons, “The influence of the expectation predominated, so that when touched by one point he would perceive two if he had been led to expect two; and when touched by two, set farther apart than was necessary for perceiving them as two ordinarily, he would perceive them as one if told to expect one” (Korn 1997).

Since Solomons’s study, the use of deception has become one of the defining characteristics of experimental social psychology. Figure 1 tracks the frequency of deception in articles published in the Journal of Experimental Social Psychology and the more frequently cited Journal of Personality and Social Psychology from 1965 to 2002. Though remarkably common, the use of
Table 2 A Sampling of Published Papers by Agricultural Economists that Have Used Some Form of Deception (citation counts from Google Scholar, Feb. 4, 2014)

American Journal of Agricultural Economics, 1995. Different participants were given different probabilities of being harmed by the product for sale in an experimental auction. (298 citations)

American Journal of Agricultural Economics, 2006. Confederates placed unusually high bids in experimental auctions. (56 citations)

Economic Inquiry, 2007. Deception by omission. Participants received different information treatments: positive, negative, both positive and negative, with and without third-party information. Only a subsample received all information. (81 citations)

Journal of Risk and Uncertainty, 2002. Deception by omission. Participants received positive or negative information about the product, depending on the random treatment to which they were assigned. Only a subsample received both positive and negative information. (165 citations)

Journal of Agricultural and Resource Economics, 2011. Participants were told they were bidding on transgenic and non-transgenic products, though all products were actually non-transgenic. The experiment was designed so the binding round was for a non-transgenic product. (10 citations)

European Review of Agricultural Economics, 2006. Deception by omission. Participants bid on animal products. Only a subset of participants received information on humane animal care practices. (53 citations)

European Review of Agricultural Economics, 2004. Deception by omission. Participants bid on a genetically modified food product. Participants in different treatments received different information about the benefits of genetic modification. Only a subset of participants received all types of information. (188 citations)

Review of Economics and Statistics, 2010. Participants were not aware of their participation in an experiment. (21 citations)

Land Economics, 2003. Participants were not aware of their participation in an experiment. (60 citations)

Journal of Nutrition Education and Behavior, 2005. Participants were aware of their participation in a survey, although they were not aware of the full details of the experiment. (189 citations)
Figure 1 Percentage of studies using deception in the Journal of Personality and Social Psychology and the Journal of Experimental Social Psychology. Sources: Gross and Fleming (1982); Nicks, Korn, and Mainieri (1997); Epley and Huff (1998); Hertwig and Ortmann (2008a)

deception in social psychology experiments receives surprisingly little attention. For example, a bestselling social psychology textbook (Aronson, Wilson, and Akert 2010) devotes just two sentences to deception:

Deception in social psychological research involves misleading participants about the true purpose of a study or the events that transpire. (Note that not all research in social psychology involves deception.)

Moreover, the American Psychological Association’s code of conduct devotes only three sentences to the issue:

(a) Psychologists do not conduct a study involving deception unless they have determined that the use of deceptive techniques is justified by the study’s significant prospective scientific, educational or applied value, and that effective nondeceptive alternative procedures are not feasible.

(b) Psychologists do not deceive prospective participants about research that is reasonably expected to cause physical pain or severe emotional distress.

(c) Psychologists explain any deception that is an integral feature of the design and conduct of an experiment to participants as early as is feasible, preferably at the conclusion of their participation, but no later than at the conclusion of the data collection, and permit participants to withdraw their data.

By comparison, the code of conduct devotes seven sentences to the care and use of animals in psychological research.

The Case of Agricultural and Applied Economists

While the field of social psychology has revealed a level of acceptance regarding deception in experiments through its long history and continued practice of publishing such research, there is no evidence explicitly demonstrating the same for agricultural and applied economists (American Journal of Agricultural Economics 2014). Given the oftentimes unique nature and focus of experiments conducted by agricultural economists, as well as the recent editorial policy shift at AJAE, better understanding how researchers in this field view various deceptive practices and the general taxonomy delineated in table 1 could help guide the development of more balanced guidelines on the use of deception. Towards this end, in the fall of 2014 we invited agricultural and applied economists known to conduct experimental research to participate in an online survey administered by Qualtrics. We reached out to approximately 60 economists, and recipients were free to forward the survey invitation to their colleagues. Hence, the initial sample was a convenience sample extended to a snowball sample due to the forwarding of the invitation (Smith and Albaum 2005).

Forty-eight researchers completed the survey. A precise response rate cannot be determined because no data are available on how many colleagues received the forwarded recruitment note, although at most the response rate was 80%. That said, we believe we identified most applied economists who conduct experiments, so we feel this sample is representative of the applied economists who conduct experiments. The survey was approved by Arizona State University’s Institutional Review Board (IRB).

As part of the online survey, respondents were asked to evaluate ten different deceptive practices that have been or could be used in experiments commonly employed in agricultural economics and related fields, and then rank order them from least severe to most severe.

Table 3 presents summary statistics for the rankings of the ten deceptive practices tested in this survey (1 being least severe, 10 being most severe). Figure 2 presents the same information as a box plot where the bars represent the interquartile range and the points represent the median ranking. In line with the suggested deception taxonomy presented in table 1, the three practices that respondents viewed as most severe are subjecting participants to physical or psychological trauma, promising participants a payment that is not fulfilled, and having participants purchase a product that is not as described. Also in line with table 1, respondents generally agreed that providing participants with some information about a product but not “complete information”, providing false information about a study’s purpose, and using confederates were among the least harmful forms of deception.

Table 3 Survey Responses Ranking the Severity of Different Deceptive Practices (1 = least severe, 10 = most severe; each practice is followed by its 25th percentile, median, and 75th percentile severity ranking)

1. Subjecting participants to physical or psychological trauma. For example, researchers might tell a participant that a relative is in the hospital to test their reaction. (9; 10; 10)

2. Promises of payment not fulfilled. The experimenter promises payments during the session, but then does not make the promised payments after the session is over. (7.75; 9; 9)

3. Having participants purchase a product that is not as it is described. For example, participants think they are purchasing an organic product, but the product is actually not organic. (6; 7.5; 8)

4. Providing subjects with false feedback about their own performance in the study. For example, subjects participate in a knowledge test and then receive feedback that their score is below or above the scores of the other subjects. Afterwards, they participate in another knowledge test to measure the influence of positive and negative feedback. (5; 6; 7)

5. Providing subjects with false feedback about other subjects’ performance in the study. For example, subjects participate in a knowledge test and then receive feedback that the other subjects’ scores are below or above their own score when they are not. (4; 5.5; 6.25)

6. Having participants bid on a product that is mislabeled, while setting up the auction in a way so that the participant will not purchase that item. For example, in an auction where some products are correctly labeled and some mislabeled, the researcher uses a non-random device to ensure that only one of the correctly labeled products is selected and sold to the auction winner. (3; 5; 6)

7. The use of confederates. For example, researchers may employ people who appear to be normal participants but are actually following the experimenter’s instructions. (3; 4; 6.25)

8. Subjects may be unaware they are part of a study, or may be aware they are part of a study but not aware of how and when they are being observed. For example, participants in a field auction experiment conducted at a grocery store may be observed after the auction ends to see how their purchases in the grocery store compare to those of other shoppers. (2; 3.5; 5)

9. Providing subjects with false information about a study’s main purpose. For example, researchers may not want to “prime” participants or get them thinking about a particular topic prior to participating in the experiment. (2; 3; 7)

10. Providing participants with some information about a product, but not “complete information”. For example, suppose there are positive and negative perspectives about a product or process, but participants in some treatments receive only the positive perspective, while participants in another treatment receive only the negative perspective. (1; 2; 4)

Figure 2 Box plot showing the interquartile range and median rankings from a survey ranking the severity of deceptive practices

While our sample was non-random, there is a key insight we can gain from these results: agricultural and applied economists feel that some deceptive practices are more severe than others. We are unaware of previous studies involving physical trauma or unfulfilled payments in agricultural experiments, and cannot imagine such a study being conducted, but auctioning a product that is not as it is described is within the realm of possibility. This is a practice that respondents viewed as a severe form of deception.
Acknowledgement

Thanks to Jayson Lusk, Brian Roe, Spiro Stefanou, an anonymous reviewer, and participants in a spirited session at the 2013 AAEA meetings in Washington DC for helpful comments and suggestions.

References

American Journal of Agricultural Economics. 2014. Instructions to Authors. Available online at http://www.oxfordjournals.org/our_journals/ajae/for_authors/general.html (accessed July 2, 2014).

American Psychological Association. 2010. Ethical Principles of Psychologists and Code of Conduct Including 2010 Amendments. Available online at http://www.apa.org/ethics/code/ (accessed July 14, 2013).

Aronson, E., T.D. Wilson, and R.M. Akert. 2010. Social Psychology, 7th Ed. Boston, MA: Pearson.

Barrera, D., and B. Simpson. 2012. Much Ado About Deception: Consequences of Deceiving Research Participants in the Social Sciences. Sociological Methods & Research 41 (3): 383–413.

Bonetti, S. 1998. Experimental Economics and Deception. Journal of Economic Psychology 19: 377–395.

Bortolotti, L., and M. Mameli. 2006. Deception in Psychology: Moral Costs and Benefits of Unsought Self-knowledge. Accountability in Research 13: 259–275.

Discussion and Recommendations
If we as a profession want to limit deception, we must start by defining it,
being careful to list which potentially deceptive practices are allowed and
which are forbidden. If we do not, researchers cannot know which studies
are publishable. For example, consider a case where researchers are interested in the effects that positive and negative information have on consumers’ WTP for GM foods. If researchers provide some experimental auction
participants with only negative information and others with only positive
information, is this an example of deception? According to Hey’s (1998) definition of deception it is not, but it is according to Krawczyk’s (2013) definition of deception by omission. The profession is not well served by this
kind of ambiguity. There are obvious costs and benefits associated with the
use of deception in economic experiments, and a great deal of work must be
done to identify those circumstances, if any, where the benefits exceed the
costs. Until more evidence is available, the most prudent step the profession
can take is to carefully define deception and to detail which potentially deceptive practices are allowed and which are prohibited. The taxonomies of
deception we provide in this paper could serve as a starting point for a
careful and unambiguous definition of prohibited deceptive practices.
Applied Economic Perspectives and Policy
536
Downloaded from http://aepp.oxfordjournals.org/ at :: on August 18, 2015
Cooper, D.J. 2014. A Note on Deception in Economic Experiments. Journal of Wine Economics 9 (2): 111–14.
Epley, N., and C. Huff. 1998. Suspicion, Affective Response, and Educational Benefit as a Result of Deception in Psychology Research. Personality and Social Psychology Bulletin 24: 759–68.
Geller, S., and N. Endler. 1973. The Effects of Subject Roles, Demand Characteristics, and Suspicion on Conformity. Canadian Journal of Behavioral Science 5: 46–54.
Gross, A., and I. Fleming. 1982. Twenty Years of Deception in Social Psychology. Personality and Social Psychology Bulletin 8: 402–08.
Hertwig, R., and A. Ortmann. 2008a. Deception in Experiments: Revisiting the Arguments in its Defense. Ethics and Behavior 8: 59–92.
———. 2008b. Deception in Social Psychology Experiments: Two Misconceptions and a Research Agenda. Social Psychology Quarterly 71: 222–27.
Hey, J.D. 1998. Experimental Economics and Deception: A Comment. Journal of Economic Psychology 19: 377–95.
Jamison, J., D. Karlan, and L. Schechter. 2008. To Deceive or Not to Deceive: The Effect of Deception on Behavior in Future Laboratory Experiments. Journal of Economic Behavior and Organization 68: 477–88.
Korn, J. 1997. Illusions of Reality: A History of Deception in Social Psychology. Albany: State University of New York Press.
Krawczyk, M. 2013. Delineating Deception in Experimental Economics: Researchers' and Subjects' Views. University of Warsaw, Faculty of Economic Sciences Working Paper (No. 2013-11).
List, J.A. 2011. Why Economists Should Conduct Field Experiments and 14 Tips for Pulling One Off. The Journal of Economic Perspectives 25 (3): 3–15.
Matthey, A., and T. Regner. 2013. On the Independence of History: Experience Spill-overs between Experiments. Theory and Decision 75 (3): 403–19.
Merriam-Webster Dictionary. 2014. Deception. http://www.merriam-webster.com/dictionary/deception (accessed September 3, 2014).
Milgram, S. 1963. Behavioral Study of Obedience. The Journal of Abnormal and Social Psychology 67 (4): 371–8.
Nelson, P. 1970. Information and Consumer Behavior. Journal of Political Economy 78 (2): 311–29.
Nicks, S.D., J.H. Korn, and T. Mainieri. 1997. The Rise and Fall of Deception in Social Psychology and Personality Research, 1921 to 1994. Ethics & Behavior 7 (1): 69–77.
Ortmann, A., and R. Hertwig. 2002. The Costs of Deception: Evidence from Psychology. Experimental Economics 5 (2): 111–31.
Sieber, J.E., R. Iannuzzo, and B. Rodriguez. 1995. Deception Methods in Psychology: Have They Changed in 23 Years? Ethics & Behavior 5 (1): 67–85.
Smith, S.M., and G.S. Albaum. 2005. Fundamentals of Marketing Research. Thousand Oaks, London, New Delhi: Sage Publications.
Solomons, L. 1897. Discrimination in Cutaneous Sensations. Psychological Review 4: 246–50.
Weiss, D.J. 2001. Deception by Researchers is Necessary and Not Necessarily Evil. Behavioral and Brain Sciences 24 (3): 431–2.