
Journal of Management 2004 30(5) 703–724
Situated Experiments in Organizations:
Transplanting the Lab to the Field
Jerald Greenberg∗
Department of Management and Human Resources, Fisher College of Business,
The Ohio State University, 2100 Neil Avenue, Columbus, OH 43210, USA
Edward C. Tomlinson
Department of Management and Human Resources, Fisher College of Business,
The Ohio State University, 2100 Neil Avenue, Columbus, OH 43210, USA
Received 4 September 2003; received in revised form 13 November 2003; accepted 26 November 2003
Available online 15 June 2004
Both laboratory and field experiments have limitations that likely account for the recent decline
in their usage among organizational researchers. In this article, we introduce situated experiments as an experimental approach that optimizes the strengths of both laboratory and field
experiments in organizational research while mitigating the weaknesses of each. We highlight
four recently published studies using situated experiments. Drawing on these examples, we
illustrate how the proper use of situated experiments can minimize threats to internal validity
and ensure the ethical treatment of research participants.
© 2004 Elsevier Inc. All rights reserved.
In their quest to isolate the causal impact of key variables on various behaviors and attitudes of interest, organizational researchers long have relied on the experimental method as
one of their primary data-gathering techniques (Stone-Romero, Weaver & Glenar, 1995).
This popularity is predicated on the belief that experiments are well suited to providing
stringent evidence of causal relationships and are designed to eliminate plausible alternative explanations (Smith, 2000). Therefore, it is not surprising that the experiment has been
touted as “the most powerful technique available for demonstrating causal relationships between variables” (Jones, 1985: 282) and the “preferred mode for the observation of nature”
(Rosenthal, 1967: 356).
∗ Corresponding author. Tel.: +1 614 292 9829; fax: +1 740 548 7965.
E-mail addresses: [email protected] (J. Greenberg), [email protected] (E.C. Tomlinson).
0149-2063/$ – see front matter © 2004 Elsevier Inc. All rights reserved.
doi:10.1016/j.jm.2003.11.001

Despite these bold proclamations, there has been a decline in the publication of experimental studies by organizational scientists in recent years, both those conducted in the
laboratory and the field. This is evidenced by Scandura and Williams’s (2000) analysis of
the research methods used in articles appearing in the Academy of Management Journal,
Administrative Science Quarterly, and the Journal of Management for two 3-year periods
a decade apart, 1985–1987 and 1995–1997. These investigators reported a significant decrease in the percentage of empirical studies published using laboratory experiments from
10.70 percent in 1985–1987 to 4.90 percent in 1995–1997. Also decreasing, although not
significantly, was the incidence of field experiments during this same period: from 3.90
percent to 2.20 percent.
Along with these trends in the publication of empirical journal articles, it also is instructive to examine differences in the inclusion of chapters on the experimental method
in major books published during the past four decades. As recently as the 1970s and
1980s, for example, chapters on the experimental method appeared in books on organizational research methods (Fisher, 1984), and handbooks devoted to organizational behavior
(Blackburn, 1987) and industrial-organizational psychology (Fromkin & Streufert, 1976).
However, more current handbooks dedicated to organizational behavior (Golembiewski,
2000), personnel psychology (Anderson, Ones & Sinangil, 2002), organizational
psychology (Anderson, Ones, Sinangil & Viswesvaran, 2002), industrial and organizational psychology (Dunnette & Hough, 1990), and even research methods in industrial and
organizational psychology (Rogelberg, 2002) have failed to devote a single chapter to experimentation. Such omissions, together with the decline in empirical articles using the
experimental method, indicate that lab and field experiments have fallen out of favor among
organizational scientists.
This decline in popularity of experiments appears to be based on four major factors. First,
as noted by Stone-Romero et al. (1995), modern computer programs performing covariance
structure analyses (e.g., Amos, EQS and LISREL) provide an alternative method of assessing the plausibility of models positing causality on the basis of covariation between observed
variables (Skrondal & Rabe-Hesketh, 2004). A second and related consideration is that data
analyzed using covariance structure analysis generally are collected using questionnaires,
which tend to be far easier to administer than complex experimental manipulations. This
is especially so among samples of people currently employed in organizations. Third, we
also note what Mook (1983) has referred to as “a misplaced preoccupation with external
validity” (p. 379), which has led some scientists (e.g., Gordon, Slade & Schmitt, 1986) to reject experiments involving the use of college students on the grounds that such investigations
lack generalizability to non-student samples. Finally, there also are ethical considerations.
Specifically, many laboratory experiments involve the use of deception, which is a practice
to which some scientists have objected on moral grounds (e.g., Baumrind, 1985). As Seeman
(1969) observed some 35 years ago, if scientific knowledge “is won at the cost of some
essential humanness in one person’s relationship to another, perhaps the price is too high”
(p. 1028). Related to this ethical concern is the allied practical consideration that scientists may face challenges justifying some experimental manipulations to their universities’
institutional review boards (e.g., Chastain & Landrum, 1999).
Although we acknowledge these considerations, we are concerned about the potential extinction of experimental research in organizations—or, at least, voids created by continued
declines in its use. Specifically, we fear that by reducing the prominence of the experimental method in the organizational scientist’s research toolkit we are restricting the range
of questions that we are capable of investigating as well as the quality of the insight we
are inclined to derive about human behavior in organizations. In this regard, we echo the
sentiment of Scandura and Williams (2000) that this state of affairs is “disheartening” and
that it “doesn’t bode well for the future of management research” (p. 1261). Our primary
concern is epistemological in nature: we must keep in mind the principal purpose of experimental research in organizations—namely, to understand the psychological processes underlying organizational phenomena. To us, this must remain a key objective of
organizational scientists. After all, one of the major reasons for conducting experimental research is not to determine what actually happens in the field, but to test inductively derived
hypotheses regarding what might happen under certain conditions (Greenberg & Folger,
1988). For this purpose, no research technique is superior to the experiment.
Needed is an experimental research method that satisfies many of the concerns of those
who have rejected it. In addition to being methodologically rigorous, this technique also
must be ethically appropriate, and a source of insight into real organizational processes. We
offer the situated experiment as such an approach. A situated experiment is a laboratory-type
experiment conducted in a natural setting, such as an organization. Situated experiments
result from transplanting the typical laboratory experiment into the field, making adjustments that capitalize on the richness of the naturalistic environments in which they occur.
Such research involves using both carefully controlled independent variables of the type
that typically appear only in the laboratory and more complex independent variables that may be found only in naturalistic settings. Likewise, situated experiments may
use both dependent variables that are exclusive to the experiment and those that occur
naturally.
In the present article we will describe the nature of situated experiments in depth, sharing
as examples four such studies conducted by the senior author appearing in three recently published articles (Chen, Brockner & Greenberg, 2003, Study 2; Greenberg, 2002; Greenberg
& Roberge, in press). Drawing on these examples, we then juxtapose situated experiments
with lab and field experiments to highlight their unique features. We conclude by discussing
two key issues that, although central to all social science research, manifest themselves in special ways in situated experiments—minimizing threats to internal validity and
the ethical treatment of research participants. To set the stage for this discussion, we begin
by comparing experiments conducted in the laboratory and in the workplace.
Laboratory and Field Experimentation in Organizational Research
Over 100 years ago, bicycling enthusiast Norman Triplett made a curious observation—
cyclists performed better when racing against others on the same track than when racing
against the clock. Because Triplett also was a social scientist, he recognized that a key way to
explain this phenomenon was to test hypotheses about the effects of the presence or absence of others on
task performance by isolating variables of interest under controlled conditions. To do so, he
didn’t go to the bicycle track, but instead invited people to a laboratory where they performed
a different competitive task that also involved repetitive physical movements—winding
spools of line onto fishing reels. This investigation (Triplett, 1897–1898) was important
not only because it provided the earliest evidence of the now widely recognized social
facilitation effect (Guerin & Innes, 1993), but also because it was one of the first systematic
efforts to investigate a social phenomenon under controlled laboratory conditions. Little
could Triplett have imagined how strongly experimental research would dominate as a
means of studying social phenomena in the century that followed his modest contribution
(Higbee, Lott & Graves, 1976).
Three features of experimental research make it especially well-suited to isolating and
testing the effects of variables of interest: (1) the grouping of participants for comparison
purposes, (2) researcher-administered manipulation of conditions within those groupings,
and (3) control over extraneous influences that might affect the dependent variable (Cook &
Campbell, 1976). By systematically isolating the effects of interest, experimental research
provides evidence of causal relationships not afforded by nonexperimental research. As
such, experimental research has been acknowledged for its capacity to test inferences drawn
from theory as well as to “hunt for phenomena” (Fromkin & Streufert, 1976: 429)—that is,
to see if something can occur under certain circumstances.
Laboratory Experiments
As Festinger (1971) described them over three decades ago, laboratory experiments
occur when “the investigator creates a situation with the exact conditions he [or she] wants
to have and in which he [or she] controls some, and manipulates other, variables” (p. 9). They
are, “experimental events [that] occur at the discretion of the experimenter” (Weick, 1965:
198). Most experiments in management and related fields are conducted in the laboratory
(Sackett & Larson, 1990; Scandura & Williams, 2000). The vast majority of these are impact
experiments—that is, experiments in which researchers measure participants’ reactions
to situations that they have created for purposes of influencing them (Aronson, Brewer &
Carlsmith, 1985).
Some impact experiments, referred to as simulations, create rich environments by faithfully replicating key characteristics of settings of interest (Weick, 1965). A classic example
is the complex simulation study by Pritchard, Dunnette and Jorgenson (1972). These experimenters hired part-time workers for several weeks to staff a company created solely for
purposes of testing hypotheses about people’s reactions to inequitable payment. A more
recent example may be seen in the research on team learning using an elaborate military
simulation task (Ellis, Hollenbeck, Ilgen, Porter, West & Moon, 2003).
The prototypical impact experiment, however, is far less elaborate. Usually, the behavior
of college students is measured as they respond to settings devoid of contextual cues that
are not of interest to the researcher (for some recent examples, see Folkes & Whang, 2003;
Kuhn & Yockey, 2003). This involves intentionally creating artificial conditions in an effort
to isolate variables of interest from extraneous variables that might affect the dependent
variable (Mook, 1983). Studies of this type sometimes are criticized on the grounds that
they lack realism (Babbie, 1975)—that is, the experimental laboratory does not emulate
key features found outside the laboratory, such as in organizations (Fromkin & Streufert,
1976). Thus, although laboratory experiments are widely regarded for the opportunities
they provide to control variables of interest, it also has been argued that their artificiality
and lack of generalizability limit their usefulness in studying organizational phenomena
(Fromkin & Streufert, 1976).
Field Experiments
In contrast, given that field experiments are conducted in naturalistic settings, they are
not subject to the same criticisms of artificiality and lack of generalizability. However,
because it is difficult, if not impossible, to control the impact of variables in a field experiment, they tend to lack the same high degree of control found in most lab experiments.
As explained in one popular textbook, field experiments may be seen as an “attempt to
achieve a balance between control and naturalism in research by studying people’s natural
behavioral responses to manipulated independent variables in natural settings” (Whitley,
1996: 370). Some of the best-known social science studies are experiments conducted
in naturalistic settings. Sherif’s ingenious “Robbers Cave” field experiment on building
inter-group relations (Sherif, Harvey, White, Hood & Sherif, 1961), the classic Hawthorne
studies of work productivity conducted by Mayo (1933) and his associates (Roethlisberger &
Dickson, 1939), Milgram’s (1963) “shocking” laboratory studies on obedience to authority, and the controversial Stanford experiments simulating a prison environment (Zimbardo,
Haney, Banks & Jaffe, 1973) stand out as visible examples because of the keen insight they
provide into social phenomena.
Despite their compelling nature, such investigations are more likely to be acknowledged
for their historic value than emulated for their methodological rigor or their ethical treatment
of research participants. Indeed, limitations associated with their lack of randomization and
control over extraneous variables (Jones, 1990) and their potential abuse of human subjects
in some cases (Diener & Crandall, 1978) suggest that such investigations today would be
unlikely to pass the muster of either an editorial review board or an institutional review
board. Accordingly, such fascinating studies tend to be relegated to the history sections of
our textbooks.
As modern concerns about methodological rigor and respect for the rights of human research participants have developed, they appear to have left a casualty in their wake—the
naturalistic research experiment (Scandura & Williams, 2000). With their disappearance
has come a widespread belief that many social science experiments “just aren’t that interesting anymore” (Reis & Stiller, 1992). Although we readily acknowledge and endorse the
importance of rigor and ethics in conducting research, we object to the wholesale rejection
of experimentation in organizations that appears to have occurred among organizational
scientists as a backlash against such concerns.
As we see it, this is precisely where situated experiments enter the picture. They represent
an opportunity to create the best of both worlds by bringing the rigor and control of the
laboratory to the natural realism of the field. We now will endeavor to illustrate this point
by describing four recent situated experiments.
Examples of Situated Experiments
In presenting these examples, we are not claiming that ours are the only situated experiments ever conducted. Indeed, some classic field experiments (e.g., Matarazzo, Wiens,
Jackson & Manaugh, 1970) and experimental simulations (e.g., Pritchard et al., 1972) contain many of the same features of situated experiments. Our intent in this section is simply
to illustrate by way of examples with which we are intimately familiar some of the defining
characteristics of situated experiments and key issues associated with them.
Greenberg (2002): Employee Theft Situated Experiment
This situated experiment was designed to assess the effects of various naturalistic variables and manipulated variables on employee theft. Participants were customer service
representatives who worked in one of two branch offices in different parts of the United
States. One of the offices had an ethics program in place for approximately 6 months before
the study began. Among other things, this consisted of giving employees at least 10 hours of
training in ethical principles, behavioral expectations in keeping with the company’s code
of ethics, practice in making ethical decisions, and training in procedures to be followed in
seeking advice on ethical matters. Employees working at the other office, which served as
the control group, were given no such training. This constituted a naturalistic independent
variable insofar as it differentiated between participants on the basis of a naturally occurring
treatment.
This manipulation is of interest in several respects. Notably, although the researcher
capitalized on the presence or absence of the ethics program at different locations as determined by the company, he did not create this difference for the benefit of the research
itself. This has the ethical benefit of separating the researcher from any potential consequences associated with adverse reactions to the intervention (e.g., disruption caused by
rejection of the ethics program among workers) as well as problems linked to withholding the intervention from some location (e.g., spikes in inventory shrinkage in the control group). At the same time, there is a drawback of conducting the experiment in this
manner. Namely, the decision to launch the ethics program in one location as opposed to
another was not determined at random by the experimenter. Rather, as might be expected,
this was a matter of logistical convenience for the company. Such a violation of random
assignment is not atypical in field experiments (West, Biesanz & Pitts, 2003), and therefore may be expected as well in situated experiments using naturalistic manipulations of
this type. This poses a potential threat to internal validity insofar as pre-experimental differences in theft levels might have led to the decision to implement the ethics program
in one location as opposed to another. In the present instance, the company’s position
was that this was not the case and that the decision was dictated only by logistical concerns.
It also is important to note that it was not possible from this manipulation to determine
which of the several elements of the ethics program accounted for the results. In other words,
the multidimensional nature of the naturalistic independent variable created a confounding
between components of that variable. This confounding was not considered problematic
for theory-testing because the research was designed to differentiate between the overall
presence or absence of an ethics program with all its naturalistic elements intact. Given the finding that employees stole less from their employer when their office had an ethics program in place than when it had no such program, it would be instructive for future researchers to refine the
multifaceted independent variable used in this study by assessing the unique impact of its
individual components. Doing so requires comparing the effectiveness of ethics programs
that include or exclude various elements across samples of otherwise equivalent workers
and settings, a large-scale naturalistic experiment that would be very difficult to conduct.
Greenberg (2002) also tapped a naturally occurring individual difference between employees—level of cognitive moral development. Approximately two months before the study
was conducted, the short form of the Sociomoral Reflection Measure (SRM-SF; Gibbs, Basinger
& Fuller, 1992) was administered to a large group of employees from the company. Participants in the study were volunteers from this larger pretested group. Although this type
of procedure is routinely followed in research on individual differences, Greenberg (2002)
took an additional step because the protocol for scoring responses to the Gibbs et al. (1992)
measure involves interpreting open-ended responses. To be sensitive to the possibility that
local norms may dictate specific interpretations of these responses, the researcher arranged
for scoring to be performed by four officials from the company whose employees were
studied. These officials were blind to the research and to the identity of the participants.
Although they were carefully trained in Gibbs et al.’s (1992) clearly specified scoring protocol, it was considered a potential safeguard to use raters who were capable of interpreting
any work-specific responses that may have been made.
The research procedure was modeled after an earlier laboratory experiment by the same
author (Greenberg, 1993). This involved inviting participants to take a certain amount of
money as payment for completing a questionnaire. Taking amounts in excess of this stated
amount was considered theft. The investigator manipulated information about the sponsor of
the task, stating either that it was the company or a few individual managers. This procedure
constituted the “laboratory” component of this situated experiment.
As is typical in any experiment conducted in a university laboratory, participants in the
Greenberg (2002) situated experiment were thoroughly debriefed about the study before
being dismissed. Typically, college students require only limited explanation as to why
they participated in a study because the reason is almost always the same: They were
required to fulfill a class requirement. In a work setting, however, no such expectations
exist, thereby prompting the need for an explicit explanation as to why the unusual experience was created for employees. In this case, that explanation was straightforward: The
company conspired with the researcher to create temptations and opportunities to steal,
ironically, to provide a basis for subsequent training in adhering to ethical standards. As it
worked out, these training sessions, which were conducted several months after the study
was over, proved highly beneficial insofar as the trainer (who also was the researcher who
conducted the research at the company) was able to draw on the findings of the research
to illustrate various points about the impact of individual and situational determinants
of employee theft. In fact, it was with the specific intent of demonstrating these points
that organizational officials agreed to conducting the study. Although objective data were
not available to support this claim, all involved agreed that these experiences (which the
company came to regard as “pretraining experiential exercises”) proved invaluable in this
regard.
Greenberg and Roberge (in press): Mindless Excuses Situated Experiment
Greenberg and Roberge (in press) report the results of two situated experiments assessing
the conditions under which workers accepted “mindless” apologies for underpayment—
that is, content-free messages containing stray characters. Both studies manipulated the
extent to which apologies contained mindless content in conjunction with the degree of
underpayment created. In Experiment 1, participants were led to believe that the source of
their underpayment was a supervisor from their own company in all cases. Experiment 2
added the source of the underpayment as an additional independent variable: At one level,
the source also was a supervisor (as held constant in Experiment 1); at another level, it was said to be a stranger from an outside firm conducting research for its own use. From
the company’s perspective, the investigation constituted an exercise designed to demonstrate the impact of clear communication and fair payment on workplace behavior,
which were major topics subsequently covered in a training session conducted by the researcher.
The participants in both studies were customer service representatives from the same
parent company whose employees participated in the study by Greenberg (2002). However, participants in the Greenberg and Roberge (in press) experiments were from different
offices located in different US cities. Accordingly, the possibility of cross-experimental
contamination within the “subject pool” was unlikely. This is an important consideration
when conducting situated experiments in organizations. Unlike college students, who may
be expected to move on over time to classes that do not require research participation, employees are likely to remain on their jobs for a while, thereby increasing the chances that
they will participate in multiple studies if the same research site is tapped repeatedly. This
may be problematic because it risks creating “professional subjects” who are unlikely to
be representative of a broader sample (Carlston & Cohen, 1980), and also because it introduces a source of bias resulting from potential carry-over experiences from earlier research
participation (Kruglanski, 1975).
In Experiment 1, employees volunteered to complete a questionnaire sponsored by their
company after work hours in exchange for $10. After completing the task, they were given
information manipulating the degree to which they would be underpaid, accompanied by an
apology for that underpayment that varied with respect to the degree to which it contained
interpretable content. The primary dependent variable was assessments of the fairness of the
pay received for performing this task. To varying degrees, this pay was less than promised,
thereby providing an opportunity to examine the effects of different degrees of underpayment. The nature of the independent and dependent variables and the manner in which this
study was conducted were such that it could have been performed in a laboratory setting
outside the workplace. That is to say, none of the variables capitalized on unique features
of the work setting.
However, the follow-up investigation, Greenberg and Roberge’s (in press) Experiment
2, explicitly capitalized on the nature of the work setting by introducing a third variable—the
nature of participants’ relationship with the source of underpayment. Specifically, the person responsible for the underpayment was described either as an agent of the organization
(replicating Experiment 1), or a stranger with whom they had no connection whatsoever.
Although conceivably such a manipulation could have been carried out in a university laboratory (e.g., by indicating that the pay came from one’s professor as opposed to some
other anonymous source), the work setting made it possible to introduce this variable in a
highly natural manner. This is because workers in the company studied were accustomed
to having opportunities to complete questionnaires after work in exchange for $10. This
expectation heightened the salience of the underpayment manipulation because it was not
only a violation of the stated pay rate, but also of prevailing normative standards. Moreover, because workers in this company were used to completing questionnaires administered by different sponsors, this manipulation of the source of underpayment was likely
to be salient without arousing suspicion about its authenticity, as might have occurred in
a university lab. Cast in general terms, this study capitalized on naturalistic differences
in conditions and expectations of the participants, which is a unique benefit of situated
experiments.
Although several of the participants in the Greenberg and Roberge experiments had completed
questionnaires on earlier occasions, it is important to note that cross-study contamination
was unlikely. We say this because these employees’ previous questionnaire experiences
were affectively neutral and unrelated to the nature of these investigations. The greatest
potential threats of contamination stemmed from participants’ sharing their experiences with
coworkers. However, several experimental safeguards were incorporated into the procedure
to avoid this problem.
First, only one experimental session was conducted in each of the offices involved in the
research (three different offices in Experiment 1, and two different offices in Experiment
2, all of which were located in cities distant from one another). Thus, even if participants
shared their experiences with coworkers in their own facilities, these others were not going
to participate in the research.
Second, the employees involved in the research had no formal contact with their counterparts in other offices, thereby precluding the possibility that they would be able to share
their experiences, which would potentially bias the results by introducing time-of-study as a
confounding variable (for a discussion of the potential problems of what Gergen, 1973, has
called enlightenment effects, see Greenberg & Folger, 1988, pp. 157–159). Third, to further
minimize the unlikely possibility that information about the studies would be transmitted
between participants in the different offices, no mention was made of the fact that research
was going to be conducted elsewhere. In addition, to minimize the period within which
information may have been communicated between offices, the time between experimental
sessions was purposely kept at a minimum (in Experiment 1, the study was administered at
three different offices within one workweek and in Experiment 2, the study was conducted
at two different offices within three days). Moreover, Experiment 2 was conducted only
three weeks after the last experimental session was run in Experiment 1.
Insofar as the participants in these situated experiments were underpaid in the course
of the study, the same ethical considerations noted in conjunction with Greenberg (2002)
apply in this case as well. Notably, although participants in some conditions were led to
believe that they would be paid less than the promised, normatively appropriate, amount,
they actually were paid in full before leaving the experimental setting. As in any laboratory
setting in which participants are intentionally misled, Greenberg and Roberge (in press) fully
explained the necessity of using deception to the workers who participated in their research
as part of the debriefing procedure. As in the case of Greenberg (2002), the deception used
here was relatively benign and any potential distress caused by underpayment was very
short-lived. In this case as well, post-experimental questionnaires revealed that participants
were not distressed about the procedure used and readily claimed to understand the need
for deception in the experiment and the study’s value to themselves and the company.
Downloaded from jom.sagepub.com at PENNSYLVANIA STATE UNIV on May 11, 2016
Chen et al. (2003, Study 2): Situated Experiment on the Effects of Status on the Relationship
between Outcome Favorability and Procedural Fairness
Study 2 in Chen et al. (2003) presents the results of another situated experiment. This
investigation was conducted to examine the nature of the interactive relationship between
the favorability of rewards received and the fairness of the procedures used to determine
those rewards as a function of the relative status of the individuals. This investigation
introduced relative status as a moderator of the well-established interactive relationship
between outcome favorability and procedural fairness (Brockner & Wiesenfeld, 1996).
This investigation was performed on site at a financial services company as part of a
training program focusing on the fair treatment of coworkers. Specifically, it was a pretraining experience upon which the experimenter drew for purposes of illustrating various
principles of organizational justice in a training session in which participants were involved
several weeks later. The researcher conducted both the experiment and the training session
in conjunction with management’s wishes to develop methods for enhancing employees’
fair treatment of one another. Specifically, the company was interested in demonstrating to
employees some of the factors that led their colleagues to be interested in seeking them out
or to avoid them as teammates.
Participants were customer service representatives and supervisors who worked at the
same office of a financial services company. In this organization’s status hierarchy, the
supervisors were one level above the customer service representatives. The study was conducted in conjunction with an effort to determine how quickly employees could compute
consumers’ credit scores using a new computer program on which all received earlier training. Unlike the questionnaire tasks performed by participants in Greenberg (2002) and
Greenberg and Roberge (in press), this task was “real” insofar as actual performance was
measured on a task that had high face validity to participants.
Employees participating in the study were told that they would be paired randomly with
one of the other people in the room with whom they would be competing on the credit-scoring
task. On a computer screen they were informed of the other person’s job title, but not his or
her identity. As a result, participants could tell only that they were working against another
individual whose status was higher than, lower than, or equal to theirs,
but not exactly who this person was. To guard against bias associated with knowledge of
that person, individual names were not shared. This procedure illustrates a unique benefit
of a situated experiment: The levels of the independent variable capitalized on naturalistic
differences (participants recognized the other’s actually higher, lower, or equal status),
but the manner in which participants were assigned to conditions in which they believed
they were interacting with another of higher, lower, or equal status was accomplished in a
completely random fashion.
The dependent variable was of the type used in most experimental studies—a questionnaire. In this case, the instrument assessed participants’ desires to work with that same
partner on another project. After this measure was completed (along with several others
constituting the manipulation checks), participants were debriefed and all were given the
same amount of reward. Debriefing consisted of explaining the nature of the study, including
the fact that they were misled about their relative outcomes and the nature of the procedures
by which they were determined.
Internal validity was enhanced by assigning participants to combinations of conditions
at random. Likewise, the internal validity of the relative status manipulation was enhanced
by the fact that these status differences were salient and that participants were well aware of
them (as confirmed in manipulation checks). Although there is some potential confounding
between the other person’s status and other aspects of that individual’s identity, this possibility was minimized by the fact that the other person could have been any one of four other
people. Moreover, because all the individuals involved worked in different departments, the
chances were reduced that salient aspects of those individuals’ identities were known to the
participants.
It is important to note that participants in Chen et al. (2003, Study 2) were volunteers
who agreed to help assess the new credit-scoring system. This was, in fact, what they did. In
contrast to the bogus questionnaire task performed by participants in the Greenberg (2002)
situated experiment, performance on this research task was of value to the company. Because
the study was performed during working hours, no concerns were aroused about having to
pay overtime. The most serious ethical concern in this study was the false information presented regarding outcome quantity and procedure, and this was both minor and short-lived.
Distinguishing Situated Experiments from Laboratory Experiments and
Field Experiments
These examples illustrate four key features of situated experiments that distinguish them
from laboratory experiments and field experiments: (1) participants’ level of awareness of
experimental participation, (2) opportunities for random assignment, quality of manipulations, and control over variables, (3) the artificiality of the research setting, and (4) the
purpose of the research (for a summary, see Table 1).
Awareness of Experiment
By definition, participants in a lab experiment know that they are participating in a
research study. This has been documented to result in any of a variety of subject role
artifacts, such as the tendency for participants to be highly anxious (Kruglanski, 1975), or
to behave intentionally in a manner either contrary to or in keeping with their expectations
about what is expected of them (Cook, Bean, Calder, Frey, Krovetz & Reisman, 1970)—all
of which may bias the results (for a review, see Greenberg & Folger, 1988, Chapter 6). By
contrast, participants in most field experiments have no awareness that they are involved in
an experiment, thereby precluding the opportunity for them to bias their responses.

Table 1
Comparisons between lab experiments, field experiments, and situated experiments on key criteria

Criterion | Laboratory experiments | Field experiments | Situated experiments
Participants’ awareness of participation in research | High | Low | Relatively low
Opportunities for random assignment | High | Low | High for some variables, low for others
Quality of manipulations | Precise | Imprecise | Precise
Control over variables | High | Low | High
Artificiality of research setting | High | Low | High for some variables, low for others
Principal beneficiary of the research | Researcher (and students, if properly debriefed) | Researcher | Researcher, organization, and employees
Participants in situated experiments fall between these two extremes. They are aware that
they are involved in some kind of special activity, which may trigger some nominal levels
of arousal. However, because participants are unlikely to be aware of the actual experimental aspect of this event, they would not have any opportunity to bias their responses in
any systematic fashion. This was the case in all four of the situated experiments described
here. In these situated experiments, as often is done in university laboratories, the true
purpose of the research was disguised. Specifically, the ostensible purpose of the investigations described here either was to complete a questionnaire (Greenberg, 2002; Greenberg &
Roberge, in press) or to assess performance on a new work procedure (Chen et al., 2003).
As a result, there was no opportunity for participants to be aware of the actual research
itself.
Opportunities for Random Assignment, Quality of Manipulations, and Control over Variables
The opportunity to assign participants to conditions at random in an effort to hold constant
the effects of variables other than the independent variable is the sine qua non of the
laboratory experiment. Lab experiments also offer researchers the opportunity to manipulate
their independent variables precisely and to have control over extraneous variables. These
characteristics can enhance the internal validity of a lab experiment—not automatically, but
if properly conducted (Brewer, 2000).
By contrast, random assignment to conditions in field experiments is likely to be far more
limited in the case of most variables—in which case, the researcher would be said to be
conducting a quasiexperiment (see Greenberg & Folger, 1988, Chapter 5). Moreover, field
experimentation is likely to suffer from imprecise manipulations and give researchers only
limited control over key variables of interest.
Again, situated experiments represent a point somewhere between these extremes. For
those variables that are under the direct control of the experimenter, random assignment is
indeed possible. Depending on the exact nature of the experiment, only a few variables, if
any, are likely to fall outside the experimenter’s direct control. In fact, in the examples of
situated experiments presented here, only one variable (presence or absence of an ethics
program) in one experiment (Greenberg, 2002) was not under the experimenter’s control.
All other variables in all other studies were precisely manipulated and carefully controlled.
Artificiality of Research Setting
As noted earlier, although some researchers explicitly attempt to replicate key elements of organizations in their research settings, most others do not. That is, instead
of creating mundane realism (i.e., a situation that resembles some aspects of some setting of interest), they focus on creating experimental realism (i.e., conditions likely to
have an immediate effect on research participants) (Carlsmith, Ellsworth & Aronson,
1976). This follows from the argument that what is most important when it comes to
testing theory is faithfully operationalizing the variables of theoretical interest regardless
of whether or not these duplicate the characteristics of any particular non-laboratory setting (Berkowitz & Donnerstein, 1982). Hence, it has been argued that artificiality in experiments is not merely acceptable, but desirable (Henshel, 1980)—or, as Berkowitz and
Donnerstein (1982) put it, “artificiality is the strength and not the weakness of experiments”
(p. 256).
A key consideration in this regard concerns the scope of the theories being tested. In
the case of an applied field, such as management, researchers are inclined to be interested in testing theories that specify certain elements of a setting, such as a work group
or an organization. In this case, testing theories may well require that the setting not be
artificial with respect to those elements. Rather, they may need to be studied in their natural environments. In other words, although artificiality with respect to some variables
may be desirable, mundane realism is precisely what is necessary to examine the effects
of other variables. Several examples of this may be seen in the situated experiments described here. For example, Greenberg (2002) studied the presence or absence of an ethics
program, Greenberg and Roberge (in press) studied reactions to underpayment caused by
familiar or unfamiliar organizational agents, and Chen et al. (2003) examined the reactions
to people who differed with respect to their organizational status. The validity of efforts
to replicate these variables outside of an actual organizational context surely would be
questionable.
Benefits of Conducting the Research to the Participating Organization
In the case of scientifically oriented research, the researcher’s major (if not sole) concern is
likely to be insight into the theory or phenomenon under investigation. Although this applies
as well to scholars conducting situated experiments, the setting in which such research is
conducted promotes sensitivity to the interests of other stakeholders as well—specifically,
the organization and its employees.
Interestingly, the researcher’s theoretically based interest in conducting the investigation
may or may not be shared by officials of the host organization. For them, there is likely
to be a far more immediate return—an opportunity to demonstrate some key phenomenon
that subsequently will facilitate important organizational training. This was the case for
all four situated experiments described here. Specifically, what constituted a scientific experiment for the researcher was perceived as a valuable pretraining exercise for the host
organizations—and in one case (Chen et al., 2003), an opportunity to collect valuable normative data about task performance. Among the participants, the benefits of the research
experience are different still. For them, the immediate benefit comes from receiving extra
pay, rewards, or time off regular work assignments. Of course, this is in addition to any
long-term benefits linked to being trained or the abstract benefits of understanding the phenomena being researched. In short, we believe that properly conducted situated experiments
may be a win-win-win experience.
Internal Validity in Situated Experiments
Many of the characteristics of situated experiments that make them unique (e.g., participant-workers’ ongoing opportunities for communication and their advance knowledge of
one another) also render them potentially vulnerable to internal validity threats. We now
will identify several such threats along with ways of minimizing them.
Potential Violations of the Stable-Unit-Treatment-Value Assumption
One of the potential benefits of situated experiments is that they allow for random assignment of participants to most, if not all, conditions. Randomization makes it possible
to attain unbiased estimates of the causal effects of variables as long as the stable-unit-treatment-value assumption (SUTVA) is met (West et al., 2000). This requires satisfying
two conditions. First, the randomization procedure must be inert—that is, the manner by
which participants are assigned to conditions itself cannot affect in any way participants’
responses to the treatments received. The SUTVA would be violated, for example, if participants were led to believe that they were selected to participate in a particular condition
on some nonrandom basis (e.g., because of their special skills). The second criterion for
satisfying the SUTVA is that participants’ responses not be affected by their knowledge
of the treatments others receive (except, of course, insofar as this itself may constitute a
treatment). This would be the case, for example, if participants had some idea that their
colleagues in other conditions were getting paid differently.
The potential to violate each of these conditions strikes us as a real possibility in the case
of situated experiments. After all, because participation in research is not typically part
of the work experience, it may be tempting to induce participation by somehow leading
employees to believe that they were specially selected. Doing so would violate the SUTVA.
In addition, it is easy to envision that employees who otherwise may feel fairly paid for
performing an experimental task may come to feel relatively underpaid upon learning that
people in another group are getting paid more for doing the same work. This too would
violate the SUTVA.
West and his associates (2002) note that “If participants are not aware of the random
assignment and are only aware of the nature of the experimental condition in which they
participate, the likelihood that SUTVA will be met is greatly increased” (p. 48). To meet this
assumption, thereby yielding unbiased estimates of causal effects, these authors recommend
geographically isolating participants. This was precisely what was done in the two situated
experiments by Greenberg and Roberge (in press) and the one by Greenberg (2002), in
which parts of the experiments that were conducted at different times also were conducted
at different locations.
Group Administration of Treatments
In field experiments it is not unusual for researchers to administer different treatments to
different groups of participants. As insightful as such field experiments may be, the practice
of administering treatments to entire groups is potentially problematic insofar as it fails to
ensure that the responses of members of each treatment group are truly independent (West
et al., 2000). This nonindependence, in turn, leads to artificially reduced estimates of the
standard errors of the effects, effectively inflating the chance of Type 1 error (Barcikowski,
1981).
Because they capitalize on naturalistic differences, situated experiments are at risk for this
problem. Consider, for example, Greenberg’s (2002) study of employee theft. In this case,
it appeared safe to assume that differences in participants’ levels of cognitive moral development were evenly distributed between the two locations studied. Moreover, information
about the victim of the theft was randomized on an individual basis. However, because the
naturalistic manipulation of the ethics program occurred between groups, independence
cannot be assumed. In this particular case, however, because the intraclass correlation was
low (.04), and because the level of alpha selected for significance testing was more extreme
than usual (p < .001), it is unlikely that the null hypothesis regarding this variable was
rejected prematurely. A more sophisticated way of avoiding this potential problem when
testing hypotheses about naturalistic group-level effects is to use hierarchical linear modeling techniques that make appropriate corrections for the degree of nonindependence
within groups (e.g., Kreft & DeLeeuw, 1998).
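The variance inflation that nonindependence produces can be quantified with the Kish design effect, 1 + (m − 1)ρ, where m is the group size and ρ the intraclass correlation. A minimal sketch, in which the group size of 20 is a hypothetical value for illustration while the ICC of .04 is the figure reported above for Greenberg (2002):

```python
import math

def design_effect(cluster_size: float, icc: float) -> float:
    """Kish design effect: variance inflation caused by nonindependence
    of responses within groups (clusters)."""
    return 1.0 + (cluster_size - 1.0) * icc

def adjusted_se(naive_se: float, cluster_size: float, icc: float) -> float:
    """Inflate a naive, independence-assuming standard error to account
    for within-group clustering."""
    return naive_se * math.sqrt(design_effect(cluster_size, icc))

# Hypothetical group size of 20; ICC of .04 from the Greenberg (2002) example.
deff = design_effect(cluster_size=20, icc=0.04)  # 1 + 19 * .04 = 1.76
print(round(deff, 2))                            # 1.76
print(round(adjusted_se(1.0, 20, 0.04), 3))      # 1.327
```

Even a modest ICC nearly doubles the sampling variance here, which is why an uncorrected test on group-administered treatments overstates significance.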
Better yet, whenever possible, researchers should follow the practice of administering
conditions to individuals completely randomly within group settings as done in some of
the situated experiments described here. For example, in Study 2 of Chen et al. (2003) and
in both studies conducted by Greenberg and Roberge (in press), all independent variables
were administered on a completely randomized basis to individuals via computer terminals
that were not visible to others during the group administration of the study. In other words,
although participants assembled in groups, the independent variables were not administered
on this basis, but rather, individually. Insofar as the unit of analysis was the individual, this
procedure was appropriate.
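Randomizing over individuals rather than over sessions, as described above, amounts to shuffling a balanced list of conditions across the participants present. A sketch of such an assignment routine (the employee identifiers and the 2 x 2 cells are hypothetical, not taken from the studies discussed):

```python
import random

def assign_conditions(participant_ids, conditions, seed=None):
    """Randomly assign experimental conditions to individuals within a
    group session, keeping cell sizes as balanced as possible."""
    rng = random.Random(seed)
    # Repeat the condition list until it covers all participants, then shuffle.
    n = len(participant_ids)
    slots = (conditions * (n // len(conditions) + 1))[:n]
    rng.shuffle(slots)
    return dict(zip(participant_ids, slots))

# Hypothetical session: 8 employees, a 2 x 2 design delivered per terminal.
ids = [f"emp{i}" for i in range(1, 9)]
cells = ["high/fair", "high/unfair", "low/fair", "low/unfair"]
assignment = assign_conditions(ids, cells, seed=42)
```

Because each terminal displays only its own condition, participants assembled in one room still receive individually randomized treatments, preserving the individual as the unit of analysis.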
Breakdown of Randomization
The situated experiments described here were brief studies of the type typically conducted
in the laboratory. Although we envision that most situated experiments will be of this type,
it also is possible for them to be more complex and longitudinal in nature. This would be
the case, for example, if a researcher had groups of workers in different locations play a
negotiation game with one another over an extended period of time. Researchers conducting
situated experiments of this type need to be aware of a potential threat to internal
validity that does not arise in the hour-long pretraining exercises about which we have
been speaking—breakdown of randomization.
Although highly informative, large-scale field experiments conducted at multiple research
sites often run the risk of breakdown of randomization, particularly when people other than
the researcher (e.g., officials of the organization studied) are responsible for the treatments
(Boruch, McSweeny & Soderstrom, 1978). This occurs all too frequently, for example, in
epidemiological research when physicians required to administer drugs to certain patients
fail to do so (e.g., Kopans, 1994). This problem may be avoided in situated experiments in
precisely the same manner that has been recommended in the case of field experiments—
that is, by carefully monitoring the randomization process and the treatments received by
each participant in the study (Braucht & Reichardt, 1993).
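The recommended monitoring reduces, in practice, to reconciling the treatments each participant was assigned to receive against the treatments actually delivered. A minimal sketch of such an audit (all participant labels and condition names are hypothetical):

```python
def audit_randomization(intended, delivered):
    """Compare assigned treatments against treatments actually received,
    flagging any breakdowns of randomization."""
    breakdowns = {}
    for pid, planned in intended.items():
        actual = delivered.get(pid)
        if actual != planned:
            breakdowns[pid] = (planned, actual)
    return breakdowns

# Hypothetical monitoring log: one participant received the wrong treatment,
# another never received one at all.
intended = {"p1": "ethics-program", "p2": "control", "p3": "control"}
delivered = {"p1": "ethics-program", "p2": "ethics-program"}
print(audit_randomization(intended, delivered))
# {'p2': ('control', 'ethics-program'), 'p3': ('control', None)}
```

Running such a check after every session, rather than at the end of the study, lets the researcher catch and correct delivery failures before they accumulate.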
Creating Opportunities to Enhance Internal Validity
Thus far, we have discussed potential threats to internal validity associated with situated
experiments and ways of overcoming them. We also should note, however, that situated
experiments have several features that can enhance internal validity, at least when
capitalized upon appropriately. Several examples may be seen in the
situated experiments presented here.
First, it may be possible in situated experiments to capitalize on the unique features of the
research settings by involving organizational officials in the research process. For example,
Greenberg (2002) did this by using executives from the company within which the data were
collected to participate in the scoring of the cognitive moral development measure. As noted
earlier, this was done in an effort to facilitate the interpretation of any company-specific
responses that may have been made.
Second, by capitalizing on naturalistic qualities of the work environment, situated experiments provide opportunities to embed methodological features that otherwise may arouse
suspicion in the laboratory. Greenberg’s (2002) use of a bowl of pennies on the training room
table to study employee theft is a good example. Because participants were accustomed to
using this bowl of pennies for other training exercises, its use in this situated experiment
was unlikely to cause bias induced by suspicion, as sometimes occurs in the laboratory (e.g.,
Greenberg & Folger, 1988). This is not to say, of course, that any apparatus or any manipulation in a field setting automatically provides this assurance. Rather, by capitalizing on
their natural occurrence, researchers have a good opportunity to camouflage manipulations
and measurements of interest to them.
Third, we note that situated experiments provide good opportunities to enhance internal
validity by studying complex naturalistic variables in a highly controlled manner. Differences in employee status, as studied in Chen et al.’s (2003) Study 2, represent one such
example. In this case, the status differences between employees were “real” insofar as they
reflected actual differences between employees in the same company among whom they
had meaning. To manipulate status in a laboratory, as sometimes has been done by giving
participants feedback about the status of others (e.g., Jones, Gergen & Jones, 1963), is surely
less meaningful, thereby raising questions about the internal validity of the manipulation (if
not also its salience). We acknowledge that the practice of using “real other employees” to
manipulate status runs the risk of introducing confounds associated with feelings about the
individual others involved in the study—who, because they are known, cannot be made to
be neutral with respect to all other variables. To minimize this problem, Chen et al. (2003)
created a situation in which the status of the other individual involved in the study was
known although his or her exact identity was unclear because it could have been one of
several others with whom the participants did not have a direct reporting relationship.
Ethical Considerations in Situated Experiments
Although it is important to consider the ethical treatment of participants in all research
investigations, special ethical considerations are involved in situated experiments. These
involve the role of deception and the nature of the debriefing procedure.
Deception
Involvement in research experiences among workers must be completely voluntary, appropriately compensated, and recognized as a valuable, if not also enjoyable, experience.
Indications to the contrary should be taken as grounds for not doing the research, or for terminating the study if it has already begun. Even if company officials believe that the experience may be
“good for them,” it is the scientist’s ethical and legal obligation to avoid doing anything that
might jeopardize the employer–employee relationship. After all, the underlying purpose of
any organizational research is to improve the lot of management and employees. Conducting research that threatens to jeopardize the well-being of any individual or the company
renders that investigation unjustifiable on the grounds that its costs far outweigh its benefits.
This raises the matter of deceiving employees in the course of a situated experiment.
For many years, experimental social psychologists raised questions about the messages
they were sending about their profession by routinely deceiving participants in laboratory
research. As Vinacke (1954) challenged his colleagues a half-century ago, “What sort of
reputation does a laboratory which relies heavily on deceit have in the university and the
community where it operates?” and “what possible effects can there be on [the public’s]
attitude toward psychologists . . . ?” (p. 155).
To be sure, the same type of question may be raised about situated experiments in organizations in which deception occurs. In this case, however, the question takes a different
form, and one with a decidedly more immediate impact—namely, what is the long-term
impact of deceiving employees? Given the widespread importance of developing trust between employers and employees (Miller, 2001), it is reasonable to ask why employers may
be willing to breach that trust in the course of conducting an experiment. In the case of
the situated experiments reported here (and in any that may be done), the answer is clear:
Trust was not violated. In contrast to social psychologists who expressed concern about the
public image of their profession created by repeated use of deception (Baumrind, 1985)—
not to mention, the validity of research findings predicated on this technique (Aronson &
Carlsmith, 1968)—the practice of deceiving research participants in the workplace is new
and not widespread (yet, at least). Thus, it is unlikely that participants will come to a situated
experiment with preconceived expectations that they will be deceived.
To be ethically aware, however, organizational researchers must consider the possibility
that employees’ trust in their employers may be eroded by the decision to deceive them in the
course of the study. However, we advocate that the nature of the deception used in situated
experiments should be so benign in magnitude, so short-lived, and so thoroughly debriefed
as to not jeopardize trust. This was the case in all the situated experiments described here.
These considerations, although important, are not significantly different from those raised
about any experiment involving human participants conducted in a university laboratory.
When conducting an experiment in an organization, however, their importance is magnified
by virtue of the ongoing and sometimes delicate nature of the relationship between employers and employees. In fact, we caution against conducting situated experiments involving
deception in any organizations in which the relationship between labor and management is
strained. Thus, if the overall level of trust between people in an organization is already low,
it would be inappropriate and inadvisable to threaten relationships further by introducing
deceptive practices, no matter how benign or ultimately effective they may be.
Debriefing
The process of debriefing research participants involves not only revealing any deceptions
used in the course of the study, but also fully explaining the nature of the investigation itself
(for an overview of this process, see Greenberg & Folger, 1988, Chapter 8). The need to
debrief employees who participated in a situated experiment is somewhat different from
the need to debrief undergraduate student volunteers who participate in research to fulfill a
course requirement. This is predicated on two major differences between the incentives for
participating in research in each of the settings.
First, in the case of college students, research participation typically is justified on the
grounds that it provides a useful educational experience, and the benefits gleaned from this
experience represent a major form of compensation to the participant for the time spent. By
contrast, employees may have little reason to expect to receive an “educational experience”
as part of their regular work activities—even such extra-role behaviors as volunteering to
complete a survey, as in Greenberg (2002). Second, whereas college students typically are
required to participate in experimental research to fulfill a course requirement and know
that they are involved in a research study, employees in situated experiments typically
get involved as volunteers and do not recognize the research aspect of their activities.
In fact, participants in Greenberg’s (2002) situated experiment were led to believe that
any “research” in which they were involved had to do with the bogus questionnaire they
completed as a cover-story for underpaying them.
Justifying such deception on ethical grounds raises several considerations. As in any
research study, the nature of the experience for participants who are “kept in the dark”
should be such that participants would not object to it even if they had complete information
(Greenberg & Folger, 1988). This was done by Greenberg (2002) by conducting a pilot study
in which a thorough description of the procedure to be used was presented to another group
of workers who were not involved in the research. Responses to questionnaires and in-person interviews confirmed that there was no reason to suspect that any employees would
have reason to feel negative about the experience. Also, as noted earlier, the educational
benefits to the employer should be considerable. Because Greenberg’s (2002) findings were
used to facilitate training in the ethics program, the study was widely considered a useful
educational experience.
Unlike college students who typically are “paid back” with knowledge, employees are
working for a living, so any time they spend participating in research should be compensated at their usual rate of pay (or overtime, if indicated). Indeed, workers involved in the
studies by Greenberg (2002) and by Greenberg and Roberge (in press) were compensated
for their extra hour spent on their jobs. Likewise, participants in Chen et al. (2003, Study 2)
were paid all the tickets promised to them in advance. Moreover, we must consider differences in the normative expectations surrounding each setting. Although it is widely believed
that there is some benefit to students for participating in research, making the experience
worthwhile for them, some small degree of coercion is expected in student–teacher relationships. After all, it is normatively appropriate for professors to dictate class requirements
in a manner that gives students no voice in the process. By contrast, in recent years workers have become less likely to accept openly coercive tactics. This is especially important
given the importance of preserving the relationship between workers and their managers.
Therefore, when conducting situated experiments, coercion must not be tolerated under any
circumstances.
Conclusion
Research methodologists have long advocated that as behavioral scientists, our mission
is not to rely on any single research technique, but rather to examine corroborating
evidence from studies using multiple research methods with offsetting flaws (McGrath,
1982; Webb, Campbell, Schwartz & Sechrest, 1966). In this regard, we believe that the
situated experiment is one method that deserves a prominent place in the methodological
toolkit of organizational scientists. By pointing out the potential benefits of situated
experiments, we hope to encourage future researchers to use them to investigate important
phenomena. We believe that such efforts will result in research that ultimately benefits the
advancement of science (by virtue of its theoretical value) and immediately benefits the
organizations within which the research is conducted (by virtue of its applied value as an
exercise).
References
Anderson, N., Ones, D. S., & Sinangil, H. K. (Eds.). 2002. Handbook of industrial, work, and organizational
psychology. Vol. 1: Personnel psychology. Newbury Park, CA: Sage.
Anderson, N., Ones, D. S., Sinangil, H. K., & Viswesvaran, C. (Eds.). 2002. Handbook of industrial, work, and
organizational psychology. Vol. 2: Organizational psychology. Newbury Park, CA: Sage.
Aronson, E., Brewer, M., & Carlsmith, J. M. 1985. Experimentation in social psychology. In G. Lindzey & E.
Aronson (Eds.), The handbook of social psychology: 3rd ed., Vol. 1, 441–486. New York: Random House.
Aronson, E., & Carlsmith, J. M. 1968. Experimentation in social psychology. In G. Lindzey & E. Aronson, (Eds.),
The handbook of social psychology: 2nd ed., Vol. 2, 1–79. Reading, MA: Addison-Wesley.
Babbie, E. R. 1975. The practice of social research. Belmont, CA: Wadsworth.
Barcikowski, R. S. 1981. Statistical power with group mean as the unit of analysis. Journal of Educational Statistics,
6: 267–285.
Baumrind, D. 1985. Research using intentional deception: Ethical issues revisited. American Psychologist, 40:
165–174.
Berkowitz, L., & Donnerstein, E. 1982. External validity is more than skin deep: Some answers to criticisms of
laboratory experiments. American Psychologist, 37: 245–257.
Blackburn, R. S. 1987. Experimental design in organizational settings. In J. W. Lorsch (Ed.), Handbook of
organizational behavior: 126–139. Englewood Cliffs, NJ: Prentice Hall.
Boruch, R. F., McSweeny, A. J., & Soderstrom, E. J. 1978. Randomized field experiments for program planning,
development, and evaluation. Evaluation Quarterly, 2: 655–695.
Braucht, G. N., & Reichardt, C. S. 1993. A computerized approach to trickle-process, random assignment.
Evaluation Review, 17: 79–90.
Brewer, M. B. 2000. Research design and issues of validity. In H. T. Reis & C. M. Judd (Eds.), Handbook of
research methods in social and personality psychology: 3–16. New York: Cambridge University Press.
Brockner, J., & Wiesenfeld, B. M. 1996. An integrative framework for explaining reactions to decisions: The
interactive effects of outcomes and procedures. Psychological Bulletin, 120: 189–208.
Carlsmith, J. M., Ellsworth, P. C., & Aronson, E. 1976. Methods of research in social psychology. Reading,
MA: Addison-Wesley.
Carlston, D. C., & Cohen, J. L. 1980. A closer examination of subject roles. Journal of Personality and Social
Psychology, 38: 857–870.
Chastain, G., & Landrum, R. E. 1999. Protecting human subjects: Departmental subject pools and institutional
review boards. Washington, DC: American Psychological Association.
Chen, Y., Brockner, J., & Greenberg, J. 2003. When is it “A pleasure to do business with you”? The effects of
relative status, outcome favorability, and procedural fairness. Organizational Behavior and Human Decision
Processes, 92: 1–21.
Cook, T. D., Bean, J., Calder, B., Frey, R., Krovetz, M., & Reisman, S. 1970. Demand characteristics and three
conceptions of the frequently deceived subject. Journal of Personality and Social Psychology, 14: 185–194.
Cook, T. D., & Campbell, D. T. 1976. The design and conduct of quasi-experiments and true experiments in field
settings. In M. D. Dunnette (Ed.), Handbook of industrial and organizational psychology: 223–326. New York:
John Wiley & Sons.
Diener, E., & Crandall, R. 1978. Ethics in social and behavioral research. Chicago: University of Chicago Press.
Dunnette, M. D., & Hough, L. M. (Eds.). 1990. Handbook of industrial and organizational psychology: Vol. 1.
Palo Alto, CA: Consulting Psychologists Press.
Ellis, A. P. J., Hollenbeck, J. R., Ilgen, D. R., Porter, C. O. L. H., West, B. J., & Moon, H. 2003. Team learning:
Collectively connecting the dots. Journal of Applied Psychology, 88: 821–835.
Festinger, L. 1971. Laboratory experiments. In W. M. Evan (Ed.), Organizational experiments: Laboratory and
field research: 9–24. New York: Harper and Row.
Fisher, C. D. 1984. Laboratory experimentation. In T. S. Bateman & G. S. Ferris (Eds.), Method and analysis in
organizational research: 169–185. Reston, VA: Reston Publishing.
Folkes, V. S., & Whang, Y. 2003. Account-giving for a corporate transgression influences moral judgment: When
those who “spin” condone harm-doing. Journal of Applied Psychology, 88: 79–86.
Fromkin, H. L., & Streufert, S. 1976. Laboratory experimentation. In M. D. Dunnette (Ed.), Handbook of industrial
and organizational psychology: 415–465. New York: John Wiley & Sons.
Gergen, K. J. 1973. Social psychology as history. Journal of Personality and Social Psychology, 26: 309–320.
Gibbs, J. C., Basinger, K. S., & Fuller, D. 1992. Moral maturity: Measuring the development of sociomoral
reflection. Hillsdale, NJ: Lawrence Erlbaum Associates.
Golembiewski, R. T. 2000. Handbook of organizational behavior: 2nd ed. New York: Marcel Dekker.
Gordon, M. E., Slade, L. A., & Schmitt, N. 1986. The “science of the sophomore” revisited: From conjecture to
empiricism. Academy of Management Review, 11: 191–207.
Greenberg, J. 1993. Stealing in the name of justice: Informational and interpersonal moderators of theft reactions
to underpayment inequity. Organizational Behavior and Human Decision Processes, 54: 81–103.
Greenberg, J. 2002. Who stole the money, and when? Individual and situational determinants of employee theft.
Organizational Behavior and Human Decision Processes, 89: 985–1003.
Greenberg, J., & Folger, R. 1988. Controversial issues in social research methods. New York: Springer-Verlag.
Greenberg, J., & Roberge, M. E. in press. The efficacy of mindless apologies for underpayment inequity: When
is it “Only the Thought that Counts”? Journal of Applied Psychology.
Guerin, B., & Innes, J. 1993. Social facilitation. New York: Cambridge University Press.
Henshel, R. L. 1980. The purpose of laboratory experimentation and the virtues of artificiality. Journal of
Experimental Social Psychology, 16: 466–478.
Higbee, K. L., Lott, W. J., & Graves, J. P. 1976. Experimentation and college students in social psychology research.
Personality and Social Psychology Bulletin, 2: 239–241.
Jones, E. E., Gergen, K. J., & Jones, R. G. 1963. Tactics of ingratiation among leaders and subordinates in a status
hierarchy. Psychological Monographs, 77: 1–20.
Jones, R. A. 1985. Research methods in the social and behavioral sciences. Sunderland, MA: Sinauer Associates.
Jones, S. R. 1990. Worker interdependence and output: The Hawthorne studies reevaluated. American Sociological
Review, 55: 176–190.
Kopans, D. B. 1994. Screening for breast cancer and mortality reduction among women 40–49 years of age.
Cancer, 74: 311–322.
Kreft, I. G. G., & DeLeeuw, J. 1998. Introducing multilevel modeling. London: Sage.
Kruglanski, A. W. 1975. The human subject in the psychology experiment: Fact and artifact. In L. Berkowitz
(Ed.), Advances in experimental social psychology: Vol. 8, 101–147. Orlando, FL: Academic Press.
Kuhn, K. M., & Yockey, M. D. 2003. Variable pay as a risky choice: Determinants of the relative attractiveness of
incentive plans. Organizational Behavior and Human Decision Processes, 90: 323–341.
Matarazzo, J. D., Wiens, A. N., Jackson, R. H., & Manaugh, T. S. 1970. Interviewee speech behavior under
different content conditions. Journal of Applied Psychology, 54: 15–26.
Mayo, E. 1933. The human problems of an industrial civilization. London: Macmillan.
McGrath, J. 1982. Dilemmatics: The study of research choices and dilemmas. In J. E. McGrath, J. Martin, & R.
A. Kulka (Eds.), Judgment calls in research: 69–102. Newbury Park, CA: Sage.
Milgram, S. 1963. Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67: 371–
378.
Miller, G. 2001. Why is trust necessary in organizations? The moral hazard of profit maximization. In K. S. Cook
(Ed.), Trust in society: 307–331. New York: Russell Sage Foundation.
Mook, D. G. 1983. In defense of external invalidity. American Psychologist, 38: 379–387.
Pritchard, R. D., Dunnette, M. D., & Jorgenson, D. O. 1972. Effects of perceptions of equity and inequity on
worker performance and satisfaction. Journal of Applied Psychology, 56: 75–94.
Reis, H. T., & Stiller, J. 1992. Publication trends in JPSP: A three-decade review. Personality and Social Psychology
Bulletin, 18: 465–472.
Roethlisberger, F. J., & Dickson, W. J. 1939. Management and the worker. Boston: Harvard University Press.
Rogelberg, S. G. 2002. Handbook of research methods in industrial and organizational psychology. Malden, MA:
Blackwell.
Rosenthal, R. 1967. Covert communication in the psychological experiment. Psychological Bulletin, 67: 356–
367.
Sackett, P. R., & Larson, J. R., Jr. 1990. Research strategies and tactics in industrial and organizational psychology.
In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology: 419–489.
Palo Alto, CA: Consulting Psychologists Press.
Scandura, T. A., & Williams, E. A. 2000. Research methodology in management: Current practices, trends, and
implications for future research. Academy of Management Journal, 43: 1248–1264.
Seeman, J. 1969. Deception in psychological research. American Psychologist, 24: 1025–1028.
Sherif, M., Harvey, O. J., White, B. J., Hood, W. R., & Sherif, C. W. 1961. Intergroup cooperation and competition:
The Robber’s Cave experiment. Norman, OK: University Book Exchange.
Skrondal, A., & Rabe-Hesketh, S. 2004. Generalized latent variable modeling: Multilevel, longitudinal, and
structural equation models. Boca Raton, FL: CRC Press.
Smith, E. R. 2000. Research design. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social
and personality psychology: 17–39. New York: Cambridge University Press.
Stone-Romero, E., Weaver, A. E., & Glenar, J. L. 1995. Trends in research design and data analytic strategies in
organizational research. Journal of Management, 21: 141–157.
Triplett, N. 1897–1898. The dynamogenic factors in pacemaking and competition. American Journal of
Psychology, 9: 507–533.
Vinacke, W. E. 1954. Deceiving experimental subjects. American Psychologist, 9: 155.
Webb, E. J., Campbell, D., Schwartz, R., & Sechrest, L. 1966. Unobtrusive measures: Nonreactive research in the
social sciences. Chicago: Rand McNally.
Weick, K. E. 1965. Laboratory experiments with organizations. In J. G. March (Ed.), Handbook of organizations:
194–260. Chicago: Rand McNally.
West, S. G., Biesanz, J. C., & Pitts, S. C. 2000. Causal inference and generalization in field settings: Experimental
and quasi-experimental designs. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social
and personality psychology: 40–84. New York: Cambridge University Press.
Whitley, B. E., Jr. 1996. Principles of research in behavioral science. Mountain View, CA: Mayfield.
Zimbardo, P. G., Haney, C., Banks, W. C., & Jaffe, D. 1973, April 8. The mind is a formidable jailer: A Pirandellian
prison. The New York Times Magazine: Section 6, 36, 39–44.
Jerald Greenberg, Ph.D., is the Abramowitz Professor of Business Ethics and Professor of
Organizational Behavior at the Ohio State University’s Fisher College of Business. He has
published extensively in the field of organizational justice and has won numerous awards
for his research. Professor Greenberg also has authored or edited many books, including the
texts, Behavior in Organizations (with Baron) and Managing Behavior in Organizations.
He holds Fellow status in the Academy of Management and the Society for Industrial and
Organizational Psychology.
Edward C. Tomlinson is a doctoral candidate in organizational behavior and human resources at The Ohio State University. He holds an undergraduate degree in economics and
business from Virginia Military Institute, an MBA from Lynchburg College, and a Master’s
in Labor and Human Resources from The Ohio State University. His primary research interests within organizational behavior include the role of trust in professional relationships,
negotiation and dispute resolution, and employee deviance.