
Proceedings of the 38th Hawaii International Conference on System Sciences - 2005
Group Deception in Computer-Supported Environments
Kent Marett
Washington State University
Joey F. George
Florida State University
Abstract
Business organizations emphasize the importance of
teamwork and collaboration within work groups more
than ever before. Unfortunately, group interaction is not
always positive. Very little research has been conducted
to investigate the behavior and judgments of group members who belong to a group in which one of the members is deceptive. This study is one of the first
attempts to look at this phenomenon from both the deceiver and receiver sides.
Groups of three student subjects engaged in a
group negotiation task, with one of the group members
randomly assigned the role of deceiver. Groups varied by the availability of computer-supported communication for discussion purposes, their physical
proximity with one another, and the number of group
members who were warned about the possibility of
deception. Results indicated that individuals lied more
when using computers to communicate with others and
when both of their group partners had been warned.
Group members were not proficient at detecting lies in
any of the conditions. Implications of these findings for research and practice are discussed.
1. Introduction
Information technology allows business
organizations to emphasize the importance of teamwork
and collaboration within work groups more than ever
before. Computer-mediated communication (CMC) has
been a popular tool for coordinating work groups, in
large part because of the unique capabilities it offers its
users, including allowing the dispersion of group
members and different levels of synchronous discussion.
Unfortunately, group interaction is not always positive.
Kling [1] claims that research relying solely on the convivial relationships between group members, while ignoring coercive, competitive, and conflictual relations, is not realistic; such relations should be acknowledged in any type of group research. When group members decide to
covertly act in their own self-interests, the use of
deception is a common tactic. As Ekman [2] implied,
lying is so universal that it is relevant to all human
affairs. It is logical to assume that deceit occurs across
electronic channels, whether it be via e-mail, chat, or any
of the other new media, every bit as much as it occurs in
traditional conversation. However, little to no previous
research has focused on the deceptive behavior of group
members who use CMC to conduct deceptive
communication.
This study explores the behavior and performance of
individuals, both the deceivers and their intended targets,
who take advantage of the inherent properties of CMC to
communicate with others within their work groups.
Groups differed in terms of the availability of computer
support and in terms of proximity, with some groups
meeting in the same room while others were dispersed.
Groups also differed by whether group members were warned that deception might be present within the group discussion. The primary research question
that drove this study is: How do people differ in their
behavior when deceptive communication occurs in
groups meeting under the varying conditions of
proximity, computer support, and warnings?
The behavior of interest is both the amount of deception contributed to the discussion and the detection of lies by the recipients.
The following section will discuss the prior
literature and theory base that informed this study. The
research model and hypotheses that were tested are then
presented. That is followed by a brief description of the
research design and procedures used for conducting the
experiment. The study concludes with potential
implications of this research.
0-7695-2268-8/05/$20.00 (C) 2005 IEEE
2. Prior Literature and Theory
2.1 Deception Literature
For the purposes of this study, the term deception is
defined as “a message knowingly transmitted by a sender
to foster a false belief or conclusion by the receiver” [3, p. 205]. In order to provide a more realistic analysis of
socially interactive communication between a deceptive
sender and receivers, Buller and Burgoon [3] developed
their interpersonal deception theory (hereafter referred
to as IDT). This theory views deceptive communication
as a strategic activity, with interaction between
conversational participants influencing future behavior
and cognition for all involved, and this is the view of
deceptive communication taken here.
According to IDT, deceivers will judge the success
of their deception by assessing the behavior and
reactions of its receivers, and they will adjust their own
behavior and deceptive content if necessary. Their
deceptive messages contain not only the intentionally
false information but also any ancillary behavior and content protecting the sender from involuntary cues. To account for these cues, IDT draws upon Ekman’s [2]
leakage theory, which states that most deceivers will
become psychologically aroused while lying and will
inadvertently display, or “leak out,” behavioral cues
hinting at their duplicity. These cues include such
behavior as pupil dilation, increased voice pitch, and
self-grooming motions, among others [4]. It has been
proposed that leakage is the result of feelings of fear,
guilt, excitement, or anxiety [5] or from being overly
motivated [6]. Because the task of shaping and
maintaining a deceptive message is cognitively taxing,
untrained deceivers are unable to simultaneously manage
their nonverbal behavior, resulting in the leakage of
deceptive indicators [3].
Unfortunately, people are not very proficient at
detecting lies, as prior studies state that accuracy rates
are generally around fifty percent [7]. One inhibiting
factor may be the natural disposition found in most
people that McCornack and Parks [8] termed the truth
bias; most people believe that others are being truthful
with them until given reason to believe otherwise, and this bias has been found among people ranging from complete strangers to intimate couples [9]. However, truth bias can be lessened by warnings from an external source, which can perhaps lead to better lie detection [10, 11]. It is
important to note that all of the aforementioned studies
have focused on dyads and not group deception, which is
the goal here.
People stand to perform better at successfully lying
to others, as well as detecting lies, when they have verbal
and nonverbal feedback to gauge during the discussion.
The medium that is being utilized to communicate thus
becomes a factor in deception, since not all media
transfer the same feedback between communicative
partners. CMC is leaner in this regard than other more
traditional media. The following section reviews media
differences for groups that are either computer-supported
or not.
2.2 Media Differences and Proximity
This study focuses on groups using either electronic
media or more traditional means to communicate with
each other, resulting in a need to review past research on
cross-media differences. Two prominent theories in this
area are media richness theory and social presence
theory. Media richness theory refers to the differences
in the capacities of communication media to transmit
“rich” messages, particularly the amount of feedback,
social cues, language variety, and personal focus that can
be conveyed between partners [12]. Face-to-face verbal
communication is considered the richest medium in
terms of the potential number of cues transmitted, while
formal numeric text is considered the least rich, and
other types of media fall within the range between the
two. The theory claims that richer media allow users to
communicate more quickly and clearly during highly
ambiguous and equivocal situations, resulting in better
performance on associated tasks. There is supposedly a
proper fit between equivocality and medium that affects
performance, although some studies have produced
conflicting findings [13, 14].
Social presence theory, developed by Short,
Williams, and Christie [15] focuses on the degree to
which communicating participants can sense the
presence of others while using a particular medium.
Similar to media richness, the theory states that the more
the channel availability of a medium limits the
transmission of social cues, the less personable and
socially sensitive the communication is, resulting in
communicators paying less attention to the other
participants. The absence of social presence from
communication has been found to have an impact on the
communicative event, especially within groups. Social
presence is a necessary component for establishing and
enforcing group norms, an effect shown especially in
leaner environments [16, 17]. The relative lack of social
presence in computer-mediated communication has been
tied to antisocial behaviors, such as “flaming” [18].
Along those lines, leaner media may also be more
conducive for deception among group members.
Because the number of social cues transmitted by
leaner media is restricted, many of the indicators used
reliably to diagnose deception are prevented from
reaching information receivers.
Using the cue transmission portion of these two theories, Rao and Lim [19] developed a table ranking indicators of deception by
their detectability across video-based, audio-based, and
text-based media. The number of indicators that can be
leaked in a video mode is larger than in audio and text-based modes. Their conclusion is that the type of
medium can prevent people from using their lie detection
abilities to their full advantage.
Finally, research focused on computer-supported
media has given attention to the differences in
performance when members are physically dispersed, as
can be the case with group work [20]. One explanation
for these differences in performance for dispersed group members, for better or worse, may be a lack of access to social cues normally associated with group
work. Tung and Turban [21] speculate that distributed
groups miss social cues such as laughter, disruptive
movements, or the sound of other group members
typing, which add to a temporal patterning of work in
collocated groups. Another explanation for performance
differences is that collocated groups are prone to
psychological effects that dispersed groups may be able
to avoid because of their remoteness. With its origin in
social psychology, the theory of social facilitation seeks
to explain the changes in behavior in both human and
animal subjects when they are in the presence of others.
Zajonc offered an explanation for the phenomenon by suggesting that the mere physical presence of others can increase the drive level within the individual, resulting in a response governed by strong habits [22]. In a meta-analysis of 287 studies, Guerin [23] differentiated
between several phenomena underlying social facilitation, including the expected evaluation from the
audience, self-presentation, attempts to conform to
socially accepted behavior, and cognitive conflict. In
terms of task performance, Baron and colleagues [24]
found that the presence of others can have a facilitating effect for simple tasks, while Evans [25] found it can have an inhibiting effect during the completion of a more
complex version of the task. Lying can be either
mindless or difficult, depending on its consequences and
target. Because CMC provides additional opportunities
for groups to meet while apart, this prior research is
relevant, but no previous work testing how social cues and social facilitation affect the act of deceiving others or the process of detecting lies seems to have been
published.
3. Research Model and Hypotheses
The research model tested in this study is
illustrated below in Figure 1. Derived from the literature
review in the previous section, the research model
illustrates the proposed relationships among five
constructs: the communicative medium, proximity,
warnings, amount of deceptive communication, and
deception detection accuracy.
Six hypotheses are presented, with the first three dealing specifically with the amount of deceptive information submitted by a deceitful group member, and the other three focusing on the accuracy of the other group members at detecting that deception.
[Figure 1 depicts the research model: Computer-Supported Communication (H1A, H2A), Proximity (H1B, H2B), and the Number of Forewarned Receivers (H1C, H2C) are proposed to influence the Amount of Deception and Detection Accuracy.]
Figure 1: Research Model
Prior research suggests that CMC users may be more
prone to deceiving others than when using a more
traditional medium, which is a strategic use of
technology predicted by Zmud [26]. This could be due
in large part to an increased negative emotional state
caused by using CMC, which can result in less
evaluation apprehension [27], less inhibition when
communicating [28], and less effort to be polite to others
[29]. Like those antisocial behaviors, lying to others
may come easier to those who, through the use of CMC,
find themselves less socially connected to their
conversational partners [15]. Given the need and desire
to deceive others, it would seem that computer users are
more apt to submit lies than their non-computer-aided
counterparts. Therefore, the following hypothesis is put
forth:
Hypothesis 1A: Deceivers using computer-mediated communication will submit more deceptive
information to group members than deceivers not using
computer-mediated communication.
Deception theorists claim that lying is a cognitively
difficult task, with the need to manage information,
image, and behavior simultaneously. It has been
suggested that the mere presence of others inhibits
performance on relatively complex tasks [23, 25]. It has
also been suggested that mere presence dissuades
individuals from engaging in socially unacceptable
behavior, and proximity has been positively correlated
with concerns of establishing trust and protecting the
feelings of others [30]. Obviously, people lie to others
all the time in order to gain acceptance, but it is
predicted that individuals will be less likely to lie to
collocated group members. Therefore:
Hypothesis 1B: Deceivers in dispersed groups
will submit more deceptive communication than
deceivers in collocated groups.
IDT states that deceivers and receivers assess the
verbal content and nonverbal behavior of conversational
partners during the course of discussion. The deceiver
makes assessments on how well the lie was accepted,
and depending on how the receivers react over time, may
either choose to support the lie or curtail any further
deception. Buller and Burgoon [3] believe that in the
face of probing (a sign of suspicion), deceivers focus on
making “strategic repairs,” which may take the form of
lying to cover previous lies. It should therefore be
expected that if a group includes more suspicious
members, deceitful individuals will be prone to continue
lying:
Hypothesis 1C: The more forewarned receivers
in a group, the higher the volume of deceptive
communication submitted by the deceiver.
The next three hypotheses focus on lie detection by
group members.
The ability to detect deceptive
information stems from two sources, the content of the
message and the social cues that accompany it. As Rao
and Lim [19] propose, communicating in a written mode
via CMC effectively filters visual and auditory indicators
that are available in face-to-face conversations. Ekman
[2] claims that visual cues are among the most common
clues known to typical individuals, who often base their
detection judgments solely on visual indicators (i.e.,
“looking into someone’s eyes”), ignoring other
indicators available to them.
Although reliable
paralinguistic and content-based indicators (such as
sentence length and personal distancing) are available in
the text-based format of CMC, they are not commonly
known to most communicators who are reliant on
nonverbal cues. Therefore:
Hypothesis 2A: Receivers using computer-mediated communication will be less accurate at
detecting deception than receivers without CMC.
With regard to the proximity of group members, past research in both social facilitation and CMC has shown that collocated group members focus more on
their relationships, norms, and communication within the
group than in dispersed settings [20, 31]. Being more
socially conscious of adhering to and enforcing group
norms, collocated group members should be expected to
focus more stringently on the content of communication.
According to Tung and Turban [21], collocated
communicators also have access to more social and
environmental cues than dispersed group members, even
when communicating via CMC. Therefore:
Hypothesis 2B: Receivers in groups that are
collocated will be more accurate at detecting deception
than receivers in dispersed groups.
Finally, the assessments made by receivers may be
helpful for detecting lies in a group setting. Following
the logic of IDT, it is entirely possible that a suspicious
group member can pass suspicions along to other
members, not solely to the deceiver. A higher number of
forewarned receivers may make this transmission more
likely. Even if their suspicions are not expressly stated
verbally or in text, the nonverbal behavior of suspicious
receivers may be transmitted to others [32]. Therefore,
the following hypothesis regards the differences in
detection accuracy that may occur as a result of inducing
suspicion in receivers through a warning:
Hypothesis 2C: The more forewarned receivers
in a group, the higher the rate of successful deception
detection.
4. Procedures
The study methodology is based on a controlled
laboratory experiment using a 2 x 2 x 3 factorial design (see Figure 2), crossing group member proximity with
the presence of computer-mediated communication, with
a third treatment varying the number of group members
forewarned of the possible presence of deceptive
communication. For the computer-mediated treatment,
CMC-present groups used a group support system to
communicate, while the absent condition required the
use of face-to-face verbal communication or dispersed
headsets. For the proximity treatment, the collocated
condition featured group members working in the same
room, around a boardroom table, with constant visual
contact with each other, while the dispersed condition
required group members to work in isolation with no
visual contact with other members. To accomplish this,
the dispersed groups met in a suite of interview rooms
maintained by the college.
Groups were composed of three participants,
resulting in 180 subjects. Participants were student volunteers enrolled in senior-level undergraduate classes in the College of Business.
Interested parties selected a convenient time slot from
one of three sign-up lists distributed separately in class.
This ensured that the membership of each group would
be random, and by virtue of one list containing time slots
beginning fifteen minutes early, this also determined
which subject was randomly (and unknowingly)
assigned the role of the deceiver. Subjects were asked to
fill out a web-based survey before coming to the
experiment. The survey was based on a personal values
instrument [33], and the ratings subjects gave to their
options on the instrument determined their functional
autonomy across six value dimensions: theoretical,
economic, aesthetic, social, political, and religious
values.
[Figure 2 depicts the 2 x 2 x 3 research design: Computer Support (Present, Absent) crossed with Proximity (Local, Dispersed) and the Number of Forewarned Receivers (0, 1, or 2 per group).]
Figure 2. Research Design.
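The 2 x 2 x 3 design yields twelve experimental conditions, which can be enumerated programmatically. The sketch below is purely illustrative and was not part of the original study:

```python
from itertools import product

# Enumerate the twelve experimental conditions of the 2 x 2 x 3 design:
# computer support x proximity x number of forewarned receivers.
computer_support = ["Present", "Absent"]
proximity = ["Local", "Dispersed"]
forewarned = [0, 1, 2]

conditions = list(product(computer_support, proximity, forewarned))
print(len(conditions))  # -> 12
print(conditions[0])    # -> ('Present', 'Local', 0)
```

With 180 subjects in groups of three, each of the twelve conditions corresponds to five groups.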
The two decision-making tasks were standard across
all treatments. The first, used as a training task to help
subjects become used to the technology and the group
process, was based on the campus parking problem [34].
The second task involved the group deception and was
based on the Personal Trust Foundation budget
allocation task developed by Watson, DeSanctis, and
Poole [35]. The task features a scenario in which the
sum of ten million dollars has been posthumously
deeded to a charitable foundation headed by the three
group members, and the group is responsible for allocating the inheritance across one or more of six proposed projects.
At the arranged time, the first subject who arrived
was given pre-meeting instructions. The researcher
informed the first subject of the true nature of the study,
and that he or she would be asked to deceive the other
group members during discussion of the second task the
group was assigned. The subject was told that if he or she
could convince both of the other group members to
choose a specific foundation project to receive the most
funding, he or she would receive a $25 cash reward. The
deceivers advocated the particular foundation project
that was in direct opposition to the value dimension
identified earlier by the pre-questionnaire. The subject
was given an opportunity to refuse this role, but none did
so. The subject was instructed not to discuss his or her
role with the others at any time. Once the final two
participants arrived, they were supplied with informed
consent forms and verbally reminded that they could
refuse to participate at any time, without losing any extra
credit that was previously offered.
Following the training task, a written description of
the experimental task was supplied to each group
member. The deceiver received a reminder of his or her
role and the potential reward for successfully deceiving
the others, along with notification of which of the six
foundation projects to argue for. Up to this point, the
deceivers had not been told what they would be arguing
for, so they were unable to rehearse lies beforehand.
The forewarning of receivers also occurred at this point,
within the instructions given them. The instructions
randomly given to group members contained a statement
warning them of the possibility of deception, and that
lying might cause a more-deserving project to be
neglected. The forewarned receivers were instructed that
if they identified any comments made during discussion
that were later verified as untruthful, they would be
eligible for a $5 reward. Receivers not provided these
instructions were only provided with the written
description of the task and were thus classified as naïve.
The researcher then left the immediate site in order to
prevent any possible “Hawthorne effect.”
Topic discussion ranged from ten to twenty minutes,
followed by voting via secret ballots. Subjects were
informed that a record of the group discussion would be
kept, either by a GroupSystems-produced transcript or an
audio recording, but that comments made during
discussion would remain confidential. Results of the
voting were provided to all subjects after the session, no
matter the treatment group.
Data from the experiment was compiled from two
sources, a transcript or recording of the previous group
discussion and a post-discussion questionnaire
administered to all group members. The questionnaires
contained items pertaining to familiarity, truth bias, and
participation ratings [36]. Both types of receivers, naïve
and forewarned, were asked to specifically identify any
information they thought to be deceitful during the
previous discussion. The deceiver, on the other hand, was asked if he or she perceived suspicion from the other group members and, if so, to recall specifically which statements or impressions the others seemed to regard as false. The deceiver
was asked to remain behind an extra moment after the
two receivers left, in order to review the transcript with
the researcher. The deceiver was asked to point out any
particular statements in which a lie was submitted, and
these deceptive comments were marked for later
comparison with the receiver questionnaires. Following
the transcript analysis, the deceiver was thanked and
dismissed from the experimental setting.
Each individual statement containing admitted false
information was considered one lie and served as a
surrogate for the amount of deception communicated to
the group. Each lie was parsed from the transcript and
compared with any accusations made by the receivers on
the questionnaires, with matches constituting a
successful detection.
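The matching of admitted lies to receiver accusations can be sketched as follows. This is a minimal illustration using hypothetical statement identifiers, not the authors' actual coding procedure:

```python
# Hypothetical sketch of the matching step: each admitted lie (marked by
# the deceiver during the transcript review) is compared against the
# statements receivers accused on the post-discussion questionnaire.
def detection_accuracy(admitted_lies, accusations):
    """Fraction of admitted lies that any receiver correctly accused."""
    if not admitted_lies:
        return 0.0
    detected = sum(1 for lie in admitted_lies if lie in accusations)
    return detected / len(admitted_lies)

lies = {"stmt_04", "stmt_11"}     # statements the deceiver marked as lies
accused = {"stmt_07", "stmt_11"}  # statements receivers flagged as deceptive
print(detection_accuracy(lies, accused))  # -> 0.5 (one of two lies detected)
```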
The hypotheses were tested
primarily by group mean comparisons via ANOVA, the
results of which are presented in the following section.
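The logic of such a group-mean comparison can be illustrated with a one-way ANOVA computed from first principles. The lie counts below are synthetic, and the sketch simplifies the study's three-factor design to a single factor:

```python
# Minimal one-way ANOVA F statistic for a single factor (e.g., computer
# support present vs. absent), computed on synthetic per-group lie
# counts -- not the study's actual data.
def one_way_anova_f(groups):
    """Return the F statistic for a list of samples, one list per level."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    # Between-groups sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups sum of squares (df = N - k)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

supported = [2, 3, 2, 3, 1, 2]      # lies per CMC-supported group (synthetic)
not_supported = [1, 1, 2, 1, 2, 1]  # lies per non-supported group (synthetic)
print(round(one_way_anova_f([supported, not_supported]), 2))  # -> 5.0
```

The resulting F ratio would then be compared against the F distribution with the corresponding degrees of freedom to obtain a p-value.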
5. Results
Descriptive statistics are presented in Table 1 below.
Regarding the hypotheses dealing with the amount of
deception submitted, the ANOVA results showed that
there was indeed a significant difference between
computer-supported and non-supported deceivers (F
[1,60] = 12.74, p = .001), lending support to Hypothesis
1A. There was no support for the proximity effects
predicted in Hypothesis 1B (p =.644), but the amount of
deception trend went as expected in Hypothesis 1C (F
[2,60] = 4.79, p = .013), as deceivers lied more in groups
in which the other group members had been warned.
There were no significant interactions between the three
independent variables. Overall, there was an average of
1.82 lies submitted in each group meeting.
The other three hypotheses were devoted to the
detection accuracy performance by group members who
were the recipients of the lies. Computer-supported
group members barely outperformed their non-supported counterparts, the opposite of what was predicted in
Hypothesis 2A. Hypothesis 2B predicted that proximate
receivers would detect more accurately than dispersed
receivers, but this was not found to be significant (p =
.287). Finally, groups with two forewarned receivers
were more accurate than those with none or only one forewarned receiver, but this difference was not significant (p = .160), therefore lending no support to
Hypothesis 2C. Only eight percent of the lies in this
study were detected.
Table 1. Means, with Standard Deviations in parentheses.

                                   Number of Lies   Detection Accuracy
Computer Support
  Supported                        2.20 (1.06)      .091 (.28)
  Not Supported                    1.43 (.57)       .078 (.24)
Proximity
  Proximate                        1.77 (.77)       .058 (.23)
  Dispersed                        1.87 (1.07)      .110 (.29)
Number of Forewarned Receivers
  0                                1.35 (.59)       .050 (.22)
  1                                2.00 (1.03)      .053 (.20)
  2                                2.10 (.97)       .150 (.34)
Overall, deception detection rates were
generally abysmal, and deceivers had little trouble
conceiving and submitting lies. As a side note, deceivers
were able to convince the other group members to
contribute more money to their assigned charity 72
percent of the time, a sobering by-product of deceptive
communication. Figures 3 and 4 below display the
average number of lies and individual lie detection rates
for each of the twelve conditions.
6. Discussion
This study was undertaken with the purpose of
investigating the behavior and perceptions of group
members exposed to deceptive communication from
within. It is important to note that these group
discussions were unstructured and that both deceivers
and receivers had complete autonomy to lie as much as they
wanted, to communicate in whatever manner they saw
fit, and to even accuse each other of being deceitful.
[Figure 3 is a bar chart displaying the average number of lies in each of the twelve conditions, broken down by the number of suspicious receivers (0, 1, or 2), proximity (proximate or dispersed), and computer support (supported or non-supported).]
Figure 3. Number of Lies in Each Condition.
[Figure 4 is a bar chart displaying individual lie detection accuracy rates in each of the twelve conditions, using the same breakdown as Figure 3.]
Figure 4. Individual Detection Rates in Each Condition.
The differences between CMC and non-CMC groups
were mainly found on the deceiver side of the equation.
Deceivers who were supported by the group support
system submitted more lies than those who had no
computer support. This is not surprising considering the
poor reviews given the GSS by participants, especially
their lower social presence-like ratings such as its
enjoyability, its pleasantness, and satisfaction with the
medium. Another possibility is that the leaner qualities
of the medium gave deceivers cause to lie more.
Delayed feedback from group members and a shortage of transmitted social cues meant that CMC deceivers could continue lying in the face of suspicion, or toward a lost cause, without knowing it. From the receiver standpoint,
the GSS group members were slightly better at detecting
lies than non-CMC users. Although CMC users are
typically less trusting than traditional communicators
[37], post-hoc analyses of truth bias measures showed
that both types of group members were highly trusting
(computer-supported 5.11, non-supported 5.76 on a 7-point scale). The media differences were not enough to
cancel out the effects of the truth bias.
With regard to proximity, neither of the hypotheses
dealing with proximity was supported by the data.
Social facilitation was not as strong an influence on lying as initially expected. It was thought that group members collocated with others would be less likely to lie because of the social norms against doing so, but because all of the subjects accepted the role of deceiver without exception, perhaps lying is not regarded as shamefully in this sample as it might be in others. The
difference in proximity had little influence on detection
accuracy as well. The lack of support here tends to
confirm the “distraction hypothesis” first suggested by
Maier and Thurber [38], which stated that an
overabundance of cues can serve to distract potential lie
detectors from the discrepancies and nonverbal behavior
signifying deception. The proximate members were
certainly exposed to more cues than the dispersed
receivers, and were thus less successful in detecting lies.
One can naturally assume that these distractions can be
even more burdensome in a group setting.
Finally, the warnings for receivers had the predicted
effects in the group discussions, with varying degrees of
significance. Deceivers did lie significantly more to
groups with two forewarned receivers as opposed to
those with fewer warned members. The warnings may
have had an additional consequence for receivers beyond
that of reducing truth bias, that being an increased
inquisition put forth by these more suspicious receivers.
Warning both receivers seemed to make the task more
difficult for deceivers. This is informally reflected in the
transcripts and in the post-experiment questionnaires, but
also in the fact that the deceiver managed to successfully
convince two forewarned receivers 65 percent of the
time, compared to 70 percent for groups with only one
and 80 percent in groups with no suspicious receivers.
However, the warnings did not have a significant effect
on detection accuracy, although the trend went in the
expected pattern, with receivers in groups where two warnings were given out-detecting receivers in less-warned groups. A manipulation check revealed that the
warnings had a reducing influence on truth bias for
warned receivers, but not enough to improve their
individual detection rates.
There is reason to believe that receivers might detect
lies better if they are highly motivated to do so. It was
felt that the Personal Trust Foundation task was salient to
the students who participated, as it has been used in
studies exploring group conflict in the past [35], and for
the most part, these particular subjects exhibited
polarized belief systems in the pre-task survey.
Whether or not the subjects became overly absorbed in the
task and thus failed to home in on false statements
remains to be seen.
The results presented here are quantitative and do not
directly deal with the content of the group discussions.
Future content analysis could reveal what social cues and
indicators are the most useful for detecting lies in
groups, especially for groups using CMC. Further, the
students in this study were relatively unfamiliar with
each other and their communicative patterns, so a
possible extension of this study could be the use of
established groups.
7. Conclusion

The intent of this study was to begin an exploration of deceptive communication among group members, especially for groups that conduct discussions via computer-based media. The vast majority of deception research has focused on dyadic communication; therefore, much of the prior learning and findings that informed the research model and hypotheses here was based on the reciprocal communication between two people. As logic dictates, the simple addition of a third person to the discussion had a confounding effect for the subjects involved in this study. IDT holds that perceptual information, both verbal and nonverbal, is important for successful deception as well as for the detection of deception. The additional group member adds even more social cues to gauge. It is not surprising that detectors would have a difficult time under those circumstances, but it appears that deceivers have an opportunity to exploit a group situation. Their success in negotiating the task and the poor lie detection in this experiment are reasons to be concerned with deceptive communication in groups.

Acknowledgement

Portions of this research were supported by the US Air Force Office of Scientific Research under the US Department of Defense University Research Initiative (Grant #F49620-01-1-0394). The views, opinions, and/or findings in this report are those of the authors and should not be construed as an official Department of Defense position, policy, or decision.

References

1. Kling, R., Cooperation, coordination, and control in computer-supported work. Communications of the ACM, (1991). 34(12): p. 83-88.
2. Ekman, P., Telling lies: Clues to deceit in the marketplace, politics, and marriage. Vol. 2. (1992), New York: WW Norton and Company.
3. Buller, D. and J. Burgoon, Interpersonal deception theory. Communication Theory, (1996). 6: p. 203-242.
4. DePaulo, B., J. Lindsay, B. Malone, L. Muhlenbruck, K. Charlton, and H. Cooper, Cues to deception. Psychological Bulletin, (2003). 129(1): p. 74-118.
5. Vrij, A., K. Edward, K. Roberts, and R. Bull, Detecting deceit via analysis of verbal and nonverbal behavior. Journal of Nonverbal Behavior, (2000). 24(4): p. 239-263.
6. DePaulo, B., S. Kirkendol, J. Tang, and T. O'Brien, The motivational impairment effect in the communication of deception: Replications and extensions. Journal of Nonverbal Behavior, (1988). 12(3): p. 177-202.
7. Miller, G. and J. Stiff, Deceptive communication. (1993), Newbury Park, CA: Sage Publications, Inc.
8. McCornack, S. and M. Parks, Deception detection and relationship development: The other side of trust, in Communications Yearbook 9, McLaughlin, Editor. (1986), Sage Publications: Beverly Hills, CA.
9. McCornack, S. and T. Levine, When lovers become leery: The relationship between suspicion and accuracy in detecting deception. Communication Monographs, (1990). 57: p. 219-230.
10. Stiff, J., H. Kim, and C. Ramesh, Truth biases and aroused suspicion in relational deception. Communication Research, (1992). 19(3): p. 326-345.
11. Biros, D., J. George, and R. Zmud, Inducing sensitivity to deception in order to improve decision making performance: A field study. MIS Quarterly, (2002). 26(2): p. 119-144.
12. Daft, R. and R. Lengel, Organizational information requirements, media richness, and structural design. Management Science, (1986). 32(5): p. 554-570.
13. Dennis, A. and S. Kinney, Testing media richness theory in the new media: The effects of cues, feedback, and task equivocality. Information Systems Research, (1998). 9(3): p. 256-274.
14. El-Shinnawy, M. and L. Markus, The poverty of media richness theory: Explaining people's choice of electronic mail vs. voice mail. International Journal of Human-Computer Studies, (1997). 46: p. 443-467.
15. Short, J., E. Williams, and B. Christie, The Social Psychology of Telecommunications. (1976), New York, NY: John Wiley.
16. Kiesler, S., J. Siegel, and T. McGuire, Social psychological aspects of computer-mediated communication. American Psychologist, (1984). 39(10): p. 1123-1134.
17. Sproull, L. and S. Kiesler, Reducing social context cues: Electronic mail in organizational communication. Management Science, (1986). 32(11): p. 1492-1512.
18. Aiken, M. and B. Waller, Flaming among first-time group support system users. Information & Management, (2000). 37: p. 95-100.
19. Rao, S. and J. Lim, The impact of involuntary cues on media effects, in 33rd Hawaii International Conference on System Sciences. (2000).
20. Valacich, J., J. George, J. Nunamaker, and D. Vogel, Physical proximity effects on computer-mediated group idea generation. Small Group Research, (1994). 25(1): p. 83-104.
21. Tung, L. and E. Turban, A proposed research framework for distributed group support systems. Decision Support Systems, (1998). 23: p. 175-188.
22. Zajonc, R., Social facilitation. Science, (1965). 149(3681): p. 269-274.
23. Guerin, B., Mere presence effects in humans: A review. Journal of Experimental Social Psychology, (1986). 22: p. 38-77.
24. Baron, R.S., D. Moore, and G. Sanders, Distraction as a source of drive in social facilitation. Journal of Personality & Social Psychology, (1978). 36(8): p. 816-824.
25. Evans, G., Behavioral and physiological consequences of crowding in humans. Journal of Applied Social Psychology, (1979). 9(1): p. 27-46.
26. Zmud, R., Opportunities for strategic information manipulation through new information technology, in Organizations and Information Technology, Fulk and Steinfeld, Editors. (1990), Sage Publications, Inc.: Thousand Oaks, CA.
27. Gallupe, B., A. Dennis, W. Cooper, J. Valacich, J. Nunamaker, and L. Bastianutti, Electronic brainstorming and group size. Academy of Management Journal, (1992). 35: p. 350-369.
28. Kiesler, S. and L. Sproull, Group decision making and communication technology. Organizational Behavior and Human Decision Processes, (1992). 52: p. 96-123.
29. Sussman, S. and L. Sproull, Straight talk: Delivering bad news through electronic communication. Information Systems Research, (1999). 10(2): p. 150-166.
30. Burgoon, J., J. Bonito, A. Ramirez, N. Dunbar, K. Kam, and J. Fischer, Testing the interactivity principle: Effects of mediation, propinquity, and verbal and nonverbal modalities in interpersonal interaction. Journal of Communication, (2002). 52(3): p. 657-677.
31. George, J., G. Easton, J. Nunamaker, and G. Northcraft, A study of collaborative group work with or without computer-based support. Information Systems Research, (1990). 1(4): p. 394-415.
32. Buller, D., K. Strzyzewski, and J. Comstock, Interpersonal deception: I. Deceivers' reactions to receivers' suspicions and probing. Communication Monographs, (1991). 58: p. 1-24.
33. Allport, G.W., P. Vernon, and G. Lindzey, A Study of Values. (1951), Cambridge, MA: Riverside Publishing.
34. Jessup, L., T. Connolly, and J. Galegher, The effects of anonymity on GDSS group process with an idea-generating task. MIS Quarterly, (1990). 14(3): p. 313-321.
35. Watson, R., G. DeSanctis, and M. Scott Poole, Using a GDSS to facilitate group consensus: Some intended and unintended consequences. MIS Quarterly, (1988). 12(3): p. 463-478.
36. Burgoon, J., D. Buller, L. Dillman, and J. Walther, Interpersonal deception: IV. Effects of suspicion on perceived communication and nonverbal behavior dynamics. Human Communication Research, (1995). 22(2): p. 163-196.
37. Alge, B., C. Wiethoff, and H. Klein, When does the medium matter? Knowledge-building experiences and opportunities in decision-making teams. Organizational Behavior and Human Decision Processes, (2003). 91(1): p. 26-37.
38. Maier, N. and J. Thurber, Accuracy of judgments of deception when an interview is watched, heard, and read. Personnel Psychology, (1968). 21: p. 23-30.