
How Users Perceive and Respond to Security Messages:
A NeuroIS Research Agenda and Empirical Study*
Bonnie Brinton Anderson, Anthony Vance
Information Systems Department, Brigham Young University
[email protected], [email protected]
Brock Kirwan
Department of Psychology & Neuroscience Center, Brigham Young University
[email protected]
David Eargle
Katz Graduate School of Business, University of Pittsburgh
[email protected]
*Under review. Please do not share without permission.
ABSTRACT
There is increasing recognition that users play a vital role in the information security of
organizations. This is because, in spite of technical security safeguards, many critical security
decisions are left up to users. A prime example is users’ response to security messages—
discrete communication designed to persuade users to impair or improve their information
security posture. Research shows that users are too susceptible to malicious messages such as
phishing attacks, and yet too resistant to protective messages such as security warnings. There
is therefore a need to better understand how users perceive and respond to security messages.
In this article, we argue for the potential of NeuroIS—cognitive neuroscience and its
associated neurophysiological methods applied to Information Systems (IS)—to shed new light
on users’ reception of security messages. We outline an agenda for researching four key
questions relating to gender differences, habituation, stress, and fear, and examine the unique
capabilities of NeuroIS to elucidate each one.
In addition, as an illustration of how NeuroIS can be used to investigate these questions,
we present the results of an electroencephalography (EEG) laboratory experiment that shows
convincing evidence that gender plays an important role in how users process security
messages. Together, our research agenda and empirical findings demonstrate the promise of
using NeuroIS to study users’ responses to security messages specifically, and security
behavior in general.
Keywords: Security messages, information security behavior, NeuroIS, gender differences,
habituation, stress, fear, uncertainty.
1. Introduction
In recent years, information security has emerged as a top managerial concern, driving the worldwide security technology and services market to $67.2 billion in 2013, a figure expected to increase to $86 billion by 2016 (Gartner, 2013). However, despite growing
investment in information security technology, users continue to represent the weak link of
security (Furnell & Clarke, 2012). Accordingly, users are increasingly targeted by attackers to
gain access to the information resources of organizations (Mandiant, 2013). As a case in point,
consider the 2011 breach of RSA Security LLC, which in turn led to the compromise of
Lockheed Martin and other aerospace and defense companies (Drew, 2011; Poulsen, 2011).
Tellingly, the breach of RSA can be traced back to a malicious e-mail attachment sent to an
RSA employee that triggered a security warning dialog (see Figure 1; (Hypponen, 2011)).
Although the employee’s failure to heed this warning may not be entirely to blame for these
security incidents, it is clear that users’ behavior plays a critical role in the security of
organizations.
Figure 1. The actual warning message believed to have immediately preceded the installation of
malware that led to the breach of RSA Security (Hyppönen 2011).
As illustrated by the above example, a crucial aspect of security behavior is how users
perceive and respond to security messages—discrete communication designed to persuade
users to impair or improve their information security posture. A large body of research shows
that users are easily susceptible to malicious messages such as phishing attacks that prompt
users to install malware or visit compromised websites (Hong, 2012). On the other hand, a
parallel stream of research shows that users routinely disregard protective messages such as
software security warnings (Bravo-Lillo et al., 2013). There is therefore a need to better
understand how users perceive and respond to security messages.
In this article, we argue for the potential of NeuroIS—cognitive neuroscience and its
associated neurophysiological methods applied to IS (Dimoka et al., 2011)—to shed new light
on users’ reception of security messages. The neural bases for human cognitive processes can
offer new insights into the complex interaction between information processing and decision
making (Dimoka et al., 2012), allowing researchers to open the “black box” of cognition by
directly observing the brain (Benbasat et al., 2010). The potential of NeuroIS has been
recognized by security researchers who have begun using neurophysiological methods to
investigate security behavior (Hu et al., 2014; Moody et al., 2011; Vance et al., 2014; Warkentin
et al., 2012). Crossler et al. (2013) observed that “these studies, and others like them, will offer
new insights into individual behaviors and cognitions in the context of information security
threats” (2013, p. 96).
We contribute to the nascent area of NeuroIS security research by presenting a research
agenda for examining security messages. To do so, we outline four key questions drawn from
the security and cognitive neuroscience literatures that directly relate to how users receive and
process security messages (see Table 1). Further, each of these questions marks a potentially
fruitful stream of research. Although these are not the only important research questions relating
to security messages, they are suggestive of the pressing security issues that NeuroIS has the
unique capability of addressing.
Table 1. Guiding Questions for Researching Security Messages via NeuroIS
1. How does gender influence users’ responses to security messages?
2. How does habituation affect users’ responses to security messages?
3. What is the impact of stress on a user’s response to security messages?
4. How do fear and uncertainty affect users’ neural processing of security messages?
As an illustration of how NeuroIS can be used to investigate these research questions,
we present the results of an electroencephalography (EEG) laboratory experiment that
represents an initial foray into investigating the first question on the influence of gender on how
users respond to security messages. We found that the amplitude of the P300, an event-related potential component indicative of cognitive-resource allocation, was higher for all participants
when viewing malware warning screenshots relative to those of regular websites. Additionally,
we found that the P300 amplitude was greater for women than for men, providing empirical
evidence that men and women process malware warnings differently.
Together, our research agenda and empirical findings demonstrate the promise of using
NeuroIS to study users’ responses to security messages. We anticipate that the pursuit of this
research agenda will provide scholars with a more complete understanding of how users receive security messages, which will in turn lead to more accurate development and application
of theory (Dimoka et al., 2011). Additionally, we expect that the neurophysiological data
stemming from this research will guide the design and testing of more effective forms of security warnings that overcome the impediments to users’ reception of such warnings identified in this
article (Dimoka et al., 2012). Finally, this article echoes the call of Crossler et al. (2013) for
further research on information security behavior in general using NeuroIS methods by showing
examples of the insights that can be gained.
This paper proceeds as follows. In Section 2 we formally define and introduce a
taxonomy of security messages, and give a brief introduction to NeuroIS. In Section 3, we
present our research questions and suggest various NeuroIS methods for their investigation. In
Section 4, we discuss the results of our empirical EEG experiment as an illustration of the value
of investigating the proposed research questions. Finally, in Section 5 we describe the
implications of our research agenda for future research on security messages.
2. REVIEW OF SECURITY MESSAGES AND NEUROIS
We define a security message as discrete communication that is designed to persuade
users to either impair or improve their information security posture. Most security messages are
predominantly textual, such as software dialogs or email communication. However, security
messages may also be aural and/or visual, such as voicemail memos, signage, or online videos.
Our definition is broad in that it includes persuasive messages from both attackers and
defenders. This is because communications from both commonly use the same persuasive techniques and cues (Abbasi et al., 2010; Bravo-Lillo et al., 2013; Dhamija et al., 2006), and engage many of the same mental processes used in decision making (Luo et al., 2013; Wright & Marett, 2010).
On the other hand, our definition is narrow in the sense that it only embraces discrete
messages, rather than the entirety of security-related communication, which typically includes conversation with coworkers and peers; security education, training, and awareness (SETA) and classroom instruction (Karjalainen & Siponen, 2011); and sustained social engineering attacks that might continue over hours, days, or longer (Mitnick & Simon, 2001).
Security Messages Taxonomy
We present in Figure 2 below a taxonomy of security messages along with specific
examples, which consistent with our definition, may be offensive or defensive in nature. Our
scheme classifies security messages according to three primary dimensions: (1) immediacy, (2)
relevancy, and (3) complexity. Immediacy refers to the extent to which a message can be
deferred. At one extreme, modal software dialogs by design interrupt the user’s workflow until
the message has been processed (Egelman et al., 2008). On the other end of the spectrum,
security advisories often come in the form of email, which is easily set aside for later processing
(Weber, 2004). Immediacy has important implications for how security messages are processed,
because users are less likely to act on messages that can be deferred (Egelman et al., 2008).
This is why, for example, in recent years web browsers have emphasized modal warnings that
interrupt the user rather than passive indicators that reside in the chrome of the browser and are
easily overlooked (Akhawe & Felt, 2013).
Relevancy concerns the applicability of a security message to the workflow or task that
the user is engaged in. Security messages that are congruent with or come as a natural result of
performing an action are more likely to be processed by users because they are anticipated or
are clearly applicable to the present task (Vredenburgh & Zackowitz, 2006). On the other hand,
security messages that have little connection to a user’s current activity are less easily
processed because they require users to switch attention from the task at hand (Meyer, 2006).
This is, for example, one reason why information security policies (ISPs) are less likely to be
followed if they are disconnected from a user’s routine work activities (Vance et al., 2012) .
Additionally, this is also why spear-phishing attacks that are targeted to a user’s work are much
more effective (Luo et al., 2013).
Finally, complexity describes the informational density of a security message, and/or the
mental effort required to process the message. Security messages can be very sparse, such as
software dialogs that may only contain a few words. Conversely, other security messages
contain multiple sub-arguments, such as fear appeals, which convey (1) the severity of a threat,
(2) the user’s susceptibility to a threat, (3) the efficacy of a suggested response, and (4) the
user’s self-efficacy to enact the protective action (Johnston et al., 2014; Johnston & Warkentin,
2010). More complex still are legalistic, acceptable-use policies that users find intractable (Foltz
et al., 2008).
We hasten to note that although the taxonomy depicts a binary, high/low classification for each dimension, this is only for simplicity of presentation; in reality, each message falls along a gradient. Further, some types of security messages (such as phishing emails) are flexible enough to fall into several categories. For example, phishing emails may simply offer a single
link as bait, or be long and abstruse like a Nigerian 419 scam (Herley, 2012). Finally, we
observe that the hierarchical ordering of the taxonomy suggests a precedence among the
dimensions, with immediacy being the most important factor in whether a user processes a
message. This is because messages high in immediacy have the ability to interrupt the user and
demand attention (Egelman et al., 2008; Lesch, 2006). Likewise, we consider relevancy to be
the next most important factor, given that if a message is determined to be highly relevant, a
user will invest time and effort to process the message, regardless of complexity (Vredenburgh
& Zackowitz, 2006).
[Figure 2, a tree diagram, classifies each security message as high or low on immediacy, relevancy, and complexity. Example message types shown in the figure include device or software security configuration prompts, ransomware, scareware, password strength meters, software installation warnings, privacy policy confirmation emails, software update prompts, fear appeals, backup reminders, and phishing emails.]
Figure 2. Taxonomy of Security Messages
The Potential of NeuroIS to Explain Security Behavior
As the field of information security behavior matures, understanding why a particular behavior happens, and not simply that it does happen, becomes increasingly necessary. To this end, NeuroIS offers a promising approach to investigate the effectiveness of security (Crossler et al., 2013). Specific to this research, the neural bases for human cognitive processes can offer new insights into the complex interaction between information processing and decision making in information security behavior (Dimoka et al., 2011).
Whereas human-computer interaction (HCI) researchers have historically relied on
surrogates of cognition, such as keyboard and mouse movements, neuroscience methods allow
researchers to open the “black box” of cognition by directly observing brain processes
(Benbasat et al., 2010). Indeed, NeuroIS holds the promise of “providing a richer account of
user cognition than that which is obtained from any other source, including the user himself”
(Minnery & Fine, 2009). The ultimate promise of applying neuroscience to HCI is to use insights
from the study of the brain to design effective user interfaces that can help users make informed
decisions (Mach et al., 2010; Riedl et al., 2010a). Table 2 summarizes the variety of tools and
measures available in NeuroIS. We refer readers to Appendix A of Dimoka et al. (2012) for a
more thorough discussion of the various neurophysiological tools commonly used.
Table 2. Description and Focus of Measurement of Commonly-Used Neurophysiological Tools (adapted from Dimoka et al. 2012)
Psychophysiological Tools
Eye Tracking: Eye pupil location (“gaze”) and movement
Skin Conductance Response (SCR): Sweat in eccrine glands of the palms or feet
Facial Electromyography (fEMG): Electrical impulses on face caused by muscle fibers
Electrocardiogram (ECG): Electrical activity on skin caused by muscles of the heart
Brain Imaging Tools
Functional Magnetic Resonance Imaging (fMRI): Blood flow changes in the brain due to neural activity
Positron Emission Tomography (PET): Metabolic changes in the brain due to neural activity
Electroencephalography (EEG): Electrical potentials on the scalp due to neural activity
Magnetoencephalography (MEG): Magnetic field changes due to neural activity
3. A RESEARCH AGENDA FOR USING NEUROIS TO EXAMINE USERS’ RECEPTION OF
SECURITY MESSAGES
In this section we outline a research agenda for examining users’ reception of security
messages through the lens of NeuroIS. Our purpose in proposing this agenda is threefold. First, we advocate a multidisciplinary, collaborative approach to the study of security messages that integrates the behavioral security and cognitive neuroscience literatures. We
believe that such a synergistic tack will yield important new insights into users’ security behavior.
Second, we focus the attention of IS researchers on areas that are likely to lead to the design of
more effective security warnings. Finally, we call for IS researchers to join in engaging these
research questions. In this way, we echo the call of Crossler et al. (2013) for more NeuroIS
research on information security behavior in general.
Our agenda is organized around four research questions. Each is grounded in the
behavioral security and neuroscience literatures, and identifies gaps in our current
understanding of how users receive security messages. Further, each research question
represents a promising stream of research, rather than individual, one-off studies. However, we
stress again that these areas are not the only relevant or interesting research questions
involving NeuroIS and security messages. Nevertheless, our review of the behavioral security,
warning, and neuroscience literatures suggests that these four areas are especially promising
avenues of examination.
3.1 How does gender influence users’ responses to security messages?
One of the most fundamental factors of human behavior, and yet commonly overlooked,
is gender (Riedl et al., 2010b). Consequently, a study of security messages that simply generalizes across genders is not sufficient; it is necessary to investigate how gender differences affect user responses to security messages.
The literature on gender differences in trust behavior includes survey results as well as
experimental findings. The results show that in general, men trust more than do women (Alesina
& La Ferrara, 2002; Bravo-Lillo et al., 2013; Glaeser et al., 2000; Terrell & Barrett, 1979).
Similarly, limited research in the area of gender differences in online trust has indicated that
women trust less in online settings as well (Cyr & Bonanni, 2005; Cyr et al., 2007; Garbarino &
Strahilevitz, 2004; Gefen et al., 2008; Rodgers & Harris, 2003). In spite of women’s lack of trust
in online settings, the percentage of women using the internet is increasing and is now statistically indistinguishable from that of men. The most recent Pew Internet study has shown that
among adults, internet usage for women is now at 84% compared to 85% for men (Zickuhr &
Smith, 2012). However, women still report lower computer self-efficacy (Simmering et al., 2009;
Vekiri & Chronaki, 2008), especially in environments where men have had more encouragement
and access to technology (Scott & Walczak, 2009).
Traditionally, with consumer product warnings, women have reported a greater likelihood
than men to read and comply with warning information (Godfrey et al., 1983; Laughery &
Brelsford, 1991). However, Vigilante and Wogalter (1997) suggest that product familiarity guides
the attention paid to product warnings. So if women have lower computer self-efficacy, and hence less familiarity with computing environments, it is likely that women will pay more attention to computer-based security warnings.
Riedl et al. (2010b) performed an fMRI study examining gender differences in the
response to online offers of varying trustworthiness. These authors found differential patterns of
activation for women and men separately when comparing neural responses to trustworthy
versus untrustworthy online offerings. The authors suggest that their results indicate that
women’s brains process more information (and process it more comprehensively) during
decision-making tasks. Specifically, the authors suggest that women processed the tasks with
more emotion whereas men processed the task more cognitively, and that these results link to
the empathizing-systemizing theory (Baron-Cohen et al., 2005) from the field of psychology.
Speck et al. (2000) found sex differences in brain activation during memory tasks.
Females have demonstrated a greater cross-hemispheric blood flow in temporal regions during
tests for memory recall (Ragland et al., 2000). A recent study by Ingalhalikar et al. (2013)
mapped the neural paths of almost 1000 subjects. They found that overall, male brains are
structured to facilitate connectivity between perception and coordinated action and have high
connections within each hemisphere, whereas female brains are structured to facilitate
communication between hemispheres, especially in the frontal regions. See Figures 3 and 4 below.
Figure 3: Mean structural connections in the male brain, showing high connections within
hemispheres (Ingalhalikar et al., 2013).
Figure 4: Mean structural connections in the female brain showing high connections between
hemispheres (Ingalhalikar et al., 2013).
Two of the NeuroIS methods that could be helpful in studying gender differences in
response to security messages are EEG and fMRI. In particular, using EEG, we can examine
the P300 component of the event-related potential (ERP), which is associated with attention and
memory operations (Polich, 2007). The P300 reflects brain activity approximately 300-600
milliseconds after exposure to a stimulus. The speed of this measure allows one to see reaction
differences in subjects before they have time to consciously contemplate a response. fMRI also measures neurophysiological responses to stimuli, and allows us to compare the differences in neural activation between genders (Riedl et al., 2010b). Tasks of mental rotation (Weiss et al., 2003), visual stimulation, emotional recognition (Fischer et al., 2004; Lee et al., 2005; Lee et al.,
2002), verbal processing (Ragland et al., 2000; Shaywitz et al., 1995), and object construction
(Cowan et al., 2000; Georgopoulos et al., 2001) have all shown patterns of differential activation
between the sexes.
Pursuing this research question is likely to lead to the development of improved security
messages that account for gender differences in brain responses. First, we believe that an
improved understanding of the different areas of the brain activated in the processing of security messages can lead to more well-rounded messages that can trigger effective responses in both
men and women. Second, an alternative approach, based on the findings of this type of
research, could lead to more specialized security messages that could be customized to play to
the strengths of an individual’s gender profile. For example, neuroscientists have found that
brain regions associated with emotion are activated more strongly in women’s brains than in men’s (Riedl et al., 2010b). Therefore, security messages customized to women may
increase their effectiveness through use of emotional appeals. Third, in a similar way, these
neurophysiological tools can help identify and remediate potential “blindspots” to security
messages for certain genders. For example, multi-tasking is a classic cross-hemispheric task
that men do poorly compared with women (Ingalhalikar et al., 2013). Security messages
targeted at men could therefore make use of full-screen, interstitial messages that limit
multitasking. By accounting for such differences, security researchers may arrive at designs of
security messages that are more effective.
3.2 How does habituation affect users’ responses to security messages?
Research suggests that security warnings may fail to achieve their intended purpose for
a number of reasons, including environmental stimuli, interference, failure of warnings to grab or
sustain attention, and lack of understanding or capability of warning recipients (Cranor, 2008;
Wogalter, 2006). However, a major contributor to warning failure is habituation: as a result of repeated exposure to warnings, users respond automatically, regardless of changes to the context or content of the warning message (Kalsher & Williams, 2006).
Furthermore, the habituation process can be accelerated if no negative consequence is
experienced when a warning is ignored (Vredenburgh & Zackowitz, 2006).
A number of laboratory experiments have pointed to the role of habituation in the failure
of warnings (Schechter et al., 2007; Sharek et al., 2008). Egelman et al. (2008) found a
significant correlation between recognition and disregard of security warnings. Sunshine et al.
(2009) observed that participants remembered their responses to previous security warnings
and applied them to other websites even if the level of risk involved had changed.
These laboratory study results are mirrored by those in the field. Akhawe and Felt (2013)
found that in approximately 50 percent of secure sockets layer (SSL) web browser warnings,
users decided to click through in 1.7 seconds or less, a finding that “is consistent with the theory
of warning fatigue” (2013, p. 14). Bravo-Lillo et al. (2013) conducted a large field experiment
using Amazon Mechanical Turk in which users were rapidly exposed to a confirmation dialog
message. After a period of 2.5 minutes and a median of 54 exposures to the dialog message,
only 14 percent of the participants recognized a change in the content of the confirmation dialog
in their control (status quo) condition.
Because memory is foundational to human cognition, decision-making, and behavior, it
likely plays an important role in a number of other security behaviors, such as security education
programs, information security policy compliance, and the processing of security messages.
Memory researchers have discovered a pervasive phenomenon in which there is less brain
activity for images that have been previously viewed (Grill-Spector et al., 2006). The reduction in
brain activity due to repeated encounters with a specific stimulus is called “repetition
suppression” (Desimone, 1996) and is thought to reflect the allocation of fewer cognitive
resources to repeated stimuli.
We contend that it is helpful not only to examine behavior that suggests habituation, but also to study the neurophysiology associated with habituation. Of the variety of NeuroIS
methods, two that are especially relevant when studying habituation are fMRI and eye tracking.
fMRI is able to track neural activation through changes in blood oxygenation, known as the
“BOLD” response. Using fMRI we can determine if there is in fact a decreased blood flow (the
repetition suppression effect) to the regions of the brain (primarily the hippocampus) associated
with memory as security warnings are viewed repeatedly. Whereas the repetition suppression
effect has been established in the context of images (e.g., Bakker et al., 2008), it is not yet known whether this effect applies to security messages, which contain both visual and textual elements.
Eye tracking allows researchers to capture all eye movement associated with a given
image. Habituation, or the eye-movement-based memory effect, can be gauged by examining the eye movements associated with warnings on repeated viewings. Eye movements have been used
extensively as an index of cognitive processing in both cognitive psychology (Liversedge &
Findlay, 2000) and cognitive neuroscience (Hayhoe & Ballard, 2005). Eye movement behavior
provides an important insight into cognitive processes that may not be available to conscious
introspection. Both fMRI and eye tracking allow researchers to examine unconscious
activity in addition to recording the behavior of the participants during experiments.
The use of neurophysiological tools to study habituation will likely offer several benefits.
First, fMRI and eye tracking tools will allow researchers to determine objectively how severe the
effects of habituation are. Although habituation has been inferred as a factor in many security
warning studies (e.g., Schechter et al. 2007; Egelman et al. 2008; Sunshine et al. 2009), little
research has specifically investigated habituation. In addition, research that examines
habituation in the context of security warnings has done so indirectly by observing the influence
of habituation on security behavior rather than by measuring habituation itself (Bravo-Lillo et al.,
2013; Brustoloni & Villamarín-Salomón, 2007). Habituation is difficult to measure precisely using
experimental observation and self-reports, but NeuroIS methods such as eye tracking or fMRI
can directly observe the underlying neurological mechanisms brought about through habituation.
Second, researchers will be able to determine the rate of habituation neurologically by
observing repeated exposures to security messages over time.
Finally, such neurological measures of habituation can guide the development and
testing of security warnings that are resistant to habituation and thus lead to more secure
responses over time.
3.3 What is the impact of stress on a user’s response to security messages?
Stress has received attention from researchers in many fields due largely to the profound
detrimental effects it can have on individuals' productivity and well-being. Stress that is induced
by characteristics of and interactions with information communication technologies (ICT) has
been termed technostress (Brod, 1984). The antecedents and implications of this kind of stress
have been evaluated (Ayyagari et al., 2011; Riedl, 2012) and examined using neurobiological
methodologies (Riedl et al., 2012). One perspective of stress is as a transaction between an
individual and a required task. When an individual perceives that they do not have the resources
or skills necessary to complete a required task, the perceived shortfall can then be considered
to be a threat to that individual's well-being (Ayyagari et al., 2011; Cooper et al., 2001; Lazarus,
1995). This evaluative transaction has been termed stress. Furthermore, the sources which
prompt evaluation have been termed stressors, and the effect of negative outcomes that come
from lengthened exposure to stress is termed strain (Ayyagari et al., 2011; Cooper et al., 2001).
Various levels of an individual's responses to stress have been identified, including
physiological (biological), affective, and behavioral response levels (Sonnentag & Frese, 2003).
Of these, the behavioral responses are of the greatest interest to this research question. An individual
under chronic stress is more likely to have narrowed attention and poorer working memory
capacity (Searle et al., 1999). In turn, these outcomes of stress on behavior have been
associated with individuals making worse decisions (e.g., Wall Street traders under stress made
worse risk evaluations than did traders under less stress (Riedl, 2012)). Also, an individual
under stress will exert greater general effort, but only in the short-term.
While certain kinds of stress can be evaluated via psychometric methodologies,
biological measurements are favored due to their ability to measure unconscious stress
responses (Riedl, 2012; Riedl et al., 2012). Neurological measurements tap into the various
biological systems that are activated under stressful situations. Riedl (2012) provides a
comprehensive overview of the biological responses that occur under stress, and also includes
descriptions of methodologies that can be used to measure the responses. Users’ interactions
with security messages can both lead to stress and also be influenced by stress. For example,
fear appeals require an appraisal of a threat, and likewise an appraisal of one's ability to
respond to the threat (Johnston et al., 2014; Johnston & Warkentin, 2010). If the required
response to deal with a security threat is not easily within an individual's capacity, the
individual's stress level will likely increase. Also, the mere presentation of a security message
may result in stress. Security messages often interrupt a user's regular workflow and demand
precious time and cognitive resources such as attention and reflection (Jenkins, 2013; West,
2008). Dual task interference theory suggests that such requirements to perform multiple but
separate tasks can cause performance problems for individuals (Pashler, 1994). Also, we
reason that unplanned demands on a worker's resources may cause them to reassess their
ability to complete their task at hand, which can lead to stress. This sudden stress could impact
a user's response to the security message. Furthermore, responses to security messages while
under stress will likely be affected in additional ways due to the behavioral outcomes of stress,
including the outcomes of narrowed attention and poorer working memory capacity. This may
lead to failing to consider pertinent dimensions or implications of a portrayed security threat,
leading to undertaking unnecessarily high risks in order to conserve time and get back to the
interrupted work task. Also, deception detection is theorized to require attention to possible cues
of deception, which are then compared against memories to form expectations of how non-deceptive messages should appear (Johnson et al., 2001). For deceptive messages such as phishing attacks or scareware, narrowed attention and poorer working memory may blind the user to the possibility of deception.
Two neurophysiological methods well suited to measuring stress are cortisol level
measurement and skin conductive response. Cortisol is commonly called the stress hormone.
When an individual’s stress level increases, so does the amount of cortisol. Simply put,
psychological stressors stimulate the release of cortisol into the bloodstream. Cortisol mediates
stress responses and returns the body to homeostasis (Dickerson & Kemeny, 2004). Cortisol
levels peak 10–40 minutes after stressor onset as measured using blood, spinal fluid, or saliva
samples (Dickerson & Kemeny, 2004). Riedl et al. (2012) demonstrated how technostress could
be measured using cortisol samples. Examining cortisol levels allows researchers to objectively
measure stress associated with security messages.
Skin conductance response (SCR) is also known as galvanic skin response. The
increase in the activity of sweat glands when an individual is stressed creates a temporary
condition where the skin becomes a better electricity conductor (Randolph et al., 2005). SCR
has been linked to measures of arousal, excitement, fear, emotion, and attention (Raskin, 1973).
SCR tools can measure activity in the sympathetic nervous system that changes the sweat
levels in the eccrine glands of the palms. SCR is low cost, which makes it widely accessible. In
addition, SCR is relatively easy to use and requires minimal intervention on subjects (Dimoka et
al., 2012). We believe that SCR will be useful in measuring the stress levels associated with
users’ responses to security messages.
Investigating this research stream offers many potentially important insights for security
researchers and practitioners. First, scholars will be able, for the first time, to objectively
measure how security messages induce technostress in message recipients. Second, and
similarly, scholars will be able to objectively observe how stress from other causes affects
recipients’ processing of security messages. Third, findings from the above will allow scholars to
design security warnings that are both more effective by inducing less stress in recipients and
more robust within stressful environments. Finally, the greater methodological precision gained
through measures of physiological and automatic fluctuations in stress will likely lead to more
nuanced theoretical explanations of how security messages contribute to technostress, as
compared with relying on self-reported measures only.
3.4 How do fear and uncertainty affect users’ neural processing of security messages?
Fear and uncertainty are two closely related concepts that can have powerful impacts on
how individuals respond to security messages. Fear is an emotional state entered in response
to the presence of a threat to safety (Whalen, 1998; Witte, 1992). It is a strong neural correlate
of amygdala activation (LeDoux, 2003; Murphy et al., 2003), and prompts threat-withdrawal
(Frijda, 1986) or safety-approaching behaviors (Blanchard & Blanchard, 1994). In an information
security context, fear is commonly used in both benevolent and malicious messages.
Benevolent security messages, such as fear appeals, describe a threat to an individual's
information (Johnston et al., 2014; Johnston & Warkentin, 2010). Attackers may likewise
manipulate fear in their messages. Phishing messages may contain ominous warnings about a
pending threat to a user’s bank account if the user does not immediately verify their login
credentials (Drake et al., 2004; Kessem, 2012). While users may cognitively know about the
existence of phishing schemes, strong emotion such as fear may invoke automatic responses
that bypass cognition (Ortiz de Guinea & Markus, 2009), leading to an individual falling victim to
the attack. Such tactics may take advantage of human tendencies to be more risk-averse and
risk pessimistic while experiencing fear (Lerner & Keltner, 2001). To a fearful user, complying with such a message can seem to be the more conservative option, even though doing so means falling victim to the attack.
Uncertainty, a state associated with decision making, is linked to fear in the sense that
fear is an emotion experienced in response to a threatening situation with uncertain outcomes –
information about the outcomes may be conflicting or missing. Uncertainty of this type has been
described as being "ambiguous" (Camerer & Weber, 1992), and has been found to be
associated with neural correlates (Hsu et al., 2005) and behaviors distinct from those of decisions with known risks. In decision-making, humans avoid ambiguous uncertainty beyond what can
be explained by probabilistic risk calculation theories, suggesting a basic human aversion to it
(Ellsberg, 1961). The definition of ambiguous uncertainty overlaps with the relevant concept of
unknown risks, which are defined by hazards judged to be unobservable, unknown, new, and
delayed in their manifestation of harm (Farahmand et al., 2008). Security messages often
require users to make reactive decisions steeped in uncertainty with various factors contributing
to the uncertainty. For example, malware, SSL certificate, software certificate, and phishing
warnings often describe potential attacks, occasionally providing key indicators that raised the
alarm, but leave the choice to heed or disregard the message to the user. Rather than provide
probabilities of risk to users, these kinds of warnings convey ambiguous uncertainty out of
necessity, given that it is hard to quantify the likelihood of information security risk for any given
user (Herley, 2009). Even if warnings do make specific risk probability assertions, users may
still enter a state of uncertainty, based on judging the credibility of the message, if they reason
that the messages may be “crying wolf” (Sunshine et al., 2009): they have seen similar messages before, disregarded them, and did not experience the advertised adverse outcome.
Thus, adding to the uncertainty is the difficulty of immediately discerning the outcomes of
available choices in response to security threat messages. Furthermore, users may experience
heightened uncertainty due to not comprehending the text of a security message, which is a
common occurrence (e.g., Dhamija et al., 2006; Egelman et al., 2008; Sunshine et al., 2009).
Recent calls have been made for the use of neurological methodologies to better capture
the subtle emotional currents of fear in IS contexts (Dimoka et al., 2012; Riedl et al., 2010a). In
neurological studies, fear has been manipulated in human participants using methods such as
fear conditioning (LeDoux, 2000; Murphy et al., 2003) and, interestingly, exposure to fearful
facial expressions (Morris et al., 1996; Murphy et al., 2003). Fearful facial expressions, thought
to serve the observer as signals of a threat to safety (Gray, 1987), cause similar brain
activations regardless of observers' attention to or awareness of the expressions (e.g.,
Anderson et al., 2003). Regarding IS security manipulations such as fear appeals (Johnston & Warkentin, 2010), Crossler et al. (2013) recently called for the use of neurological methodologies to examine whether information threats invoke emotional fear similar to threats to physical safety, a call answered by a recent research proposal to use fMRI to investigate the question (Warkentin et al., 2012). Another possible measurement approach to capturing fear
includes galvanic skin conductance measurements (e.g., Soares & Ohman, 1993; Tupak et al.,
2014).
Uncertainty has been captured in neuroscience studies with fMRI (Hsu et al., 2005; Krain
et al., 2006) and associated with activity in the amygdala, the orbitofrontal cortex, and the
striatum (c.f. Platt & Huettel, 2008; Sarinopoulos et al., 2010). It has also been measured via
skin conductance (e.g., Grupe & Nitschke, 2011; Sarinopoulos et al., 2010).
A NeuroIS examination of how fear and uncertainty impact processing of security
messages points to several prospective contributions. First, the objective measures afforded by
NeuroIS techniques will enable researchers to assess the impact of fear on users’ response to
security messages more directly. For example, scholars may determine whether fear prompted
by security messages has an inverted U-shaped relationship with its effectiveness in prompting
protective actions, in which some level of fear leads to more effective results, but too much
leads to maladaptive responses (Liang & Xue, 2009). Second, in so doing, scholars may design
security messages that create an optimum level of fear in message recipients, which may even
be tailored to the individual. Third, a NeuroIS examination of uncertainty may locate regions of
the brain correlated with uncertainty in processing of security messages, allowing the effects of
uncertainty on processing of security messages to be more precisely measured. Finally, this in
turn may lead to the development of more effective security messages or even SETA programs
that objectively reduce the neurological levels of uncertainty in users.
4. EXPERIMENT
In this section, we describe a NeuroIS study that represents an initial foray into the
research question of how gender affects users’ reception of security messages, and is
illustrative of the value of pursuing the research questions outlined above. Our experiment used
EEG to examine whether there are gender differences in response to security messages,
specifically in this case, browser malware warnings.
EEG and P300
The neurophysiological measure we use in this study is the P300 component of an
event-related potential (ERP) measured with electroencephalography (EEG). ERPs measure
neural events triggered by specific stimuli or actions and the P300 component is produced by a
distributed network of brain processes associated with attention and memory operations. The
P300 is observed in any task that requires stimulus discrimination (Polich, 2007). The P300
component is measured by assessing the amplitude and latency of a waveform within a time
window, usually between 250 and 500 milliseconds. Peak amplitude is defined as the difference
between the mean pre-stimulus baseline voltage and the largest positive-going peak of the ERP
waveform during the time window. Peak latency is defined as the time from stimulus onset to the
point of maximum positive amplitude within a time window. Because peak measurements are
more easily influenced by noise in EEG measurements, many researchers use more resistant
measures such as mean amplitudes and the 50 percent area latency within specific post-stimulus time windows (Luck, 2005). The P300 scalp distribution is defined as the amplitude change over the midline electrodes (Fz, Cz, Pz), which typically increases in magnitude from the
frontal to parietal electrode sites (Johnson, 1993).
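To make these measures concrete, the following minimal sketch (in Python with NumPy; not the analysis code used in this study) illustrates how peak amplitude, peak latency, mean amplitude, and the 50 percent area latency could be computed from a single-channel ERP waveform. The function name and inputs are illustrative assumptions.

```python
import numpy as np

def p300_measures(erp, times, window=(0.300, 0.600), baseline=(-0.200, 0.0)):
    """Illustrative P300 measures for a single-channel ERP waveform.

    erp   : 1-D array of voltages, one value per sample
    times : 1-D array of sample times in seconds, 0 = stimulus onset
    """
    base = erp[(times >= baseline[0]) & (times < baseline[1])].mean()
    in_win = (times >= window[0]) & (times <= window[1])
    seg, seg_t = erp[in_win] - base, times[in_win]

    peak_idx = np.argmax(seg)
    peak_amplitude = seg[peak_idx]   # largest positive-going peak relative to the pre-stimulus baseline
    peak_latency = seg_t[peak_idx]   # time from stimulus onset to that peak

    mean_amplitude = seg.mean()      # noise-resistant amplitude measure

    # 50 percent area latency: the time at which half of the positive area
    # within the window has accumulated (a noise-resistant latency measure).
    area = np.cumsum(np.clip(seg, 0.0, None))
    area_latency_50 = seg_t[np.searchsorted(area, area[-1] / 2.0)]

    return peak_amplitude, peak_latency, mean_amplitude, area_latency_50
```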
Neuroscientists distinguish between two different sub-components of the P300 – the P3a
and the P3b (Squires et al., 1975). The P3a occurs earliest within the P300’s time window, has
a more frontal distribution, and is associated with exposure to novel stimuli (Polich, 2003). The
P3b, which can occur anywhere within the P300’s time window (Polich, 2007), has a more
posterior distribution, and is elicited in connection with improbable events, cognitive processing,
and cognitive workload magnitude (Donchin, 1981). The P3b refers to what was generally called
the P300, and hence has the nickname “classic P300” (Squires et al., 1975). In this paper, all
references to the P300 refer to the P3b.
Hypotheses
Our experiment is a variant of the oddball paradigm (originally used by Squires et al.,
1975) because our subjects saw one of the targets (the security warning) relatively infrequently.
Passive stimulus processing generally produces smaller P300 amplitudes than active tasks.
Since infrequent, low-probability stimuli can be biologically important, adapting to inhibit unrelated activity promotes processing efficiency, thereby yielding larger P300 amplitudes
(Steffensen et al., 2008). Because the security warning could be a very important (although
infrequent) event, we would expect its P300 to be higher relative to that elicited by the regular websites. Therefore we hypothesize:
H1. Mean P300 amplitude will be greater for all participants when viewing security
warning screenshots than when viewing regular website screenshots.
The relationship between gender and P300 amplitude has been controversial, as some studies report no gender difference or report larger amplitudes in males (Roalf et al., 2006). However, Kolb and
Whishaw (1990) demonstrated that event-related potentials are sensitive to gender. Using both
auditory and visual oddball tasks, Steffensen et al. (2008) reported that females have a larger
P300 amplitude than males. Additionally, Guillem and Mograss (2005) showed that females had
a greater P300 response to an event-related potential for the relevant stimulus than did males.
Papanicolaou et al. (1985) and Polich (1989) proposed that hemispheric asymmetry
might give rise to greater P300 amplitudes in females than males. Because the brains of men
are typically more lateralized (asymmetrical) than those of women (Yee & Miller, 1987), women
should evince more symmetrical processing of visual stimuli. This difference in symmetry may
lead to differences between the measures of P300 at the Cz (central) and Pz (parietal) sites.
Females show a larger anteroposterior amplitude gradient, meaning the front part of the brain
plays a more critical role in females (e.g., Emmerson et al., 1989; Johnson et al., 1985; Pelosi et
al., 1992).
It has been suggested that stimulus “intensity” may be an important variable in
determining P300 amplitude (Howard & Polich, 1985). Fjell and Walhovd (2001) reported a
larger P300 amplitude elicited when viewing unpleasant scenes presented on slides than when viewing pleasant scenes. Paired with the general distrust shown by women in online environments
(Bashore & Ridderinkhof, 2002; Polich & Corey-Bloom, 2005), we hypothesize that the P300
amplitude will be higher for women than men, subject to brain topography:
H2. Mean P300 amplitude will be greater for women than for men when viewing security
warning screenshots.
Method
Participants and Materials
Sixty-one healthy volunteers (32 females, 29 males) were recruited from a large private
university to participate. Participants gave written informed consent before participation and
received $10 for their participation. Participants had normal color vision, and were free from
head injury, neurological insult, and major psychiatric disorders. Mean age for participants was
22.44 (std.=2.35); females: 21.69 (2.62); males: 23.28 (1.69). The stimuli consisted of 20 screen
shots of popular websites (e.g., amazon.com, netflix.com) as well as the Google Chrome web
browser malware warning screen (see Figure 5).
Figure 5. Color screenshot of Google Chrome security warning screen.
Procedures
Participants were instructed that this was a target-detection task in which they would be
shown images of common websites and browser warning screens. Participants were
familiarized prior to the experiment with the warning screen and instructed to press one button
with the index finger of their right hand for “safe” websites and another button with the middle
finger of their right hand for warning screens. Stimuli were presented in an electrically-shielded
testing room on a 17-inch LCD computer monitor, and responses were recorded using a computer
mouse. Each trial of the experiment began with a central fixation screen for 2000 ms followed by
a safe website or warning screen stimulus. Each safe website was presented five times for a
total of 100 safe websites. The colorized warning screen was presented eight times and a
grayscale version of the same warning screen was presented twice (please see Appendix A for
a test controlling for color). Stimulus order was randomized. Stimuli were presented for 3000 ms,
during which time participants were instructed to press a button in order to classify the stimulus.
Stimulus presentation was followed by a “blink” screen for 1500 ms during which they were
instructed to blink.
Electrophysiological Data Recording and Processing
The electroencephalogram (EEG) was recorded from 128 scalp sites using a HydroCel
Geodesic Sensor Net and an Electrical Geodesics Inc. (EGI; Eugene, Oregon, USA)
amplification system (amplification 20K, nominal bandpass 0.10-100 Hz). The EEG was
referenced to the vertex electrode and digitized at 250 Hz. Impedances were maintained below
50 kΩ. EEG data were processed off-line beginning with a 0.1 Hz first-order high-pass filter and
30 Hz low-pass filter. Stimulus-locked ERP averages were derived spanning 200 ms pre-stimulus to 1000 ms post-stimulus, and were segmented based on trial type criteria (safe
websites, warning screens). Eye blinks were removed from the segmented waveforms using
Independent Components Analysis (ICA) in the ERP PCA Toolkit (Dien, 2010). The ICA
components that correlated at 0.9 with the scalp topography of a blink template (generated from
data recorded during the “blink” screens) were removed from the data (Dien et al., 2010; Frank
& Frishkoff, 2007). Artifacts in the EEG data due to saccades and motion were removed from
the segmented waveforms using Principal Components Analysis (PCA) in the ERP PCA Toolkit
(Dien, 2010). Channels were marked bad if the fast average amplitude exceeded 100 mV or if
the differential average amplitude exceeded 50 mV. Data from three participants (1 female, 2
males) were excluded from ERP analyses due to low trial counts or excess bad channels. Data
from the remaining participants were averaged and re-referenced, and waveforms were baseline corrected using a 200 ms window prior to stimulus presentation.
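For readers unfamiliar with this kind of pipeline, the sketch below shows roughly equivalent preprocessing steps in Python using the open-source MNE library. It is not the pipeline used in this study (which relied on EGI software and the ERP PCA Toolkit), and the file name, trigger codes, and surrogate ocular channel are hypothetical placeholders.

```python
import mne

# Hypothetical raw EGI recording; the file name and event codes are placeholders.
raw = mne.io.read_raw_egi("subject01.mff", preload=True)
raw.filter(l_freq=0.1, h_freq=30.0)               # 0.1 Hz high-pass, 30 Hz low-pass

events = mne.find_events(raw)
event_id = {"safe_site": 1, "warning_screen": 2}  # assumed trigger codes

# Stimulus-locked epochs spanning 200 ms pre-stimulus to 1000 ms post-stimulus,
# baseline-corrected against the 200 ms pre-stimulus window.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=1.0,
                    baseline=(-0.2, 0.0), preload=True)

# Remove blink-related activity with ICA. The study matched independent components
# against a blink template; here MNE's EOG-based detection stands in, using an
# assumed periocular electrode as a surrogate EOG channel.
ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(epochs)
eog_indices, _ = ica.find_bads_eog(epochs, ch_name="E25")
ica.exclude = eog_indices
ica.apply(epochs)

# Average within each condition to obtain the stimulus-locked ERPs.
evoked_safe = epochs["safe_site"].average()
evoked_warning = epochs["warning_screen"].average()
```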
The P300 amplitudes were extracted as the mean amplitude within the 300-600 ms post-stimulus window (Fjell & Walhovd, 2001). Latencies were calculated as the 50 percent area latency (Bashore & Ridderinkhof, 2002; Polich & Corey-Bloom, 2005) for the 300-600 ms post-stimulus window. Amplitudes and latencies were analyzed for the Cz and Pz electrodes using repeated-measures ANOVAs (see Figure 6 for a regional map of the EEG net). The Cz region is the
central region of the scalp and the Pz region is the parietal region of the scalp.
Figure 6. Layout of the 128-electrode EEG net with Cz and Pz electrodes marked in red. The
front of the head is up.
Analysis
Behavioral Performance
As a group, participants correctly identified 96.9% (std.=7.9%) of safe websites and
99.3% (2.6%) of warning screens. Because of the disproportionate number of safe trials (100
out of 110), participants could have responded “safe” regardless of the stimulus and still
obtained 91% correct. Accordingly, we calculated a discriminability measure (d') that took into
account hit rates as well as false-alarm rates [i.e., d'=z(hits)-z(false alarms)]. The mean d'
measure was well above chance (mean d' = 4.66, std. = 0.60, t = 58.74, p < .001).
Discriminability did not differ across gender (male mean d' = 4.58, std. = .68; female mean d' =
4.73, std. = .53, t = .94, p = .35). Reaction times (RTs) likewise did not differ across gender
(male mean RT = 952.3 ms, std. = 322.3 ms; female mean RT = 847.4 ms, std. = 275.9 ms; t = 1.34, p = .19).
Taken together, these results indicate that men and women performed approximately equally on
the behavioral task.
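As a concrete illustration of the d' calculation described above, the following sketch (Python with SciPy; not the authors' analysis code) computes d' from trial counts, with an illustrative correction that keeps hit and false-alarm rates away from 0 and 1 so the z-transform stays finite (the paper does not state which correction, if any, was used).

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Discriminability index d' = z(hit rate) - z(false-alarm rate)."""
    # Illustrative correction to avoid infinite z-scores at rates of exactly 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example with warning screens as targets: 10 of 10 warnings detected (hits),
# and 1 of 100 safe websites misclassified as a warning (false alarm).
print(d_prime(hits=10, misses=0, false_alarms=1, correct_rejections=99))
```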
The P300 ERP
Investigation of the topographical activation maps confirmed a centrally distributed positivity at 300 ms post-stimulus onset (Figure 7). Greater electrical activity is visible along the scalp for the warning images than for the safe images.
Figure 7. Topographical distribution of ERP potentials for the 300-600 ms post-stimulus periods.
There was a centrally distributed positivity at 300 ms post-stimulus onset.
Figure 8 depicts the grand average waveforms for the Cz and Pz electrode sites. The P300
amplitude was analyzed during the 300-600 ms post stimulus period (shaded). At the Cz
electrode site, women had greater amplitudes for the P300 for both safe websites and the
warning screen. The P300 amplitude was enhanced for the warning screen across genders at
both electrode sites.
Figure 8. Grand average ERP waveforms at the Cz and Pz electrode sites.
For our hypothesis testing, we first examined whether P300 was higher for all
participants when viewing security warning screenshots than when viewing regular website
screenshots (H1). At the Cz electrode site, an ANOVA on the mean amplitude revealed a main
effect of screen type (i.e., safe sites vs. warning screens, F = 34.23, p < .001). Similarly, at the
Pz electrode site, an ANOVA revealed a main effect of screen type (F= 23.35, p < .001). Thus
H1 was supported.
We next examined whether P300 amplitude was higher for women than for men when
viewing security warning screenshots (H2). At the Cz electrode site, an ANOVA demonstrated a
main effect of gender (F = 4.63, p <.05; see Table 3). At the Pz electrode site, no main effect of
gender was found (F = .12, p =.73). Thus, the amplitude of P300 was greater for women than for
men at the Cz electrode overall, but not at the more posterior Pz electrode site. Future research
could examine the regional difference in responses. Therefore, H2 was supported, albeit only for
the Cz electrode site.
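For illustration, a mixed-design ANOVA of this kind could be run in Python with the pingouin package, as sketched below. This is not the software used in the study; the data file and column names are hypothetical, and per-participant mean amplitudes are assumed to be available in long format.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format table of mean P300 amplitudes (300-600 ms, Cz electrode):
# one row per participant and screen type, with columns subject, gender, screen, amplitude.
df = pd.read_csv("p300_cz_amplitudes.csv")

# Mixed-design ANOVA: gender is a between-subjects factor and screen type
# (safe website vs. warning screen) is a within-subjects factor.
aov = pg.mixed_anova(data=df, dv="amplitude", within="screen",
                     between="gender", subject="subject")
print(aov[["Source", "F", "p-unc"]])

# The main effect of screen type corresponds to H1, the main effect of gender to H2,
# and the Interaction row tests whether the warning-related "boost" differs by gender.
```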
Table 3: Mean amplitude (and standard deviation) of the P300 ERP component as measured during the 300-600 ms post-stimulus period
             Cz mean (std.)                     Pz mean (std.)
             Females        Males               Females        Males
Safe         0.07 (1.73)    -1.02 (1.89)        2.11 (1.99)    2.36 (2.32)
Warning      2.30 (3.34)    0.94 (2.85)         3.70 (3.58)    3.92 (3.29)
Interestingly, although we observed the expected P300 enhancement for warning
screens relative to regular websites at both electrode sites, the gender by screen type
interaction was not significant (Cz: F = .15, p = .70; Pz: F = .01, p = .95). This indicates that while
women do exhibit greater P300 amplitudes than do men when viewing security warning screens,
the size of the enhancement or “boost” to P300 amplitude when viewing security warning
screens was not greater for women than for men at either electrode site. In other words, for both
women and men, P300 increased by approximately the same amount when viewing the security
warning (see Table 3). We summarize the results of our hypothesis testing in Table 4
below.
Table 4. Hypotheses and Outcomes
H1: P300 will be higher for all participants when viewing security warning screenshots than when viewing regular website screenshots. Supported? Yes.
H2: P300 will be higher for women than for men when viewing security warning screenshots. Supported? Yes, for the Cz region.
5. CONTRIBUTIONS
This study makes several important contributions—both conceptual and empirical—to
the study of security messages and to behavioral information security generally, as summarized
in Table 5 next. We elaborate on each of these contributions.
Table 5. Research Contributions
Conceptual
Research agenda: Identifies four promising areas of research grounded in the cognitive neuroscience literature.
Multidisciplinary approach: Advocates a multidisciplinary approach to the study of security messages integrating behavioral security and cognitive neuroscience.
Taxonomy of security messages: Provides a foundation for subsequent study of security messages.
Empirical
Increased P300 amplitude in response to malware warnings for men and women: Shows that security warnings do elicit a response in the brain, indicating that they succeed in gaining participants’ attention and prompting a cognitive decision process.
Women exhibit greater P300 amplitude than do men in response to malware warnings: Presents empirical evidence that men and women process malware warnings differently in the brain. Represents a first step in investigating the influence of gender on users’ responses to security messages (Research Question 1).
EEG method illustration: Illustrates how the EEG method may be used to examine users’ responses to security warnings. EEG is amenable to studying a variety of phenomena relating to security messages, and security behavior in general.
Proof of value of proposed research questions: Demonstrates the value of using NeuroIS to examine our proposed research questions about how users process security messages.
Conceptual Contributions
First, we outline a research agenda comprising four guiding questions for researching
users’ reception of security messages via NeuroIS methods. This agenda is a valuable resource
to the behavioral information security community because it (1) identifies several potentially
fruitful streams of research based on our review of the cognitive neuroscience and behavioral
information security literatures. Additionally, our agenda (2) points out a variety of NeuroIS
methods that are well suited to investigating each question. Thus, this research agenda
provides a groundwork that can assist scholars in initiating their own research on users'
processing of security messages.
Second, this paper advocates a multidisciplinary approach to the study of security
messages, integrating behavioral information security and cognitive neuroscience. Doing so will
increase our understanding beyond that provided by traditional experimental observation and
self-reporting. Our research questions are especially amenable to a NeuroIS lens because the
factors of (1) gender, (2) habituation, (3) stress, and (4) fear or uncertainty are deeply rooted in
our psyches and may affect our behavior unconsciously. For this reason, these and similar
factors are difficult to measure without neurophysiological tools (Riedl et al., 2010b). Using
NeuroIS methods to directly observe the brain can afford insights into IS phenomena that can be
gained in no other way (Dimoka et al., 2011).
Third, we formally define security messages and provide a taxonomy to aid in their
examination. Further, our taxonomy usefully unifies the research domains of security warnings
and malicious messages that have previously been examined as independent phenomena
(Akhawe & Felt, 2013; Luo et al., 2013). Following Gregor’s (2006) classification of theory,
taxonomies are “theories for analyzing,” which “analyze ‘what is’ as opposed to explaining
causality or attempting predictive generalizations” (2006, p. 622). Taxonomies provide a
foundation for the subsequent development of higher-order theories of explanation, prediction,
and design. In the space of security behavior, taxonomies have proven useful guides for future
research (Loch et al., 1992; Willison & Warkentin, 2013).
Empirical Contributions
First, the results showed that the amplitude of the P300 increased for all subjects when viewing
the security warning screenshot, regardless of gender, supporting H1. This finding indicates that
security warnings do elicit a response in the brain, even for static images in our simulated
laboratory experiment. Although users may be habituated to ignore warnings (Egelman et al.,
2008), this finding establishes that the security warning succeeded in gaining participants’
attention and triggering a neurological response. Moreover, this finding illustrates the usefulness
of NeuroIS methods to directly measure the effectiveness of security messages in prompting a
cognitive decision process. Such neurological data can aid researchers in designing more effective
security warnings (Dimoka et al., 2012).
Second, we found empirical evidence that men’s and women’s brains process malware
warnings differently, as exhibited by the greater P300 amplitudes for electrodes in the Cz region
for women when viewing security warnings. This finding corresponds to previous findings in
trust research which showed that women are generally less trusting of online sites than are men
(Alesina & La Ferrara, 2002; Riedl et al., 2010b; Rodgers & Harris, 2003). Although preliminary
in nature, our findings indicate that significant gender-based differences exist in the brain. These
findings can potentially be useful in designing security messages that adapt to the user, based
on a profile which includes gender.
Third, our experiment illustrates the usefulness of EEG for examining questions relating
to security messages. We chose EEG because of our hypotheses' implicit requirement for high
temporal precision. The temporal resolution of EEG is fine enough that the P300 response can
be measured approximately 300 ms after stimulus onset, before any conscious decision
processes begin. This allowed us to observe visceral responses at their
most basic level, which is not possible with conventional survey or experimental methods.
Accordingly, EEG is an effective method to study a variety of time-sensitive phenomena relating
to behavioral information security.
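To illustrate the event-locked analysis that this temporal precision affords, the sketch below epochs a continuous EEG recording around stimulus onsets and extracts the stimulus-locked ERP with MNE-Python. The file name, stimulus channel, and trigger codes are hypothetical; this is a minimal sketch rather than our acquisition or analysis pipeline.

```python
# Illustrative sketch: epoch continuous EEG around stimulus onsets and extract
# the stimulus-locked ERP. File name, stimulus channel, and trigger codes are
# hypothetical.
import mne

raw = mne.io.read_raw_fif("participant01_raw.fif", preload=True)
raw.filter(l_freq=0.1, h_freq=30.0)                  # typical ERP band-pass

events = mne.find_events(raw, stim_channel="STI 014")
event_id = {"safe": 1, "warning": 2}                 # hypothetical trigger codes

# Epochs are time-locked to stimulus onset with millisecond precision, which is
# what allows P300 effects to be resolved roughly 300 ms after the stimulus.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                    baseline=(-0.2, 0.0), preload=True)

epochs["warning"].average().plot(picks=["Cz", "Pz"])  # ERP at Cz and Pz
```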
Finally, the results of our experiment provide a proof of value for our NeuroIS research
agenda for security messages, demonstrating the kind and quality of insights that can be gained
by pursuing our proposed research questions. Further, our initial foray into the question of how
gender influences users’ reception of security messages naturally suggests related questions.
For example, it is presently unknown whether women's heightened reaction to malware warnings
is due to an increased perception of threat. fMRI measurements of fear-response activation in the
brain could be compared between genders to answer this question.
Our results illustrate the promise of NeuroIS to increase our understanding of users' reception of
security messages, leading to the development of more complete behavioral theories and
guiding the design of more effective security warnings (Dimoka et al., 2012).
6. CONCLUSION
NeuroIS has the potential to provide new understanding of how users respond to security
messages, a problem that has long vexed security researchers (Adams & Sasse, 1999; Bravo-Lillo et al., 2013). In this paper, we put forward a NeuroIS research agenda to examine four key
neurological factors relating to how users receive and process security messages. Further, we
presented the results of an experiment that illustrates the value and kinds of insights that can be
derived using a NeuroIS approach. By pursuing these research questions, IS security scholars
can significantly advance our understanding of security messages and how they can be made
more effective.
7. REFERENCES
ABBASI A, ZHANG Z, ZIMBRA D, CHEN H and NUNAMAKER JR JF (2010) Detecting fake
websites: the contribution of statistical learning theory. MIS Quarterly 34(3).
ADAMS A and SASSE MA (1999) Users are not the enemy. Communications of the ACM
42(12), 40-46.
AKHAWE D and FELT AP (2013) Alice in warningland: A large-scale field study of browser
security warning effectiveness. In Proceedings of the 22nd USENIX Security Symposium.
ALESINA A and LA FERRARA E (2002) Who trusts others? Journal of Public Economics 85(2),
207-234.
ANDERSON AK, CHRISTOFF K, PANITZ D, ROSA ED and GABRIELI JDE (2003) Neural
Correlates of the Automatic Processing of Threat Facial Signals. The Journal of
Neuroscience 23(13), 5627-5633.
AYYAGARI R, GROVER V and PURVIS R (2011) Technostress: technological antecedents and
implications. MIS Quarterly 35(4), 831-858.
BAKKER A, KIRWAN CB, MILLER M and STARK CEL (2008) Pattern Separation in the Human
Hippocampal CA3 and Dentate Gyrus. Science 319(5870), 1640-1642.
BARON-COHEN S, KNICKMEYER RC and BELMONTE MK (2005) Sex differences in the brain:
Implications for explaining autism. Science 310(5749), 819-823.
BASHORE TR and RIDDERINKHOF KR (2002) Older age, traumatic brain injury, and cognitive
slowing: Some convergent and divergent findings. Psychological Bulletin 128(1), 151-198.
BENBASAT I, DIMOKA A, PAVLOU PA and QIU L (2010) Incorporating social presence in the
design of the anthropomorphic interface of recommendation agents: insights from an
fMRI study. In ICIS 2010 Proceedings, St. Louis.
BLANCHARD RJ and BLANCHARD DC (1994) Opponent environmental targets and
sensorimotor systems in aggression and defence. In Ethology and psychopharmacology
(Cooper SJ and Hendrie CA, Eds.), pp 133-157, Wiley, Chichester, U.K.
BRAVO-LILLO C, KOMANDURI S, CRANOR LF, REEDER RW, SLEEPER M, DOWNS J, et al.
(2013) Your Attention Please: Designing security-decision UIs to make genuine risks
harder to ignore. In Proceedings of the Ninth Symposium on Usable Privacy and
Security, p 6, ACM.
BROD C (1984) Technostress: The human cost of the computer revolution. Addison-Wesley
Reading, MA.
BRUSTOLONI JC and VILLAMARÍN-SALOMÓN R (2007) Improving security decisions with
polymorphic and audited dialogs. In Proceedings of the 3rd symposium on Usable
privacy and security, pp 76-85, ACM.
CAMERER C and WEBER M (1992) Recent developments in modeling preferences:
Uncertainty and ambiguity. Journal of risk and uncertainty 5(4), 325-370.
CHAPANIS A (1994) Hazards associated with three signal words and four colours on warning
signs. Ergonomics 37(2), 265-275.
COOPER CL, DEWE PJ and O'DRISCOLL MP (2001) Organizational stress: A review and
critique of theory, research, and applications. Sage.
COWAN RL, FREDERICK BDB, RAINEY M, LEVIN JM, MAAS LC, BANG J, et al. (2000) Sex
differences in response to red and blue light in human primary visual cortex: a bold fMRI
study. Psychiatry Research: Neuroimaging 100(3), 129-138.
CRANOR LF (2008) A Framework for Reasoning About the Human in the Loop. UPSEC 8, 1-15.
CROSSLER RE, JOHNSTON AC, LOWRY PB, HU Q, WARKENTIN M and BASKERVILLE R
(2013) Future directions for behavioral information security research. Computers &
Security 32(0), 90-101.
CYR D and BONANNI C (2005) Gender and website design in e-business. International Journal
of Electronic Business 3(6), 565-582.
CYR D, HASSANEIN K, HEAD M and IVANOV A (2007) The role of social presence in
establishing loyalty in E-Service environments. Interacting with Computers 19(1), 43-56.
DESIMONE R (1996) Neural mechanisms for visual memory and their role in attention.
Proceedings of the National Academy of Sciences 93(24), 13494-13499.
DHAMIJA R, TYGAR JD and HEARST M (2006) Why phishing works. In Proceedings of the
SIGCHI conference on Human Factors in computing systems, pp 581-590.
DICKERSON SS and KEMENY ME (2004) Acute stressors and cortisol responses: a theoretical
integration and synthesis of laboratory research. Psychological bulletin 130(3), 355.
DIEN J (2010) The ERP PCA toolkit: An open source program for advanced statistical analysis
of event-related potential data. Journal of Neuroscience Methods 187(1), 138-145.
DIEN J, MICHELSON CA and FRANKLIN MS (2010) Separating the visual sentence N400
effect from the P400 sequential expectancy effect: Cognitive and neuroanatomical
implications. Brain Research 1355(0), 126-140.
DIMOKA A, BANKER RD, BENBASAT I, DAVIS FD, DENNIS AR, GEFEN D, et al. (2012) On
the use of neurophysiological tools in IS research: Developing a research agenda for
NeuroIS. MIS Quarterly 36(3), 679-702.
DIMOKA A, PAVLOU PA and DAVIS FD (2011) Research commentary—NeuroIS: The potential
of cognitive neuroscience for information systems research. Information Systems
Research 22(4), 687-702.
DONCHIN E (1981) Surprise!… surprise? Psychophysiology 18(5), 493-513.
DRAKE CE, OLIVER JJ and KOONTZ EJ (2004) Anatomy of a phishing email, Mountain View,
CA, USA.
DREW C (2011) Stolen Data Is Tracked to Hacking at Lockheed. The New York Times, New
York.
EGELMAN S, CRANOR LF and HONG J (2008) You've been warned: an empirical study of the
effectiveness of web browser phishing warnings. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems, pp 1065-1074, ACM, Florence,
Italy.
ELLIOT AJ, MAIER MA, BINSER MJ, FRIEDMAN R and PEKRUN R (2009) The effect of red on
avoidance behavior in achievement contexts. Pers Soc Psychol Bull 35(3), 365-375.
ELLIOT AJ, MAIER MA, MOLLER AC, FRIEDMAN R and MEINHARDT J (2007) Color and
psychological functioning: The effect of red on performance attainment. J Exp Psychol
Gen 136(1), 154-168.
ELLIOT AJ, NIESTA KAYSER D, GREITEMEYER T, LICHTENFELD S, GRAMZOW RH,
MAIER MA, et al. (2010) Red, rank, and romance in women viewing men. Journal of
Experimental Psychology: General 139(3), 399-417.
ELLSBERG D (1961) Risk, ambiguity, and the Savage axioms. The Quarterly Journal of
Economics, 643-669.
EMMERSON RY, DUSTMAN RE, SHEARER DE and TURNER CW (1989) P3 latency and
symbol digit performance correlations in aging. Experimental Aging Research 15(3-4),
151-159.
FARAHMAND F, ATALLAH M and KONSYNSKI B (2008) Incentives and perceptions of
information security risks. In International Conference on Information Systems (ICIS)
2008 Proceedings, Citeseer.
FISCHER H, FRANSSON P, WRIGHT CI and BÄCKMAN L (2004) Enhanced occipital and
anterior cingulate activation in men but not in women during exposure to angry and
fearful male faces. Cognitive, Affective, & Behavioral Neuroscience 4(3), 326-334.
FJELL A and WALHOVD K (2001) P300 and neuropsychological tests as measures of aging:
Scalp topography and cognitive changes. Brain Topography 14(1), 25-40.
FOLTZ CB, SCHWAGER PH and ANDERSON JE (2008) Why users (fail to) read computer
usage policies. Industrial Management & Data Systems 108(6), 701-712.
FRANK RM and FRISHKOFF GA (2007) Automated protocol for evaluation of electromagnetic
component separation (APECS): Application of a framework for evaluating statistical
methods of blink extraction from multichannel EEG. Clinical Neurophysiology 118(1), 80-97.
FRIJDA NH (1986) The emotions. Cambridge University Press, Cambridge, New York.
FURNELL S and CLARKE N (2012) Power to the people? The evolving recognition of human
aspects of security. Computers & Security 31(8), 983-988.
GARBARINO E and STRAHILEVITZ M (2004) Gender differences in the perceived risk of
buying online and the effects of receiving a site recommendation. Journal of Business
Research 57(7), 768-775.
GARTNER (2013) Gartner Says Worldwide Security Market to Grow 8.7 Percent in 2013.
Retrieved January 29, 2014, from http://www.gartner.com/newsroom/id/2512215.
GEFEN D, BENBASAT I and PAVLOU PA (2008) A research agenda for trust in online
environments. Journal of Management Information Systems 24(4), 275-286.
GEORGOPOULOS AP, WHANG K, GEORGOPOULOS M-A, TAGARIS GA, AMIRIKIAN B,
RICHTER W, et al. (2001) Functional magnetic resonance imaging of visual object
construction and shape discrimination: relations among task, hemispheric lateralization,
and gender. Journal of cognitive neuroscience 13(1), 72-89.
GLAESER EL, LAIBSON DI, SCHEINKMAN JA and SOUTTER CL (2000) Measuring Trust. The
Quarterly Journal of Economics 115(3), 811-846.
GODFREY SS, ALLENDER L, LAUGHERY KR and SMITH VL (1983) Warning messages: will
the consumer bother to look? In Proceedings of the Human Factors and Ergonomics
Society Annual Meeting, pp 950-954, Sage Publications.
GRAY JA (1987) The psychology of fear and stress. (2 ed.). Cambridge University Press, New
York, NY US.
GREGOR S (2006) The nature of theory in information systems. MIS Quarterly 30(3), 611-642.
GRILL-SPECTOR K, HENSON R and MARTIN A (2006) Repetition and the brain: neural
models of stimulus-specific effects. Trends in cognitive sciences 10(1), 14-23.
GRUPE DW and NITSCHKE JB (2011) Uncertainty is associated with biased expectancies and
heightened responses to aversion. Emotion 11(2).
GUILLEM F and MOGRASS M (2005) Gender differences in memory processing: Evidence
from event-related potentials to faces. Brain and Cognition 57(1), 84-92.
HAYHOE M and BALLARD D (2005) Eye movements in natural behavior. Trends in Cognitive
Sciences 9(4), 188-194.
HERLEY C (2009) So long, and no thanks for the externalities: The rational rejection of security
advice by users. In Proceedings of the 2009 workshop on New security paradigms
workshop, pp 133-144, ACM.
HERLEY C (2012) Why do Nigerian scammers say they are from Nigeria? In WEIS.
HONG J (2012) The state of phishing attacks. Communications of the ACM 55(1), 74-81.
HOWARD L and POLICH J (1985) P300 latency and memory span development.
Developmental Psychology 21(2), 283-289.
HSU M, BHATT M, ADOLPHS R, TRANEL D and CAMERER CF (2005) Neural systems
responding to degrees of uncertainty in human decision-making. Science 310(5754),
1680-1683.
HU Q, WEST R, SMARANDESCU L and YAPLE Z (2014) Why Individuals Commit Information
Security Violations: Neural Correlates of Decision Processes and Self-Control. In Hawaii
International Conference on Systems Sciences, Waikoloa, HI.
HYPPONEN M (2011) How we found the file that was used to hack RSA. Retrieved 6/26/2012,
2012, from http://www.f-secure.com/weblog/archives/00002226.html.
INGALHALIKAR M, SMITH A, PARKER D, SATTERTHWAITE TD, ELLIOTT MA, RUPAREL K,
et al. (2013) Sex differences in the structural connectome of the human brain.
Proceedings of the National Academy of Sciences 111(2), 823-828.
JENKINS JL (2013) Alleviating insider threats: Mitigation strategies and detection techniques.
PhD Thesis, Ph.D., The University of Arizona.
JOHNSON PE, GRAZIOLI S, JAMAL K and GLEN BERRYMAN R (2001) Detecting deception:
adversarial problem solving in a low base-rate world. Cognitive Science 25(3), 355-392.
JOHNSON R, JR. (1993) On the neural generators of the P300 component of the event-related
potential. Psychophysiology 30(1), 90-97.
JOHNSON R, PFEFFERBAUM A and KOPELL BS (1985) P300 and long-term memory:
Latency predicts recognition performance. Psychophysiology 22(5), 497-507.
JOHNSTON A, WARKENTIN M and SIPONEN M (2014) An enhanced fear appeal rhetorical
framework: Leveraging threats to the human asset through sanctioning rhetoric. MIS
Quarterly.
JOHNSTON AC and WARKENTIN M (2010) Fear appeals and information security behaviors:
An empirical study. MIS Quarterly 34(3), 549-566.
KALSHER M and WILLIAMS K (2006) Behavioral compliance: Theory, methodology, and
results. Handbook of warnings, 313-331.
KARJALAINEN M and SIPONEN M (2011) Toward a New Meta-Theory for Designing
Information Systems (IS) Security Training Approaches. Journal of the Association for
Information Systems 12(8).
KESSEM L (2012) Phishing in Season: A Look at Online Fraud in 2012, RSA: Speaking of
Security.
KOLB B and WHISHAW IQ (1990) Fundamentals of human neuropsychology. (3 ed.). W H
Freeman/Times Books/ Henry Holt & Co, New York, NY, US.
KRAIN AL, WILSON AM, ARBUCKLE R, CASTELLANOS FX and MILHAM MP (2006) Distinct
neural mechanisms of risk and ambiguity: A meta-analysis of decision-making.
NeuroImage 32(1), 477-484.
LAUGHERY KR and BRELSFORD JW (1991) Receiver characteristics in safety
communications. In Proceedings of the Human Factors and Ergonomics Society Annual
Meeting, pp 1068-1072, SAGE Publications.
LAUGHERY KR and WOGALTER M (2014) A three-stage model summarizes product warning
and environmental sign research. Safety Science 61, 3-10.
LAZARUS RS (1995) Psychological stress in the workplace. Occupational stress: A handbook 1,
3-14.
LEDOUX J (2003) The emotional brain, fear, and the amygdala. Cellular and Molecular
Neurobiology 23(4-5), 727-738.
LEDOUX JE (2000) Emotion circuits in the brain. Annual Review of Neuroscience 23(1), 155-184.
LEE T, LIU H-L, HOOSAIN R, LIAO W-T, WU C-T, YUEN KS, et al. (2002) Gender differences
in neural correlates of recognition of happy and sad faces in humans assessed by
functional magnetic resonance imaging. Neuroscience letters 333(1), 13-16.
LEE T, LIU H, CHAN C, FANG S and GAO J (2005) Neural activities associated with emotion
recognition observed in men and women. Molecular psychiatry 10(5), 450-455.
LERNER JS and KELTNER D (2001) Fear, anger, and risk. Journal of personality and social
psychology 81(1).
LESCH M (2006) Consumer product warnings: research and recommendations. Wogalter,
Michael. S.
LIANG H and XUE Y (2009) Avoidance of information technology threats: A theoretical
perspective. MIS Quarterly 33(1), 71-90.
LIVERSEDGE SP and FINDLAY JM (2000) Saccadic eye movements and cognition. Trends in
Cognitive Sciences 4(1), 6-14.
LOCH KD, CARR HH and WARKENTIN ME (1992) Threats to information systems: today's
reality, yesterday's understanding. MIS Quarterly, 173-186.
LUCK SJ (2005) An introduction to the event-related potential technique. MIT Press, Cambridge,
Ma.
LUO XR, ZHANG W, BURD S and SEAZZU A (2013) Investigating phishing victimization with
the Heuristic–Systematic Model: A theoretical framework and an exploration. Computers
& Security 38, 28-38.
MACH QH, HUNTER MD and GREWAL RS (2010) Neurophysiological correlates in interface
design: An HCI perspective. Computers in Human Behavior 26(3), 371-376.
MANDIANT (2013) APT1: Exposing one of China's cyber espionage units, Mandiant.
MEYER J (2006) Responses to dynamic warnings. Handbook of Warnings. Human Factors and
Ergonomics, 221-229.
MINNERY BS and FINE MS (2009) Neuroscience and the Future of Human-Computer
Interaction. Interactions.
MITNICK KD and SIMON WL (2001) The art of deception: Controlling the human element of
security. John Wiley & Sons.
MOODY G, GALLETTA D, WALKER J and DUNN B (2011) Which Phish Get Caught? An
Exploratory Study of Individual Susceptibility to Phishing.
MORRIS JS, FRITH CD, PERRETT DI, ROWLAND D, YOUNG AW, CALDER AJ, et al. (1996)
A differential neural response in the human amygdala to fearful and happy facial
expressions.
MURPHY FC, NIMMO-SMITH I and LAWRENCE AD (2003) Functional neuroanatomy of
emotions: A meta-analysis. Cognitive, Affective, & Behavioral Neuroscience 3(3), 207-233.
ORTIZ DE GUINEA A and MARKUS ML (2009) Why break the habit of a lifetime? Rethinking
the roles of intention, habit, and emotion in continuing information technology use. MIS
Quarterly 33(3), 433-444.
PAPANICOLAOU AC, LORING DW, RAZ N and EISENBERG HM (1985) Relationship between
stimulus intensity and the P300. Psychophysiology 22(3), 326-329.
PASHLER H (1994) Dual-task interference in simple tasks: data and theory. Psychological
Bulletin 116(2), 220-244.
PELOSI L, HOLLY M, SLADE T, HAYWARD M, BARRETT G and BLUMHARDT LD (1992)
Event-related potential (ERP) correlates of performance of intelligence tests.
Electroencephalography and Clinical Neurophysiology 84(6), 515-520.
PLATT ML and HUETTEL SA (2008) Risky business: The neuroeconomics of decision making
under uncertainty. Nature Neuroscience 11(4), 398-403.
POLICH J (1989) Frequency, intensity, and duration as determinants of P300 from auditory
stimuli. Journal Clinical Neurophysiology 6(3), 277-286.
POLICH J (2003) Overview of P3a and P3b. In Detection of change: Event-related potential and
fMRI findings (Polich J, Ed.), pp 83-98, Kluwer Academic Press, Boston.
POLICH J (2007) Updating P300: An integrative theory of P3a and P3b. Clinical
Neurophysiology 118(10), 2128-2148.
POLICH J and COREY-BLOOM J (2005) Alzheimer's disease and P300: Review and evaluation
of task and modality. Current Alzheimer Research 2(5), 515-525.
POULSEN K (2011) Second Defense Contractor L-3 ‘Actively Targeted’ With RSA SecurID
Hacks. Wired.
RAGLAND JD, COLEMAN A, GUR RC, GLAHN DC and GUR RE (2000) Sex differences in
brain-behavior relationships between verbal episodic memory and resting regional
cerebral blood flow. Neuropsychologia 38(4), 451-461.
RANDOLPH A, MCCAMPBELL L, MOORE M and MASON S (2005) Controllability of galvanic
skin response. In 11th International Conference on Human–Computer Interaction (HCII),
Las Vegas, NV.
RASKIN DC (1973) Attention and arousal. Electrodermal activity in psychological research, 125-155.
RIEDL R (2012) On the biology of technostress: literature review and research agenda. ACM
SIGMIS Database 44(1), 18-55.
RIEDL R, BANKER RD, BENBASAT I, DAVIS FD, DENNIS AR, DIMOKA A, et al. (2010a) On
the foundations of NeuroIS: Reflections on the Gmunden Retreat 2009. Communications
of the Association for Information Systems 27(Article 15).
RIEDL R, HUBERT M and KENNING P (2010b) Are there neural gender differences in online
trust? An fMRI study on the perceived trustworthiness of ebay offers. MIS Quarterly
34(2), 397-428.
RIEDL R, KINDERMANN H, AUINGER A and JAVOR A (2012) Technostress from a
Neurobiological Perspective: System breakdown increases the stress hormone cortisol
in computer users. Business & Information Systems Engineering 4(2), 61-69.
ROALF D, LOWERY N and TURETSKY BI (2006) Behavioral and physiological findings of
gender differences in global-local visual processing. Brain and Cognition 60(1), 32-42.
RODGERS S and HARRIS MA (2003) Gender and E-Commerce: An exploratory study. Journal
of Advertising Research 43(03), 322-329.
SARINOPOULOS I, GRUPE DW, MACKIEWICZ KL, HERRINGTON JD, LOR M, STEEGE EE,
et al. (2010) Uncertainty during Anticipation Modulates Neural Responses to Aversion in
Human Insula and Amygdala. Cerebral Cortex 20(4), 929-940.
SCHECHTER SE, DHAMIJA R, OZMENT A and FISCHER I (2007) The emperor's new security
indicators. In Security and Privacy, 2007. SP'07. IEEE Symposium on, pp 51-65, IEEE.
SCOTT JE and WALCZAK S (2009) Cognitive engagement with a multimedia ERP training tool:
Assessing computer self-efficacy and technology acceptance. Information &
Management 46(4), 221-232.
SEARLE BJ, BRIGHT JEH and BOCHNER S (1999) Testing the 3-factor model of occupational
stress: The impact of demands, control and social support on a mail sorting task. work &
stress 13(3), 268-279.
SHAREK D, SWOFFORD C and WOGALTER M (2008) Failure to recognize fake internet popup
warning messages. In Proceedings of the Human Factors and Ergonomics Society
Annual Meeting, pp 557-560, Sage Publications.
SHAYWITZ BA, SHAYWITZ SE, PUGH KR, CONSTABLE RT, SKUDLARSKI P, FULBRIGHT
RK, et al. (1995) Sex differences in the functional organization of the brain for language.
SIMMERING MJ, POSEY C and PICCOLI G (2009) Computer Self‐Efficacy and Motivation to
Learn in a Self‐Directed Online Course. Decision Sciences Journal of Innovative
Education 7(1), 99-121.
SOARES JJ and OHMAN A (1993) Backward masking and skin conductance responses after
conditioning to nonfeared but fear-relevant stimuli in fearful subjects. Psychophysiology
30(5), 460-466.
SONNENTAG S and FRESE M (2003) Stress in organizations. Wiley Online Library.
SPECK O, ERNST T, BRAUN J, KOCH C, MILLER E and CHANG L (2000) Gender differences
in the functional organization of the brain for working memory. Neuroreport 11(11), 2581-2585.
SQUIRES NK, SQUIRES KC and HILLYARD SA (1975) Two varieties of long-latency positive
waves evoked by unpredictable auditory stimuli in man. Electroencephalography and
Clinical Neurophysiology 38(4), 387-401.
STEFFENSEN SC, OHRAN AJ, SHIPP DN, HALES K, STOBBS SH and FLEMING DE (2008)
Gender-selective effects of the P300 and N400 components of the visual evoked
potential. Vision Research 48(7), 917-925.
SUNSHINE J, EGELMAN S, ALMUHIMEDI H, ATRI N and CRANOR LF (2009) Crying wolf: An
empirical study of SSL warning effectiveness. In SSYM'09 Proceedings of the 18th
conference on USENIX security symposium, pp 399-416, Montreal, Canada.
TERRELL F and BARRETT RK (1979) Interpersonal trust among college students as a function
of race, sex, and socioeconomic class. Perceptual and Motor Skills 48(3, Pt 2), 1194.
TUPAK SV, DRESLER T, GUHN A, EHLIS A-C, FALLGATTER AJ, PAULI P, et al. (2014)
Implicit emotion regulation in the presence of threat: Neural and autonomic correlates.
NeuroImage 85.
VANCE A, ANDERSON BB, KIRWAN CB and EARGLE D (2014) Using Measures of Risk
Perception to Predict Information Security Behavior: Insights from
Electroencephalography (EEG). Journal of the Association for Information Systems
Forthcoming.
VANCE A, SIPONEN M and PAHNILA S (2012) Motivating IS security compliance: Insights from
habit and protection motivation theory. Information & Management 49(3-4), 190-198.
VEKIRI I and CHRONAKI A (2008) Gender issues in technology use: Perceived social support,
computer self-efficacy and value beliefs, and computer use beyond school. Computers &
Education 51(3), 1392-1404.
VIGILANTE WJ JR and WOGALTER MS (1997) On the prioritization of safety warnings
in product manuals. International Journal of Industrial Ergonomics 20(4), 277-285.
VREDENBURGH A and ZACKOWITZ I (2006) Expectations. Handbook of warnings, 345-353.
WARKENTIN M, WALDEN EA and JOHNSTON AC (2012) Identifying the neural correlates of
protection motivation for secure IT behaviors. In Gmunden Retreat on NeuroIS 2012.
WEBER R (2004) The grim reaper: the curse of e-mail. MIS Quarterly 28(3), 3-14.
WEISS E, SIEDENTOPF C, HOFER A, DEISENHAMMER E, HOPTMAN M, KREMSER C, et al.
(2003) Sex differences in brain activation pattern during a visuospatial cognitive task: a
functional magnetic resonance imaging study in healthy volunteers. Neuroscience letters
344(3), 169-172.
WEST R (2008) The psychology of security. Communications of the ACM 51(4), 34-40.
WHALEN PJ (1998) Fear, vigilance, and ambiguity: Initial neuroimaging studies of the human
amygdala. Current directions in psychological science 7(6), 177-188.
WILLISON R and WARKENTIN M (2013) Beyond deterrence: An expanded view of employee
computer abuse. MIS Quarterly 37(1), 1-20.
WITTE K (1992) Putting the fear back into fear appeals: The extended parallel process model.
Communications Monographs 59(4), 329-349.
WOGALTER MS (2006) Communication-human information processing (C-HIP) model.
Handbook of warnings, 51-61.
WOGALTER MS, KALSHER MJ, FREDERICK LJ and MAGURNO AB (1998) Hazard Level
Perceptions of Warning. International Journal of Cognitive Ergonomics 2(1-2), 123-143.
WRIGHT RT and MARETT K (2010) The influence of experiential and dispositional factors in
phishing: An empirical investigation of the deceived. Journal of Management Information
Systems 27(1), 273-303.
YEE CM and MILLER GA (1987) Affective valence and information processing.
Electroencephalogr Clin Neurophysiol Suppl 40, 300-307.
ZICKUHR K and SMITH A (2012) Digital differences, Pew Internet & American Life Project.
APPENDIX A. THE EFFECT OF COLOR ON PERCEPTIONS OF MALWARE WARNINGS
In our experiment, we controlled for color differences either by comparing red and grayscale
malware warnings or by comparing predominantly red screenshots, including both malware warnings
and common red-themed websites such as Netflix.com. This is because the wider warning
literature has found that “colors red, orange, and yellow tend to connote the greatest levels of
hazard as compared to other colors” (Laughery & Wogalter, 2014), with red conveying the
highest level of hazard of the three (Chapanis, 1994; Wogalter et al., 1998). Additionally, the
color red is commonly used for malware warnings in practice (Akhawe & Felt, 2013).
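As a minimal illustration of how grayscale counterparts of red warning stimuli can be prepared so that color is manipulated independently of content, the following sketch converts a screenshot with the Pillow library. The file names are hypothetical, and this is not the exact procedure used to create our stimuli.

```python
# Illustrative sketch: create a grayscale counterpart of a red warning
# screenshot so color can be varied while content is held constant.
# File names are hypothetical.
from PIL import Image

warning = Image.open("warning_red.png")
warning.convert("L").convert("RGB").save("warning_grayscale.png")
```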
In a series of studies in cognitive neuroscience, Elliot and coauthors examined the effect
of the color red on individuals who perceive it in various contexts, including in achievement
contexts (cf. Elliot et al., 2007; Elliot et al., 2009; Elliot et al., 2010). In these experiments, they
found that red has a deleterious effect on intellectual performance (Elliot et al., 2007) and evokes
avoidance behavior (Elliot et al., 2009). These studies used as controls the colors black and
green (green being the chromatic opposite of red). Participants’ reactions were measured using
EEG methodologies.
However, contrary to these previous findings, our results showed no main effect of color and no
interactions between color and gender or between color and screen type at the Cz electrode site.
At the Pz electrode site, an ANOVA revealed a main effect of screen type (F = 4.86, p < .05) and
a main effect of color (F = 4.92, p < .05), but no color-by-gender or color-by-screen-type
interactions (p > .05). Taken
together, these results indicate that the stimulus color did not differentially impact cognitive
processing as indexed by the P300 amplitude. These results run counter to the conventional
wisdom of warning design practice.