Journal of End User Computing, 15(2), 1-22, Apr-June 2003
Understanding the Hidden
Dissatisfaction of Users Toward
End-User Computing
Nancy Shaw, George Mason University, USA
Joo-Eng Lee-Partidge, Central Connecticut State University, USA
James S.K. Ang, National University of Singapore, Singapore
ABSTRACT
The objective of this research is to examine satisfied and dissatisfied end-users in an organization
to determine if they hold different technological frames of reference towards end-user computing
(EUC). This research examines the effectiveness of the computer systems at the organization,
while at the same time measuring the level of end-user satisfaction with the EUC environment.
Grounded theory techniques for qualitative analysis of interviews were used to assess the
technological frames of reference of selected highly satisfied and highly dissatisfied users.
While analysis of the satisfaction surveys alone indicated that the user population was generally
satisfied with their EUC environment, follow-up interviews and service quality gap analysis
highlighted several individual support areas that required remedial action. In addition, satisfied
and dissatisfied users held different views or technological frames of reference towards the
technology they used. Their frames of reference affected their expectations of the technology,
their interactions with the MIS support staff, and their utilization of the technology on a day-to-day basis.
Keywords: end-user computing, cognitive structures, end-user support, user satisfaction
INTRODUCTION
This research examines the different
views and perspectives of individuals in an
organization toward end-user computing
(EUC) and EUC support, and how those
views can affect end-user satisfaction.
End-user satisfaction has long been used
as an important surrogate measure of information system success (Zmud, 1979;
Doll and Torkzadeh, 1988; DeLone and
McLean, 1992; Torkzadeh and Doll, 1993;
Buyukkurt and Vass, 1993; Henry and
Stone, 1994; Guimaraes and Igbaria, 1994;
Mirani and King, 1994; Seddon, 1997; Blili
et al., 1998; Foong, 1999; Mahmood et al.,
2000; Aladwani, 2002; Shaw et al., 2002).
End-user satisfaction is a perceptual or subjective measure of system success, serving as a substitute for objective determinants of information systems effectiveness
(Ives et al., 1983).
We are interested in how an
individual’s view or perspective can affect
end-user satisfaction. In social cognitive
research, views and perspectives, also
known as frames of reference, have been
used to explain an individual’s mental processes. A few studies in the IS area have
been conducted to understand the views
or attitudes individuals hold towards technology (Bostrom and Heinen, 1977; Dagwell and Weber, 1983; Noble, 1986; Pinch and Bijker, 1987; Kumar and Bjorn-Anderson, 1990; Jawahar and Elango, 2001). The term “technological frame of
reference” was introduced by Orlikowski
and Gash (1994) to describe the underlying assumptions, expectations, and knowledge that people have about technology.
In the current study, we extend the
idea of technological frame of reference
to assess the views users hold towards
EUC. In particular, we are interested in
determining if satisfied and dissatisfied users hold different views of the technology,
and ultimately if those different views influence their satisfaction with that technology. Specifically, we examine the effectiveness of end-user support in an organization, the satisfaction of end-users with
that support and the technological frames
of reference of those users. By concentrating on the differences between satisfied and dissatisfied end-users, we hope to
deepen our understanding of end-user satisfaction and dissatisfaction so as to iden-
tify contributory factors that lead to dissatisfaction.
We use a combination of quantitative
and qualitative analysis in our case study.
An instrument measuring end-user satisfaction was used to assess the satisfaction
of individual users with the overall EUC
environment, and service quality gap analysis was used to measure the effectiveness
of the support organization in the organization. Grounded theory techniques (Glaser
and Strauss, 1967) were used in the qualitative analysis of interviews to assess the
frames of reference of selected satisfied
and dissatisfied users.
CONCEPTUAL FRAMEWORK
AND RESEARCH MODEL
The objective of this research is to
examine satisfied and dissatisfied end-users in an organization to determine if they
hold different technological frames of reference towards end-user computing (EUC).
Can their different frames of reference be
used to explain their different satisfaction
levels? What is the relationship between
satisfaction with end-user support and satisfaction with the overall end-user computing environment? The research model is
presented in Figure 1.
Measuring EUC Satisfaction
Several different tools have been developed to assess end-user computing satisfaction. Two validated instruments commonly used to measure satisfaction with
end-user computing are the Doll and
Torkzadeh (1988) instrument and the Ives et al. (1983) instrument. These instruments
can be used in one of two ways: as a
straightforward measurement of the level
of satisfaction within an organization, or as
Figure 1. Research Model (satisfaction with end-user support and the end-user's technological frames of reference both feed into overall end-user satisfaction)
a tool to identify factors or determinants
that can affect satisfaction. This study uses
a variation of the Ives et al. (1983) instrument developed by Mirani and King (1994)
that was specifically adapted for the EUC
context (see Appendix A).
A review of the EUC satisfaction literature (shown in Table 1) surfaced several factors that are shown to significantly
influence EUC satisfaction.
The results from end-user satisfaction studies are quite variable, with some
studies giving support to the influence of
one factor while others find little or no support for the same variable. In a meta-analysis of 45 end-user satisfaction studies,
Mahmood et al. (2000) separate factors that
affect satisfaction into three general categories: perceived benefits and convenience, user background and involvement,
and organizational attitude and support. As
listed in Table 1, end-user support was
shown to have significantly affected EUC
satisfaction in eight studies. As we are interested in examining the existing partnership between the IS department and end-users in an organization, and given the sig-
nificance of end-user support as highlighted
by previous studies, we decided to concentrate on the effect of end-user support on
EUC satisfaction or dissatisfaction.
Measuring the Effectiveness of
EUC Support
As indicated in the section above,
prior research has shown that end-user
support contributes to end-user satisfaction
(Buyukkurt and Vass, 1993; Lederer and
Spencer, 1988; Rainer and Carr, 1992; Bowman et al., 1993; Brancheau and Wetherbe,
1988; Trauth and Cole, 1992; Mirani and
King, 1994; Shaw et al., 2002). One measurement of support is service quality, which
measures how well the service level delivered matches customer expectations
(Lewis and Booms, 1983). Service quality
is more difficult to measure than product
quality, as it is a function of the recipient’s
perception of quality. For example, one
end-user may expect installation of a new
software package to take an hour and be
very happy that it takes 45 minutes; another may be unhappy when expecting it
Table 1. Factors that influence EUC satisfaction

End-user participation in design: Doll and Torkzadeh (1988), Montazemi (1988), Mirani and King (1994), Amoako-Gyampah and White (1993), McKeen et al. (1994), Lawrence and Low (1993), Guimaraes et al. (1992), Hartwick and Barki (1994), Baroudi et al. (1986), Yoon and Guimaraes (1995), McKeen and Guimaraes (1997), Park et al. (1993-1994), Saleem (1996), Choe (1998)

End-user computing self-efficacy (i.e., the belief that one is able to master a particular behavior): Henry and Stone (1994), Montazemi (1988)

Technical support: Buyukkurt and Vass (1993), Lederer and Spencer (1988), Rainer and Carr (1992), Bowman et al. (1993), Brancheau and Wetherbe (1988), Trauth and Cole (1992), Mirani and King (1994), Shaw et al. (2002)

Documentation: Torkzadeh and Doll (1993)

Management support: Henry and Stone (1994), Guimaraes et al. (1992), Lawrence and Low (1993), Igbaria et al. (1995), Yoon and Guimaraes (1995)

Ease of system use: Henry and Stone (1994), Davis (1989), Igbaria et al. (1995), Davis et al. (1989)

Previous computing experience: Henry and Stone (1994), Lehman and Murthy (1989), Lawrence and Low (1993), Palvia (1996), Montazemi et al. (1996), Ryker and Nath (1995), Yoon and Guimaraes (1995), Thompson et al. (1994), Venkatesh (1999), Chan and Storey (1996), Igbaria et al. (1989), Blili et al. (1998)

End-user computing attitudes: Henry and Stone (1994), Lee et al. (1995), Hartwick and Barki (1994), Davis et al. (1989), Thompson et al. (1994), Satzinger and Olfman (1995), Aladwani (2002), Shaw et al. (2002)

Outcome expectancy: Henry and Stone (1994)

Existence of hot line; existence of information center: Bergeron and Berube (1988)

Number of systems analysts; level of requirements analysis; proportion of online applications; degree of decentralization; standards and guidelines: Montazemi (1988)

Data provision support; purchasing-related support; variety of software supported; post-development support; training on backup and security; new software upgrades; new hardware upgrades; low percentage of H/S downtime: Mirani and King (1994)
to take 15 minutes but having to wait 30
minutes for the activity to be completed.
Objectively, the latter was more productive, but in the former the end-user was
more satisfied.
Service quality can be measured by
a comparison of user expectations (or
needs) with the perceived performance (or
capabilities) of the department or unit providing the service. The difference between
these two measurements is called the service quality gap (Parasuraman et al., 1985).
Parasuraman’s work resulted in a 45-item
instrument, SERVQUAL, for assessing
customer expectations and perceptions of
the quality of service in retailing and service organizations. Service quality has been
the most researched area of services marketing (Fisk et al., 1993).
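In its simplest form, the service quality gap for an individual item can be written as a difference score. The following is a minimal formalization consistent with the definitions above; the symbols are ours, not Parasuraman et al.'s:

$$G_i = P_i - E_i$$

where, for service item $i$, $P_i$ denotes the customer's perception of delivered performance, $E_i$ denotes the customer's expectation, and $G_i < 0$ indicates that delivered service falls short of expectations.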
Service quality measurements have
been used in IS research as a measure of
IS success. Recognizing that a major component of the product an IS department
delivers has a service dimension, IS researchers have recently begun to look for
ways to assess the quality of that service
(Shaw et al., 2002). The gap analysis
method was first used by Kim (1990) to
measure the quality of service of an IS
department. Pitt et al. (1995) used a 22-item version of the SERVQUAL instrument
developed by Parasuraman et al. (1985),
to test the instrument’s usefulness in the
MIS environment. They assessed several
aspects of the instrument’s validity, including content validity, reliability, convergent
validity, nomological validity and discriminant validity. They concluded that
SERVQUAL could be used with confidence in the MIS environment. They also
reported that the results of a service quality assessment were very useful not only in
assessing current levels of service quality,
but also as a diagnostic tool for determin-
ing actions for raising service quality (Pitt
et al., 1995).
The instrument itself has been the
subject of considerable debate (Brown et
al., 1993; Parasuraman et al., 1993; Fisk et
al., 1993; Van Dyke et al., 1997; Pitt et al.,
1997). The focus of the debate concerns
calculating differences between two possibly different constructs: expectations and
perceptions of performance. To counteract the concerns surrounding the validity
of the instrument in an IS context, Pitt et
al. (1997) demonstrated that the service
quality perception-expectation subtraction
is rigorously grounded. See Kohlmeyer and
Blanton (2000) for a complete discussion
of the SERVQUAL debate. Researchers
generally agree that the instrument is a good
predictor of overall service quality, and is
applicable for use in the IS context (Fisk et
al., 1993; Kettinger and Lee, 1997; Pitt et
al., 1997). Remenyi and Money (1994)
developed a service-quality instrument specifically for the EUC environment to establish the effectiveness of the computer
service and to identify key problem areas
with EUC. This instrument is used in the
current study (see Appendix B).
Assessing End-User’s
Frame of Reference
While measuring levels of end-user
satisfaction in an organization is relatively
straightforward and has been heavily documented, measuring or assessing views and
perspectives of individuals is not so straightforward. This field of research originated
in the social sciences domain. However,
over the past few years, these concepts
have been applied to several other areas
of research, and more recently to those
areas related to the management and use
of computers.
The cognitive sciences suggest that
the world as it is experienced does not consist of events that are meaningful in themselves. Cognitions, interpretations, or ways
of understanding events are all guided by
what happened in the past (Schutz, 1970).
When faced with an unknown situation or
object (artifact), we automatically create
our own interpretation of what that artifact
is. Which particular past experiences are
called up and how those experiences are
imposed onto a structure are what determine our individual cognitive structures
(Gioia, 1986). Individual cognitive structures, or schemas, allow individuals to draw
on knowledge and past experiences to help
them make sense of information. They influence perception and memory, and can
be both facilitating and constraining.
Schemas can change over time; existing
schemas can also inhibit the learning of new
schemas (Markus and Zajonc, 1985).
A minimal amount of research has
been conducted on the social cognitive perspectives that individuals hold towards technology. Bostrom and Heinen (1977) first
introduced the concept of frame of reference when they suggested that some of
the social problems encountered during the
implementation of information systems
were due to the frames of reference held
by the systems designers. Later work by
Dagwell and Weber (1983), Kumar and Bjorn-Anderson (1990), and Boland (1978,
1979) expanded on the earlier study by
examining the influence of the designer’s
values and conceptual framework on the
resultant systems. This earlier work became the basis for a group of studies investigating the social aspects of information technology that considered the perceptions and values of both the designers and
users (Hirschheim and Klein, 1989; Kling
and Iacono, 1984; Markus, 1984). While
these studies proposed the idea that indi-
viduals have assumptions and expectations
regarding technology, Orlikowski and Gash
(1994) expanded on this concept to emphasize the social nature of technological
frames, their content, and the implications
of these frames on the development, implementation, and use of that technology.
Technological frame of reference
was introduced by Orlikowski and Gash
(1994) in a study that proposed a systematic approach for examining the underlying
assumptions, expectations, and knowledge
that people have about technology. They
argue that an understanding of an
individual’s interpretation of a technology
is critical to understanding their interaction
with it. Of particular significance in
Orlikowski and Gash’s work is the discussion of the contextual dimension of frames.
Members of a social group as a whole will
come to have an understanding of particular technological artifacts, including not only
knowledge about the particular technology,
but also a local understanding of specific
uses in a given setting. Earlier work by
Noble (1986), and Pinch and Bijker (1987)
had shown that technological frames could
strongly influence the choices made regarding the design and use of technology, including adoption rates (Jurison, 2000).
In our paper, we are interested in assessing the technological frame of reference that users hold towards end-user computing. In particular, we are interested in
determining if satisfied and dissatisfied endusers hold different views of the technology, and ultimately if these different views
influence their satisfaction with that technology.
ORGANIZATIONAL
BACKGROUND
Otis Elevator, a multi-national corporation headquartered in Connecticut, par-
ticipated in the study. This research was
conducted at their Pacific Area of Operations Headquarters (PAO-HQ) in
Singapore. Otis Elevator, a wholly owned
subsidiary of United Technologies Corporation, was founded in 1853. It is currently
the world’s largest manufacturer of elevators, moving walkways and other horizontal transportation systems. Their products
are offered in more than 200 countries
worldwide, with over 77% of sales occurring outside of the U.S. Manufacturing
facilities are located in the Americas, Europe and Asia. The operations are regionalized into seven areas, with the Asia Pacific Area covering China, Korea, all Southeast Asian countries, north Asian countries,
India, Australia, and New Zealand (Otis
Fact Sheet, 2001).
At the time of the study, the PAO-HQ offices were located on two floors in a modern high-rise office building in downtown Singapore (the office has since been relocated to Hong Kong). A separate marketing office responsible for local sales was located
several miles away. PAO-HQ was responsible for sales, service and manufacturing
operations spread out over 18 countries in
their region. The headquarters office was
divided into the traditional functional areas
of Marketing, Operations, Quality Control,
Finance, Engineering, Training, and HR.
The MIS group reported to the Finance director. PAO-HQ employed approximately
100 people, 85 with a personal computer
on the LAN. Internet connections were
available, but were rarely used. The PCs
ran either Windows 3.1 or Windows 95 with standard Windows office applications. The majority of the users used only MS Office applications (Excel, PowerPoint, Word) and
e-mail. Customized software was in use
for specialized functions (e.g., an OTIS-wide financial reporting system, an OTIS-wide internal management reporting sys-
tem, and a PAO-HQ developed engineering support system), with the relevant employees using those applications.
The PAO-HQ MIS group comprised
five employees, all co-located with the other
HQ personnel in the same office building.
Two employees were directly responsible
for LAN administration and end-user support at PAO-HQ. The remaining employees supported infrastructure development,
maintenance and training for the 18 country locations. Requests for support came in
over the telephone or e-mail, and were
logged daily. The majority of support calls
were resolved by the second day either in
person or over the phone by the MIS support group. A small percentage of calls
were escalated to the application developer
(either Microsoft or an OTIS developer in
Connecticut). After-hours and weekend
support was provided through pager calls
to the support person on duty.
RESEARCH METHODOLOGY
AND RESULTS
Data gathering for our study was carried out in two phases. The first phase
utilized a survey instrument, while the second phase was an in-depth case study using grounded theory techniques. Data gathering consisted of unstructured and semi-structured interviewing, documentation review, and observation. This triangulation
across various data collection techniques
is beneficial because it provides multiple
perspectives and yields stronger substantiation of constructs (Orlikowski, 1993).
Phase One: The objective of this
phase was to measure the effectiveness
of the computer systems at Otis, while at
the same time measuring the level of end-user satisfaction with that system. As mentioned earlier, the two most common instruments used to measure satisfaction with
end-user computing are the Doll and
Torkzadeh (1988) instrument and the Ives,
Olson and Baroudi (1983) instrument. The
Doll and Torkzadeh instrument was developed to measure “computing satisfaction”
of an end-user with a specific application.
In our research, the intent was to measure
end-users’ overall satisfaction with end-user computing, not with a specific application. Therefore, we used a more general measure, the short form of the User
Information Satisfaction (UIS) questionnaire originally developed by Ives et al.
(1983) and later modified by Mirani and
King (1994) for the EUC context. A service-quality instrument developed by
Remenyi and Money (1994) was used to
establish the effectiveness of the computer
service and to identify key problem areas
with EUC. Several additional questions
were included to gather information on the
user’s self-rated computing expertise, prior
computing experience and training, and
current computing usage patterns.
Phase Two: Fourteen respondents
comprising a mix of satisfied and dissatisfied respondents (identified during Phase
One) were interviewed. Techniques for
qualitative analysis (Miles and Huberman,
1994) and grounded theory (Glaser and
Srauss, 1967; Martin and Turner, 1986;
Strauss, 1987) were employed in the development of the descriptive categorizations
used for the technological frames of reference. The software package NUD*IST
was used to assist in the content analysis
of the interviews.
Analysis of Data: Phase One
The site consisted of approximately 85 end-users running a variety of IBM-compatible PCs on a Novell NetWare LAN.
The majority of the end-users utilized
Microsoft applications in a Windows envi-
ronment. Fifty-seven survey instruments
were returned, yielding a response rate of
67%. The purpose of the survey was to
develop a general user profile of the end-users, determine the support needs of these
users, and rate the performance of the IS
department in meeting these needs. Most
respondents had between 5-10 years of experience with personal computers, and 2-6
years of experience in a networked environment. Most respondents used their computers 3-6 hours a day, and rated their general level of PC expertise in the intermediate to advanced range. The respondents
were also asked to rate their level of expertise for the applications they use at Otis.
Applications with the highest number of
users (word processing, spreadsheets, electronic mail, and presentation software) also
had the highest mean levels of expertise.
In contrast, the applications that were not
used by many people (databases, Internet
browser, electronic fax and flowcharting)
showed lower mean levels of expertise.
The general user profile is summarized in
Table 2.
The respondents were asked to evaluate 22 separate support items (see Appendix B). The items were first evaluated on
a five-point Likert scale in terms of that
item’s importance to the user in the performance of his or her job. These same items
were then evaluated by the user according
to the performance of the IS department
when providing those items. The difference between the performance scores and
the importance score indicates the effectiveness of the IS department in performing the various functions. A zero gap would
indicate that there is an exact match between importance and performance. A
positive gap indicates that the IS department is committing more resources than
are required, whereas a negative gap indicates that the performance is less than the
importance; that is, the IS department is underperforming. Since the gap is determined by subtracting the importance score from the performance score, a positive gap implies user satisfaction with that item, while a negative gap implies user dissatisfaction with that item.

Table 2. General User Profile (Otis Elevator Pte Ltd)

Number of respondents: 57
Gender (if supplied): Male, 19; Female, 5
Job function (if supplied): Manager, 11; End-user, 15
Years of PC experience (mean): 8.24
Years of PC network experience (mean): 4.43
Hours per day on PC (mean): 5.2
Self-efficacy rating (general PC expertise): Beginner, 1.9%; Novice, 7.4%; Intermediate, 38.9%; Advanced, 48.1%; Expert, 3.7%

Analysis of this dataset surfaced several significant issues. A rudimentary ranking of the data by order of importance indicated which specific support areas were most important to the users and which areas were considered less important. Service quality gap analysis indicated which support areas were being satisfactorily or unsatisfactorily delivered, and correlation of the service quality gap with satisfaction indicated which support factors affected end-user satisfaction. Each of these issues alone was important, but when combined, they provided a much richer picture of the support environment at the organization.
A basic analysis of the importance
and performance scores was performed.
The mean and standard deviations for each
of the 22 attributes were calculated. The
mean perceptual gap score and standard
deviation were calculated for each item.
The gap was calculated by subtracting the
importance score from the performance
score. The correlations between the gap
scores and the overall satisfaction scores
were then determined. The items as
shown in Table 3 are listed in rank order of
importance.
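To make this computation concrete, the short Python sketch below reproduces the same steps on invented data (the array shapes and item count mirror the survey described above, but the scores are hypothetical, not the Otis responses):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: rows are respondents, columns are the 22 support items.
# Importance and performance are rated on the five-point scales described above.
rng = np.random.default_rng(0)
n_respondents, n_items = 57, 22
importance = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)
performance = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)
overall_satisfaction = rng.uniform(1, 7, size=n_respondents)  # 7-point UIS score

# Perceptual gap: performance minus importance. Negative values indicate
# that the IS department underperforms relative to the item's importance.
gap = performance - importance

# Item-level summary statistics, as reported in Table 3.
mean_gap = gap.mean(axis=0)
sd_gap = gap.std(axis=0, ddof=1)

# Correlation of each item's gap scores with overall satisfaction.
for item in range(n_items):
    r, p = pearsonr(gap[:, item], overall_satisfaction)
    print(f"Item {item + 1:2d}: gap {mean_gap[item]:+.3f} (SD {sd_gap[item]:.3f}), "
          f"r = {r:+.3f}, p = {p:.3f}")
```

Sorting the items by their mean importance scores would then yield the rank ordering used in Table 3.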
Only five support items scored a positive gap, indicating satisfaction with that
item. The item with the highest positive
gap was “degree of personal control” (gap
of .407). The other four items that indicated user satisfaction were “new hardware upgrades” (.268), “new software
upgrades” (.232), “standardization of hardware” (.132), and “participation in planning
system requirements” (.075). The remain-
Table 3. Service Quality Gap Analysis (listed by rank order of importance score)

#   Attribute                                                  Imp. Rk  Imp. Mean (SD)  Perf. Rk  Perf. Mean (SD)  Gap (SD)        Gap Corr. w/ Satis.
10  Data security and privacy                                     1     4.25 (.815)        7      3.855 (.678)     -.400 (1.116)     .1333
13  Fast response time from IS staff to remedy problems           2     4.20 (.911)       14      3.545 (.919)     -.611 (1.235)     .2771**
6   High degree of technical competence of IS staff               3     4.10 (.867)        6      3.857 (.554)     -.250 (1.014)     .1727
11  System's response time                                        4     4.07 (.858)        9      3.764 (.769)     -.296 (1.002)     .2167
5   Low percentage of hardware and software down time             5     4.05 (1.017)       2      3.909 (.727)     -.148 (1.035)    -.0934
1   Ease of access for users to computing facilities              6     4.03 (.962)        1      3.964 (.571)     -.073 (.959)     -.0161
19  Ability of the system to improve personal productivity        7     4.01 (.782)        8      3.836 (.688)     -.185 (1.047)    -.1489
7   User confidence in system                                     8     4.01 (.924)        3      3.893 (.412)     -.125 (1.046)     .0588
16  Positive attitude of IS staff                                 9     3.89 (.809)       11      3.745 (.645)     -.130 (1.133)     .2720**
12  Extent of user training                                      10     3.81 (.779)       18      3.182 (.819)     -.604 (1.115)    -.0942
9   Systems responsiveness to changing users' needs              11     3.76 (.769)       13      3.585 (.865)     -.189 (1.241)    -.0610
17  Users' understanding of the system                           12     3.65 (.844)       12      3.618 (.652)     -.037 (.990)      .2017
3   New software upgrades                                        13     3.64 (.841)        5      3.875 (.689)      .232 (1.027)     .2457*
2   New hardware upgrades                                        14     3.48 (.831)       10      3.750 (.815)      .268 (1.152)     .2346*
8   Degree of personal control users have over their systems    15     3.44 (.933)        4      3.889 (.697)      .407 (1.125)     .2979**
15  Flexibility of the systems to produce prof. reports          16     3.41 (1.013)      16      3.364 (1.238)    -.074 (1.226)    -.1677
…

*  implies correlation is significant at 10% level
** implies correlation is significant at 5% level
ing 17 items had a negative gap, indicating underperformance of the IS department,
or dissatisfaction. The items with the largest negative gap (indicating highest level
of dissatisfaction) were “fast response time
from IS staff” (-.611), “extent of user training” (-.604), and “help with database or
model development” (-.423).
The service quality gaps for three support items (fast response time from IS staff,
positive attitude of IS staff, and degree of
personal control) were positively correlated
to satisfaction (r = .277, .272, and .298 respectively, significant at 5% level). These
support items have the strongest influence
on satisfaction in this environment.
The results of the Mirani and King
EUC satisfaction portion of the survey
showed that as a whole, there was a high
level of EUC satisfaction at Otis Elevator
(7-point Likert scale, Mean 5.32, SD 1.07).
With a score of 4 indicating neither satisfied nor dissatisfied, we have interpreted
scores below 4 to indicate varying degrees
of dissatisfaction and scores above 4 to indicate varying degrees of satisfaction. One
individual scored below 3, indicating a high
degree of dissatisfaction. Six individuals
scored between 3 and 4, indicating a lesser
degree of dissatisfaction; 20 individuals
scored between 4 and 5, indicating a lesser
degree of satisfaction; 22 individuals scored
between 5 and 6, indicating a higher degree of satisfaction; and seven individuals
scored above 6, indicating the highest degree of satisfaction. One respondent did
not answer this portion of the survey.
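As an illustration of this binning, the following minimal sketch applies the same cut-points to a handful of invented scores (the values are hypothetical, not the actual Otis responses):

```python
import numpy as np

# Hypothetical overall-satisfaction scores on the 7-point scale (4 = neutral).
scores = np.array([2.8, 3.4, 3.9, 4.2, 4.9, 5.1, 5.8, 6.3])

# Bin edges follow the interpretation in the text: below 3 indicates a high
# degree of dissatisfaction, 3-4 a lesser degree of dissatisfaction, 4-5 a
# lesser degree of satisfaction, 5-6 a higher degree of satisfaction, and
# above 6 the highest degree of satisfaction.
edges = [1, 3, 4, 5, 6, 7]
labels = ["high dissatisfaction", "lesser dissatisfaction",
          "lesser satisfaction", "higher satisfaction", "highest satisfaction"]

counts, _ = np.histogram(scores, bins=edges)
for label, count in zip(labels, counts):
    print(f"{label}: {count}")
```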
Analysis of Data: Phase Two
While the overall score from the user
information satisfaction portion of the survey revealed a generally high level of satisfaction, gap analysis of the 22 support
items clearly showed specific support ar-
eas where there was user dissatisfaction.
The interviews therefore concentrated on
those specific areas, and the views of the
end-users were gathered to assist in the
assessment of their technological frames
of reference. All seven respondents that
scored below a four and seven randomly
selected end-users that scored above a four
were interviewed. Analysis of the interviews resulted in the development of the
technological frame of reference of the
satisfied and dissatisfied user.
The principal author conducted the
interviews. The interviews were tape recorded, transcribed and systematically examined for patterns in the frames of a satisfied and a dissatisfied user. The initial
content analysis occurred through open coding (Corbin and Strauss, 1990) of the
interview transcripts. A research team
comprising four research colleagues performed analysis of the interviews. An initial set of patterns (categories) emerged
from the analysis of the coded transcripts.
The transcripts were then physically formatted to conform to the standards of the
qualitative analysis software package,
NUD*IST. In NUD*IST, the initial categories that had emerged from the analysis were formulated into a hierarchical tree
structure, and the transcripts were closed-coded and documented. The search function of the software was used to interrogate the transcripts in order to verify and
substantiate the initial categories that were
discovered during the analysis of the open coding. A number of additional categories
began to emerge from this analysis, as further questions were translated into queries,
and a second set of categories began to
emerge that addressed additional aspects
of the technology that had not been initially
analyzed. Category frameworks were then
iteratively developed, applied to the data,
and revised. The tree structure continued
to grow and was refined further, as additional nodes were included and nodes that
did not indicate any theoretical significance
were deleted. In addition, several nodes
were combined and merged, as the categories themselves began to crystallize and
became clearer.
The results revealed three basic categories that could be used to group satisfied and dissatisfied users: type of learner,
their view of the role of the PC, and the
complexity level of the applications they
used. Using these categories, Table 4 outlines the frames of the two groups: satisfied and dissatisfied users.
In general, the satisfied user is a self-directed learner who continually seeks out
additional learning opportunities. He views
the PC as a tool that is necessary only for
the completion of his work and utilizes complex applications in his work. Conversely,
the dissatisfied user does not actively seek
out additional learning opportunities, and
generally will only take IT-related courses
if they are required for the job. This user
views the PC as a tool that could enhance
job performance, and could contribute to
their productivity. Generally, they use less complex types of applications.
These different frames were not initially easy to explain. In particular, the finding that a satisfied user would use more complex applications and would view the PC as a “task completion” tool as opposed to a “task enhancement” tool contradicted our original assumptions. However, since these users viewed the PC as a
tool to “get the job done” as opposed to
“getting the job done better”, they expected less of the PC, and therefore were
more easily satisfied. Research on users’
expectations of technology finds that users
who have realistic expectations of the benefits of technology tend to be more satisfied (Compeau et al., 1999). In addition,
since they used applications with a higher
complexity level, their potential to initiate
complex technical queries was increased.
Their interactions with the MIS staff were
at a “higher” technical level than those with
less complex applications. From the analysis of the MIS staff interviews as well as
user interviews, we had concluded that the
MIS personnel had little patience with “routine” MIS queries. In contrast, when
the interactions with the MIS support staff
dealt with technical issues, the MIS staff
would respond more readily, and with a
Table 4. Satisfied vs. Dissatisfied Users

                              Satisfied User     Dissatisfied User
Type of learner               self-directed      non-self-directed
Role of PC                    task completer     task enhancer
Complexity of application     more complex       less complex
more positive attitude. The satisfied users
had a self-directed learning style that facilitated the acquisition of more complex
applications. Further, as a result of using
more complex applications, their interactions with the MIS staff were frequent and
positive, and this resulted in a more satisfied user.
The technological frame of reference
of the dissatisfied user is explainable in a
similar fashion. Since these users were
not self-directed and took only courses that
were required for their job, they did not
acquire more complex applications. Since
their interactions with the MIS staff were
at a more “routine” level, these interactions
were not positive. Several users who
shared this frame of reference expressed a
high level of dissatisfaction with the “superior” attitude of the MIS staff when routine queries were posed to them. In addition, these users expected more from the
PC, viewing the PC as a “task enhancing”
tool. They expected the PC to greatly contribute to the productivity of their job. Since
these users expected more of the PC, they
were more easily dissatisfied.
DISCUSSION
This research study gathered service
quality data on 22 specific support factors,
and overall end-user satisfaction with the
EUC environment at an organization. In
addition, interviews of satisfied and dissatisfied end-users were assessed to develop
the technological frame of reference of
these two user groups. The relationship
among these three items was explored.
While the EUC satisfaction portion
of the survey alone indicated that the user
population as a whole was satisfied with
their EUC environment, service-quality gap
analysis and follow-up interviews surfaced
several areas of dissatisfaction. Only five
out of 22 support items had a positive service-quality gap (indicating satisfaction with
that item). Hidden areas of dissatisfaction
were detected by performing the
SERVQUAL analysis. The three support
factors that showed the largest negative
service-quality gap between importance
and performance (indicating dissatisfaction
with that item) were “fast response time
from IS staff”, “extent of user training”,
and “help with database or model development”. Although it is important to identify
specific support areas that have large negative gaps, identifying which specific support factors influence overall satisfaction
provides a richer picture. For example, the
support factor “fast response time from IS
staff” has one of the largest negative service-quality gaps, indicating a high level of
dissatisfaction with that particular support
item. In addition, the gap for this item highly
correlates with overall user satisfaction.
Conversely, while “extent of user training”
also has a large negative service-quality
gap, the gap does not correlate with overall user satisfaction.
Incorporating user satisfaction into service-quality analysis adds an
important piece to the overall study of diagnosing the issues confronting MIS support teams. It is noted that specific support factors that were viewed as having
the largest gap between importance and
performance were not necessarily the
same as those with the most influence on
satisfaction. This study shows that understanding the relationship between support
factors and user satisfaction is subtle, has
multiple aspects and requires observation
from a number of different viewpoints for
complete understanding. Clearly neither
listing the support factors by importance
nor performance alone shows the full significance of these items. Service levels
determined by the difference between im-
portance and performance distinguish issues
of high importance and low performance
from those where both are high or both are
low. The practitioner could target those
items for attention rather than dilute attention on those of high importance where
service is already high. Similarly, the practitioner would be able to postpone attention
for items with low levels of service that
are not viewed as highly important. In particular, management at Otis would be better served in targeting additional support to
“fast response time from IS staff” rather
than “extent of user training” when attempting to increase overall user satisfaction.
Grounded theory techniques were instrumental in creating the technological
frames of reference for the satisfied and
dissatisfied users. While the dissatisfied
user appeared to have higher expectations
regarding the contribution the technology
could make to their job performance, they
were not interested in obtaining additional
training or acquiring more complex applications for their PC. On the other hand,
the satisfied users were very interested in
obtaining additional training and more complex applications, while holding lower expectation levels concerning the value the
technology could bring to their job performance. The two user groups held different views or perspectives towards the technology they used. This influenced their
expectations of the technology, affected
their interactions with the MIS support
team, and changed their utilization of the
technology on a day-to-day basis. The technological frame of reference of the users
did indeed influence their ultimate satisfaction with their overall end-user computing
environment.
One of the results of administering the end-user satisfaction survey itself was an increase in IT usage. Participation in the survey process increased the awareness levels of
some of the users regarding the capabilities and functionality of the technology, as
well as the available support functions of
the organization. This caused the users to
use the PC more often, and for a larger
variety of functions. This effect was positive for Otis, and welcomed by the MIS
staff.
SUMMARY AND CONCLUSIONS
Relying on user satisfaction surveys
alone will not provide a complete picture
of the end-user environment in an organization. It is necessary to look beyond the
end-user satisfaction surveys to tease out
hidden areas of dissatisfaction. Service-quality gap analysis can be used to identify
specific support areas that need attention,
as well as identify which particular support
areas influence overall end-user satisfaction. In addition, practitioners should be
aware that the end-user population is not a
homogeneous population that can be served
with a one-size-fits-all support strategy.
This confirms earlier research as noted in
Powell and Moore (2002) and Jurison
(2000).
For researchers, this study extends
previous work in two areas: end user satisfaction and technological frame of reference. This study demonstrates the utility
of the service-quality measure as a tool
adding deeper understanding of user views
and needs. The inclusion of a user satisfaction measure shows that service gaps
alone only partially account for user views
and attitudes.
The identification of technological
frames of reference and their effect on enduser satisfaction is crucial to a deeper un-
derstanding of satisfaction. This socio-cognitive thread has not been fully explored as
it applies to technology, or MIS in general.
In addition, the findings of different
relationships between support factors and
user satisfaction in different studies (see
Table 1) suggest that the relationships are
contextual in nature and not constant for
all situations. Researchers building theory
in this area may be better served by examining additional environmental variables that
could affect these relationships.
Limitations of the Study
This research effort occurred at one
research site. The empirical data collected
reflects the specific organizational context
and events at Otis Elevator in Singapore.
The technological frames that were elicited during this research were salient to the
respondents under study, in an environment
where the support function had recently
undergone some organizational realignment,
and where new technology was being introduced on a continual basis. Two support personnel had recently been assigned
as full-time support to the PAO-HQ staff,
with the other personnel supporting the regions, replacing an earlier shared strategy
where all personnel supported both PAO-HQ and the regions. It is possible that
different frames of reference could be elicited from respondents that operate in a
more stable environment.
The effects of gender and culture were
not explored in this study. Both of these
factors could contribute to the formation
of technological frames of reference. While
the satisfied and dissatisfied user groups
were mixed in terms of gender and culture, the user population as a whole showed
a distinct alignment along functional lines.
All but one of the management personnel at Otis were male non-Singaporeans. All the non-managers were female Singaporeans.
Directions for Future Research
The discovery that technological
frames of reference can impact satisfaction is only relevant if those frames of reference can be altered. Tyre and Orlikowski
(1994) posit that “windows of opportunity” exist in which adaptation of a particular technology can be effected.
The relationship among the three components (technological frame of reference,
satisfaction with EUC support and overall
EUC satisfaction) does not have to remain
static. The introduction of a trigger or the
exploitation of an existing one can open a
window of opportunity, altering the frames
of reference and thereby creating a cyclical relationship.
Future research that concentrates on
identifying specific triggers that could
change the alignment of technological
frames of reference is needed. Ideally, any
subsequent research would first assess the
technological frames of reference and their
effect on satisfaction, introduce a trigger
mechanism to alter the frames, and then
reassess the frames and the level of satisfaction at a later date in order to measure
any changes. In this way, the effect of
technological frames of reference on satisfaction would be more fully explored.
Appendix A: Overall End-User Satisfaction Instrument

Each item is rated on a seven-point scale from 1 (Disagree) to 7 (Agree).

1. Your relationship with the Office of Information Technology staff is good.
2. Your communication with the OIT staff is precise.
3. The attitude of the OIT staff is positive.
4. The degree of training provided to you is sufficient.
5. The speed of the responses to your requests for service is good.
6. The quality of the responses to your requests for service is good.
7. The information that is generated from your computing activities is relevant.
8. The information that is generated from your computing activities is accurate.
9. The information that is generated from your computing activities is precise.
10. The information that is generated from your computing activities is complete.
11. The information that is generated from your computing activities is reliable.
12. You are able to carry out your computing activities with speed.
13. Your understanding of the applications you use is good.
14. Your perceived participation in the information systems function is high.
Appendix B: End-User Support Instrument

Tick the box that corresponds to the importance that each of the following 22 system attributes contributes to the performance of your job. Each attribute is rated on a five-point scale: 1 = Irrelevant, 2 = Somewhat Important, 3 = Important, 4 = Very Important, 5 = Critical.

1. Ease of access for users to computing facilities.
2. New hardware upgrades.
3. New software upgrades.
4. Access to external databases through the system.
5. A low percentage of hardware and software down time.
6. A high degree of technical competence of systems support staff.
7. User confidence in systems.
8. The degree of personal control users have over their systems.
9. Systems responsiveness to changing users' needs.
10. Data security and privacy.
11. System's response time.
12. Extent of user training.
13. Fast response time from system support staff to remedy problems.
14. Participation in planning of the systems requirements.
15. Flexibility of the system to produce professional reports, e.g., graphics and desktop publishing.
16. Positive attitude of information systems staff to users.
17. User's understanding of the system.
18. Overall cost-effectiveness of information systems.
19. Ability of the system to improve personal productivity.
20. Documentation to support training.
21. Help with database or model development.
22. Standardization of hardware.
REFERENCES
Aladwani, A. (2002), “Organizational
Actions, Computer Attitudes, and End User
Satisfaction in Public Organizations: An
Empirical Study”, Journal of End User
Computing, 14(1), 42-49.
Amoako-Gyampah, K. and White, K. (1993), “User involvement and user satisfaction: an exploratory contingency model”, Information & Management, 25(1), 1-10.
Baroudi, J. J., Olson, M. H., and Ives,
B. (1986), “An empirical study of the impact of user involvement on system usage
and information satisfaction”, Communications of the ACM, 29, 232-238.
Bergeron, F. and Berube, C. (1988),
“The Management of the End-User Environment: An Empirical Investigation”, Information & Management, 14(3), 107-113.
Blili, S., Raymond, L., and Rivard, S.
(1998), “Impact of task uncertainty, enduser involvement, and competence on the
success of end-user computing”, Information and Management, 33(3),137-153.
Boland, R. (1978), “The Process and
Product of Systems Design”, Management
Science, 24(9), 887-898.
Boland, R. (1979), “Control, Causality and Information Systems Requirements”, Accounting, Organizations and
Society, 4(4), 259-272.
Bostrom R., and Heinen, J. (1977),
“MIS Problems and Failures: A Socio-Technical Perspective, Part I: The
Causes”, MIS Quarterly, 1(3), 17-32.
Bowman, B., Grupe, F., Lund, D., and
Moore, W. (1993), “An Examination of
Sources of Support Preferred by End-User
Computing Personnel”, Journal of End
User Computing, 5(4), 4-12.
Brancheau, J. and Wetherbe, J.
(1988), “Higher and Lower-Rated Infor-
mation Centers: Exploring the Differences”,
Journal of Information Management,
9(1), 53-70.
Brown, T.J., Churchill, G. and Peter,
J. (1993), “Research Note: Improving the
Measurement of Service Quality”, Journal of Retailing, 69(1), 127-139.
Buyukkurt, M. and Vass, E. (1993),
“An Investigation of Factors Contributing
to Satisfaction with End-User Computing
Process”, Canadian Journal of Administrative Sciences, 10(3), 212-229.
Chan, Y. E. and Storey, V.C. (1996),
“The use of spreadsheets in organizations:
Determinants and consequences”, Information and Management, 31(3), 119-134.
Choe, J. M. (1998), “The effects of
user participation of the design of accounting information systems”, Information and
Management, 34(3), 185-198.
Compeau, D., Higgins, C., and Huff,
S. (1999), “Social Cognitive Theory and Individual Reactions to Computing Technology: A Longitudinal Study”, MIS Quarterly, 23(2), 145-158.
Corbin, J. and Strauss, A., (1990),
Basics of Qualitative Research, Newbury
Park, CA, Sage Publishing.
Davis, F. D. (1989), “Perceived usefulness, perceived ease of use and user
acceptance of information technology”,
MIS Quarterly, 13(3), 319-339.
Davis, F. D., Bagozzi, R. P. and
Warshaw, P. R. (1989), “User acceptance
of computer technology: A comparison of
two theoretical models”, Management Science, 35(8), 982-1003.
Dagwell, R. and Weber, R. (1983),
“Systems Designers’ user models: A comparative study and methodological critique”.
Communications of the ACM, 26(11),
987-997.
DeLone W., and McLean, E. (1992).
“Information System Success: the quest for
the dependent variable.” Information Sys-
tems Research, 3(1), 60-95.
Doll, W. and Torkzadeh, G. (1988),
“The Measurement of End-User Computing Satisfaction”, MIS Quarterly, 12(2),
259-274.
Fisk, R., Brown, S., and Bitner, M.
(1993), “Tracking the Evolution of the Services Marketing Literature”, Journal of
Retailing, 69(1), 61-103.
Foong, S. (1999), “Effect of end-user
personal and systems attributes on computer-based information system success in
Malaysian SMEs”, Journal of Small Business Management, 37(3), 81-87.
Glaser, B. and Strauss, A. (1967), The
Discovery of Grounded Theory, Chicago,
Il, Aldine Publishing Company.
Gioia, D. A. (1986), “Symbols, scripts,
and sensemaking: Creating meaning in the
organizational experience”, In The Thinking Organization, Jossey-Bass, San Francisco, CA.
Guimaraes, T. and Igbaria, M. (1994),
“Exploring the relationship between IC success and company performance”, Information and Management, 26(3), 133-142.
Guimaraes, T., Igbaria, M., and Lu,
M. (1992), “The determinants of DSS success: an integrated model”, Decision Sciences, 23(2), 409-429.
Guimaraes, T., Yoon, Y. and
Clevenson, A. (1996), “Factors important
to expert systems success: A field test”,
Information & Management, 30(3), 119-131.
Hartwick, J. and Barki, H. (1994),
“Explaining the role of user participation in
information system use”, Management
Science, 40(4), 440-465.
Henry, J. and Stone, R. (1994), “A
Structural Equation Model of End-User
Satisfaction with a Computer Based Medical Information System”, Information Resources Management Journal, 7(3), 21-34.
Hirschheim, R. and Klein, H. (1989),
“Four Paradigms of Information System
Development”, Communications of the
ACM. 32(10), 1199.
Igbaria, M., Guimaraes, T., and Davis,
G. B. (1995), “Testing the determinants of
microcomputer usage via a structural equation model”, Journal of Management Information Systems, 11(4), 87-114.
Igbaria, M., Iivari, J., and Maragahh,
H. (1995), “Why do individuals use computer technology? A Finnish case study”,
Information and Management, 29(5), 227-238.
Igbaria, M., Pavri, F. N., and Huff, S.
(1989), “Microcomputer applications: an
empirical look at usage”, Information and
Management, 16(4), 187-196.
Ives, B., Olson, H. and Baroudi, J.
(1983), “The Measurement of User Information Satisfaction”, Communications of
the ACM, 26(10), 785-793.
Jawahar, I., and Elango, B. (2001),
“The effect of attitudes, goal setting, and
self-efficacy on End User performance”,
Journal of End User Computing, 13(2),
40-45.
Jurison, J. (2000), “Perceived Value
and Technology Adoption Across Four End
User Groups,” Journal of End User Computing, 12(4), 21-28.
Kettinger, W. and Lee, C. (1997),
“Pragmatic perspectives on the measurement of information systems service quality”, MIS Quarterly, 21(2), 223-240.
Kim, K.K. (1990) “User Information
Satisfaction: Towards Conceptual Clarity”,
Proceedings of the International Conference on Information Systems.
Kling R. and Iacono, S. (1984), “Computing as an Occasion for Social Control”,
Journal of Social Issues, 40(3), 77-96.
Kohlmeyer, J. and Blanton, J. (2000),
“Improving IS Quality”, Journal of Infor-
mation Technology Theory and Application, 2(1).
Kumar, K. and Bjorn-Anderson, N. (1990), “A Cross-cultural Study of Designer Values Relevant to Information Systems Development”, Communications of the ACM, 33(5), 528-538.
Lawrence, M. and Low, G. (1993), “Exploring individual user satisfaction within user-led development”, MIS Quarterly, 17(2), 195-207.
Lederer, A. and Spencer, V. (1988), “The Effective Information Center: Targeting the Individual User for Success”, Journal of Systems Management, 39(1), 22-27.
Lee, S. M., Kim, Y. R., and Lee, J. (1995), “An empirical study of the relationships among end-user information systems acceptance, training, and effectiveness”, Journal of Management Information Systems, 12(2), 189-202.
Lehman, J. A. and Murthy, V. S. (1989), “Business graphics trends, two years later”, Information & Management, 16(2), 57-69.
Lewis, R. and Booms, B. (1983), “The Marketing Aspects of Service Quality”, in Emerging Perspectives on Services Marketing, L. Berry, G. Shostack, and G. Upah (eds.), Chicago, American Marketing, 99-107.
Mahmood, M., Burn, J., Leopoldo, A., and Jacquez, C. (2000), “Variables affecting information technology end-user satisfaction: a meta analysis of the empirical literature”, International Journal of Human-Computer Studies, 52, 751-771.
Markus, M. (1984), “Power, Politics, and MIS Implementation”, Communications of the ACM, 26(6), 430-444.
Markus, H. and Zajonc, R. (1985), “The Cognitive Perspective in Social Psychology”, in The Handbook of Social Psychology (Vol. 1), G. Lindzey and E. Aronson (eds.), New York, Random House, 137-230.
Martin, P. and Turner, B. (1986), “Grounded Theory and Organizational Research”, The Journal of Applied Behavioral Science, 22(2), 141-157.
McKeen, J. D. and Guimaraes, T. (1997), “Successful strategies for user participation in systems development”, Journal of Management Information Systems, 14(2), 133-150.
McKeen, J. D., Guimaraes, T., and Wetherbe, J. C. (1994), “The relationship between user participation and user satisfaction: an investigation of four contingency factors”, MIS Quarterly, 18(4), 427-448.
Miles, M. and Huberman, A. M. (1994), Qualitative Data Analysis, Thousand Oaks, CA, Sage Publications.
Mirani, R. and King, W. (1994), “The Development of a Measure for End-User Computing Support”, Decision Sciences, 25(4), 481-499.
Montazemi, A. (1988), “Factors Affecting Information Satisfaction in the Context of the Small Business Environment”, MIS Quarterly, 12(2), 239-256.
Montazemi, A. R., Cameron, D. A. and Gupta, K. M. (1996), “An empirical study of factors affecting software package selection”, Journal of Management Information Systems, 13(1), 89-105.
Noble, D. (1986), Forces of Production: A Social History of Industrial Automation, New York, Oxford University Press.
Orlikowski, W. (1993), “CASE Tools as Organizational Change: Investigating Incremental and Radical Changes in Systems Development”, MIS Quarterly, 17(3), 309-341.
Orlikowski, W. and Gash, D. (1994), “Technological Frames: Making Sense of Information Technology in Organizations”, ACM Transactions on Information Systems, 12(2), 174-207.
Palvia, P. C. (1996), “A model and instrument for measuring small business user satisfaction with information technology”, Information & Management, 31(3), 151-163.
Parasuraman, A., Zeithaml, V. A. and Berry, L. (1985), “A Conceptual Model of Service Quality and its Implications for Future Research”, Journal of Marketing, 49(4), 41-50.
Parasuraman, A., Zeithaml, V. A. and Berry, L. (1993), “Research Note: More on Improving Quality Measurement”, Journal of Retailing, 69(1), 140-147.
Park, S. W., Jih, K., and Roy, A. (1993-1994), “Success of management information systems: an empirical investigation”, Journal of Computer Information Systems, 34(2), 33-37.
Pinch, T. and Bijker, W. (1987), “The Social Construction of Facts and Artifacts”, in The Social Construction of Technological Systems, W. Bijker, T. Hughes, and T. Pinch (eds.), Cambridge, MA, MIT Press, 17-50.
Pitt, L., Watson, R., and Kavan, C. (1995), “Service Quality: A measure of information systems effectiveness”, MIS Quarterly, 19(2), 173-187.
Pitt, L., Watson, R., and Kavan, C. (1997), “Measuring Information Service Quality: Concerns for a complete canvas”, MIS Quarterly, 21(2), 209-222.
Powell, A., and Moore, J. (2002), “The Focus of Research in End User Computing: Where Have We Come Since the 1980’s?”, Journal of End User Computing, 14(1), 3-22.
Rainer, R. and Carr, H. (1992), “Are information centers responsive to end user needs?”, Information & Management, 22(2), 113-121.
Remenyi, D. and Money, A. (1994), “Service quality and correspondence analysis in determining problems with the effective use of computer services”, European Journal of Information Systems, 3(1), 2-12.
Ryker, R. and Nath, R. (1995), “An empirical examination of the impact of computer information systems on users”, Information & Management, 29(4), 207-215.
Saarinen, T. (1996), “An expanded instrument for evaluating information system success”, Information & Management, 31(2), 103-119.
Saleem, N. (1996), “An empirical test of the contingency approach to user participation in information systems development”, Journal of Management Information Systems, 13(1), 145-166.
Satzinger, J. W. and Olfman, L. (1995), “Computer support for group work: perceptions of the usefulness of support scenarios and end-user tools”, Journal of Management Information Systems, 11(4), 115-148.
Schutz, A. (1970), On Phenomenology and Social Relations, Chicago, IL, University of Chicago Press.
Seddon, P. (1997), “A respecification and extension of the DeLone and McLean model of IS success”, Information Systems Research, 8(3), 240-253.
Shaw, N., Niederman, F., and DeLone, W. (2002), “An empirical study of success factors in end-user support”, DATA BASE for Advances in Information Systems, 33(2).
Strauss, A. (1987), Qualitative Analysis for Social Scientists, Cambridge, Cambridge University Press.
Szajna, B. and Scamell, R. (1993), “The effects of information system user experiences on their performance and perceptions”, MIS Quarterly, 17(4), 493-516.
Thompson, R. L., Higgins, C. A., and Howell, J. M. (1994), “Influence of experience on personal computer utilization: testing a conceptual model”, Journal of Management Information Systems, 11(1), 167-187.
Torkzadeh, G. and Doll, W. (1993), “The place and value of documentation in end-user computing”, Information & Management, 24(3), 147-158.
Trauth, E. and Cole, E. (1992), “The Organizational Interface: A method for supporting end users of packaged software”, MIS Quarterly, 16(1), 35-54.
Tyre, M. and Orlikowski, W. (1994), “Windows of Opportunity: Temporal Patterns of Technological Adaptations in Organizations”, Organization Science, 5(1), 98-118.
Van Dyke, T. P., Kappelman, L. A., and Prybutok, V. R. (1997), “Measuring Information Systems Service Quality: Concerns on the Use of the SERVQUAL Questionnaire”, MIS Quarterly, 21(2), 195-208.
Venkatesh, V. (1999), “Creation of favorable user perceptions: exploring the role of intrinsic motivation”, MIS Quarterly, 23(2), 239-260.
Yoon, Y. and Guimaraes, T. (1995), “Assessing expert systems impact on users’ jobs”, Journal of Management Information Systems, 12(1), 225-249.
Zmud, R. W. (1979), “Individual differences and MIS success: a review of the empirical literature”, Management Science, 25(10), 966-979.
Nancy C. Shaw is an Assistant Professor of Information Systems at George Mason University in
Fairfax, Virginia. She received her Ph.D. in Information Systems from the National University
of Singapore, and an MBA and a BBA from the University of Kentucky. Dr. Shaw has been a
practitioner and consultant in the information systems industry for over 20 years. She has
worked for AT&T and General Electric, and most recently served as a senior systems analyst for the Central
Intelligence Agency. Dr. Shaw served as a Military Intelligence Officer in the U.S. Army Reserves
during the Persian Gulf War. Her current research interests include end-user computing support
and knowledge management. Dr. Shaw has published in the International Journal of Information
Management and the DATA BASE for Advances in Information Systems.
Joo-Eng Lee-Partridge is an MIS Professor in the School of Business at Central Connecticut
State University. She received her B.Sc. (First Class Honors) degree from the National University
of Singapore and her Ph.D. degree in Management Information Systems from the University of
Minnesota, Twin Cities. Her research interests cover topics such as facilitation, group support
systems, computer-based learning, end-user computing, negotiation and knowledge
management. Her publications have appeared in MIS Quarterly, Group Decision and
Negotiation, European Journal of Information Systems, Journal of Strategic Information Systems,
Omega and International Journal of Information Management.
James S.K. Ang is an Associate Professor with the Department of Decision Sciences, School of
Business, National University of Singapore. He holds the B.Sc. (Mathematics and Philosophy)
from the University of Singapore, and the M.A.Sc. and Ph.D. (Management Sciences) degrees
from the University of Waterloo, Canada. His research interests include systems modeling
using Petri nets and object-oriented formalism, information systems planning, and e-business
topics. Dr. Ang has published articles in journals such as IEEE Transactions on Knowledge
and Data Engineering, IEEE Transactions on Systems, Man and Cybernetics, Data Base, Journal
of Operations Management, Decision Sciences, Data and Knowledge Engineering, Information
and Management, International Journal of Production Economics, Decision Support Systems,
and INFOR.