Understanding objectivity in information system evaluation

Understanding objectivity in information system evaluation
Perceptions of information system economics
Peter Schuurman
Distributed by:
Peter Schuurman
[email protected]
Printed by:
Printer
Understanding objectivity in information system evaluation
Perceptions of information system economics
Doctoral dissertation, University of Groningen, The Netherlands
ISBN 978-XX-XXX-XXXX-X
Copyright © 2011 Peter Schuurman
All rights reserved. No part of this publication may be reprinted or utilized in any form
or by any means, including recording in any information storage or retrieval system,
without prior written permission from the copyright owner.
Rijksuniversiteit Groningen
Understanding objectivity in information system evaluation
Perceptions of information system economics
Doctoral thesis

to obtain the degree of doctor in
Economics and Business
at the Rijksuniversiteit Groningen
on the authority of the
Rector Magnificus, dr. E. Sterken,
to be defended in public on
Thursday, 7 July 2011
at 14:45
by
Peter Martijn Schuurman
born on 5 August 1981
in Utrecht
Supervisor:
Prof. dr. E.W. Berghout
Assessment committee:
Prof. dr. W.A. Dolfsma
Prof. drs. J.A. Oostenhaven
Prof. dr. P. Powell
Acknowledgements
On my first research-related course, back in 2006, I was ‘corrected’ from referring to this research as “our research” to “my research.” Today, as then, I still stand by my own wording. In the first place, therefore, I would like to express my gratitude to Getronics for setting the research in motion, and to Mark Smalley, Jan van Bon and CIOnet Nederland for having it go the distance. Subsequently, my thanks go to all participating organizations and interviewees, whom I, due to promised anonymity, regretfully cannot address by name.
In-house, I would like to show my appreciation to Philip Powell for his thorough reviews
and feedback. Additionally, Karel de Bakker, Arnold Commandeur, and Jan Braaksma are
thanked for their priceless input and support, as well as Chee-Wee Tan for helping me
break the numbers, and Jannie Born and Durkje van Lingen-Elzinga for making life easier
at the university.
Finally, I would like to give my family and friends all the credit they deserve for their everlasting support and understanding. Above each and every one of them, however, my gratitude goes to Egon Berghout for all his guidance, direction, motivation, and cooperation. I truly believe that, for me, there is no better supervisor out there.
Peter Schuurman
Groningen, 15th of December, 2010
Contents
Acknowledgements ........................................................................................................... v
1 Evaluating information system economics .................................................................. 9
   1.1 Introduction ............................................................................................................ 9
   1.2 Information system economics ............................................................................ 10
   1.3 Information system management ........................................................................ 12
   1.4 Information system evaluation ............................................................................. 13
   1.5 Research objective and questions ........................................................................ 14
   1.6 Research design .................................................................................................... 15
   1.7 Summary and conclusions .................................................................................... 22
2 Literature on evaluating information system economics ........................................... 23
   2.1 Introduction .......................................................................................................... 23
   2.2 Information systems ............................................................................................. 23
   2.3 Value creation ....................................................................................................... 30
   2.4 Benefits ................................................................................................................. 34
   2.5 Costs ..................................................................................................................... 37
   2.6 Evaluation methods .............................................................................................. 40
   2.7 Summary and conclusions .................................................................................... 42
3 Theoretical perspectives and objectivity ................................................................... 43
   3.1 Introduction .......................................................................................................... 43
   3.2 New Institutional Economics ................................................................................ 45
   3.3 Behavioural economics ......................................................................................... 49
   3.4 Objectivity ............................................................................................................. 51
   3.5 Propositions and hypotheses ................................................................................ 53
   3.6 Summary and conclusions .................................................................................... 61
4 Research method ..................................................................................................... 65
   4.1 Introduction .......................................................................................................... 65
   4.2 Empirical research design ..................................................................................... 65
   4.3 Questionnaire design ............................................................................................ 65
   4.4 Data acquisition .................................................................................................... 68
   4.5 Data analysis process ............................................................................................ 70
   4.6 Summary and conclusions .................................................................................... 74
5 Evaluation in practice ............................................................................................... 75
   5.1 Introduction .......................................................................................................... 75
   5.2 Understanding evaluation practices ..................................................................... 75
   5.3 Cost, benefit and evaluation perceptions ............................................................. 85
   5.4 Explaining perceived evaluation performance ..................................................... 94
   5.5 Summary and conclusions .................................................................................. 103
6 Summary and conclusions ...................................................................................... 105
   6.1 Introduction ........................................................................................................ 105
   6.2 Review of research questions and results .......................................................... 105
   6.3 Limitations of the research ................................................................................. 110
   6.4 Research contribution ......................................................................................... 110
   6.5 Suggestions for future research .......................................................................... 111
   6.6 Final remarks ...................................................................................................... 112
Appendix A: Questionnaire ............................................................................................ 115
Appendix B: Latent variable correlation ......................................................................... 121
Samenvatting in het Nederlands (Summary in Dutch) .................................................... 123
Publications related to the research .............................................................................. 129
References .................................................................................................................... 130
Index ............................................................................................................................. 141
1 Evaluating information system economics
1.1 Introduction
“It is important that [costs] are considered and evaluated in the context of the complete
range of benefits that is expected to be achieved” (Ward and Daniel 2006). And vice versa,
as not doing so produces a value judgement based on a one-sided portrayal of the assessed concept. That is, value is an analysis of benefits and costs, and without information on one it is impossible to know what level of the other is acceptable. Yet, in the evaluation of the economics of information systems, organizations continue to struggle with the assessment, realization, and management of benefits, whereas the analysis of costs seems to have reached a steady state which provides reasonably satisfying appraisals. This situation makes value judgements based on the weighing of the two difficult. This research attempts to uncover the origins of this
problem by examining the differences between benefits and costs. Subsequently, the
consequences for the evaluation of information system economics are addressed.
In the past decades the information function has developed from its initial operational
automation tasks to enabling strategic possibilities by, for example, driving innovation
and facilitating new business models. Further, in the future, the importance of
information systems to individuals, organizations, government and society is expected to
increase (Berghout 2002). While making this transition, worldwide expenditure on
information systems is expected to rise to US$4.4 trillion in 2011 (WITSA 2008). However,
only 7% of the overall sample of senior business managers and information system
professionals agreed that they do an extremely effective job of controlling IT costs (Shifrin
2006). This suggests that, while the evaluation of the economic aspects of information systems may have improved, there is still much to gain if the efficiency and effectiveness of their management can be increased.
At least since the 1960s (Williams and Scott 1965), an extensive portfolio of evaluation methods for information systems has been created to aid this managerial quest for efficiency and effectiveness. Despite some evidence of the use of the methods in practice
(Al-Yaseen, et al. 2006), their usefulness appears to be lacking, or at least unable to keep
pace with progress in technological complexity, as reports of the squandering of
resources on unsuccessful projects and ineffective management persist (Latimore, et al.
2004; McManus and Wood-Harper 2007; Tichy and Bascom 2008). Underlining the
problem complexity, as well as the disparity between theory and practice, these reports
indicate that organizations do not benefit from the potential value of information system
evaluation.
To address the origin of the difficulties with the evaluation of information system
economics and the possible reasons behind their persistency, the fundamental concepts,
how they relate and the research itself are discussed in this chapter. First, the subject of
information system economics in general is addressed. Second, the management of
information systems, as understood for this research, is explained in Section 1.3. Then, in
Section 1.4, the role of information system evaluations is expounded upon. Next, the
research objective, research questions and accompanying design are discussed in Sections 1.5 and 1.6. Finally, the findings and conclusions of this chapter are drawn together in
Section 1.7.
1.2 Information system economics
To research the evaluation of information system economics, the economic aspects have
to be specified. In general, the field of economics assesses the production and
consumption of goods and services. By doing so, it addresses the choices (to be) made in
the pursuance of a distribution that leads to an optimal, or at least satisfying, solution
given the scarcity of production factors. In this research, this solution is defined by the
concept of value, which is discussed next.
Value consists of the total of consequences as delivered by, in this case, an information
system (Renkema and Berghout 2005). These consequences can be positive and/or
negative, and might occur in a financial as well as non-financial capacity. The non-financial
consequences consist of the total of positive and negative contributions. In addition,
summing to profitability, the positive and negative financial consequences are referred to
as revenues and costs respectively. This terminology is arbitrary though, as it depends on
the accounting view applied. One could, for instance, also use the cash inflow or earnings
when dealing with positive financial consequences, or the cash outflow or expenditure
when addressing the negative ones. The total of positive consequences is generally
referred to as benefits, whereas the negative aspects are termed burdens. In practice,
however, the common concept used to address the latter is costs. Given the widespread
use of this term as such, it is adopted in this research to refer to burdens. An overview of
the concept of value and its subdivision into consequences is provided in Figure 1.

Figure 1: Definition of value (Renkema and Berghout 2005)
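Read as a formula, and as a sketch of the taxonomy in Figure 1 rather than notation taken from Renkema and Berghout (2005), the value V delivered by an information system can be written as:

$$ V = \underbrace{(R - C)}_{\text{financial consequences}} + \underbrace{(B^{+} - B^{-})}_{\text{non-financial contributions}} $$

where $R$ denotes revenues, $C$ costs, and $B^{+}$ and $B^{-}$ the positive and negative non-financial contributions. The caveat is that the two bracketed terms need not be commensurable: non-financial contributions are not necessarily expressible in monetary units, which is part of why the weighing of benefits and costs proves difficult in practice.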
As easy as this summation of consequences may sound, the actual process of value
creation is complex and has many potential complications. Focusing on a high level of
abstraction to explain how value for the business could be created by information
systems, there is some common understanding about the process (Berghout and Remenyi
2005). In their process model of IT business value creation, illustrated in Figure 2, Soh and
Markus (1995) state that IT investments can only lead to favourable organizational
performance if three necessary, but not sufficient, conditions are met. These conditions
are, first, organizations need to effectively acquire IT assets from their IT expenditures.
This is important as “for a given level of IT expenditure, some organizations may be able
to obtain an applications portfolio of greater breadth and depth and a better
infrastructure” than other organizations (Soh and Markus 1995). Second, if high quality IT
assets are provided, they need to be combined with the process of appropriate IT use in
order to acquire favourable IT impacts. These impacts represent the intermediate
outcome of operational information systems. “If, for example, the organization achieved
positive impacts somewhat after its key competitors did so, the outcomes of increased
productivity and value to customer may be achieved” - the intermediate outcome - “but
any potential bottom line results might be competed away.” These bottom line results can
only be achieved if the third condition is also met: the IT impacts must not be negatively influenced by the organization’s competitive situation.
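To make the necessary-but-not-sufficient logic concrete, the following is a minimal sketch, not an implementation from Soh and Markus (1995); the function names, parameters, and figures are hypothetical. Each process can only pass on what the previous one produced, so weakness in any single stage caps the bottom-line result.

```python
# Hypothetical sketch of the three value-creation processes as
# necessary-but-not-sufficient gates; all names and numbers are invented.

def it_conversion(expenditure: float, effectiveness: float) -> float:
    """IT expenditure -> IT assets; conversion effectiveness differs per organization."""
    return expenditure * effectiveness

def it_use(assets: float, appropriate_use: float) -> float:
    """IT assets -> IT impacts; high-quality assets still require appropriate use."""
    return assets * appropriate_use

def competitive_process(impacts: float, erosion: float) -> float:
    """IT impacts -> organizational performance; competition may erode the results."""
    return impacts * (1.0 - erosion)

assets = it_conversion(1_000_000, 0.6)           # 600,000 of effective assets
impacts = it_use(assets, 0.7)                    # 420,000 of realized impacts
performance = competitive_process(impacts, 0.5)  # 210,000 survives competition
print(f"Bottom-line value retained: {performance:,.0f}")
```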
In this research, this total of consequences caused by information systems is the central
theme. The matter is addressed by assessing the circumstances determining whether
value is provided or not and then focusing on the subdivision of benefits and costs. Both
concepts are elaborated upon in Chapter 2. In the next section, the management
principles used to affect the overall value are discussed in general.
Figure 2: IT business value creation (Soh and Markus 1995)
1.3 Information system management
In order to affect the economic performance of information systems, organizations
should manage them. The tasks in information system management can be defined as
optimizing information system services by making policy and plans and coordinating the
services and organization in a changing technological, economic and organizational
environment (Looijen 2004). For this, the management of information systems is
generally categorized in three domains: technical, application, and functional
management (Looijen 2004). In these domains, technical management is responsible for
the maintenance and exploitation of the information technology infrastructure, which
comprises all hardware, system software and associated data sets. Application
management carries out the tasks involved with maintaining and exploiting all application
software and their associated databases. Functional management is in charge of the
maintenance and exploitation of the information technology functionality. This
functionality is determined by the extent to which an information system’s capabilities
are aligned with the business processes (Klompé 2003; Van der Pols 2003). The three
domains are illustrated in Figure 3.
When comparing the domains of information system management to the model of IT
business value creation, the IT conversion process shows an overlap with technical and application management. Further, functional management links the maintenance and
exploitation of operational information systems to the business processes similar to how
the IT use process reflects the demand for IT by the business in order to obtain desirable
IT impacts.
Based on the three different domains, information system management can be
embedded in the organization by means of an operational model and organizational
structure (Bellini, et al. 2002). One way to do this is by using the processes as described in
a variety of models, methods, and collections of best practices. Of these aids, the
Information Technology Infrastructure Library (OGC 2005), ITIL, has become the de facto standard in support of technical management. For application management the Application Services Library (Van der Pols 2005), ASL, is available, and the Business Information Services Library (Van der Pols 2005), BiSL, is on hand to assist businesses in organizing their functional management.

Figure 3: Three domains of information system management (Looijen 2004)
Summarizing, organizations can optimize their information system efforts from a
technology perspective by altering the three domains of information system
management. To guide them in making choices on which alterations to make,
organizations can assess the past, present, and future situation by means of evaluations.
These evaluations are addressed next. Additionally, the concept of the information
system itself is explored further in the next chapter.
1.4 Information system evaluation
In order for the process of management to direct the information systems in the right
direction, the activity of evaluation provides guidance. Evaluation is considered to be
“undertaken as a matter of course in the attempt to gauge how well something meets a
particular expectation, objective or need” (Hirschheim and Smithson 1999). More specifically, Willcocks (1992) takes a management perspective and defines information system
evaluation to be the activity of “establishing by quantitative and/or qualitative means the
worth of IT to the organization.” The outcome of such an assessment can then be used in
the decisions an organization has to take when managing its information systems. Throughout the life cycle of an information system, several of these decision moments occur, the most noticeable of which are the go/no-go investment decisions.
Evaluations can serve two purposes; these purposes reveal themselves in the evaluations
being either formative or summative (Remenyi, Money, and Sherwood-Smith 2000). In
the case of formative evaluation, the organization evaluates in order to improve performance, focusing on the future (ex-ante). Summative evaluations, on the other hand,
are solely executed to observe the quality of past performance (ex-post). Thus, any
evaluation should consist of a monitoring and a forecasting aspect.
Following this distinction, the subjects to be reflected in evaluations need to include both
outcome-based and process-based approaches (Hinton, Francis, and Holloway 2000; Doll,
Deng, and Scazzero 2003). Outcome-based attributes focus on the measures of
information system effectiveness, whereas the process-based approaches consider the
activities and processes resulting in these outcomes. The process-based approach builds
upon the notion that certain conditions have to be met, even though they are by no
means sufficient for the sure creation of the outcomes (Mohr 1982; Markus and Robey
1988). This distinction is similar to software quality management techniques, where
several techniques focus on quality characteristics of software (McCall, Richards, and
Walters 1977; Boehm 1978; ISO/IEC 1994) and other techniques focus on improving the
software development process (Pfleeger 1991; Bicego, et al. 1994). Eventually, the two
approaches will come together, as advanced software development processes will
produce high quality software and vice versa.
Closely connected to the division between formative and summative evaluations is the
difference between evaluation to learn or to judge. In this distinction evaluating to learn
generally has a positive air in which organizations focus on the future and possible
improvements, whereas evaluating to judge tends to the negative, with a focal point in the past, more often inflicting punishment for faults than giving praise for good performance.
Resembling the situation of information systems’ management, their evaluation is
supported by a legion of methods, models, and techniques. In what might be the latest
attempts to count these methods, Renkema and Berghout (1997; 2005) come to 67
methods which they categorized into the four types of financial, ratio, multiple criteria,
and portfolio methods (Renkema and Berghout 2005). The most well known of the
methods include the Return on Investment (possibly making its first information system
appearance in Weston and Copeland 1986), Information Economics (Parker, et al. 1988),
and the Balanced Scorecard (Kaplan and Norton 1992). Since then, it seems more probable that the number of methods has increased further, rather than stagnated.
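As an illustration of the first, financial, category of methods, the following sketch computes the Return on Investment and, as a discounted variant, the Net Present Value for a hypothetical investment; all figures are invented and the calculation is standard finance, not a method prescribed by the sources above.

```python
# Hedged illustration of the 'financial' method category; figures are invented.

initial_outlay = 500_000.0             # year-0 investment in the system
annual_net_benefits = [150_000.0] * 5  # benefits minus running costs, years 1-5
discount_rate = 0.10                   # assumed cost of capital

# Return on Investment: net gain over the horizon relative to the outlay.
roi = (sum(annual_net_benefits) - initial_outlay) / initial_outlay

# Net Present Value: the same cash flows, discounted to year 0.
npv = -initial_outlay + sum(
    cash / (1 + discount_rate) ** year
    for year, cash in enumerate(annual_net_benefits, start=1)
)

print(f"ROI: {roi:.0%}")    # 50% over five years
print(f"NPV: {npv:,.0f}")   # roughly 68,618 at a 10% rate
```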
Regarding these methods, McKeen and Smith (2003) identify three problems often arising
when the metrics are assessed. These problems are, first, that the lack of connection to the drivers of the business leaves the business wondering about their meaning. Second, the apparently non-existent correlation between the investment and its provided value, due to the level of knowledge work involved, offers little insight into business value. Third, value provided by information systems is increasingly becoming a matter of potential future value in addition to what has been delivered in the present. Further problems arise
with the demarcation of individual systems’ performance, the extent to which
information systems are interwoven in organizations, and the elusive nature of benefits
(Remenyi, et al. 2000). In addition, a difference between the perceived success of an
evaluation and its effect in the organization can be identified (Nijland 2004).
Overall, evaluating information systems remains troublesome. It is seen to support
organizations in taking managerial decisions regarding their systems, but does not seem
to be able to keep up with the complex developments in technology. In this research the
way organizations try to predict and monitor the economic performance of their
information systems will be addressed in order to guide future developments in
information system evaluation.
1.5 Research objective and questions
Given the apparent situation in which the existing evaluation methods seem unable to
support the information system management principles in their quest for value, Tillquist
and Rodgers (2005) define the “key roadblock to accounting for the value of IT [to be] a
lack of a systematic, objective methodology specifically designed to separate and identify
the contribution of IT.” In addition, they state that “depending upon the various
interpretations and interests of analysts and stakeholders leads to biased and conflicting
estimates of value.” However, given the available ways and means, it seems unlikely that
yet another technique would address the problems described in the previous section
(Powell 1992). Therefore, there is a need for more insight into the foundations of
evaluation, and its specific characteristics, use and value in practice.
This research focuses on the differences in perception of evaluators towards the
systematics and objectivity of the benefits and costs in value evaluations. Specifically, it
addresses why the evaluation of information system value does not seem to deliver an
effective, and feasible, constellation of the two. In light of this problem, this research
attempts to uncover objectivity as a possible origin of the apparent inability of evaluation to deal satisfactorily with both benefits and costs. The central
research objective is formulated as:
Improving the understanding of objectivity in the evaluation of information system
economics.
Investigation of this objective needs to at least address the following questions:
1. What are benefits and costs, and how different are they?
2. What is the gap between the assessments of benefits and of costs in information system evaluation in the literature?
3. Assuming a gap exists, which practices are used by organizations to evaluate benefits and costs?
4. Which direction do changes in these practices need to take in order to reduce the gap?
The central way of dealing with these questions is through the differences in perception
towards benefits and costs. In doing so, their perceived objectivity will become a core
concept.
Given that information system evaluation research and practice have a history of over 40 years, yet evaluation is neither well understood nor routinely practiced, insight into these issues may enable a better understanding of some of the fundamental underlying problems.
1.6 Research design
Prior to commencing the search for an answer to the research question, a research design
is created. In such a design the researcher’s choices on a wide array of alternative
approaches are considered. Here, the design is categorized and discussed by adopting the
framework for research as defined by Saunders, Lewis, and Thornhill (2006). This
framework, termed the research ‘onion’, divides the considerations into six layers; peeled
from the outside in, these are the layers of research philosophies, approaches, strategies,
choices, time horizons, and techniques and procedures. An illustration of the overall
framework is provided in Figure 4.
Next, the outer two layers are discussed as the researcher’s attitude towards research in
Section 1.6.1. Subsequently, the opportunities available to make the remaining layers
operational are considered in Section 1.6.2. Finally, in Section 1.6.3, the resulting design
to guide the research is presented. Unless indicated otherwise, the sections are based on
Saunders, et al. (2006).
1.6.1 Research attitude
A researcher’s attitude towards a study can be divided into the adopted philosophical
beliefs and the way of reasoning taken up in the form of a research approach. Each of
these is elaborated upon next.
Research philosophy
Research philosophy “relates to the development of knowledge and the nature of that
knowledge.” Three major ways to consider this philosophy can be identified; these are, (i)
epistemology, (ii) ontology, and (iii) axiology. Subsequently, each of these ways will be
briefly discussed.
Figure 4: The research ‘onion’ (Saunders, Lewis, and Thornhill 2006)
The first way addresses “what constitutes acceptable knowledge in a field of study.” In
this area the positions of positivism, realism, interpretivism, and pragmatism can be
identified as the main views. Between these positions, positivism holds the view that “all
true knowledge we may obtain is based on the observation or experience of real
phenomena in an objective and real world” (Cornford and Smithson 2006). That is, any
knowledge gained is seen to be free of any social value. Taking a realism viewpoint, the
researcher assesses any object as existing “independent of the human mind.” This means
that any observation made is one of a reality that exists as a separate entity. Similar to
positivism, realism “assumes a scientific approach to the development of knowledge.” In
contrast, interpretivism is the main representative in the field of information systems of
what is called ‘anti-positivism’ (Cornford and Smithson 2006). It accepts the position that
knowledge gained cannot be generalized to objective facts or rules, but rather that the
observations are subject to the social entity from which they were gained. Generalization as
such is therefore not seen as the primary purpose of knowledge generation. Finally,
pragmatism works around the debate between positivism and interpretivism and
observes the nature of acceptable knowledge to be possibly a continuum of both. In
doing so, it states that researchers should take the best of both worlds in support of that
which they value.
It is argued here that, in light of the express presence and central role of perceptions in the research conducted, the world studied cannot be seen to exist as a separate entity. Especially the search for the influence of objectivity in the form of subjective reasoning and political behaviour, which will be expounded upon in Chapter 3, creates a social entity which in the eyes of the researcher cannot be seen to exist apart. Therefore, in this research an interpretive, or at the very least pragmatic, position is adopted.
Second to epistemology, ontology considers the researcher’s view on reality, which can
also be either objective or subjective by nature. In this area, Saunders, et al. therefore
distinguish between the positions of objectivism and subjectivism. The former holds the
view that “social entities exist in reality external to social actors concerned with their
existence.” From this point of view, the nature of reality exists on its own, outside and
independent of the mind of the observer. The latter argues the opposite by assuming a
position in which the entities “are created from the perceptions and consequent actions of
those social actors concerned with their existence.” That is, the researcher observes an
object which gains its existence through this observation. It is then the researcher’s task
to create understanding of the phenomenon.
Here, in the attitude towards ontology, a subjective stance is taken. It is therefore believed that the nature of the observed depends on the perceptions of the observer.
Finally, axiology deals with the position a researcher’s own values occupy in a research, as it is these values which ultimately influence the choices made. A position on axiology can be described using a wide variety of topics; these include, for instance, the researcher’s
personal background and development, but also the adopted views in areas such as
aesthetics, politics, and ethics.
The axiological attitude taken up here is limited to an elaboration of the perceived
influence of the researcher on the observed, thus also extending the ontological
viewpoint. As will become apparent in following sections of this chapter, this influence is
most likely to occur in the relation between the interviewer and interviewees. As such,
the researcher should be aware of his possible influence on the data when it is collected
and should therefore attempt always to bear in mind possible negative effects. The role
of the interviewer when conducting an interview is addressed further in Chapter 4.
Research approach
Next to the researcher’s views on the research philosophy, the adopted way of reasoning
plays a role in the design, execution and outcome of a research. This reasoning is covered
by the research approach which can be either deductive or inductive. On the one hand, a
deductive approach follows a cycle in which a theory is developed, hypotheses or
propositions are stated, and finally a research is designed and performed to test these
theses. Consequently, the deductive research approach is seen to be a ‘theory testing’
approach. The inductive approach on the other hand works the other way around. In an
inductive research cycle, the data is collected first, after which theory is developed based
on the analysis of the gained data. As such, the inductive approach tends to the
development of theory, rather than its testing.
The deductive and inductive approaches are also recognized in the empirical cycle of
research activities in the area of fundamental research as provided by ‘t Hart, et al.
(1998). In this cycle, represented on the left-hand side of Figure 5, the activities of observation, induction, deduction, testing of theses, and data evaluation are executed in succession. The succession of inductive by deductive reasoning marks the distinction between exploratory and explanatory research.
In addition, ‘t Hart, et al. (1998) provide a regulative cycle of research activity for research
that is relatively practice-oriented, as illustrated on the right-hand side of Figure 5. This research approach follows the sequence of diagnosis, solution design, solution
implementation, and result evaluation. With the development of a design to research the
resolving of a problem, the approach can be classified under the design science (Hevner,
et al. 2004) line of research. The development of a design also means that the regulative
cycle is inclined towards inductiveness.
This research incorporates the empirical cycle of fundamental research and generally
builds on a deductive line of reasoning. The practical interpretation hereof is developed in
Section 1.6.3.
Figure 5: The empirical and regulative cycles of research activities ('t Hart, et al. 1998)
1.6.2 Research operationalization
Alongside the adopted attitudes towards research, a research design in the line of the
research onion consists of four more layers to make it operational. The layers are
respectively the strategy, research choices, time horizons, and data collection and
analysis. Resembling the structure of the previous section, each of the layers is addressed
next. However, in contrast to that section, the implications of the choices to be made are
not included with the descriptions, but are presented in Section 1.6.3 as the overview of
the research design.
Research strategy
The research strategy adopted states by which means the researcher is going to conduct
the study. Broadly speaking, the available strategies all represent some form of either an
experiment, survey, case study, action research, grounded theory, ethnography, or
archival research. Saunders, et al. describe the different strategies as follows:
Experiment: “involves the definition of a theoretical hypothesis; the selection of samples of individuals from known populations; the allocation of samples to different experimental conditions; the introduction of planned change on one or more of the variables; and measurement on a small number of variables and control of other variables.”

Survey: “involves the structured collection of data from a sizeable population.”

Case study: “involves the empirical investigation of a particular contemporary phenomenon within its real-life context, using multiple sources of evidence.”

Action research: “concerned with the management of a change and involving close collaboration between practitioners and researchers.”

Grounded theory: “in which theory is developed from data generated by a series of observations or interviews principally involving an inductive approach.”

Ethnography: “focuses upon describing and interpreting the social world through first-hand field study.”

Archival research: “analyses administrative records and documents as principal source of data because they are products of day-to-day activities.”
An overview of the research methods and the situations to which they fit is provided in
Table 1. In this table, case study research, action research, and ethnography are
combined into the concept of field research (Saunders, et al. 2006; Cornford and
Smithson 2006; Remenyi, et al. 1998).
Research choices
When choosing a research strategy, the researcher is not left to using only a single
approach. Although a mono-method choice is available and might be the suitable
alternative, there is also the opportunity of applying mixed methods or multi-methods.
In the case of multi-methods, the researcher creates a union of either qualitative or
quantitative methods. As such, the basis of the different data gatherings is equal and the
analysis can be combined. When applying mixed methods, a crossover of qualitative and
quantitative research strategies is constructed. To unite the data, the researcher has to convert qualitative data into a quantitative form, or vice versa.
The advantages of using more than one type of method are twofold; first, the researcher
has the opportunity of tuning the method to the purpose in case the research has several
heterogeneous purposes. Second, the application of multiple methods enhances the level
of triangulation, and thus the validity of the research.

Table 1: The application of the main research strategies ('t Hart, et al. 1998)

Strategy          | What?                                | Where?                             | Who?                                   | How?
Experiment        | Behaviour (current)                  | Laboratory, field                  | Groups of testees (relatively small)   | E.g. observation, tests
Survey            | Opinions                             | Desk, field                        | Groups of respondents (relatively big) | Question methods, observation
Field research    | Behaviour and opinions               | Field                              | Several groups, or one group           | Various, triangulation
Archival research | Reflection of behaviour and opinions | Desk, library, archives, databases | No limits                              | Various non-reactive measurements or secondary analysis
Time horizons
Depending on the purpose of the research, the time horizons under review require a
different approach. The two main ways to deal with this time horizon are the cross-sectional and longitudinal observation. Of these, the cross-sectional approach retrieves
data at a given moment in time, providing a snapshot of the studied situation as-is. In
contrast, the longitudinal approach collects data in multiple instances over time. The
availability of these multiple data points enables the researcher to analyse change and
development among that which is studied.
Data collection and data analysis
The practices to apply in the processes of data collection and data analysis are dependent on the chosen research strategy. The selected ones are described in Chapter 4 on the
method of research applied in the empirical part of this study.
1.6.3 Overview of research design
In pursuit of the differences of perception towards benefits and costs, the research strives
for increased understanding of the problem area in order to lead future developments. As
a result, the goal of the research is not developing a new evaluation technique, but rather
adding knowledge, possibly leading to guidelines. Therefore, ‘t Hart, et al.’s empirical
cycle of fundamental research activities is adopted (1998).
Taking a deductive approach, the research sets off with the observation activity by
creating an elaborate review of existing literature on the evaluation of information
system economics. This overview serves the purpose of explaining the difficulties of the
evaluation of information system economics, and putting these difficulties in perspective.
Following that, a theory is developed towards discovery of the underlying causes which
enable the diversity. Based on this theory, propositions and hypotheses are drawn up
which are to be examined in the testing activity.
The testing activity is performed to create insights into the progress of evaluation practice
in the field, i.e. the behaviour of organizations when evaluating the economic aspects of
information systems. In addition, forming the centre of the developed theory, the
differences in perceptions of the observations and objectivity of benefits and costs are
investigated. Being aimed at both behaviour and opinions, the testing activity is therefore
devised by field research. As the research pursues a comprehensive view of the possible directions of progress available and the large range of perspectives, preference is given to many small investigations, rather than several extensive ones. This approach is made
operational by interviewing 32 information system or project portfolio managers.
The detailed techniques that are used in the process of data collection and analysis, including the questionnaire design and data acquisition, are elaborated upon in Chapter 4. A synopsis of the general research design is illustrated in Figure 6.

Figure 6: Overview of the research design
1.7 Summary and conclusions
To cover the evaluation of information system economics as well as the causes of change
in these economics, this research consists of four elements; first, the information systems
which are evaluated. Second, the value created by these systems under evaluation, which
is divided into benefits and costs. Third, the evaluation practices used to evaluate the
benefits and costs. Fourth, the evaluators’ perceptions regarding the assessed economics
and the performance in evaluations in relationship to their objectivity. It is proposed that
the former perceptions differ between benefits and costs, and that the latter perceptions
will fluctuate.
The research is developed as follows. In Chapter 2, a foundation is provided by means of a
literature review exploring the content of the evaluations. This content is divided
between the information systems under evaluation, their problematic value creation, and
the difficulty this provides to the assessment of both benefits and costs. Then, in Chapter
3, a possible explanation for these difficulties is built up by the development of a theory
based on the perceived objectivity differences between benefits and costs. In order to
gain data to test the theoretical expected propositions and hypotheses, a data acquisition
process is executed by means of 32 interviews. The accompanying process is dealt with in
Chapter 4. Next, Chapter 5 handles the analysis of this data and reflects on the propositions and hypotheses. The research is rounded off in Chapter 6 with a discussion
of the conclusions, recommendations, limitations and possibilities for future research
stemming from the findings.
2 Literature on evaluating information system economics
2.1 Introduction
This chapter presents a body of knowledge identifying and defining the concepts involved
when evaluating information system economics. The purpose of this overview is to create
an understanding of the concepts which are to be studied in this research. In the course
of synthesizing the available literature, the difficulties that organizations have to
overcome in the process of evaluating the economic consequences of information
systems surface. Subsequently, in the next chapter, a theory is developed providing a
possible explanation for these difficulties.
The starting point of the description is the information systems under investigation. Using
a systems approach (Churchman 1971), their boundaries are described in Section 2.2.
Next, in Section 2.3, the economic foundation of the systems, as introduced in the
previous chapter, is explained, first, by assessing the concept of value and its delivery by
information systems, and second, by addressing the concepts of information system
benefits and costs. These explanations are provided in Section 2.4 and 2.5. As a final
notion, information system evaluation methods are addressed in Section 2.6. Given the
purpose of this chapter, each section concludes by considering the consequences of the
depicted literature regarding information system evaluation. These sections culminate
in the overall findings and conclusions of this chapter, as presented in Section 2.7.
2.2 Information systems
Arguably, the artifact ‘information system’ continues to be under-theorized and taken for
granted throughout the field of information system research. As a result, the ability of researchers to understand many of the implications information systems have becomes restricted (Orlikowski and Iacono 2001). Further problems with not conceptualizing information systems, or not being able to do so, can arise in practice. It is not unlikely that particularly problematic areas are those that depend on the differentiation between
systems, such as portfolio management and service level management. To manage and
research information systems, it is therefore important to define the concept of an information system and to state where its boundaries with the environment are considered to lie.
Several approaches can be adopted in an attempt to cope with information systems and
their boundaries (Avgerou 2002). On the one hand, for instance, actor-network theory
(Latour 1987; Callon 1999; Law 1997) would treat information systems as social entities
that interact as any other entity with their surroundings. On the other hand, a systems
theory-based approach (Churchman 1971) would observe an information system in itself,
affected by its environment. In this research, the latter view on information systems is
used to create understanding of these boundaries due to the perspective’s two major
benefits; these are, first, it provides a way to cope with the interdisciplinary nature of the
concept (De Leeuw 1990). In particular, it enables the possibility of looking at information
systems which are connected to, but distinguishable from, their environment. This
ordering and systemizing frame theoretically enables the identification and management
of individual information systems. As this research focuses on the evaluation of
information system economics, and not on the benefits and costs themselves, this is seen
as an advantage. Second, systems theory offers support in fulfilling the conditions needed
for effective control of the system, including changes (De Leeuw 1990). Doing so might
help in creating a solid weighing of benefits and costs.
The basic framework taken from systems theory is presented in the next section. The
three dimensions of the framework, being functional, analytical, and temporal, are
discussed subsequently in Sections 2.2.2, 2.2.3, and 2.2.4. Finally, the conclusions
regarding information system evaluation are drawn up in Section 2.2.5.
2.2.1 Systems theory
From a systems theory perspective any system can be defined as “a collection of …
elements connected in such a way that no (groups of) elements are isolated from other
elements” (De Leeuw 1990). If such a system is connected to objects in its environment,
as is the case with any information system in an organization, it is an open or dynamic
system. Each dynamic system “can be abstracted to a real system and an information
system which determines the behaviour of the real system” (Brussaard and Tas 1980).
That is, the information system deliberately influences the real system and with that
controls its behaviour. The information system sends information to and receives
information from its environment as well as from the real system. The real system on the
other hand receives input from and processes output to the environment in the form of
materials or energy (Looijen 2004). This information paradigm is illustrated in Figure 7.

Figure 7: The information paradigm (as illustrated by Looijen 2004)
The paradigm shows that the environment from which inputs are derived and to which
outputs are delivered, is (part of) a higher-level information system itself, as any information system is both a sub- and super-system of a real system; i.e. recursion exists.
Therefore, setting the boundaries is also a task of defining the level of abstraction.
“Should we wish to distinguish between systems, we would do better to refer to sets of
representative attributes than to a system as a whole” (Ahituv and Neumann 1987). These
attributes are defined by De Leeuw to arise in three dimensions: objects, aspects, and
phases. The objects of an information system can be seen as the functionalities it
performs, essentially identifying different states of a controlled system (Brussaard and
Tas 1980). Aspects can be regarded as comprising the physical elements of an information
system; thus its analytical components. The phases determine the temporal part of
information systems and cover the change over time. A graphical representation of how
the different dimensions determine an individual information system is shown in Figure 8.
With the objective of explaining the difficulties in differentiating between individual
information systems in mind, each of the dimensions is elaborated upon in the next
sections.
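For concreteness, the three dimensions can be pictured as a simple data structure; the following is a hedged sketch with hypothetical attribute names and example values, not a model taken from De Leeuw (1990) or De Looff (1997).

```python
# Hedged sketch: an information system described along De Leeuw's three
# dimensions. All attribute names and example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class InformationSystem:
    functional: list[str] = field(default_factory=list)  # objects: functionalities served
    analytical: list[str] = field(default_factory=list)  # aspects: physical components
    temporal: list[str] = field(default_factory=list)    # phases: change over time

invoicing = InformationSystem(
    functional=["order registration", "invoice generation"],
    analytical=["application software", "hardware", "data sets", "people and procedures"],
    temporal=["identification", "justification", "realization", "exploitation"],
)
# Distinguishing two systems then means comparing representative attributes
# rather than the systems 'as a whole' (Ahituv and Neumann 1987).
```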
2.2.2 Functional dimension
“Information systems exist to serve, help or support people taking action in the real world,
and it is a fundamental proposition that in order to conceptualize, and so create, a system
which serves, it is first necessary to conceptualize that which is served, since the way the
latter is thought of will dictate what would be necessary to serve or support it” (Checkland
and Holwell 1998). In this research, ‘that which is served’ is the organization in whatever
level of abstraction chosen (as the organization itself is also a system, it is thus also
redundant).

Figure 8: Dimensions of the information systems function (De Looff 1997)

In the literature an information system’s functions to the organization are,
for instance, described as “to determine user needs, to select pertinent data from the
infinite variety available from an organization’s environments (internal and external), to
create information by applying the appropriate tools to the data selected, and to
communicate the generated information to the user” (Nichols 1969 in Galliers 1987), and
it “exist[s] to generate, record, manipulate and communicate data necessary for the
operational and planning activities which have to be carried out if the organization is to
accomplish its objectives” (Davis 1982 in Galliers 1987).
Organizations themselves can be defined as “social entities that are goal-directed, are
designed as deliberately structured and coordinated activity systems, and are linked to the
external environment” (Daft 2007). Underlying the social aspect, Daft (2007) argues that
perhaps the most important aspect when considering an organization is its people and
the relationships between them. As such, an information system is always shaped by the
interests, values, and assumptions of the people who design, construct, and use it
(Orlikowski and Iacono 2001). Therefore, in order to set the boundaries of an information
system from a functional point of view, a characterization of organizational activity and
the functionality information systems offer to these activities becomes essential.
This depiction itself unfortunately suffers from comparable difficulties, as the concept of an organization is not limited to the boundaries of, for instance, a single business. For example, not all designers, constructors, and users of an information system will be considered part of the same business. Putting it in general terms, however, three organizational levels can be
distinguished; these are the micro-, meso-, and macro-level (Bots and Sol 1988). Many
different theories can be used to fill in each of the perspectives. A few are listed next to indicate the diversity of possible typologies.
First, on a micro-level, information systems can be separated by the functionality they
perform for the activities of users within business processes. These activities can be any
task performed or supported by an information system that takes place in an
organization. Overall the activities are part of the business processes, which are
recognized on the next level.
One way to classify information systems on this second level, the meso-level, is according
to the organizational characteristics, such as divisions and functions, business processes
or goals. De Looff (1997), for instance, derives a functional classification from Porter’s
primary and secondary business functions: “Primary activities are those involved in the
physical creation of the product, its marketing and delivery to buyers, and its support and
servicing after sale. Secondary activities provide the inputs and infrastructure that allow
the primary activities to take place. … The information systems that support these functions can be classified accordingly. Systems supporting the [primary] functions can be
divided into systems that actually execute primary information processes, and systems
that support physical or information primary processes” (De Looff 1997). Taking a more
abstract perspective, information systems can be characterized by the strategic level they
serve (for instance Watson, Pitt, and Kavan 1998; Looijen 2004; Love and Irani 2004). On a strategic level, an information system supports top-level management in sustaining and improving the competitive nature of the organization; tactically, an information system supports mid-level management in implementing the top-level strategy at a functional level; and operationally, an information system supports lower-level management in day-to-day work (Anthony 1965).
Third, at a macro-level, a typology of different types of organizations in which information
systems are used can be applied, for instance the typology of Starreveld et al. (1989).
Central in their typology is the distinction by type, based on the nature of the value chain
and the possibility of internal control on profit accountability, and order, based on the
ability to relate the internal control to the incoming and outgoing resources. The eight
main classes are trade, industry, agriculture, service industry, insurance, banking,
government and associations.
2.2.3 Analytical dimension
In general, the analytical dimension refers to the components which can be distinguished
when separating a whole into its elemental parts. For information systems an often
adopted, and perhaps traditional, component categorization is that of hardware and
associated software, people and associated procedures and data sets (for instance Davis
and Olson 1985). Each of these is briefly addressed next.
First, hardware is regarded as “the physical equipment used for input, processing, and
output activities in an IS. It consists of the computer processing unit; various input, output,
and storage devices; and physical media to link these devices together.” The
accompanying software consists of the detailed preprogrammed instructions that control
and coordinate these computer hardware components (Laudon and Laudon 2001).
Software itself can be divided in support (system) software and application software. The
latter “provides the business services,” whereas the former concerns operating systems
and utilities used for system development (Sommerville 2007). Second, people are all
those involved with the information systems. These include, but are not limited to, the
system users and owners, development and operations managers, system analysts and
designers, and computer programmers. Procedures are the methods by which these
people work. Third, data sets are “the streams of raw facts representing events occurring
in organizations or the physical environment before they have been organized and
arranged into a form that people can understand and use” (Laudon and Laudon 2001).
The components that fit into these categories do not always have a one-to-one
relationship to an information system. “A piece of hardware can, for example, be used by
several information systems, and an information system can use several classes of data
that are also used or created by other information systems” (Brussaard, cited in De Looff 1997). Given the redundancy described in the information paradigm, this can be seen to hold for any combination (of elements) of the three dimensions.
2.2.4 Temporal dimension
The temporal dimension adds the possibility of change to information systems.
Information systems are built and maintained in a system’s life cycle (Land 1992; Swinkels
1999). During this life cycle information systems “evolve in response to new problems and
environmental changes” (Butler and Gray 2006). Several life cycle models exist, many
dominated by technology. As ‘that which is served’ here is the organization, and the economic consequences of information systems are the central theme of this research, a life cycle based on information economics is considered here. In this life cycle model, the way technology is applied is determined by management decisions regarding benefits and costs (Swinkels 1999). The life cycle framework defines five major life cycle activities: identification, justification, realization, exploitation and evaluation (Figure 9). The activities are specified as follows (Nijland and Berghout 2002, unless indicated otherwise).
Initially, the identification of a new or adapted information system takes place. This activity is “concerned with discovering all investment proposals where IT can benefit the organization.” Next, the rationale behind the selected system is assessed in order to test its applicability and feasibility. This “justification is used to justify IT investment proposals and to weigh the different proposals against each other, so that limited organization resources can be distributed among the most attractive proposals.” Once a proposal has been chosen, an organization will attempt to fulfill the goals set.
“During the realization of the investment, the aims and goals that were set during
justification should be achieved against minimal costs. The investment proposal is
implemented during an IT project. This not only implies a technical implementation of the
system, but also the integration of the system in the business process.” After realization,
the system transfers to the exploitation phase in which it is used, managed and
maintained. “The primary target of the exploitation activity is to optimize the support of existing systems to the business process. … During exploitation the business reaps the benefits of the system” (Klompé 2003). At this stage systems are regarded as functioning and are therefore referred to as operational information systems (Klompé 2003).

Figure 9: Information economics life cycle (after Nijland and Berghout 2002); the cycle of identification, justification, realization and exploitation, with evaluation running alongside.
Throughout the life cycle, evaluation can take place “by comparing the outcomes to the
set goals.” “By evaluating the whole investment cycle and its outcomes, improvements
can be implemented in the IT investment process.” In contrast to the first four activities, which can be placed in series, evaluation runs in parallel to the entire life cycle of an information system.
As noted, information systems change and evolve during their life cycle (Butler and Gray 2006). This evolution encompasses how and why an information system may change and/or maintain its stability over time (Van der Blonk 2002). The evolution of an information system can be captured as a process of carefully constructing, maintaining, and altering an order (Van der Blonk 2002).
The evolution can have two origins, both stemming from the environment of the system;
they are either adaptive or perfective (Bjerknes, Bratteteig, and Espeseth 1991). Adaptive
evolution is caused by changed requirements imposed on the organization from the
outside, or by a new or changed technical environment. Perfective evolution is caused by
new or changed requirements from the organization (Bjerknes, et al. 1991). The former includes issues such as changes in the nature of doing business or legislative changes; the latter can, for instance, occur due to users’ increasing experience with the system (Olle, et al. 1991).
However, not only do systems evolve, there is also a need to correct errors (Olle, et al. 1991). These corrections to the system are not considered evolutionary since they stem from previous enhancements; they are normally considered corrective maintenance (Bjerknes, et al. 1991).
2.2.5 Conclusions regarding information systems and evaluation
The large variation within the three dimensions described underlines the many different definitions possible for addressing information systems. It could, for instance, be argued that in its most abstract appearance the entire world can be defined as an information system. For any abstraction to be meaningful it must form a coherent aggregation of all three dimensions. A suitable level of such an abstraction for the evaluation of information systems, both in research and practice, would enable the identification of information systems on a recognizable and mutually comparable level while maintaining practicality. Depending on the purpose of defining information
systems, this level changes. Evaluations supporting asset management might for instance
require a different definition than those supporting life cycle management, portfolio
management or an outsourcing contract. In regard to the evaluation of information
system economics this means that boundary definitions need to be created which result
in uniform, but sufficiently complete, descriptions of the economic aspects. Only such
definitions enable meaningful comparisons of benefits and costs within and among
information systems.
2.3 Value creation
In Section 1.2, the concept of value as provided by information systems is defined as the sum of the positive and negative effects they cause, where these effects can be either financial or non-financial. In addition, the process of value creation was described
using the IT business value creation model by Soh and Markus (1995). This model
concludes with the issue that bottom line results can only be achieved if information
technology impacts are not negatively influenced by the organization’s competitive
situation. Unfortunately, the model contains little specificity in defining concepts such as
the conversion process and IT impacts. Therefore, although generically useful, the level of
abstraction of the process model of Soh and Markus is too high to identify the impact of changes in information management activities on the economics of information systems.
In an attempt to lower the level of abstraction, the Vision-to-value vector model of Tiernan and Peppard (2004), as portrayed in Figure 10, can be used. The focus of this model is on the supply of information systems and their information handling services, which in turn could create value. It describes the link between the provision of information systems and the value they can create, mainly focusing on the conversion process of Soh and Markus. In the model two loops can be identified: the project loop and the service loop. In the first loop, a new use for the deployment of an information system within a business, what they call a vision, is developed after its business case is approved.
Figure 10: A vision to value model (Tiernan and Peppard 2004)
The latter is a document describing the costs of resources and the business benefits. This development will take place in a project in which the new use is realized and necessary
business changes are implemented. The realized system can then deliver new information
handling services to the business, offering potential value. Whether or not value is
actually created depends on the use of the system. Tiernan and Peppard even state that
“value does not arise in any other way” (Tiernan and Peppard 2004). The second loop
consists of the application of the resources of the information system organization. These
resources need to be divided between all activities which take place in both service and
project management. Overall, it is the interrelationship between business benefits,
projects, services and resources which is the vision-to-value vector.
The vision-to-value vector model does not explain possible flaws in creating information
system value the way Soh and Markus do. However, it does give insights into the
conversion process and, to a certain extent, also the use process which were identified in
the process model. Therefore, the competitive process remains to be explored.
The situation in the competitive process is reflected by the level of competitive advantage
an organization has or is potentially provided by its information systems. Although a clear
definition of competitive advantage is lacking (Rumelt 2003), generally it can be regarded
as “a favourable competitive position in an industry” (Porter 1985). This advantage can be
temporary or sustained, depending on how long it will last (Barney 2007). Barney links the advantages to the performance of an organization by stating that firms with a relatively better competitive position will generate greater-than-expected value by utilizing their resources. This “positive difference between expected value and actual value is known as an economic profit or an economic rent” (Barney 2007). It is the difference between what a production factor is paid and what is needed to keep it in its current use, that is, its transfer earnings.
A competitive advantage, however, may or may not be observed in (organizational)
performance measures. Ray, Barney, and Muhanna (2004) argue that, “a firm may excel
in some of its business processes, be only average in others, and be below average in still
others. A firm’s overall performance depends on, among other things, the net effect of
these business processes on a firm’s position in the market.” Further, when viewing the
firm as a “nexus of contracts ... the existence of rent in the nexus must be defined
separately from who appropriates the rent. That is, nexus rent is the sum of all the rent in
the nexus regardless of which stakeholders appropriate it.” “[R]ent may be distributed
anywhere in the nexus. The nexus can still have strategic capabilities that other firms lack
even if rent is appropriated by nodes other than the shareholders.” “[T]he firm will have
valuable inimitable capabilities and will generate rent. However, inside stakeholders may
appropriate the rent so it is not apparent in performance measures” (Coff 1999). Whether
an organization will be able to obtain competitive advantage and an increased
organizational performance by its information systems is thus out of the hands of the
information system organization. The information system organization can only provide
value which facilitates opportunities that could lead to added business performance.
Research on understanding the sources of sustained competitive advantage for firms can
be structured in a framework which “suggests that firms obtain sustained competitive
advantages by implementing strategies that exploit their internal strengths, through
responding to environmental opportunities, while neutralizing external threats and
avoiding internal weaknesses” (Barney 1991). Additionally, the major theories on
explaining competitive advantage can be structured further by adding the temporality
aspects taken into account by the models (Figure 11). In the next sections, to provide a
comprehensive view on value creation, the internal analysis is described using the
resource-based view of the firm and the theory of dynamic capabilities. Subsequently, the
external analysis is explained using Porter’s five forces and generic strategies.
2.3.1 Resource-based view
The resource-based view of the firm (Wernerfelt 1984) “argues that firms possess
resources, a subset of which enables them to achieve competitive advantage, and a
further subset which leads to superior long-term performance” (Wade and Hulland 2004).
The view “assumes that firms within an industry (or group) may be heterogeneous with
respect to the strategic resources they control” and “that these resources may not be
perfectly mobile across firms, and thus heterogeneity can be long lasting” (Barney 1991).
Resources include all “assets and capabilities that are available and useful in detecting
and responding to market opportunities or threats” (Sanchez, Heene, and Thomas 1996;
Wade and Hulland 2004). Assets are defined as “anything tangible or intangible the firm
can use in its processes for creating, producing, and/or offering its products to a market,”
and capabilities are “repeatable patterns of action in the use of assets to create, produce,
and/or offer products to a market” (Sanchez, et al. 1996). Assets are thus the inputs or
outputs of a process, and capabilities are the skills and processes with which a firm transforms inputs into outputs (Wade and Hulland 2004). In order to deliver sustainable competitive advantage to a firm, resources have to be valuable, rare, imperfectly imitable, and non-substitutable (Barney 1991).

Figure 11: Strategic model overview (after Powell and Thomas 2009); temporality (static to dynamic) is plotted against the unit of analysis (internal to external), positioning the resource-based view, dynamic capabilities, and the market-based view.
Building on, or extending, the resource-based view, the knowledge-based view suggests
that knowledge-based resources are the most significant resources of a firm (Kogut and
Zander 1992; Grant 1996).
2.3.2 Dynamic capabilities
In contrast to the resource-based view, the theory of dynamic capabilities attempts to
explain the positions of competitive (dis)advantages arising in markets which are
(continuously and possibly rapidly) changing. The view does so by investigating “firm-specific capabilities that can be sources of advantage, and [explaining] how combinations
of competences and resources can be developed, deployed and protected” (Teece, Pisano,
and Shuen 1997). These dynamic capabilities can be defined as “[t]he firm's processes
that use resources − specifically the processes to integrate, reconfigure, gain and release
resources − to match and even create market change. Dynamic capabilities thus are the
organizational and strategic routines by which firms achieve new resource configurations
as markets emerge, collide, split, evolve, and die” (Eisenhardt and Martin 2000). In relatively stable
markets, the capabilities are seen to be “complicated, detailed, analytic processes,”
whereas in more dynamic markets the capabilities tend to be “simple, experiential,
unstable processes” focused on being adaptive (Eisenhardt and Martin 2000). Bhatt and
Grover (2005) identify the IT infrastructure, IT business experience, and relationship
infrastructure as specific information system related dynamic capabilities.
Eisenhardt and Martin (2000) state that similarities between firms can be found in the
deployment of capabilities. Therefore, the capabilities are not the source of long-term
competitive advantage themselves, rather this “lies in the resource configurations that
managers build using dynamic capabilities” (Eisenhardt and Martin 2000).
2.3.3 Market-based view
The market-based view approaches competition from a point of view in which organizations erect market barriers in order to succeed in their line of business. In his theory of competitive strategy, Porter (1985) distinguishes five forces which organizations can arm themselves with or defend against; these are the bargaining powers of customers and suppliers,
the threats of new entrants and substitute products, and the competitive rivalry within an
industry. Together the forces determine the attractiveness of a market. Models adopting
this view assume that organizations have the same strategically relevant resources at their
disposal and are able to take up identical strategies. If an organization is able to create a
competitive advantage this is believed to be temporary at most, as “the resources that
firms use to implement their strategies are highly mobile” (Barney 1991).
In order to obtain a competitive advantage, however temporary, an organization has to
outperform others in the activities of their value chain. Organizations try to create such a
situation by means of their strategy. Generic strategies organizations can apply are those
of lowering costs (also known as cost leadership), enhancing differentiation, and changing the competitive scope (Porter and Millar 1985). In the first, an organization tries to create a favourable competitive position by producing at the lowest possible cost. The second strategy is aimed at outperforming competitors in fulfilling customer needs. Organizations applying the third strategy try to transcend their (potential) rivals by changing the rules of the market to their advantage.
Similarly, Treacy and Wiersema (1993) distinguish operational excellence, customer
intimacy, and product leadership. The first two resemble the strategies of cost leadership
and enhancing differentiation respectively. The latter could be seen to bear some
resemblance to changing the competitive scope, but is more like a special case of enhancing differentiation. In contrast to that strategy, however, product leadership is driven by an organizational product innovation push rather than by customer pull.
2.3.4 Conclusions regarding value and evaluation
For an information system organization which tries to justify itself and manage the value its information systems provide to the organization, three possible areas of improvement can be identified: first, improving the use of information handling services by the business organization; second, improving the information handling services themselves; and third, improving information system service management activities. As the three areas are heavily intertwined, changes in one area will often require changes in another area to succeed, or will cause such changes (beneficial or not). Whether an organization will be able to obtain competitive advantage and increased organizational performance from its information systems is seen to lie outside the scope of the information system organization. The latter can only provide value which facilitates opportunities that could lead to added business performance.
For the organization evaluating the complex process of value creation, this means that it has to actively focus on all contributing elements, positive or negative, if it wishes to make such value visible. In addition, the impact has to be viewed all the way through the value creation process. The importance of the competitive process in the act of creating actual value, or not, puts connections with the organization’s strategy in prime position within the evaluation.
2.4 Benefits
In Chapter 1, the benefits of information systems are classified as all possible positive
consequences. Additionally, as is seen in the previous section, their realization is
problematic at best. However, when aiming to evaluate benefits, organizations can be
guided by predetermined benefit classifications.
At a high level of abstraction, information systems are seen to positively contribute to the
organization in three ways; these are (1) by facilitating activities that could not be done
before, (2) by improving the activities that could already be done, and (3) by enabling the
organization to cease activities that are no longer needed (Ward and Daniel 2006). These
generic benefits can be seen to occur in several areas, i.e. benefit categories. Next, four
examples of benefit taxonomies which are found to be representative are listed to
illustrate the differences and similarities among them.
First, Farbey, Land, and Targett (1995) suggest a benefit taxonomy based on the type of information system that delivers them. Accordingly, benefits can be business transformational, strategic, interorganizational, infrastructural, management informational or decision supportive, direct value adding, automatic, or due to mandatory changes. From a different viewpoint, Kusters and Renkema (1996) put the key benefits
into the categories of efficiency gains, effectiveness gains, organizational transformation,
technological necessity and/or flexibility, compliance to external necessities, and wider
human and organizational impacts. Third, Ward, Taylor and Bond (1996) researched the evaluation and realization of information system related benefits and compiled a list of perceived benefits comprising cost reduction, management information, process efficiency, enabling change, competitive advantage, business necessity, communications, service
quality, no benefits, and other. Finally, reviewing enterprise information systems, Shang
and Seddon (2002) identify a list of five benefit factors; these are operational,
managerial, strategic, IT infrastructure and organizational.
Within the taxonomies it is seen that the emphasis in information system research on benefit assessment lies heavily on non-financial aspects (Love, et al. 2005). To cope with this, several quantification approaches to handle the intangibles are offered in the literature. In an early attempt, Kleijnen (1984), for
instance, presents a mathematical transaction-data creation-decision-reaction approach.
More recently, Anandarajan and Wen (1999) built an approach with the notions of
opportunity cost and expected value from probability theory and extended this with
sensitivity analysis. Hubbard (2010) essentially does the same.
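By way of illustration, the following sketch applies probability-weighted benefit scenarios and a simple sensitivity analysis in the spirit of Anandarajan and Wen (1999); all probabilities and values are invented for illustration and are not taken from the cited authors.

# A minimal sketch of quantifying an intangible benefit via expected value,
# with a simple sensitivity analysis; all figures are illustrative.
scenarios = [(0.6, 120_000.0), (0.3, 40_000.0), (0.1, -20_000.0)]  # (probability, outcome)

def expected_value(outcomes):
    """Probability-weighted sum of scenario outcomes."""
    return sum(p * v for p, v in outcomes)

print(expected_value(scenarios))  # 82000.0

# Sensitivity analysis: shift probability mass from the best to the worst case.
for shift in (0.0, 0.05, 0.10):
    shifted = [(scenarios[0][0] - shift, scenarios[0][1]),
               scenarios[1],
               (scenarios[2][0] + shift, scenarios[2][1])]
    print(shift, expected_value(shifted))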
Notwithstanding these conversion approaches, the extent to which non-financial aspects
are represented in benefits causes problems with their measurement, allocation, and
management. The lack of a workable definition for setting the boundaries of information
systems (Section 2.2) leaves the allocation of benefits to information systems seemingly
impossible. The intangibility of benefits confirms this impression, as does the notorious IT
productivity paradox (Brynjolfsson 1993; Hitt and Brynjolfsson 1996; Brynjolfsson and Hitt
1998). This paradox originates from the inconclusive findings on the link between
information system and organizational productivity or performance and is even claimed
to emanate from a “lack of good quantitative measures for the output and value created
by IT” (Brynjolfsson 1993). It should be noted, though, that earlier findings do indicate that
“creating and maintaining realistic expectations of future system benefits really does
matter” (Staples, Wong, and Seddon 2002).
Further, as was seen in the previous section on value, benefits can only be established if
the organization uses the functionalities offered by its information systems (Tiernan and
Peppard 2004). In addition, the evaluation of information systems suffers from the consequences of the troublesome relationship between information system departments and the business (Ward and Peppard 1996; Peppard and Ward 1999). Therefore, given the
necessity of information system use with regard to benefits, benefit evaluation is likely to
experience trouble in this area. Apparent success factors in the areas of management,
open communications, transparency and recognition of diversity, as well as the
involvement of all stakeholders (Fink 2003) confirm this view.
A related problem is that project managers, and their teams, are commissioned to deliver a project, not the intended benefits, the vast majority of which are to be obtained during the exploitation phase of an information system’s economic life. Logically, they are far more focused on the project’s deliverables than on the positive consequences resulting from the project (Bennington and Baccarini 2004).
To guide the actual realization of benefits, Ward and Daniel (2006) state that organizations should embrace active benefit recognition through benefits management. They define this activity as “the process of organizing and managing such that the potential benefits arising from the use of information systems are actually realized” (Ward and Daniel 2006). The organizational level on which this is effective is seen to differ based on its purpose (Fink 2003); this is illustrated in Figure 12. As with
evaluation (Section 1.4), this purpose consists of both ex-ante and ex-post elements.

Figure 12: IT benefit management purpose and effectiveness (Fink 2003); purposes range from realize (“what needs to be done”) and retrofit (“what should have been done”) to review (“how well was it done”), with effectiveness running from operational to strategic.

The
actual level of effectiveness is determined by the organization’s benefit realization
capability (Ashurst, Doherty, and Peppard 2008). Ashurst, et al. distinguish four
capabilities related to the activity of benefit realization; these are planning, delivery,
review, and exploitation of benefits. Respectively, these capabilities cover identifying and planning benefits, proposing and executing organizational change (enabling benefits), assessing delivered benefits, and managing future ones. Unfortunately, the existence of
the four capabilities within firms is seen to be limited as “IT professionals still tend to
focus primarily on the delivery of a technical solution, on time, on budget and to
specification” (Ashurst, et al. 2008). However, benefit identification is “not a set of highly
defined, mutually exclusive, strictly sequential activities. Instead, it consists of loosely
defined, overlapping iterative ones” (Changchit, Joshi, and Lederer 1998) and might
therefore be hard to find. The same reasoning might hold for the other capabilities.
In addition to the previously described issues, information is pervasive within the
business and change will undoubtedly lead to indirect effects which might occur
throughout the value chain (Tallon, Kraemer, and Gurbaxani 2000). Identifying all direct
and some indirect effects is a challenge when trying to create an overview of benefits to
internally assess a portfolio of potential changes or investments. As boundaries fade after
implementation, the identification of the contribution of operational information systems
in the current business environment becomes even more problematic, making a
potentially fruitful externally focused benchmark of information system benefits even
more complicated. Active benefits recognition is needed to raise the assessments from
measuring inputs and outcomes to organizational improvement (Alshawi, Irani, and
Baldwin 2003).
2.4.1 Conclusions regarding benefits and evaluation
The benefits play a critical role in evaluating information systems, as they are the ultimate goal of having (or not having) such systems. In their assessment, the elusive nature of the benefits creates a situation in which evaluators might become uncomfortable due to the lack of a firm hold. This might be strengthened by the assessment of benefits necessarily being a combined action of both the business and the information system supplier. In this ensemble a peculiar situation arises, as the latter can fulfill merely a facilitating role while only the former can actually realize benefits. Then again, moving towards benefits realization management potentially leaves the organization with inconclusive information to weigh the pros against the cons. Building capabilities in both activities therefore seems to be a necessity in order to adequately learn how to evaluate benefits.
2.5 Costs
As defined in Section 1.2, the burdens of information systems, generally and here referred to as costs, comprise all negative consequences information systems induce.
These consequences can be related to a cost object; that is “anything for which a
separate measurement of costs is desired” (Horngren, Foster, and Datar 1994).
From a taxonomical viewpoint, these separate measurements can be characterized by three main categories: (i) fixed versus variable, (ii) direct as opposed to indirect, and (iii) initial versus ongoing costs. First, the distinction between fixed and variable
costs is focused on the change in total costs related to alterations in a cost driver, where a
cost driver is defined as any factor that affects costs (Horngren, et al. 1994). In the case of
a fixed cost, a change will not affect the total costs, whereas it will when dealing with
variable costs. Second, the difference between direct and indirect costs is directed at the
point of origin of the costs. A direct cost is “related to the cost object and can be traced to
it in an economically feasible way” (Horngren, et al. 1994); this is not possible for indirect
costs. Third, the division in initial and ongoing costs is a temporal one (Dier and Mooney
1994). For information systems, their initial costs arise during the phases of identification,
justification and realization. Ongoing costs on the other hand occur while the system is
exploited. Together the initial and ongoing costs form the total costs of a cost object
during its life cycle.
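In notation of my own, with q a cost driver, F the fixed costs and v the variable cost per unit of the driver, and with the life cycle phases as defined in Section 2.2.4, these distinctions can be summarized as:

\[
TC(q) = F + v \cdot q,
\qquad
TC_{\text{life cycle}} = \underbrace{C_{\text{identification}} + C_{\text{justification}} + C_{\text{realization}}}_{\text{initial costs}} + \underbrace{\sum\nolimits_{t} C_{\text{exploitation},\,t}}_{\text{ongoing costs}} .
\]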
To guide the assessment of costs, a further classification can be made into different types of costs. Within the three categories previously described, a wide range of specific information system cost types can be identified. Stacking all categories together from a total of eight taxonomies (Dier and Mooney 1994; Kusters and Renkema 1996; Remenyi, Michael, and Terry 1996; Anandarajan and Wen 1999; Irani and Love 2000; Ryan and Harrison 2000; Mohamed and Irani 2002; Smith, Schuff, and Louis 2002), Irani, Ghoneim, and Love (2006) arrive at a total of 57 specific types of costs in the field of information systems. An overview of these cost types is presented in Table 2 to illustrate their diversity.

Table 2: Overview of information system cost types (Irani, Ghoneim, and Love 2006)
Development; installation and configuration; staff related costs; training; management/staff resources; accommodation/travel; implementation; management time; general expenses; operations; cost of ownership, system support; tangible; maintenance; management effort and dedication; intangible; security; employee motivation; conversion; phasing out; employee time; data conversion; communication; personnel issues; environmental; hardware; software disposal; data preparation/collection; package software; productivity loss; displacement and disruption; custom software; strains on resources; evaluation; system software; business process re-engineering; futz; cabling/building; organizational restructuring; downtime; project management; implementation risks; integration; licenses; opportunity costs and risks; learning; support; hardware disposal; moral hazard; modification; data communication; knowledge reduction; upgrades; commissioning; employee redundancy; overheads; infrastructure; change management.
Powell (1992) states that “if the organization operates in any sort of decentralized or
devolved budgetary mode, there is a need for some mechanism to identify and allocate
costs.” To do so, an organization has to perform the activity of costing, i.e. the assessment
of the value attached to a cost type and the tracing and reporting of these values.
However, costing itself requires effort and thus causes costs; that is, the cost of costing. If an organization wants to increase, for instance, the accuracy, timeliness, or precision of its costing activity, these costs will go up; if it relaxes its requirements on these aspects, they can go down. This is illustrated by the cost of additional information in Figure 13.
When addressing the current state of research on information system costs, it is seen that, in general, a financial approach remains (e.g. Love, et al. 2005; Byrd, et al. 2006). This follows the definition of costs (see Section 1.2) in its financial component, but omits to assess the non-financial negative consequences. A notable exception is Love, Irani, Ghoneim, and Themistocleous (2006), who touch upon the subject by including, for instance, costs of redundancy and resistance in their framework on the incorporation of indirect cost factors. It is these, mostly ‘hidden’, costs which are often neglected. The costs that actually are assessed by organizations might be best served with scrutiny, as cost underestimations “are considered widespread practice” (Remenyi, et al. 2000) and even overestimations might occur in cases of an apparently abysmal overall budget.
Figure 13: Cost of additional information (Harrison 1998); the average and marginal value of additional information are plotted against its perfection, together with its cost, indicating an optimality point within a zone of cost effectiveness.
2.5.1 Conclusions regarding costs and evaluation
The surplus of different types of information system costs makes their evaluation problematic. On the one hand, administering as many different types as possible will provide the organization with many possibilities to create insight into its information system costs. In addition, it will enable solid ex-post evaluation and the establishment of high quality forecasts. On the other hand, the cost of costing will rise, hidden costs are likely to remain regardless, and beyond its own infrastructure and organization the information system department might have little real influence on a significant part of the costs. As in the case of benefit assessments, cost evaluation is thus also a combined action of the business and the information system supplier. The available categorizations and techniques do, however, provide organizations with a starting point when evaluating costs.
2.6 Evaluation methods
In the opening chapter, the concept of evaluation is described based on the definition
that it is an activity of “establishing by quantitative and/or qualitative means the worth of
IT to the organization” (Willcocks 1992). This section extends that overview by focusing
on the means available to evaluators to assess the benefits and costs of information
systems and to guide their weighing. Generally, these available practices can be
categorized into four classes (Renkema and Berghout 2005): traditional, ratio, multiple criteria, and portfolio methods. Each is discussed next.
First, the traditional methods are purely focused on the financial aspects of information
systems. A special role in this is granted to the resulting cash flows, which, from a
business economics viewpoint, are closely related to the concept of value. Often these in- and outflows are corrected for the time value of money. Of the methods
classifiable as being traditional, the Net Present Value, Return on Investment, Payback
Period, and Internal Rate of Return are the most obvious examples.
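As an illustration, these traditional methods reduce to simple formulas over the expected cash flows. In notation of my own, with CF_t the net cash flow in period t, r the discount rate and n the evaluation horizon:

\[
\mathrm{NPV} = \sum_{t=0}^{n} \frac{CF_t}{(1+r)^t},
\qquad
\mathrm{IRR} = r^{\ast} \ \text{such that} \ \sum_{t=0}^{n} \frac{CF_t}{(1+r^{\ast})^t} = 0,
\]

and the Payback Period is the smallest T for which the cumulative cash flow \(\sum_{t=0}^{T} CF_t\) turns non-negative.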
However, as seen earlier, especially the benefits of information systems tend to be non-financially focused; the question therefore arises why organizations would employ such
techniques. Reasons for this are seen to be fivefold; they are that the methods are well
known and understood, build on generally accepted accounting principles, often clearly
related to the business’ objectives, in favour with the financial manager (who is often an
important stakeholder), and alternatives might not be approved (Milis and Mercken 2004;
Irani, et al. 2006).
Next, resembling the financial methods, ratio methods weigh two (or more) aspects
against each other. Here, however, the weighing is not solely aimed at financial values, as
the methods also handle variables such as the total number of users or employees. The
ratio methods are most famously represented by the Return on Management
(Strassmann 1990).
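On a simplified reading of Strassmann (1990), and not a full reproduction of his procedure, the core ratio can be written as:

\[
\text{Return on Management} = \frac{\text{management value added}}{\text{costs of management}} .
\]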
Third, multiple criteria methods incorporate both quantitative and qualitative
measures. Doing so, they aim to provide the organization with balanced information using
various perspectives. The general directions of the methods are as follows (Renkema and
Berghout 2005). First, several categories of decision criteria are determined. Next, scores
are assigned to these criteria. Finally, after applying weights, an ultimate score is
calculated. These scores can then be compared among the initiatives. The best known
multiple criteria methods include the Balanced Scorecard (Kaplan and Norton 1992) and
Information Economics (Parker, et al. 1988).
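A minimal sketch of this generic scoring step is given below; the criteria, weights and scores are invented for illustration and do not stem from any of the cited methods.

# A minimal sketch of a multiple criteria scoring step; criteria, weights
# and scores are illustrative assumptions, not part of any cited method.
criteria_weights = {"financial": 0.40, "strategic": 0.35, "risk": 0.25}

def overall_score(scores, weights):
    """Weighted sum of criterion scores; weights are assumed to sum to 1."""
    return sum(weights[criterion] * scores[criterion] for criterion in weights)

initiative_a = {"financial": 7.0, "strategic": 5.0, "risk": 6.0}
initiative_b = {"financial": 4.0, "strategic": 9.0, "risk": 5.0}
print(overall_score(initiative_a, criteria_weights))  # 6.05
print(overall_score(initiative_b, criteria_weights))  # 6.0

The resulting scores can then be ranked across initiatives, although the outcome evidently depends on the chosen weights.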
Finally, in a portfolio method, information systems are mapped on a coordinate system,
which is composed of axes of considered (compound) decision-making criteria (Renkema
and Berghout 2005). These maps guide management in their decision making (Ward
1988). The classifications made reduce the “apparently infinite continuum of alternatives
to a manageable, pertinent number of discrete options from which high-level directions
can be determined” (Ward and Peppard 2002). The methods are particularly useful to
describe the current state, select feasible options for improvement, monitor changes, and
make decisions on the allocation of resources (Ward and Peppard 2002).
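The mapping step itself can be sketched as follows; the axes, threshold and example systems are illustrative assumptions and do not correspond to any particular portfolio method.

# A minimal sketch of placing information systems on a 2x2 portfolio map;
# axes, threshold and example systems are illustrative assumptions.
def quadrant(contribution, quality, threshold=5.0):
    """Classify one system into a quadrant of the map."""
    horizontal = "high contribution" if contribution >= threshold else "low contribution"
    vertical = "high quality" if quality >= threshold else "low quality"
    return horizontal + " / " + vertical

portfolio = {"ERP": (8.2, 4.1), "intranet": (3.0, 7.5)}
for name, (contribution, quality) in portfolio.items():
    print(name, "->", quadrant(contribution, quality))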
Various other benefits are associated with the use of portfolio approaches as well; these include improved business-strategy alignment, centralized control, cost reduction, and communications with business executives (Jeffery and Leliveld 2004). Jeffery and Leliveld find a positive relationship between performing portfolio analysis and return-on-assets, provided the method is applied at a high level of maturity (2004). There are, however, also some drawbacks to using portfolio techniques. The methods tend to be resource intensive and are, in effect, an excessive simplification of reality (Ward and Peppard 2002); hence they only support high-level decision making. In addition, determining the metrics to be used and establishing measurement processes can cause problems, as do a lack of skills and resources and insufficient business-IT alignment (Jeffery and Leliveld 2004).
A number of portfolio methods are available, among them are the McFarlan-matrix
(McFarlan 1981), the Bedell method (Bedell 1985), the Growth-Share matrix of the
Boston Consulting Group (1970), and the Importance-Performance maps as applied by
Skok, Kophamel, and Richardson (2001).
2.6.1 Conclusions regarding economics and evaluation methods
To guide their information system decision making, organizations have an extensive portfolio of evaluation methods at their disposal. In these methods, various approaches
are used to assign value to information systems. However, on the face of it, none of these
appear able to cope with the troubled relation between information systems and the
value they provide to organizations. In addition, as the methods are primarily based on a
quantitative, or quantifying, foundation, it could be that the differences between benefits
and costs are not dealt with accordingly. This aspect is discussed further in the next
chapter.
2.7 Summary and conclusions
In this chapter the concepts of the information system and its value provision, benefits
and costs are discussed. In addition, the methods available to evaluators to make
economic assessments of information systems are described. It is seen that the information systems managed cause benefits and costs through their introduction, improvement, or termination along the functional, analytical, and temporal dimensions. This possibly leads to increased business performance, depending on the competitive environment and the way the organization operates within it.
In assessing information system costs and benefits, it is possible to detect the presence of
differences between the two elements. On the one hand, it was seen that the emphasis in information system benefits research lies heavily on non-financial aspects, resulting in measurement problems. Research on costs, on the other hand, has a dominant financial orientation in information system literature and is closely connected to the field of accounting; little attention is paid to the non-financial negative consequences.
The evaluation of costs and benefits provides most value for organizations when the two
can be directly compared. Surveying the current state, however, the problems with
evaluation of the two are seen to be different. Given the accounting standards in place,
cost accounting for information systems seems to be converging towards a more standard practice with increasing objectivity or, at the very least, uniformity. Yet developments in information system benefits assessment appear to lag behind this line of work.
While the evaluation of cost struggles with issues such as addressing the right cost drivers
and determining acceptable levels of costs, the assessment of benefits has not similarly
progressed. Benefits and costs thus appear to have a different ‘denominator’, making
them apparently unsuitable to be combined in a single evaluation in their current form.
These differences are something the evaluation methods do not seem to cope with. On top of this, the vagueness in setting the boundaries of information systems for which to determine the benefits and costs creates an apparent practical impossibility to demarcate
the whole. In the next chapter, the understanding of the relationship between benefits
and costs is deepened by the development of a theory on why this effect might appear.
3 Theoretical perspectives and objectivity
3.1 Introduction
After first introducing the economic consequences of information systems for
organizations in the previous chapters, this chapter presents a theory on why the capacity
of evaluation might not be fully utilized. As this theory will explain organizational
behaviour in the use of evaluation, its foundation lies in the field of firm behaviour from
an economic point of view. In order to provide a toolbox for the theory, a brief overview
of this field is offered first.
In the field of firm behaviour, at least four lines of thought are available to researchers;
these are (i) the neoclassical theory of the firm, (ii) economics of information, (iii) new
institutional economics (NIE), and (iv) behavioural economics. It is acknowledged that these four (meta) approaches are neither exhaustive nor mutually exclusive; in fact, in some ways the theories are intellectual successors. However, it is assumed that, given the direction of this research and the underlying theories it employs, these four reflect the wide range of possibilities. In order to explain the choice of applied theories, broad outlines of each of these four are provided next. An
overview of the differences between the theories is provided in Table 3.
The neoclassical theory of the firm sees firms as an alternative coordination mechanism to markets. Like any theory of the firm, it aims to explain the existence of firms in a world where markets would result in a perfect distribution (Coase 1937). Further, it examines the (organizational) structure of firms as well as their behaviour within, and boundaries with, this market (Coase 1937). In true neoclassical terms, firm behaviour is explained by
preferences. In addition, it is assumed that firms strive for the maximization of their
profits, whereas individuals in the market aim for maximal utility. Finally, it is reasoned
that the actors possess all relevant information.
Table 3: Differences between theories on explaining the behaviour of organizations

            | Neoclassical theory of the firm | Economics of information | New institutional economics | Behavioural economics
Market      | Perfect markets exist | Not necessarily efficient | Not necessarily efficient | Not necessarily efficient
Information | Perfect | Imperfect, costly | Imperfect, costly | Imperfect, costly
Actors      | Rational | Bounded | Bounded | Bounded
Decisions   | Profit (or utility) maximizing | Imperfect information | Rules, norms and constraints | Cognitive and emotional
Focus       | Market | Information | Institutions | Psychology
Economics of information, or information economics, reasons that markets are not
necessarily efficient. Emerging inefficiencies are explained by the existence of an
imperfect and costly information distribution and the presence of information
asymmetries (Stiglitz 2000). In contrast to the previous theory, the actors’ rationality
within the market is bounded, and the suppliers are opportunistic (Macho-Stadler, Pérez-Castrillo, and Watt 2001). These influences of information are used to elucidate the
altered behaviour of the actors (Arrow 1984).
Similar to the economics of information, the NIE assumes imperfect and costly
information as well as bounded rational decision making by the actors (Williamson 2000;
Ménard and Shirley 2005). The NIE aspires to explain the behaviour of the actors by
examining the role and interaction of formal and informal institutional arrangements, in
which these institutions can be described as “the written and unwritten rules, norms and
constraints that humans devise to reduce uncertainty and control their environment”
(Ménard and Shirley 2005).
Deviating further from the economic grounds, behavioural economics reflects on the
(violations of) economic behaviour of actors by combining economic and psychological
theory. Essentially, it assumes that the environment influences mental processes; aspects so influenced, such as attitudes, personal factors and expectations, in turn affect behaviour (Antonides 1996). Important directions to explain potential irrationalities in the
behavioural process and outcome are the effects of heuristics and biases, uncertainty,
and framing (Kahneman 2003).
Reflecting the theories and their differences against the research philosophy described in Section 1.6, and the accompanying world view of the researcher, the neoclassical approach does not fit, as it assumes perfect markets with rational actors.
Second, the economics of information is not adopted separately, because its main
premise is also implemented within the boundaries of the NIE. In order to act according
to the researcher’s beliefs, the developed theory is thus based upon the final two
viewpoints. Therefore, primarily to explain the development of evaluation of information
system economics, the NIE is expounded in an abbreviated manner in Section 3.2. Next,
behavioural economics is further introduced in the subsequent section to make clear how
evaluations and their content are perceived.
After this exposition, emphasis is placed upon the concept of objectivity, as a third element of the toolbox, in Section 3.4. Evaluations of information systems can be
categorized in many ways. Each of these taxonomies highlights different characteristics;
examples are timing (Remenyi, et al. 2000), type of assessment (Renkema and Berghout
1997), and level of objectivity (Powell 1992). Typologies may aid organizations in their
choice of appropriate methods to support their information system investments and
other resource allocation choices. These justifications are assessed based on the business
value to the organization. From an economic standpoint, it was established in the first
chapter that this value can be considered the sum of all positive and negative
consequences of the system. In an assessment of evaluation methods it is therefore
important to consider how these elements are reconciled. In addition, the level of
objectivity, while often mentioned and often implicitly present, has received little
fundamental examination in information system evaluation literature. Yet, the concept
may prove valuable as higher levels of objectivity in the measurement and evaluation of
the costs and benefits might be able to play a role in more commonly accepted and
employed evaluation principles. As stated by Lovallo and Kahneman, “more objective
forecasts will help you choose your goals wisely and your means prudently” (2003).
Provided that some objects of information are perceived to have different levels of
objectivity than others, the question arises of what is objective. Assessing evaluation from
this point of view might provide understanding of individual perceptions of the
differences between the benefits and costs of information systems. This knowledge can
then be used to guide evaluation in its development.
As such, the purpose of Section 3.4, as well as Section 3.2 and 3.3 for that matter, is to
provide the tools to build theoretical insights into the reasons behind the existing
problems with information system evaluation and the misalignment of benefits and costs.
In Section 3.5, the implications of the described theories for the evaluation of information
system economics are drawn up. These considerations, additionally supported where needed, result in a set of propositions and hypotheses that are presented together and tested in Chapter 5. This chapter ends with a summary and
conclusions.
3.2 New Institutional Economics
The first part of the toolbox is provided by the New Institutional Economics. “Institutional
economics focuses on the implications of the fact that firms within the economy as well as
individuals within a firm are self-interested economic agents with divergent objectives”
(Bakos and Kemerer 1992). The NIE is interested in explaining why economic institutions
developed the way they did and not otherwise (Arrow 1987) and “how they arise, what
purposes they serve, how they change and how – if at all – they should be reformed”
(International Society for New Institutional Economics 2007). “The belief systems that
evolve from learning induce political and economic entrepreneurs in a position to make
choices that shape micro and macro economic performance to erect an elaborate
structure of rules, norms, conventions and beliefs embodied in constitutions, property
rights, and informal constraints; these in turn shape economic performance” (North
2005). In doing so it distances itself from questions on resource allocation and degrees of
utilization, which are generally discussed in the older institutional economics (Arrow
1987). It also abandons the standard neoclassical assumptions that individuals have
perfect information and unbounded rationality and that transactions are free of charge
and occur instantly (Ménard and Shirley 2005). It is mainly built upon economics and
political science, but also incorporates theory from several other social-science
disciplines, such as sociology and anthropology.
Williamson (1998, 2000) distinguishes four levels of social analysis which can be used in
the economics of institutions (Table 4). Although the frequencies provided are arguable,
“this perspective is valuable in emphasizing the varying time spans over which
institutional changes can take place” (Benham 2005). On the top level the (disputable)
spontaneous social structures such as customs, traditions, and norms are covered. Still
partly evolutionary, the institutional environment is located on Level 2. This environment
embeds the “formal rules of the game.” As the system of property rights is not perfect,
these structures are not sufficient to “play the game.” Therefore, on Level 3 the
institutions of governance are established, focusing on the contractual relations and
transactions. As the transactions deal with ex post governance, the “ex ante alignment
and efficient risk bearing” is emphasized on Level 4 by agency theory.
In this research, as in most research using the NIE (Williamson 2000), Level 1 is considered out of scope and is taken as a given.
Table 4: Economics of institutions (Williamson 1998)

Level | Description | Frequency (years) | Purpose
L1 | Embeddedness: informal institutions, customs, traditions, norms, religion | 100 to 1,000 | Often noncalculative; spontaneous
L2 | Institutional environment: formal rules of the game, esp. property (polity, judiciary, bureaucracy) | 10 to 100 | Get the institutional environment right; 1st order economizing
L3 | Governance: play of the game, esp. contract (aligning governance structures with transactions) | 1 to 10 | Get the governance right; 2nd order economizing
L4 | Resource allocation and employment (prices and quantities; incentive alignment) | Continuous | Get the marginal conditions right; 3rd order economizing

Theories per level: L1 social theory; L2 economics of property rights and positive political theory; L3 transaction cost economics; L4 neoclassical economics and agency theory.
In the next sections, shaping the content of the toolbox, the theories behind Levels 2 to 4 are discussed bottom-up, using respectively agency theory, transaction cost economics, and the economics of property rights. It should be noted that a wide range of literature is available to explain the theories; especially for the general descriptions, the cited references are therefore not necessarily the only or original sources but reflect those assessed. Also, in addition to these four levels, Williamson identifies a “level zero” where “the mechanisms of the mind take shape” (2000). As such, this level is dealt with in Section 3.3 on behavioural economics.
3.2.1 Agency theory
In agency theory, the firm is seen as a system of contracts among self-interested actors
(Ross 1973; Jensen and Meckling 1976). Between those actors, agency relationships come
into existence when one, the principal, transfers decision making authority to another,
the agent, to perform some action on the principal’s behalf, affecting the former’s
economic utility (Jensen and Meckling 1976; Bakos and Kemerer 1992), as is the case
between organizations and employees.
Principals would benefit most if they could rely on agents to act in a way that would maximize their welfare (Bakos and Kemerer 1992). However, agents do not always act in a way which will lead to that level of welfare. There are two reasons why this behaviour is typical of agency relationships: (i) goal incongruence, that is, the principal and the agent regularly have different goals and objectives; and (ii) information asymmetries, which arise from the inability of the principal to perfectly and costlessly monitor the services provided by the agent (Bakos and Kemerer 1992). Therefore, a principal has to cope with agency costs, which it will try to minimize. The focus in this minimization is again the contract between the principal and agent (Eisenhardt 1989).
Agency costs consist of monitoring costs, bonding costs and residual losses (Jensen and Meckling 1976; Gurbaxani and Seungjin 1991). The monitoring costs arise from the first of two options for the principal to control the behaviour of the agent as provided by Eisenhardt (1985); that is, the principal can purchase surveillance mechanisms. The second control approach relates to the bonding costs and is to reward the agent based on outcomes, which are thus accepted as substitute measures for the actual behaviour. In this case, the principal transfers part of the risk it bears to the agent, as part of the actual outcome can be caused by influences the agent cannot control. Last, residual losses occur with the principal due to behaviour by the agent which does not optimize the outcome for the principal to the full extent.
3.2.2 Transaction cost economics
Transaction cost theory aims to explain why under certain circumstances institutions arise
that provide a higher level of efficiency than other institutions would in the same
situation (Gurbaxani and Seungjin 1991). In this view, the range of institutions that can
govern runs between the two extremes of markets and hierarchies, such as firms. On the
one hand, markets are relatively decentralized control structures that rely on the price
mechanism to control supply and demand. On the other, hierarchies exert control in a comparatively centralized setting, in which a central authority is believed to have adequate information to coordinate supply and demand (Bakos and Kemerer 1992).
Transaction cost economics describes the conditions under which each of these two forms
of governance is likely to emerge. These conditions are based on the minimization of the
costs that emerge whenever there is a transfer of goods or service between two separate
entities (Walker and Weber 1984), the so-called transaction costs. It thus “posits that
there are costs in using a market as a coordination mechanism and that the firm is an
alternative mechanism that facilitates economizing on market transaction costs”
(Gurbaxani and Seungjin 1991).
Analysing the efficiency of transactions provides transaction cost economics with two elements: (i) the process, or administrative mechanisms, whose efficiency is at issue; and (ii) the properties of the transactions that determine this efficiency (Walker and Weber 1984). Eventually, the governance form to arise is believed to depend on the uncertainty associated with carrying the transaction into effect, the frequency with which a transaction occurs and, finally, the uniqueness or specificity of the transacted matter, either good or service (Walker and Weber 1984).
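As an illustration only, this comparative logic can be phrased as a toy decision rule; the scoring scheme and the threshold below are invented for the sketch, as transaction cost economics itself provides no such formula:

# Toy sketch of the transaction cost reasoning: the governance form follows
# from comparing the frictions of market coordination with those of
# in-house (hierarchical) coordination. The 0..1 scores and the threshold
# are invented for illustration.
def preferred_governance(uncertainty: float, frequency: float,
                         asset_specificity: float) -> str:
    # Higher uncertainty, frequency and specificity raise the cost of
    # transacting on the market relative to coordinating in-house.
    market_friction = uncertainty + frequency + asset_specificity
    return "hierarchy" if market_friction > 1.5 else "market"

print(preferred_governance(0.8, 0.7, 0.9))  # hierarchy
print(preferred_governance(0.2, 0.3, 0.1))  # market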
The overall governance structure is not the only level at which the approach provides
insights. On a middle level it offers insights into the boundaries between firms and
markets by explaining which activities should be provided in-house and which on the
outside (Williamson 1981). By matching internal governance with characteristics of work,
it also presents insights into the manner in which human assets are arranged (Williamson
1981).
3.2.3 Economics of property rights
The principle of property rights focuses on the ownership and allocation of decision rights
(Alston and Mueller 2005; Ménard 2005). The rights allow the owner to act in a particular way, or prohibit non-owners from doing so (Demsetz 1967). The maximal ownership that can
be gained would provide the following five rights (Alston and Mueller 2005):
the right to use the asset in whichever way the owner pleases;
the right to exclude others from using that asset;
the right to derive income from the asset;
the right to sell the asset; and
the right to transfer the asset to a freely chosen person.
It is these, formal or informal, rights that create an incentive for resource use, as their
value provides the value of the asset. “The more exclusive are property rights to the
individual or group the greater the incentive to maintain the value of the asset.
Furthermore, more exclusive rights increase the incentive to improve the value of the asset
by investment” (Alston and Mueller 2005: 574). In this sense, owning rights thus provides “veto power” over the value that might be obtained from the matter in question (Walden 2005). Following the same argument leads to the conclusion that not possessing the rights over the factors of production that are necessary to one’s work limits control (Grossman and Hart 1986). However, as the management and enforcement of property
rights create transaction costs, various forms of institutions can emerge as control
structures in order to minimize these costs (Demsetz 1967; Grossman and Hart 1986; Hart
and Moore 1990).
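The ‘bundle’ character of the five rights listed above can be made explicit in a small sketch; the class and the exclusivity measure are illustrative constructs of this sketch, not part of the cited literature:

# Sketch of the bundle-of-rights view of ownership (after Alston and
# Mueller 2005). The exclusivity measure is an invented illustration of
# the claim that more exclusive rights mean stronger incentives.
from dataclasses import dataclass

@dataclass
class PropertyRights:
    use: bool       # use the asset in whichever way the owner pleases
    exclude: bool   # exclude others from using the asset
    income: bool    # derive income from the asset
    sell: bool      # sell the asset
    transfer: bool  # transfer the asset to a freely chosen person

    def exclusivity(self) -> float:
        # Share of the maximal bundle that is actually held.
        held = [self.use, self.exclude, self.income, self.sell, self.transfer]
        return sum(held) / len(held)

tenant = PropertyRights(use=True, exclude=True, income=False, sell=False, transfer=False)
print(tenant.exclusivity())  # 0.4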
3.3 Behavioural economics
Behavioural economics, the second part of the ‘toolbox’, focuses on the effects of
psychological and sociological factors on human behaviour in the economy in an attempt
to add accuracy to the current economic models (Diamond and Vartiainen 2007). These
factors are used to explain why the economic actors do not necessarily behave in a way
that would maximize their utility (Kahneman and Tversky 1979). The basic paradigm of
economic psychology is that an objective environment affects these factors, which in turn
influence the economic behaviour of the actor.
Building on this, Antonides (1996) describes an elaborate structural research paradigm
containing an objective, independently measurable, plane and a subjective, directly
assessable, one. On the objective plane, situational economic restrictions affect economic
behaviour, which together with personal resources influence the personal economic
situation of the actor. Additionally, the various personal economic situations of many
actors combined create a general economic environment. This general environment also
reflects back on to the personal environment of each individual as well as the economic
restrictions. Next, the subjective plane is composed of motives and personality, mental processes, perceived restrictions, and societal opinions; each of which (in)directly affects the decisions made by the individual, and consequently its economic behaviour. The entire model is represented in Figure 14; the upper half and dotted lines represent the subjective plane, the remainder the objective one.

Figure 14: The structure of economic behaviour on the objective and subjective plane (Antonides 1996)
Combining this model with the notion that individuals within an economy have an
incomplete information processing ability, one of the basic assumptions of behavioural
economics, leads to the conclusion that behaviour is based on merely bounded rational
thinking. Influential directions to explain the potentially irrational decisions include the
effects of heuristics (Kahneman, Slovic, and Tversky 1982), prospect theory (Kahneman
and Tversky 1979) and frames (Kahneman and Tversky 2000). Commonly addressed
factors are therefore individuals’ appealing to heuristics, biased probability judgements, overconfidence, anchoring to irrelevant information, loss aversion, and incomplete self-control (Diamond and Vartiainen 2007). Based on Wärneryd (1988), Van Raaij (1996) categorizes these factors into eleven types that can affect individuals’ economic behaviour; these categories are presented in Table 5, together with a description.

Table 5: Categories of psychological factors in economic behaviour (Van Raaij 1996; related sections from Antonides 1996 in brackets)
Category | Description
Motivational factors | “biological, social and cognitive motivations, mostly seen as a discrepancy between an actual and a desired state” (motivation and personality)
Values and norms | “developed through socialization, guiding and constraining economic behaviour” (attitude)
Information processing capabilities | “from the internal and external environment, combining information from memory with new information. Information processing includes encoding, transformation, and retrieval from memory” (limited information processing)
Attitudes as an evaluative construct | “an evaluative construct of individuals to judge objects, persons and ideas. Attitudes should be predictive of behaviour” (attitude)
Social comparison with peers | “of own input e.g. effort, output e.g. payment and situation with referent persons and social influence of others” (cognitive consistency)
Rules of heuristics | “for combining information, weighing benefits and costs and assessing the value of choice alternatives” (limited information processing)
Attributions of success and failure | “to causes and learning from this for future behaviour” (cognitive consistency)
Affect in perception and evaluation | “(emotional factors) in perception and evaluation of information and guiding behaviour” (emotions)
Bargaining and negotiating processes | “in competitive games or in dividing group outcomes among group members” (game theory, negotiation)
Learning processes | “other than using rules or heuristics” (learning, limited information processing)
Expectations | “as evaluations and uncertain knowledge of future events and developments” (economic expectations and investment behaviour)
3.4 Objectivity
Supplementary to the economic principles as described in the previous two sections, the
concept of objectivity finalizes the ‘toolbox’ to be used. In an apparently common-sense
supposition, Powell (1992) states that objective measures seek to quantify system inputs
and outputs in order to attach values to them; while subjective methods (usually
qualitative) rely on attitudes, opinions and feelings. The latter part of this view is
supported from a research philosophy perspective in which objectivity relates to
objectivism, the ontological position stating that “social entities exist in reality external to
social actors”; whereas subjectivity descends from subjectivism, arguing that these
entities are created “from the perceptions and consequent actions of social actors”
(Saunders, et al. 2006). Despite extensive and insightful philosophical considerations concealed behind the concepts of objectivism and subjectivism, influenced by Descartes, Kant, Foucault, and Nietzsche among many others (Darity 2008), no empirically applicable definition to determine a level of objectivity has been established. Megill (1994) however
has defined four conceptual senses of objectivity that provide insight into how it can be
obtained and how it functions; these are the (i) absolute, (ii) disciplinary, (iii) dialectical,
and (iv) procedural sense of objectivity. Each of which is discussed next; unless cited
otherwise, the discussion largely follows Megill (1994).
Absolute objectivity represents the origin of the discussion on objectivity, that is, the philosophical point of view. In its purest form, objectivity would have no actors involved and the view would suffer from no distortions; absolute objectivity would thus offer a ‘view from nowhere.’ Following a realist standpoint, absolute objectivity would be reached by eliminating the influence of the observer on the observation. This aspiration can be seen in the shift that took place in the discussion on absolute objectivity in the 20th century, when the ‘view from nowhere’ and ‘representing things as they really are’ were expanded towards representative criteria on which rational beings can base judgement calls about the extent to which the matter has moved in the direction of the absolute reality (without the possibility of ever reaching this reality).
Disciplinary objectivity, the second sense of objectivity, diverges from this notion of a single comprehensive convergence and assigns objectivity to the compromises among accredited authorities in the field of the matter, creating a community objectivity. Objectivity assigned on such authoritative grounds does, however, not necessarily mean that the level of objectivity is high. As Power (1997) substantiates for the field of financial auditing, “below the wealth of technical procedure, the epistemic foundation of financial auditing, ... is essentially obscure”; these procedures are based on an obscure knowledge base and the output is essentially an opinion. The supposed objectivity stems from disciplinary objectivity, and its “acceptability to those outside a discipline depends on certain presumptions, which are rarely articulated except under severe challenge” (Porter 1995). The trust put in the quantification of measures, advocating “increases [in] precision and generalizability, while minimizing prejudice, favouritism, and nepotism in decision-making” (Darity 2008), can also be seen to be purely disciplinary.
Both disciplinary and absolute objectivity regard subjectivity as a negative influence;
where the former tries to contain subjectivity, the latter even tries to exclude it. Megill’s
third sense, that of dialectical objectivity, is where a positive relation with subjectivity
comes in. Dialectical objectivity declares subjectivity to be a requisite feature when
representing objects, “unless the [observer] already has within himself something of what
a particular [matter] offers, he will fail to see what is being given him” (Megill 1994).
Accepting the observer as an element of the observation would lead objectivity to
concern a sense of judgement and acceptance reflecting generally agreed principles. This,
in turn, results in a situation in which neither objectivity nor subjectivity are absolute and
mutually exclusive (Ford 2004). Therefore, it might be better to aspire to reducing subjectivity when pursuing objectivity, rather than to aim futilely for objectivity itself. This
espouses Giddens’ (1984) view that the objectivity of a social system depends on its
enabling and constraining structural properties which create a range of feasible
opportunities wherein the agent can be engaged; the smaller the range of options
available, the lower the subjectivity.
Although none of the four senses is exclusive of, or unrelated to, the others, the final sense most clearly occurs regardless of whether any of the other appearances is reached. Procedural objectivity, also known as mechanical objectivity, focuses on the impersonal method of investigation or administration and is essentially a practice reaching its objectivity by following rules (Porter 1995). These rules, often founded on disciplinary knowledge, “are a check on subjectivity: they should make it impossible for personal biases or preferences to affect the outcome of an investigation” (Porter 1995). The rules thus create high levels of standardization and reproducibility, which, as in science, might be further enhanced by triangulation. The availability of rules is not sufficient for procedural objectivity on its own, as other properties are of influence. Applying rules will often require some kind of valuation by the actor involved; the more situations that require actor judgement, and the more complex these judgements, the lower the objectivity. This effect is called ‘multiple subjectivity’ (Berghout 1997). The effect may be moderated by the use of triangulation: as in research, triangulated data enable the actor to ascertain the statements made. Triangulation can also be created by instating multiple actors to increase intersubjectivity, intersubjectivity being the special case of moving from subjectivity to objectivity through the use of more than one actor.
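The ‘multiple subjectivity’ effect can be caricatured in a toy calculation; the multiplicative discounting below is this sketch’s own assumption, not Berghout’s (1997) formulation:

# Toy sketch of 'multiple subjectivity': the more judgement calls a
# procedure requires, and the more complex they are, the lower its
# procedural objectivity. The scoring scheme is invented.
def procedural_objectivity(judgement_complexities: list) -> float:
    # Each judgement call has a complexity in 0..1; objectivity starts at 1
    # and is discounted once per required judgement.
    score = 1.0
    for complexity in judgement_complexities:
        score *= (1.0 - complexity)
    return score

print(procedural_objectivity([0.1, 0.2]))       # few, simple judgements: 0.72
print(procedural_objectivity([0.4, 0.5, 0.6]))  # many, complex judgements: 0.12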
In both procedural and disciplinary objectivity, the level of objectivity is determined by
the position of the actor who either somehow obtains disciplinary approbation or follows
the mechanics in order to employ the evaluation, as well as the other actors involved. An
actor can and might positively or negatively influence the objectivity, based on his power (the ability to actually influence), knowledge (the knowledge of how to influence), and interest (the willingness to influence). Although all three are required in order to influence, none of them on its own will ensure that influence is exercised. Apart from depending on the position of the actors, objectivity is “practiced
in a particular time and place by a community of people whose interests, hence standards
and goals, change with particular sociocultural and political situations” (Feldman 2004).
Hence, there are objectivity-stimulating elements for methods and participants, and these
elements will also interact. Something that could be deemed objective at a given time,
might thus lose this objectivity due to subjective manipulation by the actors involved.
Having presented an overview of theory on the economic behaviour of organizations and
the concept of objectivity, the next section is devoted to reflecting upon these concepts
with regards to the evaluation of information systems. This reflection will result in a
selection of propositions and hypotheses which are built on theoretical insights into the
reasons behind the existing problems with information system evaluation and the
misalignment of benefits and costs.
3.5 Propositions and hypotheses
In Chapter 2 the topic of evaluation of information system economics was explained. This
description led to the conclusion that, in the literature, the benefits and costs of
information systems are dissimilar and seem to be treated differently. In addition, the
evaluation of information system economics in practice might not have developed in the
same way on both sides. Subsequently, the first four sections of this chapter supplied a
theoretical toolbox which can help to explain these observations. This theory
development and the formation of hypotheses and propositions is the subject of this
section.
It should be noted that the difference between propositions and hypotheses lies in the
way in which they are tested. The propositions will be tested by means of analytical
reasoning whereas the hypotheses will be subject to statistical examination. In this
research the positions considering the practices applied to evaluation are defined as the
former (Section 3.5.1), the theses on the differences between benefits and costs as the
latter (Section 3.5.2).
3.5.1 Practices in the evaluation of information system economics
Practices associated with a certain activity give insight into the role of that activity inside an institution. Therefore, the practices executed by organizations in the process of evaluating represent a major part of the foundation that evaluations possess within such an entity. In this section, several issues are addressed which could underlie the
apparent unsuccessful utilization of the potential value of evaluation regarding
information system economics.
Following the theory of property rights, the organization of evaluation resembles a
market in which the evaluators, the agents, supply information concerning their projects
to the principals. Here, the people ultimately deciding on the fate of the project in the process can be identified as these principals. The information provided can be supplied based on a project push, towards the principal, or a project pull, as the principal can give the order to research a possible solution. In either case, it is the principal who holds the
property rights of the project. Ultimately, the principals transfer the tasks of evaluation to
their agents, while maintaining the decision rights.
From the agents’ point of view, once the initial decision is made to start a project, the
incentive to evaluate is likely to diminish. As the project lives on, the progress that would
be eliminated by a redistribution of allocated resources to another project increases
(though technically the production used to date could be defined as sunk), thus increasing
the transaction costs for the principal.
Recalling the distinction between evaluating to judge and evaluating to learn (Section 1.4), in the long term evaluation has its value in learning. However, observing the self-interest of the agents, it might be seen that the evaluators are often involved with a project and afraid of short-term judging. The agent might thus be unlikely to prefer the long-term gains over the short-term influences. Considering this declining tendency of incentives on the evaluators’ side and the agent’s self-interest, it is expected that evaluations are mainly driven by the principal, hence:
Proposition 1: Project evaluations are not performed when not obligatory
The dichotomy between judging and learning could have further consequences. If the evaluators are merely contributing to the demands of the principals, the focal point of the evaluation is more likely to be fulfilling a role in judgement than one in the process of learning, especially as the agents are in the position to learn, whereas the principals are in the position to judge. In order to take advantage of this learning, the principal would have to decrease the element of judgement, increasing the agency costs as the power of the agent increases. Given the potential loss of knowledge as a consequence of this omission, the principal would also be inclined to incur costs to secure the preservation of this knowledge. Therefore, it is anticipated that:
Proposition 2: Project evaluations are demand-based rather than learning-based
If the evaluations prove to be driven by obligations and requisites, the question arises where the focus of the evaluators lies. It is seen that the tasks of evaluating have been delegated by the principals to the project team. They, however, have a bigger incentive to evaluate the project process rather than the project outcome, as the latter
is, at least partly, out of their hands. The principal would thus be likely to try to control
the agents in order to negate this effect.
The same two approaches presented by Eisenhardt (1985) to do so can be recognized
one-on-one in the evaluation of benefits and costs of information systems (Van
Wingerden, Berghout, and Schuurman 2009a). The first approach covers the installation
of surveillance mechanisms by the principal. In the second approach, the agent is
rewarded based on outcome of substitute measures for actual behaviour. In information
system evaluation, these mechanisms can be seen to occur in the two distinct approaches
of outcome-based and process-based evaluations (Hinton, et al. 2000; Doll, et al. 2003),
as introduced in Section 1.4 and shown in Figure 15.
Outcome-based attributes focus on the measurement of results, whereas the process-based approaches consider the activities producing these results. For cost evaluation,
the outcome-based approach focuses on the measurement of actual costs and
efficiencies, whereas the process-based approach focuses on measuring the internal control
systems, possibly including compliance with generally accepted accounting practices
(Power 1997). Compared to the established cost evaluation, no such rules, standard
systems and accepted principles exist for information system benefits. Benefit
management methods do however facilitate their measurement in the two distinct
approaches. First, benefits are measured as such. Estimates are made of the causal
relation between positive effects and information systems. Thorp (1998), for instance,
obtains data on known driving variables, such as revenue, sales volumes, and the
customer base. Second, benefit generating processes are elaborated upon. Benefits are
not measured as such, however, the fact that a certain benefit generating process is in
place at least provides the necessary precautionary conditions for benefits to potentially
occur.
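For the first, outcome-based route, the driver-variable bookkeeping described by Thorp (1998) might look as follows in miniature; the attribution share and all figures are invented for this illustration:

# Sketch: outcome-based benefit measurement from known driving variables
# (after Thorp 1998). The attribution share is the evaluator's judgement
# call, precisely where subjectivity can enter. All numbers are invented.
drivers_before = {"revenue": 100.0, "sales_volume": 50.0, "customer_base": 1000}
drivers_after  = {"revenue": 112.0, "sales_volume": 55.0, "customer_base": 1080}

is_attribution_share = 0.4  # judged share of the change caused by the system

revenue_benefit = (drivers_after["revenue"] - drivers_before["revenue"]) * is_attribution_share
print(revenue_benefit)  # 4.8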
Figure 15: Framework of evaluation approaches (outcome-based cost evaluation: efficiency; outcome-based benefit evaluation: effectiveness; process-based cost evaluation: cost management assessment; process-based benefit evaluation: benefit management assessment)
One could expect that, as cost management principles created the foundation of information system cost evaluations, benefit management principles could become the building blocks for a yet-to-be-established foundation of benefit measurement. In this function, process evaluation could support the enablement of benefit evaluation. However, this would only benefit the principal, as the actual evaluators, the project team, are solely rewarded based on the project as a process and the delivery according to specifications and within budget. Their incentive therefore only lies in complying with the category of outcome-based evaluations. It is therefore proposed that:
Proposition 3: Project evaluations are more concerned with outcome than process
As the agents are mainly the people associated with the project, it could be that their knowledge regarding the intended outcome of the project, as well as their situational knowledge of the environment in which the project takes place, is lacking when they evaluate. As time progresses, insights into the actual consequences of the project become clear and the assessment of benefits and costs would gain in quality. This is, on the other hand, only after the project has already been granted the single important mandate.
Arguably, a solid form of ‘continuous’ evaluation at different stages could dispel this
issue. Such a system, however, will only occur if the additional evaluation would provide
value. As project evaluations are primarily focused on the approval decision and learning
aspects are neglected, as suggested in earlier propositions, this value does not exist.
Therefore decisions are believed to be made too early in the development process, that
is:
Proposition 4: Projects are given full approval too early in the development process
Having developed four potential flaws in the practices of information system evaluation,
the focus in the next section shifts to economic aspects of these evaluations. In particular,
the perceptions of evaluators on benefits and costs and the influence of objectivity
hereupon are considered.
3.5.2 Objectivity in the evaluation of information system economics
In Chapter 2 it is shown that there are many differences between the benefits and costs
of information systems. Benefit assessments seem less reliable but more complete as
both financial and non-financial consequences are examined to a certain extent, whereas
cost assessments only process the financial consequences. This led to the conclusion that
the two appear to have a different denominator which hinders a functional comparison.
Here it is proposed that these differences could stem from varying levels of objectivity
between the two concepts. It can be argued that cost management creates a relatively
objective and comparable foundation – though it will depend on the quality of the data,
the quality of the costing system, and the quality of the output or signals the system
produces. Nonetheless, it is hoped that by careful cost categorization all sources of cost
can be identified, and quantified, in a reasonably robust manner, enabling high quality
evaluations. When pursuing the evaluation of information system benefits, no established and accepted foundation such as the one in place for cost measurement can be built upon. To expand the insights into these differences, the level of objectivity in the evaluation of information system economics is next further assessed by observing the evaluation methods with regard to the objectivity they contain on benefits and costs.
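To illustrate what such a cost categorization amounts to, a minimal sketch that rolls up costs by cost type and by the initial versus maintenance split used later in this research (Table 9); all amounts are hypothetical:

# Sketch: rolling up information system costs by category and phase.
# The categories anticipate Table 9; the amounts are invented.
costs = {
    ("hardware", "initial"): 120.0,
    ("hardware", "maintenance and operation"): 30.0,
    ("software", "initial"): 200.0,
    ("software", "maintenance and operation"): 60.0,
    ("IT staff", "initial"): 80.0,
    ("IT staff", "maintenance and operation"): 90.0,
}

total = sum(costs.values())
initial = sum(v for (_, phase), v in costs.items() if phase == "initial")
print(total, initial)  # 580.0 400.0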
Based on its description in Section 3.4, an analysis of objectivity in information system
economics evaluation can be performed on the available evaluation methods. With
regard to the disciplinary objectivity, all methods appeal to the same disciplinary
foundation; therefore, a comparison is unlikely to show any differences here. The aspects
which relate to the mechanical objectivity however open up possibilities for analysing
levels of objectivity. Several aspects are distinguished concerning provided rules. The
availability of guidelines on the selection of the object under evaluation – the what aspect
– provides the evaluator with a reference framework for the boundaries of the
assessment. In view of the complex nature of information systems, these rules are a
prerequisite for objectivity. Having established the evaluation’s focus, rules on the
procedures of evaluation – the how aspect – will guide the evaluation in the process of
employing the analysis. This is the part in which rules for identifying costs and benefits
are to be found, as well as guidance to bring cost and return to a common base. Use of
triangulated data further enables objectivity in this aspect. After the evaluation process,
the outcome of the evaluation needs to be addressed. Rules on the criteria to be used in
this action – the which aspect – contribute to the mechanical objectivity of an evaluation
method by standardizing the interpretation. These rules are closely linked to the procedural
rules, but differ in that they guide meaning rather than operation. Next to the rules
connected to the evaluation process, two additional aspects can be identified in the
evaluation’s environment. The first concerns the stakeholders involved in the evaluation –
the who aspect. Active stakeholder management will increase support for the evaluation
as well as the triangulation of data. Additionally, facilitating issues – the why aspect –
regard the embedding of the evaluation in the organization. Issues involved include
supported learning capabilities, communication facilitation, and reporting guidance. An
overview of the aspects is provided in Table 6.
Table 6: Aspects influencing objectivity
Code | Rules regarding the ... of evaluation
MO1 | Object
MO2 | Procedures
MO3 | Criteria
MO4 | Stakeholders
MO5 | Facilitation

Against this understanding of the concept of objectivity in information system evaluation,
the evaluation techniques themselves can be addressed. To do so, a list of fifteen
techniques is created, providing a representative cross-section of the available evaluation
methods portfolio. The selection, presented together with the method’s original
information system source in order of occurrence in Table 7, includes some of the most
classical examples as well as a wide range of conceptual backgrounds; i.e., one or more
representatives are included for each of the categories of financial, multiple criteria, ratio,
and portfolio methods. In the compilation of the list, the timeline of Bannister et al. (2006) is used to cover the changing concepts of information systems, at least to a certain degree.
Next, the level of every objectivity aspect (Table 6) is assessed for each selected method.
Where possible, the original sources, as referred to in the table, were used. In addition,
information system evaluation literature was consulted for information on the methods
(e.g. Van Grembergen 2001; Renkema and Berghout 2005). The results of this
identification of sources of objectivity in the methods are also listed in Table 7. It should
be emphasized that the information provides no value judgement on any property other
than the earlier provided description of objectivity.
In general, the guidance provided on each of the five aspects is seen to be sparse. A
number of external factors might account for this, among which are the intended
scope of either the references or the description of the methods. This is likely to be the
case for the matters of the object under evaluation, criteria of evaluation, stakeholders,
and facilitation. For instance, the object under evaluation is seen to be (information
system) projects and/or the (information system) organization. Setting the boundaries of
such entities entails issues far from the focus of any method explanation, as is the case
with stakeholder management, on which little special guidance is provided other than
information on the use of the method by (senior) management. When considering the
aspects of facilitation and criteria of using the evaluation outcome, a similar argument
holds. Either methods are intended for use within a broader scope, such as any of the
financial methods, or the scope of the methods is broader than one on which the aspects
are used; this is the case for methods providing an organizational framework.
On the aspect of procedures, the previous reasoning does not hold. The procedures form
the essence of the methods and therefore the internal rules on how to employ a method
are provided in detail. Nevertheless, looking beyond the internal rules, guidance on the
data to obtain and process creates a foundation for subjectivity as boundaries are not set.
As the older evaluation techniques are an outgrowth of traditional cost-benefit
methodologies their total objectivity relies on their disciplinary qualities. Moving forward
in time, objectivity diminishes on the aspect of procedures as the evaluation approaches
are increasingly enabled to assess benefits. Increasingly, evaluation methods offer a
framework which is customized for the organization employing the technique, rather than
a ready to use assessment. As the objectivity of cost measurements relies on similar
foundations throughout the selected portfolio of evaluation techniques, it appears
that the two are even moving apart. Hereafter, the dissimilarities of costs and benefits are deepened in the form of five hypotheses and an accompanying conceptual model.

Table 7: Sources of mechanical objectivity in information system evaluation methods
Technique | (IS) Source | MO1 Object | MO2 Procedures | MO3 Criteria | MO4 Stakeholders | MO5 Facilitation
Cost value technique | Joslin 1977 in Powell 1992 | Project | Financial | None | None | None
Cost benefit analysis | Lay 1985 in Powell 1992 | Project | Financial | None | None | None
Method of Bedell | Bedell 1985 | Project, organization | Implicit substitutes | None | Management, users, automation | Portfolio management
Value chain analysis | Porter and Millar 1985 | Organization | Cost and value drivers | None | None | Action plan steps
Internal rate of return | Weston and Copeland 1986 | Project | Financial | None | None | None
Net present value | Weston and Copeland 1986 | Project | Financial | None | None | None
SESAME | Lincoln 1986 | Project | Financial, categories and areas | None | Users | Management recommendations
Return on investment | Weston and Copeland 1986 | Project | Financial | None | None | None
Information economics | Parker, Benson, and Trainor 1988 | Project | Financial, business and IS criteria | None | Management | None
Return on management | Strassmann 1990 | Organization | Financial | None | None | None
Option theory | Dos Santos 1991 | Project | Financial, probabilities | None | None | Management flexibility
Balanced scorecard | Kaplan and Norton 1992 | Project, organization | Perspectives, no explicit measures | None | Management | None
Benefit realization approach | Thorp 1998 | Project | Perspectives, methods to be embedded | None | Management | Results chain
Benefit management approach | Ward and Daniel 2006 | Project | Benefit identification supported, business measures | None | Management | Process
Val IT | ISACA 2007 | Organization | Business case, not explicit | None | Management | Processes
When observing the differences in perception of objectivity regarding costs and benefits, it is seen that the use of the available guidance in general, and of methods in particular, should lead to higher levels of this objectivity. As the guidance towards costs is better established, both within the methods and within the adjacent fields, it is therefore proposed that:
Hypothesis 1a: Costs are perceived to be more objective than benefits
However, as the different types of costs and benefits in themselves make use of the same
sources, the inner objectivity is unlikely to differ; therefore:
Hypothesis 1b: All cost aspects are perceived to be equally objective
Hypothesis 1c: All benefit aspects are perceived to be equally objective
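By way of illustration, Hypothesis 1a could later be examined with a paired comparison per interviewee; the choice of the Wilcoxon signed-rank test and all scores below are assumptions of this sketch, not the analysis prescribed by this research:

# Sketch: one way to examine Hypothesis 1a, comparing perceived
# objectivity of costs and benefits per interviewee. Test choice and
# scores are illustrative only.
from scipy.stats import wilcoxon

cost_objectivity    = [4, 5, 4, 3, 4, 5, 4, 4]  # per-interviewee mean scores
benefit_objectivity = [2, 3, 3, 2, 3, 3, 2, 3]

stat, p = wilcoxon(cost_objectivity, benefit_objectivity)
print(stat, p)  # a small p-value would support Hypothesis 1a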
This more objective perception of costs is expected to have a further effect on the level to
which the assessments are perceived to be complete. As evaluators can receive better
guidance and are able to build on a field which already has taken more shape, it is
believed to be likely that these assessments are more complete. That is:
Hypothesis 2a: Cost evaluations are perceived to be more complete than benefit
evaluations
Again, within the range of cost and benefit types the foundations used for evaluations are
the same; perceptions of completeness are thus also not expected to differ:
Hypothesis 2b: All cost aspects are perceived to be equally complete
Hypothesis 2c: All benefit aspects are perceived to be equally complete
In addition, this can be expected to have an effect on the perceived importance of the two elements. If the costs are rather certain, it would not be surprising if the interest in their evaluation declined in favour of attention paid to benefit evaluation. Looking at their position in the delivery of value, the firmness of the cost assessments would namely mean an increase in the importance of the uncertain element in the (non-)delivery of value. Thus:
Hypothesis 3a: Cost evaluations are perceived to be less important than benefit
evaluations
As there is no reason for the source of benefits or costs to influence the value actually
created, their different types are likely to be perceived the same:
Hypothesis 3b: All cost aspects are perceived to be equally important
Hypothesis 3c: All benefit aspects are perceived to be equally important
“Our perceptual apparatus is attuned to the evaluation of changes or differences rather
than to the evaluation of absolute magnitudes ... Strictly speaking, value should be
treated as a function in two arguments: the asset position that serves as reference point,
and the magnitude of the change (positive or negative) from that reference point”
(Kahneman and Tversky 1979). Again, the well-established historical background on the
side of cost evaluation is expected to produce better results as conformity in the
understanding of concepts, and thus measurements, is likely to be higher. Therefore:
Hypothesis 4a: Cost evaluations are perceived to be better performed than
benefit evaluations
Within the diverse cost and benefit categorizations each type should have received the
same level of maturity and experience, thus:
Hypothesis 4b: All cost aspects are perceived to be equally performed
Hypothesis 4c: All benefit aspects are perceived to be equally performed
The central thesis behind all this is that objectivity is related to the perceived
performance in the evaluation process as a whole. This evaluation process can be split
into the separate evaluation of both the cost and benefit elements. It is expected that the
overall perception of the performance can be improved by gains on either side through
each of the aspects noted in Hypotheses 1 to 4. That is, by improving the objectivity,
completeness, importance, or general perceived performance of whichever part of the
evaluation, the overall perceived performance will increase. To complete the theory, the
following hypothesis should thus also be supported:
Hypothesis 5: The higher the perceived objectivity, the better the evaluation performance perception
The process in which the first four hypotheses are combined into Hypothesis 5 is structured in the conceptual model as provided in Figure 16. Although the emphasis is
put on the propositions and hypotheses, the conceptual model will also be addressed in
the empirical part of the research.
3.6 Summary and conclusions
In the preceding sections a theory has been built towards a possible explanation of why
the evaluation of information system economics has not reached its full potential yet.
After explaining New Institutional Economics, by means of agency theory, transaction
cost theory, and property rights, as well as Behavioural Economics, it has been shown
that objectivity can occur on four levels. These levels are the (i) absolute, (ii) disciplinary,
(iii) dialectical, and (iv) procedural sense of objectivity. These four levels then formed the
foundation of an analysis of objectivity in evaluation methods.
This analysis unveiled that next to the differences between benefits and costs established
in Chapter 2, their evaluation shows a lack of objectivity. Applying parts of the line of
reasoning of the described theories to this led to a total of four propositions and five
hypotheses. An overview hereof is provided in Table 8. Combined, the general line of the
theses is represented in the conceptual model as shown in Figure 16. The model states
that the perceived performance of an evaluation process is influenced by the perceived
performance in both the aspects of cost and benefit evaluation. It is further expected that
the perceived performance in each of these cases improves as the levels of inclusion,
objectivity, and importance rise.
In the next chapter the empirical design of the research is developed. In addition to
providing insights into the data gathering and analysis procedures, the data sample is
described. Subsequently, in Chapter 5, the data are analyzed and the propositions and
hypotheses are tested.
Table 8: Overview of propositions and hypotheses
Id | Thesis
Proposition 1 | Project evaluations are not performed when not obligatory
Proposition 2 | Project evaluations are demand-based rather than learning-based
Proposition 3 | Project evaluations are more concerned with outcome than process
Proposition 4 | Projects are given full approval too early in the development process
Hypothesis 1a | Costs are perceived to be more objective than benefits
Hypothesis 1b | All cost aspects are perceived to be equally objective
Hypothesis 1c | All benefit aspects are perceived to be equally objective
Hypothesis 2a | Cost evaluations are perceived to be more complete than benefit evaluations
Hypothesis 2b | All cost aspects are perceived to be equally complete
Hypothesis 2c | All benefit aspects are perceived to be equally complete
Hypothesis 3a | Cost evaluations are perceived to be less important than benefit evaluations
Hypothesis 3b | All cost aspects are perceived to be equally important
Hypothesis 3c | All benefit aspects are perceived to be equally important
Hypothesis 4a | Cost evaluations are perceived to be better performed than benefit evaluations
Hypothesis 4b | All cost aspects are perceived to be equally performed
Hypothesis 4c | All benefit aspects are perceived to be equally performed
Hypothesis 5 | The higher the perceived objectivity, the better the evaluation performance perception
Figure 16: Conceptual model
4 Research method
4.1 Introduction
Until now, this research has focused on information system evaluation from a
theoretical viewpoint. This resulted in a number of propositions and hypotheses as
documented in the previous chapter. In this chapter, the process of conducting fieldwork
and acquiring data to test the gained knowledge and drafted theories is explicated. In the
overall research design in Chapter 1, it has already been indicated that the data is
acquired by means of interviews. In this chapter, the research process behind these
interviews and their analysis is explored. First, the overall empirical process used to
perform the fieldwork and analyze the results thereof is explicated in Section 4.2. From
these choices a questionnaire has emanated, the design of which is discussed in Section
4.3. Next, after giving a general overview of the data acquisition and accompanying
sample in Section 4.4, the methods and techniques of analysis to be employed are
considered in Section 4.5. Finally, the summary and conclusions are drawn up in Section
4.6, before the propositions and hypotheses are tested in the next chapter.
4.2 Empirical research design
Resembling the overall research design as presented in Chapter 1, the empirical part of
this research can also be organized in three parts; these are the (i) development of the
questionnaire, (ii) obtainment of data, and (iii) incorporation of these data (Figure 17). As
there are both hypotheses and propositions to be tested the data incorporation section is
split into quantitative and qualitative analysis. In the next four sections the steps and decisions in each of the activities are further examined.

Figure 17: Empirical research process
4.3 Questionnaire design
Prior to conducting the interviews, a combined interview protocol (also known as an interview guide) and questionnaire were developed. The role of the first document is to provide a method of quality control among the different interviews, not least by means of enabling standardization (Emans 2004; Fowler 2009). One way it ensures this is by standardizing the role of the interviewer and ensuring that the interviewer will be as consistent and as complete as possible in playing his part. According to Fowler, this role includes asking the
questions, probing for and on answers, recording these answers, and interpersonal
relationships (2009). Emans’ list of tasks allocated to the interviewer covers the same activities, but in addition includes introducing the interview and evaluating answers as well. Furthermore, he recognizes two higher-level tasks; these are task-oriented interview leadership and social/emotional interview leadership (Emans 2004). The first comprises making certain that the material conditions are taken care of, creating clarity about tasks
and roles (not) to be performed by either side of the interview, and ensuring that these
tasks and roles are fulfilled throughout the interview (Emans 2004). The latter includes
“motivating and relaxing both the interviewee and the interviewer” and ensuring the
focus on the interview (Emans 2004).
Following the constructed theory, the protocol contains four sections; these are (i)
evaluation practices, (ii) evaluation performance, (iii) benefit and cost perceptions, and
(iv) possible control variables. Throughout the development phase several feedback loops
and tests with both academics and (former) field experts were carried out, providing
valuable criticism and advice. After considering all feedback and adjusting the
questionnaire where appropriate, the final version, as included in Appendix A, was drafted.
After starting with a general question on the role of information systems in the
organization, providing the interviewer with valuable contextual information, the next
eight questions consider the evaluation practices as currently deployed by the target
organization. The questions do not consider only the evaluation practices, but also their
role in the organization and the generality of the practices. It is therefore their task to
gather data to test the propositions. Throughout these questions it was the task of the
interviewer to make sure that the questions, when applicable, were answered for both
costs and benefits as well as for the four phases of the information system economics
evaluation life cycle.
The second part of the questionnaire focuses on the perceived performance of the
organization in evaluating information systems. After triangulating the extent to which
the organization evaluates information system projects throughout the life cycle, the
questions consider several skills perceived to be valuable when evaluating information
systems and the level to which the six elements of the information system business case
(Schuurman 2006; Schuurman and Berghout 2006) are implemented. Each of the items is
measured by means of a five-point Likert scale. Likert scales allow interviewees to
indicate to what extent they agree with a statement and are thus useful when measuring
perceptions (Saunders, et al. 2006). Considering the number of variables and items, a five
point scale was preferred over a seven (or any other) point scale. Although Dawes (2008)
finds a significant difference between five- and seven-point scales on the one hand and ten-point scales on the other, this difference does not exist between the first two on their own. Matell and Jacoby (1972) conclude that
using a smaller scale will not influence the results (two and three point scales left aside),
while the time needed to fill out the questionnaire for the smaller scale is lower. In
addition, no ‘do not know’ options are provided. Fowler (2009) indicates that these
options are generally used by participants when they are either unwilling to provide an
answer (often concerning their immediate lives), or if they do not have an adequate
knowledge level on the subject. Given the nature of the questionnaire, as well as the
expertise of the participants, it was anticipated that neither reason would be valid in
this case. The consequences of using the Likert scales and possibilities for analysis are
recorded in Section 4.5.2. It should be noted that, as the interviews were conducted in
person, this provided a good opportunity to get extra clarification in addition to the
closed question answers; follow-up questions were therefore the rule rather than the
exception.
The third part of the questionnaire covers the perceptions regarding different benefits
and costs. As discussed in Chapter 2, many categorizations of costs and benefits are
readily available to be used in research (or practice, for that matter). In their literature
review on cost taxonomies Irani, Ghoneim, and Love (2006), for instance, were seen to
end up with 57 items somehow associated with information system costs. As such a number of items would create an unworkable situation for the interviewer, and would without a doubt put too much of a constraint on the interviewees, it was decided to use a
relatively sparse taxonomy based on the work by Kusters and Renkema (1996). The items
are listed in Table 9.
For each of these sixteen elements, five questions were added to test the hypotheses on the perceived differences between the evaluation of benefits and costs, all using the previously described five-point Likert scale. The first three questions are intended to measure the perceived performance of the organization in evaluating information system economics. To do so, they consider the extent to which each item is included in the evaluation, the perceived importance of each item, and the perceived performance in evaluating each item (after Landrum and Prybutok 2004).

Table 9: Cost and benefit items
Cost types | Benefit types
Hardware – Initial | Efficiency gains
Hardware – Maintenance and operation | Effectiveness gains
Software – Initial | Organizational transformation
Software – Maintenance and operation | Technological necessity and/or flexibility
IT staff – Initial | Compliance to external necessities
IT staff – Maintenance and operation | Wider human and organizational impacts
User staff – Initial | Other, namely...
User staff – Maintenance and operation |
External staff – Initial |
External staff – Maintenance and operation |
Other, namely... |

The next two questions cover a
measurement of the perceived objectivity of every item. For these questions the
approach as used by Goodson and McGee (1991) is followed. In that research the relation
between several aspects related to performance evaluation and the perceived objectivity
is researched in the field of human resource management. Objectivity is measured by a two-item scale; the items are the ease with which subjectivity can enter in when evaluating, and the ease with which politics can do so. Although Goodson and McGee do not elaborate on their
line of thinking, these concepts also came forward in the theory building section of the
previous chapter (see Section 3.5). It is therefore believed that evaluations can become more, or less, objective depending on the level of influence that is exercised on the data used; this influence can enter either purposefully, via politics, or unintentionally, via subjectivity.
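A minimal sketch of how the two items could be combined into one objectivity score; the reverse-coding and averaging are this sketch’s own choices, not prescribed by Goodson and McGee (1991):

# Sketch: a perceived-objectivity score from the two five-point Likert
# items (ease for subjectivity, and ease for politics, to enter). Higher
# ease of influence implies lower objectivity, so items are reverse-coded.
def objectivity_score(subjectivity_ease: int, politics_ease: int) -> float:
    assert all(1 <= x <= 5 for x in (subjectivity_ease, politics_ease))
    return ((6 - subjectivity_ease) + (6 - politics_ease)) / 2

print(objectivity_score(subjectivity_ease=4, politics_ease=5))  # 1.5, low objectivity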
An overview of the included costs and benefits is provided in Table 9. In addition, the
variables measured in the closed questions are outlined in Table 10.
To control for the differences between the organizations several organizational
characteristics are measured. With the exception of the questions on the
information system department’s budget, which are taken from Love, Standing, Lin and
Burn (2005), all questions are consistent with the ones used by Ward, Taylor and Bond (1996; as provided by Lin 2002). Finally, two open questions are added so that each
interview would be concluded with an opportunity for the interviewee to provide
additional remarks where desired (Berghout 1997). Next to ensuring the completeness of
the answers, this also provides the researcher with a possibility to identify neglected
areas.
4.4 Data acquisition
4.4.1 Acquisition process
With the questionnaire in place, the interviews could be scheduled. The Dutch branch of
CIOnet, an online and offline network of CIOs and IT managers, accommodated the
acquisition of interviewees through its digital newsletter. From over 200 members, 16 interviewees volunteered to participate in the research. In addition, using personal networks, another 16 were found to be willing to participate, thus creating a total sample size of 32, a description of which is provided in Section 4.4.2.
The 32 interviews took place in the time span between mid July 2009 and mid January
2010. First, either by phone or email, an appointment was made with each of the
subjects. Given the questionnaire test results, each appointment was scheduled for an
hour and a half, so as to ensure time would not become the restricting factor in obtaining
all necessary data. All interviews were conducted at the site of the interviewee so as to reduce their effort. In general, this site was the office of the interviewee; in the rare case of a shared office, an ordinary meeting room was used.

Table 10: Measured variables
Variable | Explanation
Level of evaluation performance | The organization’s overall perceived evaluation performance
Level of inclusion | The extent to which each item is included in an evaluation
Level of importance | The extent to which each item is perceived to be important in the evaluation
Level of performance | The extent to which the organization performs in evaluating each item
Objectivity – Level of subjectivity | The extent to which it is easy for subjectivity to enter in when each item is evaluated
Objectivity – Level of politics | The extent to which it is easy for politics to enter in when each item is evaluated
While cautious not to provide the participant with any leads concerning the theses, all
meetings kicked off with a very general introduction to the research and an explanation
of the process. In the explanation anonymity of both the participant and the organization
was promised and permission was asked to record the interview for research purposes. Although some interviewees asked for extreme caution with the anonymity,
and in one case demanded written consent when a verbatim quote would be used, none
of the participants objected to the recording. At this point, the research questions were
asked, following the questionnaire. In addition to recording the interview, notes were
taken to ensure the best possible data quality.
After finalizing an interview the obtained data was prepared for analysis by creating literal
transcriptions of the interviews and recording the answers to the closed questions in an
SPSS database. The notes were added to the data set during the analysis phase.
4.4.2 Sample description
In total 32 interviewees participated in the research. The majority of the interviewees
fulfill the position of chief information officer (CIO) or manager of the information system
department (56%), while the next most frequently represented position is that of portfolio
manager (13%). Other positions (31%) included are, for instance, senior policy officer and
senior IT advisor. As the research did not focus on a specific line of business, a long range
of fields is covered by the sample. This includes law, logistics, industry, banking, waste
management, and public, medical, and information system services. Overall this results in
a ratio between organizations working on a profit versus non-profit basis of 20 (63%) to
12 (38%). Whereas the non-profit organizations are mostly governmental and solely serve
a national market, it can be seen that 31% of the represented departments operate on a
multinational basis. The same diversity can be seen in the size of the represented
organizations. Generally they are large, with a median annual turnover of €370 million and about 1,300 FTE of employees. In addition, the information system budget was seen to be equally distributed between the categories of under 2%, between 2 and 5%, and above 5% of the organization’s turnover. An overview of the sample’s properties is provided in Table 11.

Table 11: Overview of sample properties
Property | Category | N | (%)
Interviewee position | Chief information officer / IS manager | 18 | 56%
 | IS portfolio manager | 4 | 13%
 | Other | 10 | 31%
Organizational turnover | <50M€ | 5 | 16%
 | 50-100M€ | 3 | 9%
 | 100-500M€ | 9 | 28%
 | >500M€ | 16 | 47%
Organizational size | <500 FTE | 8 | 25%
 | 500-5000 FTE | 15 | 47%
 | >5000 FTE | 9 | 28%
IS budget (of turnover) | <2% | 7 | 22%
 | 2-5% | 10 | 31%
 | >5% | 8 | 25%
 | Unknown or external supplier | 7 | 22%
Market orientation | National | 22 | 69%
 | Multinational | 10 | 31%
Supply chain position | Internal IS supplier | 27 | 84%
 | External IS supplier | 5 | 16%
Foundation | Profit | 20 | 63%
 | Non-profit (governmental) | 12 | 38%
4.5 Data analysis process
Each of the two parts of the questionnaire requires its own set of analysis techniques;
that is, the open questions are processed using qualitative techniques, the closed by
quantitative ones. Both types are described separately next.
4.5.1 Qualitative analysis
Qualitative analysis is necessary for the propositions to be tested. In this process, the
gathered evidence is transformed to analysable data which is then researched. The
Table 11: Overview of sample properties

Property                   Category                                  N    (%)
Interviewee position       Chief information officer / IS manager   18   56%
                           IS portfolio manager                      4   13%
                           Other                                    10   31%
Organizational turnover    <50M€                                     5   16%
                           50-100M€                                  3    9%
                           100-500M€                                 9   28%
                           >500M€                                   16   47%
Organizational size        <500 FTE                                  8   25%
                           500-5000 FTE                             15   47%
                           >5000 FTE                                 9   28%
IS budget (of turnover)    <2%                                       7   22%
                           2-5%                                     10   31%
                           >5%                                       8   25%
                           Unknown or external supplier              7   22%
Market orientation         National                                 22   69%
                           Multinational                            10   31%
Supply chain position      Internal IS supplier                     27   84%
                           External IS supplier                      5   16%
Foundation                 Profit                                   20   63%
                           Non-profit (governmental)                12   38%
The means to employ these steps are discussed in this section. Except where indicated otherwise, this section is based on Saunders, et al. (2006).
The main qualitative data to be analysed are provided by the direct answers to the open questions. These answers are, however, not the only sources of data to support the propositions, as they are likely to lead to further discussion, and the answers to the closed questions can be given qualitative support. Next to the voice-recorded answers, the interviewer made additional notes when deemed necessary, creating a third data source. This gathered data is, however, still in primary, or raw, condition. Before it can be analysed, it has to be processed into manageable units that allow the researcher to cope with the volume of data. The most significant (and laborious) step in this preparation is transcribing the interviews. Notably, the first steps towards analysis are already embedded in these transcriptions, as, for instance, the (situational) knowledge and skills of the transcriber influence the outcome, as do choices made on the way of transcribing (Miles and Huberman 1994).
After transcription, the data are ready to be analysed. The techniques to do so depend on the level of structure, the accepted influence of researcher interpretation, and whether an inductive or deductive approach is taken. Regardless of the chosen approach, Saunders, et al. identify four generally applicable actions involved in this analysis when it is at least fairly structured and procedural: (i) categorization, (ii) unitizing data, (iii) recognizing relationships, and (iv) the development and testing of theories. The first encompasses the grouping of data, providing an “emergent structure that is relevant to your research project to organize and analyse your data further.” Second, unitizing data is the process of attaching blocks, or ‘units,’ of data to the categories. Third, the analysis continues with a “search for key themes and patterns or relationships in your rearranged data,” possibly leading to a restructuring of the categories. Finally, conclusions are reached by developing and testing theories.
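To make these actions concrete, the following minimal sketch (in Python) mimics the categorization and unitizing steps; the category names and transcript fragments are hypothetical, as the actual analysis was performed on the full interview transcripts.

    # Minimal sketch of categorization and unitizing; the category names
    # and transcript fragments below are hypothetical.
    from collections import defaultdict

    # (ii) Unitizing: blocks ('units') of transcript data, each tagged with
    # one of the (i) emergent categories while reading.
    units = [
        ("org_03", "we only evaluate at project closure", "evaluation practice"),
        ("org_07", "the business case is mainly an accountability tool", "business case role"),
        ("org_11", "lessons learned are written down but rarely used", "learning"),
    ]

    coded = defaultdict(list)
    for organization, fragment, category in units:
        coded[category].append((organization, fragment))

    # (iii) Recognizing relationships: inspect the rearranged data per
    # category, searching for key themes and patterns across organizations.
    for category, fragments in coded.items():
        print(category, "->", len(fragments), "unit(s)")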
Besides these four general activities, other techniques are available depending on whether a deductive or an inductive approach is adopted. As described in Section 1.6.1, this research applies the deductive one; therefore, the options available for inductive analysis of qualitative data are left aside here.
“The nature of qualitative data and the inherently subjective way in which it has been
collected, prepared, presented and interpreted means that more attention needs to be
paid to testing the interpretations finally arrived at” (Cornford and Smithson 2006).
Saunders, et al. identify two of the analytical techniques presented by Yin (2003) as especially suitable when taking a deductive approach: pattern matching and explanation building. The first considers propositions that predict findings; these predictions are then falsified against the actual findings. The second technique is also based on the matching of patterns; however, here the predicting explanations are constructed during the gathering and analysis of the data.
The final factor in the analysis of the data is the format in which the results are presented. This format can be based on either the classic single case, multiple cases, a single or multiple cases without narrative, or no separate sections for individual cases. Yin notes that “the fewer the variables ... the more dramatic the different patterns will have to be to allow any comparisons of their differences” (2003). A possible interpretation of this is that it might be valuable to reflect on the cases that produce the most extreme results on the variables. Given the nature of the propositions and the explanatory character of the qualitative part of this research, it is this option, combined with a general description (i.e. no separate sections for individual cases), which is adopted.
4.5.2 Quantitative analysis
In order to test the hypotheses, the closed questions are statistically analysed. The statistical methods that can be employed for this purpose depend on the data and on whether or not they fulfil the relevant assumptions. In this section, these assumptions are expounded and tested for the obtained data. In addition, the applicable tests are explicated. Unless stated otherwise, the general descriptions of the statistical methods are based on Field (2005).
To compare measurements, two families of tests can be used, the suitability of which depends on the data. On the one hand, parametric tests, such as the t test and analysis of variance, need the data to meet four assumptions if they are to produce reliable outcomes: (i) a normal distribution, (ii) homogeneity of variance throughout the data set, (iii) independence between the participants, and (iv) data measurement on a metric scale, i.e. either at an interval or a ratio level. Non-parametric tests, on the other hand, such as the Wilcoxon signed-rank and Kruskal-Wallis tests, demand far less of the data in terms of assumptions. In general, they are built on the principle of ranking the measurements; the tests are then executed on the ranks rather than on the actual data. When comparing measures that are not significantly different, approximately the same rankings are to be expected; the difference between the ranks therefore tells us whether or not the variables vary.
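As an illustration of this ranking principle, the following sketch runs a Wilcoxon signed-rank test on two small vectors of paired scores; the data are hypothetical, and the actual analyses in this research were performed in SPSS.

    # Sketch of a rank-based comparison of hypothetical paired scores.
    from scipy import stats

    cost_scores    = [3.2, 3.0, 3.5, 2.8, 3.1, 3.4, 2.9, 3.3]
    benefit_scores = [2.8, 2.6, 3.1, 2.5, 2.9, 3.0, 2.4, 2.7]

    # The Wilcoxon signed-rank test ranks the paired differences and
    # computes its statistic from those ranks, not from the raw values.
    statistic, p_value = stats.wilcoxon(cost_scores, benefit_scores)
    print(f"Wilcoxon signed-rank: W={statistic:.1f}, p={p_value:.3f}")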
Of the four available levels of measurement (i.e. the nominal, ordinal, interval and ratio scales), the Likert scales used can be considered to be of either the ordinal or the interval level. Both provide an ordered set of answers, but they are dissimilar in the meaning of the difference between values. Whereas variables measured on an interval scale have a constant distance between the answers, allowing any mathematical operation, ordinal scales contain no information on the distance between the units of measurement (Hair, et al. 2006). Although a Likert scale on its own does not provide equal distances, it can in special cases be regarded as interval data. So, although the observation that the Likert scales are strictly ordinal already violates the fourth assumption of parametric tests, further testing is required before the parametric option is dismissed.
Therefore, the data are also tested for normality by means of the Kolmogorov-Smirnov test. First, the multiple related items per question are combined by means of unweighted averages, so as to provide overall scores on the evaluation performance, cost and benefit levels. Second, the Kolmogorov-Smirnov Z’s are calculated and their significance is determined (Table 12). Lacking any significance, the results provide no grounds to reject the normal distribution. Nevertheless, although parametric tests are argued to be fairly robust (Carifio and Perla 2007), the sample size of this research is small and the violation of the measurement-level assumption is severe. Therefore, for now, this research must employ non-parametric tests.
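A sketch of these two steps, with simulated item scores in place of the actual SPSS data; note that testing against a normal distribution whose parameters are estimated from the sample is, strictly speaking, the Lilliefors variant of the Kolmogorov-Smirnov test. Cronbach’s α, reported in Table 12 for each combined item set, is added for completeness.

    # Sketch: combine related Likert items by unweighted averages and test
    # the combined score for normality; the item scores are simulated.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    items = rng.integers(1, 6, size=(31, 12)).astype(float)  # respondents x items

    # Step 1: unweighted average per respondent gives the overall score.
    scores = items.mean(axis=1)

    # Step 2: Kolmogorov-Smirnov test against a normal distribution with
    # the sample's own mean and standard deviation.
    stat, p = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))
    print(f"K-S statistic={stat:.3f}, p={p:.3f}")

    # Cronbach's alpha for the internal consistency of the item set.
    k = items.shape[1]
    alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                           / items.sum(axis=1).var(ddof=1))
    print(f"Cronbach's alpha={alpha:.3f}")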
The non-parametric test to be used depends on two factors. First, it matters whether the number of variables to be compared is two, or more than two. Second, the dependency between the measurements is determinative: are the variables measured for the same participants, and thus related, or are they determined using different participants, and thus independent? An overview of the possibilities is provided in Figure 18. Although several other tests are available for the situations portrayed in Figure 18, they are based on the same principles as the ones noted, and only small differences can be identified. These other tests are therefore left aside.
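The selection logic of Figure 18 amounts to a simple decision rule, sketched below using the test names from the figure.

    # Sketch of the test-selection logic of Figure 18.
    def select_test(n_variables: int, related: bool) -> str:
        """Pick a non-parametric test for comparing measurements."""
        if related:
            return "Wilcoxon signed-rank" if n_variables == 2 else "Friedman's ANOVA"
        return "Mann-Whitney" if n_variables == 2 else "Kruskal-Wallis"

    print(select_test(2, related=True))   # Wilcoxon signed-rank
    print(select_test(3, related=False))  # Kruskal-Wallis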
Structural model
The previous section only covered comparisons between variables. However, to build a model resembling the conceptual research model of Section 3.6 and to test how well certain variables relate to each other, different techniques need to be called upon.
Table 12: On combining evaluation, cost, and benefit items

Variable                     N    Kolmogorov-   Exact sig.   Point         N of    Cronbach's
                                  Smirnov Z     (2-tailed)   probability   items   α
Evaluation performance       31   .550          .894         .000          12      .814
Cost      Included           29   .456          .974         .000          10      .855
          Importance         30   .573          .865         .000          10      .831
          Performance        27   .625          .786         .000          10      .880
          Objectivity        27   .537          .907         .000          20      .765
Benefit   Included           31   .566          .874         .000           6      .450
          Importance         31   .794          .508         .000           6      .354
          Performance        31   .859          .410         .000           6      .567
          Objectivity        30   .864          .402         .000          12      .361
Figure 18: Selecting non-parametric tests

                          Number of variables
                          2                       K (>2)
Related samples           Wilcoxon signed-rank    Friedman's ANOVA
Independent samples       Mann-Whitney            Kruskal-Wallis
Structural Equation Modelling (SEM) combines factor analysis and multiple regression to enable concurrent examination of “interrelated dependence relationships among the measured variables and latent constructs (variates) as well as between several latent constructs” (Hair, et al. 2006, emphasis removed). An alternative to SEM is Partial Least Squares (PLS). In comparison, SEM is covariance-based, aims to show that the complete model with all its paths is plausible, and focuses on the testing of theories and the provision of explanation, whereas PLS is component-based, aims at high significance values to reject the hypothesis of no effect, and is more useful for prediction and exploration (Chin 1998; Hair, et al. 2006; Gefen, Straub, and Boudreau 2000). The latter is also indicated to be less sensitive to sample size considerations (Hair, et al. 2006). Given the small sample and the exploratory nature of the current study, PLS is opted for.
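As an illustration of the component-based approach, the sketch below fits a one-component PLS regression on simulated indicator data using scikit-learn; the variable names are hypothetical, and the snippet merely mimics the idea of relating measured indicators to an outcome.

    # Sketch of a component-based PLS regression on simulated indicators.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    n = 31  # comparable to the sample size of this research

    # X: indicators of perceived objectivity; y: evaluation performance.
    objectivity_items = rng.normal(3.0, 0.5, size=(n, 4))
    performance = objectivity_items.mean(axis=1) + rng.normal(0, 0.3, size=n)

    pls = PLSRegression(n_components=1)
    pls.fit(objectivity_items, performance)
    print("R^2:", pls.score(objectivity_items, performance))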
4.6 Summary and conclusions
The preceding sections provide an overview of the processes and procedures involved in the execution of the empirical part of this research. An explanation is given of the underlying structure, the choices made in designing the questionnaire, and the acquisition of the data. Further, a depiction of the research sample is provided. Last, the techniques applicable in the data analysis are set forth for both the qualitative and the quantitative analysis. The ensuing chapter shows the outcome of applying the content of this chapter to the propositions and hypotheses from Chapter 3.
5 Evaluation in practice
5.1 Introduction
In Chapter 4, on the research strategy and data collection, it is established that the empirical part of this research is executed by means of interviewing senior information system staff members of a sample of organizations. This chapter starts in Section 5.2 with a description of the problem area of information system evaluation as perceived by the representatives of these organizations. This overview serves to illustrate the wide, heterogeneous range of practices executed and the trouble still interfering with the success of these practices. In addition to the general description, several rare, or ‘special’, cases are depicted to illustrate the directions some organizations take in developing their evaluation practices. As such, these examples might serve the development of practices. The quality of these practices and their suitability for other organizations (and for the organizations concerned) is, however, left open. Next, the analysis of the data related to the hypotheses, as well as their testing, is handled in Section 5.3. In line with all preceding chapters, this one closes with a recapitulation of findings and conclusions.
5.2 Understanding evaluation practices
In Chapter 2 it is established that projects are evaluated several times throughout their life cycle. Each evaluation reassesses the reasoning behind the project and whether advancing insights shed a different light on these arguments, as well as on the actions taken and still to be taken. Here, the challenges organizations face when evaluation takes place are discussed based on the data from the semi-structured interviews. Next to the challenges, the section considers the approaches taken to cope with them. To illustrate these practices, common and uncommon issues are identified, and cases that are perceived to be rather unique are addressed separately. The subjects considered throughout the analysis follow the line of questioning as composed in the questionnaire (Appendix A). This includes the general view on evaluation, the practices employed, the involvement of the business, and the role of evaluations in the decision-making process.
It should be noted that, as no attempt is made in this research to measure the impact of the practices found, the common and uncommon cases, as well as the unique practices, can be labeled neither as best nor as worst practices. Also, as the interviews were semi-structured, there is variation in the directions taken during the interviews. The number of occurrences noted for a practice below is therefore only an indication of how often a subject was addressed; it does not necessarily mean that the other organizations do, or do not, employ these practices.
The overview starts with the primary thoughts on the concept of evaluation in Section 5.2.1. After that, it follows the evaluation activities performed from the initiation and justification phases (Section 5.2.2), through the realization stage (Section 5.2.3), and on to exploitation and maintenance (Section 5.2.4). Section 5.2.5 presents the findings and conclusions regarding these issues.
5.2.1 Observations on evaluation
To provide insight into the managers’ mindset on evaluation, the concept itself is first reflected upon. Thirteen of the 32 case organizations show an explicit emphasis on the traditional trio (Sauer, Gemino, and Reich 2007) of time, budget and, to a lesser degree, quality checks at the delivery of a project, in combination with a review of the project process (fourteen). Typically this was expressed in line with the comments of the CIO of a large publisher, who stated that
“[f]ormally there are two evaluations. The first is at project closure, when a project
ends; that is, whether the deliverables are produced. And second, the evaluation of
the project process itself; so, how did the project process go? In that case it’s about
the combination of time, money, and quality.”
Presenting a wider view, seven interviewees also note post-project evaluations of the benefit-related aspects of the projects’ business cases as important, whereas a total of seven include life cycle management principles in their description.
5.2.2 Initiation and justification
When observing the actual practices, all 32 organizations have incorporated practices to evaluate their projects at the initiation and justification phases. These practices stem from one of two points of origin. On the one hand, there is the gradual adaptation of practices (six indications), as can for instance be seen in the situation of a water board where

“during justification the board has an important role, so the question is ‘how to tell the board?’ That’s something that we have learned almost evolutionary.”

On the other hand, ten organizations describe revolutionary changes, often initiated after the failure of a reasonably sized, important project. As the project portfolio manager of a large municipality puts it:
“We used to have a standard, but nobody knew, nobody used it, so that’s what
happens. Then a big project became a total disaster, or at least almost. The
external reviewers of that project advised us, among other things, to start using a
standard practice. Next, our chief financial officer has turned their report into a
law; this is how we do it, these are the terms of how we do business.”
To put this development into practice, especially the larger, process-driven organizations start with a preliminary study, after which the generally accepted practice of building a business case and an accompanying project planning is executed. Next, these documents end up at a committee, which usually goes by the name of portfolio board, domain board, or project steering committee, and typically consists of senior management of the business units.
It is this committee that decides on the selection of projects. In their evaluations, the focus is put on the planning, justification, budget, and scope of the project (sixteen mentions). When looking at the benefits and costs involved with the projects, it is the business case (22 mentions) which should provide an assessment of the expected positive and negative impacts of a project in this phase, and with that the reasoning behind executing it. In this approach, the business case serves as a method for resource deployment accountability rather than for project justification. Overall, establishing the link with business objectives is troublesome and benefit realization plans are missing (mentioned only once). As one interviewee notes,
“what is regular, during the evaluation of an assignment in any case, ..., is the
question of whether the project will do the right things. Often there is no clear
connection between the project results and the business objectives; let alone the
project objectives, because these are really result oriented. Frequently, the
objective of a project is something like ‘we deliver a link between systems A and B,’
which is a result; the business objective however is completely out of sight.”
This raises questions about the extent to which the consequences of project proposals are clear and about the quality of the designed solutions. When considering the involvement of different stakeholders in composing project documentation, it is seen that the tasks are divided between the supplying IT departments and the demanding business units. Complementary to these two main stakeholders, other business entities, such as the legal, finance, and purchasing departments, are seen to keep an eye on the content in five cases. In all three cases where a parent company is present, that company is seen to oversee the activities as well.

In principle, the responsibility for, and decision rights regarding, the project lie within the business, and the IT departments support the business in writing proposals. Three IT managers, however, indicate that the functional question presented by the business often lacks clarity and vision, or as an IT advisor from a hospital put it:
“Often projects are started based on the desire for a certain tool. It is not about
needing a car and having a good look around on the car market, but, ‘I want an
Opel Insignia, gray, with a car radio.’ That’s their functional question. ... We, the IT
department, then say, take a step back, what is it you want? It then becomes clear
that they need a transportation vehicle to bring them from A to B.”
These problems might stem from the fact that, in some organizations, the business has little experience with executing projects. For the average user in such an organization, a project is a one-off exercise; the same advisor continues:
“Suppose I do my first project, some things go right, some things go wrong, team
members learn from that. ... However, the people on the functional side often do
77
5 Evaluation in practice
the implementation of the project, close it, and use it, but do not do a new
project.”
It is here that two interviewees indicate another role for the business case, namely a communicative one. In such cases, the business case serves as a tool in which the overall understanding of the problem area and the effects of the demanded solution are geared to one another.
Once the justification of a single project is handled, the matter of project selection and portfolio management comes in. Corresponding to the situation of single assessments, project selection practices range from purely informal to fully formalized. This is illustrated by the following two contrasting remarks. On the one hand, an IT manager working in logistics notes that
“[t]he management team decides what we are going to do and what the budget is.
With some free interpretation you could call it portfolio management; they know
what is going on.”
On the other hand, however, one of his colleagues at a major player in the transport
industry describes their process as follows,
“we use pair-wise comparison by putting six strategic drivers of the organization
side by side in a matrix. Each driver is given a certain percentage reflecting its level
of importance in percentage. Next, every member of the responsible management
team ranks the projects. This is followed by a discussion on the differences in
ranking results, which are a signal of dissimilarity in interpretation of the strategy.
Finally a consensus will be reached on the weights of the drivers and the projects
are ranked once again.”
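The mechanics of such a driver-based ranking can be sketched as follows; the driver names, weights, and project scores are hypothetical.

    # Sketch of driver-based portfolio scoring; all values are hypothetical.
    drivers = {"growth": 0.30, "cost reduction": 0.25, "compliance": 0.20,
               "customer satisfaction": 0.15, "innovation": 0.10}

    # Consensus scores per project and driver (e.g. agreed upon after the
    # discussion of ranking differences within the management team).
    projects = {
        "Project A": {"growth": 4, "cost reduction": 2, "compliance": 5,
                      "customer satisfaction": 3, "innovation": 1},
        "Project B": {"growth": 2, "cost reduction": 5, "compliance": 3,
                      "customer satisfaction": 4, "innovation": 2},
    }

    def weighted_score(scores: dict) -> float:
        return sum(weight * scores[driver] for driver, weight in drivers.items())

    for name, scores in sorted(projects.items(),
                               key=lambda kv: -weighted_score(kv[1])):
        print(f"{name}: {weighted_score(scores):.2f}")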
In the distribution of scarce resources, it is seen that budget is rarely the main constraining factor. Four interviewees from governmental entities and commercial organizations even indicate that money rarely plays a major role in selecting projects. In the portfolio process from the last quote, for example, the prioritized list is established “even before financial numbers are included.” Following this line of argument, the business case is seen to determine project selection only partially. In one of the cases in which budget was a primary issue, the allocation problem was solved in the company’s own way: they voted on project selection. This case is presented as Special Case A.
Three practices are identified that aid the organizations’ project and portfolio management. First, at least three of the organizations apply a strict categorization of their projects; although varying in order, all of them use the classes of compliance, continuity, and commercial projects. Additional classes include innovative projects and “goodies.” Second, another form of support is provided by actively breaking projects into smaller parts, as performed by another three parties.
Special Case A: Voting for project selection
Most of the organizations with a solid portfolio selection process come to an agreement on the final selection of projects with wholehearted support. One of the interviewed organizations, however, did not manage to reach this agreement and decided to take a new approach: voting. Each committee member was given a virtual budget to vote with on any selection of projects from the portfolio except his own. The approach was seen to raise awareness and commitment within the committee: “there was a lot of communication, everybody was collecting information, ‘I do not understand your project, do explain!’” When the method was eventually evaluated, the committee members were asked whether the voted portfolio was satisfying: “[e]verybody said yes, only three remarks were made.”
As one CIO from the telecommunications industry noted,
“IT projects tend to get out of hand, the best way to stay within your budget is to
have frequent deliveries; smaller bits are often seen as inefficient, but in truth they
are very efficient.”
Finally, six of the interviewees report putting effort into creating a long-range planning, a special case of which is provided in Special Case B.
5.2.3 Realization
After obtaining permission to proceed to the realization phase and starting the project, evaluation turns into a method of project control. Re-evaluation of the business case’s validity is nevertheless scarce, and some organizations already start to show evaluation fatigue, as not all projects receive the level of attention the processes in place would require them to have. Particularly in organizations that execute only a few projects each year, the prevailing attitude inclines towards being happy that a project finally seems to get off the ground. As told by the project portfolio manager of a large multinational in the energy industry:
“Do we really look at the business case, added value, again? I’ll tell you, if I’m
honest, in eight out of ten projects we do not. … No, everybody is just too happy,
finally we got something going, by then the steering committee is way too glad to
reassess the business case if the project is fairly running.”
An explanation possibly lies in the low impact the evaluation appears to have. Asked whether organizations ever stop a project, for instance based on insights gained while updating the business case, the answer is clear: never; “project started is project finished.” Once initiated, it seems that any project must be dragged to the finish line by whatever means possible.
Special Case B: The clean-up calendar
In addition to the ‘normal’ long-range planning, one of the organizations produces a so-called ‘clean-up calendar’, in which the phasing out of currently employed information systems is discussed. “[In a benchmark i]t turned out that there were a number of areas in which we were relatively expensive. Then we said, well, looking at how much we build, that is not strange; the customer also has to clean up once in a while, because if you do not, maintenance will occur twice. Mind you, we provide quarterly reports on what they get and how much they have to pay for that. So if you want to be more efficient with your department, then you have to clean up. That turned into the clean-up calendar, a long-range plan made by IT together with the business. It is also incorporated in the project budgets and portfolio management, because eventually it is the outgrowth of your business case. So ultimately you have to clean up to be efficient. ... You should have some kind of exit strategy. Software is perishable. Therefore it should be in the business case.”
It is not that projects cannot be stopped; several organizations indicate that they have a process in place which could lead to the premature termination of a project (four mentions). However, in none of these organizations, nor in the ones without such a process, has this actually been put to work to the extent that projects are terminated. This leaves the portfolio manager to say that:

“[t]he majority [of projects] keeps simmering, and will have some result. I think a ‘project killer’ would be a beautiful job. Not negative, but positive assessment of course, just to assess.”
In cases where projects do return to the decision board, this can be ascribed to budget overruns or scope issues: “budget issues are communicated with the board; the business case is not renewed, but they know what’s going on,” according to a project manager working for multiple governmental organizations. One hospital, however, never has any trouble with budget shortages, as its CIO notes:
“most projects stop as soon as they are out of money, that’s when they are
finished. ... What you see is that not everything from the requirements has been
implemented, some issues are left aside because the budget was depleted. Very
simple. And if you want to continue, you have to ask for an additional project.
Sometimes we have a 2nd and 3rd phase, but against what additional costs? ... And
what is the substantiation for its importance? It’s actually pretty remarkable how
we do that.”
At project closure, the emphasis on evaluation rises again. A significant factor in this rise could be the influence of project management methods and their clear discharge of projects. It is, therefore, no wonder that, of the two possible evaluation targets, the project’s process and its outcome, the former practically monopolizes this assessment.
At this moment, timely delivery and staying on budget are the most important factors, followed by the quality of the outcome and the scope. The value of these evaluations, however, deserves conscientious attention, as best phrased by the earlier quoted project portfolio manager of the energy supplier:
“One of the biggest issues is that they evaluate, if they evaluate, one or two
months after project closure. By then, people are still in party mode, finally we
finished a project; in the past, the majority of the projects are seen to last forever
and ever. So, imagine, people are very happy, have a party, and evaluate the next
week.”
Another interviewee presented an extremely standardized and formal case in comparison to the other contributing organizations. This practice, reflected upon in Special Case C, is narrowly fitted to the high-risk activities of the organization in the oil industry.
Special Case C: Go/no-go stages
The organization in question applies a structure in which a project is broken down into six
stages. Each of the stages comprises a go/no-go decision which you cannot pass without
the written consent of the served business unit “that the objectives are reached, that they
conform to the results, et cetera … We are very rigid in systematically following each of the
steps, end-to-end, until the system is in use. ... It looks very formal, but in practice it really
works very well.”
The environment for this strictly structured situation is created by the organization’s
project management office which centrally administrates and reports on all projects.
“From their tooling you can observe the status of every project. This includes the phases of
their life cycle, but also a dashboard with status colours, remaining budgets, open issues,
and so on.” From a supply chain point of view, adaptation of the process is required of their suppliers. “It is a way to look at the total life cycle of a project, a single glance on the
position of a project.”
Yet, not only the planning and costs are incorporated into the process. “At the benefits
side the projects’ objectives are continuously observed. And the financial benefits will
follow. It clearly is scope versus objectives, during every single stage.”
5.2.4 Maintenance and exploitation
After project closure, the outcome of the project should make its mark on the organization and deliver its projected benefits, as well as its operational costs. Post-project evaluations can be put in place to observe and manage these effects. By this time, a peculiar situation arises: on the one hand, the benefits are the fundamental reason to execute a project, while on the other hand, the project itself ceases to exist. With the project, the allocated responsibilities for achieving results, as well as for project evaluation, seem to come to an end. Additionally, from a product viewpoint, the performed activities merge into maintenance and exploitation. The tasks formerly held by the project organization are now transferred to the running business, where the optimization of the product becomes part of the daily work.
This transition is reflected in the practical absence of ex-post evaluation of IT projects in all of the cases in which this was discussed; just one statement of some sort of ex-post evaluation was found, let alone a structured process. In contrast, six organizations state that although

“the art is to observe the effect, for instance financially or through added value, after half a year, a year, or maybe even two years”

they “have not seen a post-project review,” that “such activities never happen,” or that
“for some projects we determine that we want to evaluate the outcome after a
year, or two. I think that up till now we have had quite a number of projects for
which that is indicated, but it is never executed.”
The deficiency of practices may reflect the difficulty of ex-post evaluation, as portrayed in an anecdote by a governmental CIO:
“Whether we actually collect the benefits, that is the real art. We have got a
project that should save 20% personnel costs, that’s colossal, and that’s the reason
to proceed with the project. … We’ll have to make sure that the benefits really
happen. Do new tasks not creep into the departments? Which would not be bad by
the way, you can say you do not fire the 20%, or make them superfluous, and you
are doing some organization improvements in that space. But you have to define it
clearly. Such a system which makes our entire chain more efficient, I cannot imagine
that there are no staff savings, but I cannot find them. … But let’s face it, it is hard
to do.”
It therefore remains unclear to the organizations whether or not the benefits are actually realized. This is underlined by the total absence of (proactive) benefit realization plans and activities, which could fill the gap of actual output and outcome evaluation. Additionally, the arguably easier task of evaluating maintenance costs against the initial estimations, for instance by assessing service level agreements, is also missing. A possible exception is presented in Special Case D.
Having observed that, beyond perceptions within the organizations, it remains vague at best whether projects perform, one may wonder whether the activity of evaluation itself is in vain. Eleven interviewees indicate that their organizations learn from projects solely by means of personal experience. Following project management techniques, four organizations refer to lessons learned, though these are noted to be rarely used.
Special Case D: FTE reductions by budget
An exception to the assisted realization of benefits is the situation in which the benefits are aimed at efficiency gains and cost savings by means of FTE reductions. The common method to secure these cost savings is to incorporate them in next year’s budget. Chances are, however, that this budget will be met by implementing various small cuts throughout the organizational entity (in Dutch: ‘kaasschaafmethode’) rather than by means of project results. Three managers indicate that these benefits will not be obtained until an organizational reorganization takes place. “Initially, those benefits are never redeemed ... You just cannot fire 0.1 FTE.”
The absence of a feedback loop can be ascribed to the pressure of daily business, as illustrated in the following statement by an IT supplier:
“There is a lot to be learned from how you did it and with that to improve
everything, but you just do not have the time. Everybody has some champagne,
project succeeded, let’s get on with the next project. ... [But] I think there is little
learning. That has to do with the dynamics of the organization. We are very busy,
this creates a situation in which we do not have a quiet moment to look back. That
moment enters into the matter when a project trips, not until then do we wonder
‘hey boys, what did we do, how did we do that in other projects, and what should
we have done differently?’”
The exception is found only in the negative: when failed projects are acknowledged, some evaluation seems to arise. However, as a governmental portfolio manager puts it:

“[They say] ‘Look, this is what you had anticipated, you calculated that it would be way cheaper, but that is not true at all.’ It’s more guilt building, not constructive.”
The statement directly shows that performing evaluations carries dangers, as also indicated by two other interviewees. For instance, a CIO from the telecommunication sector notes:
“It might reduce motivation, because, especially in the case of product
development... Lots of products fail, that’s just the way it is. We launch about ten
products a year and we are very happy if five succeed. So, the question is whether
[(benefit) evaluation] is not very demotivating, because you have to keep
[developing products], it remains a bit of a gamble.”
Another instance of danger appears when people have to work together in future projects. This seems especially to be the case when organizations are dependent on second or third parties.
5.2.5 Findings and conclusions regarding evaluation
This section discusses the preceding data on the process of evaluation in regard to the propositions defined in Chapter 3. Subsequently, each of the four propositions is tested.
Proposition 1: Project evaluations are not performed when not obligatory
Throughout the life cycle of a project, the emphasis on evaluation fluctuates radically. Initially, quite a few evaluation activities are in place. Notwithstanding a short revival at project closure, these activities decline rapidly once the project has been given its mandate. The evaluations that do take place are thus focused on the obligatory moments as laid down in project management methods. As projects are never stopped while in progress, there seems little reason for the agents to evaluate when not obliged to.

Further, the origin of evaluations provides evidence on the reasons behind the evaluations performed. Outside the normal project cycle of evaluation, the main reason to assess a project is failure; success is often not addressed. Where evaluations originate from a management revolution, and are thus obligatory, problems are indicated to remain in embedding the activity in the organization.

It is therefore concluded that project evaluations are only performed when obligatory, and Proposition 1 is not rejected.
Proposition 2: Project evaluations are demand-based rather than learning-based
In addition to the situation described under Proposition 1, it can be seen that no specific elements aimed at learning from evaluations occur in the included organizations. Learning is accepted to be an experience-based concept only, and no effort seems to be made to increase, for instance, the accuracy of evaluations.

This might also be explained by evaluation fatigue. As people become sick and tired of the old project, they go with the flow and continue with new projects. They are too busy to evaluate the old ones, and learning opportunities are missed. Proposition 2 is therefore not rejected.
Proposition 3: Project evaluations are more concerned with outcome than process
Assessing the focus of the project evaluations, it is seen that, when evaluating, the focus initially is on the project’s process rather than on its results. Hardly any benefit realization, let alone evaluation, activities are found to be in place. In addition, creating a strategic link between the organization and the project is indicated to be a major issue. This is underlined by the few adjustments made in evaluations based on the possibility of (not) gaining benefits; that is, aspects such as budget and planning issues dictate the assessments. Evaluations are therefore regarded as a resource deployment accountability system, rather than as an activity leading to fulfilling the full potential of an investment; although it should be noted that there might be a consciousness-raising element in the form of increased clarity of the deliverables.

It is therefore seen that the division might rather be drawn between the evaluation of project benefits and costs than between outcome and process. Considering benefits, neither outcome nor process is evaluated, whereas evaluations of costs are concerned with both. This renders the evidence inconclusive and thus results in a rejection of Proposition 3.
Proposition 4: Projects are given full approval too early in the development process
In line with the lack of focus on benefits, a project seldom seems to be stopped once started; that is, organizations commit themselves to the resource deployment. However, as many projects still end up failing, the question arises whether organizations could have known before the end that a project would fail, and thus could have saved resources, and when they could have known this. As projects are only evaluated when obligatory, and the two major points in time are project initiation and project closure, these questions are hard to answer. Whatever happens to the failed projects, however, occurs between these two moments of evaluation, whereas during a project, progressive insights should increase the stability and completeness of the information concerning the project and thus enable a better judgement. Proposition 4 therefore cannot be rejected.
5.3 Cost, benefit and evaluation perceptions
In addition to the propositions, several hypotheses were described in Chapter 3. These hypotheses are tested in this section. Successively, the theses on between cost and benefit, within cost, and within benefit differences are tested in Sections 5.3.1 to 5.3.3. Finally, the structural model explaining the performance in evaluating information system projects based on objectivity is tested in Section 5.3.4. The underlying threshold for significance has been set at p<.05 for all tests; where levels <.01 are measured, these are noted for completeness.
5.3.1 Between cost and benefit constituents
Hypothesis 1a: Costs are perceived to be more objective than benefits
The first hypothesis reflects the perceived objectivity of costs versus that of benefits. It is observed that the mode of 3.2 for cost objectivity is higher than the 2.8 for benefits, while the medians do not differ at 3.0 (Table 13). Testing this using the Wilcoxon Signed Rank test results in a significant difference between the perceived Objectivity of costs and benefits, with the Z-value, indicating the number of standard deviations from the expected value, at -3.073 (p<.01, Table 14). As the test is based on positive ranks, the direction of the relation is negative, and benefits are thus perceived to be of lower objectivity. Hypothesis 1a therefore holds.
Table 13: Descriptive statistics

Variable                  n    Mode   Median   IQ range   Skewness   Kurtosis
Costs     Inclusion       29   3.7    3.7      1.1        -.37         .24
          Importance      30   3.9    5.0      1.0        -.03        -.44
          Performance     27   3.3    3.4       .8        -.34        1.76
          Subjectivity    27   3.1    3.0       .7        -.03        -.81
          Politics        29   3.1    3.0       .7        -.09        -.54
          Objectivity     27   3.2    3.0       .5        -.05        -.06
Benefits  Inclusion       31   3.6    3.3       .8        -.31        -.79
          Importance      31   4.0    4.0       .7         .09         .20
          Performance     31   2.8    2.5       .8         .51        -.62
          Subjectivity    31   3.0    3.0       .3         .56         .71
          Politics        30   2.7    3.0       .7        -.27       -1.08
          Objectivity     30   2.8    3.0       .5         .12        -.64
Table 14: Wilcoxon Signed Ranks test - Costs vs. benefits

                         Included   Importance   Performance   Subjectivity   Politics   Objectivity
Z                        -.638 b    -.308 b      -2.109 a      -1.998 a       -3.258 a   -3.073 a
Asymp. Sig. (2-t.)        .524       .758          .035          .046           .001       .002
Exact Sig. (2-tailed)     .533       .766          .034          .045           .001       .001
Exact Sig. (1-tailed)     .267       .383          .017          .023           .000       .001
Point Probability         .004       .004          .001          .001           .000       .000

a. Based on positive ranks.
b. Based on negative ranks.
As indicated, objectivity was measured by two means: the ease with which subjectivity can enter when evaluating a certain cost or benefit item, and the ease with which politics can do the same. Separate tests result in the difference in Subjectivity perception being significant at p<.05 and that in Politics at p<.01 (Table 14). Testing both measures individually therefore reveals no additional deviations.
Hypothesis 2a: Cost evaluations are perceived to be more complete than benefit evaluations
Again building on the observation that the available body of knowledge for costs is larger than that for benefits, the second hypothesis states that cost evaluations are likely to be perceived as more complete than benefit evaluations. Using the Inclusion variable, an initial difference similar to that of the previous hypothesis is observed, with medians of 3.7 and 3.3 respectively (Table 13). The interquartile distance of .8 for benefits versus 1.1 for costs could, however, make up for this variance.
Using the same method for testing, it is seen that this is the case and that the difference is far from significant (Z=-.638, p=.267, Table 14); Hypothesis 2a is thus rejected.
Hypothesis 3a: Cost evaluations are perceived to be less important than benefit evaluations
As cost evaluations were considered to be more standard and less complex than benefit evaluations, Hypothesis 3a examines their perceived importance. The descriptive statistics show a median of 5.0 for costs versus 4.0 for benefits, with corresponding interquartile ranges of 1.0 and .7 (Table 13). Again using the Wilcoxon Signed Ranks test, this results in another rejected hypothesis (Z=-.308, p=.383, Table 14). Interestingly, however, the test is based on negative ranks, which would have indicated a higher importance of costs rather than of benefits, had the result been significant.
Hypothesis 4a: Cost evaluations are perceived to be better performed than benefit evaluations
The final hypothesis on the general difference in perceptions between costs and benefits considers the performance of organizations in evaluating them. The Wilcoxon Signed Ranks test provides a Z-value of -2.109, a significant result at p<.05, thus offering no indication to reject the hypothesis. As expected based on the developed theory, the negative ranks on which the test is based point towards a higher performance in evaluating costs than benefits.
5.3.2 Within cost constituents
The cost items included in the questionnaire describe a total set of five different costs, each measured for the initiation phase and the operational phase of the information system economic life cycle. This allows the following four hypotheses to be tested not only for the entire set of ten items, but also per life cycle phase.
Hypothesis 1b: All cost aspects are perceived to be equally objective
The medians for objectivity cover a range of 5 to 8, approximately equally divided between subjectivity and politics (Table 15). As there are more than two variables to be compared, and these variables are measured for independent respondents, the difference between the cost types is tested by means of the Kruskal-Wallis test, the non-parametric equivalent of the one-way independent ANOVA (Field 2005). The resulting χ2 of 41.8 for objectivity shows that a significant difference exists between the cost types (p<.01), thereby rejecting the hypothesis. For subjectivity and politics, the same level of significance holds.
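A sketch of such a comparison across more than two independent groups, with hypothetical objectivity scores per cost type:

    # Sketch of a Kruskal-Wallis comparison across several cost types;
    # the score vectors are hypothetical.
    from scipy import stats

    hardware   = [8, 7, 8, 6, 8, 7]
    software   = [8, 8, 7, 7, 6, 8]
    user_staff = [5, 6, 4, 5, 6, 5]

    h, p = stats.kruskal(hardware, software, user_staff)
    print(f"Kruskal-Wallis H={h:.2f}, p={p:.4f}")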
Table 15: Descriptive statistics of individual cost items

                     Hardware    Software    IT staff    User staff   External staff
                     ini   op    ini   op    ini   op    ini   op     ini   op
Incl.   Median        4     3     5     4     4     3     3     2      5     4
        Mode          5     5     5     5     5     5     2     2      5     4
        IQ range      2     3     1     2     2     3     2     2      1     2
Imp.    Median        3     4     4     5     4     4     4     4      4     4
        Mode          3     5     5     5     4     5     4     5      5     5
        IQ range      3     2     2     1     2     2     2     2      2     2
Perf.   Median        4     3     4     3     3     3     3     3      4     3
        Mode          4     3     4     3     4     3     3     3      4     3
        IQ range      1     2     1     1     1     1     2     2      2     1
Subj.   Median        4     4     4     3     3     3     3     2      4     3
        Mode          4     4     4     4     3     3     2     2      3     3
        IQ range      2     1     1     2     1     1     2     1      1     1
Pol.    Median        4     4     4     4     3     3     3     3      3     3
        Mode          4     4     3     4     3     3     3     3      3     3
        IQ range      2     2     2     1     2     1     1     1      1     1
Obj.    Median        8     8     8     7     6     6     6     5      7     6
        Mode          8     8     8     8     6     6     6     4      6     6
        IQ range      3     2     3     3     3     1     3     2      2     2
The next step would be to test every possible cost combination by means of Wilcoxon Signed Ranks tests. These tests were carried out, but reporting on every single one of them for all benefits and costs and every hypothesis would be excessive. The remainder of this chapter therefore confines itself to discussing the test results of combinations that demand attention, supported by a broad analysis of the rankings underlying the Kruskal-Wallis tests.
The rankings for objectivity are led by the initial costs for hardware and software, which differ only marginally (Table 16). Staff items are ranked considerably lower, with operational costs and especially the user staff items composing the bottom part of the ranks. Considering also that the external staff costs score best among the staff items, it would seem that trust is put in invoices. The Wilcoxon Signed Rank tests show that while the test for objectivity is insignificant for operational external staff versus operational hardware, it is significant for subjectivity (Z=-1.985, p<.05). The same holds for operational external staff versus each of the initial IT staff, the operational IT staff and the initial external staff costs (Z=-2.372, Z=-2.239 and Z=-1.992 respectively, p<.05).
Table 16: Kruskal-Wallis test cost ranks

                 Inclusion   Importance   Performance   Subjectivity   Politics   Objectivity
Hardware ini.    172.69      113.18       181.35        195.81         182.79     189.00
Hardware op.     128.45      156.98       132.44        176.73         169.03     171.58
Software ini.    207.18      157.27       185.61        181.77         199.13     189.81
Software op.     165.23      186.89       148.61        147.74         176.45     160.37
IT staff ini.    167.98      144.88       164.52        150.81         141.69     143.53
IT staff op.     139.27      168.13       142.20        121.21         122.87     114.77
User staff ini.  121.50      172.69       127.22        118.91         118.80     113.61
User staff op.    90.95      169.21       115.79         95.03         101.07      89.96
Ext. staff ini.  202.57      155.90       201.18        182.40         165.58     169.57
Ext. staff op.   140.35      159.03       151.08        153.67         150.02     147.60
Table 17: Kruskal-Wallis test (grouped by cost item)

              Inclusion   Importance   Performance   Subjectivity   Politics   Objectivity
Chi-Square    50.351      14.615       27.997        40.182         37.245     41.808
df            9           9            9             9              9          9
Asymp. Sig.   .000        .102         .001          .000           .000       .000
On the other hand, the operational costs involved with external staff differ significantly from the initial software costs on politics (Z=-2.203, p<.05), while being insignificant on subjectivity and objectivity.
Focusing on the information system economic life cycle, the perceived objectivity of initial versus operational costs is tested by means of the Wilcoxon Signed Ranks test. The results show a significantly higher objectivity of initial costs than of the matching operational costs for software (Z=-2.540, p<.01, Table 18), IT staff (Z=-2.127, p<.05), and user staff (Z=-2.203, p<.05), as well as for all costs taken together (Z=-1.965, p<.05). The hardware and external staff costs, however, do not differ significantly throughout the life cycle (Z=-1.411 and Z=-1.912 respectively). A possible explanation is provided by the interviewees, who state that hardware is objective as these costs are evaluated on invoices. In addition, external costs in the operational phase are often said to occur within the project contract and/or at a fixed price.
Another striking variation can be seen when decomposing objectivity back into subjectivity and politics. On the one hand, the difference in Subjectivity for software proves to be insignificant; on the other, Politics is insignificant for each of the three items reflecting personnel costs. A possible explanation for the former is again the fixed price. The latter
Table 18: Wilcoxon Signed Ranks Test - Initial vs. operational costs

                                   Hardware   Software   IT staff   User staff   Ext. staff   Total
Inclusion     Z                    -3.027 a   -3.472 a   -1.851 a   -2.534 a     -3.473 a     -3.666 a
              Asymp. Sig. (2-t.)     .002       .001       .064       .011         .001         .000
              Exact Sig. (2-tailed)  .002       .000       .078       .012         .000         .000
              Exact Sig. (1-tailed)  .001       .000       .039       .006         .000         .000
              Point Probability      .000       .000       .018       .005         .000         .000
Importance    Z                    -2.804 b   -2.546 b   -1.529 b   -1.761 b      -.357 b     -2.004 b
              Asymp. Sig. (2-t.)     .005       .011       .126       .078         .721         .045
              Exact Sig. (2-tailed)  .005       .014       .138       .116         .729         .044
              Exact Sig. (1-tailed)  .002       .007       .069       .058         .365         .022
              Point Probability      .002       .005       .005       .008         .013         .001
Performance   Z                    -2.791 a   -2.589 a   -1.270 a   -1.121 a     -2.826 a     -2.897 a
              Asymp. Sig. (2-t.)     .005       .010       .204       .262         .005         .004
              Exact Sig. (2-tailed)  .004       .007       .237       .281         .003         .002
              Exact Sig. (1-tailed)  .002       .004       .119       .141         .002         .001
              Point Probability      .001       .001       .027       .031         .001         .000
Subjectivity  Z                     -.868 a   -1.893 a   -2.326 a   -2.118 a     -1.992 a     -2.106 a
              Asymp. Sig. (2-t.)     .385       .058       .020       .034         .046         .035
              Exact Sig. (2-tailed)  .438       .053       .027       .047         .063         .032
              Exact Sig. (1-tailed)  .219       .026       .014       .023         .031         .016
              Point Probability      .031       .002       .012       .010         .020         .002
Politics      Z                    -1.149 a   -2.226 a   -1.543 a   -2.070 a     -1.511 a     -2.185 a
              Asymp. Sig. (2-t.)     .251       .026       .123       .038         .131         .029
              Exact Sig. (2-tailed)  .305       .039       .156       .063         .250         .026
              Exact Sig. (1-tailed)  .152       .020       .078       .031         .125         .013
              Point Probability      .020       .016       .023       .031         .094         .003
Objectivity   Z                    -1.411 a   -2.540 a   -2.127 a   -2.203 a     -1.912 a     -1.965 a
              Asymp. Sig. (2-t.)     .158       .011       .033       .028         .056         .049
              Exact Sig. (2-tailed)  .176       .010       .041       .031         .068         .050
              Exact Sig. (1-tailed)  .088       .005       .020       .016         .034         .025
              Point Probability      .008       .002       .003       .006         .016         .002

a. Based on positive ranks.
b. Based on negative ranks.
indicates that the operational personnel costs are subject to the same level of deliberate alteration as the initial personnel costs.
Hypothesis 2b: All cost aspects are perceived to be equally complete
Hypothesis 2b considers all costs to be included in evaluations at an equal level. The Kruskal-Wallis test provides a χ2 of 50.35 and a significant result (p<.01). It is therefore concluded that, within the costs, differences exist in the level to which they are included in evaluations (Table 17). The accompanying Kruskal-Wallis ranks show the initial software costs to be included best, closely followed by the costs associated with external staff during the project (Table 16).
Table 19: Descriptive statistics of individual benefit items

                   Efficiency   Effectiveness   Organizational   Technological   Compliance   Wider
                                                transformation
Incl.   Median     4            4               3                4               4            3
        Mode       4            3               3                4               4            2
        IQ range   2            1               2                1               1            2
Imp.    Median     4            4               4                4               5            4
        Mode       4            4               4                4               5            3
        IQ range   2            1               2                1               1            1
Perf.   Median     3            2               3                3               4            2
        Mode       3            2               3                3               4            2
        IQ range   1            1               1                1               1            2
Subj.   Median     3            2               3                3               4            2
        Mode       3            2               3                3               4            2
        IQ range   1            1               1                1               2            1
Pol.    Median     3            2               2                3               3            2
        Mode       2            2               3                3               3            3
        IQ range   1            1               1                1               1            2
Obj.    Median     6            5               5                6               7            5
        Mode       6            4               6                6               6            4
        IQ range   2            2               2                2               2            2
All costs involved with the efforts of user staff, as well as the hardware costs during operations and maintenance, lag behind. Looking at the Wilcoxon tests on cost combinations, the tests closest to insignificance are found for initial external staff versus initial IT staff (Z=-2.062, p<.05) and initial user staff versus initial IT staff (Z=-2.080, p<.05). Just on the other side of the marker are the operational hardware costs compared to the efforts of operational user staff (Z=-1.899, p=.06) and initial IT staff (Z=-1.887, p=.06).

When considering the initial versus the operational costs, the outcome of the Wilcoxon Signed Ranks test, based on positive ranks, reveals that all costs, with the exception of the IT staff costs, are better included for the initial efforts than for the operational ones (user staff at p<.05, the others at p<.01).
Hypothesis 3b: All cost aspects are perceived to be equally important
Running the same tests for the perceived level of importance of each cost item delivers a different image. The χ2 of 14.615 means that the hypothesis is not rejected and that the interviewees in this research assign equal importance to all costs (Table 17). Most striking in the rankings are the operational software costs at the top and the initial hardware costs at the bottom (Table 16). The latter could be due to hardware costs often being indicated as part of a separate overall budget. Interestingly, the Wilcoxon tests on the combinations do indicate slightly significant results for the initial IT staff versus initial hardware costs (Z=-2.021, p<.05) and the operational external staff costs versus initial hardware (Z=-2.078, p<.05). Otherwise, the only two tests coming close are those between the initial user staff costs and their operational equivalent (Z=-1.761, p=.08), as well as the initial hardware efforts (Z=-1.865, p=.06).

Narrowing the reflection again to the life cycle phases results in significant outcomes of the Wilcoxon Signed Rank test for hardware (Z=-2.804, p<.01, Table 18) and for software and costs overall (Z=-2.546 and Z=-2.004, p<.05). However, no significant results are found for the three items on personnel costs. As the results are based on negative ranks, it can be stated that the costs involved with the management and operation of hardware and software are perceived to be of a higher importance than their initial counterparts.
Hypothesis 4b: All cost aspects are perceived to be equally performed
The final hypothesis on differences among cost items regards the perceived level of performance in evaluating each of them. In Table 17, the χ2 resulting from the Kruskal-Wallis test is 28.00, a significant outcome at p<.01. Following Hypotheses 1b and 2b, Hypothesis 4b thus also has to be rejected: clear differences between the evaluation performance on the cost items are found. Table 16 shows the ranks assigned to the efforts of external staff during a project to have the highest perceived evaluation performance, whereas user staff costs and operational hardware costs are evaluated worst. The paired comparison of cost items provides further insight by showing operational external staff versus initial software to be just within the significance threshold (Z=-2.071, p<.05), while the pair of operational and initial IT staff costs just misses it (Z=-1.851, p=.06).

Narrowing the analysis down again to the initiation and operation phases of the IS economics life cycle, the tests provided in Table 18 again prove significant at p<.01 for the costs resulting from hardware (Z=-2.791), software (Z=-2.589) and external staff (Z=-2.826), as well as for the overall costs (Z=-2.897). As the tests are based on positive ranks, it is concluded that the performance of organizations in evaluating operational costs is lower than in evaluating initial investments. Additionally, the differences between the initial and operational costs for IT and user staff are nowhere near significant.
5.3.3 Within benefits constituents
Resembling the cost observations, the benefit side is composed of six items. The matching hypotheses are tested next. Again, the Wilcoxon Signed Rank tests of all combinations of benefit items are only referred to when they provide additional insights.
Hypothesis 1c: All benefit aspects are perceived to be equally objective
The median of the level of Objectivity associated by the interviewees with the six benefit items runs from 5 to 7 with a stable interquartile distance of 2 (Table 19). Testing the benefits with the Kruskal-Wallis test results in a χ2 of 39.906, which is highly significant (p<.01, Table 20). Therefore, the objectivity among the six items is seen to vary. Looking at the ranks associated with the Kruskal-Wallis test (Table 21), the benefits of compliance stand out with the highest level of Objectivity, followed by technological necessity and flexibility at a respectable distance. At the bottom of the ranks, the wider human and organizational issues fall behind the remaining three items. The distribution across Subjectivity and Politics is approximately the same. The individual Wilcoxon Signed Rank tests further confirm this image, with compliance being significantly more objective than all the others (efficiency: Z=-2.881; effectiveness: Z=-3.270; organizational transformation: Z=-3.945; technological: Z=-2.412; wider: Z=-3.582; all based on negative ranks, p<.01). The first non-significant difference is that between efficiency and technological necessity (Z=-1.943, p=.05), which is, however, significant when solely observing the subjectivity scale. It seems that legislative issues are accepted as is, whereas indirect effects are much fuzzier.
Hypothesis 2c: All benefit aspects are perceived to be equally complete
The completeness of benefit elements in evaluations has a median of 3 to 4 and is relatively stable with interquartile distances of 1 to 2 (Table 19). Nevertheless, the perceived Inclusion among benefits is unequal too (χ2=27.419, p<.01, Table 20) and Hypothesis 2c is thus rejected. The ranks provided in Table 21 indicate compliance again to have a reasonable lead and the wider human and organizational benefits to be severely lagging behind. The Wilcoxon tests indicate that the inclusion of effectiveness does not differ significantly from that of organizational transformation (Z=-1.946, p=.05).
Hypothesis 3c: All benefit aspects are perceived to be equally important
The perceptions of the interviewees on the importance of the various benefit elements mostly provide a median of 4 and an interquartile distance of 1 (Table 19). Performing the Kruskal-Wallis test on the benefit ranks indicates that, contrary to the situation of Hypothesis 3b, the Importance ratings are significantly different (χ2=30.032, p<.01, Table 20). In addition, compliance to external necessities proves to be the superior item (Table 21). More remarkable is that, in comparison to the Inclusion level, effectiveness overtakes efficiency and the wider human and organizational issues pass technological necessity and flexibility in the ranking. When tested separately, however, the difference between efficiency and effectiveness proves to be insignificant (Z=-1.776, p=.08).

Table 20: Kruskal-Wallis test (grouped by benefit item)

              Inclusion  Importance  Performance  Subjectivity  Politics  Objectivity
Chi-Square      27.419      30.032       35.597        36.908     30.048       39.906
df                   5           5            5             5          5            5
Asymp. Sig.       .000        .000         .000          .000       .000         .000

Table 21: Kruskal-Wallis test benefit ranks

Rank            Inclusion  Importance  Performance  Subjectivity  Politics  Objectivity
Efficiency         109.19       98.73        88.95         90.33     88.81        87.38
Effectiveness       94.19      113.72        77.61         76.08     81.59        75.98
Org. transf.        75.63       90.45        78.72         85.39     79.90        82.94
Technological      107.42       69.73       111.50        114.66    108.09       114.00
Compliance         123.75      128.38       140.33        137.97    135.61       140.68
Wider               64.85       74.31        78.34         70.79     76.19        69.27
Hypothesis 4c: All benefit aspects are perceived to be equally performed
The final variable to be tested is the perceived Performance in evaluating the benefit items. The descriptive statistics in Table 19 show values resembling those of the other variables, with medians of 2 to 4 and interquartile distances of 1 to 2, and not surprisingly these items also differ significantly (χ2=35.597, p<.01, Table 20). In addition, the compliance issues again take the lead, with a large margin over the technological necessities listed second (Table 21). As such, the level of performance when evaluating compliance issues is perceived to be much higher than when assessing other types of benefits.
5.4 Explaining perceived evaluation performance
With the hypotheses on the individual variables in place, the attention can shift to the conceptual model and regression analysis. Given the distribution among the rankings seen in the previous two sections, this could provide interesting results. As discussed in Section 4.5.2, the PLS technique is applied, using SmartPLS 2.0 (Ringle, Wende, and Will 2005). Following the guide of Andreev, et al. (2009), the model evaluation subsequently addresses the content composition and validity, the constructs' reliability and validity, and the analysis of the results.
Content composition and validity
The base model equals the theory as described in Section 3.5. In short, the perceptions of
an organization’s overall evaluation performance are stated to be determined by the
separate performances in the evaluation of costs and benefits. In turn, these
performances depend on both the importance and presence, i.e. the level of inclusion of
the underlying items, which themselves are also related. Finally it is believed that the
performance perceptions are altered by the level of objectivity of the measured items.
To construct the model, the measured items from the questionnaire (Appendix A) are connected to the latent variables (Table 22). The indicators connecting to the constructs can be either reflective or formative. In the first case, the latent variable affects the indicators, whereas in the latter case it works the other way around, as the measures influence the construct and "jointly determine the conceptual and empirical meaning of the construct" (Jarvis, et al. 2003). Jarvis et al. (2003) distinguish seven stipulations that formative indicators have to meet (with their opposites indicating a reflective connection):

1. "The indicators are viewed as defining characteristics of the construct,
2. Changes in the indicators are expected to cause changes in the construct,
3. Changes in the construct are not expected to cause changes in the indicators,
4. The indicators do not necessarily share a common theme,
5. Eliminating an indicator may alter the conceptual domain of the construct,
6. A change in the value of one of the indicators is not necessarily expected to be associated with a change in all of the other indicators,
7. The indicators are not expected to have the same antecedents and consequences."
While the fourth stipulation may be false, the other conditions point towards a formative connection between the ten cost and six benefit items measured and the constructs described.

Table 22: Indicators of the latent variables

                                                                       Cost items            Benefit items
Variable             Question                                          Initial   Operation   Initial & Operation
Inclusion            To what extent are [item] included in an            5          5               6
                     evaluation? (1)
Importance           How important do you perceive [item] in the         5          5               6
                     evaluation of IS projects? (1)
Performance          How does your organization perform in               5          5               6
                     evaluating [item]? (1)
Subjectivity         How easy is it for subjectivity to enter in         5          5               6
                     when [item] is evaluated? (1)
Politics             How easy is it for politics to enter in when        5          5               6
                     [item] is evaluated? (1)
Overall performance  When considering IS project evaluation, how        10         10
                     do you perceive your (project)
                     organization's... (2)

(1) Questions 17 and 19 in the questionnaire (Appendix A)
(2) Question 11 in the questionnaire, excluding overall performance, cost and benefit management (Appendix A)

Therefore, within SmartPLS, the constructs are built using formative
indicators, as the accompanying data points are actually measures of the construct. This
way, the ten cost items measured from the level of inclusion question are, for instance,
linked to the Cost Inclusion variable, and the Benefit Objectivity variable is constructed
from both the six items of subjectivity and the six items of politics. Moreover, the relations between the main constructs, as drawn up in Figure 19, are reflective, following the described line of reasoning. Unfortunately, the base model cannot be adopted one-on-one, as is explained next.
Initially, the model is to be run in SmartPLS with settings for the data metrics of 0 mean and 1 variance, 1,000 iterations maximum, 1.0e-5 as abort criterion, 1.0 as initial weights, and missing value replacement by the mean. Next, to determine the significance of the paths between the constructs, the statistics are validated by means of bootstrapping. Bootstrapping is a technique that draws a large number of subsamples from the original sample and estimates the model for each of those subsamples, thus relying solely on the sample data (Hair, et al. 2006). The combination of the estimates then provides the estimated coefficients and their expected variability and likelihood of differing from zero (Hair, et al. 2006). The strength of a bootstrap depends on the number of cases and samples used, in which the cases are the number of cases in the original sample and the samples represent the number of times a subsample is drawn. Combining Subjectivity and Politics into the single Objectivity variable, however, causes singular matrices to occur, i.e. levels of extreme harmony between data, which disable bootstrapping in SmartPLS. As Objectivity is a combination of Subjectivity and Politics, it is anticipated that splitting this part of the model is less harmful than adjusting the data and reducing the internal consistency. Therefore, the combined Objectivity variable is subdivided into separate Subjectivity and Politics variables, causing the overall model to be determined as illustrated in Figure 19.
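As an illustration of the resampling idea only, and not of SmartPLS's full algorithm, a minimal sketch of bootstrapping the t-statistic of a single standardized path coefficient might look as follows; all names and data are hypothetical:

    import numpy as np

    def bootstrap_path_t(x, y, n_samples=500, seed=0):
        """Bootstrap the t-statistic of a standardized path coefficient.

        Draws n_samples subsamples (with replacement) from the original
        cases, re-estimates the standardized coefficient on each, and
        divides the full-sample estimate by the standard deviation of
        the bootstrap estimates.
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(x, float)
        y = np.asarray(y, float)

        def beta(a, b):
            # Standardized simple-regression coefficient (0 mean, 1 variance).
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            return (a * b).mean()

        full = beta(x, y)
        n = len(x)
        estimates = np.empty(n_samples)
        for i in range(n_samples):
            idx = rng.integers(0, n, n)  # resample cases with replacement
            estimates[i] = beta(x[idx], y[idx])
        return full / estimates.std()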
Construct reliability and validity
To have a valuable model, the constructs need to be represented as well as possible. However, as pointed out by Chin (1998), formative indicators do not necessarily have a high internal consistency, and it is even stated that no standard measure of the content validity of formative constructs is available (Jarvis, et al. 2003; Gemino, Reich, and Sauer 2007). This makes, for instance, the Cronbach α's a matter of secondary importance. Relevant, however, is whether multicollinearity appears.
Multicollinearity is the extent to which a variable can be predicted or accounted for by the other variables within the model (Hair, et al. 2006). Previously, Hypothesis 3b established that the various cost items are perceived to be of equal importance, which might already indicate that the level of multicollinearity for this variable is too high. For a thorough investigation, the extent of multicollinearity present in a model can be measured by the variance inflation factor (VIF).
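For readers wishing to reproduce such a check, a minimal sketch using statsmodels is given below; the input is assumed to be a cases-by-latent-variables matrix of mean-centred scores (e.g. latent variable scores exported from the PLS tool), which is an assumption about the data layout rather than a description of the study's files:

    import pandas as pd
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    def vif_table(scores: pd.DataFrame) -> pd.Series:
        """VIF of each latent variable against all other latent variables.

        VIF_j = 1 / (1 - R2_j), where R2_j stems from regressing
        variable j on the remaining variables. Mean-centred scores are
        assumed, so no intercept column is added.
        """
        X = scores.to_numpy(dtype=float)
        return pd.Series(
            [variance_inflation_factor(X, j) for j in range(X.shape[1])],
            index=scores.columns,
            name="VIF",
        )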
[Figure 19: Structural model. The figure draws the hypothesized relations between the latent constructs: Cost subjectivity, Cost politics, Cost importance, Cost inclusion and Cost performance; Benefit subjectivity, Benefit politics, Benefit importance, Benefit inclusion and Benefit performance; and Overall performance.]
The VIF is an indicator of "the effect that the other independent variables have on the standard error of a regression coefficient" (Hair, et al. 2006), producing higher scores with an increasing level of multicollinearity. Though it has been suggested that VIF scores should be as low as 2.5 (Diamantopoulos and Siguaw 2006), the more common rule of thumb adopted here is a threshold of 10 (Gefen, et al. 2000).
Upon investigation, the VIF score of Cost Importance indeed turns out to be over the threshold. In addition, it is accompanied by Cost Performance and, to a lesser degree, Benefit Performance, Cost Inclusion and Subjectivity, and Overall Performance (all VIF > 10). A potential solution is to alter the constructs causing the multicollinearity of the model. In this case, the solution opted for is to separate the model into two models; that is, one model only containing the initial cost constructs, the other only the operational ones. The resulting VIF scores are listed in Table 23. In the model in which the latent variables are based on the data on initial costs, the Benefit Performance variable still exceeds the rule of thumb, as does Cost Performance in the model containing the operational costs. However, it is argued that more alterations would cause more harm to the model than would be gained by reducing the variance inflation factor.
Next in the process of testing the constructs' reliability is the indicator validity, which is normally addressed by testing (i) the significance of the path coefficient of the outer model, (ii) the sign of this path coefficient, and (iii) its magnitude (Jahner, et al. 2008; Andreev, et al. 2009). However, as the model is built on taxonomies, this elicits some special circumstances. These circumstances become clear when analyzing one of the four structural components of theory as distinguished by Gregor (2006): the statements of relationship (Tan, Benbasat, and Cenfetelli 2011).

In general, statements of relationship presented in a theory can be associative, compositional, unidirectional, bidirectional, conditional, or causal (Gregor 2006). The actual type of relationship depends on the purpose of the theory assessed. The statements of relationship of explanatory and predictive theories are likely to have a causal nature. With analytic theories, however, the relations can be classificatory, compositional, or associative, but not explicitly causal (Gregor 2006). Here, by using the two taxonomies of benefits and costs to measure the indicators, a compositional relation between the indicators and the constructs is created. Thus, no explicitly causal relation exists between the indicators and the constructs. As testing the indicator validity reflects causality between the indicators and the construct, while the indicators here have a compositional relation with the construct, it is not logical to expect similarity among the indicators of a given construct. Employing tests of indicator validity on variables built on a compositional relationship is therefore theoretically expected to be superficial. The indicator validity tests are therefore eliminated.

Table 23: Variance inflation factors

Latent variable        VIF (initial costs model)   VIF (operational costs model)
Cost importance                 2.138                         2.285
Cost inclusion                  3.742                         3.918
Cost performance                5.924                        11.133
Cost subjectivity               3.995                         5.360
Cost politics                   5.579                         5.688
Benefit importance              2.479                         5.158
Benefit inclusion               3.636                         3.750
Benefit performance            12.414                         8.569
Benefit subjectivity            3.237                         5.188
Benefit politics                7.905                         5.222
Overall performance             4.098                         3.252
The validity of the constructs not only depends on the within-model validity, but also on the "extent to which the measured variables actually represent the theoretical latent constructs they are designed to measure" (Hair, et al. 2006). The overall validity of the constructs consists of the underlying discriminant, convergent, and external validity. First, discriminant validity assesses whether constructs that should not be related are indeed unrelated. Second, convergent validity checks whether constructs that should be related actually are related. And third, external validity addresses the degree to which the indicators capture the construct. The convergent validity is left aside, however, as "[f]or formative-indicator constructs, convergent validity at the item level is not relevant, because the composite latent construct model does not imply that the measures should necessarily be correlated" (MacKenzie, Podsakoff, and Jarvis 2005). The same goes for the external validity, due to the absence of the reflective constructs necessary for its measurement. The discriminant validity, on the other hand, is discussed next.
To test the discriminant validity, the extent to which the constructs are correlated is observed by examining the confidence intervals. If the model is to pass the test, the construct intercorrelations should be less than .71 (MacKenzie, et al. 2005). Basically,
“[t]his would test whether the constructs have significantly less than half of their variance
in common” (MacKenzie, et al. 2005). To do so, the latent variable correlations, as
provided in Table 27 and Table 28 in Appendix B, are examined. Overall, it is seen that in
each of the models a small number of intercorrelations breach the threshold, most of
which are concerned with combinations of Performance and Objectivity variables. The
vast majority of these violations, however, remain close to the mark. It is therefore
decided that analysis of the models can continue, but that the results should be treated
with care.
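A simple way to screen a latent variable correlation matrix against the .71 threshold is sketched below; the helper and its inputs are hypothetical, assuming the correlations of Table 27 or Table 28 are available as a pandas DataFrame:

    import pandas as pd

    def discriminant_violations(latent_corr: pd.DataFrame, threshold=0.71):
        """Return construct pairs whose intercorrelation breaches the threshold.

        A correlation above .71 implies the two constructs share more
        than half of their variance (.71 squared is roughly .50).
        """
        names = list(latent_corr.columns)
        pairs = []
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                r = float(latent_corr.iloc[i, j])
                if abs(r) > threshold:
                    pairs.append((names[i], names[j], round(r, 3)))
        return pairs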
In conclusion, though not every check involved is passed flawlessly, the constructs' reliability and validity are seen to offer no reasons not to evaluate the model. Having already established the correctness of the composition and validity of the model in the previous section, it is therefore judged legitimate to evaluate the model as proposed. This evaluation is presented in the subsequent section.
Evaluation of the model
The results of both partial least squares regression analyses are presented in Figure 20, for the model containing the initial costs, and in Figure 21, for the operational one. In these figures, three kinds of results are documented: path coefficients, t-statistics, and R-squares. First, the resulting path coefficients are listed on the lines between the constructs concerned. The path coefficients represent the standardized size of the effect of the independent construct on the dependent one, i.e. the size of the standardized regression coefficients or betas. Notably, the subjectivity and politics constructs are measured by means of the easiness for them to enter in, on a scale running from very easy to very hard; higher scores on these constructs thus actually indicate increased objectivity, not increased subjectivity or politics. Second in the figures are the results of the bootstrapping procedures, found in the form of t-statistics enclosed by brackets underneath the corresponding path coefficients. Both bootstraps are executed using 500 cases, 500 samples and no sign changes as bootstrapping settings, and mean replacement as the missing value algorithm setting. The resulting t-statistics indicate significance and are considered with critical values of 2.04 and 2.75, corresponding to p<.05 (*) and p<.01 (**). Finally, the proportions of explained variance of the dependent constructs are listed within the constructs as R-squares. Next, the models are discussed in three parts, by examining (i) the importance-inclusion-performance connections, (ii) the effect of perceived evaluation performance of cost and benefit items on the perceived overall evaluation performance, and (iii) the influence of objectivity on perceived evaluation performance.

[Figure 20: PLS results of the initial costs model (* p<.05, ** p<.01). Path coefficients (bootstrap t-statistics): Cost importance → Cost inclusion .702** (13.279); Cost importance → Cost performance -.303 (1.389); Cost inclusion → Cost performance -.122 (.802); Cost subjectivity → Cost performance .847** (8.474); Cost politics → Cost performance .299** (3.023); Cost performance → Overall performance .802** (3.232); Benefit importance → Benefit inclusion .661** (13.240); Benefit importance → Benefit performance .061 (.656); Benefit inclusion → Benefit performance .347** (4.213); Benefit subjectivity → Benefit performance .453** (5.534); Benefit politics → Benefit performance .182* (2.446); Benefit performance → Overall performance .031 (.121). R-squares: Cost inclusion .493; Cost performance .785; Benefit inclusion .437; Benefit performance .806; Overall performance .655.]
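The critical values quoted above are consistent with a two-tailed t-distribution at 30 degrees of freedom; that df value is an inference from the quoted numbers, not stated in the text. A quick check with scipy:

    from scipy import stats

    df = 30  # assumed degrees of freedom; reproduces the quoted values
    print(round(stats.t.ppf(1 - 0.05 / 2, df), 2))  # 2.04 -> p < .05
    print(round(stats.t.ppf(1 - 0.01 / 2, df), 2))  # 2.75 -> p < .01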
Starting with the triangles of Importance, Inclusion, and Performance, it can be seen that every instance of Importance has a strong positive effect on the Inclusion of cost and benefit items. That is, more important items are also perceived to be included more in evaluations. Little direct effect, however, is found between Importance and Performance, as only for the operational costs are the more important items perceived to be better evaluated. Finally, Inclusion and Performance are significantly linked for every connection but the initial costs one. Especially the benefit halves of the models show strong evidence that more is better.
[Figure 21: PLS results of the operational costs model (* p<.05, ** p<.01). Path coefficients (bootstrap t-statistics): Cost importance → Cost inclusion .642** (13.092); Cost importance → Cost performance .115* (2.132); Cost inclusion → Cost performance .404** (6.011); Cost subjectivity → Cost performance .114 (1.525); Cost politics → Cost performance .500** (5.637); Cost performance → Overall performance .719** (3.524); Benefit importance → Benefit inclusion .611** (12.122); Benefit importance → Benefit performance .099 (1.152); Benefit inclusion → Benefit performance .260** (2.829); Benefit subjectivity → Benefit performance .491** (7.685); Benefit politics → Benefit performance .190* (2.333); Benefit performance → Overall performance .130 (.528). R-squares: Cost inclusion .412; Cost performance .816; Benefit inclusion .374; Benefit performance .801; Overall performance .582.]

Given the relative absence of the link between Importance and Performance and the presence of significant relations between the other two, it might be that Inclusion acts as a mediator between Importance and Performance. To investigate the extent of the mediation of Inclusion, three regression models have to be tested separately. First, the link between Importance and Performance, without any involvement of Inclusion, has to be significant. Second, the Inclusion-Performance link is added to the first model and has to be significant as well. Finally, the link between Importance and Inclusion is made, which has to be significant while the significance between Importance and Performance disappears. An overview of these tests for each of the three possible instances is provided in Table 24.
Based on the effects revealed among the three models, only the one containing the initial cost items originally shows a significant relation between Importance and Performance that disappears in the final model. However, as the regression analysis of the Inclusion-Performance relation does not provide a significant result in this final step for this model, it can be concluded that in none of the cases Inclusion serves as a complete mediator between Importance and Performance. Therefore, no causal sequence between the three variables is likely to be present.
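The three-step procedure resembles the classic Baron and Kenny approach to mediation; a minimal sketch with ordinary least squares is given below, as an approximation for illustration rather than the PLS estimation actually used for Table 24, with all names hypothetical:

    import numpy as np
    import statsmodels.api as sm

    def mediation_steps(importance, inclusion, performance):
        """Three-step mediation check (Baron & Kenny style).

        Model I:   performance ~ importance
        Model II:  performance ~ importance + inclusion
        Model III: inclusion  ~ importance
        Complete mediation is suggested when the importance effect of
        model I is significant but vanishes once inclusion is controlled.
        """
        X1 = sm.add_constant(np.asarray(importance, float))
        m1 = sm.OLS(np.asarray(performance, float), X1).fit()
        X2 = sm.add_constant(np.column_stack([importance, inclusion]))
        m2 = sm.OLS(np.asarray(performance, float), X2).fit()
        m3 = sm.OLS(np.asarray(inclusion, float), X1).fit()
        return m1, m2, m3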
Table 24: Mediation effect of Inclusion

                                 Initial costs       Operational costs    Benefits
Model                            beta    t-stat      beta    t-stat       beta    t-stat
I    Importance-Performance      .785     2.298 *    .850    56.350 **    .881     1.040
II   Importance-Performance      .422     4.971 **   .767     9.304 **    .530    13.347 **
     Inclusion-Performance       .470     4.559 **   .109     2.215 *     .457    11.204 **
III  Importance-Performance      .429     1.295      .876     9.602 **    .539    10.769 **
     Inclusion-Performance       .395     1.702     -.054     0.586       .374     7.590 **
     Importance-Inclusion        .787    18.834 **   .832    49.701 **    .845    53.754 **

* p<.05, ** p<.01

Second, the Overall Evaluation Performance shows the same image in both models. On the one hand, the influence of the perceived Performance of evaluating benefit items on the perceived Overall Evaluation Performance is non-existent. The perceived Performance of evaluating cost items, on the other hand, accounts for approximately 60% of the variance within the Overall Evaluation Performance. Satisfaction with the overall evaluation performance therefore seems to stem single-handedly from the cost evaluations. This could indicate a lack of focus on benefits, or at least a feeling of discomfort with the evaluation of benefits.

Third, with regard to Objectivity, the model containing the initial cost items shows significant results for each of the relations between Subjectivity or Politics and the corresponding Performance construct. That is, when it becomes harder for subjectivity or politics to enter in on an evaluation, i.e. when the objectivity increases, the perception of the performance in evaluating cost or benefit items increases. Looking at the path coefficients, especially the influence of unintended alterations, Subjectivity, appears to account for a large part of the approximately 80% of explained variance in perceived performance.
Considering the model built up using the operational cost items, the effect of Subjectivity on Cost Evaluation Performance disappears (perhaps replaced by the elsewhere absent Importance-Performance relation), while the other three relations remain intact and even increase in power. It can therefore be concluded that the influence of perceived objectivity on item evaluation performance is large, particularly when considering benefits. Given these insights, a verdict can be reached regarding Hypothesis 5, which is recalled as:
Hypothesis 5: The higher the perceived objectivity, the better the evaluation
performance perception
As the construct of Objectivity had to be disassembled into separate Subjectivity and Politics constructs in order to run the models, a definitive answer to Hypothesis 5 cannot be given. It is, however, seen that Objectivity adds to perceived evaluation Performance, especially on the benefit side of the models. In addition, Politics plays its part on either side. Hypothesis 5 is therefore determined to be plausible.
5.5 Summary and conclusions
The center point of this chapter is the analysis of the gathered data and the reflection hereof on the propositions and hypotheses; an overview of the results is provided in Table 25.
In general, the qualitative analysis of the data gathered in the interviews showed that the evaluation activities deployed in organizations are strongly influenced by the extent to which an evaluation is obligatory. In addition, the activities are demand-based and focused on the justification of resources (to be) spent, actions undertaken and choices made. Therefore, little space seems available for learning, either to increase the quality of future justifications or to amend the resource-spending processes and projects currently running. A possible implication hereof is that expectations are likely to continue to end up as disappointments.
Moving to the evidence provided by the quantitative analysis, no significant differences are seen to exist between the levels of inclusion and importance of cost and benefit items. In regard to performance, however, the cost items prevail. Additionally, the conceptual model shows that organizations put an emphasis on including those items they perceive to be important, whereas the effects of inclusion and importance on the perceived performance diverge.

Considering objectivity, cost items are seen to be more objective than benefit items. Further, after splitting the construct into subjectivity and politics, significant results are found for objectivity influencing the perceived evaluation performances of both cost and benefit items. Objectivity thus adds to the perceived quality of the
performance. As costs are perceived to be more objective and better evaluated, the development of benefit evaluation might aim for methods, techniques, or procedures that increase benefit objectivity in the process of evaluating information system projects.
In the next chapter, the overall conclusions are drawn up in combination with a reflection
on the research.
Table 25: Overview of tested theses and results (1)

Id             Thesis                                                                              Test
Proposition 1  Project evaluations are not performed when not obligatory                            √
Proposition 2  Project evaluations are demand-based rather than learning-based                      √
Proposition 3  Project evaluations are more concerned with outcome than process                     x
Proposition 4  Projects are given full approval too early in the development process                √
Hypothesis 1a  Costs are perceived to be more objective than benefits                               ++
Hypothesis 1b  All cost aspects are perceived to be equally objective                               -
Hypothesis 1c  All benefit aspects are perceived to be equally objective                            -
Hypothesis 2a  Cost evaluations are perceived to be more complete than benefit evaluations          -
Hypothesis 2b  All cost aspects are perceived to be equally complete                                -
Hypothesis 2c  All benefit aspects are perceived to be equally complete                             -
Hypothesis 3a  Cost evaluations are perceived to be less important than benefit evaluations         -
Hypothesis 3b  All cost aspects are perceived to be equally important                               +
Hypothesis 3c  All benefit aspects are perceived to be equally important                            -
Hypothesis 4a  Cost evaluations are perceived to be better performed than benefit evaluations       +
Hypothesis 4b  All cost aspects are perceived to be equally performed                               -
Hypothesis 4c  All benefit aspects are perceived to be equally performed                            -
Hypothesis 5   The higher the perceived objectivity, the better the evaluation performance
               perception                                                                           √

(1) √ = not rejected, x = rejected, - = rejected, + = not rejected at p<.05, ++ = not rejected at p<.01.
6 Summary and conclusions
6.1 Introduction
In the first chapter of this research the objective is described as improving the understanding
of objectivity in the evaluation of information system economics. In pursuit of this objective
the second chapter provided a literature-based description of the relevant aspects within
the addressed body of knowledge. Subsequently, in Chapter 3, a description of theories
potentially valuable in obtaining the objective of this research is provided. This resulted in a
list of propositions and hypotheses. To gather data to test these theses, the empirical
process of the research is set up in Chapter 4. This led to the interviewing of 32 information
system managers. Finally, the obtained data was processed in Chapter 5, resulting in an
outcome for each of the theses.
This chapter addresses the consequences of these outcomes as well as the implications of the research process itself. First, in Section 6.2, the research questions and results are reviewed. Subsequently, the same is done for the employed research approach, leading to an overview of limitations in Section 6.3. Next, in Section 6.4, the contributions of the research, to practice as well as to theory, are discussed. Section 6.5 then contains suggestions for future research. To conclude the research, final remarks are placed in Section 6.6.
6.2 Review of research questions and results
The overall research objective of improving the understanding of objectivity in the
evaluation of information system economics is addressed by four research questions. The
first three questions address the differences between information system costs and benefits
in general, in their evaluation from a theoretical viewpoint, and in their evaluation in
practice. The fourth question then deals with the directions organizations can take to
increase the conformity between the two. Each of the questions is addressed separately
next, before the final conclusions on the initial objective are discussed.
Differences between information system benefits and costs
The first question is 'What are benefits and costs and how different are they?' It investigates whether the benefits and costs of information systems should receive the same kind of attention in the process of their evaluation. If differences are found, this could explain some of the apparent issues occurring in the combined evaluation of the two. However, if no such gap exists, a satisfying analysis of the two in relation to each other is likely to be possible. Finding a gap would thus call for a rather different set of improvements to bring the two into conformity than finding none.

It is seen that information systems occur in three dimensions: the functional, the analytical, and the temporal dimension. Any use of the systems, or change within these dimensions, causes costs, and possibly benefits, to arise. Depending on the competitive environment of the organization, and the way it operates herein, these costs and benefits potentially lead to increased
business performance. When assessing information system costs and benefits separately, differences between the two elements surface. On the one hand, the emphasis in research on information system benefits lies heavily on non-financial aspects, with the measurement problems that result. Research on costs, on the other hand, has a mainly financial orientation in the information system literature and shows close links to the field of accounting. Further, little attention is seen to be paid to negative contributions.
The evaluation of costs and benefits provides most value for organizations when the two can be directly consolidated. Surveying the current situation, however, the problems with evaluating the two are seen to be dissimilar. Being built on accounting standards, cost accounting for information systems seems to be converging to a more standard practice with increasing objectivity or, at the very least, uniformity. Progress in the assessment of information system benefits, however, appears to be lagging behind. While the evaluation of costs tries to cope with issues such as addressing the right cost drivers and determining satisfactory cost levels, the assessment of benefits has not progressed in the same manner. In conclusion, information system benefits and costs thus appear to have a different 'denominator', making them apparently unsuitable to be summarized in a single entity as long as their current form persists.
The gap in literature between the assessments of information system benefits and costs
Having developed some understanding of the apparent gap between benefits and costs, the
second question addresses whether these differences are dealt with within the process of
information system evaluation. The question therefore is ‘What is the gap between the
assessments of benefits and of costs in information system evaluation in literature?’ In view
of the overall objective, the contribution of the answer hereto lies in the practical
implications of the differences between costs and benefits; that is, to what extent these
differences are obstructing the evaluation of the two.
It was proposed that differences could stem from varying levels of objectivity, a notion that was seen to exist in four senses: absolute, disciplinary, dialectical, and procedural. In order to address the differences in objectivity in the evaluation of information system benefits and costs, the available methods and their documentation as supplied by the literature were looked into, based on these four senses. Hereto, the amount of objectivity present in these methods was considered by the what, how, which, who, and why of evaluation as provided by the methods.
Unfortunately, the guidance provided on these aspects proved to be sparse. The evaluation methods therefore do not seem to cope with the differences between benefits and costs, at least not in regard to objectivity. It can therefore be concluded that the operationalization of evaluation, as present in literature, does not provide evaluators with appropriate tools to overcome the fundamental differences between benefits and costs and to make meaningful connections between them.
106
6 Summary and conclusions
Organizations’ practices to evaluate information system benefits and costs
Given the previously described shortcomings in evaluation literature on the combined
assessment of information system benefits and costs, the question arises of how
organizations cope with these issues in practice. The third question addressed this problem
by asking ‘Which practices are used by organizations to evaluate costs and benefits?’
In the quest for the answer to this question the earlier findings were combined with ideas
from the New Institutional Economics as well as Behavioural Economics. This resulted in four
propositions, an overview of which is provided in Table 26 (which is identical to Table 25, but
is repeated for convenience). To put these propositions to the test, 32 information system
managers were interviewed about their information system project evaluation practices.
Results from these interviews indicate that throughout a project’s life cycle, the attention for
evaluation seems to vary drastically. In general this means that emphasis is only there when
projects receive their initial assessment at justification or their final verdict at project
closure. With the focus on these two evaluation points, opportunities to use progressive insights to add stability and completeness to a project's existing planning and expectations are bound to be missed.

Table 26: Overview of tested theses and results (repeated) (1)

Id             Thesis                                                                              Test
Proposition 1  Project evaluations are not performed when not obligatory                            √
Proposition 2  Project evaluations are demand-based rather than learning-based                      √
Proposition 3  Project evaluations are more concerned with outcome than process                     x
Proposition 4  Projects are given full approval too early in the development process                √
Hypothesis 1a  Costs are perceived to be more objective than benefits                               ++
Hypothesis 1b  All cost aspects are perceived to be equally objective                               -
Hypothesis 1c  All benefit aspects are perceived to be equally objective                            -
Hypothesis 2a  Cost evaluations are perceived to be more complete than benefit evaluations          -
Hypothesis 2b  All cost aspects are perceived to be equally complete                                -
Hypothesis 2c  All benefit aspects are perceived to be equally complete                             -
Hypothesis 3a  Cost evaluations are perceived to be less important than benefit evaluations         -
Hypothesis 3b  All cost aspects are perceived to be equally important                               +
Hypothesis 3c  All benefit aspects are perceived to be equally important                            -
Hypothesis 4a  Cost evaluations are perceived to be better performed than benefit evaluations       +
Hypothesis 4b  All cost aspects are perceived to be equally performed                               -
Hypothesis 4c  All benefit aspects are perceived to be equally performed                            -
Hypothesis 5   The higher the perceived objectivity, the better the evaluation performance
               perception                                                                           √

(1) √ = not rejected, x = rejected, - = rejected, + = not rejected at p<.05, ++ = not rejected at p<.01.

Additionally, no reasons
were found for those concerned to evaluate projects at other moments in time. As such, the process of evaluation is identified to take place only when mandatory. Regrettably, these mandatory evaluations often miss out on opportunities to evaluate success, as the primary trigger for project assessments appears to be failure. Arguably, there are many more ways for things to go wrong than to go right; this deficiency therefore possibly identifies a huge loss in learning potential.
This view is strengthened by learning being accepted by the organizations as an experience-based concept only. This experience-based learning is in turn hindered by a mental process identified as 'evaluation fatigue': as people become sick and tired of an old project, they continue with new projects while leaving the learning opportunities of the old ones up for grabs.
The activities that do take place are mainly concerned with the justification of allocated
resources, rather than with the realization of benefits and fulfilling the true potential of an
investment. In addition, creating a strategic link between the organization and the project is
indicated to be a major issue.
In conclusion, organizations struggle with the incorporation of the whole of information system economics in the evaluation of information system projects, and mainly focus on a few elements. The combination of all these effects makes increases in, for instance, the accuracy and completeness of evaluations unlikely to emerge, thus keeping drifting predictions and resource squandering more than alive.
Directions for evaluation to reduce the gap
The final question before reaching the overall conclusions is 'Which direction do changes in these practices need to have in order to reduce the gap?' As such, it addresses the perceptions of organizations towards the evaluation of information system costs and benefits and the influence of objectivity thereon. In regard to the central objective of the research, insights into these perceptions explain which elements organizations find important in good evaluations. These elements can then be used to increase the quality of evaluations, as properties of the functioning elements can be reflected onto the lagging ones.
To research these perceptions, a total of five hypotheses was created (Table 26). Together, these hypotheses address the perceptions of information system managers towards the levels of objectivity, completeness, importance and evaluation performance regarding the costs and benefits of information system projects, as well as the overall evaluation performance. Based on these theses, a conceptual model was developed stating that the perception of an organization of its overall evaluation performance depends on the separate performances in evaluating costs and benefits. These performances in turn depend on both the importance
and inclusion of the cost and benefit items identified. Finally, it is believed that the performance perceptions are altered by the level of objectivity of the measured items. To operationalize this model, the notion of objectivity was split into subjectivity and politics, being the unintended and intended alteration of evaluations respectively.
The perceptions of the interviewed managers indicate no significant differences between the levels to which cost and benefit items are included in evaluations and are considered important in evaluations. When focusing on the performance in evaluating the different items, however, the cost items are indicated to receive a higher perceived level of performance. The same goes for objectivity, where the cost items are perceived to be significantly more objective.
The significance of objectivity remains when addressing the perceived performance levels of
the evaluation of costs and benefits as single concepts. For each and every instance, the
perceived performance increases when the easiness for subjectivity or politics to enter in on
the evaluation decreases. That is, higher levels of perceived objectivity actually add to the
image of the evaluation.
Considering the perceived overall evaluation performance, the underlying performances each show a different view. On the one hand, the cost evaluation performance has a large significant effect thereon; the benefit evaluation performance, on the other hand, shows no such effect.
A possible conclusion based on these effects, however, is not that one should directly aim for the same level of performance for benefit evaluation as for cost evaluation, but rather that one should take a good look at how the latter level came into existence. As the evaluation of costs builds on a large foundation from the areas of accounting and finance, it gained an image of objectivity from, at least, a disciplinary and procedural standpoint. Benefits, on the other hand, lack this advantage. To reach the same level of performance, one could therefore follow the same road for benefits as the one costs took; that is, first focus on their management and realization, and then, and not earlier, target the benefits themselves.
Improving the understanding of objectivity in information system economics evaluation
Built upon the apparent difficulties of evaluating the costs and benefits of information
systems, the main objective of this research is to improve the understanding of the influence
of objectivity thereon. In the previous sections, the road to this understanding was paved by
means of answering four associated questions.
The results show that when the perceived objectivity of costs or benefits increases, so does
the perceived performance in the associated evaluations. Additionally, it was seen that costs
are perceived to be significantly more objective than benefits, and that the overall
evaluation performance is solely influenced by the performance in evaluating costs.
Combining these three findings leads to the conclusion that in order to close the gap
between costs and benefits, and therewith increase the quality of evaluations as a whole, increasing benefit objectivity should be seen as a future direction for progress. Notably, through a twist of definitions, one can argue that a better way to put it is perhaps not to strive for increased objectivity itself, but to aim for decreased subjectivity. In this process, the development of cost evaluations can be used as an initial road map. For now, however, it seems unlikely that a solid single contemplation of the two can be included in an evaluation in a satisficing way in the short term. Organizations would therefore be wise to pay separate attention to each of the two, and to ask the question whether an investment is worth the effort, rather than whether it is profitable.
6.3 Limitations of the research
This section deals with the implications that the choices within the research approach have had for the research, by means of the limitations and potential issues they caused. Addressing all stages in the research approach, these limitations and issues are mainly found in two areas: first, the broad nature of the research, and second, the perceptions approach. Each is discussed in more detail next.
First, considering the attempt to cover both the value creation of information systems for organizations and the evaluation hereof, the scope of the research can be considered rather large. The literature review, for instance, includes a large number of concepts to address all aspects considered relevant. Also, the participating organizations in the chosen sample did not stem from one industry or share the characteristics of a certain type of organization. Further, the semi-structured nature of the open questions in the interviews resulted in rather restricted, though valuable, evidence on the propositions. Consequently, the depth of each of these parts might be judged too superficial and the generalizability can be argued to be impeded. However, the broadness also means that a larger area of the problem field could be explored. Overall, and in hindsight, it is believed that, considering the current level of understanding of the evaluation of the costs and, especially, the benefits of information systems, the lost depth is made up for by the bird's eye view.
The second area of limitations originates from dealing with perceptions. That is, the operationalization of the concepts in the empirical part of the research was performed in such a way that it did not include any observations of, for instance, outcomes of the evaluation process or actual political events. Ironically, the objectivity questions are thus as subjective as possible (assuming the possibility of a scale of objectivity). In addition, this means that the results of the analysis are solely relative, as is the measurement of the data points.
6.4 Research contribution
In order for research to optimize its value, contributions to both the academic and the practical field are needed. The biggest contribution of a research effort, however, is found where its theoretical and practical contributions converge. Therefore, what might be the biggest contribution of this research lies in its guidance of the future development of the evaluation of information system economics.
Theoretically, it is shown that large deviations between costs and benefits exist within the field of information systems. Looking beyond cost management alone, cost evaluation was seen to have reached a higher maturity level than benefit evaluation. Mirroring the role cost management has played in the past for cost evaluations, benefit realization management is thus found to be a possible focal point for the development of the evaluation of information system benefits.
In addition, the perception of an organization's performance in the evaluation of information system economics is addressed, with objectivity observed as a possible source of increases in this performance. In doing so, the research shows some of the threats to evaluation performance in the execution of an evaluation process, as well as opportunities taken by organizations to deal with these threats. Though not providing organizations with tools, the research does deliver a direction upon which organizations might reflect and develop their own evaluation practices.
6.5 Suggestions for future research
During the research, several issues were identified that are believed to be interesting directions for future research. Four of these issues are briefly discussed next.

First, though an evaluation based only on benefits (or only on costs) is useless by definition, the subject of benefit management would be served by some individual attention. Especially troublesome areas here are the comparability of benefits among projects and the enabling of post-project reviews.
Next, within information management a large amount of management information is produced, for instance data on incidents and requested changes. This data might be used to manage the current information system portfolio or to prioritize projects but, at the moment, does not appear to be used in this way. Future research on the ways and value of doing so is considered a field of potentially high yields.
Third, the consequences of the process of evaluation reach far beyond providing the organization with an evaluation document. Behavioural consequences of the process might, for instance, cause a high-quality project evaluation to improve the awareness of an organization regarding this project. Eventually, the evaluation could then lead to better projects. Future research could indicate the communicative power of evaluation.
Finally, the concept of objectivity was not the sole focus of this research. As such, its review and operationalization were created from a limited angle only. Given its complexity and the wide variety of angles connected to it, future research focusing solely on objectivity and evaluation is regarded as highly interesting.
111
6 Summary and conclusions
6.6 Final remarks
Will organizations ever be able to create airtight value assessments of their information systems? Very unlikely; the number and behaviour of the variables involved create just too much fuzziness. Can they improve on their practices? Certainly! Does this research provide a road map hereto? By no means. It does, however, add some awareness of how practices are perceived and of which development direction might work. And that is perhaps all one can aim for in the process of evaluation, for "let us not look back in anger, nor forward in fear, but around in awareness" (Thurber).
Appendix A: Questionnaire
The following questionnaire, as discussed in Chapter 4, was used in the interviews. Minor
adjustments have been made for layout purposes. Answer boxes and translations in Dutch
are left out entirely.
The economic evaluation of IS projects: Questionnaire
I. Introduction

This questionnaire addresses the economic evaluation of information system (IS) projects in your organization. The primary questions are separated into three sections: first, the evaluation of IS projects in general; second, the economic aspects considered within the evaluations; and third, contextual aspects of your organization.

II. IS project evaluation in general

1. How would you primarily describe the role of IS in your organization?
   Hints allowed: Strategic, tactical, operational, other, namely...

2. What do you understand by evaluation of IS projects?

3. How does your organization evaluate IS projects?
   Focus: Before, during, and after projects

4. [If unclear in previous question] Who is responsible for evaluating IS projects (building and assessing)?

5. Is the method for evaluating IS projects standardized?
   Hints allowed: A method can be a guideline, or coherent set of guidelines, that prescribes how somebody who is willing to follow it should continue, and under what circumstances. Also a template business case, or total cost of ownership model.

6. [Only if the previous answer is negative] Is there a particular reason why your organization has not standardized the IS evaluation method?

7. How did your organization come to evaluating IS projects the way you do?
   If unclear: What is the reason for evaluating IS projects the way you do?

8. To what extent is the outcome of the evaluation final in making project decisions?
   Hints allowed: Think about negotiability of the outcome, allowed deviation from the method.

9. What is the number of IS projects started by your organization over the last twelve months?
10. To what extent does your organization evaluate IS projects at the following stages? (1 = Never ... 5 = Always)
    - Project initiation stage (pre-project)
    - During the project (ongoing business case)
    - Project closure stage (delivery)
    - Operational stage (after project closure, ex-post evaluation)
    - Other, namely...
11. When considering IS project evaluation, how do you perceive your (project) organization's... (1 = Very poor ... 5 = Very good)
    - Overall performance (of the activities)
    - Competence (for the activities)
    - Experience (in the activities)
    - Training/learning of staff (of the activities)
    - Project selection decisions
    - Project specifications
    - Used evaluation criteria
    - Benefit assessments
    - Cost assessments
    - Risk analysis
    - Project planning
    - Stakeholder identification
    - Scenario analysis (considering alternatives)
Hints: Subjectivity: the estimate differs due to opinion, unconsciously. Politics: the estimate differs due to strategic use, consciously.

12. How easy is it for subjectivity to enter in when IS projects are evaluated? (1 = Very easy ... 5 = Very hard)

13. How easy is it for politics to enter in when IS projects are evaluated? (1 = Very easy ... 5 = Very hard)
14. When considering IS project results, how do you perceive your (project) organization's... (1 = Very poor ... 5 = Very good)
    - Overall project outcomes (for the organization)
    - Timely deliveries (of projects)
    - Meeting budgets (of projects)
    - Deliveries according to specifications (of projects)

    Compared to your competitors, how do you perceive... (1 = Very poor ... 5 = Very good)
    - The overall performance of your organization
    - The overall performance of your IT organization
15. What would you like to change to your IS project evaluation method?

16. Why have you not changed this aspect [being the answer to question 15]?

III. Economic aspects evaluated
17. Please answer the following questions concerning IS cost aspects, considering the evaluation of strategic IS projects.

    Three questions are asked for each item, each answered on a 1-5 scale:
    - To what extent are [item] included in an evaluation? (1 = Never ... 5 = Always)
    - How important do you perceive [item] in the evaluation of IS projects? (1 = Trivial ... 5 = Crucial)
    - How does your organization perform in evaluating [item]? (1 = Very poor ... 5 = Very good)

    Items:
    - Hardware costs – Initial
    - Hardware costs – Maintenance and operation
    - Software costs – Initial
    - Software costs – Maintenance and operation
    - IT staff – Initial
    - IT staff – Maintenance and operation
    - User staff – Initial
    - User staff – Maintenance and operation
    - External staff – Initial
    - External staff – Maintenance and operation
    - Other, namely...
117
18.
For the same items, two further ratings are asked:
a. How easy is it for subjectivity to enter in when [item] is evaluated? (1 = Very easy, 5 = Very hard)
b. How easy is it for politics to enter in when [item] is evaluated? (1 = Very easy, 5 = Very hard)
Items (each rated 1-5 on a and b):
Hardware costs – Initial
Hardware costs – Maintenance and operation
Software costs – Initial
Software costs – Maintenance and operation
IT staff – Initial
IT staff – Maintenance and operation
User staff – Initial
User staff – Maintenance and operation
External staff – Initial
External staff – Maintenance and operation
Other, namely...
Follow-up questions on the last two questions:
- Ask for the reasons behind answers that stand out.
- Ask for reasons if nothing stands out.
- Why does the interviewee think the way he/she does?
19.
Please answer the following questions concerning IS benefit aspects, considering the evaluation of strategic IS projects.
For each item below, five ratings are asked:
a. To what extent is [item] included in an evaluation? (1 = Never, 5 = Always)
b. How important do you perceive [item] in the evaluation of IS projects? (1 = Trivial, 5 = Crucial)
c. How does your organization perform in evaluating [item]? (1 = Very poor, 5 = Very good)
d. How easy is it for subjectivity to enter in when [item] is evaluated? (1 = Very easy, 5 = Very hard)
e. How easy is it for politics to enter in when [item] is evaluated? (1 = Very easy, 5 = Very hard)
Items (each rated 1-5 on a through e):
Efficiency gains
Effectiveness gains
Organizational transformation
Technological necessity and/or flexibility
Compliance to external necessities
Wider human and organizational impacts
Other, namely...
20.
Follow-up questions on the last two questions:
- Ask for the reasons behind answers that stand out.
- Ask for reasons if nothing stands out.
- Why does the interviewee think the way he/she does?
IV. Context
Organizational characteristics
21.
[Ask if not found prior to the interview] Which industry is your organization primarily in?
22.
[Ask if not found prior to the interview] What is the size of your organization in terms of net revenue (€M/y)?
23.
[Ask if not found prior to the interview] What is the size of your organization in terms of total employees (FTE)?
24.
[Ask if not found prior to the interview] Would you describe your organization as a multinational or a national organization?
25.
[Always ask] What is the size of your IS department’s budget in terms of organizational net revenue (%)?
Interviewee characteristics
26.
[Ask if not found prior to the interview] What is your job?
27.
[Always ask] What is your background?
IS/IT
Business/economics
Other, namely ............................................
V. End of interview
28.
Do you have any remarks concerning the evaluation of IS economics that have not been covered, which you would like to mention?
29.
Do you have any other remarks you would like to mention?
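
The five-point answers collected with questions 10 through 19 feed the quantitative analysis reported in the main text. As an illustration only, the sketch below shows one way such responses could be tabulated into the construct scores whose correlations appear in Appendix B; the file name, the column naming scheme, and the use of a simple item mean are assumptions made for this example (the study itself estimated latent variable scores with SmartPLS, Ringle et al. 2005).

    # Illustrative only: tabulating five-point questionnaire responses into
    # construct scores. Assumed input: 'responses.csv' with one row per
    # interviewee and one column per rated item, e.g. 'cincl_hw_initial' for
    # question 17a on "Hardware costs - Initial" (the naming is hypothetical).
    import pandas as pd

    responses = pd.read_csv("responses.csv")

    # Map item columns onto the construct abbreviations used in Appendix B.
    prefixes = ["bimp", "bincl", "bperf", "bpol", "bsubj",
                "cimp", "cincl", "cperf", "cpol", "csubj"]
    constructs = {p: [c for c in responses.columns if c.startswith(p + "_")]
                  for p in prefixes}

    # A simple construct score: the mean of the item ratings per interviewee.
    scores = pd.DataFrame({name: responses[cols].mean(axis=1)
                           for name, cols in constructs.items()})
    print(scores.describe())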
Appendix B: Latent variable correlation
Table 27: Latent variable correlation (operational costs model)

Abbreviations: b = benefit, c = cost; imp = importance, incl = inclusion, perf = performance, pol = politics, subj = subjectivity; evaperf = overall evaluation performance.

         bimp      bincl     bperf     bpol      bsubj     cimp      cincl     cperf     cpol      csubj     evaperf
bimp     1.00000
bincl    0.611412  1.00000
bperf    0.501005  0.746072  1.00000
bpol     0.453605  0.70471   0.801876  1.00000
bsubj    0.319761  0.594255  0.825826  0.781596  1.00000
cimp     0.228513  0.068312  0.002195  0.073184  0.01114   1.00000
cincl    0.133392  0.138807  0.121448  -0.062833 0.09898   0.641807  1.00000
cperf    0.099121  0.160409  0.253485  0.123642  0.206165  0.692431  0.636061  1.00000
cpol     0.171807  0.375233  0.341826  0.273543  0.326048  0.333663  0.380009  0.761519  1.00000
csubj    0.095887  0.142967  0.185137  0.12988   0.318614  0.425347  0.635882  0.724138  0.731261  1.00000
evaperf  0.050186  0.140114  0.31241   0.142527  0.196864  0.465946  0.540283  0.752134  0.660045  0.648543  1.00000
Table 28: Latent variable correlation (initial costs model)

         bimp      bincl     bperf     bpol      bsubj     cimp      cincl     cperf     cpol      csubj     evaperf
bimp     1.00000
bincl    0.661346  1.00000
bperf    0.498379  0.760927  1.00000
bpol     0.410972  0.639705  0.789966  1.00000
bsubj    0.293961  0.568274  0.812906  0.797048  1.00000
cimp     0.03692   -0.120256 -0.090703 -0.110616 0.053136  1.00000
cincl    0.139392  0.011285  0.132092  0.170437  0.301208  0.702014  1.00000
cperf    -0.028793 -0.009523 0.201676  0.217369  0.183242  0.217746  0.261551  1.00000
cpol     0.017882  -0.068476 0.196688  0.204345  0.326161  0.524998  0.586808  0.638313  1.00000
csubj    0.111118  -0.018931 0.11424   0.182881  0.126157  0.466731  0.560062  0.825923  0.685674  1.00000
evaperf  -0.076706 -0.007942 0.193124  0.187529  0.107712  0.10938   0.174454  0.808532  0.385163  0.561971  1.00000
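
Both tables can be reproduced from the latent variable scores of the corresponding PLS model. A minimal sketch, assuming the scores have been exported (for example from SmartPLS) to a CSV file with one column per latent variable, named after the abbreviations above; the file name is an assumption for the example:

    # Illustrative only: computing a latent variable correlation table from
    # exported scores. 'lv_scores.csv' is an assumed file with the columns
    # bimp, bincl, bperf, bpol, bsubj, cimp, cincl, cperf, cpol, csubj, evaperf.
    import pandas as pd

    scores = pd.read_csv("lv_scores.csv")

    # Pearson correlations, rounded to the six decimals used in Tables 27 and 28.
    corr = scores.corr(method="pearson").round(6)
    print(corr.to_string())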
Samenvatting (Summary)
1.
Introduction
"When examining costs, it is important to do so in conjunction with the associated benefits" (Ward and Daniel 2006), and vice versa. If this is not done, a one-sided picture emerges to which no estimate of value can possibly be attached. After all, what level of costs can be justified without any insight into the benefits to be obtained?
Nevertheless, the picture persists that, when evaluating the costs and benefits of information systems, organizations continually struggle with the assessment, realization, and management of benefits. The valuation of costs, by contrast, appears to reach a reasonably robust level, where size rather than classification constitutes the problem. This situation means that an evaluation based on weighing the two against each other will proceed with great difficulty, which may in turn be related to the persistent reports of wasted resources and failed projects (Latimore et al. 2004; McManus and Wood-Harper 2007; Tichy and Bascom 2008). This research attempts to uncover the causes of this problem by mapping the differences between costs and benefits. The consequences of these differences for the evaluation of information systems are also explored.
This summary successively discusses the research problem, the research results, and the conclusions regarding the aim of the research as well as further research. Where necessary, reference is made to the corresponding section of the full research report.
2.
Research problem
Given the situation in which organizations apparently fail to provide information systems with a useful assessment of value, Tillquist and Rodgers (2005) describe the main obstacle as "a lack of a systematic, objective methodology designed specifically to distinguish and identify the contribution of information technology". This is, among other things, because "the dependence on different analysts and other stakeholders produces biased and conflicting valuations". The portfolio of evaluation methods that has been under development since the 1960s (Williams and Scott 1965) indeed appears unable to deal with this, despite being applied (Al-Yaseen et al. 2006). Given the size of this portfolio, it has also become unlikely that yet another method would succeed where the others have not (Powell 1992). Rather than focusing on the development of a new method, this research therefore concentrates on the foundations of evaluation, and on its specific properties, application, and value in practice.
To this end, the research focuses on the differences in perception among those who evaluate with respect to the systematics and objectivity of costs and benefits in valuations. In particular, it investigates why information system evaluation fails to deliver an effective and workable constellation of the two. In this respect, an attempt is made to clarify whether the concept of objectivity is a possible source of this problem. The aim of this research is therefore formulated as follows:
To increase the understanding of objectivity in the evaluation of the economic aspects of information systems.
To achieve this aim, the following subquestions have been formulated:
- What are benefits and costs, and how different are they;
- what is the gap between the assessment of benefits and costs in information system evaluation in the literature, and, assuming such a gap exists,
- which means do organizations use to evaluate benefits and costs, and
- which direction should be taken to reduce the gap?
The central theme in addressing these questions is the difference in the perception of benefits and costs. Perceived objectivity thereby becomes a core concept in the research.
Although research into, and the application of, information system evaluation has a history going back more than 40 years, to date it appears neither particularly well understood nor routinely applied. Insights into the issues described are expected to increase the understanding of some of the fundamental underlying problems.
3.
Research results
To achieve the research aim, the research report addresses the subquestions in turn. Underlying the results, Chapter 2 discusses the theoretical background of the concepts information system, value creation, benefits, costs, and evaluation methods. Chapter 3 then provides the research with theoretical building blocks from New Institutional Economics, Behavioural Economics, and objectivity. Chapter 4 sets out the empirical part of the research, consisting of interviews with 32 information system managers. The results for the individual subquestions are presented below.
Differences between the benefits and costs of information systems
The first subquestion addressed whether the benefits of information systems should receive the same kind of attention as costs. Any differences found could explain problems in determining their joint value. Conversely, if no such differences were found, this would indicate that a satisfactory combination of the two is already possible. Depending on the outcome, different changes are needed to improve the uniformity of benefits and costs.
Chapter 2 shows that information systems can be defined along three dimensions: the functional, the analytical, and the temporal dimension. When these systems are used or changed, costs and, potentially, benefits arise. Depending on the competitive environment in which the organization finds itself, and the way it operates within that environment, these aspects can lead to improved business results.
When the benefits and costs of information systems are examined individually, differences between the two become clearly visible. On the one hand, the emphasis concerning benefits lies strongly on non-financial aspects, resulting in problems of a correspondingly non-financial nature. Research on costs, on the other hand, is primarily financially oriented and shows a strong bond with the field of accounting; it generally pays little or no attention to the non-financial consequences of information systems.
The evaluation of benefits and costs offers organizations the greatest utility when the two can be consolidated directly. Looking at the current state of affairs, however, the two appear so different that such consolidation is unlikely to succeed. Built on accounting standards, cost evaluation appears to have already delivered standard methods, whereas progress in benefit assessment clearly lags behind. Where cost evaluation deals with issues related to the level of measurement, benefit evaluation is dominated by questions of gaining insight. All in all, the benefits and costs of information systems currently appear to have an 'unequal denominator', which makes them, in their present form, seemingly unsuitable for a composite determination of value.
The gap between benefits and costs in the assessment of information systems
Now that these differences between benefits and costs have been made clear, the next question is how they are dealt with in the activity of evaluating. Insights into this determine the magnitude of the force working against a composite evaluation. To answer this question, Chapter 3 adopts an approach which posits that differences in objectivity may underlie it. These differences were subsequently demonstrated in the available evaluation methods, by subjecting them to an analysis of the extent to which the methods and their documentation provide for the what, how, which, who, and why of evaluation.
All in all, guidance on these aspects proves scarce. The evaluation methods therefore appear unable, at least where objectivity is concerned, to cope with the differences between benefits and costs. It can thus be concluded that the operationalization of evaluation, in theory, does not adequately provide instruments to remedy the fundamental inequalities of benefits and costs and to enable a valuable composition of the two.
The practice of evaluating the economic aspects of information systems
Given the shortcomings in the evaluation literature described above, the question arises how organizations deal with these problems in practice. To this end, 32 information system managers from a wide variety of organizations were interviewed. The questionnaire used can be found in Appendix A.
Qualitative analysis of the interviews indicates that attention to evaluation fluctuates strongly over the life cycle of a project. In general, emphasis is found only at the initial assessment of a project's justification and at the moment a project is completed. Since evaluation appears to take place only at these two moments, opportunities to use progressive insight to increase the stability and completeness of existing plans and expectations remain unused. Moreover, no incentives were found that might lead those involved to evaluate at other moments as well. The activity of evaluating can therefore be seen as one of obligation. Unfortunately, the obligatory evaluations also turn out to miss learning moments based on success, since attention only proves to be at an adequate level when projects fail.
This picture is reinforced by the managers' observation that learning capacity mainly resides in the increased experience of those involved. These people are, however, subject to 'evaluation fatigue': those involved gradually seem to tire of ongoing projects, new projects beckon, and the learning moments of the old ones are left unused.
The activities that are carried out are geared more towards justifying or allocating resources than towards realizing benefits and fulfilling the true potential of the investment. The managers also point out that establishing a link at the strategic level between the organization and the project constitutes a major problem.
In summary, organizations still appear to struggle with incorporating the full range of the economic aspects of information systems in project evaluation, as they focus on only a few of its elements. The combination of the effects described makes improvement of, for example, the precision and completeness of evaluations unlikely. Consequently, reports of derailed predictions and squandered resources have certainly not yet been laid to rest.
How to reduce the gap between benefits and costs
The final question before the overall conclusions concerns the direction in which developments could best be sought to reduce the gap found between benefits and costs. The answer to this question is based on the measured perceptions of the interviewed managers regarding the influence of objectivity in the evaluation of the benefits and costs of information systems. These perceptions can help explain which elements organizations consider important in evaluations that are qualified as good.
The perceived levels of objectivity, completeness, inclusion, and evaluation performance were measured for a variety of benefit and cost types, and the perceptions of overall evaluation performance were mapped as well. These concepts come together in a conceptual model. The model describes that organizations' perception of their overall evaluation performance depends on the individual performance of benefit and cost evaluation. These performances, in turn, relate to the importance attached to the items and the level to which the items are included in evaluations. Finally, it is posited that evaluation performance is influenced by perceived objectivity. A graphical representation of this model can be found in Section 5.4.
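
Using the variable abbreviations of Appendix B, these relations can be sketched as a set of structural equations. This is only a schematic rendering of the conceptual model, not the estimated model of Section 5.4: the coefficients and disturbance terms are illustrative symbols, and the subjectivity and politics variables stand in for (a lack of) perceived objectivity.

    \begin{align*}
    \mathit{evaperf} &= \beta_1\,\mathit{bperf} + \beta_2\,\mathit{cperf} + \zeta \\
    \mathit{bperf}   &= \gamma_1\,\mathit{bimp} + \gamma_2\,\mathit{bincl} + \gamma_3\,\mathit{bsubj} + \gamma_4\,\mathit{bpol} + \zeta_b \\
    \mathit{cperf}   &= \delta_1\,\mathit{cimp} + \delta_2\,\mathit{cincl} + \delta_3\,\mathit{csubj} + \delta_4\,\mathit{cpol} + \zeta_c
    \end{align*}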
The results of the measured perceptions indicate that, in the managers' view, there is no significant difference between the level of inclusion or the importance of benefit and cost items. Looking at evaluation performance, cost evaluations are rated higher; the same holds for perceived objectivity. The significance of objectivity persists when its influence on evaluation performance is examined: in all cases, the perceived performance of an evaluation improves as objectivity increases (or, perhaps better put, as subjectivity decreases). Finally, overall evaluation performance turns out to depend only on the level achieved in cost evaluations; performance in the evaluation of benefits currently appears to play no meaningful role here.
A potential conclusion from all these effects is, however, not that the same degree of objectivity should immediately be pursued for benefits as for costs. Cost evaluation has come a long way to reach this level, and it is precisely this path that may matter in the development of benefit evaluations. That is, it seems sensible to focus first on the realization and management of benefits, and only then to move on to the corresponding levels.
4.
Conclusions
Building on the apparent problems in jointly evaluating the benefits and costs of information systems, the aim of this research was defined as increasing the understanding of objectivity in the evaluation of the economic aspects of information systems. The results for the subquestions presented in the previous section have paved the way for this.
All in all, the results indicate that as the perceived objectivity of benefits or costs rises, the corresponding perception of evaluation performance follows. Costs also prove to be perceived as significantly more objective than benefits, and the perception of overall evaluation performance is influenced only by the performance of cost evaluation. Combining these three observations, it can be concluded that improving the level of objectivity related to benefit evaluations can be a valuable direction of development, for which the path followed by cost evaluation can serve as a guide. For now, however, it seems unlikely that a joint contemplation of benefits and costs can be carried out satisfactorily. Organizations would therefore be wise to pay separate attention to the two, and to prefer the question of whether they consider an investment worth the effort over the question of profitability.
For further research, this primarily means that the subject of benefits management deserves special attention, in particular the comparability of benefits across projects and the possibility of ex-post project evaluation. The use of perceptions also indicates that perception may have more influence than the eventual evaluation itself. Research into the communicative power of evaluation may therefore prove extremely productive.
Publications related to the research
Schuurman, P.M., and Berghout, E.W., (2006), "Post-project evaluation of the IT business
case: The case of an energy supply company", in: Proceedings of the 13th
European Conference on Information Technology Evaluation (ECITE 2006),
Academic conferences Ltd., Reading, pp. 424-432.
Van Wingerden, D., Berghout, E.W., and Schuurman, P.M., (2009a), "Benchmarking IT
benefits: Exploring outcome- and process-based approaches", in: Proceedings of
the 15th Americas Conference on Information Systems (AMCIS 2009), K.E. Kendall
and U. Varshney (eds.), Association of Information Systems, San Francisco.
Schuurman, P.M., Berghout, E.W., and Powell, P., (2009b), "Benefits are from Venus,
Costs are from Mars", in: Proceedings of the 3rd European Conference on
Information Management and Evaluation (ECIME 2009), Academic Conferences
International, Göthenburg.
Schuurman, P.M., and Berghout, E.W., (2008), "Identifying information systems in
practice", in: Citer Working Paper Series, Centre for IT Economics, Groningen.
Schuurman, P.M., Berghout, E.W., and Powell, P., (2008), "Calculating the importance of
information systems: The method of Bedell revisited", in: Citer Working Paper
Series, Centre for IT Economics, Groningen.
Schuurman, P.M., Berghout, E.W., and Powell, P., (2009a), "Benefits are from Venus,
Costs are from Mars", in: Citer Working Paper Series, Centre for IT Economics,
Groningen.
Schuurman, P.M., (2010), "IS evaluation: Benefits are from Venus, costs are from Mars",
CIOnet Magazine 7, p 11.
Van Wingerden, D., Schuurman, P.M., and Van der Pijl, G., (2009b), "Benchmark
effectiviteit van informatiesystemen", in: Checklisten Information Management,
SDU Uitgevers.
Berghout, E.W., and Powell, P., (2010), "IS/IT benefits management and realization”, in:
Strategic information systems management, K. Grant, R. Hackney and D. Edgar
(eds.), Cengage Learning EMEA, Hampshire, ISBN: 978-1-4080-0793-8, pp. 273-299.
References
't Hart, H., Van Dijk, J., De Goede, M., Jansen, W., and Teunissen, J., (1998), Onderzoeksmethoden,
(4th ed.), Boom, Amsterdam, ISBN: 90-5352-451-7.
Ahituv, N., and Neumann, S., (1987), "Decision making and the value of information," in:
Information analysis: Selected readings, R. Galliers (ed.), Addison-Wesley Publishing
Company, Inc., Sydney (etc.), ISBN: 0-201-19244-6, pp. 19-43.
Al-Yaseen, H., Eldabi, T., Lees, D.Y., and Paul, R.J., (2006), "Operational Use evaluation of IT
investments: An investigation into potential benefits," European Journal of Operational
Research 173:3, pp 1000-1011.
Alshawi, S., Irani, Z., and Baldwin, L., (2003), "Benchmarking information technology investment
and benefits extraction," Benchmarking: An International Journal 10:4, pp 414-423.
Alston, L.J., and Mueller, B., (2005), "Property rights and the State," in: Handbook of New
Institutional Economics, C. Ménard and M.M. Shirley (eds.), Springer, Dordrecht, The
Netherlands, ISBN: 978-1-4020-2687-4, pp. 573-590.
Anandarajan, A., and Wen, H.J., (1999), "Evaluation of information technology investment," Management Decision 37:4, pp 329-339, http://www.emeraldinsight.com/10.1108/00251749910269375.
Andreev, P., Heart, T., Maoz, H., and Pliskin, N., (2009), "Validating formative partial least squares
(PLS) models: Methodological review and empirical illustration," in: Proceedings of the
30th International Conference on Information Systems, Phoenix, Arizona.
Anthony, R.N., (1965), Planning and control systems: A framework for analysis, Division of
Research Graduate School of Business Administration, Harvard University, Boston, ISBN:
n/a, LCCN: 65-18724.
Antonides, G., (1996), Psychology in economics and business: An introduction to economic
psychology, (2nd, revised ed.), Kluwer Academic Publishers, Dordrecht, ISBN: 0792341082.
Arrow, K.J., (1984), "Information and economic behaviour," in: Economics of information, Basil
Blackwell Limited, Oxford, ISBN: 0631137378.
Arrow, K.J., (1987), "Reflections of the essays," in: Arrow and the foundations of the theory of
economic policy, G. Feiwel (ed.), New York University Press, New York, pp. 727-734.
Ashurst, C., Doherty, N.F., and Peppard, J., (2008), "Improving the impact of IT development
projects: the benefits realization capability model," European Journal of Information
Systems 17:4, pp 352-370.
Avgerou, C., (2002), Information systems and global diversity, Oxford University Press, Oxford,
etc., ISBN: 0-19-926342-6.
Bakos, J.Y., and Kemerer, C., (1992), "Recent applications of economic theory in information
Technology research," Decision Support Systems 8:5, pp 365-386.
Bannister, F., Berghout, E.W., Griffiths, P., and Remenyi, D., (2006), "Tracing the eclectic (or
maybe even chaotic) nature of ICT evaluation," in: Proceedings of the 13th European
Conference on Information Technology Evaluation, D. Remenyi and A. Brown (eds.),
Academic conferences Ltd., Reading.
Barney, J., (1991), "Firm Resources and Sustained Competitive Advantage," Journal of
Management 17:1, p 99.
Barney, J.B., (2007), Gaining and sustaining competitive advantage, Pearson Prentice Hall, Upper
Saddle River, NJ, ISBN: 0-13-147094-9.
Bedell, E.F., (1985), The computer solution: Strategies for success in the information age, USA,
ISBN: 0-87094-474-6.
Bellini, P., Mersel, R., Alofs, T., et al., (2002), De end-to-end serviceorganisatie, Van Haren
publishing, ISBN: 90-80671-398.
Benham, L., (2005), "Licit and illicit responses to regulation," in: Handbook of New Institutional
Economics, C. Ménard and M.M. Shirley (eds.), pp. 591-608.
Bennington, P., and Baccarini, D., (2004), "Project benefits management in IT projects - An
Australian perspective," Project Management Journal 35:2, pp 20-30.
Berghout, E.W., (1997), "Evaluation of information system proposals: Design of a decision support
method," Delft University of Technology, p. 178.
Berghout, E.W., (2002), "Onzakelijke argumenten regeren investeringsbeleid," in: Automatisering
Gids.
Berghout, E.W., and Powell, P., (2010), "IS/IT benefits management and realization," in: Strategic
information systems management, K. Grant, R. Hackney and D. Edgar (eds.), Cengage
Learning EMEA, Hampshire, ISBN: 978-1-4080-0793-8, pp. 273-299.
Berghout, E.W., and Remenyi, D., (2005), "The eleven years of the European Conference on IT
evaluation: Retrospectives and perspectives for possible future research," Electronic
Journal of Information Systems Evaluation 8:2, pp 81-98.
Bhatt, G.D., and Grover, V., (2005), "Types of Information Technology Capabilities and Their Role
in Competitive Advantage: An Empirical Study," Journal of Management Information
Systems 22:2, pp 253-277.
Bicego, A., Koch, G., Krzanik, L., et al., (1994), "Software Process Assessment & Improvement: The BOOTSTRAP Approach," Blackwell Business.
Bjerknes, G., Bratteteig, T., and Espeseth, T., (1991), "Evolution of finished computer systems: The dilemma of enhancement," Scandinavian Journal of Information Systems 3, pp 25-45.
Boehm, B.W., (1978), Characteristics of Software Quality, North-Holland.
Boston Consulting Group, (1970), "The product portfolio."
Bots, P.W.G., and Sol, H.G., (1988), "Shaping organizational information systems through coordination support," Elsevier Science Publishers B.V. (North-Holland), Amsterdam, pp.
139-154.
Brussaard, B., and Tas, P., (1980), "Information and organization policies in public administration,"
in: Information Processing 80: IFIP congress, S. Lavington (ed.), North-Holland Publishing
Co., Tokyo and Melbourne, Amsterdam, ISBN: 0-4448-6034-7, pp. 821-826.
Brynjolfsson, E., (1993), "The productivity paradox of information technology," Communications
of the ACM 36:12, pp 67-77.
Brynjolfsson, E., and Hitt, L.M., (1998), "Beyond the Productivity Paradox," Communications of the
ACM 41:8, pp 49-55.
Butler, B., and Gray, P., (2006), "Reliability, mindfulness, and information systems," MIS Quarterly
30:2, pp 211-224.
Byrd, T., Thrasher, E., Lang, T., and Davidson, N., (2006), "A process-oriented perspective of IS
success: Examining the impact of IS on operational cost," Omega 34:5, pp 448-460.
Callon, M., (1999), "Actor-network theory: The market test," in: Actor network and after, J. Law
and J. Hassard (eds.), Blackwell and the Sociological Review, Oxford and Keele, pp. 181-195.
Carifio, J., and Perla, R.J., (2007), "Ten common misunderstandings, misconceptions, persistent
myths and urban legends about Likert scales and Likert response formats and their
antidotes," Journal of Social Sciences 3:3, pp 106-116.
Changchit, C., Joshi, K.D., and Lederer, A.L., (1998), "Process and reality in information systems
benefit analysis," Information Systems Journal 8:2, pp 145-162.
Checkland, P.B., and Holwell, S., (1998), Information, systems and information systems: Making
sense of the field, John Wiley & Sons Ltd., Chichester, ISBN: 0-471-95820-4.
Chin, W.W., (1998), "Issues and Opinion on Structural Equation Modeling," MIS Quarterly 22:1, pp. vii-xvi.
Churchman, C.W., (1971), The design of inquiring systems: Basic concepts of systems and
organization, Basic Books, New York (etc.), ISBN: 0-465-01608-1.
Coase, R.H., (1937), "The Nature of the Firm," Economica 4:16, pp 386-405.
Coff, R.W., (1999), "When competitive advantage doesn't lead to performance: The resource-based view and stakeholder bargaining power," Organization Science 10:2, p 119.
Cornford, T., and Smithson, S., (2006), Project research in information systems: A student's guide,
Palgrave Macmillan, Hampshire, ISBN: 1-4039-3471-1.
Daft, R.L., (2007), Understanding the theory and design of organizations, Thomson South-Western, ISBN: 0-324-42271-7.
Darity, W.A.j., (2008), International encyclopedia of the social sciences, (2nd ed.), Macmillan
Reference USA, Detroit, ISBN: 978-0-02-865965-7 / eISBN: 978-0-02-866117-9.
Davis, G.B., (1982), "Strategies for information requirements determination," IBM Systems Journal
21:1.
Davis, G.B., and Olson, M.H., (1985), Management information systems: Conceptual foundations,
structure, and development, McGraw-Hill, New York, ISBN: 90-6233-189-0.
Dawes, J., (2008), "Do data characteristics change according to the number of scale points used?,"
International Journal of Market Research 50:1, pp 61-77.
De Leeuw, A.C.J., (1990), Organisaties: Management, analyse, ontwerp en verandering, Van
Gorcum & Comp. B.V., Assen, ISBN: 90-232-2247-4.
De Looff, L., (1997), "Information systems outsourcing decision making: A managerial approach,"
Idea Group Publishing, Hershey, p. 287.
Demsetz, H., (1967), "Toward a theory of property rights," American Economic Review 57:2, p 347.
Diamantopoulos, A., and Siguaw, J.A., (2006), "Formative versus reflective indicators in
organizational measure development: A comparison and empirical illustration," British
Journal of Management 17:4, pp 263-282.
Diamond, P., and Vartiainen, H., (2007), "Introduction," in: Behavioral economics and its
applications, P. Diamond and H. Vartiainen (eds.), ISBN: 9780691122847, pp. 1-6.
Dier, D.H., and Mooney, J.G., (1994), "Enhancing the evaluation of information technology
investment through comprehensive cost analysis," European Conference for IT Evaluation,
Henley Management College, pp. 1-15.
Doll, W.J., Deng, X., and Scazzero, J.A., (2003), "A process for post-implementation IT
benchmarking," Information & Management 41:2, p 199.
Dos Santos, B.L., (1991), "Justifying Investments in New Information Technologies," Journal of
Management Information Systems 7:4, pp 71-89.
Eisenhardt, K., (1985), "Control: Organizational and economic approaches," Management Science
31:2, pp 134-149.
Eisenhardt, K., (1989), "Agency Theory: An Assessment and Review," Academy of Management
Review 14:1, p 57.
Eisenhardt, K.M., and Martin, J.A., (2000), "Dynamic capabilities: What are they?," Strategic
Management Journal 21:10/11, p 1105.
Emans, B., (2004), Interviewing: Theory, techniques, and training, Routledge, ISBN: 9789020732801.
Farbey, B., Land, F., and Targett, D., (1995), "A taxonomy of information systems application: The
benefits' evaluation ladder," European Journal of Information Systems 4:4, pp 41-50.
Feldman, S.P., (2004), "The culture of objectivity: Quantification, uncertainty, and the evaluation
of risk at NASA," Human Relations 57:6, pp 691-718, 10.1177/0018726704044952.
Field, A., (2005), Discovering statistics using SPSS, (2nd ed.), Sage Publications Ltd., London, ISBN:
978-0761944522.
Fink, D., (2003), "Case analyses of the "3 Rs" of information technology benefit management:
Realise, retrofit and review," Benchmarking: An International Journal 10:4, pp 367-381.
Ford, N., (2004), "Creativity and convergence in information science research: The roles of
objectivity and subjectivity, constraint, and control," Journal of the American Society for
Information Science and Technology 55:13, pp 1169-1182, 10.1002/asi.20073.
Fowler, F.J.J., (2009), Survey research methods, (4th ed.), Sage Publications, Inc., Thousand Oaks,
ISBN: 978-1-4129-5841-7.
Galliers, R., (1987), Information analysis: Selected readings, Addison-Wesley Publication Company,
Sydney (etc.), ISBN: 0-201-19244-6.
Gefen, D., Straub, D.W., and Boudreau, M.-C., (2000), "Structural equation modeling and
regression: Guidelines for research practice," Communication of the Association for
Information Systems 4:Article 7, p 77.
Gemino, A., Reich, B.H., and Sauer, C., (2007), "A Temporal Model of Information Technology
Project Performance," Journal of Management Information Systems 24:3, pp 9-44.
Giddens, A., (1984), The constitution of society: Outline of the theory of structuration, University of
California Press, Berkeley, etc., ISBN: 0-520-05728-7.
Goodson, J.R., and McGee, G.W., (1991), "Enhancing individual perceptions of objectivity in
performance appraisal," Journal of Business Research 22:4, pp 293-303.
Grant, R.M., (1996), "Toward a knowledge-based theory of the firm," Strategic Management
Journal 17, pp 109-122.
Gregor, S., (2006), "The nature of theory in information systems," MIS Quarterly 30:3, pp 611-642.
Grossman, S.J., and Hart, O.D., (1986), "The Costs and Benefits of Ownership: A Theory of Vertical
and Lateral Integration," Journal of Political Economy 94:4, pp 691-719.
Gurbaxani, V., and Seungjin, W., (1991), "The Impact of Information Systems on Organizations and
Markets," Communications of the ACM 34:1, pp 59-73.
Hair, J.F., Jr., Black, W.C., Babin, B.J., Anderson, R.E., and Tatham, R.L., (2006), Multivariate data
analysis, Pearson Prentice Hall, ISBN: 0-13-032929-0.
Harrison, E.F., (1998), The managerial decision-making process, Houghton Mifflin, Boston, ISBN:
978-0-395-90821-1.
Hart, O., and Moore, J., (1990), "Property rights and the nature of the firm," Journal of Political
Economy 98:6, p 1119.
Hevner, A., March, S., Park, J., and Ram, S., (2004), "Design science in information systems
research," MIS Quarterly 28:1, pp 75-105.
Hinton, M., Francis, G., and Holloway, J., (2000), "Best practice benchmarking in the UK,"
Benchmarking: An International Journal 7:1, pp 52-61.
Hirschheim, R., and Smithson, S., (1999), "Evaluation of information systems: A critical
assessment," in: Beyond the IT productivity paradox, L.P. Willcocks and S. Lester (eds.),
John Wiley & Sons, Chichester, etc., pp. 381-409.
Hitt, L.M., and Brynjolfsson, E., (1996), "Productivity, Business Profitability, and Consumer Surplus:
Three Different Measures of Information Technology Value," MIS Quarterly 20:2, pp 121-142.
Horngren, C.T., Foster, G., and Datar, S.M., (1994), Cost accounting: A managerial emphasis,
Prentice-Hall, Inc., Englewood Cliffs, New Jersey, ISBN: 0-13-184763-5.
Hubbard, D.W., (2010), How to measure anything: Finding the value of 'intangibles' in business,
(2nd ed.), John Wiley & Sons, Inc., Hoboken, NJ, ISBN: 978-0-470-53939-2.
International Society for New Institutional Economics, I.S.N.I.E., (2007). "About ISNIE: What is New
Institutional Economics?," http://www.isnie.org/about.html, visited: September 13th
2007.
Irani, Z., Ghoneim, A., and Love, P.E.D., (2006), "Evaluating cost taxonomies for information
systems management," European Journal of Operational Research 173:3, pp 1103-1122.
Irani, Z., and Love, P.E.D., (2000), "The Propagation of Technology Management Taxonomies for
Evaluating Investments in Information Systems," Journal of Management Information
Systems 17:3, pp 161-177.
ISACA, (2007). "Val IT overview," www.isaca.org/valit/, visited: 2008-05-04.
ISO/IEC, (1994), "Information Technology - Software quality characteristics and metrics - Quality characteristics and subcharacteristics, Part 1."
Jahner, S., Leimeister, J.M., Knebel, U., and Krcmar, H., (2008), "A cross-cultural comparison of
perceived strategic importance of RFID for CIOs in Germany and Italy," in: Proceedings of
the 41st Annual Hawaii International Conference on System Sciences, pp. 405-414.
Jarvis, C.B., Mackenzie, S.B., Podsakoff, P.M., Mick, D.G., and Bearden, W.O., (2003), "A Critical
Review of Construct Indicators and Measurement Model Misspecification in Marketing
and Consumer Research," Journal of Consumer Research 30:2, pp 199-218.
Jeffery, M., and Leliveld, I., (2004), "Best Practices in IT Portfolio Management," MIT Sloan
Management Review 45:3, pp 41-49.
Jensen, M.C., and Meckling, W.H., (1976), "Theory of the firm: Managerial behaviour, agency
costs and ownership structure," Journal of Financial Economics 3:4, pp 305-360.
Joslin, E.O., (1977), Computer selection, Technology Press, Inc., Fairfax Station, VA, USA, ISBN:
0893212016.
Kahneman, D., (2003), "Maps of Bounded Rationality: Psychology for Behavioral Economics,"
American Economic Review 93:5, pp 1449-1475.
Kahneman, D., Slovic, P., and Tversky, A., (1982), Judgment under uncertainty: Heuristics and
biases, Cambridge University Press, Cambridge, ISBN: 0521284147.
Kahneman, D., and Tversky, A., (1979), "Prospect theory: An analysis of decision under risk,"
Econometrica 47:2, pp 263-291.
Kahneman, D., and Tversky, A., (2000), Choices, values, and frames, Cambridge University Press,
Cambridge, ISBN: 9780521627498.
Kaplan, R.S., and Norton, D.P., (1992), "The Balanced Scorecard--Measures That Drive
Performance," Harvard Business Review 70:1, pp 71-79.
Kleijnen, J.P.C., (1984), "Quantifying the benefits of information systems," European Journal of
Operational Research 15:1, pp 38-45.
Klompé, R.H., (2003), "The alignment of operational ICT," Dissertation Delft University of
Technology, Delft, Eburon Academic Publishers, p. 193.
Kogut, B., and Zander, U., (1992), "Knowledge of the firm, combinative capabilities, and the
replication of technology," Organization Science 3:3, pp 383-397.
Kusters, R.J., and Renkema, T.J.W., (1996), "Managing IT investment decisions in their
organisational context: The design of 'local for local' evaluation models," in: Proceedings
of the 3rd European Conference on IT Evaluation, A. Brown and D. Remenyi (eds.),
University of Bath, pp. 133-141.
Land, F., (1992), "The information systems domain," in: Information systems research: Issues,
methods, and practical guidelines, R. Galliers (ed.), Alfred Waller Ltd., Henley-on-Thames,
ISBN: 1-872474-39-X, pp. 6-13.
Landrum, H., and Prybutok, V., (2004), "A service quality and success model for the information
service industry," European Journal of Operational Research 156:3, p 628.
Latimore, D.W., Petit dit de la Roche, C., Quak, K., and Wiggers, P., (2004), "Reaching efficient
frontiers in IT investment management: What financial services CIOs can learn from
portfolio theory," IBM Global Business Services.
Latour, B., (1987), Science in action: How to follow scientists and engineers through society, Milton
Keynes: Open University Press, ISBN: 0-335-15357-7.
Laudon, K., and Laudon, J., (2001), Essentials of management information systems, Prentice-Hall
Inc., New Jersey, ISBN: 0-13-019946-X.
Law, J., (1997). "Traduction/trahision: Notes on ANT," http://www.comp.lancs.ac.uk/sociology/papers/Law-Traduction-Trahision.pdf, visited.
Lay, P.M.Q., (1985), "Beware of the cost/benefit model for IS project evaluations," Journal of
System Management 37:2, pp 7-17.
Lin, C., (2002), "An investigation of the process of IS/IT investment evaluation and benefits
realisation in large Australian organisations," in: School of Information Systems, Curtin
University of Technology.
Lincoln, T., (1986), "Do computer systems really pay-off?," Information & Management 11:1, pp
25-34.
Looijen, M., (2004), Beheer van informatiesystemen, Ten Hagen & Stam Uitgevers, Den Haag,
ISBN: 90-440-0707-6.
Lovallo, D., and Kahneman, D., (2003), "Delusions of Success," Harvard Business Review 81:7, pp
56-63.
Love, P., and Irani, Z., (2004), "An exploratory study of information technology evaluation and
benefits management practices of SMEs in the construction industry," Information &
Management 42:1, pp 227-242.
Love, P.E.D., Irani, Z., Ghoneim, A., and Themistocleous, M., (2006), "An exploratory study of
indirect ICT costs using the structured case method," International Journal of Information
Management 26:2, pp 167-177.
Love, P.E.D., Irani, Z., Standing, C., Lin, C., and Burn, J.M., (2005), "The enigma of evaluation:
benefits, costs and risks of IT in Australian small/medium-sized enterprises," Information
& Management 42:7, pp 947-964.
Macho-Stadler, I., Pérez-Castrillo, J.D., and Watt, R., (2001), An introduction to the economics of
information: Incentives and contracts, (2nd ed.), Oxford University Press, Oxford, ISBN:
0199243271.
MacKenzie, S.B., Podsakoff, P.M., and Jarvis, C.B., (2005), "The Problem of Measurement Model
Misspecification in Behavioral and Organizational Research and Some Recommended
Solutions," Journal of Applied Psychology 90:4, pp 710-730.
Markus, M.L., and Robey, D., (1988), "Information Technology and Organizational Change: Causal
Structure in Theory and Research," Management Science 34:5, pp 583-598.
Matell, M.S., and Jacoby, J., (1972), "Is there an optimal number of alternatives for Likert-scale
items?," Journal of Applied Psychology 56:6, pp 506-509.
McCall, J.A., Richards, P.K., and Walters, G.F., (1977), "Factors in Software Quality, Vols I, II, III," US Rome Air Development Center Reports NTIS AD/A-049 014, 015, 055.
McFarlan, F.W., (1981), "Portfolio approach to information systems," Harvard Business Review
59:5, pp 142-150.
McKeen, J.D., and Smith, H.A., (2003), Making IT happen: Critical issues in IT management, John
Wiley & Sons Ltd., Chichester, etc., ISBN: 0-470-85087-6.
McManus, J., and Wood-Harper, T., (2007), "Understanding the Sources of Information Systems
Project Failure," Management Services 51:3, pp 38-43.
Megill, A., (1994), "Introduction: Four senses of objectivity," in: Rethinking objectivity, A. Megill
(ed.), Duke University Press, London, ISBN: 978-0822314943, pp. 1-20.
Ménard, C., (2005), "A new institutional approach to organization," in: Handbook of New
Institutional Economics, C. Ménard and M.M. Shirley (eds.), pp. 281-318.
Ménard, C., and Shirley, M.M., (2005), "Introduction," in: Handbook of New Institutional
Economics, Springer, Dordrecht, The Netherlands, ISBN: 978-1-4020-2687-4, pp. 1-18.
Miles, M.B., and Huberman, A.M., (1994), Qualitative data analysis: An expanded sourcebook, (2nd
ed.), SAGE Publications, Inc., Thousand Oaks, California, ISBN: 0-8039-5540-5.
Milis, K., and Mercken, R., (2004), "The use of the balanced scorecard for the evaluation of
Information and Communication Technology projects," International Journal of Project
Management 22:2, pp 87-97.
Mohamed, S., and Irani, Z., (2002), "Developing taxonomy of information system's indirect human
costs," 2nd International Conference on Systems Thinking in Management, University of
Salford, UK.
Mohr, L.B., (1982), Explaining organizational behavior: the limits and possibilities of theory and
research, San Francisco: Jossey-Bass.
Nichols, G.E., (1969), "On the nature of management information," Management Accounting 913:15.
Nijland, M., (2004), "Understanding the use of IT evaluation methods in organisations,"
Dissertation London School of Economics and Political Science, Gigaboek.nl, pp. v-295.
Nijland, M., and Berghout, E.W., (2002), "Full life cycle management and the IT management
paradox," in: The make or break issues in IT management: A guide to the 21st century
effectiveness, D. Remenyi and A. Brown (eds.), Butterworth-Heinemann, ISBN: 0-7506-5034-6, pp. 77-107.
North, D.C., (2005), "Institutions and the performance of economics over time," in: Handbook of
New Institutional Economics, C. Ménard and M.M. Shirley (eds.), pp. 21-30.
OGC, O.o.G.C., (2005), Introduction to ITIL, Van Haren Publishing, ISBN: 978-0-11-330973-3.
Olle, T.W., Hagelstein, J., Macdonald, I.G., et al., (1991), Information systems methodologies: A
framework for understanding, Addison-Wesley Publishing Company, ISBN: 0-201-54443-1.
Orlikowski, W.J., and Iacono, C.S., (2001), "Research commentary: Desperately seeking the "IT" in
IT Research - A call to theorizing the IT artifact," Information Systems Research 12:2, pp
121-134.
Parker, M.M., Benson, R.J., and Trainor, H.E., (1988), Information economics: Linking business
performance to information technology, Prentice-Hall International, Englewood Cliffs,
New Jersey, ISBN: 0-13-465014-X.
Peppard, J., and Ward, J., (1999), "'Mind the Gap': diagnosing the relationship between the IT
organisation and the rest of the business," The journal of strategic information systems
8:1, pp 29-60.
Pfleeger, S.F., (1991), Software Engineering: the production of quality software, Macmillan
Publishing, New York.
Porter, M.E., (1985), Competitive advantage: Creating and sustaining superior performance, The
Free Press, New York, ISBN: 0-02-925090-0.
Porter, M.E., and Millar, V.E., (1985), "How information gives you competitive advantage,"
Harvard Business Review 63:4, pp 149-160.
Porter, T.M., (1995), Trust in numbers: The pursuit of objectivity in science and public life,
Princeton University Press, Princeton, New Jersey, ISBN: 0-691-03776-0.
Powell, P.L., (1992), "Information technology evaluation: Is it different?," The Journal of the
Operational Research Society 43:1, pp 29-42.
Powell, T.H., and Thomas, H., (2009), "Dynamic knowledge creation," in: The handbook of
research on strategy and foresight, L.A. Costanzo and R.B. MacKay (eds.), Edward Elgar
Publishing Ltd., Cheltenham, ISBN: 978-1845429638, pp. 505-517.
Power, M., (1997), The audit society: Rituals of verification, (paperback ed.), Oxford University
Press, Oxford, ISBN: 978-0-19-829603-4.
Ray, G., Barney, J.B., and Muhanna, W.A., (2004), "Capabilities, business processes, and
competitive advantage: Choosing the dependent variable in empirical tests of the
resource-based view," Strategic Management Journal 25:1, pp 23-37.
Remenyi, D., Michael, S., and Terry, W., (1996), "Outcomes and benefits for information systems
investment," 3rd European Conference for IT Evaluation, Bath University School of
Management, Bath University, Bath.
Remenyi, D., Money, A., and Sherwood-Smith, M., (2000), The effective measurement and
management of IT costs and benefits, Butterworth-Heinemann, Oxford (etc.), ISBN: 0-7506-4420-6.
Remenyi, D., Williams, B., Money, A., and Swartz, E., (1998), Doing research in business and
management: An introduction to process and method, SAGE Publications Ltd., London
(etc.), ISBN: 0-7619-5950-5.
Renkema, T.J.W., and Berghout, E.W., (1997), "Methodologies for information systems
investment evaluation at the proposal stage: a comparative review," Information and
Software Technology 39:1, pp 1-13.
Renkema, T.J.W., and Berghout, E.W., (2005), Investeringsbeoordeling van IT-projecten: Een
methodische benadering, ISBN: 90-805102-3-8.
Ringle, C.M., Wende, S., and Will, A., (2005). "SmartPLS 2.0 (beta)," http://www.smartpls.de,
visited.
Ross, S.A., (1973), "The economic theory of agency: The principal's problem," American Economic
Review 63:2, pp 134-139.
Rumelt, R.P., (2003), "What in the world is competitive advantage?".
Ryan, S.D., and Harrison, D.A., (2000), "Considering social subsystem costs and benefits in
information technology investment decisions: A view from the field on anticipated
payoffs," Journal of Management Information Systems 16:4, pp 11-40.
Sanchez, R., Heene, A., and Thomas, H., (1996), "Introduction: Towards the theory and practice of
competence-based competition," in: Dynamics of competence-based competition: Theory
and practice in the new strategic management, Elsevier Science Ltd., Oxford, ISBN: 0-08-042585-2, pp. 1-35.
Sauer, C., Gemino, A., and Reich, B.H., (2007), "The impact of size and volatility on IT project
performance," Communications of the ACM 50:11, pp 79-84.
Saunders, M., Lewis, P., and Thornhill, A., (2006), Research methods for business students,
Financial Times/Prentice Hall, Harlow, ISBN: 0-2737-0148-7.
Schuurman, P., (2006), "Post-project evaluation of the IT business case: A post-project evaluation
model for business cases of IT projects at Essent Energy," Master's thesis, University of
Groningen, pp. ix-106.
Schuurman, P., (2010), "IS evaluation: Benefits are from Venus, costs are from Mars," CIOnet
Magazine 7, p 11.
Schuurman, P., and Berghout, E.W., (2006), "Post-project evaluation of the IT business case: The
case of an energy supply company," in: Proceedings of the 13th European Conference on
Information Technology Evaluation, D. Remenyi and A. Brown (eds.), Academic
conferences Ltd., Reading, pp. 424-432.
Schuurman, P., and Berghout, E.W., (2008), "Identifying information systems in practice," in: Citer
Working Paper, Centre for IT Economics, Groningen, pp. 1-9.
Schuurman, P., Berghout, E.W., and Powell, P., (2008), "Calculating the importance of information
systems: The method of Bedell revisited," in: Citer Working Paper, Centre for IT
Economics, Groningen, pp. 1-20.
Schuurman, P., Berghout, E.W., and Powell, P., (2009a), "Benefits are from Venus, Costs are from
Mars," in: Citer Working Paper, Centre for IT Economics, Groningen, pp. 1-14.
Schuurman, P.M., Berghout, E.W., and Powell, P., (2009b), "Benefits are from Venus, Costs are
from Mars," in: 3rd European Conference on Information Management and Evaluation
(ECIME), Academic Conferences International, Göthenburg, pp. 544-.
Shang, S., and Seddon, P.B., (2002), "Assessing and managing the benefits of enterprise systems:
the business manager's perspective," Information Systems Journal 12:4, pp 271-299.
Shifrin, T., (2006), "Business fails to calculate IT costs."
Skok, W., Kophamel, A., and Richardson, I., (2001), "Diagnosing information systems success:
importance-performance maps in the health club industry," Information & Management
38:7, p 409.
Smith, J., Schuff, D., and Louis, R., (2002), "Managing your IT total cost of ownership,"
Communications of the ACM 45:1, pp 101-106.
Soh, C., and Markus, M.L., (1995), "How IT creates business value: A process theory synthesis," in:
Proceedings of the Sixteenth International Conference on Information Systems, J.I.
DeGross, G. Ariav and C.M. Beath (eds.), ACM, New York, pp. 29-41.
Sommerville, I., (2007), Software Engineering 8, (8th ed.), Pearson Education Limited, Essex,
England, ISBN: 978-0-321-31379-9.
Staples, D.S., Wong, I., and Seddon, P.B., (2002), "Having expectation of information systems
benefits that match received benefits: does it really matter?," Information &
Management 40:2, p 115.
Starreveld, R.W., De Mare, H.B., and Joëls, E.J., (1989), Bestuurlijke informatieverzorging deel 2:
Typologie van de toepassing, Samson, Alphen aan den Rijn, ISBN: 90-14-03655-8.
Stiglitz, J.E., (2000), "The contributions of the economics of information to twentieth century
economics," Quarterly Journal of Economics 115:4, pp 1441-1478.
Strassmann, P.A., (1990), The business value of computers: An executive's guide, Information
Economic Press, New Canaan, ISBN: 0-9620413-2-7.
Swinkels, G.J.P., (1999), "Managing the life cycle of information and communication technology
investments for added value," Electronic Journal of Information Systems Evaluation 2:1.
Tallon, P., Kraemer, K., and Gurbaxani, V., (2000), "Executives' Perceptions of the Business Value
of Information Technology: A Process-Oriented Approach," Journal of Management
Information Systems 16:4, pp 145-173.
Tan, C.-W., Benbasat, I., and Cenfetelli, R.T., (2011), "IT-mediated customer service content and delivery in electronic governments: An empirical investigation of the antecedents of service quality," MIS Quarterly, conditionally accepted.
Teece, D.J., Pisano, G., and Shuen, A., (1997), "Dynamic capabilities and strategic management,"
Strategic Management Journal 18:7, pp 509-533.
Thorp, J., (1998), The information paradox: Realizing the business benefits of information
technology, McGraw-Hill, Toronto [etc.], ISBN: 0-07-134265-6.
Tichy, L., and Bascom, T., (2008), "The Business End of IT Project Failure," Mortgage Banking 68:6,
pp 28-35.
Tiernan, C., and Peppard, J., (2004), "Information technology: Of value or a vulture?," European
Management Journal 22:6, pp 609-623.
Tillquist, J., and Rodgers, W., (2005), "Using asset specificity and asset scope to measure the value
of IT," Communications of the ACM 48:1, pp 75-80.
Treacy, M., and Wiersema, F., (1993), "Customer Intimacy and Other Value Disciplines," Harvard
Business Review 71:1, pp 84-93.
Van der Blonk, H., (2002), "Changing the order, ordering the change: Evolution of an information
system at Dutch Railways," p. 176.
Van der Pols, R., (2003), Nieuwe informatievoorziening: Informatieplanning en ICT in de 21ste
eeuw, Academic Service, Schoonhoven, ISBN: 90-395-2135-2.
Van der Pols, R., (2005), Strategisch beheer van informatievoorziening met ASL en BiSL, Academic
Service, The Hague, ISBN: 90-395-2210-3.
Van Grembergen, W., (2001), Information technology evaluation methods and management, Idea
Group Publishing, Hershey, PA (etc.), ISBN: 1-87828-990-X.
Van Raaij, W.F., (1996), "Introduction," in: Psychology in economics and business: An introduction
to economic psychology, G. Antonides (ed.), Kluwer Academic Publishers, Dordrecht, pp.
1-6.
Van Wingerden, D., Berghout, E.W., and Schuurman, P.M., (2009a), "Benchmarking IT benefits:
Exploring outcome- and process-based approaches," in: Proceedings of the 15th Americas Conference on
Information Systems (AMCIS 2009), K.E. Kendall and U. Varshney (eds.), Association of
Information Systems, San Francisco.
Van Wingerden, D., Schuurman, P.M., and Van der Pijl, G., (2009b), "Benchmark effectiviteit van
informatiesystemen," in: Checklisten Information Management, SDU Uitgevers.
Wade, M., and Hulland, J., (2004), "Review: The resource-based view and information systems
research: Review, extension, and suggestions for future research," MIS Quarterly 28:1, pp
107-142.
Walden, E.A., (2005), "Intellectual property rights and cannibalization in information technology
outsourcing contracts," MIS Quarterly 29:4, pp 699-A616.
Walker, G., and Weber, D., (1984), "A Transaction Cost Approach to Make-or-Buy Decisions,"
Administrative Science Quarterly 29:3, p 373.
Ward, J., and Daniel, E., (2006), Benefits management: Delivering value from IS & IT investments,
John Wiley & Sons Ltd., Chichester, ISBN: 978-0-470-09463-1.
Ward, J., and Peppard, J., (1996), "Reconciling the IT/business relationship: a troubled marriage in
need of guidance," The journal of strategic information systems 5:1, pp 37-65.
Ward, J., and Peppard, J., (2002), Strategic planning for information systems, (3rd ed.), Wiley, ISBN:
978-0470841471.
Ward, J., Taylor, P., and Bond, P., (1996), "Evaluation and realisation of IS-IT benefits: An empirical
study of current practice," European Journal of Information Systems 4:4, pp 214-225.
Ward, J.M., (1988), "Information Systems and Technology Application Portfolio Management--an
Assessment of Matrix-Based Analyses," Journal of Information Technology (Routledge,
Ltd.) 3:3, p 205.
Wärneryd, K.E., (1988), "Economic psychology as a field of study," in: Handbook of economic
psychology, W.F. Van Raaij, G.M. Van Veldhoven and K.E. Wärneryd (eds.), Kluwer
Academic Publishers, Dordrecht, pp. 2-41.
Watson, R., Pitt, L., and Kavan, C.B., (1998), "Measuring Information Systems Service Quality:
Lessons from Two Longitudinal Case Studies," MIS Quarterly 22:1, pp 61-79.
Wernerfelt, B., (1984), "A Resource-based View of the Firm," Strategic Management Journal 5:2,
pp 171-180.
Weston, J.F., and Copeland, T.E., (1986), Managerial finance, (8th ed.), The Dryden Press, New
York, NJ, ISBN: 0-03-064041-5.
Willcocks, L., (1992), "Evaluating Information Technology investments: research findings and
reappraisal," Information Systems Journal 2:4, pp 243-268.
Williams, B.R., and Scott, W.P., (1965), Investment proposals and decisions, (1st ed.), George Allen
& Unwin Ltd., London, ISBN: 978-0043320235.
Williamson, O.E., (1981), "The Economics of Organization: The Transaction Cost Approach," The
American Journal of Sociology 87:3, pp 548-577.
Williamson, O.E., (1998), "Transaction cost economics: How it works; where it is headed," De
Economist (Kluwer) 146:1, pp 23-58.
Williamson, O.E., (2000), "The new institutional economics: Taking stock, looking ahead," Journal
of Economic Literature XXXVIII, pp 595-613.
WITSA, (2008), "Digital Planet 2008."
Yin, R.K., (2003), Case study research: Design and methods, SAGE, Thousand Oaks, CA (etc.), ISBN:
0-7619-2552-X, 0-7619-2553-8.
Index
Agency theory, 51
Analytics, 31
Behavioural economics, 47, 53
Benefits, 38, 89, 109
Competitive advantage, 35
Conceptual model, 67
Contribution, 114
Costs, 41, 89, 109
Data acquisition, 72
Dynamic capabilities, 37
Economics, 14
Evaluation, 17, 59, 79, 111, 130
Evaluation methods, 18, 44
Evaluation practices, 57
Functionality, 29
Hypotheses, 60, 66, 89, 108
Information system, 27
Interviews, 70
Life cycle, 32
Limitations, 114
Management, 16
Market-based view, 37
New institutional economics, 47, 49
Objectivity, 55, 60, 97, 106
Organization, 30
Property rights, 52
Propositions, 57, 66, 88, 108
Qualitative analysis, 74, 80
Quantitative analysis, 76, 89
Questionnaire, 69, 70, 119
Research approach, 22
Research design, 19, 25, 69
Research philosophy, 20
Research question, 18
Research strategy, 23
Resource-based view, 36
Sample, 73
Structural model, 77, 98
Systems theory, 28
Temporality, 32
Theory of the firm, 47
Transaction cost economics, 51
Value, 14
Value creation, 15, 34