Doctoral research proposal
Organisational Flexibility, Efficiency, and Performance
Mohammed Al-Awlaqi
YEMEN
A doctoral research proposal submitted in partial fulfilment of the requirements of the degree
of Master of Philosophy (M. Phil) awarded by the Maastricht School of Management (MSM),
The Netherlands.
Daily supervisor: Fred Young Phillips, PhD.
Professor of Marketing, Entrepreneurship, and Research Methods.
Maastricht School of Management
Promoter:
Reader:
Evaluator:
ACKNOWLEDGEMENTS
First of all, I would like to show my sincere gratitude to my supervisor, Dr. Fred Young
Phillips. Without his thought-provoking remarks, valuable comments, and patience, I could
not have gotten this deep into such a research topic and completed my research proposal. His
great experience and continuous support and willingness to dedicate his valuable time while I
was writing this paper made this proposal possible.
Next, I want to express my deepest appreciation to Dr. Saeb Sallam, the former project
coordinator at Sana'a University in Yemen, for his endless support.
I also would like to extend my deepest thanks and appreciation to all of the professors and
staff of MSM for their constant support.
I also would like to thank my family for their continuous encouragement, patience, and
understanding towards my academic dreams.
“It requires a very unusual mind to make an analysis of the obvious.”
—Alfred North Whitehead
ABSTRACT
The most popular tool for measuring the efficiency of firms is the DEA (Data Envelopment
Analysis) of Charnes, Cooper, and Rhodes (1978). However, this method alone is unable to
give the full picture of organisational performance: that a company has achieved
satisfactory results on the DEA does not necessarily mean that the company is performing
well. To discover this, we need to use, in parallel with the DEA measure, one or more other
measures that complete the picture. One candidate is the measure of flexibility developed
by Phillips and Tuladhar (2000). This study examines whether using the two measures
together gives us the ability to prefigure the performance of the company.
Table of Contents

CHAPTER 1
INTRODUCTION AND BACKGROUND
1.1 Background of the Study
1.2 Problem Statement
1.3 Research Objectives
1.4 Research Questions
1.5 Research Hypotheses
1.6 Significance of the Study
1.7 Research Assumptions
1.8 Limitations of the Study
1.9 Methodology and Data Collection

CHAPTER 2
LITERATURE REVIEW
2.1 Efficiency and Flexibility
2.2 Efficiency and Flexibility Measurements
2.2.1 Efficiency Measurement
2.2.2 Flexibility Measurement
2.3 Research Hypotheses

RESEARCH METHODOLOGY
3.1 Data Collection
3.1.1 U.S. Airlines Industry
3.2 Research Variables Measurement
3.2.1 Dependent Variable Measurement (Organizational Performance)
3.2.2 Independent Variable Measurement (DEA)
3.2.3 Independent Variable Measurement (RVA)
3.3 Calculations Software
3.3.1 DEA Calculations
3.3.2 RVA Calculations
3.3.3 Regression Model Calculations
3.4 Further Research
3.5 Thesis Timeline

APPENDIX I
HEINER'S THEORY OF PREDICTABLE BEHAVIOUR
3.6 Optimisation Theory
3.7 Heiner's Theory
3.8 Heiner's Reliability Condition

APPENDIX II
CATASTROPHE THEORY
3.9 Cusp Model
3.10 Catastrophe Properties, Relations, and Terminology
3.11 Catastrophic Models
3.12 Variety
3.13 Regulation, Disturbance, and Outcomes
3.14 Ashby's Law of Requisite Variety
LIST OF TABLES
Table 1. DEA Models
Table 2. U.S. Airlines by Operating Revenues, 2007
Table 3. Thesis Starting and Finishing Dates
Table 4. Thesis Timeline
LIST OF FIGURES
Figure 1. Number of DEA publications
Figure 2. Jet Fuel Prices
Figure 3. Cusp model graphical presentation
LIST OF ACRONYMS
BCC: Banker, Charnes, and Cooper model
CCR: Charnes, Cooper, and Rhodes model
CNC: Computer numerically controlled
CRS: Constant returns to scale
DEA: Data envelopment analysis
DMU: Decision-making unit
DRS: Decreasing returns to scale
FMS: Flexible manufacturing systems
IRR: Internal rate of return
IRS: Increasing returns to scale
LCC: Low-cost carriers
LP: Linear programming
NLC: Network legacy carriers
NPV: Net present value
RVA: Relative variety analysis
SE: Scale efficiency
TE: Technical efficiency
TSE: Technical and scale efficiency
VRS: Variable returns to scale
CHAPTER 1
INTRODUCTION AND BACKGROUND
“Many of the things you can count, don’t count.
Many of the things you cannot count, really count.”
—Albert Einstein
1.1 Background of the Study
Efficiency as a concept has been available for more than a century and has been applied in
many different circumstances and on different levels of aggregation in the economic system
(Grubbstrom & Olhager, 1997). In recent decades, due to the increased speed of change in
market environments, higher competition, and narrower profit margins, the concept of
flexibility has emerged. Both efficiency and flexibility have become important sources of
competitive advantage for organizations. Building high-performing organizations can be
achieved by adopting a mixed strategy of efficiency and flexibility (Adler, Goldoftas, &
Levine, 1999). Consequently, the literature insists on the adoption of both efficiency and
flexibility as the way to achieve high performance (Adler et al., 1999; Ahmed, Hardaker, &
Carpenter, 1996; Nor, Nor, Abdullah, & Jalil, 2007).
DEA (data envelopment analysis) is the most popular tool for measuring the efficiency of
organisations or other decision-making units (DMUs). Use of DEA has been growing
exponentially since the technique was introduced in 1978. With more than 4,000 published
articles by more than 2,500 authors (Emrouznejad, Parker, & Tavares, 2008), DEA has proved
its popularity (see Figure 1).
[Figure 1. Number of DEA publications per year. Source: Emrouznejad et al., 2008.]
DEA is a very powerful managerial and benchmarking technique originally developed by
Charnes, Cooper, and Rhodes (1978) to evaluate non-profit and public sector organisations.
Since then, DEA has become popular not only in the non-profit and public sectors but in all
kinds of businesses (Abbott & Doucouliagos, 2003; Agrell & Bogetoft, 2005; Alexander,
Busch, & Stringer, 2003; Mostafa, 2007).
DEA builds on the idea of efficient frontier estimation, originally proposed by Farrell
(1957). Farrell's idea received little detailed empirical attention for more than two
decades because it offered only a basic measurement of efficiency, involving one input and
one output, and the term data envelopment analysis itself did not appear until the paper by
Charnes et al. (1978). Charnes and colleagues extended Farrell's idea to measure efficiency
in the case of multiple inputs and outputs by using a linear programming technique (the CCR
model). Since then, a large number of papers have applied and extended the original
methodology.
In 1984, Banker, Charnes, and Cooper developed DEA further by introducing variable returns
to scale in place of the constant returns to scale of the earlier CCR model; the new model
is referred to as the BCC model.
In 1986, Banker and Morey introduced the idea of using DEA with nondiscretionary inputs
and outputs, those beyond the control of a DMU's management, such as the weather when
evaluating the efficiency of maintenance units, the number of competitors, or the local
unemployment rate.
The same authors extended DEA in another direction by modifying the model to handle
categorical inputs and outputs, instead of assuming that all inputs and outputs belong to
the same category. In this case, the selected input or output separates the DMUs into
different categories. Consider, for example, an analysis of the productivity of a hotel
chain in which some hotels have large parking areas, some have only small ones, and others
have none. The hotels fall into three categories and are therefore not directly comparable,
so a separate analysis is needed for each category.
One of the most significant developments of the DEA is allowing prior knowledge to be
incorporated into the model. This can happen in many situations. For example:
- The DEA analysis ignores additional information that cannot be directly incorporated in
  the model. This often happens when the analysis has full flexibility in choosing
  multiplier values: it gives the DMU under analysis the maximum efficiency rating, so the
  DMU appears efficient, but in a manner unjustified from management's viewpoint, because
  management holds information the model has not incorporated.
- Management has a strong preference for a particular factor.
- The model fails to distinguish the DMUs because of a small sample (Cooper, Seiford, &
  Zhu, 2004).
Prior knowledge can be incorporated into the model by using multiplier models and imposing
restrictions on the multipliers used in the analysis, which are usually calculated by
seeking their optimal value in the model itself. Some proposed techniques in this area include
imposing upper and lower bounds on individual multipliers (Dyson & Thanassoulis, 1988;
Roll & Golany, 1993), imposing bounds on ratios of multipliers (Thompson, Thrall,
Langemeier, & Lee, 1990), appending multiplier inequalities (Wong & Beasley, 1990), and
requiring multipliers to belong to given closed cones (Charnes, Cooper, Wei, & Huang,
1989).
Fare, Grosskopf, and Lovell (1985) used the concept of the additive model to evaluate overall
profit efficiency (Cooper et al., 2004).
We can see that DEA, like many other techniques, has a continuing chain of developments.
One link still missing from this chain is the ability of DEA to predict the performance of
the DMU. DEA is a very powerful technique, but it cannot predict the performance of a DMU
by itself. The case of Atari provides strong evidence of this fact. Although Atari was
operating efficiently and appeared on the efficient frontier ahead of other organisations
in the same industrial sector, Atari was losing market share (Thore, Kozmetsky, & Phillips,
1994).
1.2 Problem Statement
It is clear that DEA needs to be developed further so that it can explain variations in
organizational performance. The next question is how. The literature relates the
performance of an organization to both its efficiency and its flexibility in dealing with
its environment. The problem of this research can therefore be formulated as the following
question: Can we predict the performance of a DMU by combining DEA with an accepted
operational measurement of flexibility? Fortunately, the relative variety analysis (RVA)
model was developed by Phillips and Tuladhar (2000), based on systems/cybernetic ideas
(Ashby, 1964), as a measurement of flexibility. Consequently, the problem can be stated
thus: Can DEA and RVA together explain most of the variation in organizational performance?
1.3 Research Objectives
- Developing DEA further by adding the ability to predict organizational performance.
- Supporting RVA as a trusted operational measurement of organizational flexibility.
- Exploring the computational characteristics of RVA.
- Creating a new operational analysis of organizational performance.
- Adding a new decision-making technique to the knowledge of organisation managers.

1.4 Research Questions
1. Can we depend on DEA alone to explain most of the variation in the performance of
firms?
2. Can we depend on RVA alone to explain most of the variation in the performance of
firms?
3. Can DEA as a measurement of efficiency and RVA as a measurement of flexibility
together explain a large fraction of the variation in the performance of firms?
1.5 Research Hypotheses
H1: DEA efficiency and RVA flexibility together explain a large fraction of the variation in
the performance of firms.
H2: Neither DEA efficiency nor RVA flexibility alone can explain most of the variation in
the performance of firms.
1.6 Significance of the Study
Even though many managers and academics cite flexibility as a key competitive capability,
efforts to measure and understand this complex concept continue. This study will therefore
try to develop the relative variety analysis model further and examine whether it can
complement the data envelopment analysis model to give the whole picture of organizational
performance.
Because this study takes the assessment of relative variety analysis as an operational
measurement of flexibility a step further, it will contribute to the theoretical background
of operations research. It will strengthen confidence in relative variety analysis as an
eligible measurement of flexibility and enrich the use of data envelopment analysis by
supporting it with an essential and complementary analysis tool.
This research can also have practical implications. Managers can benefit from using both
RVA and DEA analysis to raise performance by balancing their companies' flexibility and
efficiency. Managers can also decide which aspect (efficiency or flexibility) to
concentrate on in the next period, based on the RVA and DEA results of the current period.
The measured results from the DEA combined with the RVA can help an organization's managers
understand its advantages and disadvantages, and recognise the existing opportunities or
threats in the environment, so that each resource can be utilized in an effective and
flexible way. This can result in a substantive understanding of organizational performance.
1.7 Research Assumptions
- A continuous relationship between RVA flexibility and DEA efficiency is assumed, without
  any catastrophic or sudden change. That is, for any small change in RVA and DEA as
  independent variables, the performance of the organization as a dependent variable will
  move smoothly upward or downward; a small change in the independent variables will not
  cause the dependent variable to jump suddenly to higher or lower levels.
- Fully optimized behavior, in the sense of economic science, is assumed. This implies
  perfect information and a well-defined set of alternatives. Consequently, Heiner's (1983)
  argument about the cost of imperfect decisions and its implications for the flexibility
  of organizational decision-making is not a subject of this study¹.
1.8 Limitations of the Study
- This study will concentrate on only one industry to test the ability of DEA and RVA to
  explain the variation in organizational performance.
- Ordinary linear regression is used to test the relation between DEA efficiency, RVA
  flexibility, and organizational performance; this is valid only under the assumption of a
  static relationship between the independent variables and the dependent variable.
¹ See Appendix I.
- The study will not investigate Heiner's principle of predictable behaviour. Consequently,
  this paper will not incorporate the cost of imperfect decisions in its calculation models.
1.9 Methodology and Data Collection
This research will follow the quantitative paradigm to measure the relations among DEA,
RVA, and organizational performance. Historical data on the chosen inputs and outputs will
be collected for U.S. airline companies from the COMPUSTAT database, covering the period
from 1998 to 2008. This period saw some of the most important events affecting the airline
industry, including the recent financial crisis, oil price shocks, and the September 11
terrorist attacks. DEA and RVA scores will be calculated, and a regression model will be
developed between the two independent variables (the RVA and DEA results) and the
organizational performance measure as the dependent variable.
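The planned regression step might be sketched as follows. The DEA and RVA scores and the performance figures below are synthetic placeholders generated for illustration; the actual COMPUSTAT-derived values would take their place.

```python
import numpy as np

# Hypothetical firm-year observations: DEA efficiency and RVA flexibility
# scores, plus a performance measure (e.g. operating margin). Invented data.
rng = np.random.default_rng(0)
n = 40
dea = rng.uniform(0.5, 1.0, n)   # DEA efficiency scores in (0, 1]
rva = rng.uniform(0.0, 1.0, n)   # RVA scores (assumed rescaled to [0, 1])
perf = 0.3 * dea + 0.5 * rva + rng.normal(0.0, 0.05, n)  # synthetic outcome

# OLS: perf = b0 + b1*dea + b2*rva + error
X = np.column_stack([np.ones(n), dea, rva])
beta, _, _, _ = np.linalg.lstsq(X, perf, rcond=None)

# R^2, the "fraction of variation explained" referred to in the hypotheses.
fitted = X @ beta
r2 = 1 - np.sum((perf - fitted) ** 2) / np.sum((perf - perf.mean()) ** 2)
```

Under H1, fitting both regressors together would yield a high R²; fitting each alone (dropping a column of `X`) would yield noticeably lower values under H2.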
CHAPTER 2
LITERATURE REVIEW
“It is not the strongest of the species that survives, nor the most intelligent, but the
one most responsive to change.”
—Charles Darwin

2.1 Efficiency and Flexibility
A considerable number of researchers have tried to answer the question of what makes an
enterprise successful over the long run. Studies have explored various factors both financial
and nonfinancial in order to deduce performance drivers. Meng-Ling Wu (2006)
in his
meta-analyses of 121 empirical studies investigated the effect of firm size on financial
performance. Although previous studies had shown conflicting results, he found no
significant effect of firm size on financial performance. The study of Nor et al. (2007) tried to
explain the lasting presence and performance of small firms and to explain the insignificant
relation between firm size and performance. In their study of the manufacturing sector, Nor et
al. found that big firms benefit from low minimum average costs and static production
efficiency, while small firms, with higher minimum average costs, are more flexible.
The effect of market structure (concentration and market share) on organizational
performance was also explored by several studies. Behrman and Deolalikar (1989) found a
positive association between higher concentration levels and the performance (referred to
as survival in their study) of medium- and large-scale firms. This finding was similar to
the conclusion of Hansen and Wernerfelt (1989), who studied the association between firms'
market share and their performance. On the other hand, Caves (1998) found a negative
association between market concentration in manufacturing industries and turnover.
Brickley, Clifford, and Jerrold (2003) also asserted that there is no empirical
relationship between measures of performance and market structure. These results can be
explained by the findings of Choi and Weiss (2005) and Tu and Chen (2000). Their studies of
the financial services sector supported the efficient structure hypothesis, which posits
that more efficient firms can charge lower prices than competitors, enabling them to
capture larger market shares and economic rent, leading to increased concentration. The
underlying picture, then, is one of efficiency (lower prices) and flexibility (the
capability to charge lower prices when competitors cannot).
The literature discusses the importance of efficiency and flexibility from another
viewpoint. Broadly, any period in the life of an organization can be divided into turbulent
and steady phases. According to Dreyer and Gronhaug (2004) and Ahmed et al. (1996),
flexibility is urgently needed if a company is to survive and prosper in turbulent and
unpredictable environments, where global competition and pressures from various sources
arise with increasing rapidity. On the other hand, we can see the importance of achieving
high efficiency in a stable environment: an organisation can reduce its costs, benefit from
economies of scale, and achieve high performance. However, flexibility in dealing with
rapid change in turbulent times must not result in a significant loss of productivity and
efficiency (Volberda, 1998).
Thus, flexibility and efficiency can generally be regarded as antithetical, yet companies
must attend both to agile response (flexibility) and to efficiency. We might think of the
mathematical "catastrophe" that happens when, in an equation like y = ax^b, b varies from
-1 to 1. Assume a = 1. When b = -1, y = 1/x, so as y goes up, x goes down. As b moves
toward 1 (or beyond), y = x, and y and x increase together: they have moved from being
antithetical to being mutually reinforcing. This is what companies do to achieve efficiency
and quality simultaneously (two things that used to be thought of as antithetical). Using
new thinking and organizational breakthroughs, firms are able to achieve this "catastrophe"
and make efficiency and quality mutually reinforcing². There are many examples of
organizational breakthroughs and new thinking that have broken the antithetical relation
between efficiency and flexibility. Some are explained immediately below.
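Before turning to those examples, the sign flip in the toy equation y = ax^b can be checked numerically; a minimal sketch:

```python
# For y = a * x**b with x > 0, the slope is dy/dx = a * b * x**(b - 1),
# so its sign is the sign of b: the x-y relationship flips from inverse
# (b = -1) through neutral (b = 0) to mutually reinforcing (b = 1).
def slope(b, x=2.0, a=1.0):
    return a * b * x ** (b - 1)

inverse = slope(-1.0)  # negative: y falls as x rises
neutral = slope(0.0)   # zero: y independent of x
direct = slope(1.0)    # positive: y rises with x
```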
Just in Time (JIT)
Just in time (JIT) can be defined as an inventory strategy implemented to improve the
performance of the organization, especially the return on investment (ROI) of a business. It
can improve the performance by reducing in-process inventory and its associated carrying
costs. JIT can make balanced improvements in quality, production, and cycle time using
inventory reduction as the strategy for identifying and prioritizing opportunities for
improving these parameters (Heizer & Render, 2000). It can also be defined as "a repetitive
production system in which processing and movement of material and goods occurs just as
they are needed, usually in small batches" (Stevenson, 1996, p. 353). However, JIT is more
² See Appendix II for more details about Catastrophe Theory.
than an inventory system. JIT manufacturing is a philosophy by which an organization seeks
continually to improve its products and processes by eliminating waste (Ptack, 1987). JIT is
also a continuous improvement effort focused in the areas of quality, productivity, and
flexibility (Hegstad, 1990). JIT doesn‘t result in a superior performance only; it also makes
companies be more consistent in choosing benchmarking performance measures that are
aligned with organizational strategy (Meybodi, 2009). From cybernetics and Ashby‘s Law
(Ashby, 1964), JIT is an alternative to reducing variety at the source of the stimulus itself
(Scala, Purdy, & Safayeni, 2006) to absorb the variety instead of raising variety in the system
regulation.
Flexible manufacturing systems (FMS)
The U.S. government defined the flexible manufacturing system as a series of automatic
machine tools or pieces of fabrication equipment linked together by an automatic material
handling system, a common hierarchical digital pre-programmed computer control, and
provision for random fabrication of parts or assemblies within predetermined families (Chan,
2004). Buyurgan and Saygin (2006) also defined FMS as a highly automated machine system
that consists of a group of computer numerically controlled (CNC) machine tools,
interconnected by an automated material handling and storage system, and controlled by an
integrated computer system. Flexible manufacturing systems (FMSs) provide for the
efficiency that is lacking in batch manufacturing and the flexibility that is limited in mass
production lines. Flexible production systems are supplanting mass and batch production
systems because of their superior manufacturing performance, in terms of both productivity
and quality (MacDuffie, 1993). The secret behind the success of FMS in improving the
performance of manufacturing organizations is its use of flexibility and efficiency at the
same time. The flexibility of FMS leads to increased productivity, reduced inventory,
reduced production costs, and improved quality. That is what MacDuffie (1993) called a
different "organizational logic" (p. 199):
This logic has two dimensions: structural and cultural. The "structural logic" of a
production system is identified in terms of the deployment of resources, the link of
core production activity to the market, the structure of authority relations, and the link
between conception and execution. The "cultural logic" is identified as a way of
thinking about production activities that emphasizes their integration with innovation
activities. This view of flexible production is also contrasted with other post-mass
production models.
But what is flexibility? Flexibility is a wide concept whose meaning varies from one
context to another. In the economic theory of the firm, flexibility was defined as
"flatness of the average cost curve" (Stigler, 1939, p. 309). The definition used in
Stigler's study can be explained as the ability of a single-product firm to adjust output
to exogenous shocks at relatively low cost, which the literature calls tactical flexibility
(Weiss, 2001). Weiss (2001) distinguished this kind of flexibility from what he called
"operational flexibility," introduced by von Ungern-Sternberg (1990): the ability of the
firm to adjust to exogenous shocks by diversifying into several products and switching
capacity from one good to another. The organizational management literature refers to this
as a slack resources strategy and calls it a firm's environmental response (Cheng & Kesner,
1997). In the organizational theory literature, it appears as mechanistic versus organic
organizational structures (Volberda, 1998). In project management, flexibility is defined
as "the capability to adjust the project to prospective consequences of uncertain
circumstances within the context of the project," or simply as "room for maneuvering"
(Olsson, 2006).
From the above discussion, we can conclude that when an organization is flexible, it is
capable of multiple responses to its environment. There is a cost (in energy or money) of
maintaining an inventory of responses, and changing over from one response to another
involves set-up costs. A firm can achieve cost efficiencies by cutting back on its inventory of
responses (and gambling that the discarded responses will not be needed), for instance, by
holding no inventory or by staffing its tech support call center with untrained people!
Flexibility can enter many unpredictable processes, such as decision making. A conventional
decision-making tool is essentially an efficiency-maximizing process: it identifies the
best possible alternative among those available (a more efficient decision). Such tools do
not give the decision maker additional alternatives to choose from (more flexible
decisions); they only help choose the best of the existing ones. Recently, researchers have
recognized that incorporating flexibility can significantly improve this process. Real
options are a good example.
Real options
The discounted cash flow model and its tools, such as IRR (internal rate of return) and NPV
(net present value), suffice for valuing most businesses when future cash flows can be
estimated with reasonable certainty. For companies fraught with uncertainty, the valuation
must combine these techniques with the value of real options. Real options
capture the value of uncertain growth opportunities. Just as a financial option gives its owner
the right—but not the obligation—to buy or sell a security at a given price, companies that
make strategic investments have the right—but not the obligation—to exploit opportunities in
the future. These opportunities can be valued using real-options valuation techniques. Real
options are opportunities that are embedded in capital projects (real, rather than financial,
asset investments) that enable managers to alter their cash flows and risk in a way that affects
project acceptability (NPV, IRR) (Gitman, 2006). Common types of real options include (Gitman, 2006):
- Abandonment: the option to abandon or terminate a project prior to the end of its planned life.
- Flexibility: the option to incorporate flexibility into the firm's operations, particularly production.
- Growth: the option to develop follow-on projects, expand markets, expand plants, and so on.
- Timing: the option to determine when various actions with respect to a given project are taken.
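A minimal numeric sketch of the abandonment option follows. All cash flows, the discount rate, and the salvage value are hypothetical, and discounting the option-adjusted payoff at the same risk-adjusted rate is a simplification of proper real-options valuation (which would use risk-neutral probabilities).

```python
def npv(rate, cash_flows):
    """NPV of cash_flows[0..T], with cash_flows[0] at t = 0 (usually negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

rate = 0.10

# Static DCF: invest 100 now for an uncertain year-1 value of 140 or 70 (p = 0.5).
static_npv = npv(rate, [-100, 0.5 * 140 + 0.5 * 70])

# Abandonment option: in the bad state the project can be sold for a salvage
# value of 90, so the year-1 payoff in that state becomes max(70, 90) = 90.
with_option = npv(rate, [-100, 0.5 * 140 + 0.5 * max(70, 90)])

# The option's value is the difference the added flexibility makes to NPV.
option_value = with_option - static_npv
```

Here the project is rejected under the static DCF (negative NPV) but accepted once the abandonment option is valued, illustrating how real options can change project acceptability.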
From the above discussion, we can see the importance of both efficiency and flexibility in
maintaining high organizational performance over the long run, but it seems neither of them
can do it alone. Efficiency does not portray the whole picture of organisational performance.
Again, the case of Atari can attest to this (Thore et al., 1994). Despite Atari's slide in the market of the 1980s, the company still operated efficiently throughout its slide because it adjusted its inputs downward as its outputs declined. On the other hand, Sun Microsystems had been operating inefficiently in this period but still managed to maintain rapid growth. Also, Pagell and Krause (2004), in their study of how flexibility affects a manufacturing firm's performance, found no support for the proposition that firms that respond to increased uncertainty with increased flexibility will experience increased performance. Another study
conducted by Kumar and Charles (2008) tested the relationship between organizational performance and productivity. The study related the components of productivity (technological change, pure technical efficiency, and scale efficiency) in the Indian food industry to performance. It supported a significant relationship between technological change and performance; however, it did not find any significant relationship between performance and the other components of efficiency.
This demonstrates the need for both efficiency (the degree to which a firm transforms factors of production (inputs) into sales, profits, and market share (outputs) in the best possible way) and flexibility (the ability to adjust the firm's responses so as to keep outputs nearly steady when input factors change uncontrollably, and vice versa). A firm with both can sustain high performance over the long term, which is what Adler et al. (1999) assessed in their study of the need for both flexibility and efficiency to secure a high level of organizational performance. This idea introduces the following hypothesis:
Flexibility and efficiency are responsible for a large portion of the variation in organisational performance.
2.2 Efficiency and Flexibility Measurements
2.2.1 Efficiency Measurement
What is efficiency?
Definition 1: Full efficiency is attained by any DMU if, and only if, none of its inputs or outputs can be improved without worsening some of its other inputs or outputs (Koopmans, 1951). But in management science applications, theoretically possible levels of efficiency will not be known, and efficiency can be assessed only with the information that is empirically available.
Definition 2: Relative efficiency. A DMU is to be rated as fully efficient on the basis of
available evidence if, and only if, the performances of other DMUs do not show that some of
its inputs or outputs can be improved without worsening some of its other inputs or outputs.
Efficiency can be measured using the DEA (data envelopment analysis) tool developed by Charnes et al. (1978). Their first model, the CCR model, had an input orientation and assumed constant returns to scale (CRS). CRS refers to a technical property of production that examines changes in output subsequent to a proportional change in all inputs (where all inputs increase by a constant factor). If output increases by that same proportional change, then there are constant returns to scale (CRS). If output increases by less than that proportional change, there are decreasing returns to scale (DRS). If output increases by more than that proportion, there are increasing returns to scale (IRS) (Eatwell, 1987). Later studies have considered other sets of assumptions. Banker et al. (1984) introduced the assumption of variable returns to scale (VRS) into the BCC model.
only when all DMUs are operating at an optimal scale. However, DMUs may not be
operating at optimal scales in many situations, for example, in the case of imperfect
competition and constraints of finance. We will use the CCR and BCC models to calculate
the TSE (technical and scale efficiency), PTE (pure technical efficiency), and SE (scale
efficiency). The use of the CCR model specification when not all DMUs are operating at their
optimum results in measures of technical efficiency, which are confounded by scale
efficiencies (SEs). The use of the BCC model specification permits the calculation of
technical efficiency (TE) without the effect of these scale efficiencies. The scale inefficiency
can be calculated based on differences between the BCC and the CCR technical efficiency
scores. The term technical and scale efficiency (TSE) describes the technical efficiency
scores obtained using a CCR model. Pure technical efficiency (PTE) refers to the technical
efficiency scores obtained from a BCC model.
13
2.2.1.1 Data Envelopment Analysis CCR and BCC Models
Data envelopment analysis determines the firm's efficiency in transforming resources
(inputs) into productive outputs. In the DEA calculation, each firm is called a DMU
(decision-making unit), though a DMU could as well be defined as one branch of a firm. Let [y_rj] be the matrix of outputs, with entry y_rj the amount of output r produced by DMU j. In the same way, let [x_ij] be the matrix of inputs, with entry x_ij the amount of input i used by DMU j. Charnes et al. (1978) used multipliers u_r and v_i to indicate the importance of each output and input in the efficiency determination; these are the unknowns, to be calculated by the following linear programming (LP) problem (called the multipliers model). Banker et al. (1984) later added the convexity constraint Σ_{j=1}^{n} λ_j = 1 to the model, creating the BCC model.
Table 1. DEA Models

CCR multipliers model:

    max z = Σ_{r=1}^{s} u_r y_{r0}
    subject to:
    Σ_{r=1}^{s} u_r y_{rj} − Σ_{i=1}^{m} v_i x_{ij} ≤ 0,  j = 1, 2, …, n
    Σ_{i=1}^{m} v_i x_{i0} = 1
    and all u_r and v_i ≥ ε > 0.

BCC multipliers model:

    max z = Σ_{r=1}^{s} u_r y_{r0} − u_0
    subject to:
    Σ_{r=1}^{s} u_r y_{rj} − Σ_{i=1}^{m} v_i x_{ij} − u_0 ≤ 0,  j = 1, 2, …, n
    Σ_{i=1}^{m} v_i x_{i0} = 1
    and all u_r and v_i ≥ ε > 0, with u_0 free in sign.

ε is a non-Archimedean number.

CCR envelopment model:

    θ* = min θ − ε(Σ_{i=1}^{m} s_i⁻ + Σ_{r=1}^{s} s_r⁺)
    subject to:
    Σ_{j=1}^{n} x_{ij} λ_j + s_i⁻ = θ x_{i0},  i = 1, 2, …, m
    Σ_{j=1}^{n} y_{rj} λ_j − s_r⁺ = y_{r0},  r = 1, 2, …, s
    λ_j ≥ 0,  j = 1, 2, …, n

BCC envelopment model: identical to the CCR envelopment model, with the added convexity constraint

    Σ_{j=1}^{n} λ_j = 1.

Source: Cooper, Seiford, & Zhu, 2004.
The multiplier and envelopment forms yield the same efficiency result. A DMU is efficient if θ* = 1 with all slacks s_i⁻, s_r⁺ equal to zero; otherwise, the DMU is considered inefficient.
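As a concrete sketch (not the DEAP implementation used later for the actual calculations), the input-oriented CCR envelopment program can be solved with `scipy.optimize.linprog`. The three DMUs below are hypothetical, and the second-phase slack maximization that handles the non-Archimedean ε term is omitted.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR envelopment score theta* for DMU o.

    X is the (m, n) input matrix and Y the (s, n) output matrix, one column
    per DMU.  Solves: min theta  s.t.  X @ lam <= theta * x_o,
    Y @ lam >= y_o, lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)          # variables: [theta, lambda_1, ..., lambda_n]
    c[0] = 1.0                   # minimize theta
    A_in = np.hstack([-X[:, [o]], X])            # X @ lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # -Y @ lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Three hypothetical DMUs: two inputs each, one unit of output.
X = np.array([[2.0, 4.0, 8.0],
              [4.0, 2.0, 8.0]])
Y = np.array([[1.0, 1.0, 1.0]])
scores = [ccr_efficiency(X, Y, o) for o in range(3)]
```

Here the first two DMUs lie on the frontier (θ* = 1), while the third could produce the same output with a radial contraction of its inputs.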
But can we draw the whole picture of organizational performance by considering only the
results of the DEA? From that point of view, we can conclude that the missing part of our
picture is the organizational flexibility. Fortunately, Phillips and Tuladhar (2000) developed a
general model to measure organisational flexibility, which has been designed to be
commensurate with the DEA model.
2.2.2 Flexibility Measurement
There was no accepted, operational measurement of organizational flexibility before the introduction of the Relative Variety Analysis (RVA) model developed by Phillips and Tuladhar (2000). They characterized the desirable properties of a relative flexibility measure and constructed a measure that displayed the needed properties. It is compatible with and complementary to DEA-based efficiency measurement.
2.2.2.1 RVA (General Model of Relative Variety Analysis)
The RVA model developed by Phillips and Tuladhar (2000) was based on systems/cybernetic ideas (Ashby, 1964)³. A firm chooses a response (from a set of possible responses) to an environmental stimulus. The stimulus/response combination results in an outcome. A possibly small subset of the outcomes is conducive to continued viability of the firm. The firm is interested in minimizing the variety of outcomes that actually occur and chooses responses accordingly. Phillips and Tuladhar (2000) defined the general model mathematically as follows. Let:
f(a, τ_j) = variety of outcomes for firm j,
g(b, ς_j) = variety of stimuli faced by firm j, and
h(c, ρ_j) = variety of responses by firm j,

where a, b, c are vectors of parameters and τ_j, ς_j, ρ_j are vectors of observations of outcomes, stimuli, and responses, respectively, for firm j. Then, for each company "o":

Find a, b, c to minimize f(a, τ_0),
subject to: g(b, ς_j) / h(c, ρ_j) ≤ f(a, τ_j) for every j,
a, b, c > 0.

³ For more details, see Appendix III.
This optimization is solved for every company in the comparison set, each taking on the subscript "o" in turn. Variety can be taken to be a function of entropy, so that Shannon's inequality applies (Shannon & Weaver, 1963). Phillips, Summers, and Moon (2002) mentioned the possibility of variety being a function of the standard deviation or variance of the system's states, but this applies only in very limited cases, for example, where the input and output variables are Gaussian-distributed and the entropy is thus simply equal to (log(2πσ²) + 1)/2. We do not expect this condition to obtain in real, empirical business situations.
2.2.2.2 Entropy
Entropy is an information-theoretic measure of the degree of indeterminacy of a random variable. If ξ is a discrete random variable defined on a probability space (Ω, 𝔉, P), assuming values x_1, x_2, … with probability distribution p_k = P{ξ = x_k}, k = 1, 2, …, then the entropy is defined by the formula

    H(ξ) = − Σ_{k=1}^{∞} p_k log p_k.
Here it is assumed that 0 log 0 = 0. The base of the logarithm can be any positive number, but as a rule one takes logarithms to base 2 or base e, which corresponds to the choice of a bit or a natural unit as the unit of measurement (Kullback, 1997). The concept of entropy can also be derived axiomatically. Consider a random variable X that can assume the values x_1, …, x_n with probabilities p_1, …, p_n. The target is to define a quantity H(P) = H(p_1, …, p_n) that measures, in a unique way, the amount of uncertainty represented in this distribution. Three commonsense axioms, amounting to little more than a composition law, are sufficient to determine H uniquely, up to a constant factor corresponding to a choice of scale.
1. H is a continuous function of the p_i.
2. If all p_i are equal, then H(P) = H(n) = H(1/n, …, 1/n) is a monotonically increasing function of n.
3. Composition law: group all the events x_i into k disjoint classes. Let A_i represent the indices of the events associated with the i-th class, so that q_i = Σ_{j∈A_i} p_j represents the corresponding probability. Then

    H(P) = H(Q) + Σ_{i=1}^{k} q_i H(P_i / q_i),

where P_i denotes the set of probabilities p_j for j ∈ A_i. From the first condition, it is sufficient to determine H for all rational cases where p_i = n_i / n, i = 1, …, n. But from the second and third conditions,

    H(Σ_{i=1}^{n} n_i) = H(p_1, …, p_n) + Σ_{i=1}^{n} p_i H(n_i). ……………… (1)

By setting all n_i equal to m, from the above equation we get

    H(m) + H(n) = H(mn).

This yields the unique solution

    H(n) = C log n,

with C > 0. By substituting in (1), we finally have

    H(P) = −C Σ_{i=1}^{n} p_i log p_i.

The constant C determines the base of the logarithm; we use natural logarithms so that C = 1 (Blahut, 1987; Yockey, 1992).
To calculate the entropy from empirical observations, we can imagine laying the observations out on a number line. By making the intervals (groupings of adjacent observations) wide or narrow, we can obtain an entropy number as large or small as we want. We therefore calculate the average of the entropies over several interval widths. This is called the multiple-interval method, which will be used in this research. This method is consistent with the definition of Kolmogorov entropy.
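The multiple-interval idea can be sketched as follows. The grid resolutions in `bin_counts` are arbitrary illustrative choices, and `multiple_interval_entropy` is a hypothetical helper name, not the authors' implementation.

```python
import numpy as np

def shannon_entropy(counts):
    """Entropy (in nats) of a histogram, with 0 * log(0) treated as 0."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def multiple_interval_entropy(obs, bin_counts=(5, 10, 20, 40)):
    """Average the empirical entropy over several interval widths.

    Each bin count defines one partition of the number line; the final
    figure is the mean of the per-partition entropies.
    """
    entropies = []
    for k in bin_counts:
        counts, _ = np.histogram(obs, bins=k)
        entropies.append(shannon_entropy(counts))
    return float(np.mean(entropies))

obs = np.linspace(0.0, 1.0, 100)          # hypothetical observations
avg_entropy = multiple_interval_entropy(obs)
```

Averaging over several resolutions damps the arbitrary dependence of any single-grid entropy on the chosen interval width.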
2.2.2.3 Kolmogorov Entropy
Kolmogorov entropy is also known as metric entropy (it is 0 for non-chaotic motion and positive for chaotic motion). Divide phase space (for a system of n first-order ordinary differential equations, the space consisting of the possible values of (x_1, ẋ_1, x_2, ẋ_2, …, x_n, ẋ_n) is known as its phase space) into D-dimensional hypercubes (a generalization of a 3-cube to n dimensions, also called an n-cube) of content ε^D (generalized volume). Just as a 3-dimensional object has volume, surface area, and generalized diameter, an n-dimensional object has "measures" of order 1, 2, …, n.
Let P_{i_0 … i_n} be the probability that a trajectory is in hypercube i_0 at t = 0, i_1 at t = T, i_2 at t = 2T, etc. Then define

    K_n = − Σ_{i_0 … i_n} P_{i_0 … i_n} ln P_{i_0 … i_n},

where K_{N+1} − K_N is the information needed to predict which hypercube the trajectory will be in at (N+1)T, given trajectories up to NT. The Kolmogorov entropy is then defined by

    K ≡ lim_{T→0} lim_{ε→0} lim_{N→∞} (1/(NT)) Σ_{n=0}^{N−1} (K_{n+1} − K_n)

(Weisstein, 1999).
The general model of flexibility will assign a firm the maximum index that is possible while
bounding the indices of comparison companies.
Like DEA, RVA is a relative measure and considers all of the firm's operations jointly. This allows different firms to score highly for excellent performance in different markets, market segments, or strategies, and is accomplished, as in DEA, by allowing an optimization to decide multipliers for the stimuli, responses, and outcomes. The model allows not just multiple stimuli but multiple kinds of possibly incommensurate stimuli that may differ in their contribution to the DMU's flexibility rating. Paralleling DEA methods, we assign coefficients ("virtual multipliers") to the different kinds of stimuli, resulting in a virtual stimulus. In the same way, a virtual response and a virtual outcome are constructed.
This optimization is solved once for each firm, using data from all firms, all markets, and all time periods. (In this, RVA differs from DEA; the latter involves a separate optimization for each time period.) Its optimal objective function yields the relative flexibility score (Phillips et al., 2002).
2.2.2.4 RVA Flexibility Model Using Shannon's Inequality
In this paper, the RVA general model put forth by Phillips and Tuladhar (2000) will be recapped and an operational RVA model in which entropy is the measure of variety will be computed. In this model, Shannon's inequality applies: H_stimuli − H_responses ≤ H_outcomes. The RVA model will look like this:

Find a, b, and c to minimize c·H_outcomes,o,
subject to: a·H_stimuli,j − b·H_responses,j ≤ c·H_outcomes,j for all DMUs j (including o),
a, b, c ≥ 0.

Following Ashby (1964), we regard entropy as a logarithmic expression of variety; the ratio on the left-hand side of the general model's constraint therefore becomes a subtraction.
2.3 Research Hypotheses
From the above, we can propose the research hypotheses:
H1: DEA efficiency and RVA flexibility together explain a large fraction of the variation in
the performance of firms.
H2: Neither DEA efficiency nor RVA flexibility alone can explain most of the variation in the
performance of firms.
CHAPTER 3
RESEARCH METHODOLOGY
“Most of the fundamental ideas of science are essentially simple…”
—Albert Einstein
3.1 Data Collection
This research will follow a quantitative paradigm to measure the relation between DEA efficiency, RVA flexibility, and organizational performance. Historical data on the chosen inputs and outputs for a period of 10 years will be collected for U.S. airline companies from the COMPUSTAT database. The data will cover the period from 1998 to 2008. This period includes some of the most important events affecting the airline industry, for example, the recent financial crisis, oil-price shocks, and the September 11 terrorist attacks.
This time period can be sliced into three intervals. The first runs from 1998 until 2001, during which the industry encountered the terrorist-attack crisis. The second continues until the fuel-price surge took hold in late 2005. The third runs from 2005 until 2008.
DEA and RVA calculations will be done on the inputs, outputs, stimuli, responses, and
outcomes described below.
A regression model will be developed with the two independent variables (RVA and DEA results) and the organizational performance measure as the dependent variable.
Use of DEA and RVA in this study will be in the context of total-enterprise performance. The focus will be on corporate-level flexibility and efficiency, applied to company financial data found in external financial statements.
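The planned regression can be sketched with ordinary least squares; all scores below are hypothetical firm-year values, not COMPUSTAT data.

```python
import numpy as np

# Hypothetical yearly scores: DEA efficiency, RVA flexibility, and growth in
# market capitalization (the performance measure) for six firm-years.
dea = np.array([0.92, 0.85, 1.00, 0.78, 0.95, 0.88])
rva = np.array([0.40, 0.55, 0.35, 0.70, 0.50, 0.60])
perf = np.array([0.12, 0.18, 0.05, 0.25, 0.15, 0.20])

# Design matrix with an intercept column, then OLS via least squares.
X = np.column_stack([np.ones_like(dea), dea, rva])
beta, _, _, _ = np.linalg.lstsq(X, perf, rcond=None)

# R^2: the fraction of performance variance the two scores explain jointly,
# which is the quantity hypotheses H1 and H2 are about.
fitted = X @ beta
ss_res = np.sum((perf - fitted) ** 2)
ss_tot = np.sum((perf - perf.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
```

In the actual study this fit would be run on the computed DEA and RVA scores rather than invented numbers, and significance tests would accompany the R².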
3.1.1 U.S. Airlines Industry
The U.S. airline industry has undergone considerable change. De-regulation in 1978 led to significantly increased competition and introduced many innovative methods in the development of this industry (Barbot, Costa, & Sochirca, 2008). Due to its sensitivity to environmental stimuli, the airline industry is more vulnerable and has failed more often than many other businesses (Gong, 2007). The U.S. airline industry has been in a financial crisis since the beginning of this century, a crisis exacerbated by the economic slowdown of spring 2001 and the terrorist attacks on September 11 of that year. Unlike in the economic slowdown of 1991/1992, this time the pressure was more intense. It was caused by the domestic demand shock that followed the 9/11 attacks, estimated at more than 30%, together with an ongoing downward shift in the demand for commercial air service of roughly 7.4% (Ito & Lee, 2005); the wars in Afghanistan and Iraq; the severe acute respiratory syndrome (SARS) epidemic threats; rapid low-cost carrier expansion that intensified competition in the industry; the expansion of Internet travel bookings; a sustained low-fare environment; and an unprecedented increase in fuel prices that took hold in late 2005.
[Figure 2. Jet fuel prices in the U.S. marketplace (crude oil and crack spread), 2003-2007. Source: ATA economic report 2008.]
In December 2005, almost half of the U.S. industry output, measured by available seat miles
(ASM), was under bankruptcy protection (Bhadra, 2008).
Despite these problems and pressures, only a few companies were involved in mergers and consolidation: the US Airways merger with America West in 2005; the liquidation of ATA and Aloha, the shutdown of Skybus operations, and Frontier's declaration of bankruptcy, all in April 2008; and the merger between Northwest and Delta (Bhadra, 2008). The market structure of the airline industry looks very much the same today as it did in the late 1990s.
The rapid growth of low-cost carriers (LCCs) in the U.S. domestic market presented the
traditional network legacy carriers (NLCs) with intense price competition, as the LCCs fully
exploited the significant cost advantage they enjoyed at the time. High costs and a declining
revenue environment pushed four out of six NLCs into bankruptcy. Whether under
bankruptcy protection (United, US Airways, Delta, and Northwest) or under the threat of
bankruptcy (American and Continental), the NLCs have made efforts to reduce their cost
structures and to improve their labour and aircraft productivity. They have retrenched into
their hubs, incorporated considerable schedule flexibility, cut back employment and benefits,
reduced services and outsourced routes to regional carriers, overhauled their maintenance and
repair stations, rationalized their fleet, and expanded international operations where the
revenue environment is relatively more favourable than the domestic markets. Due to this,
combined with increased load factors and some slowly emerging pricing power seen in
increased fares and fees, the U.S. airlines industry achieved modest profitability in 2006, for
the first time since 2000 (Tsoukalas, Belobaba, & Swelbar, 2008) and posted profits in 2007
(Bhadra, 2008). This highly competitive environment and the large number of stimuli encountered make this industry ideal for testing RVA flexibility, DEA efficiency, and their joint contribution to explaining organizational performance.
Table 2. U.S. Airlines by Operating Revenues, 2007

[The source table lists the U.S. carriers operating in 2007 in three revenue tiers: more than $1 billion (the major network, low-cost, and cargo carriers, e.g., American Airlines, Continental Airlines, Delta Air Lines, Northwest Airlines, Southwest Airlines, United Airlines, US Airways, FedEx Express, and UPS Airlines); $100 million to $1 billion; and less than $100 million (small regional, commuter, and air-taxi operators). The original three-column listing of roughly 180 carriers is not reproduced here.]

3.2 Research Variables Measurement
3.2.1 Dependent Variable Measurement (organizational performance)
The literature has suggested that firms should emphasize both non-financial and financial measures in their performance measurement systems (Kaplan & Norton, 1996; Nanni, Dixon, & Vollmann, 1992). However, Gosselin (2005) has shown that, despite these prescriptions, controllers use financial measures much more often than non-financial ones. According to financial theory (Brealey & Myers, 2003), the main target of any firm is to maximize its shareholders' wealth. Shareholder wealth can be represented by share price times the number of shares owned, so we can use market capitalization as a performance indicator. Market capitalization is a measurement of corporate or economic size equal to the share price times the number of shares outstanding (market capitalization = number of outstanding shares × share price).
As owning stock represents owning the company, including all its assets, capitalization could
represent the public opinion of a company's net worth. However, using market capitalization,
a large under-performing company might look better than a good small company in our
analysis. So we are going to use growth in market capitalization.
Growth in market capitalization = (Market capitalization_year t − Market capitalization_year t−1) / Market capitalization_year t−1
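The growth formula applied to a hypothetical series of year-end market capitalizations:

```python
import numpy as np

# Hypothetical year-end market capitalizations (USD millions) for one carrier.
market_cap = np.array([1200.0, 950.0, 1100.0, 1400.0])

# Year-over-year growth: (cap_t - cap_{t-1}) / cap_{t-1}
growth = np.diff(market_cap) / market_cap[:-1]
```

Expressing performance as a growth rate puts large and small carriers on the same scale, which is the point of preferring growth over the raw capitalization level.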
3.2.2 Independent Variable Measurement (DEA)
The input/output variables selected for this paper follow the literature and are common in
most related studies that have studied the efficiency of the airline industry (Assaf, 2009).
Inputs:
- Operating cost (excluding labour cost)
- Labour cost
- Number of planes (capital input)

Outputs:
- Total operating revenue (passenger service and cargo operations)
- Income before tax

3.2.3 Independent Variable Measurement (RVA)
Stimuli
The stimuli variables have been chosen to capture the different crises and changes facing the airline industry; at the same time, they must have an impact on industry performance.
The study of Chin and Tay (2001) found a positive correlation between the GDP growth rate
and air traffic growth rates. Also they found a positive correlation between air traffic growth
and the airlines‘ profitability. This can justify the selection of the growth rate of the economy
as a good stimulus. Moreover, the growth rate of the economy can capture the figures from
the recent financial crisis and terrorist attacks.
Another important stimulus that can be studied is jet fuel prices. In Assaf's study (2009) of the efficiency of U.S. airlines, he mentioned fuel prices as a main cause of the overall
inefficiency in this industry. And Bhadra (2008) found the same result in his study of the
performance of U.S. airlines. So we can use the following variables to measure the RVA
stimuli: jet fuel prices and the growth rate of the economy.
Responses
In their study of European airlines, Alderighi and Cento (2004) used adjustment cost as a proxy to test the flexibility of a company during crisis times. They used the company's capacity and its market prices to measure the adjustment cost. Following the same procedure, we can measure responses with the following variables: the company's capacity (total number of seats in service) and ticket prices.
Outcomes
In the airline industry, total passenger-miles (PMs) is the sum, over all flights, of the number of passengers carried multiplied by the mileage travelled. This measurement has been chosen as an output variable in performance measurement for the airline industry (Lin, 2008).

    Passenger-miles = Σ_{j=1}^{N} EP_j × M_j,

where EP_j is the number of passengers who embarked on flight j, M_j is the mileage flight j travelled, and N is the total number of flights.
Again, using raw passenger-miles, a large but less flexible company might look better than a more flexible small company in our analysis. So growth in passenger-miles would be a good candidate for the RVA outcome.
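The passenger-miles sum and its growth rate can be computed directly; all flight data below are hypothetical.

```python
import numpy as np

# Hypothetical per-flight data for one year: passengers embarked, miles flown.
passengers = np.array([120, 95, 150, 80])
miles = np.array([500.0, 1200.0, 800.0, 2400.0])

# Passenger-miles = sum over flights of EP_j * M_j
pm_this_year = float(np.sum(passengers * miles))

# Growth relative to a hypothetical prior-year total.
pm_last_year = 380_000.0
pm_growth = (pm_this_year - pm_last_year) / pm_last_year
```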
Hansen, Gillen, and Djafarian-Tehrani (2001) and Scheraga (2004) used revenue passenger-miles and non-passenger revenue, such as cargo revenue, in their studies of operational efficiency and cost modelling in the airline industry, which supports using growth of total (passenger and non-passenger) revenue as another candidate for the RVA outcome. The only point against this choice is that total revenue has already been chosen as an output of the DEA analysis, which could raise the correlation between the two variables.
3.3 Calculations Software
3.3.1 DEA Calculations
DEAP software will be used to calculate the DEA variable. DEAP is a program developed by Coelli (1996) that specializes in DEA calculations. Three principal options are available in this program:
1. The standard CRS and VRS DEA models, which are the most important for this study.
2. The extension of the standard model to account for cost and allocative efficiencies.
3. Application of the Malmquist DEA method.
Coelli's model has been used by many authors in the field of DEA analysis (cf. Hess & Cullman, 2007; Perrigot & Barros, 2008).
Software will also be used to test the significance of the DEA results. The free FEAR software (Wilson, 2006) under the R project (R Development Core Team, 2009) will be used in this calculation. R is freely available under the Free Software Foundation's GNU General Public License. Many options are available in FEAR, such as:
- DEA estimates of technical, allocative, and overall efficiency with variable, non-increasing, or constant returns to scale.
- Malmquist indices.
- The ability to do statistical inference on the DEA results using the bootstrapping method described in Simar and Wilson (1998) (Wilson, 2008).
The main disadvantage of using FEAR on R is that R has no friendly user interface.
3.3.2 RVA Calculations
Since there is no specialized software for RVA calculations, a general linear optimization
application will be used for that purpose. Microsoft Excel with its SOLVER application will
be used to run the linear optimisations to get RVA results.
3.3.3 Regression Model Calculations
Microsoft Excel will also be used to run the regression model between the DEA, RVA, and
organizational performance.
3.4 Further Research
- Further studies should choose more industries to test the above relationships. This will give the test's results more reliability and validity.
- Further research is needed to test the relationship between efficiency, flexibility, and performance assuming a dynamic relation. This can be done by testing some of the catastrophe models introduced in Thom (1975)⁴.
- Further research should be conducted using Heiner's principle of predictable behaviour (1983) to determine the relation between flexibility and organizational performance by introducing uncertainty into the optimization theory of behaviour⁵.
⁴ See Appendix II for more details about catastrophe theory.
⁵ See Appendix II.
3.5 Thesis Timeline

Table 3. Thesis Starting and Finishing Dates

Stage                 Starting date   Finishing date   Duration
MPhil stage           Dec. 2008       Dec. 2009        12 months
Final thesis stage    Jan. 2010       Dec. 2011        24 months
Total                                                  36 months
Table 4. Thesis Timeline

[Gantt chart, time in months (1-36) across columns 1-3, 4-6, 7-9, 10-13, 14-16, 17-19, 20-22, 23-25, 26-29, 30-33, and 34-36, with tasks in order: literature review, methodologies, proposal refinement, MPhil defence, preliminary analysis, writing first article, writing second article, thesis writing, revised thesis, and final defence. The bar layout is not reproduced here.]
REFERENCES
Abbott, M., & Doucouliagos, C. (2003). The efficiency of Australian universities: A data envelopment analysis. Economics of Education Review, 89-100.
Adler, P. S., Goldoftas, B., & Levine, D. I. (1999). Flexibility versus efficiency? A case study of model changeovers in the Toyota production system. Organization Science, 10, 43-68.
Agrell, P. J., & Bogetoft, P. (2005). Economic and environmental efficiency of district heating plants. Energy Policy, 33, 1351.
Ahmed, P. K., Hardaker, G., & Carpenter, M. (1996). Integrated flexibility: Key to
competition in a turbulent environment. Long Range Planning, 4, 562–571.
Alchian, A. A. (1950). Uncertainty, evolution and economic theory. Journal of Political Economy, 58, 211-221.
Alderighi, M., & Cento, A. (2004). European airlines conduct after September 11. Journal of
Air Transport Management, 10, 97-107.
Alexander, C. A., Busch, G., & Stringer, K. (2003). Implementing and interpreting a data envelopment analysis model to assess the efficiency of health systems in developing countries. IMA Journal of Management Mathematics, 14, 35-49.
Ashby, R. (1964). An introduction to cybernetics. London, UK: Chapman & Hall.
Assaf, A. (2009). Are U.S. airlines really in crisis? Tourism Management,
doi:10.1016/j.tourman.2008.11.006.
Baack, D., & Cullen, J. B. (1992). A catastrophe theory model of technological and structural
change. Journal of High Technology Management Research, 3(1), 125-145.
Banker, R. D., Charnes, A., & Cooper, W. W. (1984). Some models for estimating technical
and scale inefficiencies in data envelopment analysis. Journal of Management Science, 30(9),
1078-1092.
Banker, R. D., & Morey, R. C. (1986). Efficiency analysis for exogenously fixed inputs and
outputs. Operations Research, 34, 513-521.
Barbot, C., Costa, A., & Sochirca, E. (2008). Airlines performance in the new market
context: A comparative productivity and efficiency analysis. Journal of Air Transport
Management, 14, 270-274.
Behrman, J. R., & Deolalikar, B. (1989). Duration of survival of manufacturing
establishments in a developing country. Journal of Industrial Economics, 38, 215-237.
Bhadra, D. (2008). Race to the bottom or swimming upstream: Performance analysis of US
airlines. Journal of Air Transport Management, 15, 227-235.
Blahut, R. E. (1987). Principles and practice of information theory. Reading, MA: Addison-Wesley.
Brealey, R. A., & Myers, S. C. (2003). Principles of corporate finance. New York, NY: McGraw-Hill.
Brickley, J. A., Smith, C. W., & Zimmerman, J. L. (2003). Managerial economics and
organizational architecture. New York, NY: McGraw-Hill/Irwin.
Buyurgan, N., & Saygin, C. (2006). An integrated control framework for flexible
manufacturing systems. International Journal of Advanced Manufacturing Technology, 27,
1248-1259.
Caves, R. E. (1998). Industrial organization and new findings on the turnover and mobility of
firms. Journal of Economic Literature, 36, 1947-1982.
Chan, F. T. S. (2004). Impact of operation flexibility and dispatching rules on the
performance of a flexible manufacturing system. International Journal of Advanced
Manufacturing Technology, 24, 447-459.
Charnes, A., Cooper, W., & Rhodes, E. (1978). Measuring the efficiency of decision making
units. European Journal of Operational Research, 2, 429-444.
Charnes, A., Cooper, W. W., Wei, Q. L., & Huang, Z. M. (1989). Cone ratio data
envelopment analysis and multi-objective programming. International Journal of Systems
Science, 20, 1099-1118.
Cheng, J. L. C., & Kesner, I. F. (1997). Organizational slack and response to environmental
shifts the impact of resource allocation patterns. Journal of Management, 23(1), 1-18.
Chin, A. T. H, & Tay, J. H. (2001). Developments in air transport: Implications on
investment decisions, profitability and survival of Asian airlines. Journal of Air Transport
Management, 7, 319-330.
Choi, B. P., & Weiss, M. A. (2005). An empirical investigation of market structure,
efficiency, and performance in property-liability insurance. Journal of Risk and Insurance,
72(4), 635-673.
Coelli, T. (1996). A guide to DEAP Version 2.1: A data envelopment analysis (computer)
program. Working paper. Armidale, Australia: University of New England.
Cooper, W. W., Seiford, L. M., & Zhu, J. (2004). Handbook on data envelopment analysis.
Boston, MA: Kluwer Academic.
Dreyer, B., & Grønhaug, K. (2004). Uncertainty, flexibility, and sustained competitive
advantage. Journal of Business Research, 57, 484-494.
Dyson, R.G., & Thanassoulis, E. (1988). Reducing weight flexibility in data envelopment
analysis. Journal of the Operational Research Society, 39, 563-576.
Eatwell, J. (1987). Returns to scale. A Dictionary of Economics, 165-166.
Emrouznejad, A., Parker, B. R., & Tavares, G. (2008). Evaluation of research in efficiency
and productivity: A survey and analysis of the first 30 years of scholarly literature in DEA.
Socio-Economic Planning Sciences, 42, 151-157.
Farrell, M. J. (1957). The measurement of productive efficiency. Journal of the Royal
Statistical Society, Series A, 120, 253-290.
Färe, R., Grosskopf, S., & Lovell, C. A. K. (1985). The measurement of efficiency of
production. Boston, MA: Kluwer Academic.
Gitman, L. J. (2006). Principles of managerial finance. Pearson Education.
Gong, S. X. H. (2007). Bankruptcy protection and stock market behavior in the US airline
industry. Journal of Air Transport Management, 13, 213-220.
Gosselin, M. (2005). An empirical study of performance measurement in manufacturing
firms. International Journal of Productivity and Performance Management, 54(5/6), 419-437.
Grubbstrӧm, R. W., & Olhager, D. J. (1997). Productivity and flexibility: Fundamental
relations between two major properties and performance measures of the production system.
International Journal of Production Economics, 52, 73-82.
Hansen, G. S., & Wernerfelt, B. (1989). Determinants of firm performance: The relative
importance of economic and organisational factors. Strategic Management Journal, 10, 399-411.
Hansen, M. M., Gillen, D., & Djafarian-Tehrani, R. (2001). Aviation infrastructure
performance and airline cost: A statistical cost estimation approach. Transportation Research
Part E, 1, 1-23.
Hegstad, M. (1990). A simple, low-risk, approach to JIT. Production Planning &
Control,1(1), 53-60.
Heiner, R. (1983). The origin of predictable behaviour. The American Economic Review,
73(4), 560-595.
Heizer, J., & Render, B. (2000). Operations management. Englewood Cliffs, NJ: Prentice
Hall.
Hess, B., & Cullmann, A. (2007). Efficiency analysis of East and West German electricity
distribution companies: Do the "Ossis" really beat the "Wessis"? Utilities Policy, 15, 206-214.
International Air Transport Association. (2008). IATA economic report. Annual report.
Washington, DC: Author.
Ito, H., & Lee, D. (2005). Assessing the impact of the September 11 terrorist attacks on U.S.
airline demand. Journal of Economics and Business, 57, 75-95.
Kaplan, R. S, & Norton, D. P. (1996). The balanced scorecard. Boston, MA: Harvard
Business School Press.
Kauffman, R., & Oliva, T. (1994). Multivariate catastrophe model estimation: Methods and
application. Academy of Management Journal, 37(1), 206-221.
Koopmans, T. C. (1951). Activity analysis of production and allocation. New York: Wiley.
Kullback, S. (1997). Information theory and statistics. Mineola, NY: Dover Publications.
Kumar, M., & Charles, V. (2008). Productivity growth as the predictor of shareholders'
wealth maximization: An empirical investigation. Journal of CENTRUM Cathedra, 72-83.
Lin, T. J. (2008). Route-based performance evaluation of Taiwanese domestic airlines using
data envelopment analysis: A comment. Transportation Research Part E, 44, 894-899.
MacDuffie, J. P. (1993). Beyond mass production: Flexible production systems and
manufacturing performance in the world auto industry. International Business Studies, 24(1),
339-346.
Wu, M.-L. (2006). Corporate social performance, corporate financial
performance and firm size: A meta-analysis. Journal of American Academy of Business,
8(1), 197-202.
Meybodi, M. Z. (2009). Benchmarking performance measures in traditional and just-in-time
companies. Benchmarking: An International Journal, 16(1), 88-102.
Mostafa, M. (2007). Modeling the efficiency of GCC banks: A data envelopment analysis
approach. International Journal of Productivity and Performance Management, 56(7), 623-643.
Nanni, A. J., Dixon, R., & Vollmann, T. E. (1992). Integrated performance measurement:
Management accounting to support the new manufacturing realities. Journal of Management
Accounting Research, 4, 1-19.
Nor, N. M., Nor, N. G., Abdullah, A. Z., & Jalil, S. A. (2007). Flexibility and small firms'
survival: Further evidence from Malaysian manufacturing. Applied Economics Letters, 14,
931-934.
Olsson, N. O. E. (2006). Management of flexibility in projects. International Journal of
Project Management, 24, 66-74.
Pagell, M., & Krause, D. R. (2004). Re-exploring the relationship between flexibility and the
external environment. Journal of Operations Management, 21, 629-649.
Perrigot, R., & Barros, C. P. (2008). Technical efficiency of French retailers. Journal of
Retailing and Consumer Services, 15, 296-305.
Phillips, F., Summers, G., & Moon, G.-S. (2002). Flexibility, efficiency and performance of
enterprises: An optimization framework and simulation test. Working paper. Oregon Health
& Science University.
Phillips, F., & Tuladhar, S. D. (2000). Measuring organisational flexibility: An exploration
and general model. Technological Forecasting and Social Change, 64, 23-38.
Ptak, C. (1987). MRP and beyond: A toolbox for integrating people and systems. Chicago,
IL: Irwin.
R Development Core Team. (2009). R: A language and environment for statistical
computing. Vienna, Austria: R Foundation for Statistical Computing.
Roll, Y., & Golany, B. (1993). Alternate methods of treating factor weights in DEA. Omega,
21, 99-109.
Scala, J., Purdy, L., & Safayeni, F. (2006). Application of cybernetics to manufacturing
flexibility: A systems perspective. Journal of Manufacturing Technology Management, 17(1),
22-41.
Scheraga, C. A. (2004). Operational efficiency versus financial mobility in the global airline
industry: A data envelopment and Tobit analysis. Transportation Research Part A, 38, 383-404.
Shannon, C., & Weaver, W. (1963). The mathematical theory of communication. Urbana, IL:
University of Illinois Press.
Simar, L., & Wilson, P. (1998). Sensitivity analysis of efficiency scores: How to bootstrap in
nonparametric frontier models. Management Science, 44, 49-61.
Stevenson, W. J. (1996). Production/operations management. Chicago, IL: Irwin.
Stigler, G. J. (1939). Production and distribution in the short run. Journal of Political
Economy, 47(3), 305-327.
Thom, R. (1975). Structural stability and morphogenesis. Reading, MA: Addison-Wesley.
Thompson, R.G., Thrall, R. M., Langemeier, L., & Lee, E. (1990). The role of multiplier
bounds in efficiency analysis with application to Kansas farming. Journal of Econometrics.
Thore, S., Kozmetsky, G., & Phillips, F. (1994). DEA of financial statements data: The U.S.
computer industry. Journal of Productivity Analysis, 5(3), 229-248.
Tsoukalas, G., Belobaba, P., & Swelbar, W. (2008). Cost convergence in the US airline
industry: An analysis of unit costs 1995-2006. Journal of Air Transport Management, 14,
179-187.
Tu, A. H., & Chen, S.-Y. (2000). Bank market structure and performance in Taiwan before
and after 1991 liberalization. Review of Pacific Basin Financial Markets and Policies, 3(4),
475-490.
Volberda, H. W. (1998). Building the flexible firm: How to remain competitive. Oxford, UK:
Oxford University Press.
von Ungern-Sternberg, T. (1990). The flexibility to switch between different products.
Economica, 57, 355-369.
Weiss, C. R. (2001). On flexibility. Journal of Economic Behavior & Organization, 46, 347-356.
Weisstein, E. W. (1999). Kolmogorov entropy. Retrieved May 7, 2009, from
http://mathworld.wolfram.com/KolmogorovEntropy.html
Wilson, P. W. (2006). FEAR: Frontier Efficiency Analysis with R. R package version 1.12.
Wilson, P. W. (2008). FEAR: A software package for frontier efficiency analysis with R.
Socio-Economic Planning Sciences, 42, 247-254.
Winter, S. G. (1964). Economic 'natural selection' and the theory of the firm. Yale
Economic Essays, 4, 225-272.
Wong, Y.-H. B., & Beasley, J. E. (1990). Restricting weight flexibility in data envelopment
analysis. Journal of the Operational Research Society, 41, 829-835.
Yockey, H. P. (1992). Information theory and molecular biology. Cambridge, UK:
Cambridge University Press.
APPENDIX I
HEINER’S THEORY OF PREDICTABLE BEHAVIOUR
3.6 Optimisation Theory
Optimisation theory is a common assumption used to explain economic behaviour.
Optimisation identifies the strategies that offer the highest return to an agent, given all
the factors and constraints the agent faces. One of the simplest ways to arrive at an
optimal solution is a cost/benefit analysis: if the benefits of a behaviour outweigh its
costs, the behaviour will be pursued; otherwise it will be abandoned.
However, general economic theory assumes that the agent always behaves so as to optimise
his or her behaviour. Optimisation theory has been challenged as an acceptable explanation
of behaviour. Researchers (Alchian, 1950; Winter, 1964) have argued that the agent cannot
always choose the best (most favourable) alternatives and, consequently, cannot optimise
his or her decisions. Some of the obstacles facing an agent who wishes to optimise,
identified by these researchers, include the following:
- Information-processing limitations: the agent cannot process information well enough to
select the best alternatives.
- Unreliable probability information about the complex environment: the agent cannot be
sure that the information he or she holds models and predicts the complex environment he
or she faces.
- Nonexistence of a well-defined set of alternatives from which the agent can identify the
preferred and most preferred options: because of uncertainty, the agent cannot restrict
himself or herself to a limited set of alternatives.
3.7 Heiner's Theory
Heiner's theory can be explained by considering two situations. In the first situation, the
agent can optimise his or her behaviour by choosing the most preferred alternatives. Here
there is no uncertainty, because the agent knows exactly what to do: the alternatives are
well defined, and the agent can distinguish the most preferred alternatives from the less
preferred ones. The role of flexibility in this situation is extremely limited, because the
agent can already identify the best behaviour.
In the second situation, the agent cannot optimise his or her behaviour because of
uncertainty: the agent cannot identify the most preferred alternative among those
available, and must choose an alternative that may or may not be the most preferred among
many others. Once uncertainty is introduced, the role of flexibility arises. The more
flexible the agent's choice, the more alternatives he or she will face, and consequently
the greater the chance of committing an error and choosing a preferred, but not the most
preferred, alternative.
Heiner (1983) argued that uncertainty is the basic source of predictable behaviour.
Uncertainty exists because the agent cannot decipher all of the complexity of the decision
he or she faces, which prevents him or her from choosing the most preferred alternatives.
Consequently, the flexibility of behaviour to react to information is constrained to
smaller behavioural repertoires that can be reliably administered. Heiner agreed with
standard economic analysis only in the case of no uncertainty in selecting the most
preferred options; that analysis treats uncertainty as a residual "error term" between
observed behaviour and the more systematic patterns claimed to result from optimisation.
3.8 Heiner's Reliability Condition
As mentioned above, Heiner used uncertainty (U) as the source of predictable behaviour.
But how did he measure uncertainty? According to standard choice theory, two major
concepts determine uncertainty:
- Competence (p): the decision ability of the agent to select the most preferred
alternatives.
- Difficulty (e): the difficulty of selecting the most preferred alternatives, due to the
complexity of the task.
The difference between difficulty and competence is called the C-D gap:

C-D gap = max{0, Difficulty − Competence}
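The C-D gap definition translates directly into code. A minimal Python sketch, with purely illustrative numbers for competence and difficulty:

```python
def cd_gap(competence: float, difficulty: float) -> float:
    """Heiner's C-D gap: max(0, difficulty - competence).
    Zero means the agent can decide perfectly; a positive gap
    means the agent faces uncertainty and will make errors."""
    return max(0.0, difficulty - competence)

# Illustrative values only: a competent agent on an easy task has no gap,
# while a hard task opens a positive gap (and hence uncertainty).
print(cd_gap(0.9, 0.6))  # competence exceeds difficulty -> 0.0
print(cd_gap(0.5, 0.8))  # difficulty exceeds competence -> positive gap
```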
When the agent's competence is equal to or greater than the difficulty of the task, the C-D
gap will equal zero; this means the agent can decide perfectly. On the other hand, when the
difficulty of the task is greater than the agent's ability, the C-D gap will be positive
and the agent will face uncertainty, make decision errors, and encounter surprises.
Uncertainty in this case is not just a 0/1 variable; rather, it is a continuous variable
that increases or decreases as the magnitude of the C-D gap changes. And so we can write
this definition: "Uncertainty increases monotonically with an agent's C-D gap and thus
monotonically increases with the difficulty of the problem and monotonically decreases
with the agent's ability."
Thus, we can write:

U = u(p⁻, e⁺)

where the superscript signs indicate that uncertainty decreases with competence p and
increases with difficulty e.
Consider an agent limited to a fixed repertoire of actions, and ask whether allowing the
flexibility to select an additional action will improve the agent's performance. The new
action will be more preferred than the other actions in the agent's repertoire if it is
selected at the right time, and less preferred when chosen at the wrong time. The
probabilities of the right and wrong time to select the action, which depend on the
likelihood of the different situations produced by the environment, are written as π(e)
and 1 − π(e), respectively.
Because of uncertainty, the agent will not necessarily select the new action when it is the
right time to do so. The conditional probability of selecting the action when it is
actually the right time can be written as r(U); it is a function of uncertainty because the
likelihood of doing so depends on the structure of uncertainty. The resulting gain in
performance, compared with staying with the initial repertoire, can be written as g(e). In
the same manner, we can write the conditional probability of selecting the new action when
it is actually the wrong time as w(U), with a consequent loss in the agent's performance
of l(e).
In the special case of no uncertainty, the new action will always be selected at the right
time, so r = 1 and w = 0. In the general case, with uncertainty present, r will be less
than 1 and w > 0.
The reliability of selecting a new action can be measured by the ratio r/w, which
represents the chance of selecting the new action at the right time relative to selecting
it at the wrong time. Greater uncertainty will both reduce the chance of correct selections
and increase the chance of mistaken selections, thereby lowering the ratio r/w (reducing
the reliability of selecting the new action).
Now we are ready to answer the following question: When is the selection of a new action
sufficiently reliable for an agent to benefit from allowing the flexibility to select that
action? The answer requires that the gains g(e) from selecting the action when it is more
preferred accumulate faster than the losses l(e) from selecting it when it is less
preferred. The right conditions occur with probability π(e) and are correctly recognised
with probability r(U), so the expected gain is g(e) r(U) π(e). In the same way, the
expected loss is l(e) w(U) (1 − π(e)). Thus, gains accumulate faster than losses if
g(e) r(U) π(e) > l(e) w(U) (1 − π(e)). This yields the following reliability condition:
r(U) / w(U) > (l(e) / g(e)) · ((1 − π(e)) / π(e))
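The reliability condition can be checked numerically. The Python sketch below uses made-up values for π(e), r(U), w(U), g(e), and l(e) — none of them estimates from this study — to decide whether allowing the extra action pays off:

```python
def flexibility_pays(pi, r, w, g, l):
    """Heiner's reliability test: allowing the new action is beneficial
    when expected gains accumulate faster than expected losses,
    i.e. g*r*pi > l*w*(1 - pi), which is equivalent to the ratio
    r/w exceeding (l/g) * ((1 - pi)/pi)."""
    return g * r * pi > l * w * (1.0 - pi)

# Illustrative numbers: right-time probability pi = 0.4, reliability r = 0.7,
# error rate w = 0.2, gain g = 10, loss l = 6.
print(flexibility_pays(0.4, 0.7, 0.2, 10.0, 6.0))   # flexibility is worthwhile
print(flexibility_pays(0.1, 0.3, 0.5, 1.0, 10.0))   # unreliable: stay with repertoire
```

With the first set of values, expected gains (2.8) exceed expected losses (0.72); with the second, rare right times and heavy losses make the extra flexibility harmful, which is exactly Heiner's point about constrained repertoires.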
APPENDIX II
CATASTROPHE THEORY
Catastrophe theory is a mathematical tool originally developed by the French mathematician
René Thom in the late 1960s (Thom, 1975). It is part of bifurcation theory, which studies
systems characterised by sudden shifts in behaviour arising from small changes in
parameters or circumstances. Catastrophe theory can deal with a complex system
characterised by discontinuities without reference to any specific mechanism; this
critical property makes it an important tool for studying and modelling a system whose
inner mechanism is not known (Kauffman & Oliva, 1994).
Studying continuous change can be done with many tools like calculus. On the other hand,
the greatest benefit of catastrophe theory is that it allows for analysis of both continuous
(evolutionary) and discontinuous (revolutionary) changes within the same context (Baack &
Cullen, 1992).
Catastrophe theory can be explained by the illustration of one of the most famous models of
this theory: the cusp model.
3.9 Cusp Model
To build the cusp model, three variables must be specified: one dependent variable and two
independent variables, the normal (or control) factor and the splitting factor. The control
factor is related to the dependent variable in consistent patterns, while the splitting
factor is the key variable that prompts the control variable to change the behaviour of
the dependent variable from continuous to discontinuous. Generally, the splitting variable
is a moderator that specifies the conditions under which the control variable will produce
a continuous change in the dependent variable, and the conditions under which it will
produce a revolutionary change. When the predictor variables (control and splitting) reach
a critical point (a point where not just the first derivative but one or more higher
derivatives of the function are also zero), the dependent variable changes suddenly and
radically.
Figure 3. Cusp model graphical presentation.
The above figure is a typical cusp model. The two independent variables are represented by
the two horizontal axes, shown as the splitting factor and the normal factor. The dependent
variable is represented by the vertical axis, where the surface represents the various
forms of behaviour of the dependent variable in response to changes in the independent
variables. The cusp is the "s-shaped" fold in the behaviour surface. At levels of the
splitting factor away from the cusp area, the relationship between the normal factor and
the dependent variable is continuous; in Figure 3, moving from A to B or from B to A is an
example of this relationship. The cusp area is responsible for radical, discontinuous
change in the dependent variable; moving from C to D or from D to E is an example of this
relation.
3.10 Catastrophe Properties, Relations, and Terminology
From the above, we can see that catastrophe theory has many properties and relations. Some
are explained below.
1. Evolution: the dependent variable responds to the normal factor in a continuous way.
2. Catastrophe ("revolution"): the dependent variable responds to the normal factor in a
discontinuous way.
3. Divergence: the behaviour of the dependent variable in the cusp area.
4. Hysteresis: when the dependent variable changes in a catastrophic fashion, a severe
reversal in the normal factor is required before any return to a previous state.
5. Bimodality: the same values of the independent variables can produce different values
of the dependent variable.
6. Inaccessibility: the shaded area in the S shape (Figure 3), which represents the least
likely state of the model.
3.11 Catastrophic Models
Depending on the number of control and splitting factors, Thom (1975) defined a set of
basic catastrophe models, of which the cusp model discussed above is just one:
- Fold catastrophe: V = x³ + ax
- Cusp catastrophe: V = x⁴ + ax² + bx
- Swallowtail catastrophe: V = x⁵ + ax³ + bx² + cx
- Butterfly catastrophe: V = x⁶ + ax⁴ + bx³ + cx² + dx
APPENDIX III
Ashby's Law and Extensions
The idea of the survival law is that the survival of any system (for example, any
biological system) depends upon its flexibility in adapting to external environmental
stimuli (for example, the surrounding temperature). Biological theory and cybernetic
science tell us that survival depends on maintaining a requisite level of variety in
responses to environmental stimuli (Phillips & Tuladhar, 2000).
Ashby set forth the Law of Requisite Variety as follows6:
Table A1
System S1 in environment E1 (five stimuli). Rows are S1's responses; cell entries are
outcomes.

Stimulus (E1):  1  2  3  4  5
Response 1:     b  d  a  a  a
Response 2:     d  a  d  d  a
Response 3:     a  a  d  b  a
Response 4:     b  d  a  b  d

Table A2
System S2 in environment E2 (nine stimuli). Rows are S2's responses; cell entries are
outcomes.

Stimulus (E2):  1  2  3  4  5  6  7  8  9
Response 1:     f  f  k  k  e  f  m  k  a
Response 2:     b  b  b  c  q  c  h  h  m
Response 3:     j  d  d  a  p  j  l  n  h
3.12 Variety
Variety, regulation, disturbance (stimulus), and outcome are very important concepts when
discussing Ashby's law and the concept of flexibility. Variety is defined as "the number of
distinct elements in a set", the set here being the set of possible outcomes within some
defined situation. For example, suppose each license plate carries a six-digit number.
Since each of the six digits can take 10 values, the set of all possible plates that can be
produced contains 10⁶ = one million plates, so this set has a variety of one million. If
instead the plates were distinguished by 5 different colours, without any numbers, the
variety of the set would be only 5.

6. The first example and its tables were taken from Phillips and Tuladhar (2000); the
other short examples were taken from Scala, Purdy, and Safayeni (2006).
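These variety counts are simple combinatorics, and can be confirmed in two lines of Python (the colour names below are invented for illustration):

```python
# Variety = the number of distinct elements in a set of possible outcomes.
# Six-digit plates: each of the 6 digits has 10 possibilities.
plate_variety = 10 ** 6
print(plate_variety)  # 1000000

# Plates distinguished only by colour (hypothetical colour names):
colours = {"red", "blue", "green", "white", "black"}
print(len(colours))   # 5
```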
3.13 Regulation, Disturbance, and Outcomes
The above concepts can be explained by an example. Consider the air-conditioning system in
a building. This system can face a number of disturbances (stimuli), such as the outdoor
temperature and wind conditions. The actions the air-conditioning system can take to
adjust the indoor temperature constitute the system's regulation. The actual indoor
temperature is the outcome of this set; it could be a favourable result, the ideal
temperature, or an unfavourable one, such as an unacceptably hot or cold temperature.
To illustrate Ashby's Law, we can start by looking at Table A1 and Table A2, which depict
two systems or organisms (S1 and S2) experiencing different numbers of environmental
stimuli. S1, within its E1 environment, encounters five distinct stimuli (a variety of
five), while S2, within its E2 environment, faces nine distinct environmental stimuli.
Their respective repertoires of responses are represented by the rows of the tables above.
The remaining elements in the tables are the outcomes of each system's responses to its
environmental stimuli. If, for example, we take d to be the outcome favourable to system
survival, we can see that some of the outcomes are favourable to the system's or
organism's survival and some are not. The outcome d is favourable because it may improve
the system's or organism's capacity to generate further repertoires of responses to the
environmental stimuli. The relation between stimuli, responses, and outcomes is tied to
the concept of flexibility: a system or organism is more flexible if it is capable of
generating more responses to its environmental stimuli and of making decisions that lead
to a favourable outcome. A wrong decision, by contrast, can lead to an outcome
unfavourable to the system's or organism's survival.
Something else must be considered here. Comparing the situations of S1 and S2, we can see
repetition in the columns of Table A1. This repetition means that system S1 need not
change its responses with every change in the environmental stimuli. The situation is
different in Table A2: there is no repetition in its columns, and system S2 must change
its responses as the environmental stimuli change.
Ashby's Law provides a relation for predicting the number of possible outcomes from the
number of distinct stimuli and responses:

number of distinct outcomes ≥ number of distinct stimuli / number of distinct responses

In the previous example, Table A2 shows 15 possible outcomes, which is greater than
9/3 = 3. But S2 can adopt a strategy that minimises its environmental outcomes. For
example:
Table A4
Stimulus (E2):  1  2  3  4  5  6  7  8  9
Outcome:        k  k  k  b  c  h  j  a  h
In Table A4, S2 reduced its outcomes to seven by using such a strategy (varying its
responses), which still satisfies the relation above, seven being greater than 9/3 = 3.
However, S2 could also adopt the strategy of not varying its responses, which would
generate as many outcomes as the variety of the stimuli. Comparing the two strategies: if
the number of outcomes is fixed, or is a fraction of E2's variety, S2 can reduce the
number of outcomes only by generating more responses to its environmental stimuli.

We reach the same conclusion with the air-conditioning example: if the system is to
maintain the desired temperature, it must adjust its regulator many times. This, in very
simple terms, is the conclusion of Ashby's Law of Requisite Variety: "Only variety in the
regulator can force down the variety of the disturbance." "Only variety can destroy or
absorb variety."
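Ashby's counting argument can be checked mechanically. The Python sketch below takes the outcome letters of Table A2 (arranged here in a row-by-row reading, which is an assumption about the original layout), applies a simple greedy strategy that reuses outcomes already produced wherever possible, and verifies that the resulting number of distinct outcomes still respects the bound of stimuli divided by responses:

```python
# Outcome table: outcomes[response][stimulus], for a system with 3 responses
# facing 9 distinct environmental stimuli (letters follow Table A2, read row
# by row; the arrangement is an assumption, for illustration only).
outcomes = [
    ["f", "f", "k", "k", "e", "f", "m", "k", "a"],  # response 1
    ["b", "b", "b", "c", "q", "c", "h", "h", "m"],  # response 2
    ["j", "d", "d", "a", "p", "j", "l", "n", "h"],  # response 3
]

n_stimuli = len(outcomes[0])
n_responses = len(outcomes)

# Greedy strategy: for each stimulus, reuse an outcome we have already
# produced if any response offers one; otherwise accept a new outcome.
chosen = set()
for s in range(n_stimuli):
    column = [outcomes[r][s] for r in range(n_responses)]
    reusable = [o for o in column if o in chosen]
    chosen.add(reusable[0] if reusable else column[0])

print(len(chosen))  # distinct outcomes achieved by the greedy strategy
# Ashby's bound: distinct outcomes can never fall below stimuli / responses.
assert len(chosen) >= n_stimuli / n_responses
```

Whatever strategy S2 adopts, the final assertion holds: only more response variety (a larger divisor) could force the outcome variety lower, which is the law restated in code.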
3.14 Ashby's Law of Requisite Variety
Thus Ashby's Law of Requisite Variety: the variety in the outcomes, if minimal under
existing stimulus/response variety, can decrease further only through a corresponding
increase in the organism's response variety.