Journal of Operations Management 25 (2007) 1015–1034
www.elsevier.com/locate/jom
The process management triangle: An empirical
investigation of process trade-offs
Robert D. Klassen *, Larry J. Menor 1
Richard Ivey School of Business, University of Western Ontario, 1151 Richmond Street North,
London, Ontario, Canada N6A 3K7
Available online 13 November 2006
Abstract
Advancing theory and understanding of process management issues continues to be a central concern for operations
management research and practice. While an insightful body of knowledge – based primarily on studies at the process-level –
exists on the management of capacity and inventory, the dynamism characterizing most operating and competitive systems poses
an ongoing challenge for managers having to mitigate the impact of variability across different levels of operating systems
(e.g., production processes, facilities, and supply chains). This paper builds on a conceptual framework, derived from queuing
theory and termed the "process management triangle", to explore the extent to which fundamental trade-offs between capacity
utilization, variability and inventory (CVI) generalize to complex operations and business systems. To do so, empirical analyses
utilizing comparatively unique data for the study of these process management issues – and collected from two distinct, vastly
different levels of analysis – are presented. First, a simulation-based facility-level analysis using teaching case study data is
presented. Second, an industry-level analysis employing archival economic data spanning three multi-year periods is considered.
Collectively, these empirical analyses provide exploratory support for the generalization and extension of analytical insights on CVI
trade-offs to both complex operations and business systems, although with decreasing explanatory power. The implications of these
studies for furthering process management theory and understanding are framed around additional research propositions intended to
guide future investigation of CVI trade-offs.
© 2006 Elsevier B.V. All rights reserved.
Keywords: Process management; Operating trade-offs; Variability; Manufacturing performance; Business systems
1. Introduction
Process management involves the understanding,
design, and improvement of processes, and is of central
interest to much of the field of operations management
* Corresponding author. Tel.: +1 519 661 3336;
fax: +1 519 661 3959.
E-mail addresses: [email protected] (R.D. Klassen),
[email protected] (L.J. Menor).
1 Tel.: +1 519 661 2103; fax: +1 519 661 3959.
0272-6963/$ – see front matter © 2006 Elsevier B.V. All rights reserved.
doi:10.1016/j.jom.2006.10.004
(OM). Theory and understanding of process-related
issues like capacity utilization and inventory – based
primarily on normative, optimization-based studies
(Pannirselvam et al., 1999; Silver, 2004) – have advanced
considerably, and insights generated from this research
have informed and improved practice in both manufacturing and services. However, the complexity and
dynamism characterizing operating and competitive
environments continues to present challenges for (1)
researchers examining these process-related issues for
operating systems that span from individual production
processes to complex supply chain networks, and (2)
managers having to mitigate the impact of uncertainties
on such systems. For example, in the event of a pandemic
outbreak, many hospital executives adhering to just-in-time policies that boost efficiencies are now considering
adopting a just-in-case approach to stockpiling critical
products (e.g., face masks, syringes, vaccines) given
anticipated supply chain capacity and inventory
shortages for these products (Wysocki and Lueck,
2006).
Rudimentary process-level insights to this type of
management problem are readily available. For
example, a consultant report by Strategos (2006),
entitled "Capacity, Inventory, Variability and Manufacturing Strategy", presents a simulation model of a simple production line intended to illustrate the "vague and counter-intuitive" way that capacity utilization, inventory and variability are related within a factory.
While primarily of interest to managers desiring better
intuition and insights on the inherent trade-offs required
in managing these process-related issues, this report –
along with the pandemic example offered earlier –
serves to highlight the continued relevance and urgency
for greater managerial understanding of process
management fundamentals. Indeed, as managerial
practice continues to struggle with having to identify
clear pathways for operational improvement, further
research is needed to link theoretical work in process
management with practical diagnosis and improvement
decision making (Chopra et al., 2004; cf. Little, 2004).
The objective of this research is to offer conceptually, and support empirically, a generalization and
extension of the fundamental process management
trade-offs heuristic between capacity utilization, variability, and inventory (cf. Lovejoy, 1998; Schmidt,
2005). Rigorous generalizations and extensions are
critical to the theory-building process (Handfield and
Melnyk, 1998). As such, our research contribution to
process management theory and understanding is threefold. First, we conceptually generalize to more complex
operating and business systems what has been derived
previously through the analytic modeling of queues at
the process-level (e.g., Hopp and Spearman, 2001),
namely the general heuristic of capacity utilization–
variability–inventory (CVI) trade-offs for process
management. Our generalization results in the offering
of two research propositions which, to the best of our
knowledge, have remained unexamined in the process
management literature.
Second, through an analysis of data collected from
distinct, vastly different levels of analysis (i.e., facility-level and industry-level), we find exploratory empirical
support for the broad application of CVI trade-offs for
both complex operations and business systems. We
empirically examine both teaching case study facility-level data and industry-level archival data, both
constituting comparatively unique data sources for
the study of CVI trade-offs.
Third, our empirical findings extend current modeling-based understanding of the trade-offs heuristic;
hence, this research contributes to the advancement of
process management theory and understanding (Handfield and Melnyk, 1998; Swamidass, 1991). We provide
a number of meaningful, and novel, research and
managerial insights for managing variability reduction
for ongoing or improved process management performance. This underpins the paper’s development of four
additional research propositions offered to motivate
future empirical investigation in process management.
The remainder of this paper is organized as follows.
In Section 2 we offer a literature-based synthesis of
fundamentals and the related issues of trade-offs and
variability, followed by our research propositions. In
Section 3 we describe our research methodology
strategy, which is based upon McGrath’s (1982)
"three-horned dilemma" and involves the examination
of process-management empirical data collected at the
facility-level and industry-level. Research results are
presented in Section 4. In Section 5 we present a
discussion of our findings in order to generalize the CVI
trade-offs to other operating systems, and offer extensions of the trade-off heuristic in the form of additional
research propositions to direct future process management research, before concluding.
2. Process management: trade-offs, variability
and research propositions
A critical challenge in further developing process
management knowledge both descriptively and prescriptively is the inherent complexity and dynamism of
most operational settings (Buffa, 1980; Corbett and Van
Wassenhove, 1993). Consider the general manufacturing context, where the challenges and trade-offs facing
managers were accurately expressed by Skinner (1966,
p. 140) and still remain true today:
"The corporation now demands a great deal more of
the production manager. The assignment becomes—
‘Make an increasing variety of products, on shorter
lead times with smaller runs, but with flawless
quality. Improve our return on investment by automating and introducing new technology in processes
and materials so that we can cut prices to meet local
and foreign competition. Mechanize—but keep your
schedules flexible, your inventories low, your capital
costs minimal, and your work force contented. ... The firm whose production managers master these apparently conflicting demands commands a strategic position of enviable advantage."
The continued relevance of Skinner’s observations is
notable in two ways. First, the operations manager is
faced with a complex set of operating issues and
challenges that oftentimes necessitates making trade-offs. As a result, deriving useful principles of OM that
are managerially important remains a challenge.
Second, internal and external sources of variability
related to these operating issues further complicate the
operations manager’s mandate. While there appears to
be no universal theory of process management – whose
elements span the total quality management (TQM),
just-in-time (JIT) and manufacturing planning and
control literatures, to name several – there is general
recognition and acknowledgement in both academe and
practice of the impact of variability on process
management.
Interest in process management exists at the
strategic, organizational, and operational levels (Benner
and Tushman, 2003). At the strategic and organizational
levels, programs like TQM (Kaynak, 2003) and
business process reengineering (Grover and Malhotra,
1997) are posited to spur continuous innovation that
results in efficiency improvements, cost reduction,
improved customer satisfaction and financial performance (e.g., Hendricks and Singhal, 2001; Ittner and
Larcker, 1997). At the operational level, process
management involves the evaluation of the operating
activities (e.g., both capital and labor resources),
workflows through those activities that transform inputs
into desired outputs, and inventory management (Hopp
and Spearman, 2004; Silver, 2004). In most instances,
trade-offs are required at both the strategic and
operational level for improvement.
Combining these trade-offs with variability and
uncertainty has proven particularly challenging for
operations managers given the already difficult tasks
of simultaneously planning and controlling both operating capacity and inventory. Capacity management entails
long-term planning (e.g., new facilities and equipment
investment) and short-term control (e.g., over workforce
size, overtime budgets, etc.). Inventory management
involves the planning and control of process inputs and
outputs to achieve competitive priorities while satisfying
all demands. Capacity utilization and inventory represent
two basic operational performance dimensions for
process management (Anupindi et al., 1999). Both
capacity utilization and inventory have been the focus of
an abundance of research, much of it analytic modeling
based (cf. Scudder and Hill, 1998; with some empirical
work in JIT, e.g., Huson and Nanda, 1995), and continue
to be among the most frequently researched OM topics
(Pannirselvam et al., 1999).
2.1. Process management trade-offs
Underlying most OM research is the desire to develop
knowledge and understanding to the point at which
"laws" are found (Little, 1992), "theory" discovered (Lovejoy, 1998), and "science" practiced (Hopp and Spearman, 2001). Little (1992) emphasized the importance of finding "laws of manufacturing" in order to
establish a knowledge base for the OM discipline.
Lovejoy (1998) noted that a "theory of operations management" would allow for the systematic organization and integration of OM knowledge. Such a theory can
be constructed from what is already known within OM
and from supporting theories developed elsewhere (e.g.,
diffusion theory (Rogers, 1995) and the resource-based
view of the firm (Barney, 1991)). In the absence of such laws and theory, developing OM science remains elusive.
However, OM science that incorporates both normative
and empirical insight would be useful as it would result in
greater precision, intuition, and knowledge synthesis
(Hopp and Spearman, 2004, 2001).
Are there meaningful OM laws and theories that
would inform process management trade-offs? Yes. For
simple and stable production processes – where the
process inflow and outflow rates are identical in the
long-run – the expression L = λW is especially useful. L represents the average number of items present in the system (inventory); λ is the average arrival rate, in items per unit time; W denotes the average time spent by an item in the system, otherwise termed effective process
time (Hopp and Spearman, 2001). (The concept of
effective process time becomes much more important in
our later discussion, when the operations system also
includes downtime.)
This mathematical theorem, known as "Little's law"
(Little, 1961), is an intuitively appealing, parsimonious,
and remarkably robust relationship (Stidham, 1974).
This expression can be restated in familiar OM process
terms to link inventory (I), effective process time (W),
and mean production rate (rp):
  I = r_p W    (1)
For manufacturing operations at steady state, the
system is typically defined from the first operation to the
last operation, with I then being work-in-process
inventory (WIP) and effective process time being
throughput time. However, this convention is based on
specific boundaries of the system.
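As a minimal worked example of Eq. (1) — hypothetical figures, not drawn from the case data used later in this paper — the relationship can be checked directly:

```python
# Little's law: I = r_p * W (Eq. (1)).
# Hypothetical steady-state line: 120 units/hour flow through the system,
# and each unit spends 0.5 hours between the system boundaries.
r_p = 120.0   # mean production rate, units per hour
W = 0.5       # effective process time (throughput time), hours

I = r_p * W   # average work-in-process inventory, units
print(I)      # 60.0 units of WIP
```

Equivalently, observing WIP of 60 units and a production rate of 120 units/hour implies a throughput time of half an hour, which is how the law is often used diagnostically.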
Recall that for a basic M/M/1 queue, the average number of items in a system (inventory) is a function of the production (i.e., arrival/departure) rate, r_p, and process capacity rate, r_c (alternatively, utilization, ρ = r_p/r_c), and is given by

  I = r_p/(r_c − r_p) = ρ/(1 − ρ)    (2)

Combining (1) and (2), we can also state the average effective process time, W, for the M/M/1 system:

  W = [ρ/(1 − ρ)] · (1/r_p)    (3)
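A brief numeric sketch (illustrative rates only) makes the nonlinearity of Eqs. (2) and (3) concrete — as utilization approaches one, inventory and effective process time grow without bound:

```python
# M/M/1 averages from Eqs. (2) and (3): I = rho / (1 - rho), W = I / r_p.
# Pushing utilization toward 1 is the cost of running a process "hot".
r_c = 100.0  # capacity rate, units per hour (illustrative)

for r_p in (50.0, 80.0, 95.0, 99.0):
    rho = r_p / r_c          # capacity utilization
    I = rho / (1.0 - rho)    # Eq. (2): average inventory in the system
    W = I / r_p              # Eq. (3), via Little's law
    print(f"rho={rho:.2f}  I={I:6.1f} units  W={W:.4f} h")
```

Doubling utilization from 0.50 to 0.99 raises average inventory from 1 unit to 99 units, illustrating why "the remaining measures" are determined once production rate and utilization are set.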
Thus, for a simple and stable process, managers can
focus on two of these quantities, usually production rate
and capacity utilization, with the remaining measures
such as inventory and process time being determined.
That said, despite the theoretical and practical
significance of Little’s law for effective process
management (e.g., measurement of cycle time, management of inventory turns, etc.), there has been limited
empirical scrutiny that leverages and extends the
underlying logic of Little’s law for insight into the
general behavior of more complex operating systems—
especially where process variability is prevalent.
2.2. Process variability
Most operating processes and systems, however,
require that managers be concerned with more than the
long-run average values specified by Little’s law.
Instead, managers are confronted with dynamic
operating conditions and complex internal challenges
that cause changes in inputs, operations and outputs.
Variability results in nonconformities that have a
negative impact on an operations process. For example,
product characteristics, raw material quality, and
process attributes such as process time, setup time,
process quality, equipment breakdowns and repairs, and
workforce scheduling are all subject to nonconformance. Thus, the corrupting influence of variability
reduces many measures of operational performance,
such as throughput, lead time, customer service, quality,
etc. (Hopp and Spearman, 2001, p. 287). Such
variability, whether resulting from explicit management
decisions or foreseeable customer behavior (i.e.,
predictable variation), or resulting from unforeseeable
events beyond immediate control (i.e., random variation), can prove to be highly disruptive and impact the
stability of processes. Moreover, with process variability, the relationship between capacity utilization and
inventory as posited by Little’s law becomes less clear.
Schmenner and Swink (1998) related issues of
variability to the performance of different production
processes. Specifically, they offered a theory of swift,
even flow that posits that the productivity of any process
– labor-, machine-, materials-, or total factor-based –
increases with the speed of material flows through the
process, and decreases as demand and process
variability increases. This factory-specific theory allows
for broad explanation of, and added insight into, a
number of operating issues such as the reduction of
work-in-process inventories, worker cross-training, and
the product-process matrix (Hayes and Wheelwright,
1978).
2.3. Process management triangle and CVI trade-offs
The need to manage variability (and its reduction)
complicates the manager’s responsibilities in planning
and controlling operating capacity and inventory. These
issues, while complicating scholarly efforts to understand effective process management, have distinct OM
research implications. For example – and by way of
theoretical support – Lovejoy (1998) explicitly discussed an adaptation of the M/G/1 queue and
Pollaczek–Khintchine formula (Medhi, 2003) to general process management, and posited that capacity
utilization, variability reduction (e.g., through the
acquisition and management of additional information)
and inventory are substitutes in providing better process
performance and customer service. The normative
implication of the CVI trade-off, as depicted by the process management triangle (Fig. 1), is that process performance can be improved through more "buffer" capacity (i.e., lower capacity utilization), reduced variability, or more "buffer" inventory.

Fig. 1. Process management triangle.
Schmidt (2005) descriptively discussed the trade-offs
and mutual substitutability of capacity utilization,
variability reduction and inventory at the process-level,
and identified various generic strategies for determining
the appropriate mix of CVI to optimize performance at
the process level.
Akin to a process management "heuristic", which on the basis of previous research, experience and judgment seems likely to generate a viable – though not guaranteed optimal – solution to a problem (Foulds, 1983), the CVI trade-off implications of the process management triangle are that "more of one means less of the other two". For example, more variability
reduction effort could facilitate lowering any existing
capacity or inventory buffers without degrading process
performance.
Mathematically, these trade-offs can be drawn from
an extension of the earlier analysis in Eq. (3) to the more
general G/G/1 queue, employing an approximation
offered by Kingman (1961) for the effective process
time (throughput time) (see also Whitt, 1993), which
yields:
  W = [ρ/(1 − ρ)] · [(cv_d² + cv_p²)/2] · (1/r_c) + 1/r_c    (4)

where cv_d and cv_p are the coefficients of variation for demand (i.e., inter-arrival time) and process (i.e., processing time), respectively. Little's law, Eq. (1), which includes the average production rate, r_p, then provides an estimate of inventory in the system:

  I = r_p W = [ρ/(1 − ρ)] · [(cv_d² + cv_p²)/2] · (r_p/r_c) + r_p/r_c

Assuming that the inventory in the system is much greater than one and ρ < 1, the last term is dropped and the expression yields:

  I ≈ [ρ²/(1 − ρ)] · [(cv_d² + cv_p²)/2]    (5)

Thus, both internal and external variability are explicitly taken into account. In short, Eq. (5) is a parsimonious representation that algebraically links capacity (i.e., utilization), variability and inventory in a non-linear form, and succinctly captures a CVI trade-off relationship. Conceptually, it can be stated as

  inventory = capacity utilization factor × variability factor    (6)

While there are numerous ways to reduce the impact of variability on the production process (see Rohleder and Silver, 1997), Lovejoy (1998) and Schmidt (2005) have argued that information can be a substitute for variability reduction. Information frequently facilitates quick adjustments to production levels in both internal processes and the larger supply chain (Bourland et al., 1996; Lee et al., 1997), as well as adding small amounts of the "right" inventory in a judicious manner (Milgrom and Roberts, 1988), such as in the Dell direct model (Magretta, 1998). Interestingly, the descriptive and explanatory logic underlying the process management triangle (Fig. 1 and Eq. (6)) is primarily discussed in a small number of teaching-related materials (e.g., Ritzman et al., 2004; Schmidt, 2005), and has not received much rigorous empirical scrutiny in OM research. As such, we believe that the normative implications of the CVI trade-offs, which emanate from the study of both M/G/1 and G/G/1 queues, must be explored in real-world research applications and extended to larger operating and business systems.

2.4. Research propositions
Hopp and Spearman (2004, 2001) highlighted the
operational impact of variability by stating that
increasing variability always degrades the performance
of a production system. Further, they extended this view
to incorporate managerial trade-offs such that variability in a production system is buffered by some
combination of inventory, capacity, and time. These
process management tenets are analytically appropriate
when assessing the performance of the operations from
purely tactical, technical measures of process efficiency, including such metrics as throughput time,
inventory turns and quality.
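The buffering logic above can be sketched numerically with the Kingman-based approximation of Eq. (5); the figures below are illustrative, not drawn from any study data:

```python
# CVI trade-off via the approximation of Eq. (5):
#   I ~= (rho**2 / (1 - rho)) * (cv_d**2 + cv_p**2) / 2
# Halving both coefficients of variation quarters the variability factor,
# so the same buffer inventory supports the process at less cost -- or
# inventory can be cut at unchanged utilization.

def approx_inventory(rho, cv_d, cv_p):
    """Approximate average inventory for a G/G/1 process (Eq. (5))."""
    return (rho ** 2 / (1.0 - rho)) * (cv_d ** 2 + cv_p ** 2) / 2.0

base = approx_inventory(rho=0.90, cv_d=1.0, cv_p=1.0)      # high variability
reduced = approx_inventory(rho=0.90, cv_d=0.5, cv_p=0.5)   # after reduction
print(base, reduced)  # inventory falls by 75% at the same utilization
```

At 90% utilization, halving both coefficients of variation drops approximate inventory from 8.1 to about 2.0 units — a numerical illustration of variability reduction substituting for inventory buffers.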
Anupindi et al. (1999) argued that variability in the
process can be buffered specifically through the use of
"safety inventory" or "safety capacity". This is
consistent with the basic logic of CVI trade-offs;
specifically, higher variability of any form requires a
manager to deploy or absorb extra inventory or invest in
additional capacity as countermeasures. This may be
analytically appropriate in the situation where the type
of variability, whether at the operating or business
system level, is random in nature (see Table 1).
However, specific managerial choices related to
Table 1
Typology of the sources and forms of system variability

Source                          Random form                          Predictable form
Internal (i.e., process)        Quality defects                      Preventive maintenance
                                Equipment breakdown                  Set-up time
                                Worker absenteeism                   Product mix (i.e., number of SKUs)
External (i.e., supply chain)   Arrival of individual customers      Daily or seasonal cycles of demand
                                Transit time for local delivery      Technical support following new product launch
                                Quality of incoming supplies         Supplier quality improvements based on learning curve
preventive maintenance (McKone et al., 2001), product
variety (Ramdas, 2003), and operating flexibility (Koste
and Malhotra, 1999) introduce predictable forms of
variability into the production setting, which in turn
complicates process management.
The extant research that examines process management trade-offs has largely been analytic modeling-based and driven by making
improvements to production scheduling. Further, most
of these studies have examined capacity utilization and
inventory trade-offs, and their impact on operational or
financial performance. Karmarkar (1987) utilized
standard queuing models to examine the congestion
phenomenon and its effect on waiting times. His
analysis of the relationships between lot size, manufacturing lead times and in-process inventories, such as
the negative impact of high capacity utilization on lead times and work-in-process, highlights operational
design implications especially for batch type shops
with queues. Bradley and Arntzen (1999) examined the
financial performance implications of the trade-offs
between capacity and inventory investment. Utilizing
an aggregate planning-based model that was applied to
several manufacturing settings, the authors demonstrated the necessity of simultaneously planning capacity, inventory and production schedules in order to generate
higher returns on assets while managing the relative
costs of capacity and inventory.
A few related studies incorporate the explicit
influence of variability into their analysis. For example,
Lovejoy and Sethuraman (2000) examine congestion
and complexity costs and their implications for
production scheduling. Their conceptual model builds
upon Banker et al.'s (1988) queuing-based model that
shows how higher variety imposes higher delays and
inventory costs. Hence, simultaneously managing
congestion (i.e. capacity) and complexity (i.e. variability) issues creates process management difficulties.
Tayur (2000) describes some of the practical challenges
facing a laminate manufacturer in implementing a
plant-management strategy based on cyclic schedules
that has to account for interactions among CVI elements
with issues of setup times, scheduling rules, and service
goals.
Most critical to our study is the work of Krajewski
et al. (1987), who examine CVI interactions and trade-offs through their simulation analysis of control systems
and manufacturing environments. Their analysis highlights how – when assessing the joint impact of a
multitude of operating factors – specific capacity,
inventory, and variability configurations impact manufacturing effectiveness in "job-lot" production environments. Overall, these particular studies highlight the
criticality of considering how production scheduling,
inventory management, and stochastic operating issues
interact at the process level of analysis.
A critical reading of this operations management
literature – along with economics-based studies on
capacity utilization (Corrado and Mattey, 1997),
inventories (Blinder and Maccini, 1991), and process
trade-offs (De Vany, 1976) – reveals that seemingly
unexamined in the literature is whether the CVI trade-off heuristic generalizes to operations that possess both
random and predictable forms of variability, and that
constitute complex operating systems that are difficult
to analytically model. A generalization and extension of
the CVI trade-off heuristic to such complex systems
warrants research attention that extends beyond conceptual arguments. Therefore, we propose:
Proposition 1. CVI trade-offs are applicable to the
management of complex operating systems that include
both predictable and random variability in either process time or demand.
Moreover, thinking beyond production processes (at
the individual process- or facility-level), does the CVI
trade-off heuristic apply generally at the business
systems level? For example, the bullwhip effect
phenomenon describes the impact that downstream
demand variability has on the overall supply chain
performance of organizations like Hewlett-Packard and
Procter and Gamble (Lee et al., 1997). Sources of
supply chain variability like raw material delays,
amplifications and distortions of demand, etc. result
in observed inefficiencies such as excessive inventory
investment, poor customer service, unnecessary capacity expansions, ineffective production schedules, and
lost revenues. Extensions of the CVI trade-off logic to
business systems (e.g. at the supply chain or industry
level) are warranted. Therefore, we also propose:
Proposition 2. CVI trade-offs are applicable to the
management of business systems.
The CVI trade-offs, as diagrammed by the process
management triangle, are intuitively straightforward
and managerially insightful. Indeed, the process
management triangle represents an OM conceptual
model with strong theoretical, but narrowly specified
underpinnings that relate the three factors of capacity
utilization, variability and inventory in a specific,
meaningful manner. However, systematic empirical
investigation is needed to generalize and extend its
normative principles for improved process management
to complex processes and business systems with the aim
of building a more generalized, ‘‘multilevel’’ theory
(Rousseau, 1985). The remainder of this paper presents
exploratory, empirical analysis of our research propositions.
3. Research methodology
The research methodology strategy adopted for this
paper was motivated by McGrath’s (1982) view that it is
not possible to conduct an unflawed study. Any research
method or data source chosen will have inherent flaws,
and the choice of method or data will limit the
conclusions that can be drawn (cf. Webb et al., 2000).
Labeled the "three-horned dilemma," research design choices require trade-offs between the (1) generalizability of results, (2) precision in measurement and
control of the study variables, and (3) realism of the
research context. For example, rigorous analytic,
optimization-based research normally results in generalizable results, but at the expense of precision and
realism. On the other hand, findings from rigorous field
studies, while usually very realistic, tend to be less
precise or generalizable. And multiple levels of analysis
usually compound these problems. Therefore, the use of
a variety of research methods or data would likely result
in more realistic, precise and generalizable insights and
recommendations for scholars and managers that could
be articulated with greater clarity and assurance. Thus,
OM research employing non-traditional approaches for
studying process management issues – and applied to
unique operating contexts – will likely yield additional
insights and understanding that would better inform
theory, understanding, and practices for managing CVI
trade-offs.
In order to empirically explore the general application of the CVI trade-off heuristic derived from queuing
analysis to complex operating systems and business
systems, two research studies – each employing a
comparatively unique method and data for process
management analysis – were undertaken. In study 1, a
simulation-based facility-level analysis was conducted
on a real-world operation, namely iron ore processing,
using empirical data gathered as part of a field-based
teaching case (Piper and Wood, 1991). (See Appendix A
for a general descriptive summary of the operations.)
The mostly continuous-flow nature of the process we
simulate nicely complements the "job-lot" environment
simulations reported by Krajewski et al. (1987). Our
study focuses specifically on changes in variability in
effective processing time (cv_p). Variability in demand (cv_d) is not captured; given the nature of the product, all ore produced is assumed sold.
In study 2, an industry-level analysis was conducted
using publicly available, archival data over a 30-year
period extracted for Canadian manufacturing industries.
This novel data source for process management analysis
allows for the testing of CVI trade-offs at a very aggregate
unit of analysis, and by way of contrast is focused on
observed variability in demand (cv_d), along with an estimate of observed variability in effective process time (cv_p). Thus, an exploratory effort was made at multiple
levels of analysis to begin to assess the extent to which the
CVI trade-off heuristic can be applied to more general,
and increasingly complex, operating systems and
business systems. The methods and data for each study
are detailed in the following sections.
3.1. Study 1: field-based teaching case study and
simulation
To initially explore the general application of the
CVI model to complex operating systems, simulation
was used to model the ore processing operations
described in the Iron Ore Company of Ontario teaching
case (Piper and Wood, 1991). Iron ore was blasted once
daily, and then moved from the face of the open-pit
mine by teams of shovels and dump trucks to two large
crushers on a 23.5 hour basis (the remaining half-hour
was used to clear the mine for blasting). These crushers
reduced the size of the rock for further upgrading of the
iron content in the downstream concentrator. Between
the crushers and the concentrator, large storage silos
allowed for some buffering inventory. Data was
available on the average cycle times and capacities of
individual operations (i.e., shovels, trucks, dumping
lines, crushing, and concentrator), maximum inventory
in the storage silos, preventive maintenance schedule,
and process delays (i.e., blasting, breakdowns, and
preventive maintenance). The simplified process flow is
depicted in Fig. 2.

Fig. 2. Simplified process flow diagram for Iron Ore Company of Ontario.
To narrow the scope of analysis, the teaching case
only reported variability for shovel access to the mine,
and preventive maintenance and downtime for the
crushing operation. Preventive maintenance was performed daily on one crusher during the entire 8 hour
morning shift, with each crusher receiving maintenance
every other day. This effectively reduced system capacity
by one-sixth over a 48 hour period. Downtime, termed
‘‘bridging’’ in the industry, occurred when large rocks
became lodged in the crusher and required manual
clearing. Data for downtime duration gathered over 120
days approximately resembled a lognormal distribution
(the field-based empirical distribution was used for the
base scenario of the simulation). The average downtime,
termed mean time-to-repair (MTTR), was 12.78 min; the
average time between downtime occurrences, termed
mean time between failures (MTBF), was 174 min. As
the distribution of the MTBF was not recorded, a
lognormal distribution with a standard deviation of 50
was assumed; limited testing with other distributional
assumptions showed little change.
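The lognormal assumption for time between failures can be parameterized directly from the reported MTBF mean and the assumed standard deviation; the conversion below from a target mean and standard deviation to the underlying normal parameters is standard. This is a minimal sketch only (the empirical MTTR distribution used in the base scenario, and the simulation internals, are not reproduced here):

```python
import math
import random

def lognormal_params(mean, sd):
    """Convert a target mean and standard deviation into the (mu, sigma)
    parameters of the underlying normal distribution."""
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    return math.log(mean) - sigma2 / 2.0, math.sqrt(sigma2)

random.seed(42)
# Time between bridging events: mean (MTBF) 174 min, assumed sd of 50 min
mu_f, sigma_f = lognormal_params(174.0, 50.0)
failures = [random.lognormvariate(mu_f, sigma_f) for _ in range(100_000)]
```

Sampled in this way, the simulated failure process reproduces the reported mean of 174 min while preserving the assumed spread.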
The system capacity was limited by the most capital
intensive operation, the concentrator. Management
attempted to keep this bottleneck operation running
at all times. For the simulation, sufficient ore was
released into the system (i.e., blasted) to assess three
levels of capacity utilization (r), specifically 98%, 95%
and 90%. The storage silos between the two crushers
and the concentrator collectively held up to approximately
6 hours (i.e., 10,500 m3) of ore in inventory, and
were used to accommodate process variability.
Three levels of variability for preventive maintenance and two levels for downtime were considered.
For preventive maintenance, in addition to the existing
schedule, a second level was evaluated, whereby
preventive maintenance was performed in shorter, but
more frequent intervals, i.e., two equally spaced
4 hour periods for each crusher over a 48 hour period.
A third level reduced this further to four equally
spaced 2 hour periods for each crusher over a 48 hour
period. Scheduling more frequent, but shorter periods
of preventive maintenance decreases processing
variability, and is analogous to reducing process
batch sizes.
For downtime, in addition to the existing situation
(base scenario), a second level of downtime variability
was assessed with the coefficient of variation for MTTR
and MTBF being reduced by 80% (reduced variation of
downtime scenario; mean values remained the same).
As before, this change reduced variability while leaving
utilization of the crushing operation unchanged.
For each cell, process performance was measured for
100 days of operation after a 10-day initialization period
and averaged across 30 simulation runs. Overall, the
experimental design generated a total of 18 cells. While
additional levels could be assessed, these conditions
were chosen to explore the CVI trade-off in a well-defined process of sufficient practical complexity, and
to evaluate the implications for potential managerial
action to reduce two general forms of variability.
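Because the full case data are not reproduced in this paper, the qualitative effect that the 18-cell design probes can be illustrated with a minimal single-stage sketch (hypothetical parameters; not the actual mine simulation): at fixed utilization, lowering process-time variability lowers average waiting, and hence inventory, as the CVI heuristic predicts.

```python
import math
import random

def avg_wait(util, cv, n=200_000, seed=7):
    """Average wait before service (Lindley recursion) for a single stage
    with deterministic arrivals every 1.0 time unit and lognormal service
    times of mean `util` and coefficient of variation `cv`."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + cv ** 2))
    mu = math.log(util) - sigma ** 2 / 2.0
    wait, total = 0.0, 0.0
    for _ in range(n):
        total += wait
        service = rng.lognormvariate(mu, sigma)
        wait = max(0.0, wait + service - 1.0)  # Lindley: next arrival 1.0 later
    return total / n

# Same utilization (90%), different process-time variability
high = avg_wait(0.90, 1.00)
low = avg_wait(0.90, 0.25)
```

With utilization held at 90%, cutting the coefficient of variation of service time from 1.00 to 0.25 reduces average waiting severalfold, mirroring the inventory reductions reported in Tables 2 and 3.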
3.2. Study 2: industry-level archival data and
statistical analysis
The industry-level, rather than the firm-level, was
chosen to investigate the CVI model for business
systems for two reasons. First, firm-level data is very
difficult to obtain for all three variables, particularly
capacity utilization, and so earlier related research in
OM has tended to rely on industry-level data (e.g.,
inventory reductions as studied by Rajagopalan and
Malhotra, 2001). Further, even if firms are willing to
report capacity and inventory data for a single period,
this is insufficient to estimate process or demand
variability. To do so, data is needed over an extended
timeframe of multiple periods. Second, this level of
analysis provides a broad-scale test of the extent to
which the CVI model can be applied to larger business
systems. In essence, the rationale underlying our
research design is that if this model offers meaningful
insight for two extreme units of analysis (i.e., real-world
facility-level in study 1, and highly aggregated industry-level in study 2), then the process management triangle
is generalizable and likely offers important insights for
units of analysis falling between these extremes.
Thus, the system here for purposes of examining
Eq. (5) is an industry, and inventory includes all materials
between entering and exiting that system, which here
must include raw materials, work-in-process and finished
goods. Moreover, given the industry-level aggregation
used here to define the business system, it is important to
try to be as consistent as possible when measuring the
capacity utilization, variability and inventory constructs
within the industry-level system boundary (Fig. 3).
In general, empirical analysis might be possible at any
one or several levels of aggregation, e.g., ranging from
the three-digit North American Industry Classification
System (NAICS) down to the six-digit level. Unfortunately, changes in industry classification systems – a new
common system was established for the US, Canada, and
Mexico in 1997 – often limit the availability of archival
data reported on a consistent basis. For example, US data
for capacity utilization has been reported on a somewhat
intermittent basis over the last 20 years (annually until
1988, then bi-annually from 1990 to 1996 using the older
SIC system, then annually from 1997 onward, but now
using the newer NAICS system).
Statistics Canada maintains archival data on an
annual basis for all three variables. Like US SIC codes,
Statistics Canada SIC codes change approximately once
each decade to reflect changes in the national economy,
thus effectively creating three separate decade-windows
of industry-level data: 1970–1979, 1981–1989, and
1992–1999. The missing years occurred because of
data reporting inconsistencies, industry matching
problems and missing data; however, these breaks also
had the advantage of clearly separating each industry
decade-window.

Fig. 3. System boundary for industry-level analysis.

In addition, adjustments to inflation or
utilization data for particular industries were occasionally needed where particular SIC codes were combined
or separated across different decades (e.g., food and
beverage were two separate codes in the 1970s, but were
combined in the 1980s, and then beverage was
combined with tobacco in the 1990s). In these cases,
adjustments were made based on the weighted average
of shipments. Thus, the number of industries also varied
slightly for each decade-window, with 19, 22, and 20
industries, respectively.
An estimate of the capacity utilization, r, was made
by averaging quarterly data over each multi-year
decade-window (Statistics Canada, 2005b). The capacity utilizations ranged from 0.719 for the non-metallic
mineral products industry in the 1980s to 0.904 for the
petroleum industry in the 1990s, well within the range
of moderate utilization specified in the Kingman
approximation (Hopp and Spearman, 2001, p. 270;
Medhi, 2003).
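Equation (5) itself falls outside this excerpt; as a reference point, the queue-length form of the Kingman approximation that is consistent with the r²/(1 − r) utilization factor used in the regression models can be sketched as follows (the function name is ours, and the expression is the standard heuristic, not a reproduction of Eq. (5)):

```python
def kingman_queue_length(rho, cv_a, cv_s):
    """Approximate average queue length (jobs or material waiting in the
    buffer): Lq = lambda * Wq, with Kingman's Wq = (rho/(1 - rho)) *
    ((cv_a**2 + cv_s**2)/2) * t_s, and lambda * t_s = rho."""
    return (rho ** 2 / (1.0 - rho)) * ((cv_a ** 2 + cv_s ** 2) / 2.0)
```

For instance, at rho = 0.9 with cv_a = cv_s = 1 the buffer holds roughly eight jobs on average; the r²/(1 − r) factor makes the steep growth of inventory near full utilization explicit.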
To estimate cvd , the coefficient of variation of demand
for each industry, several options were considered. Given
that process management is directly tied to physical
quantities of materials, components and products, an
ideal measure for both variability and inventory levels
might be physical units. Unfortunately, these data were
not available and, even if they were, would still require some
potentially arbitrary method to combine the materials
and products of different sub-industries. While certainly
not ideal, an alternative is to use financial metrics to
estimate both relative variation and inventory levels.
These measures are similar to those employed by other
researchers, with sales variability being used as an
estimate of uncertainty (Fiegenbaum and Karnani, 1991;
Jack and Raturi, 2003). Thus, annual shipment data was
used as a proxy for output units to estimate the coefficient
of variation. In a sense, cvd captures supply chain
variability, in particular downstream customer demand.
To begin, annual shipment data ($billions per year)
for a decade window was extracted for each industry
from online archival databases (Statistics Canada,
2005a). Price inflation rates varied widely for each
decade with particularly high inflation in Canada in the
1970s, like much of the rest of the world. Because
inflation amplified the estimate of cvd for each decade,
industry-level price index data were used to adjust all
annual shipment data to a common base year of 1992
before calculating the cvd for each industry in each
decade window (Statistics Canada, 2005d). (The
estimate of cvd is based on time per unit, i.e., the
coefficient of variation for the reciprocal of annual
shipments.) Estimates of cvd ranged from 0.028 to
0.285, for the food and chemical industries in the 1980s,
respectively.
Next, to estimate cvp for each decade for each
industry, effective process time for each industry-year
was first estimated, also using financial
measures. If annual shipments for the industry system
are assumed to be equivalent to the system production
rate, then the ratio of inventory to shipments can be used
to estimate average throughput time based on Little’s
law (1).1 Like cvd , the parameter of interest is the
coefficient of variation over the decade-window, rather
than an estimate of throughput time in any particular
year. Thus, incorporating other factors that remain
constant from year-to-year over the decade-window
(e.g., to potentially adjust shipments to physical units)
will have no effect on the final estimate of cvp unless
there is reliable data to make differing adjustments to
each individual year. The coefficient of variation of
throughput time (cvp ) was then estimated based on the
average and standard deviation for each decade-window
for each industry for the ratio of the annual year-end
inventory (Statistics Canada, 2005c) to shipments. As
might be expected, for most industries cvp was
significantly smaller than cvd , on average by a factor
of 0.27.
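Following the same logic, cvp can be estimated from the inventory-to-shipments ratio via Little's law; a sketch with hypothetical deflated figures (the helper is repeated so the sketch stands alone):

```python
def cv(xs):
    """Coefficient of variation (population standard deviation over mean)."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5 / m

# Hypothetical deflated series for one industry decade-window ($B, base-year dollars)
shipments = [19.6, 20.2, 19.8, 20.3, 20.3, 20.9, 21.2, 21.6, 22.7, 23.1]
inventory = [3.1, 3.4, 3.0, 3.3, 3.5, 3.2, 3.6, 3.4, 3.7, 3.5]

# Little's law: throughput time ~ inventory / shipment rate (fractional years),
# i.e., the familiar "days of inventory" ratio; cv_p is its variation over the window
throughput_time = [inv / ship for inv, ship in zip(inventory, shipments)]
cv_p = cv(throughput_time)
```

Any year-constant scaling (e.g., converting dollars to physical units by a fixed factor) cancels in the ratio, which is why such adjustments leave the cvp estimate unchanged.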
Similar to that done by others (e.g., Huson and Nanda,
1995; Rajagopalan and Malhotra, 2001), an estimate of
inventory was also made for each decade-window for
each industry by averaging the year-end total inventory
values across the window (Statistics Canada, 2005c).
Using total inventory – including raw materials, work-in-process and finished goods – was consistent with the
overall system boundary of the entire industry sector.
Similar to shipment data, inventory values also were
adjusted using the same industry-level price indices to
1992 levels (Statistics Canada, 2005d). The average
inventory value ranged from $0.183 to $5.89 billion in the
leather and transportation equipment industries in the
1990s, respectively.
Finally, in addition to correcting for inflation, two
other macro-economic control variables were included
in the analysis, namely interest rates and growth in gross
domestic product (GDP) (Chen et al., 2005).

1 For each year, this ratio is equivalent to the frequently used ‘‘days
of inventory’’ ratio (or here, fractional years). Another common,
related ratio employs cost of goods sold (COGS), although similar
results are reported in other inventory studies with either shipment or
COGS (Chen et al., 2005). COGS was not directly available for each
industry, and further assumptions would have been necessary to
translate either shipments or manufacturing value-added into COGS.

Table 2
Average total inventory in the system

                 98% utilization                       95% utilization                       90% utilization
Preventive       Base            Reduced variation     Base            Reduced variation     Base            Reduced variation
maintenance      scenario        for downtime(a)       scenario        for downtime(a)       scenario        for downtime(a)
1 × 8 hour       20.19 (0.255)   19.86 (0.125)         16.29 (0.054)   16.53 (0.045)         14.39 (0.035)   14.68 (0.027)
2 × 4 hour       15.69 (0.112)   14.92 (0.051)         13.73 (0.034)   13.76 (0.035)         12.76 (0.028)   12.79 (0.026)
4 × 2 hour       13.65 (0.039)   13.17 (0.031)         12.77 (0.029)   12.59 (0.026)         11.95 (0.024)   11.81 (0.030)

Notes: All means, in hundreds of cubic metres, are significantly different (p ≤ 0.05) within each column and
within each row. Standard errors are noted in parentheses ( ). Simulation: 30 runs of 100 days were used.
(a) Crusher downtime: reduced coefficient of variation by 80% for MTTR and MTBF.

To control for interest rate changes, the average Government of
Canada interest bank rate (R) was computed for each
decade-window (Bank of Canada, 2005). Growth in
GDP (GGDP) was estimated for each window based on
the real GDP change each decade-window (Statistics
Canada, 2006).2
3.3. Summary of research methods and implications
for the study of CVI trade-offs
Both of the empirical studies reported in this paper
employ data that is innovative and distinctive to the
study of process management trade-offs. As described
earlier, the vast majority of the extant research has been
founded on analytic modeling at the operational process
level. One notable exception is the simulations reported
by Krajewski et al. (1987). In that study, the simulations
were based upon the reproduction of diverse job-lot
plant environments, utilizing a comprehensive list of
factors identified as important to manufacturing
effectiveness by a panel of managers. By contrast,
our simulation (study 1) is based upon teaching case
data. Previous studies employing teaching case data
have largely been conceptual in nature (e.g., Clark,
1996). Indeed, the recent calls for more case-based
research in OM have concentrated exclusively on the
merits and challenges of conducting case research (see
Meredith, 1998; Stuart et al., 2002; Voss et al., 2002).
Our use of teaching case data to examine CVI trade-offs
in a continuous flow production environment is novel
for process management study and, when coupled with
simulation, illustrates how pedagogical material can be
profitably used to improve OM understanding and
theory.
2 As in Chen et al. (2005), GGDP = ln(GDPb) − ln(GDPa), where
b = last year in each decade-window, and a = year prior to the first
year of each decade-window.
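The growth measure in footnote 2 is a simple log-difference over the decade-window; as a one-line sketch (the function name is ours):

```python
from math import log

def ggdp(gdp_prior, gdp_last):
    """Real GDP log-growth over a decade-window, as in Chen et al. (2005):
    GGDP = ln(GDP_b) - ln(GDP_a)."""
    return log(gdp_last) - log(gdp_prior)
```

A flat economy yields GGDP = 0, and a doubling of real GDP yields ln(2) ≈ 0.69, independent of the GDP level.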
Our use of archival-based unobtrusive measures in
study 2 represents a distinct approach to examining the
extent to which the CVI trade-offs heuristic generalizes
to broader business systems. While other research
examining inventory levels and investment at the
industry level has drawn from archival data (e.g., Chen
et al., 2005; Rajagopalan and Malhotra, 2001), the use
of such data in the empirical study of process
management trade-offs is very limited. One exception
is Banker et al. (1988), who examined industry-level
capacity utilization data to test differences in median
capacity utilizations between production environments
that differ with respect to operating uncertainty and
variability. Uniquely, we extend this starting point much
further by utilizing industry level data to begin
examining whether the CVI trade-offs logic applies
to overall business systems, and whether such trade-offs
constitute an OM ‘‘multilevel theory’’ where patterns of
relationships are replicated across increasingly broader
levels of analysis (Rousseau, 1985). Indeed, the extent
to which the results from studies 1 and 2 – which
employ distinct types of empirical data – converge
provides an indication for the generalizability of the
propositions and ‘‘enhances our belief that the results
are valid and not a methodological artifact’’ (Bouchard,
1976, p. 268).
4. Results
4.1. Study 1 findings
As noted earlier, the 18 cells in the research design
were chosen to capture two forms of variability, namely
predictable and random variation, and multiple levels of
capacity utilization. We simulated a mostly continuous
process with two primary sources of variation at the
crushing operation: preventive maintenance (predictable)
and breakdowns (random).

Table 3
Average effective process time for the system

                 98% utilization                               95% utilization                               90% utilization
Preventive       Base                   Reduced variation      Base                   Reduced variation      Base                   Reduced variation
maintenance      scenario               for downtime(a)        scenario               for downtime(a)        scenario               for downtime(a)
1 × 8 hour       70.29 (0.876) {0.571}  69.06 (0.433) {0.552}  58.51 (0.194) {0.581}  59.35 (0.161) {0.578}  54.54 (0.132) {0.598}  55.62 (0.104) {0.590}
2 × 4 hour       54.57 (0.389) {0.492}  51.89 (0.179) {0.475}  49.32 (0.120) {0.485}  49.40 (0.127) {0.482}  48.34 (0.108) {0.498}  48.46 (0.099) {0.493}
4 × 2 hour       47.49 (0.134) {0.384}  45.82 (0.108) {0.376}  45.85 (0.104) {0.386}  45.22 (0.094) {0.380}  45.29 (0.090) {0.389}  44.75 (0.115) {0.386}

Notes: Standard errors are noted in parentheses ( ); coefficients of variation are noted in brace brackets { }.
Simulation: 30 runs of 100 days were used.
(a) Crusher downtime: reduced coefficient of variation by 80% for MTTR and MTBF.

As such, this simulation
design represents a conservative test of Proposition 1 that
allowed for clearer isolation of the linkages in the CVI
model.
Performance statistics are reported for average total
inventory in the system (Table 2) and average effective
process time (Table 3). For total inventory, all mean
performance values are significantly different within the
base scenario across all nine cells; all reduced variation
for downtime cells were also significantly different. In
general, within each scenario, as variability and
capacity utilization were reduced in the system, both
the inventory and effective process times significantly
decreased, as predicted from theory. This was also true
between scenarios at the highest utilization (98%).
However, it was interesting to observe that some
unexpected differences between the base scenario and
reduced variation in downtime scenario occurred at
lower utilization levels with high predictable variation,
i.e., a single 8 hour preventive maintenance timeslot
(1 × 8 hour) (Tables 2 and 3). Here, despite a marginally lower coefficient of variation (cvp) as noted in
Table 3, reduced downtime variation actually caused a
very small, but marginally significant increase in
average inventory and effective process time. While
additional research beyond the scope of this paper is
warranted, these results suggest that the interactions
between high variability (coefficient of variation over
0.5), high utilization and tightly coupled process
operations may benefit from a pooling effect, where
one form of variability may attenuate another.
It is important to note that while several possible
managerial options for action – to reduce either the
predictable or random variation, or the capacity
utilization – yielded improvement, reducing predictable
variation had the largest benefit for this system. For
example, by moving from a single 8 hour preventive
maintenance timeslot to four 2 hour timeslots (i.e., an
operating practice investment), the average inventory in
the system was reduced by 653 m3 or 32%. In contrast,
reducing the capacity utilization from 98% to 90%
(constituting a significant capital investment) generated
only a 580 m3, or 29%, reduction.
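These reductions follow directly from the Table 2 means; a quick arithmetic check using the rounded published values (which give 654 m3, within rounding of the 653 m3 reported from unrounded data):

```python
# Rounded Table 2 means (hundreds of cubic metres), base scenario
inv_98_1x8 = 20.19   # 98% utilization, 1 x 8 hour preventive maintenance
inv_98_4x2 = 13.65   # 98% utilization, 4 x 2 hour
inv_90_1x8 = 14.39   # 90% utilization, 1 x 8 hour

pm_saving = (inv_98_1x8 - inv_98_4x2) * 100    # shorter PM slots: ~654 m^3 (~32%)
cap_saving = (inv_98_1x8 - inv_90_1x8) * 100   # 98% -> 90% utilization: 580 m^3 (~29%)
```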
To assess the overall CVI relationship at a very basic
level, the bivariate correlation was estimated between
inventory and the product of the utilization factor
(r²/(1 − r)) and the coefficient of variation factor for
effective process time (cvp²). To estimate the latter
factor, data were gathered for both the effective process
time and its standard deviation for each of 30 runs; these
values were then averaged before taking their ratio for
each cell. The correlation coefficient was statistically
significant in the expected direction at 0.724
Table 4
Multi-study comparison of capacity utilization–variability–inventory empirical relationship

                                                   Industry-level
Variable                  Facility-level           -----------------------------------------------
                          (Model 1.1)              Demand (Model 2.1)    Demand + process (Model 2.2)
Utilization factor        0.129** (0.019)          0.617* (0.294)        0.644* (0.289)
Variance factor           0.336** (0.041)          0.189* (0.085)        0.316** (0.119)
Growth of GDP                                      0.312 (1.37)          0.414 (1.35)
Bank rate (%)                                      0.674 (6.89)          4.04 (6.98)
Intercept                 2.78** (0.081)           14.19** (0.500)       14.77** (1.13)
F-statistic               51.83**                  2.33                  2.90*
R2                        0.874                    0.142                 0.172
Number of observations    18                       61                    61

Notes: Standard errors are noted in parentheses.
* p < 0.05.
** p < 0.01.
(p ≤ 0.01). Linear regression was then used to assess
the significance of each parameter for the overall
relationship. Here, a natural logarithmic transformation
was made to a modified form of Eq. (5) to separate out
the two right-hand factors, i.e.,
ln(inventory) = β0 + β1 ln(r²/(1 − r)) + β2 ln(cvp²)   (7)
The results are reported in Table 4. The parameter
estimates for both factors were highly significant in the
expected direction (p ≤ 0.01), with an overall R2 of
0.874.
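Equation (7) is an ordinary log-log regression; it can be sketched self-containedly as follows, with synthetic, noise-free cells generated from the reported facility-level estimates (so the fit recovers them exactly; this is not the original simulation data):

```python
import math
import random

def fit_ols(x1, x2, y):
    """OLS for y = b0 + b1*x1 + b2*x2 via the normal equations,
    solved with Gauss-Jordan elimination (no external libraries)."""
    n = len(y)
    cols = [[1.0] * n, x1, x2]
    A = [[sum(ci[t] * cj[t] for t in range(n)) for cj in cols] for ci in cols]
    b = [sum(ci[t] * y[t] for t in range(n)) for ci in cols]
    for i in range(3):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        b[i] /= piv
        for j in range(3):
            if j != i:
                f = A[j][i]
                A[j] = [u - f * w for u, w in zip(A[j], A[i])]
                b[j] -= f * b[i]
    return b  # [b0, b1, b2]

rng = random.Random(0)
util = [rng.uniform(0.70, 0.97) for _ in range(18)]    # 18 cells, as in study 1
cvp = [rng.uniform(0.10, 0.60) for _ in range(18)]
lx1 = [math.log(r ** 2 / (1.0 - r)) for r in util]     # utilization factor
lx2 = [math.log(c ** 2) for c in cvp]                  # variance factor
# Synthetic ln(inventory) built from the reported facility-level estimates
ly = [2.78 + 0.129 * a + 0.336 * c for a, c in zip(lx1, lx2)]
b0, b1, b2 = fit_ols(lx1, lx2, ly)
```

The logarithmic transformation turns the multiplicative utilization and variability factors into additive regressors, which is what allows each factor's contribution to be tested separately.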
4.2. Study 2 findings
Industry-level archival data were also assessed
against the CVI model. As at the facility-level in study
1, the bivariate correlation between inventory and the
product of the average industry-level utilization factor
and coefficient of variation factor (cvd² + cvp²) was
significant, at 0.377 (p ≤ 0.01). By way of sensitivity
analysis, the correlation coefficient was also estimated
for each decade window individually, with estimates of
0.32, 0.43 and 0.48 for the 1970s, 1980s and 1990s,
respectively. Thus, there was general consistency across
a long time-horizon.
The significance of each factor was also tested using
linear regression with the addition of the two control
variables:
ln(inventory_i,t) = β0 + β1 ln(r_i,t²/(1 − r_i,t)) + β2 ln((cvd² + cvp²)_i,t)
                    + β3 R_t + β4 GGDP_t   (8)
where i = industry and t = decade-window, i.e., 1970s,
1980s and 1990s. Two models were tested, model 2.1
with only ln(cvd²) to separate out demand variability,
and model 2.2 as formulated in (8).
As with the earlier facility-level model, the
contribution of each factor was significant, although
the overall variance explained was significantly less,
with an R2 of 0.172 (p ≤ 0.05) for model 2.2 (see
Table 4). Two observations about these results merit
emphasis. First, the proposed CVI trade-offs
were significant even in very large-scale business
systems. Second, potential limitations to the generalizability of the CVI model become evident as the scale of the
business system expands, with a considerable decline in
explanatory power. Clearly, variability (from
whatever source, including macro-economic factors)
and utilization do not explain all of the underlying
variation in inventory; nevertheless, considering their
combination is critical even for large-scale business
systems. As such, the explanatory power of other factors
(beyond the CVI trade-offs) is reasonably small for
narrowly defined processes, but is increasingly important as the system scale expands. This, too, represents a
potentially rich area for further empirical scrutiny.
5. Discussion
These exploratory, empirical results offer intriguing
descriptive and explanatory insights for advancing
current research-based process management theory and
understanding. The following discussion elaborates on
these insights and offers additional research propositions emanating from our findings intended to motivate
future research efforts that apply the CVI model and
trade-offs heuristic to the study of a broader array of
OM issues.
5.1. Generalizability and boundaries of CVI trade-offs
Manufacturing and service operations are typically
characterized by multiple parallel and serial processes,
tandem queues, etc., where effective process management requires explicit consideration of the trade-offs
between capacity utilization, variability and inventory.
The wide variety of complex operating configurations
and business systems that currently exists requires
managers to choose, largely based on heuristics, the
appropriate mix of process buffers to employ. Even
then, ‘‘the cost of the various options for reducing or
buffering variability will vary between environments,
no one solution is right for all systems’’ (Hopp and
Spearman, 2004, p. 146).
Both researchers and managers need strong foundational, theory-driven tenets that guide continued
investigation and simultaneously inform practice by
offering pathways for improving process management
policies and practices. To that end, a great deal of
analytic, optimization-based research has been undertaken to address these theoretical needs, although the
difficult nature of these problems has necessitated an
emphasis on modeling narrowly defined, relatively
simple processes that capture only a few basic elements
of the real world (Silver, 2004). The mathematical
intractability introduced, or assumptions necessitated,
by different forms of variability and the variety of
complex process flows requires that process management research efforts employ more simulation and
empirically based analyses of operating and business
processes.
Simulation was employed here, along with an
analysis of archival data, for different reasons. The
investigation at both the facility- and industry-level
enabled an exploratory assessment of the extent of
generalizability and application of the process management triangle. Overall, empirical evidence was found to
support the theoretical application of the process
management triangle, with its trade-offs between
capacity utilization, inventory and variability. First,
analysis of teaching case study data of the complex
process of iron ore processing offered a starting point to
identify and explore the impact of both predictable and
random variability on performance, and the relationship
with capacity utilization and inventory. Our simulation
explicitly distinguishes between predictable and random variability, and is noteworthy in being among the
first studies to examine the trade-off implications of these two types of variability simultaneously.
Second, the novel use of industry-level data and the
finding of a statistically significant, albeit lower,
correlation – and statistically significant regression
estimates – indicates that these relationships are
important for wide-ranging business systems as well.
Given that the process trade-offs between capacity
utilization, variability and inventory were assessed at
two operating system extremes, we believe that the
process management triangle constitutes a multilevel
theory and that the empirical results are expected to be
generalizable to units of analysis between these
extremes, such as at the strategic business unit
(SBU)- and firm-levels. Therefore, we propose that:
Proposition 3. Processes reflecting the capacity utilization, variability and inventory trade-offs are present
at the SBU- and firm-levels. These linkages form the
basis for assessing process management policies and
practices within these levels.
What research difficulties exist in studying
Proposition 3? Ideally, both financial and physical
measures should be examined, and it would be
particularly interesting to identify the range over which
individual firms within an industry operate, along with
the implications for firm performance. Unfortunately, the
data availability problems encountered when developing
the studies presented in this paper remain: obtaining
firm-level information on specific forms of inventory is
difficult, although obtaining aggregate inventory value is
reasonable. Estimating coefficients of variation, ideally for both
the process time and demand, is also possible, although
multiple periods of data must be available. Finally, a
critical challenge is defining and measuring capacity
utilization. As such, further research at the firm-level is
likely to require a close working relationship with an
industry association, or comparable, to gather the
detailed historical data from a panel of firms.
5.2. Process improvement through variability
reduction
Empirical research in such areas as JIT, TQM and lean
operations has made great strides towards improved
process-based practice (e.g., Ahire and Dreyfus, 2000).
Yet, too often these improvements are implemented
independently, and without a solid theoretical base to
provide a clear rationale for why they work and – equally
important – how far they should advance before shifting
managerial attention to other pathways for improvement.
Many managers, when faced with process-based
problems, also are tempted to adopt a short-term fix,
such as employing more automation, running more
overtime, instituting additional quality inspection, etc.
By way of example, the Dell direct model referred to
earlier required a much richer set of practices than JIT to
operate effectively. Early efforts by competitors to copy
this model failed, in part, because the multiple linkages
between capacity utilization, variability, and inventory
were overlooked. Instead, the coordinated design and
implementation across multiple trade-offs are necessary,
as reflected in the process management triangle.
For example, predictable variability has been a major
focus of JIT (e.g., reliable deliveries, short set-up times,
small batch sizes, etc.), and random variability a major
focus of TQM (e.g., process capabilities, conformance).
For each type of variability, presented here in the study
1 analysis as preventive maintenance and random
breakdowns, simulation results indicated that clear
improvements in throughput time and inventory levels
were possible. Variability also was clearly present in the
overall demand at the industry level (study 2). While
this in itself is not overly surprising, it is informative to
see the linkages between capacity utilization, variability, and inventory derived from much simpler
queuing models clearly generalizing to these richer,
more complex operational contexts.
Overall, our results suggest that a Pareto-type
analysis of variability is important for several reasons.
First, consistent with TQM philosophy and tools (Choi
and Eboch, 1998; Anderson et al., 1994; Cua et al.,
2001), disaggregating process variability into its major
sources and forms encourages a more focused
improvement effort and a better allocation of resources.
Second, such an analysis helps to address the question
raised earlier: when should improvement efforts get
shifted from one pathway to another? Finally, given the
relative substitutability of different forms of the
variability demonstrated here, cost and efficiency can
be introduced to identify the best path for process
improvement and attainment of operational goals (cf.
Wacker, 1996). Therefore, we propose that:
Proposition 4. Efforts to improve business processes,
whether system-based (e.g., TQM) or technology-based
(e.g., automation), must shift their emphasis over time to
reduce or accommodate the largest sources of variability.
5.3. Attenuating CVI trade-offs
Chen et al. (2005) recently examined trends in
inventories for US firms between 1981 and 2000 and
the implications of these trends for
manufacturing companies. They reported the negative
performance results (i.e., poor long-term stock returns)
for firms carrying abnormally high levels of inventories,
but offered little discussion as to why inventories – most
notably finished-goods inventory – did not decline
during this period. Clearly, one rational explanation is
that poor management may be a contributing factor;
managers can always carry more inventory
than the CVI trade-off heuristic indicates is necessary.
However, much research and managerial interest has
focused on flexible process-based approaches, which
may help to attenuate the relationship between
variability and capacity utilization, given a particular
inventory level. For example, volume flexibility (Jack
and Raturi, 2003) may allow industry practices and
individual firms to accommodate greater shipment
variability.
Approaches to attenuate the need for CVI trade-offs
can be divided into those factors that are derived from
options external to the firm’s processes, and those
internal. External factors related to volume flexibility
include supplier networks, contract manufacturing
(which pools demand from downstream manufacturers), and
international outsourcing partners. When these options
are used, the system boundaries are, in essence, expanded
to include the supplier network (and any other competitors
that draw on capacity from the same supplier network). In
contrast, demand management techniques, such as yield
management, discounting, and price increases, are not
expected to moderate CVI relationships, as these techniques
instead adjust the observed coefficient of variation.
Therefore, we propose that:
Proposition 5a. At the firm-level, CVI trade-offs can be
attenuated through external adjustments that affect
process time, such as outsourcing. However, external
adjustments that alter demand do not change the fundamental trade-offs at the firm level.
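The attenuation available from pooling demand across downstream customers, as in the contract manufacturing option noted above, follows from a simple statistical property: for independent, identically distributed demand streams, the coefficient of variation of the pooled stream falls as the square root of the number of streams. A minimal sketch (numbers are illustrative assumptions):

```python
import math

# Why pooling demand from n independent downstream customers
# attenuates variability: for i.i.d. streams, the pooled coefficient
# of variation equals the single-stream CV divided by sqrt(n).
def pooled_cv(mean, std, n):
    """CV of the sum of n independent demand streams sharing a
    common mean and standard deviation."""
    pooled_mean = n * mean
    pooled_std = math.sqrt(n) * std
    return pooled_std / pooled_mean  # = (std / mean) / sqrt(n)

for n in (1, 4, 16):
    print(n, round(pooled_cv(100.0, 50.0, n), 3))
```

Correlated demand streams would dilute this benefit, which is one reason the competitors sharing a supplier network, as noted above, matter for the attainable attenuation.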
Internal factors relate to the labor intensity of the
process. Capital intensity (or conversely, labor intensity)
varies greatly both between firms or plants within an
industry and between industries, based on operations
strategy (Hayes and Wheelwright, 1978), operational
capabilities (Carrillo and Gaimon, 2004) and local labor
costs. The inherent flexibility of many labor-intensive
business processes may permit greater adjustment to
meet predictable and random variation while maintaining lower inventory levels and higher capacity utilization, subject to labor practices and union agreements.
Wholly-owned or joint-venture plant networks also
provide options to expand the flexibility of business
systems (Jack and Raturi, 2003). Therefore, we propose
that:
R.D. Klassen, L.J. Menor / Journal of Operations Management 25 (2007) 1015–1034
Proposition 5b. CVI trade-offs can be attenuated
through adjustments that are internal to the business
system, such as labor-intensive processes or flexible
plant networks.
This ability to attenuate the impact of variability
might also extend to capital-intensive business processes if flexible automated technologies are employed.
For example, machine and mix flexibility (Koste et al.,
2004), if derived from investment in particular
equipment technology, might allow individual firms
to accommodate greater output variability.
5.4. Linkages to other OM conceptual models
The theoretical foundation underpinning the process
management triangle and its empirical validation
reported in this paper shed further light on the
mechanisms underlying other conceptual OM frameworks. The product-process matrix (Hayes and Wheelwright, 1978) implicitly captures capacity utilization
with the process-type axis and variability with the
product axis, which together, via Eq. (5), determine
the expected average inventory levels. With few
exceptions, being off the diagonal has generally been
viewed as a formula for competitive problems.
Yet, conflicting research on the validity and
implications of the product-process matrix (e.g.,
Safizadeh et al., 1996; McDermott et al., 1997) might
be reconciled when considering the sources and forms
of variability impacting the production systems studied.
Product variety is but one major source of variability;
instead, the process management triangle suggests that a
broader definition of variability must be captured to
properly define the diagonal. High product variety when
combined with very short setup times (i.e., low
predictable variability) can result in relatively modest
overall variability, allowing operations to adopt a more
‘‘continuous’’ process than might otherwise be predicted. Alternatively, if the cost of inventory is very low,
a configuration that employs both high setup times and
product variety along with high inventory might be
defensible in a continuous process. For example, a
paper manufacturer with high variety in colors, basis
weights, and grades may choose to hold high
inventories of semi-finished inventory in the process
between paper forming and finishing.
By extension, adopting practices that target setup time
reduction, such as JIT, translates into movement off of the
product-process diagonal relative to other firms, at least
until others adopt similar practices and the frame of
reference for the entire matrix shifts. Thus, assessments of
the appropriateness of a production process vis-à-vis the
product offering would benefit from a broader consideration of capacity utilization, variability and inventory.
Another important area of debate in the OM
literature is that of trading off capabilities (e.g., cost
versus flexibility) versus cumulative reinforcement
(e.g., quality is a necessary foundation to build delivery
reliability). Skinner’s (1974) ‘‘factory focus’’ epitomizes the former perspective, whereas Ferdows and
DeMeyer’s (1990) ‘‘sandcone’’ model illustrates the
latter. Others provided both excellent reviews and
contributions to this debate (e.g., Schmenner and
Swink, 1998; Pagell et al., 2000). However, recent
research suggests that cumulative capabilities need not
develop in a particular linear order, but can vary (cf.
Menor et al., 2001; Flynn and Flynn, 2004). From a
theoretical perspective, production frontiers illustrate
the operational strategy implications of capability trade-offs (Clark, 1996; Hayes and Pisano, 1996; Vastag,
2000).
The critical issue, we feel, is alignment: if the market
requires or the firm’s operations strategy targets high
variability, the management policies for capacity
utilization and inventory must correspond. However,
a tight focus on a particular market and process tends to
limit agility and create risks as markets and process
technologies evolve (Bower and Christensen, 1995).
Thus, operations must develop capabilities that respond
to and plan for dynamic process management changes.
Moreover, improvement requires attention to underlying capabilities, such as quality, that extend across
particular market and process segments.
Interpreting these challenges within the context of
the process management triangle, operational focus
establishes expected levels of capacity utilization,
variability and inventory. For example, pushing too
hard to reduce inventory in a high variability environment will be counterproductive and hurt customer
responsiveness. However, over time, efforts to reduce
variability (e.g., quality capabilities initially and then
delivery capabilities subsequently) will permit a
corresponding reduction in excess capacity, and thus,
greater efficiency and lower cost. Alternatively, efforts
to develop flexibility (e.g., outsourcing capabilities)
permit a corresponding reduction in required excess
capacity, also resulting in greater efficiency and lower
cost. Therefore, we propose that:
Proposition 6. Strategic alignment between market
and process establishes the basis for relative levels of
capacity utilization, variability and inventory. However,
developing capabilities that reduce variability or
increase flexibility is necessary to make a corresponding reduction in inventory and improve responsiveness
and cost.
Faster product innovation, new competitive forces,
and more rapidly changing customer needs can
introduce greater variability. These, in turn, must be
met with lower capacity utilization (higher inventory is
an unlikely alternative) to maintain responsiveness.
Thus, if extended to the strategic level, the process
management triangle can assist in the assessment and
development of operations capabilities over time.
5.5. Data and research limitations
As noted earlier, our use of unobtrusive data (i.e.,
pre-existing teaching case data and government/public
source databases) is innovative for the study of process
management trade-offs. However, there are certain
limitations resulting from its use that deserve mention.
First, as noted in earlier discussion, the research
methodology must accept some shortcomings in data
availability and comparability. This required the
collection of additional data and adjustment (e.g.,
adjusting annual shipment data to a common base year
in study 2), as well as using statistical methods that
allow between-study comparisons (e.g., estimating
Eq. (7) in both studies). Second, substantive choices
in the operationalization of particular constructs, even
those that appear quite straightforward on the surface,
such as inventory, complicate both measurement and
analysis. These issues had to be recognized and
carefully reconciled, especially for cross study comparison purposes as reported in Table 4. Indeed, future
research can build upon our analyses by addressing the
need to consider similar industries across plant- and
industry-levels or to consider potential industry effects
associated with the aggregated data. In short, the use of
these innovative data required a much more
systematic assessment of the appropriateness of our
research methodology choices than originally anticipated.
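The base-year adjustment mentioned for study 2 amounts to deflating nominal shipment values by a price index. A minimal sketch of that arithmetic follows; the function name and all values are invented for illustration, not the study's data or the Statistics Canada series themselves:

```python
# Deflating nominal annual shipments to a common base year using a
# price index (values are hypothetical placeholders).
def to_base_year(nominal, price_index, base_year):
    """Convert nominal values to constant dollars of base_year:
    real = nominal * index(base_year) / index(year)."""
    base = price_index[base_year]
    return {yr: nominal[yr] * base / price_index[yr] for yr in nominal}

shipments = {1995: 120.0, 2000: 150.0}   # nominal $M (hypothetical)
ippi = {1995: 90.0, 2000: 110.0}         # price index (hypothetical)
print(to_base_year(shipments, ippi, 1995))
```

With rising prices, later years are scaled down, so apparent shipment growth partly reflects inflation rather than real volume, which is why the adjustment matters for cross-period comparisons.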
The studies reported here captured three important
elements of process management trade-offs at different
levels of analysis; however, much operating and
business detail was not specifically addressed. While
this allowed for a clearer interpretation of our research
findings, this clarity came at the expense of not
analyzing every source of variability, the changing
nature of capacity utilization, or inventory policies and
practices. Further insights might be possible by
disaggregating the inventory by stage of the business
system in study 2 (i.e., raw materials, work-in-process
and finished goods). As a result, the research was not
able to make specific recommendations about the
impact of process management trade-offs on the
placement of inventory or other industry norms (cf.
Blinder and Maccini, 1991). Finally, managing process
management trade-offs also requires a dynamic
orientation, with an emphasis on improvement, while
the methodology here was static in nature, with its focus
on the balance between capacity utilization, variability
and inventory. For example, future research is needed to
examine the implications of particular paths for
variance reduction, pooling policies, and potentially
diminishing returns on capacity and inventory
performance.
6. Conclusions
In this paper, we have offered an empirical
generalization of process management ‘‘conventional
wisdom’’ resulting from the analytic modeling of basic
queuing models such as the M/G/1 and the Pollaczek-Khintchine formula. As a starting point, one objective
of this research was to explore the scope of application,
ranging from relatively complex, real-world operational
processes to large-scale, industry-level business systems. The general convergence of the results of studies 1 and 2
– emanating from the analysis of novel data sources for
the study of process management issues – provides
compelling, yet exploratory, evidence to support our
claim that CVI trade-offs occur at multiple levels of
operating and business systems, and have distinct
managerial implications beyond just the operating
process level, which is the unit of analysis for the
majority of the extant research. Indeed, the implications
of variability reduction, as we have discussed, provide
interesting insights for managers grappling with process
management issues and challenges. Looking for
additional means to either attenuate or accommodate
CVI trade-offs must remain a management priority,
whether through outsourcing, flexible technologies or
demand management.
Looking forward to future research, at least four
paths are identified. First, while we have suggested in
our discussion various flexibility-based approaches to
attenuate the trade-off impact, these approaches also
impact customer-related issues such as delivery lead
times. Further study is required that examines how lead
time is related to the CVI trade-off heuristic in more
complex operating and business systems. Second,
capacity utilization, variability and inventory trade-offs
likely inform research on process improvement. Efforts
to explicitly seek improvement options that reduce
variability or extend flexibility might focus on one
dimension in the short term, but in the longer term, need
to be balanced across all three CVI dimensions. Third,
factors beyond capacity utilization, variability and
inventory, such as outsourcing tactics and labor
intensity, may moderate linkages in process management. Finally, integrative conceptual models in operations strategy, such as the focused factory and
cumulative capabilities model, might be better informed
by explicitly taking into account a broader definition of
variability, including such aspects as new product
introductions, new process technologies, and reconfiguring supply chains given new concerns such as
product take-backs. We believe that efforts to explicitly
address each of these future research paths in process
management, along with the additional research
propositions offered earlier, through the utilization of
distinctive empirical research methods and data are
likely to result in further advancements in process
management theory and understanding through the
discovery of ‘‘new wine from an old bottle’’.
Acknowledgments
The authors would like to thank the Special Issue’s
guest editors, four anonymous reviewers, and Jorge
Colazo for their insightful comments and suggestions.
Their input has resulted in notable improvements to this
research project. Additionally, the authors would like to
thank the Social Science and Humanities Research
Council (SSHRC) of Canada for financial support of
this research.
Appendix A. Iron Ore Company of Ontario
(IOCO) (Piper and Wood, 1991)
IOCO operated a Canadian open-pit iron-ore mine
and ore-handling facility, where production of ore was
scheduled on a year-round, continuous basis. The IOCO
process, along with gross daily capacities, is summarized in Fig. 2. Additional blasting, shovels and trucks
were needed to handle waste operations, although they
are not included in this analysis. Each day, large drills
cut 12-metre (m) holes into solid rock, which were filled
with explosive slurry. The mine was cleared for
30 minutes during blasting, during which all ore
movement out of the mine stopped. After
the blast, large electric shovels loaded diesel-powered
dump trucks with ore. Typically, four trucks were
assigned to each shovel, and these traveled approximately 1.5 km to the crushing operation. The trucks
would dump the ore into one of two crushers; two trucks
could dump simultaneously into each crusher. The ore
was crushed in this operation using a series of jaw
crushers, screens and gyratory crushers. Crushed ore
then proceeded by conveyor first to storage silos, and
then to the concentrator, which upgraded the iron
content of the ore. It was company policy to operate the
concentrator at capacity at all times.
IOCO’s production faced a number of challenges
that resulted in process disruptions. For example, the
weight and hardness of the processed rock caused wear
and tear on the crusher’s mechanism. This necessitated
both planned preventive maintenance (i.e., predictable
variability) and occasional repairs (i.e., random
variability). Preventive maintenance required that one
crusher be closed through the morning shift; both
crushers were operational during the afternoon and
evening shifts. The crushers were prone to brief
downtime due to ‘‘bridging’’, when large rocks jammed
the jaw crushers. The crusher remained down until the
rock was removed or shaken loose. Such delays varied
from 1 min to over an hour; a few delays lasted a
complete shift. It was rare for both crushers to be down
during the afternoon and evening shifts. The daily
blasting was another source of predictable
variability. When disruptions occurred, crushed crude
ore could be removed from the storage silos to ensure
that the concentrator was continuously fed.
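The two disruption types described above can be sketched in a minimal Monte Carlo simulation of crusher availability. All parameter values (jam probability, jam duration, shift length) are illustrative assumptions of ours, not figures from the IOCO case:

```python
import random

# Minimal Monte Carlo sketch of the IOCO crusher disruptions: one
# crusher is down for preventive maintenance during the morning shift
# (predictable variability), and random "bridging" jams cause
# additional downtime (random variability).
def simulate_day(rng, shift_hours=8.0, jam_prob_per_hour=0.5,
                 mean_jam_minutes=10.0):
    """Return the fraction of total crusher-hours actually available."""
    available = 0.0
    for crusher in range(2):
        for shift in ("morning", "afternoon", "evening"):
            if crusher == 0 and shift == "morning":
                continue  # planned preventive maintenance
            up = shift_hours
            for _ in range(int(shift_hours)):      # hourly jam checks
                if rng.random() < jam_prob_per_hour:
                    up -= rng.expovariate(1.0 / mean_jam_minutes) / 60.0
            available += max(up, 0.0)
    return available / (2 * 3 * shift_hours)

rng = random.Random(42)
avail = sum(simulate_day(rng) for _ in range(1000)) / 1000
print(round(avail, 3))
```

Under these assumptions, availability is bounded above by the maintenance schedule (five of six crusher-shifts) and eroded further by random jams, mirroring how the storage silos between crushing and concentrating buffer both forms of variability.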
References
Ahire, S.L., Dreyfus, P., 2000. The impact of design management and
process management on quality: an empirical investigation. Journal of Operations Management 18 (5), 549–575.
Anderson, J.C., Rungtusanatham, M., Schroeder, R.G., 1994. A theory
of quality management underlying the Deming management
method. Academy of Management Review 19, 472–509.
Anupindi, R., Chopra, S., Deshmukh, S.D., Van Mieghem, J.A.,
Zemel, E., 1999. Managing Business Process Flows. Prentice
Hall, New York.
Bank of Canada, 2005. Bank Rate, Series V122530, Ottawa, ON
(www.bankofcanada.ca, accessed January 2006).
Banker, R.D., Datar, S.M., Kekre, S., 1988. Relevant costs, congestion
and stochasticity in production environments. Journal of Accounting and Economics 10, 171–197.
Barney, J., 1991. Firm resources and sustained competitive advantage.
Journal of Management 17 (1), 99–120.
Benner, M.J., Tushman, M.L., 2003. Exploitation, exploration, and
process management: the productivity dilemma revisited. Academy of Management Review 28 (2), 238–256.
Blinder, A.S., Maccini, L.J., 1991. Taking stock: a critical assessment
of recent research on inventories. The Journal of Economic
Perspectives 5 (1), 73–96.
Bouchard, T.J., 1976. Unobtrusive measures: an inventory of uses.
Sociological Methods and Research 4, 267–300.
Bourland, K.E., Powell, S.G., Pyke, D.F., 1996. Exploiting timely
demand information to reduce inventories. European Journal of
Operational Research 92, 239–253.
Bower, J.L., Christensen, C.M., 1995. Disruptive technologies: catching the wave. Harvard Business Review 73 (1), 43–53.
Bradley, J.R., Arntzen, B.C., 1999. The simultaneous planning of
production, capacity, and inventory in seasonal demand environments. Operations Research 47 (6), 795–806.
Buffa, E.S., 1980. Research in operations management. Journal of
Operations Management 1 (1), 1–8.
Carrillo, J.E., Gaimon, C., 2004. Managing knowledge-based resource
capabilities under uncertainty. Management Science 50 (11),
1504–1518.
Chen, H., Frank, M.Z., Wu, O.Q., 2005. What actually happened to the
inventories of American companies between 1981 and 2000?
Management Science 51 (7), 1015–1031.
Choi, T.Y., Eboch, K., 1998. The TQM paradox: relations among
TQM practices, plant performance, and customer satisfaction.
Journal of Operations Management 17 (1), 59–75.
Chopra, S., Lovejoy, W., Yano, C., 2004. Five decades of operations
management and prospects ahead. Management Science 50 (1),
8–14.
Clark, K.B., 1996. Competing through manufacturing and the new
manufacturing paradigm: is manufacturing strategy passé? Production and Operations Management 5 (1), 42–58.
Corbett, C.J., Van Wassenhove, L.N., 1993. The natural drift: what
happened to operations research? Operations Research 41 (4),
625–640.
Corrado, C., Mattey, J., 1997. Capacity utilization. The Journal of
Economic Perspectives 11 (1), 151–167.
Cua, K.O., McKone, K.E., Schroeder, R.G., 2001. Relationships
between implementation of TQM, JIT, and TPM and Manufacturing Performance. Journal of Operations Management 19 (6), 675–
694.
De Vany, A., 1976. Uncertainty, waiting time, and capacity utilization:
a stochastic theory of product quality. Journal of Political Economy 84 (3), 523–541.
Fiegenbaum, A., Karnani, A., 1991. Output flexibility—a competitive
advantage for small firms. Strategic Management Journal 12 (2),
101–114.
Ferdows, K., DeMeyer, A., 1990. Lasting improvements in manufacturing performance. Journal of Operations Management 9 (2),
168–184.
Flynn, B.B., Flynn, E.J., 2004. An exploratory study of the nature of
cumulative capabilities. Journal of Operations Management 22,
439–457.
Foulds, L.R., 1983. The heuristic problem-solving approach. Journal
of the Operational Research Society 34 (10), 927–934.
Grover, V., Malhotra, M.K., 1997. Business process reengineering: a
tutorial on the concept, evolution, method, technology and application. Journal of Operations Management 15 (3), 193–213.
Handfield, R.B., Melnyk, S.A., 1998. The scientific theory-building
process: a primer using the case of TQM. Journal of Operations
Management 16 (4), 321–339.
Hayes, R.H., Pisano, G.P., 1996. Manufacturing strategy: at the
intersections of two paradigm shifts. Production and Operations
Management 5 (1), 25–41.
Hayes, R.H., Wheelwright, S.C., 1978. Link manufacturing process
and product life cycles. Harvard Business Review 56 (1), 133–140.
Hendricks, K.B., Singhal, V.R., 2001. The long-run stock price
performance of firms with effective TQM programs. Management
Science 47 (3), 359–368.
Hopp, W.J., Spearman, M.L., 2001. Factory Physics: Foundations of
Manufacturing Management, 2nd ed. Irwin/McGraw-Hill, Boston, MA.
Hopp, W.J., Spearman, M.L., 2004. To pull or not to pull: what is the
question? Manufacturing and Service Operations Management 6
(2), 133–148.
Huson, M., Nanda, D., 1995. The impact of just-in-time manufacturing on firm performance in the US. Journal of Operations Management 12 (3/4), 297.
Ittner, C.D., Larcker, D.F., 1997. The performance effects of
process management techniques. Management Science 43 (4),
522–534.
Jack, E.P., Raturi, A.S., 2003. Measuring and comparing volume
flexibility in the capital goods industry. Production and Operations
Management 12 (4), 480–501.
Karmarkar, U.S., 1987. Lot sizes, lead times and in-process inventories. Management Science 33 (3), 409–418.
Kaynak, H., 2003. The relationship between total quality management
practices and their effects on firm performance. Journal of Operations Management 21 (4), 405–435.
Kingman, J.F.C., 1961. The single server queue in heavy traffic. In:
Proceedings of the Cambridge Philosophical Society, vol. 57. pp.
902–904.
Koste, L.L., Malhotra, M.K., 1999. A theoretical framework for
analyzing the dimensions of manufacturing flexibility. Journal
of Operations Management 18 (1), 75–93.
Koste, L.L., Malhotra, M.K., Sharma, S., 2004. Measuring dimensions
of manufacturing flexibility. Journal of Operations Management
22 (2), 171–196.
Krajewski, L.J., King, B.E., Ritzman, L.P., Wong, D.S., 1987. Kanban,
MRP, and shaping the manufacturing environment. Management
Science 33 (1), 39–57.
Lee, H.L., Padmanabhan, V., Whang, S., 1997. Information distortion
in a supply chain: the bullwhip effect. Management Science 43 (4),
546–558.
Little, J.D.C., 1961. A proof for the queuing formula: L = λW.
Operations Research 9, 383–387.
Little, J.D.C., 1992. Tautologies, models and theories: can we find
‘laws’ of manufacturing? IIE Transactions 24 (3), 7–13.
Little, J.D.C., 2004. Comments on ‘models and managers: the concept
of a decision calculus’. Management Science 50 (12), 1854–1860.
Lovejoy, W.S., 1998. Integrated operations: a proposal for operations
management teaching and research. Production and Operations
Management 7 (2), 106–124.
Lovejoy, W.S., Sethuraman, K., 2000. Congestion and complexity in a
plant with fixed resources that strives to make schedule. Manufacturing and Service Operations Management 2 (3), 221–239.
Magretta, J., 1998. The power of virtual integration: an interview with
Dell Computer’s Michael Dell. Harvard Business Review 76 (2),
72–84.
McDermott, C.M., Greis, N.P., Fischer, W.A., 1997. The diminishing
utility of the product/process matrix—a study of the US power tool
industry. International Journal of Operations and Production
Management 17 (1), 65–84.
McGrath, J., 1982. In: McGrath, J.E., Martin, J., Kulka, R.A.
(Eds.), Dilemmatics: the study of research choices and dilemmas. Judgment Calls In Research, Sage, Newbury Park, CA.
McKone, K.E., Schroeder, R.G., Cua, K.O., 2001. The impact of total
productive maintenance practices on manufacturing performance.
Journal of Operations Management 19 (1), 39–58.
Medhi, J., 2003. Stochastic Models in Queuing Theory, 2nd ed.
Academic Press, Amsterdam, Netherlands.
Menor, L.J., Roth, A.V., Mason, C.H., 2001. Agility in retail banking:
a numerical taxonomy of strategic service groups. Manufacturing
and Service Operations Management 3 (4), 273–292.
Meredith, J., 1998. Building operations management theory through
case and field research. Journal of Operations Management 16 (4),
441–454.
Milgrom, P., Roberts, J., 1988. Communication and inventory as
substitutes in organizing production. Scandinavian Journal of
Economics 90 (3), 275–289.
Pagell, M., Melnyk, S., Handfield, R., 2000. Do trade-offs exist in
operations strategy? Insights from the stamping die industry.
Business Horizons 43 (3), 69–77.
Pannirselvam, G.P., Ferguson, L.A., Ash, R.C., Siferd, S.P., 1999.
Operations management research: an update for the 1990s. Journal
of Operations Management 18 (1), 95–112.
Piper, C.J., Wood, A.R., 1991. Iron Ore Company of Ontario,
9A91D004, Richard Ivey School of Business. University of
Western Ontario, London, ON.
Rajagopalan, S., Malhotra, A., 2001. Have US manufacturing inventories really decreased? An empirical study. Manufacturing and
Service Operations Management 3 (1), 14–24.
Ramdas, K., 2003. Managing product variety: an integrative review
and research directions. Production and Operations Management
12 (1), 79–101.
Ritzman, L.P., Krajewski, L.J., Klassen, R.D., 2004. Foundations of
Operations Management, Canadian Edition. Pearson Education
Inc., Toronto, ON.
Rogers, E.M., 1995. Diffusion of Innovation, 4th ed. The Free Press,
New York.
Rohleder, T.R., Silver, E.A., 1997. A tutorial on business process
improvement. Journal of Operations Management 15, 139–154.
Rousseau, D., 1985. Issues of level in organizational research: multilevel and cross-level perspectives. In: Cummings, L.L., Staw,
B.M. (Eds.), Research in Organizational Behavior, vol. 7. JAI
Press, Greenwich, CT.
Safizadeh, M.H., Ritzman, L.P., Sharma, D., Wood, C., 1996. An
empirical analysis of the product-process matrix. Management
Science 42 (11), 1576–1591.
Schmenner, R.W., Swink, M.L., 1998. On theory in operations
management. Journal of Operations Management 17 (1), 97–113.
Schmidt, G., 2005. The OM triangle. Operations Management Education Review 1 (1), 87–104.
Scudder, G.D., Hill, C.A., 1998. A review and classification of
empirical research in operations management. Journal of Operations Management 16 (1), 91–101.
Silver, E.A., 2004. Process management instead of operations management. Manufacturing and Service Operations Management 6
(4), 273–279.
Skinner, W., 1966. Production under pressure. Harvard Business
Review 42 (6), 139–146.
Skinner, W., 1974. The focused factory. Harvard Business Review 52
(3), 113–121.
Statistics Canada, 2005a. Annual Survey of Manufactures (ASM),
Principal Statistics by North American Industry Classification
System (NAICS), Table 301-0003; by Standard Industrial Classification, 1980 (SIC), Table 301-0001; by Standard Industrial
Classification, 1970 (SIC), Table 301-0002; Ottawa, ON (www.estat.statcan.ca, accessed September 2005).
Statistics Canada, 2005b. Industrial Capacity Utilization Rates, by
North American Industry Classification System (NAICS), Table
028-0002; and Industrial Capacity Utilization Rates, by Standard
Industrial Classification, 1980 (SIC), Table 028-0001, Ottawa, ON
(www.estat.statcan.ca, accessed September 2005).
Statistics Canada, 2005c. Manufacturers’ Shipments, Inventories,
Orders and Inventory to Shipment Ratios, by North American
Industry Classification System (NAICS), Table 304-0014; Manufacturers’ inventories, orders and inventory to shipment ratios, by
Standard Industrial Classification, 1980 (SIC), Table 304-0001;
Estimated value of shipments, orders and inventories, by Standard
Industrial Classification, 1970 (SIC), Table 304-0007, Ottawa, ON
(www.estat.statcan.ca, accessed September 2005).
Statistics Canada, 2005d. Industrial Product Price Index, by industry
and industry group, annual, Table 329-0001; by North American
Industry Classification System (NAICS), annual, Table 329-0038,
Ottawa, ON (www.estat.statcan.ca, accessed September 2005).
Statistics Canada, 2006. Table 380-0002—Gross Domestic Product
(GDP), expenditure-based, quarterly (GDP, market prices, constant 1992 prices), Table 380-0002; Ottawa, ON (www.estat.statcan.ca, accessed January 2006).
Stidham, S., 1974. A last word on L = λW. Operations Research 22,
417–421.
Strategos Incorporated, 2006. Capacity, inventory, variability and
manufacturing strategy (www.strategosinc.com/capacity_inventory.htm; accessed January 30, 2006).
Stuart, I., McCutcheon, D., Handfield, R., McLachlin, R., Samson, D.,
2002. Effective case research in operations management: a process
perspective. Journal of Operations Management 20 (5), 419–433.
Swamidass, P.M., 1991. Empirical science: new frontier in operations
management research. Academy of Management Review 16 (4),
793–814.
Tayur, S., 2000. Improving operations and quoting accurate lead times
in a laminate plant. Interfaces 30 (5), 1–15.
Vastag, G., 2000. The theory of performance frontiers. Journal of
Operations Management 18 (3), 353–360.
Voss, C., Tsikriktsis, N., Frohlich, M., 2002. Case research in operations management. International Journal of Operations and Production Management 22 (2), 195–219.
Wacker, J.G., 1996. A theoretical model of manufacturing lead times
and their relationship to manufacturing goal hierarchy. Decision
Sciences 27 (3), 483–517.
Webb, E.J., Campbell, D.T., Schwartz, R.D., Sechrest, L., 2000.
Unobtrusive Measures (Revised Edition). Sage Publications,
Thousand Oaks, CA.
Whitt, W., 1993. Approximations for the GI/G/m queue. Production
and Operations Management 2 (2), 114–161.
Wysocki, B., Lueck, S., 2006. Just-in-time inventories make U.S.
vulnerable in a pandemic. The Wall Street Journal, January 12,
2006, A1–A7.