
MEASURING KEY DIMENSIONS OF KNOWLEDGE:
AN ILLUSTRATION FOR TECHNOLOGICAL KNOWLEDGE
SUSAN K. MCEVILY
Katz Graduate School of Business
University of Pittsburgh
252 Mervis Hall
Pittsburgh, PA 15260
Phone: (412) 648-1707
Fax: (412) 648-1693
E-mail: [email protected]
BALA CHAKRAVARTHY
Spencer Chair Professor of Technological Leadership
3-426 Carlson School of Management Building
University of Minnesota
321 19th Avenue South
Minneapolis, MN 55455
612-625-0882 (phone)
612-624-2056 (fax)
[email protected]
1/27/00
Draft: Comments Welcome!
The authors gratefully thank Balaji Koka for his skillful assistance with this project.
MEASURING KEY DIMENSIONS OF KNOWLEDGE:
AN ILLUSTRATION FOR TECHNOLOGICAL KNOWLEDGE
Abstract
It is widely believed that intangible assets, particularly knowledge, represent the most
promising sources of sustainable competitive advantage. Certain attributes of knowledge, such
as its tacitness, complexity, and specificity, have been theorized to affect the cost of imitation by
rivals as well as the expense a firm incurs to transfer and recombine its knowledge internally.
Based on this, prior research recommends that firms actively manage these attributes in order to
protect, leverage, and create knowledge for competitive advantage. Yet, few studies have sought
to test these theoretical claims, in large part because knowledge and its attributes are quite
difficult to measure. In this paper, we suggest an approach to measure the complexity,
specificity, and tacitness of knowledge, and illustrate its utility for technological knowledge. We
report on the validity of these measures using data from the adhesives industry.
Knowledge has become a focal point of both academic and practitioner quests to identify
sources of sustainable competitive advantage (Quinn, 1992; Prahalad & Hamel, 1994; Nonaka &
Takeuchi, 1995; Drucker, 1995; Grant & Spender, 1996; Teece, 1998b). Prior research on this
topic has sought to define attributes of knowledge that affect both the degree and manner in
which a firm can profit from it. Three attributes in particular – tacitness, specificity, and
complexity – are claimed to enhance a firm’s ability to appropriate profits from its knowledge
resources and influence the firm’s strategy for leveraging them (Garud & Kumaraswamy, 1995;
Sanchez & Mahoney, 1996; Szulanski, 1996; Coff, 1997; Hansen, Nohria, & Tierney, 1999).
These attributes are also believed to affect the durability of knowledge-based advantage (Winter,
1987; Reed & DeFillippi, 1990; Kogut & Zander, 1992; Argyris, 1999). However, despite a
burgeoning conceptual literature, very few studies have been able to demonstrate empirically that
knowledge attributes matter in the way that theory suggests.
This lack of empirical research reflects, in large part, the challenges associated with
measuring knowledge and its key attributes, and relating knowledge to competitive advantage.
In this paper, we suggest that these difficulties might be addressed by focusing on the structure
of a firm's ‘performance knowledge’ – i.e. the knowledge that a firm relies on to achieve superior
performance on criteria, such as manufacturing costs or product quality, that are directly linked
to a firm's profitability. As our objective is to facilitate research on knowledge-based advantage,
we discuss in detail how scholars can decompose this knowledge and select elements of its
structure that can form the basis of valid measures of complexity, specificity, and tacitness.
The remainder of the paper is organized as follows. First, we define knowledge and
suggest why it is advantageous to focus on a firm’s performance knowledge. Next, we present a
general approach to measuring knowledge and quantifying variations in its complexity,
specificity, and tacitness. We then report on the validity of four measurement instruments using
data from the adhesives industry. To conclude, we discuss how the field’s current understanding
of knowledge-based competitive advantage could be enhanced by empirically investigating
theoretical relationships between knowledge attributes and various performance outcomes.
KNOWLEDGE AS A SOURCE OF COMPETITIVE ADVANTAGE
The literature on knowledge encompasses three distinct research streams: (i) work on a
knowledge-based theory of the firm, (ii) articles theorizing how knowledge can be a source of
competitive advantage, and (iii) studies of knowledge management. All three broadly define
knowledge as understanding of some phenomenon or entity, which enables action, but each area
focuses on knowledge that is related to organizational action in distinct ways (Kogut & Zander,
1992; Spender, 1996; Garud, 1997).
Knowledge-based theories of the firm seek to explain what makes organizations better
institutional mechanisms for carrying out certain types of economic activity than the market. This
literature suggests that firms can transfer and integrate particular types of knowledge faster than the
market, and that this capability is rooted in organizational processes that continually define and
subtly reshape a firm's identity (Kogut & Zander, 1992, 1996; Ashforth & Mael, 1996; Grant,
1996). Because this collective sense of identity is difficult to separate from the processes
producing it, knowledge-based theories of the firm emphasize knowledge as action or process (von
Krogh, Roos, & Slocum, 1994; Blackler, 1995; Spender, 1996).
By contrast, the latter two research streams view knowledge as a resource, or an input to
productive activities. This research maintains that what a firm knows about tangible resources,
rather than the resources themselves, enables it to create value in unique ways, and thus is the most
basic source of competitive advantage (Penrose, 1959; Spender, 1996). Profits stem from the
knowledge a firm uses to develop better performing or lower cost products and services (Conner,
1991; Peteraf, 1993), and superior profits may persist if this knowledge is difficult to imitate
(Barney, 1991; Mahoney & Pandian, 1992; Teece, 1998a). Research on knowledge management
examines how firms can identify, protect, and utilize knowledge in order to create and maintain
these advantages (Nonaka & Takeuchi, 1995; Leonard-Barton, 1995; Hansen et al., 1999).
Our interest is in knowledge as a source of competitive advantage, so in this paper we treat
knowledge as a resource that enables particular types of action. Resource-based theory attributes
the capacity of a resource, such as knowledge, to provide sustainable competitive advantage to its
intrinsic characteristics (Lippman & Rumelt, 1982; Ghemawat, 1998). For instance, since
knowledge is intangible, and hence not traded on efficient markets, competition is less likely to
cause its value to be reflected in the costs of acquiring it (Barney, 1986; Teece, 1998a). The fact
that productive knowledge often has firm-specific elements means that a firm can appropriate a
substantial proportion of the rents that its unique knowledge generates (Klein, Crawford &
Alchian, 1978; Williamson, 1985; Wernerfelt, 1989; Grant, 1991).
However, these resource-based propositions can be especially difficult to test because the
theory defines competitive advantage in terms of economic profits or rents. In order to attribute
competitive advantage to a particular resource, a study must demonstrate that a firm earns a
higher rate of return on a (set of) resource(s) than its competitors earn from the same or
substitute resources (Barney, 1991; Peteraf, 1993; Besanko, Dranove & Shanley, 1996;
Ghemawat, 1998). This is challenging because many resources, including knowledge, contribute
to a firm’s profits in a diffuse and complex manner.
Nevertheless, studies can trace the advantages flowing from specific resources by
focusing on their relationships to intermediate performance outcomes, such as product quality or
manufacturing productivity. Ultimately, it is exceptional performance on these criteria that
contributes to profitability - not the possession of a unique resource per se. As such, the rate of
return to a knowledge stock should correspond to a firm's level of performance in these areas. If
a firm's performance is superior, it earns a higher rate of return on the underlying knowledge
stock, provided it is at least as productive in achieving that performance, and its input costs are
the same as or lower than its competitors’ (Conner, 1991; Peteraf, 1993). Competitive advantage
persists as long as a firm’s performance and/or productivity in achieving valuable criteria are
unsurpassed.
We refer to what a firm knows about how to achieve specific performance objectives as
its ‘performance knowledge’. A further benefit of attending to a firm’s performance knowledge
is that we may gain deeper insights into the organizational and competitive processes behind
persistent profits, as well as the influence of knowledge attributes on a firm’s ability to manage
these dynamics. For instance, the tacitness, specificity, and complexity of knowledge are
frequently linked to its capacity to generate persistent profits; however, the literature suggests that
these attributes can either prolong or threaten a firm’s advantage.
Tacitness, specificity, and complexity are thought to create imitation barriers, which enable
a firm to sustain an advantage based on unique knowledge (Mansfield et al., 1981; Lippman &
Rumelt, 1982; Winter, 1987; Levin et al., 1987; Reed & DeFillippi, 1990; Teece, Pisano, &
Shuen, 1997). On the other hand, these attributes might negatively affect the productivity of a
firm’s continuous improvement efforts, or hinder its ability to innovate and adapt and thereby
avoid the threat of substitution (Amit & Schoemaker, 1993; Prahalad & Hamel, 1994; Winter,
1994; Sanchez & Mahoney, 1996; Galunic & Rodan, 1998; Argyris, 1999). Research that
identifies these effects is summarized in Table 1, and the proposed relationships between
knowledge attributes and sustained advantage are illustrated in Figure 1.
INSERT TABLE 1 AND FIGURE 1
Figure 1 suggests that the same attributes of knowledge that frustrate imitation may retard a
firm’s ability to continuously improve and to innovate. Extant research does not indicate when
each of these potentially concurrent and contradictory effects is likely to dominate. Consistently
superior financial performance could, for instance, reflect three distinct competitive dynamics. A
firm may possess unique, inimitable knowledge that enables it to offer products or services with
features or functionality competitors cannot replicate. Alternatively, competition may occur
among firms that have unique but overlapping knowledge, and sustained advantage may derive
from the ability to improve product or service performance faster than competitors. A third
possibility is that a firm's superior financial performance reflects its innovative capacity - i.e. its
ability to continuously offer new functionality, entirely new products or services, or to radically
improve the means by which current product/service features or functionality are provided.
Research that focuses attention on product or service performance outcomes, and the
knowledge that enables a firm's ability to achieve them, can determine which of these
competitive dynamics is occurring and identify characteristics of knowledge that enhance a
firm's ability to succeed in each environment. Studies at this level would also help to determine
which of the effects predicted in Table 1 is dominant in a given situation, and identify contextual
factors that moderate their effects on sustained competitive advantage.
KNOWLEDGE STRUCTURES AND KNOWLEDGE HIERARCHIES
While the often fluid nature of a firm’s performance knowledge raises some intriguing
questions about how competitive advantage is sustained, it also points to the difficulties of
measuring this resource. If a firm needs to continually raise its level of product or process
performance to sustain its advantage, then presumably the content of the firm’s knowledge is also
changing. How can one measure attributes of a resource that is perpetually in a state of flux? Prior
research suggests that this challenge may be addressed by focusing more attention on the structure
of a firm’s performance knowledge than on its content.
Whereas knowledge content refers to the particular facts and theories which individuals
and organizations possess, knowledge structures describe how this information is organized and
stored in memory (Walsh, 1995). Several different types of knowledge structures have been
discussed, including scripts, schemas, categories, mental models, and cognitive maps. Each gives
meaning to information about a particular domain of activity and simplifies thought processes
leading to action. Studies of intelligence have shown that individuals’ knowledge structures affect
their ability to solve problems quickly and transfer their skills to solve new problems (Chi, Glaser,
& Rees, 1982; Singley & Anderson, 1989). Analogously, in the organizational literature, a firm’s
competence structure has been related to its flexibility and speed of productivity improvement
within a particular technological regime (Aoki, 1989; Sanchez & Mahoney, 1996).
Research on both individual and firm level competence discusses the hierarchical structure
of knowledge. At the individual level, knowledge structures are often described as consisting of
nested cognitive categories, where higher level categories contain more abstract representations
of reality and the bottom level is the actual object (Rosch, 1978; Kempton, 1978; Anderson,
1983; Adelson, 1985). Entities classified at lower levels inherit the properties associated with
higher level categories and include only those properties that distinguish them from other
categories at the same level. For example, referring to an establishment as a restaurant implies
certain things about what it does (e.g. serve food). The category restaurant may have many
members or subcategories, such as fast-food, upscale, and Mexican, which inherit the property
'serve food' but also are described by properties that distinguish them from one another, such as
level of service and type of cuisine.
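To make the notion of nested categories and inherited properties concrete, the following sketch represents the restaurant example as a simple hierarchy; the sketch is purely illustrative, and the field names and values are ours rather than part of any instrument discussed later.

```python
# A minimal sketch of a nested knowledge structure, using the restaurant
# example above. Category names, properties, and values are illustrative only.

restaurant = {
    "properties": {"serves_food": True},  # inherited by all subcategories
    "subcategories": {
        "fast-food": {"level_of_service": "counter", "cuisine": "varied"},
        "upscale":   {"level_of_service": "table",   "cuisine": "varied"},
        "Mexican":   {"level_of_service": "table",   "cuisine": "Mexican"},
    },
}

def properties_of(subcategory: str) -> dict:
    """Combine inherited properties with those that distinguish a subcategory."""
    combined = dict(restaurant["properties"])
    combined.update(restaurant["subcategories"][subcategory])
    return combined

print(properties_of("Mexican"))
# {'serves_food': True, 'level_of_service': 'table', 'cuisine': 'Mexican'}
```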
The activities that constitute a capability can also be decomposed into increasingly
specialized tasks that contribute to the achievement of certain functional objectives. For instance,
Grant (1997) describes how the activities used to manufacture telecommunications switching
equipment can be represented at various levels of abstraction. (Please refer to Figure 2.)
INSERT FIGURE 2
The knowledge a firm relies on to achieve those objectives may reside in individual
memories, organizational or group level routines, and codified standard operating procedures.
However, to describe the structure of the knowledge the firm employs to achieve its manufacturing
objectives, it is only necessary to identify unique domains of understanding and how they relate to
those performance objectives. In this paper, we refer to a particular decomposition of knowledge
into distinct categories, and the set of properties that are attached to members of each category,
as a knowledge hierarchy. The elements of a knowledge hierarchy are illustrated in Figure 3.
INSERT FIGURE 3
The structure of a firm’s performance knowledge, which can be used to measure its
attributes, refers to these categories and properties as well as the nature of their relationships to
particular performance objectives. The members of each category may change over time, but at
some level, the categories themselves will be quite stable. The locus of stability may reflect
inherent variation in the firm's environment, or fluctuation engendered by the firm's strategy for
interacting with its environment.
Knowledge structures also tend to vary across individuals and organizations, even when
they seek to achieve the same performance objectives. The particular categories of knowledge a
firm utilizes, and the properties it associates with each, are shaped by its unique experiences. For
instance, the design practices a firm uses may influence the number of component categories it
relies on to affect product performance, as well as the rate of change in the members of those
categories. The properties a firm associates with particular components may differ according to
whether it makes them internally or purchases them from suppliers, the extent to which its
engineers reuse components across products, and the degree to which the firm designs its
products using controlled experimentation or trial and error.
These differences in knowledge structure can be used to measure the complexity, tacitness,
and specificity of a firm’s performance knowledge. However, given the tendency of category
members and their relationships to performance criteria to evolve, researchers must understand the
context in which knowledge is applied in order to select elements of a knowledge hierarchy that
can be measured in a valid fashion. Next, we suggest some general principles to guide this process,
and illustrate how these choices may depend upon the context.
MEASURING THE COMPLEXITY, SPECIFICITY, AND TACITNESS OF KNOWLEDGE
Identifying Performance Criteria
The first step in developing valid measures of knowledge attributes is to identify
performance criteria appropriate to the study. These criteria place boundaries on the categories of
knowledge that need to be included in the measurement instrument. In order to relate knowledge
to competitive advantage, these criteria should clearly create value, and it should be possible to
compare levels of performance, or the value of alternative performance outcomes, across
competitors.
For example, we developed instruments to measure attributes of the technological
knowledge that adhesives manufacturers use to develop and improve products, as superior
product performance is a key source of advantage in this industry. Although firms compete to
improve adhesive performance along many dimensions, six criteria are especially critical in any
adhesive application: adhesion, stability, strength, aging, open time/set speed, and ease of
application. A researcher can work backward from performance outcomes such as these to
identify the categories of knowledge that are used to manipulate them. This not only helps to limit
the scope of the knowledge that a researcher needs to include in a measurement instrument, it also
gives interviewees a tangible focal point for describing how quickly the knowledge categories of
interest evolve. However, as we will discuss later, the need to refer to these outcomes as part of
the measurement instrument may vary by knowledge attribute.
Describing the Structure of Performance Knowledge
Once an appropriate set of criteria has been identified, the knowledge that enables a firm to
achieve them must be described. It will often be possible to identify general categories of
knowledge in the literature. For example, we consulted prior studies of technological innovation
and product development to identify categories of technological knowledge that are relevant to
improving product performance. This literature consistently describes firms’ efforts to
manipulate product performance as focusing on two domains: improving specific components
and enhancing the way in which components interact with one another (Laudan, 1984;
Henderson & Clark, 1990; Christensen, 1992).
However, as the literature tends to identify high level categories that are common to all
firms, it might be necessary to learn about the context in which a firm competes on these criteria.
The trade literature and industry experts can help to discern how firms achieve different levels of
performance, and identify more fine-grained elements of knowledge structure that vary across
firms. Our field research in the adhesives industry, for example, suggested that variation in
product performance largely derives from differences in the set of components firms use to
develop adhesives for a particular application.
Components are distinguished by their unique functional role (e.g. thickening, increasing
tackiness, reducing foam, preventing oxidation), and the list of functions that is typically relevant
to formulating adhesives with a particular technology is finite, commonly known, and quite
stable (Skeist, 1992). These are listed in Table 2.
INSERT TABLE 2 HERE
Although the universe of components for a technology is widely known, firms use unique
combinations of components. In addition, the specific substance a firm uses to achieve these
functions (the component variety - e.g. the use of silica or clay to thicken an adhesive) and the
amount it uses of each, determine how well an adhesive performs. Therefore, it should be
possible to capture variation in firms’ knowledge structures by decomposing the general category
of components into these functions. Also, the product architecture category can be decomposed
into two key design choices: what varieties and amounts of components to use.
On the other hand, if performance differences stem from the way individual firms classify
certain phenomena, rather than how they combine objects from widely known categories, it might
be necessary to elicit categories from key respondents in individual firms. For example, firms may
use unique categories to segment a market since the basis for these categories is less tied to
tangible, widely used resources such as product components. Many techniques exist for eliciting
cognitive categories, such as protocol analysis and cognitive mapping.
To completely describe the structure of a firm's performance knowledge, it is also
necessary to identify the properties (as illustrated in Figure 3) a firm associates with each of these
categories. The technology literature was useful in identifying candidate properties. In
particular, Vincenti (1990) identifies four that are relevant to technological knowledge: (i)
physical properties and characteristic behaviors, (ii) theories and heuristics, and the (iii) normal
configuration and (iv) operational principle of a device. The first two are associated with
components, while the last two help to describe differences in architectural knowledge.
Examples of physical properties are the failing strength or conductivity of materials,
viscosity of fluids, or the durability of a device (Penrose, 1959; Foster, 1986; Vincenti, 1990;
Rosenberg, 1994). Characteristic behaviors describe how these properties change when a
component interacts with other substances, devices, or environmental conditions. Firms also
acquire heuristics for exploiting the physical properties of a component and learn which theories
can be used with particular components (Laudan, 1984). Scientific theories can sometimes be
used as tools to calculate design parameters and identify conditions under which an existing
design will fail (Gibbons & Johnston, 1977; Constant, 1984).
The operational principle of a device explains how it works – ‘how its characteristic
parts … fulfill their special function’ (Polanyi, 1962). For example, the operational principle for
an ACE inhibitor anti-hypertensive drug is to prevent conversion of angiotensin I to angiotensin II
(Henderson, 1994). These principles, as well as what developers learn about how a product is
used, influence the normal configuration of a device - i.e. the prototypical arrangement of its
characteristic parts (Clark, 1985; Rosenberg, 1982; Vincenti, 1990).
We then needed to determine which of these properties are used to inform adhesive
manufacturers’ product design choices. Our interviews with industry experts, and reading of the
trade literature, revealed that formulators often rely on knowledge of the physical properties and
characteristic behaviors of components, and that this knowledge is used to anticipate how
specific components will interact with each other and application environments. However,
formulators tend to use the term 'physical properties' to refer to both physical properties and
characteristic behaviors, so we adopted only the former term.
Formulators rarely rely on scientific theories, but they do acquire heuristics through
experience. These are extremely idiosyncratic, which makes it impractical to identify each
individually; however, items that ask how a formulator exploits components and their physical
properties to manipulate product performance tap into this element of technological knowledge.
The normal configuration of an adhesive is a prototypical formula for a particular application,
but there is no clear analogy to the operational principle in this industry. There are many
different theories of adhesion, and these are neither agreed upon nor used widely during adhesive
formulation. The resulting knowledge hierarchy is illustrated in Figure 4.
INSERT FIGURE 4
Selecting Categories of Knowledge to Measure Attributes
Once the knowledge hierarchy has been delineated, categories that can be used to create
valid measures of knowledge attributes must be selected. Several objectives need to be balanced
in this process. The measurement instrument must be comprehensive and reliable, as well as
tractable. Also, any instrument that seeks to measure a source of competitive advantage should be
sensitive to the potentially proprietary nature of certain questions. This is a particular concern for
technological knowledge, where lower level categories may reflect design choices that make a
firm’s products or processes unique. Figure 3 illustrates how we expect the various forces that
affect the validity of a measure to change, as one moves up and down a knowledge hierarchy.
We refer to the levels of knowledge illustrated in Figure 4 for our examples.
Content Validity. To be valid, a measurement instrument must capture all theoretically
important facets of a construct (Schwab, 1982). For knowledge attributes, there are two
dimensions of comprehensiveness – the extent to which an instrument captures all aspects of the
attribute, and the degree to which it captures all of a firm’s performance knowledge. An
instrument that fails to capture relevant elements of a concept cannot provide a good test of
theory, and if an instrument does not consider all categories of performance knowledge, it may
not produce an accurate measure of complexity, tacitness, and specificity. We first review the
different facets of these attributes and then discuss how researchers might comprehensively
capture a firm’s performance knowledge.
Complexity is a multi-faceted construct because it consists of several dimensions that can
be independent of one another (Wood, 1986). Usually, complexity is defined according to
dimensions that increase the difficulty of comprehending how a system (i.e. an organization,
organism, device) functions or produces some outcome. Simon (1962) defines a complex system
as one that consists of many unique and interacting elements, which have equally important effects
on the outcomes produced by the system. Elements are distinct when an individual cannot use the
same knowledge to understand them, so increasing the number of unique elements raises the
amount of information that must be processed to understand the system’s behavior. The more
equally important each element is to the achievement of a performance outcome, the less knowing
how one element functions reveals about how the system as a whole works. If individual elements
are interdependent, then one must understand their joint effects on the performance outcome, and
the number of interactions increases geometrically with the number of elements.
A fourth dimension of complexity has also been linked to the difficulty of comprehending a
system: dynamism, or the degree of change in the means-end or cause-effect chains that are
used to produce a performance outcome (Wood, 1986). The more frequently the relationships
among elements of a system and its performance change, the more difficult the system will be to
understand, as new knowledge must be acquired. Whether or not this facet is necessary to create a
valid measure of complexity will depend upon whether the knowledge hierarchy changes during
the time frame that is relevant for prediction.
Specificity appears to be a uni-dimensional construct; it is simply the loss in value that
occurs when a resource is applied in a new context. However, there may be more than one
context across which a firm could transfer, or simultaneously apply, its knowledge to achieve the
performance criteria, and the degree to which knowledge loses value may depend upon the
destinations one considers. For instance, a firm might use its product performance knowledge to
serve many customers within an application, and to serve multiple applications. The extent to
which its knowledge loses value from one customer to the next, or across applications,
corresponds to degrees of specificity in a product’s end use. Performance knowledge may also
be specific to the inputs a firm uses to develop a product, and this does not have to co-vary with
end use specificity. What a firm knows about how to exploit the core component of a product
may be more or less specific to the peripheral components it is used with, for example. Thus,
there might be several variations in the contexts in which performance knowledge is applied.
These distinct loci of transfer or application could be considered different facets of specificity.
Two dimensions of tacitness are frequently discussed in the literature. The first is the
inability to articulate what one knows about how to achieve an observed performance outcome
(Polanyi, 1962; Nelson & Winter, 1982; Winter, 1987). The procedures one relies on may be
inaccessible either because they have been learned implicitly or because they have become
second nature and are taken for granted or forgotten (Reber, 1993). However, even if the steps a
firm follows can eventually be articulated, this may be insufficient for another firm to achieve
the same level of performance. For example, competitors may follow the same basic procedures to
make pianos or violins, but be unable to achieve quality or product performance that is comparable
to that embodied in a Steinway or Stradivarius (Garud, 1997). Experts might subconsciously attend
to cues and make judgments that are not communicated or observable.
On the other hand, if the causal mechanisms which influence performance are known,
these may be acted on in a variety of ways, so even if a competitor cannot imitate the same
procedures, it may be able to replicate the firm’s performance. Thus, the second dimension of
tacitness is the personal nature of knowledge (Polanyi, 1962; Nonaka & Takeuchi, 1995; Teece,
Pisano, & Shuen, 1997), which derives from an inability to articulate the principles that affect
the level of performance one achieves. Both dimensions help to describe knowledge that cannot
be communicated sufficiently to enable others to achieve the same level of performance.
In order to accurately quantify complexity, tacitness, and specificity, measures of
knowledge attributes should also reflect all dimensions of a firm's performance knowledge that
are likely to correspond to variation in these attributes across firms. Knowledge structures tend to
vary more as one moves down a knowledge hierarchy, as illustrated in Figure 3. Lower level
categories and properties are more idiosyncratic because they are closer to the actual object. At
this level, firms may classify entities differently, or associate different properties with those
entities, according to the contexts in which the firm has encountered them. However,
tractability and secrecy tend to decrease as one moves down the hierarchy. An understanding of
what drives differentiation in categories, their properties, and their relationships to performance
in a particular context may be required to determine how to balance these objectives.
For example, as we noted earlier, in the adhesives industry, product performance
differences primarily stem from the types of components (e.g. surfactants, thickeners) and the
component varieties (e.g. silica or clay as a thickener) firms use. Both the number of components
and their relative importance may differ according to how components are exploited. Some firms
rely primarily on a backbone polymer to achieve the desired performance characteristics; others
obtain the same features through their use of several additives. In this context, complexity can be
measured using the relationships between component types (Level 3) and specific product
performance criteria.
On the other hand, in industries where there is a dominant design, firms may rely on
components for exactly the same set of functions, to affect product performance. In this case, we
would have to go further down the knowledge hierarchy to capture firm differences, such as by
asking about the varieties of components they use (Level 4) or the properties of each component
that firms exploit (Level 5). Alternatively, one could move farther up a knowledge hierarchy if
firms group common components into different subsystems and subassemblies.
We did not expect to capture much additional variation by moving past Level 3 in the
adhesives industry. Asking firms to list the varieties of each component they use would be
intractable, as there are too many for this to be a manageable task. (To measure complexity, the
varieties must be listed so that their relative importance can be assessed.) Also, we did not expect firms to
differ much at Level 5. Formulators do not all attend to the physical properties of components,
but those who do tend to focus on a small number of particularly salient properties.
Reliability. Valid measures should yield consistent results across time, respondents, and
items – i.e. they must be reliable. Measures of knowledge attributes should be stable over the
time interval relevant to prediction (Schwab, 1982). For example, design knowledge that is
unique to a particular product generation may evolve too quickly to provide the basis for reliable
measures of knowledge attributes, but this will depend on the outcome one is trying to predict.
In general, temporal stability will decrease as one moves down a knowledge hierarchy because
the category members and their salient properties tend to change more often at this level.
An instrument should also produce consistent results across respondents. Inter-observer
reliability will be enhanced if the performance relationships used to measure knowledge
attributes are not idiosyncratic to the members of a category (e.g. different component varieties)
that one is asking about, as individuals may have experience with different varieties. If the nature
of the relationship between thickeners and product performance depends upon the particular
variety used, then questions at this level may produce too much variation to reliably capture
firm-level attributes. The degree to which the relationships between category members and
performance criteria vary may be technologically determined.
On the other hand, the accuracy of each respondent's judgments may increase at lower
levels of a knowledge hierarchy because respondents can consider more of the properties that are
unique to particular subcategories. The degree of variability in a performance relationship may
also depend on the questions one asks to measure a particular attribute. For instance, formulators
may find it easier to generalize about the relative importance of different component types (an
important aspect of complexity) than about their ability to apply component knowledge in
different contexts (a key facet of specificity). Again, contextual understanding is required to
balance these objectives. Let us illustrate this by discussing how we quantified tacitness in the
adhesives industry, and comparing this to the categories we selected to measure complexity.
We measured tacitness as the inverse of an expert’s ability to predict how product
performance can be manipulated and to explain why these techniques affect product performance
the way they do. Our field interviews suggested that expert formulators tend to rely on different
problem solving approaches. Some seek to discern the causal mechanisms behind adhesive
performance, while others rely heavily on trial and error. Those who rely on trial and error may
remember which component varieties are effective in certain applications, but they have little
understanding of why they work. Given our conceptualization of tacitness, we expected these
different approaches to formulation to be the primary driver of variation in tacitness.
Furthermore, these characteristic problem-solving approaches will shape the nature of a
formulator's understanding at all levels of the knowledge hierarchy. A formulator who learns the
causal mechanisms behind adhesive performance will understand different things about the
components she works with than a formulator who works by trial and error will. These
differences will characterize what formulators know about each component type, as well as the
varieties of each she works with. Therefore, whereas we had to move to Level 3 to capture
variation in complexity, Levels 2 and 5 should capture variation in tacitness that is temporally
stable. In addition, characteristic problem solving approaches tend to be passed on within
adhesive manufacturers. Individuals learn how to formulate adhesives through experience, rather
than through formal education. Labor mobility is low in this industry, and firms often encourage
experienced formulators to apprentice new employees in order to pass on what they have learned
about developing adhesives. As such, responses within a firm should be consistent across
respondents at Levels 2 and 5.
In addition to her problem solving approach, the amount of experience a formulator has
with particular component varieties may affect the tacitness of her knowledge about how to
exploit them. However, we did not expect this to influence the stability or accuracy of
judgments at Levels 2 and 5. Our interviews suggest that firms fall into three categories: some
almost never use new component varieties, others continually seek out new varieties, and many
firms adopt new varieties only when those they are familiar with are insufficient. These
differences appear to be quite stable over time, so, while the specific component varieties a
formulator works with may change, the average amount of experience she has with the current
set of components is unlikely to fluctuate a great deal. Thus, the knowledge categories described
by Levels 2 and 5 enable us to capture variation in tacitness across firms that should be stable
across time and respondents. Neither tractability nor secrecy was a concern at these levels.
A third measure of reliability is internal consistency, or the extent to which different
items yield similar measures of a latent factor. If we had asked about one category of knowledge
in many different ways, then we would expect internal consistency to be very high. However,
questions about different categories need not yield identical responses. The degree of internal
consistency may vary by context according to the factors that influence the relationships between
knowledge categories and performance criteria. In some contexts, these factors may influence
all performance relationships the same way, in which case internal consistency would be higher.
Our fieldwork in the adhesives industry suggested that the various facets of specificity
are influenced in part by a firm's approach to formulation. Some firms actively accumulate
knowledge of the physical properties that particular component varieties embody and seek to
exploit those properties during formulation. Others accumulate knowledge about which
component varieties are useful for manipulating product performance, but do not focus on their
physical properties. Since each physical property can be used to describe many components, this
knowledge is relatively less specific than knowing which component varieties have been
effective in certain applications.
Firms also differ in their efforts to learn about conditions that may be common across
application and usage environments and affect those performance outcomes, and in the tendency
to utilize the same components across applications or tailor their formulas to individual
customers. A formulator's attention to application conditions is not necessarily related to its
efforts to learn about the physical properties of components and their relationships to
performance characteristics. Therefore, these two dimensions of specificity are not necessarily
expected to move in the same direction.
The Survey Instruments
Complexity. We selected Level 3 knowledge categories to measure complexity. The
more component types a firm relies on, and the more equally important they are, the greater the
complexity of its technological knowledge. Formulators will accumulate more knowledge about
the characteristic behaviors of each component, since they have observed how their properties
and effects on adhesive performance change when combined with many other types of
components. The knowledge a firm relies on to integrate product components is also likely to be
more complex. Formulators acquire heuristics for managing the performance tradeoffs
associated with using certain components together (e.g. how to offset the undesirable effect that
adding more filler has on open-time, so its desirable effect on ease of application can be
exploited). A firm that has to balance the effects of many components on each performance
criterion is likely to accumulate more of these heuristics.
INSERT SURVEY INSTRUMENT 1
The instrument asks formulators to rank order each of the component types they rely on
to manipulate the performance of their adhesives on six criteria: ease of application, open
time/set speed, adhesion, stability, strength, and aging. Although other criteria, such as
conductivity or color, are important in some applications, these six are the most basic and critical
dimensions of adhesive performance. Our interviews revealed that each of these criteria is
affected by a different set of product components, so asking about them individually, rather than
referring to product performance in general, enabled us to capture greater variation in complexity
and increase the reliability of our measures.
To quantify complexity, we computed a concentration ratio for the set of components
used to influence each of the six performance criteria. The formula for this ratio, which was
suggested by Dess and Beard (1984), is: [Σj (value of component j)²] / [Σj (value of component j)]².
Since the ratio increases as the number and equality of components decline, we
subtracted each ratio from one and took the average of these numbers to measure complexity.
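To illustrate the computation, the sketch below applies this formula to hypothetical importance weights; the component weights shown are invented, and in practice they would be derived from formulators’ rank orderings for each criterion.

```python
# A minimal sketch of the complexity computation described above.
# Importance values are hypothetical, standing in for the rank orderings
# that formulators provide for each performance criterion.

def concentration_ratio(values):
    """Dess & Beard (1984) style ratio: sum(v_j^2) / (sum(v_j))^2."""
    total = sum(values)
    return sum(v ** 2 for v in values) / (total ** 2)

# Importance weights of the components a firm uses for each performance criterion.
criteria = {
    "adhesion":            [5, 4, 2, 1],
    "stability":           [6, 1],
    "strength":            [4, 4, 3],
    "aging":               [3, 2, 2, 1, 1],
    "open time/set speed": [5, 2],
    "ease of application": [4, 3, 3, 2],
}

# Complexity rises with the number of components and the equality of their
# importance, so each ratio is subtracted from one before averaging.
complexity = sum(1 - concentration_ratio(v) for v in criteria.values()) / len(criteria)
print(round(complexity, 3))
```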
The third dimension of Simon’s (1962) definition of complexity is the degree of interaction
or interdependence among the elements of a system. In our case, interdependence would be the
extent to which the effect that one component has on product performance depends on its
interactions with one or more other components. This could be quantified by counting the number
of components that each interacts with, and weighting each interdependent component by the
extent to which its joint effects are more important than its individual effects on product
performance. Unfortunately, this dimension was difficult to make operational for our context.
Adhesive components almost always interact with one another to affect product
performance, although the degree of interdependence does vary. However, when we asked about
these relationships at Level 3, most formulators found it quite difficult to generalize about these
effects. It seems that the degree of interdependence is heavily influenced by the specific varieties
of components that are used together, as illustrated in Level 4. We did not attempt to develop an
instrument to capture interdependence because working at this level was not tractable in this
context. This is, however, a key aspect of complexity. To quantify interdependence, researchers
should identify the type of interdependence that a particular knowledge structure exhibits, as
the appropriate algorithm depends on the nature of the interdependence (Oeser & O'Brien, 1967;
Wood, 1986; Horwitch & Thietart, 1987; Frizelle & Woodcock, 1995; Zander & Kogut, 1995).
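Although we did not field such an instrument, the sketch below shows one simple way the counting-and-weighting idea described above could be operationalized; the interaction data and weights are entirely hypothetical and are included only to make the logic concrete.

```python
# A hypothetical sketch of the count-and-weight approach to interdependence
# described above. Each entry lists the components a component interacts with
# and a weight for how much its joint effects outweigh its individual effects.

interactions = {
    "thickener":  {"partners": ["surfactant", "filler"], "joint_weight": 0.7},
    "surfactant": {"partners": ["thickener"],            "joint_weight": 0.4},
    "filler":     {"partners": ["thickener", "polymer"], "joint_weight": 0.6},
    "polymer":    {"partners": ["filler"],               "joint_weight": 0.9},
}

# One possible index: the weighted average number of interaction partners.
interdependence = sum(
    len(c["partners"]) * c["joint_weight"] for c in interactions.values()
) / len(interactions)

print(round(interdependence, 2))
```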
We did not measure dynamic complexity because in our context, fluctuations in the
means used to affect product performance most often occur at the level of the individual product.
The dominant technology for an application tends to be the same for many decades, so the set of
components fluctuates little. Since changes in the adhesive formula are the basis for the
performance criteria we wished to track, it did not make sense for us to capture dynamic
complexity. Where it is relevant, existing formulas, such as those discussed by Wood (1986), can
be used to compute dynamic complexity based on changes in a knowledge structure.
Specificity. Although the specificity of a firm's technological knowledge is substantially
determined by its problem solving approach, it might also be influenced by a firm’s product
strategy. A firm may select component varieties and target applications that enable it to use very
similar adhesive formulas to serve different customer groups. For instance, a firm may rely on
component knowledge that tends to be more application specific (e.g. remembering what
varieties of components to use rather than which of their physical properties can be exploited),
but use the same component varieties to formulate adhesives for many applications. Therefore,
we selected items from Levels 2 and 5 to create two different instruments to measure specificity.
The first, resource specificity, measures the specificity of the firm’s problem solving approach.
The second captures design specificity, or the extent to which a firm's solutions to performance
problems (i.e. its product designs) are the same across applications.
INSERT SURVEY INSTRUMENTS 2 & 3
These instruments use a standard 7-point scale to capture the degree to which a firm’s
knowledge of each performance relationship is application specific. The anchors for these scales
were drawn from the Bass, Cascio and O'Connor (1974) study that identifies evenly spaced
anchors for adverbs and adjectives describing frequencies and amounts. To quantify specificity,
responses to these items, which capture the degree to which knowledge retains its value across
applications, were reverse coded and their mean computed.
Tacitness. The knowledge categories described by Levels 2 and 5 also enabled us to
capture variation in tacitness across firms. To quantify tacitness, we first reverse coded the items,
as they capture depth of causal knowledge, and then computed their mean.
INSERT SURVEY INSTRUMENT 4
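As a concrete illustration of the scoring used for both specificity and tacitness, the following sketch reverse codes hypothetical 7-point responses, so that higher scores indicate greater specificity or tacitness, and then averages them; the item labels and response values are invented.

```python
# A minimal sketch of the scoring described above: reverse code 7-point
# responses and average them. Item labels and responses are hypothetical.

def score(responses, scale_max=7):
    """Reverse code each item (scale_max + 1 - x) and return the mean."""
    reversed_items = [scale_max + 1 - r for r in responses.values()]
    return sum(reversed_items) / len(reversed_items)

specificity_items = {"item_1": 6, "item_2": 5, "item_3": 3, "item_4": 4}
tacitness_items   = {"item_1": 2, "item_2": 3, "item_3": 2, "item_4": 5}

print(round(score(specificity_items), 2), round(score(tacitness_items), 2))
```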
Validating the Survey Instruments
Content Validity. Content validity is the degree to which an instrument captures what it
is intended to measure, and is free of non-random measurement error (Carmines & Zeller, 1979;
Schwab, 1982). We relied on the theoretical literature to ensure that we had captured all relevant
facets of complexity, tacitness, and specificity. In addition, we consulted the literature on
technological innovation and the trade literature for the adhesives industry, and worked closely with
technology experts to ensure that we had identified the key categories of knowledge that underlie
a firm's ability to manipulate product performance.
We validated the content of our instruments through pre-tests with one of the largest and
oldest firms in the industry (nine expert formulators participated in this), and with two industry
and technology experts, each of whom has over 30 years of experience formulating adhesives.
After completing the survey, each pre-test respondent was interviewed to assess the content and
design of the survey. The results assured us that, for each technology, we had identified all of
the relevant components for the complexity measure, and that the six product performance criteria
are those most critical to customers. The categories of knowledge used to measure specificity and tacitness
appear to be comprehensive, and the survey uses terminology that should be familiar to any
individual who formulates adhesives.
Reliability. Measurement instruments usually rely on items that tap into the same
underlying factor in a repetitive, highly consistent manner (Schwab, 1982), and as such, should
correlate highly with the latent factor and each other. Our instruments are somewhat different in
that we did not necessarily expect that a particular respondent would rate each item highly since
they refer to different categories of knowledge. For example, if firms do not acquire knowledge
about the physical properties of components, their responses to these items may not correlate
highly with their responses to other items for the specificity instruments. We did expect the
individual items to correlate with one another to some degree, as a firm’s product strategy and
problem solving approach might influence many categories of knowledge in the same way.
We assessed the reliability of these scales using Cronbach’s alpha. The estimates of
internal consistency were as follows: .93 for tacitness, .75 for design specificity and .79 for
resource specificity, and .89 for complexity. As our instrument captures the range of
technological knowledge that formulators rely on to manipulate product performance, these
results suggest that formulators tend to rely on development approaches that shape the character
of many categories of technological knowledge in similar ways.
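For readers who wish to replicate this step, the sketch below computes Cronbach’s alpha from a respondent-by-item matrix; the data are simulated from a single underlying factor purely for illustration.

```python
# A minimal sketch of the Cronbach's alpha computation used to assess
# internal consistency. The response matrix is simulated, for illustration only.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
latent = rng.normal(size=(50, 1))                          # one underlying factor
responses = latent + rng.normal(scale=0.5, size=(50, 8))   # 50 respondents, 8 items
print(round(cronbach_alpha(responses), 2))
```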
Discriminant validity. Although conceptually distinct, these attributes of knowledge
might be correlated if they are influenced by common factors. For instance, some research
suggests that individuals acquire more tacit knowledge about complex tasks, because they do not
have time to induce the underlying structure of related events or objects (Reber, 1993). Authors
have also suggested that tacit knowledge tends to be more context specific (Polanyi, 1962; Arora
& Gambardella, 1994; Nonaka & Takeuchi, 1995). We relied on exploratory factor analysis to
assess how well our instruments capture unique constructs. For complexity, the items are the
complexity measures for each of the six performance criteria.
First, we included all of the items for each construct in the analysis and set the number of
factors to 4. We estimated the factor loadings using both an orthogonal and oblique rotation,
which produced the same results. The complexity items each load on their own factor, and the
tacitness items load together except for TCj, which is split across two factors. All of the items
for design specificity loaded highly on one factor, except for SP2a and SP2b, which ask about
the physical properties of components. The items for resource specificity were split, with the
first four and the last four loading together and on different factors. We expected that these items
might not load together, as the first four pertain to using performance knowledge to exploit
different components, while the last four ask about using this knowledge to serve different
applications. This pattern persisted when we specified 5 and 6 factors, and also when we dropped
SP1a, SP1b, SP2a, and SP2b, which we expected to behave differently from the other items.
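The general procedure can be sketched as follows; we assume here the open-source factor_analyzer package (any standard statistics package would serve equally well) and simulated data in place of the survey responses.

```python
# A sketch of the exploratory factor analysis described above, assuming the
# factor_analyzer package. The data are simulated; in the study, responses to
# the survey items would be used instead.

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.normal(size=(100, 20)),
                     columns=[f"item_{i}" for i in range(20)])

for rotation in ("varimax", "oblimin"):   # orthogonal and oblique rotations
    fa = FactorAnalyzer(n_factors=4, rotation=rotation)
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)
    print(rotation, "\n", loadings.round(2).head())
```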
These results suggest that the items largely capture distinct constructs. We also examined
whether the mean scores for the constructs were significantly correlated with one another, and
found the measures to be only partially correlated. Tacitness and complexity were correlated at
.29, p=.02; design specificity and tacitness were correlated at .38, p=.001; and resource
specificity and tacitness were correlated at .56, p<.0001. We suspect that this last correlation
reflects a dual influence of knowledge about the physical properties of adhesive components.
This knowledge is inherently less application specific and formulators require more causal
knowledge to exploit it, so its possession would reduce both tacitness and specificity.
Complexity and specificity were not significantly correlated.
Confirmatory analysis. After conducting exploratory analysis on our data, we tested
whether the items included in each instrument capture the same latent construct using confirmatory
factor analysis. LISREL 8.2 was used for this analysis. A separate measurement model was
estimated for each set of items that correspond to a particular attribute. The individual loadings
were highly significant for all of the items used to measure each attribute. However, the fit
statistics improved when SP1a, SP2h, and TCh, which each explained less variance in the construct
than the other items, were removed from the measurement models. The fit indices each exceed the
minimum acceptable levels.
For the tacitness measurement model, the goodness of fit index is .94, the adjusted
goodness of fit index is .89, the normed fit index is .95, the comparative fit index is 1.0 and the
standardized root mean square residual is .035. In the complexity measurement model, the
goodness of fit index is .98, the adjusted goodness of fit index is .94, the normed fit index is .98,
the comparative fit index is 1.0, and the standardized root mean square residual is .028. The design
specificity measurement model achieved a goodness of fit index of .91, an adjusted goodness of fit
index of .81, a normed fit index of .82, comparative fit index of .89, and a standardized root mean
square residual of .075. For the resource specificity measurement model, the goodness of fit index
is .97, the adjusted goodness of fit index is .93, the normed fit index is .96, the comparative fit
index is 1.0, and the standardized root mean square residual is .04. The minimum fit function chi-square statistic is non-significant for each model, as desired.
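A comparable confirmatory analysis can be sketched with open-source tools; the example below assumes the semopy package (our own analysis used LISREL 8.2) and a hypothetical one-factor measurement model estimated on simulated data.

```python
# A sketch of a confirmatory factor analysis for one attribute, assuming the
# semopy package. The indicators and data are hypothetical; in the study, the
# survey items for each attribute would define the measurement model.

import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))
data = pd.DataFrame(latent + rng.normal(scale=0.5, size=(100, 4)),
                    columns=["x1", "x2", "x3", "x4"])

# One-factor measurement model in lavaan-style syntax.
desc = "tacitness =~ x1 + x2 + x3 + x4"

model = semopy.Model(desc)
model.fit(data)
print(semopy.calc_stats(model))   # fit statistics such as GFI, AGFI, NFI, CFI
```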
Convergent Validity. Convergent validity is the extent to which different measurement
methods yield the same results. It is extremely difficult to measure knowledge and its attributes using
non-survey methods. However, alternative methods of developing the survey items can be utilized. In
particular, we sought to comprehensively capture the categories of knowledge that are relevant to
adhesive formulation. An alternative approach would be to focus on fewer categories of knowledge
and to construct items that are worded as distinctly as possible, while still reflecting the meaning of the
underlying construct. If both approaches are used for the same sample of firms, the variation in
measurement results can be compared.
Another approach to testing for convergent validity is to use multiple respondents.
Unfortunately, we were unable to collect multiple surveys from our sample of firms. Many of the
companies are small and rely on only one formulator to develop adhesives. Even in large firms,
individual formulators are responsible for developing products for particular customers and
applications, which made it difficult to obtain multiple respondents.
DISCUSSION
Economic activity has always involved the application of knowledge to create goods and
services that are more highly valued than the inputs a firm uses to produce them (Penrose, 1959).
However, factor market expansion, global competition, and the growth of service industries have
increased the value of knowledge, relative to tangible resources, as a source of sustainable
competitive advantage (Quinn, 1992; Drucker, 1995; Nonaka & Takeuchi, 1995; Teece, 1998b).
As a consequence, companies are investing millions of dollars to manage and value their
knowledge resources in the same way they treat other forms of capital. Researchers frequently
turn to resource-based theory and the dynamic capability/core competence perspectives for
insight to guide and evaluate these efforts. Unfortunately, there is relatively little empirical
evidence to validate these theories.
Measuring Knowledge Directly
We have suggested that it is important to accumulate such evidence and that replicable
measures of knowledge attributes, which can be developed using the approach outlined here, are
needed to do so. Although indirect evidence may provide insight into the veracity of alternative
theories linking knowledge to competitive advantage, direct tests have several advantages. First,
studies can determine whether variation in knowledge attributes corresponds to differences in the
magnitude or persistence of knowledge-based competitive advantage. It is possible that tacitness,
specificity, and complexity explain persistent differences among firms’ knowledge resources
without also explaining variation in their performance, as competitors can rely on substitute
knowledge to achieve comparable performance outcomes. Also, the relationship between these
attributes and persistence may not be linear, in which case managers would not wish to maximize
the height of these imitation barriers. Instead, they may have to balance multiple influences of
knowledge attributes (e.g. any opposing effects on imitation barriers versus a firm’s innovative
capacity) on competitive advantage.
Second, by measuring knowledge attributes directly, studies can disentangle the
predictions of competing theories. For example, both resource-based theory and transaction cost
economics have been used to predict the boundaries of the firm, and the distinction between
them turns on which attributes are responsible for a firm’s make or buy decisions. Authors
working in the resource-based tradition have argued that a firm’s boundaries are determined by
characteristics of organizations that reduce the costs of transferring tacit knowledge (Teece,
1982; Kogut & Zander, 1992, 1996; Conner & Prahalad, 1996). When the cost of transferring
knowledge through market mechanisms is high, a firm will exploit that knowledge internally.
On the other hand, the specificity of knowledge could lead to the same outcome, but for a
very different reason. If knowledge is well codified but highly firm or transaction specific,
transfer costs should be low, while the costs to negotiate and enforce a contract might be
substantial (Williamson, 1985). In order to determine whether the costs of transfer or the fear of
opportunism drives the scope of a firm’s activities, these attributes must be measured and linked
to a firm’s make or buy decisions. The difference in theoretical explanations has important
managerial implications. In the first case, if there are other benefits to outsourcing, managers
may take steps to reduce the costs of transferring tacit knowledge, such as by creating a forum
for sustained communication and joint problem solving. In the second case, managers may rely
on governance mechanisms, such as shared equity in a joint venture, to align incentives and
reduce the costs of enforcing a contractual agreement. Finally, the competing predictions
identified in Table 1 cannot be resolved unless the relationships between knowledge attributes
and performance outcomes are investigated directly.
Focusing on Performance Knowledge
In this paper, we presented a set of instruments, and an approach that can be used to
develop comparable instruments for other contexts, that may facilitate research on knowledge-based advantage. We suggested that performance knowledge is particularly well suited to
investigating these issues because it can be directly linked to a firm's competitive advantage and
profits. Performance outcomes that are related to technological innovation may be especially
advantageous for testing resource-based arguments because a firm's achievements in these areas
are determined almost entirely by its knowledge resources. Other criteria, such as manufacturing
productivity, can be substantially influenced by the tangible inputs a firm uses, which makes it
harder to isolate how knowledge contributes to performance advantages over time.
Studies at this level can also validate resource-based propositions by investigating which
imitation barriers protect knowledge-based advantage. A firm's product or service performance
might be difficult to replicate if the associated knowledge has attributes that make it causally
ambiguous or inaccessible to competitors. On the other hand, inimitability could reflect
population level characteristics, such as heterogeneous technological knowledge or tangible
resources that prevent competitors from exploiting a focal firm's discoveries. These different
explanations of sustained competitive advantage cannot be easily disentangled if researchers
only examine relationships between general knowledge stocks (e.g. experience or competence in
marketing or R&D) and financial performance.
Further, a firm may benefit from opposing knowledge attributes (tacit vs. explicit,
specific vs. general, complex vs. simple) at different levels of activity. General resource
knowledge may enable a firm to adopt new raw materials or product components ahead of
competitors. A firm may earn rents from early adoption by procuring tangible resources at lower
costs than late adopters and scope economies by leveraging them across product markets. To
sustain an advantage within individual product markets, a firm may cultivate application specific
knowledge of how to integrate individual components or develop specialized technical service
knowledge to support its complementary capabilities. Research on the performance benefits
firms obtained from knowledge at different levels of activity may provide especially valuable
insights into the organizational dynamics behind sustained competitive advantage. Studies could
also investigate the practices firms use to shape these attributes and to balance tensions among
knowledge management objectives that may exist across organizational levels.
Measurement of Knowledge Attributes
Our complexity instrument is unique in its focus on the relationships between product
components and the specific performance criteria they contribute to. The formula we used to
quantify complexity has been used at the industry level (Dess & Beard, 1984). The formula
captures two dimensions of complexity that were proposed by Simon (1962): the number of
distinct knowledge categories, and the equality of their importance for affecting product
performance. The equality of importance may be especially relevant for predicting the ease of
imitationxv (Szulanski & Winter, 1999).
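As an illustration of these two dimensions, an entropy-style index over normalized importance ratings rises both with the number of knowledge categories and with the evenness of their weights. The sketch below is illustrative only, not necessarily the exact formula used in the study, and the ratings are hypothetical.

```python
import math

def complexity_index(importance_ratings):
    """Entropy over importance shares: increases with the number of knowledge
    categories and with the equality of their importance for performance."""
    total = sum(importance_ratings)
    shares = [r / total for r in importance_ratings if r > 0]
    return -sum(p * math.log(p) for p in shares)

# Hypothetical importance ratings (e.g., on a 1-7 scale) for four knowledge categories.
even_weights = [5, 5, 5, 5]      # many equally important categories -> higher complexity
skewed_weights = [7, 1, 1, 1]    # one dominant category -> lower complexity

print(round(complexity_index(even_weights), 2))    # 1.39
print(round(complexity_index(skewed_weights), 2))  # 0.94
```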
Our use of knowledge structures to measure specificity departs from prior measures.
Specificity is most often measured in studies of transaction cost economics. A common
approach is to proxy specificity as the amount of effort or investment that is required to make a
component or execute some activity (Monteverde & Teece, 1982; Masten, Meehan, & Snyder,
1989, 1991; Dyer, 1993). The argument for this proxy is that activities requiring greater effort will
yield more idiosyncratic know-how. However, this measure is very similar to the use of
experience to measure tacit knowledge (e.g. Teece, 1977; Wagner & Sternberg, 1985; Wright,
1994), and knowledge that takes a great deal of effort to acquire initially is not necessarily
difficult to modify and use in new applications. Rather, this depends importantly upon how
knowledge is structured once it is acquired (Chi, et al. 1982; Holland, et. al., 1986; Garud &
Kumaraswamy, 1995). Measures of specificity that are based on a firm’s knowledge of
particular performance relationships can more directly capture the degree to which knowledge
loses value across applications.
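One scoring rule consistent with this idea, offered only as a sketch and not as the instrument used in this paper, is the proportion of a firm's knowledge categories whose importance is confined to a single application. The item structure, ratings, and cutoff below are hypothetical.

```python
def specificity_score(importance_by_application, threshold=4):
    """Fraction of knowledge categories rated important (>= threshold) in only one
    application -- a rough proxy for how much value the knowledge loses elsewhere."""
    specific = 0
    for ratings in importance_by_application:          # one list of ratings per category
        used_in = sum(1 for r in ratings if r >= threshold)
        if used_in == 1:
            specific += 1
    return specific / len(importance_by_application)

# Hypothetical ratings (1-7) of each knowledge category's importance in three applications.
categories = [
    [7, 2, 1],   # important only for application 1 -> specific
    [6, 6, 5],   # important across applications     -> general
    [2, 7, 3],   # important only for application 2  -> specific
]
print(round(specificity_score(categories), 2))  # 0.67
```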
Using knowledge structure as a basis for measuring tacitness moves beyond proxies that
have been used, and complements some recent measures. For example, Wagner and Sternberg
(1985) have focused on procedural knowledge that is generally not taught through formal
education and must be acquired through experiencexvi. They measure tacitness in terms of the
amount of such knowledge an individual has acquired. While this attention to experience-based
knowledge is consistent with research on implicit learning (Reber, 1989), their measures rely on
knowledge that experts can and do articulate, as it must be communicated to develop these
measurement instruments. Also, since this knowledge is associated with a particular profession or
occupation, it is unlikely to form the basis for a firm’s competitive advantage.
Our approach is closer to the way Zander and Kogut (1995) measured tacitness. These
authors drew upon Winter's (1987) dimensions of knowledge to develop measures of codifiability
and teachability, which are inversely related to tacitness. We also tried to capture the inverse of
tacitness, but asked somewhat different questions to tap into these dimensions. To capture the
personal nature of performance knowledge and the degree to which a formulator can verbalize
what she knows, we asked about the extent to which formulators can explain and predict why
exploiting components in a certain way affects adhesive performance.
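Because the items tap the inverse of tacitness, constructing a tacitness score involves reverse-scoring the responses before aggregating across items. A minimal sketch, with hypothetical seven-point responses:

```python
import numpy as np

def tacitness_scale(codifiability_responses, scale_max=7):
    """Reverse-score items that measure the inverse of tacitness (e.g., the extent
    to which a formulator can explain why a component choice affects performance),
    then average across items to form a tacitness score per respondent."""
    responses = np.asarray(codifiability_responses, dtype=float)
    reversed_items = (scale_max + 1) - responses       # 7 -> 1, 1 -> 7, etc.
    return reversed_items.mean(axis=1)

# Hypothetical responses: rows are respondents, columns are codifiability items (1-7).
answers = [
    [6, 7, 5, 6],   # highly codifiable knowledge -> low tacitness
    [2, 1, 3, 2],   # hard to articulate          -> high tacitness
]
print(tacitness_scale(answers))  # [2.0, 6.0]
```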
Adapting the Instruments to Other Knowledge Types
While we have tried to illustrate how understanding the context in which knowledge is
applied can be useful for resolving some of the tradeoffs associated with developing valid
measures of knowledge attributes, the instruments were developed using a general approach that
can be applied to other contexts. In particular, we have suggested that performance knowledge
can be decomposed hierarchically, and that the structure of this knowledge can be used to
measure its attributes. Categories of understanding can be identified according to distinct
functions (e.g. which may be embodied in components or carried out by individuals) that affect
the performance criteria of interest, as well as the mechanisms or methods used to coordinate or
integrate those functions. Each category can be further described using the properties that
Vincenti (1990) discusses. Table 3 offers some examples of how this approach might be applied
to other types of knowledge.
INSERT TABLE 3
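To make the hierarchical decomposition concrete, the brief sketch below represents performance knowledge as nested categories: a performance criterion at the top, the functions (or coordination mechanisms) that affect it beneath, and the properties used to describe each function at the bottom. The customer-service entries are hypothetical and only suggest how the logic of Table 3 might be encoded for another context.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class KnowledgeCategory:
    """A function (or coordination mechanism) affecting a performance criterion,
    described by the properties a firm attends to when applying its knowledge."""
    name: str
    properties: List[str] = field(default_factory=list)

@dataclass
class PerformanceKnowledge:
    """A performance criterion and the categories of understanding that drive it."""
    criterion: str
    categories: List[KnowledgeCategory] = field(default_factory=list)

# Hypothetical decomposition for a customer-service context.
service_speed = PerformanceKnowledge(
    criterion="order fulfillment speed",
    categories=[
        KnowledgeCategory("order tracking", ["data accuracy", "update frequency"]),
        KnowledgeCategory("scheduling", ["capacity limits", "priority rules"]),
    ],
)
print(len(service_speed.categories))  # 2
```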
CONCLUSION
This paper seeks to facilitate research on knowledge as a source of competitive advantage
by outlining an approach to measure important attributes of knowledge resources. A particular
challenge is that the content of a firm’s knowledge and its relationship to the firm’s competitive
advantage evolve over time. We describe how the complexity, specificity, and tacitness of a
firm’s knowledge can be measured by focusing on its structure, rather than its content. Certain
elements of knowledge structures tend to be stable even when the content of knowledge evolves.
In addition, we have suggested that researchers can link knowledge to competitive advantage by
focusing attention on a firm’s performance knowledge: the knowledge stocks that can be
directly linked to valuable performance criteria, such that a firm’s success in these areas
approximates its rate of return to unique knowledge.
REFERENCES
Adelson, B. 1985. Comparing natural and abstract categories: A case study from computer science.
Cognitive Science, 9 417-430.
Amit, R., & Schoemaker, P.J.H. 1993. Strategic assets and organizational rent. Strategic
Management Journal, 14 33-46.
Anderson, J.R. 1983. The Architecture of Cognition. Cambridge, MA: Harvard University Press.
Argyris, C. 1999. Tacit knowledge and management. In R.J. Sternberg & J.A. Horvath (Eds.) Tacit
Knowledge in Professional Practice. Lawrence Erlbaum Associates: Mahwah, New Jersey. 123-140.
Arora, A. & Gambardella, A. 1994. The changing technology of technological change: General and
abstract knowledge and the division of labor. Research Policy, 23 523-532.
Ashforth, B.E. & Mael, F.A. 1996. Organizational identity and strategy as a context for the
individual. Advances in Strategic Management, 13 19-64.
Barney, J. 1991. Firm resources and sustained competitive advantage. Journal of Management, 17
99-120.
Barney, J. 1992. Integrating organizational behavior and strategy formulation research: A
resource-based analysis. In P. Shrivastava, A. Huff, and J. Dutton (Eds.), Advances in Strategic
Management: 39-62. Vol 8: JAI Press: Greenwich, CT.
Barney, J. 1995. Looking inside for competitive advantage. Academy of Management Executive, 9
(4) 49-61.
Barney, J. 1997. Gaining and Sustaining Competitive Advantage. Addison-Wesley: Reading, MA.
Besanko, D., Dranove, D. & Shanley, M. 1996. The Economics of Strategy. John Wiley & Sons,
Inc.: New York, NY.
Blackler, F. 1995 Knowledge, knowledge work and organizations: An overview and
interpretation. Organization Studies, 16 (6) 1021-1041.
Bollen, K.A. 1984. Multiple indicators: Internal consistency or no necessary relationship? Quality
and Quantity, 18 377-385.
Bollen, K.A. & Lennox, R. 1991. Conventional wisdom on measurement: A structural equation
perspective. Psychological Bulletin, 110 305-314.
Bohn, R. 1994. Measuring and managing technological knowledge. Sloan Management Review, 61
61-73.
Cantor, N. & Mischel, W. 1979. Prototypes in person perception. In L. Berkowitz (Ed.) Advances in
Experimental Social Psychology. 12 3-52. Academic Press: New York, NY.
Chi, M.T.H., Glaser, R. & Rees, E. 1982. Expertise in problem solving. In R. Sternberg (ed)
Advances in the Psychology of Human Intelligence, 1 7-75. Erlbaum: Hillsdale, NJ.
Christensen, C.M. 1992. Exploring the limits of the technology S-curve. Production and
Operations Management, 1 334-366.
Clark, K. 1985. The interaction of design hierarchies and market concepts in technological
evolution. Research Policy, 14 235-251.
Clark, K. & Fujimoto, T. 1991. Product Development in the World Automobile Industry. Harvard
Business School Press: Boston, MA.
Coff, R. 1997. Human assets and management dilemmas: Coping with hazards on the road to the
resource-based theory. Academy of Management Review, 22 (2) 374-402.
Collis, D. 1994. How valuable are organizational capabilities? Strategic Management Journal, 15
143-152.
Conner, K. 1991. A historical comparison of resource-based theory and five schools of thought
within industrial organization economics: Do we have a new theory of the firm? Journal of
Management, 17 (1) 121-154.
Constant, E. 1984. Communities and hierarchies: Structure in the practice of science and
technology. In R. Laudan (ed.) The Nature of Technological Knowledge: Are Models of Scientific
Change Relevant? Reidel Publishing Company: Dordrecht Holland. 27-46.
Cyert, R.M. & March, J.G. 1963. A Behavioral Theory of the Firm. Prentice-Hall: Englewood
Cliffs, NJ.
Dess, G.G. & Beard, D.W. 1984. Dimensions of organizational task environments. Administrative
Science Quarterly, 29 52-73.
Dierickx, I. & Cool, K. 1989. Asset stock accumulation and sustainability of competitive
advantage. Management Science, 35 (12) 1504-1514.
Drucker, P. 1995. Managing in a Time of Change. Truman Talley Books: New York.
Foster, R.N. 1986. Innovation: The Attacker’s Advantage. Summit Books: New York, NY.
Frederiksen, N. 1966. Validation of a simulation technique. Organizational Behavior and Human
Performance, 1 87-109.
Frizelle, G. & Woodcock, E. 1995. Measuring complexity as an aid to developing operational
strategy. International Journal of Operations and Production Management. 15 (5) 26-39.
Garud, R. & Kumaraswamy, A. 1995. Technological and organizational designs for realizing
economies of substitution. Strategic Management Journal, 16 (Summer) 93-109.
Garud, R. & Rappa, M. 1994. A socio-cognitive model of technology evolution: The case of
cochlear implants. Organization Science, 5 (3) 344-362.
Garud, R. & Nayyar, P. 1994. Transformative capacity: Continual structuring by intertemporal
technology transfer. Strategic Management Journal, 15 365-385.
Garud, R. 1997. On the distinction between know-how, know-why, and know-what. Advances in
Strategic Management, 14 81-101.
Ghemawat, P. 1991. Commitment. The Free Press: New York.
Ghemawat, P. 1998. Competition and business strategy in historical perspective, Teaching Note
9-798-010, Harvard Business School.
Gibbons, M. & Johnston, R. 1974. The roles of science in technological innovation. Research Policy, 3
220-243.
Grant, R. 1991. The resource-based theory of competitive advantage: Implications for strategy
formulation. California Management Review, 33 (3) 114-136.
Grant, R.M. 1996. Prospering in dynamically-competitive environments: organizational capability
as knowledge integration. Organization Science, 7 (4) 375-387.
Grant, R.M. 1996. Toward a knowledge-based theory of the firm. Strategic Management
Journal, 17 (Winter Special Issue) 109-122.
Grant, R.M. 1997. Contemporary Strategy Analysis. 3rd edition. Cambridge, MA: Blackwell
Business.
Gutting, G. 1984. Paradigms, revolutions, and technology. In R. Laudan (ed.) The Nature of
Technological Knowledge: Are Models of Scientific Change Relevant? Reidel Publishing Company:
Dordrecht Holland. 47-65.
Hansen, M.T., Nohria, N. & Tierny, T. 1999. What’s your strategy for managing knowledge?
Harvard Business Review, March-April 106-116.
Henderson, R. 1994. The evolution of integrative capability: Innovation in cardiovascular drug
discovery. Industrial and Corporate Change, 3 (3) 607-630.
Henderson, R.M. & Clark, K.B. 1990. Architectural innovation: The reconfiguration of existing product
technology and the failure of established firms. Administrative Science Quarterly, 35 9-30.
Henderson, R. & Cockburn, I. 1994. Measuring competence? Exploring firm effects in pharmaceutical
research. Strategic Management Journal, 15 63-84.
Holland, J.H., Holyoak, K.J., Nisbett, R.E., & Thagard, P.R. 1986. Induction: Processes of
Inference, Learning, and Discovery. The MIT Press: Cambridge, MA.
Itami, H. & Roehl, T.W. 1987. Mobilizing Invisible Assets. Harvard University Press: Cambridge,
MA.
Kempton, W. 1981. The Folk Classification of Ceramics: A Study of Cognitive Prototypes.
Academic Press: New York.
Klein, B., Crawford, R.G, & Alchian, A.A. 1978. Vertical integration, appropriable rents, and the
competitive contracting process. Journal of Law and Economics, 21 297-326.
Kogut, B. & Zander, U. 1992. Knowledge of the firm, combinative capabilities, and the replication
of technology. Organization Science, 3 (3) 383-397.
Laudan, R. 1984. Cognitive change in technology and science. In R. Laudan (ed.) The Nature of
Technological Knowledge: Are Models of Scientific Change Relevant. Reidel Publishing Company:
Dordrecht Holland. 83-104.
Leonard-Barton, D. 1992. Core capabilities and core rigidities: A paradox in managing new
product development. Strategic Management Journal, 13 (Summer Special Issue) 111-125.
Leonard-Barton, D. 1995. Wellsprings of Knowledge: Building and Sustaining Sources of
Innovation. Harvard Business School Press: Boston, MA.
Levitt, B. & March, J. 1988. Organizational learning. Annual Review of Sociology, 14 319-340.
MacCallum, R.C. & Brown, M.W. 1993. The use of causal indicators in covariance structure
models: Some practical issues. Psychological Bulletin, 114 533-541.
MacMillan, I. McCaffery, M. & Van Wijk, G. 1985. Competitors’ responses to easily imitated
new products - exploring commercial banking product introductions. Strategic Management
Journal, 6 75-86.
Mahoney, J. & Pandian, R.1992. The resource-based view within the conversation of strategic
management. Strategic Management Journal, 13 363-380.
Malt, B.C. & Smith, E.E. 1984. Correlated properties in natural categories. Journal of Verbal
Learning and Verbal Behavior, 23 250-269.
March, J.G. 1991. Exploration and exploitation in organizational learning. Organization Science, 2
71-87.
Markides, C. & Williamson, P. 1994. Related diversification, core competences, and corporate
performance. Strategic Management Journal, 15 149-165.
Marples, D.L., 1961. The decisions of engineering design. IEEE Transactions on Engineering
Management, EM-8 55-71.
Masten, S.E., Meehan, J.W., & Snyder, E.A. 1989. Vertical integration in the U.S. auto industry.
Journal of Economic Behavior and Organization, 12 265-273.
Masten, S.E., Meehan, J.W., & Snyder, E.A. 1991. The costs of organization. Journal of Law,
Economics, and Organization, 7 (1) 1-25.
Miller, D. & Shamsie, J. 1996. The resource-based view of the firm in two environments: The
Hollywood film studios from 1936 to 1965. Academy of Management Journal, 39 (3) 519-543.
Milgrom, P. & Roberts, J. 1992. Economics, Organization, and Management. Prentice-Hall:
Englewood Cliffs, NJ.
Montgomery, C. & Wernerfelt, B. 1988. Diversification, Ricardian rents, and Tobin's q. RAND
Journal of Economics, 19 (4) 623-632.
Monteverde, K. & Teece, D.J. 1982. Supplier switching costs and vertical integration in the
automobile industry. Bell Journal of Economics, 13 206-213.
Nelson, R.R. & Winter, S.G. 1982. An Evolutionary Theory of Economic Change. The Belknap
Press of Harvard University: Cambridge, MA.
Nonaka, I. 1991. The knowledge-creating company. Harvard Business Review, November-December 96-104.
Nonaka, I. & Takeuchi, H. 1995. The Knowledge-Creating Company: How Japanese Companies
Create the Dynamics of Innovation. Oxford University Press: New York, NY.
Penrose, E. 1980. The Theory of the Growth of the Firm. Basil Blackwell: New York, NY.
Peteraf, M. A. 1993. The cornerstones of competitive advantage: A resource-based view. Strategic
Management Journal, 14 179-191.
Polanyi, M. (1962). Personal Knowledge: Toward A Post-Critical Philosophy. Harper
Torchbooks: New York.
Polanyi, M. (1976). Tacit knowing. In M. Marx & F. Goodson (Eds) Theories in Contemporary
Psychology. Macmillan: New York. 330-344.
Porac, J.F. & Thomas, H. 1990. Taxonomic mental models in competitor definition. Academy of
Management Review, 15 (2) 224-240.
Prahalad, C.K. & Hamel, G. 1994. Competing for the Future. Harvard Business School Press:
Cambridge, MA.
Quinn, J.B. 1992. Intelligent Enterprise. The Free Press: New York.
Reber, A.S. 1989. Implicit learning and tacit knowledge. Journal of Experimental Psychology,
General, 118 (3) 219-235.
Reber, A.S. 1993. Implicit Learning and Tacit Knowledge: An Essay on the Cognitive Unconscious.
Oxford University Press: New York.
Reed, R., & DeFillippi, R. J. 1990. Causal ambiguity, barriers to imitation, and sustainable
competitive advantage. Academy of Management Review, 15 (1) 88-102.
Reger, R.K., Gustafson, L.T., DeMarie, S.M. & Mullane, J.V. 1994. Reframing the
organization: Why implementing total quality is easier said than done. Academy of Management
Review, 19 565-584.
Rosch, E. 1978. Principles of categorization. In E.Rosch & B. Lloyds (Eds.) Cognition and
Categorization. Erlbaum: Hillsdale, NJ. 22-27.
Rosch, E. & Mervis C. 1981. Family resemblance: Studies in the internal structure of categories.
Cognitive Psychology. 7 573-605.
Rosenberg, N. 1982. Inside the Black Box: Technology and Economics. Cambridge University
Press: New York.
Rosenberg, N. 1994. Exploring the Black Box. Cambridge University Press: Cambridge, MA.
Rumelt, R. P. 1995. Inertia and transformation. In C. Montgomery (Ed.) Resource-based and
Evolutionary Theories of the Firm: Towards a Synthesis. Kluwer Academic Publishers: Boston,
MA. 101-132.
Sanchez, R. 1995. Strategic flexibility in product competition. Strategic Management Journal, 16
135-159.
Sanchez, R. & Mahoney, J.T. 1996. Modularity, flexibility and knowledge management in product
and organization design. Strategic Management Journal, 17 (Winter Special Issue) 63-76.
Sanderson, S. & Uzumeri, M. 1995. Managing product families: The case of the Sony Walkman.
Research Policy, 24 761-782.
Sarkis, J. 1997. An empirical analysis of productivity and complexity for flexible manufacturing
systems. International Journal of Production Economics. 48 (1) 39-48.
Schwab, D.P. 1980. Construct validity in organizational behavior. In Barry M. Staw & Larry
Cummings (Eds) Research in Organizational Behavior, 2 4-43. JAI Press: Greenwich, CT.
Skeist, I. (Ed.) 1992. Handbook of Adhesives. Van Nostrand Reinhold: New York, NY.
Sitkin, S.B., Sutcliffe, K.M., & Schroeder, R.G. 1994. Distinguishing control from learning in total
quality management: A contingency perspective. Academy of Management Review, 19 (3) 537-564.
Simon, H.A. 1962. The architecture of complexity. Proceedings of the American Philosophical
Society, 106 467-482.
Singley, M.K. & Anderson, J.R. 1989. The Transfer of Cognitive Skill. Harvard Press: Cambridge,
MA.
Spender, J-C. 1996. Making knowledge the basis of a dynamic theory of the firm. Strategic
Management Journal, 17 45-62.
Spender, J-C. & Grant, R.M. 1996. Knowledge and the firm: Overview. Strategic Management
Journal, 17 5-9.
Smith, E.E. & Medin, D.L. 1981. Categories and Concepts. Harvard University Press: Cambridge,
MA.
Szulanski, G. 1996. Exploring internal stickiness: Impediments to the transfer of best practice within
the firm. Strategic Management Journal, 17 (Winter Special Issue) 27-43.
Tan, H-T., & Libby, R. 1997. Tacit managerial versus technical knowledge as determinants of audit
expertise in the field. Journal of Accounting Research, 35 (1) 97-113.
Teece, D.J. 1977. Technology transfer by multinational firms: the resource cost of transferring
technological know-how. The Economic Journal, 87 242-261.
Teece, D.J., Pisano, G. & A. Shuen. 1997. Dynamic capabilities and strategic management.
Strategic Management Journal, 18 (7) 509-533.
Teece, D.J. 1998a. Capturing value from knowledge assets: The new economy, markets for
know-how, and intangible assets. California Management Review, 40 (3) 55-79.
Teece, D.J. 1998b. Research directions for knowledge management. California Management
Review, 40 (3) 289-292.
Thornton, G.C. & Byham, W.C. 1982. Assessment Centers and Managerial Performance. New
York, NY: Academic press.
Ulrich, K.T. 1995. The role of product architecture in the manufacturing firm. Research Policy, 24
419-440.
Van Krogh, G., Roos, J. & Slocum, K. 1994. An essay on corporate epistemology. Strategic
Management Journal, 15 53-71.
Vincenti, W.G. 1990. What Engineers Know and How They Know It. Johns Hopkins University
Press: Baltimore, MD.
Wagner, R.K. 1987. Tacit knowledge in everyday intelligent behavior. Journal of Personality and
Social Psychology, June 1236-1247.
Wagner, R.K. & Sternberg, R.J. 1985. Practical intelligence in real-world pursuits: The role of
tacit knowledge. Journal of Personality and Social Psychology, August 436-458.
Wagner, R.K. & Sternberg, R.J. 1987. Tacit knowledge in managerial success. Journal of Business
and Psychology, Summer 301-312.
Walsh, J. 1995. Managerial and organizational cognition: Notes from a trip down memory lane.
Organization Science, 6 (3) 280-321.
Wernerfelt, B. 1989. From critical resources to corporate strategy. Journal of General
Management, 14 (3) 4-12.
West, G.P. III, & Dale, G. 1997. Communicated knowledge as a learning foundation,
International Journal of Organizational Analysis, 5 (1) 25-58.
Williamson, O. E. 1975. Markets and Hierarchies: Analysis and Antitrust Implications. Free Press:
New York, NY.
Winter, S. 1987. Knowledge and competence as strategic assets. In David J. Teece (Ed.), The
competitive challenge: Strategies for Industrial Innovation and Renewal. Basil Blackwell: New
York, NY.
Winter, S.G. 1994 Organizing for continuous improvement: Evolutionary theory meets the
quality revolution. In J.A.C. Baum & J. Singh (Eds) The Evolutionary Dynamics of
Organizations. Oxford University Press: Cambridge. 90-108.
Szulanski, G. & Winter, S.G. 1999. Knowledge transfer within the firm: A replication perspective
on internal stickiness. Paper presented at the national meetings of the Academy of Management
in Chicago, Illinois.
Wood, R.E. 1986. Task complexity: Definition of the construct. Organizational Behavior and
Human Decision Processes, 37 60-82.
Wright, R. 1994. The effects of tacitness and tangibility on the diffusion of knowledge-based
resources. Academy of Management Best Paper Proceedings. Dallas, TX.
Zander, U. & Kogut, B. 1995. Knowledge and the speed of the transfer and imitation of
organizational capabilities: An empirical test. Organization Science, 6 76-92.
i
In this situation a firm can earn superior returns, unless it chooses not to capitalize on its
advantage for strategic reasons (e.g. it may price low to deter entry even if its products are of
higher quality). It is not necessary to deal with these strategic contingencies if we test resource-based theory by focusing on product, service, or process performance outcomes, rather than, or
in addition to, financial ones.
ii
The relevant indicator of persistence is superior performance, rather than evidence that
competitors have imitated a firm’s capabilities, because comparable performance erodes a firm’s
profits even if it is achieved with different resources and capabilities. Moreover, firms seek to
replicate the performance of more successful competitors – they seldom attempt to duplicate
their capabilities in toto or to cultivate identical knowledge (Nelson & Winter, 1982). Instead,
once competitors recognize that a new level of performance is possible, they often try to match
that benchmark by relying on their own unique knowledge.
When knowledge is the primary input to the achievement of a performance goal,
productivity can be measured in terms of the input man-hours. Therefore, persistence can be
measured as the difference between the amount of time rivals require to match a firm’s
performance and the firm’s own development time. Salary or wage differences that are
associated with higher quality personnel should also be accounted for, as more expensive inputs
will reduce a firm’s rate of return to a knowledge stock.
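Read literally, this suggests a simple calculation. The sketch below uses hypothetical figures and a simple wage-rate weighting to show one way the adjustment for personnel costs might enter; it illustrates the logic only, not a prescribed measure.

```python
def persistence_advantage(firm_dev_hours, rival_match_hours,
                          firm_wage_rate=1.0, rival_wage_rate=1.0):
    """Persistence of a knowledge-based advantage in cost-adjusted man-hours:
    the extra effort rivals need to match a firm's performance, with each side's
    man-hours weighted by its personnel cost."""
    firm_cost = firm_dev_hours * firm_wage_rate
    rival_cost = rival_match_hours * rival_wage_rate
    return rival_cost - firm_cost

# Hypothetical figures: the firm took 2,000 man-hours, rivals need 3,500, but the
# firm employs more expensive (higher-quality) personnel.
print(persistence_advantage(2000, 3500, firm_wage_rate=1.2, rival_wage_rate=1.0))  # 1100.0
```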
iii
Complexity refers to the difficulty of comprehending how a particular outcome is
produced or objective achieved. One of the most widely accepted definitions of complexity is
Simon’s (1962), who defines a complex system as one that consists of many distinct and
interacting elements, which have equally important effects on the outcomes produced by the
system. The tacitness of knowledge is the degree to which an individual is unable to articulate
what he or she knows about how to achieve some objective or carry out a particular task
(Polanyi, 1962). Specificity is the proportion of an asset’s value, such as a knowledge stock, that is
lost when the asset is put to an alternative use (Klein, Crawford, Alchian, 1979; Williamson, 1985;
Milgrom & Roberts, 1992).
iv
Cognitive categories are groups of objects, events, or phenomena that are perceived to have
similar properties (Rosch, 1978; Mervis & Rosch, 1981). When an individual repeatedly
encounters an object, she may notice a correlation between the object and properties that are salient
to her ability to achieve particular goals (Holland, Holyoak, Nisbett & Thagard, 1986). Categories
form when a person also notices a correlation between those properties and characteristics of other
objects the person has encountered (Smith & Medin, 1981; Malt & Smith, 1984). Like objects are
then named, often with labels learned through formal education, the business press, professional
societies, or informal conversation, such that categories may be shared by communities of
individuals and organizations (Cantor & Mischel, 1979; Porac & Thomas, 1990).
Categories and their properties are similar to the structural elements of tasks, ‘acts’ and
‘information cues’, that Wood (1986) uses to construct measures of task complexity. In
particular, he suggests that tasks can be defined along three dimensions. First, the ‘product’ of
the task must be identified. This is both the object (e.g. an assembled radio, completed financial
statement) and key attributes of that object (e.g. quality, cost, quantity, timeliness). The rationale
for including attributes is that a set of different behaviors or knowledge may be required to
produce each attribute. Analogously, firms require different categories of understanding to
achieve unique performance objectives. Second, the acts necessary to produce those products
are delineated, where an ‘act’ is a pattern of behaviors that have some common identifiable
purpose or objective. This is comparable to distinguishing among product components
according to their function.
The third element, listing ‘information cues’ that are used to execute those acts, is
equivalent to the properties of each knowledge category (e.g. the physical properties that product
developers come to associate with individual components or materials). Cues are pieces of
information about the attributes of stimulus objects upon which an individual can base judgments
during performance of the task (Wood, 1986). For example, the cues that an air traffic controller
may use to select a hold pattern for an airplane (an act) include wind rate and direction, weather,
visibility, and expected incoming planes.
v
For example, engineers accumulate knowledge around a product’s components and critical
design choices that influence the product’s architecture (Vincenti, 1990). As such, ‘components’
and ‘architecture’ are abstract categories for classifying a firm’s product performance knowledge,
which may exist indefinitely. Components may be further distinguished by their function, i.e. the
role they play in the product (Ulrich, 1995). The decomposition of a product into functions may be
unique to a firm, or it may be standard within an industry, according to the underlying economics.
The set of functions that an industry or firm uses to develop a product may persist for many
decades, even if the members of these functional categories - the actual physical objects - fluctuate
frequently. In the same way, the functional tasks a firm relies on to achieve its customer service
goals (e.g. tracking product quality) may change little, even though the techniques employees use
to gather this information continuously evolve.
vi
vii
Similarly, research on total quality management may help researchers to identify
principles that can be used to categorize knowledge about key activities firms need to execute or
the types of problems they must solve in order to achieve their quality goals.
These are analogous to the ‘information cues’ in problem solving tasks that Wood (1986)
discusses.
viii
As a general example, most people would classify feathered, flying animals as ‘birds’ but
the subcategories they possess, and their properties, will differ according to the region of the
world they live in. Within a region, a veterinarian that rehabilitates injured birds is likely to
attend to, and store in memory, different properties of birds than the occasional bird watcher.
Analogously, the knowledge firms acquire about common technologies and raw materials differs
according to the particular supplier they are procured from and the purposes for which a firm has
used them.
ix
x
Lack of causal knowledge makes it harder for a formulator to communicate how another
individual could achieve the same performance outcomes. Since the formulator’s own
performance knowledge is less precise, she may forget or be unable to verbalize all the details
that need to be present for her solution to be effective. The level of performance that a particular
adhesive provides may depend upon certain characteristics of the substrate to be bonded, the
conditions under which an adhesive is applied or used, and/or specific properties of the
components that are used. If any of these differ, adjustments to the formula may be required, but
without causal knowledge the formulator is less apt to recognize and communicate these
contingencies. On the other hand, if a formulator can articulate the principles behind product
performance, these contingencies may be anticipated even if they are not explicitly
communicated.
The tacitness of a formulator’s knowledge may also reflect the amount of experience she
has working with a technology, certain components, and the application environment. Over time,
a formulator may come to recognize consistent patterns in the relationships between performance
outcomes and the use of particular component types or varieties. Repeated experience enables an
individual to develop theories or hypotheses about the causal mechanisms that explain these
relationships. Even if it is learned implicitly, rather than through explicit hypothesis testing,
causal knowledge enables prediction (Reber, 1993). The better able a formulator is to predict
how to exploit certain components or their physical properties, the less she needs to rely on trial
and error learning to discover suitable formulas. As a result, more of each new product’s
performance is based on explicit causal understanding than on recently acquired tacit knowledge.
xi
xii
For example, Thompson (1967) discussed three types of interdependence: sequential,
pooled, and reciprocal. Pooled activities are only interdependent in the sense that the outputs of
those tasks must function together; however, they are carried out entirely independently of one
another. Sequential interdependence arises when completing one task requires the prior
completion of other tasks. For example, in order to construct the engine, the carburetor must be
completed first. Reciprocal interdependence occurs when two or more distinct parts of a system
send and receive inputs and outputs on an ongoing basis, or simultaneously affect some outcome,
such as through joint action or problem solving.
xiii
In addition to our pre-test, several of the R&D managers we interviewed by phone
remarked that the questions were thorough and well thought out. In fact, this was one reason why
some companies declined to participate - they felt that responding to the survey would reveal too
much of what they know about formulating adhesives. This additional feedback made us
comfortable that we had effectively tapped into the technological knowledge we wished to
measure.
xiv
On the other hand, these practices make it possible to capture firm level knowledge using
a key informant. A firm’s knowledge of how to manipulate product performance resides with its
experienced formulators, and labor mobility is relatively low in this industry. Further, firms
often encourage experienced formulators to apprentice new employees in order to pass on what
they have learned about developing adhesives, so characteristic approaches to formulation tend
to persist within firms.
xv
Zander and Kogut (1995) used a similar measure to study the complexity of technological
knowledge for processes. They distinguished among manufacturing processes by their function
(e.g. assembly, changing the shapes of materials, etc.) and asked respondents to rate the
importance of each for making a product. The mean or sum of these responses captures both
number and equality.
xvi
For instance, Wagner and Sternberg (1985) developed a measure of tacit managerial
knowledge that consists of three factors: knowledge of how to manage oneself, others, and one's
career. They measure tacit knowledge of these three types using scenarios to elicit responses to
typical work situations. After reading a scenario, individuals are asked to rate a range of
responses, which reflect heuristics that have been previously identified by experienced
individuals. The amount of tacit managerial knowledge that an individual possesses is then
measured as the degree of similarity between their responses to the scenarios and the experts'
responses.
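One common way to operationalize such similarity scoring, shown below with hypothetical ratings, is the (negative) distance between a respondent's ratings of the response options and the experts' mean ratings; this illustrates the general logic rather than Wagner and Sternberg's exact scoring procedure.

```python
import numpy as np

def tacit_knowledge_score(respondent_ratings, expert_mean_ratings):
    """Score tacit knowledge as similarity to an expert profile: the negative
    Euclidean distance between a respondent's ratings of scenario responses and
    the experts' mean ratings (values closer to zero indicate more tacit knowledge)."""
    r = np.asarray(respondent_ratings, dtype=float)
    e = np.asarray(expert_mean_ratings, dtype=float)
    return -np.linalg.norm(r - e)

expert_profile = [6.5, 2.0, 4.5, 1.5]          # hypothetical expert mean ratings
print(round(tacit_knowledge_score([6, 2, 5, 2], expert_profile), 2))  # -0.87 (close to experts)
print(round(tacit_knowledge_score([3, 6, 2, 6], expert_profile), 2))  # -7.4  (far from experts)
```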