Knowledge driven capitalization of knowledge

 Published in:
R. Viale and H. Etzkowitz (eds.), The Capitalization of Knowledge: A Triple Helix of
University-Industry-Government, Cheltenham: Edward Elgar, 2010
Riccardo Viale
INTRODUCTION
Capitalization of knowledge happens when knowledge generates an economic added value.
The generation of economic value can be said to be direct when one sells the knowledge for
some financial, material or behavioral good. The generation of economic value is
considered indirect when it allows the production of some material or service goods that are
sold on the market. The direct mode comprises the sale of personal know-how, such as in
the case of a plumber or of a sports instructor. It also comprises the sale of intellectual
property as in the case of patents, copyrights or teaching. The indirect mode comprises the
ways in which organizational, declarative and procedural knowledge is embodied in
goods or services. The economic return in both cases can be financial (for example cash),
material (for example the exchange of consumer goods) or behavioral (for example the
exchange of personal services). In ancient times, the direct and indirect capitalization of
knowledge was based mainly on procedural knowledge. Artisans, craftsmen, doctors, and
engineers sold their know-how in direct or indirect ways within a market or outside of it.
Up to the first industrial revolution, the knowledge that could be capitalized remained
mainly procedural. Few inventors sold their designs and blueprints for the construction of military or civil machines and mechanisms. There were some exceptions, as
in the case of Leonardo da Vinci and several of his inventions, but, since technological
knowledge remained essentially tacit, it drove a capitalization based primarily on the direct
collaboration and involvement of the inventors in the construction of machines and in the
direct training of apprentices.
In the time between the first and second industrial revolutions, there was a
progressive change in the type of knowledge that could be capitalized. The law of
diminishing returns, as it manifested itself in the economic exploitation of invention,
pushed companies and inventors, lacking a scientific base, to look for the causal
explanation of innovations (Mokyr, 2002a; Mokyr, 2002b). For example, Andrew
Carnegie, Eastman Kodak, DuPont, AT&T, General Electric, Standard Oil, Alcoa and
many others understood the importance of scientific research for innovation (Rosenberg
and Mowery, 1998). Moreover, the revolution in organic chemistry in Germany shifted
industrial attention towards the fertility of collaboration between universities and
companies. Searching for a scientific base for inventions meant developing specific parts of
declarative knowledge. Depending on the different disciplines, knowledge could be more or
less formalized and could contain more or less tacit features. In any case, from the second
industrial revolution onwards, the capitalization of technological knowledge began to
change: a growing part of knowledge became protected by intellectual property rights
(IPR); patents and copyrights were sold to companies; institutional links between academic
and industrial laboratories grew; companies began to invest in research and development
laboratories; universities amplified the range and share of applied and technological
disciplines and courses; and governments enacted laws to protect academic IPR and
introduced incentives for academy-industry collaboration. New institutions and new
organizations were founded with the aim of strengthening the capitalization of knowledge.
The purpose of this paper is to show that one of the important determinants of the
new forms of the capitalization of knowledge is its epistemological structure and cognitive
processing. The thesis of this paper is that the complexity of the declarative part of
knowledge and the three tacit dimensions of knowledge – competence, background, and
cognitive rules (Pozzali and Viale, 2007) – have a great impact on research behaviors and,
consequently, on the ways of capitalizing knowledge. This behavioral impact drives
academy-industry relations towards greater face-to-face interactions and has led to the
development of a new academic role, that of the Janus scientist. The need for stronger and
more extensive face-to-face interaction is manifested through the phenomenon of the close
proximity between universities and companies and through the creation of hybrid
organizations of research and development (R&D). The emergence of the new academic
role of Janus scientist, one who is able to interface with both the academic and industrial
dimensions of research, reveals itself through the introduction of new institutional rules and
incentives quite different from traditional academic ones.
EPISTEMOLOGICAL AND COGNITIVE CONSTRAINTS IN KNOWLEDGE
TRANSFER
Scientific knowledge varies across fields and disciplines. The use of formal vs. natural language, conceptual complexity vs. simplicity, and the explicit vs. tacit features of knowledge differ greatly from theoretical physics to entomology (to remain within the natural sciences). Different epistemological structures depend mainly on the ontology of the relative empirical domain. For example, in particle physics the ontology of particles allows the use of numbers and of natural laws written in mathematical language. By contrast, in entomology the ontology of insects allows only the establishment of empirical generalizations
expressed in natural language. Different epistemological structures mean different ways of
thinking, reasoning and problem solving. And this cognitive dimension influences
behavioral and organizational reality. To better illustrate the role of epistemological
structure, I will introduce several elementary epistemological concepts.
Knowledge can be subdivided into the categories ontic and deontic. Ontic knowledge concerns how the world is, whereas deontic knowledge is focused on how it can
be changed. These two forms of knowledge can be represented according to two main
modes: the analytical mode deals with the linguistic forms that we use to express
knowledge; the cognitive mode deals with the psychological ways of representing and
processing knowledge. Two main epistemological features of knowledge influence the
organizational means of knowledge generation and transfer. The first is the degree of generality. The more general the knowledge is, the easier it is to transfer and apply it to subjects different from those envisioned by the inventor. The second is complexity. The more conceptually and computationally complex the knowledge is, the greater the concomitant organizational division of labour in problem solving and reasoning.
1) Analytical Mode of Ontic Knowledge. Analytical ontic knowledge is divided into two
main types, descriptive and explanatory.
Descriptive
The first type comprises all the assertions describing a particular event according to given
space-time coordinates. These assertions have many names, such as elementary
propositions or base assertions. They correspond to the perceptual experience of an
empirical event by a human epistemic agent at a given time. A descriptive assertion has a
predicative field limited to the perceived event at a given time. The event is exceptional
because its time-space coordinates are unique and not reproducible. Moreover, this
uniqueness is made stronger by the irreproducibility of the perception of the agent. Even if
the same event were reproducible, the perception of it would be different because of the
continuous changes in perceptual ability. Perception is related to cortical top-down
influences corresponding to schemes, expectations, frames and other conceptual structures
that change constantly. The perception of an event causes a change in the conceptual
categorization to which the event belongs. This change can modify the perception of a
similar event that happens afterwards. Therefore, a singular descriptive assertion can
correspond only to the time-space particular perception of a given epistemic agent and
cannot have any general meaning. For example, the observational data in an experiment can
be transcribed in a laboratory diary. An observational assertion has only an historical
meaning because it cannot be generalized. Therefore, a process technique described by a
succession of descriptive assertions made by an epistemic agent cannot be transferred and
replicated precisely by a different agent. It will lose part of its meaning and, consequently,
replication will be difficult. Inventions, before and during the first industrial revolution,
were mainly represented as a set of idiosyncratic descriptive assertions made by the
inventor. Understanding the assertions and replicating the data were only possible for the
inventor. Therefore, technology transfer was quite impossible at that time. Moreover, the
predicative field of an invention was narrow and fixed. It applied only to the events
described in the assertions. There was no possibility of enlarging the semantic meaning of
the assertions, that is, of enlarging the field of the application of the invention in order to
produce further innovations. As a result, the law of diminishing returns manifested itself
very quickly and effectively. In little time, the economic exploitation of the invention
reached its acme, and diminishing returns followed. Only knowledge that was based
not on descriptive assertions but on explanatory ones could provide the opportunity to
enlarge and expand an invention, to generate corollary innovations and, thus, to stop the
law of diminishing returns. This economic motive, among others, pushed inventors and,
mainly, entrepreneurs to look for the explanatory basis of an invention, that is, to pursue
collaborations with university labs and to establish internal research and development labs
(Mowery and Rosenberg, 1989; Rosenberg and Birdzell, 1986).
Explanatory
These assertions, contrary to descriptive ones, have a predicative field that is wide and
unfixed. They apply to past and future events and, in some cases (for example theories), to
events that are not considered by the discoverer. They can therefore allow the prediction of
novel facts. These goals are obtained because of the syntactic and semantic complexity and
flexibility of explanatory assertions. Universal or probabilistic assertions, such as the
inductive generalization of singular observations (for example 'all the crows are black' or 'a
large percentage of crows are black') are the closest to descriptive assertions. They have
little complexity and their application outside the predicative field is null. In fact, their
explanatory and predictive power is narrow, and the phenomenon is explained in terms of the
input-output relations of a black box (Viale, 2008). In contrast, theories and models tend to
represent inner parts of a phenomenon. Usually, hypothetical entities are introduced that
have no direct empirical meaning. Theoretical entities are then linked indirectly to
observations through bridge principles or connecting statements. Models and metaphors
often serve as heuristic devices used to reason more easily about the theory. The
complexity, semantic richness and plasticity of a theory allow it to have wider applications
than empirical generalizations. Moreover, theories and models tend not to explain a
phenomenon in a black box way, but to represent the inner mechanisms that connect input
to output. Knowing the inner causal mechanisms allows for better management of variables
that can change the output. Therefore, they offer better technological usage.
Inductive generalizations were the typical assertions made by individual inventors
during the first industrial revolution. Compared to descriptive assertions, they represent
progress because they lend themselves to greater generalization. They avoid being highly
idiosyncratic in this way and, in principle, can be transferred to other situations.
Nevertheless, inductive generalizations are narrow in their epistemological meaning and do not allow further enlargement of the invention, with the inevitable consequence that they cannot generate other innovations. Inductive generalizations therefore remain trapped within the law of diminishing returns. In contrast, theories attempting to give causal explanations of an invention offered the possibility of halting the law of diminishing returns. They opened the black box of the invention and allowed researchers to
better manipulate the variables involved in order to produce different outputs, or rather,
different inventions. A better understanding of inventions through the discovery of their
scientific theoretical bases began to be pursued during and after the second industrial
revolution.
To better exemplify the relations between descriptive assertions, empirical
generalizations and theories in technological innovation, I will describe an historical case
(Viale, 2008, pp. 23-25; Rosenberg and Birdzell, 1986).
At the end of the 18th century and the beginning of the 19th, the growth in urban
populations, as people moved out of the country to search for work in the city, posed
increasing problems for the provisioning of food. Long distances, adverse weather
conditions and interruptions in supplies as a result of social and political unrest meant that
food was often rotten by the time it reached its destination. The authorities urgently needed
to find a way to preserve food. In 1795, at the height of the French Revolution, Nicolas
Appert, a French confectioner who had been testing various methods of preserving edibles
using champagne bottles, found a solution. He placed the bottles containing the food in
boiling water for a certain length of time, ensuring that the seal was airtight. This stopped
the food inside the bottle from fermenting and spoiling. This apparently commonplace
discovery would be of fundamental importance in years to come and earned Appert
countless honors, including a major award from the Napoleonic Society for the
Encouragement of Industry, which was particularly interested in new victualling techniques
for the army. For many years, the developments generated by the original invention were of
limited application, such as the use of tin-coated steel containers introduced in 1810. When
Appert developed his method, he was not aware of the physical, chemical and biological
processes that prevented deterioration once the food had been heated. His invention was a
typical example of know-how introduced through 'trial and error'. The extension of the
invention into process innovation was therefore confined to descriptive knowledge, in other
words, to an empirical generalization. It was possible to test new containers or to try and
establish a link between the change in temperature, the length of time the container was
placed in the hot water and the effects on the various bottled foods, and to then draw up
specific rules for the preservation of food. However, this was a random, time-consuming
approach, involving endless possible combinations of factors and lacking any real capacity
to establish a solid basis for the standardization of the invention. Had it been a patent, it
would have been a circumscribed innovation, whose returns would have been high for a
limited initial period and would then have gradually decreased in the absence of
developments and expansions of the invention itself. The scientific explanation came
some time later, in 1873. Louis Pasteur discovered the function of bacteria in certain types
of biological activity, such as in fermentation and the deterioration of food.
Microorganisms are the agents that make it difficult to preserve fresh food, and heat has the
power to destroy them. Once the scientific explanation was known, chemists, biochemists
and bacteriologists were able to study the effects of the multiple factors involved in food
spoilage: 'food composition, storage combinations, the specific microorganisms, their
concentration and sensitivity to temperature, oxygen levels, nutritional elements available
and the presence or absence of growth inhibitors' (Rosenberg and Birdzell, 1986; Italian
translation 1988, pp. 300-301). These findings and many others expanded the scope of the
innovation beyond its original confines. The method was applied to varieties of fruit,
vegetables and, later, meats that could be heated. The most suitable type of container was
identified, and the effects of canning on the food’s flavor, texture, color and nutritional
value were characterized. As often happens when the scientific bases of an invention are
identified, the innovation generated a cascade effect of developments in related fields, such
as insulating materials, conserving-agent chemistry, and genetics and agriculture for the
selection and cultivation of varieties of fruit and vegetables better suited to preservation
processes.
Why is it that the scientific explanation for an invention can expand innovation
capacity? To answer this question, reference can again be made to Appert’s invention and
Pasteur’s explanation. When a scientific explanation is produced for a phenomenon, two
results are obtained. First of all, a causal relationship is established at a more general level.
Second, once a causal agent for a phenomenon has been identified, its empirical
characteristics can be analyzed. As far as the first result is concerned, the microbic
explanation furnished by Pasteur does not apply simply to the specific phenomenon of
putrefaction in fruit and vegetables; bacteria have a far more general ability to act on other
foods and to cause pathologies in people and animals. The greater generality of the causal
explanation compared with the 'local' explanation – the relationship between heat and the
preservation of food – means the innovation can be applied on a wider scale. The use of
heat to destroy microbes means it is possible to preserve not only fruit and vegetables, but
meat as well, to sterilize milk and water, to prepare surgical instruments before an
operation, to protect the human body from bacterial infection (by raising body temperature)
and so on. All this knowledge about the role of heat in relation to microbes can be applied
in the development of new products and new innovative processes, from tinned meat to
autoclaves to sterilized scalpels. As to the second result, once a causal agent has been
identified, it can be characterized and, in the case of Pasteur’s microbes, other methods can
be developed to neutralize or utilize them. An analysis of a causal agent begins by
identifying all possible varieties. Frequently, as in the case of microbes, the natural
category identified by the scientific discovery comprises a huge range of entities. And each
microbe – bacterium, fungus, yeast, and so on – presents different characteristics depending
on the natural environment. Some of these properties can be harnessed for innovative
applications. Yeasts and bacilli, for example, are used to produce many kinds of food, from
bread and beer to wine and yoghurt; bacteria are used in waste disposal. And this has led
scientists, with the advent of biotechnology, to attempt to transform the genetic code of
microorganisms in order to exploit their metabolic reactions for useful purposes. Returning
to our starting point, the preservation of food, once the agent responsible for putrefaction
had been identified, it also became possible to extend the range of methods used to
neutralize it. It was eventually discovered that microbes could also be destroyed with
ultraviolet light, with alcohol or with other substances, which, for a variety of reasons,
would subsequently be termed disinfectants.
So to answer our opening question, the scientific explanation for an invention
expands the development potential of the original innovation because it 'reduces' the
ontological level of the causes and extends the predicative reach of the explanation. Put
simply, if the phenomenon to be explained is a 'black box', the explanation identifies the
causal mechanisms inside the box ('reduction' of the ontological level), which are common
to other black boxes (extension of the 'predicative reach' of the explanation). Consequently,
it is possible to develop many other applications or microinventions, some of which may
constitute a product or process innovation. Innovation capacity cannot be expanded,
however, when the knowledge that generates the invention is simply an empirical
generalization describing the local relationship between antecedent and causal consequent
(in the example of food preservation, the relationship between heat and the absence of
deterioration). In this case, knowledge merely describes the external reality of the black
box, that is, the relationship between input (heat) and output (absence of deterioration); it
does not extend to the internal causal mechanisms and the processes that generate the
causal relationship. It is of specific, local value, and may be applied to other contexts or
manipulated to generate other applications to only a very limited degree.
The knowledge inherent in Appert’s invention, which can be described as an
empirical generalization, is regarded as genuine scientific knowledge by some authors
(Mokyr, 2002a; Italian translation 2004). We do not want to get involved here in the ongoing
epistemological dispute over what constitutes scientific knowledge (Viale, 1991):
accidental generalizations of 'local' value only (for example the statement 'the pebbles in
this box are black'), empirical generalizations of 'universal' value (for example Appert’s
invention) and causal nomological universals (for example a theory like Pasteur’s
discovery). The point to stress is that although an empirical generalization is 'useful' in
generating technological innovation (useful in the sense adopted by Mokyr, 2002b, p. 25,
derived from Kuznets, 1965, pp. 84-87), it does not possess the generality and ontological
depth that permit the potential of the innovation to be easily expanded in the way that
Pasteur’s discovery produced multiple innovative effects. In conclusion, after Pasteur’s
discovery of the scientific basis of Appert’s invention, a situation of 'growing economic
returns' developed, driven by the gradual expansion of the potential of the innovation and a
causal concatenation of microinventions and innovations in related areas. This could be
described as a recursive cascade phenomenon or as a 'dual' system (Kauffman, 1995) where
the explanation of the causal mechanism for putrefaction gave rise to a tree structure of
other scientific problems whose solution would generate new technological applications
and innovations and also raise new problems in need of a solution.
The comparison between Appert’s invention and Pasteur’s discovery also reveals another phenomenon. The discovery of the scientific basis of an invention allows the
horizontal enlargement of the invention into areas different from the original one (for
example from alimentation to hygiene and health). The interdisciplinarity of inventive
activities has grown progressively from the second industrial revolution until now, with the
recent birth of the converging technology program (National Science Foundation, 2002).
The new technologies are often the result of the expansion of a theory outside its original
borders. This phenomenon implies the participation of different disciplines and
specializations in order to be able to understand, grasp and exploit the application of the
theory. The strong interdisciplinarity of current inventions implies a great division of expert
labour and increased collaboration among different experts in various disciplines. Thus,
only complex organizations supporting many different experts can cope with the demands
entailed in the strong interdisciplinarity of current inventive activity.
2) Cognitive Mode of Ontic Knowledge
The cognitive approach to science (Giere, 1988; Viale, 1991; Carruthers et al., 2002;
Johnson-Laird, 2008) considers scientific activity as a dynamic and interactive process
between mental representation and processing on the one hand, and external representation
in some media by some language on the other. According to this approach, scientific
knowledge has two dimensions: the mental representations of a natural or social
phenomenon and its linguistic external representation. The first dimension includes the
mental models stemming from perceptive and memory input and from their cognitive
processing. This cognitive processing is mainly inductive, deductive or abductive. The
models are realized by a set of rules: heuristics, explicit rules and algorithms. The cognitive
processing and progressive shaping of the mental representations of a natural phenomenon
utilize external representations in natural or formal language. The continuous interaction
between the internal mental representation and the external linguistic one induces the
scientist to generate two products: the mental model of the phenomenon and its external
propositional representation.
What is the nature of the representation of knowledge in the mind? It seems to be
different in the case of declarative (ontic) knowledge than in the case of procedural
(deontic) knowledge. The first is represented by networks, while the second is represented
by production-systems. The ACT-R (Adaptive Control of Thought-Rational) networks of
Anderson (1983; 1996) include images of objects and corresponding spatial configurations
and relationships; temporal information, such as relationships involving the sequencing of
actions, events and the order in which items appear; and information about statistical
regularities in the environment. As with semantic networks (Collins and Quillian, 1969) or
schemas (Barsalou, 2000) there is a mechanism for retrieving information and a structure
for storing information. In all network models, a node represents a piece of information that
can be activated by external stimuli, such as sensations, or by internal stimuli, such as
memories or thought processes. Given each node’s receptivity to stimulation from
neighboring nodes, activation can easily spread from one node to another. Of course, as
more nodes are activated and the spread of activation reaches greater distances from the
initial source of the activation, the activation weakens. In other words, when a concept or a
set of concepts that constitutes a theory contains a wide and dense hierarchy of
interconnected nodes, the connection of one node to a distant node will be difficult to
detect. It will, therefore, be difficult to pay the same attention to all of the consequences of
a given assertion. For example (Sternberg, 2009), as the conceptual category of a predicate
(for example, animal) becomes more hierarchically remote from the category of the subject
of the statement (for example, robin), people generally take longer to verify a true statement
(for example, a robin is an animal) in comparison with a statement that implies a less
hierarchically remote category (for example, a robin is a bird); a minimal computational sketch of this spreading-activation effect is given at the end of this section. Moreover, since working memory can process only a limited amount of information (according to Miller’s magical number of 7 ± 2 items), a single mind cannot compute a large amount of structured information or too many complex concepts, such as those contained in theories. These
cognitive aspects explain various features of knowledge production and capitalization in
science and technology:
1) The great importance given to external representation in natural and formal
language and the institutional value of publication satisfy two goals: because of the natural
limitations of memory, these serve as memory devices, useful for allowing the cognitive
processing of perceptive and memory input; because of the complexity of concepts and the
need for different minds working within the same subject, these are social devices, useful
for allowing the communication of knowledge and the interaction and collaboration among
peers.
2) Before and during the first industrial revolution, the computational effort of
inventors was made apparent primarily in their perceptual ability in detecting relevant
features of the functioning of machines and prototypes and in elaborating mental figurative
concepts or models that were depicted externally in diagrams, designs, figures, flow charts,
drafts, sketches and other representations. The single mind of an inventor could cope with
this computational burden. Interaction with other subjects consisted mainly of that with
artisans and workers in order to prepare and tune the parts of a machine or of that with
apprentices involving knowledge transfer. Few theoretical concepts were needed, and
cognitive activity was focused on procedural knowledge (that is practical know-how
represented, mentally, by production systems) and simple declarative knowledge (that is
simple schemes that generalize physical phenomena, like the Appert scheme involving the
relation between heat and food preservation). The situation changes dramatically after the
second industrial revolution with the growing role of scientific research, particularly in the
life sciences. Conceptual categories increase in number; concepts become wider with many
semantic nodes; and there are increasing overlaps among different concepts. One mind
alone cannot cope with this increased complexity, so a growing selective pressure arose to share the computational burden among different minds. The inadequacy of a single mind to manage this conceptual complexity brought about the emergence of the division of expert labour, or in other words, the birth of specializations, collective organizations of research and different roles and areas of expertise.
3) Knowledge complexity and limited cognition explain the emergence of many
institutional phenomena in scientific and technological research, such as the importance of
publication, the birth of disciplines and specializations, the division of labour and the
growth in the size of organizations. What were the effects of these emergent phenomena on
the capitalization of knowledge? While the inventor of the first industrial revolution could
capitalize his knowledge by 'selling his mind' and the incomplete knowledge represented in
the patent or in the draft, since the second industrial revolution, many minds now share
different pieces of knowledge that can fill the gaps in knowledge contained in the patent or
publication. This is particularly true in technological fields where the science push
dimension is strong. In emerging technologies such as biotechnology, nanotechnology, ICT and new materials, and in the coming converging technologies (National Science Foundation,
2002), the complexity of knowledge and its differentiation lead to interdisciplinary
organizations and collaborations and to the creation of hybrid organizations. Knowledge
contained in a formal document, be it patent, publication or working paper, is not the full
representation of the knowledge contained in the invention. There are tacit aspects of the
invention that are crucial to its transfer and reproduction which are linked to the particular
conceptual interpretation and understanding of the invention occasioned by the peculiar
background knowledge and cognitive rules of the inventors (Balconi, Pozzali, Viale, 2007;
Pozzali and Viale 2007). Therefore, the only way to allow transfer is to create hybrid
organizations that put together, face-to-face, the varied expertise of inventors with that of
entrepreneurs and industrial researchers aiming to capitalize knowledge through successful
innovations.
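To make the spreading-activation mechanism recalled earlier in this section more concrete, the following sketch is a minimal, hypothetical toy model written in Python: the miniature 'robin-bird-animal' network and the decay factor are illustrative assumptions, not a reconstruction of ACT-R or of any published simulation.

```python
# Minimal, hypothetical sketch of spreading activation in a tiny semantic
# network; the graph and the decay factor are illustrative assumptions only.
from collections import deque

# Each node lists its directly connected neighbours.
network = {
    "robin":  ["bird"],
    "canary": ["bird"],
    "bird":   ["robin", "canary", "animal"],
    "animal": ["bird", "dog"],
    "dog":    ["animal"],
}

def spread_activation(source, initial=1.0, decay=0.5):
    """Breadth-first spread: each link away from the source halves activation."""
    activation = {source: initial}
    frontier = deque([source])
    while frontier:
        node = frontier.popleft()
        for neighbour in network[node]:
            candidate = activation[node] * decay
            if candidate > activation.get(neighbour, 0.0):
                activation[neighbour] = candidate
                frontier.append(neighbour)
    return activation

act = spread_activation("robin")
# "bird" (one link away) receives more activation than "animal" (two links),
# mirroring the finding that "a robin is a bird" is verified faster than
# "a robin is an animal".
print(act["bird"], act["animal"])  # 0.5 0.25
```

The same decay is what makes the distant consequences of an assertion hard to attend to when a theory involves a wide and dense hierarchy of nodes, and it is this computational limit that the division of expert labour described above compensates for.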
BACKGROUND KNOWLEDGE AND COGNITIVE RULES
An important part of knowledge is not related to the representation of the physical and
human world but to the ways in which to interact with it. Deontic knowledge corresponds
to the universe of norms. Rules, prescriptions, permissions, technical norms, customs,
moral principles and ideal rules (von Wright, 1963) are the main categories of norms.
Various categories of norms are implied in research and technological innovation. Customs
or social norms represent the background knowledge that guides and gives meaning to the
behavior of scientists and industrial researchers. Some norms are moral principles and
values that correspond to a professional deontology or academic ethos (Merton, 1973).
They represent norms for moral actions and are slightly different from ideal rules (Moore,
1922) which are a way of characterizing a model of goodness and virtue (as in the Greek
meaning of aretê), or in this case, of what it means to be a good researcher. Prescriptions
and regulations are the legal norms established by public authorities that constrain research
activity and opportunities for the capitalization of knowledge. Rules are mainly identifiable as the cognitive rules of reasoning and decision making applied in solving problems, drawing
inferences, making computations and so forth. Lastly, technical norms are those
methodological norms and techniques that characterize the research methodology and
procedures in generating and reproducing a given innovation. From this point of view, it is
possible to assert that a scientific theory or a technological prototype is a mixture of ontic
knowledge (propositions and mental models) and deontic knowledge (values, principles,
methodologies, techniques, practical rules, and so on).
Deontic knowledge has been examined analytically as involving a logic of action by
some authors (for example von Wright, 1963; but see also the work of Davidson, Chisholm
and Kenny). An analytic approach has been applied primarily to the representation of
legal and moral norms. For the purposes of this paper, the analytic mode of deontic
knowledge doesn’t appear relevant. Firstly, it is difficult to apply a truth-functional logic to
norms whose ontological existence is not clear. Secondly, unlike ontic knowledge where
the knowledge relevant for technological innovation is, to a certain extent, expressed in
some language and transcribed in some media, deontic knowledge relevant for
technological innovation is mainly present at a socio-psychological level.
As we will see, norms are greatly involved in shaping the behaviors responsible for
knowledge capitalization. Moral principles and social values are part of the background
knowledge that influences the social behavior of scientists and entrepreneurs as well as the
modalities of their interaction and collaboration. They play an important role in
determining different styles of thinking, problem solving, reasoning and decision making
between academic and industrial researchers (Viale, 2009) that can also have an effect on
shaping the institutions and organizations for capitalizing knowledge.
Before analyzing background knowledge and cognitive rules, I wish to focus briefly
on technical norms. According to von Wright (1963) technical norms correspond to the
means of reaching a given goal. Analytically, they can be represented as conditional
assertions of the elliptical form if p, then q where the antecedent p is characterized by the
goal and the consequent q by the action that should be taken to reach the goal. They
represent the bridge between ontic and deontic knowledge. In fact, the antecedent is
characterized not only explicitly by the goal but also implicitly by the empirical initial
conditions and knowledge that allow the selection of the proper action. In other words, a
technical norm would be better represented in the following way: if (p & a) then q where a
represents the empirical initial conditions and theoretical knowledge for action. From this
analytical representation of technical norms we infer certain features: 1) the more a
corresponds to complex theoretical knowledge, the more computationally complex the
application of the norm will be; 2) the more a corresponds to descriptive assertions, the
more difficult it will be to generalize the understanding and application of the norm; and 3)
the more the relevant knowledge contained in a is characterized by tacit features, the more
difficult it will be to generalize to others the understanding and application of the norm.
Technical norms corresponding to the procedures and techniques needed to generate an
invention can manifest these features. Inventions from the first industrial revolution, such
as Appert’s, presented technical norms characterized by descriptive assertions and tacit
knowledge (mainly of the competential type). Thus, knowledge transfer was very difficult,
and there was no need for the division of expert labour. Inventions after the second
industrial revolution, however, involved a growing share of theoretical knowledge and a
decrease in competential tacit knowledge; therefore, the transfer of knowledge could be, in
theory, easier. In any case, it required a greater amount of expertise that was only possible
with a complex division of expert labour. This was particularly necessary in disciplines such as physics and chemistry, where the particular ontology and the mathematical language used to represent the phenomena allow the generation of complex theoretical structures.
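To make the analytical form more concrete, a technical norm 'if (p & a) then q' can be read as a condition-action rule whose condition joins the goal p with the background conditions and knowledge a. The Python sketch below is purely illustrative: the food-preservation rule, its field names and its threshold are hypothetical and are not taken from the sources discussed here.

```python
# Illustrative sketch only: a technical norm "if (p & a) then q", where the
# condition combines the goal p with the background conditions/knowledge a,
# and q is the prescribed action. Names and thresholds are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class TechnicalNorm:
    goal: str                           # p: the goal to be reached
    background: Callable[[Dict], bool]  # a: empirical conditions and knowledge
    action: str                         # q: the action to be taken

    def applies(self, goal: str, situation: Dict) -> bool:
        # The norm fires only when the goal matches and the background holds.
        return goal == self.goal and self.background(situation)

# A toy norm in the spirit of Appert's procedure (hypothetical encoding):
preserve_food = TechnicalNorm(
    goal="preserve food",
    background=lambda s: s.get("container_sealed", False)
                         and s.get("water_temp_celsius", 0) >= 100,
    action="keep the sealed container in boiling water for the prescribed time",
)

situation = {"container_sealed": True, "water_temp_celsius": 100}
if preserve_food.applies("preserve food", situation):
    print(preserve_food.action)
```

The three features listed above map directly onto this representation: the more complex, descriptive or tacit the content packed into the background predicate a, the harder it becomes to encode the rule explicitly and, therefore, to transfer it.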
From a cognitive point of view, technical norms correspond to pragmatic schemes
(Cheng and Holyoak, 1985, 1989) that have the form of production systems composed of
condition-action rules (corresponding to conditional assertions in logic and to production
rules in Artificial Intelligence). Pragmatic schemes are sets of abstract, context-dependent rules corresponding to actions and goals relevant from a pragmatic point of
view. According to the analytical formulation of von Wright (1963) the main cognitive
rules in pragmatic schemes are those of permission and obligation. More generally, a schema
(an evolution of the semantic network of Collins and Quillian, 1969) is a structured
representation that captures the information that typically applies to a situation or event
(Barsalou, 2000). Schemas establish a set of relations that link properties. For example, the
schema for a birthday party might include guests, gifts, a cake, and so on. The structure of a
birthday party is that the guests give gifts to the birthday celebrant, everyone eats cake, and
so on. The pragmatic schema links information about the world with the goal to be attained
according to this information.
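As a purely illustrative sketch (the slot names, relations and rules below are hypothetical and are not drawn from Cheng and Holyoak or from Barsalou), a schema of this kind can be pictured as a structured bundle of slots and relations to which permission and obligation rules are attached:

```python
# Hypothetical sketch of a schema as a structured representation: named slots,
# relations among them, and attached pragmatic (deontic) rules.
birthday_party_schema = {
    "slots": {"celebrant": None, "guests": [], "gifts": [], "cake": None},
    "relations": [
        "guests give gifts to the celebrant",
        "everyone eats cake",
    ],
    "rules": [
        {"type": "obligation", "if": "you are a guest", "then": "bring a gift"},
        {"type": "permission", "if": "cake has been served", "then": "eat cake"},
    ],
}

def licensed_actions(facts, schema):
    """Return the actions the schema permits or requires, given the facts."""
    return [(rule["type"], rule["then"])
            for rule in schema["rules"] if rule["if"] in facts]

print(licensed_actions({"you are a guest"}, birthday_party_schema))
# -> [('obligation', 'bring a gift')]
```

On the account developed in the next paragraph, the pragmatic schemas guiding academic and industrial research would have the same shape, with slots for goals, funds and deadlines and deontic rules attached to them.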
A pragmatic schema can serve as a cognitive theory for most deontic knowledge
relevant in innovation. It can represent values and principles characterizing background
knowledge. Social norms ruling research behaviour, moral principles characterizing the
ethos of academic community, pragmatic goals driving the decision making of industrial
researchers and social values given to variables such as time, risk, money and property can
be represented by pragmatic schemes. These schemes also seem to influence the application
of cognitive rules, such as those used in deduction, induction, causality, decision making
and so forth. The topic is controversial. The dependence of cognitive rules on pragmatic
schemes is not accepted by theories supporting an autonomous syntactic mental logic.
According to these theories (Beth and Piaget, 1961; Braine, 1978; Rumain, Connell and Braine, 1983), the mind contains a natural deductive logic (which for Piaget amounts to the propositional calculus) that allows the inference of some things and not others. For
example, the human mind is able to apply modus ponens but not modus tollens. In the same
way, we could also presuppose the existence of a natural probability calculus, a causal
reasoning rule and a risk assessment rule, among others. Many empirical studies and
several good theories give alternative explanations that dispense with the existence of mental logic and of other syntactic rules (for the pragmatic scheme theories: Cheng and Holyoak, 1985, 1989; Cheng and Nisbett, 1993; for the mental models theory: Johnson-Laird, 1983, 2008; for the conceptual semantic theory see Jackendoff, 2007). The first point is that there
are many rules that are not applied when the format is abstract but which are applied when
the format is pragmatic – that is, when it is linked to everyday experience. For example, the
solution of the selection task problem, namely, the successful application of modus tollens,
is possible only when the questions are not abstract but are linked to problems of everyday
life (Politzer, 1986; Politzer and Nguyen-Xuan, 1992). The second point is that most of the
time rules are implicitly learned through pragmatic experience (Reber, 1993; Cleeremans,
1995; Cleeremans, Destrebecqz and Boyer, 1998). The phenomenon of implicit learning
seems so strong that it occurs even when the cognitive faculties are compromised. From
recent studies (Grossman, Smith et al., 2003) conducted with Alzheimer’s patients, it appears
that they are able to learn rules implicitly but not explicitly. Lastly, the rules that are learnt
explicitly in a class or that are part of the inferential repertoire of experts are often not
applied in everyday life or in tests based on intuition (see Kahneman and Tversky’s experiments with statisticians).
At the same time, pragmatic experience and the meaning that people give to social
and natural events are driven by background knowledge (Searle, 1995 and 2008; Smith and Kosslyn, 2007). The values, principles and categories of background knowledge, stored in
memory, allow us to interpret reality, to make inferences and to act, that is, to have a
pragmatic experience. Therefore, background knowledge affects implicit learning and the
application of cognitive rules through the pragmatic and semantic dimension of reasoning
and decision making. What seems likely is that the relationships within schemas and
among different schemas allow us to make inferences, that is, they correspond to implicit
cognitive rules. For example, let us consider our schema for glass. It specifies that if an
object made of glass falls onto a hard surface, the object may break. This is an example of
causal inference. Similar schemas allow us to make inductive, deductive or analogical inferences, to solve problems and to make decisions (Markman and Gentner, 2001; Ross,
1996). In conclusion, the schema theory seems to be a good candidate to explain the
dependence of cognitive rules on background knowledge. If this is the case, we can expect
that different cognitive rules should correspond to different background knowledge,
characterizing, in this way, different cognitive styles. Nisbett (2003) has shown that the
relation between background knowledge and cognitive rules underlies the differences in thinking and reasoning between Americans and East Asians. These differences can explain
the difficulties in reciprocal understanding and cooperation between people of different
cultures. If this is the situation in industrial and academic research, we can expect obstacles
to collaboration and the transfer of knowledge and the consequent emergence of institutions
and organizations dedicated to overcoming these obstacles to the capitalization of
knowledge.
THE EFFECTS OF DIFFERENT DEONTIC KNOWLEDGE ON ACADEMY-INDUSTRY RELATIONS
Usually, the obstacles to the collaboration between universities and companies are analyzed
by comparing entrepreneurs and managers to academic scientists (plus the academic TTO
officers, as in the case of Siegel et al., 1999). In my opinion, this choice is correct in the
case of the transfer of patents and the licensing of technology, because here the relationship is
between an academic scientist and an entrepreneur or manager, often mediated by an
academic TTO officer. The situation is different in the collaboration between a university
and industrial labs in order to achieve a common goal, such as the development of a
prototype, the invention of a new technology, the solution to an industrial problem and so
on. In these cases, interaction occurs mainly between academic and industrial researchers.
Entrepreneurs, managers and TTO officers might only play the role of establishing and
facilitating the relationship. Since academy-industry relations are not simply reducible to
patents and licences (Agrawal and Henderson, 2002) but find their core in joint research collaboration itself, I prefer to focus on the behaviours of academic and industrial researchers.
Previous studies on obstacles between universities and companies analyzed only superficial
economic, legal and organizational aspects and focused mainly on the transfer of patents
and licences (Nooteboom et al., 2007; Siegel et al., 1999). Since research collaboration
implies a complex phenomenon of linguistic and cognitive coordination and adjustment
among members of the research group, I think that a deeper cognitive investigation into this
dimension might offer some interesting answers to the academy-industry problem. The
main hypothesis is that there can be different cognitive styles in thinking, problem solving,
reasoning and decision making that can hamper collaboration between academic and
industrial researchers. These different cognitive styles are linked and mostly determined by
different sets of values and norms that are part of background knowledge (as we have seen
above). Different background knowledge is also responsible for poor linguistic coordination, for misunderstanding and for impeding the successful psychological interaction of the group.
The general hypotheses presented in what follows outline a research programme of empirical tests to check the effects of cognitive styles across different scientific and technological domains and geographical contexts (for a more complete analysis see
Viale, 2009).
1) Background knowledge
What is the difference in background knowledge between the university and industrial labs,
and how can this influence cognitive styles?
Studies in the sociology of science have focused on the values and principles that
drive scientific and industrial research.
Academic research seems to be governed by a set of norms and values that are close
to the Mertonian ethos (Merton, 1973). Qualities such as communitarianism, scepticism,
originality, disinterestedness, universalism, and so on, were proposed by Robert Merton as
the social norms of the scientific community. He justified the proposal theoretically. Other
authors, such as Mitroff (1974) criticized the Mertonian ethos on an empirical basis. He
discovered that scientists often follow Mertonian norms, but that there are, nevertheless,
cases in which scientists seem to follow the opposite of these norms. More recent studies
(Broesterhuizen and Rip, 1984) confirm most of the norms of Merton. These studies assert
that research should be Strategic, founded on Hybrid and interdisciplinary communities,
able to stimulate Innovative critique, and should be Public and based on Scepticism
(SHIPS). Other recent studies (Siegel et al., 1999; Viale, 2001) confirm the presence of
social norms that are reminiscent of the Mertonian ethos. Scientists believe in the pursuit of
knowledge per se, in the innovative role of critique, in the universal dimension of the
scientific enterprise and in science as a public good. They believe in scientific method
based on empirical testing, the comparison of hypotheses, enhanced problem solving and
truth as a representation of the world (Viale, 2001, pp. 216-219). But the simple fact that
scientists have these beliefs does not prove that they act accordingly. Beliefs can be deflected by contingent interests and opportunistic reasons. They can also represent an invented image that scientists wish to present to society. They can also vary from one discipline and
specialization to another. Nevertheless, the presence of these beliefs seems to characterize
the cultural identity of academic scientists. They constitute part of their background
knowledge and can, therefore, influence the implicit cognitive rules for reasoning and
decision making. On the contrary, industrial researchers are driven by norms that are quite
different from academic ones. They can be summarized by the acronym PLACE (Ziman,
1987): Proprietary, Local, Authoritarian, Commissioned, Expert. Research is commissioned by the company, which owns the results; the results cannot be diffused and are valid only locally, to improve the competitiveness of the company. The researchers are subjected
to the authoritarian decisions of the company and develop a particular expertise valid
locally. PLACE is a set of norms and values that characterizes the cultural identity of
industrial researchers. These norms constitute part of their background knowledge and may
influence the inferential processes of reasoning and decision making.
In summary, the state of the art of studies on social norms in academic and
industrial research seems insufficient and empirically obsolete. A new empirical study of
norms contained in background knowledge is essential. This study should examine the main
features characterizing the cultural identity of academic and industrial researchers as
established by previous studies. These main features can be summarized in the following
way:
- Criticism vs. Dogmatism: academic researchers follow the norm of systematic critique,
scepticism and falsificatory control of knowledge produced by colleagues; industrial
researchers aim at maintaining knowledge that works in solving technological problems.
- Interest vs. Indifference: academic researchers are not impelled in their activity primarily
by economic interest but by epistemological goals; industrial researchers are motivated
mainly by economic ends like technological competitiveness, commercial primacy and
capital gain.
- Universalism vs. Localism: academic researchers believe in a universal audience of peers
and in universal criteria of judgement that can establish their reputation; industrial
researchers think locally in terms of both the audience and the criteria of judgement and
social promotion.
- Communitarianism vs. Exclusivism: academic researchers believe in the open and public
dimension of the pool of knowledge to which they must contribute in order for it to
increase; industrial researchers believe in the private and proprietary features of knowledge.
To the different backgrounds we should also add the different contingent features of
the contexts of decision making (we refer here to the decision making context of research
managers who are heads of a research unit or of a research group) that become operational
norms. The main features are related to time, results and funding.
In a pure academic context, the time allowed for conducting research is usually
loose. There are certain temporal requirements when one is working with funds coming
from a public source (particularly in the case of public contracts). However, in a contract
with a public agency or government department, the deadline is usually not as strict as that
of a private contract, and the requested results are not as well defined or as specific to a
particular product (for example a prototype or a new molecule or theorem). Thus, time
constraints don’t weigh as much on the reasoning and decision making processes of the
researchers. In contrast, when an academic researcher works with an industrial contract, the
time constraints are similar to those of the corporate researcher. Moreover, within a fixed time, a precise set of results must be produced and presented to the company. Under private law, the clauses of a contract with a company can be very punitive for the researcher and the university if the agreed requirements are not met. In
any case, the consequences of sub-optimal results from an academician working with a
company are less punitive than for a corporate researcher. For the latter, the time pressure is
heavier because the results, in a direct or semi-direct way, are linked to the commercial
survival of the company. Sub-optimal behaviour increases the risks to his career and to his
job security. As a result, the great expectation of the fast production of positive concrete
results presses on him more heavily. These different environmental pressures may generate
a different adaptive psychology of time and a different adaptive ontology of what the result
of the research might be. In the case of academic research, time might be less discounted.
That is, future events tend not to be as underestimated as they may be in industrial
research. The corporate researcher might fall into the bias of time discounting and myopia
because of the overestimation of short-term results. Even the ontology of an academic
researcher in respect to the final products of the research might be different from that of a
corporate researcher. While the former is interested in a declarative ontology that aims at
the expression of the result in linguistic form (for example a report, a publication, a speech,
and so on), the latter aims at an object ontology. The results for him should be linked in a
direct or indirect way to the creation of an object (for example a new molecule, a new
machine, a new material, or a new process to produce them or a patent that describes the
process of producing them).
The third operational norm concerns financial resources. In this case, the problem is not the quantity of funding. Funding for academic research is
usually lower for each unit of research (or, better, for each researcher) than for industrial
research. The crucial problem is the psychological weight of the funds and how much the
funds constrain and affect the reasoning and decision making processes of the researchers.
In other words (all other things being equal), this involves the amount of money at their disposal and the extent to which cognitive processes, and attention processes in particular, refer to a sort of value-for-money judgement in deciding how to act. It is still a topic to be
investigated, but from this point of view, it seems that the psychological weight of money
on academic researchers is less than that on industrial researchers. Money is perceived to
have less value and, therefore, influences decision making less. The reasons for this
different mental representation and evaluation may come from: 1) the way in which
funding is communicated and the ways it can constitute a decision frame (with more
frequency and relevance within the company because it is linked to important decisions
concerning the annual budget); 2) the symbolic representation of money (with much greater
emphasis in the company, whose raison d’être is the commercial success of its products and increased earnings); 3) the degree to which the social identity of the researchers is linked to wage levels (the monetary level counts more as an indicator of a successful career in a private company than in the university). The different
psychological weight of money has been analyzed by many authors and in particular by
Thaler (1999).
To summarize, operational norms can be schematized as loose time vs. pressing
time; undefined results vs. well-defined results; and financial lightness vs. financial
heaviness.
How can the values in background knowledge and operational norms influence the
implicit cognitive rules of reasoning and decision making, and how can they be an obstacle
to collaboration between industrial and academic researchers?
There are many aspects of cognition that are important in research activity. We can
say that every aspect is involved, from motor activity to perception, to memory, to
attention, to reasoning, to decision making and so on. My aim, however, is to focus on the
cognitive obstacles to reciprocal communication, understanding, joint decision making and
coordination between academic and corporate researchers and how these might hinder their
collaboration.
I will analyse three dimensions of interaction: language, group and inference (that is
the cognitive rules in thinking, problem solving, reasoning and decision making).
2) Language
It might be interesting to investigate the pragmatic aspects of language and communication.
To collaborate on a common project means to communicate, mainly by natural language.
To collaborate means to exchange information in order to coordinate one’s own actions
with those of others in the pursuit of a common aim. This means 'using language', as the
title of Clark’s book (1996) suggests, in order to reach the established common goal. Any
linguistic act is at the same time an individual and a social act. It is individual because it is
the individual who by motor and cognitive activity articulates the sounds that correspond to
words and phrases and who receives and interprets these sounds. Or in Goffman’s (1981)
terminology on linguistic roles, it is the subject that vocalizes, formulates, and means, and
it is another subject that attends the vocalization, identifies the utterances and understands
the meaning (Clark, 1996, p. 21). It is social because every linguistic act of a speaker has
the aim of communicating something to one or more addressees (even in the case of private
settings where we talk to ourselves, since here we ourselves play the role of an addressee).
In order to achieve this goal, there should be coordination between the speaker’s meaning
and the addressee’s understanding of the communication. However, meaning and
understanding are based on shared knowledge, beliefs and suppositions, namely, on shared
background knowledge. Therefore, the first important point is that it is impossible for two
or more actors in a conversation to coordinate meaning and understanding without
reference to common background knowledge. 'A common background is the foundation for
all joint actions, and that makes it essential to the creation of the speaker’s meaning and
addressee’s understanding as well' (Clark, 1996, p. 14). A common background is shared
by the members of the same cultural community.
A second important point is that the coordination between meaning and
understanding is more effective when the same physical environment is shared (for
example the same room at a university or the same bench in a park) and the vehicle of
communication is the richest possible. The environment represents a communicative frame
that can influence meaning and understanding. Even more, gestures and facial expressions
are rich in non-linguistic information and are also very important aids in coordination.
From this point of view, face-to-face conversation is considered the basic and most
powerful setting for communication.
The third point is that the more simple and direct the coordination, the more
effective the communication. There are different ways of complicating communication. The
roles of speaking and listening (see above regarding linguistic roles) can be decoupled. The
use of spokespersons, ghost writers and translators is an example of decoupling. A
spokeswoman for a minister is only a vocalizer, while the formulation vocalized is the
ghost writer’s, and the meaning is the minister’s. Obviously, in this case, the coordination
of meaning and understanding becomes more difficult (and even more so because of the
fact that it is an institutional setting with many addressees). The non-verbal communication
of the spokeswoman might be inconsistent with the meaning of the minister, and the ghost
writer might not be able to formulate correctly this meaning. Moreover, in many types of
discourse, such as plays, story telling, media news and reading, there is more than one layer
of action. The first layer is that of the real conversation. The second layer concerns the
hypothetical domain that is created by the speaker (when he is describing an event).
Through recursion there can be further layers as well. For example, a play requires three
layers: the first is the real world interaction among the actors, the second is the fictional
role of the actors, and the third is the communication with the audience. In face-to-face
conversation there is only one layer and no decoupling. The roles of vocalizing,
formulating and producing meaning are performed by the same person. The domain of
action identifies itself with the conversation; coordination is direct without intermediaries.
Thus, face-to-face conversation is the most effective way of coordinating meaning and
understanding, resulting in only minor distortions of meaning and fewer
misunderstandings. Academic and industrial researchers are members of different cultural
communities and, therefore, have different background knowledge. In the collaboration
between academic and industrial researchers, coordination between meanings and
understandings can be difficult if background knowledge is different. When this is the case,
as we have seen before, the result of the various linguistic settings will likely be the
distortion of meaning and an increase in misunderstanding. When fundamental values are
different (as in SHIPS vs. PLACE) and also when the operational norms of loose time vs.
pressing time, undefined product vs. well-defined product and financial lightness vs.
financial heaviness are different, it is impossible to transfer knowledge without losing or
distorting shares of meaning.
Moreover, difficulty in coordination will increase in settings that utilize
intermediaries between the academic inventor and the potential industrial user (mediated
settings in Clark, 1996, p. 5). These are cases in which an intermediate technology transfer
agent (such as the TTO of a university or the TTA of a private company or government)
tries to transfer knowledge from the university to corporate labs. In this case, there is a
decoupling of speech. The academic researcher is the one who formulates and gives
meaning to the linguistic message (also in a written format), while the TT agent is merely a
vocalizer. As a result, there may be frequent distortion of the original meaning, in particular
when the knowledge contains a great share of tacit knowledge. This distortion is
strengthened by the likely difference in background knowledge between the TT agent and
that of the other two actors in the transfer. TT agents are members of a different cultural
community (if they are professional, from a private TT company) or come from different
sub-communities inside the university (if they are members of a TTO). Usually, they are
neither active academic researchers nor corporate researchers. Finally, the transfer of
technology can also be accompanied by the complexity of having more than one domain of
action. For example, if the relation between an academic and an industrial researcher is not
face-to-face but is instead mediated by an intermediary, there is an emergent second layer
of discourse. This is the layer of the story told by the intermediary about the original
process and the techniques needed to generate the technology invented by the academic
researchers. The story can also be communicated with the help of a written setting, for
example, a patent or publication. All three points show that common background
knowledge is essential for reciprocal understanding and that face-to-face communication is
a pre-requisite for minimizing the distortion of meaning and the misunderstandings that can
undermine the effectiveness of knowledge transfer.
3) Group
The second dimension of analysis is that of the group. When two or more persons
collaborate to solve a common problem, they give rise to interesting emergent phenomena. In
theory, a group can be a powerful problem solver (Hinsz, Tindale, Vollrath, 1997). But in
order to be so, members of the group must share information, models, values and cognitive
processes (Hinsz, Tindale, Vollrath, 1997). It is likely that heterogeneity of skill and
knowledge is very useful for detecting solutions more easily. Some authors have analyzed
the role of heterogeneity in cognitive tasks (for example the solution of a mathematical
problem) and the generation of ideas (for example the production of a new logo) and have
found a positive correlation between it and the successful completion of these tasks
(Jackson, 1992). In theory, this result seems very likely, since finding a solution entails
looking at the problem from different points of view. Different perspectives allow the
phenomenon of entrenched mental set to be overcome, that is, the fixation on a strategy that
normally works well in solving many problems but that does not work well in solving this
particular problem (Sternberg, 2009). However, the type of diversity that works concerns
primarily cognitive skills or personality traits (Jackson, 1992). In contrast, when diversity is
based on values, social categories and professional identity, it can hinder the problem
solving ability of the group. In fact, this type of heterogeneity generates the categorization
of differences and similarities between the self and others and results in the emergent
phenomenon of the conflict/distance between ingroup and outgroup (Van Knippenberg and
Schippers, 2007). The relational conflict/distance of ingroup vs. outgroup is the most social
expression of the negative impact of diversity of background knowledge on group problem
solving. As was demonstrated by Manz and Neck (1995), without common background
knowledge there can be no sharing of goals, of the social meaning of the work, of the criteria
used to assess and correct the ongoing activity, or of foresight on the results and their impact.
As described by the theory of teamthink (Manz and Neck, 1995), the establishment of an
effective problem-solving group relies on the common sharing of values, beliefs, expectations
and prior assumptions about the physical and social world. For
example, academic and industrial researchers present different approaches concerning
disciplinary identity. The academic has a strong faith in the disciplinary matrix (Kuhn,
1962) composed of the members of a discipline with their particular set of disciplinary
knowledge and methods. In contrast, the industrial researcher tends to be opportunistic both
in using knowledge and in choosing peers. He doesn’t feel the need to be a member of an
invisible disciplinary college of peers and instead chooses à la carte which peers are helpful
to him and what knowledge is useful to attain the goal of the research. This asymmetry
between academic and corporate researchers is an obstacle to the proper functioning of
teamthink. The epistemological and social referents are different, and communication here
can resemble a dialogue of the deaf. Lastly, there is the linguistic dimension. As
we have seen above, without common background knowledge, the coordination of meaning
and understanding among the members of the group (that is, the fundamental basis of
collaboration) is impossible. Moreover, without common background knowledge, the
pragmatic context of communication (Grice, 1989; Sperber and Wilson, 1986) doesn’t
allow the generation of correct automatic and non-automatic inferences between speaker
and addressee. For example, the addressee will not be able to generate proper implicatures
(Grice, 1989) to fill in the lack of information and the elliptical features of the discourse.
4) Cognitive rules
Finally, different background knowledge influences inference, that is, the cognitive rules in
thinking, problem solving, reasoning and decision making activity. Different implicit
cognitive rules mean asymmetry, asynchrony and dissonance in the cognitive coordination
among the members of the research group. This generates obstacles to the transfer of
knowledge, to the application of academic expertise and knowledge to the industrial goal
and to the development of an initial prototype or technological idea towards a commercial
end. I will touch on this subject only briefly, as it is fully developed in Viale (2009).
There are two systems of thinking which affect the way we reason, decide and solve
problems. The first is the associative system, which involves mental operations based on
observed similarities and temporal contiguities (Sloman, 1996). It can lead to speedy
responses that are highly sensitive to patterns and to general tendencies. This system
corresponds to system 1 of Kahneman (2003) and represents the intuitive dimension of
thinking. The second is the rule-based system, which involves manipulations based on the
relations among symbols (Sloman, 1996). This usually requires the use of deliberate, slow
procedures to reach conclusions. Through this system, we carefully analyze relevant
features of the available data based on rules stored in memory. This corresponds to system
2 of Kahneman (2003). The intuitive and analytical systems can produce different results in
reasoning and decision making. The intuitive system is responsible for the biases and errors
of everyday life reasoning, whereas the analytical system allows us to reason according to
the canons of rationality. The prevalence of one of these two systems in the cognitive
activity of academic and industrial researchers will depend on contingent factors, such as
the need to finish the work quickly, and on the diverse styles of thinking. I hypothesize that
the operational norms of pressing time and well-defined results and the social norm of
dogmatism and localism will support a propensity towards the activity of the intuitive
system. In contrast, the operational norms of loose time and undefined results and the social
norms of criticism and universalism will support the activity of the analytical system. The
role of time in the activation of the two systems is evident. Industrial researchers are used
to following time limits and to giving value to time. Thus, this operational norm influences
the speed of reasoning and decision making and the activation of the intuitive system. The
contrary happens in academic labs. The operational norm concerning results seems less
evident. Those operating without the constraints of well-defined results have the ability to
indulge in slow and attentive ways of analyzing the features of the variables and in
applying rule-based reasoning.
Those who must produce a finished piece of work can't stop to analyze the details
and must proceed quickly to the final results. The social norm of criticism is more evident.
The tendency to check, monitor and to criticize results produced by other scientists
strengthens the analytical attitude in reasoning. Any form of control requires a slow and
precise analysis of the logical coherence, methodological fitness and empirical foundations
of a study. On the contrary, in corporate labs the aim is to use high-quality knowledge for
practical results and not to increase the knowledge pool by overcoming previous
hypotheses through control and critique. Finally, the social norm of universalism vs.
localism is less evident. Scientists believe in a universal dimension to their activity. The
rules of the scientific community should be clear and comprehensible to their peers.
Furthermore, the scientific method, reasoning style and methodological techniques can’t
only be understood and followed by a small and local subset of scientists but must be
explicit to all in order to allow the diffusion of their work to the entire community. Thus,
universality tends to strengthen the analytical system of mind. The contrary happens where
there is no need for the explicitness of rules and where evaluation is made locally by peers
according to the success of the final product.
The other dimension concerns problem solving. At the end of the 1950s, Herbert
Simon and his colleagues analyzed the effect of professional knowledge on problem
representation. They discovered the phenomenon of selective perception (Dearborn &
Simon, 1958), that is, the relation between different professional roles and different
problem representations. In the case of industrial and academic scientists, I presume that
selective perception will be effective not only in relation to professional and disciplinary
roles but also to social values and operational norms. These norms and values might
characterize the problem representation and, therefore, might influence reasoning and
decision making. For example, in representing the problem of the failure of a research
programme, industrial researchers might point more to variables like cost and time, whereas
academic scientists might point more towards an insufficiently critical attitude and too local
an approach.
In any case, it might prove interesting to analyze the time spent by academic and
industrial researchers in problem representation. The hypothesis is that time pressure
together with an intuitive system of thinking might cause the industrial researchers to
dedicate less time to problem representation than academic researchers.
Time pressure can affect the entire problem solving cycle, which includes
(Sternberg, 2009) problem identification, definition of problem, constructing a strategy for
problem solving, organizing information about a problem, allocation of resources,
monitoring problem solving, and evaluating problem solving. In particular, it might be
interesting to analyze the effect of pressing vs. loose time on the monitoring and evaluation
phases of the cycle. An increase in time pressures could diminish the time devoted to these
phases. Dogmatism can shorten the time spent on monitoring and evaluation, whereas
criticism might lead to better and deeper monitoring and evaluation of the problem solution.
Finally, time pressure might also have an effect on incubation. In order to allow old
associations resulting from negative transfer to weaken, one needs to put a problem aside
for a while without consciously thinking about it. This allows for the possibility that the
problem will be processed subconsciously until a solution emerges. There are several
possible mechanisms for enhancing the beneficial effects of incubation (Sternberg, 2009).
Incubation needs time. Thus, the pressing time norm of industrial research may hinder
success in problem solving.
The third dimension concerns reasoning. Reasoning is the process of drawing
conclusions from principles and from evidence. In reasoning, we move from what is
already known to infer a new conclusion or to evaluate a proposed conclusion. There are
many features of reasoning that differentiate academic from corporate scientists.
Probabilistic reasoning is aimed at updating a hypothesis according to new
empirical evidence. Kahneman and Tversky (1973) and Tversky and Kahneman (1980,
1982a and 1982b) have shown experimentally that we tend to neglect prior probabilities and
give excessive weight to new data. According to Bar-Hillel (1980), we give more weight to new
data because we consider it more relevant compared to old data. Relevance in this case
might mean more affective or emotional weight given to the data and, consequently,
stronger attentional processes focused on it. An opposite conservative phenomenon
happens when old data is more relevant. In this case we tend to ignore new data. In the case
of industrial researchers, it can be hypothesized that time pressure, financial weight and
well-defined results tend to give more relevance to new data. New experiments are costly
and should be an important step towards the conclusion of the work. They are, therefore,
more relevant and privileged by the mechanisms of attention. In contrast, academic
scientists, without the pressures of cost, time and the swift conclusion of the project, can
have a more balanced perception of the relevance of both old and new data.
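To make base-rate neglect concrete, consider a stylized diagnostic example (the figures are purely illustrative and are not taken from the studies cited above). Bayes' rule prescribes that the posterior probability of a hypothesis H given new data D combines the prior with the likelihood of the data:

P(H|D) = P(D|H)P(H) / [P(D|H)P(H) + P(D|not-H)P(not-H)]

If a condition has a base rate of 1 per cent, a test detects it with probability 0.8 and produces false positives with probability 0.1, then the posterior after a positive result is 0.8 × 0.01 / (0.8 × 0.01 + 0.1 × 0.99) ≈ 0.075, that is, about 7.5 per cent. Base-rate neglect consists in answering something close to 80 per cent, letting the new datum crowd out the prior probability.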
In deductive reasoning and, in particular, hypothesis testing, Wason (1960) and
Johnson-Laird (1983) showed that, in formal tests, people tend to commit confirmation bias.
New studies analyzing the emotional and affective dimension of hypothesis testing have
found that when an individual is emotionally involved in a thesis, he will tend to commit
confirmation bias. The involvement can be varied: economic (when one has invested
money in developing an idea), social (when social position is linked to the success of a
project), organizational (when the thesis is held by a leader who is assumed to be always right) or biographical
(when one has spent many years of one’s life in developing a theory). The emotional
content of a theory causes a sort of regret phenomenon that can push the individual to avoid
falsification of the theory. From this point of view, it is likely that financial heaviness and
dogmatism, together with other social and organizational factors, would induce industrial
researchers to commit confirmation bias more easily. Research is costly but fundamental to
the commercial survival of a company. Therefore, their work should be successful and the
results well-defined in order for them to retain or to improve their position. Moreover,
industrial researchers don’t follow the academic norm of criticism that prescribes the
falsificationist approach towards scientific knowledge. This is contrary to the situation of
academic scientists, who tend to be critics, and who are not (and should not be) obliged to
be successful in their research. It is, therefore, likely that they are less prone to confirmation
bias.
Causal reasoning, according to Mackie (1974), is based on a causal field, that is, the
set of relevant variables able to cause an effect. It is well known that each individual expert
presented with the same event will support a particular causal explanation. Often, once the
expert has identified one of the suspected causes of a phenomenon, he stops searching for
additional alternative causes. This phenomenon is called discounting error. From this point
of view, the hypothesis posits that the different operational norms and social values of
academic and corporate research may produce different discounting errors. Financial
heaviness, pressing time and well-defined results, compared to financial lightness, loose time
and undefined results, may delimit different causal fields within the same project. For example,
the corporate scientist can consider time as a crucial causal variable for the success of the
project, whereas the academic researcher is unconcerned with it. At the same time, the
academic researcher can consider the value of universal scientific excellence of the results
to be crucial, whereas the industrial researcher is unconcerned with it.
The fourth dimension deals with decision making. Decision making involves
evaluating opportunities and selecting one choice over another. There are many effects and
biases connected to decision making. I will focus on certain aspects of decision making that
can differentiate academic from industrial researchers.
The first deals with risk. According to prospect theory (Kahneman and Tversky,
1979; Tversky and Kahneman, 1992), risk propensity is stronger in situations of loss and
weaker in situations of gain. A loss of $5 causes a negative utility larger in magnitude than the
positive utility produced by a gain of $5. Therefore, people react to a loss with risky choices aimed
at recovering the loss. Two other conditions that increase risk propensity are
overconfidence (Fischhoff et al., 1977; Kahneman and Tversky, 1996) and illusion of
control (Langer, 1973). People often tend to overestimate the accuracy of their judgements
and the probability of the success of their performance. Both the perception of loss and
overconfidence are more likely when there is competition and when decisions are charged with
economic meaning and have economic effects. The operational norms of financial heaviness and
pressing time and the social values of exclusivity and interest can increase, for the industrial
researcher, the economic value of choices and intensify the perception of
competitiveness. This, consequently, can increase risk propensity. In contrast, the social
values of communitarianism and indifference and the operational norms of financial
lightness and loose time of academic scientists may create an environment that doesn't
induce a perception of loss nor overconfidence. Thus, behaviour tends to be more risk
averse.
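A minimal sketch of the prospect theory value function makes this asymmetry explicit (the parameters below are the median estimates reported by Tversky and Kahneman, 1992; the $5 figures are purely illustrative):

v(x) = x^α for gains (x ≥ 0) and v(x) = −λ(−x)^β for losses (x < 0), with α ≈ β ≈ 0.88 and λ ≈ 2.25.

With these values, a gain of $5 is worth v(5) ≈ 4.1, while a loss of $5 is worth v(−5) ≈ −9.3: the loss looms roughly twice as large as the equivalent gain, which is what pushes decision makers who perceive themselves to be in the domain of losses towards riskier choices.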
A second feature of decision making is connected to regret and loss aversion. We
saw before that, according to prospect theory, an individual doesn’t like to lose and reacts
with increased risk propensity. Loss aversion is based on the regret that loss produces in the
individual. This regret is responsible for many effects. One of the most important is
irrational escalation (Stanovich, 1999) in all kinds of investments (not only economic, but
also political and affective). When one is involved in the investment of money in order to
reach a goal, such as the building of a new missile prototype or the creation of a new
molecule to cure AIDS, one has to consider the possibility of failure. One should monitor
the various steps of the programme and, especially when funding ends, one must coldly
analyze the project’s chances for success. In this case, one must consider the monies
invested in the project as sunk cost, forget them and proceed rationally. People tend,
however, to become affectively attached to their project (Nozick, 1990; Stanovich, 1999).
They feel strong regret in admitting failure and the loss of money and tend to continue
investment in an irrational escalation of wasteful spending in an attempt to attain the
established goal. This psychological mechanism is also linked to prospect theory and risk
propensity under conditions of loss. Irrational escalation is stronger when there is a stronger
emphasis on the economic importance of the project. This is the typical situation of a
private company, which links the success of its technological projects to its commercial
survival. Industrial researchers have the perception that their job and the possibility of
promotion are linked to the success of their technological projects. Therefore, it is likely
that they will tend to succumb more easily to irrational escalation than academic
researchers, who have the operational norm of financial lightness and the social norm of
indifference and whose career is only loosely linked to the success of research projects.
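A stylized numerical example (the figures are invented for illustration) shows why sunk costs should not enter the decision. Suppose a company has already spent $8 million on a prototype, a further $2 million is needed to complete it, and the completed product is judged to have a 10 per cent chance of generating $5 million in revenue. The expected value of continuing is 0.1 × $5 million − $2 million = −$1.5 million, so the rational choice is to stop, whatever the $8 million already spent. Irrational escalation consists in letting the sunk $8 million, and the regret of admitting its loss, tip the decision towards continuing.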
The third aspect of decision making has to do with an irrational bias called myopia
(Elster, 1979) or temporal discounting. People tend to strongly devalue long-term gains
over time. They prefer small, immediate gains to big gains projected in the future. Usually,
this behaviour is associated with overconfidence and the illusion of control. Those who
discount time prefer the present, because they imagine themselves able to control output
and results beyond any chance estimations. In the case of industrial researchers, and of
entrepreneurial culture in general, the need to have results at once, to find fast solutions to
problems and to assure shareholders and the market that the company is stable and growing
seems to align with the propensity towards temporal discounting. Future results don't matter.
What is important is the 'now' and the ability to have new competitive products in
order to survive commercially. Financial heaviness, pressing time and well-defined results
may be responsible for the tendency to give more weight to the attainment of fast and
complete results at once, even at the risk of making products that in the future will be
defective, obsolete and easily overcome by competing products. In the case of academic
scientists, temporal discounting might be less strong. In fact, the three operational norms of
financial lightness, loose time and undefined results together with criticism and
universalism may help immunize them from myopic behaviours. Criticism is important
because it pushes the scientist to not be easily satisfied by quick and unripe results that can
be easily falsified by one’s peers. Universalism is important because the academician wants
to pursue results that are not valid locally but that can be recognized and accepted by the
entire community and that can increase his scientific reputation. In the academic
community, it is well known that a reputation is built through a lengthy process and can be
destroyed very quickly.
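Temporal discounting can be made explicit with the two standard discount functions (a sketch with illustrative numbers, not with estimates drawn from the literature). Under exponential discounting, a reward A available after a delay D has a present value V = A·e^(−rD); the hyperbolic form that better describes observed behaviour is V = A/(1 + kD). With k = 1 per year, for instance, a result worth 100 delivered in one year is worth only 100/(1 + 1) = 50 today, so a strongly discounting researcher prefers an immediate result worth 60; with a weaker discount rate such as k = 0.2, the delayed result is worth about 83 and the preference reverses.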
'NUDGE' SUGGESTIONS TO THE TRIPLE HELIX: THE JANUS SCIENTIST
AND PROXIMITY
The capitalization of knowledge is usually analyzed by recourse to external socioeconomic
factors. An example is the way in which the model of the Triple Helix is proposed. The
main determinants of the interaction between university, industry and government in
supporting innovation and of the emergence of hybrid organizations, entrepreneurial
universities, dual academic careers and so forth (Etzkowitz, 2008) are economic (mainly
industrial competitiveness and academic fundraising) and political (mainly regional
primacy). Economic and political forces are able to shape organizations and to change
institutional norms.
In contrast, the thesis of this chapter is that we can’t explain and predict the
organizational and institutional development of the capitalization of knowledge without
considering the internal dynamics driven by the epistemological and cognitive features of
knowledge. Various authors have pinpointed the importance of the features of knowledge
and cognition in shaping organizations. For James March and Herbert Simon (1993), bounded
rationality is the conceptual key to understanding the emergence of the organization, the
division of labour and of routines. When the human mind cannot process the amount of
information that it faces in complex problem solving, it needs to share this burden with
other minds. Different complementary roles in problem solving emerge. These roles
include a great amount of routine, that is, reasoning and decision making realized in
automatic or quasi-automatic ways. Moreover, according to Douglas North (2005), an
organization is characterized by its structure of institutional values and norms. These norms
and values, in other words background knowledge, are responsible for shaping the
organization and for pushing the actors to act and interact in particular ways.
If we follow those authors who wish to explain, predict and also intervene in the
organization, we should consider, primarily, variables such as complexity of information,
limited cognition and the background knowledge of the actors. It is pointless to try and
design organizations and social actions through top-down economic and political planning
without considering the microstructure of motivations, norms, cognitive resources and
knowledge. Nudge (Thaler and Sunstein, 2008) is a thesis that starts from these simple
observations. When a policy maker, a social planner and an economic planner want to reach
certain collective goals, they must single out the right institutional tools capable of nudging
the individual actors to behave coherently according to the planned aim. In order to nudge
the actors effectively, one must be able to consider their cognitive limitations and
motivations and the environmental complexity in which they are placed. If a policy maker
wants to devise successful policy recipes, he should reason as a cognitive rule ergonomist;
that is, he should extract the rules from the knowledge of the minds of the actors interacting
within a given initial environment.
In this chapter, I have analyzed the effects of the epistemological and cognitive
features of knowledge on the capitalization of knowledge. In particular, I have
hypothesized that some intrinsic features of knowledge can have effects on how knowledge
can be generated, transferred and developed in order to achieve commercial aims. These
effects, in turn, constrain the organizational and institutional forms aimed at capitalizing
knowledge. The following is a summary of the main features of the knowledge relevant for
capitalization:
1) Generality vs. singularity
When knowledge is composed of descriptive assertions (i.e. elementary propositions or
base assertions) it refers to singular empirical events without any claim of generality. As
was the case with the descriptive assertions of 18th century inventors, the predicative field
was limited to the empirical experience of inventors themselves. The justification, however,
is not only epistemological but cognitive as well. In fact, the conceptual categorization of
an empirical event changes with experience. Thus, we have a different mental
representation of the same object at different times. In any case, descriptive assertions have
no explanatory power and can't allow the enlargement of the field of innovation. The effect
of singularity on knowledge was a capitalization that quickly ran up against the law of
diminishing returns. Exploitation was rapid, and only slow and small incremental innovations were
generated from the original invention. Research was mainly conducted by individuals,
outside the university, with the participation of apprentices. The short-ranging nature of the
work and other institutional and economic factors (Rosenberg and Birdzell, 1986; Mowery
and Rosenberg, 1989; Mokyr, 2002a; Mokyr, 2002b) pushed industrial companies to try to
widen the scientific base of inventions in order to increase generality in knowledge. As we
saw in the Appert vs. Pasteur case, a theory explaining the causal mechanisms of food
preservation allowed the improvement of the same innovation and, moreover, its
application outside the original innovative field. The effect of general explanatory
knowledge was the development of a capitalization that overcomes the constraints of the
law of diminishing returns. Research needs to be conducted in laboratories, inside or in
collaboration with a university, concurrent with the birth and development of new applied
specializations (that is applications of a general theory to find solutions to practical
problems and to invent commercial products). Moreover, general theories often apply
across different natural and disciplinary domains. For example, DNA theory applies to
agriculture, the environment, human health, animal health and so on. Information theory
and technology can also be applied in many different areas, from genomics to industrial
robotics. The generality of the application of a theory requires interdisciplinary training and
research organizations that are able to single out promising areas of innovation and to
generate the proper corresponding technological knowledge. This implies an
interdisciplinary division of labour that can be afforded only by research universities and by
the largest of companies.
2) Complexity vs. Simplicity
Analytically, simple knowledge is characterized by a syntactic structure composed of few
assertions with few terms whose meaning is conceptually evident (because, for example,
they are empirically and directly linked to external objects like ‘cat’ or ‘dog’ that have a
well-defined categorization). A descriptive assertion, such as 'this crow is black', or an
empirical generalization, such as 'all crows are black', is an example of simple knowledge.
These analytical features correspond to cognitive ones. The semantic network representing
this knowledge is composed of a few interrelated nodes. Complexity, on the other hand, is
analytically represented by a great number of assertions, containing many terms whose
meaning is conceptually obscure (as, for example, when there are theoretical terms that
have indirect empirical meanings that derive from long linguistic chains, such as in the case
of quarks or black holes). Quantum mechanics and string theory are examples of complex
knowledge mainly from the point of view of the meaning of the terms. Linnaeus’ natural
taxonomy and Mendeleev’s periodic table of elements are examples mainly from the point
of view of the number of assertions they contain. Analytical complexity implies
computational overloading. The cognitive representation of a theory or of several theories
might correspond to an intricate semantic network with many small, interrelated and distant
nodes. For an individual mind, it is usually impossible to have a complete representation of
a complex theory, not to speak of several theories. The cognitive network will represent the
conceptual structure of the theory only partially. Usually, some mental model of a theory
will play the role of a heuristic device in reasoning and problem solving. The model serves
as a pictorial analogy of the theory and, therefore, does not ensure the completeness or
consistency of the problem solving results. It is evident from what I have previously stated
that knowledge simplicity doesn’t require organization in knowledge generation. An
individual mind can represent, process and compute knowledge and its consequences. The
more complex knowledge becomes, the greater the organizational division of labour needed
to cope with it. Only a network of interacting minds can have a complete representation,
process the relevant information and compute the deductive consequences of complex
knowledge. An organization should be shaped to give room to theoretical scientists
working on concepts, to experimental scientists working on bridge laws between theoretical
concepts and natural phenomena and to applied scientists working on technological
applications. Statisticians, mathematicians and experimental technicians will also play an
important role. Big Science projects such as the Los Alamos nuclear bomb, human genome
mapping, nuclear fusion and particle physics are examples of this collective problem
solving. When complexity is also linked to generality, the division of labour will be
reflected in the interdisciplinarity of the experts involved in collective problem solving.
Most companies will not be endowed with this level of expertise and will, therefore, always
rely more on academic support in applying knowledge to commercial aims. Consequently,
increasing complexity and generality means a growing “industrial” role for universities.
The Obama program for green technologies might be a future example of the generation
and capitalization of complex and general knowledge that will see universities playing a
central 'industrial' role.
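The contrast between simple and complex knowledge sketched above can be illustrated with a toy computational representation (a hypothetical sketch in Python, not part of the original argument): assertions are treated as links between concepts, and the resulting semantic network is simply counted. The point is only that the network of a single empirical generalization stays within the reach of an individual mind, whereas even a small fragment of a theory rapidly multiplies nodes and links.

# Toy illustration (hypothetical example) of "simple" vs. "complex" knowledge
# represented as semantic networks. Nodes are concepts; edges link concepts
# that appear together in an assertion.
from itertools import combinations

def semantic_network(assertions):
    """Build nodes and undirected edges from assertions, each given as the
    collection of concepts it relates."""
    nodes, edges = set(), set()
    for concepts in assertions:
        nodes |= set(concepts)
        edges |= {frozenset(pair) for pair in combinations(sorted(concepts), 2)}
    return nodes, edges

# Simple knowledge: one empirical generalization ('all crows are black').
simple = [("crow", "black")]

# Complex knowledge: a small, idealized theory fragment in which theoretical
# terms reach observables only through chains of further assertions.
complex_fragment = [
    ("quark", "hadron"), ("hadron", "proton"), ("proton", "nucleus"),
    ("nucleus", "atom"), ("atom", "spectral line"), ("spectral line", "observation"),
    ("quark", "colour charge"), ("colour charge", "gluon"), ("gluon", "hadron"),
]

for label, assertions in (("simple", simple), ("complex", complex_fragment)):
    nodes, edges = semantic_network(assertions)
    print(f"{label}: {len(nodes)} nodes, {len(edges)} edges")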
3) Explicitness vs. Tacitness
To capitalize knowledge, one should be able to sell it or use it to produce economic added
value. In both cases, knowledge must be completely understandable and reproducible by
both the inventor and others. In the latter case, knowledge must not lose part of its
meaning in transfer. When knowledge was mainly composed of descriptive assertions and
technical norms, it was highly idiosyncratic. Descriptive assertions corresponded to the
perceptual and conceptual apparatus of the inventor. Technical norms were represented by
competential know-how. Thus, knowledge was greatly tacit, and its transfer through
linguistic media almost impossible. The organizational centre of capitalization was the
inventor’s laboratory, where he attempted to transfer knowledge to apprentices through
face-to-face teaching and by doing and interacting. Selling patents was pointless without
'transfer by head' or proper apprenticeship. With the growth of science-based innovation the
situation changed substantially. In life sciences, for example, ontic knowledge is composed
of explanatory assertions, mainly theories and models. Technical norms are less represented
by competential know-how than by explicit condition-action rules. Thus, the degree of
tacitness seems, at first sight, to be less. Ontic knowledge explaining an invention may be
represented, explicitly, by general theories and models, and the process for reproducing the
invention would be little characterized by know-how. A patent might be sold because it
would allow complete knowledge transfer. Academic labs and companies might interact at
a distance, and there would be no need for university-industry proximity. The explicitness
of technological knowledge would, it was argued, become complete with the ICT revolution
(Cowan, David and Foray, 2000), which would even be able to automate know-how. As I have shown in
previous articles (Balconi, Pozzali, Viale, 2007; Pozzali and Viale, 2007), this optimistic
representation of the disappearance of tacit knowledge is an error. It considers tacitness
only at the level of competential know-how and does not account for the other two aspects
of tacitness, namely, background knowledge and cognitive rules. Background knowledge
not only includes social norms and values but also principles and categories that give
meaning to actions and events. Cognitive rules serve to apply reason to the data and to find
solutions to problems. Both tend to be individually variable. The knowledge represented in
a patent is, obviously, elliptical from this point of view. A patent can’t explicitly contain
background knowledge and cognitive rules used to reason and interpret information
contained in it. These irreducible tacit aspects of knowledge oblige technology generators
and users to interact directly in order to stimulate a convergent calibration of the conceptual
and cognitive tools needed to reason and interpret knowledge. This entails a stimulus
towards proximity between university and company and the creation of hybrid
organizations between them to jointly develop knowledge towards commercial aims.
4) Shared background knowledge vs. Unshared background knowledge
Norms and values used for action, together with principles and concepts used for
understanding, constitute background knowledge. Beyond knowledge transfer, shared
background knowledge is necessary for linguistic communication and for effective
collaboration in a group. The linguistic dimension has never been analyzed in knowledge
capitalization. Its importance is evident both in patent transfer and in research
collaboration. As I stated above, academic and industrial researchers are members of
different cultural communities and, therefore, have different background knowledge. In the
collaboration between academic and industrial researchers, the coordination between
meanings and understandings can be difficult if background knowledge is different. When
this is the case, as we have seen above, the effect on the various linguistic settings will
likely be the distortion of meaning and the creation of misunderstandings. Moreover,
difficulties in coordination will increase in settings that utilize intermediaries (such as the
members of the TTO of a university or of the TTA of a private company or government)
between the academic inventor and the potential industrial user (mediated settings in Clark,
1996, p. 5). In this case, there is decoupling of speaking. The academic researcher
formulates and gives meaning to the linguistic message (also in a written setting), while the
TT agent is merely a vocalizer of the message. Thus, there may be frequent distortion of the
original meaning, in particular, when the knowledge contains a great share of tacit
knowledge. This distortion is strengthened by the likely differences in background
knowledge between the TT agent and the other actors in the transfer. Finally, in the transfer
of technology, the complexity of having more than one domain of action can also exist. For
example, if the relation between an academic and industrial researcher is not face-to-face
but is instead mediated by an intermediary, there is an emergent second layer of discourse.
This is the layer of the story that is told by the intermediary about the original process and
techniques used to generate the technology invented by the academic researchers. All three
points show that common background knowledge is essential for reciprocal understanding
and that face-to-face communication is a prerequisite for minimizing the distortion of
meaning and the misunderstandings that can undermine effective knowledge transfer.
Organizations of knowledge capitalization must, therefore, emphasize the feature of
proximity between knowledge producers and users and support the creation of public
spaces for meeting and cultural exchange between members of universities and companies.
Moreover, universities primarily, but companies also, should promote the emergence of a
new professional figure, a researcher capable of cultural mediation between academic and
industrial background knowledge.
5) Shared cognitive style vs. Unshared cognitive style
Analytically, cognitive rules for inference are part of deontic knowledge. Cognitively, they
can be considered the emergent results of pragmatic knowledge. In any case, they are
influenced by norms and values contained in background knowledge, as was shown by
Nisbett (2003) in his study on American and East Asian ways of thinking. The hypothesis
of different cognitive rules generated by different background knowledge seems likely but
must still be confirmed empirically (Viale, Del Missier, Rumiati, Franzoni, forthcoming).
We will now look at some examples of these differences (analyzed in the pilot study of
Fondazione Rosselli, 2008). Time perception and the operational norm of loose time vs.
pressing time differentiate business-oriented academicians from entrepreneurial
researchers. For the latter, time is pressing, and it is important to find concrete results
quickly and not waste money. Their responses show a clear temporal discounting. The
business participants charge academicians with looking too far ahead and not caring enough
about the practical needs of the present. The short-term logic of the industrial researchers
seems to follow the Latin saying 'Primum vivere, deinde philosophari' ('first live, then philosophize'). For them, it is better to
concentrate their efforts on the application of existing models in order to obtain certain
results. The academic has the opposite impetus, that is, to explore boundaries and uncertain
knowledge. The different temporal perceptions are linked to risk assessment. The need to
obtain fast results for the survival of the company increases the perceived risk of the
money spent on R&D projects. In contrast, even if the academic participants are not pure
but business oriented, they don’t exhibit the temporal discounting phenomenon, and for
them risk is perceived in connection with scientific reputation inside the academic
community (the social norm of universalism). What is risky to the academic researchers is
the possibility of failing to gain scientific recognition (vestiges of academic values).
Academic researchers also are more inclined towards communitarianism than exclusivity
(vestiges of academic values). They believe knowledge should be open and public and not
used as exclusive private property to be monopolized. For all participants,
misunderstandings concerning time and risk are the main obstacles to collaboration.
University members accuse company members of being too short-sighted and overly
prudent in the development of new ideas; entrepreneurial participants charge university
members with being too high-minded and overly farsighted in innovation proposals. This
creates organizational dissonance in planning the milestones of the projects and in setting
the amount of time needed for the various aspects of research. Differences in cognitive
rules are a strong factor in creating dissonance among researchers. The likely solution to
this dissonance is the emergence in universities of a new research figure trained in close
contact with industrial labs. She should have the academic skills of her pure scientist
colleagues and, at the same time, knowledge of industrial cognitive styles and values.
Obviously, hybrid organizations can also play an important role, acting as a type of 'gym' in
which to train towards the convergence between cognitive styles and values.
In conclusion, if the main hypotheses of this paper are confirmed, we will know what the
main epistemological and cognitive determinants in capitalizing knowledge are. This will
give us clues on how to nudge (Thaler and Sunstein, 2008) the main stakeholders in
academy-industry relations to act to improve collaboration and knowledge transfer. In my
opinion, nudging should be the main strategy of public institutions and policy makers
wishing to strengthen the triple helix model of innovation.
For example, if results confirm the link between social values, operational norms
and cognitive style, it might be difficult to overcome the distances between pure academic
scientists and entrepreneurial researchers. What would be more reasonable would be to
support the emergence of a new kind of researcher. Together with the pure academic
researcher and the applied researcher, universities must promote a mestizo, a hybrid figure
that, like a two-faced Janus (Viale, 2009), is able to activate mentally two inconsistent sets
of values and operational norms, i.e. the academic and the entrepreneurial. These Janus
Scientists would not believe the norms, but would accept them as if they believed them
(Cohen, 1992). They would act as the cultural mediators and translators between the two
worlds. They should be members of the same department as the pure and applied scientists
and should collaborate with them as well as with the industrial scientists. A reciprocal
figure such as this is difficult to introduce into a company unless it is very large and
financially well endowed. In fact, commercial competitiveness is the main condition for
company survival. Time and risk are leading factors for competitiveness. And cognitive
style is strongly linked to them. This creates a certain rigidity when faced with the
possibility of change. Thus, adaptation should favour the side of university, where there is
more potential elasticity in shaping behaviour. Moreover, the two-faced Janus figure is
different from that involved in a TTO. It is a figure that should collaborate directly in
research activity with corporate scientists, whereas a member of a TTO has the function of
establishing the bridge between academics and the company. The first allows R&D
collaboration, whereas the second facilitates technology transfer. Empirical confirmation of
the emergence of these figures can be found in the trajectories of the development of
strongly science-based sectors such as biotechnologies, which have followed a totally
different path in America than in Europe (Orsenigo, 2001). While the American system is
characterized by a strong proximity between the industrial sector and the world of research,
with universities at the forefront in internalizing and taking on many functions typical
of the business world, in Europe universities have been much more reluctant to take on a
similar propulsive role.
A second nudge suggestion that may emerge from this paper, and in particular from
the growing generality and complexity of the knowledge involved in innovation, is the
importance of face-to-face interaction and proximity between universities and companies.
The need for proximity has been underlined in recent studies (Arundel and Geuna, 2004;
for an explanation according to the theory of complexity see Viale and Pozzali, in press).
Virtual clusters and meta districts can’t play the same role in innovation. Proximity and
face-to-face interactions are not only important for minimizing the tacitness bottleneck in
technology transfer, but face-to-face interaction is also fundamental to collaboration
because of the linguistic and pragmatic effect on understanding (see above). It also
increases trust, as has been shown by neuroeconomics (Camerer et al., 2005).
Proximity can also increase the respective permeability of social values and operational
norms. From this point of view, universities might promote the birth of open spaces of
discussion and comparison where academicians and business members might develop a
kind of learning by interacting. Lastly, proximity allows better interaction between
companies and the varied academic areas of expertise and knowledge resources. Indeed,
only the university has the potentiality to cope with the growing complexity and
interdisciplinarity of the new ways of generating innovation. Emergent and convergent
technologies require a growing division of expert labour that includes the overall
knowledge chain, from pure and basic research to development. Only a company that can
interact and rely on the material and immaterial facilities of a research university can find
proper commercial solutions in the age of hybrid technological innovation.
REFERENCES
Agrawal, A. and R. Henderson (2002), 'Putting patents in context: Exploring knowledge
transfer from MIT', Management Science, 48 (1), 44-60.
Anderson J.R. (1983), The Architecture of Cognition, Cambridge: Harvard University
Press.
Anderson, J.R. (1996), 'ACT: A simple theory of complex cognition', American
Psychologist, (51), 355-365.
Arundel, A. and A. Geuna (2004), 'Proximity and the use of public science by innovative
European firms', Economics of Innovation and New Technology, 13 (6), 559-580.
Balconi, M., Pozzali, A. and R. Viale (2007), 'The "codification debate" revisited: a
conceptual framework to analyze the role of tacit knowledge in economics', Industrial and
Corporate Change, 16 (5), 823-849.
Bar-Hillel, M. (1980), 'The base-rate fallacy in probability judgments', Acta
Psychologica, (44), 211-233.
Barsalou, L.W. (2000), 'Concepts: Structure', in A.E. Kazdin (ed.), Encyclopedia of
Psychology, vol. 2, Washington: American Psychological Association.
Beth, E. and J. Piaget (1961), Etudes d’Epistemologie Genetique, XIV: Epistemologie
Mathematique et Psychologie, Paris: PUF.
Braine, M.D.S. (1978), 'On the relation between the natural logic of reasoning and standard
logic', Psychological Review, (85), 1-21.
Broesterhuizen, E. and A. Rip (1984), 'No Place for Cudos', EASST Newsletter, 3.
Camerer, C., Loewenstein, G., and D. Prelec (2005), 'Neuroeconomics: how neuroscience
can inform economics', Journal of Economic Literature, 43 (1), 9-64.
Carruthers, P., Stich, S., and M. Siegal (eds) (2002), The cognitive basis of science,
Cambridge: Cambridge University Press.
Cheng, P.W., and R. Nisbett (1993), 'Pragmatic constraints on causal deduction', in R.
Nisbett (ed.), Rules for Reasoning, Hillsdale: Erlbaum.
Cheng, P.W. and K.J. Holyoak (1985), 'Pragmatic versus syntactic approaches to training
deductive reasoning', Cognitive Psychology, (17), 391-416.
Cheng, P.W. and Holyoak, K.J. (1989), 'On the natural selection of reasoning theories',
Cognition, (33), 285-313.
Clark, H.H. (1996), Using Language, Cambridge: Cambridge University Press.
Cleeremans A. (1995), 'Implicit Learning in the Presence of Multiple Cues', Proceedings of
the 17th Annual Conference of the Cognitive Science Society, Hillsdale: Erlbaum.
Cleeremans A., Destrebecqz A. and M. Boyer (1998), 'Implicit Learning: News from the
Front', Trends in Cognitive Science, (2), 406-416.
Cohen, J. (1992), An essay on belief and acceptance, Oxford: Blackwell.
Collins, A.M. and M.R. Quillian (1969), 'Retrieval time from semantic memory', Journal of
Verbal Learning and Verbal Behaviour, (8), 240-248.
Cowan, R., David, P.A. and D. Foray (2000), 'The Explicit Economics of Knowledge
Codification and Tacitness', Industrial and Corporate Change, (9), 211-253.
Dearborn, D.C., Simon, H.A. (1958), 'Selective perception: A note on the departmental
identifications of executives', Sociometry, 21 (June), 140-144.
Elster, J. (1979), Ulysses and the Sirens: Studies in Rationality and Irrationality,
Cambridge: Cambridge University Press.
Etzkowitz, H. (2008). The Triple Helix: University-Industry-Government Innovation In
Action, London: Routledge.
Fischhoff, B., Slovic, P., and S. Lichtenstein (1977), 'Knowing with certainty: the
appropriateness of extreme confidence', Journal of Experimental Psychology, (3), 552-564.
Fondazione Rosselli, (2008). Different Cognitive Styles between Academic and Industrial
Researchers: A Pilot Study, www.fondazionerosselli.it.
Giere, R.N. (1988), Explaining Science, Chicago: The Chicago University Press.
Goffman, E. (1981), Forms of Talk, Philadelphia: University of Pennsylvania Press.
Grice, H.P. (1989), Studies in the Way of Words, Cambridge (Mass.): Harvard University
Press.
Grossman, M., Smith, E.E., Koenig, P., Glosser, G., Rhee, J., and K. Dennis (2003),
'Categorization of object descriptions in Alzheimer's disease and frontotemporal
dementia: limitation in rule-based processing', Cognitive Affective and Behavioural
Neuroscience, 3 (2), 120-132.
Hinsz, V., Tindale, S., and D. Vollrath (1997), 'The emerging conceptualization of groups
as information processors', Psychological Bulletin, (96), 43-64.
Jackendoff, R. (2007), Language, Consciousness, Culture, Cambridge (Mass.): MIT Press.
Jackson, S. (1992), 'Team composition in organizational settings: Issues in managing an
increasingly diverse work force', in S. Worchel, W. Wood and J. Simpson (eds), Group
Process and Productivity, Newbury Park: Sage.
Johnson Laird, P. (2008), How We Reason, Oxford: Oxford University Press.
Johnson-Laird, P. (1983), Mental Models, Cambridge: Cambridge University Press.
Kahneman, D. (2003), Maps of Bounded Rationality, in T. Frangsmyr (ed.), Nobel Prizes
2002, Stockholm: Almquist and Wiksell.
Kahneman, D. and A. Tversky (1973), 'On the Psychology of Prediction', Psychological
Review, (80).
Kahneman, D. and A. Tversky (1979), 'Prospect theory: an analysis of decision under risk',
Econometrica, (47), 263-291.
Kahneman, D. and A. Tversky (1996), 'On the reality of cognitive illusions', Psychological
Review, (103), 582-591.
Kauffman, S. (1995), At Home in the Universe: The Search for the Laws of Self-Organization
and Complexity, New York: Oxford University Press.
Kuhn, T. (1962), The Structure of Scientific Revolutions, Chicago: University of Chicago
Press.
Kuznets, S. (1965), Economic Growth and Structure, New York: W.W. Norton.
Langer, E. (1973), 'Reduction of psychological stress in surgical patients', Journal of
Experimental Social Psychology, (11), 155-165.
Mackie, J. (1974), The cement of the universe. A study on causation, Oxford: Oxford
University Press.
Manz, C.C., and C.P. Neck (1995), 'Teamthink. Beyond the groupthink syndrome in
self-managing work teams', Journal of Managerial Psychology, (10), 7-15.
March, J. and H.A. Simon (1993), Organizations, 2nd edn, Cambridge (Mass.):
Blackwell Publishers.
Markman, A.B. and D. Gentner (2001), 'Thinking', Annual Review of Psychology, (52),
223-247.
Merton, R. (1973), The Sociology of Science. Theoretical and Empirical Investigations,
Chicago: University of Chicago Press.
Mitroff, I.I. (1974), The subject side of science, Amsterdam: Elsevier.
Mokyr, J. (2002a), The Gifts of Athena: Historical Origins of the Knowledge Economy
(Italian trans. 2004, I doni di Atena, Bologna: il Mulino), Princeton: Princeton University Press.
Mokyr, J. (2002b), 'Innovations in an Historical Perspective: Tales of Technology and
Evolution', in B. Steil, G. Victor and R. Nelson (eds), Technological Innovation and
Economic Performance, Princeton: Princeton University Press.
Moore, G.E. (1922), 'The Nature of Moral Philosophy', in Philosophical Studies, London:
Routledge & Kegan Paul.
Mowery, D. and N. Rosenberg (1989), Technology and the Pursuit of Economic Growth,
Cambridge: Cambridge University Press.
National Science Foundation (2002), Converging Technologies for Improving Human
Performance, Washington.
Nisbett R.E. (2003), The Geography of Thought, New York: The Free Press.
Nooteboom, B., Van Haverbeke, W., Duysters, G., Gilsing, V. and A. Van den Oord (2007),
'Optimal cognitive distance and absorptive capacity', Research Policy, (36), 1016-1034.
North, D.C. (2005), Understanding the Process of Economic Change (Italian trans. 2006,
Capire il processo di cambiamento economico, Bologna: il Mulino), Princeton: Princeton
University Press.
Nozick, R. (1990), 'Newcomb’s problem and two principles of choice', in P.K. Moser (ed.),
Rationality in Action: Contemporary Approaches, New York: Cambridge University Press.
Orsenigo, L. (2001), 'The (Failed) Development of a Biotechnology Cluster: The Case of
Lombardy', Small Business Economics, 17 (1-2), 77-92.
Politzer, G. (1986), 'Laws of Language Use and Formal Logic', Journal of Psycholinguistic
Research, (15), 47-92.
Politzer, G. and A. Nguyen-Xuan (1992), 'Reasoning about Promises and Warnings:
Darwinian Algorithms, Mental Models, Relevance Judgements or Pragmatic Schemas?',
Quarterly Journal of Experimental Psychology, (44), 401-421.
Pozzali, A. and R. Viale (2007), 'Cognition, Types of "Tacit Knowledge" and Technology
Transfer', in R. Topol and B. Walliser (eds), Cognitive Economics: New Trends, Oxford:
Elsevier.
Reber, A.S. (1993), Implicit Learning and Tacit Knowledge. An Essay on the Cognitive
Unconscious, Oxford: Oxford University Press.
Rosenberg, N. and L.E. Birdzell (1986), How The West Grew Rich. The Economic
Transformation of the Industrial World (Italian trans. 1988, Come l’Occidente è diventato ricco,
Bologna: il Mulino), New York: Basic Books.
Rosenberg, N. and D. Mowery (1998), Paths of Innovation. Technological Change in
20th-Century America (Italian trans. 2001, Il Secolo dell’Innovazione, Milano: EGEA), Cambridge:
Cambridge University Press.
Ross, B.H. (1996), 'Category Learning as Problem Solving', in Medin D.L. (ed.), The
Psychology of Learning and Motivation: Advances in research and theory, (35), 165-192.
Rumain, B., Connell, J., and M.D.S. Braine (1983), 'Conversational comprehension
processes are responsible for fallacies in children as well as in adults: If is not the
biconditional', Developmental Psychology, (19), 471-481.
Ryle, G. (1949/1984), The Concept of Mind, Chicago: University of Chicago Press.
Searle, J. (1995), The Construction of Social Reality, New York: Free Press.
Searle, J. (2008), Philosophy in a New Century, Cambridge: Cambridge University Press.
Shinn, T. (1983), 'Scientific disciplines and organizational specificity: the social and
cognitive configuration of laboratory activities', Sociology of Science, (4), 239-264.
Siegel, D., Waldman, D. and A. Link (1999), 'Assessing the Impact of Organizational
Practices on the Productivity of University Technology Transfer Offices: An Exploratory
Study', NBER Working Paper #7256.
Sloman, S.A. (1996), 'The empirical case for two systems of reasoning', Psychological
Bulletin, (119), 3-22.
Smith, E.E. and S.M. Kosslyn (2007), Cognitive Psychology. Mind and Brain, Upper
Saddle River (NJ): Prentice Hall.
Sperber, D. and D. Wilson (1986), Relevance. Communication and Cognition, Oxford:
Oxford University Press.
Stanovich, K. (1999), Who Is Rational? Studies of Individual Differences in Reasoning,
Mahwah (NJ): Lawrence Erlbaum Associates.
Sternberg, R.J. (2009), Cognitive Psychology, Belmont: Wadsworth.
Thaler, R. (1999), 'Mental accounting matters', Journal of Behavioural Decision Making,
(12), 183-206.
Thaler, R.H. and C.R. Sunstein (2008), Nudge. Improving Decisions About Health, Wealth,
and Happiness, New Haven: Yale University Press.
Tversky, A. and D. Kahneman (1980), 'Causal Schemas in Judgements under Uncertainty',
in M. Fishbein (ed.), Progress in Social Psychology, Hillsdale (NJ): Erlbaum.
Tversky, A. and D. Kahneman (1982a), 'Judgements of and by Representativeness', in D.
Kahneman, P. Slovic and A. Tversky (eds), Judgement under Uncertainty: Heuristics and
Biases, Cambridge: Cambridge University Press.
Tversky, A. and D. Kahneman (1982b), 'Evidential Impact of Base Rate', in D. Kahneman,
P. Slovic and A. Tversky (eds), Judgement under Uncertainty: Heuristics and Biases,
Cambridge: Cambridge University Press.
Tversky, A. and D. Kahneman (1992), 'Advances in prospect theory: cumulative
representation of uncertainty', Journal of Risk and Uncertainty, (5), 547-567.
Van Knippenberg, D. and M.C. Schippers (2007), 'Work group diversity', Annual Review
of Psychology, (58), 515-541.
Viale, R. (1991), Metodo e società nella scienza, Milano: Franco Angeli.
Viale, R. (2001), 'Reasons and Reasoning: What comes First', in R. Boudon, P.
Demeulenaere and R. Viale (eds), L’explication des normes sociales, Paris: PUF.
Viale, R. (2008), 'Origini storiche dell’innovazione permanente', in R. Viale (ed.), La
cultura dell’innovazione, Milano: Editrice Il Sole 24 Ore.
Viale, R. (2009), Different Cognitive Styles between Academic and Industrial Researchers:
An Agenda of Empirical Research, New York: Columbia University,
http://www.italianacademy.columbia.edu/publications_working.html#0809.
Viale, R. and A. Pozzali (in press), 'Complex Adaptive Systems and the Evolutionary
Triple Helix', Critical Sociology.
Viale, R., Del Missier, F., Rumiati, R., and C. Franzoni (forthcoming), Different Cognitive
Styles between Academic and Industrial Researchers: An Empirical Study.
Von Wright, G.H. (1963), Norms and Action. A Logical Enquiry, London: Routledge.
Wason, P.C. (1960), 'On the failure to eliminate hypotheses in a conceptual task', Quarterly
Journal of Experimental Psychology, (12), 129-140.
Ziman, J.M. (1987), 'L’individuo in una professione collettivizzata', Sociologia e Ricerca
Sociale, (24), 9-30.
i. Janus is a two-faced god of the Roman tradition. One face looks to the past (or to
tradition) and the other looks toward the future (or to innovation).
ii. From a formal point of view, the descriptive assertion may be expressed in the following
way:
(∃x, t) (a → b)
This means that there exists an event x at a given time t such that, if x is perceived by the
agent a, then it has the features b. Contrary to the purely analytical interpretation, this
formulation is epistemic, that is, it includes the epistemic actor a who is responsible for
perceiving the features b of event x.
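For readers who prefer an explicit predicate rendering, the same idea can be written out in
standard first-order notation; the predicate symbols Perc and Feat below are hypothetical
labels introduced here only for illustration and are not part of the original formulation:
\[
\exists x \,\exists t \,\bigl(\mathrm{Perc}(a, x, t) \rightarrow \mathrm{Feat}_{b}(x)\bigr)
\]
that is, there exist an event x and a time t such that, if the agent a perceives x at t, then x
has the features b.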
iii. In theory, this variability could be overcome by artificial epistemic agents without
plasticity in conceptual categorization. Some Artificial Intelligence systems used at the
industrial level have these features.
iv. In actual fact, these cases may generate 'compound techniques' (Mokyr, 2002b, p. 35)
based on new combinations of known techniques (for example the use of a metal container
instead of a glass bottle) without knowledge of the causal mechanisms behind the
phenomenon. This type of innovation is short-lived, however, and soon exhausts its
economic potential.
5. The organizational impact of the cognitive features of scientific knowledge has been
singled out in some studies on scientific disciplines and specializations. For example, the
different organization of experimental physicists compared with that of organic chemists or
biologists is explained mainly by the different complexity of their knowledge (Shinn, 1983).
6. Deontic logic (the name was proposed by Broad to von Wright) uses two operators: O for
obligation and P for permission. The attempt to build a logic analogous to propositional
logic, based on these two operators as prefixes to names of acts A, B, C, and so on, has been
strongly criticized by many, among them von Wright himself (1963).
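As a minimal illustration of the notation (the interdefinability stated here is the standard one
of deontic logic in general, not a claim specific to von Wright's 1963 discussion), the two
operators applied to an act A are mutually definable:
\[
P(A) \equiv \neg O(\neg A), \qquad O(A) \equiv \neg P(\neg A)
\]
so that an act is permitted exactly when its omission is not obligatory.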
7. It is not clear whether the process is linear or circular and recursive. In the latter case,
cognitive rules might become part of background knowledge, and this could change their
role in pragmatic experience and in reasoning and decision-making processes.
8. The analysis refers mainly to the academic environment of the universities of Continental
Europe.
9. Nudge (Thaler and Sunstein, 2008) is the title of a book and a metaphor for a way of
gently pushing social actors towards certain collective aims according to their cognitive
characteristics.