
Building Mimetic Minds
From the Prehistoric Brains to the Universal Machines
Lorenzo Magnani
Department of Philosophy and Computational Philosophy Laboratory,
University of Pavia, Pavia, Italy and Department of Philosophy, Sun Yat-sen University,
Guangzhou (Canton), P.R. China
[email protected]
Abstract
The imitation game between man and machine, proposed by Turing in 1950, is a game between a discrete and a
continuous system. In the framework of recent studies on embodied and distributed cognition and on
prehistoric brains, Turing’s “discrete-state machine” can be seen as an externalized cognitive mediator
that constitutively integrates human cognitive behavior. Through the description of a subclass of these cognitive
mediators, which I call “mimetic minds”, the presentation will deal with some of their cognitive and epistemological
aspects and with the cognitive role played by the manipulations of the environment that includes them.
1 Turing’s Unorganized Machines
1.1 Logical, Practical, Unorganized, and Paper Machines
Aiming at building intelligent machines, Turing first of all provides an analogy between the human brain and
computational machines. In “Intelligent Machinery”, written in 1948 [Turing, 1969], he maintains that
“the potentialities of human intelligence can only be realized if suitable education is provided” [p. 3].
The concept of the unorganized machine is then introduced, and it is maintained that the infant human
cortex is of this nature. The argument is indeed devoted to showing how such machines can be
educated by means of “rewards and punishments”.
Unorganized machines are listed among different kinds of existing machinery:
- (Universal) Logical Computing Machines (LCMs). An LCM is a kind of discrete machine Turing
introduced in 1937 that has
[…] an infinite memory capacity obtained in the form of an infinite tape marked out into squares on each of which a
symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The
machine can alter the scanned symbol and its behavior is in part described by that symbol, but the symbols on the tape
elsewhere do not affect the behavior of the machine. However, the tape can be moved back and forth through the
machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually
have an innings [Turing, 1969, p. 6].
This machine is called Universal if it is “such that if the standard description of some other LCM is
imposed on the otherwise blank tape from outside, and the (universal) machine then set going it will
carry out the operations of the particular machine whose description is given” [p. 7]. The importance of
this machine resides in the fact that we do not need an infinity of different machines doing
different jobs. A single one suffices: it is only necessary “to program” the universal machine to do these
jobs.
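The point can be illustrated with a toy simulator (a sketch in modern Python, with names of my own choosing, of course not Turing’s notation): one generic loop suffices, and the “standard description” of any particular machine is just data handed to it.

```python
from collections import defaultdict

def run_tm(table, tape, state="start", blank="_", max_steps=1000):
    """Run the machine described by `table`, which maps
    (state, scanned symbol) -> (symbol to write, head move, next state)."""
    cells = defaultdict(lambda: blank, enumerate(tape))  # unbounded tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol, move, state = table[(state, cells[head])]
        cells[head] = symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# The "standard description" of one particular machine: invert a binary string.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(invert, "10110"))  # -> 01001
```

Programming a different job means supplying a different table to the same `run_tm` loop, which is exactly the sense in which “a single one suffices”.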
- (Universal) Practical Computing Machines (PCMs). PCMs are machines that put their stored
information in a form very different from the tape form. Given the fact that in LCMs the number of steps
involved tends to be enormous because of the arrangement of the memory along the tape, in the case of
PCMs “by means of a system that is reminiscent of a telephone exchange it is made possible to obtain a
piece of information almost immediately by ‘dialing’ the position of this information in the store” [p. 8].
Turing adds that “nearly” all the PCMs under construction have the fundamental properties of the
Universal Logical Computing Machines: “given any job which could have been done on an LCM one can
also do it on one of these digital computers” (ibid.), so we can speak of Universal Practical Computing
Machines.
- Unorganized Machines. Machines that are largely random in their constructions are called “Unorganized
Machines”: “So far we have been considering machines which are designed for a definite purpose
(though the universal machines are in a sense an exception). We might instead consider what happens
when we make up a machine in a comparatively unsystematic way from some kind of standard
components. […] Machines which are largely random in their construction in this way will be called
‘Unorganized Machines’. This does not pretend to be an accurate term. It is conceivable that the same
machine might be regarded by one man as organized and by another as unorganized.” [p. 9]. They are
machines made up from a large number of similar units. Each unit is endowed with two input terminals
and has an output terminal that can be connected to the input terminals of zero or more other units. An
example of the so-called unorganized A-type machine, with all units connected to a synchronizing unit
from which synchronizing pulses are emitted at more or less equal intervals of time, is given in Figure 1
(the times when the pulses arrive are called moments, and each unit is capable of being in one of two states at
each moment). The so-called A-type unorganized machines are considered very interesting because they
are the simplest model of a nervous system with a random arrangement of neurons (cf. the following
section “Brains as unorganized machines”).
Figure 1. (In Turing, 1969).
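Turing’s A-type units behave as two-input NAND gates updated at synchronized moments; the following minimal simulation (an illustrative sketch, with function names of my own choosing) wires a handful of such units together at random and lets them run.

```python
import random

def make_a_type(n_units, seed=0):
    """Wire each unit's two input terminals to randomly chosen units:
    a machine "largely random in its construction"."""
    rng = random.Random(seed)
    return [(rng.randrange(n_units), rng.randrange(n_units))
            for _ in range(n_units)]

def moment(wiring, states):
    """One synchronizing pulse: each unit's new state is the NAND of the
    states of its two input units at the previous moment."""
    return [int(not (states[a] and states[b])) for a, b in wiring]

wiring = make_a_type(8)
states = [0, 1, 0, 1, 1, 0, 1, 0]   # each unit is in one of two states
for t in range(5):
    states = moment(wiring, states)
    print(t, states)
```

Nothing in the construction is designed for a purpose; whatever organized behavior emerges depends entirely on the random wiring, which is the sense in which such a machine is “unorganized”.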
- Paper Machines. “It is possible to produce the effect of a computing machine by writing down a set of
rules of procedure and asking a man to carry them out. […] A man provided with paper, pencil and
rubber, and subject to strict discipline, is in effect a universal machine” [p. 9]. Turing calls this kind of
machine “Paper Machine”.
1.2 Continuous, Discrete, and Active Machines
The machines described above are all discrete machines, because it is possible to describe their possible
states as a discrete set, the motion of the machines occurring by jumping from one state to another.
Turing remarks that all machinery can be regarded as continuous (where the states form a continuous
manifold and the behavior of the machine is described by a curve on this manifold), but “when it is
possible to regard it as discrete it is usually best to do so”. Moreover, machinery is called “controlling”
if it only deals with information, and “active” if it aims at producing some definite physical effect. A
bulldozer will be a continuous and active machine, a telephone continuous and controlling. But
brains too can be considered machines, and they are – Turing says “probably” – continuous and controlling,
but “very similar to much discrete machinery” [p. 5].
Brains very nearly fall into this class [discrete controlling machinery – when it is natural to describe its possible states as
a discrete set] and there seems every reason to believe that they could have been made to fall genuinely into it without
any change in their essential properties. However, the property of being “discrete” is only an advantage for the theoretical
investigator, and serves no evolutionary purpose, so we could not expect Nature to assist us by producing truly “discrete
brains” (p. 6).
Brains can thus be treated as machines, and they can also be considered discrete machines. The
epistemological reason is clear: this is just an advantage for the “theoretical investigator” who aims at
knowing what intelligent machines are, but it would certainly not be an evolutionary advantage. “Real”
human brains are of course continuous systems; only “theoretically” can they be treated as discrete.
Following Turing’s perspective we have derived two new achievements about machines and
intelligence: brains can be considered machines, and the simplest nervous systems, with a random
arrangement of neurons, can be considered unorganized machines, in both cases with the property of
being “discrete”.
1.3 Mimicking Human Education
Turing adds:
The types of machine that we have considered so far are mainly ones that are allowed to continue in their own way for
indefinite periods without interference from outside. The universal machines were an exception to this, in that from time
to time one might change the description of the machine which is being imitated. We shall now consider machines in
which such interference is the rule rather than the exception [p. 11].
Screwdriver interference occurs when parts of the machine are removed and replaced with others, giving rise
to completely new machines. Paper interference occurs when the mere communication of information to the
machine modifies its behavior. It is clear that in the case of the universal machine paper interference
can be as useful as screwdriver interference: we are interested in this kind of interference. We can say
that each time an interference occurs the machine is probably changed. It has to be noted that paper
interference provides information that is both “external” and “material” (further considerations on the
status of this information are given in section 5 below).
Turing thought that the fact that human beings have made machinery able to imitate various small parts of a
man was a good reason to believe in the possibility of building thinking machinery: trivial examples
are the microphone for the ear and the television camera for the eye. What about the nervous system?
We can copy the behavior of nerves with suitable electrical models, and the electrical circuits which are
used in electronic computing machinery seem to have the essential properties of nerves, because they are
able to transmit information and to store it.
Education in human beings can model the “education of machinery”: “Mimicking education, we should hope
to modify the machine until it could be relied on to produce definite reactions to certain commands” [p.
14]. A graduate has had interactions with other human beings for twenty years or more, and at the end of
this period “a large number of standard routines will have been superimposed on the original pattern of
his brain” [ibid.].
Turing maintains that
1) in human beings the interaction is mainly with other men, and the receiving of visual and other stimuli
constitutes the main form of interference;
2) it is only when a man is “concentrating” that he approximates a machine without interference;
3) even when a man is concentrating his behavior is mainly conditioned by previous interference.
2 Brains as Unorganized and Organized Machines
2.1 The Infant Cortex as an Unorganized Machine
In many unorganized machines, when a configuration1 is reached and possible interference is suitably
constrained, the machine behaves as an organized (and even universal) machine for a definite purpose.
Turing provides the example of a B-type unorganized machine with sufficient units, for which we can find
particular initial conditions able to make it a universal machine, also endowed with a given storage
capacity. The setting up of these initial conditions is called “organizing the machine”, which is indeed seen as a
kind of “modification” of a preexisting unorganized machine through external interference.
The infant brain can be considered an unorganized machine. Given the analogy previously established (cf.
subsection 1.1 above, “Logical, Practical, Unorganized, and Paper Machines”), what are the events that
modify it into an organized universal brain/machine? “The cortex of an infant is an unorganized
machinery, which can be organized by suitable interference training. The organization might result in
the modification of the machine into a universal machine or something like it. […] This picture of the
cortex as an unorganized machinery is very satisfactory from the point of view of evolution and
genetics.” [p. 16]. The presence of human cortex is not meaningful in itself: “[…] the possession of a
human cortex (say) would be virtually useless if no attempt was made to organize it. Thus if a wolf by a
mutation acquired a human cortex there is little reason to believe that he would have any selective
advantage” [ibid.]. Indeed the exploitation of a big cortex (that is, its possible organization) requires a
suitable environment: “If however the mutation occurred in a milieu where speech had developed
(parrot-like wolves), and if the mutation by chance had well permeated a small community, then some
selective advantage might be felt. It would then be possible to pass information on from generation to
generation” [ibid.].
Hence, organizing human brains into universal machines strongly relates to the presence of
1) speech (even if only at the rudimentary level of parrot-like wolves)
2) and a social setting where some “techniques” are learnt (“the isolated man does not develop any
intellectual power. It is necessary for him to be immersed in an environment of other men, whose
techniques he absorbs during the first twenty years of his life. He may then perhaps do a little research
of his own and make a very few discoveries which are passed on to other men. From this point of view
the search for new techniques must be regarded as carried out by the human community as a whole, rather
than by individuals” [p. 23]).
This means that a big cortex can provide an evolutionary advantage only in the presence of that massive
storage of information and knowledge on external supports which only an already developed small
community can possess. Turing himself considers this picture rather speculative, but evidence from
paleoanthropology can support it, as I will describe in the following section.
Moreover, the training of a human child depends on a system of rewards and punishments, which suggests
that organization can occur only through two inputs. The example of an unorganized P-type machine,
which can be regarded as an LCM without a tape, and which is largely incompletely described, is given. Through
suitable stimuli of pleasure and pain (and the provision of an external memory) the P-type machine can
become a universal machine [p. 20].
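A loose caricature of this two-input education can be sketched as follows (a toy construction of my own, not Turing’s actual P-type specification): undetermined entries in the machine’s table are filled by tentative random choices; a “pain” stimulus cancels a tentative choice, while a “pleasure” stimulus makes it permanent.

```python
import random

class PTypeToy:
    """Toy machine educated through two inputs: pleasure and pain."""
    def __init__(self, seed=1):
        self.rng = random.Random(seed)
        self.table = {}   # situation -> (action, permanently fixed?)

    def act(self, situation):
        if situation not in self.table:        # undetermined entry:
            self.table[situation] = (self.rng.choice([0, 1]), False)
        return self.table[situation][0]        # try a tentative choice

    def pleasure(self, situation):             # reward fixes the choice
        action, _ = self.table[situation]
        self.table[situation] = (action, True)

    def pain(self, situation):                 # punishment cancels it
        if not self.table[situation][1]:
            del self.table[situation]

# "Educate" the machine to echo its input bit.
machine = PTypeToy()
while len(machine.table) < 2 or not all(f for _, f in machine.table.values()):
    for s in (0, 1):
        if machine.act(s) == s:
            machine.pleasure(s)
        else:
            machine.pain(s)
print(machine.table)   # both responses now fixed to the correct action
```

Only rewarded choices survive, so whatever the random tentative guesses were, the machine ends up reliably producing “definite reactions to certain commands”.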
When the infant brain is transformed into an intelligent one, both discipline and initiative are acquired: “to
convert a brain or machine into a universal machine is the extremest form of discipline. […] But
discipline is certainly not enough in itself to produce intelligence. That which is required in addition we
call initiative. […] Our task is to discover the nature of this residue as it occurs in man, and try and copy
it in machines” [p. 21].
1 A configuration is a state of a discrete machine.
Examples of problems requiring initiative are the following: “Find a number n such that…”, “see if you
can find a way of calculating the function which will enable us to obtain the values for arguments….”.
The problem is equivalent to that of finding a program to put on the machine in question.
We have seen how a brain can be “organized”; but what is the relation of that brain to the idea of the
“mimetic mind”?
3 From the Prehistoric Brains to the Universal Machines
I have said that a big cortex can provide an evolutionary advantage only in the presence of a massive storage
of information and knowledge on external supports which only an already developed small community of
human beings can possess. Evidence from paleoanthropology seems to support this perspective. Some
research in cognitive paleoanthropology teaches us that high-level and reflective consciousness, in terms
of thoughts about our own thoughts and about our feelings (that is, consciousness not merely considered
as raw sensation), is intertwined with the development of modern language (speech) and material
culture. From 250,000 years ago onwards, several hominid species had brains as large as ours today, but their
behavior showed no sign of art or symbolic activity. If we consider high-level consciousness as related
to a high-level organization – in Turing’s sense – of the human cortex, its origins can be related to the active
role of environmental, social, linguistic, and cultural aspects.
Handaxes were made by early humans: they first appeared 1.4 million years ago and were still being made by some of
the Neanderthals in Europe just 50,000 years ago. The making of handaxes is closely intertwined with
the development of consciousness. Many of the needed capabilities form part of an evolved psychology
that appeared long before the first handaxes were manufactured. Consequently, it seems humans were
pre-adapted for some of the components required to make handaxes [Mithen, 1996, 1999]:
1. imposition of symmetry (already evolved through predator escape and social interaction). Symmetry was
an unintentional by-product of the bifacial knapping technique, but it was also deliberately imposed in other
cases. It is also well known that the attention to symmetry may have developed through social
interaction and predator escape, as it may allow one to recognize that one is being directly stared at
[Dennett, 1991]. It seems that “Hominid handaxes makers may have been keying into this attraction to
symmetry when producing tools to attract the attention of other hominids, especially those of the
opposite sex” [Mithen, 1999, p. 287];
2. understanding fracture dynamics (for example evident from Oldowan tools and from nut cracking by
chimpanzees today);
3. ability to plan ahead (modifying plans and reacting to contingencies, such as unexpected flaws in the
material and mis-hits), already evident in the minds of Oldowan tool makers and in chimpanzees;
4. high degree of sensory-motor control: the origin of this capability is usually traced back to
encephalization (the increased number of nerve tracts, and of the integration between them, allows for
the firing of smaller muscle groups) and bipedalism (which requires a more complex, integrated, highly
fractionated nervous system, which in turn presupposes a larger brain).
The combination of these four resources produced the birth of what Mithen calls the technical intelligence
of the early human mind, which is consequently related to the construction of handaxes. Indeed handaxes indicate
high intelligence and good health. They cannot be compared to the artefacts made by animals, like
honeycombs or spider webs, which derive from the iteration of fixed actions that do not require
consciousness and intelligence.
3.1 Private Speech and Fleeting Consciousness
Two central factors play a fundamental role in the combination of the four resources above:
- the exploitation of private speech (speaking to oneself) to trail between planning, fracture
dynamics, motor control, and symmetry (in children, too, there is a kind of private muttering which
makes explicit what is implicit in the various abilities);
- a good degree of fleeting consciousness (thoughts about thoughts).
At the same time, these two aspects played a fundamental role in the development of consciousness and
thought:
So my argument is that when our ancestors made handaxes there were private mutterings accompanying the crack of
stone against stone. Those private mutterings were instrumental in pulling the knowledge required for handaxe
manufacture into an emergent consciousness. But what type of consciousness? I think probably one that was fleeting:
one that existed during the act of manufacture and that did not endure. One quite unlike the consciousness about one’s
emotions, feelings, and desires that were associated with the social world and that probably were part of a completely
separated cognitive domain, that of social intelligence, in the early human mind [p. 288].
This use of private speech can be certainly considered a “tool” for organizing brains and so for
manipulating, expanding, and exploring minds, a tool that probably evolved with another: talking to
each other. Both private and public language act as tools for thought and play a central role in the
evolution of consciousness.
3.2 Material Culture
Another tool appeared in the latter stages of human evolution that played a great role in the evolution
of minds into mimetic minds, that is, in a further organization of human brains. Handaxes are at the birth of
material culture, so that new cognitive chances can co-evolve:
- the minds of some early humans, like the Neanderthals, were constituted by relatively isolated
cognitive domains, which Mithen calls different intelligences, probably endowed with different degrees
of consciousness about the thoughts and knowledge within each domain (natural history
intelligence, technical intelligence, social intelligence). These isolated cognitive domains became
integrated, also taking advantage of the role of public language;
- degrees of high-level consciousness appear; human beings need thoughts about thoughts;
- social intelligence and public language arise.
It is extremely important to stress that material culture is not just the product of this massive cognitive
change but also a cause of it. “The clever trick that humans learnt was to disembody their minds into the
material world around them: a linguistic utterance might be considered as a disembodied thought. But
such utterances last just for a few seconds. Material culture endures” [p. 291].
In this perspective we acknowledge that material artefacts are tools for thought, just as language is: tools for
exploring, expanding, and manipulating our own minds. In this regard the evolution of culture is
inextricably linked with the evolution of consciousness and thought.
The early human brain becomes a kind of universal “intelligent” machine, extremely flexible, so that we
no longer needed different “separated” intelligent machines doing different jobs. A single one suffices.
Just as the engineering problem of producing various machines for various jobs is replaced by the office
work of “programming” the universal machine to do these jobs, so the different intelligences become
integrated in a new universal device endowed with a high-level type of consciousness.
From this perspective the expansion of the minds is at the same time a continuous process of
disembodiment of the minds themselves into the material world around them. In this regard the
evolution of the mind is inextricably linked with the evolution of large, integrated, material cognitive
systems. In the following sections I will illustrate this extraordinary interplay between human brains and
the cognitive systems they make, which is at the origin of the first interesting features of the modern
human mind.
4 The Disembodiment of Mind
A wonderful example of the cognitive effects of the disembodiment of mind is the carving of what most
likely is a mythical being from the last ice age, 30,000 years ago: a half-human/half-lion figure carved
from mammoth ivory found at Hohlenstein-Stadel, Germany.
An evolved mind is unlikely to have a natural home for this being, as such entities do not exist in the natural world: so
whereas evolved minds could think about humans by exploiting modules shaped by natural selection, and about lions by
deploying content-rich mental modules moulded by natural selection, and about other lions by using other content-rich
modules from the natural history cognitive domain, how could one think about entities that were part human and part
animal? Such entities had no home in the mind [p. 291].
A mind consisting of different “separated intelligences” (for instance “thinking about animals” as
separated from “thinking about people”) cannot come up with such an entity. The only way is to extend the
mind into the material world, exploiting rocks, blackboards, paper, ivory, and writing, painting, and
carving: “artefacts such as this figure play the role of anchors for ideas that have no natural home within
the mind; for ideas that take us beyond those that natural selection could enable us to possess” (p. 291)
(cf. Figure 2).
In the case of our figure we are faced with an anthropomorphic thinking created by the material
representation, which serves to anchor the cognitive representation of a supernatural being. In this case
material culture disembodies thoughts that would otherwise soon disappear, without being transmitted to
other human beings. The early human mind possessed two separated intelligences for thinking about
animals and people. Through the mediation of material culture the modern human mind can
creatively come to “internally” think about animals and people at the same time.
Figure 2. (In Mithen, 1999).
Artefacts as external objects allowed humans to loosen and cut those chains on our unorganized brains
imposed by our evolutionary past – chains that had always limited the brains of other human beings, such as
the Neanderthals. Loosening chains and securing ideas to external objects was a way to re-organize brains
as universal machines for thinking. The role of external representations and mediators is still important
in human reasoning and in computational modeling. I have devoted part of my research to illustrating their role
at the epistemological and ethical level [Magnani, 2001, 2005].
5 Mimetic and Creative Representations
5.1 External and Internal Representations
We have said that through the mediation of material culture the modern human mind can creatively
come to internally think about animals and people at the same time. We can also account for this
process of disembodiment from an interesting cognitive point of view.
I maintain that representations can be external and internal. We can say that
– external representations are formed by external materials that express (through reification) concepts
and problems that do not have a natural home in the brain;
– internalized representations are internal re-projections, a kind of recapitulation (learning), of
external representations in terms of neural patterns of activation in the brain. They can sometimes be
“internally” manipulated like external objects and can originate new, internally reconstructed
representations through the neural activity of transformation and integration.
This process explains why human beings seem to perform both computations of a connectionist type,
such as the ones involving representations as
– (I Level) patterns of neural activation that arise as the result of the interaction between body and
environment (an interaction that is extremely fruitful for creative results), suitably shaped by
evolution and individual history: pattern completion or image recognition;
and computations that use representations as
– (II Level) derived combinatorial syntax and semantics dynamically shaped by the various external
representations and reasoning devices found or constructed in the environment (for example, geometrical
diagrams in mathematical creativity); they are neurologically represented contingently as patterns of
neural activation that “sometimes” tend to become stabilized structures, to become fixed, and so to
belong permanently to the I Level above.
The I Level originates those sensations (they constitute a kind of “face” we think the world has) that
provide room for the II Level to reflect the structure of the environment and, most importantly, that can
follow the computations suggested by these external structures. We can now conclude that the
growth of the brain, and especially synaptic and dendritic growth, is profoundly determined by the
environment.
When fixation is reached, the patterns of neural activation no longer need a direct stimulus from the
environment for their construction. In a certain sense they can be viewed as fixed internal records of
external structures that can exist even in the absence of such external structures. The patterns of neural
activation that constitute the I Level Representations always keep a record of the experience that
generated them and thus always carry the II Level Representation associated with them, even if in a
different form: the form of memory rather than of vivid sensorial experience. Now, the human
agent, via neural mechanisms, can retrieve these II Level Representations and use them as internal
representations, or use parts of them to construct new internal representations very different from the
ones stored in memory (cf. also [Gatti and Magnani, 2005]).
Human beings delegate cognitive features to external representations because in many problem-solving
situations the internal computation would be impossible, or would involve a very great effort, because
of the human mind’s limited capacity. First a kind of alienation is performed; second, a recapitulation is
accomplished at the neuronal level by re-representing internally that which was “discovered” outside.
Consequently, only later do we perform cognitive operations on the structure of data that synaptic
patterns have “picked up” in an analogical way from the environment. Internal representations used in
cognitive processes have a deep origin in the experience lived in the environment.
I think there are two kinds of artefacts that play the role of external objects (representations) active in
this process of disembodiment of the mind: creative and mimetic. Mimetic external representations
mirror concepts and problems that are already represented in the brain and need to be enhanced, solved,
further complicated, etc., so they sometimes can give rise to new concepts, models, and perspectives.2
2 I studied the role of diagrams in mathematical reasoning endowed with both mirroring and creative roles [Magnani and Dossena, 2003]. I
also think this discussion about external and internal representations can be used to enhance the Representational Redescription model
introduced by Karmiloff-Smith, which accounts for how these levels of representation are generated in the infant mind.
Following my perspective it is at this point evident that the “mind” transcends the boundary of the
individual and includes parts of that individual’s environment.
6 Mimetic Minds
It is well known that there are external representations which are representations of other external
representations. In some cases they carry new scientific knowledge. To give an example, Hilbert’s
Grundlagen der Geometrie is a “formal” representation of geometrical problem solving through
diagrams: in Hilbertian systems, solutions of problems become proofs of theorems in terms of an
axiomatic model. In turn, a calculator is able to re-represent (through an artifact) and to perform those
geometrical proofs with diagrams already performed by human beings with pencil and paper. In this
case we have representations that mimic particular cognitive performances that we usually attribute to
our minds.
We have seen that our brains delegate cognitive (and epistemic) roles to externalities and then tend to
“adopt” and recapitulate what they have checked occurring outside, over there, after having manipulated
– often with creative results – the external invented structured model. A simple example: it is relatively
easy, neurologically, to perform an addition of numbers by depicting in our mind – thanks to the brain
device that is called the visual buffer – the images of that addition as it occurs concretely, with
paper and pencil, taking advantage of external materials. We have said that mind representations are also
over there, in the environment, where the mind has objectified itself in various structures that mimic and
enhance its internal representations.
Turing adds a new structure to this list of external objectified devices: an abstract tool (the LCM) endowed
with powerful mimetic properties. We concluded the previous section by remarking that the “mind” is
in itself extended and, so to say, both internal and external: the mind transcends the boundary of the
individual and includes parts of that individual’s environment. Turing’s LCM, which is an externalized
device, is able to mimic human cognitive operations that occur in the interplay between the internal
mind and the external one. Indeed, already in 1950 Turing maintained that, taking advantage of the
existence of the LCM, “Digital computers […] can be constructed, and indeed have been constructed,
and […] they can in fact mimic the actions of a human computer very closely” [Turing, 1950].
In the light of my perspective, both the (Universal) Logical Computing Machine (LCM) (the theoretical
artifact) and the (Universal) Practical Computing Machine (PCM) (the practical artifact) are mimetic minds,
because they are able to mimic the mind in a kind of universal way (wonderfully continuing the activity
of disembodiment of minds that our ancestors rudimentarily started). The LCM and the PCM are able to re-represent
and perform, in a very powerful way, plenty of the cognitive skills of human beings.
Universal Turing machines are discrete-state machines (DSMs) "with a Laplacian behavior" [Longo,
2002; Lassègue, 1998, 2002]: "it is always possible to predict all future states". They are equivalent
to all formalisms for computability (what is thinkable is calculable and mechanizable) and, because they
are universal, they are able to simulate – that is, to mimic – any human cognitive function: what is
usually called mind.
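The "Laplacian behavior" of a DSM can be made concrete with a toy machine: a transition table maps each (state, scanned symbol) pair to exactly one action, so every future configuration is predictable from the current one. The machine below is my own minimal example (it simply inverts a binary string), not one taken from Turing's texts:

```python
# A deterministic transition table: (state, scanned symbol) ->
# (new state, symbol to write, head move). Because each key has
# exactly one entry, the machine's entire future is fixed by its
# current configuration -- the "Laplacian" property of a DSM.
RULES = {
    ("run", "0"): ("run", "1", +1),
    ("run", "1"): ("run", "0", +1),
    ("run", "_"): ("halt", "_", 0),  # blank square: stop
}

def run(tape: str) -> str:
    """Run the machine on a finite tape extended with one blank."""
    cells = list(tape) + ["_"]
    state, head = "run", 0
    while state != "halt":
        state, cells[head], move = RULES[(state, cells[head])]
        head += move
    return "".join(cells).rstrip("_")

print(run("0110"))  # 1001
```

Nothing in the loop consults anything outside the table and the tape: this is the sense in which a discrete-state machine, unlike a "continuous" brain, admits complete prediction of all future states.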
Universal Turing machines are just a further, extremely fruitful step in the disembodiment of the mind I
have described above. A natural consequence of this perspective is that they do not represent (against
classical AI and modern cognitivist computationalism) a "knowledge" of the mind and of human
intelligence. Turing is perfectly aware of the fact that the brain is not a DSM but, as he says, a "continuous"
system, for which mathematical modeling can instead guarantee a satisfactory scientific intelligibility (cf.
his studies on morphogenesis).
Our view about the disembodiment of the mind certainly entails that the Mind/Body dualist view is
less credible, as is Cartesian computationalism. Also the view that the Mind is computational
independently of the physical (functionalism) is jeopardized. In my perspective on human cognition in
terms of mimetic minds we no longer need Descartes' dualism: we only have brains that make up large,
integrated, material cognitive systems, such as LCMs and PCMs. The only problem seems to be
"how meat knows": we can reverse the Cartesian motto and say "sum ergo cogito". In this perspective,
what we usually call mind simply consists in the union of the changing neural configurations of
brains together with those large, integrated, and material cognitive systems the brains themselves are
continuously building.
6
Conclusion
The main thesis of this paper is that the disembodiment of mind is a significant cognitive perspective
able to unveil some basic features of creative thinking. Its fertility in explaining the interplay between
internal and external levels of cognition is evident. I maintain that various aspects of cognition could
take advantage of research on this interplay: for instance, the study of external mediators can provide a
better understanding of the processes of explanation and discovery in science and in some areas of
artificial intelligence related to mechanizing discovery processes.3 For example, concrete manipulations
of external objects influence the generation of hypotheses: what I have called manipulative abduction
shows how we can find methods of constructivity in scientific and everyday reasoning based on external
models and "epistemic mediators" [Magnani, 2004].
Finally, I think the cognitive role of what I call "mimetic minds" can be further studied by also taking
advantage of the research on hypercomputation. The imminent construction of new types of universal
"abstract" and "practical" machines will constitute important and interesting new "mimetic minds",
externalized and available over there, in the environment, as sources of mechanisms underlying the
emergence of new meaning processes. They will provide new tools for meaning formation in
classical areas like analogical, visual, and spatial inference, both in science and in everyday situations,
and can thus extend epistemological and psychological theory.
References
[Dennett, 1991] Daniel Dennett. Consciousness Explained. Little, Brown, and Company, New York, 1991.
[Gatti and Magnani, 2005] Alberto Gatti and Lorenzo Magnani. On the representational role of the environment and on the
cognitive nature of manipulations. In Computing, Philosophy, and Cognition, Proceedings of the European Conference of
Computing and Philosophy, Pavia, Italy, 3-4 June 2004, forthcoming.
[Hameroff, et al., 1999] Stuart R. Hameroff, Alfred W. Kaszniak, and David J. Chalmers (Eds.). Toward a Science of Consciousness III.
The Third Tucson Discussions and Debates. MIT Press, Cambridge, MA, 1999.
[Karmiloff-Smith, 1992] Annette Karmiloff-Smith. Beyond Modularity: A Developmental Perspective on Cognitive Science.
MIT Press, Cambridge, MA, 1992.
[Lassègue, 1998] Jean Lassègue. Turing. Les Belles Lettres, Paris, 1998.
[Lassègue, 2002] Jean Lassègue. Turing entre formel et forme; remarque sur la convergence des perspectives
morphologiques. Intellectica, 35(2):185-198, 2002.
[Longo, 2002] Giuseppe Longo. Laplace, Turing, et la géométrie impossible du "jeu de l'imitation": aléas, déterminisme et
programmes dans le test de Turing. Intellectica, 35(2):131-161, 2002.
3 On the recent achievements in the area of machine discovery simulations of model-based creative tasks cf. [Magnani, et al.,
2002].
[Magnani, 2001] Lorenzo Magnani. Abduction, Reason, and Science. Processes of Discovery and Explanation. Kluwer
Academic/Plenum Publishers, New York, 2001.
[Magnani, 2004] Lorenzo Magnani. Conjectures and manipulations. Computational modeling and the extra-theoretical
dimension of scientific discovery. Minds and Machines 14: 507-537, 2004.
[Magnani, 2005] Lorenzo Magnani. Knowledge as a Duty. Distributed Morality in a Technological World, forthcoming.
[Magnani and Dossena, 2003] Lorenzo Magnani and Riccardo Dossena. Perceiving the infinite and the infinitesimal world:
unveiling and optical diagrams and the construction of mathematical concepts. In Proceedings of CogSci2003. CD-ROM
produced by X-CD Technologies, Boston, MA, 2003.
[Magnani, et al., 2002] Lorenzo Magnani, Nancy J. Nersessian, and Claudio Pizzi (Eds.). Logical and Computational Aspects
of Model-Based Reasoning, Kluwer Academic, Dordrecht, 2002.
[Mithen, 1996] Steven Mithen. The Prehistory of the Mind. A Search for the Origins of Art, Religion, and Science. Thames
and Hudson, London, 1996.
[Mithen, 1999] Steven Mithen. Handaxes and ice age carvings: hard evidence for the evolution of consciousness. In
[Hameroff, et al., 1999], pages 281-296.
[Turing, 1937] Alan M. Turing. On computable numbers with an application to the Entscheidungsproblem. Proceedings of
the London Mathematical Society, 42:230-265, 1937.
[Turing, 1950] Alan M. Turing. Computing machinery and intelligence. Mind, 59:433-460, 1950. Also in [Turing, 1992a],
pages 133-160.
[Turing, 1969] Alan M. Turing. Intelligent machinery [1948]. In Bernard Meltzer and Donald Michie (Eds.), Machine
Intelligence, 5:3-23, 1969. Also in [Turing, 1992a], pages 3-23.
[Turing, 1992a] Alan M. Turing. Collected Works of Alan Turing, Mechanical Intelligence. Ed. by Darrel C. Ince, Elsevier,
Amsterdam, 1992.
[Turing, 1992b] Alan M. Turing. Collected Works of Alan Turing, Morphogenesis. Ed. by Peter T. Saunders, Elsevier,
Amsterdam, 1992.