
The Unified Conceptual Space Theory: An Enactive Theory
of Concepts
Joel Parthemore
Centre for Cognitive Semiotics
Box 201, 22100, Lund, Sweden
[email protected]
Abstract
Theories of concepts address systematically and productively structured thought. Until the
Unified Conceptual Space Theory (UCST), based on Peter Gärdenfors’ Conceptual Spaces Theory,
no one had attempted to offer an explicitly enactive theory of concepts. UCST is set apart from
its competitors in locating concepts not in the mind (or brain) of the conceptual agent nor in the
affordances of the agent’s environment but in the interaction between the two. On the UCST
account, concepts are never truly static: conceptual knowledge is always in the process of being
"brought forth", such that neither agent nor environment can cleanly be separated from the other,
and the preconceptual noumena cannot be reconstructed free of conceptual taint. Through such
conceptual coloring, mind extends into the world. Concepts create binary distinctions – beginning,
most importantly, with the self/non-self distinction – and discrete entities that mask what are, with
respect to the conceptual framework, underlying continua. These distinctions – implying notions
of e.g. internal and external, inner experience and outer world – are both conceptually necessary
and, at the same time, lacking prior ontological status. They are meaningful only with respect to
some identifiable observer (which could, in appropriate circumstances, be the organism itself). In
consequence, phenomenology has a key role to play, and first-person methods are indispensable to
any empirical investigation of concepts.
keywords: enaction, enactive, autopoiesis, concepts, systematicity, productivity, embodiment,
representation, conceptual spaces, Voronoi tessellation, extended mind.
1 Introduction
1.1 Concepts1
Concepts allow the conceptual agent to step back from the present moment to then engage with the
present moment in a distinctively conceptual way. I will take a concept to be a unit of structured
thought – or the ability to structure one’s thought in a certain way (I take the two formulations to be
essentially equivalent)2 – so that the resulting thoughts are:
• systematic: what is substantively the same concept can be applied in substantively the same
way across unboundedly many contexts;
• productive: a finite number of concepts can be used to create an unbounded number of complex
concepts and (at least in the case of linguistic agents) propositions;
• compositional : concepts fit together in reasonably reliably predictable ways to form complex
concepts, and at least certain concepts can be decomposed into component concepts;
1 Nothing of what I say about concepts directly addresses the neurological structures that – at least in part (cf.
the extended-mind hypothesis (Clark and Chalmers, 1998; Clark, 2008)) – underlie them; even as it attempts not to
contravene that which neuroscience has to say untendentiously about so-called higher-level cognition. Indeed, as the
subsequent discussion will make clear, I believe that an enactive account of concepts rejects both a reductionist account
such as mind/brain equivalency or any emergentist account whereby higher-level cognition emerges straightforwardly and
so reconstructably from lower-level brain processes. I argue instead for the necessity of combining bottom-up approaches
– such as neuroscience generally favours – with top-down ones.
2 Tending toward the first perspective are such diverse approaches as (Gärdenfors, 2004; Prinz, 2004; Fodor, 1998);
tending toward the latter are e.g. (Frege, 1951; Evans, 1982; Noë, 2004). Much metaphorical blood has been spilt arguing
who is right.
• spontaneous (in the Kantian sense): the concepts are somehow actively under the agent’s own
control; and
• subject to revision: what is substantively the same concept can be applied in substantively
the same way across unboundedly many contexts precisely because of its ability to adapt to each
new context.
The first two fall out directly from Gareth Evans’ generality constraint (1982, pp. 100-104) – subscribed
to by most (though by no means all3) researchers in this area; the third is strongly implied by the first
two. The fourth – what Jesse Prinz (2004, p. 97) calls endogenous control – is likewise uncontroversial.
The fifth is anathema to e.g. Jerry Fodor and his informational atomism theory of concepts (1998) but
is standardly assumed by prototype and prototype-inspired theorists and, indeed, anyone who takes
beliefs to be partly constitutive of concepts (rather than only, as Fodor would have it, the other way
around). What I have notably left out – even though, in earlier writings, I have included it (Chrisley
and Parthemore, 2007) – is articulability; precisely because I want to side with the so-called animal
concepts philosophers (see e.g. (Newen and Bartels, 2007; Allen, 1999)) in holding that, although
language transforms and extends our conceptual abilities, it does not make them possible in the first
place. Like them, I take the empirical evidence to be overwhelming that, in a number of cases, non-linguistic species do exhibit conceptually structured thought in the sense of the five criteria above.
Two traditions have historically been dominant in discussions over concepts. First is so-called classical definitionism, by which concepts are like entries in a dictionary; in technical terms, their specification provides the necessary and sufficient conditions for their appropriate application.4 Concepts are, paradigmatically, symbolic representations. The second is classical imagism, by which concepts are (like) pictures in the mind, and the paradigmatic concepts are not symbolic but iconic representations. The former is more aligned with the rationalist tradition, the latter with the empiricist. In contemporary
3 See e.g. (Beck, 2007; Travis, 1994).
4 Classical definitionist accounts tended to conflate conditions for appropriately identifying or employing concepts with the ultimate ontological nature of concepts: i.e., “how they are implemented”. This is not to say that a useful distinction cannot be made, although contemporary opinion is divided; I happen to think that one can.
times, these begat, on the rationalist side, Fodor’s aforementioned informational atomism theory, by
which concepts are physical symbols in the brain (cf. (Newell, 1980)) that track mind-independent
entities in the world; and, on the empiricist side, Prinz’s proxytypes theory, by which concepts are
like scale models, playing out their assigned roles in mental simulations. Meanwhile – in contrast to
either of these representation-rich approaches – concepts, on Ruth Millikan’s teleological account (2010;
1998), are not representations at all but abilities to form representations5; and Millikan clearly eschews
representationalist language in her writings.
To an enactivist philosopher, all of these accounts suffer a number of crucial shortcomings. Be the
approach representationalist (per Fodor or Prinz), non-representationalist (per Millikan), or explicitly
anti-representationalist (Brooks, 1991; Perry, 1986), concepts are still basically “in the head”. They
are detachable, in principle and in practice, from a given agent’s interaction with its environment: i.e.,
they work just fine “off line”. They are either not subject to revision at all (per Fodor) or, nevertheless,
highly stable: subject only, at most, to incremental change.
1.2
Enaction
In contrast to the dominant rationalist or empiricist traditions – and with more than a passing nod
to Edmund Husserl’s phenomenology – enactive philosophy emphasizes the way in which living agents
actively create the worlds they experience. One of the founders of enactivism, Humberto Maturana,
writes (1992, p. 255): “I have proposed using the term enactive to. . . evoke the idea that what is known
is brought forth, in contraposition to the more classical views of either cognitivism or connectionism”.
Cognition is not brain bound; Evan Thompson writes (2007, p. ix):
The roots of mental life lie not simply in the brain, but ramify through the body
and environment. Our mental lives involve our body and the world beyond the surface
membrane of our organism, and so cannot be reduced simply to brain processes inside the
head.
5 Personal communication.
In one stroke, any reductionist or straightforward emergentist account of the relation between mind
and brain is rejected. Although importantly related, enaction should not be confused or conflated with
the twin notions of embeddedness (or situatedness): the way an agent is located always in a particular
spatiotemporal context, with which it substantively interacts; and embodiment : the way an agent takes
a specific physical form that constrains its interactions with its environment. Enaction goes beyond
embeddedness and embodiment by:
• Understanding cognition as a lived, dynamic process logically grounded in an organism’s skillful
activity;
• Typically perceiving continuities as underlying that which appears individuable and discrete:
most notably, the continuity between agent and environment, such that each brings the other
forth and the agent’s mental life extends, in a meaningful way, into the world;
• Taking an agent/environment, internal/external distinction to be both conceptually mandated
and, at the same time, meaningful only with respect to an observer, and not to the organism
itself, independent of some identifiable observer – so Maturana writes (1978, p. 30): “everything
that is said, is said by an observer to another observer that could be himself” 6 ; and
• Giving a foundational role to phenomenology qua Husserl and emphasizing the essential contribution to be made by first-person methods (see e.g. (Varela and Shear, 1999)).
6 Maturana’s point, I believe, and mine certainly, is not that these distinctions have no prior ontological basis – to the
contrary! – but that they are conceptual distinctions, not prior ontological ones. Conceptual distinctions can only be
made by certain sorts of agents – namely, conceptual ones – capable of taking on the role of observer and being at least
implicitly aware of doing so. Pace the natural kinds philosophers, they impose categorical divisions onto what are, with
respect to the conceptual frameworks, pre-existing continua – without requiring any assumptions about the ultimate
ontological basis of reality, other than it exists and it constantly constrains their formation and development (to borrow
a Kantian formulation). Putting this another way, meaning-making is a conceptual ability; there is no reason to think
that a bacterium’s cell wall is meaningful, in any strict sense, to the bacterium itself, even though the cell wall is critical
to its continued existence.
Closely related to the concept of enaction is that of autopoiesis, which offers an alternative description
of what qualifies as a living organism in terms of operational and organizational closure (the dynamics
of the system are produced from within the system; anything external to the system can play only
the role of catalyst: organisms are “continually self-producing” (Maturana and Varela, 1992, p. 43)),
a high degree of autonomy (“endogenous, self-organizing, and self-controlling dynamics. . . determines
the cognitive domain in which it operates” (Thompson, 2007, p. 43)), and adaptivity (the ability to
adapt oneself to one’s environment and one’s environment to oneself: see especially (Di Paolo, 2005)).
An enactive agent is necessarily a living agent – but there is no a priori reason it must be alive in the
traditional biologically evolved sense.
2 Concepts and Enaction
Given my interest in concepts, I can take that a step further: a conceptual agent is necessarily a
conscious (though not necessarily self-conscious) agent,7 where conscious presupposes enactive and
enactive presupposes living. My direct inspiration here is Jordan Zlatev’s (2009; 2002; 2001) semiotic
hierarchy of dependencies. After all, concepts are meant to be under the agent’s endogenous control,
and that is only possible if the agent is at least implicitly aware of them. Meanwhile, consciousness
presupposes cognition and cognition – if the enactive perspective is right – presupposes life (indeed,
on some accounts – e.g., (Stewart, 1995) – is co-extensive with it).
Let me be clear: I am not attempting in this paper to offer a theory of consciousness, let alone
cognition. I am to a large extent bracketing concepts from consciousness or, more broadly, cognition:
something that the very enactive philosophy I embrace, with its holistic tendencies, warns against. At
the same time, I do so to make the present project workable. Cognition is a huge area, philosophically
or empirically, on which volumes are written; opinion ranges from those like John Stewart who find
cognition in the simplest of organisms to those who, in the footsteps of Descartes, consider non-human
7 By “conscious”, I mean roughly “implicitly if not explicitly aware of one’s thoughts, as revealed through a distinctive
kind of interaction with one’s physical and social environment.”
species to be machine-like automata possessing nothing like a mind (e.g., (Davidson, 1987; Torey,
2009)).
The relationship between concepts and consciousness I discuss elsewhere (Parthemore and Whitby,
2013, 2012; Parthemore, 2011a).8 It should be apparent that, given my approach to concepts, I see
theories of concepts and theories of consciousness as closely intertwined; it is difficult to imagine
offering a theory of consciousness that does not address the conceptually structured nature of that
consciousness: for no one, it seems (excepting perhaps the panpsychists9 ), would deny that conscious
thought is so structured. The absence of any discussion of concepts from much of consciousness studies might seem like a significant omission.
So, without recapitulating Zlatev’s arguments or spelling out their precise relation to consciousness
or cognition, concepts sit on a hierarchy of dependencies. At the same time, pace Donald Davidson,
Zoltan Torey, Wilfrid Sellars (1956), John McDowell (1996) and others, there is no obvious reason why
concepts require language. As Merlin Donald writes, “humans are undoubtedly unique in their spontaneous invention of language and symbols; but, as I have argued elsewhere10 , our special advantage is
more on the conceptual side of the ledger. Animals know much more than they can express” (Donald,
1998, p. 185).
Before describing the unified conceptual space theory as a distinctly and explicitly enactive theory
of concepts – to the best of my knowledge, the first such effort to do so – I need to say something
about the conceptual spaces theory on which it is built.
2.1
Conceptual spaces theory
Peter Gärdenfors’ conceptual spaces theory (CST) (2004) is a similarity-space-based theory of concepts:
by which I mean that the closer two concepts are located within a given conceptual “space”, the more
8 See also (Parthemore and Morse, 2010).
9 The panpsychists would seem to require that consciousness, being in their eyes universal, precedes cognition: fair enough; but consciousness without thought is not what most people have in mind, and it is far from clear from the panpsychist literature what it is even meant to look like.
10 See e.g. (Donald, 1993), also (Donald, 2001).
similar they are judged to be. One creates a space by specifying the integral dimensions for that space
along with a metric for each dimension.
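The construction just described can be illustrated with a minimal sketch (the dimension names, coordinates, and weights below are illustrative assumptions for the example, not values prescribed by CST):

```python
import math

# A toy conceptual space for colour, with three integral dimensions.
# The weights and coordinates are assumptions for illustration only.
DIMENSIONS = ("hue", "saturation", "brightness")

def similarity_distance(a, b, weights=(1.0, 1.0, 1.0)):
    """Weighted Euclidean metric over the integral dimensions:
    the smaller the distance, the more similar the two points are judged."""
    return math.sqrt(sum(w * (x - y) ** 2
                         for w, x, y in zip(weights, a, b)))

# Two shades of red lie closer together in the space than red and blue do:
red, dark_red, blue = (0.0, 0.9, 0.8), (0.0, 0.9, 0.5), (0.66, 0.9, 0.8)
assert similarity_distance(red, dark_red) < similarity_distance(red, blue)
```

Specifying different weights amounts to specifying a different metric for the space, and so a different pattern of similarity judgements.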
CST owes particular debt to the prototype theories developed by Eleanor Rosch starting in the 1970s
(1975; 1999). Drawing richly on the language of geometry, it pictures concepts as convex sub-regions of
conceptual spaces (or associated sets of such sub-regions across different spaces, representing different
domains), whose dimensions are precisely the integral dimensions for that space: i.e., the properties
that collectively define the space (as hue, saturation, and brightness do for colour). The progressive
carving up of a domain into its constituent concepts and sub-concepts imposes a Voronoi tessellation
on the space – an example of which is given in Figure 1, from a mind-mapping software tool written
for my doctoral thesis (2011a). Notice how all of the cells are (as the Voronoi tessellation requires!)
convex.
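The tessellation arises from categorization by nearest prototype, which can be sketched as follows (a toy illustration; the prototype names and coordinates are invented for the example). Under a Euclidean metric each nearest-prototype region is an intersection of half-planes, which is why the cells come out convex:

```python
import math

# Hypothetical prototypes in a 2-D slice of colour space
# (coordinates assumed for illustration).
PROTOTYPES = {"red": (0.0, 0.8), "orange": (0.08, 0.8), "yellow": (0.16, 0.9)}

def categorize(point):
    """Assign a point to its nearest prototype. The regions this rule
    carves out are exactly the (convex) cells of a Voronoi tessellation."""
    return min(PROTOTYPES, key=lambda name: math.dist(point, PROTOTYPES[name]))

assert categorize((0.01, 0.8)) == "red"
assert categorize((0.15, 0.88)) == "yellow"
```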
CST is not at any point explicitly enactive; however, it is – as Gärdenfors himself notes11 – amenable
to an enactivist interpretation:
• Concepts are intrinsically dynamic entities, arising and adapting continuously as the agent engages with its environment;
• The process of progressively carving up a domain into concepts and sub-concepts – in technical
terms, the imposition of a Voronoi tessellation – imposes sharp boundaries on what are, from a
conceptual point of view12 , underlying continua: cf. the phenomenon of categorical perception,
described e.g. in (Gärdenfors and Williams, 2001; Harnad, 1990);
• Concepts are never free-floating entities but are always concepts for a particular agent, who comes
with her own perspectival biases – probably reflective of Gärdenfors’ metaphysical anti-realism;
• Husserl may receive scant mention, but experience is – as it was for Husserl – taken as foundational.
11 Personal communication.
12 Concepts do not, of course, have a point of view; presumably only cognitive agents do. What I mean by “conceptual point of view” is, roughly, shorthand for “with respect to any given conceptual framework”; see Footnote #4.
Figure 1: An example of a Voronoi tessellation, taken from (Parthemore, 2011a).
At the same time CST is – in light of the discomfort if not open hostility of many in the enactive
community toward talk of representations (especially so-called mental or internal representations) –
somewhat unfortunately loose in its representational talk. I shall say something about how I think
the term “representation” should be used a little later on.
2.2 Unified conceptual spaces theory
The unified conceptual spaces theory (UCST) – as described in (Parthemore, 2011a,b; Parthemore
and Morse, 2010) – is an attempt to fill in some of the self-acknowledged gaps in CST, at the same
time pushing it in a more algorithmically amenable (and, therefore, potentially empirically testable)
direction. It purports to describe how all of an individual conceptual agent’s many conceptual spaces
map or weave together into a single unified space13 – and, by extension, how all of the individual
conceptual agents’ unified spaces map together, at least in linguistic human society, into a single,
common unified space of the society. In either case, that unified space constitutes a “space of spaces”
describable along certain common axes, to wit:
• An axis of generalization that reprises the familiar concept hierarchy: e.g., a poodle is a dog is
a mammal is an animal is an organism is a something;
• An axis of abstraction from “zeroth-order concepts” (non-concepts) through first- (concepts of
non-concepts) to higher-order concepts: from maximally concrete to maximally abstract (albeit
in a different fashion from the axis of generalization)14 ; and
• An axis of alternatives derived from incrementing/decrementing the value of any one or more of
the integral dimensions of a given concept.
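The axis of alternatives, in particular, lends itself to a simple sketch: neighbouring points in the space are reached by incrementing or decrementing one integral dimension at a time (the dimension names and step size below are assumptions for illustration):

```python
# A sketch of the "axis of alternatives": variants of a concept reached
# by nudging any one of its integral dimensions. The example concept
# and the step size are assumptions for illustration.

def alternatives(concept, step=0.1):
    """Yield variants of a concept (a dict of dimension -> value),
    incrementing or decrementing one dimension at a time."""
    for dim, value in concept.items():
        for delta in (-step, +step):
            variant = dict(concept)
            variant[dim] = round(value + delta, 10)
            yield variant

ripe_tomato = {"hue": 0.0, "saturation": 0.9, "brightness": 0.8}
# Three dimensions, two directions each: six immediate alternatives.
assert len(list(alternatives(ripe_tomato))) == 6
```

Note that the axis diverges in both directions from any starting concept, as the text goes on to observe.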
It should be clear, on some reflection, that all three of these axes are divergent in both directions –
noting that each point in the unified space, and so each point along each of these axes, represents
a distinguishably different concept. So many concepts can be described relative to multiple concept
hierarchies (e.g., a person as a biological organism versus a person as a cognitive entity); many if not
most “zeroth-order concepts” can give rise to multiple first- and higher-order concepts (e.g., the celestial
body that is the planet Venus15 gives rise to “the Morning Star” and “the Evening Star”; likewise many
first- and higher-order concepts can be described relative to multiple lower-order concepts); finally,
the axis of abstraction will be divergent wherever a given concept is defined according to more than
one integral dimension (colour can be adjusted according to any combination of hue, saturation, and
13 Something that Gärdenfors attempted to achieve in a final chapter of (2004) but left out because he was dissatisfied
with what he had at that time (personal communication).
14 At the one extreme, the axis of abstraction becomes difficult to distinguish from the axis of generalization: a
maximally abstract concept has much in common with a maximally inclusive one. The converse does not follow: a
maximally specific concept need not be maximally concrete: e.g., the sudden, intense feeling of Weltschmerz I felt at
15:11 yesterday.
15 The Ding an sich that is, for the conceptual agent, forever out of her reach, precisely because it would require
stepping outside of her conceptual framework.
brightness). In this way, a given space – relative to a particular instantiation of a particular concept –
opens up into multiple, parallel spaces within an overall unified (hyper)space. In implicitly navigating
this (hyper)space, the conceptual agent toggles constantly (if unconsciously) between two competing
perspectives on that space: one whereby any given concept is a partially reified entity (paradigmatically
noun-like; viewed this way, concepts are the building blocks of structured thought); the other whereby
that same concept is a fluid process (paradigmatically verb-like; viewed this way, concepts are more
like abilities to think structured thoughts). Viewing one and the same concept simultaneously from
both perspectives would be – for most people at least! – like seeing the same object as simultaneously
red and green.
In addition to proximal connections along the three axes, concepts are connected to distal points
of the unified (hyper)space in three possible ways:
• Some things conceptually bear a mereological (part-whole) relation to other things: e.g., the
head is a part of the body of an animal; I call these component relations;
• Some things are, conceptually speaking, properties of other things: i.e., they constitute one of
the integral dimensions for other concepts; e.g., colour is a property of certain perceived (non-transparent or not fully transparent) surfaces; I call these parameter relations;
• All things exist, somehow and somewhere, in the context of other things16 : the presence of one
thing is commonly associated with the presence of others; e.g., sand is commonly associated with
beaches; I call these contextual relations.
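These three relation types can be pictured as labelled links from a concept to distal points of the space. A minimal sketch (the example concepts and link targets are assumptions for illustration, not part of UCST's own formalism):

```python
from dataclasses import dataclass, field

# Distal links in the unified space, mirroring the three relation
# kinds named in the text: component, parameter, and contextual.

@dataclass
class Concept:
    name: str
    components: list = field(default_factory=list)  # part-whole relations
    parameters: list = field(default_factory=list)  # integral-dimension relations
    contexts: list = field(default_factory=list)    # co-occurrence relations

body = Concept("body", components=["head", "torso", "limbs"])
surface = Concept("surface", parameters=["colour", "texture"])
beach = Concept("beach", contexts=["sand", "sea"])

assert "head" in body.components
assert "colour" in surface.parameters
assert "sand" in beach.contexts
```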
Take the colour space: in CST, it is one among many conceptual spaces, with no specification of how it
relates to those other spaces except in terms of its common internal structure. In UCST, it is a space
within the unified space of spaces, proximally connected to such other properties of objects as weight,
16 Ergo the idea of a something devoid of context – like a solitary concept in isolation from any conceptual framework
– is, pace Fodor (2008, p. 54), incoherent.
mass, density, and texture and distally connected both to its integral dimensions of hue, saturation,
and brightness, and to those objects (and non-objects: i.e., light) to which, as a property, it applies.
All concepts are derived from a small set of proto-conceptual entities, consisting of proto-object,
proto-event, and proto-property. These proto-concepts are not true concepts because they fail to meet
all the desiderata I listed earlier; most notably, they are not under the agent’s endogenous control
(being, by one means or another, innate17 ), and they are too few in number to be productive.
Unlike CST, UCST is explicitly enactive: concepts (along with conceptual spaces, and the unified
space itself) are located neither in the agent nor in the environment but arise out of the interaction of
agent and environment: a formulation to which, I believe, Gärdenfors would be amenable, though he
does not, as said, explicitly endorse it. Because that interaction is dynamic, the concepts themselves must be
as well: never truly static, their apparent stability belies continuous underlying motion and change –
if often only incremental. Concepts are continually in a process of being brought forth.
In keeping with the child development literature, UCST holds the self/other – self/non-self –
self/world distinction to be foundational to all the other conceptual distinctions that a conceptual agent
comes to make (see e.g. the discussion in (Zachar, 2000, p. 144 ff.)); at the same time, it maintains that
the distinction is ultimately a conceptual distinction: one that presupposes an underlying continuum.
Indeed, according to UCST, what concepts do in general is to impose categorical boundaries onto
what is, from a conceptual point of view, a continuous world; although the world constantly constrains
how we conceptually carve it up, nevertheless, contra the natural kinds philosophers – I have in mind
someone like Brian Ellis (2005) – it does not come with pre-existing categories or joints to carve along.
To quote Francisco Varela’s translation of Antonio Machado’s poem (1987, p. 63):
Wanderer, the road is your
footsteps, nothing else; wanderer, there is no path,
you lay down a path in walking.
In walking you lay down a path. . .
17 Contrast this with Jerry Fodor’s informational atomism (Fodor, 1998), according to which – on some standard readings at least – all concepts are innate.
2.3 Enactive caveats
A few caveats are in order. Though it tries to be far more careful with its representational language
than CST, nevertheless, UCST lacks the strongly anti-representational bias often found in enactivist
circles – particularly amongst so-called radical enactivists (see e.g. (Hutto, 2005)). Indeed, it requires
a role for representations, properly understood. Cognition is not just about being in the world but
also – critically for human beings – about imagining oneself to step back from the world. It is not just
about doing but – again, critically for human beings – about pausing and reflecting and representing.
“Representation”, as I wish to use the term, is a four-place relation involving an agent actively
and intentionally using a something to stand in place of a something else for someone (who might be
another agent, or might be the agent herself). I take my cue here from Inman Harvey:
The underlying assumption of many is that a real world exists independently of any
observer; and that symbols are entities that can ’stand for’ objects in this real world in some
abstract and absolute sense. In practice, the role of the observer in the act of representing
something is ignored. . . . The gun I reach for when I hear the word representation has this
engraved on it: ’When P is used by Q to represent R to S, who is Q and who is S?’ (1992,
pp. 5-7)
Note the way he sets up the four-place relation. I actually prefer a more restrictive definition of
“representation” than Harvey, who is happy to attribute “minimal” representations to some of his
artificial-life creations. For me, representations – like concepts – must be under the agent’s endogenous
control, which Harvey’s “minimal representations” are not. Neither properly internal nor external to
the agent, they are not reified entities of one sort or another but a perspective that certain agents are
able to take toward the world they encounter. That is to say, it ultimately matters not where the
representation is located – a painting on the wall or a poem in one’s thoughts – but how the agent
views it. Nevertheless, I keep Harvey’s gun close at hand as well: one never knows when one might
have to defend oneself!
In consequence of its modest representationalism, UCST is favorable to the enactivist view of
cognition as skillful activity, but only when interpreted sufficiently broadly – at least when it comes
to concepts – as to bridge the knowing that / knowing how divide (Ryle, 1949). Concepts sit between
knowledge that and knowledge how, beholden to neither (an idea that, again, finds echoes in Gärdenfors
(2004)). (Contrast this with Dan Hutto’s (2005) own commitment, along with the radical enactivists,
to founding cognition in knowing how.)
UCST sees cognition as dependent on – though, contra e.g. John Stewart (1995), not necessarily
equivalent to – life. That makes conceptual agency likewise dependent on life. At the same time,
it borrows a page from autopoiesis in wishing to avoid any overly narrow biological view on life as
“naturally” evolved. UCST leaves open the possibility that cognition, as we currently understand it,
and life, as we currently understand it, could pull apart; it leaves it to anyone who thinks they can
pull apart to show how.
Some enactivists are inclined to “solve” cognition bottom up: from simple to complex, from concrete
to abstract, from “mere” associations to meaning-rich symbols. UCST is committed to the necessity
of combining bottom-up with top-down approaches. For all of the unresolvable tension between the
two approaches, UCST insists that neither can be given ultimate priority – in no small part because
of our inability, as researchers, to set our intellectual, reason-driven, representation-rich, conceptually
structured nature aside. The non-representational must, seemingly, logically be prior – on this point
the radical enactivists are right; but the representational is conceptually prior: wherever we look for
them, representations (as I define them) are there.
3 Enactive Consequences
As should be clear by now, taking an enactive perspective on concepts has, if I am at all right, a number
of fundamental consequences; among other things, the stability of concepts is ultimately illusory, and
the claim that concepts are ever timelessly fixed is wrong: concepts are intrinsically temporal entities.
Beginning with the self/other – self/non-self – self/world distinction, concepts create binary distinctions
and discrete entities out of what were continuities. These distinctions – implying notions of e.g. inner
experience and outer world, internal thoughts and external actions – are both conceptually obligatory
and, at the same time, not committed to any particular prior ontology. Furthermore, they shift over
time or, sometimes, dissolve altogether as the conceptual agent lays down a path in walking. They
are implicitly “meaningful” to any agent possessing and employing them non-reflectively but explicitly
meaningful only with respect to an observer – and not to the organism independent of some identifiable
observer. Phenomenology has a key role to play, and first-person methods are indispensable to any
empirical investigation of concepts: a project to which I, for one, am committed – with several grant
applications based on UCST and its associated mind-mapping application in development.
One of the most exciting consequences, to me, lies in how one thinks about that very foundational
self/other – self/non-self – self/world distinction: that cornerstone of the conceptual worlds in which
we all find ourselves. Some look at it and see a very clear, very well-defined boundary, whereby mind
is strictly bound to body if not in fact to brain. I have in mind such people as Robert Rupert (2009b;
2009a; 2004) and Frederick Adams and Kenneth Aizawa (Adams and Aizawa, 2008, 2001), all strident
critics of the so-called extended-mind hypothesis: the idea, put forward by David Chalmers and Andy
Clark (Clark and Chalmers, 1998), that mind extends in substantive ways into the world.
I happen not to think much of the Otto/Inga thought experiment that motivated much of Chalmers
and Clark’s original paper. Indeed, I agree with Rupert that, if Otto does represent a case of extended
cognition, it either is so unusual as not to say much about cognition in general; or it risks overgeneralizing, so that everything becomes cognitive. Far better, to my thinking, is the argument Clark
puts forward at one point in Supersizing the Mind (Clark, 2008, p. 34):
[Profoundly embodied] agents are able constantly to negotiate and renegotiate the agent-world boundary itself. Although our own capacity for such renegotiation is, I believe, vastly
underappreciated, it really should come as no great surprise, given the facts of biological
bodily growth and change.
As I argued in (2011b), the critics of extended mind are all committed to a well-defined and fixed
boundary: one that can be determined untendentiously by empirical investigation, if it has not sufficiently been already. All the extended-mind proponent needs to do is show – not that the boundary
between self and world does not exist! – but that there are strong reasons to think it is flexible. If it
is flexible enough, then the extended-mind proponent has proven her point.
Not surprisingly, many if not most enactivist philosophers speak highly of the extended-mind
hypothesis. The often-heard complaint is that the extended-mind hypothesis does not go far enough,
leaving much of cognition trapped in the head, cut off in some way from its environment. Much of
Clark’s language, in particular, is strikingly in keeping with older, input-output-oriented, “cognitivist”
ways of understanding cognition that most enactivist philosophers – myself included – think are
best left behind.
Remember that UCST holds the distinction between self and world to be, at heart, a conceptual
and not a prior ontological one. Remember, too, that, on the UCST account, it is in the nature of all
conceptual boundaries to be flexible and re-negotiable (albeit some more, and more visibly and easily,
than others).
Here lies the heart of the UCST argument for extended mind. Consider that, if anything bears
the “mark of the cognitive” (Adams and Aizawa’s pet phrase for determining where mind stops and
world begins), concepts do. Experience gives rise to concepts; but – as I argued in (Parthemore and
Morse, 2010) – concepts in turn structure experience, including all the experience that gives rise to
further concepts. The resulting model of causality is not linear but circular. Concepts and experience
are brought forth from their interaction, just as agent and environment are brought forth from their
interaction – such that neither can cleanly be separated from the other. If this view on things is right,
then the preconceptual noumena – to borrow a page from Immanuel Kant – cannot be reconstructed
free of conceptual taint: by which I mean, neither preconceptual agent nor preconceptual world.18
Conceptual residue is there, wherever one looks in the lifeworld, because it is through our concepts
that we apprehend the lifeworld. Through such conceptual coloring, mind extends in a meaningful and
substantive way into the world.
4 Conclusions
According to UCST, concepts are neither in the agent nor in the environment. In as much as concepts
are located anywhere, they are “in” the interaction of agent and environment. Like enactivist approaches in general, UCST seeks to avoid what it sees as the extremes of internalism and externalism.
Terms like “inside” and “outside” apply to physical volumes; but mind – though physically realized –
is not, itself, a physical volume (Parthemore, 2011b). To say that concepts are in the mind is not like
saying that neurons are in the brain.
Borrowing a page from Gärdenfors, concepts, according to UCST, sit between “low”- and “high”level cognition, beholden to neither (Parthemore, 2013; Parthemore and Morse, 2010). Push them in
one direction, and they look more like iconic and symbolic representations; they get bound up with
language. Push them the other, and they look more like non-representational bundles of associations.
Just as concepts sit between levels of cognition, so, too, any account of concepts should complement an
account of “low”- and “high”-level cognition. Both bottom-up and top-down methodologies are needed:
where they meet and agree with each other, the theories can be considered at least partially vindicated.
18 I am taking on board Kant’s metaphysics without arguing for it – other than to note that concepts and conceptual frameworks seem, logically, to assume the existence of that which is not concepts at the same time that they – necessarily, by their very nature – bring it under the domain of concepts. Although a discussion of metaphysics lies outside the scope of this paper, I believe it in the nature of metaphysical positions that they play the role of axioms in mathematics. Rather than being things one proves or disproves, they serve as starting assumptions: if one assumes this, then that and that follow. The key is not to choose the “right” axioms but the most explanatorily useful ones.
UCST is committed to a role for representation(s), properly understood. It requires both a representational (reflective) and non-representational (non-reflective) perspective on concepts for agents
who are capable of both. Neither perspective is or can be primary. Concepts are one thing when we
stop and look at them; they are logically another when we get on with possessing and employing them
non-reflectively. Yet: the moment we reflect on our possessing and employing them non-reflectively,
they become representational.
To the best of my knowledge, UCST marks the first explicit attempt to offer an enactive theory of
concepts drawing on the many lessons of the enactivist tradition. In contrast to the essentially skeletal
structure of CST – on which point CST is refreshingly honest19 – it offers an algorithmically detailed
framework for specifying concepts as they evolve.20 At the same time, UCST is committed, by the
metaphysical stance it takes, to the view that there is no one “right” theory of concepts: that would
require stepping outside one’s conceptual framework; something that, UCST argues, one can never do.
Rather, a theory of concepts is particular to the questions one is asking and the applications to which
one hopes to put the answers.
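To give a flavor of the geometric machinery UCST inherits from CST, the following is a minimal, hypothetical sketch – not the author's actual algorithm, whose details lie outside the scope of this paper – of prototype-based categorization in a conceptual space. In CST, concepts are modelled as regions around prototype points in a space of quality dimensions; nearest-prototype classification induces a Voronoi tessellation of the space, each prototype "owning" the convex cell of points closer to it than to any other. The dimensions and prototype values below are illustrative placeholders.

```python
import math

# Hypothetical two-dimensional conceptual space with quality dimensions
# hue and brightness, each normalized to [0, 1]. Prototype coordinates
# are illustrative, not values drawn from UCST or CST.
prototypes = {
    "warm-bright": (0.1, 0.9),
    "warm-dark": (0.1, 0.2),
    "cool-bright": (0.8, 0.9),
    "cool-dark": (0.8, 0.2),
}

def categorize(point, prototypes):
    """Return the concept whose prototype lies nearest to `point`.

    Nearest-prototype classification induces a Voronoi tessellation:
    each prototype owns the convex cell of points closer to it than
    to any other prototype.
    """
    return min(
        prototypes,
        key=lambda name: math.dist(point, prototypes[name]),
    )

print(categorize((0.15, 0.8), prototypes))  # nearest to the warm-bright prototype
```

Note that moving a single prototype shifts every boundary of its Voronoi cell at once – a simple geometric analogue of the claim that conceptual boundaries are flexible and re-negotiable rather than fixed.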
A parting thought: sciences of concepts – like sciences of consciousness – remind us of what
we should have remembered all along: that the observer is always present, when we look for her;
that the subjective is inseparably bound up with the objective; that science yields up not timeless
understandings freed from cultural and historical contexts but working hypotheses. It is not only the
mind that we cannot know fully and consistently, but the world.
19 “Philosophers will complain that my arguments are weak; psychologists will point to a wealth of evidence about concept formation that I have not accounted for; linguistics [sic] will indict me for glossing over the intricacies of language in my analysis of semantics; and computer scientists will ridicule me for not developing algorithms for the various processes that I describe. I plead guilty to all four charges. . . . My ambition here is to present a coherent research program that others will find attractive and use as a basis for more detailed investigations” (Gärdenfors, 2004, p. ix).
20 At the same time, details of that algorithm, along with its empirical application, lie outside the scope of this paper;
instead, see (2013; 2012; 2011a; 2010). As the references show, the algorithm and its software implementation – in the
form of a mind-mapping program – continue to evolve, just as they predict any conceptual framework (and any concept
within that framework!) should continue to evolve. Empirical testing – plans for which are under development, using
the mind-mapping program – is needed to show whether UCST is on the right track or not.
Acknowledgments
The author gratefully acknowledges the financial support and supportive academic environment of the
Centre for Cognitive Semiotics at Lund University, Sweden, directed by Prof. Göran Sonesson and
assisted by Prof. Jordan Zlatev, as well as helpful discussion and feedback at seminars of the Centre
for Cognitive Semiotics and at the University of Skövde, Sweden. He particularly thanks Prof.
Zlatev for discussions of Zlatev’s semiotic hierarchy.
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
References
Adams, F. and Aizawa, K. (2001). The bounds of cognition. Philosophical Psychology, 14(1):43–64.
Adams, F. and Aizawa, K. (2008). The Bounds of Cognition. John Wiley and Sons.
Allen, C. (1999). Animal concepts revisited: The use of self-monitoring as an empirical approach.
Erkenntnis, 51(1):33–40.
Beck, J. S. (2007). The generality constraint and the structure of thought. Presentation at the MindGrad2006 conference, University of Warwick, UK, and unpublished paper. Available online at http://www.webpages.ttu.edu/jabeck/GCST.pdf.
Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47:139–159.
Chrisley, R. and Parthemore, J. (2007). Synthetic phenomenology: Exploiting embodiment to specify
the non-conceptual content of visual experience. Journal of Consciousness Studies, 14(7):44–58.
Clark, A. (2008). Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford University Press. Also available as ebook through Oxford University Press (http://www.oxfordscholarship.com).
Clark, A. and Chalmers, D. (1998). The extended mind. Analysis, 58(1):7–19.
Davidson, D. (1987). Rational animals. Dialectica, 36(4):317–327.
Di Paolo, E. A. (2005). Autopoiesis, adaptivity, teleology, agency. Phenomenology and the Cognitive
Sciences, 4:429–452.
Donald, M. (1993). Origins of the Modern Mind: Three Stages in the Evolution of Culture and
Cognition. Harvard University Press.
Donald, M. (1998). Material culture and cognition: Concluding thoughts. In Renfrew, C. and Scarre,
C., editors, Cognition and Material Culture: The Archaeology of Symbolic Storage. McDonald Institute for Archaeological Research.
Donald, M. (2001). A Mind So Rare: The Evolution of Human Consciousness. W.W. Norton, London.
Ellis, B. (2005). Physical realism. Ratio, 18(4):371–384.
Evans, G. (1982). Varieties of Reference. Clarendon Press. Edited by John McDowell.
Fodor, J. A. (1998). Concepts: Where Cognitive Science Went Wrong. Clarendon Press, Oxford.
Fodor, J. A. (2008). LOT 2: The Language of Thought Revisited. Oxford University Press.
Frege, G. (1951). On concept and object. Mind, 60(238):168–180. Translated by P.T. Geach and Max
Black. First published in 1892 as "Über Begriff und Gegenstand".
Gärdenfors, P. (2004). Conceptual Spaces: The Geometry of Thought. Bradford Books. First published
2000.
Gärdenfors, P. and Williams, M.-A. (2001). Reasoning about categories in conceptual spaces. In
Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, pages 385–392. Morgan Kaufmann.
Harnad, S. (1990). Introduction: Psychophysical and cognitive aspects of categorical perception: A
critical overview. In Harnad, S., editor, Categorical Perception: The Groundwork of Cognition,
pages 1–25. Cambridge University Press. First published 1987.
Harvey, I. (1992). Untimed and misrepresented: Connectionism and the computer metaphor (CSRP
245). FTP archive currently offline as of April 2011. University of Sussex (UK) Cognitive Science
Research Papers (CSRP) series.
Hutto, D. D. (2005). Knowing what? radical versus conservative enactivism. Phenomenology and the
Cognitive Sciences, 4:389–405.
Maturana, H. (1978). Cognition. In Hejl, P. M., Köck, W. K., and Roth, G., editors, Wahrnehmung und Kommunikation, pages 29–49. Peter Lang, Frankfurt. Available online at http://www.enolagaia.com/M78bCog.html, with the original page numbering retained.
Maturana, H. R. and Varela, F. J. (1992). The Tree of Knowledge: The Biological Roots of Human
Understanding. Shambhala, London.
McDowell, J. (1996). Mind and World. Harvard University Press, Cambridge, Massachusetts.
Millikan, R. (1998). A common structure for concepts of individuals, stuffs, and real kinds: More
mama, more milk, and more mouse. Behavioral and Brain Sciences, 21:55–100.
Millikan, R. (2010). On knowing the meaning; with a coda to Swampman. Mind, 119(473):43–81.
Newell, A. (1980). Physical symbol systems. Cognitive Science, 4(2):135–183.
Newen, A. and Bartels, A. (2007). Animal minds and the possession of concepts. Philosophical
Psychology, 20(3):283–308.
Noë, A. (2004). Action in Perception. MIT Press.
Parthemore, J. (2011a). Concepts Enacted: Confronting the Obstacles and Paradoxes Inherent in Pursuing a Scientific Understanding of the Building Blocks of Human Thought. PhD thesis, University
of Sussex, Falmer, Brighton, UK.
Parthemore, J. (2011b). Of boundaries and metaphysical starting points: Why the extended mind
cannot be so lightly dismissed. Teorema, 30(2):79–94.
Parthemore, J. (2013). Representations, symbols, icons, concepts... and why there are no mental representations. In Proceedings of the Seventh Conference of the Nordic Association for Semiotic Studies, 6–8 May 2011. Forthcoming.
Parthemore, J. and Morse, A. F. (2010). Representations reclaimed: Accounting for the co-emergence
of concepts and experience. Pragmatics & Cognition, 18(2):273–312.
Parthemore, J. and Whitby, B. (2012). Moral agency, moral responsibility, and artefacts. In Gunkel,
D. J., Bryson, J. J., and Torrance, S., editors, The Machine Question: AI, Ethics and Moral Responsibility, pages 8–16. Society for the Study of Artificial Intelligence and Simulation of Behaviour
(AISB). Available online from http://events.cs.bham.ac.uk/turing12/proceedings/14.pdf.
Parthemore, J. and Whitby, B. (2013). When is any agent a moral agent?: Reflections on machine
consciousness and moral agency. International Journal of Machine Consciousness, 4(2). In press.
Perry, J. (1986). Thought without representation. In Proceedings of the Aristotelian Society, volume 60,
pages 137–151.
Prinz, J. (2004). Furnishing the Mind: Concepts and Their Perceptual Basis. MIT Press. First
published 2002.
Rosch, E. (1975). Family resemblances: Studies in the internal structure of categories. Cognitive
Psychology, 7:573–605.
Rosch, E. (1999). Principles of categorization. In Margolis, E. and Laurence, S., editors, Concepts:
Core Readings, chapter 8, pages 189–206. MIT Press.
Rupert, R. (2004). Challenges to the hypothesis of extended cognition. Journal of Philosophy, 101:389–
428.
Rupert, R. (2009a). Cognitive Systems and the Extended Mind. Oxford University Press.
Rupert, R. (2009b). Critical study of Andy Clark’s Supersizing the Mind. Journal of Mind and
Behavior, 30:313–330.
Ryle, G. (1949). The Concept of Mind. Penguin.
Sellars, W. (1956). Empiricism and the philosophy of mind. In Feigl, H. and Scriven, M., editors,
Minnesota Studies in the Philosophy of Science, Volume I: The Foundations of Science and the
Concepts of Psychology and Psychoanalysis, volume I, pages 253–329. University of Minnesota Press.
Available online from http://www.ditext.com/sellars/epm.html.
Stewart, J. (1995). Cognition = life: Implications for higher-level cognition. Behavioural Processes,
35(1-3):311–326.
Thompson, E. (2007). Mind in Life: Biology, Phenomenology and the Sciences of Mind. Harvard
University Press.
Torey, Z. (2009). The Crucible of Consciousness: An Integrated Theory of Mind and Brain. MIT
Press.
Travis, C. (1994). On constraints of generality. Proceedings of the Aristotelian Society, New Series,
94:165–188. Published by Blackwell Publishing on behalf of The Aristotelian Society.
Varela, F. J. (1987). Laying down a path in walking. In Thompson, W., editor, Gaia: A Way of
Knowing. Political Implications of the New Biology, pages 48–64. Lindisfarne Press, Hudson, NY,
USA.
Varela, F. J. and Shear, J. (1999). The View From Within: First-Person Approaches to the Study of
Consciousness. Imprint Academic.
Zachar, P. (2000). Psychological Concepts and Biological Psychiatry: A Philosophical Analysis. John
Benjamins Publishing Company.
Zlatev, J. (2001). The epigenesis of meaning in human beings, and possibly in robots. Minds and
Machines, 11:155–195.
Zlatev, J. (2002). Meaning = life (+ culture). Evolution of Communication, 4(2):253–296.
Zlatev, J. (2009). The semiotic hierarchy: Life, consciousness, signs and language. Cognitive Semiotics,
2009(4):169–200.