
Topics in Cognitive Science (2014) 1–17
Copyright © 2014 Cognitive Science Society, Inc. All rights reserved.
ISSN: 1756-8757 print / 1756-8765 online
DOI: 10.1111/tops.12097
Principles of Representation: Why You Can’t Represent
the Same Concept Twice
Louise Connell,a,c Dermot Lynottb,c
a School of Psychological Sciences, University of Manchester
b Manchester Business School, University of Manchester
c Department of Psychology, Lancaster University
Received 30 March 2012; received in revised form 3 July 2013; accepted 25 April 2014
The order of authorship is arbitrary. Correspondence should be sent to Louise Connell or Dermot Lynott, Department of Psychology, Fylde College, Lancaster University, Bailrigg, Lancaster LA1 4YF, UK. E-mail: [email protected] or [email protected]
Abstract
As embodied theories of cognition are increasingly formalized and tested, care must be taken
to make informed assumptions regarding the nature of concepts and representations. In this study,
we outline three reasons why one cannot, in effect, represent the same concept twice. First, online
perception affects offline representation: Current representational content depends on how ongoing
demands direct attention to modality-specific systems. Second, language is a fundamental facilitator of offline representation: Bootstrapping and shortcuts within the computationally cheaper linguistic system continuously modify representational content. Third, time itself is a source of
representational change: As the content of underlying concepts shifts with the accumulation of
direct and vicarious experience, so too does the content of representations that draw upon these
concepts. We discuss the ramifications of these principles for research into both human and synthetic cognitive systems.
Keywords: Embodied cognition; Concepts; Representation; Perceptual simulation; Language;
Linguistic shortcut; Linguistic bootstrapping
1. Introduction
“One cannot step in the same river twice” (Heraclitus).
Concepts are the basis of the human cognitive system. In online processing of the real-time environment, people’s ability to turn potential sensory overload into a set of recognizable percepts depends on top-down categorization of the car, street, and barking dog in front of them. In offline cognitive processing, the ability to discuss the relative musical merits of Shostakovich and dubstep, or to plan a different route home tomorrow to avoid the annoying barking dog, depends on coherent representation of the relevant conceptual information.
Recent evidence regarding the nature of mental representation suggests that the conceptual system is both linguistic and embodied (Barsalou, Santos, Simmons, & Wilson,
2008; Louwerse & Jeuniaux, 2008; Lynott & Connell, 2010; Vigliocco, Meteyard,
Andrews, & Kousta, 2009). In many ways, the conceptual system can be described as
having co-opted the perceptual, motor, and affective systems (Connell & Lynott, 2010;
Pecher, Zeelenberg, & Barsalou, 2003; Pulvermüller, 2005; Wilson-Mendenhall, Barrett,
Simmons, & Barsalou, 2011). The same neural systems that are involved in processing
perceptual, motor, and affective online experience are responsible for representing (or
simulating) this information in offline conceptual processing when people think about
something not currently in front of them. For example, reading the word “cinnamon”
activates olfactory processing areas in the piriform cortex (González et al., 2006; see also Hauk, Johnsrude, & Pulvermüller, 2004; Newman, Klatzky, Lederman, & Just, 2005),
and many phenomena originally found in online perceptual processing have also been
found to emerge in equivalent offline conceptual tasks (Connell & Lynott, 2010; Dils &
Boroditsky, 2010; Pecher, Zeelenberg, & Barsalou, 2003). Furthermore, humans are a linguistic species, in that both online processing of ongoing experience and offline processing of past, future, and hypothetical situations can be labeled with words and phrases.
The statistical associations between such linguistic labels are a rich source of information, and processing in a range of conceptual tasks is subject to both simulation and linguistic effects (Louwerse & Connell, 2011; Louwerse & Jeuniaux, 2008, 2010; Santos,
Chaigneau, Simmons, & Barsalou, 2011; Solomon & Barsalou, 2004). For instance, there
is a delay in judging that seaweed can be slimy if you have just judged that leaves can
be rustling, both because there is a modality switching cost in shifting attention between
auditory and haptic simulation systems, and because the words slimy and rustling rarely
co-occur and belong to different linguistic clusters (Louwerse & Connell, 2011).
So, what is the difference between a representation and a concept? While these two
terms are often used somewhat interchangeably, it is important for our present purposes
to distinguish them. In this paper, we use the term representation to refer to a specific,
situated, contextual instantiation of one or more concepts necessary for the current task.
It is an ongoing conglomeration of neural activation patterns across perceptual, motor,
affective, linguistic, and other areas, and, as such, includes both online (i.e., part of current experience) and offline (i.e., not part of current experience) information (see Principle 1). Moreover, since the conceptual system is both linguistic and simulation based, a
representation also includes linguistic labels, which play important roles in conceptual
processing (see Principle 2). If a friend tells you she has just acquired a new Labrador
puppy, your representation of this particular dog, which you have never seen, will be situated in what you know of dogs in general and your friend’s circumstances in particular.
It will draw upon your existing concepts of Labradors and puppies to form a coherent
simulation of its likely appearance (e.g., visual information of golden fur), behavior (e.g.,
auditory information of whining; visual-haptic information of excited wriggling), and possible consequences of meeting your friend’s resident elderly cat (i.e., prediction: Barsalou,
2009), and this situated representation will also include relevant linguistic labels such as
“puppy,” “labrador,” “cat,” and (for those exposed to U.K. advertising campaigns)
“Andrex.”
A concept, on the other hand, refers to a general, aggregated, canonical (i.e., context-free) aspect of experience that has the potential to form the basis of an offline representation. A concept is formed when particular aspects of experience are attended to often
enough that they lead to overlapping patterns of neural activation that can be re-activated
relatively easily in an offline representation. For example, imagine your experience of
patting a Labrador on the head at time t1 includes visual experience of its size and golden
color, haptic experience of its soft fur, and motor experience of reaching and scratching.
Your next experience at time t2 of walking a terrier in the rain is likely to include visual
experience of a small, brown creature, olfactory experience of the effect of rain on fur,
and motor experience of pulling on a lead. While these experiences at t1 and t2 are far
from identical, over time there will be a large enough statistical overlap in the neural
activation patterns, and this overlap will occur frequently enough that some version of
the pattern can be re-activated relatively easily offline (i.e., without direct experience).
Thus, the concept of dog is born, and will change over time as more experience is accumulated (see Principle 3). Since any aspect of experience that repeatedly receives attention can give rise to an easily reactivated activation pattern, any aspect of experience can
be a concept. Red things, logarithmic curves, that feeling as you pull your front door shut
that you might have forgotten your house keys—all are concepts to some extent, even if
they do not all possess convenient linguistic labels in English. However, only common
patterns with linguistic labels are likely to receive widespread agreement regarding their
concept-hood. Indeed, following theories that the conceptual system is both linguistic and
simulation based, we would argue that the label itself—“dog,” “chien,” “inu,” whatever
your native language—will also form part of the concept of dog. Our definitions of long-term concepts and instantiated representations therefore follow the type-token distinction,
but are not identical to Barsalou’s (1999, 2003) definitions of simulators and simulations
in that we explicitly include linguistic information. We discuss this inclusion in more
depth in Principle 2.
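To make this aggregation concrete, the sketch below (a minimal illustration, not a claim about neural implementation) caricatures each experience as a set of attended features with weights and a concept as their running average; the feature names and weights are invented for the dog example above.

```python
# Minimal sketch: a concept as the statistical aggregate of attended experience.
# Feature names and attention weights are hypothetical, chosen to mirror the
# Labrador (t1) and terrier (t2) episodes described in the text.
class Concept:
    """Long-term concept: the overlap of many encoded experiences."""

    def __init__(self, label=None):
        self.label = label        # optional linguistic label, e.g. "dog"
        self.sums = {}            # feature -> summed attention weight
        self.n_experiences = 0

    def encode(self, experience):
        """Fold one experience (feature -> attention weight) into the aggregate."""
        self.n_experiences += 1
        for feature, weight in experience.items():
            self.sums[feature] = self.sums.get(feature, 0.0) + weight

    def instantiate(self):
        """Token representation: re-activate the aggregate as it currently stands.
        Features attended in most experiences stay strong; one-off features fade."""
        return {f: s / self.n_experiences for f, s in self.sums.items()}

dog = Concept(label="dog")
dog.encode({"visual:golden_fur": 0.9, "haptic:soft_fur": 0.8, "motor:scratching": 0.7})       # t1
dog.encode({"visual:small_brown": 0.8, "olfactory:wet_fur": 0.6, "motor:lead_pulling": 0.7})  # t2
print(dog.instantiate())   # overlaps with, but is identical to, neither experience
```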
Many cognitive scientists working from the embodiment perspective agree that the
content of mental representations changes dynamically with experiences, current goals,
and available resources (Barsalou, 1999, 2008; Glenberg, 1997; Glenberg & Robertson,
2000; Lynott & Connell, 2010; Vigliocco et al., 2009; Wilson, 2002). However, the
key embodied tenets that the conceptual system is central to both online and offline
cognition, and that concepts are grounded in perceptual, motor, and affective systems,
have some critical corollaries that must not be overlooked. In this paper, therefore, we
outline three principles of representation that demonstrate why it is not possible to represent the same content on two separate occasions, and discuss the impact for theories
and models of cognition. These principles are orthogonal in that each one independently
contributes to the assertion that one cannot, in effect, ever represent the same concept
twice.
2. Principle 1: Online perception affects offline representation
Because the conceptual system co-opts the perceptual system for the purposes of
representation, concurrent perceptual and attentional processing of the surrounding environment affects what is conceptually retrieved and represented. Several
studies have shown that representational content depends on how ongoing demands
direct attention to modality-specific systems, and there are three different ways in
which this can happen: strategic attention, implicit task demands, and perceptual stimulation.
2.1. Strategic attention
People can consciously and strategically direct endogenous attention to the perceptual
modality of their choice (Spence, Nicholls, & Driver, 2001; Turatto, Bridgeman, Galfano,
& Umiltà, 2004). For example, when we asked people in a recent study to judge whether
a presented word related to a particular perceptual modality, we found that people were
slower and less accurate for touch-related words than those relating to vision, sound,
smell, or taste (Connell & Lynott, 2010). Critically, when it came to bimodal words like
jagged or fluffy that relate equally strongly to vision and touch, we found that the tactile
disadvantage persisted when people were asked to focus on the tactile modality (i.e.,
judging whether jagged could be perceived through touch) but disappeared for the visual
modality (i.e., judging whether jagged could be perceived through sight). In other words,
even when the same lexical item is presented, consciously directing attention to the sense
of touch resulted in a more effortful and error-prone representation than when consciously directing attention to the sense of vision.
2.2. Implicit task demands
Perceptual attention does not have to be consciously directed to elicit modality-specific
effects in conceptual processing. A second means of attentional capture comes from the
perceptual nature of the stimuli or task. Modality switching effects have robustly demonstrated that there is a processing cost involved in re-allocating attentional resources from
one modality-specific system (e.g., verifying the auditory property that a blender can be
loud) to another (e.g., gustatory cucumber can be bland: Collins, Pecher, Zeelenberg, &
Coulson, 2011; Connell & Lynott, 2011; Hald, Marshall, Janssen, & Garnham, 2011;
Lynott & Connell, 2009; Marques, 2006; Pecher et al., 2003). Furthermore, the nature of
the task itself can implicitly direct attention. Lexical decision tasks involve recognition
of visual word forms, and thus preferentially direct attention to the visual modality;
hence, words that refer to concepts with a strong visual component (e.g., small) show
faster and more accurate lexical decision performance than weakly visual words (e.g.,
husky), even when length, frequency, and other variables have been controlled (Connell
& Lynott, 2014). By contrast, saying the same words aloud in a naming task preferentially directs attention to the auditory modality, since the goal is correct pronunciation,
and hence strongly auditory words (e.g., husky) are faster and more accurate to name
than weakly auditory words (e.g., small). Directing attention to a particular perceptual
modality facilitates processing in that modality, and so differential attentional demands
of each task interact with the strength of perceptual experience in the referent concepts
to produce different representations of small and husky during lexical decision and
naming.
2.3. Perceptual stimulation
Finally, concurrent perception can drive attentional demands and thereby affect representation, even when the task at hand remains constant. Whether concurrent perception
facilitates or interferes with conceptual processing depends on the extent to which the
perceptual stimulus tends to monopolize shared attentional and representational resources
(Connell & Lynott, 2012). In particular, the temporal dynamics of a moving perceptual
stimulus require continuous attention to monitor it for change, and hence leave few attentional resources in the input modality free for simultaneous conceptual processing (i.e.,
resulting in interference). For example, Kaschak et al. (2005) asked people to listen to
sentences describing upward or downward motion (e.g., “The cat climbed the tree”) while
simultaneously watching a high-contrast display scroll upwards or downwards. People
were slower to judge that the sentences made sense when the directions of motion were
congruent, leading Kaschak and colleagues to conclude that when the neural systems for
processing a particular direction of motion were involved in perception, they were less
available for simulating that same direction of motion in the sentence representation. Similar effects have been shown for single motion verbs rather than sentences (Meteyard,
Bahrami, & Vigliocco, 2007), and for concurrent perception of auditory rather than visual
motion (i.e., when the spatial source of a sound appears to change location: Kaschak,
Zwaan, Aveyard, & Yaxley, 2006).
In contrast, some perceptual stimuli do not require continuous monitoring, and instead
serve to direct attention to the input modality while leaving resources free for concurrent
conceptual processing (i.e., resulting in facilitation). In a recent study, we showed that
stimulating the hands with tactile vibrations made it easier for people to represent and
compare small, manipulable objects like wallet and key, because the concepts included
modality-specific information about touch that specifically related to the hands (Connell,
Lynott, & Dreyer, 2012). Objects that were too large to be physically manipulable, like
cottage and mansion, lacked relevant tactile information in their concepts and hence
were unaffected by tactile stimulation. We found the same effects for proprioceptive
stimulation (i.e., passive arm/hand positioning). Indeed, bodily posture has also been
shown to facilitate retrieval of autobiographical memories, such that people’s recounting
of their first visit to the dentist is more prompt and detailed when lying supine on a
couch than when standing upright (Dijkstra, Kaschak, & Zwaan, 2007; see also Riskind
& Gotay, 1983). In short, ongoing perception of the body and external stimuli influences
the content of mental representations because they effectively share patterns of activation.
2.4. Summary
Since conceptual information is perceptually grounded, attention on modality-specific
perceptual systems—whether directed strategically, unconsciously by implicit task
demands, or directly by perceptual stimuli—has profound effects on the content of simulated representations. One cannot be expected to represent the same concept twice when
the task, goals, motivations, surrounding environment, and bodily configuration all contrive to shape the resulting simulation.
3. Principle 2: Language is a fundamental facilitator of offline representation
The co-option of perceptual and other systems for offline simulation enables powerful
conceptual processing, but human offline representational capacity is nonetheless limited
and is even subject to further encroachment by online, real-time experience (Principle 1).
However, because humans are equipped with language, some of these
resource limitations can be circumvented. Natural languages are full of statistical regularities: Words and phrases tend to occur repeatedly in similar contexts, just as their referents
tend to occur repeatedly in similar situations, and people’s sensitivity to such regularities
(e.g., Aslin, Saffran, & Newport, 1998; Louwerse & Connell, 2011; Solomon & Barsalou,
2004) allows them to build up substantial distributional knowledge of linguistic associations. This dynamic web of word-to-word (and word-to-phrase, phrase-to-phrase, etc.)
associations constitutes the linguistic system, which is just as central to human conceptualization as the simulation system (Andrews, Vigliocco, & Vinson, 2009; Barsalou et al.,
2008; Johns & Jones, 2012; Louwerse & Jeuniaux, 2008, 2010; Lynott & Connell, 2010).
The linguistic and simulation systems are closely interconnected and mutually supportive;
linguistic information can activate simulation information, which may in turn activate further linguistic information, and so on. For example, hearing the word “dog” will simultaneously activate closely associated linguistic tokens (i.e., a word or phrase, such as “cat,”
“puppy,” “tail”) and the simulation of our aforementioned barking, drooling, tail-wagging
mammal. These activated tokens will, in their turn, begin to activate their relevant simulations, whereas the various aspects of the simulated dog will activate their own linguistic
labels (e.g., “barking,” “woof,” “tail”). This rolling wave of linguistic and simulation activations helps to form a stable, situated simulation of dog for the task at hand. Here, we
outline two ways that the linguistic system leads to representational differences by helping to circumvent unavoidable resource limitations: linguistic bootstrapping and the linguistic shortcut.
3.1. Linguistic bootstrapping
Representations are continuously modified by linguistic bootstrapping, where words
and phrases act as pointers or placeholders in a limited-resource cognitive system. When
a linguistic label is associated with the neural activation pattern that constitutes a particular concept, it is no longer necessary to re-activate the entire pattern as part of a
simulated representation. Rather, when there are insufficient representational resources to
maintain a simulation in full, such as when online processing demands attention (see
Principle 1) or when the representation itself is highly complex or detailed, a word or
phrase can replace a portion of the simulation with a linguistic placeholder that preserves
structure but frees up resources to extend the simulation in other directions as needed. A
linguistic label consequently acts as a temporary proxy for a simulated concept in an
ongoing representation, and can be fleshed out into a simulation again at any time if
resources are available. Having linguistic labels for concepts, particularly complex concepts, therefore allows us to build up recursive layers of representational depth that would
not be possible if we were not a linguistic species (Lynott & Connell, 2010).
In this sense, linguistic bootstrapping helps to overcome the problems that ensue when
the sheer complexity of what one is trying to represent outstrips the capacity available.
An increasingly large vocabulary therefore facilitates increasingly complex thought, at
least up to some presumed threshold beyond which the cost of learning and retaining new
words is larger than the savings made by using these words as linguistic placeholders.
For example, children with larger vocabularies are better able to identify atypical examples of objects (Smith, 2003) and show greater flexibility in recategorizing objects (Jaswal, 2007). Individual differences in vocabulary size are also strongly related to more
general tests of intelligence (e.g., Dunn & Dunn, 1997; Raven, 1948). The simulation system may be relatively continuous from humans to nonhumans (Barsalou, 2005), but the
ability of linguistic bootstrapping to facilitate complex representation is a major advantage
for humans and offers a strong mechanism for how the evolution of language—whether
spoken or gestural in nature—could have accelerated cognitive evolution. While we are
not attempting to claim that all complex cognition is impossible without language, we
would nevertheless argue that some complex thought—particularly that which suffers
from limits of representational capacity—relies on linguistic bootstrapping.
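Purely as an illustration of this trade-off (the capacity, costs, and concepts below are invented and carry no claim about actual resource units), a limited-capacity representation can hold some concepts as full simulations while demoting others to cheap label placeholders:

```python
SIMULATION_COST = 5   # holding a full simulation is expensive (arbitrary units)
LABEL_COST = 1        # a linguistic placeholder is cheap

class Representation:
    """A situated representation with limited capacity for full simulation."""

    def __init__(self, capacity=12):
        self.capacity = capacity
        self.items = {}   # concept -> "simulation" or "label"

    def used(self):
        return sum(SIMULATION_COST if mode == "simulation" else LABEL_COST
                   for mode in self.items.values())

    def add(self, concept):
        """Simulate fully if resources allow; otherwise bootstrap with labels."""
        names = list(self.items)
        # demote earlier simulations to placeholders until the new one fits
        while self.used() + SIMULATION_COST > self.capacity and names:
            name = names.pop(0)
            if self.items[name] == "simulation":
                self.items[name] = "label"
        mode = "simulation" if self.used() + SIMULATION_COST <= self.capacity else "label"
        self.items[concept] = mode

rep = Representation()
for concept in ["puppy", "labrador", "friend's flat", "elderly cat"]:
    rep.add(concept)
print(rep.items)   # earlier concepts persist as labels, freeing room for new simulations
```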
3.2. Linguistic shortcut
An important difference between the linguistic and simulation systems is one of relative speed: Responses based on shallow, linguistic distributional information tend to be
faster than responses that rely on deeper, situated simulation (Barsalou et al., 2008;
Connell & Lynott, 2013; Louwerse & Connell, 2011; Louwerse & Jeuniaux, 2008, 2010;
Lynott & Connell, 2010). When a word such as “dog” is heard or read, both systems are
initiated but the linguistic system peaks in activation (i.e., spreads activation to other
tokens, such as “cat” or “tail”) before the simulation system peaks (e.g., forms a visual,
haptic, olfactory, affective situated simulation of a dog). However, the linguistic speed
advantage does not mean that simulation is slow; for example, words like kick can activate the leg region of the primary motor cortex in less than 200 ms (e.g., Hauk & Pulvermüller, 2004; Pulvermüller, Shtyrov, & Ilmoniemi, 2005). Rather, the relative
importance of each type of system will change according to the current context or specific task demands. Information from the linguistic system is both faster to access and
less precise than simulation information (Louwerse & Connell, 2011; Louwerse & Hutchinson, 2012), and hence has the hallmark of a “quick and dirty,” computationally cheap
heuristic. The linguistic system therefore has the potential to act as a shortcut and provide
a response before the relatively more expensive simulation system is fully engaged.
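One way to picture this relative-speed difference is as two processes racing toward a response criterion, with the linguistic system peaking earlier; the activation curve, peak latencies, and threshold in the sketch below are invented and carry no empirical weight.

```python
import math

def activation(t, peak_time):
    """Schematic activation curve that rises to 1.0 at peak_time and then decays."""
    return (t / peak_time) * math.exp(1 - t / peak_time)

THRESHOLD = 0.8                                  # evidence needed to respond (arbitrary)
LINGUISTIC_PEAK, SIMULATION_PEAK = 150.0, 400.0  # hypothetical peak latencies in ms

def respond():
    t = 0.0
    while True:
        t += 1.0
        if activation(t, LINGUISTIC_PEAK) >= THRESHOLD:
            return t, "linguistic"    # the shortcut answers first when it can
        if activation(t, SIMULATION_PEAK) >= THRESHOLD:
            return t, "simulation"

print(respond())   # responds from the linguistic system well before simulation peaks
```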
Support for the linguistic shortcut comes from a number of empirical studies. One
approach has focused on the content of what people retrieve when asked to generate
properties for a particular concept. When given the cue word “bee,” Santos et al. (2011)
found that the first properties that people listed tended to be close linguistic associates
like “sting,” “honey,” or “hive,” whereas later properties tended to describe details of the
broader situation such as “flowers,” “wings,” or “summer.” In an imaging version of this
study (Simmons, Hamann, Harenski, Hu, & Barsalou, 2008), properties that people listed
in the first 7.5 s after seeing a cue word produced greater activation in Broca’s area and
other regions usually associated with classic language tasks. In contrast, properties that
people listed 7.5–15 s after a cue word preferentially activated areas usually involved in
mental imagery, episodic memory, and location processing (precuneus and right middle
temporal gyrus). Simmons and colleagues conclude that while both systems were active
in conceptual processing throughout the 15 s range, the linguistic system dominated for
the first half of property generation and the simulation systems dominated for the second
half. In other words, people tended to rely on the computationally cheap linguistic shortcut to provide responses for as long as possible, and they drew more heavily upon the
simulation system as linguistic responses ran out.
An alternative approach has been to use large-scale corpora to model the kind of linguistic distributional associations that are likely to be captured by the human linguistic
system. For example, in Louwerse and Connell (2011), we used the Google Web 1T corpus (a trillion-word snapshot of Google-indexed pages from 2006: Brants & Franz, 2006)
to calculate associative frequencies of modality-specific words such as scarlet (visual),
loud (auditory), or pungent (olfactory). Using a modality switching cost paradigm (e.g.,
Pecher et al., 2003; Lynott & Connell, 2009) to compare the predictive abilities of linguistic and perceptual information, we found that the linguistic shortcut was the best predictor of fast responses, whereas perceptual simulation was the best predictor of slow
responses (see also Louwerse & Hutchinson, 2012; for related ERP evidence). Further
work in our laboratory has shown that use of the linguistic shortcut is not merely
restricted to relatively shallow conceptual retrieval (i.e., verifying a known property of a
known concept; see also Solomon & Barsalou, 2004) but is also useful in deeper conceptual processing as a form of cognitive triage. When dealing with novel stimuli in conceptual combination, the frequency with which two words have previously appeared in close
proximity affects not only the time course of successful processing but also the time
course and likelihood of failure (Connell & Lynott, 2013). The less often two words have
appeared close together (e.g., octopus and apartment), the more likely people are to reject
their compound (e.g., octopus apartment) as nonsensical or uninterpretable rather than
risk costly failure in the simulation system.
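For readers who want the flavor of such corpus measures, the sketch below computes a generic pointwise mutual information score from co-occurrence counts; it is not the specific calculation used with the Google Web 1T data, and the three-sentence "corpus" is obviously invented.

```python
import math
from collections import Counter

# A deliberately tiny invented corpus; real work uses billions of words.
corpus = [
    "the loud blender whirred in the kitchen".split(),
    "a pungent smell of garlic filled the loud kitchen".split(),
    "the scarlet dress looked bright in the daylight".split(),
]

WINDOW = 4   # how far ahead to look for co-occurring words
word_counts, pair_counts, total = Counter(), Counter(), 0
for sentence in corpus:
    for i, word in enumerate(sentence):
        word_counts[word] += 1
        total += 1
        for other in sentence[i + 1:i + WINDOW]:
            pair_counts[tuple(sorted((word, other)))] += 1

def association(w1, w2):
    """Pointwise mutual information: higher = stronger distributional association."""
    joint = pair_counts[tuple(sorted((w1, w2)))]
    if joint == 0:
        return float("-inf")   # never co-occur in this sample
    p_joint = joint / total
    p1, p2 = word_counts[w1] / total, word_counts[w2] / total
    return math.log2(p_joint / (p1 * p2))

print(association("loud", "kitchen"))     # co-occur within the window: positive score
print(association("scarlet", "pungent"))  # never co-occur here: -inf
```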
3.3. Summary
Via linguistic bootstrapping and the linguistic shortcut, language can help to circumvent resource limitations caused by online processing (see Principle 1), although the utility of these mechanisms will vary according to whether there are suitable linguistic labels
for the relevant patterns of activation in the task at hand. An important implication of
both linguistic bootstrapping and the linguistic shortcut is that although the concept to which a word
refers is ultimately grounded in the simulation system, a word does not need to be fully
grounded every time it is processed (Louwerse & Connell, 2011). In this sense, words
can help to overcome the representational bottleneck that ensues “when situations demand
fast and continuously evolving responses, there may simply not be time to build up a
full-blown mental model of the environment” (Wilson, 2002, p. 628). Because the linguistic system is faster and computationally cheaper than basing a judgment on the simulation system, and because on-the-fly conceptual processing does not have to be perfect
(only “good enough”: Ferreira, Bailey, & Ferraro, 2002), people can safely exploit the
linguistic system much of the time.
While some parallels may be drawn between the fast linguistic system/slow simulation
system and dual processing theories of reasoning and cognition (Evans & Over, 1996;
Kahneman, 2003; Sloman, 1996; Stanovich, 1999; cf. Carruthers, 2006, for an opposite
perspective linking language to system 2), it is important to note that they are not the
same. The main contrast between system 1 and system 2 in dual processing theories is
that the first is fast and unconsciously automatic, whereas the latter is slow and consciously deliberative (Evans, 2008). However, both linguistic and simulation systems are
fast, unconscious, and automatically engaged, albeit with relative differences in speed
(Barsalou et al., 2008; Louwerse & Jeuniaux, 2008; Lynott & Connell, 2010), meaning
that there is little about the usual system 1/system 2 distinction that can map onto the
roles of linguistic and simulation systems in human cognition. One cannot represent the
same concept twice when different aspects of the situation may be represented in linguistic and/or simulated form according to available resources, task demands, and processing
goals.
4. Principle 3: Time itself is a source of representational change
Even if online perceptual constraints (Principle 1) and linguistic bootstrapping/shortcuts
(Principle 2) are replicated exactly on two separate occasions (i.e., trying to retrieve the
same thing under precisely the same circumstances), the representation formed at t1 will
never be identical to that formed at t2 because the pattern of activation that constitutes
the concept to be retrieved (i.e., instantiated) is subject to change over time. Because
concepts alter with the accumulation of direct and vicarious experience, and with the act
of retrieval itself, the representations that draw on such conceptual content are also inevitably mutable. Or, in other words, one of the drivers of representational change over time
is conceptual change over time.
Borges’s (1962) fictional creation of Funes the Memorious has become one of the most
famous examples in cognitive science of the importance of categorization and abstraction
in memory. Funes, afflicted by separately encoding every instance of every item he
encountered, “was disturbed by the fact that a dog at three-fourteen (seen in profile)
should have the same name as the dog at three-fifteen (seen from the front)” (Borges,
1962, p. 70). In many ways, of course, the representation Funes formed of a dog at 3:14
differed profoundly from his representation of a dog at 3:15, but where things go wrong
for Funes is his inability to draw upon a general, canonical concept of dog to help situate
his representation. Even within such a short time window, however, there will still be differences between a person’s concept of dog at 3:14 and 3:15, because every further experience alters the nature of experience-based concepts.
The act of conceptual retrieval is not a passive read-only access of memory, but
actively alters what is being retrieved, both in strength and content. In this sense, encoding and retrieval cannot be clearly separated because “retrieval transfers memory into a
state of transient plasticity” (Hardt, Einarsson, & Nader, 2010, p. 151) in which conceptual content is malleable. Every single representation of a dog draws upon the dog concept with a slightly different focus; if one moved to a city where handbag dogs were
prevalent, every sighting of a chihuahua would not only activate one’s concept of dog,
but subtly change it by reinforcing perceptual experience of small, yappy creatures. Furthermore, every offline thought of dogs may be influenced by this chihuahua-laden environment, because the perceptual information that was most recently represented in a
concept is preferentially re-activated the next time that concept is retrieved, even after a
delay, and even when it is not strategically useful to the task at hand (Pecher, Zanolie, &
Zeelenberg, 2007; Pecher, Zeelenberg, & Barsalou, 2004; van Dantzig, Cowell, Zeelenberg, & Pecher, 2011). Indeed, since simulated and experienced reality are not encoded
differently (e.g., Loftus, 1975, 1996), vicarious experience of dogs (e.g., reading Stephen
King’s Cujo) would have a similar effect in influencing the perceptual, motor, and affective information that makes up the dog concept. Even if one managed to spend a year
without seeing or thinking about dogs, experience of related concepts and situations,
including that triggered by exposure to related words, would still serve to shift the pattern
of activation in the dog concept. In other words, the constant and subtle shifting in the
composition of concepts influences the content of representations, since different information is available to instantiate on each occasion.
4.1. Summary
Concepts are subject to constant change throughout the lifespan, even though the
notion of invariance is often implicit in discussions of the adult conceptual system. For
example, in their critique of perceptually grounded concepts, Mahon and Caramazza
(2008) argue that incorporating differing sensory information as part of a concept creates
a problem where “the same person may not instantiate the same concept at different
points in time” (p. 68). However, since concepts are formed from direct and vicarious
experience, there is no reason to expect the same person’s representations of a concept to
remain invariant from one occasion to the next. Concepts (i.e., the generic dog) will
always vary by personal experience both within and between individuals. Instantiated representations (i.e., a specific example of a real or imaginary dog) will therefore always
vary according to the current form of the underlying concept, as well as task goals, available resources, and so on.
5. Discussion
As embodied theories of cognition are increasingly formalized and tested (Pezzulo
et al., 2011), care must be taken to make informed assumptions regarding the nature of
concepts and representation. In this paper, we have argued that three principles of representation dictate why one cannot, in effect, ever represent the same concept twice. As
theoretical constraints, our principles state that (a) online perception affects offline representation, (b) language is a fundamental facilitator of offline representation, and (c) time
itself is a source of representational change. The principles we outline therefore have
important ramifications for research into both human and synthetic representational systems.
5.1. Representation in communication
All three of our principles highlight the variability of mental representations, whereby
representational change is driven by a diverse range of sources. Capacity for change,
however, does not mean instability. Like an attractor in a dynamical system, a long-term
concept may be stable (i.e., not subject to massive changes from small perturbations)
while still being subject to gradual drift over time (Principle 3). That an offline representation of such a stable concept is subject to further influences from both online perception
(Principle 1) and language (Principle 2) does not render it unstable; rather, as long as the
ongoing representation can appropriately fulfill current task goals, it is fit for purpose.
For example, a common task that makes heavy use of the conceptual system is that of
communication. Since the cumulative experience that makes up the conceptual system
will differ between individuals, often considerably so, the ability of two individuals to
communicate effectively depends on there being a minimum degree of overlap between
their ongoing representations and, therefore, the concepts from which these representations derive. The question then becomes: How much overlap is enough? Critically, understanding one another in communication does not require identical representations between
speaker and listener (cf. Mahon & Caramazza, 2008, p. 68), just representations that overlap enough to achieve the current communicative goal. In dialog, speakers can negotiate
to ensure that they share enough common ground to communicate effectively (Clark,
1996). Nonetheless, even single-word utterances are embedded in valuable situational and
nonverbal cues. Take the word “fire”: a person running from a building shouting “fire” is
likely to be well understood by passers-by, unlike (for instance) a person standing on a
street corner quietly speaking the same word. In the first case, the representation of a
building on fire in the minds of passers-by will be close enough to that of the fleeing
speaker to achieve the communicative goal of warning. In the second case, there is insufficient situational context to constrain what the speaker intends and passers-by are not
very likely to arrive at the same representation.
Furthermore, we propose that the linguistic system also plays a critical role in providing conceptual and representational overlap. Since the conceptual system comprises both
linguistic and simulation information (Principle 2), linguistic labels provide a valuable
source of shared experience in their own right. Words are not just the vehicles of meaning, but, because of their ability to bootstrap representations by acting as placeholders
and shape representations by their distributional information, actually contribute to overlap of representation between speaker and listener. Indeed, one could argue that the linguistic label is the only truly identical element between the concepts and representations
of two individuals. It is entirely possible for two individuals to communicate smoothly,
and believe they understand one another very well, while having very little overlap in the
content of what they are mentally representing. For example, imagine your lifetime experience of dogs has been entirely of the small, handbag-dog variety, and that you are unaware that dogs come in any form larger than a chihuahua. You then meet someone who
has only ever experienced large working dogs and is unaware that dogs come in any form
smaller than a German shepherd. An exchange such as “Do you like dogs?,” “Yes, we
have one at home,” “Same here, we just got one last week from the shelter,” is perfectly
effective communication where each party understands the other, even though each individual is representing quite a different dog in both canonical (i.e., liking dogs in general)
and specific (i.e., my pet dog at home) forms. Without the convenient label of “dog,”
however, it is unlikely that communication about such different concepts would succeed
so easily. In other words, however much the underlying concept might vary between individuals, a shared communicative context will help considerably in forming representations
that overlap enough for the purposes of the current goal.
5.2. Representation in cognitive models
As modeling constraints, our principles can be restated to imply that (a) offline processing is continuously subject to online perceptual inputs, (b) language is not just system input and output but is part of representation and processing, and (c) experience over
time means that identical inputs at t1 and t2 will not give rise to identical outputs. These
constraints impact upon both the high-level assumptions and practical design of cognitive
models that attempt to implement human conceptual representations.
Regarding constraint 1, it is, of course, not trivial to include online perceptual inputs
into the offline representations of a cognitive model: most models are physically disembodied (regardless of their underlying design principles) in that they are hardware-independent
software that is disconnected from the external, real-time environment. Since perceptual
inputs are not just adding noise in the system, but systematically altering whether and how
information is represented, their absence means that a significant source of variance in
human data is not being captured. One way to deal with this constraint is to take the
embodied robotics approach; models such as the iCub (Marocco, Cangelosi, Fischer, &
Belpaeme, 2010) possess external sensory systems and at least theoretically have the
capacity to enfold perceptual inputs into offline simulations. However, all is not lost for
creators of software cognitive models. If the attentional demands a particular task and paradigm places on modality-specific processing are sufficiently well understood (e.g., see
Connell & Lynott, 2012, for discussion of processing sentences about visual motion under
primed versus occupied visual attention), then a model can be designed to capture human
performance under these different conditions.
Regarding constraint 2, the cognitive sciences have seen much debate over the last
15 years regarding the role of language and linguistic information in conceptual processing. Early work with models such as LSA (Landauer & Dumais, 1997) and HAL (Burgess & Lund, 1997) proposed the ambitious claim that the distributional information
contained in word-to-word (and word-to-phrase, etc.) associations could constitute word
meaning in conceptual processing, but this claim was quickly checked by demonstrations
that distributional information could not explain everything (e.g., novel use of objects:
Glenberg & Robertson, 2000). The majority of cognitive models currently omit linguistic
distributional information (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; Dilkina, McClelland, & Plaut, 2008; Dry & Storms, 2010; Minda & Smith, 2010). However,
as with perceptual inputs, linguistic information systematically alters whether and how
information is represented, and so its absence means a significant source of variance in
human data is not being captured. The best way for cognitive models to deal with the
constraint that language forms a part of representation is to actually include both a linguistic and a grounded component in the model. Indeed, some recent cognitive models
have begun to do so (Andrews et al., 2009; Johns & Jones, 2012), showing that models
with both a linguistic distributional component and a grounded simulation component predict human data better than either component alone. While such a modeling approach is
still in the minority, we believe that the importance of the language constraint has been
hitherto underestimated, and we look forward to seeing its inclusion more often in the
future.
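The logic of that comparison can be illustrated with a toy regression (wholly synthetic data, not a re-analysis of any published study): a behavioral measure is predicted from a linguistic distributional predictor, a grounded perceptual predictor, and their combination.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
linguistic = rng.normal(size=n)   # e.g., corpus-derived association strength
perceptual = rng.normal(size=n)   # e.g., modality-specific perceptual strength
# invented "response times" influenced by both components plus noise
rt = 700 - 40 * linguistic - 25 * perceptual + rng.normal(scale=30, size=n)

def r_squared(predictors, y):
    """Ordinary least squares with an intercept; returns variance explained."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

print("linguistic only:", round(r_squared([linguistic], rt), 3))
print("perceptual only:", round(r_squared([perceptual], rt), 3))
print("both components:", round(r_squared([linguistic, perceptual], rt), 3))
```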
Finally, regarding constraint 3, many existing theoretical and computational models of
human cognitive processes implicitly treat conceptual knowledge as invariant (e.g., Coltheart et al., 2001; Dry & Storms, 2010; Johns & Jones, 2012; Machery, 2007; Mahon &
Caramazza, 2008; Markman, 2012; Minda & Smith, 2010; Pinker, 1999; cf. Elman, 2009;
Marocco et al., 2010). Once something has been learned or is known, invariant models
assume that it will always be retrieved and represented in the same way for the same
cue: cat always primes dog, spoons are typically metal, a job is always like a jail. Invariant representation, however, is not possible: Task-specific representations change from t1
to t2 because the underlying concept itself changes from t1 to t2. One way to step around
this constraint is to regard invariant models as snapshots of cognition—that is, an isolated
time slice of the conceptual system at a point of commencing a particular task that resets
to t0 before every new input—because a snapshot assumption is arguably less harmful to
a model’s psychological validity than an invariance assumption. Ideally, however, a cognitive model would be able to deal with this representational change constraint by continually evolving; rather than having separate training and testing phases, each new stimulus
would have the ability to shift the underlying concept while still maintaining its fit to
human performance.
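A minimal sketch of such a continually evolving model (schematic only; the blending weights and learning rate below are arbitrary) is a stored pattern that every retrieval both uses and updates, so that the same cue at t1 and t2 yields different representations:

```python
import numpy as np

class DriftingConcept:
    """A concept whose stored pattern shifts a little with every retrieval."""

    def __init__(self, dim=8, learning_rate=0.2, seed=1):
        self.pattern = np.random.default_rng(seed).normal(size=dim)  # current content
        self.lr = learning_rate

    def retrieve(self, experience):
        """Return a situated representation, then fold the experience back in."""
        representation = 0.7 * self.pattern + 0.3 * experience
        self.pattern += self.lr * (experience - self.pattern)  # retrieval alters the concept
        return representation

dog = DriftingConcept()
chihuahua = np.full(8, 0.5)          # the very same input, presented twice
first = dog.retrieve(chihuahua)      # t1
second = dog.retrieve(chihuahua)     # t2
print(np.allclose(first, second))    # False: the concept drifted between t1 and t2
```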
6. Conclusions
In sum, one should be cautious about the broader generalizability of any cognitive
account that implicitly treats the unendingly malleable human conceptual system as a static state space, blind to concurrent perception or linguistic information. Humans do not
work like that. If we wish to build a proper understanding of human cognition, then we
must allow for the fact that people are attentionally limited, linguistic entities whose representations continuously change over time. Similarly, if we hope to model how humans
learn about and represent the world, then we may well see new generations of distractible, language-reliant, inconstant, embodied robots. Whether human or synthetic, a cognizer can get by quite well without ever representing the same concept twice.
References
Andrews, M., Vigliocco, G., & Vinson, D. (2009). Integrating experiential and distributional data to learn
semantic representations. Psychological Review, 116, 463–498.
Aslin, R. N., Saffran, J. R., & Newport, E. L. (1998). Computation of conditional probability statistics by
8-month-old infants. Psychological Science, 9, 321–324.
Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–660.
Barsalou, L. W. (2003). Abstraction in perceptual symbol systems. Philosophical Transactions of the Royal
Society B, 358, 1177–1187.
Barsalou, L. W. (2005). Abstractions as dynamic interpretation in perceptual symbol systems. In L.
Gershkoff-Stowe, & D. Rakison (Eds.), Building object categories (pp. 389–431). Carnegie Symposium
Series. Mahwah, NJ: Erlbaum.
Barsalou, L.W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645.
Barsalou, L. W. (2009). Simulation, situated conceptualization, and prediction. Philosophical Transactions of
the Royal Society B, 364, 1281–1289.
Barsalou, L. W., Santos, A., Simmons, W. K., & Wilson, C. D. (2008). Language and simulation in
conceptual processing. In M. De Vega, A. M. Glenberg, & A. C. Graesser (Eds.). Symbols, embodiment,
and meaning (pp. 245–283). Oxford, UK: Oxford University Press.
Borges, J. L. (1962). Funes the Memorious (J. E. Irby, Trans.). In D. A. Yates & J. E. Irby (Eds.), Labyrinths
(pp. 65–71). New York: New Directions (Original work published 1942).
Brants, T., & Franz, A. (2006). Web 1T 5-gram Version 1. Philadelphia, PA: Linguistic Data Consortium.
Burgess, C., & Lund, K. (1997). Modelling parsing constraints with high-dimensional context space.
Language and Cognitive Processes, 12, 177–210.
Carruthers, P. (2006). The architecture of the mind. Oxford, UK: Oxford University Press.
Clark, H. H. (1996). Using language. Cambridge, UK: Cambridge University Press.
Collins, J., Pecher, D., Zeelenberg, R., & Coulson, S. (2011). Modality switching in a property verification
task: an ERP study of what happens when candles flicker after high heels click. Frontiers in Psychology,
2(10), 1–10.
Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of
visual word recognition and reading aloud. Psychological Review, 108, 204–256.
Connell, L., & Lynott, D. (2010). Look but don’t touch: tactile disadvantage in processing modality-specific
words. Cognition, 115, 1–9.
Connell, L., & Lynott, D. (2011). Modality switching costs emerge in concept creation as well as retrieval.
Cognitive Science, 35, 763–778.
Connell, L., & Lynott, D. (2012). When does perception facilitate or interfere with conceptual processing?
The effect of attentional modulation. Frontiers in Psychology, 3(474), 1–3.
Connell, L., & Lynott, D. (2013). Flexible and fast: Linguistic shortcut affects both shallow and deep
conceptual processing. Psychonomic Bulletin & Review, 20, 542–550.
Connell, L., & Lynott, D. (2014). I see/hear what you mean: Semantic activation depends on perceptual
attention. Journal of Experimental Psychology: General, 143, 527–533.
Connell, L., Lynott, D., & Dreyer, F. (2012). A functional role for modality-specific perceptual systems in
conceptual representations. PLoS ONE, 7(3), e33321.
Dijkstra, K., Kaschak, M. P., & Zwaan, R. A. (2007). Body posture facilitates retrieval of autobiographical
memories. Cognition, 102, 139–149.
Dilkina, K., McClelland, J. L., & Plaut, D. C. (2008). A single-system account of semantic and lexical
deficits in five semantic dementia patients. Cognitive Neuropsychology, 25, 136–164.
Dils, A. T., & Boroditsky, L. (2010). Visual motion aftereffect from understanding motion language.
Proceedings of the National Academy of Sciences, 107, 16396–16400.
Dry, M. J., & Storms, G. (2010). Features of graded category structure: Generalizing the family resemblance
and polymorphous concept models. Acta Psychologica, 133, 244–255.
Dunn, L. M., & Dunn, L. M. (1997). Peabody Picture Vocabulary Test (3rd ed). Circle Pines, MN: American
Guidance Service.
Elman, J. L. (2009). On the meaning of words and dinosaur bones: Lexical knowledge without a lexicon.
Cognitive Science, 33, 547–582.
Evans, J. St. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual
Review of Psychology, 59, 255–278.
Evans, J. St. B. T., & Over, D. E. (1996). Rationality and reasoning. Hove, UK: Psychology Press.
Ferreira, F., Bailey, K. G., & Ferraro, V. (2002). Good-enough representations in language comprehension.
Current Directions in Psychological Science, 11, 11–15.
Glenberg, A. M. (1997). What memory is for. Behavioral and Brain Sciences, 20, 41–50.
Glenberg, A. M., & Robertson, D. A. (2000). Symbol grounding and meaning: A comparison of high-dimensional and embodied theories of meaning. Journal of Memory and Language, 43, 379–401.
González, J., Barros-Loscertales, A., Pulvermüller, F., Meseguer, V., Sanjuán, A., Belloch, V., & Ávila, C. (2006). Reading cinnamon activates olfactory brain regions. Neuroimage, 32, 906–912.
Hald, L. A., Marshall, J. A., Janssen, D. P., & Garnham, A. (2011). Switching modalities in a sentence
verification task: ERP evidence for embodied language processing. Frontiers in Psychology, 2(45), 1–15.
Hardt, O., Einarsson, E. O., & Nader, K. (2010). A bridge over troubled water: Reconsolidation as a link
between cognitive and neuroscientific memory research traditions. Annual Review of Psychology, 61, 141–
167.
Hauk, O., Johnsrude, I., & Pulvermüller, F. (2004). Somatotopic representation of action words in human
motor and premotor cortex. Neuron, 41, 301–307.
Hauk, O., & Pulvermüller, F. (2004). Neurophysiological distinction of action words in the fronto-central
cortex. Human Brain Mapping, 21, 191–201.
Jaswal, V. K. (2007). The effect of vocabulary size on toddlers’ receptiveness to unexpected testimony about
category membership. Infancy, 12, 169–187.
Johns, B. T., & Jones, M. N. (2012). Perceptual inference through global lexical similarity. Topics in
Cognitive Science, 4, 103–120.
Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics. The American
Economic Review, 93, 1449–1475.
Kaschak, M. P., Madden, C. J., Therriault, D. J., Yaxley, R. H., Aveyard, M., Blanchard, A. A., & Zwaan, R.
A. (2005). Perception of motion affects language processing. Cognition, 94, B79–B89.
Kaschak, M. P., Zwaan, R. A., Aveyard, M., & Yaxley, R. H. (2006). Perception of auditory motion affects
language processing. Cognitive Science, 30, 733–744.
Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato’s problem: The latent semantic analysis theory
of acquisition, induction, and representation of knowledge. Psychological Review, 104, 211–240.
Loftus, E. F. (1975). Leading questions and the eyewitness report. Cognitive Psychology, 7, 560–572.
Loftus, E. F. (1996). Eyewitness testimony. Cambridge, MA: Harvard University Press.
Louwerse, M. M., & Connell, L. (2011). A taste of words: Linguistic context and perceptual simulation
predict the modality of words. Cognitive Science, 35, 381–398.
Louwerse, M., & Hutchinson, S. (2012). Neurological evidence linguistic processes precede perceptual
simulation in conceptual processing. Frontiers in Psychology, 3(385), 1–11.
Louwerse, M. M., & Jeuniaux, P. (2008). Language comprehension is both embodied and symbolic. In M. de
Vega, A. Glenberg, & A. C. Graesser (Eds.), Symbols, embodiment, and meaning. Oxford University
Press.
Louwerse, M. M., & Jeuniaux, P. (2010). The linguistic and embodied nature of conceptual processing.
Cognition, 114, 96–104.
Lynott, D., & Connell, L. (2009). Modality exclusivity norms for 423 object properties. Behavior Research
Methods, 41, 558–564.
Lynott, D., & Connell, L. (2010). Embodied conceptual combination. Frontiers in Psychology, 1(212), 1–14.
Machery, E. (2007). Concept empiricism: A methodological critique. Cognition, 104, 19–46.
Mahon, B. Z., & Caramazza, A. (2008). A critical look at the embodied cognition hypothesis and a new
proposal for grounding conceptual content. Journal of Physiology-Paris, 102, 59–70.
Markman, A. B. (2012). Knowledge representation. In K. J. Holyoak, & R. G. Morrison (Eds.), The Oxford
Handbook of Thinking and Reasoning (pp. 36–51). New York: Oxford University Press.
Marocco, D., Cangelosi, A., Fischer, K., & Belpaeme, T. (2010). Grounding action words in the sensorimotor
interaction with the world: Experiments with a simulated iCub humanoid robot. Frontiers in
Neurorobotics, 4(7), 1–15.
Marques, J. F. (2006). Specialization and semantic organization: Evidence for multiple semantics linked to
sensory modalities. Memory & Cognition, 34, 60–67.
Meteyard, L., Bahrami, B., & Vigliocco, G. (2007). Motion detection and motion verbs: Language affects low-level visual perception. Psychological Science, 18, 1007–1013.
Minda, J. P., & Smith, J. D. (2010). Prototype models of categorization: Basic formulation, predictions, and
limitations. In E. Pothos, & A. Wills (Eds.), Formal approaches in categorization (pp. 40–64).
Cambridge: Cambridge University Press.
Newman, S. D., Klatzky, R. L., Lederman, S. J., & Just, M. A. (2005). Imagining material versus geometric
properties of objects: An fMRI study. Cognitive Brain Research, 23, 235–246.
Pecher, D., Zanolie, K., & Zeelenberg, R. (2007). Verifying visual properties in sentence verification
facilitates picture recognition memory. Experimental Psychology, 54, 173–179.
Pecher, D., Zeelenberg, R., & Barsalou, L. W. (2003). Verifying properties from different modalities for
concepts produces switching costs. Psychological Science, 14, 119–124.
Pecher, D., Zeelenberg, R., & Barsalou, L. W. (2004). Sensorimotor simulations underlie conceptual
representations: Modality-specific effects of prior activation. Psychonomic Bulletin and Review, 11, 164–
167.
Pezzulo, G., Barsalou, L. W., Cangelosi, A., Fischer, M. H., McRae, K., & Spivey, M. J. (2011). The
mechanics of embodiment: a dialog on embodiment and computational modeling. Frontiers in
Psychology, 2(5), 1–21.
Pinker, S. (1999). Words and rules: The ingredients of language. New York: Basic Books.
Pulvermüller, F. (2005). Brain mechanisms linking language and action. Nature Reviews Neuroscience, 6, 576–582.
Pulvermüller, F., Shtyrov, Y., & Ilmoniemi, R. (2005). Brain signatures of meaning access in action word recognition. Journal of Cognitive Neuroscience, 17, 884–892.
Raven, J. C. (1948). The comparative assessment of intellectual ability. British Journal of Psychology.
General Section, 39, 12–19.
Riskind, J. H., & Gotay, C. C. (1983). Physical posture: Could it have regulatory or feedback effects on
motivation and emotion? Motivation and Emotion, 6, 273–298.
Santos, A., Chaigneau, S. E., Simmons, W. K., & Barsalou, L. W. (2011). Property generation reflects word
association and situated simulation. Language and Cognition, 3, 83–119.
Simmons, W. K., Hamann, S. B., Harenski, C. N., Hu, X. P., & Barsalou, L. W. (2008). fMRI evidence for
word association and situated simulation in conceptual processing. Journal of Physiology – Paris, 102,
106–119.
Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119, 3–22.
Smith, L. B. (2003). Learning to recognize objects. Psychological Science, 14, 244–250.
Solomon, K. O., & Barsalou, L. W. (2004). Perceptual simulation in property verification. Memory &
Cognition, 32, 244–259.
Spence, C., Nicholls, M. E. R., & Driver, J. (2001). The cost of expecting events in the wrong sensory
modality. Perception & Psychophysics, 63, 330–336.
Stanovich, K. E. (1999). Who is rational? Studies of individual differences in reasoning. Mahwah, NJ:
Erlbaum.
Turatto, M., Galfano, G., Bridgeman, B., & Umiltà, C. (2004). Space-independent modality-driven attentional
capture in auditory, tactile and visual systems. Experimental Brain Research, 155, 301–310.
van Dantzig, S., Cowell, R. A., Zeelenberg, R., & Pecher, D. (2011). A sharp image or a sharp knife: norms
for the modality-exclusivity of 774 concept-property items. Behavior Research Methods, 43, 145–154.
Vigliocco, G., Meteyard, L., Andrews, M., & Kousta, S. (2009). Toward a theory of semantic representation.
Language and Cognition, 1, 219–248.
Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin & Review, 9, 625–636.
Wilson-Mendenhall, C. D., Barrett, L. F., Simmons, W. K., & Barsalou, L. W. (2011). Grounding emotion in
situated conceptualization. Neuropsychologia, 49, 1105–1127.