25-Haggard-Chap25
5/5/07
11:34 PM
Page 567
25
Using conceptual knowledge in action
and language
M. van Elk, H. T. van Schie, O. Lindemann
and H. Bekkering
Action semantics refer to the conceptual knowledge that is necessary to guide our actions
and can be defined in terms of action goals and action means. The present chapter outlines
the concept of action semantics and presents a review of the existing literature dealing
with functional mechanisms underlying conceptual knowledge and goal-directed behavior.
Recent empirical findings are presented which indicate that semantic information is
selected in a flexible manner in accordance with the action intention of the actor, whereby
long-term conceptual associations may be overruled by current behavioral goals. These
results extend the selection-for-action principle beyond perception to include the selection
of semantic information that is relevant for the upcoming action. Findings are interpreted in
the light of contemporary models of conceptual knowledge in action.
Introduction
Within the booming field of cognitive neuroscience, a growing number of studies underline the interrelatedness of perception, action and language. It is acknowledged that
cognitive faculties do not operate in isolation, but rather share common neural mechanisms. This may come as no surprise, given that the division of the mind into
separate cognitive subsystems was originally inherited from intuitive folk-psychological
concepts. The separate consideration of cognitive faculties has proven useful in
acquiring a basic understanding of the workings of the brain. However, given the close
interactions between different cognitive subsystems, it may be useful to think about
adjusting our terminology to better cover the phenomena investigated. An exclusive
focus on the uniqueness of each cognitive faculty may blind us to the elements that are
common to different systems.
In the present chapter we introduce the concept of ‘action semantics’ as a useful tool to
bring together the closely related fields of language, action and conceptual knowledge. In
short, action semantics refer to the conceptual knowledge that is necessary to guide our
actions. Both long-term and short-term action semantics are important for a successful
interaction with the surrounding world. For example, although most of the time we use
everyday objects in an appropriate manner, sometimes objects acquire a new function
when used in a different context. Imagine yourself reading a newspaper while being
disturbed by a mosquito. The folded newspaper will be an excellent device to get rid of
the insect. Countless examples show that conceptual knowledge is selected and activated
in a flexible way, according to the actor’s current action goals (e.g. using a screwdriver to
open a paint-bucket).
In the first part of this chapter we will discuss the nature of action semantics, the
neural mechanisms supporting action semantics and its development over the course of
life. It is suggested that action goals play a dominant role in understanding the behavior
of others and in the selection of perceptual information for our own actions. In the
second part we turn to the question of how conceptual knowledge might be represented
within the brain. Different theories of conceptual knowledge in action will be discussed
that disagree on the principles underlying the organization of conceptual knowledge.
In the third section we will focus on the close relation between language and action.
Whereas several studies report effects of language on action preparation, hardly any
study investigated effects of action intention on the selection of semantic information.
Recent experiments from our group show that semantics are automatically selected
according to an actor’s present action goal, even when actions are in conflict with long-term
knowledge of object use (Lindemann et al., 2006; M. Van Elk et al., unpublished data). In
the final part of this chapter we will discuss theoretical implications of the findings
presented for theories about conceptual knowledge in action. We suggest that the study
of action semantics brings together several closely related fields and provides insight into
the functional mechanisms underlying goal-directed behavior.
Action semantics
We start this section by introducing the concept of action semantics. In subsequent paragraphs we will then focus on (i) the neural mechanisms that mediate the use of conceptual knowledge in action, (ii) the acquisition of action semantics in young children and
(iii) the selection of perceptual and semantic information for action.
What are action semantics?
Action semantics refer to the conceptual knowledge that is necessary to guide our actions.
More specifically, the conceptual knowledge used for actions consists of our knowledge
about the function and the use of concrete objects. Whenever we encounter a well-known object, semantic information about object use will be retrieved automatically from
memory. It is exactly this interplay between conceptual knowledge and action that is the
topic of our investigation. The relation between action planning and conceptual knowledge
has often been overlooked by theories in the domain of motor control. Likewise, theories
of conceptual knowledge have paid little attention to the use of semantic knowledge for
guiding behavior. For example, theoretical discussions on the modal or amodal nature of
conceptual knowledge bypass the idea that our conceptual system evolved in the first place
to enable us to cope with environmental demands. Being able to deal effectively with one’s
environment is essential to survival and a conceptual system that enables organisms to
successfully manipulate the surrounding world surely has great evolutionary advantages.
Action semantics capitalizes on the idea that conceptual knowledge is closely related to
the control of action.
Much of the information that is necessary to guide our actions is directly available to our
senses. For example, we quickly pick up the natural object affordances that allow us to
interact with categories of objects (e.g. seats) as well as specific instances of these categories (e.g. a reclining chair; Gibson, 1986). However, not all information that is necessary
to act successfully can be obtained directly through our senses. Within the first six months
of their lives, children are able to grasp an object like a water faucet correctly, but the
knowledge of how to turn a faucet is something that cannot be perceived directly upon
mere observation of the object by itself. Most water faucets are round and can be acted upon
in many ways in order to achieve the desired outcome. For most people, the knowledge
that in most Western countries faucets are opened by means of a counterclockwise rotation
will lead to the decision to grasp the tap with the thumb pointing upwards or away from
the body allowing for a comfortable turning motion (Rosenbaum et al., 1995). In other
words, the stored conceptual knowledge about water faucets is used for guiding our actions.
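Purely as an illustration of how such stored knowledge might feed into grip selection, the faucet example can be written as a small decision rule. This is a sketch of our own, not a model from the literature; the dictionary, function name and returned labels are hypothetical:

```python
# Toy decision rule inspired by the end-state-comfort findings of
# Rosenbaum et al. (1995): the initial grip is chosen so that the
# anticipated turning motion ends in a comfortable posture.
# The mapping below is an illustrative assumption, not empirical data.
OPENING_DIRECTION = {"western_faucet": "counterclockwise"}

def initial_thumb_orientation(obj: str) -> str:
    """Choose a starting grip that leaves the turning motion comfortable."""
    if OPENING_DIRECTION[obj] == "counterclockwise":
        # A thumb-up (or away-from-body) grip permits a comfortable
        # counterclockwise rotation of the wrist.
        return "thumb up"
    return "thumb down"

print(initial_thumb_orientation("western_faucet"))  # → thumb up
```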
Action semantics can be defined in terms of action goals (what to use an object for) and
action means (how to use an object). With action means we typically denote the manner
in which an object is handled (e.g. grasped), or more generally, the way in which we
preferably interact with an object (e.g. a toothbrush is grasped at the handle, not at the
brush). With action goals, on the other hand, we refer to the purpose or intent for which
an object is used (e.g. a toothbrush is used for brushing teeth). Different levels of
complexity can be discerned when describing action goals, ranging from high-level goal representations to lower levels. For example, lessening one’s thirst may be a general action
goal, whereas lifting a cup and bringing it to the mouth represents a lower level of action
goals. A useful taxonomy to distinguish between different levels of goal-representations
includes: (1) physical goals that refer to concrete objects, (2) action goals that refer to the
functional use of objects and (3) mental goals that reflect a desired state of affairs in the
world (Bekkering and Wohlschläger, 2000). Both action and mental goals are closely related
to the intention or purpose of the actor. Physical goals reflect a lower level description of
action goals and can be defined in terms of a movement’s goal-location.
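The taxonomy above can be encoded, purely for illustration, as a small data structure. The class and field names below are our own hypothetical labels, not terminology from Bekkering and Wohlschläger (2000):

```python
# Illustrative sketch of the three-level goal taxonomy plus action means.
# All names (ActionRepresentation, describe) and example strings are
# assumptions made for this sketch.
from dataclasses import dataclass

@dataclass
class ActionRepresentation:
    mental_goal: str    # desired state of affairs (e.g. thirst lessened)
    action_goal: str    # functional use of the object
    physical_goal: str  # concrete movement target (goal-location)
    means: str          # how the object is handled

drinking = ActionRepresentation(
    mental_goal="lessen thirst",
    action_goal="drink from the cup",
    physical_goal="reach the cup's handle",
    means="grasp handle with a precision grip",
)

def describe(rep: ActionRepresentation) -> str:
    """Render the hierarchy from highest (mental) to lowest (means) level."""
    return " -> ".join(
        [rep.mental_goal, rep.action_goal, rep.physical_goal, rep.means]
    )

print(describe(drinking))
```

The ordering of the fields mirrors the suggested hierarchy, with the action goal constraining the selection of physical goals and means.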
Still little is known about the functional processes through which selection and coordination of action goals and action means come about. At the behavioral level, the selection of
action means (e.g. the way in which an object is grasped) often seems to fulfill the
requirements of the action goal (what one is intending to do with the object; Rosenbaum
et al., 1995), suggesting the possibility of a hierarchical organization supporting action
semantics for goals and means. In accordance with this suggestion, previous research has
implicated the importance of action goals for guiding the integrity and consistency
of behavior over prolonged periods of time (Zalla et al., 2003; H. T. Van Schie et al.,
unpublished data).
Neural mechanisms mediating action semantics
The division between semantics for action goals and means is supported by the study of
neuropsychological patients with semantic dementia, who sometimes display selective loss
of a particular category of semantic knowledge. Previous research has distinguished between
different types of action deficits in patients with otherwise fully operational motor functions. Patients with ideational apraxia show a general loss of conceptual knowledge about
the function of objects and the purpose for which they are used (Ochipa et al., 1992). These patients
are able to perform simple motor acts and to retrieve tool names, but fail to use objects in an
appropriate way (e.g. they may use a toothbrush as a spoon). In contrast, ideomotor apraxia
refers to the inability to execute specific motor commands after damage to the left parietal or
fronto-parietal cortex (Buxbaum, 2001). Ideomotor apraxic patients show spatio-temporal
errors when pantomiming object use, or may grasp an object in an inappropriate way (e.g. grasping a stylus with a full grip) only to find
themselves at a loss as to which movements to make with it.
The behavior displayed by both groups of patients suggests a dissociation between knowledge
about the functional use of objects (action goals) and knowledge of the usual interaction
(action means) associated with objects (cf. Leiguarda and Marsden, 2000).
Ideomotor deficits have been associated with damage to parietal cortex, which is involved
in visuo-motor transformations necessary for grasping (e.g. Heilman et al., 1981;
Buxbaum et al., 2003). In addition, damage to the left posterior parietal lobe is often
associated with a deficit in demonstrating the appropriate actions associated with particular
objects (Ochipa et al., 1989). Ideational apraxia on the other hand has been found in
association with lesions in premotor and prefrontal areas (Sirigu et al., 1998; Leiguarda
and Marsden, 2000), consistent with recent evidence on the role of these areas in supporting goal-directed actions with objects after their prehension (H. T. Van Schie and
H. Bekkering, unpublished data).
Another important region that has been implicated in the retrieval of knowledge about
actions is the posterior middle temporal region (MT) that is involved in the perception
of motion (Tootell et al., 1995). Damage to the middle temporal region in neuropsychological patients is associated with a deficit in the recognition of tools (Tranel et al., 2003).
The same area is also activated in association with verb generation reflecting actions to
visually presented nouns (Fiez et al., 1996) or objects (Martin et al., 1995). Furthermore,
Kable et al. (2002) found greater activity in bilateral MT to the presentation of action
pictures than to pictures of objects, whereas words instead of pictures mainly activated
left-sided MT. The reported activation of MT is consistent with the suggestion that
accessing action knowledge involves incorporating motion features. In sum, these findings suggest that accessing action knowledge involves activating motor structures
and areas supporting motor performance, consistent with the idea that action knowledge
is stored or represented within the motor system (reviews in Martin and Chao, 2001;
Tranel et al., 2003). Thus, having discussed the neural mechanisms involved in action
semantics, we now turn to the question of how we acquire action semantics over the
course of our lives. Insight into the acquisition of conceptual knowledge for action is
gained from developmental studies, which will be the focus of the next section.
Development of action semantics
Piaget was among the first to emphasize the central role of action experience in cognitive
development (Piaget, 1953). The production of
goal-directed acts in the first year of life may not only refine the production of goal-directed
acts in infants, it might also be crucial to the acquisition of conceptual knowledge about
the world in general. Information gleaned from early action experiences may, for instance,
facilitate sensitivity to the goal structure of infants’ own and others’ actions.
Even at a very young age, children extract the meaningful structure of the surrounding
world at multiple levels of analysis. Moreover, developmental theorists have considered
this capacity a prerequisite both for having intentions and for understanding the intentions
of others (Gergely et al., 1995; Meltzoff and Moore, 1995). Indeed, the ability to construe
actions in terms of goals can be observed in preschoolers. Eighteen-month-old infants
infer the goal of uncompleted human action sequences and infants as young as 14 months
tend to imitate intended but not accidental acts (Meltzoff, 1995; Carpenter et al., 1998).
In addition, 14-month-old infants re-enact the final goal of a modeled action, but do not
always reproduce the means. Gergely et al. (2002) convincingly showed that children will
follow a given course of action to achieve the given goal only if they consider it to be the
most rational alternative. Their results indicate that imitation of goal-directed action by
preverbal infants is a selective, conceptual knowledge-based process, rather than a simple
re-enactment of the means used by a demonstrator, as was previously thought. This action
knowledge may subsequently be extended to allow infants to interpret a broad range of
events with respect to their meaning and goal structure (Sommerville and Woodward, 2005).
For instance, after watching an actor reach for and grasp a toy, infants show a stronger
novelty response to a change in the actor’s goal than to a change in the spatial location or
trajectory of her reach (Woodward, 1998). Recent findings have stressed a correlation
between the emergence of goal-directed behaviors and infants’ ability to detect goals in
the acts of others (Sommerville and Woodward, 2005). In addition, a measurable impact
of action experience on action interpretation was demonstrated.
Developmental studies suggest that the ability to infer the intention of an observed
actor is already present in children aged 12 months (e.g. Gergely et al., 1995). In a habituation
study, 12-month-old infants were found to look longer at an object-directed action when
an indirect reach-movement was made, but if no object was present no such preference
was found (Phillips and Wellman, 2005). This suggests that children understand the
observed action in terms of the actor’s goal. However, recent findings suggest that children’s
understanding of actions is constrained by task context. While children tend automatically to imitate the observed action goal rather than the means by which it is performed
(Bekkering and Wohlschläger, 2000), making the means more salient (e.g. putting an
arrow on the movement-path) results in movement-related adjustments, although the
goal of the action remains the same (Cox et al., unpublished data). Apparently, both goals
and means play an important role in the acquisition of conceptual knowledge to interact
with the surrounding world.
The mechanism underlying the acquisition of action semantics seems to be the ability
to imitate, which is probably mediated by the human mirror system. The discovery of
mirror neurons in primates that respond selectively to both the execution of an action
and the observation of actions being performed by another actor, led to the idea of an
observation–execution matching system that is crucial to the understanding of observed
actions (Rizzolatti et al., 1996). In humans a comparable mirror system was identified,
encompassing such brain areas as the superior temporal sulcus, the inferior parietal
lobule and the inferior frontal gyrus (e.g. Grafton et al., 1996; Calvo-Merino et al., 2005).
Several possible functions regarding the activity in mirror neurons have been put
forward, of which the most important include (1) supporting a goal representation of
intended action, (2) providing a mechanism to understand observed behavior and (3)
supporting the acquisition of new behavior by imitation (e.g. Iacoboni et al., 1999;
Umiltà et al., 2001; Rizzolatti et al., 2001). Mirror-related areas appear crucial to the
understanding of observed actions and thereby provide the child with a mechanism to
learn by observation. For example, recently it has been reported that the observation of
errors performed by another subject led to an error-related brain response that was
comparable to self-generated errors (Van Schie et al., 2004). This suggests that similar
mechanisms are recruited during monitoring of one’s own or another’s action.
Taken together, it seems that action understanding and acquisition of novel actions
through observation are goal-directed activities, mediated by systems that support both
the generation and observation of actions. The development of action semantics in young
children enables them to make sense of observed actions and to perform goal-directed
actions themselves. The acquisition and use of goal representations seems a critical aspect
for the development of action semantics. In line with this suggestion are findings that
underline the importance of action goals in dealing effectively with the enormous
amount of information that enters our senses.
Action goals in perception and action
A guiding principle in perceptual processing is the ability to selectively attend to particular
stimuli. According to the framework of selection-for-action (Allport, 1987) relevant
information is selected in line with the action intention of the actor. For example, in the
domain of visual perception it has repeatedly been found that specific visual information
(e.g. orientation of an object) is selectively processed, depending on the action goal of the
actor, e.g. grasping or pointing (Bekkering and Neggers, 2002; Hannus et al., 2005). In a
recent study we found a selective enhancement of early visual ERP components when
subjects intended to grasp as compared to when they intended to point towards visually
presented objects (H. T. Van Schie, M. Van Elk and H. Bekkering, unpublished data).
Because processing of orientation is relevant for grasping but not for pointing, the orientation of the object needs to be processed more actively in visual areas. Craighero et al. (1999)
instructed participants to grasp objects of a particular orientation in response to stimuli
showing a final hand position that could be congruent or incongruent with respect to the
intended grasp. Response latencies were fastest to pictures with the greatest similarity to
the final hand posture of the intended grasping movement, suggesting a close link
between action preparation and perceptual processing. Interestingly, enhancement of
visual processing has also been reported in relation to language comprehension. Upon
reading a description of a nail being pounded either into the wall or into the floor, participants were faster in processing visual stimuli congruent with the implied orientation
(Stanfield and Zwaan, 2001).
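The selection-for-action principle described above can be sketched, under our own simplifying assumptions, as a mapping from action intentions to the visual features that are processed more actively. The feature sets below are illustrative, not empirical claims:

```python
# Toy sketch of selection-for-action (Allport, 1987): the actor's
# intention determines which object features are selectively enhanced.
# Feature assignments are assumptions for illustration only.
FEATURES_BY_INTENTION = {
    "grasp": {"orientation", "size", "location"},  # grip scaling needs these
    "point": {"location"},                          # pointing needs location only
}

def selected_features(intention: str) -> set:
    """Return the visual features enhanced for the intended action."""
    return FEATURES_BY_INTENTION[intention]

# Orientation is selected when grasping but not when pointing, in the
# spirit of the ERP enhancement described in the text.
assert "orientation" in selected_features("grasp")
assert "orientation" not in selected_features("point")
```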
These and other studies suggest that in addition to bottom-up information that is
elicited by the perception of an object, top-down effects of the action context and intention of an actor may selectively modulate visual processing. A similar principle may also
support the selection of object semantics in accordance with the action intention of an
individual. One hypothesis that plays a central part in the current chapter is that the
selection of conceptual knowledge for action is guided by the same principle that is
proposed to underlie the selection of perceptual information for action.
Theories of motor control suggest that conceptual knowledge is involved in the selection of appropriate action plans (Rosenbaum et al., 2001). Functional imaging studies
confirm that conceptual knowledge about the appropriate manner of grasping an object
becomes activated automatically upon object identification (Chao and Martin, 2000;
Grèzes and Decety, 2002; Grèzes et al., 2003). More specifically, it has been reported that
action preparation and object perception are mutually dependent processes. For example, passive observation of tools has been reported to elicit motor activation in areas that
are normally activated with tool use (e.g. Chao and Martin, 2000). In monkeys it has
been found that neurons in grasping-related areas, namely the anterior intraparietal
sulcus (AIP) and the ventral premotor cortex (F5), selectively respond to the passive
observation of graspable objects (Murata et al., 1997). These canonical neurons are selectively responsive to objects of a particular size and shape and probably play an important
role in specifying the motor properties that are required for directing an action towards a
particular object.
At the behavioral level it has been found that viewing an object automatically activates
the appropriate hand shape for using it (Klatzky et al., 1989). The same automatic effect
of appropriate handgrip aperture has also been reported in response to reading words of
graspable objects that afforded either a full or a precision grip (Glover et al., 2004). Reading
a word that represented a large object (e.g. ‘apple’) resulted in a larger grip aperture in the
early part of the grasping movement as compared to reading a word that referred to a
small object (e.g. ‘grape’). In a similar fashion, Tucker and Ellis (2001) presented subjects
with pictures for a categorical judgment and found a compatibility effect between response
type (making a precision or full grip with the hand) and object size (e.g. faster responding
to the presentation of car keys when making a precision grip).
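A compatibility effect of this kind can be captured in a minimal simulation. The grip assignments and millisecond values below are arbitrary placeholders chosen for illustration, not data from Tucker and Ellis (2001):

```python
# Hedged toy model of the grip-object compatibility effect: responses
# are faster when the response grip matches the grip the object affords.
# Both the affordance table and the timing constants are assumptions.
AFFORDED_GRIP = {"car keys": "precision", "apple": "full", "grape": "precision"}

def simulated_rt(object_name: str, response_grip: str,
                 base_ms: float = 500.0,
                 incongruency_cost_ms: float = 40.0) -> float:
    """Return a simulated reaction time in milliseconds."""
    congruent = AFFORDED_GRIP[object_name] == response_grip
    return base_ms if congruent else base_ms + incongruency_cost_ms

# A precision-grip response to car keys is simulated as faster than a
# full-grip response, mirroring the direction of the reported effect.
assert simulated_rt("car keys", "precision") < simulated_rt("car keys", "full")
```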
Other studies suggest that action knowledge plays an important role in the retrieval of
conceptual information. Chainay and Humphreys (2002) reported faster responding when
making action decisions about the use of particular objects (e.g. do you perform a twisting
or a pouring action with the object?) as compared to making contextual decisions (is the
object typically found in the kitchen?). In addition, Yoon and Humphreys (2005) showed
that activation of action knowledge plays a constitutive role in making functional judgments about object use. Participants observed objects grasped with either a congruent or
an incongruent handgrip, or with no handgrip at all, and they verified the object name (‘Is this the
name of the object?’) or the object action (‘Is the object used for …?’). A congruency effect
of the observed action on making action decisions was found, while no such effect was
present for naming the object. Together these studies support the view that the action and
motor systems are involved in object identification.
Thus, perception of an object is found to evoke activation of an appropriate motor program that allows interaction with the object (e.g. facilitation of the appropriate handgrip). These findings are consistent with the theoretical notion of Gibson (1986) that
whenever an actor perceives an object, affordances will be inferred that inform the actor
about the possible actions that can be directed towards the object. However, in addition
to automatic (bottom-up) effects resulting from object perception, in many cases the
manner of object interaction is determined by the intention of an actor or the action goal.
That is, although chairs typically afford sitting down, in the context of a different action goal
(e.g. getting something from a top shelf) a different manner of interaction may be selected.
The flexible use of action semantics will be investigated in the third part of this chapter.
Before doing so, however, we turn to different views on the organization of conceptual
knowledge, in order to provide a theoretical background against which empirical findings
may be evaluated.
Theories of conceptual knowledge
In general, theories on conceptual knowledge may be classified into two broad categories.
According to amodal theories, concepts are abstract, amodal and arbitrary entities that
operate independently from the sensorimotor systems (e.g. Fodor, 1983). Concepts are
considered as part of symbolic representational structures that function in a way analogous
to language (e.g. the ‘language-of-thought’ hypothesis). An important argument in favor
of amodal theories of representation concerns the ease with which different tokens of a
particular category (type) are recognized, independent of perceptual differences. However,
amodal theories of concepts also face important difficulties, such as a lack of empirical
validation, the difficulty of specifying the transduction process from perceptual states to
symbols, and the grounding problem. If all knowledge is represented only as a set of interrelated symbols, the meaning of a particular symbol can be specified only by its relation
to, and in terms of, other symbols.
In contrast, modal theories of representation argue that concepts are grounded in
modality-specific systems (e.g. Barsalou et al., 2003). Neuropsychological patients with
damage to a particular sensory area and a specific deficit in categorical knowledge provide
a powerful argument for the modal representation of concepts. Embodied cognition
refers to different theories of conceptual representation that share the assumption of a
multi-modal grounding of concepts across different sensorimotor areas (e.g. Clark, 1997;
Gallese and Goldman, 1998; Lakoff and Johnson, 1999). However, embodied theories
differ widely on the emphasis that is put on separate cognitive subsystems and on the way
in which our conceptual knowledge is organized. It may be useful to think of modal
theories on conceptual knowledge as differentiating along two different axes:
◆ the first axis covers whether conceptual knowledge is organized by principles internal
to the organism or whether it is instead determined by external, stimulus-driven processes;
◆ the second axis represents the theory’s relative focus on perceptual or action-related
processes.
Accordingly, theories of conceptual knowledge may be represented in a two-dimensional
conceptual space, as represented in Figure 25.1. Of course, the division of theories along
two axes provides us with a simplified picture that overlooks finer nuances of different theories. Nevertheless, we suggest that the proposed division provides us with a useful tool to
think about different approaches in studying the acquisition and organization of conceptual
knowledge. The different axes represent the relative emphasis that is put on top-down or
bottom-up processes on the one hand and on action- or perception-related systems on
the other hand. In the remainder of this section we will discuss different proposals for the
retrieval of conceptual knowledge in relation to the domain of action and focus on the
strengths and weaknesses of embodied accounts of conceptual representation.
Sensory functional hypothesis versus correlated structure
principle
The sensory–functional hypothesis, put forward by Warrington and Shallice (1984),
argues that conceptual knowledge is stored in modality-specific sensory and motor
structures. Hence, identification of instances of different semantic categories depends
critically upon activating particular modality-specific semantic subsystems in the brain.
Category-specific semantic deficits are the result of damage to modality-specific semantic
subsystems. For example, damage to the visual semantic subsystem probably impairs
semantic knowledge of categories for which visual features are a defining characteristic.
Borgo and Shallice (2001) report a patient who had suffered from herpetic encephalitis,
showing a severe deficit in category-specific knowledge for living things. Interestingly,
this patient also was impaired when tested for knowledge of other sensory quality categories
(substances, materials and liquids), while performance with man-made artifacts was relatively spared. This suggests that retrieval of specific conceptual categories may be highly
dependent upon the processing of sensory qualities.

Figure 25.1 Theories of conceptual knowledge represented in conceptual space. Axes represent the relative emphasis that is put on bottom-up or top-down processing on the one hand, and on perceptual or motor systems on the other hand. [Theories plotted: correlated structure principle, perceptual symbols theory, Gibson’s affordances, associative sequence learning, sensory–functional hypothesis, ideomotor theory of action, action semantics, two-route model to action; axis poles: stimulus-driven vs. driven by organism, perceptual vs. motor systems.]
In contrast, Caramazza and Mahon (2003) propose that evolution favored domain-specific
knowledge systems that resulted in a category-specific organization of semantic knowledge.
According to the correlated structure principle, the organization of conceptual knowledge
reflects the way in which different object properties are related to each other. Proponents
of this alternative account usually point to the fine-graininess of the categories of category-specific deficits, e.g. independent impairment for the categories of fruits or animals
(Caramazza and Shelton, 1998) and to the absence of a direct relation between severity of
affected semantic modality and loss of category-specific semantic deficits. For example,
Laiacona et al. (2003) report a patient with good performance for ‘sensory quality’ categories, as assessed by verbal description compared to performance in a visual–verbal
task, while showing a deficit in knowledge of living things. The assumption of a
domain-specific organization of conceptual knowledge is consistent with the observation
of fine-grained dissociations between different semantic categories.
Perceptual symbols theory
Perceptual symbols theory provides a perceptual theory of knowledge, according to which
schematic representations are extracted from perceptual experience through a process of
selective attention. A concept is represented by running a simulation of actual experiences
with the concept and this process is referred to as re-enactment (Barsalou et al., 2003).
For example, when viewing a visual stimulus, neurons in sensory areas are activated and
conjunctive neurons in association areas store the activated sensory features in memory.
In the absence of the stimulus, the stored representation can be retrieved through re-enactment of the sensory areas by the association areas.
Perceptual symbols theory has received empirical support for example from property
verification studies in which subjects are asked to decide whether a given property is
related to a particular concept (Pecher et al., 2004). Properties on subsequent trials could
be either from the same or from a different modality. A modality-switching effect was
found when a concept was previously presented with a property from a different modality.
For example, reaction times were faster when verifying loud (auditory modality) for blender
after previously verifying rustling (auditory) for leaves than after verifying tart (gustatory)
for cranberries. These results suggest that subjects tend towards simulating a concept in a
particular modality that was previously activated, thereby supporting a modal theory of
conceptual representation. However, perceptual symbols theory focuses mainly on
perceptual representations, leaving aside the organization of action knowledge, which is
represented in Figure 25.1.
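The modality-switching cost reported by Pecher et al. (2004) can be sketched as a simple reaction-time model. This is purely an illustration of the prediction, not the authors' analysis; the baseline RT and switch cost are invented values.

```python
# Illustrative sketch of the modality-switching cost in property
# verification (Pecher et al., 2004): verification is slower when the
# preceding trial cued a different modality. All values are invented.

def predicted_rt(prev_modality, current_modality,
                 base_rt=800, switch_cost=40):
    # re-simulating the concept in a freshly cued modality carries a cost
    return base_rt + (switch_cost if prev_modality != current_modality else 0)

# verifying 'loud' (auditory) after 'rustling' (auditory)
# vs after 'tart' (gustatory)
assert predicted_rt("auditory", "auditory") < predicted_rt("gustatory", "auditory")
```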
Ideomotor theory of action
We now turn to theories that explicitly focus on the representation of conceptual knowledge
used in action. An intuitive and empirically validated idea dates back to William James,
who stated that ‘every representation of a movement awakens in some degree the actual
movement which is its object’ (James, 1890). Although ideomotor theory initially
emphasized the intentional and private mental causes of action, it was hypothesized that
goal representations are crucially important in action control. According to the ideomotor
theory, actions are represented in terms of their effects and these effect-representations
are helpful in the selection of actions, by predicting the consequences of one’s actions
(Hommel et al., 2001). The acquisition of effect-representations depends strongly upon
the formation of response–effect associations, in which a given response becomes paired
with its associated sensory effects. A preschooler pressing a button of the television’s
remote control will soon find out the relation between a simple motor act and the
switching on of the television. Consequently, this acquired response–effect association
may guide future behavior.
In a typical experiment, after participants learn that a left key press elicits a high
tone, response facilitation occurs for left responses upon hearing
the high tone (e.g. Hommel, 1996). According to the 'theory of event coding', actions and
perceptual contents are coded in a common event-code (Hommel et al., 2001). In principle, event-codes contain stored representations of contingent response–effect couplings.
Thereby, the main focus of ideomotor theory of action is on the automatic processes by
which arbitrary connections are established. The selection process, whereby an actor
chooses a particular action on the basis of its stored effect-representations, remains
largely out of focus.
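The acquisition and use of response–effect associations described above can be sketched in a few lines of code. This is a hypothetical illustration, not a model from the ideomotor literature; the class name, association strengths and the `select_action` heuristic are assumptions.

```python
# Hypothetical sketch of ideomotor response-effect learning: acting and
# observing the ensuing sensory effect strengthens a coupling, and a
# desired effect can later retrieve the response that produced it.

from collections import defaultdict

class IdeomotorAgent:
    def __init__(self):
        # association strength between (response, effect) pairs
        self.assoc = defaultdict(float)

    def experience(self, response, effect, increment=1.0):
        """Acquisition phase: a response becomes paired with its effect."""
        self.assoc[(response, effect)] += increment

    def select_action(self, desired_effect):
        """Selection phase: the anticipated effect activates the response
        most strongly coupled with it."""
        candidates = {r: s for (r, e), s in self.assoc.items()
                      if e == desired_effect}
        if not candidates:
            return None  # no effect-representation acquired yet
        return max(candidates, key=candidates.get)

agent = IdeomotorAgent()
for _ in range(10):
    agent.experience("press_left_key", "high_tone")
    agent.experience("press_right_key", "low_tone")

# hearing (or wanting) the high tone now primes the left response
assert agent.select_action("high_tone") == "press_left_key"
```

The sketch captures only the automatic build-up of couplings; as the text notes, the selection process itself remains largely unspecified by the theory.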
Associative sequence learning
A closely related view that was specifically intended to account for imitation of observed
actions is called associative sequence learning (ASL). Instead of focusing on response–effect
couplings, ASL emphasizes the coupling between stimuli and particular responses. In
principle, any association could develop between a stimulus and a particular response if
they are co-activated repeatedly. On this view, much of our knowledge is
organized as contingent stimulus–response associations.
Heyes et al. (2005) have argued that imitation relies on learned associations between
stimuli and actions. Participants had to respond to the observation of hand movements,
by making a hand movement that was either congruent or incongruent to the observed
movement (opening and closing). A stimulus–response compatibility effect was found:
responses were faster when the observed hand movement was compatible with the executed movement. However,
when subjects were given a training 24 h prior to testing, in which they learned to close
their hands in response to observed hand opening and vice versa, the stimulus–response
compatibility effect disappeared. Apparently, the automatic imitation
of observed movements is mediated by experience and the authors suggest that imitation
relies on processes of associative learning. According to this interpretation, neural
connections involved in action observation and motor activation primarily reflect the
learned association between observing and executing a particular action. Therefore, action
representations consist of the learned association between stimuli and particular responses.
Accordingly, ASL places a strong emphasis on bottom-up processing in the acquisition of
conceptual knowledge for action.
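The counter-imitative training logic of Heyes et al. (2005) can be sketched as associative weight accumulation. This is an illustration only: the weights are arbitrary, and whereas the study found the compatibility effect abolished after training, the sketch simply shows how sufficient counter-training shifts the dominant association.

```python
# Hypothetical sketch of associative sequence learning (ASL):
# stimulus-response links reflect the history of co-activation, so
# counter-imitative training can override the usual compatibility
# between observed and executed movements. Weights are invented.

from collections import defaultdict

links = defaultdict(float)

def co_activate(stimulus, response, n=1):
    links[(stimulus, response)] += n

def strongest_response(stimulus):
    responses = {r: w for (s, r), w in links.items() if s == stimulus}
    return max(responses, key=responses.get)

# a lifetime of ordinary experience: seeing a hand open while opening one's own
co_activate("observe_open", "open_hand", n=100)
co_activate("observe_close", "close_hand", n=100)
assert strongest_response("observe_open") == "open_hand"  # automatic imitation

# incompatible training: respond to observed opening by closing, and vice versa
co_activate("observe_open", "close_hand", n=150)
co_activate("observe_close", "open_hand", n=150)
assert strongest_response("observe_open") == "close_hand"  # coupling overridden
```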
Two-route model to action
Although many of our everyday actions rely on stored long-term representations—as is
the case for bringing a cup to your mouth, which you have probably done countless
times—we sometimes need to bypass this semantic knowledge, for example when you
intend to catch a spider with a cup. Rumiati and Humphreys (1998) propose a two-route
model to action, with a semantic route accessing stored action representations and a
second route that runs directly from perception to action. This two-route model was
proposed partly on the basis of studies of neuropsychological patients showing a dissociation between semantic knowledge and the ability to manipulate objects (e.g. Buxbaum
and Saffran, 2002). In a behavioral study with neuropsychologically healthy adults, participants performed either gesturing responses (gesturing the use of an object) or naming
responses (verbally giving names) in response to pictures of objects (Rumiati and
Humphreys, 1998). Responses for both speech and gestures were distinguished according
to whether a visually related error (e.g. razor instead of hammer which have a similar
shape but a different function) or a semantically related error (e.g. hammer instead of
saw, which differ in shape but are semantically related) was made. Participants made more visual errors relative
to semantic errors when gesturing object use than when naming objects,
suggesting that naming and using tools rely on different processing routes.
Furthermore, Rumiati and Tessari (2002) found that healthy volunteers imitated meaningful
actions better than meaningless actions, and attributed this enhanced performance to
the availability of stored action goals in long-term memory.
For meaningless actions, no long-term representation is available and imitation therefore relies on a direct route from vision to action, thereby placing more demands on
action working memory. In a positron emission tomography study the neural mechanisms
underlying imitation of both meaningful and meaningless actions were investigated
(Rumiati et al., 2005). Imitation of meaningful actions activated the left inferior temporal
gyrus, the angular gyrus and the parahippocampal gyrus (ventral stream), while the right
parieto-occipital junction and superior parietal areas (dorsal stream) were mainly activated
during imitation of meaningless actions, thereby supporting the view that both actions
are mediated by different neural and functional systems. Imitation and execution of
meaningful actions rely on accessing stored action semantic knowledge and online visual
control, while the execution of meaningless actions relies in the first instance on a dorsal
perception–action system.
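The route selection described above can be condensed into a small sketch. This is a hypothetical illustration of the two-route idea, not an implementation from Rumiati and Humphreys (1998); the object–action table and function names are assumptions.

```python
# Hypothetical sketch of the two-route model: a semantic route that
# retrieves stored action knowledge for familiar objects, and a direct
# visuomotor route used when no stored representation is available.

STORED_ACTIONS = {                 # long-term action semantics (illustrative)
    "cup": "bring_to_mouth",
    "magnifying_glass": "bring_to_eye",
}

def route_to_action(stimulus):
    if stimulus in STORED_ACTIONS:
        # semantic route: access the stored goal-directed representation
        return ("semantic_route", STORED_ACTIONS[stimulus])
    # direct route: map the observed movement onto the motor system,
    # taxing action working memory rather than long-term memory
    return ("direct_route", f"reproduce_observed({stimulus})")

assert route_to_action("cup") == ("semantic_route", "bring_to_mouth")
assert route_to_action("novel_gesture")[0] == "direct_route"
```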
Overview
Theories discussed thus far disagree about the mechanisms underlying conceptual knowledge (e.g. perceptual simulation, learned association, direct or semantic routes to action).
Each theory is based on specific neuropsychological findings and puts different emphasis
on the importance of particular sensorimotor areas in the retrieval of conceptual knowledge.
As represented in Figure 25.1, we propose that a theory of action semantics should take into
account the important role played by the actor’s intentions. We argue that action goals
play an important role in the acquisition and use of conceptual knowledge for action.
Furthermore, we agree that conceptual knowledge for action is represented in modality-specific brain areas, but that the organization and use of conceptual knowledge is
guided by action goals. In the next section we will provide support for our position by
presenting recent studies investigating effects of action on language processing. Before
doing so, however, we will discuss the close relation between language and action in
general. The topic of the organization of conceptual knowledge will be revisited in the
final section, when we integrate empirical findings with contemporary theories of
conceptual knowledge.
Semantics of action and language
Studies on the relation between language and action may give us more insight into the
organization and selection of conceptual knowledge. Different experimental methods
have established that action preparation can be affected by language and that action
preparation involves accessing semantic knowledge. In the present section we will argue
that action and language are highly interdependent processes. Although the experiments
discussed might seem to suggest unidirectional effects from one subsystem to the other,
we would like to emphasize the common functional and neural mechanisms that underlie the reported effects.
Links between language and action
In recent years a number of studies have suggested that action planning and semantic
processing are mutually dependent processes. For instance, Creem and Proffitt (2001)
asked subjects to grasp common household objects while concurrently performing either
a visuo-spatial or a semantic task. Concurrent semantic processing resulted in
more inappropriate grasping of the objects as compared to concurrent visuo-spatial
processing, suggesting the involvement of the semantic system when preparing to grasp a
meaningful object. Other studies showed that semantic properties of distracting words
or objects influenced the planning and execution of grasping movements (e.g. Jervis et al.,
1999; Glover and Dixon, 2002). Glover and Dixon (2002) had participants grasp
objects, and found an enlargement of maximum grip aperture for objects with the
word 'large' printed on top of them as compared to objects with the word 'small'. In a
similar way Gentilucci and Gangitano (1998) found that the words denoting ‘far’ and
‘near’ printed on objects affected movement kinematics similarly to the actual greater or
shorter distances between hand position and object. These findings fit well with Glover’s
model of action control, according to which action preparation involves semantic processing, while action execution operates relatively independently from semantic processes
(Glover, 2004).
Glenberg and Kaschak (2002) report an action sentence compatibility effect when
participants judge the meaning of a sentence by making a movement towards or away
from their own body. Participants showed difficulty when judging sentences that implied
an action in the direction opposite to the response required for the sensibility judgment
(e.g. responding ‘yes’ by making a movement towards the body while judging the
sentence ‘Close the drawer’, which implies movement away from the body). Interestingly,
the action sentence compatibility effect was found for imperative sentences (e.g. ‘Close the
drawer’), as well as for sentences describing the transfer of both concrete (e.g. ‘Liz gave
you the book’) and abstract entities (e.g. ‘Liz told you the story’). A similar effect was
found by Zwaan and Taylor (2006) who had subjects judge the sensibility of sentences
describing actions implying a particular direction by rotating a manual device (e.g. ‘He
turned down the volume' implies movement in a counter-clockwise direction). A congruency effect was found between the direction of manual responses and the direction implied by the
sentences, suggesting that understanding action sentences relies on motor resonance.
These findings support the idea that understanding of meaning is grounded in bodily
activity.
According to the principle of Hebbian learning, the efficacy of one neuronal ensemble
onto another is strengthened whenever the two are repeatedly co-activated (Hebb, 1949).
This neuronal principle might underlie the close relation between language processing
and action preparation. During language acquisition, for instance, children often develop
specific links between a particular action and an associated word, probably resulting in a
distributed functional network representing the meaning of words (Pulvermüller et al.,
2005). Electrophysiological support for the involvement of the motor system in the
understanding of action-related sentences comes from a study by Pulvermüller et al. (2005)
that investigated the neural correlates of the processing of action verbs. Words that
referred to specific body parts (e.g. ‘kick’ refers to foot, ‘lick’ to tongue etc.) elicited activity
across the motor-strip close to the representation of the body parts to which the action
words primarily referred. Tettamanti et al. (2005) identified a fronto-parietal circuit
underlying listening to action-related sentences. This finding has been replicated several
times and with different methods, including functional magnetic resonance imaging and
transcranial magnetic stimulation (e.g. de Lafuente and Romo, 2004; Buccino et al., 2005)
and suggests an underlying representation of action words that consists of somatotopically
organized neuronal assemblies.
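The Hebbian principle invoked above can be written as a one-line update rule. This is a minimal textbook sketch, not a model of the cited studies; the learning rate and activation values are illustrative assumptions.

```python
# Minimal sketch of the Hebbian principle: the connection between two
# units is strengthened whenever they are active at the same time.
# Learning rate (eta) and activations are illustrative assumptions.

def hebbian_update(w, pre, post, eta=0.1):
    """delta_w = eta * pre * post (strengthens only on co-activation)."""
    return w + eta * pre * post

# a 'word' unit and an 'action' unit repeatedly co-activated during
# language acquisition (e.g. hearing 'kick' while kicking):
w = 0.0
for _ in range(20):
    w = hebbian_update(w, pre=1.0, post=1.0)
assert abs(w - 2.0) < 1e-9

# units that are never co-active develop no coupling:
assert hebbian_update(0.0, pre=1.0, post=0.0) == 0.0
```

Repeated co-activation of word and action units in this way would yield exactly the kind of distributed word–action network that Pulvermüller and colleagues describe.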
Semantic information in action preparation
Taken together, recent findings suggest close relations between language and action
systems. Although most studies discussed thus far investigated the influence of language
on action, hardly any experiments have been conducted to explore the activation of semantic knowledge during action preparation. In a series of experiments we investigated
whether the intention of an actor to use an object would involve selection of relevant
semantic knowledge (Lindemann et al., 2006). Semantic activation in action preparation
was investigated by using a language task in which priming effects of action context on
the subsequent processing of words were measured. Semantic priming effects have reliably been reported when semantic context (e.g. a prime word or picture) facilitates
processing of semantically related words, both for linguistic and nonlinguistic stimuli
(e.g. Neely, 1991; Maxfield, 1997). For example, reading the word ‘doctor’ facilitates
subsequent recognition of the word ‘nurse’, and no priming effect is expected if the
subsequent word would be ‘fireman’. Considering the importance of goals in stored
representations for action, we hypothesized that the preparation of an action would
provide a semantic context for the processing of a subsequently presented word, resulting
in a priming effect for words consistent with the goal of the upcoming action. The experimental set-up is represented in Figure 25.2. In front of the subject, two objects (a magnifying glass and a cup) were placed on a table. A picture of one of the two objects appeared
on the screen, and subjects were instructed to prepare the action associated with the
object. Subjects were instructed to prepare the execution of a particular motor action
(e.g. drink from a cup), but to delay their response until a word appeared on the
computer screen. Subjects were instructed to withhold their response when a pseudoword was presented. Only when the word was a valid Dutch word were subjects allowed
to execute the prepared movement.
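The go/no-go trial logic of this paradigm can be sketched as follows. This is a hypothetical reconstruction for illustration; the tiny lexicon and the action labels are assumptions ('mond' and 'oog' are Dutch for mouth and eye).

```python
# Hypothetical sketch of the trial logic in Lindemann et al. (2006):
# subjects prepare the action cued by an object picture, but execute it
# only if the subsequently presented letter string is a real word.

DUTCH_WORDS = {"mond", "oog", "links", "rechts"}   # illustrative lexicon

def run_trial(cued_object, letter_string):
    prepared_action = {"cup": "grasp_and_drink",
                       "magnifying_glass": "grasp_and_inspect"}[cued_object]
    if letter_string in DUTCH_WORDS:   # lexical decision: word -> go
        return ("execute", prepared_action)
    return ("withhold", None)          # pseudoword -> no-go

assert run_trial("cup", "mond") == ("execute", "grasp_and_drink")
assert run_trial("cup", "blork") == ("withhold", None)
```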
In the first experiment, presented words could be either congruent or incongruent
with the action goal (e.g. for the cup, 'mouth' is congruent whereas 'eye' is incongruent).
To control for the possibility of a simple picture-word priming effect, half of the subjects
had to indicate by a simple finger-lifting response at which side of the table (left or right)
the indicated object was present. If selection of semantic knowledge critically depends on
the action intention of the actor, interactions between action planning and semantic processing were expected only in the grasping condition. Results are summarized in Figure 25.3
and, as can be seen, a word-congruency effect was found only when subjects intended to
Figure 25.2 Schematic representation of the experimental set-up. A cup (Object 1), a magnifying glass (Object 2) and a computer screen were placed in front of the subject. Grasping of the objects was initiated by releasing the right hand from the starting position.
grasp the object and to bring it to the associated goal. That is, subjects responded faster
when the presented word was congruent with their present action intention.
However, an alternative explanation for the absence of a congruency effect in the
finger-lifting condition might be that preparing a finger-lifting response is much easier
than preparing a grasping movement. As a consequence, participants may have been
better able to cognitively separate action preparation from word recognition, resulting in
the absence of a behavioral interaction effect. In a second experiment this alternative
explanation was ruled out by replacing words that represented the goal locations of the
grasping movements with Dutch words denoting 'left' and 'right', which were more relevant to
the finger-lifting task where subjects responded with left or right finger movements to
indicate object position. Results showed a reversal of priming effects revealing priming for
the finger-lifting condition but not for the grasping condition, confirming the hypothesis
that action semantics are selectively activated in accordance with the action intended by
the subject (see Figure 25.3).
In the third experiment a semantic categorization task was used instead of a lexical
decision task, to ensure deep semantic processing of the words and to exclude the possibility that subjects' responses in the first two experiments relied only on visual word forms.
Figure 25.3 Mean response latencies arranged by action condition and word consistency. Subjects performed a lexical decision task with words representing goal locations in Experiment 1 and words representing spatial locations in Experiment 2, a semantic categorization task in Experiment 3 and a final letter identification task in Experiment 4. Error bars represent the 95% within-subject confidence intervals. RT, reaction time.
Target words again included body parts that represented the goal locations of the actions
associated with either the cup or the magnifying glass. In addition, action-unrelated
words that referred to body parts were used as filler items and words that referred to animals
were chosen as no-go stimuli. Subjects were instructed to execute the action associated
with the object, only when the word referred to a body part. No response was to be executed
when words denoted animals. Again, a reduction in response latencies was found for
words that were consistent with the action goal of the subject (e.g. ‘eye’ for grasping the
magnifying glass). Finally it was hypothesized that if the interaction between words and
action intention critically depended on deep semantic processing of the words, the effect
should disappear if subjects attended to words only at a superficial level. As can be seen
in Figure 25.3 the action-priming effect indeed disappeared if subjects performed a
simple final letter-identification task instead of semantic categorization.
Together these experiments suggest that action goals form an integral part of the
semantic representations that are activated during action preparation. In contrast to many
of the automatic congruency effects between presented objects and required grips that
were found in earlier studies, action semantics, in the sense of action goals and
functional knowledge of object use, are not automatically activated upon observation
of an object, as shown by the absence of the word-priming effect in the finger-lifting
condition of the first experiment; rather, they are selected in accordance with
the action intention of the actor.
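The word-congruency effect reported in these experiments is simply a difference between condition means. The following sketch shows the computation with made-up reaction times; none of the values correspond to the published data.

```python
# Illustrative computation of a word-congruency (priming) effect:
# mean RT for goal-incongruent words minus mean RT for goal-congruent
# words; a positive difference indicates goal-based semantic priming.
# All RT values below are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

rt_congruent = [470, 480, 465, 475]     # e.g. 'mouth' while grasping the cup
rt_incongruent = [505, 495, 510, 500]   # e.g. 'eye' while grasping the cup

congruency_effect = mean(rt_incongruent) - mean(rt_congruent)
assert congruency_effect > 0  # faster responses to goal-congruent words
```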
Semantic activation in preparing meaningful and
meaningless actions
Although in everyday life many objects are normally associated with typical goal locations (‘mouth’ for ‘cup’), it is often the case that one object can be used in many different
ways, depending on the context within which the object is perceived (e.g. using a knife to
spread butter or to cut your meat). An important aspect in the evolution and selection of
human behavior seems to be the ability to flexibly adjust one's strategies to environmental demands. Concerning the acquisition of action semantics, besides developing long-term representations of object use, it is equally important to be able to implement
short-term action intentions (e.g. hitting a mosquito with a newspaper).
To investigate the effects of short-term action goals upon the activation of semantic
information, we provided subjects with the instruction to perform meaningful and
meaningless actions with a cup and a magnifying glass (M. Van Elk et al., unpublished data).
In half of all blocks subjects were instructed to bring both objects to their mouth, while
in the other half they were encouraged to bring both objects to their right eye. A semantic
categorization task was used and subjects had to delay the execution of the action until
they recognized a word that represented a body part (similar to the third experiment in
Lindemann et al., 2006). It was hypothesized that if subjects prepared to execute a meaningful action that is normally associated with the use of the object (e.g. bringing the cup
to their mouth) a similar word-congruency effect could be expected as in the previous
experiments (e.g. faster responding to ‘mouth’ as compared to ‘eye’ for intending to use
the cup). However, with the intention to perform a meaningless action (e.g. bringing the
cup to the eye), long-term action knowledge has to be overruled in order to comply with
the short-term action intention of the actor.
Results are represented in Figure 25.4 and indicate that when subjects performed actions
normally associated with the objects they were faster to respond to words that consistently
described the goal-location of the action, replicating the results of Lindemann et al. (2006).
Interestingly, when subjects intended to perform an incorrect action with an object,
response latencies were shorter for words describing the short-term goal location of the
intended action (e.g. faster responding to ‘eye’ compared to ‘mouth’, when intending to
bring the cup to the eye).
One possible confounding factor, however, may be that priming effects were not driven
by the intended actions themselves but rather by the task instruction that subjects
received prior to each block. That is, subjects were instructed to move objects either to
their mouth or to their eye in individual blocks, which may have facilitated the processing of the same words ‘mouth’ or ‘eye’ presented on the screen. In a second experiment
Figure 25.4 Mean response latencies arranged by action condition (meaningful or meaningless actions) and word consistency. Subjects performed a semantic categorization task on words appearing on the screen. In the first experiment subjects performed meaningful and meaningless actions within the same block, while in the second experiment meaningful and meaningless actions were conducted in separate blocks. In the third experiment a concurrent articulatory suppression task was conducted. Error bars represent the 95% within-subject confidence intervals. RT, reaction time.
we tried to rule out this explanation by having subjects prepare meaningful and meaningless
actions in separate action blocks, so that both action goals (mouth and eye) were used within the
same block. Results again revealed a priming effect for words congruent with the action
goal in meaningful action blocks in which subjects used both objects in the usual fashion
(see Figure 25.4). However, the effect was found to reverse in the meaningless action
blocks, in which participants responded faster to words that were congruent to the short-term action goal. Apparently, the action intention of the actor can overrule long-term
conceptual associations.
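The overruling of long-term associations by the current intention can be stated very compactly. This is a conceptual sketch only; the dictionary of long-term goals and the function name are assumptions introduced for illustration.

```python
# Hypothetical sketch of goal-driven selection of action semantics:
# the currently intended goal, not the long-term association of the
# object, determines which goal word is primed.

LONG_TERM_GOAL = {"cup": "mouth", "magnifying_glass": "eye"}

def primed_word(obj, current_intention=None):
    # a short-term intention (e.g. a meaningless action) overrules the
    # object's long-term goal association
    return current_intention or LONG_TERM_GOAL[obj]

assert primed_word("cup") == "mouth"                         # meaningful action
assert primed_word("cup", current_intention="eye") == "eye"  # meaningless action
```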
However, given the unusual task constraints, it might be that subjects relied on a short-term verbalization strategy in performing the task. When performing meaningless
actions, subjects might have been silently repeating the goal location of the action, to
avoid being confused by the words appearing on the screen. To prevent subjects from
relying on a verbal strategy, we replicated the first experiment, but in addition subjects
conducted a concurrent articulatory suppression task. Even when subjects were prevented
from using a verbal strategy, they responded faster to words that were congruent with the
short-term action goal, as can be seen in Figure 25.4. These findings convincingly show
that the intention to perform a meaningless action with an object can overrule priming
effects from long-term conceptual knowledge associated with the object. Instead, it
appears that the actor's present action goal determines which semantic
information is selected. Theoretical implications of
these findings will be discussed in the final part of this chapter.
Theoretical implications
As discussed in the section ‘Theories of conceptual knowledge’, a growing number of
studies support the view that human cognition is deeply rooted in bodily activity
(Gallese and Lakoff, 2005). Nowadays, discussion focuses on the way in which conceptual
knowledge is organized and represented over different brain areas. In the present chapter
we proposed the concept of action semantics as a useful way to think about conceptual
knowledge for action. We argued that an important factor in the acquisition and use of
action semantics is the action goal that is selected for a particular object. In the third
section, two recent studies were discussed in more detail to support our view that semantics for action become activated selectively in accordance with the goal of the actor. When
intending to grasp a cup, subjects responded faster if the target word consistently
described the spatial goal of the intended action (Lindemann et al., 2006). However,
when subjects were instructed to perform meaningless actions with the objects, the
word-congruency effects reversed according to the present action intention of the actor
(M. Van Elk et al., unpublished data). That is, subjects responded faster to the word ‘eye’
if they were intending to bring the cup to their eye, irrespective of the learned association
in long-term memory to bring cups to the mouth.
In this final section we will compare different approaches to conceptual knowledge
for action in relation to the presented findings. First, dissociations between functional
knowledge (‘what to use an object for’) and pragmatic knowledge (‘how to use an
object’) in neuropsychological patients are in line with our view that action goals and
action means are two distinct aspects of action semantics (Leiguarda and Marsden, 2000).
Exactly how action goals and means are represented in the brain is a matter of ongoing
study. Different parts of the motor system appear to contribute to the retrieval of conceptual knowledge for action, consistent with the sensory–functional hypothesis, which
argues that conceptual knowledge is organized according to modality (Warrington
and Shallice, 1984).
A similar view is proposed by perceptual symbols theory (Barsalou, 1999) according to
which conceptual knowledge is represented in modality-specific brain systems. Although
perceptual symbols theory does not explicitly focus on conceptual knowledge used in
action, the retrieval of conceptual knowledge about object use is believed to involve
sensorimotor simulation that includes all necessary features of a previously encountered
object. For example, a conceptual representation of ‘cup’ is acquired by countless interactions with cups and related actions directed towards them. Therefore the conceptual
representation of cup would probably include a perceptual simulation of a prototypical
‘cup’ and a motor simulation of bringing the cup towards the mouth. Perceptual symbols
theory could account for priming effects resulting from preparation of meaningful
actions for which a perceptual simulation is available (Lindemann et al., 2006). However,
it is more difficult for the theory to explain the reversal of priming effects that were found
with meaningless or unusual actions as presented in ‘Semantics of action and language’
(M. Van Elk et al., unpublished data).
Perceptual symbols theory takes the prototypical concept as the cornerstone around
which our conceptual knowledge is built. It would be interesting to think about what
kind of simulations might underlie the flexible use of conceptual knowledge. Given the
short-term priming effects in congruence with the immediate action intention it might
be that a perceptual simulation also includes the short-term goal. Whenever a person
intends to bring a magnifying glass to the mouth, the goal-related concept ‘mouth’ might
form part of the action representation. However, the way in which perceptual simulations are flexibly selected, by combining different representations into new, previously non-existing
concepts or by preparing novel actions, would need to be clarified.
The two-route model to action proposes a direct route from vision to action that bypasses
semantic information (Rumiati and Humphreys, 1998; Rumiati and Tessari, 2002). The
two-route model accounts well for the failure of neuropsychological patients to imitate
either meaningful or meaningless actions. Within this framework the findings of M. Van
Elk et al. (unpublished data), in which participants performed either meaningful or
meaningless actions with common objects, might be interpreted as reflecting the differential use of both pathways to action. When participants intend to perform meaningful
actions with objects the semantic route is activated, in which long-term conceptual
knowledge about the use of objects is represented. In contrast, the indirect route from
perception to action might be used when participants perform meaningless actions with
objects. However, considering that priming effects were found both when subjects made
appropriate and inappropriate actions with objects, we suggest that semantics are
involved in both meaningful and meaningless actions. That is, the two-route model for
action cannot account for the involvement of semantics in meaningless actions. Rather than emphasizing the difference between meaningful and meaningless actions, we suggest that a crucial factor in all actions is the current goal of the actor.
According to the ideomotor theory of action, stored effect-representations enable people
to select the appropriate action. Event-codes consist of contingent response–effect couplings
that have been shaped over the course of our lives. Accordingly, preparing to grasp a cup
will be associated with the typical goal of bringing it to the mouth. In a similar way the
principle of associative learning states that stimulus–response associations reflect the
amount of experience of acting in a particular way towards visual stimuli. However, ideomotor theory and associative learning face major difficulties when it comes to explaining
short-term priming effects of action intention, as discussed in the present chapter. When
subjects prepared meaningless actions with objects that they obviously had never
conducted before, we found a selective activation of words that were congruent to the
present short-term action goal. If priming effects of action intention relied solely on associative learning, we might have expected them to disappear when subjects prepared meaningless actions, just as automatic effects of imitation can be cancelled out by training participants to respond in a different way towards stimuli (Heyes et al., 2005). In contrast, we have shown that priming of words that are congruent with normal object use could be reversed when subjects had an action intention different from normal object use (M. Van Elk et al., unpublished data). This word-congruency effect reveals a counter-statistical association, as most participants will have had little experience bringing a cup to their eye.
However, it might be argued that as participants repeatedly brought objects towards incongruent action goals, new short-term associations between grasping the object and the incongruent goal were indeed established. According to the associative learning approach (Heyes et al., 2005), long-term stimulus–response associations are learned through practice and can be flexibly adjusted by training. Within the domain of imitation, Heyes et al. (2005) convincingly showed that training participants to respond in an incongruent way to observed actions could cancel out the effects of automatic imitation. In a similar way, novel semantics for action might be established when subjects repeatedly conduct unusual actions towards objects. By allowing for the flexible acquisition of novel action semantics, both ideomotor theory and the principle of associative learning fare better than other theories of conceptual knowledge in action. However, in contrast with both theories, we suggest that the acquisition of novel action semantics depends heavily on the goal of the actor. It is not the case that associations are arbitrarily consolidated according to incoming bottom-up stimulus information. Typically, the
action intention of a person determines what kind of information is selected and attended to. For example, it does not seem plausible that the simple observation of a cup automatically elicits a complete motor program that subsequently needs to be inhibited. A much simpler view is that what you intend to do with a cup determines the kind of information that is selected.
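The idea that the current goal modulates which semantic information is selected can be made concrete with a small computational sketch. This is our own illustrative toy model, not a model proposed in the chapter or the literature; the function `select_goal_word`, the `goal_boost` parameter and all association weights are hypothetical assumptions.

```python
# Toy sketch (illustrative only): long-term association strengths between
# an object and candidate goal words are combined with an activation boost
# for the actor's current intention, so that an unusual short-term goal can
# temporarily overrule the habitual association without erasing it.

def select_goal_word(long_term, current_goal, goal_boost=1.0):
    """Return the candidate goal word with the highest total activation.

    long_term    -- dict mapping goal words to long-term association strength
    current_goal -- the actor's immediate action goal (a word)
    goal_boost   -- extra activation added to the goal-congruent word
    """
    activation = dict(long_term)  # copy, so long-term weights stay intact
    if current_goal in activation:
        activation[current_goal] += goal_boost
    return max(activation, key=activation.get)

# Habitual use of a cup strongly favors 'mouth' over 'eye'.
cup = {"mouth": 0.8, "eye": 0.1}

# Usual intention: goal and long-term association coincide.
assert select_goal_word(cup, "mouth") == "mouth"

# Unusual intention (bringing the cup to the eye): the short-term goal
# temporarily overrules the long-term association.
assert select_goal_word(cup, "eye") == "eye"
```

The additive boost captures the hypothesis discussed above: the long-term weights remain stored, and the current intention only biases selection for the duration of the action.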
To conclude, we propose that the selection of action semantics depends critically on
the action intention of the actor. Whenever a usual action is conducted with an object,
associated semantic information is activated accordingly, whereas when an object is used in an inappropriate manner, the short-term behavioral goal temporarily overrules the long-term associations. The selection processes underlying action semantics therefore bear a resemblance to ‘selection-for-action’ in perception (Allport, 1987).
Conclusions
In sum, in the present chapter we sought to outline the concept of action semantics by reviewing the existing literature on the functional mechanisms underlying conceptual knowledge and goal-directed behavior, supplemented with recent empirical evidence supporting selection-for-action at the level of semantics. Our findings suggest a tight connection between the semantic knowledge used in action and in language, reflected by conceptual priming effects from action preparation to language processing. These findings suggest that the theory of selection-for-action (Allport, 1987) may be extended to include the selection of semantic information that is relevant for the intended action, and that it may provide a useful framework for studying the associations between language, action and perception. We suggest that, comparable with the selection of perceptual information that is relevant to our current intentions and behavioral goals, the selection of action semantics may be equally important for moving smoothly through the world. Access to stored long-term conceptual action representations and
acquisition of novel action semantics both enable us to adapt our behavior flexibly to
continually changing environmental demands.
The study of action semantics provides important insight into the organization of our
conceptual knowledge and supports the idea that action knowledge is represented in
sensorimotor systems. In addition, mechanisms of action, perception and language should
not be studied in isolation, since the retrieval of semantic information involves action semantics as well as visual and lexical semantics. Developmental studies are expected to provide important insight into the acquisition of action semantics, and neuroimaging studies may help us to understand the neural organization underlying knowledge of action goals and action means. Behavioral and electrophysiological approaches may generate further insight into the timing of activation and the flexible selection of the conceptual knowledge that guides our behavior, providing new and exciting directions for future research.
References
Allport, A. (1987) Attention and selection-for-action. In Heuer, H. and Sanders, A. F. (eds), Perspectives
on Perception and Action, pp. 395–420. Erlbaum, Hillsdale, NJ.
Barsalou, L. W. (1999) Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–660.
Barsalou, L. W., Simmons, W. K., Barbey, A. K. and Wilson C. D. (2003) Grounding conceptual
knowledge in modality-specific systems. Trends in Cognitive Sciences, 7, 84–91.
Bekkering, H. and Neggers, S. F.W. (2002) Visual search is modulated by action intentions. Psychological
Science, 13, 370–375.
Bekkering, H. and Wohlschläger, A. (2000) Action perception and imitation: a tutorial. In Prinz, W.
and Hommel, B. (eds), Attention and Performance XIX: Common Mechanisms in Perception and
Action, pp. 294–314. Oxford University Press, Oxford.
Bekkering, H., Wohlschläger, A. and Gattis, M. (2000) Imitation of gestures in children is goal-directed.
Quarterly Journal of Experimental Psychology, 53, 153–164.
Borgo, F. and Shallice, T. (2001) When living things and other ‘sensory quality’ categories behave in the
same fashion: a novel category specificity effect. Neurocase, 7, 201–220.
Buccino, G., Riggio, L., Melli, G., Binkofski, F., Gallese, V. and Rizzolatti, G. (2005) Listening to
action-related sentences modulates the activity of the motor system: a combined TMS and
behavioral study. Cognitive Brain Research, 24, 355–363.
Buxbaum, L. J. (2001) Ideomotor apraxia: a call to action. Neurocase, 7, 445–458.
Buxbaum, L. J. and Saffran, E. M. (2002) Knowledge of object manipulation and object function:
dissociations in apraxic and nonapraxic subjects. Brain and Language, 82, 179–199.
Calvo-Merino, B., Glaser, D. E., Grèzes, J., Passingham, R. E. and Haggard, P. (2005) Action observation
and acquired motor skills: an FMRI study with expert dancers. Cerebral Cortex, 15, 1243–1249.
Caramazza, A. and Mahon, B. Z. (2003) The organization of conceptual knowledge: the evidence from
category-specific semantic deficits. Trends in Cognitive Sciences, 7, 354–361.
Caramazza, A. and Shelton, J. R. (1998) Domain-specific knowledge systems in the brain: the
animate–inanimate distinction. Journal of Cognitive Neuroscience, 10, 1–34.
Carpenter, M., Akhtar, N. and Tomasello, M. (1998) Fourteen- through 18-month-old infants differentially
imitate intentional and accidental actions. Infant Behavior and Development, 21, 315–330.
Carpenter, M., Pennington, B. F. and Rogers, S. J. (2001) Understanding others’ intentions in children
with autism and children with developmental delays. Journal of Autism and Developmental
Disorders, 31, 589–599.
Chainay, H. and Humphreys, G. W. (2002) Privileged access to action for objects relative to words.
Psychonomic Bulletin and Review, 9, 348–355.
Chao, L. L. and Martin, A. (2000) Representation of manipulable man-made objects in the dorsal
stream. Neuroimage, 12, 478–484.
Clark, A. (1997) Being There: Putting Brain, Body and World Together Again. MIT Press, Cambridge, MA.
Craighero, L., Fadiga, L., Rizzolatti, G. and Umiltà, C. (1999) Action for perception: a motor-visual
attentional effect. Journal of Experimental Psychology: Human Perception and Performance,
25, 1673–1692.
Creem, S. H. and Proffitt, D. R. (2001) Grasping objects by their handles: a necessary interaction
between cognition and action. Journal of Experimental Psychology: Human Perception and
Performance, 27, 218–228.
de Lafuente, V. and Romo, R. (2004) Language abilities of motor cortex. Neuron, 41, 178–180.
Fiez, J. A., Raichle, M. E., Balota, D. A., Tallal, P. and Petersen, S. E. (1996) PET activation of posterior
temporal regions during auditory word presentation and verb generation. Cerebral Cortex, 6, 1–10.
Fodor, J. A. (1983) Modularity of Mind: An essay on Faculty Psychology. MIT Press, Cambridge, MA.
Gainotti, G., Silveri, M. C., Daniele, A. and Giustolisi, L. (1995) Neuroanatomical correlates of
category-specific impairments: a critical survey. Memory, 3/4, 247–264.
Gallese, V. and Goldman, A. (1998) Mirror neurons and the simulation theory of mind-reading.
Trends in Cognitive Sciences, 2, 493–501.
Gallese, V. and Lakoff, G. (2005) The brain’s concepts: the role of the sensory-motor system in
conceptual knowledge. Cognitive Neuropsychology, 22, 455–479.
Gallese, V., Fadiga, L., Fogassi, L. and Rizzolatti, G. (1996) Action recognition in the premotor cortex.
Brain, 119, 593–609.
Gentilucci, M. and Gangitano, M. (1998) Influence of automatic word reading on motor control.
European Journal of Neuroscience, 10, 752–756.
Gergely, G., Nàdasdy, Z., Csibra, G. and Biró, S. (1995) Taking the intentional stance at 12 months of
age. Cognition, 56, 165–193.
Gergely, G., Bekkering, H. and Király, I. (2002) Rational imitation in preverbal infants. Nature, 415, 755.
Gibson, J. J. (1986) The Ecological Approach to Visual Perception. Erlbaum, Hillsdale, NJ.
Glenberg, A. M. and Kaschak, M. P. (2002) Grounding language in action. Psychonomic Bulletin
and Review, 9, 558–565.
Glover, S. (2004) Separate visual representations in the planning and control of action. Behavioral and
Brain Sciences, 27, 3–24.
Glover, S. and Dixon, P. (2002) Semantics affect the planning but not control of grasping. Experimental
Brain Research, 146, 383–387.
Glover, S., Rosenbaum, D. A., Graham, J. and Dixon, P. (2004) Grasping the meaning of words.
Experimental Brain Research, 154, 103–108.
Grafton, S. T., Arbib, M. A., Fadiga, L. and Rizzolatti, G. (1996) Localization of grasp representations in
humans by positron emission tomography. 2. Observation compared with imagination.
Experimental Brain Research, 112, 103–111.
Grafton, S. T., Fadiga, L., Arbib, M. A. and Rizzolatti, G. (1997) Premotor cortex activation during
observation and naming of familiar tools. Neuroimage, 6, 231–236.
Grèzes, J. and Decety, J. (2002) Does visual perception of object afford action? Evidence from a
neuroimaging study. Neuropsychologia, 40, 212–222.
Grèzes, J., Tucker, M., Ellis, R. and Passingham, R. E. (2003) Objects automatically potentiate action: an
fMRI study of implicit processing. European Journal of Neuroscience, 17, 2735–2740.
Hannus, A., Cornelissen, F. W., Lindemann, O. and Bekkering, H. (2005) Selection-for-action in visual
search. Acta Psychologica, 118, 171–191.
Hebb, D. O. (1949) The Organization of Behaviour. Wiley, New York.
Heilman, K. M., Rothi, L. J. and Valenstein, E. (1981) Two forms of ideomotor apraxia. Neurology,
32, 342–346.
Heyes, C., Bird, G., Johnson, H. and Haggard, P. (2005) Experience modulates automatic imitation.
Cognitive Brain Research, 22, 233–240.
Hommel, B. (1996) The cognitive representation of action: automatic integration of perceived action
effects. Psychological Research, 59, 176–186.
Hommel, B., Müsseler, J., Aschersleben, G. and Prinz, W. (2001) The theory of event coding (TEC):
a framework for perception and action planning. Behavioral and Brain Sciences, 24, 849–937.
Iacoboni, M., Woods, R. P., Brass, M., Bekkering, H., Mazziotta, J. C. and Rizzolatti, G. (1999) Cortical
mechanisms of human imitation. Science, 286, 2526–2528.
James, W. (1890) The Principles of Psychology. Dover, New York.
Jervis, C., Bennett, K., Thomas, J., Lim, S. and Castiello, U. (1999) Semantic category interference effects
upon the reach-to-grasp movement. Neuropsychologia, 37, 857–868.
Kable, J. W., Lease-Spellmeyer, J. and Chatterjee, A. (2002) Neural substrates of action event knowledge.
Journal of Cognitive Neuroscience, 14, 795–805.
Klatzky, R. L., Pellegrino, J. W., McCloskey, B. P. and Doherty, S. (1989) Can you squeeze a tomato? The
role of motor representations in semantic sensibility judgments. Journal of Memory and Language,
28, 56–77.
Laiacona, M., Capitani, E. and Caramazza, A. (2003) Category-specific semantic deficits do not reflect
the sensory/functional organization of the brain: a test of the ‘sensory quality’ hypothesis.
Neurocase, 9, 221–231.
Lakoff, G. and Johnson, M. (1999) Philosophy in the Flesh. Basic Books, New York.
Leiguarda, R. C. and Marsden, C. (2000) Limb apraxias: higher-order disorders of sensorimotor
integration. Brain, 123, 860–879.
Lindemann, O., Stenneken, P., Van Schie, H. T. and Bekkering, H. (2006) Semantic activation in action
planning. Journal of Experimental Psychology: Human Perception and Performance, 32, 633–643.
Martin, A. and Chao, L. L. (2001) Semantic memory and the brain: structure and processes. Current
Opinion in Neurobiology, 11, 194–201.
Martin, A., Haxby, J. V., Lalonde, F. M., Wiggs, C. L. and Ungerleider, L. G. (1995) Discrete cortical
regions associated with knowledge of color and knowledge of action. Science, 270, 102–105.
Maxfield, L. (1997) Attention and semantic priming: A review of prime task effects. Consciousness and
Cognition, 6, 204–218.
Meltzoff, A. N. (1995) Understanding the intentions of others: Re-enactment of intended acts by
18-month-old children. Developmental Psychology, 31, 1–16.
Meltzoff, A. N. and Moore, M. K. (1995) Infants’ understanding of people and things: From body
imitation to folk psychology. In Bermúdez, J. L., Marcel, A. and Eilan, N. (eds), The Body and the Self,
pp. 43–69. MIT Press, Cambridge, MA.
Murata, A., Fadiga, L., Fogassi, L., Gallese, V., Raos, V. and Rizzolatti, G. (1997) Object representation in
the ventral premotor cortex (Area F5) of the monkey. Journal of Neurophysiology, 78, 2226–2230.
Neely, J. H. (1991) Semantic priming effects in visual word recognition: a selective review of current
findings and theories. In Besner, D. and Humphreys, G. W. (eds), Basic Progress in Reading: Visual
Word Recognition, pp. 264–333. Erlbaum, Hillsdale, NJ.
Ochipa, C., Rothi, L. J. G and Heilman, K. M. (1992) Conceptual apraxia in Alzheimer’s disease. Brain,
115, 1061–1071.
Pecher, D., Zeelenberg, R. and Barsalou, L. W. (2004) Sensorimotor simulations underlie conceptual
representations: Modality-specific effects of prior activation. Psychonomic Bulletin and Review,
11, 164–167.
Phillips, A. T. and Wellman, H. M. (2005) Infants’ understanding of object-directed action. Cognition,
98, 137–155.
Piaget, J. (1953) The Origin of Intelligence in the Child (trans. Cook, M.). Routledge & Kegan Paul, London.
Pulvermüller, F., Shtyrov, Y. and Ilmoniemi, R. (2005) Brain signatures of meaning access in action
word recognition. Journal of Cognitive Neuroscience, 17, 884–892.
Rizzolatti, G., Fadiga, L., Fogassi, L. and Gallese, V. (1996) Premotor cortex and the recognition
of motor actions. Cognitive Brain Research, 3, 131–141.
Rizzolatti, G., Fogassi, L. and Gallese, V. (2001) Neurophysiological mechanisms underlying the
understanding and imitation of action. Nature Reviews Neuroscience, 2, 661–670.
Rosenbaum, D. A., Loukopoulos, L. D., Meulenbroek, R. G. J., Vaughan, J. and Englebrecht, S. E. (1995)
Planning reaches by evaluating stored postures. Psychological Review, 102, 28–67.
Rosenbaum, D. A., Meulenbroek, R. J., Vaughan, J. and Jansen, C. (2001) Posture-based motion
planning: applications to grasping. Psychological Review, 108, 709–734.
Rumiati, R. I. and Humphreys, G. W. (1998) Recognition by action: dissociating visual and semantic
routes to action in normal observers. Journal of Experimental Psychology: Human Perception and
Performance, 24, 631–647.
Rumiati, R. I. and Tessari, A. (2002) Imitation of novel and well-known actions: the role of short-term
memory. Experimental Brain Research, 142, 425–433.
Rumiati, R. I. et al. (2005) Common and differential neural mechanisms supporting imitation of
meaningful and meaningless actions. Journal of Cognitive Neuroscience, 17, 1420–1439.
Sirigu, A. et al. (1998) Distinct frontal regions for processing sentence syntax and story grammar. Cortex,
34, 771–778.
Sommerville, J. A. and Woodward, A. L. (2005) Pulling out the intentional structure of action: the
relation between action processing and action production in infancy. Cognition, 95, 1–30.
Stanfield, R. A. and Zwaan, R. A. (2001) The effect of implied orientation derived from verbal context
on picture recognition. Psychological Science, 12, 153–156.
Tettamanti, M. et al. (2005) Listening to action-related sentences activates fronto-parietal motor
circuits. Journal of Cognitive Neuroscience, 17, 273–281.
Tootell, R. B. et al. (1995) Functional analysis of human MT and related visual cortical areas using
magnetic resonance imaging. Journal of Neuroscience, 15, 3215–3230.
Tranel, D., Kemmerer, D., Adolphs, R., Damasio, H. and Damasio, A. R. (2003) Neural correlates of
conceptual knowledge for actions. Cognitive Neuropsychology, 20, 409–432.
Tucker, J. and Ellis, R. (2001) The potentiation of grasp types during visual object categorization.
Visual Cognition, 8, 769–800.
Umiltà, M. A. et al. (2001) I know what you are doing. A neurophysiological study. Neuron, 31, 155–165.
Van Schie, H. T., Wijers, A. A., Kellenbach, M. L. and Stowe, L. A. (2003) An event-related potential
investigation of the relationship between semantic and perceptual levels of representation. Brain and
Language, 86, 300–325.
Van Schie, H. T., Mars, R. B., Coles, M. G. H. and Bekkering, H. (2004) Modulation of activity in medial
frontal and motor cortices during error observation. Nature Neuroscience, 7, 549–554.
Warrington, E. K. and Shallice, T. (1984) Category specific semantic impairments. Brain, 107, 829–854.
Woodward, A. L. (1998) Infants selectively encode the goal object of an actor’s reach. Cognition,
69, 1–34.
Yoon, E. Y. and Humphreys, G. W. (2005) Direct and indirect effects of action on object classification.
Memory and Cognition, 33, 1131–1146.
Zalla, T., Pradat-Diehl, P. and Sirigu, A. (2003) Perception of action boundaries in patients with frontal
lobe damage. Neuropsychologia, 41, 1619–1627.
Zwaan, R. A. and Taylor, L. J. (2006) Seeing, acting, understanding: motor resonance in language
comprehension. Journal of Experimental Psychology: General, 135, 1–11.