
cortex 44 (2008) 587–597
Special issue: Original article
Behavioral and cortical mechanisms for spatial
coding and action planning
W. Pieter Medendorp a,b,*, Sabine M. Beurze a,b, Stan Van Pelt a and Jurrian Van Der Werf a,b
a Nijmegen Institute for Cognition and Information, Radboud University Nijmegen, Nijmegen, The Netherlands
b FC Donders Centre for Cognitive Neuroimaging, Radboud University Nijmegen, Nijmegen, The Netherlands
article info

Article history:
Received 15 February 2007
Reviewed 10 May 2007
Revised 4 June 2007
Accepted 26 June 2007
Published online 23 December 2007

Keywords:
Sensorimotor transformations
Human
Parietal cortex
Spatial updating

abstract

There is considerable evidence that the encoding of intended actions in visual space is represented in dynamic, gaze-centered maps, such that each eye movement requires an internal updating of these representations. Here, we review results from our own experiments on human subjects that test the additional geometric constraints to the dynamic updating of these spatial maps during whole-body motion. Subsequently, we summarize evidence and present new analyses of how these spatial signals may be integrated with motor effector signals in order to generate the appropriate commands for action. Finally, we discuss neuroimaging experiments suggesting that the posterior parietal cortex and the dorsal premotor cortex play selective roles in this process.

© 2008 Elsevier Masson Srl. All rights reserved.
1. Coding spatial goals
To interact successfully with our environment, we must
compute the spatial locations of objects of current interest
for ongoing behavior. To do so, we often rely on vision, which
reports locations relative to the retina, for example, 20° left of
the fovea. But because these spatial locations are encoded as
directions relative to the gaze line, their coordinates become
obsolete whenever the line of gaze moves. Nevertheless, we
manage to keep track of spatial target directions, using
remembered visual information, even after the gaze line has
shifted away from its position at the time of the first sight of
the target (Hallett and Lightstone, 1976; Medendorp et al.,
2002; Baker et al., 2003). This process of keeping track of the
locations of objects around us, even in the absence of current
spatial input, is referred to as ‘spatial updating’.
Controversy exists about how spatial updating is implemented, in particular with respect to the frame of reference
that is used to encode the location of an object (Battaglia-Mayer et al., 2003; Duhamel et al., 1992; Van Pelt et al., 2005;
Baker et al., 2003). Spatial locations stored within an egocentric frame (e.g., limb, eye, head or torso) must be recomputed
when the axes of the reference frame move in order to remain
veridical. In this respect, allocentric representations are more
stable, as they remain correct during intervening movements,
but they must be converted into an egocentric representation
for the control of movement. Suggestions have been made
that the brain constructs both allocentric and egocentric
representations for maintaining spatial stability, with the
use of either type of representation depending on spatial
context and task conditions (for reviews see Battaglia-Mayer
et al., 2003; Burgess 2006).
* Corresponding author. Nijmegen Institute for Cognition and Information, Radboud University Nijmegen, P.O. Box 9104, NL-6500 HE,
Nijmegen, The Netherlands.
E-mail address: [email protected] (W.P. Medendorp).
doi:10.1016/j.cortex.2007.06.001
2. Dynamic updating of spatial maps
In simple task conditions and in neutral space, psychophysical evidence has suggested that an egocentric, gaze-centered
reference frame dominates in spatial updating (Henriques
et al., 1998; Medendorp and Crawford, 2002; Baker et al.,
2003). This mechanism has been called gaze-centered updating, or gaze-centered remapping. At present, it is generally
believed that the putative extraretinal signal necessary for
gaze-centered updating is a corollary discharge of the evoked
eye motor command (Sommer and Wurtz, 2002). This solution
entails that every time the eyes (gaze) rotate, the representation of the visual target in gaze-centered coordinates is
‘remapped’ by subtracting the vector of intervening eye
rotation (Sommer and Wurtz, 2002). While a vector subtraction would be appropriate in the case of one-dimensional eye
rotations, nonlinear operations are needed for updating
across rotations in 3-D (Medendorp et al., 2002).
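To make this distinction concrete, the following sketch (illustrative Python with an arbitrary coordinate convention and arbitrary angles; not code from any of the studies cited) contrasts the simple subtraction that suffices for a one-dimensional gaze shift with the rotation-based operation that is needed once torsion is involved:

import numpy as np

def rotation_matrix(axis, angle_deg):
    # Rodrigues' formula: rotation by angle_deg about a unit axis (right-hand rule).
    x, y, z = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
    a = np.radians(angle_deg)
    c, s, C = np.cos(a), np.sin(a), 1.0 - np.cos(a)
    return np.array([[c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
                     [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
                     [z*x*C - y*s, z*y*C + x*s, c + z*z*C]])

# Remembered target direction in gaze-centered coordinates
# (unit vector; x = straight ahead, y = left, z = up): 20 deg to the left.
target_old = np.array([np.cos(np.radians(20)), np.sin(np.radians(20)), 0.0])

# 1-D case: for a purely horizontal 10-deg leftward saccade, updating reduces
# to subtracting the eye displacement from the target's azimuth: 20 - 10 = 10 deg.

# 3-D case: the intervening eye rotation is a rotation matrix (here a 10-deg
# leftward rotation combined with 30 deg of ocular torsion); the remapped
# direction is obtained by applying the inverse rotation -- a trigonometric,
# not an additive, operation.
R_eye = rotation_matrix([0, 0, 1], 10) @ rotation_matrix([1, 0, 0], 30)
target_new = R_eye.T @ target_old

azimuth = np.degrees(np.arctan2(target_new[1], target_new[0]))
elevation = np.degrees(np.arcsin(target_new[2]))
print(azimuth, elevation)   # no longer a pure 10-deg change in azimuth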
In recent years, neural correlates of gaze-centered
updating have been identified in a number of cortical and
sub-cortical areas in the monkey brain. Regions such as the
lateral intraparietal area, parietal reach region, frontal eye
fields, the superior colliculus, and extrastriate visual areas
have all been shown to update their activity patterns relative
to the new gaze direction after an eye rotation has occurred
(see e.g., Duhamel et al., 1992; Batista et al., 1999; Heiser
et al., 2005; Nakamura and Colby, 2002). Conversely, it has
been shown that inactivating these regions impairs the
process of spatial updating (Li and Andersen, 2001).
Recently, updating in conjunction with eye movements
has also been described in the human brain, in parietal cortex
(Medendorp et al., 2003a, 2005a; Merriam et al., 2003; Bellebaum et al., 2005) as well as in other extrastriate visual areas
(Merriam et al., 2007). In these human experiments, gaze-centered updating was shown by demonstrating the dynamic exchange of activity between the two cortical hemispheres
when an eye movement brings the representation of a stimulus into the opposite hemifield. Fig. 1 illustrates the updating
observations by Medendorp et al. (2003a), obtained using
event-related functional magnetic resonance imaging. In their
experiments, first a region in parietal cortex was located that
showed lateralized responses for memory-guided eye movements, analogous to observations by others (Sereno et al.,
2001; Schluppeck et al., 2005). The activity in this region was
then monitored using an updating task. Subjects fixated centrally and viewed two brief peripheral dots, a ‘goal’ and ‘refixation’ target, respectively. Both targets were presented either
left or right of central fixation. After a delay, subjects performed a saccade to the refixation target, which made the remembered location of the goal target switch hemifields on
half of the trials. Crucially, in these trials, the region’s activation also shifted, as shown in Fig. 1. If the goal target shifted
into the contralateral hemifield after the first saccade, a high
sustained activation was observed in the second delay period,
but if it shifted to the ipsilateral hemifield the post-saccadic
activity level decreased. In other words, this parietal region
stored and updated a representation of the goal target relative
to current gaze direction. Medendorp et al. (2003a) made these
observations not only when the goal target served for a saccade, but also
when it served for a reaching movement. A similar interhemispheric transfer of activity has also been found when
the location of a visual stimulus must be reversed to specify
the goal for an anti-saccade (Medendorp et al., 2005a; Van Der Werf et al., 2006) or in other visual-motor dissociation tasks (Fernandez-Ruiz et al., 2007).

Fig. 1 – A bilateral parietal region, shown on an inflated representation of the brain, mediates gaze-centered spatial updating in a double-step saccade task. Two stimuli (stim), flashed either in the left or the right hemifield, cause increased activity in the contralateral parietal area. After a 7 sec delay, the subject makes the first saccade (sac1) and another 12 sec later the second saccade (sac2). After the first saccade, the remembered target of the second saccade switches hemifields (left-to-right or right-to-left). Correspondingly, the region's activation also shifted: if the target representation shifted into the contralateral hemifield, a high sustained activation was observed prior to the second saccade, but if it shifted to the ipsilateral hemifield the post-saccadic activity level decreased. Modified from Medendorp et al. (2003a).
In turn, a failure in this gaze-centered updating
mechanism could explain the deficits that occur in visuomotor processing in patients with bilateral optic ataxia or
patients with neglect (Karnath and Perenin, 2005; Khan
et al., 2005; Heide et al., 1995; Dijkerman et al., 2006).
3. Translational updating
Until recently, the support for gaze-centered updating was based
mostly on behavioral and neural signals derived during simple
eye saccades with the head and body restrained (but see Baker
et al., 2003). New questions arise when research in updating
mechanisms is broadened from saccadic eye movements to
other eye movement systems (e.g., pursuit eye movements)
and other modalities (head, arm and body movements). For
example, a complicating factor for spatial updating is the
translation of the eyes, as occurs during a head rotation about
the neck axis or during translational motion of the head and
body. When the eyes translate through space, images of
space-fixed visual objects move at different speeds and in different directions relative to the retinas, depending on their distance from the eyes’ fixation point. This is known as motion
parallax, and the same geometry needs to be accounted for
in the gaze-centered updating of remembered targets during
translational motion. This is outlined in Fig. 2: unlike the
case of updating for eye rotations, where parallax geometry
does not play a role, target distance is a necessary component
in gaze-centered updating when the eyes translate.
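As a back-of-the-envelope illustration of this constraint (a sketch with hypothetical distances, not data from the studies cited), the same lateral translation calls for very different amounts of gaze-centered updating for a near and a far target:

import numpy as np

# Top view; x = rightward, y = straight ahead (depth), in metres,
# with the (cyclopean) eye initially at the origin.
def direction_deg(target_xy, eye_xy):
    # Azimuth of the target relative to straight ahead, as seen from eye_xy.
    dx, dy = target_xy[0] - eye_xy[0], target_xy[1] - eye_xy[1]
    return np.degrees(np.arctan2(dx, dy))

near_target = np.array([0.0, 0.30])    # 30 cm in front of the eye
far_target  = np.array([0.0, 2.00])    # 2 m in front of the eye
translation = np.array([0.10, 0.0])    # the eye translates 10 cm to the right

for name, tgt in [("near", near_target), ("far", far_target)]:
    before = direction_deg(tgt, np.zeros(2))
    after  = direction_deg(tgt, translation)     # veridical post-translation direction
    print(f"{name}: required updating {after - before:+.1f} deg")

# Prints roughly -18.4 deg for the near target and -2.9 deg for the far one:
# the identical 10 cm translation demands depth-dependent updating, whereas
# a pure eye rotation would shift both directions by the same angle.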
Recent studies have shown that differences in target
distance are taken into account in the amplitude of memory
saccades to remember target locations after an intervening
translation (Li et al., 2005; Li and Angelaki, 2005; Medendorp
et al., 2003b). This suggests that there must be a neural
mechanism to handle translational updating, but little is
known about how this mechanism actually works. Studies
with eye movements, for example, cannot readily identify
the reference frame that is involved in the computation, due
to the similarity between the sensory frame of reference imposed
by the retina and the oculomotor reference frame of the eyes
(Snyder, 2000). That is, for the eyes to look at the remembered
target location after a translational movement, saccadic
amplitude must depend nonlinearly on target depth and
direction. If saccadic amplitude is not scaled appropriately,
the saccadic errors that appear do not reveal information
about the spatial representation that codes the target per se
but can also be related to a faulty motor command. Although
the latter option may be less likely, arm movements do not
suffer from this drawback: the sensory frame of the retina is
quite distinct from the motor frame of reference of the arm.
In a recent study using arm movements, we addressed the
question of how the brain remembers target locations during
translational movements (Van Pelt and Medendorp, 2007). In
our test, subjects viewed targets, briefly flashed in darkness
at different distances from fixation, then translated sideways,
and then reached the memorized locations (Fig. 3A, middle
panel).

Fig. 2 – Schematic illustration of the geometry underlying spatial updating during rotational (left) and translational motion (right). Two targets, flashed at different distances from the fovea, are stored in spatial memory. If these memories are coded in gaze-centered coordinates, they must be updated when the eyes move. The amount of updating for the targets is the same for rotational eye motion. For translational eye motion, the amount of updating depends on target distance and the size of the translation.

We reasoned as follows. If the targets were visible at all times, even when the body translates sideways, parallax
geometry dictates that images of targets in front of and behind
the eyes’ fixation point (FP) shift in opposite directions across
the retinas. Thus, if the brain were to simulate parallax
geometry, as required for gaze-centered updating in darkness,
misjudging the body translation would make the updated
locations deviate from the actual locations, resulting in reach
errors in opposite directions for targets in front of and behind
the FP (Fig. 3A, left panel). In contrast, parallax geometry plays
no role if the brain were to code locations in a gaze-independent reference frame, e.g., relative to the body midline
(Fig. 3A, right panel). If then translations are misjudged, the
updated locations will also deviate from the actual locations,
but with updating errors in the same direction, irrespective
of target depth. We performed this experiment on 12 subjects,
whose reach responses showed small but clear parallax-sensitive errors: their errors increased with depth from fixation (not shown) and reversed in lateral direction for targets presented at opposite depths from fixation (Fig. 3B and C). In other words, our results were consistent with updating errors occurring in a gaze-centered reference frame, which indicates that translational updating is organized along much the same lines as head-fixed saccadic updating.

Fig. 3 – (A) If targets, flashed in front of and behind the eyes' fixation point, are stored in gaze-centered coordinates, they must be updated in opposite directions when the eyes translate. If the same targets are stored in gaze-independent coordinates, e.g., body coordinates, updating directions are the same. (B) Subjects view targets, flashed in darkness in front of or behind fixation, then translate their body sideways, and subsequently reach to the memorized target. Reach errors depend on the direction of the intervening translation (leftward or rightward) and on the depth of the target from fixation, confirming the predictions of the gaze-dependent updating model. This typical subject overestimated the amount of self-motion in the updating process, i.e., remembered target direction is overestimated in gaze-centered coordinates. (C) Reach errors to targets at opposite but equal distances from fixation, plotted against each other. Data would fall along the negative diagonal if subjects had updated remembered locations in a gaze-centered frame and along the positive diagonal if subjects had employed a gaze-independent updating mechanism. This subject supports the gaze-centered model. Modified from Van Pelt and Medendorp (2007).
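The qualitative predictions sketched in Fig. 3A can be reproduced with a small simulation (arbitrary geometry and an assumed 30% overestimate of self-motion; an illustration of the logic only, not the analysis used in the study):

import numpy as np

# Top view; x = lateral, y = depth, in metres. The subject fixates a
# space-fixed point (FP) while translating rightward, but overestimates
# the size of the translation.
def azimuth(point, eye):
    return np.degrees(np.arctan2(point[0] - eye[0], point[1] - eye[1]))

def eccentricity(target, eye, fp):
    # Gaze-centered direction of the target, i.e., relative to the fixated FP.
    return azimuth(target, eye) - azimuth(fp, eye)

fp = np.array([0.0, 0.50])                       # fixation point, 50 cm ahead
targets = {"near (in front of FP)": np.array([0.0, 0.35]),
           "far (behind FP)":       np.array([0.0, 0.90])}

actual   = np.array([0.10, 0.0])                 # actual rightward translation
believed = 1.3 * actual                          # overestimated self-motion

for name, tgt in targets.items():
    correct  = eccentricity(tgt, actual,   fp)   # veridical post-translation direction
    gaze_dep = eccentricity(tgt, believed, fp)   # parallax simulated with the wrong translation
    body_cen = (tgt - believed) - (tgt - actual) # error of a body-centered update, in metres
    print(f"{name}: gaze-dependent error {gaze_dep - correct:+.1f} deg, "
          f"body-centered error {100 * body_cen[0]:+.1f} cm")

# The gaze-dependent errors come out with opposite signs for the near and far
# targets, whereas the body-centered errors are identical for both -- the two
# signatures that the reach data distinguish between.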
One of the predictions of this work is that during translational motion of the head, internal target representations
are remapped depending on their depth from fixation
(Medendorp et al., 2003b; Van Pelt and Medendorp, 2007). So
far, the neural correlates of translational updating have not
been studied, and future work should reveal whether the mechanisms for spatial updating show this level of sophistication.
A candidate region for this process is parietal cortex given its
role in rotational updating, described above. Moreover, accumulating evidence indicates that visuomotor neurons in parietal areas have three-dimensional receptive fields (Genovesio
and Ferraina, 2004; Gnadt and Mays, 1995), showing that these
neurons are not only sensitive to the direction of a target but
also to its depth. It is thus conceivable that these neurons
play a role in the process of visuospatial updating for translations, although this has not yet been shown. In theoretical support of this idea, recent neural network simulations have
shown that the brain could indeed rely on stereoscopic depth
and direction information to update visual space during self-motion (Van Pelt and Medendorp, 2007). Experiments that explicitly address the role of depth signals in spatial updating are
currently underway (Van Pelt and Medendorp, 2006).
What are the contributions of the various extraretinal signals for updating in these body-free conditions? In the study
above (Van Pelt and Medendorp, 2007), subjects made their
translations actively thus giving rise to a variety of potential
extraretinal updating cues, such as the corollary discharges
of the motor commands as well as vestibular, gravitational,
and neck proprioceptive signals. Behavioral studies in the
monkey have indicated the vestibular system to be the main
extraretinal source of motion-related information (Li et al.,
2005; Li and Angelaki, 2005; Wei et al., 2006), but the exact
nature of its interaction with spatial signals at the neural level
must await further studies.
4. Linking space to effectors
Considering that targets are represented in dynamic spatial
maps, how are these dynamic spatial signals then funneled
to the motor areas? Obviously, as muscles need to contract
to make a movement, a gaze-centered target representation
on its own cannot drive a movement. The initial position of
the effector is just as important a variable in the computation
of a movement plan as the position of the target (Vindras et al.,
2005). Imagine the simple task of picking up a cup of coffee. In
this case, the brain must combine its internal representation
of the location of the cup with its representation of the hand
selected for use in order to compute a motor error that
ultimately leads to the motor commands for control of the
necessary muscles. To simplify this computation, it has
been suggested that target and hand position must be
encoded in the same frame of reference at some stage in the
sensorimotor process. At which level within the visuomotor
system is the motor error, i.e., the difference vector between
target and hand position, computed?
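Whatever the answer, the computation itself is nothing more than a vector subtraction once both quantities are expressed in a common frame; a minimal sketch (hypothetical numbers, with gaze-centered coordinates assumed for both signals):

import numpy as np

target_gaze = np.array([15.0, 5.0])      # remembered cup location, deg relative to the fovea
hand_gaze   = np.array([-10.0, -20.0])   # current hand position, also in gaze coordinates

# Desired movement vector (motor error): the difference of the two,
# provided both are expressed in the same reference frame.
motor_error = target_gaze - hand_gaze    # -> [25., 25.] deg

# If hand position were available only in body coordinates, it would first
# have to be transformed into gaze coordinates (using eye and head position
# signals) -- or the target transformed the other way -- before this
# subtraction could be carried out.
print(motor_error)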
As the evidence above indicates, target position appears to
be coded in dynamic gaze-centered coordinates. Hand position
can be derived from both visual and proprioceptive information, such that it may be coded in gaze-centered coordinates,
body-centered coordinates, or both in the early sensory
representations (Buneo et al., 2002). In darkness, or in other
situations where view of the hand is occluded, coordinate
transformations are required to compare the location of the
target to the location of the hand. Until recently, the generally
accepted view was that the location of a target is first transformed
from gaze-centered coordinates to body-centered coordinates
using sensory signals about the linkage geometry, such as eye
and head position signals, and then compared to the position of
the hand with respect to the body (Flanders et al., 1992;
McIntyre et al., 1997). According to this hypothesis, the
movement plan is computed in body-centered coordinates.
More recently, however, neurophysiological evidence recorded
in posterior parietal and dorsal premotor cortex in the monkey
brain has suggested that the target-hand comparison – the
computation of the difference vector – is done at an earlier
stage of visuomotor processing, in gaze-centered coordinates
(Buneo et al., 2002; Pesaran et al., 2006). Buneo and colleagues
based their evidence on the comparison of the activity of single
cells for two different movements to the same target location in
hand, body, gaze, body and hand, or eye and hand coordinates
and found the best correlation when target locations were
identical in both gaze and hand coordinates. Recently, Pesaran
et al. (2006) arrived at similar conclusions when they mapped
the response fields of PMd neurons, demonstrating that the
activity of these neurons codes the relative position of target,
hand, and eye. Thus, the implication of these experiments,
which were performed with direct vision of the hand, is that
there must be a gaze-centered representation of hand position.
However, demonstrating that this representation can also be derived from transformed somatosensory information requires showing that it persists in the absence of vision of the hand (Buneo and Andersen, 2006).
We set out to collect further evidence about the mechanisms that are involved in the integration of target and hand
position signals in reach planning using a behavioral paradigm in humans (Beurze et al., 2006). We examined the reach
errors of one-dimensional movements to memorized targets
starting from various initial hand positions while keeping
gaze fixed in various directions. We tested subjects with and
without vision of the hand at the moment the target was
presented (referred to as Seen and Unseen Hand conditions,
respectively). We first present these results using a similar
analysis as Buneo et al. (2002), making a pair-wise comparison
of the reach errors. Fig. 4 shows the results of one subject for
the Unseen Hand condition, showing the lowest degree of
scatter when target locations were identical in gaze and
hand coordinates. This was found for eight out of ten subjects
tested without visual hand feedback during reach planning
and in seven out of eight subjects tested with visual hand
feedback. In other words, these results are in line with the
interpretation of Buneo et al. (2002) in the sense that the reach
error depended on the location of the target in gaze and hand
coordinates.
However, caution should be exercised when interpreting
these results. Equally valid interpretations are that the brain
computes target and hand position in gaze-centered coordinates, or target and gaze position in hand-centered coordinates, or gaze and hand position in target coordinates, or
indeed a relative position signal reflecting the difference
vector between hand and target location in gaze coordinates.
Theoretically speaking, we cannot tell, and any of these conceptual distinctions seems purely arbitrary at the behavioral
level (Buneo and Andersen, 2006). As a side note, one could
get away from the indeterminacy of reference frames by
performing these experiments under the manipulation of all
three rotational degrees of freedom (Crawford et al., 2004;
Medendorp et al., 2002). Let us illustrate this with an example. Suppose one keeps the head straight up and vision provides
the locations of the target and hand, with say the target at 20°
above the fovea and the hand at 30° right of the fovea. In this
case, the displacement vector between hand and target would
yield (−30°, +20°) in gaze coordinates, which is equivalent to
the movement vector in body coordinates since the head is
erect. Now suppose the head (read: eyes) is tilted in roll, say
90° counterclockwise. In this case, the same body-centered
target and hand locations would stimulate different locations
on the retina (target at 20° right and hand at 30° below the
fovea), yielding a different gaze-centered movement vector
(+20°, +30°).
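The example can be verified with a two-dimensional rotation (a small-angle, planar approximation of retinal coordinates; the numbers are those used above):

import numpy as np

def ocular_roll(points_deg, angle_deg):
    # A counterclockwise roll of the eye rotates retinal images clockwise,
    # so the image coordinates are rotated by the opposite angle.
    a = np.radians(-angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return points_deg @ R.T

# Head and eyes upright: target 20 deg above, hand 30 deg right of the fovea
# (coordinates are azimuth rightward+, elevation upward+).
target = np.array([0.0, 20.0])
hand   = np.array([30.0, 0.0])
print(target - hand)                    # gaze-centered movement vector: [-30.  20.]

# Same body-fixed locations after the eyes roll 90 deg counterclockwise:
target_r, hand_r = ocular_roll(np.vstack([target, hand]), 90.0)
print(target_r, hand_r)                 # target ~[20, 0], hand ~[0, -30]
print(target_r - hand_r)                # new gaze-centered movement vector: [20. 30.]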
Fig. 4 – Testing the reference frame for the hand-target comparison in the Unseen Hand condition. Subjects made reaching movements to targets at seven different locations relative to the body (range: −30° to 30°), executed using various initial hand positions and with gaze fixed in various directions. Scatter plots of reach errors for movements performed to identical target locations in body coordinates (A), hand coordinates (B), gaze coordinates (C), hand and body coordinates (D), and hand and gaze coordinates (E). Each data point represents the errors for a pair of movements taken from different experimental conditions (exemplified by the cartoons on top). Errors were randomly assigned to the horizontal and vertical axes. Panels D and E are modified from Beurze et al. (2006).

Despite these geometrical reservations, our data speak to
a model of target-hand integration in a gaze-centered frame
of reference. The following supports this claim, using the
collective weight of the results of both the Unseen and Seen
Hand conditions. Fig. 5 revisits the data in Fig. 4, plotting the
systematic reach errors as a function of gaze and hand position
relative to the target, for both the Seen and Unseen Hand
conditions. The various panels demonstrate virtually a single
response curve for all seven body-fixed target locations, as if
the body-centered location of the target had no effect on the
reach errors. Moreover, the finding that the reach errors
depend on initial hand location in a similar way suggests that
the brain does not specify a movement in terms of a final position but rather in terms of a vector (Shadmehr and Wise, 2005;
Vindras et al., 2005). Importantly, Fig. 5 also shows that making
the hand visible before the reach reduced the errors. In other
words, the influence of hand position depends on visual information (Sober and Sabes, 2003). Obviously, this finding is much
more difficult to reconcile with a body-centered integration
model than with a model in visual (i.e., gaze-centered) coordinates, which is not to say that the gaze-centered visuomotor
scheme is used at all times and in all contexts (see, e.g.,
Carrozzo et al., 2002; McIntyre et al., 1998; Van Pelt et al., 2005).
The implication of these results is that initial hand
position, which must be derived from proprioceptive information
when the hand is invisible, is transformed ‘backwards’ into
gaze coordinates, using gaze position and other extraretinal
signals (Crawford et al., 2004; Pesaran et al., 2006). Sober and
Sabes (2003) have shown that a hand position estimate is
determined by the relative weighting of both visual and
proprioceptive information (see also Rossetti et al., 1995).
Generally, vision is a more accurate sensory modality than
proprioception and therefore receives a greater weight.
Moreover, from the perspective of this model, vision
puts the hand position directly in gaze coordinates, whereas
a hand position based on proprioception needs an additional
computation to be represented in these coordinates.
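In its simplest form, such a scheme amounts to reliability-weighted averaging; the sketch below (illustrative variances and an oversimplified, purely subtractive proprioception-to-gaze transformation; not fitted parameters from the cited work) captures the two points made above:

import numpy as np

hand_vision       = np.array([28.0, 2.0])    # hand seen at this location, deg in gaze coordinates
hand_proprio_body = np.array([12.0, -3.0])   # felt hand position, body coordinates
eye_in_head       = np.array([15.0, -5.0])   # assumed eye(-and-head) orientation, deg

# The proprioceptive estimate first has to be brought into gaze coordinates
# (here a bare subtraction; in general a nonlinear transformation).
hand_proprio_gaze = hand_proprio_body - eye_in_head

# Reliability-weighted fusion: the more precise cue gets the larger weight.
var_vision, var_proprio = 1.0, 4.0           # vision assumed four times more reliable
w_vision = (1.0 / var_vision) / (1.0 / var_vision + 1.0 / var_proprio)   # = 0.8

hand_estimate = w_vision * hand_vision + (1.0 - w_vision) * hand_proprio_gaze
print(hand_estimate)                         # dominated by the visual estimate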
We emphasize that a gaze-centered movement vector
must still be put through further reference frame transformations in order to convert it into a more intrinsic limb-centered
muscle-based motor command, requiring nonlinear operations to deal with the complex linkage structure between the
retina and the movement effector (Crawford et al., 2004;
Buneo and Andersen, 2006; Sober and Sabes, 2005). In support
of this, Sober and Sabes (2003, 2005) showed that a hand/arm
position estimate is required at two stages of motor planning:
first to determine the desired movement vector, and second to
transform the movement vector into a joint-based motor
command.
5. Mechanisms for target and effector selection
Where in the human brain can the neural correlates of these
behavioral findings be identified? Recent neuroimaging work
in humans has reported activity in various areas along
the dorsal parietal-frontal network during the preparation
and execution of simple reaching movements (Astafiev
et al., 2003; Prado et al., 2005; Medendorp et al., 2005b). Yet,
most of this work has been unclear regarding the precise
role these areas serve in the integration of target and effector
information in the planning of a reach, let alone the reference frame in which these areas operate.
One way to investigate this issue is by dissociating the
process of reach planning into separate stages of target
processing, effector processing and the integrative processing
of both target and effector for movement planning. Hoshi and
Tanji (2000) were the first to assess these stages of reach planning in monkeys using a paradigm where the spatial goal of
the movement (left or right from fixation) and the effector to
be employed (left or right hand) are presented sequentially,
in either order, and separated in time by a delay. Their investigations using this paradigm (Hoshi and Tanji, 2000, 2004a, 2004b, 2006), as well as other research using analogous paradigms (Calton et al., 2002), have indicated more specifically which regions in parietal and frontal cortex play a role in the process of reach planning. Fig. 6A shows an overview of these results, as inferred from these studies. While some of these regions seem involved at a global level, other regions such as the pre-supplementary motor area (pre-SMA), dorsal premotor cortex (PMd), dorsolateral prefrontal cortex (dl-PFC) and the intraparietal sulcus (IPS) have been attributed a central role in the integration of target and effector information.

Fig. 5 – Reach errors in the Unseen and Seen Hand conditions plotted as a function of either the gaze-centered or hand-centered location of the target. The data are well described by single response curves in each case, indicating that the body-centered location of the target has no effect on the reach error. Visual feedback about hand position significantly reduces the observed reach errors (right-hand panels). Modified from Beurze et al. (2006).
Recently, we applied rapid event-related fMRI at 3 T to obtain insight into the target-hand integration process in humans, using Hoshi and Tanji's two-stage instruction paradigm (Beurze et al., 2007). Sixteen healthy right-handed subjects prepared a reaching movement following two successive instruction cues given in random order. A goal cue was signaled as a brief visual target, presented for 250 msec either leftward or rightward relative to a central fixation point at an eccentricity of ~10°; an effector cue was given as a color
change of this central fixation point, indicating the use of
either the left or right hand. Each cue was followed by a random delay of less than five seconds. Subsequently, following
a go-signal, the subjects executed the reach while maintaining
central eye fixation. Thus, in effect, subjects could only store
the information about the goal location or hand choice after
the first cue, while they were able to integrate the information
for the reach plan after presentation of the second cue.
In our analysis, we reasoned that regions that integrate
spatial and effector signals should fulfill two requirements.
First, they should show significant activation to each of the
two cues, indicating access to both types of information.
Second, they should respond more strongly to the second
cue than to the first, due to the increased metabolic demands
for the integrative processing of the two cues. Thus, we did not
look for regions specific to target or effector processing in
isolation (but see Blangero et al., 2008, this issue). As a result,
we found bilateral regions in the posterior parietal cortex, the
premotor cortex, the medial frontal cortex and the insular cortex to be involved in target-effector integration (see Fig. 6B). As
far as corresponding experiments have been performed, many of
these regions have also been observed in monkeys performing similar tasks (see Fig. 6A). In this respect, our results emphasize
the synergy between primate neurophysiology and human
functional imaging concerning the neural mechanisms for
reach planning. In a further analysis, we examined the
functional properties of the human integration regions in
terms of spatial and effector specificity. This showed that the posterior parietal cortex and the dorsal premotor cortex selectively specify both the spatial location of a target and the effector selected for the response. This led us to conclude that these regions are selectively engaged in the neural computations for human reach planning (Beurze et al., 2007).

Fig. 6 – (A) Regions in the monkey brain that have been attributed a role in the integration of target and hand information based on sensory signals. Data are based on studies by Hoshi and Tanji (2000, 2004a, 2004b, 2006) and Calton et al. (2002). (B) Similar regions in the human brain, modified from Beurze et al. (2007), including bilateral SMA, CMA, PMd, PMv, IPS and insula and the left dl-PFC (not shown). All regions responded significantly to the first cue but increased their activity after the second cue, when the information of both cues can be integrated to develop a movement plan.
One may wonder whether we have revealed the neural
correlates of our behavioral findings described in the previous
section. We admit that, at this stage, the identified regions
have not been decoded in terms of the frames of reference
that are employed. These regions will serve as a starting point
in our future research, which is intended to address this issue
in more detail (Beurze et al., 2006; Buneo et al., 2002; Pesaran
et al., 2006). One interesting point to make, though, is that
the region observed in posterior parietal cortex in Fig. 6 seems
to exhibit some overlap with the region described in Fig. 1,
which was shown to code saccade and reach representations
in gaze-centered coordinates (Medendorp et al., 2003a, 2005a,
2005b). There may be a saccade-to-pointing gradient that
begins at the region in Fig. 1 and then extends more medially
into the present region (Connolly et al., 2003; Astafiev et al.,
2003), which perhaps serves as a homolog of the monkey
parietal reach region (Batista et al., 1999). Taking this one
step further, it may be speculated that this parietal region
begins the neural computations required for an accurate
reach by integrating target and hand information in gaze-centered coordinates, which is consistent with the behavioral
observations in Figs. 4 and 5.
6. Summary
In this paper, we have reviewed arguments and principles for
the use of a gaze-centered reference frame to implement spatial updating and plans for movement. This suggests: (1)
a coding and updating of a spatial goal in gaze-centered coordinates, (2) a coding of the location of effectors (hands) in
gaze-centered coordinates, even when the effector is invisible,
and (3) a specification of the motor plan in terms of a gaze-centered
desired movement vector, transformed afterward into
a joint-based motor command.
The updating findings show that the brain possesses a geometrically complete, dynamic map of remembered space,
whose spatial accuracy is maintained in gaze-centered
coordinates by internally simulating the geometry of motion
parallax during volitional translatory body movements. This
finding corroborates and extends previous findings about
head-fixed saccadic updating in human and monkey studies.
In monkey updating experiments, neurons have been shown
that actually begin to respond before the eye movement to
stimuli that will enter the receptive field after the eye
movement (Duhamel et al., 1992). In other words, these
neurons anticipate the sensory consequences of the future
eye position before the saccade is executed. Based on these
studies, it has been argued that the updating mechanism relies on a copy of the eye motor command (Sommer and Wurtz,
2002) rather than on sensory feedback, which is only available
after a delay. In the literature, systems that predict
consequences of motor commands in sensory coordinates
are called forward models (Wolpert and Ghahramani, 2000;
Shadmehr and Wise, 2005). Evidently, such systems have no
efficacy during passive body displacements when updating
is entirely dependent on other sensory feedback, including
vestibular and other proprioceptive signals. During active
body translations, however, both efference copies, in combination with a forward model of body dynamics, and sensory feedback could play a role in spatial updating, possibly producing an estimate of visual space that is more accurate than either source alone could provide (Vaziri et al., 2006).
Further studies should be conducted to address this issue.
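One way to picture such a combination is as reliability-weighted fusion of a forward-model prediction with delayed sensory feedback; the sketch below uses assumed noise levels purely for illustration:

import numpy as np

rng = np.random.default_rng(1)
true_translation = 10.0                                 # cm, one-dimensional for simplicity

# Forward model: predicts the displacement from the efference copy of the motor command.
predicted = true_translation + rng.normal(0.0, 1.5)     # assumed prediction noise (SD 1.5 cm)

# Sensory (e.g., vestibular/proprioceptive) feedback: assumed more precise, but delayed.
sensed = true_translation + rng.normal(0.0, 1.0)        # assumed sensory noise (SD 1.0 cm)

# Reliability-weighted fusion of the two estimates.
var_pred, var_sens = 1.5 ** 2, 1.0 ** 2
w = (1.0 / var_pred) / (1.0 / var_pred + 1.0 / var_sens)
fused = w * predicted + (1.0 - w) * sensed

# The fused estimate has a lower variance than either source alone,
# which is the sense in which the combination can be more accurate.
fused_var = 1.0 / (1.0 / var_pred + 1.0 / var_sens)     # ~0.69 < 1.0 < 2.25
print(predicted, sensed, fused, fused_var)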
Our evidence also indicates that in the development of
a reach plan, the brain computes a gaze-centered hand position
in order to determine a gaze-centered difference vector
between hand and target (see Figs. 4 and 5). Furthermore, the
presented fMRI results (Beurze et al., 2007 and see Figs. 1 and
6), as well as other neurophysiological results (Hoshi and Tanji,
2000; Buneo et al., 2002; Pesaran et al., 2006), suggest that posterior parietal cortex and dorsal premotor cortex are key players
in the integration of target selection and effector selection.
Based on our data, we cannot say whether these regions only
facilitate a selection process of target and effector, or whether
they are also incorporated in the feedback loop that computes
moment-to-moment motor error during the movement.
Recently, however, various lesion and transcranial magnetic
stimulation studies have argued that these regions do participate
in the online guidance of the movement (Desmurget et al.,
1999; Lee and Van Donkelaar, 2006; Wolpert et al., 1998; Karnath
and Perenin, 2005; Grea et al., 2002), using a forward model to
estimate the current state of the limb in gaze-centered coordinates (Desmurget and Grafton, 2000; Ariff et al., 2002). In close
connection, it has even been speculated that the gaze-centered
tuning for target position may in fact reflect an estimate of the
position of the effector in gaze-centered coordinates at some
point in the future (Buneo and Andersen, 2006).
In sum, the results presented here shed more light on how
the human brain stores and transforms spatial information
into motor action. Central in our observations is the
dominance of gaze-centered coordinates, in spatial updating
and movement planning, although we do not want to argue
that no spatial representations can exist in other coordinate
systems in parallel (see Battaglia-Mayer et al., 2003; Van Pelt
et al., 2005). While gaze coordinates may also play a role in
the monitoring of various aspects of the movement, further
reference frame transformations are required to execute the
movement itself. How this process works in a feedforward
fashion and how feedback and other monitoring mechanisms
play their mediating roles will require further characterization
of the sequence of neural activations. New paradigms,
novel methodology and clinical studies will also help to provide
further insight into the behavioral and neural mechanisms
for sensorimotor control in the years ahead.
Acknowledgements
This research was supported by grants from the Human Frontier Science Program (CDA) and the Netherlands Organization
for Scientific Research (VIDI: 452-03-307 and MaGW: 400-04-186) to WPM. Stefan Everling and Kenneth Valyaer kindly
provided a reconstruction of the cortical surface of a macaque
monkey brain which was used in Fig. 6A.
references
Ariff G, Donchin O, Nanayakkara T, and Shadmehr R. A real-time
state predictor in motor control: study of saccadic eye
movements during unseen reaching movements. Journal of
Neuroscience, 22: 7721–7729, 2002.
Astafiev SV, Shulman GL, Stanley CM, Snyder AZ, Van Essen DC,
and Corbetta M. Functional organization of human
intraparietal and frontal cortex for attending, looking, and
pointing. Journal of Neuroscience, 23: 4689–4699, 2003.
Baker JT, Harper TM, and Snyder LH. Spatial memory following
shifts of gaze. I. Saccades to memorized world-fixed and gaze-fixed targets. Journal of Neurophysiology, 89: 2564–2576, 2003.
Batista AP, Buneo CA, Snyder LH, and Andersen RA. Reach plans
in eye-centered coordinates. Science, 285: 257–260, 1999.
Battaglia-Mayer A, Caminiti R, Lacquaniti F, and Zago M. Multiple
levels of representation of reaching in the parieto-frontal
network. Cerebral Cortex, 13: 1009–1022, 2003.
Bellebaum C, Hoffmann KP, and Daum I. Post-saccadic updating
of visual space in the posterior parietal cortex in humans.
Behavioural Brain Research, 163: 194–203, 2005.
Beurze SM, Van Pelt S, and Medendorp WP. Behavioral reference
frames for planning human reaching movements. Journal of
Neurophysiology, 96: 352–363, 2006.
Beurze SM, De Lange FP, Toni I, and Medendorp WP. Integration
of target and effector information in the human brain during
reach planning. Journal of Neurophysiology, 97: 188–199, 2007.
Blangero A, Gaveau V, Luauté J, Rode G, Salemme R, Guinard M,
Boisson D, Rossetti Y, and Pisella L. A hand and a field effect
in on-line motor control in unilateral optic ataxia. Cortex, 44:
560–568, 2008.
Buneo CA, Jarvis MR, Batista AP, and Andersen RA. Direct
visuomotor transformations for reaching. Nature, 416:
632–636, 2002.
Buneo CA and Andersen RA. The posterior parietal cortex:
sensorimotor interface for the planning and online control of
visually guided movements. Neuropsychologia, 44: 2594–2606,
2006.
Burgess N. Spatial memory: how egocentric and allocentric
combine. Trends in Cognitive Sciences, 10: 551–557, 2006.
Calton JL, Dickinson AR, and Snyder LH. Non-spatial, motor-specific activation in posterior parietal cortex. Nature
Neuroscience, 5: 580–588, 2002.
Carrozzo M, Stratta F, McIntyre J, and Lacquaniti F. Cognitive
allocentric representations of visual space shape pointing
errors. Experimental Brain Research, 147: 426–436, 2002.
Connolly JD, Andersen RA, and Goodale MA. FMRI evidence for
a ‘parietal reach region’ in the human brain. Experimental Brain
Research, 153: 140–145, 2003.
Crawford JD, Medendorp WP, and Marotta JJ. Spatial
transformations for eye-hand coordination. Journal of
Neurophysiology, 92: 10–19, 2004.
Desmurget M, Epstein CM, Turner RS, Prablanc C, Alexander GE,
and Grafton ST. Role of the posterior parietal cortex in
updating reaching movements to a visual target. Nature
Neuroscience, 2: 563–567, 1999.
Desmurget M and Grafton S. Forward modeling allows feedback
control for fast reaching movements. Trends in Cognitive
Sciences, 4: 423–431, 2000.
Dijkerman HC, Mcintosh RD, Anema HA, De Haan EH, Kappelle LJ,
and Milner AD. Reaching errors in optic ataxia are linked to
eye position rather than head or body position.
Neuropsychologia, 44: 2766–2773, 2006.
Duhamel JR, Colby CL, and Goldberg ME. The updating of the
representation of visual space in parietal cortex by intended
eye movements. Science, 255: 90–92, 1992.
Fernandez-Ruiz J, Goltz HC, DeSouza JF, Vilis T, and Crawford JD.
Human parietal ‘‘reach region’’ primarily encodes intrinsic
visual direction, not extrinsic movement direction, in a visual
motor dissociation task. Cerebral Cortex, 17: 2283–2292, 2007.
Flanders M, Tillery SIH, and Soechting JF. Early stages in
a sensorimotor transformation. Behavioral and Brain Sciences, 15:
309–362, 1992.
Genovesio A and Ferraina S. Integration of retinal disparity and
fixation-distance related signals toward an egocentric coding
of distance in the posterior parietal cortex of primates. Journal
of Neurophysiology, 91: 2670–2684, 2004.
Gnadt JW and Mays LE. Neurons in monkey parietal area LIP are
tuned for eye-movement parameters in three-dimensional
space. Journal of Neurophysiology, 73: 280–297, 1995.
Grea H, Pisella L, Rossetti Y, Desmurget M, Tilikete C, Grafton S,
Prablanc C, and Vighetto A. A lesion of the posterior parietal
cortex disrupts on-line adjustments during aiming
movements. Neuropsychologia, 40: 2471–2480, 2002.
Hallett PE and Lightstone AD. Saccadic eye movements towards
stimuli triggered by prior saccades. Vision Research, 16:
99–106, 1976.
Heide W, Blankenburg M, Zimmermann E, and Kömpf D. Cortical
control of double-step saccades: implications for spatial
orientation. Annals of Neurology, 38: 739–748, 1995.
Heiser LM, Berman RA, Saunders RC, and Colby CL. Dynamic
circuitry for updating spatial representations. II.
Physiological evidence for interhemispheric transfer in area
LIP of the split-brain macaque. Journal of Neurophysiology, 94:
3249–3258, 2005.
Henriques DYP, Klier EM, Smith MA, Lowy D, and Crawford JD.
Gaze-centered remapping of remembered visual space in an
open-loop pointing task. Journal of Neuroscience, 18:
1583–1594, 1998.
Hoshi E and Tanji J. Integration of target and body-part
information in the premotor cortex when planning action.
Nature, 408: 466–470, 2000.
Hoshi E and Tanji J. Differential roles of neuronal activity in the
supplementary and presupplementary motor areas: from
information retrieval to motor planning and execution. Journal
of Neurophysiology, 92: 3482–3499, 2004a.
Hoshi E and Tanji J. Area-selective neuronal activity in the
dorsolateral prefrontal cortex for information retrieval and
action planning. Journal of Neurophysiology, 91: 2707–2722,
2004b.
Hoshi E and Tanji J. Differential involvement of neurons in the
dorsal and ventral premotor cortex during processing of
visual signals for action planning. Journal of Neurophysiology,
95: 3596–3616, 2006.
Karnath HO and Perenin MT. Cortical control of visually guided
reaching: evidence from patients with optic ataxia. Cerebral
Cortex, 15: 1561–1569, 2005.
Khan AZ, Pisella L, Rossetti Y, Vighetto A, and Crawford JD.
Impairment of gaze-centered updating of reach targets in
bilateral parietal-occipital damaged patients. Cerebral Cortex,
15: 1547–1560, 2005.
Lee JH and Van Donkelaar P. The human dorsal premotor cortex
generates on-line error corrections during sensorimotor
adaptation. Journal of Neuroscience, 26: 3330–3334, 2006.
Li CS and Andersen RA. Inactivation of macaque lateral
intraparietal area delays initiation of the second saccade
predominantly from contralesional eye positions in
a double-saccade task. Experimental Brain Research, 137:
45–57, 2001.
Li N and Angelaki DE. Updating visual space during motion in
depth. Neuron, 48: 149–158, 2005.
Li N, Wei M, and Angelaki DE. Primate memory saccade
amplitude after intervened motion depends on target
distance. Journal of Neurophysiology, 94: 722–733, 2005.
McIntyre J, Stratta F, and Lacquaniti F. Short-term memory for
reaching to visual targets: psychophysical evidence for
body-centered reference frames. Journal of Neuroscience, 18:
8423–8435, 1998.
McIntyre J, Stratta F, and Lacquaniti F. Viewer-centered frame of
reference for pointing to memorized targets in three-dimensional
space. Journal of Neurophysiology, 78: 1601–1618, 1997.
Medendorp WP, Smith MA, Tweed DB, and Crawford JD.
Rotational remapping in human spatial memory during
eye and head motion. Journal of Neuroscience, 22: RC196,
2002.
Medendorp WP and Crawford JD. Visuospatial updating of
reaching targets in near and far space. Neuroreport, 13:
633–636, 2002.
Medendorp WP, Goltz HC, Vilis T, and Crawford JD. Gaze-centered
updating of visual space in human parietal cortex. Journal of
Neuroscience, 23: 6209–6214, 2003a.
Medendorp WP, Tweed DB, and Crawford JD. Motion parallax is
computed in the updating of human spatial memory. Journal of
Neuroscience, 23: 8135–8142, 2003b.
Medendorp WP, Goltz HC, and Vilis T. Remapping the
remembered target location for anti-saccades in human
posterior parietal cortex. Journal of Neurophysiology, 94:
734–740, 2005a.
Medendorp WP, Goltz HC, Crawford JD, and Vilis T. Integration of
target and effector information in human posterior parietal
cortex for the planning of action. Journal of Neurophysiology, 93:
954–962, 2005b.
Merriam EP, Genovese CR, and Colby CL. Spatial updating in
human parietal cortex. Neuron, 39: 361–373, 2003.
Merriam EP, Genovese CR, and Colby CL. Remapping in
human visual cortex. Journal of Neurophysiology, 97:
1738–1755, 2007.
Nakamura K and Colby CL. Updating of the visual representation
in monkey striate and extrastriate cortex during saccades.
Proceedings of the National Academy of Sciences of the United States
of America, 99: 4026–4031, 2002.
Pesaran B, Nelson MJ, and Andersen RA. Dorsal premotor neurons
encode the relative position of the hand, eye, and goal during
reach planning. Neuron, 51: 125–134, 2006.
Prado J, Clavagnier S, Otzenberger H, Scheiber C, Kennedy H, and
Perenin MT. Two cortical systems for reaching in central and
peripheral vision. Neuron, 48: 849–858, 2005.
Rossetti Y, Desmurget M, and Prablanc C. Vectorial coding of
movement: vision, proprioception, or both? Journal of
Neurophysiology, 74: 457–463, 1995.
Schluppeck D, Glimcher P, and Heeger DJ. Topographic
organization for delayed saccades in human posterior parietal
cortex. Journal of Neurophysiology, 94: 1372–1384, 2005.
Sereno MI, Pitzalis S, and Martinez A. Mapping of contralateral
space in retinotopic coordinates by a parietal cortical area in
humans. Science, 294: 1350–1354, 2001.
Shadmehr R and Wise SP. Computational neurobiology of reaching
and pointing: a foundation for motor learning. Cambridge: MIT
Press, 2005.
Snyder LH. Coordinate transformations for eye and arm
movements in the brain. Current Opinion in Neurobiology, 10:
747–754, 2000.
Sober SJ and Sabes PN. Flexible strategies for sensory
integration during motor planning. Nature Neuroscience, 8:
490–497, 2005.
Sober SJ and Sabes PN. Multisensory integration during motor
planning. Journal of Neuroscience, 23: 6982–6992, 2003.
Sommer MA and Wurtz RH. A pathway in primate brain for internal
monitoring of movements. Science, 296: 1480–1482, 2002.
Van Pelt S, Van Gisbergen JAM, and Medendorp WP. Visuospatial
memory computations during whole-body rotations in roll.
Journal of Neurophysiology, 94: 1432–1442, 2005.
Van Pelt S and Medendorp WP. Gaze-centered updating of
remembered visual space during active whole-body
translations. Journal of Neurophysiology, 97: 1209–1220, 2007.
Van Pelt S and Medendorp WP. Representation of target distance
across eye movements in depth. Society for Neuroscience
Abstracts, 2006. Program No. 549.19.
Van Der Werf J, Jensen O, Fries P, and Medendorp WP. Oscillatory
activity in human parietal cortex for memory-guided pro- and
antisaccades. Society for Neuroscience Abstracts, 2006. Program
No. 48.18.
Vaziri S, Diedrichsen J, and Shadmehr R. Why does the brain
predict sensory consequences of oculomotor commands?
Optimal integration of the predicted and the actual
sensory feedback. Journal of Neuroscience, 26: 4188–4197,
2006.
Vindras P, Desmurget M, and Viviani P. Error parsing in
visuomotor pointing reveals independent processing of
amplitude and direction. Journal of Neurophysiology, 94:
1212–1224, 2005.
Wei M, Li N, Newlands SD, Dickman JD, and Angelaki DE. Deficits
and recovery in visuospatial memory during head motion
after bilateral labyrinthine lesion. Journal of Neurophysiology,
96: 1676–1682, 2006.
Wolpert DM, Goodbody SJ, and Husain M. Maintaining internal
representations: the role of the human superior parietal lobe.
Nature Neuroscience, 1: 529–533, 1998.
Wolpert DM and Ghahramani Z. Computational principles of
movement neuroscience. Nature Neuroscience, 3: 1212–1217,
2000.