Viewer-Centered Frame of Reference for Pointing to Memorized
Targets in Three-Dimensional Space
J. MC INTYRE, F. STRATTA, AND F. LACQUANITI
Istituto Scientifico S. Lucia, I.N.B.-C.N.R., 00179 Rome, Italy
INTRODUCTION
In arm reaching, a target location must be matched by a
hand position. How does the brain solve this correspondence
problem? When a stationary target is presented visually, its
direction is mapped topographically onto the retina, whereas
its distance relative to the viewer is defined by both monocular cues (accommodation, relative size, intensity, perspective, shading, etc.) and binocular cues (retinal disparity and
ocular vergence signals). Hand position can also be monitored visually. In addition, proprioception and efference
copy of motor commands define arm posture in the intrinsic
reference frames of muscles, joints, and skin receptors. The
hypothesis has been put forth that, in the process of translating sensory information about target and arm position into
appropriate motor commands, endpoint position of reaching
may be specified in either shoulder-centered (Flanders et al.
1992; Soechting and Flanders 1989a,b) or hand-centered
(Flanders et al. 1992; Gordon et al. 1994) frames of reference.
Experimental approaches often rely on the study of the
errors made by subjects who point to a previously visible,
memorized target, thus avoiding movement corrections
based on visual feedback of a continuously present target
(Prablanc et al. 1979). In this approach, differences in precision between independent neural channels for spatial information may reveal the underlying frames of reference putatively used by the brain (Soechting and Flanders 1989a).
There are two types of errors for repeated trials with the same
target location: constant (systematic) errors, representing the
deviation of the mean endpoint from the target, and variable
errors, representing the dispersion (variance) of the endpoints around the mean (Poulton 1981).
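The distinction between the two error types can be illustrated with a short numerical sketch; the endpoint data and variable names below are hypothetical, not drawn from the study:

```python
import numpy as np

# Simulated endpoints (mm) for repeated trials to one target (hypothetical data).
target = np.array([100.0, 50.0, 300.0])
endpoints = target + np.array([[2.0, -1.0, 5.0],
                               [1.0,  0.0, 7.0],
                               [3.0,  1.0, 6.0],
                               [2.0,  0.0, 6.0]])

mean_endpoint = endpoints.mean(axis=0)

# Constant (systematic) error: deviation of the mean endpoint from the target.
constant_error = mean_endpoint - target

# Variable error: dispersion of the endpoints around their own mean,
# summarized here by the sample covariance matrix.
deviations = endpoints - mean_endpoint
covariance = deviations.T @ deviations / (len(endpoints) - 1)
```

A subject could thus show a large constant error (a consistent bias) with a small variable error (tight scatter), or vice versa; the two measures are analyzed separately.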
Soechting and Flanders (1989a) assessed the systematic
errors made by subjects pointing in the dark to remembered
target locations in three-dimensional (3-D) space. They
found large undershoots of the radial distance of the more
distal targets, with only negligible errors in direction. They
were able to account for such errors by hypothesizing a
linear, approximate transformation of the shoulder-centered
coordinates of the target into the angular coordinates of the
arm (Soechting and Flanders 1989b). An origin close to the
shoulder for the coordinate system of movement direction
was found by extrapolating a straight line from the target
location through the final finger position to the frontal plane
of the body (Soechting et al. 1990). When subjects pointed
to remembered targets in full light, they reported that they
used vision of background objects to align the direction of
the finger with the remembered direction of the target. Accordingly, the origin of the coordinate system for movement
direction was found to be at eye level (Soechting et al.
1990). Nevertheless, undershoots of target distance in full
light were comparable with those measured in darkness
(Soechting and Flanders 1989a). On the whole, these authors concluded that movements performed in the presence
0022-3077/97 $5.00 Copyright © 1997 The American Physiological Society
Downloaded from http://jn.physiology.org/ by 10.220.33.2 on June 18, 2017
McIntyre, J., F. Stratta, and F. Lacquaniti. Viewer-centered
frame of reference for pointing to memorized targets in three-dimensional space. J. Neurophysiol. 78: 1601–1618, 1997. Pointing to a remembered visual target involves the transformation of
binocular visual information into an appropriate motor output. Errors generated during pointing tasks may indicate the reference
frames used by the CNS for the transformation and storage of the
target position. Previous studies have proposed eye-, shoulder-, or
hand-centered reference frames for various pointing tasks, depending on visual conditions. We asked subjects to perform pointing movements to remembered three-dimensional targets after a
fixed memory delay. Pointing movements were executed under
dim lighting conditions, allowing vision of the fingertip against a
uniform black background. Subjects performed repeated movements to targets distributed uniformly within a small (radius 25
mm) workspace volume. In separate blocks of trials, subjects
pointed to different workspace regions that varied in terms of distance and direction from the head and shoulder. Additional blocks
were performed that differed in terms of starting position, effector
hand, head rotation, and memory delay duration. Final pointing
positions were quantified in terms of the constant and variable
errors in three dimensions. The orientation of these errors was
examined as a function of workspace location to identify the underlying reference frames. Subjects produced anisotropic patterns of
variable error, with greater variability in endpoint distance from
the body. The major axes of the variable-error tolerance ellipsoids
pointed toward the eyes of the subject, independent of workspace
region, effector hand (left or right), initial hand position, and head
rotations. Constant errors were less consistent across subjects, but
also tended to point toward the head and body. Both overshoots
and undershoots of the target position were observed. Increasing
the duration of the memory delay period increased the size but did
not alter the orientation of the variable-error ellipsoids. Variability
of the endpoint positions increased equally in all three Cartesian
directions as the memory delay increased from 0.5 to 8.0 s. The
anisotropy of variable errors indicates a viewer-centered reference
frame for pointing to remembered visual targets with vision of the
finger. The anisotropy of pointing variability stems from variability
in egocentric binocular cues as opposed to reliance on allocentric
visual references or to specific approximations in the sensorimotor
transformation. Nevertheless, observed increases in variability with
longer memory delays indicate that the short-term storage of the
target position does not simply mirror the retinal and ocular sensory
signals of the visually acquired target location. Thus spatial memory is carried out in an internal representation that is viewer-centered but that may be isotropic with respect to Cartesian space.
compared with the use of 3-D statistics. The experimental
protocols included pointing movements from different starting positions of the hand and use of either the right or the
left arm. We performed a separate series of experiments to
assess the effect of head rotations on pointing accuracy.
The rationale behind all these protocols was to discriminate
between viewer-centered, shoulder-centered, and hand-centered reference frames for the specification of endpoint position in reaching to memorized targets (Lacquaniti 1997).
As a separate issue, we examined the effect of increasing
memory delays. This latter manipulation of the pointing task
was designed to ascertain whether target position is memorized in the same reference frame used for target acquisition
or endpoint control. In all experiments, pointing movements
were performed under dim light conditions, allowing successive vision of the target and hand against an unstructured
black background. These viewing conditions were designed
to test the role of different egocentric signals in reconstructing target and hand position and to exclude allocentric
visual cues from the background.
METHODS
Subjects sat on a 45-cm-high straight-back chair facing a table
measuring 150 cm wide × 54.5 cm deep at a height of 69 cm. To
the opposite edge of the table was fixed an upright flat backboard
measuring 130 cm wide × 85 cm high. Subjects were seated at ≈20
cm from the front edge of the table (75 cm from the backboard).
A headrest helped the subject maintain a constant head position
throughout the experiment (Fig. 1), although the head was free to
turn.
Point targets, in the form of a red 5-mm-diam light-emitting
diode (LED), were presented to the subject by a 5-degree-of-
FIG. 1. Experimental conditions. Subject was seated in front of a black
table and screen. Target, in the form of a red, 5-mm-diam light-emitting
diode (LED), was presented by a robot in the space between the subject
and a black screen. Small gray spheres: target LED attached to the robot
arm and fixation LED located on the background screen. Low-level room
lighting emanating from behind the screen resulted in a uniform black
background behind the targets.
or absence of visual feedback involve the same serial processes, including an obligatory transformation from head-centered to shoulder-centered coordinates (Flanders et al.
1992; Soechting et al. 1990).
The spatial analysis of constant errors may not be sufficient to reveal the underlying reference frames unambiguously. This is because biases in the mean endpoint are idiosyncratic to individual subjects and may also depend on the
specific experimental conditions (Collewijn and Erkelens
1990). Both undershoot (Darling and Miller 1993; Soechting and Flanders 1989a) and overshoot (Berkinblit et al.
1995; Foley 1975; Foley and Held 1972) of distal targets
have been reported in the literature.
The spatial analysis of variable errors may contribute additional insight on the issue of the reference frames for reaching. In particular, the lack of correlation between endpoint
variance along a specific set of coordinate axes would argue
in favor of that particular coordinate system as the one used
to encode endpoint positions (Bookstein 1992; Gordon et
al. 1992; Lacquaniti et al. 1990). Gordon et al. (1994) analyzed the statistical distribution of variable errors in 2-D
reaching movements. In these experiments subjects slid a
hand-held cursor on a horizontal digitizing table so as to
match the memorized position of targets displayed on a vertical monitor. The target and the screen cursor were blanked
during movement, and vision of the hand was prevented,
although feedback about movement accuracy was given after
each trial. Under such experimental conditions, the variance
of endpoint positions along the line of hand movement tends
to be uncorrelated with the variance along the orthogonal
line, in accordance with a hand-centered frame of reference.
Bock (1986) has also argued for a hand-centered frame of
reference on the basis of an observed accumulation of errors
as subjects point to a series of targets without intermediate
visual feedback about hand position. This observation argues
for a programming of movement in terms of the magnitude
of the displacement from the initial to final hand position.
Overshoot and undershoot have been observed for monoarticular movements (Bock and Eckmiller 1986) and for movements in the frontoparallel plane (Prablanc et al. 1979).
Although the undershoots noted for frontoparallel movement
might be explained by an underestimation of the shoulder-to-target distance, as proposed by Soechting et al., undershoots in rotary movement around a fixed body axis (Bock
and Eckmiller 1986) cannot. Furthermore, the magnitude of
spatial errors depends on both the retinal locus of the visually
presented stimulus and the target position relative to the
body, whereas movement kinematics depends primarily on
the position of the target with respect to the body axis (Fisk
and Goodale 1985). Observed pointing errors might therefore reflect an underestimation of movement extent coupled
in a nonlinear fashion with a distortion in the mapping from
retinal to spatial coordinates.
The present study was undertaken to reexamine some of
these issues by including a number of tests that have not
been combined in previous studies. From a methodological
standpoint, different workspace regions were probed with
the use of the same locally uniform distribution of targets
in 3-D. Each target location was repeated many times, in
random sequence, by the accurate repositioning of a robot
arm. Both constant and variable errors were computed and
at each of the target positions. This method of measuring the target
position compensated automatically for the slight offset of the
marker position with respect to the fingertip. Control trials were
usually performed after all test trials, with some exceptions noted
below.
The 3-D trajectory of the fingertip marker was computed for each
trial. We calculated the initial and final position of each movement
as the mean position computed over the first and last 10 samples,
respectively, of the 2-s movement recording. A threshold based on
the SD for these mean positions was used to reject trials in which
the final endpoint position was not stable. Fewer than 2% of trials
were rejected on this basis. The same SD of the final finger position
within a single trial gives an estimate of the resolution of our measurements of the endpoint position. This estimate takes into account
both the resolution of the Elite measurement system and the biological precision with which the subject holds a steady final position.
For a typical experiment, the average SD was 0.16 mm.
We measured the variability of the finger position at the start of
the movement across all trials. For a typical experiment, initial
positions varied by <1 cm peak-to-peak, with an SD of 2.2 mm
for X (left/right), 1.7 mm for Z (forward/back) in the horizontal
plane, and 0.7 mm for Y (height), where the last axis was constrained by contact with the table.
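A minimal sketch of the endpoint-extraction procedure might look as follows. The function name, the example trajectory, and the 2-mm threshold are illustrative assumptions; the text specifies only an SD-based stability criterion that rejected fewer than 2% of trials:

```python
import numpy as np

def movement_endpoints(trajectory, n_avg=10, sd_threshold=2.0):
    """Estimate initial and final hand position from a 2-s, 100-Hz recording.

    trajectory: (n_samples, 3) array of marker positions in mm.
    Returns (initial, final, stable); `stable` is False when the SD of the
    last n_avg samples exceeds sd_threshold, i.e., the trial would be
    rejected as not having a steady final position.
    """
    initial = trajectory[:n_avg].mean(axis=0)
    final_samples = trajectory[-n_avg:]
    final = final_samples.mean(axis=0)
    stable = bool(final_samples.std(axis=0).max() <= sd_threshold)
    return initial, final, stable

# Hypothetical 200-sample trajectory: start at the origin, hold at (0, 0, 600) mm.
traj = np.vstack([np.zeros((10, 3)), np.tile([0.0, 0.0, 600.0], (190, 1))])
initial, final, stable = movement_endpoints(traj)
```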
Target configurations
Trials were performed in blocks of 90, with one block of trials
lasting ≈15 min. The number of blocks depended on the specific
experimental design, as noted in the following text. Within a single
block of trials, target locations were restricted to a relatively small
volume in 3-D space. Different regions of the workspace were
measured in separate blocks, depending on the experiment protocol. In the following, we refer to the location of the set of nine
targets as the workspace region. The term ‘‘target position’’ refers
to the exact position of a single target in space.
The primary constraint that led to the choice of target grouping
was to provide a set of targets within a workspace region that had
no readily discernible pattern or directional biases. A second goal
was to present targets close enough together so as to encourage
the subject to be as accurate as possible in determining the 3-D
position in space. Subjects were told before beginning the experiment that the targets for a given block would appear within a small
volume, but that the position of the targets would indeed vary in
all three dimensions. It was further emphasized that the subject
should place the fingertip at the exact position of the remembered
target and not just point in the general direction of the remembered
target location.
For a single workspace region, eight targets were distributed
uniformly on the surface of a sphere of 22-mm radius, with a ninth
target located at the center. This configuration is equivalent to
points on the corners of a cube. The cube was tilted such that two
opposite corners and the center formed a vertical line and was
rotated to be symmetrical across the midline. The resulting distribution of points was left/right symmetrical across the vertical midline
and up/down symmetrical across the horizontal midline when projected onto the frontal plane. The projection onto the horizontal
plane produced a pattern that was both front/back and left/right
symmetrical. The projection of the points into the sagittal plane did
not produce an up/down or left/right symmetry; the top of the
figure was ‘‘tilted’’ away from the subject (Fig. 2A).
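The nine-target configuration described above (cube corners inscribed in a 22-mm-radius sphere, tilted so that one cube diagonal is vertical, plus a center target) can be sketched as follows. This is an illustrative reconstruction: the Rodrigues-rotation helper is our own, and the additional rotation about the vertical axis that makes the set symmetrical across the midline is omitted for brevity.

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix taking unit vector a to unit vector b (Rodrigues form)."""
    v = np.cross(a, b)
    c = np.dot(a, b)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

radius = 22.0  # mm
# Unit vectors to the 8 corners of a cube inscribed in the unit sphere.
corners = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
                   dtype=float) / np.sqrt(3.0)

# Tilt the cube so the (1,1,1)/(-1,-1,-1) diagonal is vertical (+Y up):
# two opposite corners and the center then form a vertical line.
R = rotation_between(np.ones(3) / np.sqrt(3.0), np.array([0.0, 1.0, 0.0]))
targets = radius * corners @ R.T
targets = np.vstack([targets, np.zeros(3)])  # ninth target at the center
```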
Experiment protocols
Six variants of the basic experiment were performed, differing
in terms of workspace location of the targets, choice of hand for
pointing (left vs. right), starting position of the pointing hand, leftright rotation of the head, and memory delay duration. For all
freedom robot (CRS Plus model A200). The base of the robot was
situated behind the backboard and the LED was fixed to a 56-cm
rod attached to the final link of the robot arm. The robot arm
reached over the backboard and down to place the LED in the
region between the subject and board (Fig. 1). The SD of repeated
movements of the robot to the same target position was <0.25 mm
in all three dimensions.
The board, the rod carrying the LED, and the top surface of the
table were painted black. The room was dimly illuminated with
indirect lighting coming from behind the backboard. Under these
conditions, no discernible visual points could be seen directly behind the presented target. For a gaze orientation centered on the
backboard, the visual field was uniform over a range of ±40°
horizontally and ±30° vertically. Robot motion occurred only with
the LED turned off and thus could not be seen. Except for the
LEDs, room lighting was kept constant throughout a block of trials
(15–20 min) to maintain a constant level of dark adaptation for
the subjects.
Subjects performed a series of pointing movements in the following manner. The subject placed the index finger of the hand (left
or right, depending on the experiment) on one of two starting
positions located on the tabletop 10 cm from the front edge of the
table and 20 cm to the right or left of the midline (depending on
the experiment variant). The starting position was an upraised
bump (2.5-mm-radius hemisphere) on the table surface that could
be located by touch.
At the beginning of a trial, a green fixation LED located on the
surface of the backboard at the midline, 13 cm above the tabletop,
was illuminated. One second later an audible attention signal
sounded. After a random delay of 1.2–2.4 s, the fixation light was
extinguished and the red target LED of luminance 11.4 cd/m²
(Gamma Scientific Digital Radiometer Model 2009 JR) subtending
0.5° at 60 cm was lighted at the target position for a period of 1.4
s, then extinguished and quickly removed. After a memory delay
following the extinction of the target LED (0.5, 5.0, or 8.0 s,
depending on the specific protocol described below), a second
audible tone sounded, indicating that the subject should initiate the
pointing movement. Subjects were instructed to place the tip of
the index finger so as to touch the remembered location of the
target LED. Subjects were instructed to attempt to maintain fixation
of the remembered target position during the memory delay period.
The subject had 2 s to perform the movement and hold at the
remembered target position. Illumination of the finger was dim
(0.0029 cd/m²), but the finger was visible against the black background (0.0010 cd/m²). Subjects could see the pointing finger
throughout the movement while maintaining fixation at the target
position. At the end of the 2-s period, a double or triple beep
occurred, indicating that the subject should return the finger to the
right or left starting position, respectively.
Positions and movements were measured by a four-camera Elite
3-D tracking system. At the beginning of the experiment, the coordinates of various body features were acquired by placing a reflective marker at each of the following points: centered on the top of
the head, 2 cm above the midpoint between the eyes, at the right
ear canal, at the shoulder, at the elbow, at the wrist, and at the
fingertip. Measurements were taken with the right index finger
positioned at the right hand starting position. For the measurement
of the shoulder position, the marker was placed above the end of
the scapula to approximate the location of the vertical axis of
rotation for this joint.
During the experiment, the movement was measured by means
of a reflective marker attached to the fingertip. The marker position
was sampled at a rate of 100 Hz. A second marker attached to the
forehead 2 cm above the midpoint between the eyes was also
tracked during each trial. To measure the actual location of each
target position, the subjects performed a set of 10 control trials in
which they moved the index finger to touch the actual LED situated
workspace locations, each target cluster was located ≈10 cm above
shoulder height (35 cm above the table) but varied in terms of
distance from the subject and left or right displacement from the
midline. For a given workspace region, subjects performed 20
trials to each of the nine target positions. Ten control trials were
performed to each target position, either before or after the test
trials, as noted below.
Only one workspace region was tested within a single block of
trials. In all but one paradigm, the head was free to rotate. In
general, subjects rotated the head to face the center of the target
region being tested.
LEFT-MID-RIGHT. Six subjects performed pointing trials to three
different regions of the workspace: the ‘‘left’’ location, located
14.5 cm to the left of the midline and 60 cm in front of the subject;
the ‘‘middle’’ location, 60 cm in front of the subject along the
midline; and the ‘‘right’’ location, 14.5 cm to the right of the
midline and 60 cm in front of the subject (the mirror image of the
left workspace location, see Fig. 2B). Subjects worked with the
right hand and used only the right starting position.
Three of the subjects (AP, AR, and SC) performed the control
trials after performing all test trials with a memory delay. Two
subjects (SE and MS) performed all 10 control trials to each of
the nine targets immediately before performing the test trials for
a given workspace location. The last subject (FO) first performed,
for each workspace location, several practice control trials to the
center target only, then the test trials to remembered targets, and
finally the complete set of control trials to all targets.
LEFT-NEAR-FAR. Four subjects performed the experiment with
the right hand to targets in three different workspace regions, starting all trials from the right side. One set of trials was performed
to the left target group, as in the left-mid-right paradigm above. A
second workspace region was located along the midline, 68 cm in
front of the subject (‘‘far’’ targets). A third set of trials was performed to a group of targets located also on the midline, 38 cm
from the subject (‘‘near’’ targets). The left workspace location
was chosen to lie on a line emanating from the right shoulder and
passing through the center of the near target group (Fig. 2C).
Furthermore, the distance from the right shoulder of the left target
group was equal to that of the far target group. Thus we could
compare two pairs of target groups, the near and far targets, which
fell on a line passing through the head, and the near and left targets,
which lay on a line passing just above the right shoulder.
This configuration of target locations was designed specifically
to test whether the origin of the coordinate system used for executing the task, as defined by the orientation of the variable-error
distribution, is located at the head or the shoulder. If the origin is
the head, one would observe a 30° counterclockwise rotation of
the variable-error ellipsoid between the left and near targets,
whereas the near and far major axes would be approximately parallel when projected onto the horizontal plane. Conversely, if the
origin is at the shoulder, the near and left major axes would be
roughly parallel, whereas the far axis would be rotated 14° clockwise with respect to the other two.
For the left-near-far configuration, all four subjects performed
the control trials after having completed all trials to remembered
targets. Note, however, that except for subject FF, all subjects had
performed a similar experiment on a previous day to targets located
in the far workspace region.
STARTING POSITION AND MEMORY DELAY. We tested the effects of starting hand position and memory delay duration in a single
experiment. Six subjects performed pointing trials to the far workspace
region. For this protocol, subjects performed trials with the 0.5-s
memory delay used for all other protocols and additional trials with
a memory delay of 8.0 s. Trials with the two different memory delays
were randomly mixed within each block of 90. A pseudorandom
sequence of trials was created, assuring that each target position was
repeated five times for each memory delay. Subjects alternated the
starting position for each trial (odd from left, even from right, or
vice versa). By repeating the same pseudorandom sequence of target
positions and memory delays, once starting the first trial from the left
and once starting from the right, we were assured of having a total
of five trials for each target position, each memory delay, and each
starting position. Subjects performed four such pairs of blocks, resulting in 20 trials to each target position for each memory delay and
each starting position (720 trials total). In a separate test of memory
delay duration, two subjects performed the left-mid-right paradigm
with a single, long (5.0-s) memory delay with the use of the right
hand and the right starting position.
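The balanced pseudorandom design described above can be sketched as follows. The function name and seed are illustrative; the structure (9 targets × 2 delays × 5 repetitions per block of 90, with the same sequence replayed once from each starting side) follows the text:

```python
import random

def block_sequence(n_targets=9, delays=(0.5, 8.0), reps=5, seed=0):
    """Pseudorandom block: each (target, delay) pair repeated `reps` times,
    then shuffled -> 9 x 2 x 5 = 90 trials."""
    rng = random.Random(seed)
    trials = [(t, d) for t in range(n_targets) for d in delays for _ in range(reps)]
    rng.shuffle(trials)
    return trials

# Starting positions alternate trial by trial; replaying the identical
# sequence once starting from the left and once from the right guarantees
# 5 trials per combination of target, delay, and starting side.
seq = block_sequence()
first_pass = [(t, d, ("left", "right")[i % 2]) for i, (t, d) in enumerate(seq)]
second_pass = [(t, d, ("right", "left")[i % 2]) for i, (t, d) in enumerate(seq)]
```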
FIG. 2. Target positions for a single workspace
region. Target positions were distributed evenly on
a 22-mm-radius sphere. A: orthonormal projection
views of the target positions from 4 different viewpoints (top) and the corresponding global viewpoint for each target view (bottom). In the global
views, the size of the sphere represents the volume
tested for a single workspace region. Plus signs:
average finger position at the start of each pointing
movement. B: workspace regions for the left-mid-right configuration. Nine targets were presented
within each of the 3 separate regions. Line-of-sight defined by the midpoint between the eyes
and the center target position. C: workspace locations
far workspace regions are aligned with the head;
near and left regions fall on a line passing through
the shoulder (- - -).
Subjects
A total of 13 right-handed subjects participated in the experiments, 4 males and 9 females aged 20–35 yr. The subject pool
includes two of the authors (JM and FS). Informed consent was
obtained from all other subjects, who were drawn from a school
of physiotherapy. These subjects were informed of the overall nature of the experiment, but were unaware of the specific hypotheses
being tested and the expected results.
Analysis of pointing errors
We computed both the constant errors made by the subjects (mean error to each target position) and the variable error (variance around the mean). The 3-D variable error was examined quantitatively in two ways. First we computed the eigenvectors of the variance and covariance matrix derived from the variability of responses to a given target. This provided an unconstrained estimate of the natural orientation of the coordinate system in which the target position is encoded, without imposing a priori assumptions about the origin of that system. These measures were computed as follows.

Let t_i be the position of target i, and p_ij be the final position of the finger for trial j to target i. The error Δ_ij for a single trial is simply the vector difference between the final finger position and the target. The average final position for n_i repetitions to target i is denoted by p̄_i. The constant error e_i for target i is given by the expression

e_i = p̄_i − t_i    (1)

The deviation d from the mean for a given trial is defined as

d_ij = p_ij − p̄_i    (2)

The 3 × 3 matrix of variance and covariance (hereafter referred to as the covariance matrix) for a single target is given by the equation

S_i = [ Σ_{j=1}^{n_i} d_ij (d_ij)^T ] / (n_i − 1)    (3)

To obtain more robust statistical estimates, data may be combined to describe the average error within a single target group. The overall average constant error e for all targets is simply the average constant error over all nine targets within a given workspace region. The underlying assumption for combining data in this manner is that the mapping of the intrinsic coordinate system of the subject to a standard Cartesian coordinate system is ‘‘smooth’’ and that targets within a single group were sufficiently close to one another such that errors produced at each target would be similar. The validity of these assumptions is discussed in the following text.

A single covariance matrix can also be computed for each workspace region, again to increase the statistical robustness. The combined covariance matrix is computed from the formula

S = [ Σ_{i=1}^{k} Σ_{j=1}^{n_i} d_ij (d_ij)^T ] / ( Σ_{i=1}^{k} n_i − k )    (4)

Note that the deviation d for a trial to a given target is computed relative to the mean of trials to only that target, not to the overall mean for all targets. The number of target positions k is subtracted from the total number of trials in the denominator to give an unbiased estimator of S based on k sample groups drawn from a population, with each group having its own mean (Morrison 1990, p. 100). The implicit assumption that pointing errors follow the same distribution for all nine targets is discussed in the following text.

The matrix S is the 3-D covariance about the mean for pointing to a remembered target. S can be scaled to compute the matrix describing the 95% tolerance ellipsoid on the basis of the total number of trials n

T_0.95 = [ q(n + 1)(n − k) / ( n(n − k − q + 1) ) ] F_{0.05; q, n−k−q+1} S    (5)

where q = 3 is the dimensionality of the Cartesian vector space (Diem 1963).

T_0.95 represents an ellipsoidal region around the mean response within which would fall 95% of all trials to a given target. The shape, size, and orientation of these ellipsoids were characterized by computing the eigenvalues and eigenvectors of the matrix T_0.95 with the use of the Jacobi method (Press et al. 1988). The size of the distribution is given by the eigenvalues of T_0.95, the shape by the difference or ratio of the eigenvalues, and the orientation in space by the orientation of the three eigenvectors.

Because the estimates of the covariance matrices are based on a finite number of samples, the three computed eigenvalues will always differ, even for a population covariance that is truly isotropic. We can test whether any two eigenvalues are statistically distinct with the use of a χ² test of the form (Morrison 1990, p. 336)

χ² = −(n − 1) Σ_j ln λ_j + (n − 1) r ln( Σ_j λ_j / r )    (6)

where λ_j are the r = 2 eigenvalues being compared for a given covariance matrix (j = 1–2 or j = 2–3). This χ² statistic has q = r(r + 1)/2 − 1 degrees of freedom. Unless otherwise stated, we report and/or plot eigenvalues and their corresponding eigenvectors only if the given eigenvalue is significantly different from the other two at the 95% confidence level (χ² > χ²_{0.05;2}).

The eigenvector v_i of a sample covariance matrix S is distributed as a multidimensional vector around the computed estimate. The tip of each eigenvector follows the multinormal distribution computed directly from the sample covariance matrix

V_i = λ_i Σ_{h=1, h≠i}^{p} [ λ_h / (λ_h − λ_i)² ] v_h (v_h)^T    (7)
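The combined covariance matrix and the eigenvalue-comparison test described above can be sketched numerically as follows. The data are simulated, the helper names are our own, and NumPy's `eigh` stands in for the Jacobi method cited in the text:

```python
import numpy as np

def pooled_covariance(endpoints_by_target):
    """Combined covariance across k targets: deviations are taken about each
    target's own mean, and k is subtracted from the total trial count in the
    denominator to give an unbiased estimator."""
    k = len(endpoints_by_target)
    devs = [e - e.mean(axis=0) for e in endpoints_by_target]
    d = np.vstack(devs)
    n = d.shape[0]
    return d.T @ d / (n - k), n

def eigenvalue_test(l1, l2, n, chi2_crit=5.991):
    """Chi-square test for equality of two eigenvalues (r = 2); chi2_crit is
    the 95% critical value for 2 degrees of freedom."""
    r = 2
    lam = np.array([l1, l2])
    chi2 = -(n - 1) * np.log(lam).sum() + (n - 1) * r * np.log(lam.sum() / r)
    return chi2, chi2 > chi2_crit

# Hypothetical scatter for 9 targets x 20 trials, elongated along depth (Z).
rng = np.random.default_rng(1)
clouds = [rng.normal(scale=[2.0, 2.0, 8.0], size=(20, 3)) for _ in range(9)]
S, n = pooled_covariance(clouds)
eigvals, eigvecs = np.linalg.eigh(S)  # eigenvalues in ascending order
major_axis = eigvecs[:, -1]           # orientation of the largest axis
```

With scatter like this, the major axis of the fitted ellipsoid aligns with the depth direction, which is the kind of orientation analysis used to identify the underlying reference frame.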
LEFT HAND. Three subjects performed the left-near-far protocol
with the use of the left hand. Two of the subjects performed the
experiment with the use of the starting position on the left side
only. One subject (JM) performed trials alternating between the
left and right starting positions, as described in the preceding text.
Subjects performed the control trials after all test trials. It should
be noted that all subjects had performed the experiment to the
same set of targets on a previous day, including control trials, but
with the use of the right hand.
HEAD ROTATION. Two subjects performed two separate sets of
trials each, with the head rotated 26° to the left for one set or to
the right for the other. Before each trial, the subject fixated an
LED located 38 cm to the left or right of the midline, attached to
the backboard 13 cm above the surface of the table. Subjects were
required to maintain the eccentric head rotation throughout the
trial, including the target fixation, memory delay, and movement
execution. The head was lightly restrained with a cap and a string
tied to a fixed post, blocking head rotation toward the central
position. The far workspace location, as defined above, was tested
for pointing with the right hand from the right-side starting position. Control trials were performed after all test trials. However,
both subjects had performed the experiment on a previous day to
the same set of targets with the head straight.
Because the eigenvectors are constrained to have unit length, the distribution of tip locations is intrinsically described by a 2-D ellipse. However, because the ellipse may be oriented arbitrarily in 3-D space (in the plane perpendicular to the corresponding eigenvector), the ellipse is represented by the singular 3 × 3 matrix V of rank 2.
The accuracy of the estimate for the constant-error vector e is described by the 95% confidence ellipsoid C_0.95, which may also be computed from the sample covariance matrix

    C_0.95 = [ q(n − k) / ( n(n − k − q + 1) ) ] F_{0.05;q,n−k−q+1} S    (8)
TESTS OF ASSUMPTIONS. Targets within a given workspace region were placed relatively close together (22 mm) so as to encourage the subject to work very precisely. This assumes that the subject can in fact distinguish differences between the nine target positions in all three dimensions. We tested this assumption by computing the correlation between the target position and the average final position for that target in each of the three Cartesian directions. Correlations were highly significant (P < 0.0001) for all three axes, for all subjects, and for all workspace regions. Subjects were clearly able to discriminate between different target positions in all three dimensions.
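The discrimination test above reduces to a per-axis correlation; a sketch in pure Python (the position lists are invented for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical target depths (mm) and mean endpoint depths for 9 targets:
target_z   = [0.0, 11.0, 22.0, 0.0, 11.0, 22.0, 0.0, 11.0, 22.0]
endpoint_z = [0.1, 7.2, 14.6, -0.1, 7.3, 14.4, 0.2, 7.2, 14.5]
r = pearson_r(target_z, endpoint_z)   # near 1 if targets are discriminated
```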
RESULTS
The purpose of these experiments was to search for a
preferred reference frame as indicated by the pattern of errors
observed for a manual pointing task. We looked for consistent changes in the constant and variable errors, defined in
the preceding text, as a function of workspace location, starting position, choice of effector hand, and head orientation.
Workspace location
The most consistent indication of an inherent reference
frame for pointing errors is given by the measurements of
the variable error. In Fig. 4 we show the 95% tolerance
ellipsoids for the experiments performed in the left-mid-right
configuration. The ellipsoids are centered on the average
final finger position for each of the three workspace regions.
FIG . 3. Quantitative measures of errors
in pointing by a single subject. Column 1:
viewing orientation for each row. Small
black sphere: range of target positions for
the ‘‘mid’’ workspace region. Column 2:
scatter of endpoint positions for a single
workspace location. Data represent the combination of 20 trials to each of 9 real targets
as described in METHODS . Small sphere:
‘‘virtual’’ target position at the center of all
real target positions. Line segment: average
constant error for all trials to the same workspace region. Cloud of points: variability of
responses to a single target location, computed by combining data from all 9 targets.
Column 3: tolerance ellipsoid that encompasses 95% of all data points within the distribution. Column 4: major axis of the 95%
tolerance ellipsoid describing the variable
error, surrounded by the 95% confidence
cone for that axis.
where q = 3 is the dimensionality of the Cartesian vector space. To depict the accuracy of the direction of the constant-error estimate, a 95% confidence cone may be swept out by drawing rays from the true target position to points on the ellipsoid described by C_0.95. In a similar way, the accuracy of the variable-error eigenvector estimates can be shown by drawing the cone defined by the confidence ellipse for each estimated eigenvector. The 95% confidence ellipse V_0.95 can be computed from V with the use of Eq. 8, with q = 2.
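The tip covariance of Eq. 7 and its rank-2 structure can be sketched as follows (eigenpairs invented; axes chosen to coincide with the coordinate frame so the result is easy to read):

```python
# Tip covariance of eigenvector v_i (Eq. 7):
#   V_i = lam_i * sum_{h != i} [ lam_h / (lam_h - lam_i)^2 ] v_h v_h^T,
# a rank-2 matrix lying in the plane perpendicular to v_i.
def tip_covariance(i, lams, vecs):
    n = len(lams)
    V = [[0.0] * n for _ in range(n)]
    for h in range(n):
        if h == i:
            continue
        w = lams[i] * lams[h] / (lams[h] - lams[i]) ** 2
        for r in range(n):
            for c in range(n):
                V[r][c] += w * vecs[h][r] * vecs[h][c]
    return V

# Hypothetical eigenpairs, aligned with the coordinate axes:
lams = [25.0, 10.0, 5.0]
vecs = [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
V0 = tip_covariance(0, lams, vecs)   # tip scatter of the major eigenvector
```

Note that V0 has no variance along the eigenvector's own direction (here z), which is why the tip distribution is a 2-D ellipse embedded in 3-D space.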
The results of one such set of calculations can be visualized in
Fig. 3 from three different viewpoints as indicated in column 1.
Column 2 shows the cloud of 180 final finger positions combined
from nine targets in a single workspace region. The line from the
sphere representing the center target position to the center of the
distribution depicts the average constant error e for the given workspace region. Column 3 depicts the ellipsoid that contains 95% of
the distributed points as computed from T 0.95 . Column 4 shows the
major eigenvector of the covariance matrix emanating from the
center of the distribution of points and scaled by 2 times the appropriate eigenvalue of T 0.95 . The eigenvector is drawn surrounded by
the confidence cone computed from V0.95 .
The combining of data across trials to compute the average constant error assumes that the constant error is similar for all nine targets within a single workspace region. This was not strictly the case. By a multivariate analysis of variance (MANOVA) test with target identification as a within-subjects factor, we can see a significant effect of the target factor on the 3-D error vector. Visual inspection shows that the average final positions for the targets on the surface of the sphere were biased toward the center of the sphere. However, because of the symmetrical distribution of targets on the sphere, biases for individual targets tend to cancel each other in the computation of the combined constant error. Constant-error vectors computed for the center target alone do not deviate substantially from the average error vectors computed across all targets.
By computing the covariance matrix based on data from all nine
targets, one implicitly assumes that the distribution of errors is the
same, or nearly the same, for all targets. We tested this assumption
by comparing the covariance matrix computed for the 20 trials to
the center target only with the covariance matrix S of Eq. 4. Estimates of the covariance orientation based on trials to the center
target only were obviously noisier (larger confidence cones) but
did not deviate systematically from the combined estimate. The
major eigenvectors computed on the basis of combined data fell
within the 95% confidence cones for the corresponding estimates
based only on the center target. Thus we do not reject the null
hypothesis that the two estimates are measures of the same statistical distribution, and we thus report results based only on the combined data, because of the greater confidence for these measures.
These ellipsoids are drawn to scale and are in general small
with respect to the total workspace. The magnitude of the
variable error is measured by the square root of the eigenvalues of the tolerance ellipsoid T 0.95 , which are given in Table
1 for each subject and each workspace location.
The major eigenvector for each covariance matrix is plotted along with the ellipsoid in Fig. 4. The eigenvectors are
extended in space toward the frontal plane that passes
through the eyes of the subject. When viewed from either
above or from the side, these vectors show a clear tendency
to converge toward the head of the subject.
Figure 5 shows the same eigenvectors surrounded by the
corresponding 95% confidence cone for each estimate. It can
be seen that in almost all cases the head, and even more
precisely the eyes, falls within the edges of the confidence
cone, whereas in no case does the shoulder lie within the
confidence region.
The variable errors for the left-near-far paradigm, shown
in Fig. 6, give even stronger evidence for a head-centered
reference frame. In three of four cases (subject FS is the
exception) the major axis of the variable-error ellipsoid rotates significantly for the left workspace region, as compared
with the near location, whereas the eigenvectors for the near
and far locations are essentially parallel.
To test the statistical significance of these results, we first
performed an ANOVA on the direction (azimuth) of the
covariance major eigenvector, with workspace region as a
within-subject factor. For the left-near-far paradigm, we see
a statistically significant effect of workspace location on the
orientation of the major eigenvector [F(2,6) = 8.16, P < 0.019]. According to Scheffé's t-test for post hoc analysis, the azimuth of the major eigenvector differs significantly between the left and near workspace regions (P < 0.036) but not between the far and near regions (P > 0.99).
We tested two potential spherical coordinate systems, one
with the origin at the eyes and a second with the origin at the
right shoulder. Combining trials from all five workspace regions, there is no statistically significant difference between the
direction of the major eigenvector of the covariance matrix and
the direction of a line drawn to each workspace location from
the head/eyes [R(2,28) = 0.51, P > 0.60]. The difference between the azimuth and elevation of such a line drawn from the shoulder to the workspace region is highly significant [R(2,28) = 128.73, P < 0.00001]. These results hold even if we consider only the left workspace region [R(2,8) = 1.33, P > 0.31 for the eye/head origin; R(2,8) = 128.56, P < 0.00001 for the shoulder origin].
No clear pattern emerges for the second and third eigenvectors. Whereas the first eigenvalue is distinct to a very
high degree of confidence for all subjects and all conditions,
the second and third eigenvalues are statistically different in
only 50% of the cases (Table 1).
Measurements of constant error for the different workspace regions do not show the same consistency across subjects that was seen for measurements of variable error. For
the left-mid-right protocol, the magnitude of the constant-error vector, shown in Table 2, varied from very small errors
(subject SC) to rather large errors (subject AP). In Fig. 7
one can see both overshoots (subjects AP and FO) and
undershoots (subjects MS and SE) of the true target position,
where an overshoot is defined as a final finger position that
is farther from the body than the real target position. Note
that constant errors have been magnified by a factor of 5 to
be discernible in the figure. Constant-error directions vary
widely across subjects, although the direction of the vector
is not very reliable when the magnitude is small (subjects
AR and SC). When one considers only the larger constant-error vectors (subjects AP, MS, and SE), these vectors are
roughly aligned with the head-target axis. All four subjects
in the left-near-far experiment show a significant undershoot
of the left and far targets (Table 2) and a tendency for these
constant-error vectors to point toward the head (Fig. 8).
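The constant-error vector and the overshoot sign used in Table 2 can be computed as follows (a sketch; the millimeter coordinates are invented, with the head placed at the origin):

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def constant_error(mean_endpoint, target):
    """Constant-error vector e = mean endpoint - target (mm)."""
    return [p - t for p, t in zip(mean_endpoint, target)]

def overshoot(mean_endpoint, target, head=(0.0, 0.0, 0.0)):
    """Signed overshoot: positive if the finger lands farther from the
    head/body than the true target, negative for an undershoot."""
    d_end = norm([p - h for p, h in zip(mean_endpoint, head)])
    d_tgt = norm([t - h for t, h in zip(target, head)])
    return d_end - d_tgt

target   = [150.0, -200.0, 550.0]   # hypothetical target position (mm)
endpoint = [145.0, -195.0, 520.0]   # hypothetical mean final finger position
e = constant_error(endpoint, target)
o = overshoot(endpoint, target)     # negative here: an undershoot
```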
We computed a linear regression of target versus endpoint
position for each of the three Cartesian directions independently for the six subjects who performed pointing to the
mid-workspace region (Table 3). Slopes are significantly
different from zero in all cases, confirming that subjects
differentiated between targets in all three dimensions. Slopes
for the Z dimension (depth) may be less than unity, indicating a compression of the average endpoint positions toward
the center. Compression was greater in the Z (depth) dimension than for X (left/right) or Y (up/down) dimensions.
Starting position
Changing the starting position of the movement had no
consistent effect on the direction of the variable-error major
FIG . 4. Variable error for 6 subjects who performed the experiment to
the left, middle, and right workspace regions. Ellipsoids represent the 95%
tolerance region for a single target, computed from 20 movements to each
of 9 targets. Lines emanating from each ellipsoid indicate the direction of
the primary axis (1st eigenvector) projected to intersect with the frontal
plane passing through the eyes. Plus signs: average starting hand position
for all movements.
TABLE 1. Variable error eigenvalues

A.
            Left                     Middle                   Right
Subject   1st     2nd    3rd      1st     2nd    3rd      1st     2nd    3rd
AP        16.8**  7.7*   6.1*     20.0**  8.9*   7.2*     14.2**  8.5**  5.8**
AR        11.1**  6.0**  4.2**    10.6**  5.9**  4.3**    10.6**  5.4**  3.8**
FO        12.7**  7.3    6.8      19.6**  7.8*   5.9*     11.2**  6.1    5.9
MS        6.6**   4.1**  3.2**    6.5**   4.1**  3.1**    7.0**   4.6**  3.2**
SC        10.9**  4.3    3.9      13.8**  4.0    3.4      8.8**   4.1    3.4
SE        6.2**   4.4    3.7      6.3**   4.0    3.8      6.1**   3.9    3.6

B.
            Left                     Near                     Far
Subject   1st     2nd    3rd      1st     2nd    3rd      1st     2nd    3rd
DA        8.9**   6.5**  4.6**    8.0**   4.7    3.8      8.6     7.3    4.6**
FF        11.2**  6.6    5.7      8.6**   6.1**  4.0**    12.2**  7.2**  4.6**
FS        15.6**  6.8*   5.4*     17.8**  5.5    5.0      14.5**  5.5    4.4
JM        7.6**   2.7    2.2      7.5**   2.2**  1.4**    14.1**  3.8*   3.2*

Variable error magnitude (mm) given by the square root of the 3 eigenvalues for the 95% tolerance ellipsoid. Values marked with asterisks are statistically distinct at the P = 0.01 (**) or P = 0.05 (*) levels.
axis (Fig. 9A). The projected eigenvector for the left starting position (solid line) passes to the left of the vector for the right starting position (- - -) in only half of the cases (subjects AG, FS, and LB), and in all cases both eigenvectors point
toward the head. This is not to say that there is no secondary
effect of the starting hand position. If we project the data
into the plane that contains the target position and the two
starting positions, we can compute the 2-D variable error as
shown in Fig. 9B. The analysis in this plane will be the most
sensitive to the imposed changes in starting position. The
directions of the major eigenvectors computed in this plane
are shown in Fig. 9C. Although for two subjects (AG and
JM) the 2-D eigenvalues were not statistically distinct at
the 95% confidence level, the indicated eigenvectors still
represent the best estimate of the direction of maximum
variation. Considering the data from all subjects, it can be
seen that, in general, the eigenvector for the left starting
position passes to the left of the corresponding eigenvector
for the right starting position (Wilcoxon signed-rank test, Z = 2.02, n = 6, P < 0.043). Looking at the individual,
nonsignificant data in this way is analogous to flipping a
coin—a single ‘‘heads’’ is not significant in itself, but 9 of
10 heads might indicate a biased coin. Thus an effect of the
starting hand position can be detected in the orientation of
the distribution of errors, but this effect is minor compared
with the strong influence of the direction from the head/
eyes to the target.
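The coin-flipping analogy corresponds to a binomial tail probability, e.g., the chance of 9 or more heads in 10 fair flips:

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k
    'heads' in n flips of a coin with heads probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_tail = binom_tail(9, 10)   # 9 or more heads out of 10 fair flips
```

The tail probability, 11/1024 ≈ 0.011, falls below the conventional 0.05 level, which is the sense in which several individually nonsignificant observations can jointly indicate a bias.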
Left versus right hand
FIG . 5. The 95% confidence cones for the variable-error eigenvector
directions shown in Fig. 4. In all cases, the head falls within the confidence
cone, whereas the shoulder lies outside.
Changing the hand used to perform the movement had no
effect on the pattern of variable errors. Figure 10A shows the
orientations of the variable-error ellipsoids for the left-near-far paradigm performed with the left hand. These three subjects
started each trial from the left starting position. Thus the required movement was the mirror image of the right-hand trials.
Vectors for all three workspace locations point toward the
head, as was seen for the right hand in Fig. 6. The near and
far vectors are parallel, whereas the left vector is rotated with
respect to the other two. One subject (JM) performed the
experiment with the left hand starting from both initial positions. The results, shown in Fig. 10B, are essentially identical for the left (solid line) and right (- - -) starting positions. Constant
errors for subjects FF and JM (Fig. 11) are comparable with
those obtained with the right hand (Fig. 8).
Head rotation
The orientations of both the variable-error ellipsoid and
the constant-error vector do not remain aligned with the
TABLE 2. Constant error magnitudes

A.
            Magnitude                 Overshoot
Subject   Left    Middle  Right     Left    Middle  Right
AP        27.3    11.3    88.2      24.6    8.5     83.7
AR        13.6    9.3     14.9      2.9     1.8     -1.7
FO        9.4     7.2     17.6      8.3     1.5     9.8
MS        23.8    24.4    23.6      -21.7   -23.9   -21.7
SC        10.3    9.5     14.2      0.4     -5.4    -0.8
SE        10.8    17.1    27.8      -10.6   -16.8   -26.4

B.
            Magnitude                 Overshoot
Subject   Left    Near    Far       Left    Near    Far
DA        14.9    12.3    28.1      -11.1   -10.5   -27.0
FF        21.0    24.6    41.7      -14.4   23.7    -40.5
FS        30.1    16.2    53.1      -22.8   -7.5    -51.3
JM        29.5    14.6    41.9      -28.7   -13.7   -41.3

Constant error magnitudes (mm), and the degree of overshoot (positive values) or undershoot (negative values), as measured from the head (mm).
body midsagittal plane. Both vectors rotate in response to
reorientation of the head. In Fig. 12, a dashed line is drawn
between the center of head rotation and the center target
position. For trials with the head rotated to the left, both
the variable-error major eigenvector and the constant-error
vector pass to the left of the dashed midline. Similarly, both
vectors pass to the right for the rightward head rotation. The
amount of rotation does not, however, correspond exactly
to what would be expected from the shift in eye position
induced by the head rotation. The confidence cones in Fig.
12 indicate that the rotation of the constant error in the
horizontal plane is statistically significant at the 95% confidence level for each of these two subjects. The rotation of
the variable-error eigenvector is statistically significant for
subject FF but not for subject JM.
Memory delay
Increasing the duration of the memory delay had little
or no effect on the direction of the variable-error major
eigenvector. For the two subjects who pointed to targets
in the left, middle, and right workspace regions, the major
eigenvectors of the variable-error ellipsoids continue to point
toward the head (Fig. 13). For the six samples, there is no
significant difference between the orientation of the major
eigenvector and the orientation of a vector pointing from the
eyes to the center target position [R(2,4) = 1.06, P < 0.4263]. For the subjects who pointed to targets in the far
workspace region with two different memory delays (Fig.
14), the difference in the direction of the variable-error ma-
FIG . 6. Variable error for subjects who performed trials to the left, near, and far workspace
regions. A: major axis directions projected to the
frontal plane passing through the eyes. In 3 of 4
cases, the major axes of the variable-error ellipsoids are parallel for the near and far configurations, whereas the major axis for the left workspace region is rotated with respect to the other
2. For the exception (subject FS), all 3 axes are
essentially parallel. B: confidence limits for the
axes plotted in A. Note that the 1st eigenvalue
for the ellipsoid, indicated by an asterisk, is not
statistically different from the 2nd at the 95%
confidence level. However, the 1st and 2nd eigenvectors define a vertical plane whose intersection with the horizontal is indicated by the dashed
line.
difference was significant between the first eigenvalue and the other two, but not between the second and third eigenvalues (P < 0.0067 and P < 0.43, respectively, Scheffé's t-test for post hoc analysis). This confirms our previous analysis that there is no significant anisotropy of variable error in the frontoparallel plane. There was no significant cross-effect of memory delay versus eigenvalue rank [F(2,10) = 0.37, P < 0.70], indicating that the additive increase of variability, when expressed in terms of the SD, was equal in all three dimensions. Note that the increase was additive, not multiplicative. The ratio of the first to the second eigenvalue decreased with memory delay [F(1,5) = 11.25, P < 0.02].
DISCUSSION
jor eigenvector between a short (0.5-s) and long (8.0-s)
delay was not statistically significant [R(2,3) = 6.28, P <
0.085, subject DA was excluded from this analysis because
the direction of the 1st eigenvector was not statistically significant]. The magnitude of the variable error increased by
an average of 3.7 mm in each direction (Fig. 15). To test
the significance of the increased variability, we performed
an ANOVA on the square roots of the eigenvalues of the
variable-error tolerance ellipsoid, with memory delay and
eigenvalue rank (1st, 2nd, or 3rd) as independent factors.
The memory delay factor had a significant main effect on the average eigenvalue [F(1,5) = 118.21, P < 0.0001]. There was a significant main effect between the three different eigenvalues [F(2,10) = 28.42, P < 0.001], where the
Binocular distance cues—eye position
The tight coupling between the orientation of the variable-error ellipsoid and the presumed orientation of gaze suggests
that the anisotropy in 3-D pointing movements might be attributed to variability in the sensory input coming from the binocular visual system. Consider first the computation of target loca-
FIG . 8. Constant-error vectors for the
left, near, and far configuration, plotted as
in Fig. 7.
FIG . 7. Constant-error direction and relative magnitude for the left, middle, and right workspace regions. Small spheres: true center target position
for each workspace region. Line segments: average constant error for all 9
targets, magnified by a factor of 5 to be visible.
Specific anisotropies in the distribution of pointing errors
provide clear evidence of a viewer-centered reference frame
for pointing to remembered targets when vision of the hand
is allowed. The axis of maximum variance in the movement
endpoints (major axis of the tolerance ellipsoids) passes consistently through the head, very close to the midpoint between the eyes. This feature is independent of the location
of the target within the workspace, the hand used to perform
the movement (left or right), the starting position of the
hand, and the orientation of the head during the task. Thus
specification of the endpoint position for these movements
appears to be related to a viewer-centered, as opposed to an
arm-centered, reference frame. In the following discussion
we illustrate how binocular distance perception based on eye
position or retinal disparity predicts the orientation, but not
the shape, of the variable errors. We then show how the
control of conjugate eye movements or the fusion of multisensory cues can account for the observed endpoint distributions. Finally, we argue that the target location is memorized
in a reference frame different from that of the visual input,
on the basis of isotropic increases in endpoint variability.
TABLE 3. Local target to endpoint transformations

Subject   MX             MY             MZ
AP        1.36 ± 0.05    0.99 ± 0.06    1.06 ± 0.14
AR        1.12 ± 0.03    1.01 ± 0.05    0.60 ± 0.07
FO        1.06 ± 0.04    0.89 ± 0.08    0.56 ± 0.19
MS        0.93 ± 0.02    0.93 ± 0.03    0.67 ± 0.04
SC        1.16 ± 0.02    1.02 ± 0.03    0.99 ± 0.09
SE        1.01 ± 0.02    1.02 ± 0.03    0.75 ± 0.04

Values are regression slopes (means ± SE) for the correlation of endpoint vs. target position computed separately for each of the 3 Cartesian directions. Data are from 6 subjects for targets in the mid-workspace region. All slopes are significantly different from 0 at P < 0.0001.
tive eye movements, the structure of the sensory input variability will be altered. It is generally accepted that conjugate
eye movements and vergence are controlled by separate
brain stem circuits, although these two systems are evidently
coordinated (Mays and Gamlin 1995). It is therefore reasonable to expect that eye position fluctuations might covary.
Mathematically, this means that the covariance matrix for
eye position information will no longer be diagonal with
equal eigenvalues. The anisotropy in the eye position variance will change the shape of the transformed output variance (see APPENDIX B ). A finer control of vergence versus
version eye movements would decrease the variability of the
final pointing depth with respect to variability in the tangent
plane.
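The effect of coupled eye-position noise can be sketched by propagating angular variances through the small-angle triangulation geometry (numbers invented; the exact treatment is in APPENDIX B, not reproduced here). Version noise, the mean of the two eye angles, maps to lateral position error, and vergence noise, their difference, maps to depth error:

```python
import math

def eye_noise_to_cartesian(sigma, rho, d=600.0, a=31.5):
    """Propagate eye-angle noise (SD `sigma` rad per eye, correlation
    `rho` between the 2 eyes) to SDs of the perceived target position.
    d: viewing distance (mm); a: half the interocular distance (mm).
    Small-angle triangulation: lateral x ~ d * version error,
    depth z ~ (d**2 / (2*a)) * vergence error."""
    var_version  = sigma ** 2 * (1.0 + rho) / 2.0    # (thL + thR)/2
    var_vergence = sigma ** 2 * 2.0 * (1.0 - rho)    # thL - thR
    sd_lateral = d * math.sqrt(var_version)
    sd_depth   = (d ** 2 / (2.0 * a)) * math.sqrt(var_vergence)
    return sd_lateral, sd_depth

indep = eye_noise_to_cartesian(0.001, rho=0.0)   # independent eye noise
coupl = eye_noise_to_cartesian(0.001, rho=0.9)   # strongly conjugate noise
```

With independent noise the depth-to-lateral SD ratio is near 20:1; with strongly correlated (conjugate) noise, i.e., finer vergence control, the ratio drops toward the 2:1 or 3:1 actually observed.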
Van Ee and Erkelens (1996) propose a different mechanism that might explain an apparently greater stability of
visual depth perception. Displacements of entire half-images
of a random dot stereogram induce vergence movements in
the eyes but limited perceptions of changes in depth (Erkelens and Collewijn 1985a,b; Regan et al. 1986). Changes in
relative disparity within the stereogram, on the other hand,
elicit strong perceptions of depth changes. Furthermore, fast
side-to-side head movements or pressing against the eyeball
generate vergence changes that are not perceived as changes
in target depth (Steinman et al. 1985). Van Ee and Erkelens
have hypothesized that movements of the entire retinal image
are ignored or attenuated by the depth perception system,
because such retinal shifts are ambiguously related either to
changes in object distance or to variations in eye or head
position. To explain the results of the current study in this
context, however, it must be shown that slip of an entire
retinal image induces perceptions of change of target direction but not of target depth. Sustained passive rotation of
one eye can indeed induce changes in ocular alignment and
concomitant changes of perceived target direction (Gauthier
et al. 1994).
FUSION OF MULTIPLE SENSORY CUES. Under our experimental conditions, subjects have more cues than just the convergence of the eyes with which to estimate the target distance. Relative disparity between the target and fixation lights
FIG. 9. Comparison of variable-error orientation for the left vs. right starting positions. A: 3-dimensional (3-D) major axis directions for the left (solid line) and right (- - -) starting positions to a single workspace region (far region). B: 2-D variable-error ellipses computed in the plane containing the 2 starting positions and the center target location. C: directions of the 2-D variable-error major axes. Across subjects, the best estimate of 2-D variable error rotates with the starting position, although when taken individually the major eigenvalues for the axes marked ns are not statistically different from the corresponding 2nd eigenvalue at the 95% confidence level.
tion on the basis of the orientation of the two eyes, assuming
that the target is bifoveated. Uncompensated eye movements
can account for a decreased ability to discriminate differences
in positions of visually acquired targets (Matin et al. 1966,
1970). For a given level of random uncertainty in the angular
position of each eye, we can calculate the expected variability
of the perceived target location as a function of workspace
location (see APPENDIX B ). The computed workspace variation
of this perceptual anisotropy predicts the direction of variable-error ellipsoids seen in our experiments. Uncertainty in depth
perception, and thus variability in the pointing distance, will
be greater than that of azimuth or elevation for all but the very
nearest of target locations (target distances less than half the
interocular distance of 6.3 cm).
Variability in eye position signals alone predicts the orientation but not the shape of the pointing error distribution.
For a target located 60 cm from the eyes, left/right and
up/down variability should be 20 times smaller than the
variability in depth. Comparison of the first and second eigenvalues from our experiments indicate a factor closer to
2 or 3 to 1. We offer two possible explanations for this
discrepancy.
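Under the small-angle triangulation geometry, with equal and independent angular noise in each eye, the predicted depth-to-lateral anisotropy reduces to the ratio of viewing distance to half the interocular distance (a sketch of the 20:1 figure; the full derivation is in APPENDIX B):

```python
# For a target at distance d with half-interocular distance a, equal
# independent angular noise sigma per eye maps (small angles) to:
#   lateral SD ~ d * sigma / sqrt(2)            (mean of the 2 eye angles)
#   depth   SD ~ (d**2 / (2*a)) * sigma*sqrt(2) (difference of the angles)
# so the predicted depth-to-lateral SD ratio is simply d / a.
d = 600.0            # viewing distance, mm
a = 31.5             # half of the 6.3-cm interocular distance, mm
ratio = d / a        # predicted anisotropy, roughly 19:1
```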
VERGENCE VERSUS VERSIONAL EYE MOVEMENTS. The preceding analysis assumes that variability in eye position sense
was stochastically independent for each of the two eyes. If,
instead, uncompensated eye movements tend to be coupled,
with a differential in the control of conjugate versus disjunc-
FIG. 10. Variable-error orientation for the left hand. A: 3 subjects using the left hand, starting from the left hand starting position. Results are nearly identical to movements with the right hand from the right side (Fig. 6). B: subject JM performed the experiment with the left hand, starting from both the left (solid line) and right (- - -) starting positions. Results are indistinguishable for the 2 starting positions.
The transformation of equal variances for monocular disparities into Cartesian coordinates also results in a 20:1 ratio of
covariance eigenvalues.
Accommodation provides an additional cue as to the depth
of the target location and final fingertip position. Blur in
the retinal image of the fingertip as the subject fixates the
remembered target position can indicate when the fingertip
is at the proper depth. For accommodation to be useful in
reducing variable error at the endpoint, it must provide information that is independent from vergence signals. It is known
that vergence can be triggered by blur-driven accommodation (Jiang 1996; Mays and Gamlin 1995). Conversely, the
vergence controller can initiate an accommodative response.
Thus dynamic vergence and accommodation signals would
appear to be correlated. However, in addition to the phasic
responses, tonic components are present in this cross-linked
system. Accommodative and vergence eye movements in
darkness are thought to represent tonic accommodation and
tonic vergence, respectively. Current models of this system
indicate that the cross-links are driven only by the phasic
control elements, the tonic elements being located after the
cross-coupling (Jiang 1996; Schor 1992). Accordingly, the
resting position of accommodation and vergence in the absence of adequate stimulation should be independent of each
other (Owens and Leibowitz 1980).
Because blur is related primarily to object depth, accommodation is one visual cue that might reduce distance variance without affecting directional accuracy. Because tonic
accommodation and vergence cues are independent, the integration of vergence and accommodation information reduces
FIG . 11. Constant error for trials performed with the left hand from the left starting position (A) and from both the left and
right starting positions (B).
might provide one such extra cue, as would the absolute
disparity of the observed finger position near the end of
the movement as the subject fixates the remembered target
location. For these experimental conditions, both absolute
and relative disparity is useful in the acquisition of the target
location, whereas in the comparison of the finger position
to the remembered target position, only absolute disparity
cues are available in the absence of a visual reference point.
Relative disparity provides a much more powerful indication
of target depth that is on the order of 10 times more precise
than absolute depth cues (Westheimer 1979). Foley (1976)
measured discriminability in a visual task with the use of
successive presentation of two stimuli located Ç63 cm from
the viewer. Using a forced-choice, staircase paradigm, Foley
found discriminability thresholds on the order of 4–10
minarc for both depth and vernier (azimuth) displacements
at an interstimulus interval of 1 s. A 5-min SD for absolute
disparity measurements would result in a 12-mm SD in the
depth direction. This is in agreement with the average magnitude of the major eigenvalues for the 3-D SDs calculated in
our experiment for targets at a comparable distance. One
might conclude, therefore, that the variable error seen in
our experiments arises from the comparison of the fingertip
position and the remembered target position at the end of
the movement rather than from variability in the visual acquisition of the target position.
Retinal disparity cues alone are not sufficient to explain
the shape of the covariance ellipsoids seen in our experiments. The computation of depth and direction based on
absolute disparity has the same form as that of vergence.
VIEWER-CENTERED POINTING ERRORS
1613
the net variance in the estimated distance. Note that, unlike
vergence, blur is ambiguous for the direction of depth. Although blur may not indicate the direction of an error in
depth, minimizing blur can still serve to reduce variability
in distance estimates.
It seems likely that subjects combined multiple sensory
cues in the performance of this particular pointing task. Gogel (1969, 1972) has proposed a model of distance perception in which different monocular and binocular cues combine to form an estimate of target distance. Anderson (1974)
has proposed a quantitative model in which the target distance estimate is a weighted sum of available cues and the
specific distance effect. Foley and Held (1972) compared
radial pointing under two conditions: with multiple distance
cues and under carefully controlled vergence-only conditions, finding, as expected, greater accuracy under the
multicue condition. The magnitude of the variable error for
our experiments corresponds better to the variability observed in the multicue condition of Foley and Held. Neural
activity for cells in area LIP of monkey parietal cortex varies
according to target depth (Gnadt and Mays 1995). Firing
rates for these cells are modulated by both changes in stimulus blur and changes of vergence angle or absolute disparity.
Thus we have both psychophysical and physiological evidence for the integration of vergence, disparity, and accommodation signals within the CNS.
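A minimal numerical sketch of this kind of weighted cue combination can make the variance argument concrete. The single-cue SDs and the inverse-variance weights below are illustrative assumptions (one simple instance of a weighted-sum model in the spirit of Anderson 1974), not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical depth-cue SDs (mm); illustrative values only.
sd_vergence, sd_accom = 12.0, 20.0
v1, v2 = sd_vergence**2, sd_accom**2

# Inverse-variance weights: one simple form of a weighted-sum cue model.
w1 = (1 / v1) / (1 / v1 + 1 / v2)
w2 = 1 - w1
combined_var = 1 / (1 / v1 + 1 / v2)   # variance of the weighted estimate

# Monte Carlo check: two independent noisy estimates of a 600-mm target depth.
true_d = 600.0
est = (w1 * rng.normal(true_d, sd_vergence, 100_000)
       + w2 * rng.normal(true_d, sd_accom, 100_000))
print(combined_var**0.5, est.std())    # combined SD is below either single cue
```

Combining the two independent cues always yields a smaller variance than the better single cue, which is the sense in which integrating accommodation with vergence can reduce depth variance.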
Egocentric ocular inputs versus allocentric visual cues
In a study of visually guided pointing to a remembered
target position (condition with lights on), Soechting et al.
(1990) also concluded that subjects use a head-centered coordinate system to perform the task. They interpreted these
results on the basis of the fact that subjects could align the
target with visual references in the background. Under our
experimental conditions, viewer-centered variable errors
were observed despite the lack of visual cues in the background. Thus it would seem that the anisotropy observed in
pointing variability reflects the anisotropy in the treatment
of egocentric ocular information rather than reliance on allocentric visual cues.
Distortions induced by coordinate transformations
FIG. 12. Variable error (A) and constant error (B) for pointing trials performed with the head turned to the left or right. White bar on the head: direction of head rotation. – – –, midline as defined by the center of the head and the center of the far target region. Cones at bottom middle of each panel: overlaid 95% confidence cones for the variable-error (A) and constant-error (B) estimates computed in the horizontal plane for each head orientation. Both the variable error and the constant error rotate to the left for a leftward rotation of the head and to the right for a rightward head rotation.

FIG. 13. Variable-error orientation for pointing trials with a long (5.0-s) memory delay period. Major eigenvectors of the variable-error ellipsoids point toward the head.

We have argued that the pattern of variable errors arises from the transformation of ocular information into an estimate of 3-D position. Furthermore, we have proposed that the discrepancy between the predicted 20:1 and the observed 2:1 ratio of covariance eigenvalues can be explained by the reduction of depth-related variance through coupled eye movements and/or the integration of multiple visual and oculomotor cues.
cues. An additional explanation for the observed ratio must
be considered. In a pointing task the CNS transforms sensory
input about the target location in space into a motor output
that achieves the final pointing position, passing perhaps
through an intermediate representation of the target location.
If the transformation is exact, the distribution of pointing
errors expressed in Cartesian coordinates will be unaffected
by the sensorimotor transformation. On the other hand, if the coordinate transformations are inexact, distortions will be observed in the mapping of target to finger positions.

FIG. 14. Variable-error orientations and 95% confidence cones for long (8.0-s) and short (0.5-s) memory delays intermixed within the same block of trials. Eigenvectors of the covariance matrix point toward the head for both delays. For subject DA, the 2 major 3-D eigenvalues were not distinct for the long delay. First and 2nd eigenvectors lie in a parasagittal plane passing through the head. Overhead view: 2-D eigenvector in the horizontal plane (∗); lateral view includes only the eigenvector direction for the short delay (∗∗).

FIG. 15. Variable-error magnitude as a function of memory delay. Magnitudes are expressed in mm as the square root of the 3 eigenvalues of the tolerance ellipsoid. Variability of final finger positions increases additively by an equal amount in all directions.
Soechting and colleagues have proposed just such a model
for the transformation of perceived target position to motor
output. They postulate that the CNS uses a linear approximation to transform target distance, azimuth, and elevation into
shoulder and elbow angles (Soechting and Flanders 1989b).
They base this model on systematic undershoots in pointing
to remembered targets. Soechting’s model predicts the compression of target depth information seen in their experiments.
In a similar vein, Foley (1980) has reviewed extensively
the literature on binocular depth perception and finds that
subjects tend to misjudge the true distance of a visually
perceived target irrespective of the reporting procedure used
(verbal response, pointing, etc.). A linear relationship between actual and perceived target vergence angle with slope less than unity fits the available data well. Although the
slopes and intercepts of the fitted relationships would predict
overshoots, instead of undershoots, for reachable targets, this
model of sensory perception also predicts a compression of
data in the depth dimension.
An inexact transformation between sensory or motor variables would affect both the variable and constant errors produced in a pointing task. We do not see systematic undershoots or overshoots in pointing to visually remembered
targets across different workspace regions. Thus we have no
evidence for a globally inaccurate transformation of perceived target location. This may be due to the fact that
subjects reached to a relatively small region of the workspace
for a given set of trials. Although a specific approximation
to a sensorimotor transformation may be employed, subjects
seem to be able to calibrate the transformation to achieve
good accuracy for a small workspace region.
On comparing targets within a single workspace region,
we do see a relative compression of the responses in distance
with respect to the presented target positions. A linear regression of target versus response depth reveals a slope that is
significantly less than 1, in contrast to the near-unity slopes seen
for regressions of target versus response azimuth or elevation
(Table 3). This compression of the responses in depth may
be attributed to a range effect (Brown et al. 1948) or specific
distance tendency (Gogel 1969) rather than to a specific
approximation in a sensorimotor transformation. In any case,
the magnitude of the effect is not sufficient to account for the
eccentricity of the variable-error ellipsoids. A compression factor of 0.6 would reduce the eigenvalue ratio from 20:1 to 12:1, instead of the observed 2:1 ratio.

As noted in the preceding text, we observed no consistent pattern of errors related to a specific inaccuracy in the transformation from the visual input to the motor output. Gordon et al. (1994), on the other hand, based their arguments for separate processing of movement extent and direction on patterns of variable error observed in their experiments independent of constant errors introduced by the sensorimotor transformation. The lack of strong evidence in our data for such a hand-centered reference frame may be accounted for by differences in experimental conditions. A recent study by Desmurget et al. (1997) has shown that movement kinematics and variable-error distributions can depend on whether the movement is completely unconstrained (as in our study) or whether the movement is executed in compliance with a physical constraint (as for the experiments performed by Gordon et al. 1994). A second difference between these experiments concerns the visual conditions. In the current study, subjects could see their hands during the course of the movement. Thus the question remains as to whether visually guided updates during the course of the movement might alter the initial visuomotor transformation.

Memory storage

We have argued that the anisotropy in variable errors for pointing arises from the binocular estimation of target and finger position. For the visually guided movements studied here, there is no need to postulate any other representation of the positions within the CNS. The brain could simply compare the input parameters (vergence angle, disparity, and accommodation) of the visually perceived finger position with the remembered target parameters expressed in the same terms. Variability of the inputs will manifest itself as anisotropy for depth versus direction at the output, but this does not mean that the CNS represents the target position explicitly in terms of a spherical coordinate system.

Does the CNS indeed memorize the target location in the intrinsic coordinates of the input, or are the data somehow transformed into another internal representation? For pointing trials with a 5- to 8-s memory delay, the variability of responses increased equally in all three dimensions with respect to a memory period of only 0.5 s. Discriminability thresholds have also been shown to increase as a function of memory delay for a variety of unidimensional stimuli (Kinchla and Smyzer 1967). Specifically, it was found that decreases in discriminability with increasing time delays can be modeled by a diffusional process in which the variance of the remembered stimulus value increases linearly with time. The fact that the size of the variable-error tolerance ellipsoid increased isotropically in our experiments indicates that the storage of the target position may be carried out in transformed coordinates. In fact, the same transformation that generates a radial anisotropy in depth for the perceived target position would apply to increases in variability induced by the prolonged memory delay (see APPENDIX A). The observed changes in variable error would be inconsistent with the model of memory decay of Kinchla and Smyzer if the memorized target position were stored in retinal or oculomotor coordinates. Furthermore, the observed increases in variable error do not correspond to the increases in discriminability thresholds for a purely visual task. Foley (1976) observed that the ratio of depth and vernier discriminability thresholds, expressed in terms of visual arc, remained constant, whereas both increased as a function of interstimulus delay. Thus, although Kinchla's model can adequately explain Foley's data in terms of retinal or oculomotor coordinates, such a description cannot fully account for the pointing data. Indication of a memorized target position through reaching, as opposed to a visual judgment of target locations, seems to modify the pattern of variable errors and thus might indicate a difference in the memory encoding for these two different tasks.

Patterns of eye movement errors to memorized targets cannot explain the pointing errors observed in our experiments. Monkeys and humans demonstrate systematic errors when performing memory saccades, fixating consistently above the actual target position (Gnadt et al. 1991; White et al. 1993). For memory saccades, there is a dissociation between the patterns of constant and variable errors: the magnitude of constant errors reaches a stable maximum after only 400 ms, whereas variable-error magnitude rises monotonically over ≥2.4 s. Ocular drift during the dark fixation period cannot explain constant errors observed for memory saccades (White et al. 1993). In our experiment, subjects performed saccades to the target while it was still visible. Thus we cannot expect to relate errors in pointing directly to errors in the performance of memory saccades. However, pointing errors might be related to eye position drift if subjects were merely comparing the remembered retinal image of the target with the observed position of the fingertip. The data do not confirm this hypothesis. Upward and leftward drift in eye position would result in a pattern of constant errors upward and to the left, with an elongation of the variable error in the same direction. We see no such consistent pattern of errors for pointing with the finger to a remembered target.

The isotropic increase of the tolerance ellipsoids as a function of memory delay suggests that the target location may be memorized in a Cartesian reference frame. Such an encoding of the target position would require the combination of depth and directional cues before the storage of the target position. Cross-coupling of distance and direction information has been observed for cells within primate parietal area LIP. Certain cells in this area signal actual or intended changes in eye position in the absence of retinal stimulation, as seen in the production of memory saccades (Gnadt and Andersen 1988; Gnadt and Mays 1995). The authors have argued that these neurons encode their activity in motor spatial parameters, with motor fields that are shaped according to a 2-D Gaussian function in the frontoparallel plane (directional tuning) and a broad sigmoid function in depth (Gnadt and Mays 1995). Irrespective of whether or not this cortical area is directly involved in the control of manual pointing, these eye-movement-related cells provide an example of the integration of depth and direction information within the CNS. The sensitivity of these cells to depth may provide a mechanism by which the CNS can transform spherically referenced sensory information into a spatially isotropic representation.

The memorization of target location in a Cartesian reference frame is an attractive hypothesis, but one that remains speculative at this point. Differential increases in variance for independent input cues such as vergence and accommodation might explain why the shape of the overall variance does not change as expected, although it seems unlikely that the increases in the different channels would be matched so as to precisely cancel the unequal increase in the transformed variability. Similarly, just as we have argued above that coupled eye movements might affect the shape of the variable error, an increase in the correlation of movements between the eyes for longer delays might also explain the less-than-expected increase in depth variability. Although further evidence is needed, the hypothesis of an isotropic internal representation nevertheless seems worth pursuing.
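The diffusion account can be sketched numerically. Assuming, hypothetically, that memory noise accumulates in Cartesian coordinates at a constant rate k (the rate and the base covariance below are illustrative choices, not fitted values), every eigenvalue of the tolerance ellipsoid grows by the same additive amount, so the ellipsoid inflates isotropically while its anisotropic shape is preserved:

```python
import numpy as np

k = 4.0                                   # diffusion rate, mm^2/s (hypothetical)
base_cov = np.diag([150.0, 10.0, 10.0])   # anisotropic perceptual covariance, mm^2

def tolerance_cov(delay_s):
    # Cartesian storage with linear (diffusion-like) noise growth adds
    # k * delay equally on every axis of the covariance matrix.
    return base_cov + k * delay_s * np.eye(3)

for t in (0.5, 5.0, 8.0):
    # every eigenvalue grows by the same additive amount k * t, so the
    # differences between eigenvalues (the ellipsoid shape) are unchanged
    print(t, np.linalg.eigvalsh(tolerance_cov(t)))
```

Had the same diffusion instead acted on the oculomotor inputs, the added variance would emerge through the sensorimotor transformation stretched in depth (see APPENDIX A), contrary to the isotropic growth observed.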
Conclusions

Errors in pointing to remembered targets are distinct for target distance versus target direction from the eyes. This result is independent of movement starting position, effector hand, workspace location, and head orientation, demonstrating a viewer-centered anisotropy for the reproduction of the target positions. Analysis of the variable error indicates that the pattern of errors most probably arises from the inherent geometric properties of the binocular perception of distance and direction. Isotropic increases in variability with memory delay nevertheless suggest that spatial memory is carried out in an internal representation that does not simply mirror the retinal and ocular sensory signals of the visually acquired target location.

APPENDIX A: TRANSFORMATION OF VARIANCE

As information is passed from one coordinate frame to another, random noise in the input variable will be transformed into variability in the output. If the transformation is known, the form of the output variability can be computed from the form of the input variance. In this section we derive the equations for the transformation of variance between coordinate systems.

Let x be an n-dimensional vector representing the input information, y be the m-dimensional output variable, and y = L(x) indicate the transformation of information between coordinate frames. Furthermore, let x be a random variable with mean x0 and variance Sd. Sd need not necessarily be diagonal; that is, the components of x are not necessarily independent. For relatively small variations of the input variable x, the distribution of the output variable y is computed as follows

    xi = x0 + di    (A1)

where di is a randomly distributed vector with zero mean and variance Sd. For small displacements from x0, the corresponding value for y is approximately

    yi = L(x0 + di)    (A2)
       ≈ L(x0) + J(x0) di    (A3)
       ≈ y0 + J(x0) di    (A4)

where J(x0) is the Jacobian matrix of the transformation L(x) evaluated at x = x0

    J(x) = [ ∂y1/∂x1 ··· ∂y1/∂xn
              ···     ···   ···
             ∂ym/∂x1 ··· ∂ym/∂xn ]    (A5)

The distribution of the variable y is given by

    Sy = Σi (yi − y0)(yi − y0)ᵀ    (A6)
       = Σi [y0 + J(x0) di − y0][y0 + J(x0) di − y0]ᵀ    (A7)
       = Σi J(x0) di diᵀ Jᵀ(x0)    (A8)
       = J(x0) Sd Jᵀ(x0)    (A9)

The addition of variance in the input variable x results in an increase in variance for the output y. The change in variance for the two variables is also related by the Jacobian matrix

    xi* = x0 + di + hi    (A10)

where hi has zero mean and variance Sh

    Sx* = Sd + Sh    (A11)
    Sy* = J(x0)[Sd + Sh]Jᵀ(x0)    (A12)
        = Sy + J(x0) Sh Jᵀ(x0)    (A13)

Thus the additive change in output variance ΔSy is related to the additive change in input variance Sh

    ΔSy = J(x0) Sh Jᵀ(x0)    (A14)

Similarly, a scaling of the input variance will result in a scaling of the output variance

    Sx* = r Sd    (A15)
    Sy* = J(x0)[r Sd]Jᵀ(x0)    (A16)
        = r Sy    (A17)

and the additive increase of the output variance will have the same shape as the original transformed variance

    ΔSy = (r − 1) Sy    (A18)

In both cases, an isotropic increase in variance at the input will cause an increase of variance at the output that is shaped by the coordinate transformation.
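The variance-transformation rules of Eqs. A9, A14, and A17 can be checked numerically. The transformation and covariance values below are illustrative choices (a polar-to-Cartesian map, loosely analogous to distance/direction coordinates), not quantities from the experiments:

```python
import numpy as np

def L(x):
    # illustrative transformation: (distance, direction) -> Cartesian position
    return np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])

def jacobian(f, x0, h=1e-6):
    # central-difference estimate of the Jacobian J(x0) of Eq. A5
    cols = []
    for i in range(len(x0)):
        dx = np.zeros(len(x0))
        dx[i] = h
        cols.append((f(x0 + dx) - f(x0 - dx)) / (2 * h))
    return np.column_stack(cols)

x0 = np.array([600.0, 0.2])               # operating point (mm, rad)
Sd = np.array([[9.0, 1.0], [1.0, 4.0]])   # input covariance; need not be diagonal

J = jacobian(L, x0)
Sy = J @ Sd @ J.T                         # Eq. A9: transformed covariance

# Eq. A17: scaling the input covariance by r scales the output covariance by r.
r = 3.0
assert np.allclose(J @ (r * Sd) @ J.T, r * Sy)

# Eq. A14: an isotropic additive increase at the input (Sh = s * I) emerges at
# the output shaped by the transformation, i.e. generally anisotropic.
dSy = J @ (2.0 * np.eye(2)) @ J.T
print(np.linalg.eigvalsh(dSy))            # unequal eigenvalues
```

The last line illustrates the closing point of the appendix: isotropic input noise need not stay isotropic once it passes through a nonlinear coordinate transformation.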
APPENDIX B: TRANSFORMATION OF RANDOM NOISE IN OCULOMOTOR SIGNALS TO CARTESIAN VARIABILITY

Random noise in retinal and oculomotor signals will generate variability in visually perceived estimates of spatial locations. In this section we derive expressions for computing the variability of spatial estimates expressed in Cartesian coordinates on the basis of estimates of the variance for the oculomotor inputs.

FIG. A1. Definition of terms for the calculation of Cartesian target position (x, y) from binocular information, where αl and αr are the angular positions of the target with respect to the eyes and e is the interocular distance.

Eye position signals corresponding to a foveated target can be transformed directly into a perceived Cartesian target position P, where αl and αr are the eye orientations and e is the interocular distance (Fig. A1). From the law of sines

    e / sin(αr − αl) = Dr / sin αl    (A19)

From trigonometry and substitution

    x = Dr cos αr + e/2    (A20)
      = e sin αl cos αr / sin(αr − αl) + e/2    (A21)
    y = Dr sin αr    (A22)
      = e sin αl sin αr / sin(αr − αl)    (A23)

Small displacements of eye position give rise to small displacements of the perceived target X-Y position. These displacements are related by the Jacobian matrix of the transformation

    J(αl, αr) = [ ∂x/∂αl  ∂x/∂αr
                  ∂y/∂αl  ∂y/∂αr ]    (A24)

    ∂x/∂αl = e cos αr [sin(αr − αl) cos αl + sin αl cos(αr − αl)] / sin²(αr − αl)    (A25)
    ∂y/∂αl = e sin αr [sin(αr − αl) cos αl + sin αl cos(αr − αl)] / sin²(αr − αl)    (A26)
    ∂x/∂αr = −e sin αl [sin(αr − αl) sin αr + cos αr cos(αr − αl)] / sin²(αr − αl)    (A27)
    ∂y/∂αr = e sin αl [sin(αr − αl) cos αr − sin αr cos(αr − αl)] / sin²(αr − αl)    (A28)

The variance Sxy of the perceived target position can be calculated from the variance of the perceived eye positions (see APPENDIX A)

    Sxy = J(αl, αr) Sa Jᵀ(αl, αr)    (A29)

where Sa represents the variance of the left and right eye positions. For a given target position (Px, Py), the eye orientations can be computed by

    αl = tan⁻¹[Py / (Px + e/2)],    αr = tan⁻¹[Py / (Px − e/2)]    (A30)

and the predicted covariance matrix Sxy can be computed from a given eye position variance Sa with the use of Eqs. A24–A29.

Predicted Cartesian variances

Using these calculations, we can predict the shape of pointing errors expressed in Cartesian coordinates that would be evoked by variability in the oculomotor system. This calculation makes no assumptions about the underlying neural processes that perform the transformation. In fact, this computation predicts the pointing errors for the case where the subject simply matches the retinal position of the finger to the remembered retinal position of the target, with all variance arising from uncompensated eye movements. Note, however, that similar errors will be produced at the endpoint for any sensorimotor transformation, provided that the source of the information is eye position and that the transformation accurately reproduces the 3-D target position.

We first consider the assumption that uncompensated eye movements are stochastically independent between the two eyes

    Sxy = J(αl, αr) [ σl  0
                      0   σr ] Jᵀ(αl, αr)    (A31)

where σl and σr represent the variance of the perceived eye position for the left and right eyes, respectively. We cannot calculate the absolute Cartesian variance of the perceived target position without knowing the values of σl and σr. We can estimate the shape and direction of Sxy, however, by assuming that σl equals σr. For these examples, we use an eye position SD of 5 minarc, which corresponds to apparent discrimination thresholds for human subjects (see text). Note, however, that the shape and orientation of the Cartesian covariance are not affected by the absolute value of this parameter. The computed Cartesian covariance matrix S′xy will be related to the true covariance Sxy by a scale factor, and the two matrices will have the same eigenvectors and the same eigenvalue ratio. For a target located straight ahead at a distance of 600 mm and an interocular distance of 60 mm, the transformed eye position variance is

    Sxy = [ 0.38   0.0
            0.0  153.1 ]

The square roots of the eigenvalues of the Cartesian variance indicate the magnitude of the variability (SD) in millimeters along each major axis. The ratio of the SDs indicates the shape of the variance independent of size. The above Cartesian variance is diagonal, with an SD ratio of 20:1, indicating that the Cartesian SD resulting from equal variances in the two eyes will be 20 times greater in the depth than in the lateral direction.

In a second example, we estimated the Cartesian distribution of errors on the basis of the assumption that disjunctive eye movements are more finely controlled than conjugate eye movements. To illustrate, we assumed an arbitrary ratio of 2:1 for the SD of version versus vergence eye movements. In this case the eye position variance will have the form

    Sa = [ 1.5σv  0.5σv
           0.5σv  1.5σv ]

where σv is the SD of disjunctive (vergence) eye movements. Setting σv = 5 min, the resulting Cartesian variance is

    Sxy = [ 1.53   0.0
            0.0  153.1 ]

giving a Cartesian SD ratio of 10:1. Note that uncompensated head movements would have a similar effect. Expressed in terms of absolute gaze orientation, head rotations affect the conjugate orientation of the eyes without affecting vergence.
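The first numerical example above can be reproduced with a short script that builds the Jacobian of Eqs. A20–A23 numerically and applies Eq. A31, using the values given in the text (600-mm target distance, 60-mm interocular distance, 5-minarc SD of independent noise per eye); the finite-difference step is an implementation detail, not part of the derivation:

```python
import numpy as np

def target_xy(al, ar, e=60.0):
    # Cartesian target position (mm) from eye orientations (rad), Eqs. A20-A23
    d_r = e * np.sin(al) / np.sin(ar - al)   # law of sines, Eq. A19
    return np.array([d_r * np.cos(ar) + e / 2.0, d_r * np.sin(ar)])

def eye_angles(px, py, e=60.0):
    # eye orientations (rad) for a target at (px, py), Eq. A30
    return np.arctan2(py, px + e / 2.0), np.arctan2(py, px - e / 2.0)

def predicted_cov(px, py, sd_minarc=5.0, e=60.0, h=1e-6):
    # Cartesian covariance from independent, equal eye-position noise, Eq. A31
    al, ar = eye_angles(px, py, e)
    # numerical (central-difference) Jacobian of (al, ar) -> (x, y), Eq. A24
    J = np.column_stack([
        (target_xy(al + h, ar, e) - target_xy(al - h, ar, e)) / (2 * h),
        (target_xy(al, ar + h, e) - target_xy(al, ar - h, e)) / (2 * h),
    ])
    var = np.deg2rad(sd_minarc / 60.0) ** 2   # 5-minarc SD -> variance in rad^2
    return J @ (var * np.eye(2)) @ J.T

cov = predicted_cov(0.0, 600.0)           # target straight ahead at 600 mm
sd = np.sqrt(np.linalg.eigvalsh(cov))     # SDs along the principal axes (mm)
print(cov.round(2), sd[1] / sd[0])        # ~[[0.38, 0], [0, 153.1]], ratio ~20
```

Running this recovers the covariance quoted in the text, with depth (y) SD roughly 20 times the lateral (x) SD.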
The authors thank M. Carrozzo, F. Migliore, D. Angelini, and L. Bianchi
for help with the experiments and G. Baud-Bovy for assistance with the
statistical equations. We also thank M. Giacomelli and MES s.p.a. for luminance measurement equipment and technical assistance.
This work was supported in part by grants from the Italian Health Ministry, the Italian Space Agency, and the Human Frontier Science Program.
Address for reprint requests: J. McIntyre, Istituto Scientifico S. Lucia,
via Ardeatina 306, 00179 Rome, Italy.
Received 17 December 1996; accepted in final form 14 May 1997.
REFERENCES
ANDERSON, N. H. Algebraic models in perception. In: Handbook of Perception, edited by E. C. Carterette and M. P. Friedman. New York: Academic, 1974, vol. 2, p. 167–175.
BERKINBLIT, M. B., FOOKSON, O. I., SMETANIN, B., ADAMOVICH, S. V., AND
POIZNER, H. The interaction of visual and proprioceptive inputs in pointing to actual and remembered targets. Exp. Brain Res. 107: 326–330,
1995.
BOCK, O. Contributions of retinal versus extraretinal signals towards visual
localization in goal-directed movements. Exp. Brain Res. 64: 476–482,
1986.
BOCK, O. AND ECKMILLER, R. Goal-directed arm movements in absence of
visual guidance: evidence for amplitude rather than position control. Exp.
Brain Res. 62: 451–458, 1986.
BOOKSTEIN, F. L. Error analysis, regression and coordinate systems (commentary to Flanders et al.). Behav. Brain Sci. 15: 327–328, 1992.
BROWN, J. S., KNAUFT, E. B., AND ROSENBAUM, G. The accuracy of positioning reactions as a function of their direction and extent. Am. J. Psychol. 61: 167–182, 1948.
COLLEWIJN, H. AND ERKELENS, C. J. Binocular eye movements and the
perception of depth. In: Eye Movements and Their Role in Visual and
Cognitive Processes, edited by E. Kowler. Amsterdam: Elsevier, 1990,
p. 213–260.
DARLING, W. G. AND MILLER, G. F. Transformations between visual and
kinesthetic coordinate systems in reaches to remembered object locations
and orientations. Exp. Brain Res. 93: 534–547, 1993.
DESMURGET, M., JORDAN, M., PRABLANC, C., AND JEANNEROD M. Constrained and unconstrained movements involve different control strategies. J. Neurophysiol. 77: 1644–1650, 1997.
DIEM, K. (Editor). Tables Scientifiques. Basel: Geigy, 1963, p. 184.
ERKELENS, C. J. AND COLLEWIJN, H. Motion perception during dichoptic
viewing of moving random-dot stereograms. Vision Res. 25: 583–588,
1985a.
ERKELENS, C. J. AND COLLEWIJN, H. Eye movements and stereopsis during
dichoptic viewing of moving random-dot stereograms. Vision Res. 25:
1689–1700, 1985b.
FISK, J. D. AND GOODALE, M. A. The organization of eye and limb movements during unrestricted reaching to targets in contralateral and ipsilateral space. Exp. Brain Res. 60: 159–178, 1985.
FLANDERS, M., TILLERY, S.I.H., AND SOECHTING, J. F. Early stages in a
sensorimotor transformation. Behav. Brain Sci. 15: 309–362, 1992.
FOLEY, J. M. Error in visually directed manual pointing. Percept. Psychophys. 17: 69–74, 1975.
FOLEY, J. M. Successive stereo and vernier position discrimination as a
function of dark interval duration. Vision Res. 16: 1269–1273, 1976.
FOLEY, J. M. Binocular distance perception. Psychol. Rev. 87: 411–435,
1980.
FOLEY, J. M. AND HELD, R. Visually directed pointing as a function of
target distance, direction and available cues. Percept. Psychophys. 12:
263–268, 1972.
GAUTHIER, G. M., VERCHER, J. L., AND ZEE, D. S. Changes in ocular alignment and pointing accuracy after sustained passive rotation of one eye.
Vision Res. 34: 2613–2627, 1994.
GNADT, J. W. AND ANDERSEN, R. A. Memory related motor planning activity in posterior parietal cortex of macaque. Exp. Brain Res. 70: 216–
220, 1988.
GNADT, J. W., BRACEWELL, M., AND ANDERSEN, R. A. Sensorimotor transformation during eye movements to remembered visual targets. Vision
Res. 31: 693–715, 1991.
GNADT, J. W. AND MAYS, L. E. Neurons in monkey parietal area LIP are
tuned for eye-movement parameters in three-dimensional space. J. Neurophysiol. 73: 280–297, 1995.
GOGEL, W. C. The sensing of retinal size. Vision Res. 9: 1079–1094, 1969.
GOGEL, W. C. Scalar perceptions with binocular cues of distance. Am. J.
Psychol. 85: 477–497, 1972.
GORDON, J., GHILARDI, M. F., AND GHEZ, C. In reaching, the task is to move the hand to a target (commentary to Flanders, M. et al.). Behav. Brain Sci. 15: 337–339, 1992.
GORDON, J., GHILARDI, M. F., AND GHEZ, C. Accuracy of planar reaching
movements. I. Independence of direction and extent variability. Exp.
Brain Res. 99: 97–111, 1994.
JIANG, B. C. Accommodative vergence is driven by the phasic component
of the accommodative controller. Vision Res. 36: 97–102, 1996.
KINCHLA, R. A. AND SMYZER, F. A. A diffusion model of perception memory. Percept. Psychophys. 15: 353–360, 1967.
LACQUANITI, F. Frames of reference in sensorimotor coordination. In: Handbook of Neuropsychology, edited by F. Boller and J. Grafman. Amsterdam: Elsevier, 1997, vol. 11, p. 27–63.
LACQUANITI, F., LE TAILLANTER, M., LOPIANO, L., AND MAIOLI, C. The control of limb geometry in cat posture. J. Physiol. (Lond.) 426: 177–192, 1990.
MATIN, L., MATIN, E., AND PEARCE, D. G. Visual perception of direction
in the dark: role of local sign, eye movements and ocular proprioception.
Vision Res. 6: 453–469, 1966.
MATIN, L., MATIN, E., AND PEARCE, D. G. Eye movements in the dark
during the attempt to maintain prior fixation position. Vision Res. 10:
837–857, 1970.
MAYS, L. E. AND GAMLIN, P.D.R. Neuronal circuitry controlling the near
response. Curr. Opin. Neurobiol. 5: 763–768, 1995.
MORRISON, D. F. Multivariate Statistical Methods. Singapore: McGraw-Hill, 1990.
OWENS, D. A. AND LEIBOWITZ, H. W. Accommodation convergence and
distance perception in low illumination. Am. J. Optom. Physiol. Opt. 65:
118–126, 1980.
POULTON, E. C. Human manual control. In: Handbook of Physiology. The
Nervous System. Motor Control. Bethesda, MD: Am. Physiol. Soc., 1981,
sect. 1, vol. II, p. 1337–1389.
PRABLANC, C., ECHALLIER, J. F., KOMILIS, E., AND JEANNEROD, M. Optimal
response of eye and hand motor systems in pointing at a visual target.
I. Spatio-temporal characteristics of eye and hand movements and their
relationship when varying the amount of visual information. Biol. Cybern.
35: 113–124, 1979.
PRESS, W. H., FLANNERY, B. P., TEUKOLSKY, S. A., AND VETTERLING, W. T.
Numerical Recipes In C. Cambridge, UK: Cambridge Univ. Press, 1988,
p. 374.
REGAN, D., ERKELENS, C. J., AND COLLEWIJN, H. Necessary conditions for the perception of motion in depth. Invest. Ophthalmol. Visual Sci. 27: 584–597, 1986.
SCHOR, C. M. A dynamic model of cross-coupling between accommodation
and convergence: simulation of step and frequency responses. Optom.
Vision Sci. 69: 258–269, 1992.
SOECHTING, J. F. AND FLANDERS, M. Sensorimotor representations for pointing to targets in three-dimensional space. J. Neurophysiol. 62: 582–594,
1989a.
SOECHTING, J. F. AND FLANDERS, M. Errors in pointing are due to approximations in sensorimotor transformations. J. Neurophysiol. 62: 595–608,
1989b.
SOECHTING, J. F., TILLERY, S.I.H., AND FLANDERS, M. Transformation from
head- to shoulder- centered representation of target direction in arm
movements. J. Cognit. Neurosci. 2: 32–43, 1990.
STEINMAN, R. M., LEVINSON, J. Z., COLLEWIJN, H., AND VAN DER STEEN, J.
Vision in the presence of known natural retinal image motion. J. Opt.
Soc. Am. A 2: 226–233, 1985.
VAN EE, R. AND ERKELENS, C. J. Stability of binocular depth perception with moving head and eyes. Vision Res. 36: 3827–3842, 1996.
WESTHEIMER, G. Cooperative neural processes involved in stereoscopic
acuity. Exp. Brain Res. 36: 585–597, 1979.
WHITE, J. M., SPARKS, D. L., AND STANFORD, T. R. Saccades to remembered
target locations: an analysis of systematic and variable errors. Vision Res.
34: 79–92, 1993.