Active Self-calibration of Hand-mounted Laser Range Finders
Guo-Qing Wei, Gerd Hirzinger
Institute of Robotics and System Dynamics
German Aerospace Research Establishment
FF-DR/RS, DLR
82234 Oberpfaffenhofen
Germany
ABSTRACT
In this paper, we propose a method for self-calibration of robotic hand-mounted laser
range finders by means of active motion of the robot. Through range measurements of a plane
of unknown position and orientation, the mounting parameters of the range finders and
the coordinates of the world plane are estimated. Systematic measurement errors can
also be calibrated at the same time. The approach is fully autonomous, in that no initial
guesses of the unknown parameters need to be provided from the outside by humans for the
solution of a set of nonlinear equations. In fact, the initial values are all found in closed
form by the algorithm itself. Sufficient conditions for a unique solution are derived in
terms of controlled motion sequences. Experimental results in both simulated and
real environments are reported.
Key words: self-calibration, laser range finders, hand-eye calibration, active motion,
unique solution.
1 Introduction
Active ranging devices, such as laser range finders, have found a variety of applications
in robotics. The range data, which are obtained in the sensor coordinate system, must
first be transformed into the robot end-effector coordinate system in order for the
robot to execute any control tasks. To do this, the geometric mounting parameters of the
sensors have to be determined in advance. This is hand-eye calibration.
Previous work on calibrating laser range finders has mainly been about determining
the transformation from primary sensory measurements (e.g., the 2-dimensional (2D) coordinates of a point on the photodetector) to the 3-dimensional (3D) coordinates of the
point in the sensor coordinate system. Such calibrations depend on the mechanisms of the sensing devices. Examples include the use of a camera-light configuration [4]
or the intersection of 3 laser planes [6] for spatial measurement. To calibrate such range
sensors, the coordinates of some reference world points must be provided by some other
measurement apparatus, such as theodolites [6]. Chen and Ni [1] used the camera image of a planar surface marked with feature points to aid the calibration of the range
measurement errors after an initial calibration of a laser radar. The feature points are
first detected in the camera image and then used in the camera system to determine the
position and orientation of the plane, based on knowledge about the distances between
the feature points. The obtained pose information of the plane is then utilized to correct
the measurement error of the laser radar. Zhuang et al. [12] proposed a self-calibration
method to determine the relative positions of 3 converging laser beams, each of which
provides only length data. The purpose of the calibration is to transform the length data
into 3D coordinates of the measured point in the sensor coordinate system. A plane is
measured under different positions and orientations by the sensors. Then, the residual
errors of the plane equations are minimized with respect to the laser position parameters
and the plane's coordinates.
As for hand-eye calibration, i.e., the determination of the mounting parameters of a
sensor on a robot hand, Shiu and Ahmad [8] proposed to solve a homogeneous equation
of the kind A_i X = X B_i by making sensor measurements at different robot stations, where
A_i is the robot motion matrix for the i-th station, which can be calculated from the
known robot kinematics, and B_i is the corresponding sensor motion matrix determined
from sensory data at each station. A number of techniques for the solution of the above
equation have been developed for hand-camera calibration [9], [11]. Unfortunately, unless
a dense range map is produced, the above formulation cannot be applied to the case of
hand-laser calibration, e.g., the case of a single laser beam or sparsely arranged laser ranging beams.
This is because, in the latter case, feature correspondences cannot be established for
sparse range data to estimate the sensor motion matrix B_i.
In this paper, we propose a self-calibration method to determine the mounting parameters of sparse or even single laser beams. The method is equally applicable to the case
of dense-range sensors such as laser scanners. Similarly to Zhuang et al. [12], we also use
a planar surface as the calibration object. The position and orientation of the plane are
unknown. By making sensory measurements at different robot positions and orientations,
we are able to compute both the mounting parameters and the plane coordinates. Unlike
Zhuang et al. [12] (their problem is not hand-laser calibration), we derive closed-form
solutions for all the parameters. The values of these parameters are then used in a refining
stage as the initial values in a nonlinear optimization. Systematic measurement errors,
e.g., due to changes of temperature, can also be calibrated at the same time. Conditions for
a unique solution are derived by using controlled motion sequences.
Fig. 1: The laser range finder (laser diode, lens, position-sensitive detector, object surface).
The paper is structured as follows. In Section 2, we first formulate the hand-laser
calibration problem and then propose an active method for the problem. Sufficient conditions for a unique solution are derived. In Section 3, we experiment with our method and
test its accuracy through both simulation and measurement in a real robot environment.
In Section 4, we conclude the paper.
2 Active Self-calibration
2.1 Single-Beam Range Finders
The laser range finders we consider are single-beam elements, which are mounted in the
front of a robot gripper for direct range sensing. Figure 1 shows how a triangulation-based
range finder works. The laser diode emits a laser beam, which is reflected by the
object surface to be measured and received by a position-sensitive detector. A built-in
electric circuit then calculates the distance to the object from the position of the
reflected light on the position detector by means of triangulation. Here, we assume that
an off-line calibration has been done for this range measurement. Our main interest is to
obtain the mounting parameters of these sensors with respect to the hand coordinate system. Furthermore, we can also refine the distance measurement if the off-line calibration
is subject to errors or if the sensors undergo environmental changes, e.g., temperature
changes. Differences between the reflectance properties of the surface to be measured and
that used in calibration may also contribute to the measurement errors. Thus, on-line
self-calibration is all the more important in this case.
Although the method we shall describe is developed for the calibration of triangulation-based laser range finders, extensions to the hand-eye calibration of other range devices,
such as ultrasonic sensors and laser scanners, are straightforward, as long as range data are
provided by these sensors.
2.2 Formulation
First, we define two coordinate systems: the gripper (hand) coordinate system < g >:
X_g-Y_g-Z_g and the laser coordinate system < L >: X_L-Y_L-Z_L. Then, we model the
geometric mounting parameters of a laser range beam in the hand coordinate system
by the location r = (r_1, r_2, r_3)^T and the direction v = (v_1, v_2, v_3)^T of the beam. Here v is
a unit vector. For a given laser sensor mounted on the hand, both r and v are fixed. The
origin of the laser coordinate system < L > is located at position r, and
its coordinate axes are parallel to those of the hand system, as shown in Fig. 2. Note
that more laser beams may be needed for the full determination of the pose of the hand
with respect to an object, since a single laser beam cannot determine the rotation around
the laser direction v.
Suppose there is a world plane Π whose equation in an initial hand coordinate system
< g_0 > is

n_0^T x_{g0} + b_0 = 0,    (1)
Fig. 2: The hand and laser coordinate systems (the laser frame O_L: X_L-Y_L-Z_L is located
at r, with axes parallel to those of the hand frame O_g: X_g-Y_g-Z_g; v is the beam direction).
where n_0 is a unit vector representing the normal of the plane, b_0 is the algebraic distance
from the origin of < g_0 > to the plane, and x_{g0} = (X_{g0}, Y_{g0}, Z_{g0})^T is the coordinate vector
of a point in the < g_0 > system. To resolve the sign ambiguity in (1), we designate the
third component of n_0 as being positive. The coordinates of the calibration plane in frame
< g_0 > are thus represented by (n_0, b_0).
Suppose that the hand is moved to M different stations < g_j >, j = 0, 1, 2, ..., M, where
< g_0 > stands for the initial station, and that the range measurements with respect to
the plane at the corresponding sensor stations {< L_j >} are d_j, j = 0, 1, 2, ..., M. Denote
the robot motion from < g_0 > to < g_j > by rotation R_{g0j} and translation t_{g0j}:

x_{g0} = R_{g0j} x_{gj} + t_{g0j},    (2)
where x_{gj} is the coordinate vector of a point in < g_j >. Then the coordinates (n_j, b_j) of
plane Π in frame < g_j > can easily be shown to be

n_j = R_{g0j}^T n_0    (3)

and

b_j = b_0 + n_0^T t_{g0j}.    (4)

That is, the plane equation in frame < g_j > is

n_j^T x_{gj} + b_j = 0.    (5)
According to the geometric modeling of the laser beam in Fig. 2, the predicted range
measurement at frame < g_j > can easily be shown to be

d̂_j = −(n_j^T r + b_j) / (n_j^T v),    j = 0, 1, ..., M,    (6)

which is a function of the mounting parameters r and v, the initial plane coordinates
(n_0, b_0), and the robot motion parameters (R_{g0j}, t_{g0j}). Equation (6) is derived by
substituting the coordinates x_{gj} = r + d̂_j v of the measured point into the plane equation
(5).
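As an illustration (not part of the original derivation), the prediction of (3), (4), and (6) can be sketched numerically as follows; the function and variable names are our own:

```python
import numpy as np

def predicted_range(R_0j, t_0j, n0, b0, r, v):
    """Predicted beam range at station <g_j>, following Eqs. (3), (4), (6).

    R_0j, t_0j : rotation and translation of the hand from <g_0> to <g_j>
    n0, b0     : plane coordinates in the initial hand frame <g_0>
    r, v       : beam origin and unit direction in the hand frame
    """
    n_j = R_0j.T @ n0                     # Eq. (3): plane normal in <g_j>
    b_j = b0 + n0 @ t_0j                  # Eq. (4): plane distance in <g_j>
    return -(n_j @ r + b_j) / (n_j @ v)   # Eq. (6)
```

For example, for the plane z = 10 (n_0 = (0,0,1)^T, b_0 = −10), an unmoved hand (R = I, t = 0), and a beam along the z-axis from the hand origin, the sketch predicts a range of 10.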
Based on the above derivations, the measurement equation for the laser beam at the
j-th station can be written as

d_j + Δd_j = d̂_j,    j = 0, 1, ..., M,    (7)

where Δd_j is the range correction for d_j due to systematic measurement errors, which is
usually a function of some correction parameters k. Here we model the systematic error
as a 2nd-order polynomial:

Δd_j = k_1 d_j + k_2 d_j^2,    (8)

where k_1 and k_2 are the compensation coefficients. The model of (8) is not physics-based, but purely from mathematical considerations. Besides, in (8), no constant
shift needs to be considered, since it can be accounted for by the location parameter r;
that is, a constant error is equivalent to a shift of the origin of the laser beam.
Based on the above formulations, the hand-eye self-calibration problem is to determine, from the measurement equations (7), the plane coordinates (n_0, b_0), the distortion
parameters k_1, k_2, and the mounting parameters r and v. Note that the calibration
procedure does not assume any metric calibration field, such as points/planes of known
coordinates in a reference coordinate system. What we need is only a flat plane,
arbitrarily oriented in 3D space. Of course, the robot motions {R_{g0j}, t_{g0j}} are
assumed to be known. This assumption has been adopted in most previous work
on hand-eye calibration, e.g., Tsai and Lenz [9] and Shiu and Ahmad [8], to name a few.
Since (7) is highly nonlinear in the unknowns, its solution may need good initial
guesses for the unknown parameters. With the initial values, standard techniques like the
Newton-Raphson method can be used to iteratively adjust the unknown parameters until
convergence is reached.
2.3 Solving the calibration equations
Instead of seeking the initial values in terms of any a priori system knowledge, we let the
algorithm itself estimate them. Thus, we treat the system as completely
`black' to us. Our basic idea for self-calibration is to use designed motion sequences, e.g.,
pure translational motions, to simplify (7), so that the initial guesses can all be found
in closed form.

As an approximation, we first ignore the systematic measurement error by setting
k_1 = 0 and k_2 = 0. Thus, (7) can be rewritten, by inserting (6), (3), and (4), and by
multiplying each side of (7) with the denominator of (6), as

d_j (R_{g0j}^T n_0) · v + (R_{g0j}^T n_0) · r + n_0 · t_{g0j} + b_0 = 0,    j = 0, 1, 2, ..., M.    (9)
Now, we assume that the gripper stations < g_j >, j = 1, 2, ..., M_t are obtained by M_t
pure translational motions of the hand, i.e., R_{g0j} = I. Under these assumptions, (9) can
be simplified as:

d_j (n_0 · v) + n_0 · r + n_0 · t_{g0j} + b_0 = 0,    j = 0, 1, 2, ..., M_t.    (10)

Notice that n_0 · v, n_0 · r, and b_0 are all constants for the given plane and the given laser
beam. Thus, by subtracting equation (10) for j = 0 from that for j ≠ 0, and by defining

p_0 = n_0 · v,    (11)

we obtain

t_{g0j} · n_0 = −(d_j − d_0) p_0,    j = 1, 2, ..., M_t,    (12)

where we have used t_{g00} = 0. We can now view (12) as constituting a system of
linear equations in the plane normal n_0, since t_{g0j}, d_j, and d_0 are all known quantities.
Thus, a least-squares solution for n_0 can be obtained as

n_0 = −(Σ_j t_{g0j} t_{g0j}^T)^{−1} (Σ_j t_{g0j} (d_j − d_0)) p_0.    (13)

Remember that p_0 is also an unknown. But we know that n_0 is a unit vector. Thus, from
||n_0|| = 1, we can find the magnitude of p_0 by (13). The sign of p_0 can also be determined
from (13) by using our postulate that the third component of n_0 is positive. Thus, both
p_0 and n_0 can be determined from (13). From (13), we also obtain the conditions for a
unique solution of n_0 as follows.
Lemma 1: If a robot undergoes 3 or more translational motions (i.e., M_t ≥ 3), among
which at least 3 of the translation vectors with respect to the initial hand system are not
coplanar, then the plane's normal is uniquely determined.
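The estimation of n_0 and p_0 from (12), (13), and the unit-norm normalization can be sketched as follows (an illustrative NumPy fragment with our own naming, not from the paper):

```python
import numpy as np

def normal_from_translations(t_list, d_list):
    """Recover the plane normal n0 and p0 = n0 . v from pure translations.

    t_list : translations t_{g0j}, j = 1..Mt (t_{g00} = 0 is omitted)
    d_list : ranges (d0, d1, ..., dMt) at the corresponding stations
    Solves Eq. (12): t_{g0j} . n0 = -(d_j - d0) p0, in least squares.
    """
    T = np.asarray(t_list, float)                  # Mt x 3 design matrix
    rhs = -(np.asarray(d_list[1:]) - d_list[0])    # -(d_j - d0)
    # Least-squares solution up to the unknown scalar p0: u = n0 / p0
    u, *_ = np.linalg.lstsq(T, rhs, rcond=None)
    p0 = 1.0 / np.linalg.norm(u)                   # magnitude from ||n0|| = 1
    n0 = u * p0
    if n0[2] < 0:                                  # postulate: n0[2] > 0
        n0, p0 = -n0, -p0
    return n0, p0
```

With three or more noncoplanar translations, as required by Lemma 1, the least-squares system has full rank and the recovery is unique.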
After n_0 and p_0 become known, we see that (11) gives only one constraint on the
direction parameter v. Therefore, pure translational motion of the hand does not solve
the complete system. To solve for v, we have to find more constraints. There are two
ways to create new constraints on v. The first is to re-orientate the hand and, for the
new orientation of the hand, to make one translational motion. Denote the two new
robot frames by < g'_0 > and < g'_1 >, where from < g'_0 > to < g'_1 > there is only
a translational motion t'_{g01}. Suppose the range measurements at the two stations are d'_0
and d'_1, respectively. Then, in the same way as (12) was generated, we obtain one more
constraint on v:

(d'_1 − d'_0) n'_0 · v = −(t'_{g01} · n'_0),    (14)
where we have substituted (11) into (12). Note that n'_0, which is the plane's orientation
in the new frame < g'_0 >, is known, since we have obtained the orientation n_0 at the old
frame < g_0 > and we know the amount of robot motion from < g_0 > to < g'_0 >. If
more than one translational motion is made, i.e., {< g'_j >, j = 0, 1, ..., M'_t > 1}, more
equations of the form (14) are obtained, the summation of which on both sides can give
an average value of n'_0 · v, but still only one constraint on v is generated. Thus, to enable
a unique solution of v, another re-orientation of the hand and the following translations
{< g''_j >, j = 0, 1, ..., M''_t} should be made to give a third constraint on v. From the three
linear constraints we can then solve for the laser direction vector v in closed form. The
second way of generating new constraints on v is to re-orientate the plane (at least)
two times without assuming the motions of the plane to be known. This is equivalent to
using (at least) three independent planes. For each plane, the robot should make at least 3
noncoplanar translational motions, as indicated by Lemma 1. Then, the three sequences
of stations {< g_j >}, {< g'_j >} and {< g''_j >} corresponding to the three planes are
treated independently. (These stations can have the same physical positions in the robot
base system.) The same procedures as in (10)-(13) are performed for each plane. After
the parameter p_0 and the normal n_0 corresponding to each plane are determined, three
equations of the form (11) can be obtained and solved jointly for v in closed form. Of
course, it is equivalent in the second case to re-orientate the robot without assuming the
motions of the hand from < g_0 > to < g'_0 > and from < g_0 > to < g''_0 > to be known. The
advantage of assuming unknown plane motions or unknown robot motions in the second
case is that if the robot has only rotation errors, then the planes' orientations and the laser
direction can be determined independently of the rotation errors, since only translational
motions are used. From the above procedures, we also obtain conditions for a unique
solution of the laser direction as follows.
Lemma 2: If (a) the two rotational axes of the motions from station < g_0 > to station
< g'_0 > and from station < g_0 > to station < g''_0 > are not parallel to each other and
none of them is in the same direction as the normal of the plane Π; or (b) the normals
of the three arbitrarily orientated planes are not coplanar, then a unique solution for
the laser direction v can be found.
Proof: The proof can easily be carried out by analyzing (14) and (11).
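In the three-plane case, the closed-form solution for v amounts to jointly solving three constraints of the form (11). A minimal sketch (our own naming; the normals are assumed to be expressed in a common frame):

```python
import numpy as np

def direction_from_planes(normals, p_values):
    """Solve n0^(i) . v = p0^(i), i = 1, 2, 3 (Eq. (11)) for the beam
    direction v; the three normals must not be coplanar (Lemma 2(b))."""
    N = np.asarray(normals, float)        # 3 x 3, one plane normal per row
    v = np.linalg.solve(N, np.asarray(p_values, float))
    return v / np.linalg.norm(v)          # enforce the unit-norm constraint
```

With noise-free data the linear solve already returns a unit vector; the final normalization simply projects a noisy estimate back onto the unit sphere.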
After both n_0 and v have been determined, we can sum (10) over j to obtain

b_0 = e_0 − n_0 · r,    (15)

where

e_0 = −(1/(M_t + 1)) ((Σ_j d_j) n_0 · v + n_0 · Σ_j t_{g0j}),    (16)

which is a known quantity. Equation (15) provides only one constraint on b_0 and r, which
is all the information contained in pure translational motions. To solve completely for
b_0 and r, we need motions with non-zero rotations. Suppose {< g_j >, j = M_t + 1, ..., M}
are obtained by non-zero rotational motions. Then we can substitute (15) into (9) for
j = M_t + 1, ..., M, to get a set of linear equations in r:

n_0^T (R_{g0j} − I) r = −d_j (R_{g0j}^T n_0) · v − n_0 · t_{g0j} − e_0,    (17)

from which r can be solved in closed form. The conditions for a unique solution
are that there should be at least three non-zero rotational motions whose axes of rotation
with respect to < g_0 > are not collinear with r, since otherwise we have R_{g0j} r = r,
which provides no constraint on r according to (17). After r is known, b_0 can easily be
computed from (15).
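The closed-form recovery of r from (17) and of b_0 from (15) can be sketched as follows (an illustrative NumPy fragment with our own naming; e_0 is the known constant of Eq. (16)):

```python
import numpy as np

def location_from_rotations(rot_data, n0, v, e0):
    """Solve Eq. (17) for the beam location r, then b0 from Eq. (15).

    rot_data : list of (R_{g0j}, t_{g0j}, d_j) from stations with
               non-zero rotation
    n0, v    : plane normal and beam direction, already estimated
    e0       : the known constant of Eq. (16)
    """
    A, rhs = [], []
    for R, t, d in rot_data:
        A.append(n0 @ (R - np.eye(3)))                   # row n0^T (R - I)
        rhs.append(-d * ((R.T @ n0) @ v) - n0 @ t - e0)  # r.h.s. of Eq. (17)
    r, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(rhs), rcond=None)
    b0 = e0 - n0 @ r                                     # Eq. (15)
    return r, b0
```

Each rotational station contributes one scalar row; three rotations with suitably chosen axes make the stacked system full rank, matching the uniqueness condition stated above.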
So far we have obtained initial values for all the unknown parameters. These values
would be the exact solution if there were neither measurement noise nor systematic measurement errors. In general, however, when noise and systematic errors are present,
the obtained values can only be regarded as good initial guesses, because the estimation
errors accumulate in a sequential way. Thus we use these values as the starting point in
the simultaneous adjustment of all the unknowns (including the correction coefficients)
by solving the original measurement equations (7). The use of the original measurement
equations minimizes the range measurement errors, instead of the residuals of the plane
equations [12]. The former solution is optimal in the sense of minimum variance if the
measurement noise is Gaussian, while the latter may amplify the measurement noise.
Notice that the measurement equations for {< g'_j >} and {< g''_j >} should also be included to stabilize the iteration, as can be understood from their role in specifying a
unique v. The unit-vector constraints on n_0 and v are imposed at this stage by using a
replacement exemplified by n_{i3} = ±sqrt(1 − n_{i1}^2 − n_{i2}^2), where i3 is chosen as the component of the
largest magnitude in the initial vector n = (n_1, n_2, n_3), with the sign of n_{i3} being determined
from the initial value of n.
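For the refinement stage, the residuals of the measurement equations (7) with the error model (8) can be sketched as follows for one beam and one plane (the parameter packing and names are our own; an iterative solver such as Newton-Raphson or Gauss-Newton would be driven by these residuals, with the unit-norm replacement handled by the caller):

```python
import numpy as np

def residuals(params, stations):
    """Residuals of Eq. (7) for one laser beam and one plane.

    params   : vector (n0 (3), b0, r (3), v (3), k1, k2); the unit-norm
               constraints on n0 and v are assumed to be imposed outside
    stations : list of (R_{g0j}, t_{g0j}, d_j)
    """
    n0, b0 = params[0:3], params[3]
    r, v = params[4:7], params[7:10]
    k1, k2 = params[10], params[11]
    res = []
    for R, t, d in stations:
        nj = R.T @ n0                          # Eq. (3)
        bj = b0 + n0 @ t                       # Eq. (4)
        d_hat = -(nj @ r + bj) / (nj @ v)      # Eq. (6)
        res.append(d + k1 * d + k2 * d**2 - d_hat)   # Eqs. (7)-(8)
    return np.asarray(res)
```

With noise-free data and the true parameters, all residuals vanish, which is a convenient sanity check before starting the iteration.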
In the case of re-orientating the plane Π, we have, for each plane, three new unknowns,
which are the plane's coordinates. When the number of planes is chosen reasonably large,
e.g., 10, for robustness considerations, the solution of (7) by the Newton-Raphson method
becomes very expensive, since the inversion of a large matrix is involved. We observed that the
normal matrix in the normal equations has a bordered block-diagonal structure, because
the coordinates of a plane do not appear in all measurement equations. This enables
us to employ a partitioning scheme proposed by Brown [3] to make the computational
cost only linearly proportional to the number of planes. It should be noted that the
Levenberg-Marquardt algorithm [5], which has been widely used in nonlinear optimization,
takes no account of the structure of the normal matrix. Its computational cost increases
quadratically with the dimensions of the problem. Details of the reduction scheme and
the derivation of the Gauss-Markov theorem [7] for estimation of parameter variances in
the reduction case can be found in [10].
3 Experiments
3.1 Simulations
1) Setup:
We simulated a robot end-effector mounted with four laser range finders, with their
mounting parameters chosen similar to those in a real environment. The active range of
the range finders is 0-30 mm. Four planes of different positions and orientations were
used for their calibration. The motion uncertainties of the robot are assumed to be, for
translation, σ_T = 0.05 mm, and for rotation, σ_R = 0.03 degrees; both errors are assumed
to be on the Y_g axis. Besides, we assume a measurement error of σ_d = 0.03 mm for range
data. No systematic measurement error was simulated.
2) Accuracy:
Table 1: Laser parameters' variances in real experiments.

variance | n_01    | n_02    | b_0 (mm) | v_1     | v_2
         | 3.1e-5  | 3.5e-5  | 0.021    | 8.59e-4 | 9.54e-4

variance | r_1 (mm) | r_2 (mm) | r_3 (mm) | k_1    | k_2
         | 8.86e-3  | 9.47e-3  | 2.19e-2  | 5.9e-4 | 2.68e-5
We compare the calibrated parameters with the ground truth by Monte-Carlo simulation. The error e_r of the location parameter r is computed as the magnitude of the
difference vector between the computed and the given ones, and the error e_v of the orientation parameter v as the angle between the computed and the given vectors. From 1000
Monte-Carlo simulations, the rms errors of e_r and e_v were found to be 0.19 mm and 0.17
degrees, respectively.
3) Number of iterations and speed: In most cases, the number of iterations is within
10. With the reduction scheme described in [10], considerable run time can be saved.
For example, when 4 planes are used for calibration, the running times with and without
reduction are 3 seconds and 10 seconds on a Silicon Graphics Indigo2 workstation, respectively. When 13 planes are used, the respective times are 8 seconds and 81 seconds. A
speed-up factor of 10 is achieved by the reduction scheme of [10].
3.2 Real Experiments
The calibration method has been implemented in a real robotic environment. The robot is
a Manutec R2, with a repeatability of 0.05 mm for translational motion (unknown accuracy
for rotation). The gripper is a multisensory one developed at the authors' institution (with
2 tiny cameras and 4 laser range finders in the front); see [2] for details.
A total of 13 planes were used in the calibration by arbitrarily orientating a
metal plate covered with a sheet of white paper. For each plane, the robot makes both
translational and rotational motions. To ensure measurement accuracy, we keep the angle
between the laser direction and the planes' normals roughly within 45 degrees, which can be
realized either by manually controlling the plane's orientation, or by doing an initial
calibration first and then using the computed knowledge about the plane's position and orientation
to achieve this. The polynomial model of (8) is used to account for the systematic errors of
each laser beam independently. To assess the calibration accuracy, we use the Gauss-Markov theorem (see [10] for details) to estimate the variances of the calibrated parameters. Table 1 shows the variances
for one of the laser range finders. It can be seen from the table that the parameter estimates
are highly accurate. The variance of the residual error of the measurement equation is
estimated to be 0.03 mm, which is the value used in the simulation of measurement
uncertainty.
As another test of the calibration accuracy, we reconstruct the plane in 3D space
at one gripper station using the 4 calibrated range sensors and then predict the range
values after robot motions. The differences between the measured range values and the
predicted ones are computed. From 320 such tests, we obtain an rms error of 0.088 mm
and a maximum error of 0.41 mm. If systematic measurement errors are not modeled
in the calibration, the respective rms and maximum errors are 0.092 mm and 0.43 mm.
Notice that these errors include the motion uncertainties of the robot. The above
accuracy may be misleading; it provides only a relative accuracy rather than absolute
accuracy, since absolute errors may cancel each other in relative evaluations. The absolute
accuracy of the reconstructed plane in the hand coordinate system is unknown, since we
do not have ground truth.
4 Conclusions
In this paper, we have presented a fully automatic method for calibrating the mounting
parameters of laser range finders for use on a robot hand. The method does not need
any metric calibration object and relies only on the flatness of a planar surface. The
calibration also computes the correction parameters for systematic measurement errors.
Although formulated as a nonlinear optimization problem, the calibration procedure does
not need any human aid to provide initial values for the unknown parameters. Highly
accurate results have been achieved in both simulations and real experiments. The use of
the reduction scheme has substantially speeded up the calibration procedure.

At the current stage, a robot with 6 degrees of freedom is needed for the calibration. For
vehicles moving on the ground, knowledge about the calibration object may be needed
for the calibration, e.g., the angle between two planes. This constrained motion case is
currently under investigation.
Acknowledgment
The authors are very thankful to Mr. Jinxing Shi for valuable information about the
range sensors.
References
[1] Y.D. Chen, J. Ni, "Dynamic calibration and compensation of a 3-D laser radar
scanning system," IEEE Trans. Robotics Automat., Vol.9, No.3, pp.318-323, 1993.
[2] G. Hirzinger, B. Brunner, J. Dietrich, J. Heindl, "Sensor-based space robotics - ROTEX
and its telerobotic features," IEEE Trans. Robotics Automat., Vol.9, No.5, pp.649-663, 1993.
[3] Manual of Photogrammetry, 4th ed. Bethesda, MD: Amer. Soc. of Photog., 1980.
[4] P. Mansbach, "Calibration of a camera and light source by fitting to a physical
model," Computer Vision, Graphics, and Image Processing, Vol.35, pp.200-219, 1986.
[5] W. H. Press et al., Numerical Recipes in C: The Art of Scientific Computing, 2nd
edition, Cambridge University Press, 1992.
[6] B.R. Sorensen, M. Donath, G.B. Yang, R.C. Starr, "The Minnesota scanner: a prototype sensor for three-dimensional tracking of moving body segments," IEEE
Trans. Robotics Automat., Vol.RA-5, No.4, pp.499-509, 1989.
[7] C.R. Rao, Linear Statistical Inference and Its Applications, second edition, John
Wiley & Sons, Inc., 1973.
[8] Y.C. Shiu, S. Ahmad, \Calibration of wrist-mounted robotic sensors by solving homogeneous transformation equations of the form AX = XB ," IEEE Trans. Robotics
Automat., Vol.RA-5, No.1, pp.16-29, 1989.
[9] R.Y. Tsai, R.K. Lenz, "A new technique for fully autonomous and efficient 3D
robotics hand/eye calibration," IEEE Trans. Robotics Automat., Vol.5, No.3, pp.345-358, 1989.
[10] G.-Q. Wei, K. Arbter, G. Hirzinger, "Active self-calibration of robotic eyes and hand-eye
relationships with motion planning," IEEE Trans. Robotics Automat., to appear.
[11] H. Zhuang, Y.C. Shiu, \A noise-tolerant algorithm for robotic hand-eye calibration
with or without sensor orientation measurement," IEEE Trans. System Man Cybern.,
Vol.SMC-23, No.4, pp.1168-1175, 1993.
[12] H. Zhuang, B. Li, Z.S. Roth, X. Xie, "Self-calibration and mirror center offset elimination of a multi-beam laser tracking system," Robotics and Autonomous Systems,
Vol.9, pp.255-269, 1992.