algorithms
Article
Improved Direct Linear Transformation for Parameter Decoupling in Camera Calibration
Zhenqing Zhao 1, Dong Ye 1,*, Xin Zhang 1, Gang Chen 1 and Bin Zhang 2

1 School of Electrical Engineering and Automation, Harbin Institute of Technology, Heilongjiang 150001, China; [email protected] (Z.Z.); [email protected] (X.Z.); [email protected] (G.C.)
2 Capital Aerospace Machinery Company, Beijing 100076, China; [email protected]
* Correspondence: [email protected]; Tel.: +86-186-4603-2603; Fax: +451-8641-7581

Academic Editor: Faisal N. Abu-Khzam
Received: 11 December 2015; Accepted: 25 April 2016; Published: 29 April 2016
Abstract: For camera calibration based on direct linear transformation (DLT), the camera’s intrinsic
and extrinsic parameters are simultaneously calibrated, which may cause coupling errors in the
parameters and affect the calibration parameter accuracy. In this paper, we propose an improved
direct linear transformation (IDLT) algorithm for calibration parameter decoupling. This algorithm
uses a linear relationship of calibration parameter errors and obtains calibration parameters by
moving a three-dimensional template. Simulation experiments were conducted to compare the
calibration accuracy of the DLT and IDLT algorithms under image noise and distortion. The results show that the calibration parameters of the IDLT algorithm achieve higher accuracy because the algorithm removes the coupling errors.
Keywords: camera calibration; direct linear transformation; parameter decoupling
1. Introduction
With the technological developments of digital cameras and microprocessors, computer vision has
been widely applied to robot navigation, surveillance, three-dimensional (3D) reconstruction and other
fields for its high speed, high accuracy and non-contact nature. To obtain improved 3D information
from a two-dimensional (2D) image, it is necessary to calibrate the intrinsic parameters of the camera,
such as the focal distance and optical center point, as well as its extrinsic parameters, such as rotation
and translation, which relate the world coordinate system to the camera coordinate system. Over the
last decade, numerous studies have focused on this area. In [1], the authors proposed an efficient
approach for the dynamic calibration of multiple cameras. In [2], the authors proposed a calibration
algorithm based on line images. In [3], the authors used one-dimensional (1D) information to calibrate
the parameters. All of these algorithms make camera calibration faster and more convenient.
To obtain high accuracy parameter results, high accuracy 3D or 2D templates can be used. These
algorithms include direct linear transformation (DLT) [4], the Tsai calibration method [5] and the Zhang
calibration method [6]. In space rendezvous and docking, as well as in visual tracking applications,
it is necessary to obtain the specific extrinsic and intrinsic parameters of the camera. At the same
time, with the wide application of the coordinate measuring machine (CMM) [7,8], high precision,
large range 3D templates have become more readily available. The DLT algorithm is more suitable for
these applications.
The DLT algorithm is based on the perspective projection between 3D space points and 2D image
points. It calculates a transformation matrix and obtains the camera’s intrinsic and extrinsic parameters
according to the parameter decomposition. With this model, only one 3D template in one position is
required for calculation; therefore, the template size, number of feature points and relative distance
between the template and camera are critical [9–11].
Algorithms 2016, 9, 31; doi:10.3390/a9020031
www.mdpi.com/journal/algorithms
Camera calibration with image noise and distortion remains a difficult task. Algorithms typically solve the camera parameters by analysis; then, they solve the model of image noise and distortion by optimization [6,12,13]. All traditional optimization algorithms applied to visual systems, such as the modified Newton algorithm, the Levenberg–Marquardt algorithm and the genetic algorithm, require a good initial solution to optimize. Therefore, the analytic solution is not only robust to image noise, but also effective for addressing distortion.
When the calibration data contain noise and distortion, coupling errors exist between the camera's extrinsic and intrinsic parameters. This means that an error in an extrinsic parameter may be compensated by an error in an intrinsic parameter [14–16]. Existing parameter decoupling methods use calibration models without considering extrinsic parameters. These include vanishing points [15,17,18], straight lines [16,19], and cross ratios [20,21]. In the present study, the image re-projection error from the mathematical model and the camera pinhole geometric model were analyzed. The results show that the coupling error causes a small re-projection error variance and a large parameter calibration error variance. At the same time, this variance is related to the template size, number of feature points and relative distance between the template and camera. To improve the calibration accuracy of the camera parameters, a decoupling algorithm of intrinsic and extrinsic parameters is proposed based on DLT.
The remainder of this paper is organized as follows. The relationship between the coupling error, template size and relative distance between the template and camera is described in Section 2. The improved DLT (IDLT) algorithm for camera calibration is proposed in Section 3. The experimental results are presented in Section 4, and the conclusions are given in Section 5.
2. Parameter Coupling Analysis

2.1. Camera Model
The camera model is shown in Figure 1. A 3D point is denoted by $P_i = [X_i\ Y_i\ Z_i]^T$. A 2D image point is denoted by $p_i = [u_i\ v_i]^T$. The relationship between 3D point $P_i$ and its image projection, $p_i$, is given by:

$$u_i = u_0 + a_x \frac{r_{11} X_i + r_{12} Y_i + r_{13} Z_i + t_x}{r_{31} X_i + r_{32} Y_i + r_{33} Z_i + t_z}$$
$$v_i = v_0 + a_y \frac{r_{21} X_i + r_{22} Y_i + r_{23} Z_i + t_y}{r_{31} X_i + r_{32} Y_i + r_{33} Z_i + t_z} \qquad (1)$$
where $a_x = f/d_x$, $a_y = f/d_y$ and $f$ is the camera focal distance. In addition, $d_x$, $d_y$ are pixel sizes in the horizontal and vertical directions; $(u_0, v_0)$ is the optical center point of the image; and R, T are the rotation matrix and translation vector, respectively, which relate the world coordinate system to the camera coordinate system. Furthermore, $r_{ij}$ is the i-th row and j-th column element of R, and $t_x$, $t_y$, $t_z$ are the elements of T. For simplicity, the rotation matrix is represented by Euler angles, $\psi$, $\theta$, $\varphi$, which represent the rotation around the respective x, y and z axes.
Figure 1. Camera model.
Owing to the influence of image noise and distortion, the calibration parameters, image points
and space points do not completely conform to Equation (1). Thus, we have:
$$\Delta u_i = u_i - u_0 - a_x \frac{r_{11} X_i + r_{12} Y_i + r_{13} Z_i + t_x}{r_{31} X_i + r_{32} Y_i + r_{33} Z_i + t_z}$$
$$\Delta v_i = v_i - v_0 - a_y \frac{r_{21} X_i + r_{22} Y_i + r_{23} Z_i + t_y}{r_{31} X_i + r_{32} Y_i + r_{33} Z_i + t_z} \qquad (2)$$

$$\min f(u, v) = \sum_{i=1}^{n} \left(\Delta u_i^2 + \Delta v_i^2\right) \qquad (3)$$
Equation (2) describes the calculation of image re-projection errors. Small re-projection errors are
desired, which means that the calibration results satisfy Equation (3). However, the question arises of whether a small image re-projection error leads to high calibration accuracy of the camera parameters.
Analysis of the relationship between the image re-projection error and the calibration accuracy is
therefore required.
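Equations (2) and (3) amount to summing squared residuals between observed and predicted image points. A small illustrative sketch, assuming the same pinhole model as above (function and variable names are ours):

```python
import numpy as np

def reprojection_error(P, p_obs, ax, ay, u0, v0, R, T):
    """Sum of squared image residuals, i.e. Equation (3) built from the
    per-point residuals of Equation (2).

    P: Nx3 world points, p_obs: Nx2 observed image points.
    """
    Pc = P @ R.T + T                       # world -> camera coordinates
    du = p_obs[:, 0] - u0 - ax * Pc[:, 0] / Pc[:, 2]
    dv = p_obs[:, 1] - v0 - ay * Pc[:, 1] / Pc[:, 2]
    return float(np.sum(du**2 + dv**2))
```

With perfect correspondences the value is zero; noise and distortion make it positive, and, as the following analysis shows, a small value does not by itself guarantee accurate individual parameters.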
2.2. Error Coupling Analysis
The two equations in Equation (2) have the same form. We thus analyze the first equation; all
conclusions can apply to the second equation. We assume that $u_0$ has error $\Delta u_0$ and that $t_x$ has error $\Delta t_x$. From Equation (2), we have:
$$\Delta u'_i = u_i - (u_0 + \Delta u_0) - a_x \frac{r_{11} X_i + r_{12} Y_i + r_{13} Z_i + (t_x + \Delta t_x)}{r_{31} X_i + r_{32} Y_i + r_{33} Z_i + t_z} = \Delta u_i - \left(\Delta u_0 + \frac{a_x \Delta t_x}{r_{31} X_i + r_{32} Y_i + r_{33} Z_i + t_z}\right) \qquad (4)$$
When $\Delta t_x = -(r_{31} X_i + r_{32} Y_i + r_{33} Z_i + t_z)\,\Delta u_0 / a_x$, we have $\Delta u'_i = \Delta u_i$. The image re-projection error
will not change. However, because the space point coordinate values are different, other image point
re-projection errors will be changed. We assume that:
$$\Delta t_x = -\frac{r_{31} X_1 + r_{32} Y_1 + r_{33} Z_1 + t_z}{a_x} \Delta u_0 = -\frac{r_{31} X_i + r_{32} Y_i + r_{33} Z_i + t_z}{a_x} \Delta u_0 - \frac{r_{31} dX_i + r_{32} dY_i + r_{33} dZ_i}{a_x} \Delta u_0 \qquad (5)$$
where $dX_i = X_1 - X_i$, $dY_i = Y_1 - Y_i$, $dZ_i = Z_1 - Z_i$, and image re-projection errors are given by:
$$\Delta u'_i = \Delta u_i + \Delta u''_i \quad \text{with} \quad \Delta u''_i = \frac{r_{31} dX_i + r_{32} dY_i + r_{33} dZ_i}{r_{31} X_i + r_{32} Y_i + r_{33} Z_i + t_z} \Delta u_0 \qquad (6)$$
Based on Equation (6), if dZi is very small and tz is very large, the change in image re-projection
errors will be very small. This is because coupling $\Delta u_0$ and $\Delta t_x$ reduces the image re-projection error.
In order to reduce the coupling effect, better results can be obtained if the 3D template is large and
close to the camera. In fact, the template size is limited, and the calibration distance is limited by the
field of view and the lens parameter.
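The compensation described by Equations (5) and (6) can be checked numerically: offsetting the principal point and applying the coupled $\Delta t_x$ from Equation (5) barely changes the projected coordinates when the template depth range is small relative to $t_z$. A sketch with hypothetical numbers (identity rotation and $t_x = 0$ assumed for clarity):

```python
import numpy as np

# Hypothetical camera and template (identity rotation, t_x = 0 for clarity).
ax_, u0_, tz = 1454.5, 700.0, 2.0
rng = np.random.default_rng(0)
P = np.column_stack([rng.uniform(-0.35, 0.35, 50),
                     rng.uniform(-0.35, 0.35, 50),
                     rng.uniform(0.0, 0.05, 50)])    # shallow depth range dZ

def u_proj(P, tx, u0):
    # Equation (1) with R = I: u = u0 + a_x (X + t_x) / (Z + t_z)
    return u0 + ax_ * (P[:, 0] + tx) / (P[:, 2] + tz)

u_true = u_proj(P, 0.0, u0_)

du0 = 1.0                                  # 1-pixel principal-point error
dtx = -(P[0, 2] + tz) * du0 / ax_          # coupled offset from Equation (5)
u_coupled = u_proj(P, dtx, u0_ + du0)

# Per Equation (6) the residual change is du0 * dZ_i / (Z_i + t_z): tiny here.
print(np.abs(u_coupled - u_true).max())
```

With a 0.05 m depth spread at $t_z = 2$ m, the maximum change is on the order of a few hundredths of a pixel, so the coupled pair $(\Delta u_0, \Delta t_x)$ is nearly invisible in the re-projection error.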
With respect to focal distance and translation vector, we assume $a_x$ has error $\Delta a_x$ and $t_z$ has error $\Delta t_z$. From Equation (2), we have:
$$\Delta u'_i = u_i - u_0 - \frac{(a_x + \Delta a_x) C_i}{B_i + \Delta t_z} \qquad (7)$$
where $B_i = r_{31} X_i + r_{32} Y_i + r_{33} Z_i + t_z$ and $C_i = r_{11} X_i + r_{12} Y_i + r_{13} Z_i + t_x$ (so that Equation (7) reduces to Equation (2) when the errors vanish). We assume that $\Delta a_x / \Delta t_z = a_x / B_1$, and we obtain $\Delta u'_1 = \Delta u_1$. Other image re-projection errors are:
$$\Delta u'_i = \Delta u_i + \Delta u''_i \quad \text{with} \quad \Delta u''_i = -\frac{a_x C_i \Delta t_z}{B_1 (B_i + \Delta t_z)} \approx -\frac{a_x C_i}{B_i} \times \frac{\Delta t_z}{B_1} \qquad (8)$$
From Equation (1), we have $a_x C_i / B_i \approx (u_i - u_0)$. We assume that the Euler angles of the world coordinate system relative to the camera coordinate system are zero. By setting the first space point $Z_1 = 1$, we have:

$$\Delta u''_i = -\frac{u_i - u_0}{t_z} \Delta t_z \qquad (9)$$
Based on Equation (9), if $(u_i - u_0)$ is small and $t_z$ is large, the change in the image re-projection error will be very small. This is because coupling $\Delta a_x$ and $\Delta t_z$ reduces the image re-projection error.
To reduce the coupling effect, better results can be obtained if the 3D template is closer to the camera.
Similar to the distortion, the camera focal distance and translation vector coupling error are related to the image point location. Therefore, under the influence of distortion, the camera focal distance and translation vector coupling error may increase the calibration error.
The coupling error is present in the geometric model. The camera model is also called a pinhole model; that is, the space point, image point and camera origin are located on the same line. Figure 2 shows the coupling between the optical center point of the image and the translation vector. When the optical center point of image $O_c$ is offset, the origin of the camera coordinate system shifts. To continue fitting the pinhole model, the space point, $P_i$, under the camera coordinate system will produce an offset. The space point under the world coordinate system is fixed so that the relationship between the world coordinate system and camera coordinate system changes.
Figure 2. Relationship between the principal point and translation.

Figure 3 shows the coupling between the camera focal distance and the translation vector. The camera focal distance is calibrated based on the relative distance between multiple space points. When the camera focal distance changes, that is, the origin of the camera coordinate system, $O_c$, moves to $O'_c$, the world coordinate system moves along the direction of the optical axis of the camera because the distance between $P_1$ and $P_2$ is fixed. When the Euler angles are zero, the value of the optical axis direction is $t_z$.
In summary, under the influence of image noise and distortion, when the size of the templates,
number of feature points and relative distance between the template and camera are certain, coupling
errors occur in the camera’s intrinsic and extrinsic parameters that affect the calibration parameter
accuracy. Therefore, to improve the accuracy of the calibration parameters, it is necessary to remove
the parameter coupling errors.
Figure 3. Relationship between the focal distance and translation.
3. Improved Direct Linear Transformation

3.1. Direct Linear Transformation

The DLT algorithm dates to the work of [4] or earlier. In [13,22], the authors analyzed the definition of the world coordinate system and proposed the use of data normalization to reduce the noise impact on the camera calibration parameters. In [10,23], the authors proposed combining the DLT algorithm with an optimization algorithm to address the camera distortion models.
The DLT algorithm consists of two steps: (1) homogeneous equation solving; and (2) parameter factorization. Space point P and its image p are related by homography M:

$$s p = M P \quad \text{with} \quad M = K[R|T] \qquad (10)$$

where K is the camera intrinsic matrix and $m_{ij}$ is the i-th row and j-th column element of M. The 12 parameters in $M_{3 \times 4}$ are unknown in the matrix $M'_{12 \times 1}$. From Equation (10), we have:

$$A M' = 0 \quad \text{with} \quad \|M'\| = C \qquad (11)$$

where C is a constant, and:

$$A = \begin{bmatrix} P^T & 0 & -u P^T \\ 0 & P^T & -v P^T \end{bmatrix} \qquad (12)$$

Because Equation (11) is a homogeneous equation, we assume that $m_{34}$ is equal to one. The M matrix can be solved by singular value decomposition. We can then obtain the intrinsic and extrinsic parameters of the camera through factorization:

$$t_z = 1 / \sqrt{m_{31}^2 + m_{32}^2 + m_{33}^2} \qquad (13)$$

$$u_0 = t_z^2 (m_{11} m_{31} + m_{12} m_{32} + m_{13} m_{33}) \qquad (14)$$

$$v_0 = t_z^2 (m_{21} m_{31} + m_{22} m_{32} + m_{23} m_{33}) \qquad (15)$$

$$a_x = t_z^2 \sqrt{(m_{12} m_{33} - m_{13} m_{32})^2 + (m_{11} m_{33} - m_{13} m_{31})^2 + (m_{11} m_{32} - m_{12} m_{31})^2} \qquad (16)$$

$$a_y = t_z^2 \sqrt{(m_{22} m_{33} - m_{23} m_{32})^2 + (m_{21} m_{33} - m_{23} m_{31})^2 + (m_{21} m_{32} - m_{22} m_{31})^2} \qquad (17)$$

$$t_x = t_z (m_{14} - u_0) / a_x \qquad (18)$$

$$t_y = t_z (m_{24} - v_0) / a_y \qquad (19)$$

$$\begin{array}{lll}
r_{11} = t_z (m_{11} - u_0 m_{31}) / a_x & r_{12} = t_z (m_{12} - u_0 m_{32}) / a_x & r_{13} = t_z (m_{13} - u_0 m_{33}) / a_x \\
r_{21} = t_z (m_{21} - v_0 m_{31}) / a_y & r_{22} = t_z (m_{22} - v_0 m_{32}) / a_y & r_{23} = t_z (m_{23} - v_0 m_{33}) / a_y \\
r_{31} = t_z m_{31} & r_{32} = t_z m_{32} & r_{33} = t_z m_{33}
\end{array} \qquad (20)$$

Accordingly, with at least six non-coplanar feature points and their corresponding image points, the camera focal distance, optical center point, rotation matrix and translation vector can be solved according to Equations (11)–(20).
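The two DLT steps can be sketched as follows: stack the two rows of Equation (12) for each point, take the null vector of A by SVD, rescale so that $m_{34} = 1$, and apply Equations (13)–(20). This is our own illustrative implementation, not the authors' code:

```python
import numpy as np

def dlt_calibrate(P, p):
    """DLT calibration: solve A M' = 0 (Equations (11)-(12)) by SVD, then
    factor M into camera parameters (Equations (13)-(20)).

    P: Nx3 world points (N >= 6, non-coplanar); p: Nx2 image points.
    """
    n = len(P)
    A = np.zeros((2 * n, 12))
    for i in range(n):
        X = np.append(P[i], 1.0)               # homogeneous world point
        u, v = p[i]
        A[2 * i, 0:4] = X
        A[2 * i, 8:12] = -u * X
        A[2 * i + 1, 4:8] = X
        A[2 * i + 1, 8:12] = -v * X
    M = np.linalg.svd(A)[2][-1].reshape(3, 4)  # null vector of A
    M /= M[2, 3]                               # fix the scale so that m34 = 1
    m1, m2, m3 = M[0, :3], M[1, :3], M[2, :3]
    tz = 1.0 / np.linalg.norm(m3)                      # Eq (13)
    u0 = tz**2 * (m1 @ m3)                             # Eq (14)
    v0 = tz**2 * (m2 @ m3)                             # Eq (15)
    ax = tz**2 * np.linalg.norm(np.cross(m1, m3))      # Eq (16)
    ay = tz**2 * np.linalg.norm(np.cross(m2, m3))      # Eq (17)
    tx = tz * (M[0, 3] - u0) / ax                      # Eq (18)
    ty = tz * (M[1, 3] - v0) / ay                      # Eq (19)
    R = np.vstack([tz * (m1 - u0 * m3) / ax,           # Eq (20)
                   tz * (m2 - v0 * m3) / ay,
                   tz * m3])
    return ax, ay, u0, v0, R, np.array([tx, ty, tz])
```

With exact (noise-free) correspondences this recovers the parameters up to floating-point error; with noise and distortion, the coupling errors analyzed above appear in the factored parameters.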
However, under the influence of image noise and distortion, $m_{ij}$ contains errors. From Equations (18) and (19), we have:
$$t_x + \Delta t_x = \frac{(t_z + \Delta t_z)(m_{14} + \Delta m_{14} - u_0 - \Delta u_0)}{a_x + \Delta a_x}$$
$$t_y + \Delta t_y = \frac{(t_z + \Delta t_z)(m_{24} + \Delta m_{24} - v_0 - \Delta v_0)}{a_y + \Delta a_y} \qquad (21)$$

where $\Delta t_x$, $\Delta t_z$, $\Delta m_{14}$, $\Delta t_y$ and $\Delta m_{24}$ are errors. When $t_x \approx \dfrac{t_z (m_{14} - u_0)}{a_x + \Delta a_x}$ and $t_y \approx \dfrac{t_z (m_{24} - v_0)}{a_y + \Delta a_y}$, we have:

$$\Delta t_x = \frac{t_z (\Delta m_{14} - \Delta u_0)}{a_x + \Delta a_x} + \frac{\Delta t_z (m_{14} - u_0)}{a_x + \Delta a_x} + \frac{\Delta t_z (\Delta m_{14} - \Delta u_0)}{a_x + \Delta a_x}$$
$$\Delta t_y = \frac{t_z (\Delta m_{24} - \Delta v_0)}{a_y + \Delta a_y} + \frac{\Delta t_z (m_{24} - v_0)}{a_y + \Delta a_y} + \frac{\Delta t_z (\Delta m_{24} - \Delta v_0)}{a_y + \Delta a_y} \qquad (22)$$
In Equation (22), the third term is much smaller than the others and can be ignored. When $t_z > t_x$ and $\Delta u_0 \gg \Delta m_{14}$, we have:

$$\Delta t_x = -t_z \Delta u_0 / a_x \qquad (23)$$
At the same time, when $t_z > t_y$ and $\Delta v_0 \gg \Delta m_{24}$, we have:

$$\Delta t_y = -t_z \Delta v_0 / a_y \qquad (24)$$
With respect to the coupling error between the focal distance and translation vector, to simplify
the analysis model, we assume that the Euler angles of the world coordinate system relative to the
camera coordinate system are zero. Then, we have:
$$M = \begin{bmatrix} a_x / t_z & 0 & u_0 / t_z & (t_x a_x + t_z u_0) / t_z \\ 0 & a_y / t_z & v_0 / t_z & (t_y a_y + t_z v_0) / t_z \\ 0 & 0 & 1 / t_z & 1 \end{bmatrix} \qquad (25)$$
From Equation (13), we have:
$$t_z = 1 / m_{33} \qquad (26)$$

From Equations (16) and (17), we have:

$$a_x = t_z^2 m_{11} m_{33} = m_{11} t_z, \qquad a_y = t_z^2 m_{22} m_{33} = m_{22} t_z \qquad (27)$$
Owing to the influence of image noise and distortion, M contains errors. From Equation (27), we thus have:

$$a_x + \Delta a_x = m_{11} t_z + m_{11} \Delta t_z + \Delta m_{11} t_z + \Delta m_{11} \Delta t_z$$
$$a_y + \Delta a_y = m_{22} t_z + m_{22} \Delta t_z + \Delta m_{22} t_z + \Delta m_{22} \Delta t_z \qquad (28)$$

where $\Delta a_x$, $\Delta t_z$, $\Delta m_{11}$, $\Delta a_y$ and $\Delta m_{22}$ are errors. $\Delta m_{11} \Delta t_z$ and $\Delta m_{22} \Delta t_z$ are much smaller than the others and can be ignored. Then, we have:

$$\Delta a_x = m_{11} \Delta t_z + \Delta m_{11} t_z$$
$$\Delta a_y = m_{22} \Delta t_z + \Delta m_{22} t_z \qquad (29)$$

On account of $\Delta t_z \gg \Delta m_{11}$ and $\Delta t_z \gg \Delta m_{22}$, we have:

$$\Delta a_x = m_{11} \Delta t_z$$
$$\Delta a_y = m_{22} \Delta t_z \qquad (30)$$
Equations (23), (24) and (30) show that a linear relationship exists between the calibration parameter errors after ignoring some minor errors. We use a simulation to illustrate the size of some of the ignored minor errors. In the simulation, we have $a_x$ = 1454.5, $a_y$ = 1454.5, $u_0$ = 700 pixel and $v_0$ = 512 pixel; moreover, the size of the pattern is 0.7 m × 0.7 m × 0.3 m. The relationship of the world coordinate system relative to the camera coordinate system is represented by R = [20, 20, 20] (°) and T = [−0.25, −0.25, $t_z$] (m). Gaussian noise with a zero mean and a 0.1-pixel standard deviation is added to the projected image points. The analysis of Equations (23) and (24) is shown in Figure 4. The difference between $\Delta t_x$, $\Delta t_y$ and the results of Equations (23) and (24) is small. The analysis of Equation (30) is shown in Figure 5. Because the Euler angles are not equal to 0°, $\Delta a_x$, $\Delta a_y$ and the result of Equation (30) are different. However, the difference is small.
Figure 4. Error approximate analysis: (a) $t_x$ error analysis; (b) $t_y$ error analysis.
Under the influence of image noise and lens distortion, the M matrix of the DLT algorithm contains errors. These errors further affect the accuracy of the calibration parameters. Through the above analysis, there is a linear coupling relationship between $\Delta t_x$ and $\Delta u_0$, $\Delta t_y$ and $\Delta v_0$, $\Delta a_x$ and $\Delta t_z$, and $\Delta a_y$ and $\Delta t_z$ after ignoring some minor errors.
Figure 5. Error approximate analysis: (a) $a_x$ error comparison; (b) $a_y$ error comparison.
3.2. Improved Direct Linear Transformation

Owing to the linear relationship that exists between the intrinsic and extrinsic parameter errors, all of these errors are unknown, and the specific error values cannot be directly solved. We assume that the 3D template moves only along the z axis. $a'_x$, $a'_y$, $u'_0$, $v'_0$, $t'_x$, $t'_y$ and $t'_{z1}$ are calibration values of the DLT algorithm before the z axis translation. $a''_x$, $a''_y$, $u''_0$, $v''_0$, $t''_x$, $t''_y$ and $t'_{z2}$ are calibration values of the DLT algorithm after the z axis translation. From Equations (23) and (24), we have:

$$\Delta t'_x = t'_x - t_x = -t_{z1} (u'_0 - u_0) / a_x$$
$$\Delta t''_x = t''_x - t_x = -t_{z2} (u''_0 - u_0) / a_x \qquad (31)$$

$$\Delta t'_y = t'_y - t_y = -t_{z1} (v'_0 - v_0) / a_y$$
$$\Delta t''_y = t''_y - t_y = -t_{z2} (v''_0 - v_0) / a_y \qquad (32)$$

where $a_x$, $a_y$, $u_0$, $v_0$, $t_x$, $t_y$, $t_{z1}$ and $t_{z2}$ are true values. Because the translation occurs only along the z axis, we set $t_{z2} = n t_{z1}$. From Equations (31) and (32), we have:

$$\frac{t'_x - t''_x}{n - 1} = \frac{t_{z1}}{a_x} \left( \frac{n u''_0 - u'_0}{n - 1} - u_0 \right) \qquad (33)$$

$$\frac{t'_y - t''_y}{n - 1} = \frac{t_{z1}}{a_y} \left( \frac{n v''_0 - v'_0}{n - 1} - v_0 \right) \qquad (34)$$

From Equation (33), a linear relationship exists between $(n u''_0 - u'_0)/(n - 1)$ and $(t'_x - t''_x)/(n - 1)$. By repeatedly moving the 3D template, $u_0$ can be solved with a linear least-squares fit. Similarly, $v_0$ can be solved by Equation (34).
From Equations (31) and (32), we have:

$$\frac{n t'_x - t''_x}{n - 1} = t_x - \frac{t_z}{a_x}\,\frac{n (u''_0 - u'_0)}{n - 1} \qquad (35)$$

$$\frac{n t'_y - t''_y}{n - 1} = t_y - \frac{t_z}{a_y}\,\frac{n (v''_0 - v'_0)}{n - 1} \qquad (36)$$

A linear relationship exists between $n (u''_0 - u'_0)/(n - 1)$ and $(n t'_x - t''_x)/(n - 1)$, and between $n (v''_0 - v'_0)/(n - 1)$ and $(n t'_y - t''_y)/(n - 1)$. Thus, $t_x$, $t_y$ can be solved by Equations (35) and (36).
From Equation (30), we have:

$$a'_x - a_x = (r_{11} a_x + r_{31} u_0)(t'_{z1} - t_{z1}) / t_{z1}$$
$$a''_x - a_x = (r_{11} a_x + r_{31} u_0)(t'_{z2} - t_{z2}) / t_{z2} \qquad (37)$$
From Equation (37), we have:

$$(r_{11} a_x + r_{31} u_0)\left((t'_{z2} - t'_{z1}) - (t_{z2} - t_{z1})\right) - \left(t_{z2} a''_x - t_{z1} a'_x - a_x (t_{z2} - t_{z1})\right) = 0 \qquad (38)$$

We set $dz_1 = t_{z2} - t_{z1}$. We then obtain:

$$(r_{11} a_x + r_{31} u_0) \frac{t'_{z2} - t'_{z1} - dz_1}{dz_1} - t_{z1} \frac{a''_x - a'_x}{dz_1} - a''_x + a_x = 0 \qquad (39)$$

Because $(a''_x - a'_x)/dz_1$ is small, we use $t'_{z1}$ to replace $t_{z1}$. Then, we have:

$$(r_{11} a_x + r_{31} u_0) \frac{t'_{z2} - t'_{z1} - dz_1}{dz_1} - \left(t'_{z1} \frac{a''_x - a'_x}{dz_1} + a''_x\right) + a_x = 0 \qquad (40)$$

A linear relationship exists between $(t'_{z2} - t'_{z1} - dz_1)/dz_1$ and $t'_{z1}(a''_x - a'_x)/dz_1 + a''_x$. By repeatedly moving the 3D template, $a_x$ can be solved with a linear least-squares fit.
With respect to $t_{z1}$, from Equation (37), we have:

$$\frac{\Delta a'_x}{\Delta a''_x} = \frac{n (t'_{z1} - t_{z1})}{t'_{z2} - n t_{z1}} \qquad (41)$$

Then,

$$\Delta a'_x t'_{z2} - n \Delta a''_x t'_{z1} = t_{z1} (n \Delta a'_x - n \Delta a''_x) \qquad (42)$$

A linear relationship exists between $\Delta a'_x t'_{z2} - n \Delta a''_x t'_{z1}$ and $n \Delta a'_x - n \Delta a''_x$. By repeatedly moving the template, $t_{z1}$ can be solved with a linear least-squares fit.
For $a_y$, we have:

$$(r_{22} a_y + r_{32} v_0) \frac{t'_{z2} - t'_{z1} - dz_1}{dz_1} - \left(t'_{z1} \frac{a''_y - a'_y}{dz_1} + a''_y\right) + a_y = 0 \qquad (43)$$

$$\Delta a'_y t'_{z2} - n \Delta a''_y t'_{z1} = t_{z1} (n \Delta a'_y - n \Delta a''_y) \qquad (44)$$

A linear relationship exists between $(t'_{z2} - t'_{z1} - dz_1)/dz_1$ and $(a''_y - a'_y) t'_{z1}/dz_1 + a''_y$, and between $\Delta a'_y t'_{z2} - n \Delta a''_y t'_{z1}$ and $n \Delta a'_y - n \Delta a''_y$. By repeatedly moving the template, $a_y$, $t_{z1}$ can be solved with a linear least-squares fit.
In sum, the 3D template is translated along the z axis, and the DLT algorithm is performed at each location. With the results of the DLT, $u_0$, $v_0$, $t_x$, $t_y$, $a_x$, $a_y$ and $t_{z1}$ can then be solved with a linear least-squares fit by Equations (33)–(36), (40) and (42)–(44).
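The least-squares decoupling step for $u_0$ from Equation (33) can be illustrated with synthetic DLT outputs generated directly from Equation (31); every numeric value below is hypothetical, and in practice the primed quantities come from running the DLT at each template position:

```python
import numpy as np

rng = np.random.default_rng(2)
ax_, u0_, tx_, tz1 = 1454.5, 700.0, -0.35, 1.5   # hypothetical true values

du0_first = 0.8                                  # coupled u0 error, position 1
u0p = u0_ + du0_first
txp = tx_ - tz1 * (u0p - u0_) / ax_              # Equation (31), first position

xs, ys = [], []
for k in range(1, 11):                           # template moved 0.1 m at a time
    n = (tz1 + 0.1 * k) / tz1                    # t_z2 = n * t_z1
    u0pp = u0_ + rng.normal(0.0, 1.0)            # coupled u0 error after move k
    txpp = tx_ - n * tz1 * (u0pp - u0_) / ax_    # Equation (31) again
    xs.append((n * u0pp - u0p) / (n - 1.0))      # x-term of Equation (33)
    ys.append((txp - txpp) / (n - 1.0))          # y-term of Equation (33)

# Equation (33): y = (t_z1 / a_x) * (x - u0), so u0 is the x-intercept.
a, b = np.polyfit(xs, ys, 1)
print(-b / a)   # recovers u0 despite the coupled per-position errors
```

The same pattern (build the two linear terms from repeated DLT runs, then fit a line) applies to $v_0$, $t_x$, $t_y$, $a_x$, $a_y$ and $t_{z1}$ via Equations (34)–(36), (40) and (42)–(44).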
4. Experimental Section
The simulation and physical experiment parameters were set as follows. The camera focal length
was 12 mm. The image resolution was 1400 × 1024. The pixel size was 7.4 µm × 7.4 µm.
4.1. Simulation Experiment
The size of the 3D template was 0.7 m × 0.7 m × 0.3 m and contained a pattern of 8 × 8 × 3 points. The rotation matrix and translation vector were R = [10, 10, 10] (°) and T = [−0.35, −0.35, $t_z$] (m), $t_z$ = 1.2–3.2 m. Gaussian noise with a 0 mean and a 0.01–0.5-pixel standard deviation was added to the
projected image points. The 3D template moved 0.1 m at a time. The IDLT algorithm was calculated
after 18 times of movement. A total of 100 independent tests was performed for each noise to obtain
the parameter error mean and standard deviation. The error mean plus three times the standard
deviation was used to represent the calibration error.
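The trial procedure can be reproduced with a small Monte Carlo harness. This is a sketch of the error statistic only; `calibrate` is a hypothetical stand-in for a full DLT or IDLT run on noisy projected points:

```python
import numpy as np

def calibration_error(calibrate, true_value, noise_sigma, trials=100, seed=1):
    """Error statistic used in the simulation: mean + 3 * standard deviation
    of |estimate - true value| over independent noisy trials.

    `calibrate` stands in for the DLT or IDLT computation: it takes a
    random generator and a noise level and returns one parameter estimate.
    """
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(trials):
        estimate = calibrate(rng, noise_sigma)
        errors.append(abs(estimate - true_value))
    errors = np.asarray(errors)
    return errors.mean() + 3.0 * errors.std()

# Toy stand-in: an "estimator" of u0 = 700 corrupted by image noise.
toy = lambda rng, s: 700.0 + rng.normal(0.0, s)
err = calibration_error(toy, 700.0, noise_sigma=0.1)
print(err < 0.5)  # the statistic stays well below half a pixel here
```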
Because the calibration error became larger as tz increased, we only analyzed the results at tz = 1.2 m. The results of the DLT and IDLT calibration are shown in Figure 6. For u0 and v0, the errors of IDLT are less than 10% of the errors of DLT. The errors in tx and ty are less than 0.1 mm. For ax, ay and tz, the errors of IDLT are less than 60% of the errors of DLT.
Figure 6. Calibration error analysis between direct linear transformation (DLT) and improved DLT (IDLT) with noise: (a) error of u0; (b) error of v0; (c) error of tx; (d) error of ty; (e) error of ax; (f) error of tz based on ax; (g) error of ay; (h) error of tz based on ay.
Figure 7 shows the calibration result with noise and distortion. Gaussian noise with a 0 mean and a 0.1-pixel standard deviation is added to the projected image points. Image distortion relates the ideal image points (ui, vi) to the real image points (u′i, v′i) with k = 2.915 × 10⁻⁹–2.915 × 10⁻⁸ (the image distortion is 1–10 pixels at (1400, 0) when u0 = 700):

u′i = ui + (ui − u0)·k·r²
v′i = vi + (vi − v0)·k·r²  (45)

where r is the distance of the ideal image point from the principal point.
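Equation (45) is the standard single-coefficient radial distortion model and can be applied directly. A minimal sketch (the test point, principal point and k value are chosen here to match the 1-pixel figure quoted above, and are otherwise illustrative):

```python
import numpy as np

def distort(u, v, u0, v0, k):
    """Apply the radial distortion of Eq. (45):
    u' = u + (u - u0)*k*r^2,  v' = v + (v - v0)*k*r^2,
    with r the distance of the ideal point from the principal point."""
    r2 = (u - u0) ** 2 + (v - v0) ** 2
    return u + (u - u0) * k * r2, v + (v - v0) * k * r2

# With the point on the principal axis row, r^2 = 700^2, so the horizontal
# shift is k * 700^2 * 700 ~= 1.0 pixel for k = 2.915e-9; ten times the k
# gives a ~10-pixel shift, matching the 1-10 pixel range in the text.
ud, vd = distort(1400.0, 512.0, 700.0, 512.0, 2.915e-9)
print(round(ud - 1400.0, 3))
```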
Owing to the influence of image distortion, the calibration errors of DLT are relatively large. However, the calibration errors of IDLT are relatively small. In particular, the errors of u0 and v0 are less than 0.7 pixels, whereas the maximum error of DLT is 44 pixels. The errors of ax and ay are less than 3. Because the pixels are square, the errors of ax and ay have the same form. The error in tz is larger than that in tx, ty. The main reason is that tz is larger than tx, ty.
Figure 7. Calibration error analysis between DLT and IDLT with noise and distortion: (a) error of u0; (b) error of v0; (c) error of tx; (d) error of ty; (e) error of ax; (f) error of tz based on ax; (g) error of ay; (h) error of tz based on ay.
4.2. Physical Experiment
We used a light-emitting diode (LED) as the space point, fixed on a coordinate measurement machine, as shown in Figure 8. The size of the plane was 0.7 m × 0.7 m, and it contained a pattern of 8 × 8 points. There were 20 planes; the data for three of these planes were used for one DLT calculation. The DLT results are shown in Table 1. The fluctuations in the Euler angle errors are less than 0.06°, and they are not very volatile. The most volatile values are ty and v0. The fluctuation in ty is less than 28.03 mm, and the fluctuation in v0 is less than 2.59 pixels. The results of the IDLT algorithm are shown in Table 2, where the calibration parameter values are different from those of DLT.
Figure 8. Photograph of the physical experiment.
Table 1. DLT results of the physical experiment.
No.  ax (pixel)  ay (pixel)  u0 (pixel)  v0 (pixel)  tx (mm)   ty (mm)   tz (mm)   ψ (°)  θ (°)   φ (°)
1    1735.90     1735.56     701.09      522.07      −363.52   −319.10   1705.86   1.11   −0.13   1.36
2    1736.97     1736.67     700.97      522.59      −363.67   −317.70   1806.84   1.10   −0.13   1.36
3    1738.26     1737.96     701.66      523.22      −364.68   −316.48   1908.12   1.08   −0.13   1.36
4    1736.34     1736.05     701.59      523.46      −364.90   −314.87   2005.89   1.07   −0.13   1.36
5    1737.88     1737.58     700.65      523.91      −364.05   −313.54   2107.60   1.06   −0.13   1.36
6    1738.41     1738.15     700.71      523.93      −364.39   −311.73   2208.32   1.06   −0.12   1.36
7    1737.42     1737.20     700.71      524.02      −364.65   −310.01   2307.00   1.05   −0.12   1.36
8    1737.75     1737.60     700.58      524.17      −364.72   −308.38   2407.47   1.05   −0.12   1.36
9    1737.21     1737.07     700.71      524.16      −365.17   −306.53   2506.68   1.05   −0.13   1.36
10   1737.17     1737.05     700.54      524.10      −365.18   −304.62   2606.64   1.05   −0.12   1.35
11   1736.76     1736.63     700.66      524.31      −365.61   −303.12   2705.92   1.04   −0.12   1.35
12   1737.09     1737.00     700.74      524.76      −365.99   −302.02   2806.35   1.03   −0.13   1.35
13   1737.19     1737.11     700.97      524.85      −366.63   −300.39   2906.44   1.03   −0.13   1.35
14   1736.94     1736.86     700.96      524.60      −366.89   −298.18   3006.04   1.03   −0.13   1.36
15   1737.05     1736.98     700.83      524.52      −366.94   −296.22   3106.31   1.04   −0.13   1.36
16   1736.63     1736.56     700.75      524.32      −367.06   −294.05   3205.58   1.05   −0.13   1.35
17   1736.33     1736.27     700.88      524.43      −367.58   −292.43   3304.94   1.04   −0.13   1.35
18   1736.79     1736.77     701.40      524.66      −368.87   −291.07   3405.76   1.03   −0.14   1.35

Table 2. IDLT results of the physical experiment.

ax (pixel)  ay (pixel)  u0 (pixel)  v0 (pixel)  tx (mm)   ty (mm)   tz (mm)
1738.11     1738.33     696.27      555.55      −358.82   −352.39   1708.23

To compare the calibration accuracy of the two algorithms, the results of IDLT and the first set of DLT data were used to calculate the re-projection errors of the 20 planes. Each plane image error was described by the mean plus three times the standard deviation of the image point re-projection errors. The results are shown in Figure 9. The calculation errors of DLT increase with the number of planes. This shows that the DLT calibration results are optimal for some planes, but not for all planes. The calibration errors of IDLT decrease with the number of planes. The reason is that the distance between the image points and the optical center point of the image is small, and the effect of distortion is therefore small.
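The per-plane statistic can be sketched as follows; `projected` would come from re-projecting the plane's points with the calibrated parameters, which is not reproduced here:

```python
import numpy as np

def plane_reprojection_error(projected, observed):
    """Per-plane re-projection statistic used in Figure 9:
    mean + 3 * std of the point-wise image distances between the points
    projected with the calibrated parameters and the observed points."""
    d = np.linalg.norm(projected - observed, axis=1)  # one distance per point
    return d.mean() + 3.0 * d.std()

# Toy check: 64 points (an 8 x 8 plane) observed with 0.1-pixel jitter.
rng = np.random.default_rng(2)
pts = rng.uniform(0, 1400, size=(64, 2))
err = plane_reprojection_error(pts, pts + rng.normal(0, 0.1, (64, 2)))
print(err < 1.0)
```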
Figure 9. Re-projection errors of 20 planes with DLT and IDLT calibration results: (a) u of projection image errors; (b) v of projection image errors.
5. Conclusions
Based on a camera model, the DLT algorithm uses linear equations to calculate the intrinsic and extrinsic camera parameters. Because the camera's intrinsic and extrinsic parameters are simultaneously calibrated, the coupling error of the calibration parameters affects the calibration accuracy. In this paper, we analyzed the principles of intrinsic and extrinsic parameter error coupling, determined a linear coupling relationship between the intrinsic parameter calibration error and the translation vector (extrinsic parameter) calibration error, and proposed the IDLT algorithm. The IDLT algorithm uses this linear coupling relationship to calculate the calibration parameters of the camera. The results of simulations and experiments show that the calibration parameter errors of the IDLT algorithm are significantly smaller than those of DLT with noise and distortion.
Acknowledgments: This work was supported by the National Science Foundation of China (Grant No. 51075095) and the Natural Science Foundation of Heilongjiang Province (Grant No. E201045).
Author Contributions: Zhenqing Zhao proposed the method and wrote the manuscript. Xin Zhang performed the data analysis. Dong Ye contributed to the conception of this study. Gang Chen and Bin Zhang helped perform the analysis with constructive discussions.
Conflicts of Interest: The authors declare no conflict of interest.
References

1. Chen, I.H.; Wang, S.J. An efficient approach for dynamic calibration of multiple cameras. IEEE Trans. Autom. Sci. Eng. 2009, 6, 187–194. [CrossRef]
2. Ly, D.S.; Demonceaux, C.; Vasseur, P.; Pegard, C. Extrinsic calibration of heterogeneous cameras by line images. Mach. Vis. Appl. 2014, 25, 1601–1614. [CrossRef]
3. Peng, E.; Li, L. Camera calibration using one-dimensional information and its applications in both controlled and uncontrolled environments. Pattern Recogn. 2010, 43, 1188–1198. [CrossRef]
4. Abdel-Aziz, Y.I.; Karara, H.M. Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry. In Proceedings of the Symposium on Close-Range Photogrammetry, University of Illinois at Urbana-Champaign, Champaign, IL, USA, 1971; pp. 1–18.
5. Tsai, R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [CrossRef]
6. Zhang, Z.Y. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [CrossRef]
7. Chen, G.; Guo, Y.B.; Wang, H.P.; Ye, D.; Gu, Y.F. Stereo vision sensor calibration based on random spatial points given by CMM. Optik 2012, 123, 731–734. [CrossRef]
8. Samper, D.; Santolaria, J.; Brosed, F.J.; Majarena, A.C.; Aguilar, J.J. Analysis of Tsai calibration method using two- and three-dimensional calibration objects. Mach. Vis. Appl. 2013, 24, 117–131. [CrossRef]
9. Sun, W.; Cooperstock, J.R. An empirical evaluation of factors influencing camera calibration accuracy using three publicly available techniques. Mach. Vis. Appl. 2006, 17, 51–67. [CrossRef]
10. Huang, J.H.; Wang, Z.; Xue, Q.; Gao, J.M. Calibration of a camera projector measurement system and error impact analysis. Meas. Sci. Technol. 2012, 23, 125402. [CrossRef]
11. Zhou, F.Q.; Cui, Y.; Wang, Y.X.; Liu, L.; Gao, H. Accurate and robust estimation of camera parameters using RANSAC. Opt. Laser Eng. 2013, 51, 197–212. [CrossRef]
12. Ricolfe-Viala, C.; Sanchez-Salmeron, A. Camera calibration under optimal conditions. J. Opt. Soc. Am. 2011, 19, 10769–10775. [CrossRef] [PubMed]
13. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2000; pp. 110–115.
14. Batista, J.; Araujo, H.; Almeida, A.T. Iterative multistep explicit camera calibration. IEEE Trans. Robot. Autom. 1999, 15, 897–917. [CrossRef]
15. Ricolfe-Viala, C.; Sanchez-Salmeron, A.; Valera, A. Efficient lens distortion correction for decoupling in calibration of wide angle lens cameras. IEEE Sens. J. 2013, 13, 854–863. [CrossRef]
16. Guillemaut, J.Y.; Aguado, A.S.; Illingworth, J. Using points at infinity for parameter decoupling in camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 265–270. [CrossRef] [PubMed]
17. Caprile, B.; Torre, V. Using vanishing points for camera calibration. Int. J. Comput. Vis. 1990, 4, 127–140. [CrossRef]
18. He, B.W.; Zhou, X.L.; Li, Y.F. A new camera calibration method from vanishing points in a vision system. Trans. Inst. Meas. Control 2011, 33, 806–822. [CrossRef]
19. Devernay, F.; Faugeras, O. Straight lines have to be straight. Mach. Vis. Appl. 2001, 13, 14–24. [CrossRef]
20. Zhang, G.J.; He, J.J.; Yang, X.M. Calibrating camera radial distortion with cross-ratio invariability. Opt. Laser Technol. 2003, 35, 457–461. [CrossRef]
21. Li, D.D.; Wen, G.J.; Hui, B.W.; Qiu, S.H.; Wang, W.F. Cross-ratio invariant based line scan camera geometric calibration with static linear data. Opt. Laser Eng. 2014, 62, 119–125. [CrossRef]
22. Hartley, R.I. In defense of the eight-point algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 580–593. [CrossRef]
23. Xu, Q.Y.; Ye, D.; Chen, H.; Che, R.S.; Chen, G.; Huang, Y. A valid camera calibration based on the maximum likelihood using virtual stereo calibration pattern. In Proceedings of the International Conference on Sensing, Computing and Automation, Chongqing, China, 8–11 May 2006; pp. 2346–2351.
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC-BY) license (http://creativecommons.org/licenses/by/4.0/).