Article
Comparison of Computer Vision and Photogrammetric Approaches for Epipolar Resampling of Image Sequence
Jae-In Kim and Taejung Kim *
Department of Geoinformatic Engineering, Inha University, 100 Inharo, Nam-gu, Incheon 22212, Korea;
[email protected]
* Correspondence: [email protected]; Tel.: +82-32-860-7606
Academic Editor: Assefa M. Melesse
Received: 13 February 2016; Accepted: 16 March 2016; Published: 22 March 2016
Sensors 2016, 16, 412; doi:10.3390/s16030412
Abstract: Epipolar resampling is the procedure of eliminating vertical disparity between stereo images. Due to its importance, many methods have been developed in the computer vision and photogrammetry fields. However, we argue that epipolar resampling of image sequences, instead of a single pair, has not been studied thoroughly. In this paper, we compare epipolar resampling methods developed in both fields for handling image sequences. First, we briefly review the uncalibrated and calibrated epipolar resampling methods developed in computer vision and the epipolar resampling methods developed in photogrammetry. While it is well known that epipolar resampling methods developed in computer vision and in photogrammetry are mathematically identical, we also point out differences in parameter estimation between them. Second, we tested representative resampling methods in both fields and performed an analysis. We showed that for epipolar resampling of a single image pair all uncalibrated and photogrammetric methods tested could be used. More importantly, we also showed that, for image sequences, all methods tested, except the photogrammetric Bayesian method, showed significant variations in epipolar resampling performance. Our results indicate that the Bayesian method is favorable for epipolar resampling of image sequences.
Keywords: epipolar resampling; image rectification; Bayesian approach; stereo image sequence
1. Introduction
Epipolar resampling is the procedure of eliminating Y parallax, or vertical disparity, between a
stereo image pair. This procedure is critical both for stereo image processing and for three-dimensional
(3D) content generation. For stereo image processing, this can convert two-dimensional (2D)
correspondence search problems into one-dimensional (1D) ones and hence enhance automated
depth map generation [1–3]. For 3D content generation, this can eliminate visual fatigue and produce
high quality 3D perception [4]. In addition, this can enhance the processing of various stereovision
systems such as mobile robots or smart vehicles.
Epipolar resampling has been studied extensively in the fields of computer vision and
photogrammetry. In computer vision, epipolar resampling is achieved by a homographic
transformation for sending epipoles of original images to infinity [5–11]. Depending on how the
homography is estimated, epipolar resampling methods can be classified into two approaches: uncalibrated and calibrated. The uncalibrated approach estimates the homography from the
fundamental matrix, which is determined using tie-points between a stereo image pair. The calibrated
approach estimates the homography from known intrinsic and extrinsic parameters of stereo
images. Typically, these parameters are acquired by a stereo calibration method using calibration
patterns [12,13].
In photogrammetry, epipolar resampling is performed by a perspective transformation that aligns epipolar lines with horizontal image lines. This transformation is defined by relative orientation parameters between the two images. Collinear or coplanar equations are used to estimate the parameters mathematically [14–17]. In a typical photogrammetric application, intrinsic parameters are assumed to be known. Relative orientation parameters, which are extrinsic parameters of the right image with respect to the left image frame, are estimated by tie-points.

Epipolar resampling developed in each field serves its own purpose and application. In computer vision, for example, epipolar resampling is used for robot vision and fast image processing [18–20]. In photogrammetry, it is mainly used for precise image rectification for stereo plotters [14,21,22]. For serving new and challenging applications, there is an increasing need for merging techniques developed in computer vision and photogrammetry. However, evaluation of epipolar resampling methods developed in each field using a common dataset, comparison of their performances and understanding of their differences from theoretical and practical perspectives are missing.

In this paper, we aim to compare epipolar resampling methods developed in computer vision and photogrammetry. In particular, we aim to apply epipolar resampling not for rectifying one stereo pair, but for rectifying a sequence of stereo images. We argue that most previous investigations were for epipolar resampling of a single image pair and that epipolar resampling of image sequences has not been studied thoroughly. Here we focus on epipolar resampling of image sequences with stereo applications in mind, where two cameras are installed on separate platforms and move independently.

We will first review the general principles and formulations of epipolar resampling developed in the computer vision field. We will then review the epipolar resampling developed in photogrammetry. We will re-confirm the well-known principle that epipolar resampling methods developed in computer vision and in photogrammetry are mathematically identical. We will also point out practical differences between them. We will then explain the image sequences and the representative epipolar resampling methods used for tests. Finally, we report and compare their performance over a single image pair and over image sequences.
2. Epipolar Resampling in Computer Vision
Epipolar geometry between a stereo pair is explained in Figure 1. The two perspective centers ($C_1$ and $C_2$ in the figure) and a ground point P define an epipolar plane. Epipolar lines $l_1$ and $l_2$ are the intersections between the epipolar plane and the left and right image planes, respectively. Epipoles $e_1$ and $e_2$ are the intersections between the line connecting the two perspective centers and the left and right image planes.
Figure 1. Epipolar geometry of stereo image.

For the perspective images, any corresponding left and right image points $q_1$ and $q_2$ satisfy the following matrix equation [23]:

$$q_2^T F q_1 = 0 \qquad (1)$$

where $q_1(u_1, v_1, 1)$ and $q_2(u_2, v_2, 1)$ are homogeneous image coordinates of the left and right image points ($p_1$ and $p_2$) and $F$ is the fundamental matrix. Epipolar lines $l_1$ and $l_2$ and epipoles $e_1$ and $e_2$ can be found by $F$:

$$l_1 = F^T q_2, \quad l_2 = F q_1, \quad F e_1 = 0, \quad F^T e_2 = 0 \qquad (2)$$
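As a concrete illustration of Equation (2), the sketch below recovers both epipolar lines and both epipoles from a given fundamental matrix via the singular value decomposition. The function name and the final normalization are our own choices, not part of any cited method.

```python
import numpy as np

def lines_and_epipoles(F, q1, q2):
    """F: 3x3 fundamental matrix; q1, q2: homogeneous image points (3-vectors)."""
    l2 = F @ q1             # epipolar line in the right image, l2 = F q1
    l1 = F.T @ q2           # epipolar line in the left image,  l1 = F^T q2
    # Epipoles span the null spaces of F (F e1 = 0, F^T e2 = 0); they are the
    # singular vectors associated with the smallest singular value of F.
    U, _, Vt = np.linalg.svd(F)
    e1, e2 = Vt[-1], U[:, -1]
    # The normalization below assumes finite epipoles (third coordinate nonzero)
    return l1, l2, e1 / e1[2], e2 / e2[2]
```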
Epipolar resampling can be achieved by a homographic transformation of mapping the epipole of the original image to a point at infinity. The homography in the case of the uncalibrated approach can be determined from the fundamental matrix as

$$H = G A T \qquad (3)$$

where $T$ is a translation taking the principal point $(c_x, c_y)$ of the image to the origin of the coordinate system, $A$ is a rotation about the origin taking the epipole to a point $(k, 0, 1)^T$ on the x-axis, and $G$ is a transformation taking the moved epipole to the point at infinity $(k, 0, 0)^T$ as shown below [8]:
$$T = \begin{bmatrix} 1 & 0 & -c_x \\ 0 & 1 & -c_y \\ 0 & 0 & 1 \end{bmatrix}, \quad G = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1/k & 0 & 1 \end{bmatrix} \qquad (4)$$
Once the homographies are calculated, an optimization is generally carried out to align corresponding epipolar lines between stereo images [7,9,24].
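A minimal sketch of Equations (3) and (4) follows, assuming a finite epipole given in homogeneous coordinates; the decomposition into $T$, $A$ and $G$ mirrors the description above, while the helper name is illustrative.

```python
import numpy as np

def rectifying_homography(cx, cy, epipole):
    """Build H = G A T for one image from its principal point and epipole."""
    # T: translate the principal point (cx, cy) to the origin
    T = np.array([[1.0, 0.0, -cx],
                  [0.0, 1.0, -cy],
                  [0.0, 0.0, 1.0]])
    e = T @ epipole
    e = e / e[2]                        # assume a finite epipole
    theta = np.arctan2(e[1], e[0])
    # A: rotate about the origin so the epipole lands on the x-axis at (k, 0, 1)
    A = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                  [-np.sin(theta), np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    k = np.hypot(e[0], e[1])            # x-coordinate of the moved epipole
    # G: send the moved epipole (k, 0, 1) to the point at infinity (k, 0, 0)
    G = np.array([[1.0,      0.0, 0.0],
                  [0.0,      1.0, 0.0],
                  [-1.0 / k, 0.0, 1.0]])
    return G @ A @ T
```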
Under the uncalibrated approach, the fundamental matrix can be estimated from tie-points
without any additional information. Therefore, this approach enables automation over the whole
process and the use of images from an unknown source. There are many studies regarding the reliable
estimation of the fundamental matrix and the reliable reconstruction of epipolar geometry. These include Hartley's normalized eight-point algorithm [25] and algorithms that apply additional constraints such as algebraic minimization [26], minimization of epipolar distance [8,24], and other geometric cost functions [27–29].
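As one example of this family, a compact sketch of Hartley's normalized eight-point algorithm [25] is given below. It assumes x1 and x2 are (N, 2) arrays of corresponding points with N ≥ 8, and leaves robust estimation and cost-function refinement to the cited variants.

```python
import numpy as np

def normalize(pts):
    """Translate points to zero mean and scale to sqrt(2) average norm."""
    c = pts.mean(axis=0)
    s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    return (T @ np.column_stack([pts, np.ones(len(pts))]).T).T, T

def eight_point(x1, x2):
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence q2^T F q1 = 0 gives one row of the system A f = 0
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(p1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Enforce rank 2 so that all epipolar lines meet in a single epipole
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1                # undo the normalization
```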
The uncalibrated approach may, however, be sensitive to tie-point noise and prone to image
distortion as the transformation is not based on physical geometry between the two images. Fusiello
and Irsara [30] tried to overcome these problems by proposing a new uncalibrated approach called
quasi-Euclidean epipolar rectification. They estimated stereo geometry using non-linear equations by
minimizing geometric re-projection errors.
In the calibrated approach, intrinsic and extrinsic parameters for the left and right images are
estimated separately by the stereo calibration method using dedicated calibration patterns [12,13,31,32].
Extrinsic parameters are used to estimate the relative geometric relationship between the two images.
The homography for epipolar resampling is then defined by the relative geometric relationship.
Assuming $R_1$ and $R_2$ are 3 × 3 rotation matrices containing rotational extrinsic parameters of the left and right cameras and $C_1$ and $C_2$ are 3 × 1 translation matrices containing extrinsic parameters
related to the location of the perspective center, the relative rotation R and translation B between two
images are defined as below [33]:
$$R = R_2 R_1^T = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}, \quad B = R_1 (C_2 - C_1) = \begin{bmatrix} b_x \\ b_y \\ b_z \end{bmatrix} \qquad (5)$$
The homography, H1 for the left and H2 for the right image, can be estimated by two rotations.
The first rotation, R1half and R2half , is distributed from the relative rotation R to make the overlap
between the two images reach maximum. The second rotation Rrect transforms the epipole to infinity
using the baseline vector B:
$$H_1 = R_{rect} R_{1half}, \quad H_2 = R_{rect} R_{2half} \qquad (6)$$
where $R = R_{1half} R_{2half}$, $R_{rect} = \begin{bmatrix} m_1^T & m_2^T & m_3^T \end{bmatrix}^T$ and $m_1$, $m_2$ and $m_3$ are defined as

$$m_1 = \frac{B}{\|B\|}, \quad m_2 = \frac{\begin{bmatrix} -b_y & b_x & 0 \end{bmatrix}^T}{\sqrt{b_x^2 + b_y^2}}, \quad m_3 = m_1 \times m_2 \qquad (7)$$
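The sketch below strings Equations (5)–(7) together for a calibrated pair. Splitting the relative rotation into half-rotations via a rotation vector is one common realization of the overlap-maximizing first rotation (used, for example, in OpenCV-style rectification) and is our assumption here, as is the use of SciPy for the rotation conversions.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def calibrated_homographies(R1, R2, C1, C2):
    R = R2 @ R1.T                        # relative rotation, Equation (5)
    B = R1 @ (C2 - C1)                   # baseline in the left camera frame
    # Split R into half-rotations about the same axis, applied in opposite
    # directions, so the two image planes meet halfway
    w = Rotation.from_matrix(R).as_rotvec()
    R1half = Rotation.from_rotvec( w / 2.0).as_matrix()
    R2half = Rotation.from_rotvec(-w / 2.0).as_matrix()
    # Rrect sends the epipole to infinity along the baseline, Equation (7)
    bx, by, _ = B
    m1 = B / np.linalg.norm(B)
    m2 = np.array([-by, bx, 0.0]) / np.hypot(bx, by)
    m3 = np.cross(m1, m2)
    Rrect = np.vstack([m1, m2, m3])      # rows m1^T, m2^T, m3^T
    return Rrect @ R1half, Rrect @ R2half   # H1, H2, Equation (6)
```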
As mentioned above, while the calibrated approach guarantees minimized image distortion and
high accuracy, it requires a prior calibration process. Accordingly, this approach potentially involves a
limitation in terms of availability.
3. Photogrammetric Epipolar Resampling
In this section, we review the epipolar resampling developed in photogrammetry. While its
mathematical formulation is well reported [17], we re-formulate photogrammetric camera models
here in order to compare them more directly with the ones developed in computer vision. In
photogrammetry, the geometric relationship between a ground point P and its left and right image
points, q1 and q2 , is explained by the collinear equation below:
$$q_1 = M_1 p_1 = \lambda_1 M_1 R_1 P_1, \quad q_2 = M_2 p_2 = \lambda_2 M_2 R_2 P_2 \qquad (8)$$
where $\lambda$ is a scale factor, $R_1$ and $R_2$ are 3 × 3 rotation matrices defined as before, $P_1 = \overrightarrow{C_1 P}$, $P_2 = \overrightarrow{C_2 P}$, and $p_1(x_1, y_1, f_1)$ and $p_2(x_2, y_2, f_2)$ are look vectors of the left and right camera frames from the perspective center to the projection point. $M_1$ and $M_2$ are camera matrices of the left and right images for converting camera coordinates to image coordinates as follows:
$$M_1 = \begin{bmatrix} f_1 & 0 & c_{x1} \\ 0 & f_1 & c_{y1} \\ 0 & 0 & 1 \end{bmatrix}, \quad M_2 = \begin{bmatrix} f_2 & 0 & c_{x2} \\ 0 & f_2 & c_{y2} \\ 0 & 0 & 1 \end{bmatrix} \qquad (9)$$
where $c_x$ and $c_y$ are defined as in Equation (4) and $f$ is the focal length of the camera.
The geometric relationship between the stereo images can be explained by the condition that the baseline vector $S = \overrightarrow{C_1 C_2}$, the left look vector $P_1$ and the right look vector $P_2$ are coplanar. This coplanar condition can be expressed by the scalar triple product among the three vectors as below:

$$P_2^T \cdot [S \times P_1] = P_2^T [S]_\times P_1 = 0, \quad [S]_\times = \begin{bmatrix} 0 & -s_z & s_y \\ s_z & 0 & -s_x \\ -s_y & s_x & 0 \end{bmatrix} \qquad (10)$$
Taking the left camera axes as the reference frame, i.e., $R_1 = I_{3\times3}$ and $C_1 = 0_{3\times1}$, the above coplanar equation can be re-written in the form of the fundamental matrix as shown below:

$$P_2^T [S]_\times P_1 = p_2^T R [B]_\times p_1 = 0 \qquad (11)$$

$$q_2^T \left(M_2^{-1}\right)^T R [B]_\times M_1^{-1} q_1 = q_2^T F q_1 = 0 \qquad (12)$$
where R and B are the relative rotation and translation of the right camera with respect to the left
image frame [17]. As shown in Equation (12), it is well known that the coplanar equation used in
photogrammetry is mathematically identical to the fundamental matrix equation used in computer
vision. The main difference between the uncalibrated and photogrammetric approaches is that the former estimates all eight parameters of the fundamental matrix from tie-points, whereas the latter estimates relative rotational and translational parameters.
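The identity in Equation (12) is easy to state in code: given the relative orientation $(R, B)$ and the camera matrices of Equation (9), the fundamental matrix can be composed directly, as in the short sketch below (function names are ours).

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]x as in Equation (10)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def fundamental_from_orientation(R, B, M1, M2):
    # q2^T (M2^-1)^T R [B]x M1^-1 q1 = q2^T F q1 = 0, Equation (12)
    return np.linalg.inv(M2).T @ R @ skew(B) @ np.linalg.inv(M1)
```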
Epipolar resampling is carried out by two perspective transformations derived from the extrinsic
parameters [17,34]. First, the right image plane is re-projected parallel to the reference frame (left
camera axes) using the rotation angles of the right camera as shown in Figure 2a. Secondly, the two
image planes are then re-projected to align with the baseline connecting two perspective centers, and
to remove Y parallaxes as shown in Figure 2b.
These two perspective transformations can also be expressed by homography as
$$H_1 = R_{base} R_1^T, \quad H_2 = R_{base} R_2^T \qquad (13)$$
where Rbase is the second perspective transformation to align with respect to the baseline and is defined
from the rotation angles of the baseline for X, Y and Z axes as
$$R_{base} = R_\Omega R_\Phi R_K = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\Omega & \sin\Omega \\ 0 & -\sin\Omega & \cos\Omega \end{bmatrix} \begin{bmatrix} \cos\Phi & 0 & -\sin\Phi \\ 0 & 1 & 0 \\ \sin\Phi & 0 & \cos\Phi \end{bmatrix} \begin{bmatrix} \cos K & \sin K & 0 \\ -\sin K & \cos K & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (14)$$

$$\Omega = \frac{\omega_1 + \omega_2}{2}, \quad \Phi = -\tan^{-1}\frac{b_z}{\sqrt{b_x^2 + b_y^2}}, \quad K = \tan^{-1}\frac{b_y}{b_x} \qquad (15)$$
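A small sketch of Equations (14) and (15) follows; here ω1 and ω2 denote the roll angles of the two cameras entering Ω, and the baseline components follow Equation (5). This is our reading of the reconstructed formulas, not code from the paper.

```python
import numpy as np

def baseline_rotation(B, omega1, omega2):
    """R_base from the baseline vector B = (bx, by, bz) and camera roll angles."""
    bx, by, bz = B
    Omega = 0.5 * (omega1 + omega2)                 # Equation (15)
    Phi = -np.arctan2(bz, np.hypot(bx, by))
    Kappa = np.arctan2(by, bx)
    R_Om = np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(Omega), np.sin(Omega)],
                     [0.0, -np.sin(Omega), np.cos(Omega)]])
    R_Ph = np.array([[np.cos(Phi), 0.0, -np.sin(Phi)],
                     [0.0, 1.0, 0.0],
                     [np.sin(Phi), 0.0, np.cos(Phi)]])
    R_Ka = np.array([[np.cos(Kappa), np.sin(Kappa), 0.0],
                     [-np.sin(Kappa), np.cos(Kappa), 0.0],
                     [0.0, 0.0, 1.0]])
    return R_Om @ R_Ph @ R_Ka                       # Equation (14)
```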
As shown in Equations (13)–(15), photogrammetric epipolar resampling is mathematically identical to that under the calibrated approach. The main difference between the calibrated and photogrammetric approaches is that the former calculates relative orientation parameters from left and right extrinsic parameters, which are estimated by calibration patterns, whereas the latter estimates relative orientation parameters directly from tie-points.
Figure 2. Procedure of epipolar transformation based on dependent relative orientation. (a) transformation for reprojecting the right image plane parallel to reference frame; (b) transformation for reprojecting two image planes to align with the baseline.
The photogrammetric approach may also be sensitive to tie-point noise. In order to overcome the sensitivity problem in epipolar resampling, one may apply the Bayesian approach. The Bayesian approach is a popular estimation technique that uses a priori and a posteriori error statistics of unknown parameters [35]. In photogrammetric epipolar resampling, one can limit the convergence range of the extrinsic parameters by setting a priori error statistics of the parameters. One can also reduce the effects of tie-point errors on the overall estimation by setting a priori error statistics of the tie-point measurements. Advantages of the Bayesian approach are that it can impose constraints on both the observation equations and the estimation parameters, and that the constraints contribute to the determination of the unknowns as weights. If the a priori constraints for the unknowns are defined from the actual geometric characteristics of the camera arrangement, this approach should work more consistently over stereo image sequences and more robustly against tie-point errors. For many photogrammetric applications, the constraints can be determined from sensors' specifications or operational practice [36,37].
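To make the weighting concrete, the sketch below shows one linearized least-squares update with a priori weights on both the tie-point observations and the parameters. This is a generic Gauss-Markov formulation under our own notation, not the authors' implementation.

```python
import numpy as np

def bayesian_step(A, l, sigma_l, sigma_x):
    """One update of parameter corrections dx for the system A dx = l.

    sigma_l: a priori standard deviations of the observations (tie-points)
    sigma_x: a priori standard deviations of the parameters
    """
    P_l = np.diag(1.0 / sigma_l**2)     # observation weights
    P_x = np.diag(1.0 / sigma_x**2)     # parameter weights (inverse a priori covariance)
    # Small sigma_x pins a parameter near its initial value, which is what
    # limits its convergence range; large sigma_x leaves it effectively free.
    N = A.T @ P_l @ A + P_x
    return np.linalg.solve(N, A.T @ P_l @ l)
```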
4. Datasets and Methodology

For comparative analysis of the epipolar resampling methods, four stereo image sequences (TEST01, TEST02, TEST03, and TEST04) were obtained by two webcams (Microsoft Webcam Cinema). The four image sequences were designed so that error factors increased from TEST01 to TEST04. Two sequences (TEST01 and TEST02) were obtained in a low resolution mode with an image size of 320 × 240 pixels and two (TEST03 and TEST04) in a high resolution mode with 640 × 480 pixels. Each sequence was obtained with different baseline distances and different camera tilt angles (see Table 1). This was to check the effect of wider baseline and larger tilt angle on epipolar resampling performance. For each sequence, 100 stereo pairs were acquired without camera movement to analyze the consistency of geometry estimation with respect to change of tie-points among different pairs.

For estimation of epipolar geometry, tie-points were extracted using the scale invariant feature transform (SIFT) algorithm [38] and the outlier removal algorithm [39] based on random sample consensus (RANSAC); a sketch of this pipeline is given after Figure 3. Tolerances for outlier removal were set to one pixel for TEST01 and three pixels for the other datasets. The tolerance of one pixel was chosen to set the optimal case, where tie-points had little position error. The tolerance of three pixels was chosen to set the practical case since this value is most widely used in practice. This aims to analyze the robustness against tie-point errors. Additional tie-points were extracted manually at 10 locations in individual image sequences. These points were used to check the accuracy of epipolar resampling. Figure 3 shows one stereo pair from each of the four image sequences and the 10 tie-points extracted manually.
Figure 3. Image datasets used and the 10 tie-points extracted manually for performance evaluation. (a) TEST01, (b) TEST02, (c) TEST03, (d) TEST04.
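The tie-point pipeline sketched below follows the description above using OpenCV's SIFT implementation and RANSAC-based fundamental matrix estimation; the ratio-test threshold of 0.75 is a conventional choice of ours, not a value reported in the paper.

```python
import cv2
import numpy as np

def extract_tiepoints(img_left, img_right, tolerance_px=3.0):
    """SIFT matching followed by RANSAC outlier removal; images are grayscale."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_left, None)
    k2, d2 = sift.detectAndCompute(img_right, None)
    # Ratio-test matching of SIFT descriptors
    matcher = cv2.BFMatcher()
    matches = [m for m, n in matcher.knnMatch(d1, d2, k=2)
               if m.distance < 0.75 * n.distance]
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # RANSAC rejects matches farther than the tolerance from the epipolar model
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, tolerance_px)
    mask = inliers.ravel() == 1
    return pts1[mask], pts2[mask]
```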
In the experiments, five existing epipolar resampling methods were tested: three developed in computer vision and two in photogrammetry. Table 1 lists the methods used for the experiments. Note that these methods were chosen as representative of the different approaches mentioned in Sections 2 and 3. "Bouguet" indicates the calibrated approach based on the homographies in Equation (6) developed by Bouguet [40]. "Hartley" is the normalized eight-point algorithm with geometric error minimization based on the Sampson cost function, which is one of the most well-known uncalibrated methods developed in the computer vision field [8]. "Fusiello" is the quasi-Euclidean epipolar rectification developed by Fusiello and Irsara [30]. These two methods belong to the uncalibrated approaches. "Kim" represents a photogrammetric epipolar resampling method based on the relative orientation explained in Section 3 [16]. "Bayesian" is a photogrammetric epipolar resampling with the Bayesian approach for estimating relative orientation parameters, as explained in Section 3.

For the first three methods, we used software available publicly, and for the fourth and fifth methods, we implemented the algorithms in-house. The Bouguet method requires a calibration process determining the intrinsic and extrinsic parameters. For this, 10 pairs of chessboard patterns were used; a sketch of such a step is given below. The intrinsic parameters acquired from this process were also used in the Kim and Bayesian methods.
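A minimal sketch of such a calibration step, using OpenCV's chessboard detection, per-camera intrinsic calibration and stereo calibration for the extrinsic parameters, is given below. The board geometry, square size and grayscale-input assumption are illustrative, not the paper's settings.

```python
import cv2
import numpy as np

def calibrate_stereo(left_imgs, right_imgs, board=(9, 6), square=0.025):
    """left_imgs, right_imgs: lists of synchronized grayscale chessboard views."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, pts_l, pts_r = [], [], []
    for il, ir in zip(left_imgs, right_imgs):
        ok_l, c_l = cv2.findChessboardCorners(il, board)
        ok_r, c_r = cv2.findChessboardCorners(ir, board)
        if ok_l and ok_r:
            obj_pts.append(objp); pts_l.append(c_l); pts_r.append(c_r)
    size = left_imgs[0].shape[::-1]
    # Intrinsics (K, distortion) per camera from the chessboard views
    _, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, pts_l, size, None, None)
    _, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, pts_r, size, None, None)
    # Extrinsics: relative rotation R and translation T between the cameras
    _, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, pts_l, pts_r, K1, D1, K2, D2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, D1, K2, D2, R, T
```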
As mentioned earlier, the Bayesian method needs to determine a priori constraints for estimation parameters and tie-point measurements. These constraints will contribute to correct determination of the unknown parameters by limiting their convergence range in the least-squares sense. We set the a priori covariance values for tie-points to one pixel in consideration of the expected rectification error
and the expected accuracy of tie-points obtained from the SIFT and outlier removal algorithms. We set
the a priori covariance values related to orientation parameters to the square of 30° in consideration of the expected variation of camera angles from the ideal case. We set the a priori covariance values for the baseline vector ($b_y$ and $b_z$) to one in consideration of the expected deviation of the baseline vector from the ideal case when assuming $b_x = 100$. For different experiment settings, different covariance
values would be required. For our case, we set relatively large angular covariance values and relatively
small positional covariance values. This was because we installed two cameras on a stereo rig and
rotated them manually.
A total of four performance measures were used to compare the methods quantitatively. For the evaluation of epipolar geometry, rectification errors ($E_r$) and epipolar line fluctuation ($E_v$) were used, and for the evaluation of image distortion, orthogonality ($E_o$) and relative scale change in the result images ($E_a$) were used [11]. Rectification errors refer to the residual Y parallaxes in the result image and are measured as the difference of transformed vertical coordinates of tie-points in the left and right images. Epipolar line fluctuation was measured in pixels as standard deviations of the transformed image coordinates over the whole image sequence. In the ideal case, rectification errors and epipolar line fluctuation should be zero. Orthogonality was measured as the angle between the x and y axes after epipolar resampling, which should ideally be 90°. Relative scale change was measured as the ratio of the two diagonal lines after epipolar resampling, which should ideally be one. For more details regarding the performance measures, refer to [11].
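A sketch of the four measures under our own conventions is given below; it assumes per-frame arrays of transformed vertical tie-point coordinates and the four transformed image corners ordered top-left, top-right, bottom-left, bottom-right, which is an assumption rather than the exact evaluation code of [11].

```python
import numpy as np

def rectification_error(y_left, y_right):
    """Er: residual Y parallax between transformed tie-points of one pair."""
    return np.mean(np.abs(y_left - y_right))

def line_fluctuation(y_frames):
    """Ev: stdev of transformed coordinates across the sequence (frames x points)."""
    return np.std(y_frames, axis=0).mean()

def orthogonality(corners):
    """Eo: angle between the x and y image axes after resampling (ideally 90 deg).
    corners: 4x2 array ordered top-left, top-right, bottom-left, bottom-right."""
    x_axis = corners[1] - corners[0]
    y_axis = corners[2] - corners[0]
    cosang = x_axis @ y_axis / (np.linalg.norm(x_axis) * np.linalg.norm(y_axis))
    return np.degrees(np.arccos(cosang))

def scale_change(corners):
    """Ea: ratio of the two diagonals after resampling (ideally one)."""
    return (np.linalg.norm(corners[3] - corners[0]) /
            np.linalg.norm(corners[2] - corners[1]))
```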
5. Results of Performance Evaluation
The experiments were carried out by firstly using single pairs and then using image sequences.
In the two experiments, the results of the Bouguet method were used as ground truth since they were
acquired by calibration patterns. Table 1 summarizes the performance comparison of the five methods
applied to single pairs. For the methods other than the Bouguet method, we chose 10 consecutive frames from each image sequence, accumulated tie-points from the 10 frames and estimated a single transformation matrix. Note that due to the limit of the matrix dimension, we chose 10 instead of all frames. Among the four accuracy parameters, epipolar line fluctuation ($E_v$) was not included because it relates to an image sequence only. The actual number of tie-points from the 10 frames used for epipolar resampling is also shown in Table 1. In order to better interpret the experiment results, distortion measurements were also represented as differences ($\Delta E_o$ and $\Delta E_a$) from those of the Bouguet method.
In the results of TEST01 and TEST02 datasets, all methods showed satisfactory results in terms
of rectification errors. These results demonstrate that although mismatched tie-points are partly
included, acceptable transformation accuracy can be produced if the camera configuration is favorable.
However, in the case of TEST03 and TEST04 datasets, rectification errors were increased for all test
methods. In particular, a large error increase was observed for the uncalibrated epipolar resampling
methods, Hartley and Fusiello. This observation agrees well with previous investigations showing
that the uncalibrated methods were more prone to tie-point errors in weak stereo geometry [10,11].
The uncalibrated methods also showed larger image distortion; this distortion was observed over all datasets compared to the results of the Bouguet method. In this experiment, the two photogrammetric methods, Kim and Bayesian, showed almost identical results. This can be interpreted as the effect of tie-point errors being handled better than in the uncalibrated methods by estimating relative orientation parameters directly. A slight rectification error increase for the Kim and Bayesian methods was observed with TEST04 due to the wide baseline and large tilt angle.
Table 2 summarizes the performance comparison of the epipolar resampling methods applied to
all 100 stereo pairs of the four test datasets. Note that the Bouguet method was not tried with image sequences
since it uses identical homography over all pairs within the same image sequence. For this experiment,
we did not accumulate tie-points of consecutive images. We treated each frame independently
by estimating a transformation matrix, generating an epipolar resampled pair and calculating the
transformation accuracy per frame. Table 2 shows the average number of tie-points used for each
image sequence. The accuracy parameters in Table 2 were the average (“Mean” in the Table) and
standard deviation (Stdev) of the parameters for the 100 frames.
Table 1. Results of performance evaluation for single pairs chosen from each image sequence (C, U and P indicate calibrated, uncalibrated and photogrammetric approaches, respectively).

TEST01 (image size: 320 × 240; baseline width: narrow; camera tilt angle: small; tie-point errors: small; tie-points used: 858)

| Method | $E_r$ [pixel] | $E_o$ ($\Delta E_o$) [°] | $E_a$ ($\Delta E_a$) |
|---|---|---|---|
| (C) Bouguet | 0.52 | 90.01 (0.00) | 1.00 (0.00) |
| (U) Hartley | 0.46 | 91.77 (1.76) | 1.02 (0.02) |
| (U) Fusiello | 0.48 | 89.91 (−0.10) | 1.00 (0.00) |
| (P) Kim | 0.35 | 89.99 (−0.02) | 1.00 (0.00) |
| (P) Bayesian | 0.37 | 90.01 (0.00) | 1.00 (0.00) |

TEST02 (image size: 320 × 240; baseline width: narrow; camera tilt angle: medium; tie-point errors: large; tie-points used: 480)

| Method | $E_r$ [pixel] | $E_o$ ($\Delta E_o$) [°] | $E_a$ ($\Delta E_a$) |
|---|---|---|---|
| (C) Bouguet | 0.64 | 90.07 (0.00) | 1.00 (0.00) |
| (U) Hartley | 0.63 | 90.73 (0.66) | 1.01 (0.01) |
| (U) Fusiello | 0.59 | 90.52 (0.45) | 1.01 (0.01) |
| (P) Kim | 0.61 | 90.09 (0.02) | 1.00 (0.00) |
| (P) Bayesian | 0.43 | 90.11 (0.04) | 1.00 (0.00) |

TEST03 (image size: 640 × 480; baseline width: narrow; camera tilt angle: medium; tie-point errors: large; tie-points used: 500)

| Method | $E_r$ [pixel] | $E_o$ ($\Delta E_o$) [°] | $E_a$ ($\Delta E_a$) |
|---|---|---|---|
| (C) Bouguet | 0.66 | 90.07 (0.00) | 1.00 (0.00) |
| (U) Hartley | 3.70 | 89.48 (−0.59) | 0.99 (−0.01) |
| (U) Fusiello | 3.26 | 89.98 (−0.09) | 1.00 (0.00) |
| (P) Kim | 0.56 | 90.07 (0.00) | 1.00 (0.00) |
| (P) Bayesian | 0.61 | 90.06 (−0.01) | 1.00 (0.00) |

TEST04 (image size: 640 × 480; baseline width: wide; camera tilt angle: large; tie-point errors: large; tie-points used: 277)

| Method | $E_r$ [pixel] | $E_o$ ($\Delta E_o$) [°] | $E_a$ ($\Delta E_a$) |
|---|---|---|---|
| (C) Bouguet | 0.73 | 90.38 (0.00) | 1.01 (0.00) |
| (U) Hartley | 1.51 | 90.22 (−0.16) | 1.00 (−0.01) |
| (U) Fusiello | 1.26 | 90.44 (0.06) | 1.01 (0.00) |
| (P) Kim | 1.09 | 90.33 (−0.05) | 1.01 (0.00) |
| (P) Bayesian | 1.13 | 90.35 (−0.03) | 1.01 (0.00) |
Table 2. Results of performance evaluation for all stereo pairs of the image sequences.

TEST01 (tie-points used: 98 points/frame)

| Method | $E_r$ Mean | $E_r$ Stdev | $E_o$ ($\Delta E_o$) Mean [°] | $E_o$ Stdev | $E_a$ ($\Delta E_a$) Mean | $E_a$ Stdev | $E_v$ [pixel] |
|---|---|---|---|---|---|---|---|
| (U) Hartley | 0.50 | 0.14 | 91.98 (1.97) | 0.74 | 1.02 (0.02) | 0.01 | 3.69 |
| (U) Fusiello | 0.50 | 0.06 | 89.90 (−0.11) | 0.09 | 1.00 (0.00) | 0.00 | 4.13 |
| (P) Kim | 0.38 | 0.05 | 89.98 (−0.03) | 0.02 | 1.00 (0.00) | 0.00 | 4.44 |
| (P) Bayesian | 0.38 | 0.02 | 90.01 (0.00) | 0.00 | 1.00 (0.00) | 0.00 | 0.57 |

TEST02 (tie-points used: 57 points/frame)

| Method | $E_r$ Mean | $E_r$ Stdev | $E_o$ ($\Delta E_o$) Mean [°] | $E_o$ Stdev | $E_a$ ($\Delta E_a$) Mean | $E_a$ Stdev | $E_v$ [pixel] |
|---|---|---|---|---|---|---|---|
| (U) Hartley | 0.71 | 0.86 | 89.71 (−0.36) | 2.07 | 1.00 (0.00) | 0.06 | 7.49 |
| (U) Fusiello | 0.66 | 0.61 | 90.43 (0.36) | 0.74 | 1.00 (0.00) | 0.01 | 10.59 |
| (P) Kim | 0.60 | 0.55 | 90.23 (0.16) | 0.10 | 1.00 (0.00) | 0.00 | 14.05 |
| (P) Bayesian | 0.43 | 0.29 | 90.09 (0.02) | 0.02 | 1.00 (0.00) | 0.00 | 0.50 |

TEST03 (tie-points used: 58 points/frame)

| Method | $E_r$ Mean | $E_r$ Stdev | $E_o$ ($\Delta E_o$) Mean [°] | $E_o$ Stdev | $E_a$ ($\Delta E_a$) Mean | $E_a$ Stdev | $E_v$ [pixel] |
|---|---|---|---|---|---|---|---|
| (U) Hartley | 1.58 | 1.36 | 91.25 (1.18) | 2.18 | 1.01 (0.01) | 0.02 | 5.45 |
| (U) Fusiello | 1.62 | 1.50 | 90.36 (0.29) | 0.70 | 1.00 (0.00) | 0.00 | 17.72 |
| (P) Kim | 1.10 | 1.24 | 90.10 (0.03) | 0.03 | 1.00 (0.00) | 0.00 | 5.24 |
| (P) Bayesian | 0.77 | 0.55 | 90.07 (0.00) | 0.01 | 1.00 (0.00) | 0.00 | 1.68 |

TEST04 (tie-points used: 29 points/frame)

| Method | $E_r$ Mean | $E_r$ Stdev | $E_o$ ($\Delta E_o$) Mean [°] | $E_o$ Stdev | $E_a$ ($\Delta E_a$) Mean | $E_a$ Stdev | $E_v$ [pixel] |
|---|---|---|---|---|---|---|---|
| (U) Hartley | 1.92 | 2.24 | 90.01 (−0.37) | 3.44 | 1.00 (−0.01) | 0.02 | 4.65 |
| (U) Fusiello | 2.09 | 2.13 | 90.54 (0.16) | 0.29 | 1.01 (0.00) | 0.00 | 5.84 |
| (P) Kim | 2.66 | 3.18 | 90.37 (−0.01) | 0.09 | 1.01 (0.00) | 0.00 | 6.02 |
| (P) Bayesian | 1.63 | 0.88 | 90.36 (−0.02) | 0.07 | 1.01 (0.00) | 0.00 | 1.79 |
Firstly, we notice that the performance for an image sequence is different from that for single pairs. For a single pair, all methods tested showed small differences among themselves, whereas for image sequences their performance differed significantly. All methods showed variations over image sequences in rectification errors, image distortion errors and epipolar line fluctuation. In particular, epipolar line fluctuation was very significant for all but the Bayesian method for all test datasets. These results imply inconsistent geometry estimation according to the change of the tie-points among frames, and that those methods may not be suitable for handling epipolar resampling of image sequences. Among those tested, the Bayesian method showed the least changes over image sequences. This result demonstrates that by limiting convergence ranges of orientation parameters, stable geometry estimation was achieved. This property of the Bayesian approach may be favorable for image sequences. Figure 4 shows the fluctuation of the rectification errors for all image frames of TEST02 for comparison of the consistency.

Figure 4. Rectification errors for each frame of TEST02 dataset.
Secondly, we can see the tendency of accuracy degradation due to weak stereo geometry, which is similar to the result of the single pair experiment. By comparing TEST01 and TEST02 results, we can check that accuracy was degraded with larger tie-point errors. Nevertheless, the Bayesian method showed the least degradation. By comparing TEST02 and TEST03, we can observe that the rectification and epipolar line fluctuation errors increased with a larger image size and hence with a smaller pixel size in an identical camera. By comparing TEST03 and TEST04, we can observe that the errors also increased with larger misalignment between the two images.

Thirdly, all four errors in Table 2 increased compared to those of Table 1 for all methods tested. This was anticipated because the number of tie-points used for epipolar resampling in Table 2 was smaller than that in Table 1. However, the Bayesian method produced the least accuracy degradation. This observation may also favor the Bayesian method in the case of a smaller number of tie-points.
In addition, we checked the performance differences in the methods tested by comparing the resampled image sequences visually. Figure 5 shows a portion of the resampled left and right images of TEST02 from each method. Each column shows the consecutive resampled image pairs from a particular method. The white horizontal lines represent the epipolar lines of the resampled images. We can check the effects of image distortion and inconsistent epipolar geometry estimations in the result images. The resampled sequences with inconsistent epipolar geometry estimations cannot form a stereoscopic image sequence due to abrupt scene changes. The last column shows that the Bayesian method could handle an image sequence with consistency. An explicit comparison of execution time was not carried out because each method tested was under different circumstances. The Hartley and Kim methods were optimized for fast execution (about 32 and 26 frames/s, respectively, on a platform with CPU i5-4460, clock speed 3.20 GHz and memory size 8 GB). The Fusiello and Bayesian methods were not optimized for speed. The Fusiello method was available only as Matlab executables and was very slow (about 2 frames/s). The Bayesian method used very large matrices without efficient matrix decomposition for parameter estimation and was also slow (about 13 frames/s). Reduction of processing time for Fusiello or Bayesian was, however, deemed outside of this paper's scope and was not tried.
Figure 5. A portion of the results from the consecutive rectified images of TEST02. From left, (a) original image and results of the (b) Hartley, (c) Fusiello, (d) Kim, and (e) Bayesian methods.
6. Conclusions

In this paper, epipolar resampling methods developed in computer vision and photogrammetry were analyzed in terms of rectification errors, image distortion and stability of epipolar geometry. The analysis was focused on the resampling of image sequences.

From the theoretical review, we pointed out that while the epipolar resampling methods developed in the two fields are mathematically identical, their performance in parameter estimation may differ.

From the analysis of experiment results, we showed that for epipolar resampling of a single image pair all uncalibrated and photogrammetric methods could be used; however, for image sequences all methods tested, except the Bayesian method, showed significant variations in terms of image distortions and epipolar line fluctuation among image pairs within the same sequence. Our results indicate that the Bayesian method is favorable for epipolar resampling of image sequences. These results imply that although differences among the epipolar resampling methods may have been un-noticeable in resampling a single pair, they may not be in resampling image sequences.

As a future study, we will investigate the cause of such variations, ways to minimize them and how to remove the remaining small variations in the Bayesian method. Optimization of the processing speed of the Bayesian method will also be studied. Recently, the rapid change of image sources and the diversification of relevant applications have been calling for the merging of computer vision and photogrammetric technologies. We hope that our investigation can contribute to the understanding of epipolar resampling technologies developed in both fields and to the development of new stereo applications.

Acknowledgments: This research was supported by the "Development of the integrated data processing system for GOCI-II" funded by the Ministry of Ocean and Fisheries, Korea.
Author Contributions: Jae-In Kim designed the work, performed the experiments and wrote the manuscript; Taejung Kim supervised the whole process. He designed the experiments and wrote the manuscript.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Kim, T. A Study on the Epipolarity of Linear Pushbroom Images. Photogramm. Eng. Remote Sens. 2000, 66, 961–966.
2. Lee, H.; Kim, T.; Park, W.; Lee, H. Extraction of Digital Elevation Models from Satellite Stereo Images through Stereo Matching Based on Epipolarity and Scene Geometry. Image Vision Comput. 2003, 21, 789–796. [CrossRef]
3. Noh, M.; Cho, W.; Bang, K. Highly Dense 3D Surface Generation using Multi-Image Matching. ETRI J. 2012, 34, 87–97. [CrossRef]
4. Kang, Y.; Ho, Y. An Efficient Image Rectification Method for Parallel Multi-Camera Arrangement. IEEE Trans. Consum. Electron. 2011, 57, 1041–1048. [CrossRef]
5. Al-Shalfan, K.A.; Haigh, J.G.B.; Ipson, S.S. Direct Algorithm for Rectifying Pairs of Uncalibrated Images. Electron. Lett. 2000, 36, 419–420. [CrossRef]
6. Chen, Z.; Wu, C.; Tsui, H.T. A New Image Rectification Algorithm. Pattern Recogn. Lett. 2003, 24, 251–260. [CrossRef]
7. Hartley, R.I. Theory and Practice of Projective Rectification. Int. J. Comput. Vision 1999, 35, 115–127. [CrossRef]
8. Hartley, R.I.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2003.
9. Loop, C.; Zhang, Z. Computing Rectifying Homographies for Stereo Vision. In Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 1999), Fort Collins, CO, USA, 23–25 June 1999; pp. 125–131.
10. Seo, J.; Hong, H.; Jho, C.; Choi, M. Two Quantitative Measures of Inlier Distributions for Precise Fundamental Matrix Estimation. Pattern Recogn. Lett. 2004, 25, 733–741. [CrossRef]
11. Mallon, J.; Whelan, P.F. Projective Rectification from the Fundamental Matrix. Image Vision Comput. 2005, 23, 643–650. [CrossRef]
12. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [CrossRef]
13. Bok, Y.; Ha, H.; Kweon, I. Automated Checkerboard Detection and Indexing Using Circular Boundaries. Pattern Recogn. Lett. 2016, 71, 66–72. [CrossRef]
14. Schenk, T. Digital Photogrammetry; Terra Science: Laurelville, OH, USA, 1999.
15. Wolf, P.; DeWitt, B. Elements of Photogrammetry with Applications in GIS, 3rd ed.; McGraw-Hill Science: New York, NY, USA, 2000.
16. Kim, J.; Kim, H.; Lee, T.; Kim, T. Photogrammetric Approach for Precise Correction of Camera Misalignment for 3D Image Generation. In Proceedings of the 2012 IEEE International Conference on Consumer Electronics (ICCE 2012), Las Vegas, NV, USA, 13–16 January 2012; pp. 396–397.
17. McGlone, J.C.; Mikhail, E.M.; Bethel, J. (Eds.) Manual of Photogrammetry, 5th ed.; American Society of Photogrammetry and Remote Sensing: Bethesda, MD, USA, 2004.
18. Lazaros, N.; Sirakoulis, G.C.; Gasteratos, A. Review of Stereo Vision Algorithms: From Software to Hardware. Int. J. Optomech. 2008, 2, 435–462. [CrossRef]
19. Zilly, F.; Kluger, J.; Kauff, P. Production Rules for Stereo Acquisition. IEEE Proc. 2011, 99, 590–606. [CrossRef]
20. Guerra, E.; Munguia, R.; Grau, A. Monocular SLAM for Autonomous Robots with Enhanced Features Initialization. Sensors 2014, 14, 6317–6337. [CrossRef] [PubMed]
21. Adams, L.P. Fourcade: The Centenary of a Stereoscopic Method of Photographic Surveying. Photogramm. Rec. 2001, 17, 225–242. [CrossRef]
22. Wang, M.; Hu, F.; Li, J. Epipolar Resampling of Linear Pushbroom Satellite Imagery by a New Epipolarity Model. ISPRS J. Photogramm. Remote Sens. 2011, 66, 347–355. [CrossRef]
23. Longuet-Higgins, H.C. A Computer Algorithm for Reconstructing a Scene from Two Projections. Nature 1981, 293, 133–135. [CrossRef]
24. Wu, H.; Yu, Y.H. Projective Rectification with Reduced Geometric Distortion for Stereo Vision and Stereoscopic Video. J. Intell. Rob. Syst. Theor. Appl. 2005, 42, 71–94. [CrossRef]
25. Hartley, R.I. In Defense of the Eight-Point Algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 580–593. [CrossRef]
26. Hartley, R.I. Minimizing Algebraic Error in Geometric Estimation Problems. In Proceedings of the 1998 IEEE 6th International Conference on Computer Vision, Bombay, India, 4–7 January 1998; pp. 469–476.
27. Banno, A.; Ikeuchi, K. Estimation of F-Matrix and Image Rectification by Double Quaternion. Inf. Sci. 2012, 183, 140–150. [CrossRef]
28. Fathy, M.E.; Hussein, A.S.; Tolba, M.F. Fundamental Matrix Estimation: A Study of Error Criteria. Pattern Recogn. Lett. 2011, 32, 383–391. [CrossRef]
29. Kumar, S.; Micheloni, C.; Piciarelli, C.; Foresti, G.L. Stereo Rectification of Uncalibrated and Heterogeneous Images. Pattern Recogn. Lett. 2010, 31, 1445–1452. [CrossRef]
30. Fusiello, A.; Irsara, L. Quasi-Euclidean Epipolar Rectification of Uncalibrated Images. Mach. Vision Appl. 2011, 22, 663–670. [CrossRef]
31. Brown, D.C. Close-Range Camera Calibration. Photogramm. Eng. 1971, 37, 855–866.
32. Tsai, R.Y. A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology using Off-the-Shelf TV Cameras and Lenses. IEEE J. Robot. Automat. 1987, 3, 323–344. [CrossRef]
33. Medioni, G.; Kang, S. Emerging Topics in Computer Vision; Prentice Hall PTR: Upper Saddle River, NJ, USA, 2004.
34. Cho, W.; Schenk, T.; Madani, M. Resampling Digital Imagery to Epipolar Geometry. Int. Arch. Photogramm. Remote Sens. 1993, 29, 404–408.
35. Koch, K. Introduction to Bayesian Statistics, 2nd ed.; Springer: Berlin, Germany, 2007.
36. Agouris, P.; Schenk, T. Automated Aerotriangulation using Multiple Image Multipoint Matching. Photogramm. Eng. Remote Sens. 1996, 62, 703–710.
37. Grodecki, J.; Dial, G. Block Adjustment of High-Resolution Satellite Images Described by Rational Polynomials. Photogramm. Eng. Remote Sens. 2003, 69, 59–68. [CrossRef]
38. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vision 2004, 60, 91–110. [CrossRef]
39. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395. [CrossRef]
40. Bouguet's Camera Calibration Toolbox. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/ (accessed on 12 February 2016).
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).