Rational Functions and Potential for Rigorous Sensor Model Recovery
Kaichang Di, Ruijin Ma, and Rong Xing Li
Abstract
Rational functions (RFs) have been applied in photogrammetry
and remote sensing to represent the transformation between
the image space and object space whenever the rigorous sensor
model is made unavailable, intentionally or unintentionally.
They are attracting more attention now that Ikonos high-resolution
images are being released to users with only RF coefficients.
This paper briefly introduces the RF for photogrammetric
processing. Equations of space intersection with the upward RF
are derived. Computational experiments with one-meter
resolution Ikonos Geo stereo images and other airborne
data verified the accuracy of the upward RF-based space
intersection. We demonstrate different ways to improve the
geopositioning accuracy of Ikonos Geo stereo imagery with
ground control points, either by refining the vendor-provided
Ikonos RF coefficients or by refining the RF-derived ground
coordinates. The accuracy of 3D ground point determination
was improved to 1 to 2 meters after the refinement. Finally,
we show the potential for recovering the sensor models of a
frame image and a linear array image from the RF.
Introduction
A rigorous sensor model of an image is used to reconstruct the
physical imaging setting and the transformations between the 3D
object space and the image space. It includes physical parameters
of the camera, such as focal length, principal point location,
pixel size, and lens distortions, as well as orientation parameters
of the image, such as its position and attitude. Collinearity
conditions are the most popular equations used to
implement the transformations based on the rigorous sensor
model. Such rigorous models are conventionally applied in
photogrammetric processing because of the clear separation
between the various parameters representing different physical
settings. Consequently, the parameters can be modeled and
calibrated to the high accuracy required in mapping and many
other applications (Mikhail et al., 2001).
Rational functions (RFs) have recently drawn considerable
interest in the civilian remote sensing community, especially
in light of the trend that some commercial high-resolution satellite imaging data such as Ikonos are supplied with RFs (Cheng
and Toutin, 2000) instead of rigorous sensor models. An RF
model is generally the ratio of two polynomials derived from
the rigorous sensor model and the corresponding terrain information, which does not reveal the sensor parameters. RFs have
been used for different reasons, for example, to supply data
without disclosing the sensor model or to achieve generality.
Among the various versions of RFs is the special RF model
called the Universal Sensor Model (USM), first developed and
implemented by GDE Systems Inc., now BAE Systems, and
then used by the OpenGIS Consortium (Whiteside, 1997; OGC,
1999). In this model several improvements were made to
achieve a higher accuracy and to be effective for implementation. Madani (1999) discussed advantages and disadvantages of
RFs compared with rigorous sensor models. He tested the accuracy
of the RF solution using 12 SPOT Level 1A scenes of the
Winchester area in Virginia. Using two stereo image pairs with
50 ground (control/pass) points, the RMS error of the planimetric
coordinates, estimated from the differences between the
known and computed ground coordinates, was 0.18 m. The RMS
error of the Z coordinate was about 10 m. He concluded that the
RFs expressed the SPOT scenes very well and that properly
selected RFs can be used in operations of digital photogrammetric
systems. Tao and Hu (2000; 2001b) and Tao et al. (2000)
gave a least-squares solution for RF parameter generation and
assessed the fitting accuracy using simulated DEM data, a SPOT
scene, and an aerial image. In their comprehensive investigation, various scenarios with different polynomial orders and
different forms of the denominators were tested and compared.
It was found that RFs are sensitive to the distribution of control
points (CPs). If CPs are well distributed, RFs normally perform
much better than regular polynomials (no denominator). Hu
and Tao (2001) proposed two methods to update solutions of
the RF model using additional ground control points (GCPs).
Yang (2000) performed an experiment using a pair of SPOT
images and a pair of NAPP (National Aerial Photography Program) aerial images. The RF fitting result indicates that the
third-order RF, even the second-order RF, with various denominators achieved an RMS error of less than 0.2 pixels when approximating the rigorous SPOT sensor model. Additionally, the
first-order RFs were appropriate for the aerial images. A similar
experiment performed by Alamus et al. (2000) used a MOMS
data set that covers an area of 120 km by 40 km in the Andes
(between Chile and Bolivia). The RF model reached RMS errors
of 7.9 m, 8.1 m, and 12.8 m in the X, Y, and Z directions, respectively. Dowman and Dolloff (2000) reviewed the RF technique
and proposed a method for RF-based error propagation.
All the test results mentioned above indicate that the RF
models can be used to approximate rigorous sensor models for
linear scanning sensors or frame cameras, as confirmed by Grodecki (2001). On the other hand, the absolute accuracy of 3D
ground point determination, or geopositioning, depends on the
accuracy of the actual rigorous sensor model itself and the RF
generation method. Fraser et al. (2001) investigated the accuracy of geopositioning using Ikonos Geo stereo images in a Melbourne test field. The RF model, an extended DLT (Direct Linear
Transformation) model, and an Affine projection model were
Photogrammetric Engineering & Remote Sensing, Vol. 69, No. 1, January 2003, pp. 33–41.
Department of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, Columbus, OH 43210 ([email protected]).
© 2003 American Society for Photogrammetry and Remote Sensing
tested and compared. A submeter accuracy of 2D and 3D positioning was reported. Baltsavias et al. (2001) investigated the
radiometric and geometric characteristics of Ikonos Geo imagery and its use for orthoimage generation and 3D building reconstruction. Point positioning was evaluated using the same
stereo images in the Melbourne test field. With the help of four
to eight accurate and well-distributed GCPs and a simple translation (bias removal), accuracies of 0.4 to 0.5 m in planimetry
and 0.6 to 0.8 m in height were achieved with the full set of 80
RF coefficients (RFCs) per image, or reduced RFCs omitting the
higher-order terms.
A parametric sensor model (the CCRS, Canada Centre for
Remote Sensing, model) for geometric processing of Ikonos
imagery was announced without a detailed description; it was
reported to be implemented in PCI OrthoEngine software
(Toutin and Cheng, 2000). This model was compared with the
RF model and a simple polynomial (SP) model using an Ikonos
Geo image of the City of Richmond Hill, Ontario, Canada (Toutin and Cheng, 2000). The result stated that, with 30 GCPs, the
RF model gave a better accuracy than did the CCRS and SP models while, with seven GCPs, the CCRS model produced a better
result. Davis and Wang (2001) presented a detailed assessment
of the planimetric accuracy of Ikonos Geo images using test
results from three sites in Missouri. The 1-m resolution Ikonos
Geo images were orthorectified using the RF model and the
CCRS sensor model. The planimetric accuracy of the orthorectified images was 2 to 3 m, which is comparable to the accuracy of
the Ikonos Precision product. It was also found that the RF
model produced a better accuracy than did the CCRS model
based on the errors at independent checkpoints. The RF method
also caused certain substantial distortions in some linear
features.
Di et al. (2001) discussed RFs of frame and linear array
images for coastal mapping and monitoring applications.
In this paper, we briefly review the principle of the rational
function (RF) and introduce the newly derived equations for
space intersections with the upward RF. We investigate the
accuracy and the application of the upward RF using a frame
image with its known rigorous sensor model. Two different
ways to improve the geopositioning accuracy of actual Ikonos
Geo stereo imagery through additional GCPs are proposed and
compared. Finally, we present the results of a feasibility study
on retrieving rigorous sensor model parameters from the RF
under different conditions. This experiment was performed
using a frame image and HRSC (high-resolution stereo camera)
stereo linear array images because their rigorous sensor parameters are available and can be used to compare with those recovered from the RF model.
Rigorous Sensor Model and Rational Functions
Rigorous Sensor Model
A rigorous sensor model is a physical model that describes the
imaging geometry and the transformation between the object
space and image space. For a point with ground coordinates
(X, Y, Z ) and image coordinates (x, y), the most commonly used
transformation model is expressed by the following collinearity equations:
x - x_o = -f \frac{a_{11}(X - X_s) + a_{12}(Y - Y_s) + a_{13}(Z - Z_s)}{a_{31}(X - X_s) + a_{32}(Y - Y_s) + a_{33}(Z - Z_s)}    (1a)

y - y_o = -f \frac{a_{21}(X - X_s) + a_{22}(Y - Y_s) + a_{23}(Z - Z_s)}{a_{31}(X - X_s) + a_{32}(Y - Y_s) + a_{33}(Z - Z_s)}    (1b)

where X_s, Y_s, and Z_s are the coordinates of the exposure center in
the ground coordinate system, f is the focal length of the camera,
x_o and y_o are the image coordinates of the principal point,
and the a_{ij} are elements of the rotation matrix formed from the three
angles (ω, φ, κ) (Moffitt and Mikhail, 1980). f, x_o, and y_o are usually
called interior orientation (IO) parameters, while X_s, Y_s,
Z_s, ω, φ, and κ are called exterior orientation (EO) parameters. In
addition, the image coordinates (x, y) are corrected for lens distortions,
both radial and decentering. Alternatively, the
distortion parameters can be included and estimated in
Equation 1.

Equations 1a and 1b can be applied to both frame and linear
array sensors. For a frame camera, one image has one set of
EO parameters while, for linear array sensors such as the SPOT
and Ikonos imaging systems, each scan line has its own EO
parameters. That is, the EO parameters change from image line
to image line. Such changes in the EO parameters are often
modeled by polynomials. From a computational point of view,
solving for polynomial coefficients of the EO parameters in a
photogrammetric adjustment, instead of the actual EO parameters of
each image line, greatly reduces the required number of GCPs
and achieves a higher computational efficiency (Li, 1998;
Zhou and Li, 2000).

The inverse collinearity equations transform the image
coordinates (x, y) and the elevation Z into the ground coordinates (X, Y): i.e.,

X - X_s = (Z - Z_s) \frac{a_{11}(x - x_o) + a_{21}(y - y_o) - a_{31} f}{a_{13}(x - x_o) + a_{23}(y - y_o) - a_{33} f}    (2a)

Y - Y_s = (Z - Z_s) \frac{a_{12}(x - x_o) + a_{22}(y - y_o) - a_{32} f}{a_{13}(x - x_o) + a_{23}(y - y_o) - a_{33} f}    (2b)

Because of its power in modeling the physical characteristics
of the sensor and the imaging setting, the rigorous sensor
model is usually the preferred geometric model in photogrammetric
applications. However, the rigorous sensor model is
rather complex and requires specialized software, and sometimes
the sensor models and their physical parameters are not
made available, either unintentionally or intentionally.

Rational Functions
RFs perform transformations between the image and object
spaces through ratios of two polynomials. The image coordinates
(x, y) and the ground coordinates (X, Y, Z) are normalized
to the range from -1.0 to 1.0 by the image size and geometric
extent, respectively, for computational stability and to minimize
computational errors. Similar to Equation 1, the RF can be
expressed as (Whiteside, 1997; OGC, 1999; Madani, 1999; Tao
and Hu, 2000; Tao and Hu, 2001b)

x = \frac{P_1(X, Y, Z)}{P_2(X, Y, Z)}    (3a)

y = \frac{P_3(X, Y, Z)}{P_4(X, Y, Z)}    (3b)

The polynomials P_i (i = 1, 2, 3, and 4) have the general form

P(X, Y, Z) = \sum_{i=0}^{m_1} \sum_{j=0}^{m_2} \sum_{k=0}^{m_3} a_{ijk} X^i Y^j Z^k.    (4)

Usually, the order of the polynomials is limited by 0 ≤ m_1 ≤ 3,
0 ≤ m_2 ≤ 3, 0 ≤ m_3 ≤ 3, and m_1 + m_2 + m_3 ≤ 3. Each
P(X, Y, Z) is then a third-order, 20-term polynomial: i.e.,

P(X, Y, Z) = a_0 + a_1 X + a_2 Y + a_3 Z + a_4 X^2 + a_5 XY
  + a_6 XZ + a_7 Y^2 + a_8 YZ + a_9 Z^2 + a_{10} X^3
  + a_{11} X^2 Y + a_{12} X^2 Z + a_{13} XY^2 + a_{14} XYZ    (5)
  + a_{15} XZ^2 + a_{16} Y^3 + a_{17} Y^2 Z + a_{18} YZ^2 + a_{19} Z^3.

Replacing the P_i's in Equation 3 by the polynomials in
Equation 5 and eliminating the first coefficient in the denominator,
the RFs become

x = \frac{(1\; X\; Y\; Z\; \cdots\; YZ^2\; Z^3)(a_0\; a_1\; a_2\; a_3\; \cdots\; a_{18}\; a_{19})^T}{(1\; X\; Y\; Z\; \cdots\; YZ^2\; Z^3)(1\; b_1\; b_2\; b_3\; \cdots\; b_{18}\; b_{19})^T}    (6a)

y = \frac{(1\; X\; Y\; Z\; \cdots\; YZ^2\; Z^3)(c_0\; c_1\; c_2\; c_3\; \cdots\; c_{18}\; c_{19})^T}{(1\; X\; Y\; Z\; \cdots\; YZ^2\; Z^3)(1\; d_1\; d_2\; d_3\; \cdots\; d_{18}\; d_{19})^T}    (6b)
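To make Equation 6 concrete, the sketch below evaluates an upward RF on normalized coordinates. This is an illustrative sketch rather than vendor code: the function names are ours, the 20 monomials follow the term order of Equation 5, and real RPC files additionally carry the offset and scale factors used for normalization.

```python
import numpy as np

def cubic_basis(X, Y, Z):
    """20-term cubic monomial basis, in the term order of Equation 5."""
    return np.array([1, X, Y, Z,
                     X*X, X*Y, X*Z, Y*Y, Y*Z, Z*Z,
                     X**3, X*X*Y, X*X*Z, X*Y*Y, X*Y*Z, X*Z*Z,
                     Y**3, Y*Y*Z, Y*Z*Z, Z**3])

def rf_project(Xn, Yn, Zn, a, b, c, d):
    """Upward RF of Equation 6: normalized ground -> normalized image.

    a, c are the 20 numerator coefficients of x and y; b, d are the
    20 denominator coefficients with the first element fixed to 1."""
    t = cubic_basis(Xn, Yn, Zn)
    return t @ a / (t @ b), t @ c / (t @ d)
```

Choosing the coefficients so that the numerators pick out single monomials makes the behavior easy to verify by hand.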
Equation 6 contains 39 independent coefficients in total: 20 in the numerator and 19
plus the constant 1 in the denominator. In order to solve for the
RF coefficients, at least 39 control points are required.

Given the ground coordinate Z, the inverse form of the RF,
which transforms from the image space to the object space, can
be represented as (Yang, 2000)

X = \frac{P_5(x, y, Z)}{P_6(x, y, Z)}    (7a)

Y = \frac{P_7(x, y, Z)}{P_8(x, y, Z)}    (7b)

Equation 7 is called the downward RF and, similarly, Equation
3 is called the upward RF.

Usually, the RF model is generated from a rigorous sensor
model. After a rigorous sensor bundle adjustment is performed,
multiple evenly distributed image/object grid points
can be generated and used as control points (CPs). Such CPs are
created based on the full extent of the image and the range of
elevation variation; the entire range of elevation variation is
sliced into several layers. The RFCs are then calculated by a
least-squares adjustment with these virtual CPs. On the other
hand, if a sufficient number of GCPs is available, the RFCs can be
solved for with the GCPs directly, without knowing the rigorous
model. Tao and Hu (2000; 2001b) gave a detailed description of
a least-squares solution of RFCs and suggested using Tikhonov
regularization for tackling possible undulations.

Space Intersection Using Upward RF
A space intersection computes the 3D ground coordinates of a point
from the measured image coordinates of conjugate points in multiple
images. Yang (2000) discussed a solution using the downward
RF. In many cases, however, the upward RFs are used in practice; for
example, only upward RFCs are provided with Ikonos images.
We developed the equations of an upward RF-based space intersection
method and used it in 3D shoreline calculation (Di et al.,
2001).

If the upward RF coefficients are available, we can solve for
the ground coordinates (X, Y, Z) iteratively. The linearized
upward RFs are expressed as

x = (x) + \frac{\partial x}{\partial X}\Delta X + \frac{\partial x}{\partial Y}\Delta Y + \frac{\partial x}{\partial Z}\Delta Z    (8a)

y = (y) + \frac{\partial y}{\partial X}\Delta X + \frac{\partial y}{\partial Y}\Delta Y + \frac{\partial y}{\partial Z}\Delta Z    (8b)

where (x) and (y) are the values of the image coordinates computed
from the ground coordinates (X, Y, Z) estimated in the last
iteration. The partial derivatives follow the quotient rule, for example,

\frac{\partial x}{\partial X} = \frac{\dfrac{\partial P_1}{\partial X} P_2 - \dfrac{\partial P_2}{\partial X} P_1}{P_2^2}.

The other partial derivatives \partial x/\partial Y, \partial x/\partial Z,
\partial y/\partial X, \partial y/\partial Y, and \partial y/\partial Z
can be derived in a similar way. Further, the derivatives of the
polynomials P_i with respect to X, Y, and Z are derived, for example, by

\frac{\partial P_1}{\partial X} = a_1 + 2a_4 X + a_5 Y + a_6 Z + 3a_{10} X^2 + 2a_{11} XY + 2a_{12} XZ + a_{13} Y^2 + a_{14} YZ + a_{15} Z^2.    (9)

We rewrite Equation 8 as

v_x = \gamma_{11}\,\Delta X + \gamma_{12}\,\Delta Y + \gamma_{13}\,\Delta Z - l_x    (10a)

v_y = \gamma_{21}\,\Delta X + \gamma_{22}\,\Delta Y + \gamma_{23}\,\Delta Z - l_y    (10b)

where l_x = x - (x) and l_y = y - (y), and the coefficients \gamma_{ij} are the
partial derivatives. Given n (n ≥ 2) conjugate image points
from the stereo images, 4n equations from Equation 10 can be constructed
for the stereo images to solve the 3n unknowns (X, Y, Z).
This is accomplished iteratively by a least-squares adjustment.

Initial values of the ground coordinates are required in the
computation. For downward RF-based intersections, we
only need an initial value of the Z coordinate, for which the
average Z value of the covered area may be used. For the
upward RF-based intersections, the initial value of the Z coordinate
can be selected in the same way as for the downward RF-based
intersections. Given this initial Z value and the
image coordinates (x, y), the initial values of the X and Y coordinates
can then be computed from two first-order polynomials
that approximate the relationship between (X, Y) and
(x, y, Z).
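The iterative intersection of Equations 8 through 10 can be sketched as below. This is an illustrative implementation under our own naming; for brevity it approximates the partial derivatives by finite differences rather than the analytic quotient rule of Equation 9, which gives essentially the same Jacobian for well-scaled normalized coordinates.

```python
import numpy as np

def rf_xy(P, a, b, c, d):
    """Upward RF of Equation 6 evaluated at ground point P = (X, Y, Z)."""
    X, Y, Z = P
    t = np.array([1, X, Y, Z, X*X, X*Y, X*Z, Y*Y, Y*Z, Z*Z,
                  X**3, X*X*Y, X*X*Z, X*Y*Y, X*Y*Z, X*Z*Z,
                  Y**3, Y*Y*Z, Y*Z*Z, Z**3])
    return np.array([t @ a / (t @ b), t @ c / (t @ d)])

def intersect(rfs, obs, P0, iters=10):
    """Least-squares space intersection from n >= 2 images (Equations 8-10).

    rfs: list of per-image RFC tuples (a, b, c, d); obs: measured (x, y)
    per image; P0: initial ground coordinates."""
    P = np.asarray(P0, float)
    for _ in range(iters):
        A, l = [], []
        for coef, xy in zip(rfs, obs):
            f0 = rf_xy(P, *coef)
            J = np.empty((2, 3))
            for k in range(3):           # numerical partials dx/dX, dx/dY, ...
                dP = np.zeros(3); dP[k] = 1e-6
                J[:, k] = (rf_xy(P + dP, *coef) - f0) / 1e-6
            A.append(J)
            l.append(xy - f0)            # l_x, l_y of Equation 10
        A, l = np.vstack(A), np.concatenate(l)
        dX, *_ = np.linalg.lstsq(A, l, rcond=None)
        P += dX
        if np.linalg.norm(dX) < 1e-10:   # converged
            break
    return P
```

With two synthetic "images" whose RFs are linear, the adjustment converges in a single iteration.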
Improving the Geopositioning Accuracy of Ikonos Geo Stereo
Imagery Using GCPs
The vendor-provided RF coefficients might not be sufficiently
accurate to perform photogrammetric operations for some
applications. For example, the nominal ground accuracy of the
1-m resolution Ikonos Geo product is 25 m, and those of Ikonos
Precision and Precision Plus products are 1.9 m and 0.9 m,
respectively. It is desirable in many situations to enhance the
ground accuracy of the Geo product to the level of Precision or
Precision Plus products.
There are two methods to improve the geopositioning
accuracy of the Ikonos Geo product. The first is to compute
new RFCs, with the vendor-provided RF coefficients used as initial
values in Equations 3 and 4. Such high-quality initial values
make the solution of the new RFCs more stable and the
computational process faster to converge. Because the rigorous sensor
model is not available, high-quality virtual CPs cannot be produced from
it. Consequently, this method requires
a large number of GCPs to compute the new RFCs. In fact, more
than 39 GCPs are required for the third-order RF.
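The least-squares RFC solution referred to here can be sketched as follows: multiplying Equation 6a through by its denominator makes the problem linear in the 39 unknown coefficients. The routine below is our own illustrative code (the name `solve_rfc_1d` is hypothetical); it solves one image coordinate at a time and optionally applies the Tikhonov regularization suggested by Tao and Hu.

```python
import numpy as np

def cubic_basis(X, Y, Z):
    # 20 monomials up to third order, in the term order of Equation 5
    return np.array([1, X, Y, Z, X*X, X*Y, X*Z, Y*Y, Y*Z, Z*Z,
                     X**3, X*X*Y, X*X*Z, X*Y*Y, X*Y*Z, X*Z*Z,
                     Y**3, Y*Y*Z, Y*Z*Z, Z**3])

def solve_rfc_1d(XYZ, obs, ridge=0.0):
    """Estimate the 39 RFCs of one image coordinate (Equation 6a or 6b)
    from >= 39 control points with normalized coordinates.

    Multiplying x = (t.a)/(t.b) through by the denominator (b0 = 1)
    gives  t.a - x * (t[1:] . b') = x,  linear in (a0..a19, b1..b19)."""
    rows, rhs = [], []
    for (X, Y, Z), x in zip(XYZ, obs):
        t = cubic_basis(X, Y, Z)
        rows.append(np.concatenate([t, -x * t[1:]]))
        rhs.append(x)
    A, l = np.array(rows), np.array(rhs)
    if ridge > 0.0:   # optional Tikhonov regularization for ill-conditioning
        A = np.vstack([A, np.sqrt(ridge) * np.eye(39)])
        l = np.concatenate([l, np.zeros(39)])
    coef, *_ = np.linalg.lstsq(A, l, rcond=None)
    return coef[:20], np.concatenate([[1.0], coef[20:]])  # numerator, denominator
```

On synthetic control points generated from a known RF, the recovered coefficients reproduce the image coordinates at held-out points.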
The second method improves the ground coordinates
derived from the vendor-provided RFCs by a polynomial correction whose parameters are determined by the GCPs. The vendor-provided RFs are employed to perform the photogrammetric intersections to compute the ground coordinates from
the corresponding image points for all measured points,
including the GCPs and check points. A polynomial transformation is then applied to all the ground coordinates computed
from the RF. Each ground coordinate of a point (X_RF, Y_RF, Z_RF)
undergoes a first- (or second-) order polynomial transformation: i.e.,

X = a_0 + a_1 X_{RF} + a_2 Y_{RF} + a_3 Z_{RF}    (11a)

Y = b_0 + b_1 X_{RF} + b_2 Y_{RF} + b_3 Z_{RF}    (11b)

Z = c_0 + c_1 X_{RF} + c_2 Y_{RF} + c_3 Z_{RF}    (11c)

where X, Y, and Z are the improved ground coordinates. To
solve for the coefficients of the polynomials, at least four GCPs
are required for the first-order polynomials and ten GCPs for the
second-order polynomials.
Theoretically, the first method improves the RF
coefficients that describe the perspective imaging geometry,
which in turn enhances the quality of the photogrammetric
intersections. The second method, on the other hand, gives a mathematical
fit between the coordinates computed from the vendor-provided
RF and the coordinates of the GCPs; no improvement
of the sensor model parameters is performed. Consequently,
fewer GCPs are needed, and the method is also easy to implement
in practice.
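The second method reduces to an ordinary linear least-squares fit of Equation 11. A minimal sketch, assuming at least four GCPs with both RF-derived and surveyed coordinates (the function names are ours):

```python
import numpy as np

def fit_correction(rf_xyz, true_xyz):
    """Fit Equation 11: one first-order polynomial per ground coordinate,
    from >= 4 GCPs given as (n, 3) arrays of RF-derived and known XYZ."""
    A = np.column_stack([np.ones(len(rf_xyz)), rf_xyz])   # rows [1, X_RF, Y_RF, Z_RF]
    coef, *_ = np.linalg.lstsq(A, true_xyz, rcond=None)   # 4x3: columns (a, b, c)
    return coef

def apply_correction(coef, rf_xyz):
    """Apply the fitted polynomials to RF-derived coordinates."""
    A = np.column_stack([np.ones(len(rf_xyz)), rf_xyz])
    return A @ coef
```

Because Equation 11 is affine, any purely affine distortion of the RF-derived coordinates is removed exactly at the check points.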
Reconstruction of a Rigorous Sensor Model from RF
It was scientific curiosity that led us to the investigation of the
reconstruction of the rigorous sensor model from RF. Taking a
frame camera as an example, if lens distortion is corrected, and
the denominators are the same for the x and y dimensions, the
RF polynomials become linear and Equation 3 turns out to be a
DLT: i.e.,

x = \frac{L_1 X + L_2 Y + L_3 Z + L_4}{L_9 X + L_{10} Y + L_{11} Z + 1}    (12a)

y = \frac{L_5 X + L_6 Y + L_7 Z + L_8}{L_9 X + L_{10} Y + L_{11} Z + 1}.    (12b)
In this case there exists a direct relationship between the
rigorous sensor model and the RF. The DLT coefficients in Equation 12 can be computed directly from the collinearity Equation
1, and, inversely, the interior and exterior orientation parameters can be calculated from the DLT coefficients (Karara, 1989;
Wang, 1990; Mikhail et al., 2001).
If the available RF model of a frame image is not the same as
the DLT, for example, with different denominators for the image
coordinates x and y in Equation 3, the coefficients of the DLT
model can be estimated in a way similar to that for RF. First, we
employ the RF coefficients to produce a grid of CPs with known
ground coordinates and image coordinates. The CPs are then
utilized to compute the DLT coefficients in Equation 12. Finally,
the interior and exterior orientation parameters of the image
can be calculated from the DLT coefficients. Note that the image
coordinates used here should not contain any lens and other
nonlinear distortions. Otherwise, such uncompensated distortions may affect the derived orientation parameters. An alternative to the DLT method is to compute the orientation parameters from the RF-derived CPs through a photogrammetric
space resection. Both methods are implemented and the
results are presented in this paper.
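Estimating the DLT coefficients of Equation 12 from an RF-derived grid of CPs is itself a linear problem: multiplying each equation through by the common denominator yields two equations per CP in the 11 unknowns. A minimal sketch under our own naming:

```python
import numpy as np

def solve_dlt(XYZ, xy):
    """Estimate the 11 DLT coefficients of Equation 12 from >= 6
    non-coplanar CPs, by clearing the denominator of each equation."""
    rows, rhs = [], []
    for (X, Y, Z), (x, y) in zip(XYZ, xy):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z]); rhs.append(x)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z]); rhs.append(y)
    L, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float),
                            rcond=None)
    return L                       # L1 .. L11

def dlt_project(L, X, Y, Z):
    """Forward projection with the estimated DLT coefficients."""
    w = L[8]*X + L[9]*Y + L[10]*Z + 1.0
    return ((L[0]*X + L[1]*Y + L[2]*Z + L[3]) / w,
            (L[4]*X + L[5]*Y + L[6]*Z + L[7]) / w)
```

The IO and EO parameters can then be extracted from the recovered L coefficients by the standard DLT decomposition (Karara, 1989; Mikhail et al., 2001), which is not repeated here.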
In the case of a linear array sensor, strictly speaking, we
must use the above method for each image line to retrieve the
orientation parameters of the line. If the RFCs are given for an
entire scene, there seems no explicit way to regain the orientation parameters of individual image lines from the blended
RFCs.
One special consideration is a procedure in which we
generate CPs on a plane that contains an image line, using
the RFCs. The orientation parameters of that image line are then
computed by the DLT method or by a space resection. If we can
compute the orientation parameters of several image lines
along the imaging track in this way, we are able to provide initial
values for the coefficients of the six EO polynomials of the image.
The 3D CPs generated through the RF should provide image
coordinates and corresponding ground coordinates that can be
used in an extended photogrammetric bundle adjustment for
linear array images (Zhou and Li, 2000). Finally, the IO parameters and EO polynomial coefficients for the image are obtained as
a result of the adjustment computation.
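The initialization described above, in which each EO parameter varies with the scan-line number, can be sketched by fitting one low-order polynomial per parameter. This is an illustrative fragment only; the quadratic order and the parameter layout are our assumptions, not the paper's exact adjustment.

```python
import numpy as np

def fit_eo_polynomials(lines, eo_samples, order=2):
    """Fit one polynomial per EO parameter (e.g., Xs, Ys, Zs, omega, phi,
    kappa) over scan-line number, from EO values recovered at a few lines.

    lines: (n,) scan-line numbers; eo_samples: (n, 6) EO values."""
    return np.array([np.polyfit(lines, eo_samples[:, j], order)
                     for j in range(eo_samples.shape[1])])

def eo_at_line(polys, line):
    """Evaluate the fitted EO polynomials at an arbitrary scan line."""
    return np.array([np.polyval(p, line) for p in polys])
```

The fitted coefficients would then serve as initial values for the extended bundle adjustment cited in the text.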
Experimental Results and Analysis
Experimental Data Sets
We performed a comprehensive experiment to test the above-discussed methods using three data sets.
Data Set I consists of an aerial image and a corresponding
DEM. The aerial photo was taken along the Ohio Lake Erie shore
in 1997 by the National Geodetic Survey (NGS) of NOAA. The
photo was scanned at an image resolution of 25 micrometers
with an image size of 9,280 by 9,280 pixels. The ground resolution is 0.5 m. The focal length is 153.28 mm. The principal
point position and lens distortion parameters are provided in a
camera calibration report. A bundle adjustment of 12 overlapping images supplied the exterior orientation parameters. A
DEM with a grid spacing of 2.5 m was generated from the stereo
pairs.
Data Set II consists of two stereo pairs of one-meter resolution Ikonos Geo stereo images acquired on 19 March 2001,
which cover an area of 11 km along the Ohio Lake Erie shore.
The image sizes of the two stereo pairs are 8796 by 7900 pixels
and 8708 by 7480 pixels (Figures 1 and 2). The RF coefficients
for each image were supplied. The nominal ground accuracy of
this Geo product is 25 m.

Figure 1. One of the 1-m Ikonos stereo images (first pair)
superimposed with GCPs and CKPs employed for refining RF
coefficients (first method).

Figure 2. One of the 1-m Ikonos stereo images (second pair)
superimposed with GCPs and CKPs employed for refining RF
coefficients (first method).

Data Set III is a set of HRSC (High Resolution Stereo Camera)
images acquired by the German Aerospace Center (DLR). The
sensor system is an airborne stereo imaging system with fore-,
nadir-, and aft-looking linear arrays. The stereo image strips
have a size of 14,500 lines by 5,184 columns. The exterior orientation
parameters were supplied for each image line, along
with the measured image coordinates and ground coordinates
of 108 GCPs. The fore-looking panchromatic image was chosen
for our experiment.

Experiment I: Generation of RF Coefficients
The frame image in Data Set I was initially corrected for lens
distortions and principal point offset. Thus, we expected that
the linear polynomial form of the RF with the same denominator,
namely, a DLT model, would be sufficient to approximate
the camera model. The computed DLT model was used later in
Experiment III in an attempt to recover the orientation parameters
of the image. Four layers of CPs, each with 13 rows and 14
columns, were generated across the vertical range of the DEM. A
total of 728 CPs were used to produce the DLT coefficients. In
order to check how well the computed DLT coefficients approximate
the camera model, 317 points from the DEM were utilized
as check points (CKPs). The image coordinates of the CKPs were
calculated twice, by transforming from the object space to the image
space with the DLT coefficients and with the camera model. The
differences between the two sets of image coordinates were used to
estimate the RMS errors of the image coordinates x and y, which are
negligible in this case (less than 4 × 10⁻¹² pixels). This means that the
first-order RF (DLT) coefficients represent the camera model very
well. In fact, we expanded the RF with second- and third-order
terms and did not find any significant changes.

Similarly, the third-order RFCs for a part of the large HRSC
linear array image strip in Data Set III were also generated from
the available orientation parameters at a very high accuracy.
Experience also shows that the selection of an appropriate RF
polynomial order and the use of a sufficient number of CPs are
critical to the computational efficiency and the quality of the RF
coefficients. In the implementation, we noticed that solutions of
the first- and second-order RFs are more stable than those of the
third-order RF. Regularization is generally not necessary for solving
the first- and second-order RFs; nevertheless, it is usually beneficial
for solving the third-order RF.
Experiment II: Refinement of the Geopositioning Accuracy of Ikonos Geo Stereo
Images
The one-meter resolution Ikonos Geo stereo images in Data Set
II came with a set of vendor-provided RF coefficients (upward)
for each image (Figures 1 and 2). A test using nine GPS survey
points in the area found that there was a systematic error of
up to 16 m in the ground X (east-west) direction,
which is within the nominal accuracy of the Geo product.
The rigorous sensor model for Ikonos is not available to us.
Our attempt was to use GCPs to improve the geopositioning
accuracy by either refining the Ikonos RF coefficients (first
method) or refining RF-derived ground coordinates (second
method). There are ten GPS survey points in the area, which are
obviously insufficient for improving the RF coefficients of the
Ikonos images. However, in the same area there are 12 aerial
stereo photographs which were acquired by the National Geodetic Survey of NOAA for shoreline mapping. Eight GPS survey
points were used to carry out an aerial photogrammetric bundle adjustment of the 12 aerial photographs, in which a large
number of tie points were used to build the aerial triangulation
network. The ground positions of the tie points were computed
through the bundle adjustment and were applied as GCPs for
the subsequent refinement of the Ikonos RF. Each Ikonos image
contains 57 such GCPs that have RMS errors of 24 cm in X, 23 cm
in Y, and 44 cm in the Z direction.
The first method for refining the RF coefficients was
employed. Out of 57 GCPs for each image, we chose 52 GCPs for
the actual computation of the RF coefficients and the remaining
five GCPs as CKPs. The distribution of the GCPs and CKPs is illustrated in Figures 1 and 2. The vendor-provided RF coefficients
were used as initial values and then refined (recomputed)
according to Equations 3 to 6 by a least-squares adjustment with
the GCPs. After the refining process, the ground coordinates of
the CKPs were computed by space intersections using the
refined RF coefficients and were then compared with their
known coordinates. The differences between them were used
to estimate the RMS errors of the ground coordinates (Table 1).
The improved accuracy of the X coordinate, about 2 m, is comparable
to the vendor's nominal ground accuracy for the
Ikonos Precision product (1.9 m). However, the RMS error of the
Y coordinate, about 4 m, is larger because of the weak geometric
control in the Y direction: all GCPs are distributed predominantly
along the shore in the X direction (Figures 1 and
2). The same reason may have contributed to the poorer accuracy of the
Z coordinate of the second stereo pair.
TABLE 1. ASSESSMENT OF TWO METHODS FOR IMPROVING PHOTOGRAMMETRIC
INTERSECTION USING ACTUAL 1-m IKONOS STEREO IMAGES

                                RMS Errors of the Ground Coordinates (m)
Refining Method    Stereo Pair       X        Y        Z
First Method       Pair I          2.489    4.404    0.746
                   Pair II         1.863    4.124    4.318
Second Method      Pair I          1.342    1.051    1.632
                   Pair II         0.991    0.787    1.513
Figure 3. One of the 1-m Ikonos stereo images (first pair)
superimposed with GCPs and CKPs employed for refining RF
coefficients (second method).
The second method was also tested using the same data set.
First-order polynomials were employed to establish a transformation
from the ground coordinates computed using the vendor-provided
RF coefficients to the improved ground coordinates. As
shown in Figures 3 and 4, nine GCPs and 45 CKPs in
Stereo Pair I, and eight GCPs and 49 CKPs in Stereo Pair II, were
selected. The GCPs were used to calculate the polynomial
parameters of the transformations. The ground coordinates of the
CKPs, improved by the polynomials, were then compared with their
known coordinates. The differences between them led to the RMS
errors of the ground coordinates given in the last two rows of Table 1.
As discussed before, the first method attempts to improve the RF
coefficients, namely, the parameters that describe
the perspective imaging geometry. It requires a large number of
GCPs because the rigorous sensor model is not available. After
the improvement, a ground coordinate accuracy of 2 to 4
meters was achieved. On the other hand, the second method
uses the GCPs to build a transformation from the less accurate
ground coordinates computed using the vendor-provided RF to
the improved ground coordinates. It used fewer GCPs and
achieved a better fit at the CKPs (Table 1). This is because the test
site is not in a mountainous area, so a polynomial fit performs
effectively.
Experiment III: Recovery of Rigorous Sensor Models from RF
Frame Image
Figure 4. One of the 1-m Ikonos stereo images (second pair)
superimposed with GCPs and CKPs employed for refining RF
coefficients (second method).

Recovery of frame image orientation parameters can be carried
out using DLT coefficients, a special form of the RF, if the assumptions
in Experiment I are met, where the DLT coefficients of the
frame image are computed based on the normalized image and
ground coordinates. The coefficients were then transformed
back to those corresponding to the actual image and ground
coordinates. The orientation parameters of the image were
consequently calculated from the DLT coefficients and compared with the known values provided in the camera calibration report and the bundle adjustment result performed earlier
using a large image network. The absolute values of differences
of the exterior orientation (EO) parameters (⌬␻, ⌬␸, ⌬␬, ⌬Xs ,
⌬Ys , ⌬Zs) and the interior orientation (IO) parameters (⌬f, ⌬xo ,
⌬yo) are listed in Table 2. It is obvious that the recovered orientation parameters from the DLT coefficients are sufficiently
accurate.
In addition, an experiment was also carried out to investigate the possibility of recovery of the orientation parameters of
the same image by the introduced space resection method. To
prepare for the space resection, 728 CPs with various elevations
were generated through the above DLT coefficients. In order to
examine the effect of the terrain, another set of 182 CPs on a flat
plane were generated by assigning the plane the average elevation of the area covered. The experiment was performed with
several scenarios in terms of assumed known orientation parameters and terrain types as illustrated in Table 3. In the second
column is the result where all the orientation parameters were
to be estimated by the space resection through the DLT. The next
four scenarios assume that the orientation parameters are partially
known and the rest of the orientation parameters are to be
estimated by the space resection.

TABLE 2. DIFFERENCES COMPUTED FROM THE KNOWN ORIENTATION PARAMETERS AND THOSE RECOVERED FROM RF (DLT) FOR A FRAME IMAGE IN DATA SET I

|Δω| = 1.9 × 10⁻¹¹ second    |Δφ| = 7.5 × 10⁻¹² second    |Δκ| = 2.3 × 10⁻¹¹ second
|ΔXs| = 1.2 × 10⁻¹⁰ m        |ΔYs| = 9.3 × 10⁻¹⁰ m        |ΔZs| = 7.3 × 10⁻¹² m
|Δf| = 0 μm                  |Δxo| = 2.1 × 10⁻¹¹ μm       |Δyo| = 1.3 × 10⁻¹⁰ μm

PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING, January 2003

TABLE 3. QUALITY ASSESSMENT OF THE RECOVERED ORIENTATION PARAMETERS FROM THE RF (DLT) USING SPACE RESECTION FOR THE SAME FRAME IMAGE AS IN TABLE 2

Given                      None       (xo, yo)   f                IO         EO
Solving for                EO and IO  EO and f   EO and (xo, yo)  EO         IO
CPs Selected from Layers
(diff. Elevations)         Very Good  Very Good  Very Good        Very Good  Very Good
A flat plane               Failure    Sensitive  Very Good        Very Good  Very Good
In Table 3, the quality of the recovered orientation parameters for the various scenarios is rated as Very Good, Sensitive,
and Failure. A quality indicator of Very Good means that the
solution is stable and is not sensitive to the initial values. The
differences between the estimated and known orientation
parameters are small: |Δω|, |Δφ|, and |Δκ| are less than 10⁻¹⁰ seconds; |ΔXs|, |ΔYs|, and |ΔZs| are less than 10⁻¹⁰ m; and |Δf|, |Δxo|,
and |Δyo| are less than 10⁻¹⁰ μm. Most cases in Table 3 have this
category of high quality, including recovering both EO and IO
parameters from CPs from layers with different elevations.
“Failure” is the scenario of the estimation of both EO and IO
with CPs selected on a flat plane. The differences of the rotation
angles are greater than several degrees, the differences of exposure center coordinates are greater than several kilometers, and
those of the interior orientation parameters are greater than several centimeters.
“Sensitive” is the case where the principal point position
(xo, yo) is known and the EO and f are to be estimated through
CPs selected on a flat plane. The quality is generally a failure.
However, if very good initial values of the unknowns are given,
especially for the angle κ (say, |Δκ| < 2 degrees), the solution
becomes stable and the differences are small.
As with the space resection, the DLT coefficients computed
from CPs on a flat plane would be highly correlated and would
lead to an unsuccessful derivation of orientation parameters
from the DLT coefficients. The above result in Table 3 demonstrates that recovering the orientation parameters of a frame
image from the RF (DLT) coefficients is feasible if the CPs are
appropriately selected.
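This correlation effect can be illustrated numerically: building the DLT design matrix from the same CPs with and without elevation variation shows the rank loss directly. The projection below is a toy stand-in for a real sensor model, and all values are hypothetical:

```python
import numpy as np

def dlt_design_matrix(xy, XYZ):
    """Two rows per control point, following the standard DLT formulation."""
    rows = []
    for (x, y), (X, Y, Z) in zip(xy, XYZ):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z])
    return np.asarray(rows, float)

def project(XYZ):
    """Toy perspective-like projection standing in for the true sensor model."""
    den = 0.001 * XYZ[:, 2] + 1.0
    return np.c_[XYZ[:, 0] / den, XYZ[:, 1] / den]

rng = np.random.default_rng(1)
cps_3d = rng.uniform(0, 100, (50, 3))   # CPs spread over different elevations
cps_flat = cps_3d.copy()
cps_flat[:, 2] = 50.0                   # the same CPs forced onto one plane

rank_3d = np.linalg.matrix_rank(dlt_design_matrix(project(cps_3d), cps_3d))
rank_flat = np.linalg.matrix_rank(dlt_design_matrix(project(cps_flat), cps_flat))
# rank_3d is the full 11; rank_flat drops, so the 11 coefficients
# cannot be separated from flat-plane CPs
```

With constant Z, entire columns of the design matrix become multiples of each other (e.g., the Z column equals Z times the constant column), which is the algebraic form of the correlation noted above.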
Linear Array Image
The HRSC system is a strict implementation of the three-line
stereo imaging principle on an airborne platform. The exterior
orientation parameters of each image line of the HRSC images in
Data Set IV were provided by the DLR, which were the result of
their bundle adjustment. The interior orientation parameters
were also provided. One entire image strip has 14,500 lines. A
segment of 410 lines in the middle of the image strip was chosen and its RFCs were computed. We first tried to recover the
orientation parameters of one image line by using the space
resection method, instead of the DLT, because the DLT coefficients for one image line would be highly correlated. We then
tested our concept to recover the orientation parameters for the
entire image segment. The following was carried out under the
conditions that we know the sensor model (HRSC), and the CPs
were selected from layers with different elevations instead of
from a flat plane.
Initially, we selected one image line in the middle of the
image segment, along which the elevation range of 1,200 m was
sliced into seven layers, and 364 CPs were generated using the
RF. The CPs were used to recover the orientation parameters of
the image line by employing the space resection method. The
recovered orientation parameters were compared with those
known orientation parameters of the image line provided by the
DLR to calculate the differences listed in Table 4. ΔA denotes
the differences of the three rotation angles, ΔC those of the
exposure center coordinates, ΔI the differences of all interior
orientation parameters, ΔF the difference of the focal length,
and ΔP the differences of the principal point coordinates.
From Table 4, we can observe that, when none of the three
IO parameters were known (first scenario), the recovery for one
linear array image line was not successful. The 1.8° angular
errors and the several-millimeter errors in IO are apparently not
acceptable. Overall, results from the rest of the scenarios (partial or all IO parameters are known) are satisfactory, although
reasonable initial values, especially for the rotation angles, are
required. Usually, the ω and φ angles are small and their initial
values can be set to 0. The initial angular value of κ can be estimated from the CPs. In general, the requirement that the initial
angular values be better than 10° or 20° is not difficult to meet.
One way to recover the orientation parameters of all image
lines would be to perform the above method for each image
line repeatedly. However, for linear array images, we usually
use a polynomial to represent each EO parameter that changes
from image line to image line. The EO parameters include three
exposure center coordinates and three rotational angles (Zhou
and Li, 2000). Our previous experiment with this data set
showed that the third-order EO polynomials are sufficiently
accurate to represent the along-track EO parameter variations
(Li et al., 1998).
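As a sketch of this representation (with a synthetic trajectory, not the HRSC data), each EO parameter can be fitted by a third-order polynomial in the line number:

```python
import numpy as np

lines = np.arange(410)                    # image lines in the chosen segment
# Synthetic omega angle varying smoothly along track (illustrative values only)
omega = 0.5 + 1e-4 * lines - 2e-7 * lines**2 + 3e-10 * lines**3

# Normalizing the line number to [-1, 1] keeps the cubic fit well conditioned
t = 2.0 * lines / lines.max() - 1.0
coeffs = np.polynomial.polynomial.polyfit(t, omega, 3)   # 4 coefficients
omega_fit = np.polynomial.polynomial.polyval(t, coeffs)
rmse = np.sqrt(np.mean((omega - omega_fit) ** 2))        # near zero for a cubic signal
```

In practice one such polynomial is fitted per EO parameter (Xs, Ys, Zs, ω, φ, κ), and the polynomial coefficients, rather than the per-line values, become the unknowns of the adjustment.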
Next, we aimed at recovery of the orientation parameters of
the entire linear array image segment from the RF coefficients.
The IO parameters are generally supposed not to change and the
EO parameters are modeled by the EO polynomials for which
we need to recover the polynomial coefficients. We defined a
grid on the image. Through the RF coefficients and the seven
elevation layers, we transformed the 2D grid points to 14,924 3D
grid points as CPs in the object space.
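The CP generation step can be sketched as follows. The first-order upward RF and its coefficients here are purely illustrative (actual RFCs are typically third-order, with many more terms):

```python
import numpy as np

def upward_rf(row, col, z, coef):
    """Evaluate a toy first-order upward RF: ground X and Y as ratios of
    polynomials in normalized image coordinates and elevation."""
    basis = np.array([1.0, row, col, z])
    X = (coef['xn'] @ basis) / (coef['xd'] @ basis)
    Y = (coef['yn'] @ basis) / (coef['yd'] @ basis)
    return X, Y

# Illustrative coefficients: near-identity mapping with a small elevation shear
coef = {'xn': np.array([0.0, 1.0, 0.0, 0.10]), 'xd': np.array([1.0, 0.0, 0.0, 0.0]),
        'yn': np.array([0.0, 0.0, 1.0, 0.05]), 'yd': np.array([1.0, 0.0, 0.0, 0.0])}

# A 2D image grid crossed with several elevation layers yields 3D CPs
rows = np.linspace(0.0, 1.0, 11)
cols = np.linspace(0.0, 1.0, 11)
layers = np.linspace(0.0, 1.0, 7)        # seven elevation layers, as in the experiment
cps = [(*upward_rf(r, c, z, coef), z) for r in rows for c in cols for z in layers]
```

Each image grid point produces one 3D CP per elevation layer, which is how a modest 2D grid expands into the thousands of object-space CPs used above.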
Subsequently, the EO parameters for the two image lines at
the beginning and end of the image segment were computed
from the RF coefficients by using the space resection method
discussed above.

TABLE 4. RESULT OF RECOVERING ORIENTATION PARAMETERS FOR ONE IMAGE LINE OF A LINEAR ARRAY IMAGE SEGMENT FROM RF COEFFICIENTS WITH VARIOUS
SCENARIOS: ΔA DENOTES THE DIFFERENCES OF THE THREE ROTATION ANGLES, ΔC THOSE OF EXPOSURE CENTER COORDINATES, ΔI DIFFERENCES OF ALL
INTERIOR ORIENTATION PARAMETERS, ΔF DIFFERENCE OF FOCAL LENGTH, AND ΔP DIFFERENCES OF THE PRINCIPAL POINT COORDINATES

Given     Solving for      Differences of IO and EO Parameters               Quality of Initial Values
None      EO and IO        ΔA < 1.8°, ΔC < 5 × 10⁻³ m, ΔF = 1.71 mm,         EO angles better than 10°
                           Δxo = 5.26 mm, Δyo = 0.001 mm
(xo, yo)  EO and f         ΔA < 0.2″, ΔC < 5 × 10⁻³ m, ΔF < 5 × 10⁻⁴ mm      EO angles better than 10°
f         EO and (xo, yo)  ΔA < 2.3″, ΔC < 5 × 10⁻³ m, ΔP < 1.3 × 10⁻³ mm
IO        EO               ΔA < 0.03″, ΔC < 3 × 10⁻⁴ m
EO        IO               ΔF < 10⁻⁵ mm, ΔP < 10⁻⁵ mm
TABLE 5. RESULT OF RECOVERING ORIENTATION PARAMETERS FOR A LINEAR ARRAY IMAGE SEGMENT FROM RF COEFFICIENTS WITH VARIOUS SCENARIOS

Given     Solving for      Differences of IO and EO Parameters                  Quality of Initial Values
None      EO and IO        ΔF = 0.02 mm, Δxo = 0.07 mm, Δyo = 7.6 × 10⁻⁵ mm;    EO angles better than 10°
                           RMSEX = 2.8 × 10⁻³ m, RMSEY = 2.9 × 10⁻³ m,
                           RMSEZ = 3.0 × 10⁻³ m;
                           RMSEφ = 46.51″, RMSEω = 89.20″, RMSEκ = 70.00″
(xo, yo)  EO and f         ΔF = 3.9 × 10⁻⁵ mm;
                           RMSEX = 2.8 × 10⁻³ m, RMSEY = 2.9 × 10⁻³ m,
                           RMSEZ = 3.0 × 10⁻³ m;
                           RMSEφ = 12.89″, RMSEω = 23.82″, RMSEκ = 41.37″
f         EO and (xo, yo)  ΔP < 1.1 × 10⁻⁴ mm;
                           RMSEX = 2.8 × 10⁻³ m, RMSEY = 2.9 × 10⁻³ m,
                           RMSEZ = 3.0 × 10⁻³ m;
                           RMSEφ = 12.89″, RMSEω = 23.82″, RMSEκ = 41.37″
IO        EO               RMSEX = 2.8 × 10⁻³ m, RMSEY = 2.9 × 10⁻³ m,
                           RMSEZ = 3.0 × 10⁻³ m;
                           RMSEφ = 12.89″, RMSEω = 23.82″, RMSEκ = 41.37″
EO        IO               ΔF < 2.5 × 10⁻⁶ mm, ΔP < 6.1 × 10⁻⁷ mm

Then the approximate values of the coefficients of the constant and first-order terms of the EO polynomials were estimated from these two image lines. The coefficients
of the second- and third-order terms were initially set to zero.
The image coordinates and the ground coordinates of the
CPs and the above initial values of the EO polynomial coefficients were then employed to build observation equations in a
linear array stereo bundle adjustment system in order to estimate the IO parameters and the coefficients of the EO polynomials. The results of various scenarios are illustrated in Table 5.
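For scale, the bookkeeping of this adjustment is easy to make explicit (the counts come from the text above; pairing two image-coordinate equations per CP is the usual collinearity convention):

```python
# Unknowns in the segment adjustment: 3 IO parameters (f, xo, yo) plus a
# third-order polynomial (4 coefficients) for each of the 6 EO parameters.
n_unknowns = 3 + 6 * 4          # 27 unknowns in total
n_observations = 2 * 14924      # two image-coordinate equations per 3D grid CP
redundancy = n_observations - n_unknowns
```

The very large redundancy is one reason the recovered parameters are so stable once the rotation-angle initial values are reasonable.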
The IO parameters do not change from image line to image line.
Their differences, including ⌬I, ⌬F, and ⌬P, were computed
from the known and the recovered IO parameters. The EO
parameters vary along with the image lines. The EO parameters
calculated from the adjusted EO polynomials were compared
with the known EO parameters of the image lines in order to
compute the RMS errors of the exposure center coordinates
(RMSEX, RMSEY, and RMSEZ) and the RMS errors of the rotation
angles (RMSEω, RMSEφ, and RMSEκ). It is shown that, if the initial
values of the rotation angles are given within 10°, both the IO
and the EO parameters can be recovered at the same time at a
sufficiently accurate level. If partial IO or EO parameters are
given, the same quality of orientation parameters can be achieved
without the required initial-value condition. Although the
result from the first scenario is not as accurate as those from the
other scenarios, it is acceptable and much better than that from
the first scenario of one image line.
The above results were achieved under the assumption
that the CPs that are used to compute the RF coefficients are
located on layers with different elevations. This requirement
can be easily met for satellite imaging vendors who have rigorous sensor models. In the event that the RF coefficients are estimated from CPs distributed on a flat plane, the recovery of the
orientation parameters may not be possible. To demonstrate
this situation, we selected another set of 2,132 CPs located on a
flat plane. The same method was employed to recover the orientation parameters of the above linear array image segment. It
failed to recover both full and partial orientation parameters
even when very high quality initial values of the unknowns
were given.
Overall, the sensor model of an airborne linear array
imaging system such as the HRSC can be recovered, provided
that (1) the images are corrected for lens distortions, (2) the general sensor configuration (e.g., number of sensors and scanning
method) is known, and (3) the given RF coefficients are computed from CPs with a sufficient elevation variation. Because the
recovered IO parameters do not change from place to place, they
can be treated as known parameters once estimated, and then
can be employed in other areas to recover the EO parameters.
The Ikonos stereo images in Data Set II are the Geo product, and
the vendor-provided RF coefficients are not sufficiently accurate to recover the orientation parameters. Furthermore, it is
impossible to compare the recovered orientation parameters
with the known Ikonos parameters because up to this time the
camera model of the Ikonos system has been unavailable to the
public. The sensor model recovery experiment conducted
above using the HRSC images demonstrated the principle and
the computational results of our sensor model recovery efforts.
Conclusions
We believe that the rigorous sensor model has explicit physical sensor parameters that can be used efficiently for calibration, debugging of algorithms, improving computational efficiency by separating correlated parameters, etc. The RF coefficients are scene specific because the IO and EO parameters are
merged together and the coefficients are produced using the CPs. Based on
the above theoretical derivation, computational results, and
analysis, we draw the following conclusions:
● The experiments with the frame image using the derived equations of the upward RF space intersection verified that the RF
coefficients can approximate the rigorous sensor models very
accurately and can be used for photogrammetric processing.
● We demonstrated two ways to improve the geopositioning accuracy of Ikonos Geo stereo imagery with ground control points,
by either refining the vendor-provided RFCs or refining the RF-derived ground coordinates. The Ikonos Geo product of 1-m
resolution stereo images can be improved to achieve an
accuracy of 1 to 2 m.
● Our preliminary study on sensor model recovery showed that
the orientation parameters of a frame image can be recovered
by a special form of the coefficients of the RF, namely DLT, if
appropriate CPs are selected.
● Recovery of the orientation parameters from the RF for an airborne linear array image can be realized by (1) determining the
EO parameters of the two end image lines by space resection,
(2) computing initial values of the coefficients of the EO polynomials of the linear array image strip using the EO parameters of
the two end image lines, and (3) applying the 3D CPs to calculate
the IO parameters and EO polynomial coefficients through a
bundle adjustment. The experimental results using the HRSC
images are promising. Further study employing appropriate satellite images should be conducted in the future.
Acknowledgments
This research was supported by a grant from the Sea Grant-NOAA Partnership Program and an NSF Digital Government
Grant. Support from and collaboration with the Coastal Service
Center, National Geodetic Survey, and Office of Coastal Survey
of NOAA and the Lake Erie Protection Fund are appreciated. The
authors wish to thank Dr. Mostafa Madani, Dr. C. Vincent Tao,
and Dr. Philip Cheng for providing reprints of their related
research papers. Discussions with Dr. Xinghe Yang and Dr.
Younian Wang were valuable. We thank the DLR for providing
the HRSC data.
References
Alamus, R., M. Langner, and W. Kresse, 2000. Accuracy potential of
point measurements in MOMS-Images using a rigorous model
and a rational function, Proceedings, XIXth ISPRS Congress, 16–22
July, Amsterdam, The Netherlands (International Archives of Photogrammetry and Remote Sensing, 33(Part B4):515–517).
Baltsavias, E., M. Pateraki, and L. Zhang, 2001. Radiometric and geometric evaluation of IKONOS GEO images and their use for 3-D
building modeling, Proceedings of ISPRS Joint Workshop “High
Resolution Mapping from Space” 2001, 19–21 September, Hanover, Germany, on CD-ROM.
Cheng, P., and T. Toutin, 2000. Orthorectification of IKONOS data
using rational function, Proceedings of ASPRS Annual Convention, 22–26 May, Washington, D.C. (American Society for Photogrammetry and Remote Sensing, Bethesda, Maryland), unpaginated abstract on CD-ROM.
Davis, C.H., and X. Wang, 2001. Planimetric accuracy of IKONOS
1-m panchromatic image products, Proceedings of ASPRS Annual
Convention 2001, 25–27 April, St. Louis, Missouri (American
Society for Photogrammetry and Remote Sensing, Bethesda, Maryland), unpaginated CD-ROM.
Di, K., R. Ma, and R. Li, 2001. Deriving 3-D shorelines from high
resolution IKONOS satellite images with rational functions, Proceedings of ASPRS Annual Convention 2001, 25–27 April, St.
Louis, Missouri (American Society for Photogrammetry and
Remote Sensing, Bethesda, Maryland), unpaginated CD-ROM.
Dowman, I., and J.T. Dolloff, 2000. An evaluation of rational functions
for photogrammetric restitution, Proceedings, XIXth ISPRS Congress, 16–22 July, Amsterdam, The Netherlands (International
Archives of Photogrammetry and Remote Sensing, 33(Part B3):
254–266).
Fraser, C.S., H.B. Hanley, and T. Yamakawa, 2001. Sub-meter geopositioning with IKONOS GEO imagery, Proceedings of ISPRS Joint
Workshop “High Resolution Mapping from Space” 2001, 19–21
September, Hanover, Germany, on CD-ROM.
Grodecki, J., 2001. IKONOS stereo feature extraction—RPC approach,
Proceedings of ASPRS Annual Convention 2001, 25–27 April,
2001, St. Louis, Missouri (American Society for Photogrammetry
and Remote Sensing, Bethesda, Maryland), unpaginated CD-ROM.
Hu, Y., and C.V. Tao, 2001. Updating solutions of the rational function
model using additional control points for enhanced photogrammetric processing, Proceedings of ISPRS Joint Workshop “High
Resolution Mapping from Space” 2001, 19–21 September, Hanover, Germany, unpaginated CD-ROM.
Karara, H.M., 1989. Non-Topographic Photogrammetry, American
Society for Photogrammetry and Remote Sensing, Falls Church,
Virginia, 445 p.
Li, R., 1998. Potential of high-resolution satellite imagery for National
Mapping Products, Photogrammetric Engineering & Remote Sensing, 64(2):1165–1169.
Li, R., G. Zhou, A. Gonzalez, J.-K. Liu, F. Ma, and Y. Felus, 1998.
Coastline Mapping and Change Detection Using One-Meter Resolution Satellite Imagery, Project Report submitted to Sea Grant/
NOAA, The Ohio State University, Columbus, Ohio, 88 p.
Madani, M., 1999. Real-time sensor-independent positioning by rational functions, Proceedings of ISPRS Workshop on Direct Versus
Indirect Methods of Sensor Orientation, 25–26 November, Barcelona, Spain, pp. 64–75.
Mikhail, E.M., J.S. Bethel, and J.D. McGlone, 2001. Introduction to
Modern Photogrammetry, John Wiley & Sons, Inc., New York,
N.Y., 479 p.
Moffitt, F.H., and E.M. Mikhail, 1980. Photogrammetry, Harper & Row
Publishers, Inc., New York, N.Y., 648 p.
OGC (Open GIS Consortium), 1999. The OpenGIS Abstract Specification, Topic 7: The Earth Imagery Case, OpenGIS Web Site, URL:
http://www.opengis.org/public/abstract/99-107.pdf (last accessed
on 30 June 2000).
Tao, C.V., and Y. Hu, 2000. Investigation of the rational function model,
Proceedings of ASPRS Annual Convention, 22–26 May, Washington, D.C. (American Society for Photogrammetry and Remote Sensing, Bethesda, Maryland), unpaginated CD-ROM.
Tao, C.V., Y. Hu, J.B. Mercer, S. Schnick, and Y. Zhang, 2000. Image
rectification using a generic sensor model—Rational function
model, Proceedings, XIXth ISPRS Congress, 16–22 July, Amsterdam, The Netherlands (International Archives of Photogrammetry
and Remote Sensing, 33(Part B3):874–881).
Tao, C.V., and Y. Hu, 2001a. 3-D reconstruction algorithms based on
the rational function model, Proceedings of ISPRS Joint Workshop
“High Resolution Mapping from Space” 2001, 19–21 September,
Hanover, Germany, unpaginated CD-ROM.
———, 2001b. A comprehensive study of the rational function model
for photogrammetric processing, Photogrammetric Engineering &
Remote Sensing, 67(12):1347–1357.
Toutin, T., and P. Cheng, 2000. Demystification of IKONOS, Earth
Observation Magazine, 9(7):17–21.
Wang, Z., 1990. Principles of Photogrammetry (with Remote Sensing),
Press of Wuhan Technical University of Surveying and Mapping
and Publishing House of Surveying and Mapping, Wuhan, China,
575 p.
Whiteside, A., 1997. Recommended Standard Image Geometry Models,
OpenGIS Web Site, URL: http://www.opengis.org/ipt/9702tf/
UniversalImage/TaskForce.ppt (last accessed 30 June 2000).
Yang, X., 2000. Accuracy of rational function approximation in photogrammetry, Proceedings of ASPRS Annual Convention, 22–26 May,
Washington, D.C. (American Society for Photogrammetry and
Remote Sensing, Bethesda, Maryland), unpaginated CD-ROM.
Zhou, G., and R. Li, 2000. Accuracy evaluation of ground points from
IKONOS high-resolution satellite imagery, Photogrammetric
Engineering & Remote Sensing, 66(9):1103–1112.
(Received 29 May 2001; revised and accepted 05 May 2002)