Correcting Chromatic Aberrations Using Image Warping
Terrance E. Boult
Center for Research in Intelligent Systems, Department of Computer Science,
Columbia University, NYC, NY 10027.
and
George Wolberg
Department of Computer Science, City College, CUNY, NYC, NY.
Abstract
The problem of chromatic aberration arises because each wavelength of light is refracted differently by the elements of a lens. Unfortunately, this means that the image will be blurred and distorted. In color imaging these distortions cause measurable differences between the images. Recent research has proposed an approach for dealing with these aberrations by actively controlling the optics of the imaging system. This paper addresses the same problem, but instead of adapting the optics, we adapt the geometry of the (already obtained) images; we do chromatic aberration correction by image warping. We briefly discuss the image restoration/reconstruction techniques used, since they are non-standard. This is followed by a discussion of the techniques used to define the chromatic aberration correcting warp. The technique is demonstrated and analyzed on two test cases and is directly compared to the active optics approach.
1 Introduction
The first stage of an imaging system is the lens, which refracts the incoming light to focus it on the image plane. It has long been known that the refraction of light depends upon the wavelength: a ray refracted at the lens surface becomes a small spectral fan. According to geometric optics for a simple lens, if a point object is placed in front of a lens, there would be a plane at some focal distance where the image of that point would be in focus, see Fig. 1. Unfortunately, because refraction is wavelength dependent, this focal distance is also wavelength dependent. The difference, due to wavelength, between the ideally focused image and the actual image is called chromatic aberration. Chromatic aberration is generally broken up into two categories: axial aberrations and lateral aberrations [Slama et al., 80]. In axial aberrations, the focal plane for one wavelength, say red, will be displaced along the optic axis from the focal plane for another wavelength, say blue. In lateral aberrations, the image of a feature point of one color will be displaced laterally within the image plane with respect to another color point. There are two parts to this lateral displacement. In general the largest component is a difference in magnification, which causes a generally radial translation of image features. Secondly, there may be differences in the optic axis for the different wavelengths.
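To make the wavelength dependence concrete, the sketch below computes the focal length of a thin lens at three wavelengths using the lensmaker's equation and a two-term Cauchy dispersion model. This is our illustration, not material from the paper; the Cauchy coefficients approximate a typical crown glass, and the surface radii are arbitrary.

    # Thin-lens focal length vs wavelength (illustrative sketch, not from the paper).
    def n(lam_um, A=1.5046, B=0.00420):
        # two-term Cauchy dispersion model: n = A + B / lambda^2 (lambda in micrometers)
        return A + B / lam_um ** 2

    def focal_mm(lam_um, R1=50.0, R2=-50.0):
        # lensmaker's equation for a thin lens with surface radii R1, R2 in mm
        return 1.0 / ((n(lam_um) - 1.0) * (1.0 / R1 - 1.0 / R2))

    for name, lam in [("blue", 0.486), ("yellow", 0.588), ("red", 0.656)]:
        print(f"{name:6s} f = {focal_mm(lam):.2f} mm")

With these illustrative numbers, blue focuses about 0.7 mm shorter than red, which is exactly the axial aberration sketched in Fig. 1.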
The fact that some parts of a color image are more blurred than others, or that there are wavelength dependent distortions, may not seem too important at first. If our algorithms were truly robust this might be true, but increased accuracy is always desirable, especially if it does not require expensive new equipment.
For example, consider a technique which requires clustering in color space. The effects of chromatic aberration can be very important, especially if the clustering makes assumptions on the shape/distribution of the cluster. Examine the color space depicted in figure 2 and imagine trying to find a "T", which is a process required in the color segmentation/highlight detection algorithm in [Klinker, 88]. Even if a color vision technique does not use the actual values of the RGB triples, but rather just uses
[Figure 1 diagram: a point object on the optic axis imaged through a thin lens, with separate focus rays for blue, green, and red. In-figure labels: axial chromatic aberration causes blurring, and the size of the "blur circle" is determined by the distance between the focus rays where they intersect the image plane; lateral chromatic aberration causes shifting and magnification; the R, G, and B arrows are shown in their planes of best focus, and the imaged version would be about the same size, just blurred.]
Figure 1: Figure showing the geometric optics interpretation of chromatic aberration caused by a thin lens. Longer wavelengths focus long, and are magnified more than the shorter wavelengths.
edges detected separately in each color band, the misregistration of one or two pixels may be significant.
Once one determines that chromatic aberration correction is necessary for a vision application, there are at least three things to do. First, you can go out and buy an expensive lens and hope it is properly corrected.* We briefly discuss, in the next subsection, some of the techniques of lens designers.
A more recent development is the use of active lens control to achieve reduced chromatic effects. This technique, developed by R. Willson and S. Shafer at CMU, [Willson and Shafer, 91a] and [Willson and Shafer, 91b], takes three separate images with slightly different focus and zoom settings designed to compensate for the optics. This is discussed in section 1.2.
The final choice is to do image warping for chromatic correction, as described in this paper. We will show, in section 2.2, how to determine the warping function, and briefly recall how to do the warping and the image restoration/reconstruction needed for it. Then we will demonstrate and analyze the algorithm in section 3.2.
We are not the first researchers to suggest using image warping for image registration or correction, e.g. NASA has used image warping in various applications, see [Green, 89], and much of the early work on image reconstruction centered around "digital correction", e.g. see [Rifman and McKinnon, 74]. We are unaware, however, of any quantitative experimentation studying the effectiveness of warping to correct for chromatic aberration.

* In [Willson and Shafer, 91b] they report that they have tested a number of lenses, including some that are supposed to be chromatic correcting lenses, only to find significant errors in every lens.
1.1 Chromatic aberration correction by better lens design
We now briefly discuss the "traditional" techniques for dealing with chromatic aberration. This review of lens design is based on the material in [Slama et al., 80] and [Kingslake, 78]. The subject of designing good lenses is quite involved and increasingly uses computer simulations. While one might think the space of lens designs has been completely explored, modern techniques continue to produce better lenses, see [Laikin, 91]. If the price is right, a lens can be designed to meet particular imaging criteria. However, most vision researchers use off-the-shelf lenses with either no chromatic correction or only simple correction. In addition, the correction of aberrations generally becomes more difficult as one reduces the focal length, increases the aperture, increases the field of view, or allows zooming. Note that wide field of view, large aperture zoom lenses, with short minimal focal length, are exactly what many vision researchers use.
Axial chromatic aberration is usually handled by using multiple elements, some positive and others negative, with different optical indices. These elements are chosen such that if the first causes the longer wavelengths (red) to focus too far, the next causes the shorter wavelengths (blue) to
Figure 2: Upper left shows a test image (the blue channel). The figures on the right show 2D histograms of color space: top is blue vs green, bottom is red vs green. If there were no chromatic aberration both of these plots would be a line. The original images were taken when the lens was focused for blue light. Thus, for this uncorrected lens system, the green channel shows some errors, and the red channel much larger errors. The graph in the lower left is the plot of red vs blue for a square window in the lower left of the image. These plots are described in more detail in section 3.
focus long. A standard item, called an achromatic doublet, uses two lenses made with different glasses such that it can bring two wavelengths into alignment on the axis. When there are more elements in the system, more wavelengths can be brought into axial alignment. For the simple "corrected" lenses, two wavelengths are brought into correspondence, resulting in a quadratic-like error between these wavelengths. In such lenses, deep violets and deep reds generally focus short and the middle wavelengths, like yellow, focus long, see [Slama et al., 80], [Kingslake, 78]. Note that this correction is generally measured on the optic axis, and in simple systems degrades with increasing distance from the axis. Many "three lens" systems correct on the axis and in a zonal band some distance from the optic axis, producing a spatially varying correction. There is often a tradeoff between correction of chromatic aberration and correction of spherical aberrations and comas. Still, high quality photogrammetric lenses can be obtained with axial chromatic errors of less than .01%.

For simple lenses the correction for lateral aberration is, in theory, easier. To correct for the first order effects, all that is necessary is for the prismatic effects of the first lens to be canceled in the second. For simple symmetric lens designs this is straightforward, see [Slama et al., 80].
Unfortunately, for asymmetric lenses, e.g. zoom and telephoto lenses, the correction is considerably more difficult (some of these have 15+ lenses with complex internal moving parts). Additionally, if the lens has higher order lens effects, i.e. coma and radial and tangential geometric distortions, these are also wavelength dependent, complicating the correction process.

While the "older" lenses were often designed by hand, and as a result corrected only for a few wavelengths at a few points, modern computational techniques have given lens designers the ability to numerically design lenses which have minimal aberration. Actually manufacturing and maintaining such compensation is, however, another issue. With increasing part number, the chance of misalignment of an internal lens (or shifting of the lens after manufacturing) greatly increases. Furthermore, it is very common in modern lenses to use coatings on the optics to reduce reflections. Such coatings are also wavelength dependent, and good systems use a multi-coating technique to reduce the wavelength dependence and decrease chromatic aberrations. Such coatings are, however, easily damaged or removed, severely impacting the performance of the lens.

1.2 Chromatic aberration correction by active control

Because of the emerging use of color imaging in computer vision, and because of the expense of well corrected optics (which are still not perfect), researchers at CMU have recently been investigating the use of active lens elements to deal with chromatic aberration, see [Willson and Shafer, 91a], [Willson and Shafer, 91b]. We describe their approach in some detail since our experiments will be compared with their approach and on their data.

The CMU active optics approach to chromatic aberration correction has three main steps:

1. determination of best focus for each color,
2. determination of a magnification factor for red and green,
3. and determination of camera shift to align images.

The first stage corrects for axial chromatic aberration by varying the focal plane for each of the colors. Note that for images with only three wavelengths, this can exactly correct for axial errors. For more complex images this will have zero error for three wavelengths and, hopefully, small error for the remaining wavelengths. This approach can be applied even if the lens was chromatically corrected. The technique is automated and determines best focus using an algorithm by E. Krotkov, see [Krotkov, 87]. This involves computing an image sharpness measure at a number of focal positions and searching for the maximum of this sharpness measure.

The second stage begins to correct for lateral chromatic aberration. They determine a magnification factor for each band and use this to actively control the zoom lens. Unlike focus determination, this stage requires more than a simple image based operator; it requires some type of geometric calibration image. They have used subpixel detection of edges (vertical and horizontal) to determine this zoom. They then compute the magnification needed and actively update the lens for each of the green and red images.†

The final stage is to deal with differences in the image of the optic axis. Because the lens elements are not all perfectly aligned, it will often occur that the image of the red optic axis is displaced with respect to the blue optic axis. When this difference is coupled with refocus and zooming, the effect can become more pronounced. Thus, using the above mentioned edge data, they also compute a shift for red and green to bring them into correspondence with blue. This is done by physically shifting the camera in the plane of the imaging sensor. Obviously, because of the magnification effects of the lens, very small shifts, on the order of .005 in, will generally be used.

We feel it is worth noting that their paper also addresses the use of active optics in shape-from-focus computations. In this case, the active optics for chromatic correction is a minimal incremental cost in that of shape from focus. Because they also correct for axial aberrations (by refocusing), they can, theoretically, correct for blurs that would be problematic in shape-from-focus when used on a colored world.

† Blue is assumed as the correct image. Their errors might actually be smaller than reported in their paper had they chosen green to be the standard and actively corrected both red and blue.
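To make the zoom and shift stages concrete, the following sketch estimates a single magnification s and shift t for one band from matched subpixel edge positions by least squares. This is our illustration of the kind of computation involved, not the CMU code, and the edge data here is synthetic.

    # Estimate per-band magnification and shift from matched edge positions
    # (illustrative sketch with synthetic data; not the CMU implementation).
    import numpy as np

    def fit_mag_shift(x_blue, x_red):
        # least-squares fit of x_red ~= s * x_blue + t
        A = np.column_stack([x_blue, np.ones_like(x_blue)])
        (s, t), *_ = np.linalg.lstsq(A, x_red, rcond=None)
        return s, t

    x_blue = np.array([-200.0, -100.0, 0.0, 100.0, 200.0])  # columns relative to image center
    x_red = 1.004 * x_blue + 0.7                            # 0.4% magnification, 0.7 pixel shift
    s, t = fit_mag_shift(x_blue, x_red)
    print(f"magnification ~ {s:.4f}, shift ~ {t:.2f} pixels")

Active optics uses s to drive the zoom and t to drive the camera translation; the warping approach of section 2 instead folds such measurements, made locally, into a resampling of the image itself.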
2 Correction Using Image Warping

When we did our initial work on imaging-consistent reconstruction algorithms, [Boult and Wolberg, 91], we realized that one use of such techniques would be in image warping to correct for lens aberrations. However, at the time we felt that the cost of image warping for lens correction was probably unwarranted, since we assumed (from reading [Slama et al., 80]) that the available lenses probably did not have significant chromatic aberrations, and that radial type lens distortions could be handled by directly mapping the location of feature points as opposed to warping the intensity image before feature detection. When we heard a talk by Reg Willson about the CMU active lens approach to chromatic aberration, we realized we were wrong; standard CCTV lenses have noticeable aberrations that require correction. Furthermore, an increasing number of "physics-based" vision algorithms use the actual radiometric quantities measured, and hence they need more than just a calibration of the geometric distortions; they need to have the intensities registered. Unlike human interpretation, these physics-based algorithms are probably not particularly robust with respect to violations of their imaging assumptions.

There are two main parts of the image warping approach to chromatic aberration correction: how to warp images in general, and determining what warp to apply. We will briefly discuss each of these.

2.1 Image warping

Image warping has been most commonly used in graphics, where the underlying question is "does it look good", a qualitative assessment. To use image warping in vision we need to address the question "are the pixel values correct", a more quantitative question.

For complex warps we described a technique to increase the accuracy of the warp while maintaining low cost, see [Wolberg and Boult, 89]. This separable image warping technique warped the image twice and then combined the results to increase the accuracy. Fortunately, the warps needed to correct for most lens distortions are not so severe and do not require such a complex algorithm. Instead, ignoring boundary effects, we can simply warp the image in one direction (say x) and then the other direction (y). This is accomplished row by row and then column by column.‡ This allows for both efficient pipelining (within a scan) and parallelism (each row is warped independently).

This leaves us with the question of how to warp a regularly sampled 1D signal into another 1D array of pixels, with a nonlinear transformation between their geometry. To do this we need some approximation to the signal underlying the input 1D signal, and some way of warping this signal. We approximate the signal with what we call imaging-consistent reconstruction/restoration filters. These linear filters use a model of the input PSF (blur) within a pixel to obtain a functional restoration. This functional form is then warped, and reblurred according to an output PSF, using an approach we call the integrating resampler. An important property of these filters is that they do not necessarily pass through the data, but rather, when blurred according to the assumed input PSF, they return the original data. While there are many variations on this idea, this paper mainly considers one, called a quadratic restoration filter with a rect type PSF, or quadratic-box restoration for short. We define the function here, but see [Boult and Wolberg, 91] for more details and discussion. This filter is local, so consider pixel i, which spans the interval from k_i to k_{i+1}. We assume some interpolation technique, e.g. linear interpolation or cubic convolution ([Rifman and McKinnon, 74]), is used to determine values e_j at k_j. The value of the quadratic-box restoration function is then given by:

    e_i + (6 v_i - 2 e_{i+1} - 4 e_i) x + 3 (e_{i+1} + e_i - 2 v_i) x^2

where x = (t - k_i) / (k_{i+1} - k_i). The integral of this quadratic over the interval k_i to k_{i+1} is exactly v_i, the measured input pixel value. In this paper we consider the use of linear interpolation as a means of determining the endpoints because it is cheap. We have also determined the endpoints using cubic convolution with numerous values of its magic parameter A, with no significant visible difference in this application.

‡ Actually, for increased accuracy we might use full 2D filtering, but this would really push up the cost of the algorithm.
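The defining property of the quadratic-box restoration is easy to verify numerically: on each pixel the quadratic hits the endpoint estimates e_i and e_{i+1} at x = 0 and x = 1, and its integral over the pixel is exactly the measured value v_i. The sketch below is our own check, with arbitrary test values; it uses the antiderivative, so the integral is exact.

    # Check the quadratic-box restoration on one pixel (arbitrary test values).
    def quad_box(vi, ei, ei1, x):
        # restoration value at local coordinate x in [0, 1]
        return ei + (6*vi - 2*ei1 - 4*ei) * x + 3 * (ei1 + ei - 2*vi) * x**2

    def quad_box_integral(vi, ei, ei1, a, b):
        # exact integral over [a, b] via the antiderivative of the quadratic above
        Q = lambda x: ei*x + (3*vi - ei1 - 2*ei) * x**2 + (ei1 + ei - 2*vi) * x**3
        return Q(b) - Q(a)

    vi, ei, ei1 = 30.0, 10.0, 50.0
    print(quad_box(vi, ei, ei1, 0.0), quad_box(vi, ei, ei1, 1.0))  # -> 10.0 50.0
    print(quad_box_integral(vi, ei, ei1, 0.0, 1.0))                # -> 30.0 (= v_i)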
Given the "restoration" of the input function, we now have a functional form to warp, which we do using the following idea, which we call the integrating resampler. Assume the n input pixels, at locations k_i = i/n, i = 1..n, are being mapped into m output pixels o_j, j = 1..m, according to a warping function g(t). Compute q_j as the linear approximation to the location of g^{-1}(o_j), i.e.
    for (j = i = 0; j < m; j++) {
        while (g(k_{i+1}) < j) i++;
        q_j = i + (j - g(k_i)) / (g(k_{i+1}) - g(k_i));
    }
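For readers who prefer it, here is a direct Python transcription of the pseudocode above. It assumes g is monotonically increasing, and it advances i so that output location j is bracketed by g(k_i) and g(k_{i+1}) before interpolating; that bracketing is our reading of the index convention.

    # Linear approximations q_j to g^{-1}(j) (transcription of the pseudocode above;
    # assumes g is monotonically increasing over the input domain).
    import numpy as np

    def inverse_breakpoints(g, k, m):
        q = np.empty(m)
        i = 0
        for j in range(m):
            while g(k[i + 1]) < j:   # advance until j lies in [g(k_i), g(k_{i+1})]
                i += 1
            q[j] = i + (j - g(k[i])) / (g(k[i + 1]) - g(k[i]))
        return q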
Now to process the data we run along the input determining the next event: either an input pixel will be consumed, or an output pixel will be generated. If the next event is the completion of an input pixel, we compute the integral from the location of the last event to the end of this input pixel, adding the value to the accumulator. If, however, the next event is the generation of output pixel j, we use q_j as an approximation of g^{-1}(o_j), which gives the location of this event in the input space. We then compute the integral from the location of the last event to the location of this event, and add this value to the accumulator.§ For more details on this process see [Wolberg and Boult, 91]. The underlying idea of this integrating resampler can be found in [Fant, 86], which proposed a similar algorithm for the special case of warping a linear reconstruction of the input.

§ In the general case we would compute the integral with the output PSF, in this case a rect filter.
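The sketch below puts the pieces together for the simple case used in this paper: a rect output PSF, unit-width input pixels, and the quadratic-box restoration with endpoints from linear interpolation. Rather than walking events with an accumulator, it computes each output pixel directly as the average of the restored signal over the pre-image of that pixel, which is equivalent for this PSF; g_inv is a hypothetical stand-in for the inverse warp and is assumed given. This is a minimal 1D illustration, not the authors' implementation.

    # Minimal 1D integrating resampler: quadratic-box restoration + rect output PSF.
    import numpy as np

    def endpoints(v):
        # endpoint estimates e_j via linear interpolation (the paper's cheap choice)
        e = np.empty(len(v) + 1)
        e[1:-1] = 0.5 * (v[:-1] + v[1:])
        e[0], e[-1] = v[0], v[-1]
        return e

    def restored_integral(v, e, t):
        # integral of the restoration from 0 to t; each whole pixel contributes v_i
        i = min(int(t), len(v) - 1)
        x = t - i
        Q = e[i]*x + (3*v[i] - e[i+1] - 2*e[i]) * x**2 + (e[i+1] + e[i] - 2*v[i]) * x**3
        return v[:i].sum() + Q

    def integrating_resample(v, g_inv, m):
        # output pixel j spans [j, j+1]; average the restoration over its pre-image
        v = np.asarray(v, dtype=float)
        e = endpoints(v)
        out = np.empty(m)
        for j in range(m):
            a, b = g_inv(j), g_inv(j + 1)
            out[j] = (restored_integral(v, e, b) - restored_integral(v, e, a)) / (b - a)
        return out

    v = np.array([10.0, 40.0, 90.0, 60.0, 20.0])
    mag = 1.02                                   # hypothetical red-channel magnification
    print(integrating_resample(v, lambda o: o / mag, m=5))

With mag = 1.0 the output reproduces v exactly, which is the imaging-consistency property in action.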
As mentioned before, other researchers have used image warping in vision. In particular, researchers at NASA have frequently used image warping, though more often to allow better human viewing than for the application of vision algorithms. A standard approach is described in [Green, 89]. This approach uses a very simple reconstruction filter (bilinear interpolation) with point sampling. While an example is presented, there is no quantitative analysis of the approach.
2.2 Computing the distortion

As mentioned previously, there are really two related types of chromatic aberration, axial and lateral. The main goal of image warping-based chromatic aberration correction is to correct for the lateral aberrations. To do this we start with the same types of geometric features used to compute the zoom and shift factors in the active lens approach. In this case we have data on the location of horizontal and vertical edges in each of the R, G, and B images (with the blue image focused). These are the same uncorrected images used in [Willson and Shafer, 91b], and we use the edge data computed by their algorithms.

The test target was a checkerboard pattern, see Fig. 2. For each of the sides of each checker, the horizontal edge position provides a stable measurement. Since the feature detector only computes the horizontal edge position near a vertex, each horizontal edge position has an accompanying vertical position which, while not stable enough to use in warping, facilitated grouping of related horizontal edges. Edges with approximately the same vertical location were thus associated to form a "row" of horizontal positions in each color band.¶ A similar matching was done to form "columns" of vertical edge position data, using the tops and bottoms of each checker. For the CCTV example below we obtained 270 horizontal edge points and 416 vertical points. For the Photometrics example there were 210 horizontal edge points and 220 vertical edge points. We do this processing for each of the three color bands. Using the edges in the blue image as the desired location, we use the difference between those positions and the computed edge positions for red (or green) to define the warp. To find the warping at other points we need to fit some model. To maintain flexibility, we choose to fit a cubic spline through the location data in each row/column and consider the warp to be the tensor product of these splines.

¶ Unfortunately, every row did not have the same number of detected horizontal edge features. To make our computation easier, we used linear interpolation between the data points with minimal variation to get an equal number of data points per row.
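A minimal sketch of this fit for one row is given below; scipy's CubicSpline stands in for whatever spline routine is actually used, and the edge positions are synthetic. The spline interpolates the red-minus-blue displacement at the matched edges, and the tensor product of the row and column splines then gives the displacement at an arbitrary pixel.

    # Fit a 1D displacement spline for one "row" of matched edges (synthetic data;
    # CubicSpline is a stand-in for the spline fit described in the text).
    import numpy as np
    from scipy.interpolate import CubicSpline

    x_blue = np.array([20.0, 120.0, 220.0, 320.0, 420.0])   # blue edge columns in this row
    x_red  = np.array([19.4, 119.7, 220.0, 320.4, 420.9])   # matching red edge columns
    disp = CubicSpline(x_blue, x_red - x_blue)               # displacement vs column

    cols = np.arange(15, 498, dtype=float)
    sample_at = cols + disp(cols)  # where the corrected red row should sample the input red row

Per-row mappings like sample_at are exactly what the 1D integrating resampler of section 2.1 consumes.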
Once we have computed the warping function, we can then use the integrating resampler, as described in section 2.1 and [Wolberg and Boult, 91], to warp the red channel and the green channel to their respective desired positions. Note that other techniques might be used to determine the warping function. In particular, a future direction will be to consider a global deformation model, which might be parameterized by the lens settings for focus and zoom. It will also examine warping all three images to correct for geometric lens distortions as well as chromatic aberration. We note that in [Green, 89], they used correlation between features in each color channel, as well as a priori calibration information. Again, however, the results seemed to be intended for human analysis and no quantitative analysis was given. Future work will compare the use of correlation with edge-based warp determination.
3 Experimental Analysis

In this section, we discuss the results of applying the image-warping chromatic aberration correction technique to two test images. The test images were obtained from the Calibrated Imaging Laboratory at CMU, and were used in [Willson and Shafer, 91b] to describe/test the active optics approach to chromatic aberration correction. We first discuss how to measure the quality of a correction, then get into the actual data.
3.0.1 Photometrics examples

These images were also collected at CMU, and used a Photometrics camera connected to a Matrox frame grabber. The lens was a Fujinon motorized zoom lens. For the color filters they used a Hoya IR block + the same RGB filters. The 1/2" checkerboard was imaged at a distance of 2.03 m. The original image is not shown. The actual sensor array (384x576) is larger than the lens's spot size, and so the images were clipped to 338 by 388. For each image 10 frames were averaged together. The integer pixel values (0..4059) were converted to the range (0..255). The color histograms of the uncorrected data are shown in Fig. 7.
3.1 Determining the quality of a correction and displaying the results

In their paper, [Willson and Shafer, 91b], Willson and Shafer use the displacement of the location of the zero-crossing edges as an error measure. In this paper we do not use such geometric measures but rather use colorimetric measures. The primary reason for this change in error measures is that we directly manipulate the geometry of the image and felt this would be an unfair measure to apply to our algorithm. Secondly, the edge position does not actually address the issue of focus in each band, since a blurry red image might have its edge in the same position as blue but the color properties of the images could be poor.
Since the actual scene is a black and white checker pattern, all pixels in the scene should image to some shade of gray. To visualize the errors we consider two techniques: direct plotting, and computing an error measure. The first, direct display, involves plotting two 2D histograms where for each pixel its B pixel value is used as the y coordinate of the plot while the x coordinate is the R pixel value or the G pixel value. In these plots, there is some ideal curve for the camera response. If the camera is linear the ideal would be a straight line. The important information in these plots is the spread of the points around their "central" tendency; the wider the spread, the more uncorrected the images.
Examples of these plots can be seen in Fig. 2, which shows the 2D histograms for the uncorrected images for the CCTV camera example. Given the intensity ranges of the images, we show the part of the histogram containing data. On the right of that figure are the plots for green-vs-blue (top) and red-vs-blue for what we call the big window. Because we did not have information about the warp at the very edges, and because we have no global model to allow extrapolation of the warping function, we consider only the window with rows [15, 465] and columns [15, 497]. Note that the active optics approach also has its poorest performance near the boundary, and all comparisons will consider its performance in the same window. The 2D histograms for the big window use intensity to encode the logarithm of the number of items in each of the 64x64 bins, with black meaning 1000 pixels in that bin. Note that a few bins are clipped because they contain > 1000 pixels.
Figure 3: 2D histograms showing dispersion in color space using the big window [15 465] [15 497]. The top row shows the corrections to green (blue-green histogram), and the bottom shows corrections for red. The left shows the results obtainable with active optics. The right shows the image warping results. Both techniques work extremely well on green and do a credible job on red; for quantitative comparisons see table 1. Overall the active optics approach is better (tighter cluster) and also more symmetric in its error. The sigmoid shape which is slightly visible in the blue-red histogram for image warping is caused by differential blur after correcting for magnification effects.
On the left of Fig. 2, we see the 2D histograms for the red components, this time restricted to a window in the lower left of the image, [15 65] [15 65]. This represents a region which is, approximately, maximally distant from the optic axis and hence where one would expect to find maximal artifacts. The plots are similar in nature except that they use a log scale with 100 items being black.

The remaining measures we will report are quantitative. While it seems intuitive to consider RMS distance to the gray-line, this has a problem: the image values are not normalized, and we did not have radiometric calibration information. Thus we adopted the following approach. Because the only information is near the edges, we restricted our attention to this region.** The rows in tables 1 and 3 marked "outside mask" computed their values outside this mask region to show the size of the error measures outside this region of interest.

** Determined as an expansion of the region where the gradient magnitude was > 4 for the CCTV example and > 2 for the Photometrics examples. (Obtained by the KBVision sequence FastGauss(2), GradIm, NormIm, ThrshIm followed by a MorphOps with "d8 5e8".)
The remaining measures we will report are by a MorphOps with \d8 5e8".
Figure 4: 2D histograms showing dispersion in color space using dierent windows. The shows the
results on the box from the lower left of the image, [15 65] [15 65]. In this case the active optics
(left) does not do as well, probably because of geometric lens distortions aecting the image. (The
open nature of the plot is indicative of a magnication error.) The image warping approach, lower
right, does reasonably well. It too has more problems in the corners, possibly because of inaccuracies
in the warping, plus focus aects. These observations are also supported by the quantitative results
in table 1.
    Algorithm       Region               Gray-line   BW-RGB   BW-R    BW-G    BW-B
                                         error       error    error   error   error
    Uncorrected     outside mask         237.234     0.096    0.073   0.069   0.076
    Uncorrected     [15 465] [15 497]    568.381     0.363    0.300   0.261   0.260
    Active Optics   [15 465] [15 497]    378.378     0.353    0.284   0.257   0.260
    Image Warping   [15 465] [15 497]    411.030     0.364    0.301   0.261   0.260
    Uncorrected     [15 65]  [15 65]     616.824     1.049    0.861   0.761   0.756
    Active Optics   [15 65]  [15 65]     557.978     1.039    0.838   0.765   0.756
    Image Warping   [15 65]  [15 65]     449.579     1.053    0.869   0.760   0.756
    Uncorrected     [50 400] [270 300]   778.941     0.880    0.692   0.636   0.673
    Active Optics   [50 400] [270 300]   637.172     0.875    0.681   0.637   0.673
    Image Warping   [50 400] [270 300]   703.478     0.880    0.693   0.635   0.673
    Uncorrected     [235 260] [50 430]   554.214     0.831    0.692   0.594   0.592
    Active Optics   [235 260] [50 430]   363.741     0.800    0.641   0.585   0.592
    Image Warping   [235 260] [50 430]   472.019     0.833    0.696   0.595   0.592

Table 1: Table of errors for the CCTV examples. As can be seen, the active optics approach produces quantitatively better values for all examples except the one in the lower left part of the image. Note the differences in the blur related measures on the vertical vs horizontal windows in the images.
Given that we do not have radiometric calibration information, we use a simple heuristic to compute approximate information. We know that the calibration target was to be black and white. We computed a set of reference values for every 10 x 10 window in the input. These reference values are obtained by considering only pixels outside the mask described above but within a window of 60x80 pixels for the CCTV (25x60 for the Photometrics) centered around the current point. Within this window we average all those above 130 to get a "white" level and all those below 100 to get a "black" level. This was done separately for each color band. Using these reference values, we define five error measures. The first, which we call Gray-line error, is the average distance from an RGB triple to the line defined by the reference values. This error measure relates to the color shift of a pixel. The remaining measures were meant to be more sensitive to blur
Figure 5: 2D histograms showing dispersion in color space using only edges of one type. The upper plots show the performance of the two algorithms (active optics on the left) when applied to a vertical window with extents [50, 400] [270, 300]. This window contains only horizontal edges. In this case the active optics does extremely well (it is best on or near axis). As can be seen in the upper right, the image-warping technique has a sigmoid shaped bias. This is caused by a difference in focus between the red and blue channels. The bottom row uses the horizontal box [235, 260] [15, 430], which contains only vertical edges. Again, active optics is right on the money. The image-warping technique does not show the differential blur as in the vertical case, but this time shows some residual magnification error (mostly near the outer edges of the image). We are not sure about the exact cause of this directionally selective behavior, but have found a similar difference in the blur of uncorrected images.
within the image. Note that the Gray-line error could be made small by overly blurring the image, because everything would approach gray! The second quantitative error measure, which we call BW-RGB error, is defined as the mean pointwise distance from each RGB triple to the nearer of the reference triples. Since the underlying image was supposed to be black and white, this measure should be small. If the images are locally blurred this measure will grow. If there is excessive blurring, the heuristic calibration process will cause this measure to become too small. The remaining measures were meant to be sensitive to blur separately in each wavelength. The third measure, which we call BW-R error, is the distance between the R value of a pixel and the closer of the reference values for R. The measures BW-G error and BW-B error are defined similarly.
Under ideal imaging, each of the error
Figure 6: 2D histograms showing dispersion in color space using the big window and the red channel. Here we see four different image warping results using different image reconstruction techniques. The upper left is the quadratic-box using cubic-convolution based edge values with a value of A = -1. This results in a correction which is barely different from using linear interpolation to get the edges. The upper right shows the results using just linear interpolation for image reconstruction. While it still does well, it is not as tight a cluster as the quadratic-box restoration approach. In the lower right we see the results of applying the integrating resampler using cubic convolution with A = -1. While it may be a superior image reconstruction filter, it has significant problems in this application. We also tried cubic convolution as it was originally intended (i.e. doing point sampling reconstruction), and the results were even worse. On the lower left we show the result of another restoration filter. This one is also a quadratic spline, but the point spread function used to define it was a 4 piece cubic approximation to a Gaussian.
measures would be zero, and for non-ideal imaging smaller error measures are better. Unfortunately, interpretation of these measures is complicated by the fact that they do not have intuitive units of measure, so it is not clear how important a difference of 10 units is in the gray-line error. Additionally, because the images have noise, and also because our approximated calibration data is not perfect, the error measures have an offset, so it is unlikely they would ever attain the value zero. When we report the quantitative results, we will also report the error measures in the area outside the aforementioned edge mask. In these regions the image should have little chromatic aberration, and the residual error measure gives some indication of the base level of each error measure.
3.2 Analysis of results

The analysis is mostly the data itself, presented in figures 3-8 and tables 1-3.
Figure 7: Here we see some examples using a Photometrics camera and a Fujinon lens. The figure shows 2D histograms of color space, with blue vs green on the left, and red vs blue on the right. The lens was supposed to have been corrected for chromatic aberration. Obviously it was not completely corrected. Note that these plots are on a different scale (black = 100) than the CCTV examples (black = 1000).
Figure 8: Here we see the corrected versions of the Photometrics example. Green is on top, red on the bottom. On the left is the active optics approach. On the right is the image warping approach. Qualitatively, we did comparably well; for a quantitative comparison see table 3. While the active optics approach reduced the RMS error for the red channel, it did increase the size of the envelope in color space.
    Algorithm                                  Gray-line   BW-RGB   BW-R    BW-G    BW-B
                                               error       error    error   error   error
    Uncorrected                                568.381     0.363    0.300   0.261   0.260
    Active Optics                              378.378     0.353    0.284   0.257   0.260
    Quadratic-box restoration,
      edges from linear interpolation          411.030     0.364    0.301   0.261   0.260
    Quadratic-box restoration,
      edges from cubic convolution, A=-1       412.021     0.363    0.301   0.261   0.260
    Quadratic-gauss restoration,
      edges from cubic convolution, A=-1       411.744     0.364    0.301   0.261   0.260
    Bi-linear interpolation                    453.921     0.366    0.303   0.264   0.260
    Cubic convolution, A=0                     581.295     0.367    0.305   0.266   0.260
    Cubic convolution, A=-1                    585.868     0.363    0.300   0.260   0.260

Table 2: Table of errors for different reconstruction algorithms applied to the CCTV example. There is a slight sharpening when using A = -1, but because the warp is rather small, the sharpening is not very significant. Note that the difference between the new reconstruction methods and linear interpolation is about the same as the difference between active optics and the new methods. Finally, cubic convolution seems worse than the uncorrected image, although the qualitative results looked like some improvement. We are still investigating this behavior.
    Algorithm       Region               Gray-line   BW-RGB   BW-R    BW-G    BW-B
                                         error       error    error   error   error
    Uncorrected     outside mask         306.499     0.154    0.113   0.119   0.119
    Uncorrected     [15 323] [15 323]    694.093     0.391    0.302   0.288   0.301
    Active Optics   [15 323] [15 323]    456.365     0.392    0.301   0.291   0.301
    Image Warping   [15 323] [15 323]    562.715     0.399    0.304   0.305   0.301
    Uncorrected     [15 65]  [15 65]     805.603     0.969    0.731   0.670   0.791
    Active Optics   [15 65]  [15 65]     546.352     0.968    0.709   0.694   0.791
    Image Warping   [15 65]  [15 65]     567.845     1.027    0.774   0.775   0.791

Table 3: Table of errors for the Photometrics example.
The captions in the figures are meant to be almost self-contained, and present most of the commentary. The 2D histograms are meant to provide a qualitative measure of performance, and clearly show the largest errors. The tables provide a more quantitative comparison, with an unweighted sum-of-distances type mixing of large and small errors. The references to the active optics approach refer to [Willson and Shafer, 91b], who were kind enough to supply us with their data. References to quadratic-box restoration refer to the method described in section 2.1, [Boult and Wolberg, 91] and [Wolberg and Boult, 91], with edge values determined with linear interpolation. Details on cubic convolution, which is used in figure 6 and table 2, can be found in the two previous references as well as [Rifman and McKinnon, 74] and [Park and Schowengerdt, 83]. We note that the figures use cubic convolution in the integrating resampling approach. We
also tested cubic convolution using point sampling, and its performance was slightly worse.
3.2.1 CCTV camera based examples

These images were collected at CMU, and used in [Willson and Shafer, 91b]. They used a General Imaging camera connected to a Matrox frame grabber. The lens was a Cosmicar motorized zoom lens (12.5-75mm), with a minimum focus distance of 1.2 m. For imaging they used a Corion IR block + filters for R, G and B.†† The 1/2" checkerboard was imaged at a distance of 1.5 m. The original blue channel and initial color histograms were shown in Fig. 2.

†† They used Wratten filters: #25 + 0.9ND for red, #58 + 0.6ND for green, and #47B for blue.
3.3 Improvements to the image warping technique

If the original images had been taken at a point of maximal focus for yellow (or "white"), then the results of the image warping techniques would probably have been even better. A significant part of our error is due to the differential blur between wavelengths, which is not corrected by warping. In the CCTV case this difference should be maximal when focused on an extremal wavelength (e.g. blue), as was the case here. Note that focusing on yellow or an unfiltered image should reduce the error in the uncorrected images as well, but should not affect the results of the active optics approach.

The image warping technique would also likely benefit from a denser set of feature points, especially near the edges where the lens distortions are changing most quickly. Future work will examine calibration techniques to correct both for chromatic aberration and radial/tangential lens distortion. Further, we will be looking into determining a functional form, parameterized by zoom, focus and aperture settings, for the correction functions.
As discussed below, one of the main advantages of the active optics approach is the ability to correct for axial aberrations, i.e. wavelength dependent blur. An interesting approach would be to use active focus for each channel, then use image warping on the results to correct for lateral chromatic aberration. This should provide a good tradeoff between fidelity and cost of implementation, since it would require only active focus control. In fact, the focus differences might be implemented directly in an RGB sensor, e.g. by beam splitting and imaging each band separately with a slightly different focal length.
4 Critical Comparison
We now critically compare the two techniques, in a relative fashion. We show a + when the image warping technique has the advantage, a - when the active optics approach has the advantage, and a ± when the advantage might depend on the application.

+ Warping can be applied to images taken with an "RGB" camera where each frame is collected simultaneously. Thus it has the potential for use in color sequences.

+ Warping does not require specialized equipment. While the use of motorized zoom/focus lenses is growing, the ability to precisely translate the camera between images is less common.‡‡

+ Warping can handle significant chromatically varying geometric distortions even if no focus/zoom could correct for them (such as higher order lens distortions).

+ The warping approach holds the potential to correct for other geometric and radiometric aberrations at the same time it corrects for chromatic aberrations. This, however, would need more calibration information.

± The warping approach can be successfully applied to lenses that have undergone "chromatic aberration correction". Because of the complex nature of the chromatic aberrations on such lenses, image warping may, depending on your error criterion, do better than active optics.

- Because image warping, as described here, is local in nature, errors in the localization of calibration features will have a local effect. Such errors, however, will not be mitigated, except in spatial extent, by the number of features. If we used a global model for the distortion then we might offset feature localization errors by overconstraining the model.

- For simple (uncorrected) lenses, the active optics approach yields better overall results since it can refocus the channels. We cannot quantitatively say how much active optics gains, since the image warping results should improve if the images are taken focused with yellow or unfiltered light.

- The active optics approach does not need the same amount of calibration information for operation on a number of focus planes. They would only need to compute the change in zoom, and shifts, for each different depth. The current image warping would need a full cubic-spline mesh for each depth, though a more global model might be developed.

‡‡ We were not able to determine the importance of this last stage of the CMU active optics approach.
5 Conclusions and future work

This paper demonstrated the idea of image warping for the correction of chromatic aberration using images from two different camera/lens combinations. The method compared reasonably well, both qualitatively and quantitatively, to the CMU active optics approach, [Willson and Shafer, 91b]. The proposed warping methods used recently developed image reconstruction/restoration methods [Boult and Wolberg, 91], which were shown to outperform other techniques.
Acknowledgments

This work was supported in part by DARPA Contract #N00039-84-C-0165 and in part by NSF PYI award #IRI-90-57951, with additional support from Siemens and AT&T. Thanks to Reg Willson, Steve Shafer and the folks at the CMU Calibrated Imaging Lab for the images, the data, the early drafts of the tech report, and for generally useful discussions about image acquisition.

References

[Boult and Wolberg, 91] T. Boult and G. Wolberg. Local image reconstruction and sub-pixel restoration algorithms. Technical report, Center for Research in Intelligent Systems, Department of Computer Science, Columbia University, 1991.

[Fant, 86] K.M. Fant. A nonaliasing, real-time spatial transform technique. IEEE Computer Graphics and Applications, 6(1):71-80, January 1986. See also "Letters to the Editor" in Vol. 6, No. 3, pp. 66-67, March 1986 and Vol. 6, No. 7, pp. 3-8, July 1986.

[Green, 89] W.B. Green. Digital Image Processing: A Systems Approach. Van Nostrand Reinhold Co., NY, 1989.

[Kingslake, 78] R. Kingslake. Lens Design Fundamentals. Academic Press, NYC, NY, 1978.

[Klinker, 88] G.J. Klinker. A Physical Approach to Color Image Understanding. PhD thesis, Carnegie Mellon University, Pittsburgh, PA, May 1988.

[Krotkov, 87] E.P. Krotkov. Exploratory visual sensing for determining spatial layout with an agile stereo camera system. PhD thesis, University of Pennsylvania, Philadelphia, PA, April 1987.

[Laikin, 91] M. Laikin. Lens Design. Marcel Dekker, Inc., NY, 1991.

[Park and Schowengerdt, 83] S.K. Park and R.A. Schowengerdt. Image reconstruction by parametric cubic convolution. Computer Vision, Graphics, and Image Processing, 23:258-272, 1983.

[Rifman and McKinnon, 74] S.S. Rifman and D.M. McKinnon. Evaluation of digital correction techniques for ERTS images. Technical Report 20634-6003-TU-00, TRW Systems, Redondo Beach, Calif., July 1974.

[Slama et al., 80] C.C. Slama, C. Theurer, and S.W. Henriksen. Manual of Photogrammetry. American Society of Photogrammetry, Falls Church, VA, fourth edition, 1980.

[Willson and Shafer, 91a] Reg Willson and Steven A. Shafer. Active lens control for high precision computer imaging. In Proceedings of the IEEE Conference on Robotics and Automation, volume 3, pages 2063-2070, Sacramento, CA, 1991.

[Willson and Shafer, 91b] Reg Willson and Steven A. Shafer. Dynamic lens compensation for active color imaging and constant magnification focusing. Technical report, The Robotics Institute, Carnegie Mellon University, 1991.

[Wolberg and Boult, 89] G. Wolberg and Terrance E. Boult. Separable image warping with spatial lookup tables. Computer Graphics, 23(3):369-378, July 1989. (SIGGRAPH '89 Proceedings).

[Wolberg and Boult, 91] G. Wolberg and Terrance E. Boult. Imaging consistent reconstruction/restoration and the integrating resampler. Technical report, Center for Research in Intelligent Systems, Department of Computer Science, Columbia University, 1991.