
Annual Meeting of the Lunar Exploration Analysis Group (2016), Abstract #5037.
QUANTITATIVE EVALUATION OF A PLANETARY RENDERER FOR TERRAIN RELATIVE NAVIGATION. E. Amoroso1, H. Jones2, N. Otten2, D. Wettergreen2, and W. Whittaker1,2. 1Astrobotic Technology, Inc., 2515 Liberty Ave, Pittsburgh, PA 15222, [email protected]; 2Carnegie Mellon University Robotics Institute, 5000 Forbes Ave, Pittsburgh, PA 15213, [email protected].
Introduction: New missions in planetary research
require a spacecraft to autonomously land with a precision that is difficult to achieve with traditional space
sensors. Visual navigation techniques have been developed, specifically terrain relative navigation (TRN), to
achieve low landing dispersions [1][2]. TRN achieves
an absolute pose measurement by registering a visual
image to a georeferenced image database. This database
can be composed of previous spacecraft images of
planetary terrain or of simulated renderings. One
advantage of using renderings as the georeferenced database is that renderings can be generated at the specific
date and time the spacecraft will expect to use TRN [3].
Thus, illumination angles and planetary and solar
ephemeris will be very similar to the spacecraft’s visual
imagery. Our work presents a ray-tracing lunar map
generator based on the Mitsuba renderer [4] that uses
graphical textures and stochastic path-tracing algorithms to generate realistic, map-projected lunar images
at multiple spatial resolutions.
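The role of terrain geometry, albedo, and solar geometry in a rendered image can be illustrated with a simple Lambertian shading sketch. This is a deliberate simplification, not the Mitsuba path tracer used in this work; all array shapes and values below are hypothetical.

```python
import numpy as np

def lambertian_image(dem, albedo, sun_dir, cell_size=1.0):
    """Render radiance ~ albedo * cos(solar incidence) from a DEM.

    Illustrative Lambertian model only; the actual renderer uses
    stochastic path tracing over the full scene.
    """
    # Surface normals from DEM slope (gradient returns d/drow, d/dcol).
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(dem)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    sun = np.asarray(sun_dir, dtype=float)
    sun /= np.linalg.norm(sun)

    # Cosine of incidence angle; facets facing away from the sun go dark.
    cos_i = np.clip(normals @ sun, 0.0, None)
    return albedo * cos_i

# Hypothetical 64x64 terrain: a sinusoidal ridge with uniform albedo.
y, x = np.mgrid[0:64, 0:64]
dem = 5.0 * np.sin(x / 8.0)
img = lambertian_image(dem, albedo=np.full_like(dem, 0.12),
                       sun_dir=[1.0, 0.0, 0.3])
```

Because the sun direction is an explicit input, the same machinery shows why renderings can be generated for the specific illumination conditions the spacecraft will encounter.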
Methods: The renderer uses a combination of
LOLA digital elevation models (DEMs), NAC stereo
DEMs, the SLDEM2013 dataset, and Clementine albedo maps as data inputs to achieve its precision at multiple scales [5][6]. We then quantitatively compare raytraced renderings using DEMs at various spatial resolutions to LRO NAC and WAC images. Pixel-by-pixel
comparisons are made between the simulated radiance and the radiance received by the WAC and NAC instruments. Multiple locations are compared, including polar regions as shown in
Figure 1, and previous Apollo landing sites as shown in
Figure 2. Next, a preliminary investigation into the use of
this renderer for TRN applications is presented. We generate a high-resolution rendered lunar map of
the Lacus Mortis region and register images captured by the
LRO WAC, NAC, and Apollo Metric Camera instruments. Registration is performed using an a priori position estimate to rectify camera images to the database
projection, from which a homography is estimated using
visual correspondences. Using the respective instrument’s camera model, a pose measurement is obtained.
Position measurement error is then quantified using
spacecraft ephemeris data as ground truth. Limitations
and sensitivity to image spatial resolutions, illumination
angles, and a priori estimates are presented.
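The homography-estimation step described above can be sketched with a direct linear transform (DLT) over point correspondences. This is a minimal illustration with hypothetical synthetic points; the actual pipeline obtains correspondences from visual feature matching and recovers pose through each instrument's camera model.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography from point correspondences via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The smallest right singular vector solves A h = 0 up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map 2D points through H using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]

# Hypothetical correspondences: map-projected database points vs.
# rectified camera-image points related by a known homography.
H_true = np.array([[1.02, 0.01, 5.0],
                   [-0.01, 0.98, -3.0],
                   [1e-5, 2e-5, 1.0]])
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 60]], float)
dst = apply_homography(H_true, src)
H_est = estimate_homography(src, dst)
```

With noise-free correspondences the DLT recovers the homography exactly (up to scale); with real feature matches, a robust estimator such as RANSAC would wrap this solver.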
Acknowledgement: This work was supported in
part by NASA contract NNX13AR25G.
References: [1] Johnson, A., et al. (2016) AIAA Guidance, Navigation, and Control Conference. [2] Johnson, A., et al. (2015) Proc. AIAA Guidance, Navigation, and Control Conference. [3] Peterson, K., et al. (2012) i-SAIRAS. [4] Jakob, W. (2010) Mitsuba Renderer, http://www.mitsuba-renderer.org. [5] Mazarico, E., et al. (2011) Icarus 211.2: 1066–1081. [6] Gläser, P., et al. (2014) Icarus 243: 78–90.
Figure 1. Quantitative comparison of a ray-traced simulation
(left) and an image from LRO’s WAC instrument (middle). The
pixel-by-pixel normalized difference in radiance (right) shows
that 98% of rendered pixels are within 15% of the radiance values measured in the LRO image.
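The statistic reported in Figure 1 can be sketched as follows; the arrays are hypothetical and the normalization convention (dividing by the measured radiance) is an assumption.

```python
import numpy as np

def fraction_within(rendered, measured, tol=0.15):
    """Fraction of pixels whose normalized radiance difference
    |rendered - measured| / measured falls within `tol`."""
    diff = np.abs(rendered - measured) / np.maximum(measured, 1e-12)
    return float(np.mean(diff <= tol))

# Hypothetical radiance images: the rendering is within 10% of the
# measurement everywhere except a small mismatched corner patch.
measured = np.full((128, 128), 40.0)
rendered = measured * 1.10
rendered[:8, :8] = measured[:8, :8] * 0.5  # 50% off in one corner
score = fraction_within(rendered, measured)
```

Here `score` is the fraction of agreeing pixels; the paper's 98%-within-15% figure corresponds to `fraction_within(...) == 0.98` under this convention.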
Figure 2. High-resolution (1.2 m/pixel) rendering of the Apollo 17 landing site using a Clementine albedo map vs. assuming constant albedo. A: LRO NAC image M1190504960L. B: Rendered image without albedo map. C: Clementine albedo map. D: Rendered image with albedo map. Without an albedo map, the rendering was measured to be 72% similar to the LRO image; with an albedo map, 91% similarity was achieved.