Acquiring the Reflectance Field
of a Human Face
Paul Debevec, Tim Hawkins, Chris Tchou,
Haarm-Pieter Duiker, Westley Sarokin, Mark Sagar
SIGGRAPH 2000
Michelle Brooks
Goals
• To create realistic renderings of human faces
• To extrapolate a complete reflectance field from
the acquired data, which allows the face to be
rendered from novel viewpoints
• To capture models of the face that can be
rendered realistically under any illumination,
from any angle and with any sort of expression.
Challenges
• Complex and individual
shape of the face
• Subtle and spatially varying
reflectance properties of the
skin (and the lack of a method for
capturing these properties)
• Complex deformation of the
face during movement.
Traditional Method
• Texture mapping onto a geometric model
of a face
• Problem: Fails to look realistic under
changes of lighting, viewpoint and
expression
Recent Methods
• Skin reflectance has been modeled using
Monte Carlo simulation
• In the early 90s, Hanrahan and Krueger
developed a parameterized model for
reflection from layered surfaces due to
subsurface scattering, using human skin
as a model
And now…
• Reflectometry
• Reflectance Field
• Non-Local Reflectance Field
Then…
• Re-illuminating Faces
• Changing the Viewpoint
• Rendering
Reflectometry
• Measurement of how materials reflect light
– Specifically how materials transform incident
illumination into radiant illumination
• The four-dimensional Bidirectional Reflectance
Distribution Function (BRDF) of the material is
measured (see the definition below)
• BRDFs are commonly represented as parameterized
functions known as reflectance models
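For reference, the standard definition behind that four-dimensional function (general background rather than the slides' own notation): the BRDF is the ratio of reflected radiance to incident irradiance for each pair of directions,

    f_r(\theta_i, \phi_i; \theta_r, \phi_r) =
        \frac{dL_r(\theta_r, \phi_r)}{L_i(\theta_i, \phi_i)\, \cos\theta_i\, d\omega_i}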
Reflectance Field
• The light field, plenoptic function and
lumigraph all describe the presence of
light within space
• P = P(x, y, z, θ, φ)
Reflectance Field
• When the user is moving within unoccluded
space, the light field can be described by a 4D
function
• P' = P'(u, v, θ, φ)
• A light field parameterized in this form
induces a 5D light field in the space
outside of A.
• P(x, y, z, θ, φ) = P'(u, v, θ, φ)
Reflectance Field
• Radiant light field from A under every possible
incident field of illumination
• 8-dimensional reflectance field function:
• R = R(Ri ; Rr) = R(ui, vi, θi, φi ; ur, vr, θr, φr)
• Ri(ui, vi, θi, φi) : incident light field arriving at A
• Rr(ur, vr, θr, φr) : radiant light field leaving A
Non-Local Reflectance Fields
• Incident illumination field originates far away
from A so that
– Ri(ui, vi, θi, φi) = Ri(u'i, v'i, θi, φi)
for all (ui, vi, u'i, v'i)
• The non-local reflectance field can be
represented as
– R' = R'(θi, φi ; ur, vr, θr, φr)
Non-Local Reflectance Fields
Re-Illuminating Faces
• Goal:
– to capture models of faces that can be
rendered realistically under any illumination,
from any angle and with any expression
• Approach:
– Acquire data (light field)
– Transform each facial pixel location into a
reflectance function
– Render the face from the original viewpoints
under any novel form of illumination
Light Stage
Light Stage
• Lights are spun around the vertical (azimuth)
axis θ continuously at 25 rpm
• Lights are lowered along the inclination axis φ
by 180/32 degrees per revolution of θ
• Cameras capture frames continuously at
30 frames/sec, which yields 64 divisions of
θ and 32 divisions of φ (a 64 x 32 set of
lighting directions) in approximately 1 minute
(see the mapping sketch below)
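A minimal sketch (names and layout are assumptions, not the paper's code) of how a captured frame index could be mapped to a lighting direction under the spiral scan described above:

    FRAMES_PER_REV = 64      # assumed: 64 azimuth samples per revolution
    REVOLUTIONS = 32         # assumed: 32 inclination steps covering 180 degrees

    def frame_to_direction(frame):
        """Map a frame index to an (azimuth theta, inclination phi) pair in degrees,
        assuming the light steps down 180/32 degrees each revolution."""
        rev, step = divmod(frame, FRAMES_PER_REV)
        theta = step * 360.0 / FRAMES_PER_REV    # azimuth within the current revolution
        phi = rev * 180.0 / REVOLUTIONS          # inclination after 'rev' revolutions
        return theta, phi

    # Example: direction associated with the 100th captured frame
    print(frame_to_direction(100))               # (202.5, 5.625)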
Constructing Reflectance Functions
• For each pixel location (x, y) in each
camera, that location on the face is
observed under 64 x 32 directions of
illumination (θ, φ)
• For each pixel, a slice of the reflectance
field is formed (a reflectance function)
Rxy(θ, φ) corresponding to the ray through
the pixel.
Reflectance Functions Cont.
• If we let the pixel value at (x, y) in the image
with illumination direction (θ, φ) be represented
as:
– L(θ, φ)(x, y)
then
Rxy(θ, φ) = L(θ, φ)(x, y) (see the slicing sketch below)
• Figure: mosaic of the reflectance function for a
particular viewpoint
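Because Rxy(θ, φ) = L(θ, φ)(x, y), a reflectance function is simply a per-pixel slice of the captured image stack. A minimal sketch (the array layout is an assumption, not the paper's data format):

    import numpy as np

    def reflectance_function(images, x, y):
        """Return R_xy(theta, phi) for one pixel: the 64 x 32 mosaic of RGB values
        observed at (x, y) under every lighting direction.
        images is assumed to have shape (64, 32, height, width, 3)."""
        return images[:, :, y, x, :]             # shape (64, 32, 3)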
Novel Form of Illumination
• Rxy(, ) represents how much light is reflected
towards the camera by pixel (x,y) as a result of
the illumination from direction (, )
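The relighting computation this enables, written in the same notation (δA is the solid angle covered by each sampled direction; a reconstruction from the definitions above):

    \hat{L}(x, y) = \sum_{\theta, \phi} R_{xy}(\theta, \phi)\, L_i(\theta, \phi)\, \delta A(\theta, \phi)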
Novel Form of Illumination cont.
Novel Form of Illumination cont.
• Gains efficiency: relighting is just a weighted
sum of the acquired images, with no explicit
reflectance model or ray tracing required
(see the sketch below)
• No aliasing: the light source has finite area and
the sampled directions cover the whole sphere,
so the incident illumination is captured without
gaps
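A minimal relighting sketch along those lines (shapes and names are assumptions): because each relit pixel is a dot product of its reflectance function with the sampled illumination, the whole view can be relit with one weighted sum over the image stack.

    import numpy as np

    def relight_image(images, env_map, solid_angles):
        """Relight the captured view under a novel illumination map.
        images:       (64, 32, H, W, 3) acquired light stage images
        env_map:      (64, 32, 3) novel illumination sampled at the same directions
        solid_angles: (64, 32) solid angle delta-A of each sampled direction"""
        weights = (env_map * solid_angles[..., None])[:, :, None, None, :]
        return np.sum(images * weights, axis=(0, 1))   # relit image, shape (H, W, 3)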
Also…
• Clothing and Background changes
Clothing and Background
Changing the Viewpoint
• We want to extrapolate complete
reflectance fields from the reflectance
field slices acquired earlier
• This allows us to render the face from
arbitrary viewpoints and under
arbitrary illumination
Changing the Viewpoint
• In order to render a face from a novel viewpoint,
we must resynthesize the reflectance functions
to appear as they would from the new
viewpoint
• This is accomplished using a skin reflectance
model, which guides the shifting and
scaling of the measured reflectance function
values as the viewpoint changes
Changing the Viewpoint
• The resynthesis technique requires that
the captured reflectance functions be
decomposed into specular and diffuse
(subsurface) components.
• Then, each reflectance function is
resynthesized for the new viewpoint
• Lastly, the entire face is rendered using the
resynthesized reflectance functions
Skin Reflectance
• Two components :
– specular
– non-Lambertian
Skin Reflectance
• Using RGB unit vectors to represent
chromaticities, the diffuse chromaticity is:
(Written on board)
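Concretely, representing a chromaticity by an RGB unit vector means normalizing the color to unit length (this illustrates the representation only; it is not the formula written on the board):

    \vec{d} = \frac{(R_d, G_d, B_d)}{\sqrt{R_d^2 + G_d^2 + B_d^2}}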
Separating Specular and
Subsurface Components
• Each pixel's reflectance function is separated
using a color space analysis technique
• A reflectance function RGB value
Rxy(θ, φ) can be written as a linear
combination of its diffuse color d, specular
color s, and an error component (see the
decomposition sketch below)
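A minimal sketch of that decomposition (the basis construction here is an assumption; the paper's exact color space analysis may differ):

    import numpy as np

    def separate_components(rgb, d_color, s_color):
        """Express an RGB sample as diff*d + spec*s + err*e, where d and s are the
        known diffuse and specular chromaticities and e completes the basis."""
        d = d_color / np.linalg.norm(d_color)
        s = s_color / np.linalg.norm(s_color)
        e = np.cross(d, s)
        e /= np.linalg.norm(e)
        basis = np.column_stack([d, s, e])       # 3 x 3 change-of-basis matrix
        diff, spec, err = np.linalg.solve(basis, rgb)
        return diff, spec, err

    # Example: skin-colored sample, white specular highlight color
    print(separate_components(np.array([0.8, 0.5, 0.4]),
                              np.array([0.7, 0.5, 0.4]),
                              np.array([1.0, 1.0, 1.0])))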
Specular and Subsurface
Components
• Analysis assumes specular and diffuse
colors are known.
• Specular = same color as incident light
• Diffuse color changes from pixel to pixel
as well as within each reflectance function
Finally…
• The final separated diffuse component is
used to compute the surface normal n
• The diffuse albedo ρd and the total
specular energy ρs are also computed
(a least-squares sketch follows)
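One common way to recover n and ρd from the separated diffuse data is a Lambertian least-squares fit over the known lighting directions (a sketch under that assumption; the paper's exact fitting procedure may differ):

    import numpy as np

    def fit_normal_and_albedo(diffuse_values, light_dirs):
        """Fit diffuse_values[k] ~ rho_d * dot(n, light_dirs[k]) in the least-squares
        sense. diffuse_values: (K,) intensities; light_dirs: (K, 3) unit directions."""
        g, *_ = np.linalg.lstsq(light_dirs, diffuse_values, rcond=None)  # g = rho_d * n
        rho_d = float(np.linalg.norm(g))
        n = g / rho_d if rho_d > 0 else np.array([0.0, 0.0, 1.0])
        return n, rho_d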
Transforming Reflectance Functions
• To synthesize a reflectance function from
a novel viewpoint, the diffuse and specular
components are separately synthesized
• A shadow map is also created when
synthesizing a new specular reflectance
function, to prevent a specular lobe from
appearing in shadowed directions
Rendering
Rendering
And Finally…
• Movie on Light Stage
• Demonstration