Foundations of Precomputed Radiance Transfer

HELSINKI UNIVERSITY OF TECHNOLOGY
Department of Computer Science

Jaakko Lehtinen
Foundations of Precomputed Radiance Transfer
Master's Thesis
September 30, 2004

Supervisor: Lauri Savioja, D.Sc. (Tech.), Professor of Virtual Technology
Instructor: Timo Aila, M.Sc. (Tech.)

HELSINKI UNIVERSITY OF TECHNOLOGY
ABSTRACT OF THE MASTER'S THESIS

Author: Jaakko Lehtinen
Name of the thesis: Foundations of Precomputed Radiance Transfer
Date: September 30, 2004
Pages: 14+73
Department: Department of Computer Science
Professorship: T-111
Supervisor: Lauri Savioja, D.Sc. (Tech.)
Instructor: Timo Aila, M.Sc. (Tech.)
Precomputed Radiance Transfer is a joint name for a class of methods in computer graphics.
The aim of these methods is fast rendering of realistic images of virtual scenes under dynamically changing but constrained lighting conditions. The techniques differ from the usual methods of real-time computer graphics in that they also support indirect lighting and non-pointlike
light sources.
The basic idea behind these methods is to parameterize the distribution of emitted light by
a low-dimensional linear space, and to precompute response functions on the scene for each
degree-of-freedom in the emission space. Rendering an image of the scene under dynamically
changing illumination then proceeds by taking properly weighted sums of the precomputed
response functions. This yields interactive and even real-time performance.
This thesis presents a mathematical framework for methods of precomputed radiance transfer.
Much of the previously published work on the subject may be seen as a special case of the
presented framework. Necessary background in both mathematics and realistic image synthesis
is included.
Keywords: computer graphics, realistic image synthesis, precomputed radiance transfer
TEKNILLINEN KORKEAKOULU
DIPLOMITYÖN TIIVISTELMÄ

Author: Jaakko Lehtinen
Name of the thesis: Foundations of Precomputed Radiance Transfer
Date: September 30, 2004
Pages: 14+73
Department: Department of Computer Science
Professorship: T-111
Supervisor: Lauri Savioja, D.Sc. (Tech.)
Instructor: Timo Aila, M.Sc. (Tech.)

Precomputed radiance transfer (Finnish: esilaskettu valonkuljetus) is a general name for computer graphics methods whose goal is to render, in real time, plausible images of virtual environments in which the lighting may change at runtime in a manner known in advance. These techniques differ from conventional real-time computer graphics methods in that they also model indirect reflection and non-pointlike light sources.

The basic idea of the methods is to parameterize the function describing the emission of light by means of a low-dimensional linear space, called the emission space. For each degree of freedom of the emission space, a response function is computed that describes the effect of that degree of freedom on the illumination of the environment. From the precomputed response functions, the illumination of the environment can be determined efficiently at runtime for any emission described by the emission space.

This thesis presents a general mathematical framework for precomputed radiance transfer. Many previously published methods can be explained as special cases of the framework. The thesis includes the necessary background in mathematics and image synthesis.

Keywords: computer graphics, realistic image synthesis, precomputed radiance transfer
To my father,
to Juha,
to Pilvikki,
and to the memory of my mother
Acknowledgments
The author thanks Timo Aila for his guidance, discussions, criticism and proofreading; professor
Lauri Savioja for support, criticism, and creating a good working spirit; Jan Kautz for discussions
and collaboration that led to some of the work reviewed in this thesis; Jussi Räsänen and Janne
Kontkanen for discussions and proofreading; Remedy Entertainment, Ltd., for offering an inspiring
working environment where the author's interest in the subject was born.
Helsinki, September 2004
Jaakko Lehtinen
Copyright Notice
Brand and product names appearing in this thesis are trademarks or registered trademarks of their
respective holders.
Copyright (c) 2004 Jaakko Lehtinen.
Contents

1 Introduction
  1.1 Realistic Image Synthesis and Global Illumination
  1.2 Applications of Global Illumination
  1.3 An Overview of Radiance and the Rendering Equation
      1.3.1 An Overview of Precomputed Radiance Transfer
  1.4 Prerequisites
  1.5 Contributions

2 Elements of Functional Analysis
  2.1 Linear Vector Spaces
      2.1.1 Linear Independence and the Dimension of a Vector Space
      2.1.2 Subspaces and Bases
  2.2 Norms and Inner Products
      2.2.1 Orthogonality and Best Approximation
      2.2.2 Orthogonal Projections and Dual Bases
  2.3 Linear Operators
  2.4 Operator Equations
      2.4.1 The Neumann Series
      2.4.2 Adjoint Operators and Adjoint Equations
  2.5 Numerical Methods
      2.5.1 The Point Collocation Method
      2.5.2 The Galerkin Method
      2.5.3 Stochastic Path Tracing Methods

3 Basics of Global Illumination
  3.1 Basics and Notation
  3.2 Radiance and Related Quantities
      3.2.1 The Function Space and Inner Product for Radiance Functions
      3.2.2 Irradiance
  3.3 Reflection
      3.3.1 The Rendering Equation
      3.3.2 The Rendering Equation for Incident Radiance
      3.3.3 Adjoint Equations and Importance
  3.4 An Overview of Global Illumination Algorithms
      3.4.1 Finite Element Methods
      3.4.2 Ray and Path Tracing Methods
      3.4.3 Hybrid Methods

4 Precomputed Radiance Transfer
  4.1 History of Precomputed Radiance Transfer
      4.1.1 Recent Methods
  4.2 Mathematical Framework
      4.2.1 Emission Space and the Initial Transport Operator
      4.2.2 The Equation for Precomputed Transfer
  4.3 PRT by FEM: The Method of Sloan et al.
      4.3.1 Discretization of the Incident Radiance Field
      4.3.2 Discretization of the Transport Equation
      4.3.3 Solving the Discrete System
      4.3.4 Local vs. Global Coordinate Frames
      4.3.5 Rendering
  4.4 Determining Outgoing Radiance from Transferred Incident Radiance
      4.4.1 The Original Method of Sloan et al.
      4.4.2 The Method of Kautz et al.
      4.4.3 The Method of Lehtinen and Kautz
      4.4.4 Bi-scale Radiance Transfer
      4.4.5 Outgoing Radiance from Separable BRDF Approximation
      4.4.6 Comparison of Methods

5 Discussion and Future Work
  5.1 Applications
      5.1.1 Architectural Applications
      5.1.2 Entertainment
  5.2 Limitations and Future Work

A Proofs and Derivations
  A.1 The Equivalence of the Two Galerkin-type Methods
  A.2 The Kernels of the Transport Operator and its Adjoint
  A.3 Measuring the Average Radiance Through a Pixel
  A.4 The Spherical Harmonics
  A.5 The Adjoint of the Initial Transport Operator

Bibliography
Chapter 1
Introduction
This thesis deals with global illumination, which is a particular topic in computer graphics. More
precisely, we develop a general mathematical framework for methods jointly called precomputed
radiance transfer. These techniques have gained broad attention in the research community during
the past few years.
The uses of computer graphics are ubiquitous. Increased computing power and the development of efficient algorithms have made computer-generated imagery an everyday commodity in movies, computer games, mobile phones, advertising, scientific visualization, and both civilian and military simulations. The methods dealt with here are mostly applicable to real-time computer games and lighting design systems.
The quest for visual realism is all about simulating the behaviour of light. This should be taken
in a very broad sense; not only are we interested in the emission, reflection and refraction of light
in virtual scenes, but also in the perceptual and emotional effect of the generated picture on the
viewer. For example, when we have finished simulating the distribution of light in the scene, we
must then somehow map it into a picture that we can display on the computer screen; depending on
the application, we may wish to convey different feelings to the user. To this end we must account
for psychological effects as well as consider the response of the human visual system to the lighting,
for example by simulating the scattering inside the eye that creates halos, or by desaturating the colors to simulate night vision if we are rendering a picture of a night scene. These are important topics,
but we shall not deal with them in this thesis. We aim only to solve the physically-based equations
of light transport in an efficient manner, so that the methods described above may then take over to
produce the final picture.
Organization of This Thesis. The rest of this chapter introduces the problem of realistic image
synthesis and outlines the major strategies for solving the related equations. In particular, a high-level introduction to precomputed radiance transfer is given. The chapter concludes by listing the prerequisites
prerequisites for the reader and the contributions made in this thesis. Chapter 2 reviews some basics
of linear functional analysis, the mathematical tool for dealing with illumination computations. The
treatment is mostly abstract. Chapter 3 deals with the fundamentals of global illumination, including
the relevant quantities and the equations that relate them. Chapter 4 then presents the details of our
framework for precomputed radiance transfer; in particular, it includes as an example a thorough
derivation of a previous method as a special case of our framework. We conclude with discussion in
Chapter 5.
1.1 Realistic Image Synthesis and Global Illumination
Global illumination techniques in computer graphics approximate the flow of light in a synthetic
environment, with the aim of synthesizing pictures that appear realistic to the human observer. To
accomplish this goal, global illumination techniques solve physically-based equations for energy
transfer between emitters and reflectors. “Globalness” means that all surfaces potentially affect the
distribution of light on all the other surfaces. This is because objects cast shadows on each other,
and also reflect the light incident upon them onto each other in a recursive manner.
Some simplifying assumptions about the behaviour of light are usually made. These include 1) the ray-optics model, i.e., the assumption that light travels along straight lines and that its wave nature can be neglected, and 2) the assumption that each wavelength is independent of the others, i.e., fluorescence is not modeled. Color is usually modeled by performing separate computations for the red, green and blue color bands. Continuous spectra are usually not supported.
The equation that describes the flow of light is known as the Boltzmann integrodifferential equation, which has been adapted into the graphics literature under the name of the volume rendering
equation. In this thesis we do not deal with participating media, e.g., smoke or haze, which scatter light as it travels through the scene. Instead, we make the assumption that light travels along straight
lines and only changes direction when it reflects off a solid boundary in the environment. This property ensures that the flow of light anywhere in the environment is uniquely specified by giving the
distribution of light leaving the surfaces of the scene; thus, the distribution function that we seek
is defined on a four-dimensional domain of two spatial and two directional variables instead of a
five-dimensional domain of three spatial and two directional variables. The equation that describes
the energy equilibrium in a nonscattering medium is called the rendering equation [38].
When solving for the distribution of light in a scene, the aforementioned, inherent web of global
interreflections makes the problem fundamentally difficult. Strategies for determining this distribution can be divided into two categories: the view-dependent and the view-independent approaches.
Methods falling into the former category concern themselves only with light that hits a virtual camera, i.e., the light that contributes to a particular view of the environment. Algorithms in the latter
category solve for the complete, four-dimensional function of two spatial and two directional arguments that completely describes the appearance of each surface point in the environment. In principle, this solution may then be used for quickly drawing images of the environment from varying
viewpoints. A quantity called importance or potential can be employed for directing computational
resources to the parts of the problem that contribute most to the final image or images.
View-independent algorithms are generally based on the Finite Element Method (FEM). They approximate the distribution of light in the scene as a linear combination of basis functions defined
on the space of surface appearance functions. The most well-known view-independent method is
radiosity, which makes the assumption that all surfaces are perfect Lambertian reflectors (or perfect
diffusers), so that the appearance of a given surface patch does not depend on the direction from
which it is viewed. This drops two dimensions out of the problem, making it much more tractable.
True four-dimensional methods also exist, but they are not generally used in practice because of their complexity. Finite element techniques (including the radiosity method) always involve heavy computation, and usually the problems are so large that iterative, matrix-free methods must be employed; even explicit construction of the matrix that defines the discrete problem is infeasible due to the huge number of degrees of freedom, which often runs into the millions. This also means that the computation must be performed separately for each configuration of light sources.
Methods of precomputed radiance transfer are extended forms of finite-element global illumination. The techniques render scenes in lighting conditions that change according to a predetermined
parameterization, without requiring a new preprocessing step for each lighting configuration.
1.2 Applications of Global Illumination
Global illumination algorithms have a wide field of applications, including special effects for movies,
computer games, architectural visualization and general VR applications.
The use of computer-generated imagery for augmenting regular film material shot by conventional
cameras has increased dramatically during the past decade. This has been enabled partly by growing
computational resources and development of more sophisticated algorithms, and partly by the birth
of techniques that are able to recover the motion path of a camera based only on a video sequence
shot by that camera. This “matchmoving” is required because the computer-generated and real
footage must move synchronously in the final composited image. Also, fully computer-generated
feature films have started to emerge since the mid-1990s.
However, despite widespread use of graphics in filmmaking, global illumination effects are not yet
widely accounted for. This is because global interreflection effects cannot be rendered without knowing the whole scene beforehand; for example, a digital character that is added to a frame should not only be lit by the lights of the real scene, but should also affect light transport within the scene, for instance by casting shadows. The situation is very different for completely computer-generated films, where first approximations to indirect lighting have recently been used in actual production [72].
Computer games are a major application for real-time graphics algorithms. Since most global illumination computations are too heavy to be performed at runtime, games often resort to displaying a
static, precomputed illumination solution. Radiosity techniques [11] are ideal for this purpose since
the solutions do not depend on the point-of-view of the virtual camera, and may be easily rendered
using consumer-level graphics hardware. However, this constrains the light sources and the receiving geometry to be static.
Architectural walkthroughs are an important tool for evaluating designs of buildings. Being able
to navigate an accurate 3D model (which is often available, although with additional processing,
from CAD models) of the building is not enough to convey the “feel” of the final construction,
since lighting provides important visual cues about the spatial relationships within an environment.
Realistic image synthesis techniques, particularly radiosity, are often used for generating images
or interactive walkthrough sequences of these models. This enables also non-architects to get an
impression of what the final building will be like. Dedicated global illumination software packages are also often used to aid illumination engineering.
Precomputed radiance transfer techniques are a natural extension to the global illumination methods
that are used in interactive or real-time applications. Being able to quickly render images of a scene under different lighting conditions may provide a substantial improvement, e.g., in lighting design applications and computer games.
1.3 An Overview of Radiance and the Rendering Equation
Radiance is the quantity that describes the appearance of the surfaces of a scene: For each point and
for each direction, it characterizes the amount of light that leaves or impinges upon a surface patch
located at that point. This quantity will be described in detail in Section 3.2. For now suffice it to
say it is a scalar function u(x, ω) of both space and direction. (Monochromatic radiance is a scalar; if more color bands are required, more dimensions are needed. This does not make the treatment more difficult, since the equations of rendering are assumed to hold separately for each wavelength.) Here, x is a vector variable that runs over the surfaces of the environment, and ω denotes a direction. Usually, the function u(x, ω) will denote the radiance leaving the point x towards direction ω. We will denote this by u(x → ω). It is often beneficial to consider the radiance incident upon x from direction ω; this will be denoted by u(x ← ω).
The rendering equation, whose unknown is the radiance distribution function u(x → ωout ), describes
the equilibrium distribution of light within a scene. It is an integral equation, i.e., the unknown
function u appears both by itself and under an integral. The equation is usually written as
    u(x → ωout) = (T u)(x → ωout) + e(x → ωout),    or    u = T u + e,
where T is a so-called transport operator and e(x → ωout ) is the radiance emitted from x towards
ωout . This equation states that the radiance u(x → ωout ) leaving x towards ωout is the sum of emitted
radiance e(x → ωout ) and reflected radiance (T u)(x → ωout ). The transport operator is a linear
integral operator that maps a given radiance distribution function u into a new distribution, denoted
T u, through one reflection: For each point x, the operator looks at the radiance incident upon x and
how it is reflected to different directions. The exact form of this operator will be derived in Sections
3.3 and 3.3.1, where we also derive the rendering equation.
Since u = T u + e ⇔ (I − T )u = e, where I is the identity mapping, the formal solution to the
global illumination problem may be written as
    u = (I − T)^{-1} e,
where (I − T )−1 is the inverse of the operator (I − T ). However, even though the problem looks
like a regular linear system, the operators are infinite-dimensional, which means no finite number
of arguments may completely describe them. This is a direct consequence of the fact that the space
of all possible radiance distribution functions is infinite-dimensional. Because of this, the equations
must be either discretized somehow (the Finite Element Method), or, solutions must be approximated
by averaging pointwise numerical estimates (path tracing methods).
Finite element methods construct a linear, finite-dimensional subspace of the space of all possible radiance distribution functions, and approximate the solution of the rendering equation by functions in this subspace. This amounts to seeking the solution as a linear combination of basis functions that span the subspace. After choosing the appropriate basis functions, application of, e.g., the
Galerkin or Point Collocation method [5] leads to a finite-dimensional linear system
(1.1)    u = (I − T)^{-1} e,
where u and e are vectors of coefficients that identify functions in the finite-dimensional approximating subspace, and I and T are matrices of finite size. These methods of approximation will be
reviewed in Section 2.5.
In real applications the dimension of the approximating subspace is so large – often several million – that it is completely infeasible to even construct the matrix I − T, let alone invert it. In addition to the large dimensionality, a major difficulty is that the matrix T is typically dense, i.e., it contains mostly non-zero
elements. This property is in sharp contrast to the case of partial differential equations discretized
using the finite element method; there the corresponding matrices are very sparse, i.e., they contain
mostly zeros. Also, the computation of the elements of T will turn out to be a rather heavy procedure.
These memory and computational costs force practical global illumination methods that are based
on the finite element method to work iteratively: Starting from a fixed right-hand side e, the methods
compute successive approximations to u by only evaluating parts of the matrix that are of interest
in the current iteration. This is much lighter than actually forming the matrix. Many methods
make intelligent, often hierarchical approximations in this process. In any case, computation must
be performed again if the right-hand side e changes. This is where precomputed radiance transfer
methods differ from usual finite element global illumination methods.
1.3.1 An Overview of Precomputed Radiance Transfer
Recent methods of Precomputed Radiance Transfer or PRT [68, 42, 49, 67, 69, 53] differ from the
methods outlined above in a crucial fashion. If we had the inverse of the matrix (I − T) from equation (1.1), we would be able to quickly determine – by a matrix-vector multiplication – an
approximation u of the energy equilibrium in the scene, given any possible emission function e,
where the emission functions are defined on the same space of functions as radiance distributions.
However, we are almost always interested in lighting solutions described by only a small set of
possible emissions. For instance, being able to use a leg of a chair as an emitter seldom has practical
value, or in other words, (I − T )−1 contains largely uninteresting data whose computation is very
difficult.
A more realistic situation would be to ask the question “if I have three light sources A, B and C with
fixed positions and orientations but varying intensities I1 , I2 and I3 , what is the illumination in the
scene?”, or, “how does the illumination of this apartment change during the day, and how do clouds
in the sky affect it?”. In the former case the emissions actually have only three degrees of freedom (the intensities of the light sources) regardless of the discretization granularity of the radiance
distributions, and in the latter case the emission of light by the sun and the skydome may be parameterized by some spherical function basis, such as the spherical harmonics. In any case, there are less
parameters that describe emission than what are needed for describing the resulting radiance distributions. It turns out that this fact may be exploited for designing algorithms where the lighting may be
changed dynamically at runtime so that we obtain correct global illumination solutions interactively
for each emission configuration. It is essential that the relationship of the emission variables to the
corresponding radiance distributions is linear. Unfortunately, the requirement of linearity rules out
some parameterizations. For instance, the position of the sun in the sky for a particular geographic
location may be parameterized by one variable, time. However, the irradiance cast by the sun onto
a particular surface location on the ground is not linear with respect to time; clearly, one cannot
compute the irradiance at twelve o’clock by multiplying the irradiance from six o’clock by two.
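To make the role of linearity concrete, the following NumPy sketch mimics the three-light example above; the array shapes, data and function names are placeholder assumptions, not part of any particular method.

```python
import numpy as np

# A minimal sketch of the core PRT idea for the three-light example
# (illustrative only; names, shapes and data are assumptions).
# Offline, a global illumination solver computes one radiance solution per
# unit-intensity emitter; at runtime, any intensity setting is a weighted sum.

n_receivers = 10_000                      # e.g., vertices or sample points in the scene

# Precomputed responses: u_basis[k] holds the global illumination solution
# for emitter k at unit intensity (these would come from an offline solver).
u_basis = np.random.rand(3, n_receivers)  # placeholder data for emitters A, B, C

def relight(intensities):
    """Return the illumination solution for the given emitter intensities.

    Linearity of light transport makes this a single weighted sum, which is
    what allows interactive relighting."""
    return np.asarray(intensities) @ u_basis

u_scene = relight([1.0, 0.3, 0.0])        # light A at full, B dimmed, C off
```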
More formally, PRT methods may be understood to parameterize the space of possible emissions by
a linear space E, and to define a linear operator V that maps an emission configuration from this
space into the corresponding radiance distribution in the scene that describes the direct illumination
due to the emission, i.e., V provides the answer “if the emission is described by e, V e is the direct
illumination incident onto the scene due to e”. This will be discussed in more detail in Section
4.2.1. In the end we will obtain a solution operator that maps an emission configuration into the
corresponding global illumination solution. The matrix representing this solution operator is much smaller than (I − T)^{-1},
since the emission space has much lower dimension than the space of possible radiance distribution
functions.
The full solution to the global illumination problem based on an operator inverse has previously
been formulated through the so-called Global Reflectance Distribution Function or GRDF [46]. The
GRDF is the kernel of (I − T )−1 , the solution operator that transforms an emission function into the
corresponding radiance distribution that solves the rendering equation. The GRDF behaves much
like Green’s functions in the theory of differential and integral equations of mathematical physics.
The kernel is defined on the Cartesian product of the domain of the radiance functions with itself; the domain of the GRDF is thus eight-dimensional, even in the absence of participating media. This complexity makes the GRDF an impractical construction in itself. It is still valuable as a reference
point for designing practical algorithms.
1.4 Prerequisites
In the following chapters, the reader is assumed to have basic knowledge of computer graphics and
rendering, a familiarity with functions of several variables, multidimensional integration, and basic
linear algebra.
1.5 Contributions
This thesis presents a mathematical framework for precomputed radiance transfer. The framework
is given in the form of an operator equation that extends the well-known rendering equation [38] for
emission functions parameterized in a finite-dimensional linear space. To the best of the author's knowledge, such a formulation has not appeared before.
Chapter 2
Elements of Functional Analysis
This chapter presents some basic facts of linear functional analysis. These include the definitions
of a vector space, the inner product, inner product spaces, linear operators, and the description of
methods for solving operator equations in function spaces by means of the point collocation and
Galerkin methods. Only the basic principles that are needed when dealing with global illumination are presented, and error analysis is skipped altogether. A brief description of why a particular topic is important in illumination computations is included in each section.
The treatment is not fully rigorous in the mathematical sense, and many theoretically important concepts are simply left out in favor of a more intuitive picture of the essentials. In some cases footnotes comment on an apparent mismatch between general theory and our formulation. The interested reader
is referred to more complete treatises (e.g., [44, 5, 12, 4, 43, 28]) on the subject.
2.1 Linear Vector Spaces
Linear spaces of functions are basic building blocks of the mathematics of global illumination. These
spaces are called vector spaces, and they share the essential properties of the usual vector spaces Rn ,
with n ∈ N. Linearity is of key importance here; the sought-after equilibrium distribution of light is
linear with respect to the function that specifies emission.
This section presents an abstract, axiomatic definition of a vector space and gives some examples.
In subsequent sections these abstract spaces will be equipped with an inner product; this introduces
a powerful geometric flavor.
The Vector Space. A linear vector space X is a collection of elements (called vectors), a scalar
field K (which may be either R or C, but K = R is always assumed here), the operator + : X × X ↦ X that adds two vectors, and the operator K × X ↦ X that multiplies a vector with a scalar, so that
the following axioms are satisfied:
    (x + y) + z = x + (y + z)        ∀ x, y, z ∈ X
    x + y = y + x                    ∀ x, y ∈ X
    ∃ 0 ∈ X : x + 0 = x              ∀ x ∈ X
    ∃ −x ∈ X : x + (−x) = 0          ∀ x ∈ X
    1x = x                           ∀ x ∈ X
    α(βx) = (αβ)x                    ∀ x ∈ X, α, β ∈ K
    α(x + y) = αx + αy               ∀ x, y ∈ X, α ∈ K
    (α + β)x = αx + βx               ∀ x ∈ X, α, β ∈ K
The meaning of these axioms is most easily grasped by thinking in terms of a simple, particular
vector space, such as R3 .
Canonical examples of vector spaces are the Euclidean spaces R^n and the space C[0, 1] of continuous functions defined on the interval [0, 1]: adding any two such functions clearly defines a new continuous function on the same interval. Of course, there is nothing special about the interval [0, 1]; it may be replaced by any other, even infinite, interval. More generally, the space L^2(S) of square-integrable¹ functions on a subset S of R^n forms a linear space.
The addition of vectors is straightforward in all these example spaces. For example, in X = C[0, 1]
the addition of two vectors is evaluated pointwise in an obvious way: (x + y)(s) = x(s) + y(s),
where s ∈ [0, 1] and x, y ∈ X. Multiplication by scalars is done in an equally simple, pointwise
fashion. These rules apply for all the function spaces that we deal with in this thesis.
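As a small illustration of these pointwise rules (an assumed example, not taken from the text), the following Python snippet treats functions as vectors and implements addition and scalar multiplication in C[0, 1]:

```python
import math

# Functions are the "vectors"; addition and scalar multiplication are pointwise.
def add(x, y):
    """Pointwise sum of two functions: (x + y)(s) = x(s) + y(s)."""
    return lambda s: x(s) + y(s)

def scale(alpha, x):
    """Pointwise scalar multiple: (alpha * x)(s) = alpha * x(s)."""
    return lambda s: alpha * x(s)

f = math.sin
g = math.cos
h = add(scale(2.0, f), g)   # the function s -> 2*sin(s) + cos(s)
print(h(0.5))
```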
A special vector space that we use frequently is the space X(S × Ω) of scalar functions defined on
the Cartesian product of S, a set of 2D surfaces of a three-dimensional environment, and the set Ω of
directions; the value u(x, ω) of such a function u at point x and direction ω will denote either the light leaving point x towards direction ω or the light incident on x from direction ω.
2.1.1 Linear Independence and the Dimension of a Vector Space
A set {x_i}_{i=1}^n ⊂ X of vectors is said to be linearly independent if Σ_{i=1}^n α_i x_i = 0 implies α_i = 0 for all i, where α_i ∈ R. Put differently, none of the x_i may be written as a linear combination of the others. If this does not hold, i.e., there exist coefficients {α_i}_{i=1}^n, not all zero, such that Σ_{i=1}^n α_i x_i = 0, the set {x_i} is said to be linearly dependent. This implies that at least some of the x_i may be written as a linear combination of the others.
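Numerically, linear (in)dependence of vectors in R^n can be checked from the rank of the matrix having the vectors as columns; the following small NumPy example (with made-up vectors) illustrates this:

```python
import numpy as np

# Vectors in R^4 are linearly independent exactly when the matrix having
# them as columns has full column rank.
x1 = np.array([1.0, 0.0, 2.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0, 3.0])
x3 = x1 + 2.0 * x2               # deliberately a linear combination of x1 and x2

A = np.column_stack([x1, x2, x3])
print(np.linalg.matrix_rank(A))  # 2 < 3, so {x1, x2, x3} is linearly dependent
```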
If a linear space X contains sets of n vectors that are linearly independent, but all sets of n + 1
vectors are linearly dependent, the space is said to have dimension n. The dimension is denoted by
dim X.
Not all spaces have a finite dimension. For instance, the space C[0, 1] has infinite dimension; this
is easily seen by the following construction. Let Xh be the subspace of C[0, 1] whose elements are
piecewise linear functions with node points spaced a distance h apart. It is clear that each x ∈ Xh
is uniquely described by giving its values at the node points, and that the dimension of Xh is the
number of node points. Now, let h approach 0. Then the dimension of Xh grows without bound,
but no matter how finely the interval [0, 1] is sampled, we may always find a function that belongs
to C[0, 1] but does not belong to Xh . Thus C[0, 1] has infinite dimension. All the non-discretized
function spaces encountered in this thesis are infinite-dimensional.
¹ i.e., ∫_S f(x)^2 dx is finite.
2.1.2 Subspaces and Bases
A subset of vectors in a vector space X is said to form a subspace if the axioms of the vector
space are fulfilled when only the elements of the subset are considered. For example, the subset
{x ∈ C[0, 1] | x(0) = x(1) = 0} of the functions in C[0, 1] whose values at the endpoints are zero
forms a subspace; if we add two elements of the subset, the result is clearly guaranteed to lie in the
same subset, and the rest of the axioms are verified just as easily.
If there exists a set of vectors {ϕ_i}_{i=1}^n ⊂ X, where n may be ∞, such that all vectors of X may be written as a linear combination

(2.1)    x = Σ_{i=1}^n α_i ϕ_i

with uniquely determined coefficients α_i, the set {ϕ_i}_{i=1}^n is said to form a basis of X. The number of vectors in the basis is always the same as the dimension of the space; it may be infinite as well.
There exists a basis for all vector spaces. For example, the canonical basis vectors

    e_1 = (1, 0, 0, . . . , 0)
    e_2 = (0, 1, 0, . . . , 0)
    e_3 = (0, 0, 1, . . . , 0)
    ...
    e_n = (0, 0, 0, . . . , 1)

form a basis for the Euclidean space R^n. There can exist more than one basis for a linear space. E.g., in the plane R^2, the vectors [1/√2, −1/√2] and [1/√2, 1/√2] form a basis just as well as the canonical basis vectors. In general, any linearly independent set of n linear combinations of the n basis vectors of an n-dimensional space forms a new basis for that same space.
The usual Fourier series representation of a function (which we use as an example later in 2.2.1) may be seen as expressing the function in terms of the basis functions e^{ikx} with k = 0, ±1, ±2, . . .; the dilated sines and cosines form a basis for some important spaces of functions. There also exist other bases for such function spaces. For instance, bases of wavelets [18] have many favorable properties.
All finite-dimensional vector spaces with dimension n are in a natural one-to-one correspondence with R^n, since the coefficients α_i in (2.1) may be taken to form an element of R^n. Conversely, each α ∈ R^n defines a unique element of X through (2.1).
The set of all linear combinations of a set {a, b, c, d, . . .} is denoted by span{a, b, c, d, . . .}.
2.2 Norms and Inner Products
This section discusses norms of vectors and inner products. The norm generalizes the notion of
length for abstract vector spaces, and the inner product generalizes the usual dot product of vectors
in Rn .
Without a norm, it makes no sense to speak of vectors being “close to” or “far from” each other.
Thus, the norm defines the concept of convergence. This is important for instance when designing
methods for solving equations in generic vector spaces.
The inner product, on the other hand, provides a powerful geometric structure to a vector space
through the concept of orthogonality. Orthogonality is a valuable tool in approximation; it can be
used for deriving linear systems that give good finite approximations to infinite-dimensional vectors.
Norms
Formally, the norm is a function ‖·‖ : X ↦ R on a vector space X that satisfies the following conditions:

    (n1)  ‖x‖ ≥ 0                     ∀ x ∈ X
    (n2)  ‖x‖ = 0 ⇔ x = 0
    (n3)  ‖x + y‖ ≤ ‖x‖ + ‖y‖         ∀ x, y ∈ X
    (n4)  ‖αx‖ = |α| ‖x‖              ∀ x ∈ X, α ∈ K

A vector space equipped with a norm is called a normed space. There are usually many possibilities for defining a norm for a given vector space. For example, in R^n, the usual Euclidean length ‖x‖ = (Σ_{i=1}^n x_i^2)^{1/2} clearly defines a norm; another norm is given by max |x_i| with 1 ≤ i ≤ n. Analogously, function spaces are most often equipped with norms defined by integrals (as opposed to sums) or by taking the supremum of the function over its domain (as opposed to taking the maximum). For instance, the ubiquitous L^2 norm, denoted by ‖·‖_2, is defined by

    ‖x‖_2 = ( ∫_S |x(s)|^2 ds )^{1/2},

where S is the set on which the functions are defined. Another important function norm is ‖·‖_1, which is defined by

    ‖x‖_1 = ∫_S |x(s)| ds.
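For illustration, these norms can be approximated numerically by quadrature; the following NumPy sketch (the function and grid are arbitrary assumptions) computes the L^2, L^1 and supremum norms of a sampled function on [0, 1]:

```python
import numpy as np

# Approximate the L2 and L1 norms of a function on S = [0, 1] by quadrature.
s = np.linspace(0.0, 1.0, 1001)
x = np.sin(2.0 * np.pi * s)

l2_norm = np.sqrt(np.trapz(x**2, s))   # ||x||_2 = (∫ |x(s)|^2 ds)^(1/2) ≈ 1/sqrt(2)
l1_norm = np.trapz(np.abs(x), s)       # ||x||_1 = ∫ |x(s)| ds ≈ 2/π
max_norm = np.max(np.abs(x))           # the supremum norm

print(l2_norm, l1_norm, max_norm)
```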
Inner Products
An inner product on a real vector space X is a bilinear function ⟨·, ·⟩ : X × X ↦ R, such that the following axioms are satisfied:

    (ip1)  ⟨x, y⟩ = ⟨y, x⟩                           ∀ x, y ∈ X
    (ip2)  ⟨αx + βy, z⟩ = α⟨x, z⟩ + β⟨y, z⟩          ∀ x, y, z ∈ X, α, β ∈ R
    (ip3)  ⟨x, x⟩ ≥ 0                                ∀ x ∈ X
    (ip4)  ⟨x, x⟩ = 0 ⇔ x = 0

A vector space equipped with an inner product is called an inner product space. The number

    ‖x‖ := √⟨x, x⟩

defines a norm on an inner product space.

The simplest example of an inner product is the dot product ⟨x, y⟩ = Σ_{i=1}^n x_i y_i of two vectors x and y in R^n. In spaces of functions, inner products are usually integrals of products of the pointwise values of the functions:

    ⟨x, y⟩ = ∫_S x(s) y(s) ds.

It is easy to verify that this inner product satisfies the above axioms. The norm induced by the inner product is the same as the Euclidean norm on R^n and the L^2 norm on spaces of functions.
2.2.1 Orthogonality and Best Approximation
Two elements x and y of an inner product space are said to be orthogonal if hx, yi = 0. This is a
powerful concept; it extends the idea of perpendicular vectors in Euclidean spaces to more abstract
function spaces. Orthogonality has a strong connection with best approximation, as we shall see.
For instance, the Fourier series on [−π, π] are based on the fact that when k, l ∈ Z, the functions
sin(kx) and cos(lx) are always orthogonal for all k, l, and sin(lx) and sin(kx) (and respectively cos(lx) and cos(kx)) are orthogonal when k ≠ l.
Let us suppose we wish to approximate a vector x ∈ X by a linear combination of some n predetermined elements {φ_i}_{i=1}^n of X, i.e., we wish to find the element of the subspace span{φ_1, φ_2, . . .} that best approximates x. Here n does not need to be as large as dim X. In other words, we seek the approximation x̂ in the form

    x̂ = Σ_{i=1}^n α_i φ_i,

and we wish that the difference ‖x − x̂‖ is small; our task is to determine the coefficients α_i. It turns out that if the vectors {φ_i}_{i=1}^n form an orthonormal set, i.e., ‖φ_i‖ = 1 for all i and ⟨φ_i, φ_j⟩ = 0 whenever i ≠ j, the coefficients for best approximation are given by

    α_i = ⟨x, φ_i⟩,

so that the best approximation is

(2.2)    x̂ = Σ_{i=1}^n α_i φ_i = Σ_{i=1}^n ⟨x, φ_i⟩ φ_i.

The proof is not difficult [47, p. 82].
The connection to Fourier series is the following. Taking

    φ_0(x) = 1/√(2π),    φ_{2k−1}(x) = cos(kx)/√π    and    φ_{2k}(x) = sin(kx)/√π,

with k = 1, 2, . . ., we get the usual Fourier series representation

    f̂ = a_0/√(2π) + Σ_{k=1}^∞ [ (a_k/√π) cos(kx) + (b_k/√π) sin(kx) ]

of a function f defined on [−π, π], with a_0 = ⟨f, φ_0⟩ = ∫_{−π}^{π} f(x)/√(2π) dx, a_k = ⟨f, φ_{2k−1}⟩ = (1/√π) ∫_{−π}^{π} f(x) cos(kx) dx and b_k = ⟨f, φ_{2k}⟩ = (1/√π) ∫_{−π}^{π} f(x) sin(kx) dx. The Fourier approximation does not necessarily converge pointwise to the original function, but the L^2 norm of the approximation error converges to zero provided that f satisfies some rather mild conditions.
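The following NumPy sketch illustrates equation (2.2) in this setting: it projects a sampled function onto a few of the orthonormal Fourier basis functions above by computing the coefficients α_i = ⟨f, φ_i⟩ with a simple quadrature rule (the choice of f and the grid resolution are assumptions of the example):

```python
import numpy as np

# Best approximation in an orthonormal basis: project f onto the first few
# Fourier basis functions on [-pi, pi] using alpha_i = <f, phi_i>, as in (2.2).
x = np.linspace(-np.pi, np.pi, 4001)

def inner(u, v):
    """Inner product <u, v> = ∫ u(x) v(x) dx, approximated by the trapezoid rule."""
    return np.trapz(u * v, x)

f = np.abs(x)                                 # the function to approximate

# Orthonormal basis functions phi_0, phi_1, ..., as in the text.
basis = [np.full_like(x, 1.0 / np.sqrt(2.0 * np.pi))]
for k in range(1, 6):
    basis.append(np.cos(k * x) / np.sqrt(np.pi))
    basis.append(np.sin(k * x) / np.sqrt(np.pi))

coeffs = [inner(f, phi) for phi in basis]     # alpha_i = <f, phi_i>
f_hat = sum(a * phi for a, phi in zip(coeffs, basis))

print(np.sqrt(inner(f - f_hat, f - f_hat)))   # L2 norm of the approximation error
```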
2.2.2 Orthogonal Projections and Dual Bases
It is also possible to approximate vectors by sets which are not orthonormal. Suppose the set of vectors {φ_i}_{i=1}^n is not pairwise orthogonal, but it is linearly independent. The orthogonal projection, here denoted by x̂, of a vector x onto the subspace span{φ_1, φ_2, . . .} is characterized by

    ⟨x − x̂, φ_1⟩ = 0
    ⟨x − x̂, φ_2⟩ = 0
    ...
    ⟨x − x̂, φ_n⟩ = 0,

or in other words, the approximation error x − x̂ is required to be orthogonal to all φ_i. It happens that the orthogonal projection x̂ is the best approximation to x in span{φ_1, φ_2, . . .} in the sense of the norm induced by the inner product.
Writing x̂ = Σ_{i=1}^n α_i φ_i and substituting into the above set of equations, we get

    ⟨ x − Σ_{i=1}^n α_i φ_i, φ_j ⟩ = 0,    ∀ j = 1, . . . , n
    ⇔    Σ_{i=1}^n α_i ⟨φ_i, φ_j⟩ = ⟨x, φ_j⟩,    j = 1, . . . , n,

i.e., the above is actually a linear system

(2.3)    Gα = e    ⇔    α = G^{-1} e,

with G_{ji} = ⟨φ_i, φ_j⟩ and e_j = ⟨x, φ_j⟩. The matrix G is called the Gram matrix for the set {φ_i}, and it is nonsingular if the set is linearly independent [44]. The coefficients that solve the above linear system produce the best approximation to x by linear combinations of {φ_i}. Note that if the basis is orthonormal, this reduces directly to (2.2), since then G = I.
Also sets of vectors that are linearly dependent, i.e., redundant, may be used for approximation. We
shall not need this, but refer the reader elsewhere [51, p. 125].
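As a concrete illustration of the Gram-matrix system (2.3), the following NumPy sketch projects a function onto a non-orthogonal monomial set (the basis and the target function are arbitrary choices made for the example):

```python
import numpy as np

# Orthogonal projection onto a non-orthogonal but linearly independent set
# {phi_i}, via the Gram matrix system G alpha = e of equation (2.3).
s = np.linspace(0.0, 1.0, 2001)

def inner(u, v):
    return np.trapz(u * v, s)

# Monomials 1, s, s^2 on [0, 1]: linearly independent but far from orthogonal.
phis = [np.ones_like(s), s, s**2]
x = np.exp(s)                                   # the vector (function) to project

G = np.array([[inner(pi, pj) for pi in phis] for pj in phis])  # G_ji = <phi_i, phi_j>
e = np.array([inner(x, pj) for pj in phis])                    # e_j  = <x, phi_j>
alpha = np.linalg.solve(G, e)                                  # best-approximation coefficients

x_hat = sum(a * p for a, p in zip(alpha, phis))
print(np.sqrt(inner(x - x_hat, x - x_hat)))     # small L2 error
```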
Dual Bases
Let us consider the finite-dimensional subspace span{φ_1, φ_2, . . . , φ_n} of X; the φ_i form a basis of this space. As we just saw, the projection of an arbitrary vector of X into this subspace requires the solution of a linear system. However, once the inverse of the Gram matrix is computed, it can be used to construct a dual basis {φ̃_i}_{i=1}^n, so that the inner product ⟨x, φ̃_i⟩ gives the coordinate α_i of the vector x in the basis {φ_i}_{i=1}^n. To see this, consider the equation for a single α_j from (2.3):

    α_j = Σ_{i=1}^n G^{-1}_{ji} ⟨φ_i, x⟩.

Here G^{-1}_{ji} are elements of G^{-1}. Because of the bilinearity of the inner product, the sum may be moved inside to yield

    α_j = ⟨ Σ_{i=1}^n G^{-1}_{ji} φ_i, x ⟩.

Now, by taking

(2.4)    φ̃_j := Σ_{i=1}^n G^{-1}_{ji} φ_i,

the above observation follows.

The dual basis has the property ⟨φ_i, φ̃_j⟩ = δ_j^i, where the Kronecker symbol δ_j^i is defined by

    δ_j^i = 1 if i = j, and 0 otherwise.

This property of the dual basis follows directly by taking x = φ_i above and using the definition of the inverse of the Gram matrix. This is to be expected – the coordinate of φ_i in span{φ_1, . . . , φ_n} is 1 at index i and zero elsewhere.
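Continuing the previous sketch, the dual basis of equation (2.4) can be constructed explicitly and the property ⟨φ_i, φ̃_j⟩ = δ_j^i verified numerically (again an assumed example, not from the text):

```python
import numpy as np

# Build the dual basis of equation (2.4) and check <phi_i, dual_phi_j> = delta_ij.
s = np.linspace(0.0, 1.0, 2001)
inner = lambda u, v: np.trapz(u * v, s)

phis = [np.ones_like(s), s, s**2]
G = np.array([[inner(pi, pj) for pi in phis] for pj in phis])
G_inv = np.linalg.inv(G)

# dual_phi_j = sum_i Ginv[j, i] * phi_i
duals = [sum(G_inv[j, i] * phis[i] for i in range(len(phis))) for j in range(len(phis))]

check = np.array([[inner(phis[i], duals[j]) for i in range(3)] for j in range(3)])
print(np.round(check, 6))   # approximately the identity matrix
```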
2.3 Linear Operators
Linear operators are linear mappings between vector spaces. They play an essential role in describing
the behaviour of light: Special kinds of linear operators will be used for describing light transport.
Here we consider mappings T : X ↦ Y that map elements of a vector space X into another vector space Y (we do not rule out Y = X). Such maps are called operators. The application of T to x
is denoted by T x. If we are dealing with function spaces, the value of T x at point z is denoted by
(T x)(z), where x ∈ X, T x ∈ Y and z ∈ S, where S is the set of points over which the function
space Y is defined.
An operator is said to be linear if T (αx + βy) = αT x + βT y holds for all x, y ∈ X and α, β ∈ R.
Interestingly, all linear operators between spaces X and Y form themselves a vector space.
A linear operator T is said to be bounded if there exists a constant C such that ‖T x‖_Y ≤ C ‖x‖_X for all x ∈ X, where ‖·‖_X denotes the norm in X, and ‖·‖_Y the one in Y. The smallest such constant is called the norm of T, and it is denoted by ‖T‖. Clearly,

    ‖T‖ = sup_{x∈X, x≠0} ‖T x‖ / ‖x‖ = sup_{x∈X, ‖x‖=1} ‖T x‖,
where sup denotes the least upper bound. The norms of operators play an important role in the
solvability of equations involving operators; this will be the topic of the next section. A bounded
linear operator is always continuous, which means roughly that a small change in the argument will
result in a small change in the result.
A familiar example of a linear operator is an m-by-n matrix that maps vectors from Rn to vectors
in Rm . An interesting result is that all linear operators between any two finite-dimensional linear
spaces may be represented by matrices by virtue of the one-to-one correspondence between basis
coefficients and the Rn spaces (see 2.1.2 above).
Another well-known type of linear operator is the integral operator that operates on spaces of functions. An integral operator K from one space of functions into another is defined through

(2.5)    (K x)(s) = ∫_S x(t) k(s, t) dt,

where k(s, t) is called the kernel of the operator. An integral operator whose kernel satisfies

    max_{s∈S} ∫_S |k(s, t)| dt < ∞

is bounded. The equations that describe the transport and reflection of light involve integral operators of the above kind.
Unbounded linear operators also exist [44], but we shall not encounter them and will not discuss them here.
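The following sketch shows how an integral operator of the form (2.5) turns into an ordinary matrix once the integral is replaced by a quadrature rule; the kernel and grid are arbitrary assumptions made for the example:

```python
import numpy as np

# Discretizing (K x)(s) = ∫ k(s, t) x(t) dt on S = [0, 1]: replacing the integral
# by a quadrature rule on a grid of points yields a matrix.
n = 200
t = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)                 # simple quadrature weights

def kernel(s, tt):
    return np.exp(-np.abs(s - tt))      # an arbitrary smooth kernel k(s, t)

# K_matrix[i, j] approximates k(s_i, t_j) * dt, so K_matrix @ x ≈ (K x)(s_i).
K_matrix = kernel(t[:, None], t[None, :]) * w[None, :]

x = np.sin(np.pi * t)
Kx = K_matrix @ x                       # pointwise values of K x on the grid
print(Kx[:5])
```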
Linear Functionals
A linear functional is a specific type of linear operator. It maps from a vector space X into the reals
R. We will make use of linear functionals by making measurements of our lighting solutions later
on.
As we just saw, the kernels of integral operators are functions of two variables. Since a linear
functional only has one output, a single real number, the functional is correspondingly represented
by a function of one argument defined on the set S, and the value of a functional w on a vector x is
found by integrating the product w(s) x(s) over S. Since this integration is similar to the definition
of an inner product, we denote the result by ⟨x, w⟩.
2.4 Operator Equations
A linear equation posed in a linear space is called a linear operator equation. Such equations are of the form

    K u = e,

where K : X ↦ Y is a linear operator, e ∈ Y and u is the unknown vector. The equations of global illumination (which we will derive in Chapter 3) are of the form

(2.6)    (I − T) u = e    ⇔    u = T u + e,

where I is the identity map on X and T : X ↦ X is an integral operator. Equations of the form
(2.6) are known as Fredholm equations of the second kind. The general theory of the solvability of
such equations is somewhat involved [28]. Fortunately the operators of global illumination are so
well-behaved that we do not need the most general theory.
Solutions to equations such as the above cannot usually be found exactly due to the infinite dimensionality of the function spaces on which the equations are posed. The remainder of this section
will first present the Neumann series, a general recipe for solving operator equations of the type (2.6) with ‖T‖ < 1, and then move on to describe numerical methods for obtaining approximate
solutions. These numerical methods will be applied to the equations of global illumination in later
chapters.
2.4.1 The Neumann Series
It turns out that the solution of equations of the form (2.6) is in theory easy if ‖T‖ < 1, which will be the case in global illumination. First, due to the definition of the norm of an operator, and ‖T‖ < 1, we notice that

    ‖T^n‖ ≤ ‖T‖ ‖T^{n−1}‖ ≤ ‖T‖^2 ‖T^{n−2}‖ ≤ . . . ≤ ‖T‖^n → 0

as n → ∞. It follows that the so-called Neumann series gives the inverse of the operator (I − T):

    (I − T)^{-1} = Σ_{i=0}^∞ T^i = I + T + T^2 + . . . .
To prove this, we show that the series is both a left and a right inverse of the operator I − T. This is seen from

    (I − T) ( Σ_{i=0}^∞ T^i ) = (I − T)(I + T + T^2 + T^3 + . . .) = (I − T + T − T^2 + T^2 − . . .) = I,

and similarly

    ( Σ_{i=0}^∞ T^i ) (I − T) = (I − T + T − T^2 + T^2 − . . .) = I.

The alternating terms in the sum cancel each other out successively. Each partial sum (I − T) Σ_{i=0}^n T^i of the series leaves the “tail” −T^{n+1}, but since ‖T‖ < 1, the leftover diminishes as n grows.²
Put together, the above means that the solution of the equation (2.6) is given by

    u = e + T e + T^2 e + . . . ,

where the contribution of each higher power of T diminishes as the exponent grows. Evaluating partial sums of this series is a practical algorithm for the approximate solution of operator equations; one just evaluates some sufficient number of terms. In the case of illumination computations this formula has an intuitive meaning: the equilibrium distribution of light u is the sum of direct lighting e, once-reflected light T e, twice-reflected light T^2 e, etc.
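The following NumPy sketch demonstrates this recipe on a small matrix with norm less than one, standing in for a discretized transport operator (the matrix and right-hand side are random placeholders):

```python
import numpy as np

# Solve u = T u + e by the Neumann series u = e + T e + T^2 e + ...,
# using a small matrix T with ||T|| < 1 as a stand-in for the transport operator.
rng = np.random.default_rng(0)
n = 50
T = rng.random((n, n))
T *= 0.8 / np.linalg.norm(T, 2)         # scale so that the operator norm is 0.8 < 1
e = rng.random(n)

u = e.copy()
term = e.copy()
for bounce in range(40):                # e + T e + T^2 e + ... ("bounces of light")
    term = T @ term
    u += term

u_exact = np.linalg.solve(np.eye(n) - T, e)
print(np.max(np.abs(u - u_exact)))      # the partial sums approach (I - T)^(-1) e
```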
2.4.2 Adjoint Operators and Adjoint Equations
The adjoint operator T ∗ of a linear operator T has an important role in designing efficient global
illumination methods, as it provides an output-sensitive way of evaluating values of linear functionals
on the lighting solution. The significance of linear functionals and adjoint operators is perhaps best
demonstrated by the fact that all illumination algorithms that trace rays through the camera into the
scene are based on measuring the lighting by linear functionals.
For instance, the function that describes the distribution of light in the scene does not directly answer
questions like what intensity should be assigned to each pixel, but in order to perform proper antialiasing we must measure some weighted average radiance for each pixel. Computing such an
average is an example of a linear measurement that can be formulated using a linear functional. In
practical applications, one forms this weighted average from point samples taken from the pixel.
Anti-aliasing methods [16, 13] deal with choosing the proper locations and weights for the samples.
² Technically, the convergence of the Neumann series to the inverse of (I − T) in operator norm requires that the space X is a complete normed space, i.e., a Banach space. Since the significance of the leftover −T^{n+1} is guaranteed to diminish as more terms are added, this does not cause problems; in practice only a finite number of terms are computed anyway.
Adjoint Operators
For each linear operator T : X ↦ Y between inner product spaces X and Y there exists a linear operator T* : Y ↦ X that satisfies

(2.7)    ⟨T x, y⟩_Y = ⟨x, T* y⟩_X,

where ⟨·, ·⟩_X and ⟨·, ·⟩_Y denote the inner products in X and Y, respectively. T* is called the adjoint of T. It happens that ‖T*‖ = ‖T‖.
It is particularly easy to determine the adjoint operator for integral operators of the form (2.5): if the integral operator T has kernel k(s, t), its adjoint T* is an integral operator with kernel k(t, s).
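In the discretized setting this kernel transposition is literally a matrix transpose; the small sketch below (with an arbitrarily chosen, non-symmetric kernel) checks the defining property (2.7) numerically:

```python
import numpy as np

# For the discretized integral operator, the adjoint corresponds to swapping the
# kernel arguments, and <T x, y> = <x, T* y> holds in the discrete inner product.
n = 300
t = np.linspace(0.0, 1.0, n)
w = 1.0 / n                                   # uniform quadrature weight

k = np.exp(-3.0 * (t[:, None] - t[None, :])**2) * (1.0 + t[:, None])  # k(s, t), non-symmetric
T = k * w                                     # discrete operator with kernel k(s, t)
T_adj = k.T * w                               # discrete adjoint with kernel k(t, s)

x = np.sin(np.pi * t)
y = t**2
inner = lambda u, v: np.sum(u * v) * w        # discrete inner product on [0, 1]

print(inner(T @ x, y), inner(x, T_adj @ y))   # the two numbers agree
```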
Adjoint Equations
This section describes how a linear functional w is applied to the solution u of the equation u = T u + e by using adjoint operators; in other words, we wish to evaluate ⟨u, w⟩. With foresight, we introduce the adjoint equation

(2.8)    w* = T* w* + w

with a new unknown functional w*. Now, due to (2.8) and since u = T u + e, the measurement ⟨u, w⟩ is equivalent to

(2.9)    ⟨u, w⟩ = ⟨u, w* − T* w*⟩
                = ⟨u, w*⟩ − ⟨u, T* w*⟩
                = ⟨u, w*⟩ − ⟨T u, w*⟩
                = ⟨u − T u, w*⟩
                = ⟨e, w*⟩,

i.e., the measurement of the solution u by w is the same as the measurement of the right-hand side e by the solution w* of the adjoint equation.
To perform this measurement by evaluating the last inner product, we need, in principle, to solve (2.8) for w*, given w. Fortunately, practical computation is simplified by the fact that we are not interested in w* itself, but only in the measurement ⟨e, w*⟩. As we show next, an approximation of the measurement can be found by evaluating a Neumann series for w*. This will only involve computation at points of S that contribute to the value of ⟨e, w*⟩, and efficient approximations can be made so that small contributions are handled with less precision; this is the source of practical efficiency.
Solving the Adjoint Equation. We now turn to present a method for evaluating (2.9). As we saw in the previous section, the equation

    u = T u + e

may in principle be solved by evaluating the Neumann series

    u = e + T e + T^2 e + T^3 e + . . . .

Now we wish to measure u with a linear functional w, i.e., we wish to compute ⟨u, w⟩. This yields

    ⟨u, w⟩ = ⟨e, w⟩ + ⟨T e, w⟩ + ⟨T T e, w⟩ + ⟨T T T e, w⟩ + . . . ,
but recalling that ⟨T e, w⟩ = ⟨e, T* w⟩ due to the definition of the adjoint operator, this is the same as

(2.10)    ⟨u, w⟩ = ⟨e, w⟩ + ⟨e, T* w⟩ + ⟨e, T* T* w⟩ + ⟨e, T* T* T* w⟩ + . . .
                 = ⟨e, (w + T* w + T* T* w + T* T* T* w + . . .)⟩
                 = ⟨e, w*⟩.

The expression in the parentheses on the right-hand side of the second equation is the Neumann series that solves the adjoint equation (2.8). We will present an outline of a stochastic method based on (2.10) in Section 2.5.3.
We also note that the terms in the series (2.10) may be written in many ways using the definition
of the adjoint operator. For instance, the term ⟨e, T* T* T* w⟩ may be computed just as well as ⟨T e, T* T* w⟩, ⟨T T e, T* w⟩ or ⟨T T T e, w⟩. Global illumination methods called bidirectional path
tracers take advantage of this.
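A finite-dimensional analogue makes the identity ⟨u, w⟩ = ⟨e, w*⟩ easy to verify: with matrices, the adjoint is simply the transpose, and solving the adjoint system gives the same measurement. The sketch below uses random placeholder data:

```python
import numpy as np

# Verify <u, w> = <e, w*>: solving the adjoint equation w* = T^T w* + w gives
# the same measurement as solving u = T u + e and then measuring u with w.
rng = np.random.default_rng(1)
n = 40
T = rng.random((n, n))
T *= 0.7 / np.linalg.norm(T, 2)               # ensure ||T|| < 1
e = rng.random(n)                             # "emission" (right-hand side)
w = rng.random(n)                             # the linear functional / measurement

u = np.linalg.solve(np.eye(n) - T, e)         # u = T u + e
w_star = np.linalg.solve(np.eye(n) - T.T, w)  # w* = T* w* + w (adjoint = transpose)

print(np.dot(u, w), np.dot(e, w_star))        # the two measurements coincide
```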
2.5 Numerical Methods
Due to the fact that equations like (2.6) are posed in infinite-dimensional spaces of functions, solving them exactly is most often impossible, and thus numerical methods must be employed. Most
numerical methods involve searching for the solution u in some finite-dimensional subspace of X.
Methods that work in this way are called finite element methods. The following sections briefly
describe the Galerkin method and the point collocation method.
To find an approximate solution u_h to (2.6), we first set up a finite-dimensional subspace X_h ⊂ X, and choose a basis {ϕ_i}_{i=1}^n for it. Our goal is to find u_h ∈ X_h which is close to the accurate solution u in some sense; this will result in a linear system for the coefficients α_j in the expansion u_h = Σ_{j=1}^n α_j ϕ_j. The general approach is to force the residual (or “leftover”) (I − T) u_h − e to be small – an exact solution would make it identically zero, but this is unattainable. The two methods, the collocation method and Galerkin’s method, each work with the residual, but in different ways.
There exist also numerical methods that do not search for the approximate solution as a linear combination of basis functions. Methods like this are based on approximating values of measurements of
the solution by averaging pointwise stochastic estimates of the Neumann series (2.10). Methods for
solving the equations of global illumination in this way are generally called path tracing methods. We give a short introduction to these methods at the end of this section.
2.5.1 The Point Collocation Method
The point collocation method is the simplest way of discretizing operator equations in spaces of functions – we simply require that the equation is satisfied pointwise at a finite number of points which we call collocation points. If the subspace X_h has dimension n, we must prescribe n distinct collocation points as well.

We first write the continuous equation from (2.6) with the integral operator explicitly expanded:

(2.11)    (I − T) u = e    ⇔    ((I − T) u)(s) = u(s) − ∫_S k(s, t) u(t) dt = e(s),
where k(s, t) is the kernel of T. We begin discretization by writing the approximation uh as the basis function expansion uh(s) = Σj αj ϕj(s), where s ∈ S. Then, pick n distinct collocation points {s1, . . . , sn} ⊂ S and require that (2.11) holds only at the node points si; for each i, set

((I − T)uh)(si) = uh(si) − ∫S k(si, t) uh(t) dt
                = Σj αj ϕj(si) − ∫S k(si, t) Σj αj ϕj(t) dt
                = Σj αj ϕj(si) − Σj αj ∫S k(si, t) ϕj(t) dt
                = Σj αj [ϕj(si) − (Kϕj)(si)] = e(si).
Now, writing

Vij = ϕj(si),    Mij = ∫S k(si, t) ϕj(t) dt = (Kϕj)(si),    and    ei = e(si),

we are left with the linear system

(2.12)    (V − M)α = e,

whose solution α = (V − M)⁻¹e gives the sought-after approximation coefficients.
The collocation method is equivalent to forcing the residual (I − T)uh − e to be exactly zero at the
collocation points {si }. This is easily seen from the above derivation. The collocation points must
be chosen so that V is non-singular [28, p. 82]. In practice this is not hard.
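A minimal numerical sketch of the collocation method, assuming NumPy and an illustrative one-dimensional model problem u(s) − ∫₀¹ k(s, t) u(t) dt = e(s); the kernel, emission, basis functions and quadrature grid below are made-up choices for the example, not part of the thesis:

    import numpy as np

    # Model problem on S = [0, 1]: u(s) - \int_0^1 k(s, t) u(t) dt = e(s).
    k = lambda s, t: 0.5 * np.exp(-(s - t) ** 2)   # smooth kernel
    e = lambda s: np.sin(np.pi * s)                # "emission" term

    n = 6
    phis = [lambda s, j=j: s ** j for j in range(n)]   # monomial basis of X_h
    s_col = np.linspace(0.0, 1.0, n)                   # collocation points
    t = np.linspace(0.0, 1.0, 2001)                    # quadrature grid (|S| = 1)

    V = np.array([[phi(si) for phi in phis] for si in s_col])
    # M_ij = (K phi_j)(s_i) = \int k(s_i, t) phi_j(t) dt, approximated by a
    # mean over the dense grid (the interval has unit length).
    M = np.array([[np.mean(k(si, t) * phi(t)) for phi in phis] for si in s_col])

    alpha = np.linalg.solve(V - M, e(s_col))           # (V - M) alpha = e
    u_h = lambda s: sum(a * phi(s) for a, phi in zip(alpha, phis))

    # Residual of the continuous equation at a point that is not a
    # collocation point; it is small but not exactly zero.
    s0 = 0.37
    print("residual:", u_h(s0) - np.mean(k(s0, t) * u_h(t)) - e(s0))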
2.5.2 The Galerkin Method
The Galerkin method for solving (2.6) relies on the idea of orthogonality. As we saw in Section
2.2.1, it is possible to compute approximations to elements of X as a linear combination of a finite
number of basis functions by solving a linear system. Here we show that orthogonality may be used
for finding approximations to the solution of (2.6).
As with the collocation method, our task is to choose the coefficients {αj} for the expansion uh = Σj αj ϕj. The Galerkin method accomplishes this by requiring that the residual (I − T)uh − e be orthogonal to the basis functions {ϕi}: for each i, we set
⟨(I − T)uh − e, ϕi⟩ = 0.

Substituting the definition of uh yields

(2.13)    Σj αj ⟨(I − T)ϕj, ϕi⟩ = ⟨e, ϕi⟩

(2.14)    ⇔    (G − M)α = e,

where Gij = ⟨ϕi, ϕj⟩, Mij = ⟨T ϕj, ϕi⟩ and ei = ⟨e, ϕi⟩. It is noteworthy that using an orthonormal basis leads to a simplified situation, since then G = I. If an orthonormal basis is unavailable,
but we know the dual basis {ϕ̃i }, the problem may still be transformed into an easier one, as we now
turn to show.
A Variant of the Galerkin Method
In the above derivation of the Galerkin method we obtained a linear system of equations by forcing
the residual (I − T )uh − e to be orthogonal to the basis functions {ϕi }. An equivalent formulation
results from requiring orthogonality with respect to the dual basis {ϕ̃1, . . . , ϕ̃n} (see Section 2.2.2):
(2.15)    ⟨(I − T)uh − e, ϕ̃i⟩ = 0
          ⇔    Σj αj ⟨(I − T)ϕj, ϕ̃i⟩ = ⟨e, ϕ̃i⟩
          ⇔    (I − M′)α = e′,

where M′ij = ⟨T ϕj, ϕ̃i⟩ and e′i = ⟨e, ϕ̃i⟩. This is again a linear system whose solution gives the coefficients in the basis {ϕi} for the approximate solution. It happens that the coefficients α that solve
(2.15) are the same as those coefficients that solve (2.14). Appendix A.1 presents a straightforward
proof for this result.
This variant of the Galerkin method is favorable since the coefficients are easier to evaluate. Also, the resulting linear system is of the form (I − M′)α = e′, which can be solved by the Neumann series if ‖M′‖ < 1.
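For comparison, here is a sketch of the plain Galerkin system (2.13)–(2.14) for the same kind of one-dimensional model problem, using an orthonormal cosine basis so that G = I; the kernel, basis and quadrature below are again illustrative choices rather than anything prescribed by the thesis:

    import numpy as np

    k = lambda s, t: 0.5 * np.exp(-(s - t) ** 2)   # kernel of T
    e = lambda s: np.sin(np.pi * s)

    # Orthonormal basis on [0, 1]: 1, sqrt(2) cos(pi j s), ... so that G = I.
    n = 6
    phis = [lambda s: np.ones_like(s)] + \
           [lambda s, j=j: np.sqrt(2.0) * np.cos(np.pi * j * s) for j in range(1, n)]

    t = np.linspace(0.0, 1.0, 2001)               # quadrature grid (unit interval)
    B = np.array([phi(t) for phi in phis])        # basis sampled on the grid

    # (T phi_j) by quadrature, then M_ij = <T phi_j, phi_i> and b_i = <e, phi_i>.
    Tphi = np.array([[np.mean(k(tk, t) * B[j]) for tk in t] for j in range(n)])
    M = np.array([[np.mean(Tphi[j] * B[i]) for j in range(n)] for i in range(n)])
    b = np.array([np.mean(e(t) * B[i]) for i in range(n)])

    alpha = np.linalg.solve(np.eye(n) - M, b)     # (G - M) alpha = b with G = I
    print("Galerkin coefficients:", alpha)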
2.5.3 Stochastic Path Tracing Methods
This section gives a brief introduction to path tracing methods in the abstract context of integral
equations of the second kind. Path tracing methods do not try to approximate the solution u as a
whole as finite element methods do, but instead produce estimates of measurements of the solution
through (2.10), which we repeat here for convenience:
⟨u, w⟩ = ⟨e, w⟩ + ⟨e, T∗w⟩ + ⟨e, T∗T∗w⟩ + ⟨e, T∗T∗T∗w⟩ + . . .
Path tracing methods work by estimating the integrals that define the above inner products by random
point sampling. In particular, the expected value of the random variable defined by
(2.16)    (1/N) Σ_{i=1}^{N} f(si) / p(si),

where the si ∈ S are distributed according to the probability density function p, equals the value of the integral ∫S f(s) ds; see, e.g., [23, p. 57]. Estimation of the integral in this fashion is an example
of a so-called Monte Carlo method. To see how this applies to evaluating (2.10), let us discuss some
of the first terms.
The first inner product ⟨e, w⟩ is just a single, albeit often multi-dimensional, integral; its value may
be estimated by taking random samples si in S and averaging the samples e(si ) w(si ) according to
(2.16).
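A minimal sketch of this first term, assuming S = [0, 1], uniform sampling (p ≡ 1) and toy choices for e and w:

    import numpy as np

    rng = np.random.default_rng(1)

    e = lambda s: np.sin(np.pi * s)       # "emission"
    w = lambda s: s * s                   # measurement functional

    N = 100_000
    s = rng.uniform(0.0, 1.0, N)          # s_i ~ p with p(s) = 1 on [0, 1]
    estimate = np.mean(e(s) * w(s))       # (1/N) sum f(s_i) / p(s_i)

    t = np.linspace(0.0, 1.0, 20001)
    print("MC estimate:", estimate, " quadrature:", np.mean(e(t) * w(t)))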
The next term is the inner product of e and T∗w. Applying (2.16) to evaluating T∗w we get

(T∗w)(t) ≈ (1/N) Σ_{i=1}^{N} k(si, t) w(si) / p(si),
where the si are distributed according to p and k(s, t) is the kernel of T. Now, the integral ⟨e, T∗w⟩ = ∫S e(t) (T∗w)(t) dt may itself be estimated similarly by

⟨e, T∗w⟩ ≈ (1/N1) Σ_{j=1}^{N1} [ e(sj) · (1/N2) Σ_{i=1}^{N2} k(si, sj) w(si) / p2(si) ] / p1(sj).
Using a similar technique, the terms involving higher orders T∗T∗w, etc., may be estimated. It soon becomes apparent, though, that if many samples are taken in all the nested sums, the computation time grows in a combinatorial explosion. This is why one takes Ni = 1 for i > 1. This results in the estimator

⟨e, T∗w⟩ ≈ (1/N) Σ_{i=1}^{N} e(si2) k(si1, si2) w(si1) / (p1(si1) p2(si2)),
and analogously

⟨e, T∗T∗w⟩ ≈ (1/N) Σ_{i=1}^{N} e(si3) k(si2, si3) k(si1, si2) w(si1) / (p1(si1) p2(si2) p3(si3)),
etc. Here the notation sij indicates that the sample points s1, s2, s3 are drawn anew for each i. A practical algorithm results if we generate a series of random sequences of points {s1, s2, s3, . . .} that we call paths, and compute the averages

(1/N) Σ_{i=1}^{N} [ e(s1) w(s1) / p1(s1) + e(s2) k(s1, s2) w(s1) / (p1(s1) p2(s2)) + e(s3) k(s2, s3) k(s1, s2) w(s1) / (p1(s1) p2(s2) p3(s3)) + . . . ].
Here we have dropped the outer index i from the points, with the understanding that the points {sj }
are different for each path i. This is a recursive process; for each path, the algorithm successively
generates points in S and estimates the contributions from these points, starting from an initial
location s1 . Of course, the probability distribution p1 used for picking the first point s1 should be
chosen so that only points where w is non-zero are sampled. Also, the probabilities should be tuned so that we do not take samples in places which result in the kernel evaluating to zero; i.e., if the last vertex of the path is si, the next one si+1 should be chosen so that k(si, si+1) ≠ 0, since otherwise the rest of the path will contribute nothing to the result.
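The following sketch implements this basic path estimator in a toy one-dimensional setting, with uniform vertex sampling and a fixed maximum path length standing in for Russian roulette; the kernel, emission and measurement functional are made-up examples, not quantities from the thesis:

    import numpy as np

    rng = np.random.default_rng(2)

    k = lambda s, t: 0.4 * np.exp(-4.0 * (s - t) ** 2)   # kernel of T (and T*)
    e = lambda s: np.sin(np.pi * s)                       # emission
    w = lambda s: s * s                                   # measurement functional

    def one_path(max_len=8):
        """Estimate <e,w> + <e,T*w> + ... from a single path s1, s2, ..."""
        total = 0.0
        s_prev = rng.uniform()            # s1 ~ p1 = uniform on [0, 1]
        weight = w(s_prev) / 1.0          # w(s1) / p1(s1)
        total += e(s_prev) * weight       # first term: e(s1) w(s1) / p1(s1)
        for _ in range(max_len - 1):
            s_next = rng.uniform()        # next vertex ~ uniform (p = 1)
            weight *= k(s_prev, s_next) / 1.0
            total += e(s_next) * weight   # e(s_j) k(...)...k(s1,s2) w(s1) / (p1...pj)
            s_prev = s_next
        return total

    N = 50_000
    print(np.mean([one_path() for _ in range(N)]))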
This algorithm can be used, with a minor modification that we describe below, for global illumination. In that context the algorithm amounts to tracing rays backwards from the camera and letting
them reflect randomly in the scene. Here, the vector e describes emission of light.
It is worth noticing that the above series only considers light that is emitted by and propagated
backwards towards the eye from the points sj that form the path. In practice this is not efficient,
since if the light sources are small (i.e., the support of the emission function is small), it might take
an arbitrary number of bounces until an emitting surface is hit by the random process. Kajiya [38], who first formulated the rendering equation and presented a path tracing method for its solution,
noticed that it is more efficient to send a randomly distributed shadow ray from each si towards the
light sources. In effect, this amounts to estimating the measurement by the series
⟨u, w⟩ = ⟨e, w⟩ + ⟨Te, w⟩ + ⟨Te, T∗w⟩ + ⟨Te, T∗T∗w⟩ + . . .
This modifies the above procedure by introducing a new series of random points {l1 , l2 , . . .} that are
chosen randomly within the support of the emission function with probability distribution functions
{q1 , q2 , . . .}. The estimates thus become
(1/N) Σ_{i=1}^{N} [ e(s1) w(s1) / p1(s1) + e(l1) k(s1, l1) w(s1) / (p1(s1) q1(l1)) + e(l2) k(s2, l2) k(s1, s2) w(s1) / (p1(s1) p2(s2) q2(l2)) + . . . ],
where the points {sj } and {lj } are again different for each path i. The procedure can be interpreted
as follows: s1 is the point where a ray from the camera hits the scene. The first term is the radiance
emitted from that point to the eye. The second term, involving the random point l1 , is a point
estimate for the lighting that hits s1 directly and then proceeds to the camera. Then, a new point
s2 is generated by tracing a ray from s1 to a random direction, and a new random point l2 on the
support of the emission function is chosen. The third term is a point estimate of the radiance that hits
s2 directly from l2 and then propagates to the camera via s1 . Choosing si+1 by tracing a ray from
si to the first intersection in a random direction guarantees that k(si , si+1 ) > 0 as required. Russian
roulette [2] should be employed to truncate the infinite series to a finite number of terms.
The question of how to choose the probabilities pi (other than just avoiding zeroes from the kernel)
so that these estimates converge as fast as possible is more involved. Techniques that aim for faster
convergence by tuning the probabilities are jointly called importance sampling.
The basic algorithm presented above can be made more efficient in a number of ways. Since we do
not pursue this topic further, we refer the reader elsewhere [38, 45, 46, 75, 23].
Chapter 3
Basics of Global Illumination
This chapter is devoted to presenting an introductory treatment of the physically-based quantities
and equations that form the basis of realistic image synthesis. Specifically, the equations are related
to the more abstract framework of functional analysis that was presented in the previous chapter.
This chapter is organized as follows. We first introduce our notation. Then a description of radiance,
the fundamental quantity that describes the distribution of light in an environment, is presented.
Then, the equations that govern the transport of radiance are derived. Finally, we conclude with an
overview of global illumination algorithms.
3.1 Basics and Notation
This section describes the basic assumptions under which the global illumination problem is treated
in the rest of the chapter, introduces the concept of the solid angle, and reviews the notation conventions used in the remainder of the thesis.
We assume we are given a geometric description of the (oriented) two-dimensional surfaces of a
three-dimensional scene, the reflective properties of those surfaces, and a function that describes
emission of light from the surfaces of the scene. The geometric description is most commonly a
set of triangles that form piecewise linear surfaces. The exact form of the reflective properties and
emission function will be given later.
To limit the discussion, we further assume that the space between the surfaces is empty, i.e., there
is no smoke, fog or other scattering medium, and no light is emitted from empty space. We also
assume that the refractive index of the transparent medium is constant. These assumptions simplify
illumination computations significantly, because then light that is emitted from or is reflected by a
surface of our scene travels along a straight line, and will not be scattered or absorbed until it hits
another surface of the scene. Since by our assumption no light is emitted in empty space, this means
that the appearance of the scene is completely determined by the radiance field that describes the
light leaving each point on the surfaces of the scene into each direction.
The Solid Angle
The solid angle is the two-dimensional equivalent of the planar angle. The solid angle ΩA subtended
by a surface A as seen from point x is defined to be the area of A0 , the projection of A onto the
unit sphere surrounding x. The unit of solid angle is called steradian. It follows immediately from
the definition that the full solid angle is 4π steradians, since the area of the unit sphere is 4π. This
is analogous to the planar angle, which may be defined (in radians) as the arc-length along the unit
circle. Any patch on the unit sphere with nonzero measure may be identified with a solid angle. The
situation is illustrated in Figure 3.1; the solid angle subtended by the light-gray surface as seen from
the point equals the area of the projection of the surface onto the unit sphere surrounding the point,
marked in black.
By the above definition, the small element of solid angle dω as seen from point x is related to a distant small area element dA located at point y by

(3.1)    dω = dA cos θ′ / r²,

where θ′ is the angle between the normal vector at dA and yx⃗, and r is the distance between x and y. The unit vector yx⃗ points from y towards x.

Figure 3.1 The solid angle.
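As a quick numerical sanity check of (3.1) (an illustration, not part of the thesis), the sketch below compares the differential approximation with the exact solid angle subtended by a small disc viewed along its axis:

    import numpy as np

    # Small disc of radius a at distance r, facing the viewpoint (theta' = 0).
    a, r = 0.01, 2.0
    dA = np.pi * a ** 2

    approx = dA * np.cos(0.0) / r ** 2                 # d_omega ~ dA cos(theta') / r^2
    exact = 2.0 * np.pi * (1.0 - r / np.hypot(r, a))   # exact spherical-cap solid angle
    print(approx, exact)                               # the two values nearly coincide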
Notation
We use x and y for denoting position in the scene. Since we work
without participating media, the points will always be located on
the surfaces of the scene, and thus these variables only have two degrees of freedom. We take the
points to lie in the three-dimensional scene, however.
ωin and ωout denote incident and outgoing directions, respectively. They are taken to be vector
quantities. For example, the bidirectional reflectance distribution function fr (x, ωin → ωout ) that
we encounter in section 3.3 is a six-dimensional function – two dimensions for position and two for
incident and outgoing directions each. The notation dω refers to a differential solid angle located in
the direction ω.
xy⃗ denotes the unit vector that points from x in the direction of y; clearly yx⃗ = −xy⃗.
The notation ⌊n(x) · ω⌋ means max(0, n(x) · ω), where n(x) is the surface normal at x. Using this notation it is natural to write integrals like

∫Ω fr(x, ωin → ωout) ⌊n(x) · ωin⌋ dωin.

When no confusion can arise, we may also write cos θ in place of ⌊n(x) · ω⌋.
A Note on Functions, Functionals and Integration
To cope with idealizations such as point-like light sources and perfect mirror reflection, we allow
our functions to contain Dirac functionals δ or “impulses”. This functional is defined by
δs0(f) = ∫S δ(s − s0) f(s) ds := f(s0).
In words, integrating a function f against an impulse returns the value of f at the center s0 of the
impulse. When needed, we denote the center of the functional with a subscript as above. Since the
inner products we use in function spaces are essentially integrals of products of functions, we extend
the definition of the inner product to accommodate for impulses, too.
Now consider the compound functional g that is the sum of the function gn (s) that is integrable
in the usual sense, and several impulses δs1 , δs2 , . . . that have disjoint centers {sk }. As a natural
generalization of the above, we define evaluation of g on a function f to mean
g(f) = ⟨f, g⟩ = Σk f(sk) + ∫S f(s) gn(s) ds,
where the last integral is interpreted in the usual sense. As with usual functions, a factor that contains
an impulse that is not dependent on the variable of integration may be freely moved out of the
integral.
In particular, we allow 1) the functions that describe radiance, 2) the kernels k(s, t) of integral
operators and 3) functions that represent linear functionals to contain impulses.
3.2 Radiance and Related Quantities
This section presents a derivation of radiance, the fundamental quantity that describes the flow of light. The following development has
been adapted in part from a previous account [23].
Consider a hypothetical differential surface dA with unit normal ω
situated at a point x somewhere in empty space in our scene. The
number of photons dN (on a single wavelength) that flow through
dA during the time dt is
dN = p(x, ω) c dt dA dω,
where dω is a differential solid angle around the normal vector ω, and
c is the speed of light. The formula has intuitive meaning; p(x, ω)
denotes a photon density, i.e., photons per volume element per unit solid angle. The product c dt dA defines a small volume; c dt defines the distance a photon travels during dt. The situation is illustrated in Figure 3.2. To understand why p(x, ω) is a density function also with respect to the solid angle, consider counting all the particles that flow through the small volume c dt dA from all directions; this is most conveniently done by integrating over the set of directions. Thus, the function p is a density function both with respect to volume and solid angle.

Figure 3.2 Illustration of the geometry related to the definition of radiance.
Now suppose dA turns so that its normal vector no longer coincides with the flow direction ω, but
instead forms an angle θ with it. Now the number of photons clearly becomes dependent on the
cosine of the angle θ:
dN = p(x, ω) c dt dA cos θ dω.
Dividing through by c and dt gives

dN / (c dt) = p(x, ω) dA cos θ dω.

Clearly dN/(c dt) ∝ Φ, where Φ = dE/dt is the flux and E is the energy of the photons, since on a fixed wavelength all photons carry equal energy and c is constant. Now, by a formal manipulation we end up with

Φ / (dA cos θ dω) ∝ p(x, ω).
The quantity radiance is defined to be

(3.2)    u(x, ω) = Φ / (dA cos θ dω) = C p(x, ω),

where C = ℏc/λ, ℏ is Planck's constant and λ is the wavelength of the (monochromatic) light. Radiance has units of power per unit projected area per unit solid angle.
Even though we have worked in free space so far, the introduction of the angle θ above allows us to consider surfaces too; (3.2) may just as well be thought of as the radiant energy sent out by a little surface patch dA located at x into direction ω during the small time interval dt; see Figure 3.3 for an illustration. Since air is presumed fully non-scattering for this treatment, we will never have use for the radiance function in free space. From here on we consider radiance only on the surfaces of the environment.

Figure 3.3 Radiance as the light leaving from or incident upon a surface patch dA with normal vector n.

3.2.1 The Function Space and Inner Product for Radiance Functions

We consider the radiance functions to lie in an inner product space X(S × Ω) of scalar functions defined over S × Ω, where S is the two-dimensional set of surfaces in the scene and Ω is the two-dimensional set of directions. The functions are required to have point values in all of their domain except at points that contain a Dirac impulse – point values have no meaning in such locations, since no usual function can represent the impulse.
We define the inner product of two functions in this space to be
(3.3)    ⟨u, v⟩ = ∫S ∫Ω u(x, ω) v(x, ω) dAx dω,
where dAx denotes an infinitesimal area element located at point x, and integration against impulse
functionals is understood in the usual way. This definition differs from some references (e.g., [23]),
which also include the cosine factor ⌊n(x) · ω⌋ under the integral for symmetry reasons.
Since we allow impulses both in the radiance distributions and the linear functionals that we use for
measuring radiance, we in essence assume that linear functionals that act on the space X(S × Ω) are
essentially similar to the members of that space¹.
3.2.2 Irradiance
As shown above, radiance is a distribution function with respect to both space and solid angle. To
find the total power impinging upon a small surface element dA from the hemisphere above it, we
integrate incident radiance over the hemisphere:
(3.4)    E(x) = ∫Ω u(x ← ωin) cos θin dωin.
The result is the flux density, with units of power per unit area, that lands on dA. This quantity is
called irradiance. Working backwards from the above formula, we define differential irradiance as
dE(x ← ωin) = u(x ← ωin) cos θin dωin.

This quantity will be important when we consider reflection in the next section, since the radiance reflected by dA into a given outgoing direction will be proportional to dE instead of u.
3.3 Reflection
Next we consider how light is reflected from surfaces. The light incident on a small surface element
dA from an infinitesimally small solid angle around direction ωin reflects off the surface according
to a directional distribution function fr (ωin → ωout ); the function fr is called the BRDF, or the
Bidirectional Reflectance Distribution Function [55]. Here the outgoing direction is denoted by ωout .
Figure 3.4 depicts an example distribution as a function of the outgoing direction for a single incident direction. Put mathematically, the BRDF relates incident differential irradiance and outgoing
radiance:
fr(ωin → ωout) = du(x → ωout) / dE(x ← ωin) = du(x → ωout) / (u(x ← ωin) ⌊n(x) · ωin⌋ dωin),

or equivalently

(3.5)    du(x → ωout) = u(x ← ωin) fr(ωin → ωout) ⌊n(x) · ωin⌋ dωin.
Intuitively, for each incident direction, the BRDF can be roughly thought of as the percentage of
light that reflects to other directions². For each surface location x it is generally a four-dimensional
function, although in some cases its dimensionality is smaller. For instance, a surface that scatters
the incident light into all directions with equal intensity regardless of the angle of incidence has a
constant BRDF. A BRDF is called isotropic if rotation of the underlying reflector around the surface
normal leaves the reflection unchanged. In this case the function can have up to three degrees of
freedom.
¹ Rigorously speaking, treating vectors and bounded linear functionals as equal is only possible in Hilbert spaces. Generally, the linear functionals that operate on a normed space X cannot be identified with the members of X. However, this formulation is convenient for the task at hand.
² This is not entirely correct, but we skip the technical details.
Figure 3.4 A two-dimensional slice of a simple BRDF. Here, the values of fr are depicted for a
single incident angle, denoted by the black arrow. This BRDF contains a diffuse part, seen as the
constant-radius sphere, and a simple specular lobe centered on the mirror direction.
To enforce an energy balance, i.e. that reflected power does not exceed incident power, the BRDF
must fulfill the condition [23]
∫Ω fr(ωin → ωout) cos θout dωout ≤ 1    ∀ ωin.
Representing general four-dimensional functions is not a straightforward task, particularly in the presence of storage limitations and concerns about computational efficiency. There is a wealth of literature describing methods for the representation of BRDFs. Some well-known examples are the diffuse (Lambertian) BRDF and the Phong model³ [57]. Methods for representing the BRDF always compromise
between expressive power, storage requirements and the complexity of evaluation. For an overview
of methods for representing BRDFs, the reader is referred elsewhere [60, 48, 40, 41].
The differential nature of (3.5) immediately suggests that we may find the total reflected light into a
given direction ωout by integrating over the direction ωin . This leads to the reflectance equation
(3.6)    u(x → ωout) = ∫Ω u(x ← ωin) fr(x, ωin → ωout) ⌊n(x) · ωin⌋ dωin.
Here we have added x as an argument of fr to emphasize the fact that the BRDF may vary from
point to point in the environment. Notice that we do not yet have any relation between the incident
radiance u(x ← ωin ) and outgoing radiance u(x → ωout ). In the next section we move on to consider
where the incident light in (3.6) comes from, but first we comment on the relationship of radiosity
and irradiance.
³ Although the Phong model must be normalized before it fulfills the energy conservation condition given above.
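A small Monte Carlo sketch of the reflectance equation (3.6), assuming a Lambertian BRDF fr = ρd/π, uniform sampling of the hemisphere and a made-up incident radiance function; none of these concrete choices come from the thesis:

    import numpy as np

    rng = np.random.default_rng(3)
    rho_d = 0.7
    f_r = rho_d / np.pi                       # Lambertian BRDF, constant

    def u_in(omega):
        """Toy incident radiance: a smooth 'sky' brightest at the zenith."""
        return 1.0 + omega[..., 2]            # omega[..., 2] = cos(theta_in)

    # Uniform sampling of the upper hemisphere (pdf = 1 / (2 pi) per steradian).
    N = 200_000
    z = rng.uniform(0.0, 1.0, N)              # cos(theta)
    phi = rng.uniform(0.0, 2.0 * np.pi, N)
    s = np.sqrt(1.0 - z * z)
    omega = np.stack([s * np.cos(phi), s * np.sin(phi), z], axis=-1)

    pdf = 1.0 / (2.0 * np.pi)
    # u(x -> w_out) = \int u(x <- w_in) f_r cos(theta_in) d w_in
    u_out = np.mean(u_in(omega) * f_r * z / pdf)
    print(u_out)                              # analytically 5 * rho_d / 3 here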
Radiosity. If the BRDF is completely diffuse, i.e., fr(x, ωin → ωout) = ρd/π = const., (3.6) becomes

u(x → ωout) = (ρd/π) ∫Ω u(x ← ωin) cos θin dωin =: B(x),
where ρd is the diffuse reflectance, i.e. the fraction of light that is reflected by the surface. We
note that u(x → ωout ) becomes independent of ωout . Comparing this to (3.4) we notice that the
outgoing radiance in this case is directly proportional to full hemispherical irradiance. In this case it
is customary to denote u(x → ωout ) by B(x), a quantity called radiosity.
3.3.1 The Rendering Equation
Equation (3.6) describes a reflection at a single point. Now we may ask: Where does the light go
after reflection? Obviously, in a closed environment, it will hit another surface and re-reflect there,
and so on. We capture this observation mathematically by replacing the term u(x ← ωin ) under the
integral in (3.6) by u(x′ → −ωin), where x′ = r(x, ωin) and r is the ray-casting function, which returns the closest point in the direction ωin as seen from point x. The relevant geometry is illustrated in Figure
3.5. This modification is legitimate since in a non-scattering medium radiance remains constant
along a line [11, p. 22]. After this modification, the reflectance equation becomes
u(x → ωout) = ∫Ω u(x′ → −ωin) fr(x, ωin → ωout) cos θin dωin.
Although this last equation seems to govern the flow of light in an environment completely, it does
not yet account for emitted light. This is remedied by introducing an emission function e(x → ωout),
which describes the radiance emitted to direction ωout from point x. Now we are ready to present the
rendering equation, one of the most important equations of computer graphics:

(3.7)    u(x → ωout) = e(x → ωout) + ∫Ω u(x′ → −ωin) fr(x, ωin → ωout) cos θin dωin,

where x′ is defined as above. This is an integral equation, since the unknown function appears both on the left-hand side and on the right-hand side under the integral. Equations of this form are exactly the Fredholm equations of the second kind [44] that were introduced in Section 2.4.

Figure 3.5 Geometry related to the rendering equation. Due to the invariance of radiance on straight lines, the radiance u(x ← ωin) incident on x from direction ωin equals the radiance u(x′ → −ωin) leaving x′ to the opposite direction. Here x′ = r(x, ωin) is the closest point from x in direction ωin.
We note that it is possible to change variables in the rendering equation and recast the integral over
solid angles as an integral over the surfaces of the scene; this can be done using the formula (3.1)
that relates differential solid angles to differential surface patches. The original rendering equation
of Kajiya [38] was given in a form where the integration was performed over surfaces. Also, Kajiya
described the distribution of light energy using so-called two-point transport intensities. However,
this quantity relates to radiance in a simple way. The above equation is equivalent to the original
formulation.
Operator Form of the Rendering Equation
To simplify the appearance of the rendering equation, we will now rewrite it using the operator
notation introduced in Chapter 2. To accomplish this, we introduce the transport operator T , which
is a linear integral operator that takes as input a radiance distribution that describes the light leaving
the surfaces of the environment, propagates the light to the next intersection, reflects it there once
and gives out a new distribution which represents the reflected light. T is defined by
(3.8)    (Tu)(x → ωout) = ∫Ω u(x′ → −ωin) fr(x, ωin → ωout) cos θin dωin.
The kernels of the operator T and its adjoint T ∗ are derived in Appendix A.2. Using this new
notation the rendering equation may be written as
(3.9)    u = e + Tu    ⇔    (I − T)u = e,
where I is the identity map on the space of radiance functions. Formally, the solution of the equation
is given by
u = (I − T)⁻¹e,
where (I − T)⁻¹ is the inverse operator of I − T. The inverse of a linear operator is itself linear, and the existence of (I − T)⁻¹ is guaranteed at least if ‖T‖ < 1 (see Section 2.4.1). This is the case if ∫Ω fr(x, ωin → ωout) cos θout dωout < 1 almost everywhere⁴ in the environment, i.e., the transport operator sends out less energy than is received almost everywhere. This form makes it clear that the equilibrium lighting solution is linear with respect to the emission function. The solution operator (I − T)⁻¹ is also an integral operator, and its kernel is the Global Reflectance Distribution Function [46].

⁴ Everywhere except possibly on sets of measure zero, e.g., at isolated points.
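A finite-dimensional sketch of this operator form, with a random contractive matrix standing in for T; it evaluates the Neumann series u = e + Te + T²e + . . . , compares it with the direct solve of (I − T)u = e, and checks the linearity of the solution in the emission:

    import numpy as np

    rng = np.random.default_rng(4)
    n = 8
    T = rng.random((n, n))
    T *= 0.9 / np.linalg.norm(T, ord=np.inf)     # scale so that ||T|| < 1
    e = rng.random(n)

    # Neumann series: emission + direct + one bounce + two bounces + ...
    u, term = np.zeros(n), e.copy()
    for _ in range(200):
        u += term
        term = T @ term

    solve = lambda e: np.linalg.solve(np.eye(n) - T, e)
    print(np.allclose(u, solve(e)))                            # True

    # Linearity: the solution for 2*e1 + 3*e2 is 2*u1 + 3*u2.
    e1, e2 = rng.random(n), rng.random(n)
    print(np.allclose(solve(2 * e1 + 3 * e2), 2 * solve(e1) + 3 * solve(e2)))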
3.3.2 The Rendering Equation for Incident Radiance
In the previous section we derived the rendering equation in the space of outgoing radiance functions,
i.e., the unknown we solved for is the spatial and angular distribution of light leaving the surfaces
of the scene. However, this is not the only possibility. An equivalent formulation results from using
incident radiance as the unknown function. We will make use of this form of the rendering equation
when dealing with precomputed radiance transfer.
The key idea is that, because of the invariance of radiance along straight lines, u(x ← ωin) must be equivalent to the sum of the radiance reflected from x′ towards x and the radiance emitted from x′ := r(x, ωin) towards x; the situation is illustrated in Figure 3.6. This description may directly be turned into a formula:

(3.10)    u(x ← ωin) = e(x ← ωin) + u(x′ → −ωin)
                     = e(x ← ωin) + ∫Ω′ u(x′ ← ω′) fr(x′, ω′ → −ωin) ⌊n(x′) · ω′⌋ dω′,

where e(x ← ωin) = e(x′ → −ωin), and the integration is performed over the hemisphere Ω′ above x′. This may be written in operator form as
(3.11)    ui = Ti ui + ei,

where the subscripts signify that the functions are of an incident nature, and the incident transport operator Ti is defined as

(Ti ui)(x ← ωin) = ∫Ω′ ui(x′ ← ω′) fr(x′, ω′ → −ωin) ⌊n(x′) · ω′⌋ dω′.
When the incident radiance function is known, determination of the corresponding outgoing radiance
function is straightforward; one just needs to apply the local reflection operator, which we denote
by R:

(3.12)    u(x → ωout) = (R ui)(x → ωout) = ∫Ω ui(x ← ωin) fr(x, ωin → ωout) ⌊n(x) · ωin⌋ dωin.

Figure 3.6 Geometry related to the rendering equation for incident radiance. The radiance u(x ← ωin) incident on x from direction ωin equals the radiance u(x′ → −ωin) leaving x′ to the opposite direction, which, in turn, is determined by integrating over the hemisphere over x′.
3.3.3 Adjoint Equations and Importance
Rendering problems may often be stated as measuring the equilibrium radiance distribution over
some subset of S × Ω. For instance, the equilibrium distribution itself does not directly answer
questions like “what is the value that should be assigned to pixel (i, j)?” To answer such questions,
we need to measure the solution function somehow. For instance, the answer to the above question
could be “the average radiance into the direction of the camera from the points intersected by view
rays through the pixel”. It turns out that this and many other questions may be formulated using
linear functionals.
The adjoint equation is a valuable tool for performing these measurements efficiently; in a way, it
allows us to perform linear measurements of the solution with output-sensitive efficiency. For instance,
if we are interested in the average radiance shining through a single pixel, we do not need to first
compute the full radiance distribution to compute this average, but instead, we can start from the eye
and trace light paths backwards towards the light sources by employing the adjoint operator and a
suitable measurement functional. This will be made more explicit in the following subsections.
Radiance Measurements and Linear Functionals
Any linear measurement of the radiance solution u(x → ωout ) may be written in terms of a unique
linear functional w(x, ω) so that the result of the measurement is the value of the functional w on
the solution u:
(3.13)    ⟨u, w⟩ = ∫S ∫Ω u(x → ωout) w(x, ωout) dAx dωout.
As an example, Appendix A.3 presents a derivation for a linear functional that determines the average
radiance that hits a pinhole camera through a given pixel.
Measurements of average radiances through pixels are of course not the only case where we measure
the solution. For instance, looking back at 2.2.1, we notice that projecting the solution function into
a different basis set requires taking inner products between u and the basis functions; this is exactly
the same as measuring the solution using the basis functions.
Measurements and Adjoint Equations
We now have the tools to apply the theory from Section 2.4.2 into global illumination. Suppose that
u solves the rendering equation u = T u + e and, when given a linear functional we , we find a w that
solves the equation
(3.14)    w = T∗w + we.
32
3.4 An Overview of Global Illumination Algorithms
From these definitions and the derivation in 2.4.2 it follows immediately that
⟨u, we⟩ = ⟨e, w⟩,
i.e., the measurement of the lighting solution u by the functional we may be carried out by measurement of the emission function e by another functional w that solves the adjoint equation (3.14).
Often “to solve the rendering equation” is taken to be synonymous with measuring the solution by
some functionals.
Importance. In the literature, linear functionals are often given the meaning of importance. The
name is motivated by the observation that the functionals are propagated “backwards” by the adjoint
operator, and that a certain initial distribution we (e.g., [70, 8]) can be used as a measure of how
important a given subset of the domain of radiance functions is to the image that is being rendered. In
this context, importance is used for guiding the solution process of the “forward” rendering equation.
Importance is often thought of as emanating from the virtual camera.
Some authors like to think of importance as an incident quantity, while radiance is taken to be an
outgoing quantity. This results in needless confusion, and we do not follow this convention here.
While this intuitive picture is naturally helpful at times, we should bear in mind that not all tasks in
rendering are as simple as measuring averages of radiance or propagating importance that guides the
solution of the forward equation. For instance, we can use a projection method to capture radiance
functions by basis functions that also have negative values. The coefficients for the basis functions
are computed by applying the dual basis functions to the solution as linear functionals, but in this
case it makes no sense to speak of “emanations”, nor does it seem very natural to think about negative
importance. Because of this, we only speak of linear functionals and adjoint equations.
3.4 An Overview of Global Illumination Algorithms
Here we briefly describe methods for solving the rendering equation. We do not treat PRT and
relighting methods here, since a more complete account of them will be given in Chapter 4.
Methods for solving the rendering equation may be roughly classified as finite element methods,
path tracing methods and combinations of the two. However, no clear line can be drawn here: In
general, due to the linearity of the rendering equation, the light transport operator may be broken
into as many terms as desired. There are a host of methods that separate different mechanisms of
the transport operators and compute approximations for each of these terms in a specialized manner,
e.g., using a radiosity approach for indirect diffuse lighting and path tracing for specularities [7].
Breaking the operator up into many pieces is a useful technique behind many state-of-the-art methods, such as photon mapping [37]. Also, path-tracing-like methods can often be used for evaluating the
coefficient matrices in a finite element approach.
3.4.1 Finite Element Methods
Finite element methods search for the solution of the rendering equation as a linear combination of
basis functions whose linear span approximates the space of radiance distribution functions. The
problem is simplified considerably if the surfaces reflect only diffusely, i.e., when their appearance
does not depend on the viewing direction. In this case the radiance functions only have two spatial
degrees of freedom. Techniques that assume diffuse reflection are called radiosity methods.
Radiosity methods
Early Methods. Most early element-based methods work with diffuse lighting. The original radiosity algorithm [24] is an application of a well-known method from heat transfer to the diffuse
global illumination problem. It is essentially a finite element method with piecewise constant basis
functions. The equations of radiosity may be easily derived from the rendering equation using the
Galerkin method. The elements of the coefficient matrix of the resulting linear system are called
form factors in this context. Their physical significance is the percentage of energy radiated by a
patch that hits other patches. Since the original method actually constructs the linear system that
arises, it is not able to handle large scenes.
The worst inefficiencies of the radiosity method were soon addressed by Cohen et al. [10], who
introduced a two-level hierarchy of piecewise constant basis functions, where coarser patches act as
emitters and the finer basis functions receive light. The transport problem is first solved using the
coarser mesh, and the solution is then used to compute radiosities for the finer elements. The finer
elements are then used for displaying the solution.
Progressive, iterative refinement of the solution was introduced by Cohen et al. [9]. Progressive
refinement marked the birth of matrix-free iterative radiosity solvers, where the full linear system
is not constructed. Along with intelligent ordering of computation, this allows treatment of much
larger problems than previous algorithms. Here form factors are not computed in advance, but
instead evaluated only when needed in the iterative process.
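A matrix sketch of the discrete radiosity system B = E + ρ F B solved by a simple gathering iteration, with random row-normalized “form factors” and reflectances below one (all values are stand-ins); progressive refinement shoots energy patch by patch instead of gathering, but converges to the same fixed point:

    import numpy as np

    rng = np.random.default_rng(9)
    n = 50
    F = rng.random((n, n))
    F /= F.sum(axis=1, keepdims=True)        # form factor rows sum to 1 (closed scene)
    rho = rng.uniform(0.2, 0.8, n)           # diffuse reflectances < 1
    E = np.zeros(n); E[0] = 10.0             # a single emitting patch

    B = E.copy()
    for _ in range(100):                     # Jacobi-style "gathering" iteration
        B = E + rho * (F @ B)

    # Agrees with the direct solve of (I - diag(rho) F) B = E.
    print(np.allclose(B, np.linalg.solve(np.eye(n) - rho[:, None] * F, E)))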
Finite Element Formulations. The connection of the radiosity technique and the finite element
method was first fully recognized by Heckbert [30] and later developed by Zatz [81] and Troutman
and Max [74]. They demonstrated radiosity using higher-order basis functions, which allows the
solution to capture smooth radiosity gradients using fewer basis functions.
However, higher-order finite element methods have not led to much follow-up research. This is because the high-order basis functions cannot acceptably represent discontinuous or derivative-discontinuous functions. The radiosity function is not globally smooth because of the presence of
shadows. Thus, the underlying element discretization must be adapted to the discontinuities of the
radiosity function; methods for this are called discontinuity meshing [22, 71]. The downside of
such algorithms is that they are complex and prone to numerical instability. Discontinuity meshing has recently been combined with a higher-order wavelet basis in a wavelet radiosity method by
Holzschuch and Alonso [31], however.
Hierarchical Radiosity. The complexity inherent to the radiosity system was tackled by generalized hierarchical methods. The two-level hierarchy of piecewise constant basis functions of Cohen
et al. was generalized to arbitrary levels of hierarchy by Hanrahan et al. [29] and to wavelet basis functions by Gortler et al. [25]. These methods approximate the kernel of the radiosity equation using
an adaptive hierarchy. Their efficiency stems from concentrating computation on important parts
of the problem; they achieve linear growth of computation time in terms of scene complexity in
the solution process. The crucial idea is to represent the interactions between patches at different
resolutions both for the source and the receiver; e.g., a tabletop sending radiance to the ceiling may
be modeled to good precision as a single interaction, but to account for the shadows cast by a book
resting on the tabletop, the interactions must be computed at much higher resolution. However, these
hierarchical methods still have quadratic complexity with respect to the tessellation granularity of
the input scene.
Hierarchical radiosity methods resemble the multigrid method [28] for solving partial differential
and integral equations. The idea is to first solve a coarse version of the problem, and use the information obtained from the coarse solution to guide a process where the subspace where the solution
is sought is refined in order to capture the solution more accurately. In the context of radiosity
this amounts to iterating between solving the radiosity equation using current basis functions, refining the interactions between the patches in places where significant energy transfers are made, and
re-solving the equation using the refined basis functions.
Even though the above hierarchical methods reduce the complexity of the solution process, the initial tessellation of the faces in the scene still determines an upper bound for the size of the problem.
Methods that cluster nearby faces and use the clusters as basic units for energy transfer where applicable are able to reduce the complexity even further. See Willmott [80] and the references therein
for an introduction.
Importance-based Radiosity. Basic radiosity methods produce a solution that is completely view-independent. Thus, computational resources are spent evenly on all parts of the scene. Smits et
al. [70] noticed that this may be wasteful, since not all parts of the scene are equally important in
the final solution. For instance, if the illumination solution will be used for rendering a walkthrough
animation of a scene, parts of the environment that are not visible in any frame do not need their
illumination to be resolved accurately. Motivated by this observation, the authors introduced the
concept of importance, the dual quantity of radiosity. Based on the adjoint equation, their algorithm
iterates between solving for the unknown importance and radiosity functions simultaneously and
then using these solutions for refinement of the links between patches.
Non-diffuse Pure Finite Element Methods. The finite element method itself does not rule out the
use of directional basis functions. The first method to work with a four-dimensional basis function
set was that of Immel et al. [35]; although the authors did not call it such, their method may be
seen as a finite element method using piecewise constant basis functions both for the spatial and
directional variation in the radiance function. The problems of trying to capture specular effects
by direct visualization of a directional basis were apparent; an immense number of basis functions
would have been required for capturing finer detail.
Sillion et al. [66] present a method that generalizes progressive radiosity to nondiffuse reflectance
functions. The authors represent the angular variation in the radiance distribution by spherical harmonics. In essence, their representation is quite similar to the one that will be used in Chapter 4, but
they store outgoing radiance instead of incident radiance. Also, perfect mirror reflection is handled
as a special case.
Final Gathering is Iterated Projection
The number of elements used in finite element computations needs to be kept as low as possible
due to obvious performance and memory issues. Final gathering is a way to enhance the images
produced using a coarser solution. In the mathematics literature this is called an iterated projection
method [4].
The idea of final gathering is to take a coarse solution û that has been computed using some method,
substitute that into the rendering equation, and compute a final solution with higher accuracy:
ufinal = T û + e.
For this to be beneficial in the case of finite element methods, the space where the final solution is
sought is chosen so that it has significantly higher granularity than the space used to represent û. It
is also possible to use ray tracing for computing ufinal so that the finite element solution is used for
evaluating lighting incident onto the points where eye rays hit (e.g., [61, 63, 8, 37]).
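A small matrix sketch of final gathering as iterated projection: a deliberately coarse solution û (here a truncated Neumann series) is substituted once more into the discretized rendering equation, which shrinks the error by roughly a factor of ‖T‖; the matrices are random stand-ins:

    import numpy as np

    rng = np.random.default_rng(10)
    n = 40
    T = rng.random((n, n)); T *= 0.8 / np.linalg.norm(T, ord=np.inf)
    e = rng.random(n)
    u_exact = np.linalg.solve(np.eye(n) - T, e)

    # A deliberately coarse solution: only a few bounces.
    u_hat = e + T @ e + T @ (T @ e)

    # Final gathering: one extra application of the transport operator.
    u_final = e + T @ u_hat

    # The second error is smaller than the first.
    print(np.linalg.norm(u_hat - u_exact), np.linalg.norm(u_final - u_exact))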
The two-level hierarchical radiosity method of Cohen et al. [10] that was described earlier may be
seen as a direct implementation of iterated projection.
3.4.2 Ray and Path Tracing Methods
Early methods in realistic image synthesis were largely formulated based on intuition. Many of
these methods are based on tracing rays backwards from the virtual camera towards light sources,
and letting the rays reflect and refract in the environment. This intuition was later proven to be
mostly correct.
“Ray tracing” is a classical technique of computer graphics. It was introduced in its original form by
Whitted [79] for rendering reflections and refractions. His method captured all light transport paths
that included any number of perfect mirror reflections and refractions and that ended on a diffuse
surface. The Phong illumination model was used for determining the lighting for each intersection.
Cook et al. [14] introduced distributed ray tracing, which was able to render non-perfect glossy
reflections and area light sources. Still, distributed ray tracing does not account for all types of light
transport.
Kajiya [38] presented the equivalent of the rendering equation (3.7) in his seminal paper in 1986.
He showed that previous realistic image synthesis methods (ray tracing, distributed ray tracing and
radiosity) were all just different numerical methods for solving the same equation. He also presented
a stochastic path tracing method that accounts for all possible light transport paths, given enough
computational resources. Kajiya’s path tracing method is based on the method outlined in Section
2.5.3.
Later, Lafortune and Willems [45] and Veach and Guibas [75] introduced bi-directional path tracing.
This method generalizes Kajiya’s path tracing by utilizing a suitable mixture of forward and adjoint
operations as noted at the end of Section 2.4.2.
3.4.3 Hybrid Methods
Multi-pass Finite Element Methods
The basic radiosity method is only capable of producing lighting solutions in environments that are
completely diffuse. However, most real-world environments do not fulfill this criterion, and thus
pictures synthesized under this assumption are not entirely realistic. The fundamental difficulty
associated with using four-dimensional basis functions in finite element methods gave rise to two-pass methods. These methods are hybrids that combine a radiosity-like preprocessing step with picture generation by ray tracing.
One of the first generalizations to the radiosity method for producing solutions for glossy environments was due to Wallace et al. [76]. In a preprocessing step, they simulated radiosity using modified
form factors that captured the effect of a single bounce via perfect planar mirrors. This solution was
then used in a view-dependent ray-tracing step for producing final images. Their method was not
able to capture all light transport paths, however.
An elegant two-pass method is due to Sillion and Puech [65]. As their method may easily be described using operator notation, we give an overview here. First, the authors break the transport
operator T into two as T = Td + Ts , where Td is a purely diffuse transport operator and Ts is the
glossy transport operator that accounts for all non-diffuse transport. Assuming that emission is still
diffuse, the rendering equation may be written as u = e + Td u + Ts u. Now, if we define a new
unknown (diffuse) function β = e + Td u, the above becomes u = β + Ts u, which we may solve by
the Neumann series:
(3.15)    u = (Σ_{i=0}^{∞} Ts^i) β.

Substituting this in the definition of β and denoting S := Σ_{i=0}^{∞} Ts^i we get

(3.16)    β = e + Td S β.
These two equations lead to a two-pass method; first, equation (3.16) is solved using a modified
radiosity algorithm. The concept of the form factor between patches i and j is extended to signify
not only the direct diffuse energy transfer between the patches, but also the amount of energy emitted
by patch i that reaches patch j through any number of specular bounces followed by a final diffuse
interaction; i.e., the form factors form a discretized version of the operator Td S. After the modified
radiosity system has been solved, the full global illumination solution u is found by evaluating (3.15)
using a ray tracer and the isotropic solution β.
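A matrix sketch of this two-pass decomposition, with random contractive matrices standing in for Td and Ts; it verifies that solving (3.16) for β and then evaluating (3.15) reproduces the direct solution of u = e + (Td + Ts)u:

    import numpy as np

    rng = np.random.default_rng(5)
    n = 6
    Td = 0.05 * rng.random((n, n))     # "diffuse" transport
    Ts = 0.05 * rng.random((n, n))     # "specular"/glossy transport
    e = rng.random(n)
    I = np.eye(n)

    # Pass 1: S = sum_i Ts^i = (I - Ts)^{-1}; solve the modified "radiosity" system.
    S = np.linalg.inv(I - Ts)
    beta = np.linalg.solve(I - Td @ S, e)       # beta = e + Td S beta

    # Pass 2: u = S beta (evaluate the specular Neumann series on beta).
    u_two_pass = S @ beta

    u_direct = np.linalg.solve(I - (Td + Ts), e)
    print(np.allclose(u_two_pass, u_direct))    # True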
Methods that combine a four-dimensional finite element method with ray tracing also exist: Christensen et al. [8] solve the full, glossy light transport problem by combining a finite element method
built on a wavelet basis with ray tracing. An importance function derived from the viewport is used
for guiding the finite element solution process. In the end, a final gathering step is used for hiding
the necessary coarseness of the finite element solution.
Density Estimation
The Neumann series u = e + T e + T T e + . . . that solves the rendering equation has an immediate
physical interpretation: The solution is the sum of emission, direct lighting, light that has bounced
once, twice, etc. This process may also be viewed as a particle transport problem [2].
Density estimation methods (e.g., [64, 37]) shoot energy from the light sources in finite packets
(“photons”) and trace them through the environment, recording information of the locations where
they hit the scene. Assuming that all photons carry equal energy, it is not difficult to show that an
estimate of the radiance at a point x in the scene may be obtained by estimating the local density
of photons around x. In this simplest form this applies to diffuse surfaces only, but it is straightforward to extend this to glossy surfaces as well. The most prominent of these methods is photon
mapping [37].
The number of photons traced in the scene must be kept as low as possible for memory and performance reasons. This is why a final gathering step usually needs to be employed in order
to produce images of acceptable quality. For instance, the photon mapping method relies on final
gathering. The method evaluates direct lighting separately and uses density estimation combined
with final gathering to account for indirect light.
Chapter 4
Precomputed Radiance Transfer
Recent methods of Precomputed Radiance Transfer or PRT [68, 42, 49, 53, 67, 69] all strive to
render global illumination effects in a dynamically changing lighting environment at real-time rates.
The methods have gained attention due to the impressive results they achieve and relative ease of
implementation. So far none of the methods is able to render models that deform freely, although
work in that direction has started to appear [36, 39].
All the methods have one thing in common: They parameterize incident lighting by a low-dimensional linear space, and precompute a global illumination solution for each degree-of-freedom
in the emission space. Due to the linearity of the rendering equation, the illumination of the scene
by any emission function represented as a vector in the emission space, i.e., a linear combination of
the basis functions that span the emission space, is simply obtained as a linear combination of the
corresponding “basis solutions”. In this chapter we build a mathematical framework that describes
this process in a formal way, and includes all the abovementioned methods as special cases. As an
example, we show how the original method of Sloan et al. [68] is obtained from this more general
framework.
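A toy sketch of this blending idea in a discretized setting, where a random matrix stands in for a real global illumination solver and each precomputed column plays the role of a basis solution; the names and dimensions are arbitrary:

    import numpy as np

    rng = np.random.default_rng(6)
    n_receivers, n_emission_basis = 100, 9

    # Offline: a linear "solver" maps emission coefficients to scene lighting.
    # Here it is just a random matrix; in practice each column would be a full
    # global illumination solution computed for one emission basis function.
    solver = rng.random((n_receivers, n_emission_basis))
    basis_solutions = np.stack([solver @ basis_vec
                                for basis_vec in np.eye(n_emission_basis)], axis=1)

    # Runtime: any lighting expressed in the emission basis is relit by a
    # weighted sum of the precomputed basis solutions.
    lighting_coeffs = rng.standard_normal(n_emission_basis)
    relit = basis_solutions @ lighting_coeffs

    print(np.allclose(relit, solver @ lighting_coeffs))   # True, by linearity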
All the methods presented here first solve the rendering equation for the incident (not outgoing) radiance. This allows for more freedom in determining outgoing radiances. For instance, the BRDF of
the final bounce may be changed at runtime, even though this is technically wrong since the incident
solution has been computed using some other BRDF. The method of bi-scale radiance transfer [69]
that is described in Section 4.4.4 takes this further and adds small-scale self-shadowing on top of the
global illumination solution.
The rest of this chapter is organized as follows. First, we shortly review the work that led up to
precomputed radiance transfer. Then, we present our novel mathematical framework, including
description of the emission space and the initial transport operator. Next we proceed to formulate
the PRT method as an abstract operator equation, after which we show how the abstract framework
may be used to explain the original PRT method of Sloan et al. We conclude the chapter by reviewing
several methods for computing outgoing radiances from the incident PRT solutions.
4.1 History of Precomputed Radiance Transfer
This section gives a high-level overview of the work that has culminated in the recent methods of
precomputed radiance transfer. We also briefly describe these newer methods below. The reader
unfamiliar with spherical harmonics may wish to first read Appendix A.4 for a brief introduction.
The roots of precomputed radiance transfer are in relighting of images and blending between illumination solutions computed using multiple light source configurations. The key enabling factor in all
cases is the linearity of the rendering equation with respect to the emission function.
Airey et al. [1] presented a radiosity-based method for interactive rendering of walkthroughs. In
their method the user was able to control lighting interactively by linearly blending between radiosity solutions computed using different light source configurations, in effect tuning the intensities of
different light sources. The operatic lighting design method of Dorsey et al. [21] controlled intensities of the light sources in a similar manner.
On the other front, image relighting methods [56, 20, 73] aim to efficiently synthesize pictures
of a static environment under static viewing conditions but time-varying lighting. The method of
Nimeroff et al. [56] produces pictures under different skylight scenarios. The authors define a spherical function basis and project the skylight model into it. Since the rendering equation is linear,
they are able to render images under illumination conditions that are arbitrary linear combinations
of the basis functions by simply blending “basis images” that have been precomputed for each basis
function. This is the essential idea behind modern PRT methods as well. Ashikhmin and Shirley [3]
recently enhanced the method of Nimeroff et al. by proposing a new basis set.
The method of Dobashi et al. [20] renders interactive walkthroughs of environments lit by static
point lights with dynamically varying emission distributions. The distributions are modeled as linear
combinations of the spherical harmonic functions. The method is built on the same basic idea as the
method of Nimeroff et al. Teo et al. [73] relight images similarly to Nimeroff et al., using a steerable
function basis to parameterize incident illumination. Furthermore, they reduce the resulting set of
basis images using Principal Component Analysis.
Relighting Real Scenes
Recent work on relighting real scenes by capturing images of the scene under different lighting
conditions (e.g., [19, 52]) can be seen to be akin to precomputed radiance transfer. These methods
parameterize the incident lighting usually using a directional basis and record an image or images
of the scene under each light direction. The scene may then be relit simply by weighing acquired
images appropriately. In a sense to be made more precise later, these methods let nature solve the
transport equations.
4.1.1 Recent Methods
The more recent methods of precomputed radiance transfer extend the algorithms reviewed above
primarily in two areas. First, they all allow changing both the viewpoint and lighting scenario interactively. Second, they support glossy, i.e., view-dependent reflection functions. An overview of
these methods is given below.
All of the recent methods described here aim to relight either scenes or images under arbitrary,
distant illumination. Recent work can be classified into two fields: Methods that represent incident
lighting as Spherical Harmonics, and those that use Haar wavelets instead.
Methods based on Spherical Harmonics
The Method of Sloan et al. The kick-off for recent work on PRT was given by Sloan et al. [68]
in 2002. Their method assumes that a scene is lit by a distant, spherical light source, i.e., an environment map. This spherical incident radiance function is parameterized using the spherical harmonic
basis. Since the lighting is expressed as a spherical harmonic expansion, high frequencies in the
lighting cannot be accounted for unless an excessive number of coefficients are used. Two methods,
a simple one for diffuse scenes, and a more complex one for glossy scenes, are presented.
In the diffuse case, the authors precompute a set of transfer vectors at the vertices of the scene. These
vectors allow computation of outgoing (diffuse) radiance from the vertices by simple dot product
against the coefficients that represent the lighting. This may again be seen as linearly blending
between basis solutions that have been computed for each spherical harmonic.
The authors also present a method for rendering with
glossy BRDFs. This is more complicated, since the
outgoing radiance now varies with viewing direction.
To cope with this, the quantity transferred incident radiance u(p ← ωin) is introduced. Transferred incident radiance is a spherical function, depending on the surface location p on the object, that represents the lighting incident to p; i.e., it includes the direct lighting at infinity, which is partly shadowed by the object itself, and indirect lighting that has bounced at least once on the surface of the object before impinging upon p. The situation is illustrated in figure 4.1.
[Figure 4.1: Transferred radiance. Figure courtesy of Peter-Pike Sloan [67].]
Transferred incident radiance for point p is found from the distant incident lighting e(ωin) – denoted by
“source radiance” in the figure – by a linear transformation that depends on p. When both the distant incident radiance and transferred incident radiance at p are represented in a function basis by coefficient
vectors, the linearity of the transformation enables computation of the coefficients for transferred
incident radiance by a matrix-vector multiplication from the coefficients of incident radiance. The
matrix depends on p. This matrix that transforms incident lighting into transferred incident radiance
is called a transfer matrix. These matrices are precomputed for each vertex on the object. We present
a detailed derivation of these matrices as a special case of our framework in Section 4.3.
Once the coefficients for transferred incident radiance have been computed, the resulting function still needs to be integrated against the BRDF and the cosine at p to yield outgoing radiance into the viewing direction. Methods
of integration will be reviewed later in Section 4.4.
Compression. The transfer matrices for the vertices consume a significant amount of memory.
For instance, if both incident and transferred radiance are represented using 25 spherical harmonic coefficients, this amounts to a 625-element matrix for each vertex in the scene.
It is possible to compress the matrices using methods from machine learning and information
processing science. The author and Kautz [49] proposed a global PCA (Principal Component Analysis) technique. This method finds a least-squares-optimal set of basis matrices and represents the
transfer matrices of the vertices as linear combinations of these basis matrices. This also speeds up
rendering. However, since the matrices vary significantly over the object, this global method needs
a large number of basis matrices for faithful reconstruction.
Sloan et al. [67] use Clustered Principal Component Analysis (CPCA), a better method of compression. Here, the matrices are initially partitioned into N clusters that contain only matrices that are
close to each other in some norm. N is chosen by hand in advance. After clustering, ordinary PCA
is performed on each cluster separately. Since each cluster only contains similar matrices, a low
number of basis matrices will suffice for each cluster. This yields faster rendering speeds and better
quality than global PCA.
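As an illustration of the compression step, the following Python sketch clusters flattened per-vertex transfer matrices and runs PCA inside each cluster. It is not the implementation of Sloan et al.: the array transfer, the cluster count, the basis size, and the use of scikit-learn's KMeans and PCA are all assumptions made for the example.

    # A minimal CPCA sketch (illustrative, not the authors' code): `transfer` is assumed
    # to be an (n_vertices, 625) array of flattened 25x25 transfer matrices.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    def compress_cpca(transfer, n_clusters=8, n_basis=4):
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(transfer)
        clusters = []
        for c in range(n_clusters):
            members = transfer[labels == c]
            pca = PCA(n_components=min(n_basis, len(members)))
            weights = pca.fit_transform(members)      # per-vertex blending weights
            clusters.append((pca.mean_, pca.components_, weights))
        return labels, clusters

    def reconstruct(labels, clusters, vertex):
        # Approximate transfer matrix of one vertex: cluster mean + weighted basis matrices.
        c = labels[vertex]
        mean, basis, weights = clusters[c]
        row = np.flatnonzero(labels == c).tolist().index(vertex)
        return mean + weights[row] @ basis

At run time only the cluster mean and a handful of basis matrices per cluster need to be multiplied by the lighting coefficients, which is where the speed-up mentioned above comes from.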
Wavelet Methods
Ng et al. [53] presented a technique for relighting images of scenes under arbitrary, time-varying, possibly high-frequency lighting. Their method also assumes that the incident lighting is distant. Incident radiance is represented using a cube map parameterization. The method allows changing the viewpoint
if the scene is diffuse.
The method achieves interactive framerates with lighting that has arbitrarily high frequencies up to a
limit imposed by the cube map resolution. The key idea is to exploit the good decorrelation property
of the wavelet transform: The authors transform the incident radiance into the Haar basis and select
only a subset of coefficients that contain the largest amount of energy. This procedure is known
as non-linear wavelet approximation. As the transfer matrices have also been computed using the
wavelet basis, determination of the final outgoing radiances in the diffuse case and pixel colors in
the glossy case boils again down to a sparse matrix-vector multiplication.
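The following sketch is my own illustration of the non-linear approximation idea, shown on a one-dimensional signal with the orthonormal Haar transform; the cube map faces used in the actual method are handled with the analogous two-dimensional transform.

    # Illustrative sketch of non-linear Haar approximation; input length is assumed
    # to be a power of two.
    import numpy as np

    def haar_1d(signal):
        coeffs = np.asarray(signal, dtype=float).copy()
        n = len(coeffs)
        while n > 1:
            half = n // 2
            avg  = (coeffs[0:n:2] + coeffs[1:n:2]) / np.sqrt(2.0)   # scaling part
            diff = (coeffs[0:n:2] - coeffs[1:n:2]) / np.sqrt(2.0)   # detail part
            coeffs[:half], coeffs[half:n] = avg, diff
            n = half
        return coeffs

    def nonlinear_approximation(coeffs, k):
        # Keep only the k coefficients with the largest magnitude, zero out the rest.
        kept = np.zeros_like(coeffs)
        idx = np.argsort(np.abs(coeffs))[-k:]
        kept[idx] = coeffs[idx]
        return kept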
An inherent problem with non-linear wavelet approximation is temporal incoherency; when the
lighting changes, the coefficients used for its representation may change abruptly, and this results in
flickering of the lights in the final animation.
The Haar wavelet method was recently extended to glossy objects and changing viewpoints by employing a separable BRDF approximation. The same principle was discovered in concurrent work
by both Xiu et al. [50] and Wang et al. [77].
4.2 Mathematical Framework
This section formulates a mathematical framework for PRT methods. The key ingredients are the
emission space and the initial transport operator V . The emission space will be a low-dimensional
linear space that controls the emission of light. The initial transport operator is a linear operator that
maps a vector from the emission space onto a radiance distribution in the scene. It should be noted
that before discretization the operator V is infinite-dimensional.
As previous work has shown, computing the light incident upon the sample points on the object is
more fruitful than solving directly for outgoing radiance, since this allows increased visual fidelity
by changing the method for computing the outgoing radiance from the final surface interaction. For
this we rely on the rendering equation for incident lighting (3.10) from Section 3.3.2.
We begin by discussing the emission space and initial transport operator.
4.2.1 Emission Space and the Initial Transport Operator
The Emission Space E.
Often the situation in image synthesis tasks is such that the emission of light into the scene is relatively well described by a low number of parameters. Perhaps the simplest illustrative case is a
closed room with multiple light sources. If there are N light sources that we assume to have a fixed
orientation in the scene, the emission of light into the scene may be described exactly using N arguments: The intensities of the light sources. It is also intuitively clear that the appearance of the
scene under a lighting configuration that is an arbitrary linear combination of these light sources
can be computed by combining appearances that have been precomputed using a single light at unit
intensity at a time, so that these “basis appearances” are weighed using the intensities of the light sources. This is easy to verify by substituting e = ∑_i αi ei(·) into (3.7). In this simple case of N
light sources, the emission space has dimension N , and the degrees of freedom control the intensities
of individual light sources.
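A sketch of this idea in Python is given below; render_with_unit_light is a hypothetical offline global illumination renderer introduced only for the example.

    # Sketch of relighting by blending precomputed basis solutions; the renderer is hypothetical.
    import numpy as np

    def precompute_basis_images(scene, n_lights, render_with_unit_light):
        # One precomputed appearance per degree of freedom of the emission space.
        return np.stack([render_with_unit_light(scene, i) for i in range(n_lights)])

    def relight(basis_images, intensities):
        # The appearance under e = sum_i alpha_i e_i is the correspondingly weighed sum of basis images.
        return np.tensordot(np.asarray(intensities, dtype=float), basis_images, axes=1)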
Emissions need not be due to traditional light sources, however. For instance, already the method
of Nimeroff et al. [56] used a function basis for representing the light emitted by the sky onto the
scene. The same basic approach is taken in the work of Sloan et al., where the distant incident
lighting is projected into the spherical harmonic basis. In this case the dimension of E is the same
as the number of coefficients used for representing incident lighting.
The radiance distributions due to individual degrees of freedom in the emission space do not need to
be physical. For instance, such a distribution can well have negative values – this is always the case
when using the spherical harmonics, since they have both positive and negative values on the sphere.
This is no cause for concern, though: The important thing is that useful physical emissions may be
represented using the basis. For a related example, the well-known two-dimensional Fourier basis
functions have both positive and negative values, and they may even be taken to be complex-valued.
Still, they are perfectly well suited for representing images with only non-negative intensities.
The Initial Transport Operator
Even when we have fully characterized the emission space, we do not yet have any information
as to how it lights the environment. This is where the initial transport operator V steps in – to
answer the question “given that my emission vector is e, what is the resulting direct incident radiance
distribution e(x ← ω) on the scene?” Put mathematically, V maps linearly from the emission space
to the space of incident radiance functions on the scene, i.e., V : E → X(S × Ω).
A more intuitive picture is offered by the following examples.
Lighting from Environment Maps. Consider a scene lit by distant and low-frequency spherical
lighting, e.g., skylight, that has been represented using some number of spherical harmonic coefficients. That is, the emission is given as
e(ωin) = ∑_{i=1}^{n} ei yi(ωin),
where the yi are the spherical harmonics. Now, by our definition, e(x ← ωin) = (V e)(x, ωin), i.e., the
operator V turns this spherical skylight function e(ωin ) that is represented by the coefficient vector
e into a spherical function e(x ← ωin ) that varies with surface location x in the scene. This function
is defined as
e(x ← ωin) = (V e)(x, ωin) = e(ωin) v(x, ωin)
           = [ ∑_{i=1}^{n} ei yi(ωin) ] v(x, ωin)   (4.1)
           = ∑_{i=1}^{n} ei [yi(ωin) v(x, ωin)] = ∑_{i=1}^{n} ei t_i^SH(x, ωin),
where v(x, ωin) is the binary visibility function that has value one if the skydome is visible from point x into direction ωin and zero otherwise, and t_i^SH(x, ωin) := yi(ωin) v(x, ωin) denotes a transfer function for the i:th emission basis function. The linearity of the operation with respect to the incident coefficients ei is apparent from the above. In essence, the operator V produces locally self-shadowed incident radiance distributions in the scene by linear combinations of the transfer functions t_i^SH.
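For concreteness, the following sketch evaluates the transfer functions and the resulting incident radiance pointwise; y_basis (the directional basis functions) and visible (a ray cast towards the skydome) are hypothetical stand-ins introduced only for the example.

    # Pointwise evaluation of (4.1); y_basis and visible are assumed helpers, not part of the framework.
    def transfer_function_SH(i, x, omega, y_basis, visible):
        # t_i^SH(x, omega) = y_i(omega) * v(x, omega)
        return y_basis[i](omega) * (1.0 if visible(x, omega) else 0.0)

    def incident_radiance(e, x, omega, y_basis, visible):
        # (V e)(x, omega) = sum_i e_i t_i^SH(x, omega)
        return sum(e[i] * transfer_function_SH(i, x, omega, y_basis, visible)
                   for i in range(len(e)))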
Higher-frequency lighting environments may be represented better using a wavelet basis, as in the
method of Ng et al. [53]. In this case the above development of the initial transport operator stays
fundamentally the same – only the expressions for the wavelet functions replace the spherical harmonics yi .
N Separate Area Lights. For N static area lights within a room, the operator V has a slightly
different form. This is because the visibility from a point x to each light source is different, whereas
in the previous example the visibility function to the sky is the same for all basis functions. The
formula that defines V now becomes
"N
#
X
0
V e = (V e)(x, ωin ) = e(x ← ωin ) =
ei li (x → −ωin ) vi (x, ωin )
i=1
=
N
X
ei [vi (x, ωin ) li (x0 → −ωin )] =
N
X
ei tarealights
(x, ωin ),
i
i=1
i=1
where x0 = r(x, ωin ), and the n different visibility functions vi (x, ωin ) now have value one if the
i:th light source is visible from x in direction ωin , the emission functions li (x, ωout ) define the (unit)
emitted radiance distribution of the i:th light source, and tarealights
(x, ωin ) := vi (x, ωin ) li (x0 →
i
−ωin ).
Combinations and Extensions
It is clear from the above that E may well be designed for instance to contain both skylight as
parameterized by spherical harmonics or wavelets and local light sources as in the second example.
Also, locally-defined spatial function bases may be used. For instance, a large surface might be fit
with a two-dimensional Fourier basis, so that the spatial variation in its emission function is
given by a Fourier series. The same could easily be done using a 2D wavelet basis. This way one
could render, for instance, the light bouncing off a silver screen due to a film projector, etc. The form
of the resulting operator V is straightforward in all cases.
It is also possible to leave out the direct lighting component from these computations. This opens up
an interesting topic for future work: Suppose that the emission space E is only a low-dimensional
approximation of the real emission space. In this case the indirect lighting would be computed
efficiently at a lower fidelity, while the direct component would be rendered more accurately using
a more specialized technique. As indirect lighting is most often very smooth, this would probably
increase the perceived quality of the renderings of spaces where indirect lighting is important, e.g.,
in indoor scenes.
4.2.2 The Equation for Precomputed Transfer
Using the emission space and initial transport operator introduced above, we are now in a position to piece together the operator equation for precomputed radiance transfer.
We repeat the rendering equation (3.10) for incident radiance ui here for convenience:
ui = Ti ui + ei.   (4.2)
Here ei = e(x ← ωin ) is the emitted radiance directly incident upon x and Ti is the incident
transport operator from Section 3.3.2 that maps an incident distribution ui into a new one through
one reflection. Now, due to the way we defined V , we may rewrite the equation as
ui = Ti ui + V e,   (4.3)
so that the emission is now given as a vector e ∈ Rn in the emission space E, and V e defines the
incident radiance function due to e.
The equation is similar in form to the rendering equation for incident radiance; the only difference
is that the emission function is determined from the vector e by V . Rearranging gives
(I − Ti)ui = V e   (4.4)
⇔ ui = (I − Ti)−1 V e.   (4.5)
The above immediately reduces to the usual rendering equation for incident radiance if the emission
space is taken to be the same as the space of radiance distributions on the scene, since in this case
V = I. Of course then the “vectors” in E are no longer finite-dimensional.
There are basically two possibilities for continuing from (4.5). We will describe them next.
Direct Discretization of the Transport Equation
The first possibility is to discretize (4.5) directly using a finite element method. As we have seen in
Chapter 2, this requires setting up a finite-dimensional approximating subspace and looking for the
approximate solution in this subspace. In the end this yields a finite representation of the transport
operator (I − Ti)−1 V, i.e., a matrix. Then, given an emission vector e, an approximation to ui can be found from e by multiplication with the discrete transport matrix. This approach will be
described in more detail in the next section.
After the approximate incident radiance solution ui has been determined, it needs to be reflected
once more by the local reflection operator R to produce outgoing radiance for rendering images of
the scene. A number of ways have been proposed for this task. Many of them will be reviewed in
detail in Section 4.4.
Reflection and Measurement
Another possibility is to apply the final reflection operator R directly, followed by a measurement
operator M : X(S × Ω) → Rm that produces m linear measurements of the outgoing radiance; for
instance, the mi may be tuned to give intensities of pixels from a particular viewpoint (see Appendix
A.3). This is essentially the method of Ng et al. [53].
As described in Section 3.3.2, outgoing radiance u may be found from incident radiance ui by
u = Rui ,
where R is the (local) reflection operator from equation (3.12) that transforms incident radiance into
outgoing radiance. The final equation now becomes
r = M R(I − Ti)−1 V e,   (4.6)
where r ∈ Rm is the resulting vector of measurements.
Now we are in a situation where we have a finite-dimensional result r that is determined by a linear operator from another finite-dimensional vector e; the relationship of r and e is represented by the finite matrix M R(I − Ti)−1 V. Intriguingly, even though the matrix is of finite size, we need (in principle) to invert an infinite-dimensional operator I − Ti to compute its entries!
We now briefly outline how (4.6) may be solved using a path tracing method and adjoint operators.
First, we expand (I − Ti )−1 by its Neumann series to get
r = M R(I + Ti + Ti² + . . .)V e.
We then note that since M produces m linear measurements, each of the m measurements must correspond to a unique linear functional mi ; i.e., the i:th component ri of r is computed by evaluating
ri = ⟨mi, R(I + Ti + Ti² + . . .)V e⟩   (4.7)
   = ⟨mi, RV e⟩ + ⟨mi, RTi V e⟩ + ⟨mi, RTi² V e⟩ + . . .
Of course, by linearity of V , we may expand V e as


V e = V ( ∑_{j=1}^{n} ej cj ) = ∑_{j=1}^{n} ej (V cj),
where ej denote the components of e and cj are the canonical basis vectors of Rn, i.e., the k:th
component of cj equals 1 iff j = k, and 0 otherwise. Each V cj gives the radiance distribution that
results from a single degree of freedom in the emission space E. Substituting this into (4.7) gives
ri = ∑_{j=1}^{n} ej [ ⟨mi, RV cj⟩ + ⟨mi, RTi V cj⟩ + ⟨mi, RTi² V cj⟩ + . . . ],
or, in other words, the (i, j):th entry of the matrix M R(I − Ti)−1 V is given by
⟨mi, RV cj⟩ + ⟨mi, RTi V cj⟩ + ⟨mi, RTi² V cj⟩ + . . . ,   (4.8)
where the linear functionals mi are defined as above. The intuitive significance of the terms RTi V cj ,
RTi² V cj, etc. is clear; they represent the initial incident radiance distributions going through a
number of reflections and being finally transformed into outgoing radiance by R.
Now, using the adjoint operators R∗ , Ti∗ and V ∗ , all of the terms may be evaluated by propagating the
functionals mi in the scene by R∗ and successive applications of T ∗ , and finally onto the emission
space by V ∗ . For example,
⟨mi, RTi² V cj⟩ = ⟨R∗ mi, Ti² V cj⟩ = . . . = ⟨V ∗ Ti∗ Ti∗ R∗ mi, cj⟩ = (V ∗ Ti∗ Ti∗ R∗ mi)j.
In practice, one would compute the whole vector V ∗ Ti∗ Ti∗ R∗ mi simultaneously using path tracing,
and then use all of its components to increment the matrix elements at all columns.
In summary, we have shown that the transport matrix may be evaluated using a modified Monte
Carlo path tracing algorithm. The procedure for determining the adjoint of the transport operator T
presented in Appendix A.2 may be used for constructing R∗ . A derivation for V ∗ is presented in
Appendix A.5.
Image Relighting. Some image relighting methods may be seen as a special case of the above.
For instance, Debevec et al. [19] fix a digital video camera looking at a subject, let a point-like light
source rotate around the subject and record an image for each light direction. Here, taking a picture
corresponds to application of the operator M where each measurement functional corresponds to the
value of a pixel in the image. Thus each image taken under illumination from a single light source
corresponds to a single column of the matrix M R(I − Ti )−1 V .
4.3 PRT by FEM: The Method of Sloan et al.
In the following sections we show how the original method of Sloan et al. [68] may be derived from
the operator framework we presented above by discretizing (4.5) using a finite element method. The
discussion here is more general than what Sloan et al. presented. In particular, we derive the method
using general function bases for representing incident illumination. The original method is then
obtained by choosing the spherical harmonics as the basis for incident radiance.
4.3.1 Discretization of the Incident Radiance Field
In order to apply the finite element method for discretizing (4.5), we need to set up a finite-dimensional function space in which we look for the approximate solution. The function ui(x ←
ωin ) is defined on a four-dimensional domain; two dimensions for x and ωin each.
We choose to base our approximating subspace on a tensor product construction. This means that we
define two basis sets, one with only a spatial argument and another with a directional argument only;
the resulting tensor product basis consists of all pairs with one spatial and one directional function,
multiplied together. More exactly, when the spatial basis contains functions {φi(x)}_{i=1}^{n} and the directional basis contains functions {ψj(ω)}_{j=1}^{m}, the approximation is of the form
ui(x ← ωin) ≈ ∑_{i=1}^{n} ∑_{j=1}^{m} α(ij) φi(x) ψj(ωin),   (4.9)
with the coefficients α(ij) .
Numbering of Degrees of Freedom. Even though α has two indices, it is essentially a vector,
not a matrix. This is because it encodes a representation for a single function, not a linear operator.
We define the notation convention (ij) = (i − 1) ∗ m + j for “unwrapped” indices: The intuitive
meaning of (ij) is “the j:th directional component that is associated with i:th spatial component.”
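The convention can be made concrete with a small sketch; the coefficient vector alpha and the basis function lists phi and psi are assumptions of the example.

    # Unwrapped indexing and evaluation of the tensor product approximation (4.9).
    def unwrap(i, j, m):
        # (ij) = (i - 1) * m + j, with 1-based i and j as in the text.
        return (i - 1) * m + j

    def u_incident(x, omega, alpha, phi, psi):
        n, m = len(phi), len(psi)
        total = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                total += alpha[unwrap(i, j, m) - 1] * phi[i - 1](x) * psi[j - 1](omega)
        return total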
Spatial Basis. We build the spatial basis of piecewise linear functions. These functions are centered at vertices, and fall linearly off towards adjacent vertices within the triangles that connect to
the vertex; these are the classical piecewise linear “hat” functions (or linear Lagrange elements [6])
often used in Finite Element calculations. If there are n vertices {xj}_{j=1}^{n}, this gives rise to n functions {φi(x)}_{i=1}^{n}, which have the property φi(xj) = δij. In other words, the function φk associated
with vertex k has value zero at all the other vertices. This will be important when we discretize the
spatial part of the transport equation using the point collocation method.
Directional Basis. Since we need to represent four-dimensional functions, the spatial hat functions
have to be augmented by directional functions. The directional space is spanned by m functions of
one direction argument: {ψj(ωin)}_{j=1}^{m}. The functions ψ may be chosen in a number of different ways. For instance, Sloan et al. chose the spherical harmonics, but other basis sets are possible, too. As demonstrated in Section 2.5.2, the coefficients of the linear system resulting from the Galerkin method are easier to compute when orthogonality is required w.r.t. the dual basis and not the primal basis. This is why we also need the dual basis {ψ̃j}_{j=1}^{m}. Naturally, if the basis {ψj} is orthonormal, the situation is simpler, since then ψ̃j ≡ ψj by definition.
4.3.2 Discretization of the Transport Equation
Our aim in this section is to derive an expression for the coefficients α that approximate the solution
of the PRT equation (4.5); i.e., we seek the linear combination of the form (4.9) that closely matches
the solution of (4.5).
Spatial Discretization. We first use the point collocation method with respect to the spatial coordinate. That is, we require that equation (4.5) holds exactly (only) on a set of n points {xk}_{k=1}^{n}, and choose these so that xk is the center of the function φk. Put in a formula, we require that
ui(xk ← ωin) = (Ti ui)(xk ← ωin) + (V e)(xk ← ωin)   ∀ k = 1, . . . , n.   (4.10)
Even though the function is now “captured” at the collocation points xk , the variation of the function
over ωin is still unconstrained, i.e., this system of n equations is not yet fully discrete, and thus we
cannot solve it as a finite linear system. A system of this kind is called semidiscrete. Also the
directional part must be discretized in order to obtain a finite linear system.
Directional Discretization. To discretize the n remaining semidiscrete equations, we use
Galerkin’s method on the directional part. We do this, since using the collocation method would
essentially reconstruct an approximate solution from point samples. In the present situation this is
not advisable, since the number of collocation points (m) would be small, but the incident radiance function can contain arbitrarily high frequencies. This combination would lead to bad aliasing
artifacts.
We require that in all the n equations, the residual is orthogonal to the directional dual basis
{ψ̃l(ω)}_{l=1}^{m}, i.e., we use the variant of the Galerkin method described at the end of Section 2.5.2. In
practice, we take inner products with respect to the directional variable with all the dual functions
ψ̃l and set the residual to zero:
⟨ ui(xk ← ωin) − (Ti ui)(xk ← ωin) − (V e)(xk ← ωin), ψ̃l ⟩ = 0   ∀ k = 1, . . . , n, ∀ l = 1, . . . , m.   (4.11)
Here and below the inner product ⟨·, ·⟩ denotes the inner product on the directional space only, i.e.,
integration is only performed over ωin .
Full Discretization. Since we assume ui is a finite linear sum of the form (4.9), the last equation is
fully discrete; its unknowns have no continuous variables. We are now free to extract the underlying
linear system by substituting (4.9) in place of ui . This yields
⟨ ∑_{i,j} α(ij) φi(xk) ψj(·) − (Ti ∑_{i,j} α(ij) φi ψj)(xk, ·) − (V e)(xk, ·), ψ̃l ⟩
= ∑_{i,j} α(ij) ⟨ φi(xk) ψj(·), ψ̃l ⟩ − ∑_{i,j} α(ij) ⟨ (Ti φi ψj)(xk, ·), ψ̃l ⟩ − ⟨ (V e)(xk, ·), ψ̃l ⟩ = 0   (4.12)
∀ k = 1, . . . , n, ∀ l = 1, . . . , m.
This may be written in matrix form as
Sα − Tα = β,   (4.13)
where
S(kl)(ij) = ⟨ φi(xk) ψj, ψ̃l ⟩ = φi(xk) ⟨ ψj, ψ̃l ⟩,   (4.14)
T(kl)(ij) = ⟨ (Ti φi ψj)(xk, ·), ψ̃l ⟩,   (4.15)
and the coefficients
β(kl) = ⟨ (V e)(xk, ·), ψ̃l ⟩   (4.16)
represent the projection of the incident lighting V e onto the approximating tensor product subspace. By the construction of our approximating subspace, S = I: This is seen from φi(xk) = δki and ⟨ ψj, ψ̃l ⟩ = δjl.
Computing the Vector β. Using the same trick as in Section 4.2.2, subsection Reflection and Measurement, we may write any e ∈ E as ∑_{s=1}^{dim E} es cs, where the cs are the canonical basis vectors of R^{dim E}. This leads to
β(kl) = ⟨ (V e)(xk, ·), ψ̃l ⟩ = ∑_s es ⟨ (V cs)(xk, ·), ψ̃l ⟩.   (4.17)
Said differently, the coefficients β are linear in e:
β = V e,   (4.18)
where the elements of V ∈ R^{mn × dim E} are given by
V(kl)s = ⟨ (V cs)(xk, ·), ψ̃l ⟩.   (4.19)
The evaluation of these elements is easy. For an element (kl)s, one needs to form the transfer
function for emitter s and take the directional inner product of the transfer function restricted to
vertex xk against the dual basis function ψ̃l. This amounts to a simple spherical integral for each
entry.
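One possible way to evaluate such an entry is plain Monte Carlo integration over the sphere, as in the sketch below; emitter_radiance (the transfer function of emitter s seen from xk) and psi_dual (the dual basis function ψ̃l) are hypothetical callables assumed for the example.

    # Monte Carlo estimate of V_(kl)s = <(V c_s)(x_k, .), dual psi_l>; helpers are assumed.
    import numpy as np

    def uniform_sphere_samples(count, rng):
        z = rng.uniform(-1.0, 1.0, count)
        phi = rng.uniform(0.0, 2.0 * np.pi, count)
        r = np.sqrt(1.0 - z * z)
        return np.column_stack((r * np.cos(phi), r * np.sin(phi), z))

    def v_entry(emitter_radiance, psi_dual, n_samples=4096, seed=0):
        rng = np.random.default_rng(seed)
        omegas = uniform_sphere_samples(n_samples, rng)
        values = [emitter_radiance(w) * psi_dual(w) for w in omegas]
        # Uniform sampling over the sphere: the estimator is 4*pi times the sample mean.
        return 4.0 * np.pi * float(np.mean(values))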
Final, Fully Discrete Equation. Using all the pieces derived above, our final, discrete equation is
(I − T)α = V e  ⇔  α = (I − T)−1 V e.   (4.20)
This is not a surprising result; the matrix (I − T )−1 V maps linearly from emission space E into
our approximating subspace. The dimension of the matrix is mn × dim E.
4.3.3 Solving the Discrete System
To present a method for solving the discrete system (4.20) derived above, we approximate the inverse
(I − T )−1 by a truncated Neumann series. This is legitimate, since energy considerations show that
‖T‖ < 1. This yields
α = (I − T)−1 V e = ∑_{i=0}^{∞} T^i V e ≈ ∑_{i=0}^{N} T^i V e = (I V + T V + T² V + . . . + T^N V) e =: M e,   (4.21)
where N is the number of terms used in the truncated Neumann series and M ∈ R^{(mn) × dim E}.
Again, the intuition in the preceding formula for M is obvious; the lighting incident onto a point
on the surface of the object is the sum of direct illumination (IV )e, illumination reflected once
(T V)e, two times (T² V)e, and so on.
Computing the Entries of M
Here we describe how to compute the entries of the solution matrix. We begin by noting that the
sum that defines M may be written recursively:
M = ∑_{t=0}^{N} T^t V = ∑_{t=0}^{N} Mt,   with Mt := T^t V.   (4.22)
Now clearly
Mt = T Mt−1 for t ≥ 1,   and   M0 = V.   (4.23)
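In dense matrix form this recursion amounts to the few lines below; the discretized transport matrix T and the matrix V of (4.19) are assumed to be available as NumPy arrays.

    # Sketch of (4.22)-(4.23): accumulate M = V + T V + ... + T^N V via M_t = T M_{t-1}.
    import numpy as np

    def accumulate_transfer(T, V, n_bounces):
        M_t = V.copy()      # M_0 = V: direct incident lighting per emission degree of freedom
        M = V.copy()
        for _ in range(n_bounces):
            M_t = T @ M_t   # propagate the previous bounce once more through the scene
            M += M_t        # add this bounce's contribution
        return M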
Expanding the definition of the matrix-matrix product above gives
Mt(kl)s = ∑_{i,j} T(kl)(ij) M(t−1)(ij)s = ∑_{i,j}^{n,m} ⟨ (Ti φi ψj)(xk, ·), ψ̃l ⟩ M(t−1)(ij)s.   (4.24)
This equation already suggests an algorithm for computing M . Before presenting the algorithm, let
us first examine the cases t = 1, 2 to gain intuitive understanding of the above equation. Expanding
M 1 gives
M1(kl)s = ∑_{i,j}^{n,m} T(kl)(ij) V(ij)s   (4.25)
        = ∑_{i,j}^{n,m} ⟨ (Ti φi ψj)(xk, ·), ψ̃l ⟩ ⟨ (V cs)(xi, ·), ψ̃j ⟩.   (4.26)
This formula has an intuitive interpretation: The factor on the right describes light incident from the
s:th emitter captured by the basis function pair (ij), and the factor involving the operator Ti gives
the fraction of this light reflected from (ij) and captured by the pair (kl). In a similar fashion, the
element M2(kl)s describes the light from emitter s captured by (kl) via two bounces:
M2(kl)s = ∑_{i,j}^{n,m} T(kl)(ij) M1(ij)s = ∑_{i,j}^{n,m} ⟨ (Ti φi ψj)(xk, ·), ψ̃l ⟩ M1(ij)s,   (4.27)
and similarly for the higher-order bounces.
Evaluation of these elements is, in principle, straightforward. The factor ⟨(Ti φi ψj)(xk, ·), ψ̃l⟩ is the directional inner product, evaluated at vertex xk, of the once-reflected basis function pair (ij) and the directional dual basis function ψ̃l. This means that the radiance reflected by the basis function pair (ij) towards the vertex xk must be integrated against ψ̃l. This amounts to evaluating a spherical double integral, since for each direction ωin as seen from xk, the light reflected by the pair (ij) towards xk is itself a spherical integral (see Section 3.3.2).
A Practical Algorithm. The above method for computing the entries is not as practical as one
could hope for. A simpler (and more intuitive) way is to recursively propagate the unit emissions
through the whole scene one bounce at a time.
This can be done by first initializing the transfer matrix M with V . Then, for each bounce, we
iterate through all collocation points xk , with k = 1, . . . , n, and by using some hidden-surface
method, such as shooting rays or rendering a cube map centered on xk, evaluate all the entries Mt(kl)s
simultaneously. This procedure is merely a transposition of the equation (4.24): Instead of separately
evaluating the entry Mt(kl)s for each (l, s), we process one ωin at a time. For each direction ωin , we
determine the point where the ray from xk towards ωin hits the scene and find the basis function
pairs (i, j) that have a nonzero value at the point r(xk , ωin ). Then, for each such pair (i, j), the
product M(t−1)(ij)s (Ti φi ψj )(xk , ωin ) is evaluated, and the result is added to the master copy of
M(kl)s. For this to work, we always need to keep a copy of Mt−1 in memory during the time
when M t is evaluated. For performance reasons, (Ti φi ψj ) should be evaluated using one of the
methods reviewed in Section 4.4 instead of direct quadrature.
An Alternative Method of Solution
The solution presented above is based on the traditional finite element procedure: First constructing an approximating subspace and then teasing a linear system out by constraining the residual.
However, another method, based on adjoint equations and path tracing, can also be employed. It is
based on the fact that we wish to capture the incident radiance distribution that results from a given
emission vector at a finite number of points {xk } in the scene by projecting the function into a linear
basis.
Projecting the solution ui (x ← ωin ) into a function basis in an inner product sense requires computing the inner products of the radiance function and dual basis functions. The dual basis functions
may be taken to be linear functionals, and they may be transported using adjoint operators. This is
the idea behind the method of Green et al. [26]. This results in a path-tracing-like method almost
identical to the one presented in Section 4.2.2.
Notation Shorthands. A Comparison to the Method of Sloan et al.
In the original 2002 paper Sloan et al. derived essentially the above method without explicitly utilizing the finite element method as we have done. Their notation is also different from ours. However,
if we partition the matrix M into submatrices according to the spatial basis function index, we will
end up with a separate m × dim E matrix M k for each vertex xk; i.e.,
(M k)ls = M(kl)s.
This notation is consistent with the one used by Sloan et al. We may call each matrix M k the transfer
matrix for vertex xk , since multiplication of the emission vector e by M k produces the coefficients
that represent the directionally-varying transferred incident radiance at xk .
Analogously, we also introduce the shorthand vectors αk, whose entries are given by (αk)l = α(kl).
We also note that the method for evaluation of the elements of M presented by Sloan et al. is
essentially the same as what we derived above, although their presentation contains elements specific
to the use of spherical harmonics. We stress that our above derivation does not rely on any particular
directional basis set.
4.3.4 Local vs. Global Coordinate Frames
The finite element method that is derived above is based on a tensor product construction, i.e., the
functions {ψl}_{l=1}^{m} are the same all over the scene. However, in certain situations it is beneficial to
represent the transferred incident radiance distribution in a local tangent frame on the object. Clearly,
the representation of a spherical function in another (orthogonal) coordinate system is accomplished
by rotating the function.
The need for representing incident radiance in the local frame arises in the following section, where
we review methods for turning the incident radiance distributions to outgoing radiance. The principal
reason is that in the local tangent frame the BRDF and the cosine factor are in their simplest form:
The cosine factor is constant no matter what the orientation of the surface is in object space, and if
the BRDF does not vary over the object, the BRDF representations we make use of later will also be
independent of position.
Rotation of a spherical function around a fixed axis for a fixed angle is clearly a linear operation:
The rotation of a sum of functions is the sum of rotated functions. This means that if a function
is expressed in a function basis by a coefficient vector, the coefficients representing the rotated
function can be found from the original coefficients by a linear transformation, i.e., by a matrix
multiplication. The form of the rotation matrices naturally depends on the basis and the axis and
angle of rotation. However, with general basis sets this transformation is not necessarily exactly
invertible by the opposite rotation, i.e., the rotated function loses some information in the process.
In the most general case the rotation matrices can be evaluated by numerical quadrature.
Fortunately the spherical harmonics have a valuable property: They are closed under rotation. This
means that any rotated spherical harmonic function may be represented exactly as a sum of unrotated
spherical harmonics which do not contain frequencies higher than the rotated harmonic. This property is equivalent to the well-known translational formula of the Fourier basis functions in Cartesian
spaces. Formulas for rotating spherical harmonic functions are given by Ramamoorthi and Hanrahan
[58] and Kautz et al. [42].
From now on we denote the coefficients of the incident radiance function for collocation point xk ,
expressed in the local tangent frame, by ᾱk . As described above, these coefficients are found from
the normal unbarred coefficients by
ᾱk = Rk αk ,
where the matrix Rk rotates from the global frame onto the local frame at xk .
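For a general (orthonormal) directional basis, the quadrature approach mentioned above could look like the sketch below, where psi is a hypothetical list of basis functions and rot a 3 × 3 rotation matrix; for the spherical harmonics the closed-form rotation formulas cited above are of course preferable.

    # Rotation matrix entries R_ij ~ integral over the sphere of psi_i(w) psi_j(rot^-1 w) dw,
    # estimated with uniform Monte Carlo; valid for an orthonormal basis.
    import numpy as np

    def rotation_matrix_by_quadrature(psi, rot, n_samples=8192, seed=0):
        rng = np.random.default_rng(seed)
        z = rng.uniform(-1.0, 1.0, n_samples)
        phi = rng.uniform(0.0, 2.0 * np.pi, n_samples)
        r = np.sqrt(1.0 - z * z)
        w = np.column_stack((r * np.cos(phi), r * np.sin(phi), z))   # uniform directions
        w_back = w @ rot          # rows are rot^-1 w, since rot^-1 = rot^T for a rotation
        m = len(psi)
        Fi = np.array([[psi[i](d) for d in w] for i in range(m)])
        Fj = np.array([[psi[j](d) for d in w_back] for j in range(m)])
        return (4.0 * np.pi / n_samples) * Fi @ Fj.T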
4.3.5 Rendering
This section presents an overview of the process of rendering pictures of scenes using the transfer
information M . Skipping the details on how to compute outgoing radiance from transferred incident
radiance, the overall process is
1. Choose the vector e ∈ E that describes emission of light.
2. For each vertex xk in the scene, determine the coefficients ᾱk for transferred incident radiance
by evaluating ᾱk = Rk M k e.
3. Draw the scene. For each pixel in the rendered image, interpolate the coefficients ᾱk and
determine outgoing radiance from the corresponding point towards the camera by one of the
methods described in the next section.
In most cases, this process can be further approximated by computing the outgoing radiances only
at the vertices, and linearly interpolating them instead of the coefficients for transferred incident
radiance.
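A per-frame sketch of this process is given below; the arrays R and M hold the per-vertex rotation and transfer matrices, and shade is a hypothetical routine implementing one of the methods of the next section.

    # Per-frame rendering sketch; vertex colors would be interpolated over triangles by the rasterizer.
    import numpy as np

    def render_frame(e, R, M, vertices, camera_pos, shade):
        colors = []
        for k, x in enumerate(vertices):
            alpha_bar_k = R[k] @ (M[k] @ e)                  # step 2: transferred incident radiance
            view_dir = np.asarray(camera_pos) - np.asarray(x)
            view_dir = view_dir / np.linalg.norm(view_dir)   # direction from the point towards the camera
            colors.append(shade(alpha_bar_k, view_dir))      # step 3: outgoing radiance towards the camera
        return colors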
4.4 Determining Outgoing Radiance from Transferred Incident Radiance
In the last section we derived formulas for determining an approximation to the incident radiance
field generated by a low-dimensional function space of emissions. As such, the incident radiance
field is not yet sufficient for rendering images, since the pixels should be assigned outgoing radiances.
This section discusses different ways of obtaining outgoing radiance from the incident radiance
solution. Multiple methods have been proposed for this task. First, we review the original method
from Sloan et al. [68] and the immediate extension of Kautz et al. [42]. Then, a method first presented
by the author and Kautz [49] is described. Bi-scale radiance transfer [69] and techniques that rely on
separable BRDF approximation [50, 77] are treated next. A brief comparison between the different
methods concludes the section.
All of these methods, with the exception of the ones using separable BRDF approximations, start
from the assumption that the angular variation of transferred incident radiance is represented using
spherical harmonics.
Out of necessity, this section also deals with a number of different approaches for representing
BRDFs.
The BRDF Product Function fr∗ . In the remainder of the section, integrals that involve the BRDF
always include the cosine factor ⌊n(x) · ωin⌋ as well. For convenience, we define the BRDF product function fr∗ as the product of the BRDF and the cosine:
fr∗(x, ωin → ωout) := fr(x, ωin → ωout) ⌊n(x) · ωin⌋.
4.4.1 The Original Method of Sloan et al.
Sloan et al. [68] presented two methods for determining outgoing radiance from transferred incident
radiance. A method for diffuse BRDFs is derived below, after which a method for Phong-like glossy
BRDFs is briefly outlined.
Diffuse BRDF. If the surface of the object being shaded is perfectly diffuse, the appearance of a
point on the surface has no dependence on viewing direction, but only on the incident lighting. In
this case the outgoing radiance at a node point xk is just a suitably scaled, cosine-weighted integral
of the incident radiance function ui (xk ← ωin ):
u(xk → ωout) = (σ/π) ∫_Ω ui(xk ← ωin) ⌊n(xk) · ωin⌋ dωin,
where σ is the diffuse reflectance at xk. Since ui(xk ← ωin) = ∑_{l=1}^{m} α_l^k ψl(ωin) by our finite element construction, outgoing radiance is found by
u(xk → ωout) = (σ/π) ∫_Ω ( ∑_{l=1}^{m} α_l^k ψl(ωin) ) ⌊n(xk) · ωin⌋ dωin   (4.28)
             = (σ/π) ∑_{l=1}^{m} α_l^k ∫_Ω ψl(ωin) ⌊n(xk) · ωin⌋ dωin = ∑_{l=1}^{m} α_l^k b_l^k,
with b_l^k := (σ/π) ∫_Ω ψl(ωin) ⌊n(xk) · ωin⌋ dωin,
i.e., the incident radiance is turned into diffuse outgoing radiance by an inner product against the vector bk:
u(xk → ωout) = (bk)^T αk = (bk)^T M k e,
where superscript T denotes the transpose. The product of bk and M k can be premultiplied to yield
u(xk → ωout) = (tk)^T e,   (4.29)
with t_s^k := ∑_{l=1}^{m} b_l^k (M k)ls.
Put in words, diffuse reflectance simplifies the problem of determining final outgoing radiance into
a mere inner product.
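The diffuse case can be sketched as follows; the Monte Carlo estimation of the vectors bk and the list psi of directional basis functions are assumptions of the example, not the procedure used by Sloan et al.

    # Sketch of (4.28)-(4.29): cosine-weighted response vector b_k and the final dot products.
    import numpy as np

    def diffuse_response_vector(psi, normal, albedo, n_samples=4096, seed=0):
        # b_l^k = (albedo/pi) * integral of psi_l(w) * max(n . w, 0) over the sphere (Monte Carlo).
        rng = np.random.default_rng(seed)
        z = rng.uniform(-1.0, 1.0, n_samples)
        phi = rng.uniform(0.0, 2.0 * np.pi, n_samples)
        r = np.sqrt(1.0 - z * z)
        w = np.column_stack((r * np.cos(phi), r * np.sin(phi), z))
        cos_term = np.maximum(w @ np.asarray(normal, dtype=float), 0.0)
        b = [(4.0 * np.pi / n_samples) * np.sum(np.array([p(d) for d in w]) * cos_term) for p in psi]
        return (albedo / np.pi) * np.array(b)

    def diffuse_outgoing_radiance(b_k, M_k, e):
        # u(x_k -> w_out) = (t_k)^T e with t_k = M_k^T b_k, independent of the viewing direction.
        return (M_k.T @ b_k) @ e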
Here we should note that if the whole object has a purely diffuse reflectance, we should not bother
computing the transferred incident radiance with angular variation in the first place; it is more efficient to use the “Reflection and measurement” framework presented in Section 4.2.2 with the
clamped cosine factors ⌊n(xi) · ωin⌋ as components mi of the measurement operator. Doing this leads directly to the above result (4.29) with ri = u(xi → ωout).
Glossy, Phong-like BRDF. The case of glossy reflection is more demanding than the diffuse case,
since the directional variation of both the incident and outgoing radiance functions must be accounted for.
Relying on the results of Ramamoorthi and Hanrahan [58], reflection from an isotropic BRDF that
is circularly symmetric around the reflected viewing direction – a simple Phong-like model – may
be found by a spherical convolution. This convolution has a simple expression in spherical harmonics coefficients – it amounts to a simple componentwise multiplication of the transferred radiance
coefficients by a vector of coefficients derived from the BRDF. Outgoing radiance into the viewing
direction is found after this convolution by evaluating a spherical harmonic expansion with the coefficients of the convolved function into the mirror direction. It should be noted that this method does
not account for the cosine factor ⌊n(xk) · ωin⌋ and is thus not physically-based.
4.4.2 The Method of Kautz et al.
Kautz et al. [42] presented a method that allows usage of arbitrary, anisotropic BRDFs. We present
a derivation below, again concentrating on a single node point xk on the surface of the object.
In this method, the incident radiance function ui (xk ← ωin ) is represented as a sum in spherical
harmonics. Suppose that we have, for some collection of viewing directions ωout , tabulated the
spherical harmonic coefficients bi (ωout ) for the BRDF product function with respect to the incident
direction ωin :
fr∗(xk, ωin → ωout) ≈ ∑_{i=1}^{m} bi(ωout) yi(ωin).
This tabulation is done in a canonical coordinate system where the north pole of the unit sphere
corresponds to the surface normal.
Substituting this approximation and the definition of transferred, rotated incident radiance into the
local reflection equation yields
u(x → ωout) = ∫_Ω ui(x ← ωin) fr∗(xk, ωin → ωout) dωin
            = ∫_Ω [ ∑_{i=0}^{m} ᾱ_i^k yi(ωin) ] [ ∑_{j=0}^{m} bj(ωout) yj(ωin) ] dωin
            = ∑_{i=0}^{m} ᾱ_i^k bi(ωout),
where the coefficients ᾱk determine the incident radiance function, in local coordinates, at xk as
before. The result is that outgoing radiance may be found by computing an inner product between
the coefficients of the rotated incident radiance function and the view-dependent coefficients bi for
the BRDF. The authors used an environment map with parabolic parameterization for storing these
view-dependent coefficient vectors. This method can be written in matrix-vector form as
u(x → ωout) = b(ωout)^T Rk αk = b(ωout)^T Rk M k e,
where the matrix Rk rotates the incident radiance into the local frame. Rk can of course be premultiplied into M k . The extension of this method to basis sets other than the spherical harmonics is
trivial.
4.4.3 The Method of Lehtinen and Kautz
The author and Kautz [49] present another method for determining outgoing radiances from glossy
surfaces. Their technique is also based on representing the BRDF product function in spherical harmonics. The difference from the method of Kautz et al. is that the view-dependence is also represented by SH coefficients. This also means that the outgoing radiance into the viewing direction will be
represented by suitably transformed spherical harmonics coefficients, and thus it cannot be evaluated
by an inner product.
The doubly-projected spherical harmonics representation of the BRDF product function was first
presented by Westin et al. [78]. Lehtinen and Kautz showed that the resulting matrix of coefficients
is actually a linear operator that maps incident radiance (expressed in the SH basis) into outgoing radiance, again represented by spherical harmonic coefficients; the coefficients that represent outgoing
radiance are obtained from the incident coefficient vector by multiplication with the BRDF matrix.
Again, the BRDF representation is computed in the canonical coordinate frame, so that the incident
radiance function must first be rotated into the canonical frame. We derive the method below.
First, we present the doubly-projected BRDF representation of Westin et al. To derive the projection,
we start in a way similar to the method of Kautz et al. (see 4.4.2 above); for each viewing direction
ωout , we project the BRDF product function into the spherical harmonics:
fr∗(xk, ωin → ωout) ≈ ∑_{i=1}^{m} bi(ωout) yi(ωin),   (4.30)
where the coefficients bi are view-dependent, and m is the number of the spherical harmonics used
for representing transferred incident radiance. Now, we note that for each i, the coefficients bi (ωout )
are scalar functions on the sphere; we are thus free to project these coefficients into SH. This yields


fr∗(xk, ωin → ωout) ≈ ∑_{i=1}^{m} [ ∑_{j=1}^{N} bji yj(ωout) ] yi(ωin),   (4.31)
where the matrix with entries bji now defines an approximation of the BRDF. Note that the number
N of SH coefficients used to represent the view-dependency does not need to be the same as the
order of incident lighting.
Next we substitute the approximation derived above into the local reflectance equation (3.6), with
incident lighting represented by an SH expansion:
u(xk → ωout) = ∫_Ω ui(xk ← ωin) fr∗(xk, ωin → ωout) dωin
             = ∫_Ω [ ∑_{l=0}^{m} ᾱ_l^k yl(ωin) ] [ ∑_{i=0}^{m} ∑_{j=0}^{N} bji yj(ωout) yi(ωin) ] dωin,
where ᾱk again represents the coefficients of incident radiance rotated to the local frame. Now,
rearranging and using the orthonormality of the SH basis gives


u(xk → ωout) = ∑_{l=0}^{m} ᾱ_l^k ∑_{i=0}^{m} ∑_{j=0}^{N} bji [ ∫_Ω yl(ωin) yi(ωin) dωin ] yj(ωout)
             = ∑_{j=0}^{N} [ ∑_{l=0}^{m} ᾱ_l^k bjl ] yj(ωout),
the desired result. Denoting α̃_j^k = ∑_{l=0}^{m} ᾱ_l^k bjl, i.e., α̃k = B ᾱk = B Rk αk = B Rk M k e, where B is the N × m matrix with entries bjl, Rk is the matrix that rotates from global to local frame at xk and M k is the transfer matrix for vertex k, we have
u(xk → ωout) = ∑_{i=0}^{N} α̃_i^k yi(ωout).
The authors note that using higher-order projections for the view-dependence, e.g., a 7th-order projection that corresponds to N = 64, requires artificial attenuation of the higher-band coefficients to
prevent unwanted ringing artifacts. The ringing phenomenon is essentially the same as the artifacts
that result from low-pass filtering an image with an ideal low-pass filter.
Change of Basis
The method described above turns the incident radiance function into an outgoing radiance function,
which is again expressed in the spherical harmonics basis. The authors also described a technique for
expressing the outgoing radiance function in a new basis function set of piecewise bilinear functions
{ϕi}_{i=1}^{O} on the hemisphere. By construction, only four basis functions have a nonzero value for any direction on the sphere.
This change of basis is easily derived using orthogonal projection (see Section 2.2.2). The change
is represented by the matrix product G−1 C, where G is the Gram matrix of the new basis and C has the components Cij = ⟨yj, ϕi⟩. With obvious abuse of notation, this may be denoted α⃗k = G−1 C α̃k, i.e.,
α⃗k = G−1 C B Rk M k e.
Now, as before, outgoing radiance from xk is evaluated as
u(x → ωout) = ∑_{i=1}^{O} α⃗_i^k ϕi(ωout),
where O is the number of the new basis functions. Since by construction at most four basis functions are non-zero for a given ωout, only four terms of this last sum ever need to be evaluated at a time. Immediate hindsight [32] revealed that it is also possible to easily
construct a basis where only three functions have non-zero values for any given ωout .
4.4.4 Bi-scale Radiance Transfer
In order to model small-scale surface structures such as weavework, the methods presented earlier
require an immense number of sampling points. On the other hand, it is intuitively clear that the
local self-shadowing in a complex material does not drastically change the overall appearance of
surface locations that are far away, e.g., the small shadows cast by bumps in a stucco surface do
not propagate very far on the surface. Motivated by this, Sloan et al. [69] separated the global and
local self-shadowing effects. The idea is to model the global self-shadowing and interreflections as
before using a relatively coarse mesh, but alter the final determination of outgoing radiance to allow
for more surface detail. This is motivated by the fact that while a complex surface produces highly
irregular outgoing radiance, the incident radiance varies much more slowly.
Bidirectional Texture Functions or BTFs [17] are six-dimensional functions of one 2D spatial argument and two directional arguments. The BTF B(x, ωout , ωin ) describes for each point x the
appearance of a small surface patch located at x viewed from direction ωout , when the patch is lit
by a directional light source at ωin . When the patch is lit by a full hemispherical light source, its
appearance is found by
u(x → ωout) = ∫_Ω u(x ← ωin) B(x, ωout, ωin) dωin,   (4.32)
where B(x, ωout , ωin ) is the BTF. The BTF is able to model surface microstructure well, including
small-scale self-shadowing in surface features such as snake or fish scales, stucco and weavework.
Sloan et al. [69] derive a method for computing outgoing radiances utilizing BTFs. They define
radiance transfer textures or RTTs by projecting the BTF into the spherical harmonics with respect to
the incident angle. There are m RTTs Bi (x, ωout ) with i = 1, . . . , m, where each Bi corresponds to
a different SH function that is used to represent the angular variation of transferred incident radiance.
The approximation to the BTF is thus
B(x, ωout, ωin) ≈ ∑_{i=0}^{m} Bi(x, ωout) yi(ωin).
Substituting this into the equation (4.32) and rearranging yields the approximation
u(x → ωout) = ∑_{i=0}^{m} Bi(x, ωout) ᾱ_i^k
for the outgoing radiance, where ᾱ again represents incident radiance rotated to the local frame.
The crucial observation here is that the coefficients ᾱ that represent incident radiance are sampled
at the vertices and then interpolated over the polygons, but the functions Bi (x, ωout ) are defined
over the whole surface. In effect, this method turns slowly-varying transferred incident radiance into
higher-frequency outgoing radiance.
4.4.5 Outgoing Radiance from Separable BRDF Approximation
In contrast to the previous methods for determining outgoing radiance from transferred incident
radiance, this section does not rely on the spherical harmonics. The idea described here was found
recently by two research groups in concurrent work [50, 77].
An elegant method for approximating arbitrary BRDFs is to represent the four-dimensional BRDF
product function as a sum of products of two-dimensional functions [40]; i.e.,
fr∗(ωin → ωout) ≈ ∑_{i=1}^{N} fi(ωin) gi(ωout).   (4.33)
Here, the functions fi and gi are most conveniently represented by environment maps, and the number of terms N controls the accuracy of the representation. This kind of decomposition may easily be
computed to arbitrary accuracy for any BRDF using the singular value decomposition or SVD [33].
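A sketch of this factorization is shown below; it assumes the BRDF product function has already been tabulated into a matrix F whose rows correspond to discretized incident directions and columns to discretized outgoing directions.

    # Truncated SVD gives the separable approximation (4.33) of a tabulated BRDF product function.
    import numpy as np

    def separable_brdf(F, n_terms):
        U, S, Vt = np.linalg.svd(F, full_matrices=False)
        f = U[:, :n_terms] * S[:n_terms]    # incident-direction factors f_i, one per column
        g = Vt[:n_terms]                    # outgoing-direction factors g_i, one per row
        return f, g                         # F is approximately f @ g

    def brdf_sample(f, g, a, b):
        # f_r*(omega_in_a -> omega_out_b) ~ sum_i f_i(omega_in_a) g_i(omega_out_b)
        return float(f[a] @ g[:, b])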
To see how this decomposition helps in determining outgoing radiance from incident transferred
radiance, let us substitute (4.33) into the local reflection equation (3.12):
u(xk → ωout) = ∫_Ω ui(xk ← ωin) fr∗(xk, ωin → ωout) dωin   (4.34)
             = ∫_Ω [ ∑_{l=1}^{m} ᾱ_l^k ψl(ωin) ] [ ∑_{i=1}^{N} fi(ωin) gi(ωout) ] dωin
             = ∑_{l=1}^{m} ᾱ_l^k ∑_{i=1}^{N} gi(ωout) ∫_Ω ψl(ωin) fi(ωin) dωin,
or, denoting Bil := ∫_Ω ψl(ωin) fi(ωin) dωin,
u(xk → ωout) = ∑_{l=1}^{m} ᾱ_l^k ( ∑_{i=1}^{N} gi(ωout) Bil ).
Here, the cosine factor is absorbed in the BRDF product function as usual. This last equation may
be written in matrix-vector notation as
u(xk → ωout) = g(ωout)^T B ᾱk = g(ωout)^T B Rk M k e,   (4.35)
where B is the N × m matrix with elements Bil and g(ωout) is the view-dependent N-vector with
elements gi (ωout ). As before, m is the number of the ψ functions. Of course, the product BRk M k
may be precomputed for each xk to yield a composite N × dim E matrix that combines transfer with
the incident part of the BRDF product function. Evaluating g for a single ωout amounts simply to N
reads from an environment map.
4.4.6 Comparison of Methods
A number of methods for determining outgoing radiance from the sample points on the scene were
reviewed in the last sections. This section discusses their relative strengths and weaknesses. The
discussion only applies to non-diffuse reflection, since outgoing radiance from a diffuse surface is
always found by computing an inner product of the emission and transfer vectors.
All methods that represent transferred incident radiance using spherical harmonics suffer from a
common limitation: Since the incident illumination is low-frequency, sharp reflections cannot be
rendered accurately. In other words, these methods are naturally limited to blurry BRDFs.
The first method for glossy rendering in the PRT context [68] has serious drawbacks. First, it is only
able to model BRDFs that have a pure convolution structure, i.e., BRDFs that have circularly symmetric lobes. This rules out many physically-based BRDFs because the cosine factor ⌊n(x) · ωin⌋
cannot be modeled. Second, the method requires evaluation of the spherical harmonic functions into
the reflected viewing direction; this is an expensive operation.
The most general of the methods based on spherical harmonics is perhaps the one presented by the
author and Kautz [49], since it encloses the matrix representation of the BRDF inside a big linear
expression that transforms the emission vector e directly into outgoing radiance coefficients in a
directional basis. The advantage of the method is that only a small subset of the large matrix-vector
product needs to be evaluated. However, this method is not practically usable, since the per-vertex
data is prohibitively large without compression.
The method of Kautz et al. [42], on the other hand, has a simple implementation while it also
supports arbitrary BRDFs. While the method requires a significant amount of processing per vertex
in the form of the multiplication of the emission vector e by the product Rk M k , the complexity
cannot be reduced further: Any method that relies on separate representations of both the BRDF
and the incident radiance function in the canonical frame needs to apply the compound transfer and
rotation matrix in any case.
The fundamental limitation of requiring a large linear expression because of transfer and rotation is
lifted by the methods based on separable BRDF approximation. There the BRDF matrix B “folds”
the dependence on the incident direction (and thus also the effect of the rotation) together with the
incident factors fi (ωin ) of the BRDF to yield a smaller composite operator BRk M k which only has
dimension N ×dim E, where N is the number of terms in the separable BRDF approximation. Often
only a few (N ≈ 10) terms are required for a good approximation [40], and thus the compound matrix
is smaller than in the case of other methods. At the time of writing, this method seems superior to
the other methods for rendering with arbitrary BRDFs.
The bi-scale radiance transfer method is based on bidirectional texture functions or BTFs. In itself, it
produces by far the most visually rich results of all the methods. Good quality comes at a high cost,
however. The most obvious problem is the high dimensionality: Regular BTFs are six-dimensional;
two dimensions for surface location and two for incident and outgoing light directions each. Even
though each of the m radiance transfer textures of the bi-scale method has lower dimensionality
(four; two arguments for location and two for outgoing direction), representing and storing a number
of 4D textures is still demanding. Because of this, objects must be covered by repeatedly using the
same BTF patches many times over. Also, acquiring BTFs is demanding, and it takes a long time to
construct the radiance transfer textures from BTFs. In summary, the algorithm is not yet practical.
Chapter 5
Discussion and Future Work
The previous chapters have reviewed mathematical tools for global illumination and presented a new
framework for describing methods for precomputed radiance transfer. This chapter presents some
potential applications and discusses the drawbacks of these methods. The thesis concludes by briefly
outlining some open problems.
5.1 Applications
Precomputed radiance transfer methods offer fast solutions to global illumination problems in constrained lighting scenarios. The following two sections outline a few possible uses for this technology.
5.1.1 Architectural Applications
Precomputed radiance transfer methods work best when used to light objects using environment
maps. For instance, skylight can be well modeled this way. An architectural application, where the light emitted by the sky can be changed interactively by tuning time-of-day and cloudiness parameters, would be helpful in designing the propagation of daylight inside buildings. This is an important topic, for instance because the minimum amount of daylight in office spaces is regulated in
some parts of the world. Being able to see how the building is lit in different skylight conditions
would help architects and illumination engineers to understand daylight propagation better, and to
change building designs and lighting fixtures to obtain the most pleasing result.
Technically, an application like the one described above should be constructed so that direct sunlight is rendered using a traditional shadow algorithm, such as shadow volumes [15], while precomputed radiance transfer is used to render the direct illumination from the rest of the sky and the indirect illumination due to the sun. This is because skylight with the sun removed is very smooth, and thus it is well represented using only a few spherical harmonic or low-frequency wavelet basis functions, which reduces the amount of precomputation dramatically. It would of course be possible to represent the sunlight as well using, say, a wavelet basis, but this is not advisable: representing the sun requires high-frequency basis functions, which in turn increases the precomputation work.
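To make the suggested split concrete, the following minimal sketch combines a precomputed-transfer term for the sun-free sky with a conventionally shadowed direct sun term for diffuse vertices. All names (shade_vertices, transfer_rows, sky_coeffs, sun_visible, and so on) are hypothetical and chosen only for illustration; they do not refer to any implementation discussed in this thesis.

import numpy as np

def shade_vertices(transfer_rows, sky_coeffs, sun_radiance, sun_visible, cos_sun):
    """transfer_rows: (V, m) precomputed transfer vectors, one per vertex,
                      expressed in the same m-dimensional basis as the sky.
       sky_coeffs:    (m,) emission coefficients of the sun-free sky.
       sun_radiance:  scalar radiance of the sun disc.
       sun_visible:   (V,) 0/1 visibility from a shadow algorithm (e.g. shadow volumes).
       cos_sun:       (V,) clamped cosine between vertex normal and sun direction."""
    prt_term = transfer_rows @ sky_coeffs              # weighted sum of precomputed responses
    direct_sun = sun_radiance * sun_visible * cos_sun  # conventionally rendered sunlight
    return prt_term + direct_sun

# Toy usage with random data and four diffuse vertices.
V, m = 4, 9
print(shade_vertices(np.random.rand(V, m), np.random.rand(m),
                     5.0, np.ones(V), np.full(V, 0.7)))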
5.1.2 Entertainment
Illumination and its absence, shadows, are key ingredients of perceived realism in computer-generated imagery. Currently, computer games are the strongest force driving the development of consumer-level graphics hardware. Due to the requirement of real-time performance, the methods used for computing lighting and shadow information in games are still rather limited: General global illumination methods cannot be employed at run-time because of the associated computational costs, and thus lighting is often precomputed in advance for a static set of light sources. This can yield visually pleasing results (e.g., [62]), but the lack of dynamics in the lighting may be undesirable. On the other hand, methods that offer moving shadows from dynamic light sources are currently constrained to point sources (and thus hard-edged shadows), and do not readily support indirect illumination (e.g., [34]). However, indirect illumination is important, particularly in the rendering of realistic indoor scenes.
The large-scale use of precomputed radiance transfer in games, at least in any of its more sophisticated view-dependent forms, is currently not feasible. For instance, lighting an outdoor scene
with terrain, trees, rocks, etc., requires precomputation of transfer functions for a huge number of
sampling points. Storage of this large dataset is the bottleneck: Gaming computers currently have
relatively more processing power than storage capacity. The difference is particularly pronounced
in gaming consoles. Problems due to limited memory could possibly be circumvented by rendering
only indirect illumination using precomputed radiance transfer, as suggested above in the lighting
design example. Indirect lighting tends to be smooth, and thus it can be sampled much more sparsely
with obvious savings in both precomputation and storage.
On a smaller scale, using precomputed radiance transfer for lighting small objects that move in the game environment is viable. For instance, basis coefficients that represent the lighting incident at points in the free space of the scene can be precomputed on a grid, as suggested by Sloan et al. [68], and these spatially varying coefficients can be used for lighting dynamic objects. The author has implemented a method resembling this for a commercial game [62], using the method of Ramamoorthi and Hanrahan [59] for determining the outgoing radiances from the vertices of the moving objects. The incident radiance was taken from a static radiosity solution instead of being parameterized by an emission space. The method closely resembles the irradiance volume of Greger et al. [27].
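A minimal sketch of such a coefficient-grid lookup, in the spirit of [68] and the irradiance volume [27], might look as follows. The grid layout and all names are assumptions made for illustration only, not the implementation referred to above.

import numpy as np

def sample_coeff_grid(grid, point, grid_min, cell_size):
    """grid:  (nx, ny, nz, m) incident-lighting basis coefficients at grid vertices.
       point: (3,) query position of a dynamic object.
       Returns the (m,) trilinearly interpolated coefficient vector."""
    rel = (np.asarray(point) - grid_min) / cell_size
    i0 = np.clip(np.floor(rel).astype(int), 0, np.array(grid.shape[:3]) - 2)
    f = rel - i0                      # fractional position inside the cell
    c = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                c = c + w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return c

# Toy usage: an 8x8x8 grid of 9 coefficients each, queried in the middle of a cell.
grid = np.random.rand(8, 8, 8, 9)
print(sample_coeff_grid(grid, point=(1.5, 2.5, 3.5),
                        grid_min=np.zeros(3), cell_size=1.0).shape)   # (9,)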
5.2 Limitations and Future Work
The inability to cope with changing or moving geometry is a clear drawback of any precomputed global illumination solution. This applies equally to traditional solutions computed for a single emission configuration (a radiosity solution, for example) and to any precomputed transfer solution. The fundamental problem is that the transport operator depends on the geometry and reflectance functions of the scene. In order to conceive a method that obtains most global illumination effects in real time, methods for fast determination of visibility in dynamic scenes and for fast multidimensional integration are required.
If indirect lighting is ignored, changes in reflectance functions and visibility can be incorporated into a PRT-like method by representing the direct light transport operator as a product of three separately projected functions: Incident lighting, the BRDF, and visibility [54]. An interesting question is whether this method could be extended to give a full illumination solution that includes indirect lighting.
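As a rough sketch of the formulation in [54] (the precise definitions are as in that paper), the reflected radiance at a point $x$ toward a fixed view direction becomes a triple sum over the coefficients of the three separately projected functions,
\[
u(x \to \omega_{out}) \approx \sum_{i,j,k} C_{ijk}\, L_i\, \rho_j(x)\, V_k(x),
\qquad
C_{ijk} = \int_{\Omega} \Psi_i(\omega)\, \Psi_j(\omega)\, \Psi_k(\omega)\, d\omega,
\]
where $L_i$, $\rho_j$ and $V_k$ are the basis coefficients of the distant lighting, the BRDF slice for the current view, and the visibility function, respectively; the tripling coefficients $C_{ijk}$ are sparse in the Haar wavelet basis, which is what makes the evaluation tractable.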
Appendix A
Proofs and Derivations
A.1 The Equivalence of the Two Galerkin-type Methods
Here we present the simple proof of the equivalence of the two Galerkin-type methods (2.15) and (2.14), i.e., we show that the vector $\alpha$ that solves the one also solves the other. To see this, let us first substitute the definition of the dual basis from (2.4) into (2.15):
\[
\begin{aligned}
\langle (I-T)u_h - e,\, \tilde\varphi_i \rangle
&= \Bigl\langle (I-T)u_h - e,\; \sum_{j=1}^{n} G^{-1}_{ij}\,\varphi_j \Bigr\rangle \\
&= \sum_{j=1}^{n} G^{-1}_{ij} \bigl[ \langle (I-T)u_h,\, \varphi_j \rangle - \langle e,\, \varphi_j \rangle \bigr] \\
&= \sum_{j=1}^{n} G^{-1}_{ij} \Bigl[ \sum_{k=1}^{n} \alpha_k \langle (I-T)\varphi_k,\, \varphi_j \rangle - \langle e,\, \varphi_j \rangle \Bigr] = 0.
\end{aligned}
\]
But the expression in the square brackets is nothing but (2.13), and the outer sum involving the $G^{-1}_{ij}$ is just a multiplication of the equation by $G^{-1}$, as we show next. Moving the term involving $e$ in the square brackets to the right-hand side we obtain
\[
\sum_{j=1}^{n} G^{-1}_{ij} \sum_{k=1}^{n} \alpha_k \langle (I-T)\varphi_k,\, \varphi_j \rangle
= \sum_{j=1}^{n} G^{-1}_{ij} \langle e,\, \varphi_j \rangle
\]
\[
\Longleftrightarrow\quad
\sum_{j=1}^{n} G^{-1}_{ij} \Bigl[ \sum_{k=1}^{n} \alpha_k \langle \varphi_k,\, \varphi_j \rangle - \sum_{k=1}^{n} \alpha_k \langle T\varphi_k,\, \varphi_j \rangle \Bigr]
= \sum_{j=1}^{n} G^{-1}_{ij} \langle e,\, \varphi_j \rangle.
\]
Now, noting that $\langle \varphi_k, \varphi_j \rangle = G_{jk}$, $\langle T\varphi_k, \varphi_j \rangle = M_{jk}$, and $\langle e, \varphi_j \rangle = e_j$, we get
\[
\sum_{k=1}^{n} \alpha_k \sum_{j=1}^{n} G^{-1}_{ij} (G_{jk} - M_{jk}) = \sum_{j=1}^{n} G^{-1}_{ij}\, e_j,
\]
which, in turn, is just
\[
G^{-1}(G - M)\,\alpha = G^{-1} e
\]
due to the definition of the matrix-matrix product. Since $G^{-1}$ is non-singular and the above equation is the same as (2.14) multiplied by $G^{-1}$, we conclude that the equations (2.14) and (2.15) are equivalent, and thus have the same solution $\alpha$.
Exactly the same kind of argument may be used to show that requiring orthogonality with respect to any linearly independent basis set that is derived from $\{\varphi_i\}$ by a linear transformation results in the same solution. In general, however, this will not result in an identity matrix inside the parentheses.
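The equivalence can also be checked numerically. The sketch below assumes, as in the proof above, that (2.14) corresponds to the system $(G - M)\alpha = e$ and (2.15) to the same system multiplied by $G^{-1}$; the random matrices merely stand in for the Gram and transport matrices.

# Numerical sanity check of the equivalence shown above: solving
# (G - M) alpha = e and G^{-1}(G - M) alpha = G^{-1} e gives the same alpha.
import numpy as np

rng = np.random.default_rng(0)
n = 6
G = rng.random((n, n)) + n * np.eye(n)   # diagonally dominant, hence invertible
M = 0.1 * rng.random((n, n))             # stand-in for the transport matrix
e = rng.random(n)

alpha_1 = np.linalg.solve(G - M, e)                  # form corresponding to (2.14)
Ginv = np.linalg.inv(G)
alpha_2 = np.linalg.solve(Ginv @ (G - M), Ginv @ e)  # dual-basis form, (2.15)
print(np.allclose(alpha_1, alpha_2))                 # True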
A.2 The Kernels of the Transport Operator and its Adjoint
Here we show how the kernels of adjoint operators may be derived from those of the “forward”
operator. In Chapter 2 we encountered integral operators in an abstract setting. There we noted that
integral operators are of the form
\[
(T w)(s) = \int_S k(s, t)\, w(t)\, dt, \tag{A.1}
\]
where S is the domain over which the function space is defined. In particular, if S has dimension
n, the integral is also n-dimensional. Since our space of radiance functions is defined over S × Ω,
the four-dimensional space with two spatial and two directional coordinates, this principle would at
first seem to be violated by the rendering equation: The pointwise evaluation of $Tu$ includes only a two-dimensional integral over the incoming direction. We now show that the rendering equation may be written in the above form; this will make the determination of the adjoint $T^*$ trivial.
By virtue of the Dirac impulse functional $\delta$ we may write
\[
u\bigl(r(x, -\omega) \to \omega\bigr) = \int_S u(y \to \omega)\, \delta\bigl(y - r(x, -\omega)\bigr)\, dA_y.
\]
Substituting this into the rendering equation (3.7) we have
\[
u(x \to \omega_{out}) = \int_\Omega \biggl[ \int_S u(y \to \omega)\, \delta\bigl(y - r(x, -\omega)\bigr)\, dA_y \biggr] f_r(x, -\omega \to \omega_{out})\, \bigl\lfloor n(x) \cdot (-\omega) \bigr\rfloor\, d\omega.
\]
Now, letting
\[
k(x, \omega_{out}, y, \omega) = f_r(x, -\omega \to \omega_{out})\, \bigl\lfloor n(x) \cdot (-\omega) \bigr\rfloor\, \delta\bigl(y - r(x, -\omega)\bigr),
\]
we may write the rendering equation in the form (A.1):
\[
u(x \to \omega_{out}) = \int_S \int_\Omega u(y \to \omega)\, k(x, \omega_{out}, y, \omega)\, d\omega\, dA_y.
\]
Determining the expression for the adjoint operator $T^*$ is now simple. As was seen in Section 2.4.2, the kernel of the adjoint is found by swapping the two pairs of arguments of $k$: $k^*(x, \omega_{out}, y, \omega) = k(y, \omega, x, \omega_{out})$. This gives
\[
\begin{aligned}
(T^* w)(x, \omega_{out})
&= \int_S \int_\Omega k(y, \omega, x, \omega_{out})\, w(y \to \omega)\, d\omega\, dA_y \\
&= \int_S \int_\Omega w(y \to \omega)\, f_r(y, -\omega_{out} \to \omega)\, \bigl\lfloor n(y) \cdot (-\omega_{out}) \bigr\rfloor\, \delta\bigl(x - r(y, -\omega_{out})\bigr)\, d\omega\, dA_y \\
&= \bigl\lfloor n\bigl(r(x, \omega_{out})\bigr) \cdot (-\omega_{out}) \bigr\rfloor \int_\Omega w\bigl(r(x, \omega_{out}) \to \omega\bigr)\, f_r\bigl(r(x, \omega_{out}), -\omega_{out} \to \omega\bigr)\, d\omega,
\end{aligned}
\]
where the last step uses the sifting property of the Dirac impulse: the only surface point that contributes to the integral over $y$ is $y = r(x, \omega_{out})$.
This form of the adjoint operator is different from what is usually found in the literature (e.g., [23,
p. 99]). This is because our definition of the inner product does not include the cosine factor. The
geometric situation is depicted in figure A.1.
Figure A.1 Geometry related to the adjoint to the reflection operator. The functional is propagated
backwards when compared to radiance.
A.3 Measuring the Average Radiance Through a Pixel
To see how a certain linear functional answers the question about the average radiance through the pixel $(i, j)$, assuming a pinhole camera located at $c$, we define the linear functional $w^*_{ij}$ as
\[
w^*_{ij}(x, \omega_{out}) = V(x, c)\, \delta_{ij}(x, \omega_{out})\, G(x, c),
\qquad \text{where} \quad
\delta_{ij}(x, \omega_{out}) = \delta\bigl(\omega_{out} - \overrightarrow{xc}\bigr)\, \Gamma_{ij}(x, \omega_{out})
\quad \text{and} \quad
G(x, c) = \frac{\bigl\lfloor n(x) \cdot \overrightarrow{xc} \bigr\rfloor}{\lVert x - c \rVert^2}.
\]
Here $\Gamma_{ij}(x, \omega_{out})$ is the characteristic function of the pixel $(i, j)$: it has value one if the ray from $x$ towards $\omega_{out}$ goes through the pixel and hits the camera, and zero otherwise. The visibility function $V(x, c)$ is defined as usual, and $G(x, c)$ normalizes the contribution of the area element $dA$ at $x$ so that it is proportional only to the solid angle through which $dA$ is seen from $c$. In words, the functional picks, for each point $x$ visible from $c$ through pixel $(i, j)$, the direction $\omega_{out}$ in which the camera lies when looked at from $x$.
Note that due to the formulation of the inner product, this functional is defined on the surfaces of
the scene, not solid angles as seen by the camera at c. The purpose of the geometric factor G(x, c)
is to make the contribution of each surface element dA uniform in screen-space, much in the same
way that the rendering equation (3.7) may be written by integration over the surfaces instead of solid
angles by introducing a similar geometric factor. If this functional is used in a ray-tracing program so
that the samples are distributed evenly on the pixel (i, j), the geometric term must be left out. This is
because sampling the pixel uniformly already balances the surface elements correctly. Because the
surface element visible through the pixel in the direction of a single sample is found by ray-tracing,
the characteristic function and the visibility function are redundant, too. In total, the functional
reduces to a single Dirac impulse from the point towards the camera.
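A minimal sketch of this simplification in a ray tracer follows; trace_radiance is a hypothetical routine that casts a ray through the given position on the image plane and returns the radiance arriving at the camera along it.

import random

def pixel_average(trace_radiance, camera, i, j, n_samples=16):
    """Estimate the average radiance through pixel (i, j) of a pinhole camera."""
    total = 0.0
    for _ in range(n_samples):
        # jittered sample position inside the pixel, in pixel coordinates
        u, v = i + random.random(), j + random.random()
        total += trace_radiance(camera, u, v)   # radiance toward the camera
    return total / n_samples

# Toy usage with a dummy scene that returns a constant radiance of 1.
print(pixel_average(lambda cam, u, v: 1.0, camera=(0.0, 0.0, 0.0), i=3, j=5))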
To complete the measurement functional, we next define a new, normalized functional by
\[
w_{ij} = \frac{w^*_{ij}}{\displaystyle\int_{S\times\Omega} w^*_{ij}(x, \omega_{out})\, dA\, d\omega_{out}}.
\]
Substituting this and a radiance solution $u$ into eq. (3.13) we have
\[
u_{avg}(i, j) = \frac{\displaystyle\int_{S\times\Omega} u(x, \omega_{out})\, w^*_{ij}(x, \omega_{out})\, dA\, d\omega_{out}}{\displaystyle\int_{S\times\Omega} w^*_{ij}(x, \omega_{out})\, dA\, d\omega_{out}}.
\]
To see that this actually is a well-defined average of the radiance function, let us denote by Svis the
set of surface points x that are visible from c through pixel (i, j), i.e., those points that are picked
by δij (x, ωout ). Because of the delta functional, the integration over the solid angle simplifies to just
evaluating the radiance from each x towards the camera; i.e., the equation becomes
\[
u_{avg}(i, j) = \frac{\displaystyle\int_{S_{vis}} u\bigl(x \to \overrightarrow{xc}\bigr)\, G(x, c)\, dA}{\displaystyle\int_{S_{vis}} G(x, c)\, dA},
\]
which, in turn, is clearly a formula for the average radiance, weighted by the geometric factor that normalizes the contribution with respect to screen-space. If we change variables and integrate over the same set $S_{vis}$ of visible surfaces, represented as solid angle in screen-space, the geometric term disappears altogether, since $d\omega$ equals $G(x, c)\, dA$ exactly.
A.4 The Spherical Harmonics
The spherical harmonics are an infinite, orthonormal set of functions defined on the sphere. The
functions are organized into sets of bands, starting from zero. Band number l contains 2l + 1
functions. Within each band, functions have an index $m$ that ranges from $-l$ to $l$; we denote each function by $y_l^m$. As the exact expressions for the functions are not relevant within the context of this thesis, we refer the reader elsewhere (e.g., [68]).
The spherical harmonic basis shares many properties with the usual Fourier bases in two dimensions. The most important of these properties is that the functions are closed under rotation; this means that any spherical harmonic that is rotated on the sphere around an arbitrary axis can be exactly represented as a linear combination of the other (unrotated) spherical harmonics within the same band. This is exactly analogous to the well-known translation formula for Fourier bases. The consequence of this is that any function represented as a linear combination of the spherical harmonics may be rotated exactly: the coefficients of the rotated function can be found from the unrotated coefficients by a linear transformation.
Numbering of the Spherical Harmonics. As we saw above, each spherical harmonic is naturally associated with two indices. On the other hand, if we wish to use them for representing functions on the sphere, the band information is usually not relevant. This is why we have used a single index $y_i$ in the text. This index is understood to enumerate the harmonics band by band, starting from band zero, so that index 1 denotes the single function of band zero, 2 denotes the function $y_1^{-1}$, 3 denotes the function $y_1^{0}$, 5 denotes the function $y_2^{-2}$, and so on. It can be verified that the number of functions on all bands up to and including band $m$ is $(m + 1)^2$.
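One closed form consistent with the enumeration described above is $i = l^2 + l + m + 1$; the small sketch below (names chosen only for illustration) implements this mapping together with the function count.

def sh_index(l, m):
    """1-based, band-by-band index of the spherical harmonic y_l^m."""
    assert -l <= m <= l
    return l * l + l + m + 1

def sh_count(max_band):
    """Number of functions on all bands up to and including max_band."""
    return (max_band + 1) ** 2

assert sh_index(0, 0) == 1      # the single function of band zero
assert sh_index(1, -1) == 2     # y_1^{-1}
assert sh_index(1, 0) == 3      # y_1^{0}
assert sh_index(2, -2) == 5     # y_2^{-2}
assert sh_count(2) == 9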
A.5 The Adjoint of the Initial Transport Operator
The initial transport operator $V$ maps from the low-dimensional emission space onto direct incident radiance distributions on the scene. Its adjoint, which maps from the space of radiance functions onto the emission space, is defined by $\langle V e, w \rangle_{X(S\times\Omega)} = \langle e, V^* w \rangle_{E}$ for all $e \in E$, $w \in X(S \times \Omega)$. Here we derive an expression for this adjoint in the case of a spherical light source at infinity represented by an environment map given as basis coefficients. As usual, the directional variation of emission is encoded using a spherical function basis $\{\psi_i\}_{i=1}^{m}$; here $\dim E = m$.
The operator $V$ is defined by
\[
(V e)(x, \omega_{in}) = e(\omega_{in})\, v(x, \omega_{in}), \tag{A.2}
\]
where $v(x, \omega)$ encodes the visibility of the environment sphere from point $x$ to direction $\omega$ and $e(\omega_{in})$ is the radiance emitted by the sphere:
\[
e(\omega_{in}) = \sum_i e_i\, \psi_i(\omega_{in}).
\]
Expanding the inner product $\langle V e, w \rangle$ and rearranging yields
\[
\langle V e, w \rangle
= \int_S \int_\Omega v(x, \omega_{in}) \Bigl[ \sum_i e_i\, \psi_i(\omega_{in}) \Bigr] w(x, \omega_{in})\, dA_x\, d\omega_{in}
= \sum_i e_i \int_S \int_\Omega v(x, \omega_{in})\, \psi_i(\omega_{in})\, w(x, \omega_{in})\, dA_x\, d\omega_{in}. \tag{A.3}
\]
The inner product on the emission space $E$ is the same as in $\mathbb{R}^m$. The last line clearly has this form: by taking
\[
(V^* w)_i = \int_S \int_\Omega v(x, \omega_{in})\, \psi_i(\omega_{in})\, w(x, \omega_{in})\, dA_x\, d\omega_{in} \tag{A.4}
\]
as the $i$th component of $V^* w$, we have obtained a formula for $V^*$.
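For illustration, eq. (A.4) can be estimated by straightforward Monte Carlo integration over $S \times \Omega$. The sketch below assumes hypothetical callables for the visibility $v$, the basis functions $\psi_i$, the radiance function $w$, and a surface sampler; it is not an implementation from this thesis.

import math, random

def adjoint_initial_transport(w, v, psi, m, sample_surface, n_samples=10000):
    """Estimate the m components of V* w by uniform sampling of S x Omega."""
    est = [0.0] * m
    for _ in range(n_samples):
        x, pdf_x = sample_surface()               # point on the scene surfaces
        # uniform direction on the sphere, pdf = 1 / (4 pi)
        z = 2.0 * random.random() - 1.0
        phi = 2.0 * math.pi * random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        omega = (r * math.cos(phi), r * math.sin(phi), z)
        common = v(x, omega) * w(x, omega) / (pdf_x * (1.0 / (4.0 * math.pi)))
        for i in range(m):
            est[i] += psi(i, omega) * common
    return [s / n_samples for s in est]

# Toy usage: unit-square "scene", everything visible, constant w and psi.
print(adjoint_initial_transport(
    w=lambda x, o: 1.0, v=lambda x, o: 1.0, psi=lambda i, o: 1.0, m=1,
    sample_surface=lambda: ((random.random(), random.random(), 0.0), 1.0),
    n_samples=1000))   # approximately [4 * pi]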
Bibliography
[1] John M. Airey, John H. Rohlf, and Frederick P. Brooks, Jr. Towards image realism with interactive update rates in complex virtual building environments. In Proceedings of the 1990
symposium on Interactive 3D graphics, pages 41–50. ACM Press, 1990.
[2] James Arvo and David Kirk. Particle transport and image synthesis. In Computer Graphics
(Proceedings of ACM SIGGRAPH 90), pages 63–66. ACM Press, 1990.
[3] Michael Ashikhmin and Peter Shirley. Steerable Illumination Textures. ACM Transactions on
Graphics, 3(2):1–19, 2002.
[4] Kendall Atkinson. The Numerical Solution of Integral Equations of the Second Kind. Cambridge University Press, 1997.
[5] Kendall Atkinson and Weimin Han. Theoretical Numerical Analysis. Springer-Verlag, 2001.
[6] Dietrich Braess. Finite Elements: Theory, Fast Solvers and Applications in Solid Mechanics.
Cambridge University Press, 1997.
[7] Shenchang Eric Chen, Holly E. Rushmeier, Gavin Miller, and Douglass Turner. A progressive multi-pass method for global illumination. In Computer Graphics (Proceedings of ACM
SIGGRAPH 91), pages 165–174. ACM Press, 1991.
[8] Per H. Christensen, Eric J. Stollnitz, David H. Salesin, and Tony D. DeRose. Global illumination of glossy environments using wavelets and importance. ACM Transactions on Graphics,
15(1):37–71, 1996.
[9] Michael F. Cohen, Shenchang Eric Chen, John R. Wallace, and Donald P. Greenberg. A progressive refinement approach to fast radiosity image generation. In Computer Graphics (Proceedings of ACM SIGGRAPH 88), pages 75–84. ACM Press, 1988.
[10] Michael F. Cohen, Donald P. Greenberg, David S. Immel, and Philip J. Brock. An efficient
radiosity approach for realistic image synthesis. IEEE Computer Graphics and Applications,
6(3), 1986.
[11] Michael F. Cohen and John R. Wallace. Radiosity and Realistic Image Synthesis. Morgan
Kaufmann, 1993.
[12] John B. Conway. A Course in Functional Analysis. Springer, 1990.
[13] Robert L. Cook. Stochastic Sampling in Computer Graphics. ACM Transactions on Graphics,
5(1):51–72, 1986.
[14] Robert L. Cook, Thomas Porter, and Loren Carpenter. Distributed ray tracing. In Computer
Graphics (Proceedings of SIGGRAPH 84), pages 137–145, July 1984.
[15] Franklin C. Crow. Shadow algorithms for computer graphics. In Computer Graphics (Proceedings of ACM SIGGRAPH 77), pages 242–248. ACM Press, 1977.
[16] Franklin C. Crow. The Aliasing Problem in Computer-Generated Shaded Images. Communications of the ACM, 20(11):799–805, 1977.
[17] Kristin J. Dana, Bram van Ginneken, Shree K. Nayar, and Jan J. Koenderink. Reflectance and texture of real-world surfaces. ACM Transactions on Graphics, 18(1):1–34, 1999.
[18] Ingrid Daubechies. Ten Lectures on Wavelets. SIAM, 1993.
[19] Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, and Mark
Sagar. Acquiring the reflectance field of a human face. In Proceedings of ACM SIGGRAPH
2000, pages 145–156. ACM Press, 2000.
[20] Yoshinori Dobashi, Kazufumi Kaneda, Hideki Nakatani, and Hideo Yamashita. A quick rendering method using basis functions for interactive lighting design. Computer Graphics Forum,
14(3):229–240, 1995.
[21] Julie O’B. Dorsey, François X. Sillion, and Donald P. Greenberg. Design and simulation of
opera lighting and projection effects. In Computer Graphics (Proceedings of ACM SIGGRAPH
91), pages 41–50. ACM Press, 1991.
[22] George Drettakis and Eugene Fiume. A fast shadow algorithm for area light sources using
backprojection. In Proceedings of ACM SIGGRAPH 94, pages 223–230. ACM Press, 1994.
[23] Philip Dutré, Philippe Bekaert, and Kavita Bala. Advanced Global Illumination. AK Peters,
2003.
[24] Cindy M. Goral, Kenneth E. Torrance, Donald P. Greenberg, and Bennett Battaile. Modeling
the interaction of light between diffuse surfaces. In Computer Graphics (Proceedings of ACM
SIGGRAPH 84), pages 213–222. ACM Press, 1984.
[25] Steven J. Gortler, Peter Schröder, Michael F. Cohen, and Pat Hanrahan. Wavelet radiosity. In
Proceedings of ACM SIGGRAPH 93, pages 221–230. ACM Press, 1993.
[26] Paul Green, Frédo Durand, Henrik Wann Jensen, Jan Kautz, and Wojciech Matusik. Non-Linear Kernel-Based Precomputed Light Transport. SIGGRAPH Sketch, August 2004.
[27] Gene Greger, Peter Shirley, Philip M. Hubbard, and Donald P. Greenberg. The irradiance
volume. IEEE Computer Graphics and Applications, 18(2):32–43, 1998.
[28] Wolfgang Hackbusch. Integral Equations – Theory and Numerical Treatment. Birkhäuser
Verlag, 1995.
[29] Pat Hanrahan, David Salzman, and Larry Aupperle. A rapid hierarchical radiosity algorithm.
In Computer Graphics (Proceedings of ACM SIGGRAPH 91), pages 197–206. ACM Press,
1991.
[30] Paul S. Heckbert. Simulating Global Illumination Using Adaptive Meshing. PhD thesis, University of California, Berkeley, 1991.
[31] Nicolas Holzschuch and Laurent Alonso. Combining Higher-Order Wavelets and Discontinuity
Meshing: a Compact Representation for Radiosity. In Rendering Techniques 2004 (Proceedings of the Eurographics Symposium on Rendering 2004), pages 275–286. The Eurographics
Association, 2004.
[32] Hugues Hoppe. Personal communication, 2003.
[33] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1999.
[34] ID Software. Doom 3. Computer game, 2004.
[35] David S. Immel, Michael F. Cohen, and Donald P. Greenberg. A radiosity method for nondiffuse environments. In Computer Graphics (Proceedings of ACM SIGGRAPH 86), pages
133–142. ACM Press, 1986.
[36] Doug James and Kayvon Fatahalian. Precomputing Interactive Dynamic Deformable Scenes.
ACM Transactions on Graphics, 22(3):879–887, July 2003.
[37] Henrik Wann Jensen. Global illumination using photon maps. In Proceedings of the Eurographics Workshop on Rendering ’96, pages 21–30. Springer-Verlag, 1996.
[38] James T. Kajiya. The Rendering Equation. In Computer Graphics (Proceedings of ACM
SIGGRAPH 86), pages 143–150. ACM Press, 1986.
[39] Jan Kautz, Jaakko Lehtinen, and Timo Aila. Hemispherical Rasterization for Self-Shadowing
of Dynamic Objects. In Rendering Techniques 2004 (Proceedings of the Eurographics Symposium on Rendering 2004), pages 179–184. The Eurographics Association, 2004.
[40] Jan Kautz and Michael D. McCool. Interactive Rendering with Arbitrary BRDFs using Separable Approximations. In Proceedings of the 10th Eurographics Workshop on Rendering, pages
281–292, June 1999.
[41] Jan Kautz and Michael D. McCool. Approximation of Glossy Reflection with Prefiltered Environment Maps. In Proceedings Graphics Interface 2000, pages 119–126, May 2000.
[42] Jan Kautz, Peter-Pike Sloan, and John Snyder. Fast, Arbitrary BRDF Shading for Low-Frequency Lighting Using Spherical Harmonics. In 13th Eurographics Workshop on Rendering, pages 301–308, June 2002.
[43] Rainer Kress. Linear integral equations, 2nd ed. Springer, 1999.
[44] Erwin Kreyszig. Introductory Functional Analysis. John Wiley & Sons, 1989.
[45] Eric Lafortune and Yves Willems. Bi-directional path tracing. In Proceedings of Third International Conference on Computational Graphics and Visualization Techniques (Compugraphics
’93), 1993.
[46] Eric P. Lafortune and Yves D. Willems. A Theoretical Framework for Physically Based Rendering. Computer Graphics Forum, 13(2), 1994.
[47] Stig Larsson and Vidar Thomée. Partial Differential Equations with Numerical Methods.
Springer, 2003.
[48] Lutz Latta and Andreas Kolb. Homomorphic factorization of BRDF-based lighting computation. ACM Transactions on Graphics, 21(3):509–516, 2002.
[49] Jaakko Lehtinen and Jan Kautz. Matrix Radiance Transfer. In Proceedings of the 2003 Symposium on Interactive 3D graphics, pages 59–64, 2003.
[50] Xinguo Liu, Peter-Pike Sloan, Heung-Yeung Shum, and John Snyder. All-Frequency Precomputed Radiance Transfer for Glossy Objects. In Rendering Techniques 2004 (Proceedings of the
Eurographics Symposium on Rendering 2004), pages 337–344. The Eurographics Association,
2004.
[51] Stéphane Mallat. A Wavelet Tour of Signal Processing, 2nd ed. Academic Press, 1999.
[52] Vincent Masselus, Pieter Peers, Philip Dutré, and Yves D. Willems. Relighting with 4D Incident Light Fields. ACM Transactions on Graphics, 22(3):613–620, 2003.
[53] Ren Ng, Ravi Ramamoorthi, and Pat Hanrahan. All-Frequency Shadows Using Non-linear
Wavelet Lighting Approximation. ACM Transactions on Graphics, 22(3):376–381, 2003.
[54] Ren Ng, Ravi Ramamoorthi, and Pat Hanrahan. Triple Product Wavelet Integrals for All-Frequency Relighting. ACM Transactions on Graphics, 23(3):477–487, 2004.
[55] F. E. Nicodemus, J. C. Richmond, J. J. Hsia, I. W. Ginsberg, and T. Limperis. Geometric
Considerations and Nomenclature for Reflectance. NBS Monograph 160, National Bureau of
Standards, 1977.
[56] Jeffry S. Nimeroff, Eero Simoncelli, and Julie Dorsey. Efficient Re-rendering of Naturally
Illuminated Environments. In Proceedings of the Fifth Eurographics Workshop on Rendering,
pages 359–373, Darmstadt, Germany, 1994.
[57] Bui Tuong Phong. Illumination for computer generated pictures. Communications of the ACM,
18(6):311–317, 1975.
[58] Ravi Ramamoorthi and Pat Hanrahan. A Signal-Processing Framework for Inverse Rendering.
In Proceedings of ACM SIGGRAPH 2001, pages 117–128, August 2001.
[59] Ravi Ramamoorthi and Pat Hanrahan. An Efficient Representation for Irradiance Environment
Maps. In Proceedings of ACM SIGGRAPH 2001, pages 497–500. ACM Press, August 2001.
[60] Ravi Ramamoorthi and Pat Hanrahan. Frequency Space Environment Map Rendering. ACM
Transactions on Graphics, 21(3):517–526, 2002.
[61] Mark C. Reichert. A two-pass radiosity method driven by lights and viewer position. Master’s
thesis, Program of Computer Graphics, Cornell University, Ithaca, New York, January 1992.
[62] Remedy Entertainment and Rockstar Games. Max Payne 2. Computer game, 2003.
[63] Holly Edith Rushmeier. Realistic image synthesis for scenes with radiatively participating
media. PhD thesis, Program of Computer Graphics, Cornell University, 1988.
[64] Peter Shirley, Bretton Wade, Philip M. Hubbard, David Zareski, Bruce Walter, and Donald P.
Greenberg. Global illumination via density-estimation. In Proceedings of the 6th Eurographics
Workshop on Rendering, pages 219–230, 1995.
[65] François Sillion and Claude Puech. A general two-pass method integrating specular and diffuse
reflection. In Computer Graphics (Proceedings of ACM SIGGRAPH 89), pages 335–344. ACM
Press, 1989.
[66] François X. Sillion, James R. Arvo, Stephen H. Westin, and Donald P. Greenberg. A global
illumination solution for general reflectance distributions. In Computer Graphics (Proceedings
of ACM SIGGRAPH 91), pages 187–196. ACM Press, 1991.
[67] Peter-Pike Sloan, Jesse Hall, John Hart, and John Snyder. Clustered Principal Components for
Precomputed Radiance Transfer. ACM Transactions on Graphics, 22(3):382–391, 2003.
[68] Peter-Pike Sloan, Jan Kautz, and John Snyder. Precomputed radiance transfer for real-time
rendering in dynamic, low-frequency lighting environments. ACM Transactions on Graphics,
21(3):527–536, 2002.
[69] Peter-Pike Sloan, Xinguo Liu, Heung-Yeung Shum, and John Snyder. Bi-Scale Radiance
Transfer. ACM Transactions on Graphics, 22(3):370–375, July 2003.
[70] Brian E. Smits, James R. Arvo, and David H. Salesin. An importance-driven radiosity algorithm. In Computer Graphics (Proceedings of ACM SIGGRAPH 92), pages 273–282. ACM
Press, 1992.
[71] A. James Stewart and Sherif Ghali. Fast computation of shadow boundaries using spatial
coherence and backprojections. In Proceedings of ACM SIGGRAPH 94, pages 231–238. ACM
Press, 1994.
[72] Eric Tabellion and Arnauld Lamorlette. An Approximate Global Illumination System for
Computer-Generated Films. ACM Transactions on Graphics, 23(3), 2004.
[73] Patrick Teo, Eero Simoncelli, and David Heeger. Efficient Linear Re-rendering for Interactive
Lighting Design. Technical Report CS-TN-97-60, Stanford University, 1997.
[74] Roy Troutman and Nelson L. Max. Radiosity algorithms using higher order finite element
methods. In Proceedings of ACM SIGGRAPH 93, pages 209–212. ACM Press, 1993.
[75] Eric Veach and Leonidas J. Guibas. Bidirectional estimators for light transport. In Proceedings
of Eurographics Rendering Workshop 1994, pages 147–162, 1994.
[76] John R. Wallace, Michael F. Cohen, and Donald P. Greenberg. A two-pass solution to the
rendering equation: A synthesis of ray tracing and radiosity methods. In Computer Graphics
(Proceedings of ACM SIGGRAPH 87), pages 311–320. ACM Press, 1987.
[77] Rui Wang, John Tran, and David Luebke. All-Frequency Relighting of Non-Diffuse Objects
using Separable BRDF Approximation. In Rendering Techniques 2004 (Proceedings of the
Eurographics Symposium on Rendering 2004), pages 345–354. The Eurographics Association,
2004.
[78] Stephen Westin, James Arvo, and Kenneth Torrance. Predicting Reflectance Functions From
Complex Surfaces. In Computer Graphics (Proceedings of ACM SIGGRAPH 92), pages 255–
264, July 1992.
[79] Turner Whitted. An improved illumination model for shaded display. Communications of the ACM, 23(6):343–349, 1980.
[80] Andrew J. Willmott. Hierarchical Radiosity with Multiresolution Meshes. PhD thesis, Carnegie
Mellon University, 2000.
[81] Harold R. Zatz. Galerkin radiosity: a higher order solution method for global illumination. In
Proceedings of ACM SIGGRAPH 93, pages 213–220. ACM Press, 1993.