
Irradiance and Incoming Radiance
Imagine a sensor which is a small, flat plane centered at a point x in space and oriented so that its normal points in the direction n. This sensor can compute the total light energy received per unit time in watts.
After dividing by the area of the sensor, we can say that the energy flux that the sensor receives per unit area is the irradiance level E_n(x).
This irradiance can be computed from the incoming radiance L(\vec{w}, x), which is a measure of the light energy arriving at point x from direction \vec{w}. By integrating this incoming radiance over the entire hemisphere H_n centered on direction n at x, we can compute the irradiance E_n(x):
E_n(x) = \int_{H_n} L(\vec{w}, x)\, (\vec{w} \cdot n)\, d\vec{w}
The \vec{w} \cdot n term models the usual "cosine of the angle between the normal and the light direction vector" term that we are used to seeing in diffuse lighting calculations.
In computer graphics, we are mostly interested in the light reflected from the surface, since that light is what we see when we look at the surface. To a first approximation, we can assume that the incoming energy E_n(x) gets multiplied by a reflectivity term ρ(x) and then gets reflected equally in all directions throughout the upper hemisphere H_n to make an outgoing radiance term L(x, v), where v is the view direction from the surface toward the viewer.
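In symbols, this approximation simply says

L(x, v) = \rho(x)\, E_n(x)

for every view direction v in the upper hemisphere.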
Hemispheric Lighting
One of the simplest incoming radiance functions models an outdoor scene as a sky color for all directions ω in
the upper half-sphere of directions and a ground color for all directions ω in the lower half-sphere of directions.
This L(\vec{w}, x) is independent of the position x.
I will not present a derivation here, but the result of this hemispherical illumination is an irradiance term

E_n(x) = \int_{H_n} L(\vec{w}, x)\, (\vec{w} \cdot n)\, d\vec{w} = a \cdot \text{Sky\_Color} + (1 - a) \cdot \text{Ground\_Color}
where

a = \begin{cases} 1 - \frac{\sin\theta}{2} & \theta \le \pi/2 \\ \frac{\sin\theta}{2} & \theta > \pi/2 \end{cases}
Here θ is the angle between the normal n and the vector that points up vertically.
This simple irradiance model produces reasonable results when rendering outdoor scenes, and can be combined
with a traditional diffuse lighting calculation that models lighting from a point source "sun" at some specific
location in the sky.
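As a concrete illustration, here is a minimal C sketch of the hemispheric irradiance calculation above. The function name and the Color struct are my own, and the code assumes both input vectors are already unit length.

    #include <math.h>

    typedef struct { float r, g, b; } Color;

    /* Blend a sky color and a ground color using the piecewise weight a
     * described above: a = 1 - sin(theta)/2 for theta <= pi/2, and
     * a = sin(theta)/2 otherwise, where theta is the angle between the
     * unit surface normal and the unit "up" vector. */
    Color hemisphere_irradiance(const float normal[3], const float up[3],
                                Color sky, Color ground)
    {
        float cos_theta = normal[0]*up[0] + normal[1]*up[1] + normal[2]*up[2];
        float sin_theta = sqrtf(fmaxf(0.0f, 1.0f - cos_theta*cos_theta));

        float a = (cos_theta >= 0.0f) ? 1.0f - 0.5f*sin_theta : 0.5f*sin_theta;

        Color e;
        e.r = a*sky.r + (1.0f - a)*ground.r;
        e.g = a*sky.g + (1.0f - a)*ground.g;
        e.b = a*sky.b + (1.0f - a)*ground.b;
        return e;
    }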
Environment Mapping
A technique that is occasionally used in computer graphics for various applications is environment mapping. In
this technique we imagine that some object is contained inside an imaginary cube. Imagine further that we could
stand in the center of that cube and make six snapshots of the scene by looking out at the scene through the
center of each of the six faces in turn. In one version of this technique we make textures from each of those
snapshots and assemble these six face textures into a single texture that could be wrapped around the interior of
the cube.
Another more sophisticated version of this idea uses a sphere instead of a cube to enclose the object. We then
make a single texture that shows everything we could possibly see from the center of the sphere looking outward
and then paste that texture onto the interior of the sphere. This texture is known as a spherical environment map.
Irradiance Mapping
Imagine a polygon positioned in a complex scene with many areas of light and shadow. Suppose also that we
could reduce everything that we could see from some point on the surface of the polygon to a spherical
environment map positioned over that point. Just as above, we could model the illumination at that point on the
surface by
E_n(x) = \int_{H_n} L(\vec{w}, x)\, (\vec{w} \cdot n)\, d\vec{w}
The only difference this time around is that L(\vec{w}, x) will be a complicated function that captures all the various light and dark colors we would see if we were to look out into the surrounding scene in the direction \vec{w} from the point x.
One simplifying assumption we can make is that the object we are trying to illuminate is small relative to the objects that surround it in the scene. In that case, to a first approximation L(\vec{w}, x) is independent of x:

E_n = \int_{H_n} L(\vec{w})\, (\vec{w} \cdot n)\, d\vec{w}
An important optimization we can apply at this point is to precompute values for En and place them in a special
kind of environment map, called an irradiance map. By using this irradiance map to look up diffuse illumination
values and using the environment map to compute specular terms, we can produce a high quality lighting model.
Technical details - integrating over an environment map
To compute an irradiance value from an environment map we have to compute an integral.
E_n = \int_{H_n} L(\vec{w})\, (\vec{w} \cdot n)\, d\vec{w} = \int_{S} L(\omega)\, M(n \cdot \omega)\, d\omega
For simplicity here I have replaced the integral over the hemisphere H_n with the integral over the entire sphere S, and at the same time replaced the dot product \vec{w} \cdot n with a term M(\vec{w} \cdot n), where M(x) is a function that evaluates to x when x is positive and to 0 when x is negative. To emphasize the fact that we are integrating over a sphere, I have also replaced the direction vectors \vec{w} with angles ω.
The term L(ω) is a color value sampled from an environment map. In practice, most environment maps are
actually cube maps made up of discrete texture pixels (texels). In that case, the integral over all directions in an
environment map can be replaced by a sum over all texels in the six faces of a cube map.
E_n = \sum_i L_i\, M(n \cdot \omega_i)\, d\omega_i
Here Li is the color of texel i, ωi is the direction to that texel in the cube map, and dωi is the solid angle subtended
by that texel.
If we treat one of the faces of the cube map as a texture, we would access the texels via texture coordinates s and
t. Since we are treating the texture as the face of a cube centered at the origin, we can replace texture coordinates
with a more convenient coordinate system in x and y:
[Figure: one cube face drawn as a square, with texture corner (s = 0, t = 0) at (x = -1, y = -1), corner (s = 1, t = 1) at (x = 1, y = 1), and (x = 0, y = 0) at the center.]
To transform coordinate systems, we would do

\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} =
\begin{pmatrix} 2 & 0 & -1 \\ 0 & 2 & -1 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} s \\ t \\ 1 \end{pmatrix}
In the x, y coordinate system it is easy to compute the direction vector. All we have to do is to normalize the
vector (xi , yi , 1) and it becomes a direction vector ωi.
Computing the solid angle dωi subtended by a particular texel is a little more involved. Texels near the center of
the texture subtend a somewhat larger angle, while texels near the corner take up a smaller solid angle when we
map the cube texture onto a sphere. Here is a reference that explains how to compute the solid angle correctly
from the positions xi and yi.
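As a concrete sketch, the following C helper computes both the direction ω_i and an approximate solid angle dω_i for a texel on the cube face that lies on the plane z = 1. The function name and the projected-area approximation for dω_i are my own; the reference mentioned above gives the exact formula.

    #include <math.h>

    /* For texel (i, j) on a size-by-size face lying on the plane z = 1,
     * compute the unit direction to the texel center and an approximate
     * solid angle: (texel area on the face plane) / r^3, where r is the
     * distance from the origin to the texel center. */
    void texel_direction_and_solid_angle(int i, int j, int size,
                                         float dir[3], float *d_omega)
    {
        float x = 2.0f * (i + 0.5f) / size - 1.0f;
        float y = 2.0f * (j + 0.5f) / size - 1.0f;
        float r = sqrtf(x*x + y*y + 1.0f);

        dir[0] = x / r;
        dir[1] = y / r;
        dir[2] = 1.0f / r;

        float texel_area = (2.0f / size) * (2.0f / size);
        *d_omega = texel_area / (r * r * r);
    }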
Using a shader to compute En
Computing En as described above is a very compute intensive operation, because we have to compute the integral
over the cube map for each new direction vector n that we want to work with. At the same time, the details for
different values of n are highly repetitive. This suggests that we should enlist the aid of shaders to compute this
mapping for us. Here is the outline of a strategy that makes it possible to do this.
1. Use OpenGL to render a two-dimensional square centered at the origin with sides of length 2. This square represents one of the sides of our cubical irradiance map. Given the mapping above we can map any point (x, y) on the square to a direction vector n.
2. We render the square to a framebuffer with dimensions size by size pixels, where size is the
desired size of our irradiance cube map texture.
3. In the fragment shader, we translate the interpolated fragment positions we are given into
direction vectors n and construct loops that sum over the six faces of the environment map:
E_n = \sum_i L_i\, M(n \cdot \omega_i)\, d\omega_i
4. When rendering is done, we convert the image in the frame buffer to a texture for use by our
irradiance cube map.
Here is a reference that shows how to render an image to a texture.
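To make the fragment shader's job concrete, here is a CPU-side C sketch of the sum it performs for one normal n. The per-face arrays of texel colors, directions, and solid angles are assumed to have been precomputed (for example with the helper sketched earlier, rotated into each face's orientation); these names and layouts are illustrative only.

    /* Accumulate E_n = sum_i L_i M(n . omega_i) d_omega_i over all six faces.
     * face_colors[f]: RGB texel colors for face f (3 floats per texel).
     * face_dirs[f]:   unit direction to each texel (3 floats per texel).
     * face_domega[f]: solid angle of each texel (1 float per texel). */
    void irradiance_for_normal(const float n[3],
                               const float *face_colors[6],
                               const float *face_dirs[6],
                               const float *face_domega[6],
                               int size,
                               float e_n[3])
    {
        e_n[0] = e_n[1] = e_n[2] = 0.0f;
        for (int f = 0; f < 6; f++) {
            for (int t = 0; t < size * size; t++) {
                const float *w = &face_dirs[f][3 * t];
                /* M(n . omega_i): the dot product clamped to zero. */
                float m = n[0]*w[0] + n[1]*w[1] + n[2]*w[2];
                if (m <= 0.0f)
                    continue;
                float weight = m * face_domega[f][t];
                e_n[0] += face_colors[f][3*t + 0] * weight;
                e_n[1] += face_colors[f][3*t + 1] * weight;
                e_n[2] += face_colors[f][3*t + 2] * weight;
            }
        }
    }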
Environment maps versus irradiance maps
Here are some pictures that illustrate how environment maps relate to irradiance maps. The first image below is a
typical cubical environment map.
Here is that same environment map mapped onto a sphere.
Next we have the irradiance map derived from this environment map:
Irradiance Mapping with Spherical Harmonics
In 2001, Ramamoorthi and Hanrahan published a paper describing a much more efficient technique for computing irradiance maps. In this paper they used spherical harmonics to decompose the terms L(ω) and M(n · ω) in the irradiance integral

E_n = \int_{S} L(\omega)\, M(n \cdot \omega)\, d\omega

and subsequently greatly reduced the amount of time needed to compute an irradiance map.
Their technique is based on the use of spherical harmonics. Since we have not encountered this concept before,
some basic background is in order.
The spherical harmonic functions Y_{l,m}(θ,φ) are a set of orthogonal functions defined on the unit sphere. These functions are defined in terms of the complex-valued spherical harmonics

Y_l^m(\theta, \phi) = N\, e^{i m \phi}\, P_l^m(\cos\theta)

where P_l^m(x) is the associated Legendre polynomial of degree l and order m, and N is a normalization factor that depends on l and m. The functions Y_l^m(θ,φ) arise in the solution of the polar form of the Laplace equation.
In terms of the complex-valued spherical harmonics, the real-valued spherical harmonics are given by

Y_{l,m} = \begin{cases}
\frac{1}{\sqrt{2}}\left( Y_l^m + (-1)^m\, Y_l^{-m} \right) & m > 0 \\
Y_l^0 & m = 0 \\
\frac{1}{i\sqrt{2}}\left( Y_l^{-m} - (-1)^m\, Y_l^{m} \right) & m < 0
\end{cases}
Because the spherical harmonic functions are orthonormal on the unit sphere, any function defined on the unit
sphere can be described as a linear combination of spherical harmonics:
f(\theta,\phi) = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} f_{l,m}\, Y_{l,m}(\theta,\phi)
The coefficients f_{l,m} are computed by integrating the target function against the spherical harmonics:

f_{l,m} = \int_{0}^{2\pi} \int_{0}^{\pi} f(\theta,\phi)\, Y_{l,m}(\theta,\phi)\, \sin\theta\, d\theta\, d\phi
In the Ramamoorthi and Hanrahan paper the key insight was that the integral we need to compute

E_n = \int_{S} L(\omega)\, M(n \cdot \omega)\, d\omega

is in the form of a convolution

E_n = \int_{S} L(\omega)\, f(n, \omega)\, d\omega

This is a useful observation, because convolutions map to products when we transform to the space of spherical harmonic coefficients. Most importantly, if the function f(n, ω) is rotationally symmetric, the mapping to a product is particularly simple and straightforward.
Ramamoorthi and Hanrahan determined that E_n can be computed more easily by computing the coefficients of L and M with respect to the spherical harmonics and then forming

E_{l,m} = \sqrt{\frac{4\pi}{2l+1}}\, M_{l,0}\, L_{l,m}

where

M_{l,0} = \int_{0}^{2\pi} \int_{0}^{\pi/2} \cos\theta\, Y_{l,0}(\theta,\phi)\, \sin\theta\, d\theta\, d\phi

L_{l,m} = \int_{0}^{2\pi} \int_{0}^{\pi} L(\theta,\phi)\, Y_{l,m}(\theta,\phi)\, \sin\theta\, d\theta\, d\phi
This result is useful because the coefficients M_{l,0} and the term \sqrt{4\pi/(2l+1)} are independent of the illumination L, and can be precomputed:

\hat{A}_l = \sqrt{\frac{4\pi}{2l+1}}\, M_{l,0} = \begin{cases}
\frac{2\pi}{3} & l = 1 \\
0 & l \text{ odd and } > 1 \\
2\pi\, \frac{(-1)^{l/2-1}}{(l+2)(l-1)}\, \frac{l!}{2^l\, ((l/2)!)^2} & l \text{ even}
\end{cases}
More simply, the first few terms are

\hat{A}_0 = \pi \qquad \hat{A}_1 = \frac{2\pi}{3} \qquad \hat{A}_2 = \frac{\pi}{4}
Once we have computed the coefficients El,m we can recover the function En via an inverse transform:
E_n = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} E_{l,m}\, Y_{l,m}(n)
Ramamoorthi and Hanrahan pointed out that although the sum above is an infinite sum, we can compute En
accurate to about 2% by using only the terms up through and including l = 2. In practice this means that we only
have to compute 9 El,m terms:
E_{0,0} = \hat{A}_0\, L_{0,0} = \pi\, L_{0,0}

E_{1,m} = \hat{A}_1\, L_{1,m} = \frac{2\pi}{3}\, L_{1,m} \quad \text{for } m = -1, 0, 1

E_{2,m} = \hat{A}_2\, L_{2,m} = \frac{\pi}{4}\, L_{2,m} \quad \text{for } m = -2, -1, 0, 1, 2
Practical summary of the method
To compute our estimate En for any normal vector n we compute
E_n = \sum_{l=0}^{2} \sum_{m=-l}^{l} E_{l,m}\, Y_{l,m}(n)

E_{l,m} = \hat{A}_l\, L_{l,m}

Since the \hat{A}_l are precomputed and fixed, we only have to compute the L_{l,m} terms:
L_{l,m} = \int_{0}^{2\pi} \int_{0}^{\pi} L(\theta,\phi)\, Y_{l,m}(\theta,\phi)\, \sin\theta\, d\theta\, d\phi
Since the Ll,m terms are independent of the normal direction n we can precompute these terms from the
environment map ahead of time. This will allow us to precompute the El,m terms ahead of time, pass them down
to a fragment shader, and compute En very quickly and efficiently when we need it in the fragment shader. (In
fact, since En values typically do not vary all that much across a typical polygon, we can compute En values in a
vertex shader and use interpolation for the fragment shader.)
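A minimal C sketch of that reconstruction step follows. It assumes the nine E_{l,m} values for one color channel are stored in the order (0,0), (1,-1), (1,0), (1,1), (2,-2), (2,-1), (2,0), (2,1), (2,2); the storage order and function name are my own, and the constants are the rectangular-coordinate forms of Y_{l,m} listed below.

    #include <math.h>

    static const float PI_F = 3.14159265358979f;

    /* Reconstruct E_n for a unit normal (x, y, z) from the nine
     * spherical-harmonic coefficients E[0..8] of one color channel. */
    float reconstruct_irradiance(const float E[9], float x, float y, float z)
    {
        const float c00 = 0.5f  * sqrtf(1.0f  / PI_F);           /* Y_{0,0}          */
        const float c1  =         sqrtf(3.0f  / (4.0f * PI_F));  /* Y_{1,-1..1}      */
        const float c2  = 0.5f  * sqrtf(15.0f / PI_F);           /* xy, yz, xz terms */
        const float c20 = 0.25f * sqrtf(5.0f  / PI_F);           /* Y_{2,0}          */
        const float c22 = 0.25f * sqrtf(15.0f / PI_F);           /* Y_{2,2}          */

        return E[0] * c00
             + E[1] * c1  * y
             + E[2] * c1  * z
             + E[3] * c1  * x
             + E[4] * c2  * x * y
             + E[5] * c2  * y * z
             + E[6] * c20 * (3.0f * z * z - 1.0f)
             + E[7] * c2  * x * z
             + E[8] * c22 * (x * x - y * y);
    }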
Since the values L(θ,φ) are in practice computed by doing lookups in an environment map, once again it makes
the most sense to reduce the Ll,m integral to a sum over texels in the environment map:
L_{l,m} = \sum_i L_i\, Y_{l,m}(\omega_i)\, d\omega_i
To compute these sums in OpenGL we will need to fetch the pixels in the 6 textures that make up the
environment map. Recall that in an earlier step we computed these textures by rendering views of the scene into a
framebuffer. As we are doing this step, we can fetch the data in the frame buffer and dump that data into an array of 3*width*height floats in OpenGL via the function glReadPixels:
glReadPixels(0,0,width,height,GL_RGB,GL_FLOAT,array);
A convenient way to handle the spherical harmonic terms is to represent them in terms of rectangular coordinates
x, y, and z. If (x, y, z) is a point on the unit sphere, the spherical harmonics of interest are
Y_{0,0}(x,y,z) = \frac{1}{2}\sqrt{\frac{1}{\pi}}

Y_{1,-1}(x,y,z) = \sqrt{\frac{3}{4\pi}}\, y

Y_{1,0}(x,y,z) = \sqrt{\frac{3}{4\pi}}\, z

Y_{1,1}(x,y,z) = \sqrt{\frac{3}{4\pi}}\, x

Y_{2,-2}(x,y,z) = \frac{1}{2}\sqrt{\frac{15}{\pi}}\, x y

Y_{2,-1}(x,y,z) = \frac{1}{2}\sqrt{\frac{15}{\pi}}\, y z

Y_{2,0}(x,y,z) = \frac{1}{4}\sqrt{\frac{5}{\pi}}\, (3 z^2 - 1)

Y_{2,1}(x,y,z) = \frac{1}{2}\sqrt{\frac{15}{\pi}}\, x z

Y_{2,2}(x,y,z) = \frac{1}{4}\sqrt{\frac{15}{\pi}}\, (x^2 - y^2)
It is important to note that as we read pixels from the environment map, those pixels will correspond to locations
(x, y, z) that are not on the unit sphere. Those pixel coordinates will need to be normalized to points on the unit
sphere before using the formulas above.
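Putting these pieces together, here is a C sketch of how the nine L_{l,m} coefficients for one color channel might be accumulated, one texel at a time. The coefficient order matches the reconstruction sketch above, the texel position and solid angle are assumed to come from the earlier helpers, and all names are illustrative.

    #include <math.h>

    static const float PI_F = 3.14159265358979f;

    /* Add one texel's contribution L_i Y_{l,m}(omega_i) d_omega_i to the nine
     * running coefficients L[0..8] for a single color channel. pos is the
     * texel's (x, y, z) location (not necessarily unit length), d_omega its
     * solid angle, and value the texel's value in this channel. */
    void accumulate_sh(float L[9], const float pos[3], float d_omega, float value)
    {
        /* Normalize the texel position onto the unit sphere first. */
        float r = sqrtf(pos[0]*pos[0] + pos[1]*pos[1] + pos[2]*pos[2]);
        float x = pos[0] / r, y = pos[1] / r, z = pos[2] / r;

        float w = value * d_omega;   /* L_i * d_omega_i */

        L[0] += w * 0.5f  * sqrtf(1.0f  / PI_F);
        L[1] += w *         sqrtf(3.0f  / (4.0f * PI_F)) * y;
        L[2] += w *         sqrtf(3.0f  / (4.0f * PI_F)) * z;
        L[3] += w *         sqrtf(3.0f  / (4.0f * PI_F)) * x;
        L[4] += w * 0.5f  * sqrtf(15.0f / PI_F) * x * y;
        L[5] += w * 0.5f  * sqrtf(15.0f / PI_F) * y * z;
        L[6] += w * 0.25f * sqrtf(5.0f  / PI_F) * (3.0f * z * z - 1.0f);
        L[7] += w * 0.5f  * sqrtf(15.0f / PI_F) * x * z;
        L[8] += w * 0.25f * sqrtf(15.0f / PI_F) * (x * x - y * y);
    }

Looping this over every texel of all six faces yields the L_{l,m} values; multiplying each by the corresponding \hat{A}_l then gives the nine E_{l,m} coefficients to pass to the shader.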