CS121 - My E-town

Week 10 - Friday





What did we talk about last time?
Barry Caudill was a guest speaker
Reflections
Transmittance
Refractions



Light is focused by reflective or refractive surfaces
A caustic is the curve or surface of concentrated light
The name comes from the Greek for burning
Reflective: light concentrated by a curved, mirror-like surface (e.g., the bright cusp of light inside a metal ring or coffee mug)
Refractive: light focused after passing through a transparent object (e.g., the bright pattern beneath a glass of water)

First:
 The scene is rendered from the view of the light
 Track the diversion of the light and see which locations are hit
 Store the result in an image with Z-buffer values, called a photon buffer

Second:
 Treat each location that received light as a point object called a splat
 Transform these to the eye viewpoint and render them to a caustic map

Third:
 Project the map onto the screen and combine it with the shadow map
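A minimal Python/NumPy sketch of the second step (splatting): photon hit positions, assumed to be already projected into the eye's view, are accumulated into a caustic map with a small falloff kernel. The array names, splat radius, and random sample data are illustrative, not part of the slides.

```python
# Sketch: accumulate caustic "splats" into a screen-space caustic map.
# Assumes photon hit points were gathered from the light-view pass and
# already projected to eye/screen space; names here are illustrative.
import numpy as np

H, W = 256, 256
caustic_map = np.zeros((H, W), dtype=np.float32)

# Hypothetical splat positions (pixel coordinates) and intensities.
rng = np.random.default_rng(0)
splat_xy = rng.uniform([0, 0], [W, H], size=(5000, 2))
splat_intensity = np.full(5000, 0.02, dtype=np.float32)

radius = 2  # splat footprint in pixels
for (x, y), w in zip(splat_xy, splat_intensity):
    xi, yi = int(x), int(y)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            px, py = xi + dx, yi + dy
            if 0 <= px < W and 0 <= py < H:
                # Gaussian falloff so overlapping splats blend smoothly.
                falloff = np.exp(-(dx * dx + dy * dy) / 2.0)
                caustic_map[py, px] += w * falloff

# caustic_map is then projected onto the screen and combined with the shadow map.
```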

Look at each generator triangle
 Those that are specular or refractive
Each vertex on each generator triangle has a normal
Create a caustic volume, like a shadow volume, except that the sides are warped by either reflection or refraction
 For receiver pixels in the volume, intensity is computed





Subsurface scattering occurs when light enters an object, bounces around, and exits at a different point
If the exit point is close to the entrance point (in the same pixel), we can use a BRDF
If it spans a larger distance, we need an algorithm to track photon propagation

Examples
 Pearlescent paint
 Human skin
▪ The one that matters most to us

Causes
 Foreign particles (pearls)
 Discontinuities (air bubbles)
 Density variations
 Structural changes
We need to know how long light has traveled through the object
 Tracking individual photons is impossible, so all algorithms will be statistical

Subsurface scattering does not affect specular reflection
We often use normal maps to add detail to specular reflection characteristics
 Some work suggests that this same normal map should be ignored for diffuse terms
 Or the normals can be blurred further, since the surface direction appears to change slowly if light from other directions is exiting diffusely
 More complex models render the diffuse lighting onto a texture and then selectively blur the R, G, and B components for more realism


This texture space diffusion technique was used in The Matrix Reloaded for rendering skin
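A rough sketch of that selective blurring, assuming the diffuse lighting has already been rendered into a texture; the per-channel blur radii are made-up values chosen to mimic skin, where red scatters farthest.

```python
# Sketch of texture-space diffusion: blur each color channel of the
# diffuse lighting texture by a different amount. The specific radii are
# illustrative; in skin, red light scatters farther than green or blue.
import numpy as np
from scipy.ndimage import gaussian_filter

# diffuse_tex: H x W x 3 texture holding diffuse lighting in texture space.
diffuse_tex = np.random.rand(512, 512, 3).astype(np.float32)  # placeholder data

blur_radii = (8.0, 3.0, 1.5)  # per-channel sigmas in texels (R, G, B)
blurred = np.stack(
    [gaussian_filter(diffuse_tex[..., c], sigma=blur_radii[c]) for c in range(3)],
    axis=-1,
)
# 'blurred' is then combined with the (unblurred) specular term when shading.
```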
We could cast rays into objects to see where they come out, but it's expensive
 An alternative is to use depth maps to record how far the light travels through the object, which determines how much the light is colored by the object
 Refraction when the light enters the object is usually ignored
 Only exiting refraction is computed
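A hedged sketch of the depth-map idea: the traveled distance is the difference between the light-space depths where the ray enters and exits, and the light is then attenuated with an exponential (Beer-Lambert style) falloff. The depth values and absorption coefficients below are placeholders.

```python
# Sketch: estimate the distance light travels through an object from two
# light-space depths (entry and exit), then attenuate with Beer-Lambert
# absorption. Depth values and the absorption coefficients are illustrative.
import numpy as np

def subsurface_tint(entry_depth, exit_depth, absorption_rgb):
    """Attenuate white light by the material per unit of traveled distance."""
    thickness = np.maximum(exit_depth - entry_depth, 0.0)
    return np.exp(-np.asarray(absorption_rgb) * thickness)

# Example: light crosses 0.35 units of a reddish material (absorbs G/B more).
print(subsurface_tint(entry_depth=1.20, exit_depth=1.55,
                      absorption_rgb=(0.5, 2.0, 3.0)))
```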



To create a realistic scene, it is necessary for light to bounce between surfaces many times
This causes subtle effects in how light and shadow interact
This also causes certain lighting effects such as color bleeding (where the color of an object is projected onto nearby surfaces)




Radiosity was the first graphics technique designed to simulate radiance transfer
Turn on the light sources and allow the environmental light to reach equilibrium
While the light is in a stable state, each surface may be treated as a light source
A general simplification is to assume that all indirect light is emitted from a diffuse surface
 Radiosity doesn't do specular reflection


The outgoing radiance of a diffuse surface is:
L = rE / π
where r and E are the reflectance and irradiance
Each surface is represented by a number of patches
 To get even lighting and soft shadows, it may be necessary to break polygons down into smaller patches
 It's even possible to have fewer patches than polygons
To create a radiosity solution, we need to create a matrix of form factors
 These are geometric values saying what proportion of light travels directly from one surface to another
 The form factor between a surface point with differential area daᵢ and another surface point with daⱼ is
dfᵢⱼ = (cos θᵢ cos θⱼ / (π d²)) hᵢⱼ daⱼ
where d is the distance between the two points


These differentials have to be (numerically) integrated for each patch
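A possible numerical integration of that differential form factor between two small square patches, using Monte Carlo sampling and assuming full visibility (hᵢⱼ = 1); patch positions, sizes, and sample counts are illustrative.

```python
# Sketch: numerically integrate the differential form factor between two
# square patches by sampling points on each. Patch positions/sizes are
# illustrative; visibility h_ij is assumed to be 1 (nothing in between).
import numpy as np

rng = np.random.default_rng(1)

def form_factor(origin_i, normal_i, origin_j, normal_j, size, samples=4096):
    """Estimate F_ij between two square patches of edge length `size`."""
    def tangents(n):
        # Build two in-plane axes perpendicular to the patch normal.
        t = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        u = np.cross(n, t); u /= np.linalg.norm(u)
        v = np.cross(n, u)
        return u, v

    ui, vi = tangents(normal_i)
    uj, vj = tangents(normal_j)
    s = rng.uniform(-0.5, 0.5, size=(samples, 4)) * size
    pi = origin_i + s[:, :1] * ui + s[:, 1:2] * vi   # points on patch i
    pj = origin_j + s[:, 2:3] * uj + s[:, 3:4] * vj  # points on patch j

    r = pj - pi
    d2 = np.sum(r * r, axis=1)
    d = np.sqrt(d2)
    cos_i = np.clip(np.sum(r * normal_i, axis=1) / d, 0.0, None)
    cos_j = np.clip(np.sum(-r * normal_j, axis=1) / d, 0.0, None)

    area_j = size * size
    # Average the integrand over patch i, integrated over patch j's area.
    return np.mean(cos_i * cos_j / (np.pi * d2)) * area_j

# Two unit patches facing each other, one unit apart (analytic F ~ 0.2).
print(form_factor(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                  np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]),
                  size=1.0))
```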


If the receiving patch faces away from the viewed patch, the form factor is zero
hᵢⱼ is the visibility factor, which ranges from 0 (not visible) to 1 (fully visible)
 It is scaled down if the view between the surfaces is fully or partially blocked
A common way to solve for the equilibrium is to form a square matrix with each row formed by the form factors for a given patch times the patch’s reflectivity
 Performing Gaussian elimination on the resulting matrix gives the exitance of the patch in question
 Much research has focused on how to do this better
 Though radiosity is no longer a sexy research topic since it doesn't allow for specular effects
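One way this solve can look in practice, as a sketch: build the matrix (I − ρF), where F holds the form factors and ρ the patch reflectivities, and solve the linear system for the patch exitances with a direct (Gaussian-elimination style) solver. The form factors and patch properties below are made up.

```python
# Sketch: solve the radiosity system B = E + rho * F @ B, i.e.
# (I - diag(rho) @ F) B = E, with a direct solver.
# The form factor matrix F and patch properties here are made-up examples.
import numpy as np

n = 4
F = np.array([  # F[i, j]: fraction of light leaving patch i that reaches patch j
    [0.0, 0.3, 0.2, 0.1],
    [0.3, 0.0, 0.1, 0.2],
    [0.2, 0.1, 0.0, 0.3],
    [0.1, 0.2, 0.3, 0.0],
])
rho = np.array([0.7, 0.5, 0.6, 0.4])   # diffuse reflectance per patch
E = np.array([10.0, 0.0, 0.0, 0.0])    # emitted exitance (patch 0 is the light)

M = np.eye(n) - rho[:, None] * F
B = np.linalg.solve(M, E)              # exitance of every patch at equilibrium
print(B)
```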



Radiosity is a great technique for getting soft shadows and many other effects characteristic of diffuse lighting
Specular highlights and reflections are not present
An alternative global illumination model is ray tracing, which shoots single rays through the scene and computes their colors
Rays are traced from the camera through the screen to the closest object; the location hit is called the intersection point
 For each intersection point:
 Trace a ray to each light source
 If the object is shiny, trace a reflection ray
 If the object is not opaque, trace a refraction ray
Opaque objects can block the rays, while transparent objects attenuate the light
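A compact sketch of that recursion for a sphere-only scene. The scene contents, material fields, and constants are stand-ins, and the refraction branch is omitted for brevity; a non-opaque hit would spawn a third, refracted ray in the same way.

```python
# Sketch of classical (Whitted-style) recursive ray tracing over a scene of
# spheres. Everything here is an illustrative stand-in, not the slides'
# exact formulation.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

class Sphere:
    def __init__(self, center, radius, color, reflectivity=0.0):
        self.center, self.radius = np.asarray(center, float), radius
        self.color, self.reflectivity = np.asarray(color, float), reflectivity

    def intersect(self, origin, direction):
        """Return distance to the nearest hit along the unit-length ray, or None."""
        oc = origin - self.center
        b = 2.0 * np.dot(oc, direction)
        c = np.dot(oc, oc) - self.radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None
        t = (-b - np.sqrt(disc)) / 2.0
        return t if t > 1e-4 else None

SCENE = [Sphere([0, 0, -3], 1.0, [0.8, 0.2, 0.2], reflectivity=0.3),
         Sphere([1.5, 0, -4], 1.0, [0.2, 0.2, 0.8])]
LIGHT_POS = np.array([5.0, 5.0, 0.0])

def closest_hit(origin, direction):
    best = (None, np.inf)
    for obj in SCENE:
        t = obj.intersect(origin, direction)
        if t is not None and t < best[1]:
            best = (obj, t)
    return best

def trace(origin, direction, depth=0):
    obj, t = closest_hit(origin, direction)
    if obj is None:
        return np.zeros(3)                      # background
    point = origin + t * direction
    normal = normalize(point - obj.center)

    # Shadow ray toward the light: opaque objects block it entirely.
    to_light = normalize(LIGHT_POS - point)
    in_shadow = closest_hit(point, to_light)[0] is not None
    color = np.zeros(3) if in_shadow else obj.color * max(np.dot(normal, to_light), 0.0)

    # Shiny objects spawn a reflection ray, traced recursively.
    if obj.reflectivity > 0.0 and depth < 3:
        refl_dir = direction - 2.0 * np.dot(direction, normal) * normal
        color += obj.reflectivity * trace(point, refl_dir, depth + 1)
    return np.clip(color, 0.0, 1.0)

# One primary ray through the middle of the screen.
print(trace(np.array([0.0, 0.0, 0.0]), normalize(np.array([0.0, 0.0, -1.0]))))
```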

Pros
 Classical ray tracing is relatively fast (only a few
rays are traced per pixel)
 Good for direct lighting and specular surfaces

Cons
 Not good for environmental lighting
 Does not handle glossy and diffuse
interreflections
 Only makes hard shadows

Ray directions are randomly chosen, weighted by the BRDF
 Called importance sampling
 Usually it means tracing more rays along the specular reflection path
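A sketch of one common importance-sampling choice: cosine-weighted sampling of the hemisphere around the normal, which suits a diffuse BRDF; a glossy BRDF would instead concentrate samples around the reflection direction. Names and constants are illustrative.

```python
# Sketch: importance sampling of ray directions. For a Lambertian (diffuse)
# BRDF a common choice is a cosine-weighted distribution about the surface
# normal.
import numpy as np

rng = np.random.default_rng(2)

def cosine_weighted_direction(normal):
    """Draw a random direction about `normal` with probability ~ cos(theta)."""
    u1, u2 = rng.random(), rng.random()
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])

    # Build an orthonormal basis around the normal and rotate into world space.
    helper = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    tangent = np.cross(normal, helper); tangent /= np.linalg.norm(tangent)
    bitangent = np.cross(normal, tangent)
    return local[0] * tangent + local[1] * bitangent + local[2] * normal

n = np.array([0.0, 0.0, 1.0])
samples = np.array([cosine_weighted_direction(n) for _ in range(10000)])
print("mean cos(theta):", samples[:, 2].mean())   # ~2/3 for a cosine-weighted pdf
```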

There are two main types of Monte Carlo ray
tracing:
 Path tracing
 Distribution ray tracing

Path tracing
 A single ray is reflected or refracted throughout the scene, changing direction at each surface intersection
 Good sampling requires millions of rays or more per pixel

Distribution ray tracing
 Spawns random rays from every surface intersection
 Starts with fewer rays through each pixel than path tracing, but ends up with lots at each surface
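A tiny back-of-the-envelope comparison of per-pixel ray counts under the two strategies; all of the numbers are illustrative.

```python
# Sketch contrasting per-pixel ray counts. Path tracing follows a single ray
# per bounce for each sample; distribution ray tracing starts with fewer
# primary rays but spawns several new rays at every intersection, so its
# count grows geometrically with depth. All numbers are illustrative.
bounces = 4

path_samples = 64                      # primary rays per pixel (path tracing)
path_rays = path_samples * bounces     # one ray traced per bounce per sample

dist_primaries = 4                     # fewer primary rays per pixel
branch = 8                             # rays spawned at each intersection
dist_rays = dist_primaries * sum(branch ** d for d in range(bounces))

print("path tracing rays per pixel:        ", path_rays)   # 256
print("distribution ray tracing rays/pixel:", dist_rays)   # 2340
```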

Pros
 Very realistic

Cons
 Tons of computational power is necessary
 Not good for real-time rendering with many
objects
Full global illumination algorithms are expensive
 Some results can be precomputed
 Those results can be used in real-time rendering
Scene and light sources must remain static
 The majority of scenes are only partially static

Lighting on smooth, Lambertian surfaces is simple
 A single RGB value giving irradiance
Dynamic lights can simply be added on top
Irradiance can be stored in vertices if there is a lot of geometric detail
 Or in texture maps (which could change depending on the situation)
 Cannot be used with glossy or specular surfaces


 Irradiance maps have no directionality

Can’t be used with high-frequency normal maps


Can be used with specular and glossy surfaces, and high-frequency normal maps
Additional directional data must be stored
 How irradiance changes with surface normal
Storing an irradiance environment map at each surface location

Indirect light illuminates dynamic objects
Multiple methods:
1. Interpolate irradiance values from closest environment maps (Valve)
2. Use the prelighting on adjacent surfaces (racing games)
3. Store average irradiances at each point (Little Big Planet)
Little has been done for specular and glossy surfaces
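A sketch of method 1, interpolating stored irradiance from nearby precomputed probes; the probe data and the inverse-distance weighting are illustrative choices, not a specific engine's scheme.

```python
# Sketch: light a dynamic object by interpolating irradiance from the nearest
# precomputed probes. Probe placement, weighting, and data are illustrative.
import numpy as np

probe_positions = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
probe_irradiance = np.array([[0.9, 0.8, 0.7], [0.2, 0.3, 0.6], [0.5, 0.5, 0.5]])  # RGB

def interpolated_irradiance(position, k=3):
    d = np.linalg.norm(probe_positions - position, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)          # inverse-distance weights
    w /= w.sum()
    return (w[:, None] * probe_irradiance[nearest]).sum(axis=0)

print(interpolated_irradiance(np.array([1.0, 1.0, 0.0])))
```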


Global illumination algorithms precompute quantities other than lighting
Often, a measure of how much parts of a scene block light is computed
 Bent normal, occlusion factor
These precomputed occlusion quantities can be applied to changing light in a scene
This creates a more realistic appearance than precomputing lighting alone



Cast rays over the hemisphere around each surface location or vertex
Cast rays may be restricted to a set distance
Usually involves a cosine weighting factor
 The most efficient way is importance sampling
 Instead of casting rays uniformly over the hemisphere, the distribution of ray directions is cosine-weighted around the surface normal
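A sketch of that estimator: cast cosine-weighted rays from a point, count how many hit something within the maximum distance, and return the unoccluded fraction as the ambient occlusion factor. The single sphere occluder and all constants are placeholders.

```python
# Sketch: ambient occlusion at a point by casting cosine-weighted rays over
# the hemisphere and counting how many escape without hitting anything within
# a maximum distance. The sphere occluder and all constants are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def cosine_ray(normal):
    u1, u2 = rng.random(), rng.random()
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
    helper = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(normal, helper); t /= np.linalg.norm(t)
    b = np.cross(normal, t)
    return local[0] * t + local[1] * b + local[2] * normal

def hits_sphere(origin, direction, center, radius, max_dist):
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - np.sqrt(disc)) / 2.0
    return 1e-4 < t < max_dist

def ambient_occlusion(point, normal, rays=1024, max_dist=2.0):
    center, radius = np.array([0.0, 0.0, 1.5]), 1.0   # sphere hovering above the point
    hits = sum(hits_sphere(point, cosine_ray(normal), center, radius, max_dist)
               for _ in range(rays))
    return 1.0 - hits / rays                          # 1 = fully open, 0 = fully blocked

print(ambient_occlusion(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])))
```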

Precomputed ambient occlusion factors are only valid on stationary objects
 Example: a racetrack
For moving objects (like a car), ambient occlusion can be computed on a large flat plane
This works for rigid objects, but deformable objects would need many precomputed poses
 Like a human



Horizon mapping is used to determine self-occlusion on a height field surface
For each point of the surface, the altitude angle of the horizon is determined
Soft shadowing can be supported by tracking the angular extents of the light source
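A sketch of computing one horizon angle on a height field: walk outward from a texel in a single direction and keep the largest altitude angle seen; real horizon maps store several such directions per texel. The height data here is random filler.

```python
# Sketch: horizon mapping on a height field. For one direction, walk outward
# from each texel and record the largest altitude angle to any other texel;
# a light lower than that angle is self-occluded at this texel.
import numpy as np

heights = np.random.default_rng(4).random((64, 64))  # placeholder height field
texel_size = 1.0

def horizon_angle(hf, x, y, direction=(1, 0), max_steps=32):
    """Altitude angle (radians) of the horizon at (x, y) along `direction`."""
    best = 0.0
    for step in range(1, max_steps + 1):
        sx, sy = x + direction[0] * step, y + direction[1] * step
        if not (0 <= sx < hf.shape[1] and 0 <= sy < hf.shape[0]):
            break
        rise = hf[sy, sx] - hf[y, x]
        run = step * texel_size
        best = max(best, np.arctan2(rise, run))
    return best

angle = horizon_angle(heights, 10, 10)
print("horizon altitude:", np.degrees(angle), "degrees")
# A light at an altitude below `angle` along +x is blocked at this texel.
```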


An alternative technique is to use volume textures to store horizon angles
This is useful for a light source that travels along some predetermined path
 Like the sun
Multiple occluded angle intervals are stored, rather than one horizon angle
This enables modeling non-height-field geometry
Spherical harmonics are the angular portion of a set of solutions to Laplace's equation
 The spherical harmonics that we're interested in form an orthogonal system that gives us an organized way to refer to more and more complicated coverage of an object from different directions






We can record the effects on a point from many different sources of irradiance
These effects can be stored as spherical harmonics coefficients
The final result gets the dynamic irradiance from the same spherical harmonics directions and combines the effects
Wavelets can be used instead of spherical harmonics, sometimes with better results
These techniques are best suited for diffuse (low frequency) lighting
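A sketch of the projection step: estimate the first nine real spherical harmonics coefficients of a directional lighting function by Monte Carlo integration over the sphere, then reconstruct it from those coefficients. The toy lighting function is illustrative.

```python
# Sketch: project a directional lighting function onto the first nine real
# spherical harmonics (bands 0-2) by Monte Carlo integration, then evaluate
# the reconstruction. The example lighting function is a toy.
import numpy as np

def sh_basis(d):
    """First 9 real SH basis functions evaluated at unit direction d."""
    x, y, z = d
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def random_directions(n, rng):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def lighting(d):
    """Toy environment: bright toward +z, dim elsewhere."""
    return max(d[2], 0.0) + 0.1

rng = np.random.default_rng(5)
dirs = random_directions(20000, rng)

# Coefficients c_i = integral over the sphere of L(d) * Y_i(d),
# estimated as the sample mean times the sphere's area (4*pi).
coeffs = 4.0 * np.pi * np.mean(
    [lighting(d) * sh_basis(d) for d in dirs], axis=0)

test_dir = np.array([0.0, 0.0, 1.0])
print("reconstructed:", coeffs @ sh_basis(test_dir), " true:", lighting(test_dir))
```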

Image-based effects
 Skyboxes
 Sprites
 Billboarding
 Particle systems

Keep working on Project 3
 Due next Friday by midnight

Read Chapter 10