A STUDY OF PHOTON MAPPING TECHNIQUES FOR
GLOBAL ILLUMINATION
Alireza Rasti1, Davood Rasti2, Ali Kouhzadi3, Ng Kok Why4
Faculty of Computing and Informatics, Multimedia University, Persiaran Multimedia,
63100 Cyberjaya, Selangor, Malaysia.1,2,3,4
[email protected], [email protected], [email protected], [email protected]
Abstract
Photon mapping is a biased global illumination algorithm that simulates the direct and indirect light effects in a scene. In this paper, we investigate several photon mapping algorithms and discuss their strengths and limitations. We have implemented the original Photon Mapping and Progressive Photon Mapping on the central processing unit (CPU) and the graphics processing unit (GPU) for a fair comparison. This aims to provide a thorough study that helps users choose the technique most appropriate to their needs. We also indicate which method exploits the GPU most efficiently on recent commodity graphics hardware.
Keywords:
Photorealistic Rendering, Photon Mapping, Physically-based Rendering, Global Illumination.
1. INTRODUCTION
One of the main objectives of Computer Graphics is to create images from a virtual three-dimensional (3D) environment that look identical to the real-world environment. To create such photorealistic images, one needs to simulate all kinds of light effects, such as reflection, refraction, shadows and caustics, which occur in the real world.
Many algorithms have been proposed to achieve photorealistic images. Each algorithm has its strengths and weaknesses for different light effects, and most of them are complex and time consuming. One of the most efficient lighting algorithms is photon mapping, which is able to compute most of the light effects appropriately. It is a two-pass, biased algorithm that can simulate all types of light scattering in a fraction of the time required by comparable unbiased methods.
In this paper, we investigate the photon mapping algorithm and three other algorithms that improve upon the original photon mapping. Section 2 covers the basic concepts of global illumination and rendering. Section 3 reviews the related background. Section 4 discusses the photon mapping techniques. Section 5 and Section 6 contain the conclusion and the references.
2. BASIC CONCEPTS OF GLOBAL
ILLUMINATION
2.1 Global Illumination versus Local
Illumination
In the real world, the light intensity on a surface is the sum of the direct light arriving from the light sources and the indirect light reflected or refracted from other surfaces.
Algorithms that only compute the direct light effects are known as Local Illumination. Algorithms that compute and simulate all kinds of direct and indirect light effects to create a photorealistic image are known as Global Illumination.
2.2 What is rendering?
Rendering is the process of creating a 2D image of a 3D scene or model. In the rendering process, the color of each image pixel is determined by computing the light intensity in the scene. In general, rendering can be categorized into two types: photorealistic rendering and non-photorealistic rendering.
- Photorealistic rendering: generating an image of a 3D scene that is indistinguishable from a photograph of the same scene taken in the real world. To create a photorealistic image, all types of light effects in the real world should be simulated.
- Non-photorealistic rendering: generating an image of a 3D scene that looks like a drawing of the same scene in the real world.
3. RELATED BACKGROUND
Ray tracing [13] is a technique that creates an image by tracing the path of light through the pixels of an image plane and simulating the effects of its intersections with the virtual objects in the scene.
3.1 Monte Carlo Ray Tracing
Monte Carlo ray tracing [12] is an extension of classic ray tracing that can simulate all kinds of light scattering in an environment. In this method, rays are emitted randomly to account for all light paths. In general, this method produces noisy results.
3.2 Path tracing
As ray tracing cannot efficiently handle some light effects such as motion blur, soft shadows and depth of field, it cannot generate a complete photorealistic image. Path tracing [2] was introduced as an extension to compute all light scattering effects.
Path tracing is a pure unbiased algorithm based on the Monte Carlo method, which creates a high quality image from a 3D scene. A large number of samples are required for each pixel of the image. In this method, a ray is cast from the camera into the scene. When the primary ray collides with an object, a number of rays are generated and shot randomly through the scene. These rays are traced until they reach a light source or another ray path that has already reached the light source. These rays are used to estimate the flux of each pixel.
Path tracing can generate a photorealistic image from a complex 3D scene, but it is very time consuming. Many rays that are cast randomly into the scene may never reach the light source, which increases the computation time. Moreover, a large number of rays is needed to create a high quality image; if only a few rays are used to compute the pixel flux, the final image will be very noisy.
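As a rough illustration of the per-pixel sampling loop described above, the following Python sketch averages many random path samples for one pixel. The trace_path routine is a hypothetical stand-in for a real path tracer, not code from this paper; it merely mimics paths that sometimes fail to reach a light source.

import random

def trace_path(x, y, max_depth=5):
    # Hypothetical path sample: returns the flux carried by one random
    # path started through pixel (x, y). Paths that never reach a light
    # within max_depth bounces contribute nothing.
    throughput = 1.0
    for _ in range(max_depth):
        if random.random() < 0.3:        # this bounce happens to hit a light
            return throughput * 10.0     # arbitrary light radiance
        throughput *= 0.5                # energy lost at each diffuse bounce
    return 0.0

def render_pixel(x, y, samples_per_pixel=256):
    # Monte Carlo estimate of the pixel flux: the average over many paths.
    total = sum(trace_path(x, y) for _ in range(samples_per_pixel))
    return total / samples_per_pixel

print(render_pixel(0, 0, samples_per_pixel=16))     # few samples: noisy
print(render_pixel(0, 0, samples_per_pixel=4096))   # many samples: smoother

With few samples the estimate varies strongly from run to run, which corresponds to the noisy images mentioned above.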
3.3 Bidirectional Path tracing
Bidirectional Path Tracing [7] is an extension of the path tracing algorithm. Most algorithms based on ray tracing are viewer-dependent: they shoot rays from the observer into the scene. However, both the light source and the viewer position are crucial in global illumination. The bidirectional path tracing algorithm therefore takes both the light source and the camera or viewer position into account: rays are cast from the camera and from the light source into the scene simultaneously (See Figure 1).
After tracing the light-source rays and the camera rays, the intersection points of the camera and light paths are connected by shadow rays. If there is no object between a pair of intersection points, an appropriate contribution is added to the flux of the image pixel (See Figure 2).
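As an illustration of this connection step (not part of the original algorithm description), the following Python sketch links every stored camera-path vertex to every light-path vertex and accumulates a contribution only when the shadow ray is unoccluded. The vertex representation and the visible and brdf callbacks are illustrative placeholders.

def connect_subpaths(camera_vertices, light_vertices, visible, brdf):
    # Connect each camera-path vertex to each light-path vertex with a
    # shadow ray; accumulate the contribution of every unoccluded pair.
    contribution = 0.0
    for cv in camera_vertices:
        for lv in light_vertices:
            if visible(cv, lv):                 # shadow ray finds no blocker
                contribution += brdf(cv, lv)    # weighted transfer between vertices
    return contribution

# Toy usage with 1D "vertices", a trivial visibility test and a fake BRDF.
cam_vertices = [0.2, 0.5]
light_vertices = [0.8, 0.9]
print(connect_subpaths(cam_vertices, light_vertices,
                       visible=lambda a, b: abs(a - b) < 0.5,
                       brdf=lambda a, b: 1.0 / (1.0 + abs(a - b))))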
Bidirectional path tracing can simulate all kinds of light scattering and produces better results than the path tracing algorithm. This is proven in [7] for typical indoor scenes where indirect illumination is important. Path tracing, by contrast, has to find the light source starting from the viewer position, which is difficult for some effects. For rendering caustics, for example, tracing the rays from the light source into the scene is easier and more efficient than tracing from the viewer position [4]. Besides, bidirectional path tracing is time consuming: to generate a less noisy image, a large number of sample rays is required.
Figure 1: Simultaneously casting rays from the camera and light source in bidirectional path tracing.
Figure 2: A schematic representation of the bidirectional path tracing algorithm.
4. PHOTON MAPPING TECHNIQUES
4.1 Photon Mapping
Photon mapping is a global illumination algorithm introduced by Henrik Wann Jensen [5]. This algorithm can simulate all light scattering effects, including diffuse inter-reflections, caustics and participating media, and it is able to compute both soft and hard shadows. In contrast to the Monte Carlo ray tracing algorithm, photon mapping is a biased algorithm.
Method: Photon mapping is a two-pass algorithm: building the photon maps and rendering.
- In the first pass, photons are cast randomly from the light sources into the scene. Depending on the surface material, photons are refracted, reflected or absorbed. The information of each photon hit (intersection point, direction and power) is stored in a data structure.
- In the second pass, a ray is shot from the viewer and traced through the scene for each pixel. When a ray hits an object, the illumination details of the photons nearest to the intersection point are used to compute the radiance contributed by those photons.
4.1.1 First pass (Building the photon map)
Building the photon maps requires three steps: photon casting, photon tracing and photon storing.
In the photon casting step, a large number of photons are shot from the light source into the scene. Each emitted photon carries a portion of the light source power, so that the distribution of power in the scene is correct [5]. This is calculated with Eq. 1:
Pphoton = Plight / ne    (Eq. 1)
where Pphoton is the power of each photon, Plight is the power of the light source and ne is the number of emitted photons.
The direction of the emitted photons depends on the shape of the light source. For a point light source, the photons should be cast in uniformly distributed random directions. For a directional light source, the direction of the emitted photons should be the same as the light source direction (See Figure 3).
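To illustrate Eq. 1 and these emission rules, the following Python sketch emits photons from either a point light (uniform random directions) or a directional light (fixed direction). The Photon structure and the sphere-sampling routine are assumptions of this sketch, not the authors' implementation.

import math
import random
from dataclasses import dataclass

@dataclass
class Photon:
    position: tuple    # emission origin (for a point light)
    direction: tuple   # unit direction of travel
    power: float       # Pphoton = Plight / ne  (Eq. 1)

def random_unit_vector():
    # Uniformly distributed direction on the unit sphere.
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def emit_photons(light_power, n_emitted, light_pos=(0, 0, 0), light_dir=None):
    # Emit n_emitted photons, each carrying Plight / ne power (Eq. 1).
    # If light_dir is given the light is directional, otherwise a point light.
    photon_power = light_power / n_emitted
    photons = []
    for _ in range(n_emitted):
        direction = light_dir if light_dir else random_unit_vector()
        photons.append(Photon(light_pos, direction, photon_power))
    return photons

photons = emit_photons(light_power=100.0, n_emitted=100000)
print(len(photons), photons[0].power)   # 100000 photons of power 0.001 each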
Figure 3: Emission from light sources: (a) point light,
(b) directional light, (c) square light, (d) general light.
In the photon tracing step, the emitted photons are traced through the scene. When a photon hits a diffuse surface, one or more new photons are generated at the intersection point and reflected in new directions. The energy of these photons is changed according to the material of the hit surface. Figure 4 shows our stored photons in the photon map after the first pass of the original photon mapping.
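One common way to pick the new direction at a diffuse hit is cosine-weighted hemisphere sampling, with the photon power scaled by the surface reflectance. The sketch below assumes a Lambertian surface and works in the local frame where the normal is the z axis; it is an illustrative choice, not necessarily the one used by the authors.

import math
import random

def cosine_weighted_direction():
    # Sample a direction in the local frame (normal = z axis) with
    # probability proportional to cos(theta), a common way to sample a
    # diffuse (Lambertian) BRDF.
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def bounce_diffuse(photon_power, reflectance):
    # Reflected photon after a diffuse hit: new local-frame direction and
    # power scaled by the surface reflectance (no Russian roulette here).
    return photon_power * reflectance, cosine_weighted_direction()

print(bounce_diffuse(0.001, reflectance=0.6))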
In the photon storing step, when a photon hits a diffuse surface, its details (e.g. intersection point, direction and energy) are stored in the photon map. This information can later be used to compute the indirect illumination of diffuse surfaces. For simulating specular reflection, a classic ray tracer is used.
Jensen [3] applied two different photon maps (a caustics photon map and a global photon map) to improve the rendering speed and reduce the required memory. The caustics photon map stores caustics photons, which are reflected or transmitted via one or more specular surfaces before hitting a diffuse surface. For rendering high quality caustics, more photons are needed in the caustics photon map (See Figure 5).
The global photon map is used for storing the photons which hit a diffuse surface (See Figure 6).
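A sketch of how a traced photon might be routed into the two maps: a photon that passed through at least one specular surface before reaching a diffuse surface goes into the caustics map, and every photon landing on a diffuse surface goes into the global map. The path-history flag and dictionary photon representation are illustrative assumptions.

caustics_map = []   # photons: specular bounce(s) followed by a diffuse hit
global_map = []     # all photons stored at diffuse hits

def store_photon(photon, hit_is_diffuse, came_via_specular):
    # Route a photon hit into the caustics and/or global photon map.
    if not hit_is_diffuse:
        return                       # specular hits are not stored
    if came_via_specular:
        caustics_map.append(photon)  # e.g. light -> glass ball -> floor
    global_map.append(photon)

store_photon({"pos": (0, 0, 0), "power": 0.001}, hit_is_diffuse=True, came_via_specular=True)
store_photon({"pos": (1, 0, 0), "power": 0.001}, hit_is_diffuse=True, came_via_specular=False)
print(len(caustics_map), len(global_map))   # 1 2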
Figure 4: Illustration of stored photons in the first pass
of original photon mapping.
Figure 5: Building the caustics photon map [1].
This technique raises a problem: if two new photons are generated at each intersection point, then after 8 interactions between photons and surfaces there would be 256 (= 2⁸) new photons for every photon emitted from the light source. The Russian roulette technique [5] was proposed to eliminate unnecessary photons from the photon map: at the intersection point, a photon is either reflected or absorbed.
Suppose 1000 photons hit a surface with a reflectivity of 0.5. We can either reflect 1000 photons with half the energy, or reflect 500 photons selected by Russian roulette with the full energy. This technique helps to reduce the number of photons. The new direction of a reflected photon is computed using the Bidirectional Reflectance Distribution Function (BRDF) of the surface.
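The example above (1000 photons, reflectivity 0.5) amounts to a single random decision per photon, as in this short sketch; the dictionary photon representation is a placeholder.

import random

def russian_roulette(photons, reflectivity):
    # Keep each photon with probability `reflectivity`, at full power,
    # instead of reflecting every photon with reduced power.
    return [p for p in photons if random.random() < reflectivity]

photons = [{"power": 0.001} for _ in range(1000)]
reflected = russian_roulette(photons, reflectivity=0.5)
print(len(reflected))   # about 500 photons, each keeping its full power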
Figure 6: Building the global photon map [1].
4.1.2 Second pass (Rendering)
In this pass, the scene is rendered using the details stored in the photon map. From the viewer position, a ray is cast and traced through the scene. When the ray hits a surface, the photon map is searched to find the photons that are nearest to the intersection point. The flux of these photons is used to compute the radiance at the intersection point. The nearest photons are found by expanding a sphere around the intersection point until it contains n photons, which are then used to compute the radiance [6] (See Figure 7).
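A sketch of this density estimate: the flux of the n photons found inside the expanded sphere is summed, weighted by the BRDF, and divided by the projected area πr² of the sphere. The constant Lambertian BRDF and the dictionary photon fields are simplifying assumptions.

import math

def estimate_radiance(nearest_photons, radius, diffuse_reflectance=0.7):
    # Radiance estimate at a hit point from its n nearest photons:
    # L is approximately (brdf / (pi * r^2)) * sum of photon powers within r.
    total_flux = sum(p["power"] for p in nearest_photons)
    brdf = diffuse_reflectance / math.pi      # Lambertian BRDF assumption
    return brdf * total_flux / (math.pi * radius * radius)

photons = [{"power": 0.001} for _ in range(50)]   # 50 photons found in the sphere
print(estimate_radiance(photons, radius=0.1))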
Figure 7: Radiance estimate for surfaces [8].
Below is the rendering equation (Eq. 2) [2, 3] used to compute the radiance at an intersection point:
Ls(x, Ψr) = Le(x, Ψr) + ∫Ω fr(x, Ψi; Ψr) Li(x, Ψi) cos θi dωi    (Eq. 2)
where
Ls(x, Ψr) = surface radiance
Le = radiance emitted by the surface
Li = incoming radiance in direction Ψi
fr = BRDF at point x
Ω = the hemisphere of incoming directions
We can divide this equation into four components (Eq. 3):
Lr = ∫Ω fr Li,l cos θi dωi + ∫Ω fr,s (Li,c + Li,d) cos θi dωi + ∫Ω fr,d Li,c cos θi dωi + ∫Ω fr,d Li,d cos θi dωi    (Eq. 3)
where
fr = fr,s + fr,d
Li = Li,l + Li,c + Li,d
fr,d = the diffuse part of the BRDF
fr,s = the specular part of the BRDF
Li,l = incoming radiance directly from the light sources, Li,c = incoming caustics radiance, and Li,d = incoming indirect diffuse radiance.
Each term of Eq. 3 represents one light effect:
Direct Illumination = ∫Ω fr Li,l cos θi dωi
Specular Reflection = ∫Ω fr,s (Li,c + Li,d) cos θi dωi
Caustics = ∫Ω fr,d Li,c cos θi dωi
Soft Indirect Illumination = ∫Ω fr,d Li,d cos θi dωi
4.1.3 Data Structure of the Photon Map
As mentioned above, in the first pass of the photon mapping algorithm the photons that intersect a surface are stored in a data structure called the photon map. The photon map consists of thousands or millions of photons. In the second pass we refer to this photon map to find the photons nearest to each ray hit. Figure 8 shows the result of our second pass after searching the photon map and estimating the radiance at the hit point. As this is a costly operation, a balanced kd-tree is recommended for storing the photons. This data structure is fast and efficient for finding the nearest photons.
Figure 8: Result of the second pass of original photon mapping, without final gathering.
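As a sketch of how such a nearest-photon query could be served by a kd-tree, the following uses SciPy's cKDTree; the choice of library, the random photon positions and the uniform powers are assumptions of this example, not part of the paper.

import numpy as np
from scipy.spatial import cKDTree

# Hypothetical photon map: positions and the power stored with each photon.
rng = np.random.default_rng(0)
positions = rng.random((100000, 3))          # 100k photon positions in a unit cube
powers = np.full(100000, 1e-3)

tree = cKDTree(positions)                    # balanced spatial index over the photons

hit_point = np.array([0.5, 0.5, 0.5])
distances, indices = tree.query(hit_point, k=50)   # 50 nearest photons
radius = distances.max()                            # radius of the enclosing sphere
flux = powers[indices].sum()
print(radius, flux)

The radius and summed flux returned by the query are exactly the inputs needed for the radiance estimate of Section 4.1.2.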

Discussion: The original photon mapping algorithm has many advantages:
- It can simulate all global illumination effects.
- It decouples the representation of the illumination from the geometry, which makes it capable of simulating global illumination in complex scenes.
- It is faster than pure Monte Carlo based methods.
- It can be parallelized.
Its disadvantages are:
- More photons are required to render less noisy caustics.
- More memory is needed to store the photons.
4.2 Reverse Photon Mapping
The reverse photon mapping algorithm [9] was proposed to improve the performance of classic photon mapping.
Method: In contrast to the original photon mapping, reverse photon mapping uses ray tracing in the first pass and photon tracing in the second pass.
- First, rays are cast from the camera/viewer into the scene. The ray hit points are stored in a data structure.
- In the photon tracing pass, this data structure is used to find the hit points nearest to each photon for computing the radiance.
Discussion: Reverse photon mapping improves the computation speed, and its memory requirement depends on the number of rays rather than on the number of photons. In the photon tracing pass, it is therefore possible to shoot as many photons as necessary to obtain a better result.
4.3 Progressive Photon Mapping
Progressive photon mapping is a biased global illumination algorithm based on photon mapping, introduced by Hachisuka et al. [10]. This algorithm can compute accurate global illumination using a limited amount of memory, as it does not need to store many photons in the photon map.
Method: Progressive photon mapping is, in general, a multi-pass algorithm. The first pass is a ray tracing pass; the subsequent passes are one or more photon tracing passes.
- In the first pass, a ray is cast from the viewer through each image pixel. If the ray hits a specular surface, it bounces in a new direction until it reaches a non-specular surface. When the ray hits a non-specular surface, the intersection location, ray direction, scaling factors and pixel location are stored.
- In the second pass, photons are shot from the light source and traced through the scene to build the photon map. The radiance of each hit point is computed using the details of the stored photons: the photon map is searched to find the photons nearest to each intersection point, and the radiance of these photons is added to the intersection point. After that, the stored photons are discarded and new photons can be emitted to construct a new photon map.
In progressive photon mapping, an image can be rendered immediately after the first photon tracing pass. In contrast to reverse photon mapping, the second pass can be repeated many times to improve the quality of the rendered image and achieve the desired result.
The radiance at each intersection point found in the first pass is estimated from the power of the photons within a given radius of the hit point. By repeating the photon tracing pass, more photons accumulate in this region. The radius is reduced after each photon tracing pass to obtain a more accurate radiance estimate.
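This radius reduction follows the progressive update rule of [10]: with N photons accumulated at a hit point, M new photons found inside the current radius R, and a user parameter α in (0, 1), the radius shrinks so that the photon density keeps increasing. The function below is a sketch of that rule under these assumptions, not an excerpt from our implementation.

import math

def reduce_radius(radius, accumulated, new_photons, alpha=0.7):
    # One progressive update after a photon tracing pass: keep a fraction
    # alpha of the newly found photons and shrink the radius accordingly.
    if new_photons == 0:
        return radius, accumulated
    kept = accumulated + alpha * new_photons
    new_radius = radius * math.sqrt(kept / (accumulated + new_photons))
    return new_radius, kept

r, n = 0.1, 0.0
for _ in range(5):                    # five photon tracing passes, 40 new photons each
    r, n = reduce_radius(r, n, new_photons=40)
print(r, n)                           # radius shrinks, accumulated count grows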
Figure 9 to Figure 13 show the results of our implementation of progressive photon mapping after repeating the second pass 1000 times. Figure 9 and Figure 10 were constructed with a total of 5×10⁸ photons, and Figure 11 to Figure 13 with a total of 5×10⁷ photons, without final gathering.
After each photon tracing pass, the radiance is evaluated based on the current radius and the currently intercepted photons; it is then multiplied by the pixel weight and added to the pixel associated with the hit point.
Figure 9: Progressive photon mapping with 5×10⁸ photons, single point light source and hard shadow.
Figure 12: Progressive photon mapping with 5×10⁷ photons, single point light source and caustics caused by an uncolored glass ball.
Figure 10: Progressive photon mapping with 5×10⁸ photons, single square light source and soft shadow.
Figure 13: Progressive photon mapping with 5×10⁷ photons, single point light source and caustics caused by a colored glass ball.
Discussion: One of the main advantages of progressive photon mapping is that it requires less memory to compute the correct solution, which also speeds up the rendering process. In addition, the quality of the final image is adjustable and depends on the number of photon tracing passes that have been performed.
Figure 11: Progressive photon mapping with 5×10⁷ photons and two point light sources.
The disadvantage of this algorithm is its inefficiency in simulating distributed ray tracing effects. Also, the photons may be distributed poorly, and some photons might miss the visible parts of the scene, which can result in a noisy rendered image.
4.4 Stochastic Progressive Photon Mapping
The Stochastic Progressive Photon Mapping (SPPM) algorithm [11] was proposed to improve the Progressive Photon Mapping (PPM) algorithm. In contrast to PPM, this algorithm is an unbiased Monte Carlo method. The original progressive photon mapping can only compute the correct radiance of a single point; to simulate effects such as motion blur and depth of field, we need to accurately calculate the average value over a region of the scene.
Method: SPPM is similar to PPM, but the main difference is the computation of the radiance estimate. In SPPM, a distributed ray tracing pass is added after each photon tracing pass (See Figure 14). The main idea is to use shared statistics over a region to compute the average value. Using this extra pass, SPPM is able to simulate most of the illumination effects which PPM cannot handle.
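A sketch of the per-pixel loop implied by Figure 14: each iteration performs a photon pass and a distributed ray tracing pass that picks a new random hit point for the pixel, while the radius, photon count and flux statistics are shared across all hit points of that pixel. All quantities below are toy placeholders, and the update rule is the shared-statistics form described in [11] under the same α parameter as PPM.

import random

def sppm_pixel(iterations=100, alpha=0.7, initial_radius=0.1):
    # Shared per-pixel statistics updated over many SPPM iterations.
    radius, count, flux = initial_radius, 0.0, 0.0
    for _ in range(iterations):
        # Distributed ray tracing pass: a new random hit point for this pixel
        # (e.g. a new lens or time sample), represented abstractly here.
        hit = random.random()
        m = int(20 * hit)        # placeholder: photons found near this hit point
        phi = m * 1e-3           # placeholder: flux carried by those photons
        if m > 0:
            kept = count + alpha * m
            flux = (flux + phi) * kept / (count + m)   # rescale flux to new radius
            radius *= (kept / (count + m)) ** 0.5      # shrink shared radius
            count = kept
    return flux, radius

print(sppm_pixel())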
Discussion: The advantage of SPPM over PPM is that it can simulate distributed ray tracing effects efficiently. However, it may still generate a noisy image when the photons are poorly distributed and some of them miss the visible parts of the scene.
Figure 14: Difference between the algorithms of progressive photon mapping (PPM) and stochastic progressive photon mapping (SPPM) [11].
5. CONCLUSION
Photon mapping algorithms are simple and efficient for computing most of the light effects in photorealistic rendering. These algorithms are also easy to implement compared to other global illumination algorithms. For parallel processing, memory consumption is the main limiting factor; in this respect, Progressive Photon Mapping (PPM) and Stochastic Progressive Photon Mapping (SPPM) are the best choices for a GPU implementation. Among the existing photon mapping algorithms, SPPM outperforms the others in terms of computational efficiency and memory consumption.
6. REFERENCES
[1] Jensen, Henrik Wann: "A Practical Guide to Global Illumination using Photon Mapping". SIGGRAPH 2001 Course Notes.
[2] Kajiya, James T.: "The Rendering Equation". Computer Graphics 20(4), pp. 143-149, 1986.
[3] Jensen, H. W.: "Global Illumination using Photon Maps". Rendering Techniques '96 (Proceedings of the Seventh Eurographics Workshop on Rendering), pp. 21-36, 1996.
[4] Jensen, Henrik Wann: "Realistic Image Synthesis Using Photon Mapping". A K Peters, 2001. ISBN 978-1568814629.
[5] Jensen, Henrik Wann and Christensen, Niels Jorgen: "Photon Maps in Bidirectional Monte Carlo Ray Tracing of Complex Objects". Computers and Graphics, 19(2), pp. 215-224, 1995.
[6] Christensen, Per H.: "Faster Photon Map Global Illumination". Journal of Graphics Tools, 4(3), pp. 1-10, 1999.
[7] Lafortune, Eric P. and Willems, Yves D.: "Bidirectional Path Tracing". Proceedings of Computer Graphics, pp. 95-104, 1993.
[8] Jensen, Henrik Wann and Christensen, Per H.: "Efficient Simulation of Light Transport in Scenes with Participating Media using Photon Maps". Computer Graphics (ACM SIGGRAPH '98 Proceedings), pp. 311-320, 1998.
[9] Havran, V., Herzog, R. and Seidel, H.-P.: "Fast Final Gathering via Reverse Photon Mapping". Computer Graphics Forum, 24(3), pp. 323-332, Sept. 2005.
[10] Hachisuka, T., Ogaki, S. and Jensen, H. W.: "Progressive Photon Mapping". ACM Transactions on Graphics (SIGGRAPH Asia Proceedings), 27(5), Article 130, 2008.
[11] Hachisuka, T. and Jensen, H. W.: "Stochastic Progressive Photon Mapping". ACM Transactions on Graphics (SIGGRAPH Asia Proceedings), 28(5), 2009.
[12] "Monte Carlo Ray Tracing". SIGGRAPH 2003 Course Notes, pp. 7-11, San Diego, CA, USA.
[13] Whitted, T.: "An Improved Illumination Model for Shaded Display". Communications of the ACM, vol. 23, pp. 343-349, 1980.