5B-4

Proceedings of the IIEEJ Image Electronics
and Visual Computing Workshop 2012
Kuching, Malaysia, November 21-24, 2012
A Rendering Method for Subsurface Scattering Effects Using Interpolated Luminance Distribution Functions
Tomohisa MANABE†, Ayaka FURUTSUKI‡, Bisser RAYTCHEV‡, Toru TAMAKI‡ and Kazufumi KANEDA‡
†Kagawa National College of Technology, ‡Graduate School of Engineering, Hiroshima University
ABSTRACT
In this paper, we propose a method for rendering
translucent objects which takes into account subsurface
scattering phenomena. To calculate the luminance of a
translucent object, we introduce a convolution-based
approach which uses the luminance distributions caused
by a set of light beams incident on the object surface.
The luminance distributions of the light beams are calculated from a pair of sampled distributions. Depending
on the incident angle of the light beam and the material
parameters of the translucent object, the luminance distributions are interpolated in an angular parameter space
obtained from the original planar space. We also propose an efficient method for calculating the convolution
using light beams of different sizes. Several rendered
images of translucent objects lit by slanting light beams
demonstrate the usefulness of the proposed method.
1. INTRODUCTION
In order to render translucent objects (such as human skin or marble) realistically, subsurface
scattering needs to be taken into consideration. Several
methods able to render subsurface scattering have been
previously developed. Among them, the dipole light
source model [1] provides a simple method to generate
realistic images and is widely used in many applications.
However, the luminance distributions generated by the
dipole model are different from the actual luminance
distributions, especially in the case when a light beam
hits a translucent object in an oblique direction. Figure 1
shows a luminance distribution taken from a real photograph. An acrylic plastic plate is lit by a laser beam at a 45-degree incident angle (Figure 1 (a)). In Figure 1 (b), pseudo-color is assigned to the luminance values so that the luminance distribution around the light incidence point can be observed easily. The brightest point shifts from the incident point to the right, that is, in the inclined direction of the beam. The luminance distribution is also deformed into an elliptical shape. Traditional methods cannot render such subsurface scattering phenomena.
We propose a method for rendering subsurface scattering effects taking into account the change of luminance distributions depending on both the incident angle
of light and the material parameters of a translucent
object. The proposed method uses a convolution approach to calculate the luminance of a translucent object.
That is, light cast onto the surface of the translucent
object is decomposed into thin beams, and the luminance distributions generated by the beams are accumulated to calculate the luminance on the translucent object. We also propose a method for accelerating the calculation of the convolution by adaptively sizing the decomposed beams.
In the proposed method, the luminance distributions
caused by the light beams are interpolated from a small number of sampled luminance distributions, depending on the incident
angle of the beam. Taking into account the property of
the luminance distributions caused by slanting light
beams, the interpolation of the luminance distributions
is executed in an angular parameter space converted
from an original planar space. The method is also able
to interpolate luminance distributions depending on the
material parameters of a translucent material. We need
only a few distribution samples to generate various kinds
of luminance distributions for light beams from any
direction and for any material parameters.
A translucent material with different albedos for different wavelengths of light can be rendered by the proposed method. Also, subsurface scattering phenomena
depending on the incident angle of light are successfully
rendered. The rendered images of a translucent object lit
by slanting light beams demonstrate the usefulness of
the proposed method.
Figure 1 The luminance distribution of an acrylic plastic plate lit by an inclined light beam: (a) photograph, (b) pseudo-color image
2. RELATED WORK
Several methods have been developed to render a
translucent object. A practical model using a dipole point light source to approximate the multiple scattering component was developed in [1]. In the dipole model, a positive virtual source is placed beneath the incident point and a negative virtual source above it. The dipole model can display translucent objects with a smooth and soft appearance. A faster version of this
method has been developed in [2], while Mertens et al.
further enabled the interactive rendering of deformable translucent objects [3].
Chen et al. proposed a method that relates a shell texture function and dipole light sources to photon mapping
[4]. Donner and Jensen [5] proposed to use a multipole
light source for rendering multilayer materials (such as
human skin) and thin materials (like paper).
Because all these methods approximate the multiple scattering component with empirical models, they are unable to reproduce some actual phenomena, such as the brightest point of the luminance distribution moving away from the incident point of a beam and the resulting distortion of the luminance distribution.
To solve the problems with the dipole point source
and the multipole point source models, Masuike et al.
designed a simulation model based on the theory of light
scattering [6]. However, when the secondary scattering of light is considered, this method is computationally too expensive and cannot be used directly for rendering.
Takamura et al. proposed a method for interpolating the
intermediate irradiance distributions using the SubSurface Scattering Irradiance Distribution (SSSID) obtained from the simulation when the incident angle of
light or material parameters are changed [7]. The method was further improved to be able to render complex
occlusion patterns efficiently [8].
Shinya et al. proposed a method for finding a semi-analytical solution by using a plane-parallel approximation in the volume rendering equation [9]. The plane-parallel approximation was further simplified by assuming a directional light [10]. Moreover, it was shown that the plane-parallel
approximation can be applied to anisotropic scattering,
and was used in a model for rendering hairy objects [11].
Donner et al. proposed a rendering method using a
BSSRDF based on a simulation of photon mapping [12].
In that method, an empirical BSSRDF of a simulated material is constructed. Hence, a database of irradiance distributions must be prepared to construct the BSSRDF whenever the albedo or phase function is changed.
Here, we further extend Takamura’s method [7] to
be able to render subsurface scattering effects such as
the shift of the brightest point and the distorted luminance distributions mentioned above. The proposed
method interpolates the luminance distribution for both
the incident angle of light and the material parameters
simultaneously. The improvement makes it possible to
render various kinds of translucent materials as well as
spectral (color) images.
3. RENDERING SUBSURFACE SCATTERING
USING INTERPOLATED LDFS
3.1. Outline of the proposed method
The proposed method is based on a convolution approach to calculate the luminance of a translucent material. Given the impulse response function g of a system,
the output L to an input signal f is calculated by the following convolution equation:
$L(x) = \int f(u)\, g(x - u)\, du$  (1)
When calculating the luminance of a surface, the convolution is performed in a 2D space (u, v), and the response function changes depending on both the incident angle, θ, of the light beam
and the material parameters, α, of the translucent object,
such as the scattering coefficients, albedos, phase functions, etc. Considering these peculiarities of subsurface
scattering, we derive the following equation for the luminance calculation (see Figure 2):
$L(x, y) = \iint f(u, v)\, g_{\theta, \alpha}(x - u, y - v)\, du\, dv$  (2)
where the input signal f(u, v) represents the intensity of
the incident beam at point (u, v) on the surface. The
response function gθ,α is the luminance distribution when
a single beam with incident angle θ hits a translucent
object with material parameters α, and will be called the
Luminance Distribution Function (LDF) in the following discussion.
We calculate the convolution in Eq. 2 over the area
of the object surface to get the luminance L of the surface. Then, the luminance L is mapped onto the surface
to display the projected image.
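To make the convolution of Eq. 2 concrete, the following is a minimal sketch of its discretized form. The array layout (the LDF stored as a small 2D array whose centre texel corresponds to the beam incident point) and the function name render_luminance are our own assumptions, not part of the paper.

```python
import numpy as np

def render_luminance(f, ldf, delta=1.0):
    """Discretized form of Eq. 2 (a sketch).

    f     : 2D array; intensity of the incident beam at each surface point (u, v).
    ldf   : 2D array; luminance distribution g_{theta,alpha} of a single thin beam,
            assumed to be already interpolated for the desired incident angle and
            material parameters, with the beam incident point at the array centre.
    delta : sampling interval of the discretization.
    """
    h, w = f.shape
    kh, kw = ldf.shape
    cy, cx = kh // 2, kw // 2
    L = np.zeros((h, w))
    # Decompose the incident light into thin beams and accumulate the LDF
    # of every non-zero beam onto the surface luminance L(x, y).
    for v in range(h):
        for u in range(w):
            if f[v, u] == 0.0:
                continue
            for j in range(kh):
                for i in range(kw):
                    y, x = v + (j - cy), u + (i - cx)
                    if 0 <= y < h and 0 <= x < w:
                        L[y, x] += f[v, u] * ldf[j, i] * delta * delta
    return L
```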
Figure 2 Luminance calculation: (a) Luminance Distribution Function (LDF), (b) convolution to calculate the luminance
3.2. Interpolation of LDFs
When the incident angle of a light beam is changed
and/or the material parameters of a translucent object
are changed, the LDF also changes. To get the LDFs
corresponding to the incident light beams and the material parameters, we introduce an interpolation scheme
using a small number of LDF samples whose incident angles and material parameters are known.
Note that a linear interpolation in the original planar
space (u, v) does not give a good LDF, because the luminance value at a specified point P(u, v) changes nonlinearly. To address this problem, we transform the original planar space to an angular space (φ, ψ) as shown in
Figure 3. First, we set a “reference point” Q above the
brightest point H of the LDF sample. The azimuth angle
φ is defined as the angle between vectors PO and PH,
which originate from P(u, v) and are oriented in the directions of the beam incident point O and the brightest
point H, respectively. The zenith angle ψ is defined as
the angle between the surface normal n and the vector
PQ pointing to the reference point Q.
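The following is a minimal sketch of one possible reading of this parameterization; the point layout (3D vectors, surface in the z = 0 plane) and the function name planar_to_angular are our own assumptions.

```python
import numpy as np

def planar_to_angular(p, o, h, q, n=np.array([0.0, 0.0, 1.0])):
    """One possible reading of the (u, v) -> (phi, psi) conversion (a sketch).

    p : surface point P(u, v, 0)             o : beam incident point O
    h : brightest point H of the LDF sample  q : reference point Q above H
    n : surface normal
    """
    def angle(a, b):
        cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cosine, -1.0, 1.0))

    phi = angle(o - p, h - p)   # azimuth: angle between vectors PO and PH
    psi = angle(n, q - p)       # zenith: angle between the normal n and PQ
    # Note: this unsigned angle lies in [0, pi]; a signed convention around H
    # would be needed to cover the full [0, 2*pi) range shown in Figure 4.
    return phi, psi
```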
Figure 4 shows LDFs in the original planar space (u,
v) and in the angular space (φ, ψ). As shown in Figure
4(a), the LDF in the original planar space has a radial
luminance distribution whose center is the brightest
point, and the attenuation rates of the luminance values
differ in each direction. Figure 4 (b) shows the LDF
in the angular space (φ, ψ) based on the reference point.
The luminance values decrease monotonically in the
azimuth angle φ, and vary gently in a limited range in
the zenith angle ψ. We can easily interpolate the LDFs
in the angular space using an interpolation function of a
lower degree.
Given LDF samples for two incident angles, θ1 and θ2, and two sets of material parameters, α1 and α2 (four samples in total), a new LDF for incident angle θ and material parameter α is calculated by the following bilinear interpolation:
$g_{\theta, \alpha}(\phi, \psi) = (1 - w_\theta)(1 - w_\alpha)\, g_{\theta_1, \alpha_1}(\phi, \psi) + w_\theta (1 - w_\alpha)\, g_{\theta_2, \alpha_1}(\phi, \psi) + (1 - w_\theta)\, w_\alpha\, g_{\theta_1, \alpha_2}(\phi, \psi) + w_\theta w_\alpha\, g_{\theta_2, \alpha_2}(\phi, \psi)$  (3)
where
$w_\theta = (\theta - \theta_1)/(\theta_2 - \theta_1), \quad w_\alpha = (\alpha - \alpha_1)/(\alpha_2 - \alpha_1)$.  (4)
Finally, the interpolated LDF in the angular space is transformed back into the planar space by using the relationship between the angular and planar spaces illustrated in Figure 3. Note that a reference point is needed for this parameter-space conversion; a bilinear interpolation of the reference points corresponding to the four LDF samples is used to obtain the reference point for incident angle θ and material parameter α.
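As an illustration, the bilinear interpolation of Eqs. 3 and 4 can be sketched as follows. The sample angles and albedos mirror the values used in Section 4; the dictionary-based sample storage and the function name interpolate_ldf are our own assumptions.

```python
import numpy as np

def interpolate_ldf(theta, alpha, samples, thetas=(0.0, 60.0), albedos=(0.2, 0.999)):
    """Bilinear interpolation of LDF samples, Eqs. 3 and 4 (a sketch).

    samples : dict mapping (theta_i, alpha_j) to an LDF sample stored as a 2D
              array over the angular space (phi, psi).
    """
    t1, t2 = thetas
    a1, a2 = albedos
    w_t = (theta - t1) / (t2 - t1)          # Eq. 4
    w_a = (alpha - a1) / (a2 - a1)
    g11, g21 = samples[(t1, a1)], samples[(t2, a1)]
    g12, g22 = samples[(t1, a2)], samples[(t2, a2)]
    # Eq. 3: blend the four LDF samples in the angular space.
    return ((1 - w_t) * (1 - w_a) * g11 + w_t * (1 - w_a) * g21
            + (1 - w_t) * w_a * g12 + w_t * w_a * g22)
```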
Figure 3 The relationship between the planar and angular spaces
Figure 4 LDFs in two different parameter spaces: (a) the original planar space, (b) the corresponding angular space (luminance levels 0-255)
Figure 5 Determining the diameters of the light beams from mipmap images
3.3. Acceleration of the luminance calculations
We discretize Eq. 2 with a small interval ∆ in the u and
v directions to calculate the luminance of a surface.
Since the size of the interval is related to the diameter of
the light beam, the cost of the luminance calculation
would be quite high if we use light beams with a small
diameter. To address this problem, we determine the
diameters of the light beams taking into account the size
of the non-occluded area.
We first calculate a mask image (an occlusion map)
to get the non-occluded areas. The viewpoint is set to
the position of the light source and the occluding objects
are projected to a screen to generate the mask image. A
non-occluded area corresponds to the pixels of the
mask image where no objects are projected. Alternatively, we could make use of a shadow map [13] to determine the non-occluded areas in rendering. If we want
to accelerate the rendering process, the mask image
helps to determine an appropriate diameter of a beam.
Figure 6 Interpolated LDFs for changing incident angles: (a) interpolated LDFs (α=0.999) for incident angles from 0° to 30°, (b) interpolated LDF (θ=15°) and the relative error distribution, (c) the simulation model
Next, we make a mipmap [14] of the mask image,
and determine the diameters of the light beams. We find
the pixels that do not contain any occluded areas at all,
traversing the mipmap hierarchy from the top to the
bottom as shown in Figure 5. For example, let us assume that white pixels in the mask image are assigned
to non-occluded areas and black pixels are occluded
areas. We choose only white pixels, not gray or black
pixels in each mipmap-level image to get fully nonoccluded areas. The diameters of the light beams are
determined by the mipmap-levels in which we first find
only white pixels, and the directions of the beams are
determined by the positions of the white pixels. The
algorithm works well even when the mask image has
concave non-occluded areas.
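The following sketch illustrates this top-down traversal, under the assumption that the mask is a square, power-of-two binary array; the mipmap reduction is emulated here with a per-block minimum, and the function name beam_diameters is hypothetical.

```python
import numpy as np

def beam_diameters(mask):
    """Beam-diameter selection from a mipmap of the mask image (a sketch).

    mask : square, power-of-two binary array (1 = non-occluded, 0 = occluded).
    Returns (x, y, diameter) tuples for fully non-occluded blocks, found by
    traversing from the top (coarsest) mipmap level down to the bottom.
    """
    n = mask.shape[0]
    top_level = int(np.log2(n))
    covered = np.zeros_like(mask, dtype=bool)
    beams = []
    for level in range(top_level, -1, -1):
        s = 2 ** level                       # block (beam) size at this level
        for by in range(0, n, s):
            for bx in range(0, n, s):
                if covered[by, bx]:
                    continue                 # already assigned at a coarser level
                if mask[by:by + s, bx:bx + s].min() == 1:   # "white" texel
                    beams.append((bx, by, s))
                    covered[by:by + s, bx:bx + s] = True
    return beams
```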
4. EXAMPLES ILLUSTRATING THE PROPOSED
METHOD
Figure 6 (a) shows interpolated LDFs while changing the incident angles of the light beams. At both ends
of the figure are LDF samples generated by a subsurface
scattering simulation [6] with a Henyey-Greenstein
phase function [15] (g=0.8). We use the simulation
model shown in Fig. 6(c). The size of the translucent
object is 3000 × 3000 × 750 units (1 unit = 1 pixel). The observation area is located at the beam incident point and is 150 units wide in the x and y directions. The incident angle
θ is the angle between the incident beam and the normal
vector of the simulated object (z direction). We set a
reference point at a height of 100 units above the brightest
point.
The brightest point moves to the right in the direction of the incident beam, and the distribution of LDFs
is distorted. Figure 6 (b) shows an interpolated LDF
(θ=15°) and the relative error distribution of the LDF
compared to an LDF generated by the subsurface scattering simulation.
Figure 7 (a) shows an image rendered by a method
in which the change in incident angles has no effect on
scattering. This method is equivalent to most traditional
methods. Figure 7 (b) is rendered by the proposed
method. Here, light beams of different incident angles
and directions are used. The incident angle of the center
beam is set to 0° and the outside beams are 45° and 55°.
Scattering phenomena depend heavily on the wavelength of light. We use different albedos for the translucent object for the R, G, B color components: αR=0.6,
αG=0.9, αB=0.3. We use light beams with seven colors:
white (center), red, magenta, blue, cyan, green and yellow. The proposed method uses four LDF samples: (θ1,
α1)=(0, 0.2), (θ2, α2)=(0, 0.999), (θ3, α3)=(60, 0.2) and
(θ4, α4)=(60, 0.999), where θ and α indicate the incident
angles and albedos, respectively.
Figure 7 Subsurface scattering caused by inclined light beams: (a) traditional method, (b) the proposed method
Figure 8 Subsurface scattering considering obstacles: (a)-(d)
Figure 9 Variable diameters of the beams (incident angle θ=55° of a light source located at the left side)
Comparing the two images, the proposed method realistically renders the actual scattering effects described above, especially the distorted luminance distributions and the shift of the brightest points. The incident
points of the light beams are exactly the same in both
figures, but the appearances differ because of the shift
of the brightest points. The small support of the LDFs
causes an artifact at the boundaries of each LDF in Fig.
7 (b), but this can be suppressed by using LDFs with a
larger support.
As shown in Figure 8, the proposed method is able
to render subsurface scattering effects taking into account an obstacle. We use a directional light source and
the incident angle of the light is set to 55°. The LDF
samples are the same as those of Figure 7. In Figs. 8 (a)
and (c), the light comes from the left side, while in Figs.
8 (b) and (d), from the right side. In Figs. 8 (a) and (b),
the albedos of the translucent object are set to αR=0.6,
αG=0.9 and αB=0.3, while in Figs. 8 (c) and (d), to
αR=0.9, αG=0.3 and αB=0.6.
The edges between the illuminated and unilluminated areas are blurred, and the color of the blurred regions changes slightly because the scattering of light depends on the wavelength. Comparing Figs. 8 (a) and (b), the blurred areas vary depending on the incident directions. Comparing Figs. 8 (a) and (c), the color of the scattered light varies depending on the material's albedos. Figure 9 shows an image rendered with variable diameters of beams, while the image
in Figure 8 (a) was rendered with a constant diameter of
beams. The quality of the images is almost the same.
The rendering time was 55.875 seconds for the image in
Figure 9, while the method using a constant diameter of
the beams (Fig. 8 (a)) required 387.607 seconds with a
Dual Core AMD Opteron 2.59 GHz CPU and 2 GB
memory. The number of light beams was 4107 in Figure
9, and 45600 in Fig. 8 (a). This shows that we are able
to accelerate the rendering process without sacrificing
the image quality.
5. CONCLUSIONS
We proposed a method for rendering a translucent
object taking into account subsurface scattering phenomena such as the shift of the brightest point and distorted luminance distributions for light obliquely incident
on the surface. The proposed method efficiently calculates the luminance of a surface by convolving Luminance Distribution Functions (LDFs) corresponding to
light beams with different diameters, and the LDFs are
interpolated from a pair of LDF samples depending on
the incident angles of the light beams and the material
parameters of the translucent object.
The reference point Q is set to a constant height in
the proposed method. Better results can be obtained by
an extension which would set the reference point based
on the luminance distributions of the LDF samples.
The method is a promising technique for rendering a
variety of translucent materials. Given a few LDF samples, we can get a wide variety of LDFs by introducing
an interpolation in a multi-dimensional space. For example, a tri-linear interpolation makes it possible to
change two material parameters such as an albedo and a
phase function, simultaneously, as well as the incident
angle of the light beam.
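As a sketch of this suggested extension (not implemented in the paper), the tri-linear interpolation would generalize the weights of Eq. 4 to three axes; the weight names w_t, w_a, w_p and the corner-indexed sample storage below are hypothetical.

```python
from itertools import product

def trilinear_ldf(w_t, w_a, w_p, samples):
    """Tri-linear generalization of Eq. 3 (a sketch).

    samples : dict mapping a corner index (i, j, k) in {0, 1}^3 to an LDF sample,
              where the three axes are incident angle, albedo and a phase-function
              parameter; w_t, w_a, w_p are normalized weights as in Eq. 4.
    """
    result = 0.0
    for i, j, k in product((0, 1), repeat=3):
        weight = ((w_t if i else 1 - w_t)
                  * (w_a if j else 1 - w_a)
                  * (w_p if k else 1 - w_p))
        result = result + weight * samples[(i, j, k)]
    return result
```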
Although the proposed method presently assumes a planar surface, it can be extended to handle curved surfaces with small curvature, such as nearly flat plates. Such an extension can be
accomplished with the help of a mapping technique, that
is, the surface luminance L can be calculated in the parameter space by which a curved surface is defined, and
the obtained luminance mapped onto the curved surface.
The proposed method can be further accelerated by utilizing GPU power.
6. REFERENCES
[1] H. W. Jensen, S. R. Marschner, M. Levoy, P.
Hanrahan: “A practical model for subsurface light
transport,” ACM Trans. Graphics 20(3): 511-518
(2001).
[2] H. W. Jensen, J. Buhler: “A Rapid Hierarchical
Rendering Technique for Translucent Materials,”
ACM Trans. Graphics, 21(3): 576-581 (2002).
[3] T. Mertens, J. Kautz, P. Bekaert, H. P. Seidel, F. V.
Reeth: “Interactive Rendering of Translucent Deformable Objects,” Proc. the 14th Eurographics
Workshop on Rendering: 130-140 (2003).
[4] Y. Chen, X. Tong, J. Wang, S. Lin, B. Guo, H. Y.
Shum: “Shell Texture Functions,” ACM Trans.
Graphics, 23(3): 343-353 (2004).
[5] C. Donner, H. W. Jensen: “Light Diffusion in Multi-Layered Translucent Materials,” ACM Trans.
Graphics, 24(3): 1032-1039 (2005).
[6] I. Masuike, T. Tamaki, K. Kaneda: “A Research for
A Solution Method of Sub-Surface Scattering
Equation Using Light Beams,” Proc. IEICE General Conf.: 83 (2007). (in Japanese)
[7] K. Takamura, T. Manabe, T. Tamaki, K. Kaneda:
“Subsurface scattering simulation and a rendering
model taking into account the irradiance distributions,” IEICE Tech. Report, PRMU 109: 37-42
(2009). (in Japanese)
[8] A. Furutsuki, K. Takamura, T. Manabe, T. Tamaki,
K. Kaneda: “A Rendering Model for Sub-surface
Scattering in Translucent Materials based on Irradiance Distributions”, Proc. VC/GCAD Sympo.:
26.1-8 (2011). (in Japanese)
[9] M. Shinya, M. Shiraishi, Y. Dobashi, K. Iwasaki,
T. Nishita: “Rendering Translucent Materials with
Plane-parallel Solution,” J. Information Processing,
17: 180-190 (2009).
[10] M. Shinya, Y. Dobashi, M. Shiraishi, K. Iwasaki,
T. Nishita: “An Efficient Multiple-Scattering Calculation Method and Its Application to Tree Rendering,” VC/GCAD Sympo.: 18.1-6 (2009). (in
Japanese)
[11] M. Shinya, M. Shiraishi, Y. Dobashi, K. Iwasaki,
T. Nishita: “A Fast Hair Rendering Method with
Anisotropic Plane-Parallel Model,” VC/GCAD
Symp.: 2.1-6 (2010). (in Japanese)
[12] C. Donner, J. Lawrence, R. Ramamoorthi, T. Hachisuka, H. W. Jensen, S. Nayar: “An Empirical
BSSRDF Model”, ACM Trans. Graphics 28(3),
Article No.30: 1-10 (2009).
[13] L. Williams: “Casting curved shadows on curved
surfaces”, Proc. SIGGRAPH’78, 270-274 (1978).
[14] L. Williams: “Pyramidal parametrics”, ACM
SIGGRAPH Computer Graphics 17(3), 1-11
(1983).
[15] L.G. Henyey, J. L. Greenstein: “Diffuse radiation
in the galaxy,” Astrophys J. 93: 70-83 (1941).