Scale-Invariant Volume Rendering

Martin Kraus∗
Figure 1: Various kinds of isocontours: (a) isolines at equidistant isovalues on a slice; (b) standard volume rendering of isosurfaces (with
gradient-dependent opacity); (c) scale-invariant volume rendering of isosurfaces with uniform opacity and (d) with silhouette enhancement
ABSTRACT
As standard volume rendering is based on an integral in physical
space (or “coordinate space”), it is inherently dependent on the scaling of this space. Although this dependency is appropriate for the
realistic rendering of semitransparent volumetric objects, it has several unpleasant consequences for volume visualization. In order to
overcome these disadvantages, a new variant of the volume rendering integral is proposed, which is defined in data space instead of
physical space.
Apart from achieving scale invariance, this new method supports
the rendering of isosurfaces of uniform opacity and color, independently of the local gradient of the visualized scalar field. Moreover, it reveals certain structures in scalar fields even with constant
transfer functions. Furthermore, it can be defined as the limit of
infinitely many semitransparent isosurfaces, and is therefore based
on an intuitive and at the same time precise definition. In addition
to the discussion of these features of scale-invariant volume rendering, efficient adaptations of existing volume rendering algorithms
and extensions for silhouette enhancement and local illumination
by transmitted light are presented.
CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation—Display algorithms; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Raytracing
Keywords: volume visualization, volume rendering, isosurfaces,
silhouette enhancement, volume shading, translucence
1 INTRODUCTION
The goals of photorealistic volume rendering of semitransparent
volumetric objects are considerably different from the goals of volume visualization. Thus, volume rendering algorithms for volume visualization often include computations that are not based on
physics but are designed to visually enhance particular structures
∗ e-mail: martin kraus [email protected]
of the data. This is not only true for recent approaches that explicitly refer to non-photorealistic rendering, e.g., volume illustration
by Ebert and Rheingans [1], but also for older volume rendering
algorithms, e.g., the rendering of surfaces by Levoy [7].
In fact, without these enhancements, standard volume rendering
is rather unfit for many visualization tasks. Apart from the problem of visualizing surfaces in volumetric data sets, another main
challenge is the specification of particular opacities for parts of the
data set. The ability to specify an appropriate opacity is crucial in
volume visualization as structures become invisible if their opacity is too small, and hide occluded structures if their opacity is too
large. However, the opacity in standard volume rendering depends
on several factors that are often unknown to the user, e.g., the geometric size of the data set, the units in which colors and opacities
are defined, or the magnitude of the local gradient. Thus, appropriate colors and opacities for direct volume visualization usually have
to be determined by trial and error. Moreover, any local variation of
the opacity, for example due to a dependency on the local gradient,
makes it difficult to specify visualization parameters that generate
appropriate opacities for all visible parts of a volumetric data set.
An example is depicted in Figure 1b: the two isosurfaces in a synthetic data set (featuring three Coulomb potentials) are generated
by two sharp peaks in the transfer functions. The blue isosurface
is particularly opaque in between the red isosurface components
because of the smaller magnitude of the local gradient there. The
bottommost component of the three red isosurface components in
Figure 1b is more opaque than the other two for the same reason. A
smaller local gradient also corresponds to less “dense” isolines in
Figure 1a; therefore, the dependency of the opacity on the local gradient in standard volume rendering is quite counterintuitive since a
large magnitude of the local gradient results in denser isolines but
less opaque isosurfaces. (More examples of this unpleasant dependency are presented in Section 6; the mathematical background is
discussed in Section 3.2.2.)
While previous approaches tried to solve these problems by
adding additional terms to the standard volume rendering integral,
this paper describes a variant of the integral itself; more specifically,
an integral in data space instead of physical space. One important
motivation for this approach is to achieve scale invariance, which
offers several immediate advantages. For example, the geometric
size of the data set becomes irrelevant as it no longer affects the
rendering. Note that scale invariance is also a basic feature of the
rendering of semitransparent polygons as their opacity is usually
specified independently of their size. The mathematical definition
of this scale-invariant integral is presented in Section 3 and allows
us to understand its relation to previously published approaches; in
particular, to the rendering of surfaces from volume data suggested
by Levoy [7] and the pre-integrated isosurface rendering published
by Röttger et al. [10].
In Section 4, the most important features of scale-invariant volume rendering are discussed. These include scale invariance, rendering of isosurfaces of uniform opacity (see also Figure 1c), view-dependency, rendering with constant transfer functions, and the interpretation as the limit of infinitely many isosurfaces.
Analogously to the rendering of semitransparent polygons, a uniform opacity for an entire semitransparent surface is not always desirable; thus, extensions for silhouette enhancement (see also Figure 1d) and volume shading are presented in Section 5.
Volume shading is particularly useful for rendering illuminated
isosurfaces. However, previously published methods usually lack
support for illumination by transmitted light. Therefore, an empirically based, local illumination model for translucent surfaces
is proposed in Section 5.2. This model supports plausible diffuse
and specular transmission of light and guarantees consistent illumination by transmitted light for surfaces of any opacity without
requiring additional parameters. Thus, it can replace Phong’s illumination model in many volume shading methods.
In Section 6, additional results are presented. Finally, conclusions and future work are discussed in Section 7.
2 RELATED WORK
Scale invariance has never been a topic in volume visualization;
probably because there is no physically based reason to require
it for volume rendering. However, the opacity of semitransparent
surfaces is usually expected to be scale-invariant; thus, any approach that uses volume rendering to display isosurfaces or material
boundaries of a uniform opacity will inevitably face the problem of
scale invariance itself or a related problem.
One of the first approaches to volume rendering of isosurfaces
was published by Levoy [7], who already observed that the transition region associated with the rendered surface should have a
uniform thickness. In particular, it should be independent of the local gradient of the scalar field. Since the thickness of this transition
region increases in inverse proportion to the magnitude of the local gradient, Levoy suggests to either scale the opacity of samples
or the width of a tent-shaped opacity transfer function by the local
gradient’s magnitude. As will be discussed in Sections 3 and 5.1,
the former scaling results in a good approximation to the rendering
of surfaces of uniform opacity with silhouette enhancement.
More recently, Kindlmann and Durkin [5] proposed the use of
multidimensional transfer functions to render material boundaries
by means of volume rendering. They also suggested transfer functions that depend on the local gradient.
Volume rendering of isosurfaces has received new attention in
the context of pre-integrated volume rendering [2, 8, 10]. In fact,
the pre-integrated technique for rendering isosurfaces in tetrahedral meshes suggested by Röttger et al. [10] (and later by Engel
et al. [2] for texture-based volume rendering) results in isosurfaces
of uniform opacity independently of the local gradient. Thus, this
technique already obtains scale-invariant opacities; however, this is
only true for the special case of isosurfaces. Moreover, it lacks a
mathematical justification since the uniform opacities are achieved
by ignoring the difference between an integration in physical space
and data space.
One of the first techniques for silhouette enhancement of semitransparent surfaces has been published by Kay and Greenberg [4].
Since standard volume rendering of surfaces already results in a
similar effect, silhouette enhancement has not received much interest in volume visualization. However, the adaptation of non-photorealistic techniques for volume visualization has also included
techniques for silhouette enhancement; see, for example, the approach by Ebert and Rheingans [1].
Volume shading was also pioneered by Levoy [7], who suggested
to apply Phong’s illumination model to volume rendering by substituting the normalized local gradient of the scalar field for the
surface normal. While volume shading is very popular and has
been extended to global volume illumination models, diffuse and
specular transmission of light directly emitted by light sources is
usually ignored in local volume illumination models although the
transmission of light is often considered in physically based surface illumination models (see, for example, the work by Hall and
Greenberg [3] or Rushmeier and Torrance [11]).
3 SCALE-INVARIANT INTEGRATION IN DATA SPACE
In this section, a variant of the volume rendering integral in data
space is proposed and its numerical evaluation in data space as well
as in physical space is discussed. This integral provides the mathematical foundation of scale-invariant volume rendering.
3.1 Scale-Invariant Volume Rendering Integral
In this work, the standard volume rendering integral for the line
segment from point x1 to point x2 in physical space visualizing a
scalar field s(x) using transfer functions for (non-associated) colors
cp (s) and attenuation coefficients τp (s) is written as
I_p = \int_0^{|x_2 - x_1|} c_p(s(x(\xi)))\, \tau_p(s(x(\xi)))\, \exp\!\left( - \int_0^{\xi} \tau_p(s(x(\xi')))\, d\xi' \right) d\xi    (1)

with

x(\xi) = x_1 + \xi\, (x_2 - x_1) \,/\, |x_2 - x_1| .
Note that the transfer function τp (s) specifies attenuation per unit
length. The transfer function cp (s) specifies a non-associated color,
i.e., it is not pre-multiplied by an opacity. If cp (s) is constant
(and τp (s) > 0), this color is the (opaque) limit of Equation (1) for
|x2 − x1 | → ∞.
In contrast to this integration in physical space, the scale-invariant volume rendering integral is defined in data space. For this definition, the line segment from x_1 to x_2 is decomposed into smaller line segments such that the function s(x(ξ)) is strictly monotone on each part, i.e., the derivative ds(x(ξ))/dξ is either positive or negative on the i-th line segment between x_1^{(i)} and x_2^{(i)}. This decomposition has to cover all points of the original line segment where s(x(ξ)) is strictly monotone, while intervals with constant s(x(ξ)) are not part of this decomposition and, therefore, will not contribute to the scale-invariant integral. Note also that the parts are enumerated from 1 to N_I such that x_1^{(i)} is closer to x_1 than x_2^{(i)}, and x_1^{(i)} is closer to x_1 than x_1^{(i+1)}.
The scale-invariant volume rendering integral for the i-th part between s_1^{(i)} = s(x_1^{(i)}) and s_2^{(i)} = s(x_2^{(i)}) is defined as

I_d^{(i)} = \int_{s_1^{(i)}}^{s_2^{(i)}} c_d(s)\, \tau_d(s)\, \exp\!\left( - \int_{s_1^{(i)}}^{s} \tau_d(s')\, ds' \right) ds    (2)

and the transparency T_d^{(i)} of the i-th part is

T_d^{(i)} = \exp\!\left( - \int_{s_1^{(i)}}^{s_2^{(i)}} \tau_d(s)\, ds \right).    (3)
Analogously to c_p(s), c_d(s) is a transfer function for a non-associated color. Similarly to τ_p(s), the transfer function τ_d(s) specifies attenuation per unit of the scalar data instead of per unit length. Since the data value is also the argument of this transfer function, integrating it over its own argument results in meaningful attenuation coefficients.
For the complete scale-invariant volume rendering integral, the contributions of the N_I integrals I_d^{(i)} have to be attenuated, i.e., I_d^{(i)} has to be multiplied by all transparencies T_d^{(j)} with j < i:

I_d = \sum_{i=1}^{N_I} \left( I_d^{(i)} \prod_{j=1}^{i-1} T_d^{(j)} \right).    (4)
This definition might appear very different from the definition of
the standard volume rendering integral in Equation (1). However,
the numerical evaluation of both integrals is very similar in practice.
3.2 Numerical Integration
The previously described decomposition into strictly monotone intervals is usually unnecessary since most numerical evaluations of
the scale-invariant volume rendering integral can safely assume that
the function s(x(ξ )) is locally strictly monotone as long as intervals
with a constant data value do not contribute to the integral.
Here, a sampling of the line segment between x1 and x2 at a
uniform sampling distance d is assumed. The resulting NC sampled
data values are denoted by si with 1 ≤ i ≤ NC . The generalization to
arbitrary sampling distances is straightforward. Note that samples
are ordered with the first sample s1 obtained at x1 . Each sample si
corresponds to an associated color C̃i and an opacity αi , which are
determined by applying transfer functions as discussed below. The
scale-invariant volume rendering integral is then approximated by
a sum of NC terms:
I_d \approx \sum_{i=1}^{N_C} \tilde{C}_i \prod_{j=1}^{i-1} (1 - \alpha_j).    (5)
This sum can be evaluated by the same blending operations as in
the case of the standard volume rendering integral; see, for example, the survey by Kraus and Ertl [6], which also covers the use of
transfer functions for an associated color density c̃(s) instead of the
product cp (s)τp (s).
There are several ways to determine the color C̃i and the opacity
αi of the i-th sample, i.e., to classify it; in particular, pre-integrated
classification, post-classification, and pre-classification. The adaptations of these three methods for scale-invariant volume rendering
are discussed next.
3.2.1 Pre-Integrated Classification
For pre-integrated volume rendering, colors C̃i and opacities αi are
pre-computed for ray segments between two adjacent samples sf
and sb at distance d; i.e., in general a three-dimensional lookup table has to be pre-computed. However, a two-dimensional table is
sufficient for a uniform sampling distance d. For scale-invariant
volume rendering, the sampling distance is not relevant at all (see
Equations (2) and (3)); thus, a two-dimensional lookup table is in
fact sufficient. Pre-integration assumes a linear interpolation between two adjacent samples; therefore, the assumption of strict monotonicity is always fulfilled unless the two samples sf and sb are
identical. In this case, the pre-integration has to result in transparent
black.
The lookup table for colors \tilde{C}_d(s_f, s_b) is computed by the scale-invariant integral

\tilde{C}_d(s_f, s_b) = \int_{s_f}^{s_b} c_d(s)\, \tau_d(s)\, \exp\!\left( - \int_{s_f}^{s} \tau_d(s')\, ds' \right) ds .    (6)

Analogously, the lookup table for opacities α_d(s_f, s_b) is pre-computed by

\alpha_d(s_f, s_b) = 1 - \exp\!\left( - \int_{s_f}^{s_b} \tau_d(s)\, ds \right).    (7)

Note that these integrals are zero if s_f is equal to s_b. With these pre-integrated lookup tables, \tilde{C}_i is determined by \tilde{C}_d(s_i, s_{i+1}) and α_i by α_d(s_i, s_{i+1}). Since these tables are two-dimensional, the scale-invariant variant is either as efficient as standard pre-integrated volume rendering or more efficient if a three-dimensional table is required for the latter.
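The pre-computation of the opacity table of Equation (7) can be sketched as follows. This Python fragment is a minimal illustration under the assumption of a transfer function τ_d tabulated over data values in [0, 1]; one common strategy (an assumption here, not prescribed by the paper) fills the table from the cumulative integral T(s), since α_d(s_f, s_b) = 1 − exp(−|T(s_b) − T(s_f)|):

```python
import math

# Sketch: fill the 2D opacity lookup table of Equation (7) from the
# cumulative integral T(s) of the tabulated transfer function tau_d.
# All names are illustrative.
def opacity_table(tau_d, n):
    ds = 1.0 / (n - 1)
    # cumulative integral of tau_d over the data range (trapezoidal rule)
    T = [0.0] * n
    for k in range(1, n):
        T[k] = T[k - 1] + 0.5 * (tau_d[k - 1] + tau_d[k]) * ds
    # abs() handles back-to-front intervals (s_b < s_f) on monotone parts
    return [[1.0 - math.exp(-abs(T[b] - T[f])) for b in range(n)]
            for f in range(n)]

tau = [2.0] * 64                 # constant attenuation per data unit
table = opacity_table(tau, 64)
print(table[0][0])               # s_f == s_b  ->  opacity 0.0
```

The color table of Equation (6) can be filled analogously from cumulative integrals of c_d(s)τ_d(s).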
3.2.2 Post-Classification
The mapping of interpolated data samples to colors and opacities is
often referred to as post-classification since the transfer functions
are applied after the interpolation step. In contrast to pre-integrated
volume rendering, post-classification processes each sample separately. Therefore, it is less straightforward to apply this approach to
scale-invariant volume rendering.
In order to evaluate the scale-invariant integrals of Equations (2) and (3) by means of post-classification, the integration variables have to be replaced by spatial variables:

I_d^{(i)} = \int_{|x_1^{(i)} - x_1|}^{|x_2^{(i)} - x_1|} c_d(s(x(\xi)))\, \tau_d(s(x(\xi)))\, |\nabla_\xi s(x(\xi))|\, \exp\!\left( - \int_{|x_1^{(i)} - x_1|}^{\xi} \tau_d(s(x(\xi')))\, |\nabla_\xi s(x(\xi'))|\, d\xi' \right) d\xi ,    (8)

T_d^{(i)} = \exp\!\left( - \int_{|x_1^{(i)} - x_1|}^{|x_2^{(i)} - x_1|} \tau_d(s(x(\xi)))\, |\nabla_\xi s(x(\xi))|\, d\xi \right),    (9)

where |\nabla_\xi s(x)| is the magnitude of the projection of the local gradient of s(x) onto the line from x_1 to x_2:

|\nabla_\xi s(x)| = |\nabla s(x) \cdot (x_2 - x_1)| \,/\, |x_2 - x_1| .
Based on these integrals and the assumption of local monotonicity, Equation (5) may be employed for a discrete approximation of the scale-invariant integral with colors \tilde{C}_i and opacities α_i being evaluated according to these equations:
\tilde{C}_i = c_d(s_i)\, \alpha_i ,    (10)

\alpha_i = 1 - \exp\!\left( -\tau_d(s_i)\, |\nabla_\xi s_i|\, d \right) \approx \tau_d(s_i)\, |\nabla_\xi s_i|\, d ,    (11)
where d is the uniform sampling distance and |∇ξ si | is the magnitude of the projected local gradient at the i-th sample.
Note that this factor |∇ξ si | is missing in the corresponding equations for standard volume rendering. Thus, opacities and colors
computed in standard volume rendering are scaled by 1/|∇ξ si |. In
other words, a small magnitude of the projection of the local gradient in view direction tends to increase opacities and colors in standard volume rendering; for examples see Figures 1b, 8a, and 8c.
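As a concrete illustration of Equations (10) and (11), the following Python sketch (all names are illustrative) classifies a single sample from its data value, its local gradient, and the viewing ray:

```python
import math

# Sketch of post-classification for scale-invariant volume rendering,
# Equations (10) and (11).  grad is the local gradient of s at the
# sample; view_dir corresponds to x2 - x1; d is the sampling distance.
def classify_sample(s_i, tau_d, c_d, grad, view_dir, d):
    # magnitude of the gradient projected onto the viewing ray
    dot = sum(g * v for g, v in zip(grad, view_dir))
    vlen = math.sqrt(sum(v * v for v in view_dir))
    grad_proj = abs(dot) / vlen
    alpha = 1.0 - math.exp(-tau_d(s_i) * grad_proj * d)   # Eq. (11)
    C_tilde = c_d(s_i) * alpha                            # Eq. (10)
    return C_tilde, alpha

# A gradient perpendicular to the ray contributes nothing:
_, a = classify_sample(0.5, lambda s: 3.0, lambda s: 1.0,
                       grad=(0.0, 1.0, 0.0), view_dir=(1.0, 0.0, 0.0), d=0.1)
print(a)  # -> 0.0
```

Omitting the factor grad_proj in the computation of alpha yields the corresponding standard post-classification.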
3.2.3 Pre-Classification
For pre-classification, transfer functions are applied to the data
before the interpolation is performed. The transfer functions in
Equations (10) and (11) can also be evaluated in this way; however, the projection of the gradient is view-dependent and, therefore, cannot be pre-computed. Moreover, pre-classification is less
appropriate for thin isosurfaces. In order to visualize these with
pre-classification, more powerful techniques than one-dimensional
transfer functions should be employed to extract voxels, vertices, or
polygons that form a surface.
3.2.4 Required Sampling Rate
The sampling rates required for an accurate evaluation of the scale-invariant volume rendering integral are the same as for the standard volume rendering integral; see the discussion by Schulze et al. [12]. That is, pre-integrated volume rendering requires a sampling
rate high enough to accurately sample the scalar data s(x) while
post-classification requires a product of this sampling rate and a
sampling rate that is high enough to accurately sample the transfer functions. Pre-classification only requires a sampling rate that
is high enough to accurately sample the pre-computed colors and
opacities since the sampling of the transfer functions is determined
by the data set’s mesh.
3.3 Comparison with Previous Work
As discussed in Section 3.2, pre-integrated volume rendering is particularly well suited for scale-invariant volume rendering. In fact,
the two-dimensional lookup table employed for pre-integrated volume rendering in Section 3.2.1 was used by Röttger et al. [10] and
later by Engel et al. [2] to render semitransparent isosurfaces with a
uniform, user-specified color and opacity, independently of the local gradient or the geometric size of the data set. Since a uniform
color and opacity provides less information about the shape of surfaces (see Figure 2a), it was important to combine this technique
with volume shading (see Figure 2b).
For the special case of infinitesimally thin isosurfaces, this
method is equivalent to scale-invariant volume rendering. However,
the main motivation in those publications was the efficient computation of two-dimensional lookup tables while the authors were unaware of the integration in data space, and the uniform color and
opacity was not recognized as an advantage. In fact, the “flat” appearance of isosurfaces of uniform opacity and color (see Figure 2a)
was considered disadvantageous; thus, more recent publications,
e.g., Lum et al. [8], again employed standard volume rendering for
isosurfaces since efficient techniques for the computation of lookup
tables had been developed.
There are also predecessors of scale-invariant volume rendering using post-classification and pre-classification. In particular,
Levoy [7] suggested to scale opacities of samples by the magnitude of the local gradient (see Equation (4) in his work [7]). (The
scaling of the width of a tent-shaped opacity transfer function also
proposed by Levoy [7] has a similar effect on the total opacity of a
surface.) The scaling by the local gradient instead of the projected
local gradient (as in Equation (11)) results in an approximation to
the silhouette enhancement discussed in Section 5.1.
More generally speaking, the use of gradient-dependent, two-dimensional transfer functions can approximate scale-invariant volume rendering; however, the specification and efficient implementation of appropriate transfer functions is far more complicated than
the approaches proposed in Section 3.2.
Thus, scale-invariant volume rendering provides the mathematical background for a thorough understanding of several volume
rendering techniques, which are already known to generate reasonable results.
Figure 2: Scale-invariant volume rendering of isosurfaces with uniform opacity: (a) without and (b) with volume shading
4 FEATURES OF SCALE-INVARIANT VOLUME RENDERING
There are several characteristics of scale-invariant volume rendering that distinguish it from standard volume rendering. The most
important features are discussed in this section.
4.1 Scale Invariance
In this work, scale invariance denotes the invariance of colors
and opacities under scaling transformations of the physical space.
While this is a basic feature of standard polygonal graphics, colors
and opacities in standard volume rendering are not scale-invariant.
This is appropriate for many applications of volume graphics; however, for volume visualization and especially for the rendering of
isosurfaces, scale invariance is necessary to allow users to specify
colors and opacities independently of particular properties of a data
set, e.g., its geometric size or local gradients.
The scale invariance of the presented approach depends on the
invariance of the volume rendering integral in Equation (4) under
scaling transformations of the physical space. Since the decomposition of the line segment from x1 to x2 is scale-invariant (apart
from the scaled coordinates), all of the variables in Equations (2)
and (3) are scale-invariant; thus, Equations (2), (3), and (4) are also
scale-invariant.
In addition to Equation (4), the numerical integration presented
in Section 3 should also be scale-invariant. Since the required sampling rate depends on the scaling of the physical space, it is appropriate to scale the sampling distance accordingly in order to guarantee the accuracy of the numerical integration. In this case, all sampled values si will be invariant. Since no other variables are used
to compute colors and opacities in the pre-integrated case, these
computations are explicitly scale-invariant.
The computations for post-classification and pre-classification
include the sampling distance and the local gradient, which are not
scale-invariant. However, the sampling distance is scaled like any
distance while the magnitude of the projected gradient is scaled by
the reciprocal factor. Thus, the product of the sampling distance
and the magnitude of the projected gradient is also scale-invariant.
Since the sampling distance and the local gradient occur only in
such products, these computations are implicitly scale-invariant.
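This implicit scale invariance is easy to check numerically. In the following Python sketch (the values are illustrative), scaling physical space by a factor k scales the sampling distance by k and the projected gradient magnitude by 1/k, leaving the product that enters Equation (11) unchanged:

```python
# Sketch: scaling physical space by k scales the sampling distance d
# by k and the projected gradient magnitude by 1/k, so their product
# (the exponent in Equation (11) up to tau_d) is invariant.
d, grad_proj, k = 0.1, 3.0, 2.5
product        = d * grad_proj
product_scaled = (k * d) * (grad_proj / k)
print(abs(product - product_scaled) < 1e-12)  # -> True
```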
Figure 2a provides an example for scale invariance because the
two small red isosurface components correspond to two Coulomb
potentials, which are smaller copies of the bottommost Coulomb
potential. While all three components are rendered with the same
opacity by scale-invariant volume rendering (see Figure 2), standard volume rendering results in visible differences (see Figure 1b).
4.2 Isosurfaces of Uniform Opacity
One of the most important consequences of scale invariance is
the possibility to render isosurfaces of uniform color and opacity
whereas the opacity of isosurfaces in standard volume rendering
depends on the local gradient. Moreover, this dependency is counterintuitive because it results in brighter and more opaque colors for
parts of the isosurface with a small local gradient, which usually indicates a soft, less pronounced transition; see also the discussion of
Figure 1b in Section 1 and Figures 8a and 8c in Section 6.
An infinitesimally thin isosurface can be specified by a δ-peak in the transfer function τd (s) that is placed at the corresponding isovalue. Since the integration for scale-invariant volume rendering is
performed in data space, any integral that includes this isovalue will
result in the same color and opacity. This special case was already
implemented by Röttger et al. [10] and Engel et al. [2] although
they were not fully aware of the mathematical background.
The scale-invariant integration also results in well-defined colors
and opacities for extended peaks provided that the integration covers the whole peak and that the sampling is fine enough. In contrast
to this integration in data space, the integration in physical space
for standard volume rendering scales a peak in physical space by
the reciprocal magnitude of the projected local gradient and, therefore, results in scaled colors and opacities. This corresponds to an
isosurface of varying thickness while scale-invariant volume rendering guarantees a uniform thickness.
Just as in the case of semitransparent polygonal surfaces, the realistic appearance of semitransparent isosurfaces can be improved
by shading and/or silhouette enhancement; see Section 5. However, the uniform color and opacity can also be modulated by other
quantities in order to visualize additional data on isosurfaces.
4.3 View-Dependency
The colors and opacities of point samples in scale-invariant volume
rendering depend on the orientation of the local gradient in relation to the position of the viewer; more specifically, they depend on
the magnitude of the local gradient projected onto the view direction; see Section 3.2. This additional view-dependency might appear counterintuitive; however, there is a similar view-dependency
in polygonal graphics: the projected area and therefore the total
color contribution of a flat, semitransparent polygon to an image
also depends on the projection of its normal vector onto the view
direction. In both cases, this view-dependency results in a “flat”
appearance, which can be cured by shading and/or silhouette enhancement, as discussed in Section 5.
4.4 Rendering with Constant Transfer Functions
Another interesting feature of scale-invariant volume rendering is
the possibility to reveal structures with the help of constant transfer
functions, i.e., a mapping of all data values to the same color and
attenuation coefficient. While standard volume rendering only renders the domain of the whole data set with a uniform color and attenuation coefficient under these circumstances, scale-invariant volume rendering reveals regions with large variations of the data; see
Figure 3a, which visualizes the same scalar field as Figures 1 and
2. Thus, material boundaries and local extrema in the data can be
visualized even without specifically designed transfer functions or
techniques such as volume shading.
4.5 Limit of Infinitely Many Isosurfaces
Standard volume rendering can be defined as the limit of infinitely
many sampling slices (or shells) through a volumetric data set. For
this limit, the slicing distance and the opacities of samples are decreased at the same rate. A finite number of slices corresponds to
Figure 3: (a) Scale-invariant volume rendering with constant transfer
functions and (b) a set of isosurfaces approximating the limit in (a)
the evaluation of the standard volume rendering integral in physical
space by a discrete sum.
Similarly, scale-invariant volume rendering can be defined as the
limit of infinitely many isosurfaces. In this case, each isosurface
has a uniform color and opacity, which is decreased at the same rate
as the distance between isovalues. Therefore, the evaluation of the
scale-invariant volume rendering integral in data space by a discrete
sum corresponds to a finite number of isosurfaces. For example, the
set of isosurfaces in Figure 3b approximates the constant transfer
function employed in Figure 3a.
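For a constant transfer function, this correspondence can be verified numerically: a stack of N isosurfaces at equidistant isovalues spanning [s1, s2], each assigned the opacity of one discrete term, composites to exactly the opacity implied by Equation (3), for any N. A Python sketch with illustrative names:

```python
import math

# Sketch (constant transfer function tau_d = const): N semitransparent
# isosurfaces at equidistant isovalues spanning [s1, s2], each with
# opacity 1 - exp(-tau_d * ds), composite to the same total opacity as
# the scale-invariant integral over [s1, s2].
def stacked_opacity(tau_d, s1, s2, n):
    ds = (s2 - s1) / n
    alpha_iso = 1.0 - math.exp(-tau_d * ds)   # opacity of one isosurface
    transparency = (1.0 - alpha_iso) ** n     # product over all surfaces
    return 1.0 - transparency

exact = 1.0 - math.exp(-2.0 * (0.9 - 0.1))
print(abs(stacked_opacity(2.0, 0.1, 0.9, 4) - exact) < 1e-12)  # -> True
```

The exactness for any N in this sketch is special to a constant τ_d; for varying transfer functions the stack only converges to the integral as N grows.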
5 EXTENSIONS FOR ISOSURFACE RENDERING
Scale-invariant volume rendering is very well suited for the rendering of isosurfaces since it allows users to specify uniform colors and
opacities for isosurfaces independently of the data set. Therefore,
this section presents two extensions that are particularly beneficial for the rendering of isosurfaces, namely silhouette enhancement and local illumination by transmitted light. Note, however, that neither extension is limited to isosurfaces; both are useful for scale-invariant volume rendering in general.
5.1 Silhouette Enhancement
One positive aspect of the dependency of colors and opacities on
the local gradient in standard volume rendering is a tendency to enhance silhouettes of isosurfaces and, thereby, to provide additional
cues about their shape; see, for example, Figure 1b. Since there is
no such dependency in basic scale-invariant volume rendering, it lacks this enhancement of silhouettes; see Figure 1c.
In computer graphics, silhouette enhancement is a common technique to improve the realistic appearance of semitransparent surfaces. One approach for thin surfaces has been proposed by Kay
and Greenberg [4], who implemented an exponential attenuation
of the transparency by the total length ∆ of the path through the
medium. In our notation:
\alpha = 1 - \exp(-\tau_0 \Delta) ;    (12)
see Equation (6) in their work [4]. Here, τ0 is the uniform attenuation coefficient of the medium.
In order to integrate silhouette enhancement into scale-invariant
volume rendering, no new parameters such as the indices of refraction or the uniform thickness of the surface should be introduced
since the primary goal is a plausible silhouette enhancement for isosurfaces of any opacity without the necessity to specify additional
parameters. Thus, refraction is not taken into account in this work,
Figure 4: Path of a ray of length ∆ through a surface of thickness ∆0
and the thickness ∆0 and the attenuation coefficient τ0 of isosurfaces are computed from the user-specified opacity α0 :
\tau_0 \Delta_0 = -\ln(1 - \alpha_0) .    (13)
This computation assumes that the opacity α0 is specified for a view
direction perpendicular to the surface. Only the product of τ0 and
∆0 can be determined this way and, fortunately, no additional information is required.
Without refraction and assuming a (locally) flat surface of thickness ∆0 , the length ∆ of the path through the medium is given by
\Delta = \Delta_0\, \frac{|v|\,|n|}{|v \cdot n|} ,    (14)
where n is the surface normal and v is the direction to the viewer;
see Figure 4. Thus, the silhouette enhancement proposed here is
achieved by increasing the opacity α0 according to the following
equation:
\alpha = 1 - \exp\!\left( \ln(1 - \alpha_0)\, \frac{|v|\,|n|}{|v \cdot n|} \right).    (15)
Associated colors should be scaled by the factor α /α0 ; see the discussion by Schulze et al. [12]. An example of this silhouette enhancement is depicted in Figure 5a; see also Figure 2a for the same
scene without silhouette enhancement.
A good approximation of Equation (15) for small opacities is:
\alpha \approx \min\!\left( 1,\; \alpha_0\, \frac{|v|\,|n|}{|v \cdot n|} \right).    (16)
Unfortunately, this approximation breaks down near the silhouettes,
resulting in too large opacities. Nonetheless, the visual results with
clamped opacities are often acceptable; see Figure 5b.
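Equations (15) and (16) can be sketched as follows (Python, with illustrative names). The helper computes the path-length factor |v||n|/|v·n| from the normal (or gradient) and the direction to the viewer:

```python
import math

# Sketch of the silhouette enhancement of Equations (15) and (16).
# n is the surface normal (or local gradient), v the direction to the
# viewer; alpha0 is the user-specified opacity for a head-on view.
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):   return math.sqrt(dot(a, a))

def path_factor(n, v):
    return (norm(v) * norm(n)) / abs(dot(v, n))

def enhanced_opacity(alpha0, n, v):            # exact, Eq. (15)
    return 1.0 - math.exp(math.log(1.0 - alpha0) * path_factor(n, v))

def enhanced_opacity_approx(alpha0, n, v):     # clamped, Eq. (16)
    return min(1.0, alpha0 * path_factor(n, v))

# Head-on view: both variants leave the specified opacity unchanged.
print(round(enhanced_opacity(0.3, (0, 0, 1), (0, 0, 1)), 9))  # -> 0.3
```

Near the silhouette (|v · n| → 0) the exact variant approaches full opacity smoothly, while the approximation saturates at 1 earlier, matching the visual difference between Figures 5a and 5b.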
This procedure can also be employed to modify the opacity of
any volume sample by replacing α0 by the sample’s opacity and
the surface normal n by the gradient ∇s(x). Since the direction v
corresponds to −(x2 − x1 ), the magnitude of the projected gradient
|∇ξ s(x)| is |v · n|/|v|. This results in the equation
|v||n| / |v · n| = |∇s(x)| / |∇ξ s(x)|.   (17)
This relation is particularly useful because the gradient ∇s(x) is often already computed for volume shading and the projected gradient |∇ξ s(x)| may be approximated by |sb − sf | /d in a pre-integrated
volume rendering algorithm.
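In a pre-integrated volume rendering algorithm, the per-sample variant might therefore be sketched as follows (a hypothetical helper with my own names; s_front and s_back denote the scalar values at the front and back of a ray segment of length d):

```python
import math

def sample_alpha(alpha0, grad, s_front, s_back, d):
    # Equation (17): |v||n| / |v.n| = |grad s| / |grad_xi s|,
    # with the projected gradient approximated by |s_back - s_front| / d.
    grad_mag = math.sqrt(sum(c * c for c in grad))
    proj_mag = abs(s_back - s_front) / d
    if proj_mag == 0.0:
        # Ray segment tangential to the isosurface: the enhancement
        # factor diverges unless the gradient also vanishes.
        return alpha0 if grad_mag == 0.0 else 1.0
    factor = grad_mag / proj_mag
    return 1.0 - math.exp(math.log(1.0 - alpha0) * factor)
```

When the approximated projected gradient equals the gradient magnitude (ray parallel to the gradient), the sample's opacity is left unchanged.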
It was mentioned in Section 3.3 that substituting the local gradient for the projected local gradient corresponds to adding an approximation to the silhouette enhancement proposed in this section.
Figure 5: Scale-invariant volume rendering of isosurfaces with silhouette enhancement: (a) without and (b) with the approximation in Equation (16)
In fact, the magnitude of the local gradient corresponds to the product of the magnitude of the projected local gradient and the factor
that is responsible for the silhouette enhancement:
|∇s(x)| = |∇ξ s(x)| · ( |∇s(x)| / |∇ξ s(x)| ) = |∇ξ s(x)| · ( |v||n| / |v · n| ).   (18)
Thus, within the limits of the approximation in Equation (16), the
use of the local gradient instead of the projected local gradient in
Equation (11) can be justified by silhouette enhancement.
5.2 Illumination by Transmitted Light
Local volume shading (or volume lighting) is often used to illuminate surface structures in volume rendering. For this purpose,
Phong’s empirically based illumination model is usually employed
(see for example Figure 6a) because it can be implemented very
efficiently and is often supported by graphics hardware. For volume shading, the surface normal is replaced by the local gradient
of the scalar field because the local gradient is orthogonal to the local isosurface. Since Phong’s illumination model does not include
diffuse or specular transmissions, these are also not implemented
in standard volume shading. Thus, backlight usually has very little effect in local volume shading; for an example see Figure 6b.
This results in an implausible illumination, which can be avoided in
some cases by placing additional lights opposite the original light
sources. However, these additional lights do not cure the reason for
the implausible illumination but rather distract the viewer from the
missing transmitted light by adding additional reflected light.
Therefore, a consistent extension of Phong’s illumination model
is suggested here, which includes plausible diffuse and specular
transmission (see Figures 6c, 6d, and 7) for surfaces and volume
samples of any opacity without requiring additional parameters.
Since programmable graphics hardware provides a convenient way
to customize local shading, this extension can be integrated easily
in many hardware-accelerated volume shading algorithms.
As in Section 5.1, no refraction is taken into account. Furthermore, ideal diffuse transmission is assumed and specular transmission is modeled analogously to specular reflection in Phong’s model
with the reflected light direction ri being replaced by the light direction −li ; see Figure 7. In particular, the specular-reflection exponent n is also employed for specular transmission.
The diffuse transmission is affected by the color of the material and by its opacity. As the material's color for diffuse reflection is used for surfaces of any opacity, i.e., also for almost completely transparent surfaces, it is plausible to use the same color for diffuse and specular transmission. While a different color for specular transmission can result in more realistic images, it is very unlikely that specular transmission could be modeled by the same color as specular reflection since the corresponding physical processes are rather different.

Figure 6: Scale-invariant volume rendering with silhouette enhancement and volume shading: (a) light from behind the viewer; (b) backlight without transmission, (c) backlight with diffuse transmission, and (d) backlight with diffuse and specular transmission

Figure 7: Illumination of a semitransparent surface of opacity α with normal ±n by light from direction li, reflected to ri; diffuse and specular reflection are proportional to α, diffuse and specular transmission to α(1 − α)

In Equation (19) below, Iamb is the intensity of the ambient light; Ii is the intensity of the i-th light source; fatt,i is an attenuation factor for the i-th light source; Cdiff is the material's (non-associated) color for diffuse reflection as well as diffuse and specular transmission; Cspec is the material's (non-associated) color for specular reflection (often equal to 1); n is the exponent for specular reflection and transmission; the Heaviside function Θ(x) (also known as the step function) is 0 for x < 0 and 1 for x > 0; li and v are the vectors from the illuminated point to the i-th light source and the viewer, respectively; n is the surface normal at this point; and ri is the reflection of li at n:

ri = 2n̄(n̄ · li) − li.
The effect of the material's opacity α is slightly more complicated because there should be no transmission for completely opaque surfaces (α = 1) or for completely transparent surfaces (α = 0), since light passes through a completely transparent surface without any interaction in our model. The most straightforward way to achieve this behavior is a factor of α(1 − α) for diffuse
and specular transmission. This factor is plausible because only a
fraction (1 − α ) of the incident light is transmitted through a semitransparent surface and of the transmitted light only a fraction (α )
interacts with it.
In summary, the proposed local illumination model takes the following five contributions into account (see also Figure 7): ambient
light, diffuse reflection, specular reflection, diffuse transmission,
and specular transmission. The total (associated) intensity Itotal (of one color component) is the sum of these contributions:

Itotal = Iamb α Cdiff
       + Σi fatt,i Ii [ Θ((li · n)(v · n)) α Cdiff (l̄i · n̄)
                      + Θ((li · n)(v · n)) α Cspec (max(0, r̄i · v̄))^n
                      + Θ(−(li · n)(v · n)) α(1 − α) Cdiff (l̄i · n̄)
                      + Θ(−(li · n)(v · n)) α(1 − α) Cdiff (max(0, −l̄i · v̄))^n ],   (19)

where l̄i, v̄, n̄, and r̄i are the corresponding normalized vectors.
Note that the orientation of n is irrelevant for Itotal , i.e., substituting −n for n leads to the same result. For numerical reasons it is often advantageous to replace the term v · n in the computation of Itotal
by a more robust computation. For example, in pre-integrated volume rendering (see Section 3.2.1) v·n could be replaced by (sf −sb )
since only the sign of v · n is relevant here.
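A direct implementation of Equation (19) for one color component might look as follows (a sketch with hypothetical names; the diffuse dot products are taken as absolute values so that the result is independent of the orientation of n, as noted above):

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _norm(a):
    length = math.sqrt(_dot(a, a))
    return tuple(x / length for x in a)

def total_intensity(alpha, c_diff, c_spec, n_exp, i_amb, lights, v, n):
    # Equation (19) for one color component.
    # lights: list of (f_att_i * I_i, light_vector li) pairs.
    vn, nn = _norm(v), _norm(n)
    total = i_amb * alpha * c_diff  # ambient contribution
    for f_i, li in lights:
        ln = _norm(li)
        # Reflection of the (normalized) light direction at the normal.
        ri = tuple(2.0 * nn[k] * _dot(nn, ln) - ln[k] for k in range(3))
        # Heaviside gates: light and viewer on the same side -> reflection,
        # on opposite sides -> transmission.
        if _dot(li, n) * _dot(v, n) > 0.0:
            total += f_i * alpha * c_diff * abs(_dot(ln, nn))
            total += f_i * alpha * c_spec * max(0.0, _dot(ri, vn)) ** n_exp
        else:
            t = alpha * (1.0 - alpha)  # transmission factor
            total += f_i * t * c_diff * abs(_dot(ln, nn))
            total += f_i * t * c_diff * max(0.0, -_dot(ln, vn)) ** n_exp
    return total
```

As in the text, substituting −n for n leaves the result unchanged, a completely transparent surface (α = 0) contributes nothing, and a completely opaque surface (α = 1) receives no transmitted light.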
If different sets of material parameters (Cdiff, Cspec, and n) are used for the two sides of the surface, introducing a third set of parameters (or the average values of the two sets) for diffuse and specular transmission in either direction should be considered, because the transmission of light is usually independent of its direction.
Although the proposed model results in slightly more expensive
computations than implementations of Phong’s illumination model,
it offers the advantages of a plausible local illumination of semitransparent surfaces and volume samples of any opacity by light
from any direction; for an example see Figure 6.
6 RESULTS
The volume renderings in this work are all of dimensions 512 ×
512. All but Figures 8e and 8f were generated by a prototypical
implementation of the proposed techniques within an experimental
software ray caster. In order to avoid any problems caused by filtering or interpolation of sampled data, all scalar fields and gradient
fields were computed procedurally in this ray caster.
Since the visual differences between standard volume rendering
and scale-invariant volume rendering with silhouette enhancement
(see, for example, Figures 1b and 1c) are not always clearly recognizable, two additional examples are presented in this section. No
illumination is included in these examples in order to avoid unnecessary complications.
Figures 8a and 8b depict a semitransparent isosurface (isovalue 0.5) in the test signal published by Marschner and Lobb [9]. While the scale-invariant volume rendering in Figure 8b shows plausible silhouette enhancement, the standard volume rendering in Figure 8a results in an overly opaque surface near the top of ridges and near the bottom of valleys, where the local gradient is relatively small. This additional opacity cannot be understood without knowledge of the gradient field and is difficult to distinguish from the increased opacity near silhouettes.

Figure 8: (a) & (b) Isosurface in the Marschner-Lobb test signal; (c) & (d) isosurface visualizing the 3dz2 hydrogen orbital; (e) & (f) volume visualization of a tetrahedralization of the 32³ bucky ball data set. (a), (c), and (e) were generated by standard volume rendering; (b), (d), and (f) by scale-invariant volume rendering with silhouette enhancement.
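For reference, the Marschner-Lobb test signal can be evaluated procedurally; the following sketch uses the parameters α = 0.25 and fM = 6 from the original publication [9] (a minimal reimplementation, not the authors' code):

```python
import math

def marschner_lobb(x, y, z, alpha=0.25, f_m=6.0):
    # Radial frequency sweep in the x-y plane, modulated along z;
    # the result lies in [0, 1] for (x, y, z) in [-1, 1]^3.
    r = math.sqrt(x * x + y * y)
    rho_r = math.cos(2.0 * math.pi * f_m * math.cos(math.pi * r / 2.0))
    return (1.0 - math.sin(math.pi * z / 2.0)
            + alpha * (1.0 + rho_r)) / (2.0 * (1.0 + alpha))
```

Evaluating the field analytically in the ray caster, as done for all figures in this work, avoids filtering and interpolation artifacts of sampled data.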
In Figures 8c and 8d, the 3dz2 orbital of the hydrogen atom is visualized by one semitransparent isosurface, which consists of three
separate components. Again, the scale-invariant volume rendering
in Figure 8d is plausible while standard volume rendering in Figure 8c shows a more opaque ring in the center of the image. The
reason for this difference is a small local gradient near the local
maximum of the scalar field that is within this component of the
isosurface. Additional visual differences between Figures 8c and
8d can be found at the top and the bottom of the images, where the
gradient is slowly decreasing.
Figures 8e and 8f show an example of hardware-accelerated
tetrahedra projection. The visualized tetrahedral mesh was produced by the decomposition of a uniform 32³ grid into 148955
tetrahedral cells. While the standard volume rendering in Figure 8e
employs three-dimensional textures as suggested by Röttger et
al. [10], scale-invariant volume rendering with silhouette enhancement requires only two-dimensional lookup tables as discussed in
Section 3.2.1. The use of two-dimensional textures in Figure 8f
increased the rendering performance by about 23 % on the tested
system (800 MHz PowerPC G3 with an ATI Radeon 7500 Mobility). Without silhouette enhancement, which was implemented
using the texture combine environment mode, the measured performance gain was even about 27 %.
7 CONCLUSIONS AND FUTURE WORK
As shown in this work, scale-invariant volume rendering offers
various advantages for volume visualization. Apart from scale-invariant colors and opacities, the most important advantage is the
possibility to render isosurfaces of uniform opacity and color. In
combination with the presented silhouette enhancement and the
proposed illumination model, scale-invariant volume rendering can
considerably improve the quality of isosurfaces rendered by many
volume visualization algorithms, including hardware-accelerated
algorithms. The performance cost compared to standard volume
rendering depends on the particular volume rendering algorithm;
for some pre-integrated algorithms, however, scale-invariant volume rendering offers a significant increase in rendering performance. Moreover, it provides a variant of volume rendering that
represents the limit of infinitely many semitransparent isosurfaces,
and it also provides the mathematical background for at least two
ad hoc techniques that have been published previously.
Apart from implementing scale-invariant variants of various volume rendering algorithms, the most important future work is the design of appropriate user interfaces, since users can gain much more
control over colors and opacities in scale-invariant volume rendering than in standard volume rendering if appropriate interfaces are
provided. Furthermore, the specific advantages of scale-invariant volume rendering (with respect to isosurfaces and beyond) have to be investigated in actual applications.
REFERENCES
[1] David Ebert and Penny Rheingans. Volume illustration: Non-photorealistic rendering of volume models. In Thomas Ertl, Bernd
Hamann, and Amitabh Varshney, editors, Proceedings IEEE Visualization 2000, pages 195–202. IEEE Computer Society Press, 2000.
[2] Klaus Engel, Martin Kraus, and Thomas Ertl. High-quality pre-integrated volume rendering using hardware-accelerated pixel shading. In William Mark and Andreas Schilling, editors, Proceedings
Graphics Hardware 2001, pages 9–16. ACM Press, 2001.
[3] R. A. Hall and Donald P. Greenberg. A testbed for realistic image
synthesis. IEEE Computer Graphics and Applications, 3(8):10–20,
1983.
[4] Douglas Scott Kay and Donald Greenberg. Transparency for computer
synthesized images. In Proceedings of the 6th Annual Conference
on Computer Graphics and Interactive Techniques, pages 158–164.
ACM Press, 1979.
[5] Gordon Kindlmann and James W. Durkin. Semi-automatic generation
of transfer functions for direct volume rendering. In Proceedings 1998
IEEE Symposium on Volume Visualization, pages 79–86, 1998.
[6] Martin Kraus and Thomas Ertl. Pre-integrated volume rendering. In
Charles D. Hansen and Christopher R. Johnson, editors, The Visualization Handbook. Academic Press, 2004.
[7] Marc Levoy. Display of surfaces from volume data. IEEE Computer
Graphics and Applications, 8(3):29–37, 1988.
[8] Eric B. Lum, Brett Wilson, and Kwan-Liu Ma. High-quality lighting
and efficient pre-integration for volume rendering. In Proceedings
Joint Eurographics-IEEE TVCG Symposium on Visualization 2004
(VisSym ’04), pages 25–34, 2004.
[9] S. Marschner and R. Lobb. An evaluation of reconstruction filters for
volume rendering. In R. D. Bergeron and A. E. Kaufman, editors, Proceedings IEEE Visualization 1994, pages 100–107. IEEE Computer
Society Press, 1994.
[10] Stefan Röttger, Martin Kraus, and Thomas Ertl. Hardware-accelerated
volume and isosurface rendering based on cell-projection. In Thomas
Ertl, Bernd Hamann, and Amitabh Varshney, editors, Proceedings
IEEE Visualization 2000, pages 109–116. IEEE Computer Society
Press, 2000.
[11] Holly E. Rushmeier and Kenneth E. Torrance. Extending the radiosity method to include specularly reflecting and translucent materials.
ACM Transactions on Graphics, 9(1):1–27, 1990.
[12] Jürgen P. Schulze, Martin Kraus, Ulrich Lang, and Thomas Ertl. Integrating pre-integration in the shear-warp algorithm. In Proceedings
Third International Workshop on Volume Graphics 2003, pages 109–
118, 2003.