
PROCEDURAL VOLUME MODELING,
RENDERING, AND VISUALIZATION
David Ebert, Purdue University
Penny Rheingans, University of Maryland Baltimore County
1. INTRODUCTION
Volume visualization techniques have advanced dramatically over the
past fifteen years. However, the scale of visualization tasks has grown at
an even faster rate. Today, many problems require the
visualization of gigabytes to terabytes of data. Additionally, the number of
variables and dimensionality of many scientific simulations and observations
has increased, while the resolution of computer graphics displays has not
changed substantially (still a few million pixels). These factors present a
significant challenge to current visualization techniques and approaches. We
propose a new approach to visualization to solve these problems and provide
flexibility and extensibility for visualization challenges over the next decade:
procedural visualization. In this approach, we encode and abstract datasets
to a more manageable level, while also developing more effective
visualization and rendering techniques.
1.1 Background on Procedural Techniques
Procedural techniques have been used throughout the history of
computer graphics. Many early modeling and texturing techniques included
procedural definitions of geometry and surface color. From these early
beginnings, procedural techniques have exploded into an important, powerful
modeling, texturing, and animation paradigm. During the mid to late 1980s,
procedural techniques for creating realistic textures, such as marble, wood,
stone, and other natural material, gained widespread use. These techniques
were extended to procedural modeling, including models of water, smoke,
steam, fire, planets, and even tribbles. The development of the RenderMan
(a registered trademark of Pixar) shading language [Pixar89] in 1989 greatly
expanded the use of procedural
techniques. Currently, most commercial rendering and animation systems
provide a procedural interface. Procedural techniques have become an
exciting, vital aspect of creating realistic computer generated images and
animations. As the field continues to evolve, the importance and significance
of procedural techniques will continue to grow.
Procedural techniques are code segments or algorithms that specify some
characteristic of a computer generated model or effect. For example, a
procedural texture for a marble surface does not use a scanned-in image to
define the color values. Instead, it uses algorithms and mathematical
functions to determine the color.
One of the most important features of procedural techniques is
abstraction. In a procedural approach, rather than explicitly specifying and
storing all the complex details of a scene or sequence, we abstract them into
a function or an algorithm (i.e., a procedure) and evaluate that procedure
when and where needed. We gain a storage savings, as the details are no
longer explicitly specified but rather implicit in the procedure, and shift the
time requirements for specification of details from the programmer to the
computer. This allows us to create inherently multi-resolution models and
textures that we can evaluate to the resolution needed.
We also gain the power of parametric control, allowing us to assign to a
parameter a meaningful concept (e.g., a number that makes mountains
"rougher" or "smoother"). Parametric control also provides amplification of
the modeler/animator’s efforts: a few parameters yield large amounts of detail
(Smith [Smith84] referred to this as database amplification). This parametric
control unburdens the user from the low-level control and specification of
detail. We also gain the serendipity inherent in procedural techniques: we are
often pleasantly surprised by the unexpected behaviors of procedures,
particularly stochastic procedures. Procedural models also offer flexibility.
The designer of the procedures can capture the essence of the object,
phenomenon, or motion without being constrained by the complex laws of
physics. Procedural techniques allow the inclusion of any amount of physical
accuracy into the model that is desired. The designer may produce a wide
range of effects, from accurately simulating natural laws to purely artistic
effects.
1.2 Overview of the Procedural Visualization Approach
We believe that using a procedural approach in volume graphics is a
flexible, extensible, and powerful method for modeling, rendering, and
visualizing volumetric data. Our procedural visualization methodology uses
procedural techniques for general volume modeling, improving the
effectiveness of volume rendering, and abstracting large datasets for
interactive visualization on the desktop.
2. PROCEDURAL VOLUME MODELING TECHNIQUES
Many advanced geometric modeling techniques, such as fractals
[Peitgen92], implicit surfaces [Blinn82, Wyvill86, Nishimura85], grammar-based modeling [Smith84, Prusinkiewicz90], and volumetric procedural
models/hypertextures [Perlin85, Ebert94] use procedural abstraction of detail
to allow the designer to control and animate objects at a high level. When
modeling complex volumetric phenomena, such as clouds, this abstraction of
detail and data amplification are necessary to make the modeling and
animation tractable. It would be impractical for an animator to specify and
control the detailed three-dimensional density for most intricate volume
models. As an example of the advantages of procedural volume modeling,
we will describe our approach to volumetric cloud modeling. Volumetric
procedural models are a natural choice for cloud modeling because they are
the most flexible advanced modeling technique. Since a procedure is
evaluated to determine the object’s density, any advanced modeling
technique, simple physics simulation, mathematical function or artistic
algorithm can be included in the model.
Combining traditional volumetric procedural models with implicit
functions creates a model that has the advantages of both techniques. Implicit
functions have been used for many years as a modeling tool for creating solid
objects and smoothly blended surfaces [Bloomenthal97]. However, only a
few researchers have explored their potential for modeling volumetric
density distributions of semi-transparent volumes (e.g., [Nishita96], [Stam91,
Stam93, Stam95], [Ebert97a]). Ebert’s early work on using volume rendered
implicit spheres to produce a fly-through of a volumetric cloud was described
in [Ebert97a]. This work has been developed further to use implicits to
provide a natural way of specifying and animating the global structure of the
cloud, while using more traditional procedural techniques to model the
detailed structure. More details on the implementation of these techniques
can be found in [Ebert98].
The volumetric cloud model uses a two-level structure: the cloud macrostructure
and the cloud microstructure. These are modeled by implicit functions and
turbulent volume densities, respectively. The basic structure of the cloud
model combines these two components to determine the final density of the
cloud.
The cloud's microstructure is created by using procedural turbulence and
noise functions [Ebert98]. This allows the procedural simulation of natural
detail to the level needed. Simple mathematical functions are added to allow
shaping of the density distributions and control over the sharpness of the
density falloff.
Implicit functions were chosen to model the cloud macrostructure
because of their ease of specification and smoothly blending density
distributions. The user simply specifies the location, type, and weight of the
implicit primitives to create the overall cloud shape. Since these are volume
rendered as a semi-transparent medium, the whole volumetric field function
is being rendered, as compared to implicit surface rendering where only a
small range of values of the field are used to create the objects. The implicit
density functions are primitive-based density functions: they are defined by
summed, weighted, parameterized, primitive implicit surfaces.
The real power of implicit functions is the smooth blending of the
density fields from separate primitive sources. Wyvill’s standard cubic
function [Wyvill86] is used as the density (blending) function for the
implicit primitives. The final implicit density value is then the weighted sum
of the density field values of each primitive.
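For concreteness, one common form of Wyvill's field function is a sixth-degree polynomial in r/R that equals 1 at the primitive center and falls smoothly to 0 at the radius of influence R, with zero derivative at both ends. The sketch below illustrates this blending and the weighted sum over primitives; the function names and the (center, radius, weight) tuple layout are illustrative, not taken from the original system:

```python
import numpy as np

def wyvill_blend(r, R):
    """Wyvill's standard cubic field function: 1 at r=0, smoothly 0 at r=R,
    with zero first derivative at both ends (C1 at the boundary)."""
    if r >= R:
        return 0.0
    q = (r / R) ** 2
    return 1.0 - (4.0 / 9.0) * q**3 + (17.0 / 9.0) * q**2 - (22.0 / 9.0) * q

def implicit_density(p, primitives):
    """Final implicit density: weighted sum of per-primitive field values.
    Each primitive is a (center, radius_of_influence, weight) tuple."""
    return sum(w * wyvill_blend(np.linalg.norm(p - c), R)
               for c, R, w in primitives)
```

Because each primitive's field vanishes outside its radius of influence, adding or moving one primitive only changes the density locally, which is what makes the blended macrostructure easy to specify and animate.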
To create non-solid implicit primitives, the location of the point is
procedurally altered before the evaluation of the blending functions. This
alteration can be the product of the procedure and the implicit function
and/or a warping of the implicit space. These techniques are combined into a
simple cloud model as shown below:
volumetric_procedural_implicit_function(pnt, blend%, pixel_size)
    perturbed_point = procedurally alter pnt using noise and turbulence
    density1 = implicit_function(perturbed_point)
    density2 = turbulence(pnt, pixel_size)
    density = blend% * density1 + (1 - blend%) * density2
    density = shape resulting density based on user controls for
              wispiness and denseness (e.g., use pow & exponential functions)
    return(density)
The density from the implicit primitives is combined with a pure
turbulence-based density using a user-specified blend% (60% to 80% gives
good results). The blending of the two densities allows the creation of clouds
that range from entirely determined by the implicit function density to
entirely determined by the procedural turbulence function. When the clouds
are completely determined by the implicit functions, they tend to look more
like cotton balls. The addition of the procedural alteration and turbulence is
what gives them their naturalistic look. An example resulting cloud can be
seen in Figure 1.
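The cloud model can be sketched in runnable form. In this sketch the turbulence function is replaced by a cheap sine-based stand-in (a real implementation would use Perlin-style noise), and the perturbation amount, offsets, and shaping exponent are illustrative values, not those of the original system:

```python
import math
import numpy as np

def fake_turbulence(p, octaves=4):
    """Stand-in for a turbulence function: sum of absolute-value 'noise'
    at doubling frequencies. Illustrative only; not real Perlin noise."""
    t, freq = 0.0, 1.0
    for _ in range(octaves):
        t += abs(math.sin(freq * (p[0] + 2.0 * p[1] + 3.0 * p[2]))) / freq
        freq *= 2.0
    return t / 2.0  # roughly normalize to [0, 1]

def cloud_density(p, implicit_fn, blend=0.7, sharpness=2.0):
    """Two-level cloud model: implicit macrostructure blended with a
    turbulent microstructure, then shaped by a power function."""
    # Procedurally alter the point before evaluating the implicit field.
    offsets = (np.zeros(3), np.full(3, 7.3), np.full(3, 13.1))
    perturbed = p + 0.3 * np.array([fake_turbulence(p + o) for o in offsets])
    density1 = implicit_fn(perturbed)   # macrostructure (implicit primitives)
    density2 = fake_turbulence(p)       # microstructure (turbulence)
    d = blend * density1 + (1.0 - blend) * density2
    return max(0.0, d) ** sharpness    # user-controlled density falloff
```

The blend parameter plays the role of blend% in the model above: 1.0 yields a cloud determined entirely by the implicit field, 0.0 one determined entirely by turbulence.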
3. PROCEDURALLY-ENHANCED VOLUME RENDERING
We have applied the procedural visualization approach to the problem of
direct volume visualization, using procedural methods to generate illustrative
enhancements to a typical raycast volume rendering pipeline. The typical
direct volume renderer uses a transfer function to map the scalar voxel values
to color and opacity. This mapping is essentially a very simple procedural
method. Some methods have incorporated the gradient along with the voxel
value as parameters to the transfer functions. We generalize still further to
use a variety of volume feature indicators as parameters for multivariate
transfer functions, using these procedural methods to augment the standard
rendering process with non-photorealistic rendering (NPR) techniques to
enhance the expressiveness of the visualization. NPR draws inspiration from
such fields as art and technical illustration to develop automatic methods to
synthesize images with an illustrated look from geometric surface models.
Procedural methods can be used to develop a set of NPR techniques
specifically for the visualization of volume data, including both the
adaptation of existing NPR techniques to volume rendering and the
development of new techniques specifically suited for volume models. We
call this approach volume illustration.
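The transfer function described above, i.e., the simplest procedural mapping from scalar voxel value to color and opacity, can be sketched as a piecewise-linear lookup. The control points and colors below are illustrative, not a calibrated medical mapping:

```python
import numpy as np

def make_transfer_function(control_points):
    """Build a 1D transfer function from (value, (r, g, b, alpha)) control
    points; returns a callable mapping a scalar voxel value to RGBA by
    piecewise-linear interpolation."""
    vals = np.array([v for v, _ in control_points])   # must be increasing
    rgba = np.array([c for _, c in control_points])
    def tf(x):
        return np.array([np.interp(x, vals, rgba[:, i]) for i in range(4)])
    return tf

# Illustrative mapping for normalized CT-like values:
# low values transparent (air), mid values translucent reddish (soft tissue),
# high values nearly opaque white (bone).
tf = make_transfer_function([
    (0.0, (0.0, 0.0, 0.0, 0.0)),
    (0.4, (0.8, 0.3, 0.2, 0.1)),
    (1.0, (1.0, 1.0, 1.0, 0.9)),
])
```

The multivariate transfer functions discussed in this section generalize exactly this lookup: the domain grows from the voxel value alone to include gradient and other feature indicators.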
3.1 Related Work
Traditional volume rendering spans a spectrum from the accurate to the
ad hoc, using both realistic illumination models to simulate atmospheric
attenuation and transfer functions to produce artificial views of the data to
highlight regions of interest [Drebin88, Kindlmann98, Fujishiro99]. While
transfer functions can be effective at bringing out the structure in the value
distribution of a volume, they are generally limited by their dependence on
voxel value as the sole transfer function domain.
In contrast, there has been extensive research for illustrating surface
shape using non-photorealistic rendering techniques. Techniques adapted
from art and illustration include a tone-based illumination model [Gooch98],
the extraction and rendering of silhouettes and other expressive lines
[Salisbury94, Gooch99], and the use of expressive textures to convey surface
shape [Rheingans96, Salisbury97, Interrante97]. A few researchers have
applied NPR techniques to the display of data, drawing techniques from
painting [Kirby99], pen-and-ink illustrations [Treavett00], and technical
illustration [Interrante98, Saito94]. With the exceptions of the work of Saito
and Interrante, the use of NPR techniques has been confined to surface
rendering.
3.2 Approach
We have used the procedural approach to develop a collection of volume
illustration techniques that adapt and extend NPR techniques to volume
objects. Most traditional volume enhancement has relied on functions of the
volume sample values (e.g., opacity transfer functions), although some
techniques have also used the volume gradient (e.g., [Levoy90]). In contrast,
our volume illustration techniques are fully incorporated into the volume
rendering process, utilizing viewing information, lighting information, and
additional volumetric properties to provide a powerful, easily extensible
framework for volumetric enhancement. Comparing Diagram 1, the
traditional volume rendering system, and Diagram 2, our volume illustration
rendering system, demonstrates the difference in our approach to volume
enhancement. By procedurally incorporating enhancements to the volume
sample’s color, illumination, and opacity into the rendering system, we can implement a
wide range of enhancement techniques. The properties that can be
incorporated into the volume illustration procedures include the following:
• Volume sample location and value
• Local volumetric properties, such as gradient and minimal change direction
• View direction
• Light information
The view direction and light information allow global orientation
information to be used in enhancing local volumetric features. Combining
this rendering information with user selected parameters provides a powerful
framework for volumetric enhancement and modification for artistic effects.
Diagram 1. Traditional Volume Rendering Pipeline: volume values f1(xi) →
shading and classification → voxel colors cλ(xi) and voxel opacities α(xi) →
shaded, segmented volume [cλ(xi), α(xi)] → resampling and compositing
(raycasting, splatting, etc.) → image pixels Cλ(ui).

Diagram 2. Volume Illustration Rendering Pipeline: volume values f1(xi) →
transfer function → volume illustration color modification and volume
illustration opacity modification → final volume sample [cλ(xi), α(xi)] →
resampling and compositing → image pixels Cλ(ui).
Volumetric illustration differs from surface-based NPR in several
important ways. In NPR, the surfaces (features) are well defined, whereas
with volumes, feature areas within the volume must be determined through
analysis of local volumetric properties. The volumetric features vary
continuously throughout three-dimensional space and are not as well defined
as surface features. Once these volumetric feature volumes are identified,
user selected parametric properties can be used to enhance and illustrate
them.
Figure 2 shows gaseous illumination of an abdominal CT volume of
256×256×128 voxels. In this image, as in others of this data set, the scene is
illuminated by a single light above the volume and slightly toward the
viewer. The structure of tissues and organs is difficult to understand. In
Figure 3, a transfer function has been used to assign voxel colors which
mimic those found in actual tissue. The volume is illuminated as before.
Organization of tissues into organs is clear, but the interiors of structures are
still unclear.
3.3 Enhancement Techniques
In a surface model, the essential feature is the surface itself. The surface
is explicitly and discretely defined by a surface model, making “surfaceness”
a boolean quality. Many other features, such as silhouettes or regions of
high curvature, are simply interesting parts of the surface. Such features can
be identified by analysis of regions of the surface. In a volume model, there
are no such discretely defined features. Volume characteristics and the
features that they indicate exist continuously throughout the volume.
However, the boundaries between regions are still one feature of interest.
The local gradient magnitude at a volume sample can be used to indicate the
degree to which the sample is a boundary between disparate regions. The
direction of the gradient is analogous to the surface normal. Regions of high
gradient are similar to surfaces, but now “surfaceness” is a continuous,
volumetric quality, rather than a boolean quality.
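A minimal sketch of this boundary idea: scale each sample's base opacity by a function of the local gradient magnitude, so that boundary-like (high-gradient) regions are emphasized. The k_c/k_s/k_e parameterization and default values below are illustrative, not the exact model used in our system:

```python
import numpy as np

def boundary_enhanced_opacity(opacity, gradient, k_c=0.5, k_s=1.5, k_e=1.0):
    """Emphasize boundary-like regions: opacity is scaled by a constant term
    plus a term that grows with gradient magnitude. k_e controls how sharply
    the enhancement responds to the gradient. Values are illustrative."""
    g = np.linalg.norm(gradient)
    return np.clip(opacity * (k_c + k_s * g ** k_e), 0.0, 1.0)
```

With k_c below 1, homogeneous regions (near-zero gradient) become more transparent than in the unenhanced rendering, while high-gradient "surfaceness" regions become more opaque, which is the continuous analogue of rendering only the surfaces.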
Few of the usual depth cues are present in traditional rendering of
translucent volumes. Obscuration cues are largely missing since there are no
opaque objects to show a clear depth ordering. Perspective cues from
converging lines and texture compression are also lacking, since few volume
models contain straight lines or uniform textures. The dearth of clear depth
cues makes understanding spatial relationships of features in the volume
difficult. Similarly, information about the orientation of features within the
volume is also largely missing. As a result, the shape of individual structures
within even illuminated volumes is difficult to perceive.
We have developed several techniques for the procedural enhancement
of volume features based on gradient, as well as depth and orientation cues in
volume models, inspired by shading concepts in art and technical illustration.
3.3.1 Silhouettes
Surface orientation is an important visual cue that has been successfully
conveyed by artists for centuries through numerous techniques, including
silhouette lines and orientation-determined saturation effects. Silhouette lines
are particularly important in the perception of surface shape, and have been
utilized in surface illustration and surface visualization rendering
[Salisbury94, Interrante95]. Similarly, silhouette volumes increase the
perception of volumetric features.
In order to strengthen the cues provided by silhouette volumes, we
increase the opacity of volume samples where the gradient is nearly
perpendicular to the view direction, indicated by a dot product between
gradient and view direction which nears zero. Figure 4 shows the result of
both boundary and silhouette enhancement in the medical volume. The fine
honeycomb structure of the liver interior is clearly apparent, as well as
additional internal structure of the kidneys.
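This silhouette test can be sketched as follows; the constants are illustrative, and the exponent k_e controls how tightly the enhancement hugs the silhouette:

```python
import numpy as np

def silhouette_enhanced_opacity(opacity, gradient, view_dir,
                                k_c=0.8, k_s=1.0, k_e=4.0):
    """Increase opacity where the gradient is nearly perpendicular to the
    view direction (|grad_hat . view_hat| near zero). Constants illustrative."""
    g = np.linalg.norm(gradient)
    if g < 1e-8:
        return opacity  # no orientation information in homogeneous regions
    n = gradient / g
    v = view_dir / np.linalg.norm(view_dir)
    # (1 - |n.v|) is 1 on the silhouette and 0 where the gradient faces the eye.
    return np.clip(opacity * (k_c + k_s * (1.0 - abs(n @ v)) ** k_e), 0.0, 1.0)
```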
3.3.2 Feature halos
Illustrators sometimes use null halos around foreground features to
reinforce the perception of depth relationships within a scene. The effect is
to leave the areas just outside surfaces empty, even if an accurate depiction
would show a background object in that place. Interrante [Interrante98] used
a similar idea to show depth relationships in 3D flow data using Line Integral
Convolution (LIC). The resulting halos achieved the desired effect, but the
method depended on having flow data suitable for processing with LIC.
We use a more general method for creating halo effects during the
illumination process using the local spatial properties of the volume. Halos
are created primarily in planes orthogonal to the view vector by making
regions just outside features darker and more opaque, obscuring background
elements which would otherwise be visible. The strongest halos are created
in empty regions just outside (in the plane perpendicular to the view
direction) of a strong feature.
Figure 5 shows the effectiveness of adding halos to the medical volume.
Structures in the foreground, such as the liver and kidneys, stand out more
clearly.
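One way to sketch the halo computation is to probe the gradient magnitude at a ring of points around each sample in the plane perpendicular to the view direction; a large probed value near an otherwise empty sample marks it as halo territory. The sampling scheme, names, and single-ring simplification below are illustrative, not the actual method:

```python
import numpy as np

def halo_strength(p, view_dir, gradient_mag_at, n_samples=8, radius=1.0):
    """Estimate how strongly a (mostly empty) sample at p should be darkened
    and opacified as a halo, by probing gradient magnitude on a ring around p
    in the plane perpendicular to the view. gradient_mag_at is a callable
    returning gradient magnitude at a 3D point (hypothetical helper)."""
    v = view_dir / np.linalg.norm(view_dir)
    # Two unit vectors spanning the plane perpendicular to the view direction.
    a = np.array([1.0, 0.0, 0.0]) if abs(v[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(v, a); u /= np.linalg.norm(u)
    w = np.cross(v, u)
    strength = 0.0
    for i in range(n_samples):
        theta = 2.0 * np.pi * i / n_samples
        probe = p + radius * (np.cos(theta) * u + np.sin(theta) * w)
        strength = max(strength, gradient_mag_at(probe))
    return strength  # caller maps this to extra opacity and a darker color
```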
3.3.3 Tone shading
Another illustrative technique used by painters is to modify the tone of an
object based on the orientation of that object relative to the light. This
technique can be used to give surfaces facing the light a warm cast while
surfaces not facing the light get a cool cast, giving effects suggestive of
illumination by a warm light source, such as sunlight. Gooch et al. proposed
an illumination model based on this technique [Gooch98], defining a
parameterized model for effects from pure tone shading to pure illuminated
object color. The parameters define a warm color by combining yellow and
the scaled fully illuminated object color. Similarly, a cool color combines
blue and the scaled ambient illuminated object color. The final surface color
is formed by interpolation between the warm and cool color based on the
signed dot product between the surface normal and light vector.
We implemented an illumination model similar to Gooch tone shading
for use with volume models. The color at a voxel is a weighted sum of the
illuminated gaseous color (including any traditional transfer function
calculations) and the total tone and directed shading from all directed light
sources. The tone contribution from a single light source is interpolated from
the warm and cool colors, depending on the angle between the light vector
and the sample gradient. Samples oriented toward the light become more
like the warm color while samples oriented away from the light become more
like the cool color.
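The tone interpolation for a single light source can be sketched as below; the warm and cool RGB values are illustrative, and a full implementation would add this tone contribution to the illuminated gaseous color as described above:

```python
import numpy as np

def tone_color(gradient, light_dir,
               warm=np.array([0.6, 0.5, 0.1]),    # yellowish; illustrative
               cool=np.array([0.1, 0.1, 0.5])):   # bluish; illustrative
    """Interpolate between a warm and a cool color based on the angle
    between the light vector and the sample gradient, in the spirit of
    Gooch tone shading adapted to volume samples."""
    g = np.linalg.norm(gradient)
    if g < 1e-8:
        return 0.5 * (warm + cool)  # undefined orientation: neutral tone
    # t maps the signed dot product from [-1, 1] to [0, 1]; t=1 faces the light.
    t = (1.0 + (gradient / g) @ (light_dir / np.linalg.norm(light_dir))) / 2.0
    return t * warm + (1.0 - t) * cool
```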
Figure 6 shows tone shading applied together with colors from a transfer
function. The tone effects are subtle, but still improve shape perception. The
basic tissue colors are preserved, but the banded structure of the aorta is more
apparent than in a simple illuminated and color-mapped image (Figure 3).
3.4 Enhancement Examples
We have also applied the techniques in the previous sections to several
other scientific data sets. Figure 7 shows a 512×512×128 element flow data
set from the time series simulation of unsteady flow emanating from a 2D
rectangular slot jet. The 2D jet source is located at the left of the image and
the flow is to the right. Flow researchers notice that both Figures 7 and 8
resemble Schlieren photographs that are traditionally used to analyze flow.
Figure 8 shows the effectiveness of boundary enhancement, silhouette
enhancement, and tone shading on this data set. The overall flow structure,
vortex shedding, and helical structure are much easier to perceive in Figure 8
than in Figure 7.
Figures 9 and 10 are volume renderings of a 64×64×64 high-potential
iron protein data set. Figure 9 is a traditional gas-based rendering of the data.
Figure 10 has our tone shading volume illustration techniques applied. The
relationship of structure features and the three-dimensional location of the
features is much clearer with the tone-based shading enhancements applied.
4. PROCEDURAL ABSTRACTION OF LARGE DATASETS
The idea of functionally representing volumetric data has been inherent in
visualization for the past 10 years. Most isosurface visualization algorithms
assume that a continuous function describes the field being visualized and
use the sampled data points and implied characteristics of the function (e.g.,
gradient) to create smooth surfaces representing isovalues of the field
[Gallagher89]. Similarly, a procedural approach to data abstraction extends
this inherent nature to capitalize on the compact representation capability of
functions for characterizing data. A collection of procedural basis functions
(implicit field functions, fractals, fractal interpolation functions) can be used
to describe large data sets at multiple scales of resolution, capitalizing on the
power of procedural modeling techniques and parametric representation to
provide a flexible, extensible system that features data amplification and
inherent multi-resolution models.
4.1 Characterization of data by functional description
A data set can be preprocessed in order to create a compact functional
representation of each data variable. This functional representation allows
viewers to quickly transfer a compact representation of a potentially
multi-gigabyte database to their workstation, where it is processed locally
for visualization, interaction, and exploration. Within the procedural model,
statistically appropriate procedural detail can be generated to quickly
simulate data detail during
interaction and exploration. While the viewer performs an initial exploration
of the data, more detailed data can be retrieved from the database to replace
the procedural detail.
Implicit function techniques provide a compact, flexible method for
describing three-dimensional fields [Bloomenthal97]. Figure 1 shows the
detailed volumetric models of clouds that can be represented with as few as
nine implicit functions (each with four parameters, totaling less than 200
bytes of storage). This use of implicits is different from traditional implicit
modeling techniques in that it uses implicits to characterize volume
distributions of data and not to model surfaces.
A collection of implicit and fractal basis functions, skeletal primitives,
and blending functions can be used to functionally represent and visualize
general multivariate volumetric data. Additionally, specialized functional
representations can take advantage of domain-specific knowledge, allowing
more sophisticated procedural simulations (e.g., simplified Navier-Stokes
simulations, diffusion simulations) to be incorporated into the functional
representation to provide more accurate characterization of the data, more
compact representation of the data, and the ability to generate appropriate
detail and missing data.
One key challenge in this research is the development of automatic and
semi-automatic techniques for generating compact implicit, multi-fractal, and
FIF representations of large volumetric data sets. Implicits are a natural
choice for multi-resolution volumetric data representation because of their C²
continuous blending, the well-defined spatial extent of each implicit (local
control), and the ability to both add and subtract detail from the volumetric
approximation through the use of positive and negative weighting of
primitives. A small number of implicits can be used to represent the main
characteristics of the volumetric data set. The addition of more implicits
with finer spatial extents will add detail on demand. In general, implicits
provide a smoothly varying volumetric field representation, which is
appropriate for large-scale volumetric features and, in some applications, fine
resolution features. In contrast, multi-fractals provide sharp detail and more
discontinuous volumetric functions through the addition of detail at varying
frequency scales. FIFs and multi-fractals are a natural choice for representing
data with large abrupt changes in values, discontinuous functions, and
substantial noise. FIFs provide primitives that can represent detailed data that
is smooth, rough, and even transitions from smooth to rough. The
combination of these three types of primitives provides a powerful system for
representing volumetric data with a wide variety of characteristics.
4.2 Visualization from functional descriptions
Two basic approaches are available for the visualization of functionally
represented data: voxelize, then visualize; or visualize directly from the
functional representation. To voxelize the functional representation, all that
is needed is to evaluate the appropriate functions at the centers of a fixed
volume grid. Direct visualization of the functional representation can include
statistically appropriate procedural pseudo-detail to provide a better
representation of the underlying data set. This pseudo-detail can be tagged
internally and replaced with better approximations as computational time
allows.
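The "voxelize, then visualize" path is straightforward to sketch: evaluate the functional representation at the centers of a fixed grid, producing a volume any standard renderer can consume. Grid layout, bounds, and names are illustrative:

```python
import numpy as np

def voxelize(field_fn, resolution=32, lo=-1.0, hi=1.0):
    """Evaluate a functional (e.g., implicit) representation at the centers
    of a fixed cubic grid over [lo, hi]^3, yielding a conventional volume."""
    step = (hi - lo) / resolution
    centers = lo + step * (np.arange(resolution) + 0.5)  # cell centers
    vol = np.empty((resolution,) * 3)
    for i, x in enumerate(centers):
        for j, y in enumerate(centers):
            for k, z in enumerate(centers):
                vol[i, j, k] = field_fn(np.array([x, y, z]))
    return vol
```

Direct visualization skips this step and evaluates field_fn inside the renderer itself, which is what makes it possible to inject tagged procedural pseudo-detail at render time.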
5. CONCLUSIONS
We have described a procedural visualization framework that is a
powerful, flexible, and extensible approach to visualization challenges today
and over the next decade. Using procedural techniques allows the abstraction
of volumetric models and datasets to a compact, higher-level form for ease of
specification and interactive viewing on desktop computers. Extending this
visualization approach to volume illustration provides procedural control of
visualization enhancement and a powerful technique for expressively
conveying important features within data volumes.
6. ACKNOWLEDGMENTS
The authors would like to thank Dr. Elliott Fishman of Johns Hopkins
Medical Institutions for the abdominal CT dataset. The iron protein dataset
came from the vtk website (www.kitware.com/vtk.html). Christopher Morris
generated some of the pictures included in this paper. This material is based
on work supported by the National Science Foundation under Grants No.
0081581, 9996043, and 9978032.
7. REFERENCES
[Blinn82] Blinn, J., "Light Reflection Functions for Simulation of Clouds and Dusty
Surfaces," Computer Graphics (SIGGRAPH ’82 Proceedings), 16(3):21-29, July 1982.
[Bloomenthal97] Bloomenthal, J., Bajaj, C., Blinn, J., Cani-Gascuel, M.P., Rockwood, A.,
Wyvill, B., Wyvill, G., Introduction to Implicit Surfaces, Morgan Kaufman Publishers,
1997.
[Drebin88] Robert A. Drebin and Loren Carpenter and Pat Hanrahan. Volume Rendering,
Computer Graphics (Proceedings of SIGGRAPH 88), 22(4), pp. 65-74 (August 1988,
Atlanta, Georgia). Edited by John Dill.
[Ebert94] Ebert, D., Carlson, W., Parent, R., "Solid Spaces and Inverse Particle Systems for
Controlling the Animation of Gases and Fluids," The Visual Computer, 10(4):179-190,
1994.
[Ebert97a] Ebert, D., "Volumetric Modeling with Implicit Functions: A Cloud is Born,"
SIGGRAPH 97 Visual Proceedings (Technical Sketch), 147, ACM SIGGRAPH 1997.
[Ebert98] Ebert, D., Musgrave, F., Peachey, D., Perlin, K., Worley, S., Texturing and
Modeling: A Procedural Approach, Second Edition, AP Professional, July 1998.
[Fujishiro99] Issei Fujishiro and Taeko Azuma and Yuriko Takeshima. Automating Transfer
Function Design for Comprehensible Volume Rendering Based on 3D Field Topology
Analysis, IEEE Visualization ’99, pp. 467-470 (October 1999, San Francisco, California).
IEEE. Edited by David Ebert and Markus Gross and Bernd Hamann.
[Gallagher89] R. Gallagher and J. Nagtegaal, "An Efficient 3-D Visualization Technique for
Finite Element Models and Other Coarse Volumes," Computer Graphics (SIGGRAPH
’89 Proceedings), 23 (3), pp. 185-194 (July 1989). Edited by Jeffrey Lane.
[Gooch98] Amy Gooch, Bruce Gooch, Peter Shirley, and Elaine Cohen. A Non-photorealistic
Lighting Model for Automatic Technical Illustration. In Proceedings of SIGGRAPH ’98
(Orlando, FL, July 1998), Computer Graphics Proceedings, Annual Conference Series,
pp. 447-452, ACM SIGGRAPH, ACM Press, July 1998.
[Gooch99] Bruce Gooch and Peter-Pike J. Sloan and Amy Gooch and Peter Shirley and Rich
Riesenfeld. Interactive Technical Illustration, 1999 ACM Symposium on Interactive 3D
Graphics, pp. 31-38 (April 1999). ACM SIGGRAPH. Edited by Jessica Hodgins and
James D. Foley. ISBN 1-58113-082-1.
[Interrante97] Victoria Interrante and Henry Fuchs and Stephen M. Pizer. Conveying the 3D
Shape of Smoothly Curving Transparent Surfaces via Texture, IEEE Transactions on
Visualization and Computer Graphics, 3(2), (April - June 1997). ISSN 1077-2626.
[Interrante98] Victoria Interrante and Chester Grosch. Visualizing 3D Flow, IEEE Computer
Graphics & Applications, 18(4), pp. 49-53 (July - August 1998). ISSN 0272-1716.
[Kajiya84] James T. Kajiya and Brian P. Von Herzen. Ray Tracing Volume Densities,
Computer Graphics (Proceedings of SIGGRAPH 84), 18(3), pp. 165-174 (July 1984,
Minneapolis, Minnesota). Edited by Hank Christiansen.
[Kindlmann98] Gordon Kindlmann and James Durkin. Semi-Automatic Generation of
Transfer Functions for Direct Volume Rendering, In Proceedings of 1998 IEEE
Symposium on Volume Visualization, pp. 79-86.
[Kirby99] R.M. Kirby, H. Marmanis, and D.H. Laidlaw. Visualizing Multivalued Data from
2D Incompressible Flows Using Concepts from Painting, IEEE Visualization ’99, pp.
333-340 (October 1999, San Francisco, California). IEEE. Edited by David Ebert and
Markus Gross and Bernd Hamann. ISBN 0-7803-5897-X.
[Levoy90] Marc Levoy. Efficient Ray Tracing of Volume Data, ACM Transactions on
Graphics, 9 (3), pp. 245-261 (July 1990). ISSN 0730-0301.
[Nishimura85] Nishimura, H., Hirai, A., Kawai, T., Kawata, T., Shirakawa, I., Omura, K.,
"Object Modeling by Distribution Function and a Method of Image Generation," Journal
of Papers Given at the Electronics Communication Conference ’85, J68-D(4), 1985.
[Nishita87] Nishita, T., Miyawaki, Y., Nakamae, E., "A shading model for atmospheric
scattering considering luminous intensity distribution of light sources," Computer
Graphics (SIGGRAPH ’87 Proceedings), 21, pages 303-310, July 1987.
[Nishita96] Nishita, T., Nakamae, E., Dobashi, Y., "Display Of Clouds And Snow Taking Into
Account Multiple Anisotropic Scattering And Sky Light," SIGGRAPH 96 Conference
Proceedings, pages 379-386. ACM SIGGRAPH, August 1996.
[Nishita98] Tomoyuki Nishita. Light Scattering Models for the Realistic Rendering of Natural
Scenes, Eurographics Rendering Workshop 1998, pp. 1-10 (June 1998, Vienna, Austria).
Eurographics. Edited by George Drettakis and Nelson Max. ISBN 3-211-83213-0.
[Rheingans96] Penny Rheingans. Opacity-modulating Triangular Textures for Irregular
Surfaces, Proceedings of IEEE Visualization ’96, pp. 219-225 (October 1996, San
Francisco CA). IEEE. Edited by Roni Yagel and Gregory Nielson. ISBN 0-89791-864-9.
[Rheingans01] Penny Rheingans and David Ebert. Volume Illustration: Nonphotorealistic
Rendering of Volume Models, IEEE Transactions on Visualization and Computer
Graphics, vol. 7, no.3, pp. 253-264, July-September 2001.
[Saito94] Takafumi Saito. Real-time Previewing for Volume Visualization. In Proceedings
of 1994 IEEE Symposium on Volume Visualization, pp. 99-106.
[Salisbury94] Michael P. Salisbury and Sean E. Anderson and Ronen Barzel and David H.
Salesin. Interactive Pen-And-Ink Illustration, Proceedings of SIGGRAPH 94, Computer
Graphics Proceedings, Annual Conference Series, pp. 101-108 (July 1994, Orlando,
Florida). ACM Press. Edited by Andrew Glassner. ISBN 0-89791-667-0.
[Salisbury97] Michael P. Salisbury and Michael T. Wong and John F. Hughes and David H.
Salesin. Orientable Textures for Image-Based Pen-and-Ink Illustration, Proceedings of
SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series, pp. 401-406
(August 1997, Los Angeles, California). Addison Wesley. Edited by Turner Whitted.
ISBN 0-89791-896-7.
[Treavett00] S.M.F. Treavett and M. Chen. Pen-and-Ink Rendering in Volume Visualisation,
Proceedings of IEEE Visualization 2000, October 2000, ACM SIGGRAPH Press.
Figure 1. Closeup of a procedural cloud created with nine volumetric implicits.
Figure 2. Gaseous illumination of medical CT volume. Voxels are a constant color.
Figure 3. Gaseous illumination of color-mapped CT volume.
Figure 4. Silhouette and boundary enhancement of CT volume.
Figure 5. Distance color blending and halos around features of CT volume.
Figure 6. Tone shading in colored volume. Surfaces toward light receive warm color.
Figure 7. Atmospheric volume rendering of square jet. No illustration enhancements.
Figure 8. Square jet with boundary and silhouette enhancement, and tone shading.
Figure 9. Atmospheric rendering of iron protein.
Figure 10. Tone shaded iron protein.