HIGH QUALITY SPLATTING AND VOLUME SYNTHESIS

Roger Crawfis
Department of Computer and Information Science
The Ohio State University, Columbus, OH
[email protected]

Jian Huang
Department of Computer Science
University of Tennessee, Knoxville, TN
[email protected]

ABSTRACT

Our recent improvements to volume rendering using splatting allow for accurate functional reconstruction and volume integration, occlusion-based acceleration, post-rendering classification and shading, 3D texture mapping, bump mapping, anti-aliasing and gaseous animation. A pure software implementation requiring low storage and using simple operations has been developed to support these operations easily. This paper presents a unified framework and discussion of these enhancements, as well as extensions to support hypertextures efficiently. A discussion of appropriate volume models for effective use of these techniques is also presented.

1. INTRODUCTION

The splatting approach, introduced by Westover [24] and improved by the research community over the years [12-15], represents a volume as an array of reconstruction voxel kernels, which are classified with colors and opacities based on the applied transfer functions. Each of these kernels leaves a footprint, or splat, on the screen, and the composite of all splats yields the final image. In recent years, a renewed interest in point-based methods [19][20] has brought even more attention to splatting. This interest has spread to areas of computer graphics and visualization beyond volume rendering [15]. Examples include image-based rendering [22], texture mapping [2] and object modeling [9]. In image-based rendering, Shade et al. [22] represent their layered-depth image (LDI) as an array of points that are subsequently projected to the screen. Here, the use of anti-aliased or "fuzzy" points provides good blending on the image plane. LDIs are very similar to volumes. Chen [2], in his forward texture-mapping approach, also uses non-spherical splats to distribute the energy of the source pixels onto the destination image. An earlier approach, taken by Levoy [9], represents solid objects by a hull of fuzzy point primitives augmented with normal vectors. These objects are rendered by projecting the points to the screen, where special care is taken to avoid holes and other artifacts.

The current splatting algorithm [14][15] has a number of nice properties in addition to greatly improved image quality. It uses a sparse data model and only requires the projection of the relevant voxels, i.e., the voxels that fall within a certain density range-of-interest. Other voxels can be culled from the pipeline at the outset. This results in a sparse volume representation [26]. As a consequence, we can store large volumes in a space equivalent to that occupied by much smaller volumes that are being raycast or texture-mapped. Furthermore, most other point-based approaches and some image-based rendering approaches use such a sparse data model. Despite this interest and these advantages, no specific hardware has yet been proposed for splatting. The only acceleration that splatting can exploit is surface texture mapping hardware. The community, however, has already seen hardware acceleration of other volume rendering algorithms. The VolumePro board is a recent product from Mitsubishi [18]. It uses a deeply pipelined raycasting approach and can render a 256³ volume at 30 frames/sec in orthogonal projection.
Volumes larger than what the board allows require reloading across the system bus, reducing the frame rate significantly. Such deficiencies are also true for approaches that use 3D texture mapping hardware [1], which usually has even more limited on-board storage (64MB). That hardware features parallel interpolation planes and composites the interpolated slices in depth order. For a comparison of different volume rendering algorithms, see [12].

Using texture mapping hardware to render each splat was proposed by Crawfis and Max [3] to relieve the main CPU of the computational cost of resampling the footprint tables and compositing each pixel of each splat into the framebuffer. While this approach is now commonly used, the surface texture mapping hardware introduces performance bottlenecks in most current graphics architectures. First, because footprint rasterization is purely 2-dimensional, several steps in the surface texture mapping pipeline are unnecessary for splatting. Second, recent improvements in splatting are increasingly dependent on direct access to the framebuffers. Mueller et al. [15] achieved a considerable speed-up in splatting by incorporating opacity-based culling, akin to the occlusion maps introduced in [27]. Even with the cost of many expensive framebuffer reads and writes, the speed-up is still significant for opaque renderings. More recent work by Mueller et al. [14] uses per-pixel post-rasterization classification and shading to produce sharper images. Both the per-pixel classification and the occlusion test require that the framebuffer be read after each slice is rasterized, and per-pixel classification requires another framebuffer write for each slice. For a 256³ volume, this amounts to approximately 1000 framebuffer reads and writes in their implementation. Texture mapping hardware cannot be efficiently utilized with these improvements to splatting. This performance degradation of splatting using texture hardware is also reported for other point-based approaches [22]. Flexible sample and pixel accesses are now highly desired. Due to these inefficiencies in framebuffer access on current graphics architectures, we explore a fast software alternative for splat rasterization. The goal of this paper is to implement this in software and provide for other per-pixel operations.

2. IMAGE-ALIGNED SHEET-BASED SPLATTING

2.1 The General Approach

Volume rendering involves the reconstruction and volume integration [6] of a 3D raster along sight rays. In practice, volume rendering algorithms are only able to approximate this projection [11]. Splatting [15][24][25] is an object-order approach that performs both volume reconstruction and integration by projecting and compositing weighted reconstruction kernels at each voxel in a depth-sorted order. A reconstruction kernel is centered at each voxel, and its contribution is accumulated onto the image with a 2D projection called a splat. This projection (or line integration) contains the integration of the kernel along a ray from infinity to the viewing plane. The splat is typically precomputed and resampled into what is called a footprint table. This table is then re-sampled into the framebuffer for each splat. For orthogonal projections and radially symmetric interpolation kernels, a single footprint can be pre-computed and used for all views.
This is not the case with perspective projections or nonuniform volumes, which require that the splat size change and that the shape of the distorted reconstruction kernel be non-spherical. Swan [23] addresses the issue of aliasing due to perspective projections with an adaptive footprint size defined by the z-depth.

Westover's [24] first approach to splatting suffered from color bleeding and popping artifacts because of an incorrect volume integration computation. To alleviate this, he accumulated the voxels onto axis-aligned sheets [25], which were composited together to compute the final image. This introduces a more substantial popping artifact when the orientation of the sheets changes as the viewpoint moves. Mueller et al. [13][15] overcome this drawback with a scheme that aligns the sheets to be parallel to the image plane. To further improve the integration accuracy, the spacing between sheets is set to a subvoxel resolution. The 3D reconstruction kernel is sliced into subsections, and the contribution of each section is integrated and accumulated onto the sheet. While this significantly improves image quality over previous methods, it requires much more compositing and several footprint sections per voxel to be scan-converted. However, by using a front-to-back traversal, this method allows for the culling of occluded voxels by checking whether the pixels that a voxel projects to have reached full opacity [15]. An occlusion map, an image containing the accumulated opacities, is updated as the renderer steps sheet-by-sheet in the front-to-back traversal. This is akin to early ray termination and occlusion maps [27]. To address the blurriness of volume renderings in close-up views, Mueller et al. [14] move the classification and shading operations to occur after the accumulation of the splats into each sheet. In this approach, unlike traditional splatting which renders voxels pre-classified and pre-shaded, voxels are rendered using their density values. The sheets, rendered with data values resampled at the image resolution, are then classified, shaded and composited pixel by pixel. Their results show impressively sharp images.

Our image-aligned sheet-based splatting algorithm performs the following steps for each viewpoint:

1. Transform each voxel to image space;
2. Bucket sort the voxels according to the transformed z-values, where each bucket covers the distance between successive sheet buffers;
3. Initialize the occlusion map to zero opacity;
4. For each sheet in front-to-back order do:
5.   For each footprint do
6.     Adjust the footprint's size due to aliasing.
7.     Adjust the footprint's size due to depth-of-field.
8.     If (the pixels within the extent of the footprint have not reached full opacity) then
9.       Rasterize and add the footprint to the sheet buffer;
10.  End;
11.  If (Hypertextures) perturb the function slice by the hypertexture.
12.  Calculate the gradient.
13.  Classify each pixel using the classification map.
14.  If (3D texture mapping) compute pixel colors.
15.  Apply lighting and shading to obtain the final color.
16.  Composite the sheet "under" the current framebuffer.
17.  Update the occlusion map;
18. End;

The above pseudo code is better illustrated by Figure 1. All voxel sections that fall into the same slice are accumulated together into a sheet buffer. The sheets then undergo a per-pixel classification and are finally composited in front-to-back order. This algorithm needs direct read and write access to the framebuffer.
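To make the control flow concrete, the following C++ sketch outlines one possible organization of the per-sheet loop above. It is a deliberately simplified illustration, not our implementation: the image resolution, the constant footprint extent, the Gaussian footprint and the step transfer function are toy assumptions, and the footprint-size adjustments, gradient estimation and shading steps are omitted for brevity.

#include <cmath>
#include <vector>

struct Voxel { float x, y, z, value; };      // voxel already transformed to image space

static const int W = 64, H = 64;             // toy image resolution (assumption)

void renderView(const std::vector<std::vector<Voxel>>& buckets)  // buckets sorted front to back
{
    std::vector<float> colorBuf(W * H, 0.0f);   // composited intensity
    std::vector<float> occlusion(W * H, 0.0f);  // accumulated opacity (occlusion map)

    for (const auto& bucket : buckets) {        // step 4: sheets, front to back
        std::vector<float> sheet(W * H, 0.0f);  // reconstructed function slice
        for (const Voxel& v : bucket) {         // steps 5-10: rasterize footprints
            const int radius = 2;               // toy footprint extent in pixels
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int px = int(v.x) + dx, py = int(v.y) + dy;
                    if (px < 0 || px >= W || py < 0 || py >= H) continue;
                    if (occlusion[py * W + px] >= 0.99f) continue;   // step 8: occlusion cull
                    float r2 = float(dx * dx + dy * dy);
                    sheet[py * W + px] += v.value * std::exp(-r2);   // toy Gaussian splat
                }
        }
        for (int i = 0; i < W * H; ++i) {       // steps 13-17: classify and composite
            float alpha = sheet[i] > 0.3f ? 0.1f : 0.0f;   // toy step transfer function
            float trans = 1.0f - occlusion[i];             // remaining transparency
            colorBuf[i]  += trans * alpha * sheet[i];      // composite "under" the framebuffer
            occlusion[i] += trans * alpha;                 // update the occlusion map
        }
    }
}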
Hardware rasterization requires such accesses to take place across the system bus, at a high cost. We have therefore developed an optimized software splatter that directly accesses pixel buffers residing in main memory.

3. THE FASTSPLAT PRIMITIVE

The first task in implementing splatting in software is to design an efficient scan-converter for the footprints. Low storage, high image quality, fast rasterization, and support for flexible pixel operations were chosen as our design goals. We examined several alternatives [5] and determined that a single 1D table mapping the radius squared to pre-integrated footprint values works best. This avoids an expensive square root and can be calculated incrementally. Using a 16-bit table size, we were able to guarantee sampling errors smaller than the numerical precision of an 8-bit table. Comparing the performance of these software-only primitives against OpenGL hardware with 2D texture maps, we actually achieved a significant speed-up for splats that project to a few pixels. This is due to the expensive framebuffer reads required for occlusion culling.

3.1 1D FastSplat

For the 1D FastSplats, only the values along a radial line are stored in the footprint. To compute the values for pixels within the splat extent, we calculate the radius from the voxel's center, use it to look up the value in the 1D footprint table, and then composite the resulting value into the pixel. The minimal 1D storage requirements allow for high-resolution footprints. To avoid the expensive square root calculation, we examined footprint tables indexed by the squared radius:

    r_{x,y}^2 = (x - x_o)^2 + (y - y_o)^2        (1)

Here, the splat center is located at (x_o, y_o) and the pixel is at (x, y). Akin to scan-conversion algorithms for 2D shapes using midpoint algorithms, we scan-convert the circle described by (1) incrementally. For a neighboring pixel, (x + 1, y), in the scanline, the squared radius is:

    r_{x+1,y}^2 = r_{x,y}^2 + 2(x - x_o) + 1        (2)

The term 2(x - x_o) can be scan-converted incrementally as well, by adding 2 to it for each pixel stepped through on the scanline. Hence, keeping two terms from the previous pixel's calculation, one can compute the new squared radius for the current pixel with two additions.

Figure 1: Image-aligned sheet-based splatting.

3.2 1D Square FastSplat For General Elliptical Kernels

For many applications, the footprint projection is not always a circle or disk. Voxels on rectilinear grids [15] have elliptical reconstruction kernels. Mao [10] renders unstructured grids by resampling them using a weighted Poisson distribution of sample points and then placing ellipsoidal interpolation kernels at each sample location. It is therefore desirable to have a primitive that can handle elliptical projections. Given a center position (x_o, y_o), the equation of a general ellipse is:

    r_{x,y}^2 = a(x - x_o)^2 + b(y - y_o)^2 + c(x - x_o)(y - y_o)        (3)

All screen points with equal squared radii, r_{x,y}^2, lie on one particular contour of the screen-space ellipse and have the same footprint value. Similar to Section 3.1, we can still incrementally scan-convert these radii.
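Before deriving the elliptical update, the circular case of Section 3.1 can be summarized in code. The following C++ sketch builds a 1D footprint table indexed by squared radius and rasterizes one scanline of a circular splat with the incremental update of Equation (2). The names, the Gaussian kernel and the table resolution are illustrative assumptions, not the exact implementation evaluated in [5].

#include <algorithm>
#include <cmath>
#include <vector>

// Build a 1D footprint table indexed by squared radius. For a radially
// symmetric Gaussian kernel the pre-integrated footprint is again a function
// of the squared screen-space radius, tabulated here as exp(-r^2).
std::vector<float> buildFootprintTable(int entries, float maxR2)
{
    std::vector<float> table(entries);
    for (int i = 0; i < entries; ++i) {
        float r2 = maxR2 * float(i) / float(entries - 1);
        table[i] = std::exp(-r2);              // pre-integrated kernel value
    }
    return table;
}

// Rasterize one scanline of a circular splat centered at (xo, yo) into a
// sheet buffer, using the incremental squared-radius update of Equation (2).
void splatScanline(std::vector<float>& sheet, int width, int y,
                   float xo, float yo, float extent, float weight,
                   const std::vector<float>& table, float maxR2)
{
    int x0 = std::max(0, int(std::floor(xo - extent)));
    int x1 = std::min(width - 1, int(std::ceil(xo + extent)));

    float dy   = float(y)  - yo;
    float dx   = float(x0) - xo;
    float r2   = dx * dx + dy * dy;            // Equation (1) for the first pixel
    float step = 2.0f * dx + 1.0f;             // the term 2(x - xo) + 1

    for (int x = x0; x <= x1; ++x) {
        if (r2 < maxR2) {
            int idx = int(r2 / maxR2 * float(table.size() - 1));
            sheet[y * width + x] += weight * table[idx];   // one table look-up and add
        }
        r2   += step;                          // Equation (2): two additions per pixel
        step += 2.0f;
    }
}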
For the neighboring pixel, (x + 1, y), further along the scanline from the current pixel (x, y), we have:

    r_{x+1,y}^2 = r_{x,y}^2 + 2a(x - x_o) + a + c(y - y_o)        (4)

The term 2a(x - x_o) + a + c(y - y_o) changes incrementally, by 2a, for every consecutive pixel along a scanline. Thus, the complexity of the 1D squared FastSplat does not change for elliptical kernels, and we still need only two additions and one table look-up per pixel on a scanline. However, for each consecutive scanline we need to incrementally update the term c(y - y_o). These calculations also need to be done in floating-point arithmetic to allow for arbitrary values of a and c.

4. A FLEXIBLE SOFTWARE ARCHITECTURE

The main renderer of our system takes as input a set of volume bricks, each containing a sparse list of voxels deemed to have potentially interesting data. For surface models, this may simply be those voxels within a specified distance of the surface. For medical data, all voxels not classified as air or noise may be kept, or separate models containing segmented surfaces may be represented. For computational data, the user can specify in a preprocessing step the numerical range or ranges of interesting values (e.g., all temperatures greater than 1000ºF). The choice of volume models for image synthesis is discussed in more detail in the following section. The rendering algorithm takes the current viewpoint and marches through the voxel bricks in front-to-back order, in step with a front-to-back traversal of thin bounding sheets. An accurate reconstruction of the function is integrated at every pixel in this thin sheet. Having an accurate 2D slice of the function, we can then apply various operations. The two most common operations are to apply some transfer function or optical model to the slice, and to compute the gradient of the slice for lighting calculations. We keep three sheets in memory at any one time to calculate the gradient. More sheets could be retained for additional techniques. Our software thus supports the following three operations:

1. The choice of any radially symmetric reconstruction kernel.
2. Adjustment of each voxel's footprint extent.
3. Per-pixel operations or calculations on each sheet before compositing.

This paper discusses techniques useful in volume synthesis that take advantage of these features. Anti-aliasing and depth-of-field can easily be achieved by adjusting the voxel's footprint extent. While 3D texture mapping can be achieved by simply coloring each voxel, this does not work well with post-classification or for zoomed-in views. For close-ups, the 3D texture becomes excessively blurred. We add support for per-pixel three-dimensional texture mapping, bump mapping and opacity mapping. Hypertextures are simply a more complex form of post-classification in our scheme. For any of these techniques to work successfully, we need a good volume model to work with. The next section describes our general approach to this using good functional reconstruction and surface extraction.

5. VOLUME MODELING AND EXTRACTION

We now have a good reconstruction and an accurate compositing and integration of the underlying function. The question remains as to what function we are trying to reconstruct. Previous volume models focused on extracting surfaces mainly from medical data.
Scientific visualization for the computational sciences maps the underlying data of interest into a density field and typically applies a fuzzy volume model for the integration equation [11], [21]. The choice of function may also depend on the desired visual effect or appearance. If a hard surface, rather than a volume cloud, is desired, then a volume model is needed that includes not only the reconstructed function, but also the mathematical operations used to extract that surface from the functional representation. A functional representation particularly important to scientific visualization is the iso-contour surface. This operation requires a continuous function, and computes the surface or surfaces where the function crosses a specified threshold or contour value. For volume synthesis purposes, the distance function, defined as the signed distance from a geometric surface, provides a good volume model for extracting hard surfaces. Given an exact continuous distance function, the iso-contour surface having a value of zero will reproduce the initial surface. This process ignores the reconstruction operation, assuming an accurate functional model already exists.

Figure 3. Post-classification with different zoom levels. This allows for very sharp surface definition.

Voxelization is the process of rasterizing a continuous surface or object into a discrete voxel model. Previous efforts in voxelizing surfaces initially output binary models, and later output a full alpha channel, blurring the opacity around the surface. Rendering these surfaces again produced very blurred representations when examined in a close-up view. Even the latest paper on using splatting for point-based rendering [19] suffers from this deficiency. The use of hierarchical models with increasing detail is commonly employed to mitigate the blurring.

6. ADJUSTING THE VOXEL'S FOOTPRINT

6.1 Anti-aliasing

For splatting, we can easily incorporate anti-aliasing techniques to pre-filter the data before or during rendering. Since splatting reconstructs the underlying function by centering a reconstruction kernel about each relevant data point, we can approximate this pre-filtering by simply low-pass filtering the reconstruction kernel instead. This spreads the voxel's contribution over a wider range of object space, resulting in a much smoother object-space reconstruction. The low-pass filter is chosen for each slice to ensure that a voxel's contribution is not ignored due to subpixel voxel extents. More details can be found in [23].

6.2 Non-uniform sampling

To handle non-isotropic spacing of the discrete data points, elliptical reconstruction kernels can be used. These lead to elliptical footprint functions that must be rasterized appropriately. The software rasterization process achieves this using only one more addition. See [5] for more details. The splat size can also be adjusted for varying mesh sizes or octrees, as in Laur and Hanrahan [7] or Crawfis and Max [3].

7. PER-PIXEL OPERATIONS

7.1 Per-pixel classification

Each slice contains a small integral of the reconstructed function. After normalizing, this represents the mean value of the function between two adjacent slices. We take this function and apply the transfer function to each pixel, allowing for very sharp detail. Figure 3 illustrates this process as we zoom in extremely close to a tooth in a skeleton data set. Note that the voxel spacing is quite large for this close-up view, resulting in only about 60 voxels across the entire 512-pixel width of the image.
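A minimal sketch of this per-pixel post-classification follows, assuming the sheet holds normalized mean density values in [0,1] and the transfer function is a uniformly sampled RGBA lookup table; both the layout and the names are illustrative assumptions.

#include <vector>

struct RGBA { float r, g, b, a; };

// Apply an RGBA transfer function to every pixel of a reconstructed sheet.
// 'sheet' holds normalized mean density values in [0,1]; 'tf' is a lookup
// table sampled uniformly over that range.
std::vector<RGBA> classifySheet(const std::vector<float>& sheet,
                                const std::vector<RGBA>& tf)
{
    std::vector<RGBA> classified(sheet.size());
    for (std::size_t i = 0; i < sheet.size(); ++i) {
        float d = sheet[i];
        if (d < 0.0f) d = 0.0f;               // clamp to the table's domain
        if (d > 1.0f) d = 1.0f;
        int idx = int(d * float(tf.size() - 1));   // nearest transfer-function entry
        classified[i] = tf[idx];              // classification at image resolution
    }
    return classified;
}

Because the lookup happens on the resampled sheet rather than on the voxels, edges in the transfer function stay sharp at any zoom level.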
7.2 Per-pixel Gradient and Shading calculations

In Mueller et al. [14], we presented three different techniques for calculating the gradient or normal for our image-aligned splatting. The first technique required a voxel model of the gradients of the reconstructed function. Each component of the gradient function was reconstructed using the same splatting approach as the original functional data. A second technique examined derivative filters, where the derivative of the reconstruction kernel is splatted, again for each component of the normal. A final technique was to use simple central differences and calculate the gradient in the projected image space. This is the method chosen for our architecture.

Figure 4. Phong shading.

Figure 5. Glass surfaces with specular highlights representing several iso-contours of a diesel injection simulation.

Once we have three sheets of the function reconstructed, we apply a central-differences operation to calculate gradients at each pixel. This produces very pleasing results, as shown in Figure 4. One problem encountered with this when using post-classification techniques was numerical error for extremely close-up views. If the voxels project to a large screen space, then the difference between adjacent pixels becomes zero or round-off error. To rectify this, we use a central-difference operation with a step size equal to the number of pixels corresponding to the image-space voxel spacing. This is done in the plane of the reconstruction. For the z-component of the gradient, the slices are spaced at a fixed interval, dependent on the voxel spacing. This is usually set to 0.25 units apart, so the numerical integration or accumulation error for close-ups is not a problem. Once we have the appropriate color for each pixel, and the gradient, we can apply lighting and shading. We would like the lighting to occur only on surfaces or material interfaces. This can be accomplished either by calculating a strength volume (or slice), as in [4], or by using a transfer function that takes into account both the value of the function and the gradient in the classification stage [8]. Per-pixel specular and diffuse shading is easily included in any software renderer, provided that accurate normals to the surface can be determined. For the image in Figure 5, one should note the use of specular-shaded glass surfaces. This is readily accomplished with many image-order methods, but is difficult to obtain with most object-order methods such as splatting.

7.3 3D texture mapping

Three-dimensional texture mapping can be accomplished either at each voxel or calculated at each pixel. The former leads to blurring of the texture as we zoom in on the object. The latter is more expensive, requiring a per-pixel transformation back into world space, followed by a possibly expensive evaluation of the texturing function. The image in Figure 6 was generated using view-dependent texture mapping and a marble-like texture. This procedural texture calls Perlin's noise and turbulence functions [16] at each pixel at which it is evaluated. To avoid these expensive calculations, we apply the texture mapping, as well as the lighting and shading, only to those pixels having a non-zero opacity after post-classification. Also, since we use a front-to-back rendering with occlusion culling, we can avoid these operations for pixels, or sample points, that are occluded. This is different from the voxel culling, since a voxel may still contribute to neighboring pixels.
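A minimal sketch of the per-pixel central-difference gradient of Section 7.2 is given below, before returning to the occlusion test for these per-pixel operations. The buffer layout, the helper names and the rounding of the in-plane step are assumptions for the example; the 0.25-unit slice spacing is the typical value quoted above.

#include <algorithm>
#include <vector>

struct Gradient { float x, y, z; };

// Central-difference gradient for one pixel (px, py) of the current sheet,
// using the previous and next sheets for the z-component. 'voxelPixels' is
// the image-space voxel spacing in pixels; the in-plane step is widened to
// that spacing so close-up views do not difference nearly identical values.
// (Units are left in sheet/pixel space for simplicity.)
Gradient pixelGradient(const std::vector<float>& prev,
                       const std::vector<float>& cur,
                       const std::vector<float>& next,
                       int width, int height, int px, int py,
                       float voxelPixels, float sliceSpacing /* e.g. 0.25 */)
{
    int step = std::max(1, int(voxelPixels + 0.5f));   // step size in pixels
    auto at = [&](const std::vector<float>& s, int x, int y) {
        x = std::clamp(x, 0, width - 1);                // clamp at the sheet border
        y = std::clamp(y, 0, height - 1);
        return s[y * width + x];
    };
    Gradient g;
    g.x = (at(cur, px + step, py) - at(cur, px - step, py)) / (2.0f * float(step));
    g.y = (at(cur, px, py + step) - at(cur, px, py - step)) / (2.0f * float(step));
    g.z = (at(next, px, py)       - at(prev, px, py))       / (2.0f * sliceSpacing);
    return g;
}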
We examine the current framebuffer to see whether it is already fully opaque at the desired pixel. If so, we skip the lighting and texturing operations. Care must be taken for hypertextures, as will be discussed below. For Figure 6, based on the position of each pixel on each sheet, we applied a marble-texture generating function [16] to each pixel, producing a marble-textured rendering of the UNC Head. This requires 9.41 seconds of rendering time on a 300MHz Octane, with the procedural 3D texture computed on the fly. For the same view, without 3D texture mapping, rendering with post-classified splatting takes 7.25 seconds. These timings are for a 512 by 512 image. Note that per-pixel texture detail is present in the image.

7.4 Hypertextures

Hypertextures were invented by Perlin [17] to create fascinating shape perturbations. He used a ray-casting approach to render these ill-defined objects, where a rather expensive search for the boundary is required to produce good results. We have incorporated these into our system as well. The basic approach is to perturb the function reconstructed at each slice, again at a per-pixel level. We do this before the classification and shading. Figure 7a shows a single slice of our reconstructed function (contrast enhanced). In Figure 7b we have added some band-limited noise to the function, resulting in a more irregular density cloud. After post-classification (using a simple step function), we have the resulting opacity mask shown in Figure 7c. The gradient of the original function will not work properly for the lighting and shading of this new surface. We use the gradient of the perturbed function, which is still continuous. The final result of all slices composited is shown in Figure 8.

Figure 6. Three-dimensional texture mapping with per-pixel texture calculation. This particular example is view dependent. The texture is computed only for visible sample points.

Figure 7. a) A single slice of a simple function having near radial properties. b) The result of adding band-limited noise to the slice. c) The slice after post-classification with a simple step function.

Occlusion culling can still take place even though we have significantly altered the underlying function. This is due to the front-to-back nature of the algorithm. However, we cannot ignore pixels that would have turned transparent after classification, as we do for the lighting, shading and three-dimensional texture mapping. A pixel that was not even touched may end up being altered to a degree that it now intersects the surface. To handle this, we calculate the range of function values that the classification function deems interesting (non-transparent). We enlarge this domain by the maximum and minimum distortion introduced by the hypertextures. For example, if we were interested in function values in the range (0.3, 0.4) and we added band-limited noise with an amplitude of 0.2, we would have a new range of (0.1, 0.6), assuming the noise function ranges from -1 to +1. Any pixel with a reconstructed function value in this range needs to be considered for the hypertexture operation (i.e., it may be "moved" from an occluded pixel to a non-occluded pixel). Once the hypertexture calculation is finished, we can proceed as before, treating the new function as the one we are interested in.

8. FUTURE WORK

The flexibility in adjusting the voxel's extent allows for non-uniform sample distributions, such as Poisson disk distributions and hierarchical primitives.
Issues dealing with the accuracy of the slice thickness need to be examined for potential aliasing artifacts. We are also currently investigating techniques to adjust the voxel's splat size to accommodate depth-of-field calculations efficiently. Per-pixel operations such as boolean textures, CSG models and even shadows are further areas of research. We are also investigating techniques that would calculate the gradient after the classification.

Figure 8. Applying the hypertexture to a sample dataset.

9. ACKNOWLEDGEMENT

We would like to acknowledge funding for this work from the DOE ASCI program and an NSF Career Award received by Dr. Roger Crawfis. The GRIS group at the University of Tuebingen, Germany, provided us with the diesel injection simulation data set.

10. REFERENCES

1. Cabral, B., N. Cam, and J. Foran, Accelerated Volume Rendering and Tomographic Reconstruction Using Texture Mapping Hardware, in 1994 Symposium on Volume Visualization, A. Kaufman and W. Krueger, Editors. October 1994, ACM SIGGRAPH. p. 91-98.
2. Chen, B., F. Dachille, and A. Kaufman, Forward Image Mapping, in Visualization 1999. 1999, San Francisco, California: IEEE Computer Society Press.
3. Crawfis, R. and N. Max, Texture Splats for 3D Vector and Scalar Field Visualization, in Visualization '93. 1993, Los Alamitos, CA: IEEE Computer Society Press.
4. Drebin, R.A., L. Carpenter, and P. Hanrahan, Volume Rendering, in Computer Graphics (Proceedings of SIGGRAPH 88), J. Dill, Editor. August 1988: Atlanta, Georgia. p. 65-74.
5. Huang, J., et al., FastSplats: Optimized Splatting on Rectilinear Grids, in Visualization 2000. 2000, Salt Lake City, Utah: IEEE Computer Society Press.
6. Kajiya, J.T. and B.P. Von Herzen, Ray Tracing Volume Densities, in Computer Graphics (Proceedings of SIGGRAPH 84), H. Christiansen, Editor. July 1984: Minneapolis, Minnesota. p. 165-174.
7. Laur, D. and P. Hanrahan, Hierarchical Splatting: A Progressive Refinement Algorithm for Volume Rendering. Computer Graphics, 1991. 25(4): p. 285-288.
8. Levoy, M., Efficient Ray Tracing of Volume Data. ACM Transactions on Graphics, July 1990. 9(3): p. 245-261.
9. Levoy, M. and T. Whitted, The Use of Points as a Display Primitive. 1985, Computer Science Dept., University of North Carolina at Chapel Hill.
10. Mao, X., Splatting of Non Rectilinear Volumes Through Stochastic Resampling. IEEE Transactions on Visualization and Computer Graphics, June 1996. 2(2).
11. Max, N., Optical Models for Direct Volume Rendering. IEEE Transactions on Visualization and Computer Graphics, June 1995. 1(2): p. 99-108.
12. Meißner, M., et al., A Practical Comparison of Popular Volume Rendering Algorithms, in Volume Visualization and Graphics Symposium. 2000, Salt Lake City, Utah: ACM Press.
13. Mueller, K. and R. Crawfis, Eliminating Popping Artifacts in Sheet Buffer-Based Splatting, in IEEE Visualization '98, H. Rushmeier, Editor. October 1998, IEEE. p. 239-246.
14. Mueller, K., T. Moeller, and R. Crawfis, Splatting Without The Blur, in IEEE Visualization '99, B. Hamann, Editor. October 1999, IEEE: San Francisco, California. p. 363-370.
15. Mueller, K., et al., High-Quality Splatting on Rectilinear Grids with Efficient Culling of Occluded Voxels. IEEE Transactions on Visualization and Computer Graphics, April-June 1999. 5(2): p. 116-134.
16. Perlin, K., An Image Synthesizer, in Computer Graphics (Proceedings of SIGGRAPH 85), B.A. Barsky, Editor. July 1985: San Francisco, California. p. 287-296.
17. Perlin, K. and E.M. Hoffert, Hypertexture, in Computer Graphics (Proceedings of SIGGRAPH 89), J. Lane, Editor. July 1989: Boston, Massachusetts. p. 253-262.
18. Pfister, H., et al., The VolumePro Real-Time Ray-casting System, in Proceedings of SIGGRAPH 99, A. Rockwood, Editor. August 1999, ACM SIGGRAPH / Addison Wesley Longman: Los Angeles, California. p. 251-260.
19. Pfister, H., et al., Surfels: Surface Elements as Rendering Primitives, in Proceedings of SIGGRAPH 2000, K. Akeley, Editor. July 2000, ACM Press / ACM SIGGRAPH / Addison Wesley Longman. p. 335-342.
20. Rusinkiewicz, S. and M. Levoy, QSplat: A Multiresolution Point Rendering System for Large Meshes, in Proceedings of SIGGRAPH 2000, K. Akeley, Editor. July 2000, ACM Press / ACM SIGGRAPH / Addison Wesley Longman. p. 343-352.
21. Sabella, P., A Rendering Algorithm for Visualizing 3D Scalar Fields, in Computer Graphics (Proceedings of SIGGRAPH 88), J. Dill, Editor. August 1988: Atlanta, Georgia. p. 51-58.
22. Shade, J.W., et al., Layered Depth Images, in Proceedings of SIGGRAPH 98, M. Cohen, Editor. July 1998, ACM SIGGRAPH / Addison Wesley: Orlando, Florida. p. 231-242.
23. Swan, J.E. II, et al., An Anti-Aliasing Technique for Splatting, in IEEE Visualization '97. 1997, Phoenix, AZ. p. 197-204.
24. Westover, L., Interactive Volume Rendering, in Chapel Hill Workshop on Volume Visualization. 1989, Chapel Hill, NC: Department of Computer Science, University of North Carolina.
25. Westover, L., Footprint Evaluation for Volume Rendering, in Computer Graphics (Proceedings of SIGGRAPH 90), F. Baskett, Editor. August 1990: Dallas, Texas. p. 367-376.
26. Wilhelms, J. and A. Van Gelder, A Coherent Projection Approach for Direct Volume Rendering, in Computer Graphics (Proceedings of SIGGRAPH 91), T.W. Sederberg, Editor. July 1991: Las Vegas, Nevada. p. 275-284.
27. Zhang, H., et al., Visibility Culling Using Hierarchical Occlusion Maps, in Proceedings of SIGGRAPH 97, T. Whitted, Editor. August 1997, ACM SIGGRAPH / Addison Wesley: Los Angeles, California. p. 77-88.