EUROGRAPHICS 2012 / P. Cignoni, T. Ertl (Guest Editors) Volume 31 (2012), Number 2

Stochastic Progressive Photon Mapping for Dynamic Scenes

Maayan Weiss    Thorsten Grosch
University of Magdeburg, Germany

Figure 1: Dynamic Stochastic Progressive Photon Mapping allows the computation of animations containing difficult light paths, like the caustics visible from the camera through an animated water surface. In contrast to an SPPM simulation for each frame, we obtain a speedup of 3.5.

Abstract
Stochastic Progressive Photon Mapping (SPPM) is a method to simulate consistent global illumination. It is especially useful for complicated light paths like caustics seen through a glass surface. Up to now, SPPM could only be applied to static scenes, and noise-free images require hours to compute. Our approach extends this method to dynamic scenes (DSPPM) for an efficient simulation of animated objects and materials. We identify both hit point and photon information that can be re-used for the pixel statistics of multiple frames. In comparison to an SPPM simulation performed for each frame, we achieve a 1.96 - 9.53 speedup in our test scenes without changing correctness or simulation quality.

This is the authors' preprint. The definitive version of the paper is available at http://diglib.eg.org and http://onlinelibrary.wiley.com

© 2012 The Author(s). Computer Graphics Forum © 2012 The Eurographics Association and Blackwell Publishing Ltd.

1. Introduction
Computing the correct solution of the rendering equation is difficult if specular-diffuse-specular (SDS) light paths are included, like caustics seen through a glass surface. Recently, Progressive Photon Mapping (PPM) [HOJ08] and Stochastic Progressive Photon Mapping (SPPM) [HJ09] were presented as methods to correctly simulate such complicated SDS light paths. However, computing a single noise-free image is still time-consuming. Especially for animations, the computation
time grows linearly with the number of frames if a consistent simulation result is required. Our approach is to extend SPPM to animated scenes that contain arbitrary animated objects and materials. We identify the hit point and photon information which is not affected by a dynamic object and re-use this information for multiple frames. In this way, we do not influence the quality or correctness of each frame and keep the temporal coherence. One example is shown in Fig. 1, where a drop of water is simulated in 25 frames with a speedup of 3.5. An application for our method is virtual prototyping, where different geometry and material variations can be applied in a physically correct rendering.

2. Previous Work
To reduce the computation cost for animations with global illumination, different solutions exist. For Radiosity simulations, incremental Radiosity was introduced [Che90] to simulate only the differences in animated scenes. Further improvements for Hierarchical Radiosity were developed by Drettakis and Sillion [DS97]. For animation rendering, Damez et al. [DS99] introduced space-time hierarchical radiosity. For Ray Tracing and Path Tracing, reprojection techniques can be used to re-use information for multiple frames [HDMS03]. Sbert et al. [SCH04] showed that light paths can be re-used between frames, allowing animations with moving light sources. Many other techniques were developed for ray tracing animations; an overview can be found in Wald et al. [WMG∗09]. In interactive applications, reprojection techniques can be applied to re-use pixel colors [WDP99] or shading values [TPWG02]. An overview of these methods is given in [SYM∗11].
For animation rendering, perceptual rendering can be used to create approximate in-between frames with an imperceivable error for the human visual system [MTAS01] [DDM03]. This is used for video compression, e.g. MPEG [HKMS08]. The main difference to our method is that we do not display a visually acceptable approximation for selected in-between frames, but compute a correct simulation in each frame. A simple method for Photon Mapping in dynamic scenes was developed by Jensen et al. [Jen01]: Each photon stores the seed value for the random numbers that generate the photon path. Using the same random numbers before and after the object movement results in many identical photon paths. This significantly reduces the flickering in the animation. Our method uses identical paths for many frames as well, but static paths are simulated only once. Similar ideas were used for final gathering of a photon mapping animation by Tawara et al. [TMS02]. To simulate interactive Photon Mapping in dynamic scenes, Dmitriev et al. [DBMS02] invented Selective Photon Tracing. In this method, quasi-Monte Carlo random numbers are used to distribute a fixed number of photons. First, a set of incoherent, so-called pilot photons is generated. For all pilot photons which hit the dynamic object, corrective photons are generated in similar directions. This effectively sorts the photon paths by their visual importance and displays a good preview image early. Guenther et al. [GWS04] used this method to quickly detect caustic paths; a SIMD implementation was developed by Baerz et al. [BAM08]. In contrast, our method is not an interactive system; the dynamic objects move on a predefined path. Instead, DSPPM allows an infinite number of photons without the biasing problems due to the fixed-size search region introduced by density estimation in standard Photon Mapping.

3. Basics

3.1. Photon Mapping
Photon Mapping [Jen96] is a two-pass method to solve the rendering equation [Kaj86].
In the first step, a number of photons is emitted from the light sources, where each photon stores an equal portion of the total emitted flux. The photons are distributed through the scene, using Russian Roulette, and finally stored on diffuse surfaces. In the second step, the radiance for each pixel can be determined by accumulating the flux of all photons found inside a search region around the pixel position. Due to the fixed search radius, Photon Mapping is a biased technique that effectively blurs the real radiance values.

3.2. Stochastic Progressive Photon Mapping
To remove the limitation of having a finite number of photons inside a fixed search radius, Progressive Photon Mapping (PPM) [HOJ08] extends this idea to progressive photon statistics. Instead of storing the original photons, only the number and the total flux inside the search region are stored. In a first Photon Mapping simulation, N photons are detected inside a circular region with radius R. In a second simulation, M photons are found, but only a fraction α ∈ (0, 1) of these new photons is used. Assuming a uniform photon distribution, the search radius can be decreased while the total number of photons increases. In Stochastic Progressive Photon Mapping (SPPM) [HJ09], this concept is further extended from a single point \vec{x} to an arbitrary search region S. After each Photon Mapping iteration, a different path is generated through a random position of each pixel to generate new hit points. In this way, the average radiance of the pixel can be computed. As shown in [HJ09], the pixel statistics can be updated as follows:

N_{i+1}(S) = N_i(S) + \alpha M_i(\vec{x})    (1)

R_{i+1}(S) = R_i(S) \sqrt{\frac{N_i(S) + \alpha M_i(\vec{x})}{N_i(S) + M_i(\vec{x})}}    (2)

\Phi_i(\vec{x}, \vec{\omega}) = \sum_{p=1}^{M_i(\vec{x})} f_r(\vec{x}, \vec{\omega}, \vec{\omega}_p) \, \Phi_p(\vec{x}_p, \vec{\omega}_p)    (3)

\tau_{i+1}(S, \vec{\omega}) = \left( \tau_i(S, \vec{\omega}) + \Phi_i(\vec{x}, \vec{\omega}) \right) \frac{R_{i+1}(S)^2}{R_i(S)^2}    (4)

where i is the current iteration and R_i is the current search
radius. Each photon stores a radiant flux \Phi_p; the total flux inside the search radius is \Phi_i, which is obtained by summing all photon flux values, weighted by the BRDF f_r. The BRDF-weighted flux in a reduced search radius for the next iteration is \tau_{i+1}. The final radiance L for the pixel can be obtained by normalizing with the total number of emitted photons N_e:

L(S, \vec{\omega}) = \lim_{i \to \infty} \frac{\tau_i(S, \vec{\omega})}{N_e(i) \, \pi R_i(S)^2}    (5)

4. Our Approach
We compute an animation consisting of K images where each image displays the same rendering result as SPPM. Supported animation types are objects moving on a predefined path, and material/texture changes. We first load both the static scene and the dynamic geometry for all K frames into main memory. The simulation is then performed on the complete scene. By re-using photon and hit point information for multiple frames, the computation time can be reduced. Similar to SPPM, our method consists of the following repeating steps:

1. Hit Point Generation
2. Photon Simulation
3. Pixel Statistics Update

Since we generate multiple frames, the required modifications for each step are described in the following subsections. To differentiate the static and dynamic parts, dynamic objects are tagged with their frame number (j = 1..K) while static geometry is set to zero.

4.1. Creating the Hit Points
As in SPPM, we create one hit point for each pixel by shooting a ray through a random position inside the pixel and then following the path using Russian Roulette until a diffuse surface is found. The basic idea is to minimize the number of hit point generations by re-using each hit point for as many frames as possible.

Static Path: The easiest case occurs if we hit only static objects along the path, because the resulting hit point can be re-used for all frames. Fig. 2 shows an example with five pixels (A - E) and two dynamic objects in K = 3 frames.
Figure 2: Creating the hit points for an animation with three frames. Two dynamic objects are used; the upper one is made of glass (blue), the lower one is diffuse (grey). Sec. 4.1 describes the different cases.

The paths of the pixels A and B contain only static geometry, so their hit points can be used for all frames.

Dynamic Object Hit: If a path hits a dynamic object belonging to frame j, we have to generate at least two hit points: one that is assigned to frame j and one (or more) for all other frames. To determine the hit point for frame j, we follow the reflected/refracted path until a diffuse surface is found. Since this path is assigned to frame j, we compute only intersections with objects belonging to frame j or static geometry (frame 0). To determine the hit point(s) for all other frames, a second path is traced in the original direction and traversed until a diffuse surface is found. For this path, all intersections with objects belonging to frame j are ignored. In the example in Fig. 2, the path for pixel C hits a dynamic object in frame 1. The refracted path is therefore assigned to frame 1 and the hit point is finally stored on the diffuse dynamic object in frame 1. The transmitted path (dashed line) ignores the intersection on the back of the dynamic object and hits static geometry. Here, a hit point for frames 2 and 3 is generated, indicated by 1. The path for pixel D directly hits a diffuse dynamic object in frame 2, so the hit point is stored there. The transmitted path then hits static geometry, so a hit point for frames 1 and 3 is generated.

Multiple Dynamic Object Hits: Following a path, we may hit multiple dynamic objects belonging to different frame numbers.
In this case, the transmitted path has to keep track of all frame numbers that have to be ignored for further intersection tests. We therefore use an exclusion bitmask where each bit describes a frame number. After hitting a dynamic object belonging to frame j, we set bit j in the exclusion bitmask for the transmitted path. This indicates that all further intersection points with objects belonging to frame j should be ignored. The example in Fig. 2 shows the use of the exclusion bits for pixel E. The path first hits an object in frame 2. The refracted path is assigned to frame 2 and hits static geometry, where a hit point for frame 2 is stored. The transmitted path adds frame 2 to the exclusion bits. Therefore we ignore the intersection with the backside and hit an object belonging to frame 3. Now the refracted path is assigned to frame 3 and the hit point for frame 3 is finally stored on static geometry. The transmitted path now adds frame 3 to the exclusion bits, so both frames 2 and 3 are ignored for further intersection tests. Therefore, the following intersections with objects belonging to frame 3 are ignored and the hit point is finally generated on static geometry. Since the exclusion bits 2 and 3 are set (indicated by 23), this hit point is only valid for frame 1.

Hitpoint Copies: Each hit point can either be static (frame 0), belong to exactly one frame number j, or be excluded from one or more frames. Since a hit point stores an individual radius and a flux value for each frame, we create a copy of each hit point for all frames that it belongs to. For a static hit point, this means that K identical hit points are created. If a hit point contains exclusion bits, a copy is created for all frames which are not set in the exclusion bitmask. If a hit point is assigned to a frame number, no further copies are generated.
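The tagging rules above can be sketched as follows. This is a minimal Python sketch under our own naming, not the authors' implementation; for clarity we keep the assigned frame and the exclusion mask as separate fields, while the paper overlays both in the same bits. Bit j-1 of the mask represents frame j:

```python
STATIC = 0  # tag value for static geometry (frame 0)

def set_exclusion(mask: int, frame_j: int) -> int:
    """After passing through a dynamic object of frame j, exclude j."""
    return mask | (1 << (frame_j - 1))

def valid_for_frame(tag: int, exclusion_mask: int, frame_j: int) -> bool:
    """A hit point or photon is usable for frame j if it is static and
    does not exclude j, or if it is assigned to j."""
    if tag == STATIC:
        return not (exclusion_mask & (1 << (frame_j - 1)))
    return tag == frame_j

def frames_for_copies(tag: int, exclusion_mask: int, K: int) -> list[int]:
    """Frames for which a hit-point copy is created (Sec. 4.1):
    exactly one copy for an assigned hit point, otherwise one copy
    per frame that is not excluded."""
    if tag != STATIC:
        return [tag]
    return [j for j in range(1, K + 1)
            if not (exclusion_mask & (1 << (j - 1)))]
```

For the hit point of pixel E in Fig. 2, which carries exclusion bits 2 and 3, `frames_for_copies(0, 0b110, 3)` yields `[1]`; a fully static hit point yields all K frames.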
In this way, each pixel stores K hit points, one for each frame. In the example in Fig. 2, three hit points are generated for each pixel.

4.2. Simulating the Photons
The process for photon simulation is based on a concept similar to the one used for the hit point creation. During photon tracing, we can hit the static geometry or one of the dynamic objects. The photon starts with a frame number of 0. If we hit a dynamic object belonging to frame j, the photon path is split: one photon path is assigned to frame j and reflected or refracted at the dynamic object. The second photon path passes through the dynamic object and ignores frame j for further intersection tests. Again, the frame number or exclusion bits are stored in the photon to indicate which frame the photon belongs to or which frames should be ignored. Fig. 3 shows an example for K = 3 frames. If all exclusion bits are set, the photon path is stopped since the photon no longer belongs to any frame.

4.3. Updating the Pixel Statistics
After the photon simulation, Eq. 1 to 4 have to be updated to obtain the pixel statistics for all hit points. If we collect the photons inside the current search radius, we will find static photons, photons belonging to a certain frame, and photons which are excluded from one or more frames. To select the valid photons for a hit point belonging to frame j, we only accept static photons, photons assigned to frame j, and photons that do not contain exclusion bit j. Fig. 4 shows an example for collected photons, given K = 3 hit points for a pixel. To avoid the time-consuming kd-Tree traversal for all K hit points belonging to the same pixel, we first group the hit points by their 3D position. For all hit points belonging to the same 3D position, we determine the maximum radius and use this to search the neighboring photons.
Afterwards, we loop through all found photons and assign each photon to a hit point if the distance to the query point is smaller than the search radius of the hit point. This results in a significant speedup since most of the pixels are static during the whole sequence, or at least parts of the sequence, and the search radius of the hit points is often similar. In our test scenes, updating the pixel statistics runs roughly two to four times faster if the grouping of hit points is enabled. Fig. 5 shows a simple example for this optimization with three hit points. Afterwards, Eq. 5 can be applied to display the pixel radiance and the whole process can be repeated.

Figure 3: Photon tracing example with three frames: Path A is not affected by the dynamic object, so the photons can be used for all frames. Path B hits the dynamic object in frame 3 and is split in two sub-paths. The reflected path is assigned to frame 3 and ignores intersections with objects belonging to other frames (frame 1). The transmitted path sets exclusion bit 3 and ignores objects belonging to frame 3 as intersection. When the transmitted path hits the object in frame 2, it is split again. The reflected path is assigned to frame 2 and the transmitted path adds 2 to the exclusion bits; therefore both frames 2 and 3 are ignored for further intersections.

Figure 4: Collecting the photons for three hit points, assigned to three different frames. Each hit point has a radius and the collected photons inside the radius are compared to the frame number. Only static photons (tagged with 0), photons belonging to the frame, or photons without the frame number in the exclusion bits are selected. Selected photons are drawn in the hit point color.

5. Implementation Details
The scene consists of static geometry and dynamic objects for K frames.
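For a single hit-point copy, the per-frame photon selection of Sec. 4.3 combined with the update of Eqs. 1, 2 and 4 might look as follows. This is an illustrative Python sketch under our own naming, not the paper's code; α = 0.7 is a typical choice in the SPPM literature, not a value stated by the authors:

```python
import math
from dataclasses import dataclass

ALPHA = 0.7  # fraction of new photons kept per iteration (typical choice)

@dataclass
class HitPoint:
    frame: int          # frame j this copy belongs to
    radius: float       # current search radius R_i
    n: float = 0.0      # accumulated photon count N_i
    tau: float = 0.0    # accumulated BRDF-weighted flux tau_i

def photon_valid(tag: int, mask: int, frame_j: int) -> bool:
    """Sec. 4.3: accept static photons (tag 0) without exclusion bit j,
    and photons assigned to frame j."""
    if tag == 0:
        return not (mask & (1 << (frame_j - 1)))
    return tag == frame_j

def update_statistics(hp: HitPoint, photons: list) -> None:
    """photons: (tag, mask, flux) tuples found inside hp.radius, where
    flux is already BRDF-weighted, so summing them gives Eq. 3."""
    accepted = [flux for tag, mask, flux in photons
                if photon_valid(tag, mask, hp.frame)]
    m = len(accepted)
    if m == 0:
        return
    phi = sum(accepted)                                  # Eq. 3
    n_new = hp.n + ALPHA * m                             # Eq. 1
    r_new = hp.radius * math.sqrt(n_new / (hp.n + m))    # Eq. 2
    hp.tau = (hp.tau + phi) * (r_new / hp.radius) ** 2   # Eq. 4
    hp.n, hp.radius = n_new, r_new

def radiance(hp: HitPoint, n_emitted: float) -> float:
    """Eq. 5 for a finite number of emitted photons."""
    return hp.tau / (n_emitted * math.pi * hp.radius ** 2)
```

A hit point belonging to frame 2, for example, rejects a photon assigned to frame 3 as well as a static photon whose exclusion mask contains bit 2, exactly as in Fig. 4.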
We put both static and dynamic geometry in a single bounding volume hierarchy and use an interval to indicate which frames the subtree below the current node belongs to (see Fig. 6). The static geometry is tagged with 0; leaf nodes belonging to dynamic objects are tagged with their frame number. Internal nodes store an interval [a, b] that indicates that dynamic geometry belonging to frames a to b is below the current node. The ray intersection test can therefore terminate early, e.g. if the assigned frame number is outside this interval. Using such a hierarchy improves the computation time for the generation of hit point and photon paths. In our test scenes, both steps were between 35% and 60% faster in comparison to a simple implementation that uses only the frame number in the leaf triangles.

Figure 5: Example for an optimized photon search: In frames 1 and 2, the hit point is at the same (static) position, but with a different search radius. In frame 3, a dynamic object is visible and the position changes. Only one kd-Tree traversal is computed for the first two hit points, using the maximum radius of the two (in this example R1, shown in green). The list of the included photons is then traversed and all photons with a distance smaller than R2 are used for the hit point for frame 2 (light blue photons). Hit point 3 uses a standard kd-Tree traversal to detect its photons.

Figure 7: Starting the transmitted path directly at the intersection point can lead to missing geometry: In this example the path hits the object in frame 1, but the intersection point with the object in frame 2 is at the same 3D position.
To avoid numerical precision problems, the ray is re-started at the origin (tagged with 0 in this example) in combination with exclusion bits to indicate which frames should be ignored. Thus frame 1 is ignored and we reliably hit frame 2.

Figure 6: The scene hierarchy is organized in a static and a dynamic part. The nodes above the dynamic geometry contain an interval describing the frames in the subtree below. In this way, rays can terminate early if the frame number does not match. The image on the right shows an example with all dynamic objects in a static scene.

Both the photons and the hit points can either be assigned to a frame number or contain one or more exclusion bits. Since both cases do not appear at the same time, we use the same memory for both. In our implementation we store 32 bits for this information in each photon, allowing a maximum of K = 32 frames. Increasing K increases the amount of memory for a photon and reduces the number of photons that can be simulated in one iteration. Since SPPM allows the simulation of photons in an arbitrary number of iterations, this does not impose a limitation on our method. However, for animated geometry, care must be taken that the geometry of all frames still fits into main memory. If the whole animation exceeds main memory, it can simply be split into smaller intervals that are computed individually.

Whenever a ray hits a dynamic object, two paths are traced: the reflected path, which is assigned to one frame, and the transmitted path, which excludes the frame. Restarting the transmitted path directly at the intersection point can lead to missing geometry if two animated objects are at the same position, as shown in Fig. 7. To avoid this, we restart the ray at its starting position and use the exclusion bits to indicate that the currently hit frame should be ignored for further intersection tests.
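This restart strategy can be illustrated with a small sketch. The `trace` interface and all names here are hypothetical stand-ins for the real BVH intersection routine described above:

```python
# Illustrative sketch of the transmitted-ray restart (Sec. 5, Fig. 7).
# Instead of offsetting the ray a small epsilon past the hit point, the
# ray is re-traced from its origin with an enlarged exclusion mask, so
# coincident objects of other frames cannot be missed.

def transmit(origin, direction, hit_frame, mask, trace):
    """Continue a transmitted path after hitting a dynamic object of
    frame `hit_frame`: exclude that frame and re-trace from the origin."""
    new_mask = mask | (1 << (hit_frame - 1))
    return trace(origin, direction, new_mask), new_mask

def make_trace(hits):
    """Toy intersection routine over precomputed (t, frame) candidates;
    frame 0 is static and can never be excluded."""
    def trace(origin, direction, mask):
        for t, frame in sorted(hits):
            if frame == 0 or not (mask & (1 << (frame - 1))):
                return (t, frame)
        return None
    return trace
```

For the configuration of Fig. 7 (objects of frames 1 and 2 at the same position, static geometry behind them), the restarted ray after hitting frame 1 returns the coincident frame-2 hit instead of skipping both surfaces.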
We found this a more reliable alternative to a starting point that is moved only a small step backwards, which can introduce ghosting artifacts.

6. Results and Discussion
We evaluate our method using a set of test scenes. All our measurements were performed on a Windows 7 PC with an AMD Phenom II X4 965 processor running at 3.4 GHz, 6 GB RAM, and an NVIDIA GTX 285 graphics card. All images are rendered at 512² resolution. Since many hit points and photons are shared among several frames, we obtain a speedup in comparison to the computation of SPPM for each frame. This speedup depends on many factors: First, we measure how the number of frames influences our method in the test scene shown in Fig. 8, where a glass horse moves through the Sibenik cathedral (see also Fig. 6). By deleting some intermediate frames, we vary the number of frames between K = 8 and K = 32 and measure the speedup for the individual steps of our algorithm. The resulting timing values are shown in Table 1. Overall, the measured speedup monotonically increases from 5.09 to 9.34 when increasing the number of frames. The memory requirements are shown in Table 2.

                 SPPM    D8      D16     D24     D32
  Memory (MB)    245     282     479     678     874
  Polygons       88K     155K    231K    307K    383K

Table 2: The total amount of memory required for DSPPM, measured for the glass horse scene (in MB). The memory increases linearly with the number of frames. The Sibenik cathedral contains 79K polygons; one frame of the animated horse contains 9.5K polygons.

Figure 8: Two frames of the glass horse animation that was used to measure how the number of frames influences the speedup.
  Step           SPPM     D8             D16            D24             D32
  Hit Points     2.01     4.15 (3.84)    6.09 (5.28)    7.98 (6.05)     10.29 (6.25)
  Photons        6.52     8.59 (6.07)    10.57 (9.84)   12.78 (12.24)   14.60 (14.29)
  kd-Tree        0.98     1.50 (5.23)    1.65 (9.50)    1.66 (14.17)    1.73 (18.13)
  Pixel Stat.    0.77     1.79 (3.44)    3.14 (3.92)    4.91 (3.76)     7.84 (3.14)
  Total Time     10.31    16.21 (5.09)   21.93 (7.52)   27.94 (8.86)    35.31 (9.34)

Table 1: Average timings for one iteration of the glass horse scene. In each of the 2600 iterations, 300K photons were emitted. The number of frames varies from 8 to 32. Each row shows the time in seconds for the individual steps; the speedup in comparison to SPPM is given in parentheses (e.g. the speedup for the hit point distribution with 8 frames is 3.84 = 8 × 2.01 / 4.15). Both photon and hit point distribution improve when increasing the number of frames. Building one kd-Tree for photons belonging to the whole animation is also faster than building K individual kd-Trees. Due to the optimization described in Sec. 4.3, we also obtain a roughly constant speedup for updating the pixel statistics. The total time also includes a small constant overhead, e.g. for displaying the image.

In addition to the number of frames, the speedup we achieve mainly depends on the number of dynamic photons and the number of dynamic pixels in the animation. The intuition behind our method is that information can be shared because both a significant amount of photons and hit points remain static. To measure the influence, we generated a set of test scenes where we vary both the probability that a photon path intersects a dynamic object as well as the probability for a dynamic hit point path. The test scenes are shown in Fig. 9. To increase the probability for a dynamic photon path, two spotlights are directed towards the horse. The flux of the two spotlights is varied to achieve a desired percentage of dynamic photon paths. To increase the probability of a dynamic hit point path, the camera is moved closer towards the horse.
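As a sanity check, the speedup entries of Table 1 are simply the cost of K per-frame SPPM iterations divided by the cost of one DSPPM iteration covering all K frames; the values below reproduce the table up to the two-digit rounding of the published timings:

```python
# Speedup definition used in Table 1: SPPM pays its per-iteration cost
# t_sppm once per frame, while DSPPM pays t_dsppm once for all K frames.
def speedup(K: int, t_sppm: float, t_dsppm: float) -> float:
    return K * t_sppm / t_dsppm

# Total time, 8 frames:  8 * 10.31 / 16.21 ≈ 5.09
# Total time, 32 frames: 32 * 10.31 / 35.31 ≈ 9.34
# kd-Tree, 16 frames:    16 * 0.98 / 1.65  ≈ 9.50
```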
As expected, the speedup increases when one of the two probabilities decreases. In our test cases, the speedup varies from 2.48 to 9.53.

Figure 9: The vertical axis shows the probability of a dynamic photon path. The horizontal axis shows the probability of a dynamic hit point path. The achieved speedup for all nine test scenes is shown in the insets.

In Fig. 10 we show one example scene which was created based on a scene used in [HOJ08]. In contrast to the running horse, this scene contains a facetted ball which rotates at a fixed position. In addition to moving objects, material changes are allowed in an animation. Fig. 11 shows an animation where an object changes its color during the animation. Table 3 summarizes the details of our test scenes. The accompanying video contains the animations that we used. Since DSPPM only re-uses valid photons or hit points for multiple frames, the resulting images converge to the same result as SPPM, as shown in Fig. 12.

Figure 10: In the Disco ball animation we use a rotating, facetted mirror ball which introduces a moving caustics pattern in the whole scene. Additionally, the glass ball moves and deforms.

                        Disco           Water           Material
  Num. Frames           24              25              16
  Iterations            5200            1350            2050
  Emitted Photons       1560M           390M            615M
  Dyn. Photons          8.8%            3%              10.8%
  Dyn. Hit Points       10%             15.5%           30.2%
  Hit Points            27.96 (1.07)    58.04 (1.51)    44.77 (0.87)
  Photons               57.89 (8.58)    38.73 (5.19)    30.97 (1.58)
  kd-Tree               1.95 (9.48)     2.20 (16.59)    2.68 (9.67)
  Pixel Stat.           7.27 (3.90)     6.31 (4.91)     4.43 (6.82)
  Total Time            97.62 (6.07)    107.79 (3.50)   83.83 (1.96)

Table 3: Statistics for the different scenes. The timing values are given in seconds for one iteration; the speedup in comparison to SPPM is given in parentheses.
In addition, we achieve temporal coherence and avoid possible flickering by re-using the photons and hit points for multiple frames.

Figure 11: Two frames of an animation with changing materials and depth-of-field.

At present, our method does not support dynamic light sources or a dynamic camera. However, if a scene contains multiple light sources where only a few are dynamic, our method can be applied for the static lights. The photons of the dynamic lights can then be simulated frame-by-frame and added to the existing photons in the shared hit points. In case of a camera animation, no hit points can be shared between the frames, but the photon simulation for multiple frames can still be applied.

A recent alternative to SPPM was presented by Knaus and Zwicker [KZ11] that avoids the storage of pixel statistics. Here, the shrinking process of the hit point radius is independent of the number of collected photons, resulting in an identical radius for all hit points. Afterwards, the pixel radiance can simply be computed as the average pixel radiance of all computed photon mapping simulations. Our presented re-use of both hit point and photon information can be directly applied here as well. Since all hit points have the same radius, hit points with identical position contain the same photons. An explicit comparison with the radius of each hit point, as introduced in Sec. 4.3, is therefore no longer necessary.

7. Conclusion and Future Work
We presented DSPPM, the first method to extend Stochastic Progressive Photon Mapping to dynamic scenes. Given a pre-defined animation of objects and materials, we detect both photon and hit point information that can be used for the pixel statistics of multiple frames. Furthermore, this results in a temporally coherent animation. Our method is easy to implement and keeps the consistent SPPM result for each frame, allowing a predictive rendering of variations of a scene. The speedup achieved in our test scenes ranges from 1.96 to 9.53.
Figure 12: One frame of an animation (left) in comparison to SPPM (right). Both images were created with 2670M emitted photons.

One limitation of our method is that motion blur is not supported, so we consider such an extension as future work. Furthermore, we will investigate animated light sources and participating media. A parallel implementation using the GPU to further improve the performance is another interesting avenue of future research.

8. Acknowledgements
The water simulation was created with Blender. The animated horse model was made available by Robert Sumner and Jovan Popovic. The Sibenik model was created by Marko Dabrovic. The Otto-von-Guericke head is courtesy of the University of Magdeburg. The clock model is by Toshiya Hachisuka. We thank Wito Engelke, Alexander Kuhn, Maik Schulze and the anonymous reviewers for their valuable comments.

References

[BAM08] Baerz J., Abert O., Mueller S.: Interactive particle tracing in dynamic scenes consisting of NURBS surfaces. IEEE Symposium on Interactive Ray Tracing (2008), 139–146.

[Che90] Chen S. E.: Incremental Radiosity: An Extension of Progressive Radiosity to an Interactive Image Synthesis System. In ACM SIGGRAPH Computer Graphics (1990), vol. 24, ACM Press, pp. 135–144.

[DBMS02] Dmitriev K., Brabec S., Myszkowski K., Seidel H.-P.: Interactive Global Illumination Using Selective Photon Tracing. Proceedings of the 13th Eurographics Workshop on Rendering (2002), 25–36.

[DDM03] Damez C., Dmitriev K., Myszkowski K.: State of the Art in Global Illumination for Interactive Applications and High-Quality Animations. Computer Graphics Forum 22, 1 (2003), 55–77.

[DS97] Drettakis G., Sillion F. X.: Interactive update of global illumination using a line-space hierarchy. Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 97) (1997), 57–64.

[DS99] Damez C., Sillion F.: Space-Time Hierarchical Radiosity. In Proceedings of the Eurographics Workshop on Rendering Techniques (1999), Springer Wien, pp. 235–246.

[GWS04] Günther J., Wald I., Slusallek P.: Realtime Caustics Using Distributed Photon Mapping. In Proceedings of the Eurographics Symposium on Rendering (2004), pp. 111–121.

[HDMS03] Havran V., Damez C., Myszkowski K., Seidel H.-P.: An efficient spatio-temporal architecture for animation rendering. In Proceedings of the Eurographics Symposium on Rendering 2003 (2003), ACM Press, pp. 106–117.

[HJ09] Hachisuka T., Jensen H. W.: Stochastic progressive photon mapping. ACM Transactions on Graphics 28, 5 (2009), 1.

[HKMS08] Herzog R., Kinuwaki S., Myszkowski K., Seidel H.-P.: Render2MPEG: A Perception-based Framework Towards Integrating Rendering and Video Compression. Computer Graphics Forum 27, 2 (2008), 183–192.

[HOJ08] Hachisuka T., Ogaki S., Jensen H. W.: Progressive photon mapping. ACM Transactions on Graphics 27, 5 (2008), 1.

[Jen96] Jensen H. W.: Global illumination using photon maps. In Proceedings of the Eurographics Workshop on Rendering Techniques (1996), Springer-Verlag, pp. 21–30.

[Jen01] Jensen H. W.: A Practical Guide to Global Illumination using Photon Maps. SIGGRAPH Course Notes (2001).

[Kaj86] Kajiya J. T.: The rendering equation. SIGGRAPH 1986 20, 4 (1986), 143–150.

[KZ11] Knaus C., Zwicker M.: Progressive photon mapping: A probabilistic approach. ACM Transactions on Graphics 30, 3 (2011), 25.

[MTAS01] Myszkowski K., Tawara T., Akamine H., Seidel H.-P.: Perception-Guided Global Illumination Solution for Animation Rendering. In SIGGRAPH (2001), ACM Press, pp. 221–230.

[SCH04] Sbert M., Castro F., Halton J. H.: Reuse of paths in light source animation. In Computer Graphics International (2004), pp. 532–535.

[SYM∗11] Scherzer D., Yang L., Mattausch O., Nehab D., Sander P. V., Wimmer M., Eisemann E.: A survey on temporal coherence methods in real-time rendering. In Eurographics State of the Art Reports (May 2011).

[TMS02] Tawara T., Myszkowski K., Seidel H.-P.: Localizing the final gathering for dynamic scenes using the photon map. In VMV (2002), pp. 69–76.

[TPWG02] Tole P., Pellacini F., Walter B., Greenberg D. P.: Interactive global illumination in dynamic scenes. Proceedings of SIGGRAPH 21, 3 (2002), 537.

[WDP99] Walter B., Drettakis G., Parker S.: Interactive rendering using the render cache. Rendering Techniques 99 (1999), 19–30.

[WMG∗09] Wald I., Mark W. R., Günther J., Boulos S., Ize T., Hunt W., Parker S. G., Shirley P.: State of the Art in Ray Tracing Animated Scenes. Computer Graphics Forum 28, 6 (2009), 1691–1722.