  1. An Efficient Spatio-Temporal Architecture for Animation Rendering Vlastimil Havran, Cyrille Damez, Karol Myszkowski, and Hans-Peter Seidel Max-Planck-Institut für Informatik, Saarbrücken, Germany

  2. Motivation • In the traditional approach to rendering high-quality animation sequences, every frame is considered separately. • Temporal coherence is poorly exploited: • redundant computations. • Visual sensitivity to temporal detail cannot be properly accounted for: • overly conservative stopping conditions, • temporal aliasing.

  3. Goal • Developing an architecture for efficient rendering of high-quality animations which better exploits the spatio-temporal coherence between frames: • Visibility: Multi-frame ray tracing • Global illumination: Bi-directional Path Tracing extended to re-using samples between frames • Texturing and shading: sharing information between frames • Motion blur: conservative computation in 2D image space • Cache use: coherent patterns of access to data structures in memory (along motion compensation trajectories) • Memory requirements: unlimited image resolution

  4. Previous Work • Industrial solutions such as Photorealistic RenderMan, Maya Rendering System, or Softimage • frame-by-frame computation • postprocessing of the animation to decrease temporal aliasing

  5. Previous Work • Temporal Coherence in Ray Tracing • Coherence of shaded pixels • Interpolation between frames [Maisel and Hegron 92] • Reprojection [Adelson and Hughes 95] • 4D radiance interpolants [Bala et al. 99] • Coherence in acceleration data structures • Space-time solutions [Glassner 88, Groeller 91] • Static vs. dynamic objects [Besuievsky and Pueyo 01, Lext and Moller 01, and Wald et al. 02]

  6. Previous Work • Temporal Coherence in Global Illumination • Coherence of illumination samples • Render cache [Walter et al. 99] • Shading cache [Tole et al. 02] • Reusing photon hit points [Myszkowski et al. 01] • Selective Photon Tracing [Dmitriev 02] • Coherence in visibility computation • Global lines [Besuievsky and Pueyo 01] • Coherence in generation of random numbers • Random number sequence associated with each light transport path [Lafortune 96, Jensen 01, Wald et al. 02]

  7. Space-Time Architecture: Principle • Compute samples using a variant of Path Tracing • Pixel color = mean of sample values • 2 types of samples: • Native samples: • expensive • computed from scratch • Recycled samples: • cheap • based on previous computations of native samples (using reprojections)
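A minimal illustrative sketch of this principle (not code from the paper; the struct and function names below are our own assumptions): native and recycled samples are accumulated with equal weight, and the pixel color is simply the mean of all sample values.

```cpp
// Sketch only: a pixel estimate that mixes expensive "native" samples with
// cheap "recycled" (reprojected) ones. Names are illustrative assumptions.
struct Sample {
    float r, g, b;      // radiance estimate carried by this sample
    bool  native;       // true = computed from scratch for this frame
};

struct PixelEstimate {
    float r = 0, g = 0, b = 0;
    int   count = 0;

    void add(const Sample& s) {                    // every sample has equal weight
        r += s.r; g += s.g; b += s.b; ++count;
    }
    void color(float& outR, float& outG, float& outB) const {
        float inv = count ? 1.0f / count : 0.0f;   // mean of sample values
        outR = r * inv; outG = g * inv; outB = b * inv;
    }
};
```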

  8. Motion Compensation • Camera and object motion compensation • Memory access coherence
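To make the camera-motion side of this concrete, here is a hedged sketch (our own illustration, with an assumed pinhole camera model, not the paper's implementation): a native sample's world-space hit point is reprojected through another frame's camera to find the pixel where it could be recycled. A visibility test would still be needed before actually reusing it.

```cpp
#include <array>
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };

struct Camera {                       // assumed pinhole camera for one frame
    Vec3  origin;
    std::array<Vec3, 3> worldToCam;   // rows of a 3x3 rotation matrix
    float focal;                      // focal length in pixel units
    int   width, height;
};

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns the (x, y) pixel of `hit` in camera `cam`, or nothing if the point
// projects behind the camera or outside the image.
std::optional<std::array<int, 2>> reproject(const Vec3& hit, const Camera& cam) {
    Vec3 d{hit.x - cam.origin.x, hit.y - cam.origin.y, hit.z - cam.origin.z};
    float cx = dot(cam.worldToCam[0], d);
    float cy = dot(cam.worldToCam[1], d);
    float cz = dot(cam.worldToCam[2], d);
    if (cz <= 0.0f) return std::nullopt;              // behind the camera
    int px = int(std::lround(cam.focal * cx / cz + cam.width  * 0.5f));
    int py = int(std::lround(cam.focal * cy / cz + cam.height * 0.5f));
    if (px < 0 || px >= cam.width || py < 0 || py >= cam.height)
        return std::nullopt;                          // outside the frame
    return std::array<int, 2>{px, py};
}
```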

  9. Bi-directional Path Tracing

  10. Camera Motion

  11. Occluded Connection

  12. Path Change

  13. Multi-Frame Ray Tracing • Aggregate queries • Single ray geometry • Results possible for all frames in [f(i-R-S), f(i+R)] • Two types of visibility queries • Ray shooting • Visibility between two points • Three settings of visibility queries • Exact result for single frame f(i) • Exact for single frame f(i) + validity for all other frames • Exact results for all frames in [f(i-R), f(i+R)]
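The following interface is a hedged sketch of what such aggregate queries could look like (an illustration of the idea only, not the paper's API; all names and the fixed-size frame mask are assumptions): one ray geometry, with results reported per frame range in the three settings listed above.

```cpp
#include <bitset>
#include <cstdint>

constexpr int kMaxFrames = 64;              // assumed bound on frames per query window

struct Ray { float org[3]; float dir[3]; };

struct MultiFrameHit {
    std::bitset<kMaxFrames> validIn;        // frames for which this answer is exact/valid
    float    t = 1e30f;                     // hit distance for the reference frame
    uint32_t object = ~0u;                  // object id, ~0u = no hit
};

class MultiFrameAccelerator {
public:
    // Setting 1: exact answer for the single reference frame f(i).
    virtual MultiFrameHit shootSingle(const Ray& r, int fi) = 0;
    // Setting 2: exact answer for f(i), plus a mask of frames where it stays valid.
    virtual MultiFrameHit shootWithValidity(const Ray& r, int fi) = 0;
    // Setting 3: exact answers for every frame in [f(i-R), f(i+R)].
    virtual void shootRange(const Ray& r, int fi, int R, MultiFrameHit* out) = 0;
    virtual ~MultiFrameAccelerator() = default;
};
```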

  14. Multi-Frame Ray Tracing: Construction of the Spatial Data Structure • Instantiation of dynamic objects for a range of frames • Separation of static and dynamic objects • Construction of a global kd-tree over static objects • Hierarchical clustering of dynamic objects • Construction of kd-trees for the clusters of objects • Insertion of the cluster kd-trees into global kd-tree leaves • Refinement of global kd-tree leaves + cutting off empty space (efficiency improvement methods) Additional techniques used • Mailboxes (caches) for ray transforms, objects, and kd-trees • Frame masks for inserted kd-trees and shadow cache
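As an orientation aid, here is a hedged skeleton of that construction order (all types, the trivial clustering, and the empty function bodies are placeholders for illustration; a real build would implement each step and transfer ownership of the cluster trees into the global tree):

```cpp
#include <memory>
#include <vector>

struct Object { bool isDynamic; /* geometry, per-frame transforms, ... */ };
struct KdTree { /* nodes, bounds, ... */ };
struct Cluster { std::vector<const Object*> members; std::unique_ptr<KdTree> tree; };

std::unique_ptr<KdTree> buildKdTree(const std::vector<const Object*>&) {
    return std::make_unique<KdTree>();          // placeholder builder
}
std::vector<Cluster> clusterDynamic(const std::vector<const Object*>& objs) {
    std::vector<Cluster> c;                     // placeholder: one cluster per object
    for (auto* o : objs) { Cluster k; k.members = {o}; c.push_back(std::move(k)); }
    return c;
}
void insertIntoLeaves(KdTree&, const KdTree&) { /* link cluster tree into overlapped leaves */ }
void refineLeavesAndCutEmptySpace(KdTree&)    { /* efficiency improvement pass */ }

std::unique_ptr<KdTree> buildSceneStructure(const std::vector<Object>& scene) {
    std::vector<const Object*> statics, dynamics;
    for (const auto& o : scene) (o.isDynamic ? dynamics : statics).push_back(&o);

    auto global = buildKdTree(statics);         // 1. global kd-tree over static objects
    for (auto& c : clusterDynamic(dynamics)) {  // 2. clustering of dynamic objects
        c.tree = buildKdTree(c.members);        // 3. kd-tree per cluster
        insertIntoLeaves(*global, *c.tree);     // 4. insert cluster trees into global leaves
    }
    refineLeavesAndCutEmptySpace(*global);      // 5. refine leaves, cut off empty space
    return global;
}
```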

  15. The Animation Buffer • Iterates over all pixels in S consecutive frames • If more samples are required: • compute a native sample for frame f(i) • reproject it and recycle it for all frames in [f(i-R), f(i+R)] • S+2R frames are kept in the buffer

  16. The Animation Buffer • Iterates over all pixels in S consecutive frames • If more samples are required: • compute a native sample for frame f(i) • reproject it and recycle it for all frames in [f(i-R), f(i+R)] • S+2R frames are kept in the buffer • Finished frames are saved to disk
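A hedged sketch of this traversal (the loop structure is ours; the stubbed sampling functions and the convergence test are illustrative assumptions, not the paper's code): S frames are processed together, each native sample computed for frame f(i) is recycled into the R frames before and after it, which is why S+2R frames must stay in memory.

```cpp
#include <vector>

struct Frame { /* per-pixel sample accumulators, image data, ... */ };

struct AnimationBuffer {
    int S, R;                       // S frames per pass, R-frame reprojection radius
    std::vector<Frame> frames;      // holds S + 2R frames

    AnimationBuffer(int s, int r) : S(s), R(r), frames(s + 2 * r) {}
};

// Stubs standing in for the real sampling machinery.
bool needsMoreSamples(const Frame&, int /*pixel*/)          { return false; }
int  computeNativeSample(Frame&, int /*pixel*/)             { return 0; }
void recycleSample(Frame&, int /*pixel*/, int /*sampleId*/) { }

void renderPass(AnimationBuffer& buf, int firstFrame, int pixelCount) {
    for (int i = 0; i < buf.S; ++i) {                         // iterate over S frames
        int fi = firstFrame + i;
        Frame& fr = buf.frames[buf.R + i];                    // f(i) sits between the R-frame margins
        for (int p = 0; p < pixelCount; ++p) {
            while (needsMoreSamples(fr, p)) {
                int s = computeNativeSample(fr, p);           // expensive, from scratch
                for (int j = fi - buf.R; j <= fi + buf.R; ++j)
                    if (j != fi)                              // cheap reprojection into neighbors
                        recycleSample(buf.frames[buf.R + (j - firstFrame)], p, s);
            }
        }
    }
    // Frames that drop out of the S+2R window are finished and can be saved to disk.
}
```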

  17. Shading Computation • A simplified version of RenderMan Shading Language • Each shader decomposed into • View-independent component • re-usable, shared between frames • View-dependent component • recomputed for each frame
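A toy example of this decomposition (a hedged sketch with a simple diffuse + specular shader of our choosing, not a RenderMan shader from the paper): the view-independent part can be cached and shared between frames, while the view-dependent part is recomputed whenever the eye direction changes.

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
};
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct ViewIndependent {         // computed once, re-used for every frame
    Vec3 diffuse;                // texture lookups, diffuse lighting, ...
};

ViewIndependent shadeViewIndependent(const Vec3& albedo, const Vec3& irradiance) {
    return { {albedo.x * irradiance.x, albedo.y * irradiance.y, albedo.z * irradiance.z} };
}

// Recomputed for each frame, since the eye direction depends on the camera.
Vec3 shadeViewDependent(const ViewIndependent& vi, const Vec3& n,
                        const Vec3& toLight, const Vec3& toEye,
                        float specStrength, float shininess) {
    Vec3 h{toLight.x + toEye.x, toLight.y + toEye.y, toLight.z + toEye.z};
    float len = std::sqrt(dot(h, h));
    float spec = len > 0.0f
        ? std::pow(std::fmax(dot(n, {h.x / len, h.y / len, h.z / len}), 0.0f), shininess)
        : 0.0f;
    return vi.diffuse + Vec3{1, 1, 1} * (specStrength * spec);
}
```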

  18. Motion Blur • Accuracy & Quality • The same sample point is considered for multiple frames • In other frame-by-frame architectures the motion of objects must be computed explicitly with additional samples • Temporal changes in shading are properly accounted for • difficult in other architectures • Efficiency • 2D computation in image space
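One way to picture the 2D image-space computation (a hedged, approximate sketch of ours, not the paper's conservative algorithm): because the same sample is known in consecutive frames, its image-space positions define a motion segment, and its energy can be spread over the pixels that segment crosses.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Image {
    int w, h;
    std::vector<float> rgb;                       // 3 floats per pixel
    Image(int w_, int h_) : w(w_), h(h_), rgb(3 * w_ * h_, 0.0f) {}
    void splat(int x, int y, float r, float g, float b, float wgt) {
        if (x < 0 || x >= w || y < 0 || y >= h) return;
        float* p = &rgb[3 * (y * w + x)];
        p[0] += r * wgt; p[1] += g * wgt; p[2] += b * wgt;
    }
};

// Spread one sample's color along its image-space motion from (x0,y0) to (x1,y1),
// i.e. its reprojected positions at shutter-open and shutter-close times.
void splatBlurredSample(Image& img, float x0, float y0, float x1, float y1,
                        float r, float g, float b) {
    int steps = std::max(1, int(std::max(std::abs(x1 - x0), std::abs(y1 - y0))) + 1);
    float wgt = 1.0f / steps;                     // conserve the sample's total energy
    for (int i = 0; i < steps; ++i) {
        float t = (steps == 1) ? 0.0f : float(i) / float(steps - 1);
        img.splat(int(x0 + t * (x1 - x0)), int(y0 + t * (y1 - y0)), r, g, b, wgt);
    }
}
```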

  19. Results • Speedup for bi-directional path tracing: • moving camera, moving objects: 7.7 • moving objects only: 13.3 • moving camera only: 8.8 • Time per frame: 240-415 sec. • Proportion of native samples: 2.4-4.7% • Cost of native samples (profiler): 44-64% of the whole computation time • Disk caching overhead: 10% slowdown (for 1% of main memory used)

  20. Motion Blur Renderings • Time per frame: 120 sec. and 150 sec. • Motion blur computational overhead: 25%

  21. Videos

  22. Conclusions • We presented an efficient architecture for rendering high-quality animations. • In our architecture, path tracing, texturing and shading, and motion blur can all be computed efficiently. • The architecture is particularly efficient for scenes in which a limited number of objects move locally and the camera motion is slow. • For more complex motion scenarios the animation segment length can be reduced, which in the limit boils down to the traditional frame-by-frame computation. • Data structures handling dynamic objects require additional memory, which is acceptable on modern computers. • The memory overhead involved in storing multiple images is negligible due to efficient buffering.

  23. Future Work • Reusing samples for many pixels in the same frame as suggested in [Bekaert et al. 2002, Szirmay-Kalos 2002] • Improving the efficiency of visibility tests for reprojected samples using a shaft culling approach. • Skipping some visibility tests based on the spatio-temporal coherence of neighboring samples as implemented in the Maya Rendering System [Sung et al. 2002]

  24. Thanks! • Polina Kondratieva for rewriting RenderMan Shaders • Markus Weber for help with preparing the scenes • Partial support of IST-2001-34744 project RealReflect • All anonymous reviewers for their comments

  25. Space-Time Architecture: Reprojection • [Figure: reflected light is computed once as a native sample and recycled N times via reprojection]

  26. Rendering Animations: Ray Tracing with Shaders • Speedup for ray tracing: • moving camera, moving objects: 2.6 • Time per frame: 770 sec. (deterministic reflections must be recomputed!)
