
Out of Core Simplification



Presentation Transcript


  1. Out of Core Simplification
  Benjamin Watson
  Dept. of Computer Science, Northwestern University
  watson@cs.northwestern.edu

  2. Models are getting bigger
  - Models now ride Moore’s Law
  - Major source: 3D scanning
  - Example: Digital Michelangelo
  - Sizes now range from 10^7 to 10^9 faces
  - Well beyond most core memories

  3. Can’t we do “big”?
  - Maybe, but not “massive” -- that is, models not fitting in core
  - Previous limit was in the 10^6 face range
  - What’s the problem with out of core? It requires slow disk access
  - So we must minimize disk access -- most simplification algorithms don’t

  4. Why should GDC care?
  - OOC could speed development
    - Artists design to content, not to platform
    - Coders then have complete LOD control
    - No artist/coder iterations to gain LOD control
  - OOC could enable highly variable LOD
    - Even extreme closeups reveal new detail

  5. Out of core strategies
  - For good out of core performance, use:
    - Locality: use less main memory
    - Reuse: maximize use of the data already in memory
  - Most existing algorithms are poor at both:
    - Locality is not guaranteed by model formats
    - Most algorithms are greedy -- poor reuse

  6. Demo: bunny vs. dragon RSimp on bunny

  7. Demo: bunny vs. dragon RSimp on dragon

  8. Solutions: two approaches
  Following the survey by Silva et al., 2002
  - Spatial clustering: segmentation of space
    - Typically faster, with more error
  - Surface clustering: segmentation of the surface
    - Typically slower, with less error

  9. Spatial: Lindstrom
  - A modification of Rossignac & Borrel
  - Adds locality by dereferencing to create a triangle “soup”
  - Done with little thrashing, in linear time:
    - In one pass, hash the vertices of input faces
    - Add each face’s plane to the quadric in each vertex’s hash entry
    - Retain a face if its 3 vertices hash differently
    - Output the retained faces and the per-cell quadric minimizers
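The one-pass clustering above can be sketched in a few lines of Python. This is a minimal sketch with names of my own choosing; per-cell vertex averaging stands in for the quadric minimization the slide describes, and a uniform grid stands in for the vertex hash:

```python
from collections import defaultdict

def cluster_simplify(faces, cell_size):
    """faces: triangles as tuples of three (x, y, z) vertices.
    Returns the faces that survive clustering, with vertices
    snapped to their cell's representative position."""
    def cell(v):
        # Grid cell index; stands in for Lindstrom's vertex hash.
        return tuple(int(c // cell_size) for c in v)

    sums = defaultdict(lambda: [0.0, 0.0, 0.0, 0])  # per-cell position sum + count
    kept = []
    for tri in faces:                    # single pass over the input "soup"
        cells = [cell(v) for v in tri]
        for v, c in zip(tri, cells):     # accumulate (average stands in for quadric)
            s = sums[c]
            for i in range(3):
                s[i] += v[i]
            s[3] += 1
        if len(set(cells)) == 3:         # retain face iff 3 distinct cells
            kept.append(cells)

    rep = {c: tuple(s[i] / s[3] for i in range(3)) for c, s in sums.items()}
    return [tuple(rep[c] for c in tri) for tri in kept]
```

Memory scales with the number of occupied cells (the output), not the input, which is the source of the locality advantage on the next slide.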

  10. Spatial: Lindstrom
  Advantages:
  - Great reuse: each input face is seen only once
  - Good locality: memory sized by output, not input
  - Great speed results: 175 Ktris/sec
  Disadvantages:
  - Poor accuracy: a non-adaptive algorithm
  - Not sensitive to topology (e.g. texture seams)
  - Later: fixed memory with less speed (Lindstrom & Silva)

  11. Spatial: Lindstrom
  [images: results at 2K, 20K and 200K faces]

  12. Spatial: Shaffer & Garland
  An addition to Lindstrom’s approach:
  - First, apply Lindstrom’s algorithm; the resulting model fits in core memory
  - Then, adaptively simplify, using a refining algorithm similar to RSimp
  (We discuss RSimp shortly)

  13. Spatial: Shaffer & Garland
  Advantages:
  - Improved accuracy, about 20% on average
  Disadvantages:
  - Slower: 67 Ktris/sec
  - Still poor topological sensitivity (texture seams)
  - Output size fixed by core memory
  A similar approach: Fei et al.

  14. Spatial: Shaffer & Garland
  [images: results at 2K, 20K and 200K faces]

  15. Surface: Hoppe
  - Segment the surface with a grid
  - Simplify the surface in each grid cell independently
    - Edge collapse to some error bound or % of original faces
    - Each grid segment fits in memory
  - Revisit and simplify faces at grid boundaries
  - Several similar approaches: Bernardini et al., Erikson et al., Cignoni et al.
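The segmentation step above can be sketched as a simple bucketing pass. This is a rough illustration with hypothetical names; a real implementation chooses the grid resolution so each bucket fits in core, and the boundary list is exactly what gets revisited in the later pass:

```python
from collections import defaultdict

def partition_faces(faces, cell_size):
    """faces: triangles as tuples of three (x, y, z) vertices.
    Returns (interior buckets keyed by grid cell, boundary faces)."""
    interior = defaultdict(list)
    boundary = []
    for tri in faces:
        cells = {tuple(int(c // cell_size) for c in v) for v in tri}
        if len(cells) == 1:
            # Whole triangle lies in one cell: simplify with that segment.
            interior[cells.pop()].append(tri)
        else:
            # Triangle straddles cells: defer to the boundary pass.
            boundary.append(tri)
    return interior, boundary
```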

  16. Surface: Hoppe
  Advantages:
  - Much better accuracy: adaptivity within each segment
  - Good topological sensitivity (textures possible)
  Disadvantages:
  - Worse locality and reuse: uses all memory, revisits faces
  - Much slower: around 5 Ktris/sec
  - Simplifying to a given size likely limits accuracy

  17. Surface: Cignoni et al.
  [images: results at 6K and 21K faces; original: 161K faces]

  18. Surface: El-Sana & Chiang
  Similar to Hoppe, but segments using shape:
  - Repeat:
    - Sort edges by error
    - Load all of the least-error edges that fit in memory
    - Edge collapse up to the current max error
  - Until the target error is reached
  Better accuracy for a given size -- but is it scalable?
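The outer loop above can be illustrated in miniature. All names here are mine, not the authors’; `batch_size` stands in for “all least-error edges that fit in memory,” and a real implementation would recompute the errors of neighboring edges after each collapse rather than sorting once:

```python
def batched_collapse(edge_errors, batch_size, target_error):
    """edge_errors: {edge_id: collapse error}. Returns the ids
    collapsed, in batch order, up to target_error."""
    collapsed = []
    remaining = sorted(edge_errors.items(), key=lambda kv: kv[1])
    while remaining and remaining[0][1] <= target_error:
        batch = remaining[:batch_size]          # least-error edges that "fit in core"
        # Collapse only edges within the current error bound.
        collapsed.extend(e for e, err in batch if err <= target_error)
        remaining = remaining[batch_size:]
    return collapsed
```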

  19. Surface: VMRSimp
  A modification of RSimp, by Brodsky & Watson
  RSimp refines toward the desired output size:
  - Define a poor 8-patch (vertex) approximation
  - Repeat:
    - Choose the patch with the most normal variation
    - Split that patch according to its normal variation
  - Until the desired number of vertices is reached
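RSimp’s greedy refinement loop can be sketched with a priority queue. This is a simplified illustration under my own assumptions: it starts from a single patch rather than 8, measures variation as 1 minus the length of the mean normal, and bisects the patch instead of splitting along the direction of greatest normal variation:

```python
import heapq

def variation(patch):
    # Normal variation of a patch: 0 when all normals agree, larger otherwise.
    n = len(patch)
    mean = [sum(nrm[i] for _, nrm in patch) / n for i in range(3)]
    return 1.0 - sum(c * c for c in mean) ** 0.5

def refine(vertices, target_patches):
    """vertices: list of (position, unit normal) pairs.
    Greedily splits the patch with the most normal variation
    until target_patches patches exist."""
    heap = [(-variation(vertices), 0, vertices)]   # max-heap via negated keys
    counter = 1                                    # tie-breaker for the heap
    while len(heap) < target_patches:
        _, _, patch = heapq.heappop(heap)          # most-varied patch
        mid = len(patch) // 2                      # bisection stands in for the real split
        for half in (patch[:mid], patch[mid:]):
            heapq.heappush(heap, (-variation(half), counter, half))
            counter += 1
    return [patch for _, _, patch in heap]
```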

  20. Surface: VMRSimp
  The modification makes simplification a sort:
  - Each patch is a contiguous range of the input array
  - Splitting a patch means sorting it into subranges
  - Thus locality is built and then refined
  - This allows reliance on virtual memory
  - With accuracy control (not size): optimal reuse
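The “simplification as a sort” idea can be illustrated with a toy range split. The name is mine; in VMRSimp the sort key would be which side of the normal-variation split a vertex falls on, but any key shows the mechanism:

```python
def split_range(arr, lo, hi, key):
    """A patch is the contiguous range arr[lo:hi]. Splitting sorts that
    range in place into two subranges, so each refinement step tightens
    locality in the underlying array (and hence in virtual memory pages)."""
    arr[lo:hi] = sorted(arr[lo:hi], key=key)
    mid = (lo + hi) // 2
    return (lo, mid), (mid, hi)
```

Because every patch stays contiguous, refining a patch touches only its own pages, which is what lets the algorithm lean on the OS virtual memory system instead of explicit out-of-core machinery.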

  21. Surface: VMRSimp
  Advantages:
  - More accurate than all spatial segmentation
  - Excellent accuracy if detail is distributed unevenly
  - Speed remains decent at 5 Ktris/sec
  Disadvantages:
  - Worse at preserving some topology

  22. Surface: VMRSimp
  [timing chart: 1 GHz PIII, RH Linux 7.1, 1 GB memory]
  Accuracy control improves times by 25%

  23. Surface: VMRSimp
  [chart: Metro error as % of model bounding box]

  24. Surface: VMRSimp
  [images: results at 2K, 20K and 200K faces]

  25. Surface: comparison
  [images: Lindstrom, VMRSimp, Shaffer & Garland]

  26. Summary
  Spatial segmentation:
  - Poor accuracy, great speed
  - The only practical approach for inputs of 10^8+ faces
  - Ignores topological info such as texture seams
  Surface segmentation:
  - Greater accuracy, poorer speed
  - Sensitive to topology
  Big challenge: a better accuracy/speed tradeoff

  27. Sources
  - Silva, C., Chiang, Y.-J., El-Sana, J. & Lindstrom, P. (2002). Out-of-core algorithms for scientific visualization and computer graphics. US DOE, Lawrence Livermore Nat. Labs, tech. rpt. UCRL-JC-150434.
  - Lindstrom, P. (2000). Out-of-core simplification of large polygonal models. Proc. SIGGRAPH, 259–262.
  - Lindstrom, P. & Silva, C. (2001). A memory insensitive technique for large model simplification. Proc. IEEE Visualization, 121–126.
  - Shaffer, E. & Garland, M. (2001). Efficient adaptive simplification of massive meshes. Proc. IEEE Visualization, 127–134.
  - Fei, G., Cai, K., Guo, B. & Wu, E. (2002). An adaptive sampling scheme for out-of-core simplification. Computer Graphics Forum, 21(2): 111–119.
  - Hoppe, H. (1998). Smooth view-dependent level-of-detail control and its application to terrain rendering. Proc. IEEE Visualization, 35–42.

  28. Sources
  - Bernardini, F., Mittleman, J. & Rushmeier, H. (1999). Case study: scanning Michelangelo’s Florentine Pieta. SIGGRAPH 99 Course #8. URL http://www.research.ibm.com/pieta.
  - Cignoni, P., Rocchini, C., Montani, C. & Scopigno, R. (2002). External memory management and simplification of huge meshes. IEEE Trans. Visualization and Computer Graphics. To appear.
  - Erikson, C., Manocha, D. & Baxter III, W.V. (2001). HLODs for faster display of large static and dynamic environments. Proc. ACM Interactive 3D Graphics, 111–120.
  - El-Sana, J. & Chiang, Y.-J. (2000). External memory view-dependent simplification. Computer Graphics Forum, 19(3): 139–150.
  - Brodsky, D. & Watson, B. (2000). Model simplification through refinement. Proc. Graphics Interface, 221–228.
  - Choudhury, P. & Watson, B. (2002). Completely adaptive simplification of massive meshes. Tech. Rpt. CS-02-09, Northwestern Univ. URL http://www.cs.northwestern.edu/~watsonb/school/docs/vmrsimp.tr.pdf.
