
KIPA Game Engine Seminars


Presentation Transcript


  1. KIPA Game Engine Seminars Day 6 Jonathan Blow Ajou University December 2, 2002

  2. Level-of-Detail Method Overview • Traditional Purpose: Speed Boost • Ideal: Render a fixed number of triangles always • Doesn’t matter how far your view stretches into the distance • Diagram of pixel tessellation • Object detail / triangle count as a function of distance
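The idea of triangle count as a function of distance can be sketched with a hypothetical budget function (simple pinhole-projection math; the function name and parameters are illustrative, not from the talk):

```python
import math

def triangle_budget(object_radius, distance, fov_y, screen_height,
                    pixels_per_triangle=1.0):
    """Triangles needed to keep roughly one triangle per covered pixel.

    Projected area falls off as 1/distance^2, so the budget does too --
    which is how the total on-screen triangle count can stay fixed as
    objects recede into the distance.
    """
    # Projected radius of the object's bounding sphere, in pixels.
    projected_radius = (object_radius / distance) * \
        screen_height / (2.0 * math.tan(fov_y / 2.0))
    covered_pixels = math.pi * projected_radius ** 2
    return max(1, int(covered_pixels / pixels_per_triangle))
```

Doubling the distance cuts the budget to roughly a quarter, since projected area scales with the square of projected size.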

  3. Future Purpose: Geometric Antialiasing • Discussion of scenes with many small objects far away • In a rendering paradigm like Monte Carlo ray tracing (MCRT) we get a certain amount of antialiasing for free • When projecting geometry onto the screen, we do not; we need to implement something that provides antialiasing for us

  4. Level-of-Detail Methods • Static mesh switching • Progressive mesh • Continuous-LOD mesh • Issues involving big objects (static and progressive mesh not good enough?)

  5. Static mesh switching • Pre-generate a series of meshes decreasing in detail • Switch between them based on z distance of the mesh from the camera • Perhaps be more analytical and switch based on max. projected pixel error? • Nobody actually does this because it is far too conservative
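The "switch based on max projected pixel error" idea can be sketched like this (a minimal illustration; the function names and the per-mesh error bounds are assumptions, not from the talk):

```python
import math

def projected_error_pixels(world_error, distance, fov_y, screen_height):
    """Screen-space size, in pixels, of a world-space error at a given distance."""
    return (world_error / distance) * screen_height / (2.0 * math.tan(fov_y / 2.0))

def select_lod(mesh_errors, distance, fov_y, screen_height, max_pixel_error=1.0):
    """Pick the coarsest pre-generated mesh whose error stays under tolerance.

    mesh_errors: per-mesh world-space error bounds, finest mesh first
    (so the bounds are increasing).
    """
    chosen = 0
    for i, err in enumerate(mesh_errors):
        if projected_error_pixels(err, distance, fov_y, screen_height) <= max_pixel_error:
            chosen = i  # this coarser mesh is still visually acceptable
    return chosen
```

Close to the camera no coarse mesh passes the test and the finest mesh (index 0) is used; far away the coarsest acceptable one wins.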

  6. Progressive Mesh • Generate one sequence of collapses that takes you from high-res to 1 triangle • Dynamically select number of triangles at runtime • Works well with modern 3D hardware since you only modify a little bit of the index buffer at a time.
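Applying a precomputed collapse sequence can be sketched as follows (a simplified illustration: the pair-list representation is an assumption; real implementations patch a small piece of the index buffer in place, as the slide notes):

```python
def apply_collapses(triangles, collapses, n):
    """Apply the first n edge collapses of a precomputed sequence.

    triangles: list of (i, j, k) vertex-index triples.
    collapses: list of (dead_vertex, kept_vertex) pairs, in collapse order.
    """
    remap = dict(collapses[:n])

    def resolve(v):
        # Follow chains: a kept vertex may itself collapse in a later step.
        while v in remap:
            v = remap[v]
        return v

    out = []
    for tri in triangles:
        a, b, c = (resolve(v) for v in tri)
        if a != b and b != c and a != c:  # drop triangles made degenerate
            out.append((a, b, c))
    return out
```

Selecting the number of triangles at runtime is then just a choice of n.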

  7. Progressive Mesh Disadvantages • Relies on frame coherence (bad!) • Interferes with triangle stripping and vertex cache sorting (they become mutually impossible). • High code complexity; it makes everything else more complicated and adds restrictions to everything else • Example of normal map generation restricted to object space

  8. Continuous Level-of-Detail Algorithms • Lindstrom-Koller, ROAM, Rottger quadtree algorithm • Dynamically update tessellation based on an estimate of screen-space error • Crack fixing between adjacent blocks, etc.
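The refinement loop shared by these algorithms can be sketched as a recursive descent over a block tree (a simplified illustration: the node layout and the single threshold tau stand in for the per-algorithm screen-space error formulas):

```python
import math

def refine(node, camera_pos, tau):
    """Emit triangles, subdividing wherever projected error exceeds tau.

    node: dict with 'center', 'error' (a world-space error bound),
    'triangles', and optionally 'children'.  Error divided by distance
    approximates screen-space error up to a constant perspective factor.
    """
    distance = math.dist(node['center'], camera_pos)
    if 'children' in node and node['error'] / max(distance, 1e-6) > tau:
        tris = []
        for child in node['children']:
            tris += refine(child, camera_pos, tau)
        return tris
    return node['triangles']
```

Real implementations must also force-split neighboring blocks so adjacent tessellation levels never differ enough to open cracks.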

  9. Continuous LOD • Example of binary triangle trees • There are other formats (quadtree, diamond, etc.) but the ideas are similar

  10. Continuous LOD Disadvantages • Extremely complicated implementations • Slow on modern hardware • Extreme reliance on frame coherence (bad!) • Not conducive to unified rendering (hard to make work on curved surfaces, arbitrary topologies)

  11. Continuous LOD • Has a lot of hype in the amateur and academic communities • Is currently not competitive with other LOD approaches • This is not likely to change any time soon

  12. LOD Metrics

  13. Introduction • We need an effective way to benchmark / judge LOD schemes • The academic world is not really doing this right now! • We need a standard set of data with comparable results • University of Waterloo Brag Zone for image compression

  14. LOD Metric? • We often create metrics for taking each small step in a geometric reduction • We don’t have a metric for comparing a fully reduced mesh with the source model or another reduced mesh • Because our mesh representations are so ad hoc

  15. Image Compression Guys Have a Metric • (even though they know it’s not that good) • PSNR measures the difference between a compressed image and the original • They know it has problems (not perceptually driven) and are working on a better metric • But at least they have a way of comparing results, which means they are sort of doing science!
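PSNR itself is simple; a minimal sketch over flat pixel arrays:

```python
import math

def psnr(original, compressed, max_value=255.0):
    """Peak signal-to-noise ratio, in dB, between two equal-length pixel arrays."""
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0.0:
        return float('inf')  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)
```

Higher is better; and since it is just a per-pixel squared difference, it is not perceptually driven, exactly as the slide complains.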

  16. Metric ideas • “Sum of closest-point distances”
      • Continuous, which is good
      • Very expensive to compute
      • Non-monotonic (!), which is bad
          • Monotonic for small changes, usually, which might be good enough
      • Ignores texture warping, which is bad
          • Unless we try it in 5-dimensional space
      • Ignores vertex placement
          • Important for rasterization (iterated vertex properties!)
          • Example of big flat area
      • Ignores cracks in destination model
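The sum-of-closest-point-distances metric can be sketched over point samplings of the two meshes (brute force, which is exactly the expense the slide complains about; the sampling and names are illustrative):

```python
import math

def closest_point_distance_sum(points_a, points_b):
    """Symmetric sum of closest-point distances between two point samplings.

    O(n * m) brute force -- very expensive, as the slide says.  A real
    implementation would sample the surfaces densely and accelerate the
    closest-point queries with a spatial structure.
    """
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src)
    return one_way(points_a, points_b) + one_way(points_b, points_a)
```

Summing both directions matters: a one-way distance can be zero even when the second mesh has extra geometry the first never approaches.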

  17. Lindstrom/Turk Screen-Space LOD Comparison • Guide compression of a mesh by taking snapshots of it from many different viewpoints and PSNR’ing the images • This can work, but PSNR is not necessarily stable with respect to small image-space motions

  18. Lindstrom/Turk Screen-Space LOD Comparison • (Talking about the paper, showing figures from it)

  19. The Fundamental Problem • Our rendering methods are totally ad hoc; we have three different things: • Vertices • Topology • Texture • A metric that uniformly integrates these things is very difficult to construct.

  20. Complexity of Metric • The more complicated a metric is, the more difficult it is to program correctly and to be sure we are using it correctly • That our simplest possible metric should be something so complicated … that is a bad sign.

  21. Compare with Voxels • Voxel geometry representations can basically use something like PSNR directly; no need for complicated metrics • Lightfields can also (though it’s a little harder)

  22. “Digital Geometry Processing” • Work by Peter Schroeder at Caltech, and many others • Attempts to develop DSP-like ideas for geometry manipulation • Heavy use of subdivision surfaces

  23. (Overview of subdivision surfaces)

  24. How DGP works • Apply a scaled filter kernel to the neighborhood of a vertex • Like wavelet image analysis in its multiscale aspects • But unlike wavelets/DSP in that the inputs/outputs are not homogeneous • What exactly is the high-pass residual after a low-pass filter? • This is because of that whole topology-different-from-vertices thing

  25. Actual effective DGP would be …? • I don’t know. (It’s a hard problem!) • Spherical harmonics would work, for shapes representable as functions over the sphere

  26. Solutions/Details

  27. What I Use • Garland/Heckbert Error Quadric Simplification • Static Mesh Switching • I want to do a unified renderer this way (characters, terrain, big airplanes, whatever) • People seem to think crack fixing is hard but it is actually easy • Maybe that’s why people haven’t tried this yet?

  28. Discussion ofGarland/Heckbert Algorithm • (whiteboard)

  29. Garland/Heckbert References • “Surface Simplification Using Quadric Error Metrics” • “Simplifying Surfaces with Color and Texture using Quadric Error Metrics”
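The heart of the first paper is small enough to sketch: each face plane contributes a fundamental quadric K_p = p pᵀ, quadrics sum over a vertex's faces, and the cost of placing a vertex at v is vᵀQv (the helper names here are illustrative, not the paper's):

```python
def plane_quadric(a, b, c, d):
    """Fundamental quadric K_p = p p^T for the plane ax + by + cz + d = 0
    (normal (a, b, c) assumed unit length)."""
    p = (a, b, c, d)
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(q1, q2):
    """Quadrics from multiple planes simply sum, element-wise."""
    return [[q1[i][j] + q2[i][j] for j in range(4)] for i in range(4)]

def vertex_error(q, x, y, z):
    """v^T Q v with homogeneous v = (x, y, z, 1): the summed squared
    distance from the point to all accumulated planes."""
    v = (x, y, z, 1.0)
    return sum(v[i] * q[i][j] * v[j] for i in range(4) for j in range(4))
```

Simplification then repeatedly collapses the edge whose optimally placed new vertex has the lowest such error.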

  30. G/H is also useful if you are making progressive meshes • It just tells you how to collapse the mesh; it doesn’t dictate how you will use that information.

  31. Review of G/H Algorithm in Code • (looking at the code)
