
# KIPA Game Engine Seminars






### KIPA Game Engine Seminars

Day 6

Jonathan Blow

Ajou University

December 2, 2002

Level-of-Detail Method Overview

• Traditional Purpose: Speed Boost

• Ideal: Render a fixed number of triangles always

• Doesn’t matter how far your view stretches into the distance

• Diagram of pixel tessellation

• Object detail / triangle count as a function of distance
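The reasoning behind a fixed triangle budget: an object's projected size falls off roughly as 1/distance, so keeping the on-screen size of triangles constant means the per-object budget should fall off roughly as 1/distance². A minimal sketch of that scaling (the function and its parameters are illustrative, not from the slides):

```cpp
#include <algorithm>

// Illustrative only: how many triangles to spend on an object so that its
// average projected triangle size stays roughly constant on screen.
int triangle_budget(int full_detail_triangles, float distance, float reference_distance) {
    if (distance <= reference_distance)
        return full_detail_triangles;            // close up: use full detail
    float scale = reference_distance / distance; // projected size  ~ 1 / distance
    float budget = full_detail_triangles * scale * scale; // projected area ~ 1 / distance^2
    return std::max(1, (int)budget);
}
```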

Future Purpose: Geometric Antialiasing

• Discussion of scenes with many small objects far away

• When projecting geometry onto the screen, we do not get antialiasing for free; we need to implement something that provides it for us

• Static mesh switching

• Progressive mesh

• Continuous-LOD mesh

• Issues involving big objects (static and progressive mesh not good enough?)

• Pre-generate a series of meshes decreasing in detail

• Switch between them based on z distance of the mesh from the camera

• Perhaps be more analytical and switch based on max. projected pixel error?

• Nobody actually does this because it is far too conservative
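A minimal sketch of the switching rules mentioned above: each pre-generated level carries a geometric error, that error is projected into pixels with the usual perspective approximation, and the selection loop picks the coarsest acceptable mesh. The names and structure are illustrative, not taken from the seminar:

```cpp
#include <cmath>
#include <vector>

struct LodMesh {
    float geometric_error;   // max deviation (world units) from the full-detail mesh
    // vertex / index buffers would live here
};

// Approximate projected error in pixels at a given viewing distance:
// pixels = error * (screen_height / 2) / (distance * tan(fov_y / 2)).
float projected_error_pixels(float geometric_error, float distance,
                             float screen_height, float fov_y_radians) {
    return geometric_error * (screen_height * 0.5f) /
           (distance * std::tan(fov_y_radians * 0.5f));
}

// Pick the coarsest pre-generated mesh whose projected error stays under the tolerance.
int select_static_lod(const std::vector<LodMesh>& lods, float distance,
                      float screen_height, float fov_y_radians, float max_pixel_error) {
    for (int i = (int)lods.size() - 1; i > 0; --i)
        if (projected_error_pixels(lods[i].geometric_error, distance,
                                   screen_height, fov_y_radians) <= max_pixel_error)
            return i;   // coarsest acceptable level
    return 0;           // nothing coarse enough is acceptable: use full detail
}
```

Switching purely on z distance is the simpler case where the thresholds are chosen by hand instead of derived from an error projection like this.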

• Generate one sequence of collapses that takes you from high-res to 1 triangle

• Dynamically select number of triangles at runtime

• Works well with modern 3D hardware since you only modify a little bit of the index buffer at a time.

• Relies on frame coherence (bad!)

• Interferes with triangle stripping and vertex cache sorting (they become mutually exclusive).

• High code complexity; it makes everything else more complicated and adds restrictions to the rest of the engine

• Example of normal map generation restricted to object space
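A minimal sketch of the runtime side of a progressive mesh as described in the bullets above: a precomputed collapse sequence is walked forward or backward to hit a target triangle count. The data layout is simplified and hypothetical; a real implementation patches the index buffer incrementally at each step.

```cpp
#include <vector>

// One precomputed collapse step: merging one vertex into another removes a
// small number of triangles (typically one or two).
struct EdgeCollapse {
    int from_vertex;
    int to_vertex;
    int triangles_removed;
};

struct ProgressiveMesh {
    std::vector<EdgeCollapse> collapses;   // ordered from finest to coarsest
    int current_triangle_count = 0;        // initialize to the full-detail triangle count
    int applied = 0;                       // how many collapses are currently applied

    // Coarsen or refine until the mesh is as close to the target as the sequence allows.
    void set_target_triangles(int target) {
        while (applied < (int)collapses.size() && current_triangle_count > target) {
            current_triangle_count -= collapses[applied].triangles_removed;  // apply collapse
            ++applied;
        }
        while (applied > 0 &&
               current_triangle_count + collapses[applied - 1].triangles_removed <= target) {
            --applied;                                                       // undo collapse (vertex split)
            current_triangle_count += collapses[applied].triangles_removed;
        }
    }
};
```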

Continuous Level-of-Detail Algorithms

• Lindstrom-Koller, ROAM, Rottger quadtree algorithm

• Dynamically update tessellation based on estimate of screen-space error

• Crack fixing between adjacent blocks, etc

• Example of binary triangle trees

• There are other formats (quadtree, diamond, etc) but the ideas are similar

• Extremely complicated implementations

• Slow on modern hardware

• Extreme reliance on frame coherence (bad!)

• Not conducive to unified rendering (hard to make work on curved surfaces, arbitrary topologies)

• Has a lot of hype in the amateur and academic communities

• Is currently not competitive with other LOD approaches

• This is not likely to change any time soon
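For reference, the decision all of these algorithms revolve around is the per-node screen-space error test mentioned above; a hedged sketch of that test (not any particular paper's exact formula, and omitting crack fixing and the frame-coherent priority queues):

```cpp
#include <cmath>

// A terrain node stores the world-space error committed if it is NOT subdivided;
// the tessellation is refined wherever that error would project to too many pixels.
struct TerrainNode {
    float geometric_error;   // precomputed, world units
    float distance_to_eye;   // updated per frame
};

bool needs_split(const TerrainNode& n, float screen_height,
                 float fov_y_radians, float pixel_tolerance) {
    float pixels_per_world_unit = (screen_height * 0.5f) /
                                  (n.distance_to_eye * std::tan(fov_y_radians * 0.5f));
    return n.geometric_error * pixels_per_world_unit > pixel_tolerance;
}
```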

### LOD Metrics

• We need an effective way to benchmark / judge LOD schemes

• The academic world is not really doing this right now!

• We need a standard set of data with comparable results

• University of Waterloo Brag Zone for image compression

• We often create metrics for taking each small step in a geometric reduction

• We don’t have a metric for comparing a fully reduced mesh with the source model or another reduced mesh

• Because our mesh representations are so ad hoc

Image Compression guys have a metric

• (even though they know it’s not that good)

• PSNR measures difference between compressed image and original

• They know it has problems (not perceptually driven) and are working on a better metric

• But at least they have a way of comparing results, which means they are sort of doing science!
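PSNR itself is a one-line formula over per-pixel differences; a minimal sketch for 8-bit grayscale images of equal size (this is the standard definition, included only for reference):

```cpp
#include <cmath>
#include <vector>

// PSNR = 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit data and MSE the mean
// squared per-pixel difference.  Higher is better; identical images give +infinity.
double psnr(const std::vector<unsigned char>& a, const std::vector<unsigned char>& b) {
    double mse = 0.0;
    for (size_t i = 0; i < a.size(); ++i) {
        double d = (double)a[i] - (double)b[i];
        mse += d * d;
    }
    mse /= (double)a.size();
    return 10.0 * std::log10(255.0 * 255.0 / mse);
}
```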

• “Sum of closest-point distances”

• Continuous, which is good

• Very expensive to compute

• Non-monotonic (!), which is bad

• Monotonic for small changes, usually, which might be good enough

• Ignores texture warping, which is bad

• Unless we try it in 5-dimensional space

• Ignores vertex placement

• Important for rasterization (iterated vertex properties!)

• Example of big flat area

• Ignores cracks in destination model
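A crude sketch of what such a metric looks like in code, to make the cost concrete: sample points on one mesh, find the nearest point of the other, and accumulate. For brevity this compares only against the other mesh's vertices and scans them all; a real metric measures distance to the surface, usually in both directions, and uses a spatial index:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };

static float dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Sum over sample points on mesh A of the distance to the closest vertex of mesh B.
// O(n * m) brute force; illustration only.
double sum_of_closest_point_distances(const std::vector<Vec3>& samples_on_a,
                                      const std::vector<Vec3>& vertices_of_b) {
    double total = 0.0;
    for (const Vec3& p : samples_on_a) {
        float best = std::numeric_limits<float>::max();
        for (const Vec3& q : vertices_of_b)
            best = std::min(best, dist(p, q));
        total += best;
    }
    return total;
}
```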

Lindstrom/Turk screen-space LOD comparison

• Guide compression of a mesh by taking snapshots of it from many different viewpoints and PSNR’ing the images

• This can work but PSNR is not necessarily stable with respect to small image-space motions

Lindstrom/Turk screen-space LOD comparison

• (Talking about paper, showing figures from it)

• Our rendering methods are totally ad-hoc; we have 3 different things:

• Vertices

• Topology

• Texture

• A metric that uniformly integrates these things is very difficult.

• The more complicated a metric is, the more difficult it is to program correctly and to be sure we are using it correctly

• That our simplest possible metric should be something so complicated … that is a bad sign.

• Voxel geometry representations can basically use something like PSNR directly; no need for complicated metrics

• Lightfields can also (though it’s a little harder)

• Work by Peter Schroeder at Caltech, and many others

• Attempts to develop DSP-like ideas for geometry manipulation

• Heavy use of subdivision surfaces

• Apply a scaled filter kernel to the neighborhood of a vertex

• Like wavelet image analysis in its multiscale aspects

• But unlike wavelets/DSP in that the inputs/outputs are not homogeneous

• What exactly is the high-pass residual after a low-pass filter?

• This is because of that whole topology-different-from-vertices thing

• I don’t know. (It’s a hard problem!)

• Spherical harmonics would work, for shapes representable as functions over the sphere
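One concrete instance of "apply a scaled filter kernel to the neighborhood of a vertex" is plain Laplacian (umbrella) smoothing, the simplest mesh analogue of a low-pass filter. A hedged sketch with a made-up adjacency representation; it does not address the residual/topology questions raised above:

```cpp
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

// One smoothing pass: move each vertex a fraction lambda of the way toward the
// average of its 1-ring neighbors.  neighbors[i] lists the vertex indices
// adjacent to vertex i.
std::vector<Vec3> smooth_once(const std::vector<Vec3>& positions,
                              const std::vector<std::vector<int>>& neighbors,
                              float lambda) {
    std::vector<Vec3> out = positions;
    for (size_t i = 0; i < positions.size(); ++i) {
        if (neighbors[i].empty()) continue;
        Vec3 avg;
        for (int j : neighbors[i]) {
            avg.x += positions[j].x;
            avg.y += positions[j].y;
            avg.z += positions[j].z;
        }
        float inv = 1.0f / (float)neighbors[i].size();
        out[i].x += lambda * (avg.x * inv - positions[i].x);
        out[i].y += lambda * (avg.y * inv - positions[i].y);
        out[i].z += lambda * (avg.z * inv - positions[i].z);
    }
    return out;
}
```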

### Solutions/Details

• Static Mesh Switching

• I want to do a unified renderer this way (characters, terrain, big airplanes, whatever)

• People seem to think crack fixing is hard but it is actually easy

• Maybe that’s why people haven’t tried this yet?

Discussion of Garland/Heckbert Algorithm

• (whiteboard)

• “Surface Simplification Using Quadric Error Metrics”

• “Simplifying Surfaces with Color and Texture using Quadric Error Metrics”

• It just tells you how to collapse the mesh; doesn’t dictate how you will use that information.
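The heart of those papers is the quadric error metric itself: each plane contributes a 4x4 quadric K = p p^T, a vertex's quadric Q is the sum of the K's of its incident triangles' planes, and v^T Q v is the cost of placing the merged vertex at v. A sketch of just that bookkeeping (the collapse ordering, the optimal-placement solve, and the color/texture extension are not shown):

```cpp
// Quadric for planes a*x + b*y + c*z + d = 0 with unit normal (a, b, c).
// v^T Q v, with v = (x, y, z, 1), is the sum of squared distances from the
// point (x, y, z) to all planes accumulated into Q.
struct Quadric {
    double m[4][4] = {};

    void add_plane(double a, double b, double c, double d) {
        double p[4] = { a, b, c, d };
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                m[i][j] += p[i] * p[j];
    }

    void add(const Quadric& o) {           // quadric of a merged vertex = sum of quadrics
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                m[i][j] += o.m[i][j];
    }

    double error(double x, double y, double z) const {
        double v[4] = { x, y, z, 1.0 };
        double e = 0.0;
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                e += v[i] * m[i][j] * v[j];
        return e;
    }
};
```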

Review of GH Algorithm in Code

• (looking at the code)