
Image Synthesis




  1. Image Synthesis Point-Based Computer Graphics

  2. Why Points? • huge geometric complexity of current CG models • overhead introduced by the connectivity of polygonal meshes • acquisition devices generate point samples ("digital 3D photography") • points complement triangles

  3. Polynomials • Rigorous mathematical concept • Robust evaluation of geometric entities • Shape control for smooth shapes • Require proper parameterization • Discontinuity modeling • Topological flexibility

  4. Polynomials → Triangles • Piecewise linear approximations • Irregular sampling of the surface • No parameterization needed (geometry only)

  5. Triangles • Simple and efficient representation • Hardware pipelines support triangles • Advanced geometric processing • The widely accepted queen of graphics primitives • Sophisticated modeling is difficult • (Local) parameterizations still needed • Complex LOD management • Compression and streaming are highly non-trivial

  6. Triangles → Points • From piecewise linear functions to delta distributions • Discrete samples of the geometry • No connectivity or topology – the simplest representation • Store all attributes per surface sample

  7. Points • geometry complexity of current CG models • connectivity overhead of polygonal meshes • acquisition devices generate point samples • points complement triangles • holes • compression

  8. Taxonomy

  9. How can we capture reality?

  10. Acquisition • Contact digitizers – require intensive manual labor • Passive methods – require texture, Lambertian BRDF • Active light imaging systems – restrict the types of materials • in general, fuzzy, transparent, and refractive objects are difficult

  11. First Method: the Laser Range Scanner

  12. Basic Idea [Figure: laser and detector arrangement]

  13. Computing the Distance [Figure: triangulation geometry – laser at O, detector at H, baseline L, angles a and a′, distances d and d′]
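The diagram labels (laser O, detector H, baseline L, angles a and a′, distances d and d′) suggest classic laser triangulation. A minimal sketch, assuming the laser beam and the detector ray meet the baseline L at angles a and a′ respectively (the names follow the slide; the figure's exact configuration may differ) – by the law of sines, d = L·sin(a′)/sin(a + a′):

```python
import math

def triangulate(L, a, a_prime):
    """Distance d from the laser origin O to the hit point.

    Laser, detector, and hit point form a triangle with baseline L;
    the angle at the hit point is pi - a - a', so by the law of sines
    d / sin(a') = L / sin(a + a').
    """
    return L * math.sin(a_prime) / math.sin(a + a_prime)

# Symmetric 45-degree setup: the hit point sits above the middle of the
# baseline, at distance L / sqrt(2) from the laser.
d = triangulate(1.0, math.pi / 4, math.pi / 4)
```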

  14. Scattering Issues How optically cooperative is marble?

  15. Image-based Acquisition

  16. Image-based Acquisition – Stage 2

  17. Image-based Acquisition – Stage 3

  18. IBA - Process

  19. Visual Hull

  20. Visual Hull • the quality of the visual-hull geometry is a function of the number of viewpoints/silhouettes • the method is unable to capture all concavities • image-based lighting
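The concavity limitation is easy to see in a toy 2D version of silhouette carving (a hypothetical illustration, not the pipeline from the slides): a cell survives only if every view sees something along its row or column, so an L-shaped object carves to a full square.

```python
import numpy as np

# 2D toy: carve a "visual hull" from two axis-aligned orthographic views.
# An L-shaped object -- its concave corner cannot be recovered.
obj = np.array([[1, 0],
                [1, 1]], dtype=bool)

sil_rows = obj.any(axis=1)   # silhouette seen along one axis (per row)
sil_cols = obj.any(axis=0)   # silhouette seen along the other (per column)

# Keep a cell only if both silhouettes cover its row/column.
hull = np.logical_and.outer(sil_rows, sil_cols)

# The hull always contains the object, but here it is strictly larger:
# 4 cells versus the object's 3 -- the concavity is filled in.
```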

  21. Point-Based Rendering: Surfels (surface elements)

  22. Extended Surfels

  23. Rendering Pipeline

  24. Uniform Reconstruction • For uniform samples, use signal processing theory • Reconstruction by convolution with a low-pass (reconstruction) filter • Exact reconstruction of band-limited signals using ideal low-pass filters
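A minimal 1D sketch of this idea, assuming uniform spacing T and the ideal sinc low-pass kernel (Whittaker–Shannon interpolation):

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Sum of shifted sinc kernels centered on the uniform samples.

    Exact for band-limited signals sampled above the Nyquist rate
    (up to truncation error from the finite sample window).
    """
    n = np.arange(len(samples))
    return float(np.sum(samples * np.sinc((t - n * T) / T)))

T = 0.1
n = np.arange(200)
samples = np.sin(2 * np.pi * 1.0 * n * T)  # 1 Hz sine; Nyquist limit is 5 Hz
```

At a sample location the shifted sincs reduce to a delta, so the sample is recovered exactly; between samples the finite sum is still very close to the underlying sine.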

  25. Non-Uniform Reconstruction • Signal processing theory not applicable for non-uniform samples • Local weighted-average filtering • Normalized sum of local reconstruction kernels
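The "normalized sum of local kernels" can be sketched in 1D with Gaussian kernels (a hypothetical choice of kernel and bandwidth); the normalization makes the weights sum to one, so constants are reproduced exactly even for irregular sample positions:

```python
import numpy as np

def normalized_reconstruction(x_samples, f_samples, x, sigma=0.2):
    """Local weighted average: sum of kernels divided by the sum of weights."""
    w = np.exp(-0.5 * ((x - x_samples) / sigma) ** 2)
    return float(np.sum(w * f_samples) / np.sum(w))

xs = np.array([0.0, 0.13, 0.55, 0.61, 1.0])  # non-uniform sample positions
fs = np.full_like(xs, 3.0)                   # samples of a constant function
```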

  26. Reconstruction 1D in 2D

  27. Reconstruction 2D in 3D

  28. Algorithm
  for each sample point {
      shade surface sample;
      splat = projected reconstruction kernel;
      rasterize and accumulate splat;
  }
  for each output pixel {
      normalize;
  }
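The accumulate-then-normalize loop can be sketched in software (a hypothetical NumPy version using circular Gaussian kernels; the actual pipeline rasterizes projected, generally elliptical, kernels):

```python
import numpy as np

def splat_points(points, colors, size=8, sigma=0.6):
    """Accumulate Gaussian splats per pixel, then normalize.

    points: (N, 2) pixel-space positions; colors: (N, 3) RGB.
    """
    acc = np.zeros((size, size, 3))   # weighted color sum
    wsum = np.zeros((size, size))     # accumulated kernel weights
    ys, xs = np.mgrid[0:size, 0:size]
    for (px, py), c in zip(points, colors):
        w = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
        acc += w[..., None] * c       # rasterize and accumulate splat
        wsum += w
    img = np.zeros_like(acc)
    ok = wsum > 1e-8
    img[ok] = acc[ok] / wsum[ok, None]   # per-pixel normalization pass
    return img
```

A single red splat normalizes back to pure red wherever its kernel has support, which is exactly what the normalization pass is for.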

  29. Results [Figures: without normalization vs. with normalization]

  30. Visibility ε-z-buffering

  31. Implementation Use a three-pass algorithm. First pass: • draw a depth image with a small depth offset ε away from the viewpoint • perform regular z-buffering (depth tests and updates), but no color updates

  32. Second Pass • Draw colored splats with additive blending enabled • Perform depth tests, but no updates • Accumulate • Weighted colors of visible splats in the color channels • Weights of visible footprint functions in the alpha channel

  33. Third Pass • Normalization of the color channels by dividing by the alpha channel • Implemented by • rendering to a texture • drawing a screen-filling quad with this texture • performing the normalization in the pixel shader
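The three passes can be emulated on the CPU as a sanity check (a deliberately simplified sketch: one-pixel splats instead of rasterized footprints, and hypothetical parameter names):

```python
import numpy as np

def three_pass_splatting(splats, w, h, eps=0.1):
    """CPU emulation of epsilon-z-buffer splatting.

    splats: list of (x, y, z, weight, rgb) single-pixel splats.
    """
    depth = np.full((h, w), np.inf)
    color = np.zeros((h, w, 3))
    alpha = np.zeros((h, w))

    # Pass 1: depth only, offset by eps away from the viewer.
    for x, y, z, wt, c in splats:
        depth[y, x] = min(depth[y, x], z + eps)

    # Pass 2: additively blend splats that pass the depth test (no z update).
    for x, y, z, wt, c in splats:
        if z <= depth[y, x]:
            color[y, x] += wt * np.asarray(c, dtype=float)
            alpha[y, x] += wt

    # Pass 3: normalize the color channels by the accumulated alpha.
    ok = alpha > 0
    color[ok] /= alpha[ok, None]
    return color

# Two splats within the eps band blend 50/50; the one behind is rejected.
splats = [(0, 0, 1.00, 1.0, (1, 0, 0)),
          (0, 0, 1.05, 1.0, (0, 0, 1)),
          (0, 0, 2.00, 1.0, (0, 1, 0))]
out = three_pass_splatting(splats, 1, 1)
```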

  34. Efficient Data Structures DuoDecim – A Structure for Point Scan Compression and Rendering

  35. a.d. 1500 • created beautiful statues … but failed to make them portable • David: 434 cm • Atlas: 208 cm • Barbuto: 248 cm • … Florence, Galleria dell'Accademia

  36. Jens Krüger – Computer Graphics and Visualization Group, TU München a.d. 1999 • Marc Levoy et al. (Stanford University) did a great job scanning the statues … but the results are still not “portable” • David: 1.1 GB • Atlas: 10 GB • … All in all, 32 GB of raw data! Image courtesy of Marc Levoy

  37. Today … we present novel algorithms to make the statues of Michelangelo „portable“ and „viewable“ on PCs.

  38. The Numbers The Atlas Statue Scan • size of raw scans: approx. 600 million points • reconstruction: at 1/4 mm • size of reconstruction: approx. 250 million vertices, approx. 500 million triangles • file size (without normals): 9.94 GB „Your Home PC“ • main memory: 1–2 GB • graphics card memory: 0.125–0.5 GB

  39. 230 MB vs. 10 GB • rendering and decoding time at full resolution: 4 sec.

  40. The Key Idea CPU • Sample the point scan into a regular grid • Divide the grid up into 2D slices • Look for connected runs within the slices • Store the starting position/normal per run • Store position/normal delta for the rest GPU • Store the compressed runs into textures • Upload the compressed runs onto the GPU • Decode the points with normals on the fly

  41. 1. Sample the point scan into a regular grid Hexagonal close sphere packing (HCP) grid
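A sketch of HCP cell centers using the standard close-packing coordinates (the paper's actual grid construction may differ); alternating hexagonal layers give the densest sphere packing, so neighboring cell centers are exactly two radii apart:

```python
import math

def hcp_centers(n, r=0.5):
    """Sphere centers of a hexagonal close packing with radius r (n^3 cells)."""
    pts = []
    for k in range(n):          # layer (alternating A/B)
        for j in range(n):      # row within the layer
            for i in range(n):  # column within the row
                x = (2 * i + (j + k) % 2) * r
                y = math.sqrt(3) * (j + (k % 2) / 3) * r
                z = (2 * math.sqrt(6) / 3) * k * r
                pts.append((x, y, z))
    return pts

# Minimum center-to-center distance equals the sphere diameter 2r.
pts = hcp_centers(3, r=0.5)
dmin = min(math.dist(p, q) for p in pts for q in pts if p != q)
```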

  42. Cell Search (HCP) 2D Simplification

  43. 2. Divide the grid up into 2D slices

  44. 3. Look for connected runs within the slices [Figure: cells of a slice labeled a–q; a traversal such as abc defghgijigfedklmnopoq is split into connected runs: abc, def, gh, gij, dklmn, op, oq]
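The run-finding step can be sketched in 1D (a hypothetical helper, not DuoDecim's actual code): scan a slice row and group consecutive occupied cells, so each run can later be stored as one full start sample plus small deltas.

```python
def connected_runs(row):
    """Split one slice row into runs of consecutive occupied cells.

    row: iterable of 0/1 occupancy flags.
    Returns a list of (start_index, length) pairs, one per run.
    """
    runs, start = [], None
    for i, occ in enumerate(row):
        if occ and start is None:
            start = i                        # a new run begins
        elif not occ and start is not None:
            runs.append((start, i - start))  # the current run ends
            start = None
    if start is not None:                    # run extends to the row's end
        runs.append((start, len(row) - start))
    return runs
```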

  45. 3. Connected runs (cont.)

  46. 4. Store the starting position/normal per run • start position → two 16-bit indices per point • one 16-bit index per slice • start normal → one 16-bit index • one 16-bit codebook per dataset

  47. 5. Store position/normal deltas for the rest • delta position → 6 cases → 2.25 bits • delta normal → 5-bit index • one 5-bit codebook per dataset • contains sin/cos of the delta angles
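The codebook idea can be sketched as simple scalar quantization (an assumed illustration: uniform delta angles and names like `encode`/`decode` are mine, not DuoDecim's): each point stores only a 5-bit index into a 32-entry table of delta angles instead of a full normal.

```python
import math

# Hypothetical per-dataset codebook: 32 uniformly spaced delta angles.
codebook = [i * (2 * math.pi / 32) - math.pi for i in range(32)]

def encode(delta_angle):
    """5-bit index of the nearest codebook entry."""
    return min(range(32), key=lambda i: abs(codebook[i] - delta_angle))

def decode(index):
    """Reconstruct the delta angle from its 5-bit index."""
    return codebook[index]

# Round-trip error is bounded by half a quantization step.
err = abs(decode(encode(0.3)) - 0.3)
```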

  48. Rendering GPU Decoding and Rendering • Store the compressed runs into textures • Upload the compressed runs onto the GPU • Decode the points with normals on the fly

  49. 1. Store the compressed runs into textures • start position/normal: 16-bit (RG)(B) • delta position/normal: 8-bit (3+5) • per slice: 16-bit height, 8 floats to the GPU

  50. 3. Decode the points with normals on the fly • start position/normal → position and normal • render as points via: PBO/VBO copy, cast to vertex array, or vertex texture fetch
