
Texturing Massive Terrain


Presentation Transcript


  1. Texturing Massive Terrain • Colt McAnlis • Graphics Programmer – Blizzard • 60 minutes (ish)

  2. What are we talking about?

  3. Keys to Massive texturing • Texturing data is too large to fit into memory • Texturing data is unique • Lots of resolution • Down to maybe 1 meter per pixel
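To put a rough number on it: assuming (purely for illustration) a 32 km x 32 km world textured uniquely at 1 texel per meter, even at DXT1's 4 bits per pixel that is 32768 x 32768 x 0.5 bytes ≈ 512 MB for the top mip level alone, roughly 680 MB with a full mip chain, which is why it cannot all be resident at once.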

  4. What we’re ignoring • Vertex data • General terrain texturing issues • Low End Hardware • Review of technologies

  5. What We’re covering • Paging & Caches • DXT++ Compression • Compositing frameworks • Editing Issues • Example Based Texture Synthesis

  6. The World : So Far..

  7. What’s Visible? • Only subsection visible at a time • Non-visible areas remain on disk • New pages must be streamed in • Quickly limited by Disk I/O • Fast frustum movements kill perf • New pages occur frequently

  8. Radial Paging • Instead page in full radius around player • Only need to stream in far-away pages
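A minimal sketch of the page selection, assuming square pages on a regular grid (page size, radius, and names here are illustrative, not from the talk):

```cpp
#include <cmath>
#include <vector>

struct PageCoord { int x, z; };

// Return the page coordinates inside the streaming radius around the
// player, so pages are requested by distance rather than by whatever
// the frustum happens to touch during a fast camera swing.
std::vector<PageCoord> PagesInRadius(float playerX, float playerZ,
                                     float pageSize, float radius)
{
    std::vector<PageCoord> pages;
    int minX = (int)std::floor((playerX - radius) / pageSize);
    int maxX = (int)std::floor((playerX + radius) / pageSize);
    int minZ = (int)std::floor((playerZ - radius) / pageSize);
    int maxZ = (int)std::floor((playerZ + radius) / pageSize);

    for (int z = minZ; z <= maxZ; ++z)
        for (int x = minX; x <= maxX; ++x) {
            // Test the page centre against the radius.
            float cx = (x + 0.5f) * pageSize - playerX;
            float cz = (z + 0.5f) * pageSize - playerZ;
            if (cx * cx + cz * cz <= radius * radius)
                pages.push_back({x, z});
        }
    return pages;
}
```

Diffing this set against the pages already resident each frame yields only the far-away pages that newly enter the radius and must be streamed from disk.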

  9. Distance based resolution • Chunks stream in levels of mip-maps • As Distance changes, so does LOD • New mip levels brought in from disk • Textures typically divided across chunk bounds • Not ideal for Draw call counts..

  10. Typical Setup • Each chunk has its own mip chain • Difficult to filter across boundaries

  11. One mip to rule them.. • But we don’t need full chains at each chunk • Radial paging requires less memory • Would be nice to have easier filtering • What if we had one large mip-chain?

  12. Mip Stack • Use one texture per ‘distance’ • Resolution consistent for range • All textures are same size • As distance increases, quality decreases • Can store as 3d texture / array • Only bind 1 texture to GPU
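One way the mip stack could be represented, as a sketch: a texture array with one fixed-size slice per distance band, plus a helper that picks the band for a given viewer distance (the doubling band spacing is an assumption):

```cpp
#include <algorithm>
#include <cmath>

// One texture-array slice per distance band. Every slice has the same
// pixel dimensions, but each successive band covers a larger world area,
// so effective texel density falls off with distance.
struct MipStack {
    int   sliceSize;    // e.g. 2048x2048 texels per slice
    int   bandCount;    // number of distance bands / array slices
    float nearRadius;   // world-space radius covered by band 0

    // Pick the slice to sample for a point at `distance` from the viewer,
    // assuming each band covers twice the radius of the previous one.
    int BandForDistance(float distance) const {
        float ratio = std::max(distance / nearRadius, 1.0f);
        int band = (int)std::floor(std::log2(ratio));
        return std::min(band, bandCount - 1);
    }
};
```

Because every band lives in the same GPU resource (a texture array or 3D texture), the terrain can be drawn with a single bound texture no matter how many bands are resident.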

  13. Big textures • The benefit of this is that we can use 1 texture • Texturing no longer a reason for breaking batches • No more filtering-across-boundary issues • 1 sample at 1 level gets proper filtering • Mip mapping still poses a problem though • Since mips are separated out

  14. Mipping solution • Each ‘distance’ only needs 2 mips • Current mip, and the next smallest • At distance boundaries, mip levels should be identical. • Current distance is mipped out to next distance • Memory vs. perf vs. quality tradeoff • YMMV

  15. Mip Transition • (diagram: mip chain transitions across distance boundaries)

  16. Updating the huge texture • How do we update the texture? • GPU resource? • Should use render-to-texture to fill it. • But what about compression? • Can’t RTT to compressed target • GPU compress is limited • Not enough cycles for good quality • Shouldn’t you be GPU bound?? • So then use the CPU to fill it? • Lock + memcpy
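The slide's CPU path is lock + memcpy; here is a comparable sketch using D3D11's UpdateSubresource (the API choice and names are an assumption; any API that can update a sub-rectangle of a block-compressed texture works):

```cpp
#include <d3d11.h>
#include <cstdint>

// Copy a rectangle of pre-compressed BC1/DXT1 blocks into one slice of
// the mip-stack texture array. Coordinates must be 4-pixel aligned
// because BC1 works on 4x4 blocks, 8 bytes per block.
void UploadDxt1Region(ID3D11DeviceContext* ctx, ID3D11Texture2D* mipStack,
                      UINT arraySlice, UINT mipLevels,
                      UINT x, UINT y, UINT width, UINT height,
                      const uint8_t* dxtBlocks)
{
    D3D11_BOX box = {};
    box.left  = x;   box.right  = x + width;
    box.top   = y;   box.bottom = y + height;
    box.front = 0;   box.back   = 1;

    const UINT rowPitch    = (width / 4) * 8;   // bytes per row of 4x4 blocks
    const UINT subresource = D3D11CalcSubresource(0 /*mip*/, arraySlice, mipLevels);

    ctx->UpdateSubresource(mipStack, subresource, &box, dxtBlocks, rowPitch, 0);
}
```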

  17. What We’re covering • Paging & Caches • DXT++ Compression • Compositing frameworks • Editing Issues • Example Based Texture Synthesis

  18. Compressing Textures • Goal: Fill the large texture on the CPU • Problem: DXT is good • But other systems compress better (JPEG) • id Software: JPEG->RGBA8->DXT • Re-compressing decompressed streams • Second-generation quality artifacts can be introduced • Decompress / recompress speeds?

  19. Compressing DXT • We have to end up at GPU friendly format • Sooner or later.. • Remove the Middle man? • We would need to decompress directly to DXT • Means we need to compress the DXT data MORE • Let’s look at DXT layout

  20. DXT1: Results in 4 bpp • Each 4x4 block stores a high 565 color, a low 565 color, and 2-bit selectors • In reality you tend to have a lot of blocks: a 512x512 texture is 16K blocks…
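Written out as a struct (field names are mine), the 8-byte block the slide is describing:

```cpp
#include <cstdint>

// One DXT1/BC1 block encodes a 4x4 pixel tile in 8 bytes -> 4 bits per pixel.
struct Dxt1Block {
    uint16_t color0;     // "high" endpoint, RGB 5:6:5
    uint16_t color1;     // "low"  endpoint, RGB 5:6:5
    uint32_t selectors;  // sixteen 2-bit indices choosing between the
                         // endpoints (and the two interpolated colors)
};
static_assert(sizeof(Dxt1Block) == 8, "BC1 blocks are 8 bytes");

// A 512x512 texture is (512/4) * (512/4) = 16,384 such blocks.
```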

  21. Really, two different types of data per texture • 16-bit block colors • 2-bit selectors • Each one can be compressed even further

  22. Block Colors • (diagram: the input texture could potentially use millions of colors, but far fewer are actually used, stored as 16-bit compressed colors) • Two unique colors per block • But what if that unique color exists in other blocks? • We're duplicating data • Let's focus on trying to remove duplicates

  23. Huffman Encoding • Lossless data compression • Builds a least-bits dictionary • i.e. more frequently used values get smaller bit representations • String: AAAABBBCCD (80 bits) • Result: 00001010101101101111 (20 bits)
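A minimal sketch of deriving per-symbol code lengths for the slide's example string; in the DXT pipeline the symbols would be 16-bit block colors or packed selector values rather than characters:

```cpp
#include <iostream>
#include <map>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Build Huffman code lengths for the symbols of `input`.
int main() {
    std::string input = "AAAABBBCCD";

    std::map<char, int> freq;
    for (char c : input) ++freq[c];

    // Min-heap of (count, symbol set). Leaves carry one symbol; merged
    // nodes carry every symbol beneath them so code lengths can be bumped.
    using Node = std::pair<int, std::vector<char>>;
    auto cmp = [](const Node& a, const Node& b) { return a.first > b.first; };
    std::priority_queue<Node, std::vector<Node>, decltype(cmp)> heap(cmp);
    for (auto& [sym, count] : freq) heap.push({count, {sym}});

    std::map<char, int> codeLen;
    while (heap.size() > 1) {
        Node a = heap.top(); heap.pop();
        Node b = heap.top(); heap.pop();
        // Every symbol under a merged node gains one more bit in its code.
        for (char s : a.second) ++codeLen[s];
        for (char s : b.second) ++codeLen[s];
        a.second.insert(a.second.end(), b.second.begin(), b.second.end());
        heap.push({a.first + b.first, a.second});
    }

    int totalBits = 0;
    for (auto& [sym, count] : freq) totalBits += count * codeLen[sym];
    std::cout << "encoded size: " << totalBits << " bits\n";  // 19 bits here vs. 80 raw
}
```

With frequencies A:4, B:3, C:2, D:1 the optimal code lengths are 1, 2, 3 and 3 bits, i.e. 19 bits of payload, essentially the slide's 20-bit result.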

  24. Huffman block colors • More common colors are given shorter codes • 4096 identical 565 colors = 8 KB • Huffman encoded = 514 bytes • 4K single bits, plus one 16-bit color • Problem: As the number of unique colors increases, Huffman becomes less effective.
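Checking the slide's numbers: 4096 colors × 16 bits = 8192 bytes raw, whereas with a single unique color the Huffman stream is about one bit per block color (4096 bits = 512 bytes) plus the lone 16-bit table entry, ≈ 514 bytes.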

  25. Goal: Minimize Unique Colors • Similar colors can be quantized • Human eye won't notice • Vector Quantization • Groups large data sets into correlated groups • Can replace group elements with a single value
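A toy vector-quantization pass, sketched as plain k-means over RGB (not a production quantizer; the iteration count and seeding are arbitrary):

```cpp
#include <array>
#include <cstddef>
#include <vector>

using Color = std::array<float, 3>;  // RGB

// Quantize `colors` down to `k` representative colors with a few rounds
// of k-means. Every input color can then be replaced by its cluster's
// centroid, shrinking the set of unique block colors before Huffman coding.
std::vector<Color> QuantizeColors(const std::vector<Color>& colors, int k,
                                  int iterations = 8)
{
    // Seed centroids from evenly spaced input colors.
    std::vector<Color> centroids;
    for (int i = 0; i < k; ++i)
        centroids.push_back(colors[(i * colors.size()) / k]);

    std::vector<int> assignment(colors.size(), 0);
    for (int iter = 0; iter < iterations; ++iter) {
        // Assign each color to its nearest centroid.
        for (std::size_t c = 0; c < colors.size(); ++c) {
            float best = 1e30f;
            for (int j = 0; j < k; ++j) {
                float d = 0.0f;
                for (int ch = 0; ch < 3; ++ch) {
                    float diff = colors[c][ch] - centroids[j][ch];
                    d += diff * diff;
                }
                if (d < best) { best = d; assignment[c] = j; }
            }
        }
        // Move each centroid to the mean of its assigned colors.
        std::vector<Color> sum(k, Color{0, 0, 0});
        std::vector<int> count(k, 0);
        for (std::size_t c = 0; c < colors.size(); ++c) {
            for (int ch = 0; ch < 3; ++ch) sum[assignment[c]][ch] += colors[c][ch];
            ++count[assignment[c]];
        }
        for (int j = 0; j < k; ++j)
            if (count[j] > 0)
                for (int ch = 0; ch < 3; ++ch)
                    centroids[j][ch] = sum[j][ch] / count[j];
    }
    return centroids;
}
```

Each unique block color is then snapped to its cluster centroid, so far fewer distinct 565 values reach the Huffman coder.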

  26. Compressing Block Colors • Step #1 – Vector-quantize the unique input colors • Reduces the number of unique colors • Step #2 – Huffman-encode the quantized colors • Per DXT block, store the Huffman index rather than the 565 color. • W00t..

  27. Selector bits • Each selector block is a small number of bits • Chain 2-bit selectors together to make a larger symbol • Can use Huffman on these too!

  28. Huffman's revenge!! • 4x4 array of 2-bit values per block • Results in four 8-bit values • Might be too small to get good compression results • Or a single 32-bit value • Doesn't help much if there are a lot of unique selectors • Do tests on your data to find the ideal size • 8-bit to 16-bit works well in practice
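A sketch of re-slicing each block's 32-bit selector field into 8-bit symbols for the Huffman pass (16-bit symbols are the same idea with a wider mask):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A BC1 block stores its 4x4 selectors as one 32-bit field.
// Re-slice that field into four 8-bit symbols (one per row of the block)
// so the Huffman coder sees fewer, larger symbols than raw 2-bit values.
std::vector<uint8_t> SelectorSymbols8(const uint32_t* blockSelectors,
                                      std::size_t blockCount)
{
    std::vector<uint8_t> symbols;
    symbols.reserve(blockCount * 4);
    for (std::size_t b = 0; b < blockCount; ++b)
        for (int row = 0; row < 4; ++row)
            symbols.push_back((blockSelectors[b] >> (row * 8)) & 0xFF);
    return symbols;
}
```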

  29. Compressing DXT: rehash • DXT data is separated into block colors and selector bits • Block colors -> vector quantization -> Huffman (Huffman table + quantized block color indexes) • Selector bits -> Huffman (Huffman table + selector indexes) • Everything is written to disk

  30. Decompressing • Color indexes + Huffman table -> block colors • Selector indexes + Huffman table -> selector bits • Fill DXT blocks

  31. Results : 1024x1024 diffuse 0.7 bpp

  32. Results : 1024x1024 AO 0.07 bpp

  33. BACK UP! • Getting back to texturing.. • Insert decompressed data into mip-stack level • Can lock the mip-stack level • Update the sub-region on the CPU • Decompression not the only way..

  34. What We’re covering • Paging & Caches • DXT++ Compression • Compositing frameworks • Editing Issues • Example Based Texture Synthesis

  35. Paged data • Pages for the cache can come from anywhere • Doesn’t have to be compressed unique data • What about splatting? • Standard screenspace method • Can we use it to fill the cache?

  36. Frame buffer splatting • Splatting is a standard texturing method • Re-render terrain to screen • Bind a new texture & alpha each time • Results accumulated via blending • De facto standard for terrain texturing

  37. 2D Splatting: Compositing • The same process can work for our caching scheme • Get the same memory benefits • Don't splat to screen space • Composite to a page in the cache • What about compression? • Can't composite & compress in one step • Alpha blending + DXT compress??? • Composite->ARGB8->DXT
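A CPU-side sketch of the compositing step, assuming each splat layer is a small repeating RGBA8 tile plus a low-resolution alpha mask stretched over the page; the blended RGBA8 result would then go through the usual DXT compression before entering the cache (all names here are illustrative):

```cpp
#include <cstdint>
#include <vector>

struct Layer {
    const uint8_t* tile;       // repeating RGBA8 texture, tileSize x tileSize
    int            tileSize;
    const uint8_t* alpha;      // low-res 8-bit mask, alphaSize x alphaSize
    int            alphaSize;
};

// Accumulate splat layers into an RGBA8 page with ordinary alpha blending.
// The finished page is then compressed (CPU or GPU) and inserted in the cache.
void CompositePage(std::vector<uint8_t>& page, int pageSize,
                   const std::vector<Layer>& layers)
{
    for (int y = 0; y < pageSize; ++y)
        for (int x = 0; x < pageSize; ++x) {
            uint8_t* dst = &page[(y * pageSize + x) * 4];
            for (const Layer& l : layers) {
                // Point-sample the repeating tile and the stretched mask.
                const uint8_t* src =
                    &l.tile[((y % l.tileSize) * l.tileSize + (x % l.tileSize)) * 4];
                int ax = x * l.alphaSize / pageSize;
                int ay = y * l.alphaSize / pageSize;
                int a  = l.alpha[ay * l.alphaSize + ax];
                for (int c = 0; c < 3; ++c)
                    dst[c] = (uint8_t)((src[c] * a + dst[c] * (255 - a)) / 255);
                dst[3] = 255;
            }
        }
}
```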

  38. Why composite? • Compression is awesome • But we could get better results • Repeating textures + low-res alpha • = large memory wins • Decouples us from verts / overdraw • Which is a great thing!

  39. Why not composite? • Quality vs. perf tradeoff • Hard to get unique quality at the same perf • More blends = worse perf • Trade uniqueness for memory • Tiled features are very visible • Effectively wasting cycles • Re-creating the same asset every frame

  40. End Goal • Mix of compositing & decompression • Fun ideas for foreground / background • Switch between them based on distance • Fun ideas for low-end platforms • High end gets decompression • Low end gets compositing • Fun ideas for doing both!

  41. A really flexible pipeline.. • Disk data -> decompress -> CPU compress -> cache • 2D compositor -> GPU compress -> cache

  42. What We’re covering • Paging & Caches • DXT++ Compression • Compositing frameworks • Editing Issues • Example Based Texture Synthesis

  43. Authoring issues

  44. UR A T00L (programmer..) • Standard pipelines choke on data • Designed for 1 user -> 1 asset work • Mostly driven by source control setups • Need to address massive texturing directly

  45. Multi-user editing • Problem with allowing multiple artists to texture a planet. • 1 artist per planet is slow… • Standard Source Control concepts fail • If all texturing is in one file, it can only safely be edited by one person at a time • Solution : 2 million separate files? • Need a better setup

  46. Texture Control Server • Allows multiple users to edit texturing • User Feedback is highly important • Edited areas are highlighted immediately to other users • Highlighted means ‘has been changed’ • Highlighted means ‘you can’t change’

  47. Texturing Server • (diagram: Artist A makes a change -> texturing server -> data updated for Artist B)

  48. Custom Submission • Custom merge tool required • Each machine only checks in their sparse changes • Server handles merges before submitting to actual source control • Acts as ‘man in the middle’

  49. (diagram: Artist A and Artist B submit changes to the Texturing Server, which pushes the merged result to Source Control)
