Efficient Volume Visualization of Large Medical Datasets



  1. Efficient Volume Visualization of Large Medical Datasets. Stefan Bruckner, Institute of Computer Graphics and Algorithms, Vienna University of Technology

  2. Motivation • Volume visualization: an important tool in medical environments • CT angiography run-offs (> 1000 slices) are used in clinical practice • Scanner resolutions are getting higher (1024 x 1024 per slice) → Memory access is increasingly becoming a bottleneck

  3. Outline • Memory hierarchy • Linear memory layouts • Bricked memory layouts • Gradient caching • Empty space skipping • Results

  4. Memory Hierarchy • Hierarchy of successively larger but slower memory technologies • Avoid frequent access to higher levels (such as main memory) • Exploit spatial and temporal locality [Figure: memory hierarchy from the CPU through L1 cache, L2 cache, and main memory to the hard disk]

  5. Linear Memory Layout • Store the volume as a stack of 2D images (slices) • Bad cache behavior for different viewing directions [Figure: rays traversing the slice-by-slice volume]
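
As an aside (not part of the original slides), here is a minimal C++ sketch of the slice-stacked layout described above; the class and member names are illustrative assumptions:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Volume stored as a stack of 2D slices, addressed as x + y*dimX + z*dimX*dimY.
struct LinearVolume {
    std::size_t dimX, dimY, dimZ;
    std::vector<std::uint16_t> data;  // e.g. 12-bit CT values stored in 16-bit words

    LinearVolume(std::size_t x, std::size_t y, std::size_t z)
        : dimX(x), dimY(y), dimZ(z), data(x * y * z) {}

    // Neighbors along x are adjacent in memory, but stepping along z jumps
    // dimX*dimY voxels -- one reason cache behavior depends so strongly on
    // the viewing direction.
    std::uint16_t& at(std::size_t x, std::size_t y, std::size_t z) {
        return data[x + dimX * (y + dimY * z)];
    }
};
```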

  6. Bricked Memory Layout (1) • Store the volume as a set of equally sized cubes (bricks) • Nearly constant cache behavior for all viewing directions [Figure: rays traversing the bricked volume]
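
For comparison, a hedged sketch of bricked addressing: a voxel address splits into a brick index and an intra-brick offset, with each brick stored contiguously. The brick edge length of 32 (64 KB bricks of 16-bit voxels) is an assumption chosen to match the brick-size range discussed later, not necessarily the talk's exact value.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch only: the volume decomposed into cubic bricks of BRICK^3 voxels,
// each brick stored contiguously. The brick size is an assumption
// (32^3 * 2 bytes = 64 KB).
constexpr std::size_t BRICK = 32;

struct BrickedVolume {
    std::size_t bricksX, bricksY, bricksZ;   // volume size measured in bricks
    std::vector<std::uint16_t> data;         // all bricks back to back

    BrickedVolume(std::size_t bx, std::size_t by, std::size_t bz)
        : bricksX(bx), bricksY(by), bricksZ(bz),
          data(bx * by * bz * BRICK * BRICK * BRICK) {}

    std::uint16_t& at(std::size_t x, std::size_t y, std::size_t z) {
        // Split the voxel address into a brick index ...
        std::size_t bx = x / BRICK, by = y / BRICK, bz = z / BRICK;
        // ... and an offset inside that brick.
        std::size_t ix = x % BRICK, iy = y % BRICK, iz = z % BRICK;
        std::size_t brick  = bx + bricksX * (by + bricksY * bz);
        std::size_t offset = ix + BRICK * (iy + BRICK * iz);
        return data[brick * BRICK * BRICK * BRICK + offset];
    }
};
```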

  7. Bricked Memory Layout (2) • Process all resample locations within a brick before moving on to the next one • Each brick is loaded from memory only once [Figure: rays intersecting the bricked volume]

  8. Bricked Memory Layout (3) • Process all resample locations within a brick before moving on to the next one • Brick-wise processing scheme [Figure: bricks numbered 1 to 9 in processing order, traversed by the rays]
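
The brick-wise processing scheme can be outlined roughly as follows; this is a structural sketch under assumed data structures (RaySegment, per-brick segment lists), not the paper's actual renderer:

```cpp
#include <vector>

// Structural sketch (data structures assumed): the outer loop runs over
// bricks in front-to-back order for the current view, and the inner loops
// resample every ray segment that intersects the active brick, so each
// brick is loaded and traversed only once while it is cache-resident.
struct RaySegment { int rayId; float tEntry, tExit; };

void renderBrickWise(const std::vector<int>& bricksFrontToBack,
                     const std::vector<std::vector<RaySegment>>& segmentsPerBrick,
                     float step)
{
    for (int brick : bricksFrontToBack) {
        for (const RaySegment& seg : segmentsPerBrick[brick]) {
            for (float t = seg.tEntry; t < seg.tExit; t += step) {
                // resample the volume at parameter t along ray seg.rayId,
                // then shade and composite the sample ...
            }
        }
    }
}
```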

  9. Bricked Memory Layout (4) • How to efficiently access neighboring samples? • Problem: a certain neighborhood of samples is needed at every resample location • Offsets to neighboring samples are constant in a linear volume layout • More complicated for bricked volume layouts

  10. Bricked Memory Layout (5) • How to efficiently access neighboring samples? [Figure: sample neighborhood crossing a brick boundary]

  11. Bricked Memory Layout (6) • How to efficiently access neighboring samples? • 27 distinct cases in 3D for a 26-neighborhood • Determine the case from the current position within the brick • Offsets to neighboring samples are stored in a lookup table
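
A possible realization of the lookup-table idea, with names and structure assumed: each axis position within the brick is classified as low face, interior, or high face, giving 3^3 = 27 cases, and a per-case table of neighbor offsets replaces per-sample boundary tests.

```cpp
#include <cstddef>

// Illustrative sketch (names assumed): classify the sample's position within
// its brick along each axis as 0 = low face, 1 = interior, 2 = high face.
// This gives 3*3*3 = 27 cases; a per-case table of precomputed offsets to all
// 26 neighbors replaces boundary tests in the inner loop.
constexpr std::size_t BRICK = 32;

inline int axisCase(std::size_t i) {
    return (i == 0) ? 0 : (i == BRICK - 1) ? 2 : 1;
}

inline int neighborCase(std::size_t ix, std::size_t iy, std::size_t iz) {
    return axisCase(ix) + 3 * axisCase(iy) + 9 * axisCase(iz);  // 0 .. 26
}

// offsetTable[c][n] = signed memory offset from the current voxel to its n-th
// neighbor for case c, filled once from the brick geometry at startup.
std::ptrdiff_t offsetTable[27][26];
```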

  12. Linear vs. Bricked Memory Layout [Figure: speedup factor (1 to 4) over the linear volume layout as a function of brick size in KB (1 to 32768); the curve peaks at a speedup of 2.8 for the optimal brick size and falls off elsewhere due to cache thrashing and bricking overhead]

  13. Gradient Caching • Pre-computed gradients: for sufficient quality, memory requirements are at least doubled • Instead, compute gradients on the fly • Caching has to be performed: brick-wise traversal is beneficial • Store gradients in a brick-sized cache
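
A hedged sketch of per-brick gradient caching (the interface is assumed): gradients are computed on the fly by central differences and kept in a brick-sized cache with a valid flag, so each gradient is computed at most once while its brick is being processed.

```cpp
#include <array>

// Hedged sketch of per-brick gradient caching (interface assumed).
constexpr int BRICK = 32;

struct Vec3 { float x, y, z; };

struct GradientCache {
    std::array<Vec3, BRICK * BRICK * BRICK> grad{};
    std::array<bool, BRICK * BRICK * BRICK> valid{};

    void reset() { valid.fill(false); }   // call when moving to a new brick

    // sample(x, y, z) returns the scalar value at brick-local coordinates,
    // resolving neighbors across brick boundaries (e.g. via the offset table).
    template <typename SampleFn>
    Vec3 get(int ix, int iy, int iz, SampleFn sample) {
        int idx = ix + BRICK * (iy + BRICK * iz);
        if (!valid[idx]) {
            // central differences, computed only on first use within the brick
            grad[idx] = Vec3{
                0.5f * (sample(ix + 1, iy, iz) - sample(ix - 1, iy, iz)),
                0.5f * (sample(ix, iy + 1, iz) - sample(ix, iy - 1, iz)),
                0.5f * (sample(ix, iy, iz + 1) - sample(ix, iy, iz - 1))};
            valid[idx] = true;
        }
        return grad[idx];
    }
};
```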

  14. Empty Space Skipping • Medical datasets contain large empty regions: how do we quickly traverse this empty space? • Project all non-transparent bricks onto the image plane to find the first entry points of rays • For finer resolution, use a min-max octree per brick and project the octree • At cell level, store one bit for each cell classified as transparent to quickly skip it
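
One way to implement the brick classification step, sketched under assumed data structures: each brick keeps the min/max voxel value it contains, and a brick counts as transparent if the transfer function assigns zero opacity to its entire value range (tested here via a precomputed summed-opacity table).

```cpp
#include <cstdint>
#include <vector>

// Sketch of the brick classification step (data structures assumed).
struct BrickInfo { std::uint16_t minVal, maxVal; };

// summedOpacity[v] = sum of opacities of all values 0..v, precomputed from the
// transfer function (one entry per possible voxel value).
bool isTransparent(const BrickInfo& b, const std::vector<float>& summedOpacity) {
    float below = (b.minVal > 0) ? summedOpacity[b.minVal - 1] : 0.0f;
    return summedOpacity[b.maxVal] - below <= 0.0f;
}

// Only the bricks returned here are projected onto the image plane to find the
// rays' first entry points; fully transparent bricks are skipped.
std::vector<int> nonTransparentBricks(const std::vector<BrickInfo>& bricks,
                                      const std::vector<float>& summedOpacity) {
    std::vector<int> visible;
    for (int i = 0; i < static_cast<int>(bricks.size()); ++i)
        if (!isTransparent(bricks[i], summedOpacity))
            visible.push_back(i);
    return visible;
}
```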

  15. Results (1)

  16. Results (2) • Visible Male dataset (587 x 341 x 1878) • Intel Pentium M 1600 MHz (software capture)

  17. Conclusions • Alternative memory layouts are the key to handling large datasets • Sub-second frame rates for large datasets on a standard notebook • Fully interactive volume visualization of large data on commodity hardware is within reach • Future work: use the bricked memory layout for compression and out-of-core rendering
