
Distributed Shared Memory for Roaming Large Volumes


Presentation Transcript


  1. Distributed Shared Memory for Roaming Large Volumes Laurent Castanié (Earth Decision / Paradigm – INRIA Lorraine – Project ALICE) Christophe Mion (INRIA Lorraine – Project ALICE) Xavier Cavin (INRIA Lorraine – Project ALICE) Bruno Lévy (INRIA Lorraine – Project ALICE)

  2. Outline • Introduction • Large volumes in the Oil and Gas EP domain • Previous work: single-workstation cache system • COTS cluster solution: DHCS • Distributed volume rendering • Distributed data management • Real-time roaming in gigantic data sets with DHCS • Conclusions

  3. Outline • Introduction • Large volumes in the Oil and Gas EP domain • Previous work: single-workstation cache system • COTS cluster solution: DHCS • Distributed volume rendering • Distributed data management • Real-time roaming in gigantic data sets with DHCS • Conclusions

  4. Introduction: Interpretation scales in Oil and Gas EP
     • Regional scale (10,000-30,000 km²): targeted volume 4000x5000x5000 (~100 GB), targeted ROI 1000x1000x1000 (~1 GB)
     • Reservoir scale (100-300 km²): typical volume 300x400x400 (~50 MB), typical ROI 100x200x200 (~4 MB)
     • The targeted ROI is ~250x the typical reservoir scale ROI

  5. Introduction: OOC visualization on a single workstation
     • Data volume on disk: 100 GB
     • Workstation RAM: 8 GB (~8% of the data volume)
     • Graphics card V-RAM: 512 MB (~6% of the RAM, ~0.5% of the data volume)

  6. Introduction: OOC visualization on a single workstation
     Probe-based roaming systems with LRU volume paging:
     • Bhaniramka and Demange, IEEE VolVis 2002: OpenGL Volumizer
     • Plate et al., VISSYM 2002: Octreemizer
     • Castanié et al., IEEE Visualization 2005: VolumeExplorer (coupling OOC visualization and data processing)
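To make the LRU volume paging behind these probe-based roaming systems concrete, here is a minimal sketch, not code from any of the cited systems: a brick cache with least-recently-used eviction. The class name, the load_brick_from_disk placeholder and the 64³ brick size are illustrative assumptions.

```python
from collections import OrderedDict

# Illustrative sketch only: a brick cache with least-recently-used eviction,
# as used conceptually by probe-based roaming systems. Names and brick size
# are assumptions, not the API of any of the systems cited above.

def load_brick_from_disk(brick_id):
    # Placeholder for an out-of-core read of one brick of the volume.
    return bytes(64 * 64 * 64)  # e.g. a 64^3 brick, one byte per voxel

class LRUBrickCache:
    def __init__(self, capacity_bricks):
        self.capacity = capacity_bricks
        self.bricks = OrderedDict()  # brick_id -> voxel data, kept in LRU order

    def get(self, brick_id):
        if brick_id in self.bricks:
            self.bricks.move_to_end(brick_id)   # cache hit: mark as most recently used
            return self.bricks[brick_id]
        data = load_brick_from_disk(brick_id)   # cache miss: page the brick in from disk
        self.bricks[brick_id] = data
        if len(self.bricks) > self.capacity:
            self.bricks.popitem(last=False)     # evict the least recently used brick
        return data

# Roaming the probe touches a moving set of bricks; only those stay resident.
cache = LRUBrickCache(capacity_bricks=256)
for brick_id in [(0, 0, 0), (0, 0, 1), (0, 0, 0)]:
    cache.get(brick_id)
```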

  7. Introduction: OOC visualization on a single workstation
     Efficient solution up to 20-30 GB, however:
     • The ROI size is limited to the amount of graphics memory available
     • Performance drops rapidly once the data set on disk grows beyond 30 GB
     => How to scale our solution up to 100-200 GB?
     Distributed Hierarchical Cache System (DHCS) on a COTS cluster

  8. Outline • Introduction • Large volumes in the Oil and Gas EP domain • Previous work: single-workstation cache system • COTS cluster solution: DHCS • Distributed volume rendering • Distributed data management • Real-time roaming in gigantic data sets with DHCS • Conclusions

  9. Distributed volume rendering: Sort-last parallel volume rendering
     (Diagram: one master node driving eight slave nodes)
     1. Segmentation
     2. Distribution
     3. Rendering
     4. Composition
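As a minimal sketch of the segmentation and distribution steps, the snippet below tiles the volume into sub-volumes and assigns them to slave nodes round-robin; the slides do not state the actual assignment policy, and both function names are illustrative.

```python
# Hypothetical sketch of steps 1 (segmentation) and 2 (distribution) of
# sort-last rendering; round-robin assignment is an assumption.

def segment_volume(dims, block):
    """List the origin of every sub-volume of size `block` tiling a volume of size `dims`."""
    return [(x, y, z)
            for x in range(0, dims[0], block[0])
            for y in range(0, dims[1], block[1])
            for z in range(0, dims[2], block[2])]

def distribute(sub_volumes, n_slaves):
    """Assign sub-volumes to slave nodes round-robin: node id -> list of sub-volume origins."""
    assignment = {node: [] for node in range(n_slaves)}
    for i, origin in enumerate(sub_volumes):
        assignment[i % n_slaves].append(origin)
    return assignment

# Segmentation / distribution of the targeted 4000x5000x5000 volume over 8 slaves.
subs = segment_volume(dims=(4000, 5000, 5000), block=(500, 500, 500))
print(len(subs), "sub-volumes;", len(distribute(subs, n_slaves=8)[0]), "assigned to slave 0")
```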

  10. Distributed volume rendering: Pipelined binary-swap compositing
      • Ma et al., IEEE CG&A 1994: binary-swap compositing
      • Cavin et al., IEEE Visualization 2005 and Cavin et al., Eurographics PGV 2006: DViz pipelined implementation
      (Diagram: partial images from nodes P0-P3 are swapped and composited; the final result is gathered on the MASTER node)
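For reference, here is a toy simulation of the basic binary-swap scheme of Ma et al., not the DViz pipelined variant: at each of the log2(P) stages, paired processors split their current image region in half, exchange one half, and composite the half they keep, so that each processor ends up owning a fully composited 1/P of the image, which the master then gathers. Compositing is stood in for by addition; a real volume renderer would apply the front-to-back "over" operator in depth order.

```python
import math

# Toy simulation of basic binary-swap compositing (addition stands in for the
# "over" operator; all names are illustrative, not the DViz implementation).

def binary_swap(images):
    p = len(images)                       # number of processors, assumed a power of two
    n = len(images[0])                    # pixels per partial image
    regions = [(0, n)] * p                # image region each processor currently owns
    for stage in range(int(math.log2(p))):
        new_images = [img[:] for img in images]
        for i in range(p):
            partner = i ^ (1 << stage)    # pair with the processor differing in bit `stage`
            lo, hi = regions[i]
            mid = (lo + hi) // 2
            # The lower-ranked processor of the pair keeps the first half, the other the second half.
            keep = (lo, mid) if i < partner else (mid, hi)
            for px in range(*keep):       # composite the partner's pixels into the kept half
                new_images[i][px] = images[i][px] + images[partner][px]
            regions[i] = keep
        images = new_images
    return images, regions

# 4 processors, 8-pixel partial images.
imgs, owned = binary_swap([[float(i)] * 8 for i in range(4)])
print(owned)        # [(0, 2), (4, 6), (2, 4), (6, 8)]: disjoint regions covering the image
print(imgs[0][:2])  # pixels 0..1 composited over all four partial images -> [6.0, 6.0]
```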

  11. Distributed volume rendering: Pipelined binary-swap compositing
      • 16 nodes, each with a GeForce 6800 Ultra (512 MB)
      • One virtual graphics card (DViz) of 8 GB as seen from the master
      • ROI size of several GBs => ~15-20 fps

  12. Outline • Introduction • Large volumes in the Oil and Gas EP domain • Previous work: single-workstation cache system • COTS cluster solution: DHCS • Distributed volume rendering • Distributed data management • Real-time roaming in gigantic data sets with DHCS • Conclusions

  13. Distributed data management: Limited disk-to-memory bandwidth
      (Diagram: data volume on disk -> workstation RAM at ~50 MB/s; RAM -> graphics card V-RAM at ~1 GB/s; Gigabit Ethernet network: ?)
      • Very low disk-to-memory bandwidth
      • Faster transfers through the network?

  14. Distributed data management: Disk vs. network bandwidth
      (Bar chart comparing disk and network bandwidth; measured values of 120 MB/s, 500 MB/s, 50 MB/s and 220 MB/s)
      => ~4x faster transfers through the network than from the local disk
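As a back-of-the-envelope check of why this matters when roaming (the numbers come from the two previous slides; the calculation itself is not in the presentation), refilling a 1 GB ROI takes on the order of 20 s from a ~50 MB/s disk but only ~4.5 s from remote memory reached at ~220 MB/s.

```python
# Rough transfer-time estimate; bandwidth figures taken from slides 13-14.
ROI_BYTES = 1e9       # ~1 GB region of interest
DISK_BW = 50e6        # ~50 MB/s local disk
NET_BW = 220e6        # ~220 MB/s network

print(f"from local disk:    {ROI_BYTES / DISK_BW:.0f} s")   # ~20 s
print(f"from remote memory: {ROI_BYTES / NET_BW:.1f} s")    # ~4.5 s
```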

  15. Distributed data management: Our fully dynamic implementation (DHCS)
      (Diagram: cluster nodes with 8 GB of RAM each; missing data is fetched (1) from the memory of other nodes over the network at ~220 MB/s and (2) from the local disk at ~50 MB/s)
      • Fully dynamic memory state that must be kept up to date
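A hedged sketch of the lookup order such a distributed hierarchical cache implies: local RAM first, then the RAM of other nodes over the network, and the local disk only as a last resort, while keeping the shared, fully dynamic "who holds which brick" state up to date. Class and function names are illustrative assumptions, not the actual DHCS API.

```python
# Illustrative sketch of a distributed hierarchical brick cache; eviction and
# real networking are omitted, and all names are placeholders.

def fetch_from_peer(peer_id, brick_id):
    # Placeholder for a network transfer of one brick out of another node's RAM.
    return bytes(64 * 64 * 64)

def load_brick_from_disk(brick_id):
    # Placeholder for an out-of-core read of one brick from the local disk.
    return bytes(64 * 64 * 64)

class DistributedBrickCache:
    def __init__(self, node_id, directory):
        self.node_id = node_id
        self.local = {}              # brick_id -> voxel data held in this node's RAM
        self.directory = directory   # shared, dynamic map: brick_id -> set of node ids holding it

    def get(self, brick_id):
        if brick_id in self.local:                            # local RAM hit (fastest)
            return self.local[brick_id]
        holders = self.directory.get(brick_id, set()) - {self.node_id}
        if holders:                                           # remote RAM hit: fetch over the network
            data = fetch_from_peer(next(iter(holders)), brick_id)
        else:                                                 # miss everywhere: read from disk
            data = load_brick_from_disk(brick_id)
        self.local[brick_id] = data
        self.directory.setdefault(brick_id, set()).add(self.node_id)  # keep the shared state up to date
        return data

# Two nodes sharing one directory: node 1 finds the brick in node 0's memory.
directory = {}
node0, node1 = DistributedBrickCache(0, directory), DistributedBrickCache(1, directory)
node0.get((0, 0, 0))   # loaded from disk on node 0
node1.get((0, 0, 0))   # fetched from node 0's RAM over the network
```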

  16. Outline • Introduction • Large volumes in the Oil and Gas EP domain • Previous work: single-workstation cache system • COTS cluster solution: DHCS • Distributed volume rendering • Distributed data management • Real-time roaming in gigantic data sets with DHCS • Conclusions

  17. Results: Real-time rendering and volume roaming
      (Screenshots: full resolution volume rendering and full resolution volume roaming)
      • 30 copies of the Visible Human data set = 5580x5400x3840 ~ 107 GB
      • ROI 1000x1000x1000 ~ 1 GB
      => Real-time, full resolution rendering and volume roaming at 12 fps on average on a 16-node cluster

  18. Outline • Introduction • Large volumes in the Oil and Gas EP domain • Previous work: single-workstation cache system • COTS cluster solution: DHCS • Distributed volume rendering • Distributed data management • Real-time roaming in gigantic data sets with DHCS • Conclusions

  19. Conclusions
      • Volume visualization of ~100 GB data sets
      • Volume roaming with a ROI of several GBs
      • Cluster-based hierarchical cache system:
        • Distributed volume rendering
        • Distributed data management
      • Compression techniques
      • Better load balancing of the communications on the network
      • Pre-fetching strategies to hide disk accesses
      • Other use cases:
        • Combination of multiple attributes of 100 GB each
        • Real-time full resolution volume slicing at 20-30 slices per second

  20. Acknowledgements
      This work involved and was supported by:
      • Earth Decision (now part of Paradigm): http://www.earthdecision.com
      • LORIA / INRIA – Project ALICE: http://alice.loria.fr
      • DViz: http://www.dviz.fr
      • Region Lorraine (CRVHP)
      • GOCAD consortium: http://www.gocad.org
