
Hub-based Simulation and Graphics Hardware Accelerated Visualization for Nanotechnology Applications



Presentation Transcript


  1. Hub-based Simulation and Graphics Hardware Accelerated Visualization for Nanotechnology Applications Wei Qiao qiaow@purdue.edu Michael McLennan mmclennan@purdue.edu Rick Kennell kennell@purdue.edu David S. Ebert ebertd@purdue.edu Gerhard Klimeck gekco@purdue.edu Purdue University

  2. Our Goals • Provide advanced interactive visualization of scientific simulations to users worldwide, without requiring special graphics capabilities on the user's machine • Approach: integrate hardware-accelerated remote visualization into nanoHUB.org

  3. nanoHUB Remote Simulation and Visualization

  4. Outline • nanoHUB.org • Challenges and requirements • Related work • Our system design • Performance and optimization • Case studies • Summary and future work

  5. nanoHUB.org • A nano-science gateway for nanotechnology education and research • Created by the Network for Computational Nanotechnology (NCN) • Educational material • Animations • Courses • Seminars • Simulation tools accessible from a web browser

  6. User Community and Usage • Nanoelectronics Community • Researchers • Educators • Students • Usage (last year) • More than 10,000 users viewed online materials • 1,800 users ran more than 54,000 simulation jobs consuming over 28,500 hours of CPU time

  7. nanoHUB Simulation Architecture [Diagram: users on the Internet reach a Web Server, which hosts Virtual Machine sessions; these connect over gigabit network to the Simulation Cluster and out to the Open Science Grid and NSF TeraGrid]

  8. DEMO!

  9. System Requirements • Transparency in service delivery • Scalability to increased workload • Responsiveness to user commands • Flexibility in handling simulation data • Extensibility in software and hardware

  10. Visualization Challenges • Architecture • Lacks state-of-the-art visualization systems • Mismatch between CPU and GPU resources • Users • Predominantly remote • Vast diversity of computing platforms and capabilities

  11. Related Work • Molecular Dynamics Visualization • Surface rendering • Structure rendering • Volume visualization • Electron potential fields • Electronic wave functions • Electromagnetic fields

  12. Related Work (Cont.) • Flow Visualization • Texture synthesis • CPU ([Wijk 91] and [Cabral and Leedom 93]) • GPU ([Heidrich et al. 99], [Jobard et al. 00], [Weiskopf et al. 03] and [Telea and Wijk 03]) • Particle tracing • CPU ([Sadarjoen et al. 94]) • GPU ([Kolb et al. 04] and [Krüger et al. 05]) • Remote Visualization • Data is too large to transfer over the network • Local workstation cannot handle the data • Distance collaboration

  13. Practical Obstacles to nanoHUB • VNC sessions run on cluster nodes with no graphics hardware acceleration • Cluster nodes are rack-mounted machines with neither AGP nor PCI Express interfaces • nanoHUB’s virtual machine layer cannot directly access graphics hardware

  14. Our System Design • Client-server architecture • nanoVIS render server • Visualization engine library • Vector flows • Multivariate scalar fields • Rappture GUI client • User front end • nanoSCALE service daemon • Monitors render loads • Tracks GPU memory usage • Starts nanoVIS servers

  15. Schematic View [Diagram: a hardware-accelerated render farm joins the nanoHUB architecture alongside the Simulation Cluster, Open Science Grid, and NSF TeraGrid; clients on the Internet reach it through the Web Server and Virtual Machine layer, with client-server traffic carried over gigabit network]

  16. Hardware • Linux cluster render farm • 1.6 GHz Pentium 4 • 512 MB of RAM • nVIDIA GeForce 7800 GT graphics hardware • Advantages • Extremely cost effective • Flexible to upgrade and expand • Integrates tightly into the nanoHUB architecture

  17. Rappture Toolkit • Rapid Application Infrastructure Toolkit • Accelerate development of basic infrastructure • Declare simulator input / output using XML • Automatic generation of GUI
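Concretely, a Rappture-based tool reads the inputs declared in its XML description and writes outputs back through a small library API; the GUI itself is generated automatically from that same XML. The sketch below assumes Rappture's classic Python bindings (Rappture.library, get, put, Rappture.result); the XML paths and the placeholder physics are illustrative, not taken from this talk.

```python
import sys
import math
import Rappture

# Open the driver file that Rappture generates from the tool's XML
# description when the user hits "Simulate" in the auto-generated GUI.
io = Rappture.library(sys.argv[1])

# Read an input declared in tool.xml (the path is illustrative).
voltage = float(io.get('input.number(voltage).current'))

# Run a toy "simulation" and append points to an output curve.
for i in range(100):
    x = i * 0.1
    y = voltage * math.exp(-x)  # placeholder physics, for illustration only
    io.put('output.curve(iv).component.xy', '%g %g\n' % (x, y), append=True)

# Hand the library back so Rappture can render the results GUI.
Rappture.result(io)
```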

  18. nanoVIS • Fully accelerated by graphics hardware • Visualizes a variety of nanotechnology simulations • Volumetric and multivariate scalar fields • Texture-based volume visualization • FCC volume (zinc-blende) [Qiao et al. 2005] • Vector fields • GPU particle tracing • 2D texture synthesis • Geometric drawing to illustrate simulation geometry • GL primitive drawing

  19. Vector Field Visualization (Cont.) [Diagram: particle data and the vector field feed a pixel shader on the GPU, which writes updated positions into an FBO texture; the texture is reinterpreted as vertex data in a VBO for particle rendering] • Particle Implementation [Kolb et al. 2004], [Krüger et al. 2005] • Framebuffer Object (FBO) • Vertex Buffer Object (VBO) • Particles stay in GPU memory • 2D texture synthesis • Complements particles
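For readers outside graphics: on the GPU, particle positions live in an FBO-attached texture, a pixel shader advects them, and the same memory is bound as a VBO for rendering, so particle state never leaves video memory. Below is a minimal CPU analogue in NumPy of that per-particle advection step; the forward Euler integration, nearest-neighbor field lookup, and rotational test field are illustrative choices, not the paper's exact scheme.

```python
import numpy as np

def advect(positions, field, dt=0.05):
    """positions: (N, 2) array in [0, 1)^2; field: (H, W, 2) vector field."""
    h, w, _ = field.shape
    ix = np.clip((positions[:, 0] * w).astype(int), 0, w - 1)
    iy = np.clip((positions[:, 1] * h).astype(int), 0, h - 1)
    velocity = field[iy, ix]                  # nearest-neighbor field lookup
    return (positions + dt * velocity) % 1.0  # wrap-around boundary

rng = np.random.default_rng(0)
particles = rng.random((10_000, 2))           # seed particles in the unit square
ys, xs = np.mgrid[0:64, 0:64] / 64.0 - 0.5    # grid coordinates centered at 0
swirl = np.dstack((-ys, xs))                  # simple rotational test field
for _ in range(100):                          # on the GPU, each step is one
    particles = advect(particles, swirl)      # ping-pong render pass
```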

  20. Client-Server Interaction [Diagram: the Rappture client connects to the nanoSCALE daemon on the render farm; nanoSCALE selects a node and spawns a nanoVIS server; the client then connects to nanoVIS and sends the simulation data]

  21. Performance and Optimization • Workload consideration • GPU-heavy: rendering • CPU-light: network communication • GPU-oriented optimization • GPU load estimation scheme • Node selection scheme based on estimated GPU load

  22. GPU Load Estimation [Images: volume visualization and particle system] • Fragment processing cost: number of rasterized fragments × computation per fragment • Unified measurement for particle systems and volumes: the cost of particle rendering is hard to compare to advection directly, but experimental data allows a unified measurement (render cost is a factor of 0.2 of advection cost) • Estimation equation: the primary cost of shader execution is texture access
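A hedged sketch of how such an estimate might drive node selection: cost is modeled as rasterized fragments times texture fetches per fragment, with particle rendering weighted at 0.2 of advection cost so that particle systems and volume rendering share one scale. The formulas, constants other than the 0.2 factor from the slide, and all helper names are illustrative, not the system's actual implementation.

```python
RENDER_TO_ADVECT = 0.2   # empirical render-to-advection factor from the slide

def volume_cost(width, height, num_slices, fetches_per_fragment):
    # Each volume slice rasterizes roughly a full window of fragments,
    # and each fragment's dominant cost is its texture fetches.
    return width * height * num_slices * fetches_per_fragment

def particle_cost(num_particles, fetches_per_advect):
    advect = num_particles * fetches_per_advect  # one fragment per particle
    render = RENDER_TO_ADVECT * advect           # drawing is cheap vs. advection
    return advect + render

def pick_node(loads):
    # loads: {hostname: estimated outstanding GPU cost}; pick the least loaded.
    return min(loads, key=loads.get)

loads = {
    'render01': volume_cost(512, 512, 256, 2),
    'render02': particle_cost(100_000, 4),
}
print(pick_node(loads))   # -> 'render02'
```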

  23. Performance • Measure turnaround time (from command issued to image received) • 128 x 128 x 128 scalar field • 512 x 512 render window • Simulated user interaction • Transfer function modification, rotation, zoom, cutting plane, etc.
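As a rough illustration of this measurement, the sketch below times a full round trip from issuing a render command to receiving the image bytes over a socket. The wire format here (newline-terminated command, whitespace-padded size header, raw payload) is invented for illustration; nanoVIS's real protocol is not described in the talk.

```python
import socket
import time

def turnaround(host, port, command):
    """Time one command -> image round trip against a render server."""
    start = time.monotonic()
    with socket.create_connection((host, port)) as sock:
        sock.sendall(command + b'\n')
        header = sock.recv(16)                 # e.g. b'262144         \n'
        remaining = int(header.split()[0])     # image byte count
        while remaining > 0:                   # drain the image payload
            chunk = sock.recv(min(65536, remaining))
            if not chunk:
                break
            remaining -= len(chunk)
    return time.monotonic() - start

# e.g. turnaround('render01', 2000, b'camera rotate 10 0 0')
```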

  24. Case Studies • Successfully developed several nanotechnology tools • SQUALID-2D • Quantum Dot Lab • BioMOCA • nanoWire

  25. 2-D Electron Gas Simulator • Goal • Study the effects of impurities in a nanowire • Device composition • Electrodes positioned on top • GaAs and AlGaAs semiconductor layers • A narrow channel in the middle that constrains the electrons • Experiments • Vary the magnetic field • Electron flows • Electron potential fields

  26. 2-D Electron Gas Simulator [Images: electron flow and electron potential, visualized with particle tracing and LIC]

  27. BioMOCA • Goal • Study the flow of ions through a pore in a cell membrane • Method • Compute random walks of ions through a channel with a fixed geometry within a cell membrane [Image: ion channel between cell walls]

  28. Quantum Dot Lab • Goal • Study the wave functions (orbitals) of electrons trapped in a quantum dot device • Method • Configure the incident light source and the shape and size of the quantum dot [Images: s and p orbitals]

  29. Conclusions • Hub-based remote visualization is a powerful, flexible solution • Seamlessly delivers hardware-accelerated visualization to remote scientists with minimal requirements on their computing environments • An intuitive interface and ease of use are key to wide usage • Enables rapid development and deployment of new simulation tools • Tight integration into the simulation and interactive performance can speed scientific discovery and change scientific workflows • The nanoVIS tools have been a huge success

  30. Future Work • Expand to a generic scientific hub-based visualization engine • Our system can be adapted to economically deliver accelerated graphics to other hub-based multi-user environments • Expand to large data support • GPGPU nano-electronics simulations and integrated visualization • More accurate GPU load estimation using nVIDIA's newly released NVPerfKit 2.1 for Linux

  31. Acknowledgements • Martin Kraus, Nikolai Svakhine, Ross Maciejewski, Xiaoyu Li, and the anonymous reviewers for many helpful discussions and comments • nVIDIA • National Science Foundation under Grant No. EEC-0228390
