
Distributed Visualization and Data Resources Enabling Remote Interaction and Analysis


Presentation Transcript


  1. Distributed Visualization and Data Resources Enabling Remote Interaction and Analysis Scott A. Friedman UCLA Institute for Digital Research and Education

  2. Motivating Challenge
  • Remote access to the visualization cluster
    • Limited resources ($) for development
    • Re-use as much existing IB (verbs) code as possible (iWarp) - see the sketch below
    • Leverage existing 10G infrastructure (UCLA, CENIC)
    • Use layer 3
    • Pacing critical for consistent user interaction
  • Remote access to the researcher's data
    • Same limitations as above
    • High-performance retrieval of time step data
    • Synchronized to a modulo of the rendering rate
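  The verbs re-use mentioned above is possible because the librdmacm connection manager exposes the same setup path whether the underlying adapter is an InfiniBand HCA or an iWarp RNIC. A minimal sketch in C (a hypothetical helper, not the actual Hydra code; error reporting trimmed):

  #include <rdma/rdma_cma.h>
  #include <string.h>

  /* Connect a reliable RDMA endpoint to host:port. The same code path works
   * over an InfiniBand port or an iWarp/10GE port, which is what lets the
   * existing verbs code be reused across both fabrics. */
  int connect_rdma(const char *host, const char *port, struct rdma_cm_id **id)
  {
      struct rdma_addrinfo hints, *res;
      struct ibv_qp_init_attr attr;

      memset(&hints, 0, sizeof hints);
      hints.ai_port_space = RDMA_PS_TCP;            /* reliable-connected service */
      if (rdma_getaddrinfo(host, port, &hints, &res))
          return -1;

      memset(&attr, 0, sizeof attr);
      attr.qp_type = IBV_QPT_RC;
      attr.cap.max_send_wr = attr.cap.max_recv_wr = 16;
      attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;

      if (rdma_create_ep(id, res, NULL, &attr)) {   /* sets up PD, CQs and QP */
          rdma_freeaddrinfo(res);
          return -1;
      }
      rdma_freeaddrinfo(res);
      return rdma_connect(*id, NULL);               /* identical call for IB and iWarp */
  }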

  3. Visualization Cluster
  • Hydra - Infiniband based
    • 24 rendering nodes (48 Nvidia 9800GTX+)
    • 1 high-definition display node
    • 3 visualization center projection nodes
    • 1 remote visualization bridge node*
  • Primarily a research system
    • High-performance interactive rendering (60Hz)
    • Dynamic (on-line) load balancing
  • System requires both low latency and high bandwidth
    • Perfect for Infiniband!
  * We will return to this in a minute

  4. Network Diagram
  [Diagram: the UCLA visualization cluster connects over IB to the bridge node, which speaks iWarp across CENIC/NLR to a remote viz node and a remote disk node (with local viz nodes) at SCinet. Commands and pixel data flow outward, time step data flows inward. The bridge node acts like a display to the cluster.]

  5. Remote Visualization Bridge Node
  • Bridges from IB to iWarp/10G Ethernet
  • Appears like a display node to the cluster
    • No change to the existing rendering system
  • Pixel data arrives over IB, which
    • is sent using iWarp to a remote display node
    • uses the same RDMA protocol
    • Same buffer used for receive and send ("bounce") - see the sketch below
  • Very simple in principle - sort of…
    • TCP settings can be a challenge
  • Time step data also passes through (opposite direction)
    • Distributed to the appropriate rendering nodes
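  A sketch of that bounce (hypothetical structure and names, not the actual bridge code): the pixel buffer is registered once per protection domain, a frame is received from a render node over IB, and the same memory is then posted as a send on the iWarp QP, so the bridge never copies the frame.

  #include <infiniband/verbs.h>
  #include <stdint.h>

  struct bridge {
      struct ibv_qp *ib_qp, *iwarp_qp;     /* cluster-facing and WAN-facing QPs */
      struct ibv_cq *ib_cq;
      struct ibv_mr *ib_mr, *iwarp_mr;     /* same buffer, one MR per protection domain */
      void          *frame;                /* e.g. 2560 * 1600 * 3 bytes */
      size_t         len;
  };

  static int bounce_one_frame(struct bridge *b)
  {
      struct ibv_sge sge = { .addr = (uintptr_t)b->frame, .length = b->len,
                             .lkey = b->ib_mr->lkey };
      struct ibv_recv_wr rwr = { .sg_list = &sge, .num_sge = 1 }, *bad_r;
      struct ibv_wc wc;

      if (ibv_post_recv(b->ib_qp, &rwr, &bad_r))    /* wait for a frame from a render node */
          return -1;
      while (ibv_poll_cq(b->ib_cq, 1, &wc) == 0)
          ;                                         /* busy-poll for low latency */
      if (wc.status != IBV_WC_SUCCESS)
          return -1;

      sge.lkey = b->iwarp_mr->lkey;                 /* same memory, WAN-side key */
      struct ibv_send_wr swr = { .sg_list = &sge, .num_sge = 1,
                                 .opcode = IBV_WR_SEND,
                                 .send_flags = IBV_SEND_SIGNALED }, *bad_s;
      return ibv_post_send(b->iwarp_qp, &swr, &bad_s);  /* forward over iWarp/10GE */
  }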

  6. Data Flow
  • Frame data (outbound): display resolution and update rate determine throughput. For example, a 2560x1600 frame is 11.75 MB; rates from 10-60 Hz give data flows of 0.9 to 5.5 Gbps.
  • Time step data (inbound): data set size and rate determine throughput. For example, 1000 time steps of 16 MB each at 10-60 Hz give data flows of 1.25 to 7.5 Gbps.
  [Diagram: rendering nodes in the Hydra visualization cluster (two GPUs each, on SDR/DDR IB) feed frame data through the IB switch and the bridge node, out over iWarp/10GE across the WAN to the remote display; interaction and rate control flow back, and time step data arrives inbound from the remote disk.]
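  Those figures are easy to check. A small C snippet reproducing the arithmetic (assuming 3 bytes per pixel and binary-prefix gigabits, which is how the slide's numbers come out):

  #include <stdio.h>

  int main(void)
  {
      const double frame_bytes = 2560.0 * 1600.0 * 3.0;   /* ~11.7 MiB per uncompressed frame */
      const double step_bytes  = 16.0 * 1024 * 1024;      /* 16 MB per time step */
      const double gbit        = 1024.0 * 1024 * 1024;    /* binary gigabit */

      for (int hz = 10; hz <= 60; hz += 50) {             /* endpoints of the 10-60 Hz range */
          printf("%2d Hz: frames %.2f Gbps, time steps %.2f Gbps\n", hz,
                 frame_bytes * 8.0 * hz / gbit,           /* 0.92 at 10 Hz, 5.49 at 60 Hz */
                 step_bytes  * 8.0 * hz / gbit);          /* 1.25 at 10 Hz, 7.50 at 60 Hz */
      }
      return 0;
  }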

  7. Example Data Set
  • Uncompressed video stream
    • 20-60 Hz at up to 2560x1600 ~ 1.8-5.5 Gbps
  • Time steps
    • Time step size and rate determine flow
    • 10,000 steps of 16 MB at 20-60 Hz ~ 2.4-7.5 Gbps
  • Actual visualization
    • Interactive 100M-particle n-body simulation
    • 11 unique processes
  • Performance measurements are internal
    • Summaries are displayed at completion of a run

  8. Running Demo

  9. Future Work
  • Latency is a non-issue at UCLA (which is our main focus)
    • ~60 usec across campus
  • Longer latencies / distances are an issue
    • UCLA to UC Davis ~20 ms rtt (CENIC)
    • UCLA to SC07 (Reno) ~14 ms rtt (CENIC/NLR)
    • UCLA to SC08 (Austin) ~30 ms rtt (CENIC/NLR)
    • Frames and time step data completely in-flight (at high frame rates)
    • Not presently compensating for this
  • Minimizing latency is important for interaction
    • Pipelining transfers helps hide latency (see the sketch below): lat' = (lat / chunks) * (chunks + 1)
      • chunks = 1 : lat' = 2 * lat
      • chunks = 2 : lat' = 1.5 * lat
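  One reading of that formula: splitting a transfer into chunks lets the two stages of the path (IB hop and iWarp hop) overlap, so the extra latency shrinks as the chunk count grows. A small sketch, with latency normalized to one full un-pipelined transfer:

  #include <stdio.h>

  /* lat' = (lat / chunks) * (chunks + 1) */
  static double pipelined_latency(double lat, int chunks)
  {
      return (lat / chunks) * (chunks + 1);
  }

  int main(void)
  {
      const double lat = 1.0;                       /* one full transfer time, normalized */
      for (int chunks = 1; chunks <= 8; chunks *= 2)
          printf("chunks = %d : lat' = %.3f\n", chunks, pipelined_latency(lat, chunks));
      /* prints 2.000, 1.500, 1.250, 1.125 - approaching 1.0 as chunks grow */
      return 0;
  }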

  10. Lessons Learned
  • TCP nature of iWarp (some layer 3 related)
    • Congestion control wreaks havoc on interaction / latencies
    • Reliability
      • Not needed for video frames
      • Maybe needed for data (depends on the application)
    • Buffer tuning - headache
    • Layer 9 issues
  • Better to use UDP?
    • Is there an unreliable iWarp?
  • Dedicated / dynamic circuits?
  • Obsidian Longbow IB extenders (haven't tried this)
    • Understand there may be an IP version out there? (not iWarp, but…)

  11. Thank You
  • Questions?
  • friedman@ucla.edu
