
TOPS “Technology for Optical Pixel Streaming”


Presentation Transcript


  1. TOPS “Technology for Optical Pixel Streaming”
  Paul Wielinga, Division Manager High Performance Networking
  SARA Computing and Networking Services
  wielinga@sara.nl
  Courtesy EVL

  2. Outline
  • About SARA
  • Rationale for adopting the OptIPuter concept
  • TOPS
  • Demonstration US117b at iGrid2005
  • (Preliminary) Results
  • Other demos SARA is involved in at iGrid2005

  3. SARA Computing and Networking Services
  • SARA: National e-Science Support Center
  • Home of:
    • the Dutch supercomputing infrastructure
    • the SURFnet NOC
    • a major SURFnet6 core node
    • NetherLight (sister site of StarLight)
    • Amsterdam LightHouse
  • OptIPuter affiliate international partner
  • LHC Tier-1 site
  • Proposed LOFAR storage site
  • SARA is independent and not-for-profit

  4. Teraflops, Terabytes, Terabits, Terapixels @ SARA
  Facilities pictured on the slide: STK tape library, CAVE, IBM Cluster 1600, NetherLight, SGI Altix 3700, Linux Nationaal Rekencluster, SGI Origin 3800, tiled-panel display, SURFnet6

  5. Why we adopt OptIPuter
  • Science pull:
    • Petabytes of complex data, explored and analyzed by nationally and internationally dispersed scientists in many teams
    • Driving applications pictured on the slide: LHC Tier-1 site (with NIKHEF), LOFAR, Brain Activity Analysis, Univ Utrecht Life Sciences (data flows up to ~PBytes/sec)
  • SARA’s mission as e-Science Support Center

  6. Why we adopt OptIPuter (2)
  • Technology push:
    • SURFnet6: hybrid optical and packet-switching network
    • 5 photonic rings
    • Light path provisioning for all universities and research institutions in NL

  7. TOPS “Technology for Optical Pixel Streaming”
  • Use the OptIPuter concept to split visualization from display
  • Render images from very large data sets on a central visualization cluster
  • Use high-speed (lambda) networks to stream pixels to ultra-high-resolution tiled-panel displays
  • Adapt the idea of video streaming: a constant stream of pixels from renderer to display
  • Use lossy protocols for long-distance connectivity: high TCP performance is hard to achieve, while UDP performance is trivial (see the sketch below)
  • Lightweight application
  • Possibly to be incorporated in SAGE (EVL and UCSD)?
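The core idea above, treating the tiled display as a video sink fed by a constant stream of pixel datagrams, can be illustrated with a minimal sketch. This is not the actual TOPS implementation: the wire format (frame id plus pixel origin per datagram), the chunk size, and the loopback address are all assumptions made for the example.

```python
import socket
import struct

# Hypothetical parameters -- not the actual TOPS wire format.
BYTES_PER_PIXEL = 3        # 24-bit RGB, as on the LambdaVision display
CHUNK_PIXELS = 2900        # 8700-byte payload: fits in one 9000-byte jumbo frame
DISPLAY_ADDR = ("127.0.0.1", 9000)   # placeholder for a display-cluster node

def stream_frame(sock, frame_id, framebuffer, frame_w, frame_h):
    """Send one frame as a sequence of scanline-chunk UDP datagrams.

    UDP is used deliberately: a lost datagram corrupts only a small
    region of one frame, and the next frame overwrites it, so no
    retransmission or flow control is needed.
    """
    for y in range(frame_h):
        row = y * frame_w * BYTES_PER_PIXEL
        for x in range(0, frame_w, CHUNK_PIXELS):
            start = row + x * BYTES_PER_PIXEL
            end = row + min(x + CHUNK_PIXELS, frame_w) * BYTES_PER_PIXEL
            # Header carries the frame id and pixel origin, so the display
            # can place the pixels even if datagrams arrive out of order.
            header = struct.pack("!IHH", frame_id, x, y)
            sock.sendto(header + framebuffer[start:end], DISPLAY_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame = bytes(1024 * 768 * BYTES_PER_PIXEL)   # one dummy all-black 1024x768 frame
stream_frame(sock, frame_id=0, framebuffer=frame, frame_w=1024, frame_h=768)
```

A lost datagram simply leaves a stale region in one frame, which the next frame overwrites; this is why UDP's lack of retransmission is acceptable here, whereas a single TCP stream would stall on every loss across a long round-trip path.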

  8. DEMO US117b – GLVF TOPS
  • Very large 2D and 3D datasets located at SARA in Amsterdam
  • Render cluster (29 nodes) at SARA
  • 20 Gbps connectivity between the cluster and NetherLight, via a University of Amsterdam switch
  • 2 × 10 Gbps lambdas between Amsterdam and San Diego (SURFnet, CaveWave, WANPHY Amsterdam–San Diego)
  • UCSD LambdaVision display at iGrid: 100-megapixel tiled display
  • 1 frame = 100 Mpixel × 24 bits = 2.4 Gbits (worked through below)
  • Uncompressed and compressed modes
  • Streaming pixels from Amsterdam to San Diego
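The frame-size bullet fixes the arithmetic that drives everything else in the demo. A quick back-of-the-envelope check (assuming uncompressed 24-bit pixels and ignoring protocol overhead):

```python
# Back-of-the-envelope check of the slide's frame-size figure
# (assuming uncompressed 24-bit pixels; protocol overhead ignored).
pixels_per_frame = 100e6      # LambdaVision: ~100-megapixel tiled display
bits_per_pixel = 24           # 8-bit RGB
frame_bits = pixels_per_frame * bits_per_pixel
print(frame_bits / 1e9, "Gbit per frame")     # 2.4, matching the slide

for link_gbps in (10, 20):    # one or two 10 Gbps lambdas
    fps = link_gbps * 1e9 / frame_bits
    print(f"{link_gbps} Gbps -> {fps:.1f} uncompressed frames/s")
# 10 Gbps -> 4.2 fps, 20 Gbps -> 8.3 fps: both the compressed mode and
# the second lambda are needed for a smoother picture.
```

At these rates a single 10 Gbps lambda caps uncompressed streaming at roughly 4 frames per second, which is why the demo runs both a compressed mode and a second lambda.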

  9. Preliminary results
  • First demo slot, 1 × 10 Gbps available:
    • Sustained bandwidth between rendering and display: 9.2 Gbps
  • Second demo slot, 2 × 10 Gbps:
    • Sustained bandwidth: 5.3 + 6.2 = 11.5 Gbps

  10. Final results
  • Third demo slot, 2 × 10 Gbps available:
    • Sustained bandwidth between rendering and display: 18 Gbps
    • Peak bandwidth of 19.5 Gbps!
  • A new world record for transatlantic bandwidth usage by a single application visualizing scientific content

  11. Preliminary results – limitations
  • The limitation was MTU size: not the whole path was jumbo-frame enabled (see the sketch below)
  • The display cluster ran at 100% CPU load due to the small MTU size
  • Large UDP packet loss (up to 50%)
  • Noticeable (visible) difference in latency between CaveWave (with 4 switches) and the direct WANPHY connection
  • Later today: a new trial with jumbo frames enabled; we expect a higher frame rate and bandwidth usage of 15 Gbps or more
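The MTU limitation is worth unpacking: a UDP datagram larger than the path MTU is fragmented into many small IP packets, the loss of any one fragment discards the whole datagram, and per-packet processing drives up CPU load on the display cluster. Below is a minimal sketch of the mitigation, sizing each datagram to the path MTU so nothing fragments. The constants come from Linux's <linux/in.h> (they are not all exposed by Python's socket module), and the fallback value is an assumption:

```python
import socket

# Constants from <linux/in.h>; not all are exposed by Python's socket module.
IP_MTU_DISCOVER = 10
IP_PMTUDISC_DO = 2     # set Don't Fragment; oversized sends are rejected
IP_MTU = 14            # query the kernel's current path-MTU estimate

def max_udp_payload(dest, assumed_mtu=1500):
    """Largest UDP payload toward dest that avoids IP fragmentation.

    Connecting the socket lets the kernel track the path MTU for this
    destination; on non-Linux systems the query fails and we fall back
    to the assumed MTU.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(dest)
        s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
        mtu = s.getsockopt(socket.IPPROTO_IP, IP_MTU)
    except OSError:
        mtu = assumed_mtu
    finally:
        s.close()
    return mtu - 28    # minus 20-byte IPv4 header and 8-byte UDP header

# A 2.4 Gbit frame in 1472-byte payloads is ~200,000 packets; at several
# frames per second that is over a million packets per second to process.
# 9000-byte jumbo frames cut the packet count roughly six-fold, which is
# where the expected CPU-load and frame-rate improvements come from.
print(max_udp_payload(("127.0.0.1", 9000)))
```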

  12. iGrid2005 demonstrations
  • US117b: GLVF – Unreliable Stream of Images (TOPS)
  • SARA participation in other demos:
    • US117a: SAGE (EVL) – computing support in Amsterdam
    • NL102: Dead Cat (UvA) – visualization support (Amsterdam)
    • US106: 10 Gb line-speed security (Nortel) – networking and computing support (Amsterdam)

  13. Credits
  • Bram Stolk (SARA)
  • Jeroen Akershoek (SARA)
  • Ronald van der Pol (SARA)
  • Pieter de Boer (SARA)
  • Hanno Pet (SARA)
  • University of Amsterdam (Paola Grosso et al.)
  • SURFnet (Dennis Paus et al.)
  • Jason Leigh, Luc Renambot, Lance Long, and colleagues at EVL and UCSD
  • iGrid2005 networking and LambdaVision crews

  14. Thank You
