
EMERGE Deep Tech Mtg Oliver Yu, Jason Leigh, Alan Verlo






Presentation Transcript


  1. EMERGE Deep Tech Mtg Oliver Yu, Jason Leigh, Alan Verlo

  2. Performance Parameters • Latency = Recv Time - Send Time. Note: the Recv Host and Send Host clocks are synchronized. • Jitter = E[ |Li - E[L]| ]. Note: E[ ] is the expectation of a data set; L is the set of the 100 most recent latency samples. • Packet Loss Rate
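These definitions translate directly into code. Below is a minimal sketch (Python, illustrative names only), which reads the braces in the jitter definition as an absolute deviation over the 100-sample window and assumes sender and receiver clocks are already synchronized, as the slide states:

```python
from collections import deque

WINDOW = 100  # L = the 100 most recent latency samples, per the slide

recent_latencies = deque(maxlen=WINDOW)

def record_sample(send_time: float, recv_time: float) -> None:
    """Latency = Recv Time - Send Time (send and receive hosts assumed synchronized)."""
    recent_latencies.append(recv_time - send_time)

def jitter() -> float:
    """Jitter = E[|Li - E[L]|] over the current window of latency samples."""
    if not recent_latencies:
        return 0.0
    mean = sum(recent_latencies) / len(recent_latencies)
    return sum(abs(l - mean) for l in recent_latencies) / len(recent_latencies)
```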

  3. [Charts: Latency vs. Time and Jitter vs. Time for background traffic loads of 20, 40, 60, and 80 Mbps, with a foreground traffic load of 250 Kbps; Packet Loss Rate (%) vs. Background Traffic (20 to 80 Mbps), with a foreground traffic load of 3 Mbps.] • Note: These experiments were run on a best-effort platform. • They will be repeated on a DiffServ platform when available.

  4. Forward error correction scheme for low-latency delivery of error sensitive data • Ray Fang, Dan Schonfeld, Rashid Ansari • Transmit redundant data over high bandwidth networks that can be used for error correcting UDP streams to achieve lower latency than TCP. • Transmit redundant data to improve quality of streamed video by correcting for lost packets.
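The slide does not spell out the encoding itself, so the sketch below only illustrates the general idea with the simplest possible scheme: one XOR parity packet per group of data packets, which lets the receiver repair a single loss without waiting for a retransmission. All names are hypothetical and this is not the authors' implementation:

```python
from typing import List, Optional

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_group(packets: List[bytes]) -> List[bytes]:
    """Append one XOR parity packet to a group of equal-length data packets."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return packets + [parity]

def decode_group(received: List[Optional[bytes]]) -> List[bytes]:
    """Rebuild at most one missing packet (None = lost) from the rest of the group."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) > 1:
        raise ValueError("single-parity FEC can repair only one loss per group")
    if missing:
        present = [p for p in received if p is not None]
        rebuilt = present[0]
        for p in present[1:]:
            rebuilt = xor_bytes(rebuilt, p)
        received[missing[0]] = rebuilt
    return received[:-1]  # drop the parity packet, return the data packets
```

With groups of three data packets this sends four packets on the wire, matching the 3:1 redundancy ratio used in the experiments later in the deck.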

  5. FEC Experiments • EVL to SARA, Amsterdam (40 Mb/s, 200 ms round-trip latency) • Broader questions: • Can FEC provide a benefit? How much? • What is the tradeoff between redundancy and benefit? • Specific questions: • TCP vs UDP vs FEC/UDP • How much jitter does FEC introduce? • High-throughput UDP vs FEC/UDP, to observe loss & recovery • UDP vs FEC with background traffic • FEC over QoS: WFQ or WRED congestion management. Hypothesis: WRED is bad for FEC

  6. UDP vs TCP vs FEC/UDP with 3:1 redundancy

     Packet size (bytes)   UDP Latency (ms)   TCP Latency (ms)   FEC over UDP Latency (ms)
     128                   77.0               115                90.3
     256                   81.7               121                95.3
     512                   101.0              150.8              126.0
     1024                  143.0              210                189.0
     2048                  227.3              339                314.3

  7. FEC's greatest benefit is with small packets; larger packets impose greater overhead. As redundancy decreases, FEC approaches plain UDP.
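A back-of-the-envelope way to see the bandwidth side of this tradeoff (not taken from the slides; the function name and the reading of "overhead" are my own):

```python
def fec_bandwidth_overhead(data_per_group: int, parity_per_group: int) -> float:
    """Fraction of extra bytes on the wire for a data:parity redundancy ratio."""
    return parity_per_group / data_per_group

# 3:1 redundancy -> 1 parity packet per 3 data packets, roughly 33% extra traffic.
# The ratio does not depend on packet size, but the redundant bytes and the
# encode/decode work per packet both grow with packet size.
print(f"{fec_bandwidth_overhead(3, 1):.0%}")  # 33%
```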

  8. Packet Loss over UDP vs FEC/UDP

     Data Rate (bits/s)   Packet Size (Bytes)   Packet Loss Rate in UDP (%)   Packet Loss Rate in FEC over UDP (%)
     1M                   128                   0.4                           0
     1M                   256                   0.2                           0
     1M                   1024                  0.2                           0
     10M                  128                   30                            4
     10M                  256                   25                            3
     10M                  1024                  21                            1.5

  9. Application Level Experiments • Two possible candidates for instrumentation and testing over EMERGE: • Teleimmersive Data Explorer (TIDE) – Nikita Sawant, Chris Scharver • Collaborative Image Based Rendering Viewer (CIBR View) – Jason Leigh, Steve Lau [LBL]

  10. TIDE

  11. CIBR View

  12. Common Characteristics of both Teleimmersive Applications

  13. Research Goal: • Hope to see improved performance with QoS and/or TCP tuning enabled. • Monitor the applications and characterize their network behavior as it stands over non-QoS-enabled networks. • Identify & remove bottlenecks in the applications. • Monitor again to verify the bottlenecks are removed. • Monitor over QoS-enabled networks. • The end result is a collection of techniques and tools to help tune similar classes of collaborative distributed applications. • Instrumentation: Time, Info (to identify a flow), Event (to mark a special event), Inter-msg delay, 1-way latency, Read bw, Send bw, Total read, Total sent • Example record: TIME=944767519.360357 INFO=Idesk_cray_avatar EVENT=new_avatar_entered MIN_IMD=0.000254 AVG_IMD=0.218938 MAX_IMD=1.170086 INST_IMD=0.134204 MIN_LAT=0.055527 AVG_LAT=0.169372 MAX_LAT=0.377978 INST_LAT=0.114098 AVG_RBW=74.292828 INST_RBW=750.061367 AVG_SBW=429.815557 INST_SBW=704.138274 TOTAL_READ=19019 TOTAL_SENT=110033
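Since the record is a flat string of KEY=value pairs, a parser is short. A minimal sketch (a hypothetical helper, not part of the instrumentation code), shown here against a trimmed copy of the example record above:

```python
def parse_record(line: str) -> dict:
    """Split a KEY=value instrumentation record into typed fields."""
    fields = {}
    for token in line.split():
        key, _, value = token.partition("=")
        try:
            fields[key] = float(value)   # numeric metrics: latency, bandwidth, totals
        except ValueError:
            fields[key] = value          # INFO and EVENT stay as strings
    return fields

record = parse_record(
    "TIME=944767519.360357 INFO=Idesk_cray_avatar EVENT=new_avatar_entered "
    "AVG_LAT=0.169372 AVG_RBW=74.292828 TOTAL_SENT=110033"
)
print(record["AVG_LAT"])  # 0.169372
```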

  14. Characterization of TIDE & CIBRview streams

  15. QoSiMoto: QoS Internet Monitoring Tool • Kyoung Park • Reads Netlogger data sets from file or from netlogger daemon. • CAVE application runs on SGI and Linux • Information Visualization problem. • How to leverage 3D. • Averaging of data points over long traces. • www.evl.uic.edu/cavern/qosimoto
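On the "averaging of data points over long traces" point, one simple option is fixed-size binning before display. The sketch below is illustrative only; it is not QoSiMoto's actual method, which the slide does not describe:

```python
def bin_average(samples, bins):
    """Collapse a long trace into at most `bins` averaged points for display."""
    if not samples or bins <= 0:
        return []
    step = max(1, -(-len(samples) // bins))  # ceiling division: samples per bin
    return [
        sum(samples[i:i + step]) / len(samples[i:i + step])
        for i in range(0, len(samples), step)
    ]

print(bin_average(list(range(10)), 5))  # [0.5, 2.5, 4.5, 6.5, 8.5]
```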
