
Protocols: Progress with Current Work


Presentation Transcript


  1. Protocols: Progress with Current Work. Richard Hughes-Jones, The University of Manchester, www.hep.man.ac.uk/~rich/ then “Talks”. ESLEA PMB, Manchester, 23 Nov 2006, R. Hughes-Jones, Manchester

  2. vlbi_udp: UDP on the WAN
  • iGrid2002 monolithic code
  • Convert to use pthreads: control, data input, data output
  • Code branch for Simon’s file transfer tests
  • Work on vlbi_recv (a sketch of the thread hand-off is given below):
    • Output thread polled for data in the ring buffer – burned CPU
    • Input thread signals the output thread when there is work to do, else it waits on a semaphore – packet loss at high rate, variable throughput
    • Output thread uses sched_yield() when there is no work to do – CPU still used
  • Add code for: MarkV card and PCEVN interface
  • Measure: throughput, packet loss, re-ordering, 1-way delay
  • Multi-flow network performance – being set up Nov/Dec 2006
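
  A minimal sketch of the wait-on-semaphore variant of vlbi_recv described above: the input thread fills a ring buffer and posts a semaphore, and the output thread sleeps on it instead of polling. The constants RING_SLOTS and PKT_BYTES and the overall layout are illustrative, not taken from the real vlbi_recv code, and overflow handling is omitted:

    /* Illustrative only: input thread fills a ring buffer and posts a
     * semaphore; output thread sleeps on it instead of polling. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <string.h>

    #define RING_SLOTS 1024
    #define PKT_BYTES  1472                   /* assumed UDP payload size */

    static char  ring[RING_SLOTS][PKT_BYTES];
    static int   head = 0, tail = 0;          /* each written by one thread only */
    static sem_t slots_used;                  /* counts packets waiting in the ring */

    static void *input_thread(void *arg)
    {
        (void)arg;
        for (;;) {
            /* stand-in for recvfrom(sock, ring[head], PKT_BYTES, ...) */
            memset(ring[head], 0, PKT_BYTES);
            head = (head + 1) % RING_SLOTS;   /* no overflow check in this sketch */
            sem_post(&slots_used);            /* signal the output thread: work to do */
        }
        return NULL;
    }

    static void *output_thread(void *arg)
    {
        (void)arg;
        for (;;) {
            sem_wait(&slots_used);            /* sleep until the input thread signals */
            /* ... write ring[tail] to disk / MarkV / PCEVN interface ... */
            tail = (tail + 1) % RING_SLOTS;
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tin, tout;
        sem_init(&slots_used, 0, 0);
        pthread_create(&tin,  NULL, input_thread,  NULL);
        pthread_create(&tout, NULL, output_thread, NULL);
        pthread_join(tin, NULL);
        pthread_join(tout, NULL);
        return 0;
    }

  As the slide notes, this wait-on-semaphore version still lost packets at high rates, while the sched_yield() variant keeps the output thread runnable at the cost of CPU.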

  3. vlbi_udp: B2B UDP Tests
  • Kernel 2.6.9
  • vlbi_recv with sched_yield(), wait 12 µs
  • Stable throughput: 999 Mbit/s, variation less than 1 Mbit/s
  • No packet loss
  • Inter-packet time
  • Processing time: mean 0.1005, sigma 0.1438
  • CPU load:
    • Cpu0: 0.0% us, 0.0% sy, 0.0% ni, 99.7% id, 0.3% wa, 0.0% hi, 0.0% si
    • Cpu1: 11.3% us, 88.7% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si
    • Cpu2: 0.3% us, 0.0% sy, 0.0% ni, 99.3% id, 0.3% wa, 0.0% hi, 0.0% si
    • Cpu3: 9.3% us, 15.6% sy, 0.0% ni, 37.5% id, 0.0% wa, 1.3% hi, 36.2% si
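
  For reference, a 12 µs gap between ~1500-byte frames corresponds to roughly 1 Gbit/s on the wire, which matches the 999 Mbit/s plateau. A rough sketch of how a paced UDP sender holds such a gap – busy-waiting on the clock, since usleep() is far too coarse at this scale – follows; the function names and WAIT_US constant are illustrative, not the actual udpmon/vlbi_send code:

    /* Illustrative pacing loop: send a packet every WAIT_US microseconds. */
    #include <sys/socket.h>
    #include <sys/time.h>

    #define WAIT_US 12.0                 /* ~1 Gbit/s for 1472-byte UDP payloads */

    static double now_us(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1e6 + tv.tv_usec;
    }

    void send_paced(int sock, const char *pkt, int len, long npkts)
    {
        double next = now_us();
        for (long i = 0; i < npkts; i++) {
            send(sock, pkt, len, 0);     /* socket already connect()ed to the receiver */
            next += WAIT_US;
            while (now_us() < next)      /* spin until the next send time; */
                ;                        /* sched_yield() here trades CPU for jitter */
        }
    }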

  4. vlbi_udp: Multi-site Streams
  [Map of the test network: Metsähovi (Finland), Onsala and Chalmers University of Technology, Gothenburg (Sweden), Jodrell Bank (UK), Toruń (Poland), Dwingeloo (Netherlands) and Medicina (Italy), connected by Gbit links, a dedicated Gbit link and a DWDM link]

  5. TCP & Network Sender Receiver Timestamp1 Timestamp2 Timestamp3 Data1 Timestamp4 Packet loss Data2 Timestamp5 Data3 Data4 ●●● Number of packets Time n bytes  And now with Packet Loss time Wait time TCP: tcpdelay How does TCP move CBR data? • Want to examine how TCP moves Constant Bit Rate Data • VLBI Application Protocol • tcpdelay a test program: • instrumented TCP program emulates sending CBR Data. • Records relative 1-way delay • Web100 Record TCP Stack activity ESLEA PMB, Manchester, 23 Nov 2006, R. Hughes-Jones Manchester

  6. TCP: tcpdelay – Visualising the Results (Stephen Kershaw)
  [Plot: message arrival time vs message number / time – after a packet loss the stream is delayed relative to the expected CBR arrival times]
  • If throughput is NOT limited by the TCP buffer size / Cwnd, maybe we can re-sync with the CBR arrival times
  • Need to store the CBR messages in the TCP buffer during the Cwnd drop
  • Then transmit faster than the CBR rate to catch up (a rough estimate of what this requires is sketched below)
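
  A rough way to quantify the catch-up idea above, using symbols of my own choosing rather than anything from the slide: if the stream stalls for a time t_stall during the Cwnd drop, the TCP send buffer must absorb the CBR data produced in that interval, and the recovery time depends on how much faster than CBR the path can then go:

    B = R_{\mathrm{CBR}} \, t_{\mathrm{stall}}
        \qquad \text{(data buffered during the stall)}
    t_{\mathrm{catchup}} = \frac{B}{R_{\mathrm{peak}} - R_{\mathrm{CBR}}}

  So re-syncing is only possible when R_peak > R_CBR, i.e. when the achievable TCP rate is not already capped by the buffer size or Cwnd, which is the condition stated in the first bullet.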

  7. TCP: tcpdelay JB–Manc
  • Message size: 1448 bytes
  • Wait time: 22 µs
  • Data rate: 525 Mbit/s
  • Route: JB – Manchester
  • RTT ~1 ms
  • TCP buffer: 2 MB
  • Drop 1 in 10,000 packets
  • ~2.5–3 ms increase in time for about 2000 messages, i.e. ~44 ms
  • Classic Cwnd behaviour
  • Cwnd dip corresponds to ~1.2 Mbytes of data “delayed” (~810 packets)
  • Peak throughput ~620 Mbit/s
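
  The quoted figures hang together; for example (my arithmetic, not from the slide):

    1448\,\mathrm{B} \times 8 \,/\, 22\,\mu\mathrm{s} \approx 527\ \mathrm{Mbit/s} \approx \text{the 525 Mbit/s CBR rate}
    2000 \times 22\,\mu\mathrm{s} = 44\ \mathrm{ms} \ \text{of CBR data affected by the dip}
    810 \times 1448\,\mathrm{B} \approx 1.2\ \mathrm{Mbytes}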

  8. Arrival Times: UKLight JB–JIVE–Manc (Stephen Kershaw) – presented at the Haystack Workshop
  • Message size: 1448 bytes
  • Wait time: 22 µs
  • Data rate: 525 Mbit/s
  • Route: JB – UKLight – JIVE – UKLight – Manchester
  • RTT ~27 ms
  • TCP buffer: 32 Mbytes
  • BDP @ 512 Mbit/s: 1.8 Mbytes (see the check below)
  • Estimate: catch-up possible if loss < 1 in 1.24 M
  • Data needed for JIVE–Manc (~27 ms) and Chicago–Manc (~120 ms)
  • Have ~30 GBytes!
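
  A quick check of the bandwidth–delay product quoted above (my arithmetic):

    \mathrm{BDP} = 512\ \mathrm{Mbit/s} \times 27\ \mathrm{ms} \approx 13.8\ \mathrm{Mbit} \approx 1.7\ \mathrm{Mbytes}

  i.e. roughly 1200 messages of 1448 bytes in flight, comfortably inside the 32 Mbyte TCP buffer.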

  9. TCP: TCP Stacks, Sharing, Reverse Traffic
  • Delayed by provision of the UKLight link Manchester – Starlight
  • PC installed at Starlight and Manchester, Sep 2006
  • udpmon tests good:
    • Plateau ~990 Mbit/s wire rate
    • No packet loss
    • Same in both directions
  • TCP studies: work now in progress

  10. DCCP: The Application View
  • Stephen & Richard, with help from Andrea
  • Had problems with Fedora Core 6 using stable kernel 2.6.19-rc1:
    • DCCP data packets never reached the receiving TSAP!
    • Verified with tcpdump
  • Now using 2.6.19-rc5-g73fd2531-dirty
  • Ported udpmon to dccpmon (a socket-setup sketch follows this slide)
  • Some system calls don’t work
  • dccpmon tests:
    • Plateau ~990 Mbit/s wire rate
    • No packet loss
    • Receive system crashed!
  • iperf tests: 940 Mbit/s back-to-back
  • Need more instrumentation in DCCP, e.g. a line in /proc/sys/snmp
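
  Porting udpmon to dccpmon is largely a change of socket setup. A minimal sketch of opening and connecting a Linux DCCP socket of that era is below; the fallback #defines cover glibc headers that did not yet know DCCP, the service code 42 is an arbitrary example, and error handling is trimmed:

    /* Sketch: open and connect a DCCP socket on Linux. */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <stdlib.h>

    #ifndef SOCK_DCCP
    #define SOCK_DCCP 6
    #endif
    #ifndef IPPROTO_DCCP
    #define IPPROTO_DCCP 33
    #endif
    #ifndef SOL_DCCP
    #define SOL_DCCP 269
    #endif
    #ifndef DCCP_SOCKOPT_SERVICE
    #define DCCP_SOCKOPT_SERVICE 2
    #endif

    int dccp_connect(const char *ip, int port)
    {
        int fd = socket(AF_INET, SOCK_DCCP, IPPROTO_DCCP);
        if (fd < 0) { perror("socket"); exit(1); }

        /* DCCP requires a service code to be set before connect()/listen() */
        unsigned int service = htonl(42);           /* 42: arbitrary example value */
        if (setsockopt(fd, SOL_DCCP, DCCP_SOCKOPT_SERVICE,
                       &service, sizeof service) < 0)
            perror("setsockopt DCCP_SOCKOPT_SERVICE");

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            exit(1);
        }
        return fd;      /* send()/recv() from here on, as with UDP/TCP */
    }

  Once the socket is open, the dccpmon send/receive loop can follow the udpmon pattern, which is why missing or misbehaving system calls on the early kernels showed up quickly.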

  11. 10 Gigabit Ethernet Lab
  • 10 Gigabit test lab now set up in Manchester
  • Cisco 7600
  • Cross-campus λ, <1 ms
  • Neterion NICs
  • 4 Myricom 10 Gbit NICs – delivery this week
  • Chelsio being purchased
  • Boston/Supermicro X7DBE PCs: two dual-core Intel Xeon Woodcrest 5130 @ 2 GHz; PCI-e and PCI-X
  • B2B performance so far (SuperMicro X6DHE-G2):
    • Kernel (2.6.13) & driver dependent!
    • One iperf TCP data stream: 4 Gbit/s
    • Two bi-directional iperf TCP data streams: 3.8 & 2.2 Gbit/s
    • UDP disappointing
  • Installed Fedora Core 5 kernels 2.6.17 & 2.6.18 (+ web100 + packet drop) & 2.6.19 on the Intel dual-core PCs

  12. ESLEA-FABRIC: 4 Gbit flows over GÉANT
  • Set up a 4 Gigabit lightpath between GÉANT PoPs
  • Collaboration with Dante
  • GÉANT Development Network London – Amsterdam, and GÉANT Lightpath service CERN – Poznan
  • PCs in their PoPs with 10 Gigabit NICs
  • VLBI tests:
    • UDP performance: throughput, jitter, packet loss, 1-way delay, stability
    • Continuous (days) data flows – VLBI_UDP – and multi-Gigabit TCP performance with current kernels
    • Experience for FPGA Ethernet packet systems
  • Dante interests:
    • Multi-Gigabit TCP performance
    • The effect of (Alcatel) buffer size on bursty TCP using bandwidth-limited lightpaths

  13. Options Using the GÉANT Lightpaths
  • 10 Gigabit SDH backbone
  • Alcatel 1678 MCC
  • Node locations: Budapest, Geneva, Frankfurt, Milan, Paris, Poznan, Prague, Vienna
  • Traffic can be routed, so long-RTT paths can be made up
  • Ideal: London – Copenhagen
  • Set up a 4 Gigabit lightpath between GÉANT PoPs
  • Collaboration with Dante
  • PCs in Dante PoPs

  14. Network/PC Booking System
  • Based on the Meeting Room Booking System
  • Divided into links and end systems
  • Hard work by Stephen Kershaw
  • Testing with VLBI

  15. Any Questions?

  16. Backup Slides
