European Topology: NRNs & Geant

Presentation Transcript


  1. European Topology: NRNs & Geant – DataTAG, CERN, Sep 2002, R. Hughes-Jones, Manchester (title-slide topology diagram: SURFnet, UvA, Manc, RAL, SuperJANET4, CERN)

  2. Gigabit Throughput on the Production WAN • Manc – RAL 570 Mbit/s • 91% of the 622 Mbit access link between SuperJANET4 and RAL • 1472 bytes, propagation ~21 µs • Manc – UvA (SARA) 750 Mbit/s • SJANET4 + Geant + SURFnet • Manc – CERN 460 Mbit/s • CERN PC had a 32-bit PCI bus
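A quick check of the arithmetic behind these figures (a minimal sketch in Python; the 28-byte UDP/IP header allowance is an assumption, not a value given on the slide):

    # Utilisation of the 622 Mbit/s access link by the 570 Mbit/s Manc-RAL flow
    print(f"utilisation: {570e6 / 622e6:.0%}")                          # ~92% (slide quotes 91%)

    # Time on the wire for one 1472-byte UDP payload (+28 bytes UDP/IP header) at 570 Mbit/s
    print(f"per-packet time: {(1472 + 28) * 8 / 570e6 * 1e6:.1f} us")   # ~21 us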

  3. Gigabit TCP Throughput on the Production WAN • Throughput vs TCP buffer size • TCP window sizes in Mbytes calculated from RTT*bandwidth
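The window sizes plotted are bandwidth-delay products. A small sketch of the calculation; the RTT and bandwidth values below are illustrative assumptions, not taken from the slide:

    def tcp_window_mbytes(rtt_ms, bandwidth_mbit_s):
        """Bandwidth-delay product: the TCP window needed to keep the path full."""
        return rtt_ms / 1e3 * bandwidth_mbit_s * 1e6 / 8 / 1e6

    # Illustrative path parameters only
    for path, rtt_ms, bw in [("Manc-RAL", 7, 622), ("Manc-UvA (SARA)", 15, 1000), ("Manc-CERN", 20, 622)]:
        print(f"{path}: {tcp_window_mbytes(rtt_ms, bw):.2f} Mbytes")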

  4. Gigabit TCP on the Production WAN Manc–CERN • Throughput vs n streams • Default buffer size: slope ~25 Mbit/s/stream up to 9 streams, then 15 Mbit/s/stream (worked out in the sketch below) • With larger buffers the rate of increase per stream is larger • Plateaus at about 7 streams, giving a total throughput of ~400 Mbit/s
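The ~25 Mbit/s-per-stream slope with default buffers is roughly what a window-limited stream gives over the Manchester–CERN path. A sketch, where the 64 kbyte default socket buffer and the 20 ms RTT are assumptions rather than values given here:

    default_buffer_bytes = 64 * 1024        # assumed default socket buffer size
    rtt_s = 0.020                           # assumed Manc-CERN RTT

    # A window-limited stream carries at most one buffer's worth of data per RTT
    per_stream = default_buffer_bytes * 8 / rtt_s / 1e6
    print(f"~{per_stream:.0f} Mbit/s per stream")     # ~26 Mbit/s, close to the observed ~25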

  5. UDP Throughput: SLAC – Manc • SLAC – Manc 470 Mbit/s • 75% of the 622 Mbit access link • SuperJANET4 peers with ESnet at 622 Mbit/s in NY

  6. Gigabit TCP Throughput Manc–SLAC (Les Cottrell, SLAC) • Throughput vs n streams • Much less than for the European links • Buffer required: RTT*BW (622 Mbit) = ~14 Mbytes • With buffers larger than the default, the rate of increase per stream is ~5.4 Mbit/s/stream • No plateau • Consistent with iperf • Why do we need so many streams? (see the estimate below)
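One way to read the ~5.4 Mbit/s-per-stream slope: each stream appears to contribute an effective window of only slope × RTT, far below the ~14 Mbyte bandwidth-delay product, which is why so many streams are needed to fill the path. A sketch, where the ~180 ms RTT is an assumption consistent with the ~14 Mbyte figure quoted above at 622 Mbit/s:

    rtt_s = 0.18                            # assumed Manc-SLAC RTT
    bdp_bytes = rtt_s * 622e6 / 8
    print(f"bandwidth-delay product: {bdp_bytes / 1e6:.1f} Mbytes")                  # ~14 Mbytes

    slope_bps = 5.4e6                       # observed throughput gained per extra stream
    window_per_stream = slope_bps * rtt_s / 8
    print(f"effective window per stream: {window_per_stream / 1e3:.0f} kbytes")      # ~122 kbytes

    print(f"streams needed to fill the path: {bdp_bytes / window_per_stream:.0f}")   # ~115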

  7. iGrid2002 Radio Astronomy data movement (1) • Arrival times • Slope corresponds to > 2 Gbit/s – not physical! • 1.2 ms steps every 79 packets • Buffer required: ~120 kbytes • Average slope: 560 Mbit/s – agrees with (bytes received)/(time taken)
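The ~120 kbyte buffer and the 560 Mbit/s average follow from the burst structure described here: 79 packets arrive back-to-back at over 2 Gbit/s, then a 1.2 ms pause. A sketch of that arithmetic, assuming ~1500-byte frames and a 2 Gbit/s burst arrival rate:

    packets_per_burst = 79
    frame_bytes = 1500                      # assumed frame size on the wire
    step_s = 1.2e-3                         # flat step between bursts (from the slide)
    burst_rate_bps = 2e9                    # bursts arrive at >2 Gbit/s (from the slide)

    burst_bytes = packets_per_burst * frame_bytes
    print(f"buffer to absorb one burst: {burst_bytes / 1e3:.0f} kbytes")    # ~118 kbytes, i.e. the quoted ~120

    # One cycle = one burst arriving at ~2 Gbit/s plus the 1.2 ms step
    cycle_s = burst_bytes * 8 / burst_rate_bps + step_s
    print(f"average rate: {burst_bytes * 8 / cycle_s / 1e6:.0f} Mbit/s")    # ~566 Mbit/s, close to the quoted 560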

  8. iGrid2002 Radio Astronomy data movement (2) • Arrival times • Slope corresponds to 123 Mbit/s – agrees! • 1-way delay is flat • Suggests that the interface/driver are being clever with the interrupts!

  9. iGrid2002 UDP Throughput: Intel Pro/1000 • Motherboard: SuperMicro P4DP6, Chipset: Intel E7500 (Plumas) • CPU: Dual Xeon Prestonia (2 CPU/die) 2.2 GHz, Slot 4: PCI, 64 bit, 66 MHz • RedHat 7.2, Kernel 2.4.18 • Max throughput 700 Mbit/s • Loss only when at wire rate • Loss not due to user → kernel moves • Receiving CPU load ~15% at 1472 bytes
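Throughput and loss figures like these come from UDPmon-style tests: a paced stream of sequence-numbered UDP packets is sent and the receiver counts what arrives. A minimal receiver-side sketch of that accounting in Python; the port number and 4-byte sequence header are illustrative, not UDPmon's actual wire format:

    import socket, struct, time

    def udp_receiver(port=14196, timeout_s=2.0):
        """Count sequence-numbered UDP packets and report throughput and loss."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        sock.settimeout(timeout_s)
        received = nbytes = 0
        first = last = last_seq = None
        while True:
            try:
                data, _ = sock.recvfrom(2048)
            except socket.timeout:
                break                                    # assume the sender has finished
            last = time.time()
            if first is None:
                first = last
            received += 1
            nbytes += len(data)
            last_seq = struct.unpack("!I", data[:4])[0]  # 4-byte sequence number at the front
        if received < 2:
            return
        sent = last_seq + 1                              # assumes the final packet arrived
        print(f"throughput: {nbytes * 8 / (last - first) / 1e6:.0f} Mbit/s")
        print(f"lost: {sent - received} of {sent}")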

  10. Gigabit iperf TCP from iGrid2002

  11. Work on End Systems: PCI: SysKonnect SK-9843 • Motherboard: SuperMicro 370DLE, Chipset: ServerWorks III LE • CPU: PIII 800 MHz, PCI: 64 bit, 66 MHz • RedHat 7.1, Kernel 2.4.14 • SK301: 1400 bytes sent, wait 20 µs • SK303: 1400 bytes sent, wait 10 µs • Gig Ethernet frames are back-to-back on the wire • Can drive at line speed • Cannot go any faster!
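The matching sender side of such a test sends fixed-size frames with a chosen inter-packet wait (20 µs, 10 µs, ...). A sketch, again illustrative rather than the real UDPmon sender; it busy-waits because sleep() cannot resolve tens of microseconds:

    import socket, struct, time

    def paced_send(host, port=14196, npackets=10000, payload=1400, wait_us=20):
        """Send fixed-size, sequence-numbered UDP packets with a set inter-packet wait."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        filler = b"\x00" * (payload - 4)
        gap = wait_us * 1e-6
        next_send = time.perf_counter()
        for seq in range(npackets):
            sock.sendto(struct.pack("!I", seq) + filler, (host, port))
            next_send += gap
            while time.perf_counter() < next_send:       # busy-wait: sleep() is far too coarse for ~10 us
                pass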

  12. PCI: Intel Pro/1000 • Motherboard: SuperMicro 370DLE, Chipset: ServerWorks III LE • CPU: PIII 800 MHz, PCI: 64 bit, 66 MHz • RedHat 7.1, Kernel 2.4.14 • IT66M212: 1400 bytes sent, wait 11 µs • ~4.7 µs on the send PCI bus • PCI bus ~45% occupancy • ~3.25 µs on the PCI bus for data receive • IT66M212: 1400 bytes sent, wait 11 µs • Packets lost • Action of pause packet?
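The ~45% occupancy figure is consistent with the timings quoted: the transfer occupies the send-side PCI bus for ~4.7 µs out of every 11 µs inter-packet interval. A sketch of the arithmetic, where the 528 Mbyte/s figure is the ideal peak burst rate of a 64-bit/66 MHz PCI bus:

    pci_bytes_per_s = 8 * 66e6              # 64-bit, 66 MHz PCI: ~528 Mbyte/s peak
    frame_bytes = 1400

    print(f"ideal burst transfer: {frame_bytes / pci_bytes_per_s * 1e6:.2f} us")
    # ~2.65 us; the measured ~4.7 us also includes descriptor/CSR traffic

    print(f"send-side PCI occupancy: {4.7 / 11:.0%}")     # ~43%, roughly the quoted ~45%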

  13. Packet Loss: Where? (Diagram: UDPmon / UDP / IP / Eth drv / HW stacks on sender and receiver, joined through a Gig Ethernet switch, annotated with the counters N Gen, N Transmit, N Received, N Lost and InDiscards.) • Intel Pro 1000 on 370DLE • 1472 byte packets • No loss at the switch • But pause packets seen at the sender • Expected loss is in the transmitter! • Counters read from /proc/net/snmp
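The InDiscards counter mentioned here is a field on the Ip line of /proc/net/snmp. A small sketch for sampling it before and after a test run:

    def snmp_counters(proto="Ip"):
        """Return the counters for one protocol line of /proc/net/snmp as a dict."""
        with open("/proc/net/snmp") as f:
            lines = [line.split() for line in f if line.startswith(proto + ":")]
        names, values = lines[0][1:], lines[1][1:]   # header line, then value line
        return dict(zip(names, (int(v) for v in values)))

    before = snmp_counters("Ip")
    # ... run the UDP test here ...
    after = snmp_counters("Ip")
    print("IP-layer InDiscards during the test:", after["InDiscards"] - before["InDiscards"])
    print("UDP datagrams delivered:", snmp_counters("Udp")["InDatagrams"])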

  14. High Speed TCP • Gareth & Yee implemented mods to TCP – Sally Floyd's 2002 draft RFC • Congestion avoidance • Interest in exchanging stacks: • Les Cottrell, SLAC • Bill Allcock, Argonne
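The mods follow the HighSpeed TCP proposal (Sally Floyd's 2002 draft, later RFC 3649): congestion avoidance keeps its additive-increase/multiplicative-decrease shape, but the increase a(w) and decrease b(w) become functions of the current window instead of the fixed 1 and 1/2. A simplified sketch of that response function; the constants are those of the published RFC, not necessarily of the Manchester kernel patch:

    from math import log

    LOW_WINDOW, HIGH_WINDOW, HIGH_DECREASE = 38, 83000, 0.1    # constants from RFC 3649

    def b(w):
        """Multiplicative decrease: 0.5 at w = 38 segments, falling to 0.1 at w = 83000."""
        if w <= LOW_WINDOW:
            return 0.5
        return 0.5 + (HIGH_DECREASE - 0.5) * (log(w) - log(LOW_WINDOW)) / (log(HIGH_WINDOW) - log(LOW_WINDOW))

    def a(w):
        """Additive increase in segments per RTT: 1 at w = 38, roughly 70 at w = 83000."""
        if w <= LOW_WINDOW:
            return 1.0
        p = 0.078 / w ** 1.2            # inverse of the HighSpeed response function w = 0.12 / p^0.835
        return w * w * p * 2 * b(w) / (2 - b(w))

    # Congestion avoidance, per RTT without loss:  cwnd += a(cwnd)
    # On a loss event:                             cwnd -= b(cwnd) * cwnd
    for w in (38, 1000, 10000, 83000):
        print(f"cwnd = {w:6d} segments: a(w) = {a(w):5.1f}, b(w) = {b(w):.2f}")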

  15. UDP Throughput: Intel Pro/1000 on B2B P4DP6 • Motherboard: SuperMicro P4DP6, Chipset: Intel E7500 (Plumas) • CPU: Dual Xeon Prestonia (2 CPU/die) 2.2 GHz, Slot 4: PCI, 64 bit, 66 MHz • RedHat 7.2, Kernel 2.4.14 • Max throughput 950 Mbit/s • Some throughput drop for packets > 1000 bytes • Loss is NIC dependent • Loss not due to user → kernel moves • Traced to discards in the receiving IP layer ???

  16. Interrupt Coalescence: Latency • Intel Pro 1000 on 370DLE, 800 MHz CPU

  17. Interrupt Coalescence: Throughput • Intel Pro 1000 on 370DLE
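A rough model of the trade-off shown in these two plots: coalescing interrupts cuts the interrupt rate (and hence CPU load) at wire rate, but a lone packet can wait up to the full coalescence timer before being delivered. The timer settings below are illustrative, not the ones measured:

    frame_bytes = 1472 + 28 + 18            # UDP payload + UDP/IP headers + Ethernet framing (approximate)
    pkt_rate = 1e9 / (frame_bytes * 8)      # packets/s at gigabit wire rate, ~82k

    for timer_us in (0, 20, 64, 250):       # illustrative coalescence timer settings
        irq_rate = pkt_rate if timer_us == 0 else min(pkt_rate, 1e6 / timer_us)
        print(f"timer {timer_us:3d} us: ~{irq_rate / 1e3:4.1f}k interrupts/s at wire rate, "
              f"up to ~{timer_us} us extra latency for a lone packet")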
