
10 Gigabit Ethernet Test Lab PCI-X Motherboards Related work & Initial tests


Presentation Transcript


  1. 10 Gigabit Ethernet Test Lab, PCI-X Motherboards, Related Work & Initial Tests. Richard Hughes-Jones, The University of Manchester, www.hep.man.ac.uk/~rich/ then “Talks”. CALICE UCL, 20 Feb 2006.

  2. Early 10 GE Tests: CERN & SLAC

  3. Throughput Measurements
• UDP throughput measured with udpmon: send a controlled stream of UDP frames spaced at regular intervals.
• Protocol between sender and receiver: zero the remote statistics (OK / done), send n-byte data frames at regular intervals with a chosen wait time, recording the time to send and the time to receive (inter-packet time histogram), signal the end of test (OK / done), then fetch the remote statistics.
• Statistics returned: number of packets received, number lost plus the loss pattern, number out-of-order, CPU load and number of interrupts, and the 1-way delay.
• Throughput is derived from the number of packets, the packet size n and the measured times.
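A minimal sketch of the paced-sender idea behind udpmon (this is not udpmon itself; the receiver address, port, packet size and spacing below are illustrative):

```c
/* Minimal sketch of udpmon's sending side: transmit fixed-size UDP frames
 * with a fixed inter-packet wait time, then report the achieved send rate.
 * Receiver address, port and timing values are illustrative. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

static double now_us(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1e6 + tv.tv_usec;
}

int main(void)
{
    const int    pkt_len = 8000;    /* user data bytes per frame (jumbo MTU) */
    const int    n_pkts  = 10000;   /* frames to send                        */
    const double wait_us = 20.0;    /* inter-packet spacing in microseconds  */
    char buf[16384];
    memset(buf, 0xA5, sizeof(buf));

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(5001);                      /* example port     */
    inet_pton(AF_INET, "192.168.1.2", &dst.sin_addr);  /* example receiver */

    double start = now_us();
    double next  = start;
    for (int i = 0; i < n_pkts; i++) {
        sendto(s, buf, pkt_len, 0, (struct sockaddr *)&dst, sizeof(dst));
        next += wait_us;
        while (now_us() < next)      /* busy-wait for precise spacing */
            ;
    }
    double elapsed_us = now_us() - start;

    printf("sent %d x %d bytes in %.0f us -> %.2f Gbit/s\n",
           n_pkts, pkt_len, elapsed_us,
           (double)n_pkts * pkt_len * 8.0 / (elapsed_us * 1000.0));
    close(s);
    return 0;
}
```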

  4. Possible Bottlenecks: PCI Bus & Gigabit Ethernet Activity
• PCI activity captured with a logic analyser, using PCI probe cards in both the sending and the receiving PC.
• [Diagram: CPU, memory, chipset, PCI bus and NIC in each PC, with the probes feeding the logic analyser display.]

  5. 10 Gigabit Ethernet: UDP Throughput
• A 1500 byte MTU gives ~2 Gbit/s; a 16144 byte MTU (maximum user data length 16080 bytes) was used instead.
• DataTAG Supermicro PCs: dual 2.2 GHz Xeon CPUs, 400 MHz FSB, PCI-X mmrbc 512 bytes, giving a wire-rate throughput of 2.9 Gbit/s.
• CERN OpenLab HP Itanium PCs: dual 1.0 GHz 64-bit Itanium CPUs, 400 MHz FSB, PCI-X mmrbc 4096 bytes, giving a wire rate of 5.7 Gbit/s.
• SLAC Dell PCs: dual 3.0 GHz Xeon CPUs, 533 MHz FSB, PCI-X mmrbc 4096 bytes, giving a wire rate of 5.4 Gbit/s.
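Jumbo frames are central to these numbers. As a rough illustration only, an interface's MTU can be raised programmatically on Linux with the SIOCSIFMTU ioctl; the interface name here is hypothetical, and in practice this is done with ifconfig or ip link:

```c
/* Minimal sketch: raise an interface's MTU on Linux via the SIOCSIFMTU ioctl.
 * The interface name is hypothetical; the size matches the 16144 byte MTU
 * used on the slide.  Needs root, and the NIC must support the size. */
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket will do for ioctls */
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth1", IFNAMSIZ - 1);  /* hypothetical 10 GE interface */
    ifr.ifr_mtu = 16144;                          /* jumbo MTU from the slide     */

    if (ioctl(s, SIOCSIFMTU, &ifr) < 0)
        perror("SIOCSIFMTU");
    else
        printf("%s MTU set to %d\n", ifr.ifr_name, ifr.ifr_mtu);

    close(s);
    return 0;
}
```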

  6. 10 Gigabit Ethernet: Tuning PCI-X
• 16080 byte packets sent every 200 µs using an Intel PRO/10GbE LR adapter.
• PCI-X bus occupancy measured with the logic analyser for mmrbc = 512, 1024, 2048 and 4096 bytes; each PCI-X sequence shows the CSR access, the data transfer and the interrupt & CSR update.
• Expected throughput from the measured PCI-X times: ~7 Gbit/s; measured throughput: 5.7 Gbit/s.
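The mmrbc value lives in bits 3:2 of the PCI-X command register and is normally inspected and changed with setpci. A minimal sketch that locates the PCI-X capability and decodes mmrbc, assuming a hypothetical sysfs path for the NIC (reading past the 64-byte header needs root):

```c
/* Minimal sketch: find a NIC's PCI-X capability and report its mmrbc
 * (maximum memory read byte count), the field the slides tune from 512 to
 * 4096 bytes.  The sysfs path is illustrative; setpci is the usual tool. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define PCI_CAP_ID_PCIX 0x07

int main(void)
{
    /* Hypothetical device path: substitute the NIC's bus address. */
    const char *path = "/sys/bus/pci/devices/0000:02:01.0/config";
    uint8_t cfg[256];

    int fd = open(path, O_RDONLY);
    if (fd < 0 || read(fd, cfg, sizeof(cfg)) < 64) {
        perror("read PCI config space");
        return 1;
    }
    close(fd);

    /* Walk the capability list, starting at the pointer in offset 0x34. */
    uint8_t pos = cfg[0x34];
    while (pos) {
        if (cfg[pos] == PCI_CAP_ID_PCIX) {
            uint16_t cmd = cfg[pos + 2] | (cfg[pos + 3] << 8); /* PCI-X command reg */
            int code = (cmd >> 2) & 0x3;           /* bits 3:2 encode mmrbc   */
            printf("PCI-X mmrbc = %d bytes\n", 512 << code);
            return 0;
        }
        pos = cfg[pos + 1];                         /* next capability         */
    }
    printf("no PCI-X capability found\n");
    return 1;
}
```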

  7. Manchester 10 GE Lab

  8. “Server Quality” Motherboards
• SuperMicro X5DPE-G2: dual 2.4 GHz Xeon, 533 MHz front-side bus.
• 6 PCI / PCI-X slots on 4 independent PCI buses: 64-bit 66 MHz PCI, 100 MHz PCI-X and 133 MHz PCI-X.
• Dual Gigabit Ethernet.
• UDMA/100 bus-master EIDE channels with burst data transfer rates of 100 MB/s.

  9. “Server Quality” Motherboards
• Boston/Supermicro H8DAR: two dual-core Opterons, HyperTransport.
• 200 MHz DDR memory, theoretical bandwidth 6.4 Gbit/s.
• 2 independent PCI buses with 133 MHz PCI-X.
• 2 Gigabit Ethernet ports, SATA, (PCI-e).

  10. 10 Gigabit Ethernet: iperf TCP Intel Results
• X5DPE-G2 Supermicro PCs back-to-back: dual 2.2 GHz Xeon CPUs, 533 MHz FSB, XFrame II NIC.
• PCI-X mmrbc 512 bytes, 1500 byte MTU, 2.5 Mbyte TCP buffer size: iperf throughput of 2.33 Gbit/s.
• PCI-X mmrbc 512 bytes, 9000 byte MTU: iperf rate of 3.92 Gbit/s.
• PCI-X mmrbc 4096 bytes, 9000 byte MTU: iperf rate of 3.94 Gbit/s.
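The 2.5 Mbyte TCP buffer is well above the Linux defaults. A minimal sketch of how an application requests such a buffer (iperf's -w option does the equivalent internally; the size shown is the one from the slide and must be allowed by net.core.wmem_max):

```c
/* Minimal sketch: request a large TCP send buffer and read back what the
 * kernel actually granted.  2.5 Mbyte matches the iperf tests on the slide. */
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    int requested = 2500 * 1024;          /* ~2.5 Mbyte, as on the slide */

    setsockopt(s, SOL_SOCKET, SO_SNDBUF, &requested, sizeof(requested));

    int granted = 0;
    socklen_t len = sizeof(granted);
    getsockopt(s, SOL_SOCKET, SO_SNDBUF, &granted, &len);

    /* Linux reports double the usable value (it doubles the request to cover
     * bookkeeping overhead), and silently caps it at net.core.wmem_max. */
    printf("requested %d bytes, kernel reports %d bytes\n", requested, granted);
    return 0;
}
```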

  11. 10 Gigabit Ethernet: UDP Intel Results
• X5DPE-G2 Supermicro PCs back-to-back: dual 2.2 GHz Xeon CPUs, 533 MHz FSB, XFrame II NIC, PCI-X mmrbc 4096 bytes.
• Low rates and large packet loss observed; cause not yet understood (???).

  12. PCI-X Signals from SC2005

  13. 10 Gigabit Ethernet: TCP Data Transfer on PCI-X
• Sun V20z dual Opterons, 1.8 GHz to 2.6 GHz, connected via a 6509 switch; XFrame II NIC.
• PCI-X mmrbc 4096 bytes, 66 MHz bus; the trace shows the data transfers and CSR accesses.
• Two 9000 byte packets sent back-to-back; average rate 2.87 Gbit/s.
• Bursts of packets of length 646.8 µs, with 343 µs gaps between bursts and 2 interrupts per burst.

  14. 10 Gigabit Ethernet: UDP Data Transfer on PCI-X
• Sun V20z dual Opterons, 1.8 GHz to 2.6 GHz, connected via a 6509 switch; XFrame II NIC.
• PCI-X mmrbc 2048 bytes, 66 MHz bus; the trace shows the data transfer and CSR accesses.
• One 8000 byte packet: 2.8 µs for the CSRs, 24.2 µs for the data transfer, an effective rate of 2.6 Gbit/s.
• 2000 byte packets with 0 µs wait: pauses of ~200 ms.
• 8000 byte packets with 0 µs wait: ~15 ms between data blocks.
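The effective rate quoted follows directly from the measured transfer time; a quick check of the arithmetic, using the values from the slide:

```c
/* Quick check of the effective PCI-X transfer rate quoted on the slide:
 * an 8000 byte packet crossed the bus in 24.2 us of data-transfer time. */
#include <stdio.h>

int main(void)
{
    double bytes       = 8000.0;   /* packet size from the slide          */
    double transfer_us = 24.2;     /* measured PCI-X data-transfer time   */
    double gbit_per_s  = bytes * 8.0 / (transfer_us * 1000.0);

    printf("%.0f bytes in %.1f us -> %.2f Gbit/s\n", bytes, transfer_us, gbit_per_s);
    /* Prints ~2.64 Gbit/s, matching the 2.6 Gbit/s effective rate quoted. */
    return 0;
}
```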

  15. Disk 2 Disk Tests: Building on SC2004 Work

  16. SC2004 Disk-Disk bbftp
• The bbftp file transfer program uses TCP/IP.
• UKLight path London-Chicago-London; PCs: Supermicro + 3Ware RAID0.
• MTU 1500 bytes; socket size 22 Mbytes; rtt 177 ms; SACK off.
• Move a 2 GByte file; Web100 plots.
• Standard TCP: average 825 Mbit/s (bbcp: 670 Mbit/s).
• Scalable TCP: average 875 Mbit/s (bbcp: 701 Mbit/s, ~4.5 s of overhead).
• Disk-TCP-Disk at 1 Gbit/s.
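The 22 Mbyte socket size is not arbitrary: it is the bandwidth-delay product for a 1 Gbit/s flow over the 177 ms London-Chicago-London path. A small check of that arithmetic:

```c
/* Quick check that the 22 Mbyte socket size on the slide matches the
 * bandwidth-delay product for a 1 Gbit/s path with a 177 ms round-trip time. */
#include <stdio.h>

int main(void)
{
    double rate_bit_s = 1e9;     /* target rate: 1 Gbit/s                  */
    double rtt_s      = 0.177;   /* round-trip time London-Chicago-London  */
    double bdp_bytes  = rate_bit_s * rtt_s / 8.0;

    printf("bandwidth-delay product = %.1f Mbytes\n", bdp_bytes / 1e6);
    /* Prints ~22.1 Mbytes: the TCP window (socket buffer) needed to keep a
     * 1 Gbit/s flow running over this path without stalling. */
    return 0;
}
```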

  17. Network & Disk Interactions (network-disk sub-system interactions)
• Hosts: Supermicro X5DPE-G2 motherboards; dual 2.8 GHz Xeon CPUs with 512 kbyte cache and 1 Mbyte memory.
• 3Ware 8506-8 controller on a 133 MHz PCI-X bus configured as RAID0; six 74.3 GByte Western Digital Raptor WD740 SATA disks, 64 kbyte stripe size.
• Measure memory-to-RAID0 transfer rates with & without UDP traffic (plot shows % CPU in kernel mode).
• Disk write alone: 1735 Mbit/s.
• Disk write + 1500 MTU UDP: 1218 Mbit/s, a drop of 30%.
• Disk write + 9000 MTU UDP: 1400 Mbit/s, a drop of 19%.
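A minimal sketch of the kind of memory-to-RAID0 write-rate measurement described here, assuming a hypothetical mount point /raid0; the real tests were more careful about caching and used repeated runs:

```c
/* Minimal sketch of a memory-to-disk write-rate measurement: time a large
 * sequential write to a file on the RAID0 array and report Mbit/s.  The path
 * and sizes are illustrative; a serious test would use O_DIRECT / repeated
 * runs to avoid page-cache effects. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    const size_t block = 1 << 20;          /* 1 Mbyte per write()            */
    const size_t total = 2048UL * block;   /* ~2 GBytes in total             */
    char *buf = malloc(block);
    memset(buf, 0x5A, block);

    int fd = open("/raid0/testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (size_t done = 0; done < total; done += block)
        if (write(fd, buf, block) != (ssize_t)block) { perror("write"); return 1; }
    fsync(fd);                              /* make sure data reaches the disks */
    gettimeofday(&t1, NULL);
    close(fd);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("wrote %.1f GBytes in %.2f s -> %.0f Mbit/s\n",
           total / 1e9, secs, total * 8.0 / (secs * 1e6));
    free(buf);
    return 0;
}
```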

  18. Any Questions?

  19. Backup Slides

  20. More Information: Some URLs
• UKLight web site: http://www.uklight.ac.uk
• MB-NG project web site: http://www.mb-ng.net/
• DataTAG project web site: http://www.datatag.org/
• UDPmon / TCPmon kit + writeup: http://www.hep.man.ac.uk/~rich/net
• Motherboard and NIC tests: http://www.hep.man.ac.uk/~rich/net/nic/GigEth_tests_Boston.ppt and http://datatag.web.cern.ch/datatag/pfldnet2003/
• “Performance of 1 and 10 Gigabit Ethernet Cards with Server Quality Motherboards”, FGCS special issue 2004, http://www.hep.man.ac.uk/~rich/
• TCP tuning information: http://www.ncne.nlanr.net/documentation/faq/performance.html and http://www.psc.edu/networking/perf_tune.html
• TCP stack comparisons: “Evaluation of Advanced TCP Stacks on Fast Long-Distance Production Networks”, Journal of Grid Computing 2004
• PFLDnet: http://www.ens-lyon.fr/LIP/RESO/pfldnet2005/
• Dante PERT: http://www.geant2.net/server/show/nav.00d00h002
