
Brief Report on Tests Related to the e-VLBI Project




Presentation Transcript


  1. Brief Report on Tests Related to the e-VLBI Project Richard Hughes-Jones, The University of Manchester • DataGrid WP7 – Dante Tests on the GÉANT Core • End-2-End Measurements from the 4th Year VLBI Project at Manchester • DiskPack-2-Memory Throughput and PCI Activity in a Mark5 PC • Update on 1 and 10 Gigabit Ethernet NICs in the PC JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester

  2. DataGrid WP7 – Dante Tests on the GÉANT Core • Set-up • Supermicro PC in: • London GÉANT PoP • Amsterdam GÉANT PoP • Smartbits in: • London GÉANT PoP • Frankfurt GÉANT PoP • Long link UK-SE-DE2-IT-CH-FR-BE-NL • Short link UK-FR-BE-NL JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester

  3. Tests GÉANT Core: UDP throughput • UDP Throughput • London-Amsterdam • Throughput flat at the available BW down to the time to put the packet on the wire, then falls as 1/t • Wire rate 998 Mbit/s for packets > 1400 bytes • Packet Loss: none for large packets • Dips in BW linked to packet loss • SysKonnect NIC: 1 interrupt per packet • CPU load important JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester
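
As a rough illustration of the "flat then 1/t" shape (a sketch only, not the measurement tool used for these tests; the frame-overhead constants are standard Ethernet/IP/UDP values rather than figures from the slide):

```python
# Sketch: model of the "flat then 1/t" UDP throughput curve.
# For a given packet size the sender cannot exceed the wire rate; once the
# requested inter-packet spacing exceeds the time needed to put the packet
# on the wire, throughput falls as 1/t.
LINE_RATE = 1e9          # bit/s, Gigabit Ethernet
OVERHEAD = 18 + 20 + 8   # Ethernet header+CRC, IP header, UDP header (bytes)
IFG_PREAMBLE = 12 + 8    # inter-frame gap + preamble (bytes)

def wire_time_us(user_bytes):
    """Time needed to put one UDP packet on a GigE wire, in microseconds."""
    total = user_bytes + OVERHEAD + IFG_PREAMBLE
    return total * 8 / LINE_RATE * 1e6

def user_throughput_mbit(user_bytes, spacing_us):
    """User-level throughput when packets are sent every `spacing_us` microseconds."""
    t = max(spacing_us, wire_time_us(user_bytes))  # cannot send faster than the wire
    return user_bytes * 8 / t                      # bits per microsecond == Mbit/s

for spacing in (0, 5, 10, 20, 40):
    print(spacing, "µs spacing ->", round(user_throughput_mbit(1400, spacing), 1), "Mbit/s")
```

Note that the 998 Mbit/s on the slide is the wire rate (headers included), so it sits a little above the user-level numbers this sketch produces.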

  4. Tests GÉANT Core: Packet re-ordering • Effect of Packet size • London-Amsterdam • Packets at 10 µs – line speed • 10,000 sent • Packet Loss ~ 0.1% • Re-order Distribution JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester

  5. Tests GÉANT Core: Packet re-ordering • Effect of LBE background • Amsterdam-London • BE Test flow • Packets at 10 µs – line speed • 10,000 sent • Packet Loss ~ 0.1% • Re-order Distributions: JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester
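
For context, a minimal sketch of how a re-order distribution like the ones on the two slides above can be built from received sequence numbers; the displacement metric chosen here is an assumption, not necessarily the exact definition used in these measurements:

```python
from collections import Counter

def reorder_distribution(received_seq):
    """For each received packet, count how many places later than its in-sequence
    position (among the packets that actually arrived) it turned up."""
    in_order_pos = {seq: k for k, seq in enumerate(sorted(received_seq))}
    dist = Counter()
    for arrival_pos, seq in enumerate(received_seq):
        displacement = arrival_pos - in_order_pos[seq]
        if displacement > 0:          # only count packets that arrived late
            dist[displacement] += 1
    return dict(dist)

# Example: packet 5 is overtaken by packets 6 and 7
print(reorder_distribution([0, 1, 2, 3, 4, 6, 7, 5, 8, 9]))
# -> {2: 1}   (one packet arrived 2 positions late)
```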

  6. VLBI Project: Test Topology • [Topology diagram: SURFnet, Manchester, JIVE Dwingeloo, Jodrell, SuperJANET4] • Adam Mathews, Steve O'Toole, Univ of Manchester JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester

  7. VLBI Project: Throughput • Manchester to Dwingeloo • 2.0 GHz Xeon to 1.2 GHz PIII • Re-ordering vs Offered Load JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester

  8. VLBI Project: Jitter & 1-way Delay • 1472 byte packets Manchester -> JIVE • FWHM 22 µs (B2B 3 µs) • 1-way delay – note the packet loss (points with zero 1-way delay) JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester
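
A small sketch of how an FWHM such as the 22 µs quoted above can be read off a 1-way-delay histogram; the bin width and the synthetic sample are illustrative only:

```python
import numpy as np

def fwhm_us(delays_us, bin_width_us=1.0):
    """Full width at half maximum of the 1-way-delay histogram, in microseconds."""
    bins = np.arange(min(delays_us), max(delays_us) + bin_width_us, bin_width_us)
    counts, edges = np.histogram(delays_us, bins=bins)
    half_max = counts.max() / 2.0
    above = np.where(counts >= half_max)[0]          # bins at or above half maximum
    return edges[above[-1] + 1] - edges[above[0]]

# Illustrative use with synthetic, roughly Gaussian delays around 15 ms:
rng = np.random.default_rng(0)
sample = 15000 + rng.normal(0, 22 / 2.355, size=10000)   # sigma ~ FWHM / 2.355
print(round(fwhm_us(sample), 1), "µs")
```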

  9. VLBI Project: Packet Loss Distribution • Measure the time between lost packets in the time series of packets sent • Lost 1410 in 0.6 s • Is it a Poisson process? • Assume Poisson is stationary: λ(t) = λ • Use the probability density function P(t) = λ e^(−λt) • Mean λ = 2360 /s (mean gap 426 µs) • Plot log: slope −0.0028, expect −0.0024 • Could be an additional process involved JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester
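
The slide's numbers are self-consistent: 1410 losses in 0.6 s give λ ≈ 2350/s, a mean inter-loss gap of ≈426 µs, and an expected log-plot slope of ≈ −0.0024 per µs. A short sketch of that check, plus one (assumed) way the slope could be fitted from measured inter-loss gaps:

```python
import numpy as np

n_lost, duration_s = 1410, 0.6
lam_per_s = n_lost / duration_s                 # ~2350 losses per second
print("lambda  ~", round(lam_per_s), "/s")
print("mean gap ~", round(1e6 / lam_per_s), "µs")
print("expected log-slope ~", -lam_per_s / 1e6, "per µs")

def fitted_loss_rate(gaps_us):
    """Fit ln(pdf) = ln(lambda) - lambda*t to a histogram of inter-loss gaps.
    `gaps_us` are the measured times between successive lost packets (µs)."""
    counts, edges = np.histogram(gaps_us, bins=50, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0                            # cannot take the log of empty bins
    slope, _ = np.polyfit(centres[keep], np.log(counts[keep]), 1)
    return -slope * 1e6                          # back to losses per second
```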

  10. VLBI Project: Packet Loss – Is it Poisson? • Divide time series of packets into 1000 slices of 50 packets • Total lost packets 1410 • Average number / slice = 1.4 • Calc Poisson probability P(n, µ) = µ^n e^(−µ) / n! • Curves close but not exact • Could be more than 1 process JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester
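
A sketch of the per-slice Poisson comparison described above; the helper names and the way slice counts are supplied are assumptions:

```python
import math
from collections import Counter

def poisson_pmf(n, mu):
    """P(n, mu) = mu^n * exp(-mu) / n!"""
    return mu**n * math.exp(-mu) / math.factorial(n)

def compare_with_poisson(losses_per_slice):
    """Compare observed losses-per-slice counts with the Poisson expectation."""
    mu = sum(losses_per_slice) / len(losses_per_slice)   # ~1.4 on the slide
    observed = Counter(losses_per_slice)
    n_slices = len(losses_per_slice)
    for n in range(max(losses_per_slice) + 1):
        expected = n_slices * poisson_pmf(n, mu)
        print(f"n={n}: observed {observed.get(n, 0):4d}  expected {expected:7.1f}")
```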

  11. VLBI Project: Packet Loss – Long Range Effects? • Aggregated Variance Method • Divide time series of length N into blocks of size m • Calc mean of each section Xm(k), k = 1 … N/m • Calc variance VXm of these Xm(k) • Vary m, the size of the blocks • Plot on log-log & fit slope β • Hurst parameter H: β = 2H − 2 • Measured β = −0.355, which gives H = 0.822 • H = 0.5 means no long range dependence JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester
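
A compact sketch of the aggregated variance method as described on the slide, ending with H = 1 + β/2 (from β = 2H − 2); the block sizes are illustrative:

```python
import numpy as np

def hurst_aggregated_variance(series, block_sizes=(10, 20, 50, 100, 200, 500)):
    """Estimate the Hurst parameter from the variance of block means."""
    x = np.asarray(series, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)   # X_m(k)
        log_m.append(np.log10(m))
        log_var.append(np.log10(means.var()))
    beta, _ = np.polyfit(log_m, log_var, 1)     # slope of the log-log plot
    return 1 + beta / 2                          # since beta = 2H - 2

print(1 + (-0.355) / 2)   # the slide's measured beta -> H = 0.8225 (~0.822)
```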

  12. Traffic Flows • Manchester – NetNorthWest – SuperJANET access links • Two 1 Gbit/s • Access links: SJ4 to GÉANT, GÉANT to SURFnet JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester

  13. Throughput & PCI transactions on the Mark5 PC • Test cycle: read / write n bytes, then wait • Mark5 uses Supermicro P3TDLE • 1.2 GHz PIII • Mem bus 133/100 MHz • 2 × 64 bit 66 MHz PCI • 4 × 32 bit 33 MHz PCI • [Diagram: Ethernet NIC, IDE Disc Pack and SuperStor Input Card on the PCI buses, monitored by a Logic Analyser Display] JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester
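
For reference, the raw bandwidths of the two PCI flavours listed above follow from standard bus arithmetic (width × clock); these are theoretical ceilings, not measurements from the slides:

```python
# Raw PCI bandwidth = bus width (bits) x clock (MHz), giving Mbit/s.
def pci_raw_mbit(width_bits, clock_mhz):
    return width_bits * clock_mhz

print("64-bit / 66 MHz PCI:", pci_raw_mbit(64, 66), "Mbit/s")   # ~4224 Mbit/s raw
print("32-bit / 33 MHz PCI:", pci_raw_mbit(32, 33), "Mbit/s")   # ~1056 Mbit/s raw
# Real transfers achieve less because of CSR setup/update accesses and bus
# arbitration, which is what the logic-analyser traces on the next slides show.
```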

  14. PCI Activity: Read 1 data block, 0 wait time • Data block 131,072 bytes • Read consists of: setup CSRs, data movement, update CSRs • Data block contains PCI bursts 4096 bytes long • For 0 wait between reads: data block 608 µs long, then 655 µs gap • PCI transfer rate 830 Mbit/s • Read_sstor rate 778 Mbit/s (97 Mbyte/s) • [Logic-analyser trace: data block, CSR access, PCI burst] JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester
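
One reading that reproduces the slide's 830 Mbit/s figure is to average the 131,072-byte block over the 608 µs transfer plus the 655 µs gap; a quick check (an interpretation, not stated explicitly on the slide):

```python
block_bytes = 131072
burst_us, gap_us = 608, 655

avg_rate = block_bytes * 8 / (burst_us + gap_us)     # bits per µs == Mbit/s
print(round(avg_rate), "Mbit/s averaged over block + gap")   # ~830 Mbit/s

# The application-level read_sstor rate quoted on the slide is a little lower:
print(97 * 8, "Mbit/s from 97 Mbyte/s")              # ~776 Mbit/s, i.e. the 778 Mbit/s figure
```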

  15. PCI Activity: Read multiple data blocks, 0 wait • Read 999,424 bytes • Each data block: setup CSRs, data movement, update CSRs • For 0 wait between reads: data blocks ~600 µs long, take ~6 ms in total, then a 744 µs gap • PCI transfer rate 1188 Mbit/s (148.5 Mbyte/s) • Read_sstor rate 778 Mbit/s (97 Mbyte/s) • PCI bus occupancy: 68.44% • Concern about Ethernet traffic: 64 bit 33 MHz PCI needs ~82% occupancy for 930 Mbit/s, so expect ~360 Mbit/s JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester
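
The same arithmetic roughly reproduces the multi-block rate, and scaling the quoted occupancy figures gives the ~360 Mbit/s Ethernet expectation; this is one plausible reading of the slide's numbers, not a statement of how they were derived:

```python
total_bytes = 999424
blocks_us, gap_us = 6000, 744            # ~6 ms of blocks plus the 744 µs gap

print(round(total_bytes * 8 / (blocks_us + gap_us)), "Mbit/s")   # ~1186 Mbit/s, near the quoted 1188

# Ethernet headroom: the disc-pack transfer leaves 100 - 68.44 = ~31.6 % of the
# bus free, while ~82 % occupancy would be needed to carry 930 Mbit/s, so the
# achievable Ethernet rate scales down accordingly.
free_pct, needed_pct = 100 - 68.44, 82
print(round(930 * free_pct / needed_pct), "Mbit/s expected for Ethernet")   # ~358 Mbit/s
```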

  16. PCI Activity: Read Throughput • Flat then 1/t dependence • ~860 Mbit/s for read blocks >= 262,144 bytes • CPU load ~20% • Concern about CPU load needed to drive Gigabit link JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester

  17. 10 GigEthernet: Throughput • 1500 byte MTU gives ~2 Gbit/s • Used 16144 byte MTU, max user length 16080 • DataTAG Supermicro PCs • Dual 2.2 GHz Xeon CPU, FSB 400 MHz • PCI-X mmrbc 512 bytes • wire rate throughput of 2.9 Gbit/s • CERN OpenLab HP Itanium PCs • Dual 1.0 GHz 64 bit Itanium CPU, FSB 400 MHz • PCI-X mmrbc 512 bytes • wire rate of 5.7 Gbit/s • SLAC Dell PCs • Dual 3.0 GHz Xeon CPU, FSB 533 MHz • PCI-X mmrbc 4096 bytes • wire rate of 5.4 Gbit/s JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester
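
A rough packet-rate comparison for the quoted throughputs, assuming full-size payloads per packet (1472 bytes of user data in a 1500-byte MTU frame, 16,080 bytes in a 16,144-byte MTU frame); illustrative only:

```python
def packets_per_second(throughput_gbit, payload_bytes):
    """Packets per second needed to sustain a given user-level throughput."""
    return throughput_gbit * 1e9 / (payload_bytes * 8)

print(round(packets_per_second(2.0, 1472)))     # ~170k pkt/s at 1500-byte MTU
print(round(packets_per_second(5.7, 16080)))    # ~44k pkt/s at 16144-byte MTU
# The larger MTU needs far fewer packets (and interrupts) per second for the
# same data rate, which is why it helps these CPU-limited transfers.
```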

  18. 10 GigEthernet: PCI-X Activity • Supermicro P4DP8-2G motherboard • [Logic-analyser traces: packet transmission (memory to NIC) and packet reception (NIC to memory), showing CSR accesses, 512 byte PCI-X segments, the transfer of 16114 bytes and the interrupt] JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester

  19. 10 GigEthernet: Tuning PCI-X JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester

  20. 10 GigEthernet at SC2003 BW Challenge (Phoenix) • Three server systems with 10 GigEthernet NICs • Used the DataTAG altAIMD stack, 9000 byte MTU • Streams from SLAC/FNAL booth in Phoenix to: • Palo Alto PAIX • Chicago Starlight • Amsterdam SARA JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester

  21. JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester

  22. Traffic QoS Classes on GÉANT Backbone – Max Throughput on 2.5 G PoS • Normal Traffic + Radio Astronomy Data + Less Than Best Effort – 2.0 Gbit/s • Normal Traffic + Less Than Best Effort – 2.0 Gbit/s • Normal Traffic + Radio Astronomy Data – 500 Mbit/s • Normal Traffic JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester

  23. Some Measurements made during ER2002 JIVE VLBI Network Meeting 28 Jan 2004 R. Hughes-Jones Manchester
