
Network Tests at CHEP


Presentation Transcript


  1. Network Tests at CHEP K. Kwon, D. Han, K. Cho, J.S. Suh, D. Son (Center for High Energy Physics, KNU, Korea); H. Park (Supercomputing Center, KISTI, Korea). Representing the HEP Working Group for ANF / the HEP Data Grid WG. The 3rd International Workshop on HEP Data Grid, August 26, 2004, Daegu, Korea

  2. Introduction • Network Tests using Iperf • Domestic tests • International tests (USA, Europe) • Real File Transfer Tests using bbFTP • Domestic tests • International tests (Europe) • Summary & Future Work

  3. HEP Data Grid • Implementation of the LHC Tier-1 regional data center • Networking • Tier0 (CERN) – Tier1 (CHEP): ~2.5 Gbps via TEIN • Tier1 (CHEP) – Tier1 (US and Japan): ~Gbps via APII • Tier1 (CHEP) – Tier2/Tier3 (inside Korea): 0.155–1 Gbps via KOREN/KREONET • Computing: 1000-CPU Linux clusters • Data storage capability • Storage: 1.1 PB RAID-type disk (Tier1 + Tier2) • Tape drives: ~3.2 PB • HPSS servers

  4. Network at CHEP & Available Research Networks [network diagram] External connectivity: APII/KREONET2 toward the USA (2 × 622 Mbps), TransPAC (2.5 Gbps), DataTAG (2.5 Gbps), APII/Hyunhae-Genkai toward Japan (1 Gbps), TEIN toward CERN/Geneva (34 Mbps); domestically KOREN and KREONET, with a 1 Gbps link to the KNU CC. On site, a Cisco 7606 and a Cisco 6509 connect the network test PC, servers/PCs (100 Mbps), clustered PCs, and the HSM servers.

  5. Test Tools • Iperf: a tool for measuring TCP and UDP bandwidth performance; by default, data is sent from the client's memory to the server's memory, so no disk I/O is involved • bbFTP: file-transfer software optimized for large files; supports multi-stream transfers and large windows • TCP Reno: the TCP congestion control of Linux 2.4.26, used unless stated otherwise
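For reference, a minimal sketch of how such an Iperf measurement can be run (the slides do not show the exact commands; the host name is a placeholder and the flags follow Iperf 1.x usage):

# on the receiving host: Iperf in server mode with an enlarged window
iperf -s -w 20M
# on the sending host: a 10-minute TCP test toward the server
iperf -c <server-host> -w 20M -t 600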

  6. Factors affecting TCP performance • Window size • Number of streams • MTU – not yet tried • Txqueuelen – no gain in performance • SACK – no gain in performance
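For reference, the last three knobs are typically adjusted on Linux 2.4 as follows (illustrative commands, not the exact ones used in the tests; eth0 is a placeholder interface name):

# jumbo frames: requires support on every device along the path
ifconfig eth0 mtu 9000
# lengthen the NIC transmit queue
ifconfig eth0 txqueuelen 2000
# disable selective acknowledgements
sysctl -w net.ipv4.tcp_sack=0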

  7. CHEP KOREN-NOC Iperf Test KOREN CHEP KNU KOREN-NOC 1 G 1 G RTT: ~2ms BDP=0.002*1000Mbps=0.25MB Max throughput: 920Mbps  GigabitEthernet3/1 -- KOREN Throughput of five streams: 916Mbps

  8. Single-Stream Tests between CHEP ↔ Caltech [path: KNU – KOREN (1 G) – Busan – Hyunhae/Genkai link – Tokyo – TransPAC (2.5 G) – LA – CalREN2 – Caltech] Duration: 10 min each with 10 min intervals, over the KOREN–TransPAC path (1 Gbps), 20 MB window, TCP (Linux 2.4.26). RTT: ~130 ms; BDP = 0.13 s × 1000 Mbps = 16 MB. Max throughput: 146 Mbps

  9. Multi-Stream Tests between CHEP ↔ Caltech Duration: 10 min each over the KOREN–TransPAC path (1 Gbps), with (streams × window) ≤ 100 MB. Max throughput: 783 Mbps. Traffic observed on the APII/Genkai link (APII-Juniper ge-0/1/0.1) and the TransPAC LA link (TPR2 so-2/0/0.0).
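A sketch of a multi-stream run that respects the (streams × window) ≤ 100 MB constraint (host name is a placeholder; the exact command is not shown on the slides):

# five parallel streams with 20 MB windows: 5 × 20 MB = 100 MB
iperf -c <server-host> -w 20M -P 5 -t 600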

  10. Single-Stream Tests between CHEP ↔ CERN [path: KNU – KOREN (1 G) – Busan – Hyunhae/Genkai link – Tokyo – TransPAC (2.5 G) – LA – Abilene – Chicago – DataTAG (2.5 G) – CERN] Duration: 10 min each with 10 min intervals, over the KOREN–TransPAC path, 40 MB window, TCP (Linux 2.4.26). RTT: ~370 ms; BDP = 0.37 s × 1000 Mbps = 46 MB. Max throughput: 99 Mbps

  11. Multi-Stream Tests between CHEP ↔ CERN Duration: 10 min each over the KOREN–TransPAC path (1 Gbps), with (streams × window) ≤ 100 MB. Max throughput: 714 Mbps. Traffic observed on the APII/Genkai link (APII-Juniper ge-0/1/0.1) and the TransPAC LA link (TPR2 so-2/0/0.0).

  12. Other TCP Stacks
Setup for FAST TCP:
net.ipv4.tcp_rmem = 4096 33554422 134217728
net.ipv4.tcp_wmem = 4096 33554422 134217728
net.ipv4.tcp_mem = 4096 33554422 134217728
txqueuelen = 1000
Setup for HS-TCP:
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 87380 67108864
net.ipv4.tcp_mem = 8388608 8388608 67108864
txqueuelen = 1000
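A sketch of how the buffer settings above are applied on a running system (this assumes the FAST TCP or HS-TCP kernel patches are already installed; only the FAST TCP values from the slide are shown, and eth0 is a placeholder interface name):

# enlarge the TCP socket buffer limits (min / default / max)
sysctl -w net.ipv4.tcp_rmem="4096 33554422 134217728"
sysctl -w net.ipv4.tcp_wmem="4096 33554422 134217728"
sysctl -w net.ipv4.tcp_mem="4096 33554422 134217728"
# lengthen the NIC transmit queue
ifconfig eth0 txqueuelen 1000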

  13. Real File Transfer between KNU & KOREN over Linux TCP
[root@cluster90 bbftpc]# ./bbftp -V -e 'setrecvwinsize 1024; setsendwinsize 1024; put ams' -u root 203.255.252.26
Password:
>> USER root PASS
<< bbftpd version 3.0.2 : OK
>> COMMAND : setremotecos 0
<< OK : COS set
>> COMMAND : setrecvwinsize 1024
<< OK
>> COMMAND : setsendwinsize 1024
<< OK
>> COMMAND : put ams ams
<< OK
1024000000 bytes send in 19.8 secs (5.06e+04 Kbytes/sec or 395 Mbits/s)

  14. I/O test run rules • The maximum file size should be greater than the total physical memory to get accurate results (Iozone file-system benchmark) • Perform I/O worth 40× the physical RAM size to minimize the error due to I/O being served from cache (3ware white paper)
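Following these rules, a minimal Iozone run might look as follows (file size and path are placeholders; -s must exceed the machine's physical RAM):

# sequential write (-i 0) and read (-i 1) of an 8 GB file in 1 MB records
iozone -i 0 -i 1 -r 1024k -s 8g -f /raid0/iozone.tmp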

  15. Real File Transfer (100 GB) between KNU & KOREN [setup: KNU file server (AMD Opteron dual, tuned RAID 0; Iozone read 197 MB/s, write 178 MB/s) – 1 G – KOREN (2.5 G) – 1 G – Xeon 2 GHz dual with an ATA disk drive at KOREN-NOC, Daejeon] Time taken: 1 hr 20 min 58 sec. Average throughput: 164 Mbps

  16. Real File Transfer (1 TB) between KNU & KOREN [setup: KNU file server (tuned RAID 0; Iozone read 197 MB/s, write 178 MB/s) – 1 G – KOREN – 1 G to each of five machines A–E at KOREN-NOC in Daejeon; 200 GB sent to each] Times: A 3 hr 13 min 20 sec, B 3 hr 13 min 31 sec, C 3 hr 14 min 36 sec, D 3 hr 14 min 46 sec, E 3 hr 11 min 22 sec. Aggregate throughput: 701 Mbps
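One plausible way to drive the five concurrent transfers from the file server, using bbFTP as on slide 13 (the slides do not show the actual commands; host and file names are placeholders):

# push 200 GB to each of the five KOREN-NOC machines in parallel
for host in nodeA nodeB nodeC nodeD nodeE; do
    ./bbftp -V -e "setrecvwinsize 1024; setsendwinsize 1024; put file200gb" -u root $host &
done
wait   # return once all five transfers have completed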

  17. File Transfer (100 GB) with Lustre • Lustre (Linux + Cluster): a distributed file system for large clusters [setup: client connected over 1 G GigE to a Lustre file system of one MDS and several OSTs; 100 GB transferred from the KNU file server] Time taken: 52 min 55 sec. Throughput: 251 Mbps

  18. Real File Transfer between KNU & CERN over HS-TCP
[kihwan@w01gva bbftpc]$ ./bbftp -V -e 'setrecvwinsize 41024; setsendwinsize 41024; cd /d/Bandwidth/BBftp/bbftp-3.0.2/bbftpd; get ams' -u root cluster90.knu.ac.kr
Password:
>> USER root PASS
<< bbftpd version 3.0.2 : OK
>> COMMAND : setremotecos 0
<< OK : COS set
>> COMMAND : setrecvwinsize 41024
<< OK
>> COMMAND : setsendwinsize 41024
<< OK
>> COMMAND : cd /d/Bandwidth/BBftp/bbftp-3.0.2/bbftpd
<< OK : Current remote directory is /d/Bandwidth/BBftp/bbftp-3.0.2/bbftpd
>> COMMAND : get ams ams
<< OK
1024000000 bytes got in 47.7 secs (2.1e+04 Kbytes/sec or 164 Mbits/s)

  19. Summary • A high-bandwidth network is essential for the HEP Data Grid • Domestic links • Only the window size needs to be adjusted to fully utilize the available bandwidth • Real file transfers show that speed is limited by physical I/O rather than by the network • International links • Single stream: ~100 Mbps • Parallel streams are needed to achieve significant throughput • Other TCP stacks may help improve performance • Further tests and investigation are needed

  20. Future Work • Jumbo frames (9000-byte MTU) • File transfer using RAID disks between KNU and CERN • Tests over lambda networks

  21. Terabyte Transfer Test • Equipment: 4 TB RAID, two network test machines [setup: CHEP file servers – KREONET IPv6 (with lambda), 2.5 Gbps – KISTI]
