Data transfer over the wide area network with a large round trip time

H. Matsunaga, T. Isobe, T. Mashimo, H. Sakamoto, I. Ueda
International Center for Elementary Particle Physics (ICEPP), The University of Tokyo

Introduction

A WLCG Tier-2 site is running at ICEPP, the University of Tokyo, in Japan. The site receives a large amount of data of the ATLAS experiment, mainly from the Tier-1 site in Lyon, France. It is important to transfer the data in a timely manner to facilitate data analysis at the ICEPP site, but it is not easy to exploit the available bandwidth because of the large round trip time between the two sites. We performed data transfer tests in various configurations in order to understand the performance of the production system and to identify possible future improvements. The tests were carried out with gridFTP, which is used in the production system for data transfer over the wide area network (WAN), between test nodes at ICEPP and at CERN (near Lyon).

Network

Figure 1 shows the route used for the data transfer. From the ICEPP site, the route to Europe goes through SINET and GEANT, the Japanese and the European academic network, respectively. The bandwidth of the route is 10 Gbps, but it is shared with other traffic. To the Lyon site, RENATER provides a 10 Gbps link connected to GEANT, but for these tests the bandwidth was limited to 1 Gbps at CERN's HTAR (High Performance Access Route). The round trip time (RTT) is ~290 ms for both the Lyon-ICEPP and the CERN-ICEPP route, so the bandwidth-delay product (BDP) is 1 Gbps x 290 ms = 36 MB, which is the TCP window size needed to fully use the 1 Gbps bandwidth with a single TCP connection.

Fig. 1: Network route between Japan and Europe: Tokyo to New York over SINET3 (10 Gbps), then over GEANT (10 Gbps) in Europe.
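With N parallel TCP streams that share the load evenly (as in the gridFTP tests below), each stream needs only about BDP/N of TCP window to fill the pipe in aggregate. The short bash sketch below is illustrative only: it reproduces the arithmetic for the numbers quoted in this poster (1 Gbps bottleneck, ~290 ms RTT); the stream counts are example values, not the exact settings used in the tests.

#!/bin/bash
# Illustrative arithmetic only: bandwidth-delay product of the path described
# above (1 Gbps bottleneck, ~290 ms RTT) and the approximate per-stream TCP
# window needed when the load is spread over N parallel streams.
BW_BPS=$((1000*1000*1000))   # bottleneck bandwidth in bits per second (1 Gbps)
RTT_MS=290                   # round trip time in milliseconds

# BDP [bytes] = bandwidth [bit/s] * RTT [s] / 8
BDP_BYTES=$((BW_BPS * RTT_MS / 1000 / 8))
echo "BDP: ${BDP_BYTES} bytes (~$((BDP_BYTES / 1000 / 1000)) MB)"

# Approximate window needed per stream for N well-balanced parallel streams.
for N in 1 4 10 20; do
  echo "streams=${N}: ~$((BDP_BYTES / N / 1000)) kB window per stream"
done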
Test setup

At CERN and ICEPP, Linux test nodes were set up for the tests (the sender node at CERN and the receiver node at ICEPP). The system configuration relevant to the tests is listed below; a sketch of how such settings are typically applied is given at the end of this document.

OS: Scientific Linux CERN 4.7 x86_64
Kernel: 2.6.9-78.0.13.EL.cernsmp (TCP BIC; ICEPP and CERN) and 2.6.27 (TCP CUBIC; CERN)
RAM: 32 GB (CERN), 8 GB (ICEPP)
CPU: Xeon L5420 (CERN), Xeon 5160 (ICEPP)
RAID disk: 3ware (CERN), Infortrend (ICEPP); XFS; >80 MB/s for single read/write
NIC: Intel 1 Gbps (CERN), Chelsio 10 Gbps (ICEPP)

Kernel parameters:
net.ipv4.tcp_sack = 0
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 10000
txqueuelen (of NIC) = 10000

Software: iperf 2.0.4; gridFTP (Globus Toolkit 3.2.1 and 4.2.1)

iperf tests

Memory-to-memory tests were carried out first with iperf for varying numbers of streams and TCP window sizes (illustrative commands are sketched at the end of this document). On the sender node at CERN, Linux kernels 2.6.9 and 2.6.27 were both tried, and 2.6.27 performed slightly better. Major rate drops (packet losses) were rarely seen. When the aggregated window size was large enough, the transfer rate reached the limit of ~1 Gbps (~120 MB/s).

Fig. 2: rate vs. number of streams
Fig. 3: rate vs. time (typical cases)

gridFTP tests

Disk-to-disk tests were performed with gridFTP from Globus Toolkit 4.2.1 or 3.2.1 (gridFTP server 3.15 or 1.17, globus-url-copy 4.14 or 3.6). The time-dependent variation in the transfer rate was larger than in the iperf tests because of the relatively slow disk I/O. Figures 4-7 show the throughput of single-file (>1 GB) transfers in various configurations. Better performance was seen with more recent versions of the Globus Toolkit and of the Linux kernel; for the kernel, we observed more TCP window reductions with 2.6.9 (Fig. 8) than with 2.6.27.

Figure 9 shows the transfer rate per stream in a file transfer. In this test (GT 4.2.1, kernel 2.6.27), most streams were well balanced. Multiple-file transfers were also tested in a few cases (Figure 10), and the 1 Gbps bandwidth was almost saturated.

Fig. 4: GT 4.2.1, kernel 2.6.27
Fig. 5: GT 4.2.1, kernel 2.6.9
Fig. 6: GT 3.2.1, kernel 2.6.27
Fig. 7: GT 3.2.1, kernel 2.6.9
Fig. 8: example of packet loss during a file transfer
Fig. 9: transfer rate per stream
Fig. 10: rate vs. number of concurrent files

Production system

The Disk Pool Manager (DPM) has been deployed as the Storage Element at the ICEPP Tier-2 site. There are now 13 disk servers, on which the maximum TCP window size is limited to 2 MB under the 2.6.9 kernel (SLC4 x86_64). For the data transfer from Lyon to ICEPP, the numbers of concurrent files and gridFTP streams have been set to 20 and 10, respectively, in the File Transfer Service (FTS). The best data transfer rate observed for Lyon-ICEPP was ~500 MB/s (Figure 11). At that time (May 2008) we had only 6 disk servers, while >30 disk servers were used in Lyon.

Fig. 11: best data transfer rate observed from the Lyon Tier-1 to the ICEPP Tier-2

Conclusions

We have tested data transfer between Europe and Japan via New York. A newer Linux kernel (and hence a newer TCP implementation) and a newer gridFTP version perform better than the previous ones. In a nearly optimal configuration, we could use the available bandwidth of ~1 Gbps with gridFTP. In the production system we have observed a data transfer rate of ~500 MB/s from Lyon to ICEPP, even though the configuration was not seriously tuned. Based on the test results shown above, we will change the system parameters to make better use of the available bandwidth. In the future, the performance should improve further with the newer TCP implementation in recent Linux kernels.
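As a concrete illustration of the kernel parameters listed in the Test setup section, the sketch below shows one common way to apply such TCP and NIC settings on an SLC4-era Linux node. It is a minimal sketch and not the script actually used for these tests; the interface name eth0 is an assumption, and the values simply repeat those listed above. To survive a reboot, the sysctl settings would normally go into /etc/sysctl.conf instead.

#!/bin/bash
# Minimal sketch (not the actual test script): apply the TCP/NIC tuning listed
# in the Test setup section. Run as root; "eth0" is an assumed interface name.
sysctl -w net.ipv4.tcp_sack=0
sysctl -w net.ipv4.tcp_dsack=0
sysctl -w net.ipv4.tcp_timestamps=0
sysctl -w net.ipv4.tcp_no_metrics_save=0
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.core.netdev_max_backlog=10000

# Longer transmit queue on the NIC (ifconfig was the usual tool on SLC4).
ifconfig eth0 txqueuelen 10000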
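The commands below illustrate the two kinds of measurement referred to in the iperf and gridFTP sections: a memory-to-memory iperf test with parallel streams and an explicit TCP window, and a disk-to-disk transfer with globus-url-copy using parallel streams and a large TCP buffer. This is a sketch only: the host names, paths, stream counts and window sizes are placeholders, not the exact values used in the measurements.

#!/bin/bash
# Illustrative only: hosts, paths, stream counts and window sizes are placeholders.

# Memory-to-memory test. On the receiver (ICEPP side):  iperf -s -w 8M
# On the sender (CERN side): 4 parallel streams, 8 MB window each, 60 s,
# rates reported in MBytes/s.
iperf -c receiver.example.org -P 4 -w 8M -t 60 -f M

# Disk-to-disk test: globus-url-copy with 10 parallel streams, an 8 MB TCP
# buffer per stream and verbose performance output (-vb).
globus-url-copy -vb -p 10 -tcp-bs 8388608 \
  gsiftp://sender.example.org/data/testfile \
  file:///data/testfile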
