
ESLEA and HEP’s Work on UKLight Network



  1. ESLEA and HEP’s Work on UKLight Network

  2. ESLEA • Exploitation of Switched Lightpaths in E-Science Applications • Multi-disciplinary • Protocol Development • Exploitation by HEP (ATLAS and CDF), Radio-astronomers, E-Health, HPC • Using dedicated point-to-point lightpath channels on the research UKLight Network for R&D purposes • Bulk Data Transfers / Circuit Reservation and Deployment / Transport Protocols / Real-Time Visualisation

  3. HEP Connections • RAL-CERN • UCL-Fermilab • Lancaster-Edinburgh • RAL-Lancaster • SARA-Lancaster • Lancaster-Manchester

  4. Lancaster <-> Edinburgh Objectives • Investigate the use of an alternative protocol (in this case UDT) to maximise the potential of an optical circuit • Utilise this protocol in such a way as to be of practicable use to users of the Grid

  5. What is UDT? • UDT: UDP-based Data Transfer protocol • An application-level, end-to-end, unicast, reliable, connection-oriented data transport protocol • Achieves approximately 90% utilisation of available bandwidth
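
  To make the programming model concrete, here is a minimal UDT sender sketch against the C++ API of the UDT4 reference library (udt.h); the host address, port, and payload are placeholder values, not ones used in the ESLEA tests.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <cstring>
    #include <iostream>
    #include <udt.h>

    int main()
    {
        UDT::startup();  // initialise the UDT library

        // UDT runs over UDP but presents a reliable,
        // connection-oriented stream socket, much like TCP.
        UDTSOCKET sock = UDT::socket(AF_INET, SOCK_STREAM, 0);

        sockaddr_in serv;
        std::memset(&serv, 0, sizeof(serv));
        serv.sin_family = AF_INET;
        serv.sin_port = htons(9000);                      // placeholder port
        inet_pton(AF_INET, "192.0.2.1", &serv.sin_addr);  // placeholder host

        if (UDT::ERROR == UDT::connect(sock, (sockaddr*)&serv, sizeof(serv))) {
            std::cerr << "connect: " << UDT::getlasterror().getErrorMessage() << "\n";
            return 1;
        }

        // Send one block; UDT handles reliability and rate control internally.
        char buf[1024] = "test payload";
        if (UDT::ERROR == UDT::send(sock, buf, sizeof(buf), 0)) {
            std::cerr << "send: " << UDT::getlasterror().getErrorMessage() << "\n";
            return 1;
        }

        UDT::close(sock);
        UDT::cleanup();
        return 0;
    }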

  6. Servers • Hardware: • Dual Xeon 3.2 GHz dual-core • 2 GB RAM • Dual PCI-X bus • 2 x Gigabit Ethernet • SATA RAID controller • 6 x SATA disks • 1 x SATA system disk • OS: • Scientific Linux 3.0.5 with kernel 2.4.21

  7. Disk Testing Results

  8. Network Testing • Tests were performed with default kernel and application settings, and then repeated after applying changes to maximise network speeds • The bandwidth-delay product (BDP) for this link should be: • BDP = Bandwidth (MB/s) * RTT (seconds) • BDP = (1 * 1024 / 8 MB/s) * (0.3 / 1000 s), i.e. a 1 Gbit/s link with a 0.3 ms RTT • BDP = 0.0384 MB (39.32 KB)
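
  The same arithmetic as a self-contained C++ check; the figures are the ones from this slide, nothing else is assumed.

    #include <iostream>

    int main()
    {
        // 1 Gbit/s link expressed in MB/s: 1 * 1024 / 8 = 128 MB/s.
        const double bandwidth_mb_per_s = 1.0 * 1024.0 / 8.0;
        // 0.3 ms round-trip time expressed in seconds.
        const double rtt_s = 0.3 / 1000.0;

        const double bdp_mb = bandwidth_mb_per_s * rtt_s;  // 0.0384 MB
        std::cout << "BDP = " << bdp_mb << " MB ("
                  << bdp_mb * 1024.0 << " KB)\n";          // ~39.32 KB
        return 0;
    }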

  9. Basic Network Test results

  10. File Transfer Test Results

  11. What next? • The basic network tests and the file transfer tests need to be repeated once the UKLight link between Lancaster and Edinburgh is fully functional • Integration of UDT into a functional GridFTP server and client • Deployment of the modified software to test LCG sites

  12. Lancaster <-> RAL Link • T1-T2 transfer testing • Avoids bottlenecks induced by the production network: • Firewall @ RAL • Internal LAN traffic • Tested using: • Command-line srmcp in a shell script (sketched below) • FTS-controlled transfers
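
  A rough sketch of the shell-script approach, transposed to C++ to keep one language throughout: loop over files, firing one command-line srmcp per file. The SRM endpoints and file names are hypothetical placeholders, not the actual Lancaster/RAL URLs.

    #include <cstdlib>
    #include <sstream>
    #include <string>

    int main()
    {
        // Hypothetical SRM endpoints; the real tests used the
        // Lancaster and RAL storage elements.
        const std::string src = "srm://se.example-lancs.ac.uk:8443/pnfs/data/file";
        const std::string dst = "srm://se.example-ral.ac.uk:8443/pnfs/data/file";

        // One srmcp invocation per file, backgrounded so the
        // transfers run in parallel, as a shell script would do.
        for (int i = 0; i < 10; ++i) {
            std::ostringstream cmd;
            cmd << "srmcp " << src << i << " " << dst << i << " &";
            std::system(cmd.str().c_str());
        }
        return 0;
    }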

  13. Achieved • Peak of 948 Mbps • Transferred: • 8 TB in 24 hours - 800+ Mbps aggregate rate • 36 TB in 1 week - 500+ Mbps aggregate rate • Over 800 Mbps while transfers ran, but 0 Mbps during downtimes remained a problem • Parallel file transfers increase the rate • Better utilisation of bandwidth • Staggered initialisation of transfers (sketched below) reduces the overhead from starting and stopping individual transfers, raising the rate from 150 Mbps to 900 Mbps • 2% (18 Mbps) reverse traffic flow for a 900 Mbps transfer
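
  An illustrative sketch of the staggering idea, not the actual test harness: start each parallel transfer a fixed delay after the previous one, so the slow start-up and tear-down phases of individual transfers overlap with the steady state of the others. The worker body, transfer count, and delay are assumed values.

    #include <chrono>
    #include <thread>
    #include <vector>

    // Stand-in for one file transfer; in the real tests this was an
    // srmcp invocation rather than a function call.
    void run_transfer(int id)
    {
        (void)id;  // ... perform transfer number 'id' ...
    }

    int main()
    {
        const int n_transfers = 8;                     // assumed
        const auto stagger = std::chrono::seconds(30); // assumed delay

        std::vector<std::thread> workers;
        for (int i = 0; i < n_transfers; ++i) {
            workers.emplace_back(run_transfer, i);
            // Delay the next start so individual ramp-up/tear-down
            // phases are spread out instead of coinciding.
            std::this_thread::sleep_for(stagger);
        }
        for (auto& w : workers) w.join();
        return 0;
    }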

  14. FTS transfers not yet as successful as srmcp-only transfers • Greater overheads? • More optimisation needed • A single FTS file transfer gives 150 Mbps • Same as srmcp • Concurrent FTS file transfers scale at a lower rate than srmcp • All single-stream transfers • FTS tests so far used a single source file • srmcp was used with multiple source files • Rate varies depending on direction • Possibly explained by differences in dCache setup: • VO dependency • Kernel settings • Disk I/O limitations • SRM pool load balancing • To be investigated • File size affects transfer rate • Single-stream rate rises from 150 to 180 Mbps as file size increases from 1 to 10 GB

  15. Lancaster <-> SARA Link • Link not yet active • Tests similar to the Lancaster-RAL and Lancaster-Edinburgh tests • Bulk file transfers • UDT protocol testing • Study of the effect of international/extended link length • SARA storage capacity is underused; RAL capacity is currently too small for UK simulation storage • Also, SARA to test the ATLAS Tier1 fallback scenario (FTS catalogues etc.) • Are we capable of connecting to an alternate Tier1?

  16. Lancaster <-> Manchester Link • Intra-Tier2 site testing • “Homogeneous Distributed Tier2” • dCache head node at Lancaster, pool nodes at both Lancaster and Manchester • Test transfers to/from RAL • Test of job submission to close CE/WNs • Possible testing of xrootd within dCache

  17. www.eslea.uklight.ac.uk • Connecting to UKLight • Documents
