
The DataTAG Project, Internet2 Spring Member Meeting, 8 April 2003, Arlington, USA


Presentation Transcript


  1. The DataTAG Project Internet2 Spring Member meeting 8 April 2003, Arlington, USA Olivier H. Martin / CERN http://www.datatag.org

  2. DataTAG Mission: TransAtlantic Grid • EU-US Grid network research • High-performance transport protocols • Inter-domain QoS • Advance bandwidth reservation • EU-US Grid interoperability • Sister project to EU DataGrid

  3. Funding agencies / Cooperating networks

  4. EU collaborators: Brunel University, CERN, CLRC, CNAF, DANTE, INFN, INRIA, NIKHEF, PPARC, UvA, University of Manchester, University of Padova, University of Milano, University of Torino, UCL

  5. US collaborators: ANL, Caltech, Fermilab, FSU, Globus, Indiana, Wisconsin, Northwestern University, UIC, University of Chicago, University of Michigan, SLAC, StarLight

  6. Project information • Two-year project, started 1 January 2002 • Extension until 1Q04 under consideration • Budget: 3.9 M EUR • 50% circuit cost and hardware, the remainder manpower • WP1: Establishment of a high-performance intercontinental Grid testbed (CERN) • WP2: High-performance networking (PPARC) • WP3: Bulk data transfer validation and application performance monitoring (UvA) • WP4: Interoperability between Grid domains (INFN) • WP5 & WP6: Dissemination and project management (CERN)

  7. Interoperability framework (diagram): EU part (DataGrid, DataTAG WP4) and US part (GriPhyN, PPDG, iVDGL), coordinated through HICB and the GLUE effort; the HEP experiments and LCG drive LCG middleware selection.

  8. The WorldGRID transatlantic testbed: a successful example of Grid interoperability across EU and US domains. Flavia Donno (former DataTAG WP4, LCG), Flavia.Donno@cern.ch, http://chep03.ucsd.edu/files/249.ppt (presented at CHEP 2003, 24-28 March)

  9. Solutions (diagram: UI, VDT Client, RC, SE, RB, IS, CE, VDT Server) • Different Grid architectures (VDT server/client vs. Computing Elements, Storage Elements, User Interfaces, Resource Broker, Replica Catalog, …)

  10. Final architecture (diagram): UI, VDT Client, RC, SE, RB, IS, CE, VDT Server

  11. DataTAG testbed status

  12. Evolution of the testbed • 2.5G circuit in operation since 20 August 2002 • On request from the partners, the testbed evolved from a simple layer 3 testbed into an extremely rich, probably unique, multi-vendor layer 2 & layer 3 testbed • Alcatel, Cisco, Juniper • Direct extensions to Amsterdam (UvA)/SURFnet (10G) & Lyon (INRIA)/VTHD (2.5G) • Layer 2 VPN extension to INFN/CNAF over GEANT & GARR using Juniper's MPLS • To guarantee exclusive access to the testbed, a reservation application has been developed (see the sketch below) • It proved to be essential
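
The slides do not describe the reservation application itself; below is a minimal, hypothetical sketch (Python, all names invented for illustration) of the core check such a tool must make: never grant two overlapping slots on the shared circuit.

    from datetime import datetime

    def overlaps(a_start, a_end, b_start, b_end):
        # Half-open intervals [start, end) overlap iff each begins before the other ends
        return a_start < b_end and b_start < a_end

    def try_reserve(bookings, user, start, end):
        # Grant the slot only if it collides with no existing booking
        for b in bookings:
            if overlaps(start, end, b["start"], b["end"]):
                return False  # testbed already reserved: exclusive access not possible
        bookings.append({"user": user, "start": start, "end": end})
        return True

    bookings = []
    assert try_reserve(bookings, "alice", datetime(2003, 3, 1, 8), datetime(2003, 3, 1, 12))
    assert not try_reserve(bookings, "bob", datetime(2003, 3, 1, 10), datetime(2003, 3, 1, 14))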

  13. DataTAG connectivity (diagram): major 2.5/10 Gbps circuits between Europe & USA. European side: SuperJANET4 (UK), SURFnet (NL), ATRIUM/VTHD and INRIA (FR), GARR-B (IT), reached via GEANT. Transatlantic: CERN to StarLight at 2.5G (upgrading to 10G), plus 3*2.5G via New York. US side: Abilene, ESnet, MREN, STAR-TAP. A layer 2 VPN runs over this infrastructure to INFN.

  14. Multi-vendor layer 2/3 testbed (diagram): CERN (Geneva), StarLight (Chicago), INFN (Bologna) and INRIA (Lyon), with peerings to Abilene, Canarie, ESnet, GEANT and SURFnet; 2.5 and 10 Gbps links form a wave triangle; equipment includes Juniper and Cisco routers, Alcatel gear, an Extreme Summit5i switch, GbE attachments, and M = A1670 (layer 2 over SDH mux).

  15. Phase I (iGRID2002): layer 2

  16. Phase II: generic layer 3 configuration (Oct. 2002 to Feb. 2003) (diagram): servers on GigE switches at CERN and StarLight, a Cisco 7606 router at each end of the 2.5 Gbps circuit.

  17. Phase III: layer 2/3 (March 2003) (diagram): the CERN end shown by layer (layers 1/2/3): servers on a GigE switch, Alcatel 7770 and Cisco 7606 routers (2*GigE, 8*GigE), Juniper M10, Cisco ONS 15454 and an A1670 multiplexer feeding the 2.5G/10G circuit to StarLight; extensions to INRIA via VTHD, to UvA, GEANT and INFN/CNAF via GARR; the StarLight end is similar ("ditto"), peering with Abilene, ESnet and Canarie.

  18. Photos

  19. Photos

  20. Photos

  21. Phase IV (September 2003?) (diagram): same layout as Phase III, upgraded to 10G: a 10GigE switch for the CERN servers, 10GigE router interfaces and a 10G circuit to StarLight; extensions to INRIA (VTHD), UvA, GEANT, GARR/INFN-CNAF, Abilene, ESnet and Canarie as before.

  22. Main achievements • GLUE interoperability effort with DataGrid, iVDGL & Globus • GLUE testbed & demos • VOMS design and implementation in collaboration with DataGrid • VOMS evaluation within iVDGL underway • Integration of GLUE-compliant components in DataGrid and VDT middleware • Internet land speed records have been broken one after another by DataTAG project members and/or teams closely associated with DataTAG: • ATLAS Canada lightpath experiment (iGRID2002) • New Internet2 Land Speed Record (I2 LSR) by NIKHEF/Caltech team (SC2002) • Scalable TCP, HSTCP, GridDT & FAST experiments (DataTAG partners & Caltech; see the sketch after this list) • Intel 10GigE tests between CERN (Geneva) and SLAC (Sunnyvale) (Caltech, CERN, Los Alamos National Laboratory, SLAC) • 2.38 Gbps sustained rate, single TCP/IP flow, 1 TB in one hour (S. Ravot/Caltech)
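
For context on the Scalable TCP entry above: standard TCP grows its congestion window by about one segment per round trip and halves it on loss, whereas Scalable TCP (Kelly's proposal) adds a fixed 0.01 segments per ACK and backs off by only 1/8, which makes recovery time independent of the window size. A rough sketch of the two per-event update rules (an illustration of the published constants, not DataTAG code):

    def standard_tcp(cwnd, loss=False):
        # AIMD: ~1/cwnd per ACK (one segment per RTT), halve on loss
        return cwnd / 2.0 if loss else cwnd + 1.0 / cwnd

    def scalable_tcp(cwnd, loss=False):
        # Scalable TCP: +0.01 segments per ACK, cwnd *= 0.875 on loss
        return cwnd * 0.875 if loss else cwnd + 0.01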

  23. 10GigE data transfer trial On Feb. 27-28, a terabyte of data was transferred in 3700 seconds by S. Ravot of Caltech between the Level3 PoP in Sunnyvale, near SLAC, and CERN, through the TeraGrid router at StarLight, from memory to memory, with a single TCP/IP stream. This achievement translates to an average rate of 2.38 Gbps (using large windows and 9 KB "jumbo frames"). It beat the former record by a factor of ~2.5 and used the US-CERN link at 99% efficiency.
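
The quoted numbers are consistent if "a terabyte" is read as 2^40 bytes; the quick check below also estimates the TCP window the flow needed (the 180 ms round-trip time is an assumed, typical figure for the CERN-Sunnyvale path, not one given on the slide):

    tb_bits = 2**40 * 8            # one terabyte (2**40 bytes) in bits
    rate_bps = tb_bits / 3700      # 3700 s transfer time from the slide
    print(rate_bps / 1e9)          # -> ~2.38 (Gbps), matching the quoted average

    rtt = 0.180                    # assumed round-trip time, in seconds
    bdp_bytes = rate_bps * rtt / 8 # bandwidth-delay product
    print(bdp_bytes / 2**20)       # -> ~51 (MB): hence the need for "large windows"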

  24. Conclusions • TCP/IP performance issues in long-distance, high-speed networks have been known for many years. • What is new, however, is the widespread availability of 10 Gbps A&R backbones as well as the emergence of 10GigE technology. • Thus, the awareness that the problem requires quick resolution has been growing rapidly during the last 2 years, hence the flurry of proposals: • HSTCP, Scalable TCP, FAST, Grid DT, XCP, … • Hard to predict which one will win, but simplicity and ease of deployment are definitely key to success! (A back-of-the-envelope illustration of the problem follows.)
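
To make the problem concrete: after a single loss, standard TCP halves its window and regains only one segment per round trip, so recovery time grows with the bandwidth-delay product. A back-of-the-envelope calculation with illustrative numbers (10 Gbps, 180 ms RTT, 1500-byte segments; these are assumptions, not figures from the slides):

    rate_bps = 10e9               # 10 Gbps path
    rtt = 0.180                   # 180 ms round-trip time (illustrative)
    seg_bits = 1500 * 8           # standard 1500-byte segment

    cwnd = rate_bps * rtt / seg_bits   # ~150,000 segments needed to fill the pipe
    rtts = cwnd / 2                    # RTTs to recover after the window is halved
    print(rtts * rtt / 3600)           # -> ~3.75 hours back to full speed

One loss in several hours is optimistic on any real path, which is precisely why HSTCP, Scalable TCP, FAST and the other schemes listed above were being proposed.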
