
Nuclear Physics Network Requirements Workshop Washington, DC


Presentation Transcript


  1. Nuclear Physics Network Requirements Workshop Washington, DC Eli Dart, Network Engineer ESnet Network Engineering Group Energy Sciences Network Lawrence Berkeley National Laboratory May 6, 2008 Networking for the Future of Science

  2. Overview • Logistics • Network Requirements • Sources, Workshop context • Case Study Example • Large Hadron Collider • Today’s Workshop • Structure and Goals

  3. Logistics • Mid-morning break, lunch, afternoon break • Self-organization for dinner • Agenda on workshop web page • http://workshops.es.net/2008/np-net-req/ • Round-table introductions

  4. Network Requirements • Requirements are primary drivers for ESnet – science focused • Sources of Requirements • Office of Science (SC) Program Managers • Direct gathering through interaction with science users of the network • Examples of recent case studies • Climate Modeling • Large Hadron Collider (LHC) • Spallation Neutron Source at ORNL • Observation of the network • Other Sources (e.g. Laboratory CIOs)

  5. Program Office Network Requirements Workshops • Two workshops per year • One workshop per program office every 3 years • Workshop Goals • Accurately characterize current and future network requirements for Program Office science portfolio • Collect network requirements from scientists and Program Office • Workshop structure • Modeled after the 2002 High Performance Network Planning Workshop conducted by the DOE Office of Science • Elicit information from managers, scientists and network users regarding usage patterns, science process, instruments and facilities – codify in “Case Studies” • Synthesize network requirements from the Case Studies

  6. Large Hadron Collider at CERN

  7. LHC Requirements – Instruments and Facilities • Large Hadron Collider at CERN • Networking requirements of two experiments have been characterized – CMS and Atlas • Petabytes of data per year to be distributed • LHC networking and data volume requirements are unique to date • First in a series of DOE science projects with requirements of unprecedented scale • Driving ESnet’s near-term bandwidth and architecture requirements • These requirements are shared by other very-large-scale projects that are coming on line soon (e.g. ITER) • Tiered data distribution model • Tier0 center at CERN processes raw data into event data • Tier1 centers receive event data from CERN • FNAL is CMS Tier1 center for US • BNL is Atlas Tier1 center for US • CERN to US Tier1 data rates: 10 Gbps in 2007, 30-40 Gbps by 2010/11 • Tier2 and Tier3 sites receive data from Tier1 centers • Tier2 and Tier3 sites are end user analysis facilities • Analysis results are sent back to Tier1 and Tier0 centers • Tier2 and Tier3 sites are largely universities in US and Europe
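  A back-of-the-envelope sketch of how the quoted CERN-to-US-Tier1 rates map onto "petabytes of data per year" (the 50% average-utilization figure is an assumption for illustration, not from the slides):

      def petabytes_per_year(rate_gbps, utilization=0.5):
          """Petabytes a link of rate_gbps can move in a year at the given average utilization."""
          seconds_per_year = 365 * 24 * 3600
          bits_moved = rate_gbps * 1e9 * utilization * seconds_per_year
          return bits_moved / 8 / 1e15  # bits -> bytes -> petabytes

      for rate_gbps in (10, 30, 40):  # CERN-to-US-Tier1 rates quoted above (2007 and 2010/11)
          print(f"{rate_gbps:>2} Gbps -> ~{petabytes_per_year(rate_gbps):.0f} PB/year at 50% utilization")

  Even at 10 Gbps a half-utilized link moves on the order of 20 PB/year, which is why the Tier0/Tier1 transfers drive ESnet's near-term bandwidth and architecture planning.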

  8. LHC Requirements – Process of Science • Strictly tiered data distribution model is only part of the picture • Some Tier2 scientists will require data not available from their local Tier1 center • This will generate additional traffic outside the strict tiered data distribution tree • CMS Tier2 sites will fetch data from all Tier1 centers in the general case • Network reliability is critical for the LHC • Data rates are so large that buffering capacity is limited • If an outage is more than a few hours in duration, the analysis could fall permanently behind • Analysis capability is already maximized – little extra headroom • CMS/Atlas require DOE federated trust for credentials and federation with LCG • Service guarantees will play a key role • Traffic isolation for unfriendly data transport protocols • Bandwidth guarantees for deadline scheduling • Several unknowns will require ESnet to be nimble and flexible • Tier1 to Tier1, Tier2 to Tier1, and Tier2 to Tier0 data rates could add significant additional requirements for international bandwidth • Bandwidth will need to be added once requirements are clarified • Drives architectural requirements for scalability, modularity
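  The "more than a few hours" outage threshold follows from simple arithmetic: if the transfer system is already running near capacity, the backlog can only be drained with whatever spare headroom remains. A minimal sketch, assuming roughly 10% spare capacity (the 10 Gbps rate is from the slides; the headroom figure is an assumption):

      def backlog_terabytes(outage_hours, rate_gbps=10):
          """Terabytes of data that accumulate while a rate_gbps transfer is down."""
          return rate_gbps * outage_hours * 3600 / 8 / 1000  # Gb -> GB -> TB

      def recovery_hours(outage_hours, headroom_fraction=0.10):
          """Hours to drain the backlog using only the spare (headroom) capacity."""
          return outage_hours / headroom_fraction

      for outage in (1, 4, 12):
          print(f"{outage:>2} h outage at 10 Gbps -> {backlog_terabytes(outage):.0f} TB backlog, "
                f"~{recovery_hours(outage):.0f} h to catch up with 10% spare capacity")

  With little headroom, each hour of outage takes roughly ten hours to work off, so a long outage risks leaving the analysis permanently behind.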

  9. LHC Ongoing Requirements Gathering Process • ESnet has been an active participant in LHC network planning and operation • Active in the LHC network operations working group since its creation • Jointly organized the US CMS Tier2 networking requirements workshop with Internet2 • Participated in the US Atlas Tier2 networking requirements workshop • Participated in US Tier3 networking workshops

  10. LHC Requirements Identified To Date • 10 Gbps “light paths” from FNAL and BNL to CERN • CERN / USLHCnet will provide 10 Gbps circuits to Starlight, to 32 AoA, NYC (MAN LAN), and between Starlight and NYC • 10 Gbps each in near term, additional lambdas over time (3-4 lambdas each by 2010) • BNL must communicate with TRIUMF in Vancouver • This is an example of Tier1 to Tier1 traffic – 1 Gbps in near term • Circuit is currently up and running • Additional bandwidth requirements between US Tier1s and European Tier2s • Served by USLHCnet circuit between New York and Amsterdam • Reliability • 99.95%+ uptime (small number of hours per year) • Secondary backup paths • Tertiary backup paths – virtual circuits through ESnet, Internet2, and GEANT production networks and possibly GLIF (Global Lambda Integrated Facility) for transatlantic links • Tier2 site connectivity • 1 to 10 Gbps required • Many large Tier2 sites require direct connections to the Tier1 sites – this drives bandwidth and Virtual Circuit deployment (e.g. UCSD) • Ability to add bandwidth as additional requirements are clarified
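  For context, the availability target translates directly into allowed downtime, consistent with the "small number of hours per year" note above; a minimal arithmetic sketch:

      def downtime_hours_per_year(availability):
          """Allowed downtime per year for a given availability target."""
          return (1.0 - availability) * 365 * 24

      for availability in (0.9995, 0.999, 0.99):
          print(f"{availability:.2%} uptime -> {downtime_hours_per_year(availability):.1f} hours/year of downtime")

  At the 99.95% target that is roughly 4.4 hours of total downtime per year, which is why secondary and tertiary backup paths are part of the requirement.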

  11. Identified US Tier2 Sites • Atlas (BNL clients): Boston University, Harvard University, Indiana University Bloomington, Langston University, University of Chicago, University of New Mexico Albuquerque, University of Oklahoma Norman, University of Texas at Arlington, University of Michigan (calibration site) • CMS (FNAL clients): Caltech, MIT, Purdue University, University of California San Diego, University of Florida at Gainesville, University of Nebraska at Lincoln, University of Wisconsin at Madison

  12. LHC ATLAS Bandwidth Matrix as of April 2007

  13. LHC CMS Bandwidth Matrix as of April 2007

  14. Estimated Aggregate Link Loadings, 2007-08 • [Map of the ESnet core topology showing estimated aggregate link loadings and committed bandwidth in Gb/s for 2007-08; the legend distinguishes ESnet IP core, ESnet Science Data Network core, SDN core/NLR links, lab-supplied links, LHC-related links, MAN links, international IP connections, existing site-supplied circuits, and Layer 1 optical nodes. Unlabeled links are 10 Gb/s.]

  15. ESnet4 2007-8 Estimated Bandwidth Commitments • [Map of the planned ESnet4 topology with 2007-8 committed bandwidths in Gb/s, including the CERN/USLHCNet circuits into Starlight and 32 AoA, NYC, the FNAL and ANL connections via the West Chicago MAN, the BNL connection via the Long Island MAN, lab sites on the San Francisco Bay Area MAN (LBNL, NERSC, SLAC, JGI, LLNL, SNLL), and JLab and ODU via ELITE/MATP near Newport News. All circuits are 10 Gb/s; unlabeled links are 10 Gb/s.]

  16. Estimated Aggregate Link Loadings, 2010-11 • [Map of the ESnet core topology showing estimated aggregate link loadings for 2010-11; link capacities and committed bandwidths are labeled in Gb/s, reaching roughly 40-50 Gb/s on the busiest core links. Unlabeled links are 10 Gb/s.]

  17. ESnet4 2010-11 Estimated Bandwidth Commitments • [Map of the planned ESnet4 2010-11 topology with committed bandwidths in Gb/s, including the CERN/USLHCNet capacity into Starlight and 32 AoA, NYC and the FNAL, ANL, and BNL connections; Internet2 circuit numbers are shown in parentheses. Unlabeled links are 10 Gb/s.]

  18. 2008 NP Workshop • Goals • Accurately characterize the current and future network requirements for the NP Program Office’s science portfolio • Codify the requirements in a document • The document will contain the case studies and summary matrices • Structure • Discussion of ESnet4 architecture and deployment • NP science portfolio • Internet2 (I2) perspective • Round-table discussions of case study documents • Ensure that networking folks understand the science process, instruments and facilities, collaborations, etc. outlined in the case studies • Provide opportunity for discussions of synergy, common strategies, etc. • Interactive discussion rather than formal PowerPoint presentations • Collaboration services discussion – Wednesday morning

  19. Questions? • Thanks!
