
Networks for HENP and ICFA SCIC



Presentation Transcript


  1. Networks for HENP and ICFA SCIC
  Harvey B. Newman, California Institute of Technology
  APAN High Energy Physics Workshop, January 21, 2003

  2. Next Generation Networks for Experiments: Goals and Needs
  • Providing rapid access to event samples, subsets and analyzed physics results from massive data stores: from Petabytes by 2002, ~100 Petabytes by 2007, to ~1 Exabyte by ~2012
  • Providing analyzed results with rapid turnaround, by coordinating and managing the large but LIMITED computing, data handling and NETWORK resources effectively
  • Enabling rapid access to the data and the collaboration, across an ensemble of networks of varying capability
  • Advanced integrated applications, such as Data Grids, rely on seamless operation of our LANs and WANs, with reliable, monitored, quantifiable high performance
  Large data samples explored and analyzed by thousands of globally dispersed scientists, in hundreds of teams

  3. Four LHC Experiments: The Petabyte to Exabyte Challenge
  • ATLAS, CMS, ALICE, LHCb: Higgs + new particles; Quark-Gluon Plasma; CP Violation
  • Data stored: ~40 Petabytes/Year and UP; CPU: 0.30 Petaflops and UP
  • 0.1 Exabyte (2007) to ~1 Exabyte (~2012?) for the LHC Experiments (1 EB = 10^18 Bytes)
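
As a back-of-envelope cross-check (my own arithmetic, not from the slide): sustaining ~40 PB/year corresponds to an average line rate of roughly 10 Gbps, which is why multi-Gbps links dominate the slides that follow. A minimal sketch:

```python
# Back-of-envelope check (assumes data moved evenly over a year;
# 1 PB = 1e15 bytes). Illustrative only, not from the original slides.
SECONDS_PER_YEAR = 365 * 24 * 3600  # ~3.15e7 s

def avg_rate_gbps(pb_per_year: float) -> float:
    """Average line rate in Gbps needed to move a yearly data volume."""
    return pb_per_year * 1e15 * 8 / SECONDS_PER_YEAR / 1e9

print(f"{avg_rate_gbps(40):.1f} Gbps")  # ~10.1 Gbps for 40 PB/year
```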

  4. LHC Data Grid Hierarchy
  [Diagram: tiered architecture from the experiment at CERN down to desktop workstations]
  • CERN/Outside Resource Ratio ~1:2; Tier0/(ΣTier1)/(ΣTier2) ~1:1:1
  • Experiment Online System: ~PByte/sec; Online System to CERN Tier 0+1: ~100-400 MBytes/sec (700k SI95, ~1 PB Disk, Tape Robot, HPSS)
  • Tier 0+1 to Tier 1 centers (FNAL, IN2P3, INFN, RAL; each with HPSS): ~2.5 Gbps
  • Tier 1 to Tier 2 centers: ~2.5 Gbps
  • Tier 2 to Tier 3 (Institutes, ~0.25 TIPS): physics data cache at 0.1 to 10 Gbps
  • Tier 4: Workstations
  • Tens of Petabytes by 2007-8; an Exabyte within ~5 years later
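
A minimal sketch of the hierarchy as a chain of links (hypothetical code; the capacities are the nominal per-hop figures from the diagram above, and the desktop hop is my assumption). The point it illustrates: an end-to-end Tier0-to-desktop transfer is bounded by the slowest hop, typically the institute's cache link.

```python
# Hypothetical encoding of the slide's tier hierarchy; capacities in Gbps
# are the nominal per-hop figures from the diagram above.
LINKS = [
    ("Tier 0+1 (CERN)", "Tier 1 (FNAL/IN2P3/INFN/RAL)", 2.5),
    ("Tier 1", "Tier 2 center", 2.5),
    ("Tier 2 center", "Tier 3 institute", 0.1),   # cache link: 0.1-10 Gbps
    ("Tier 3 institute", "Tier 4 workstation", 1.0),  # assumed GbE desktop
]

def bottleneck_gbps(links) -> float:
    """An end-to-end transfer is limited by the slowest hop on the path."""
    return min(capacity for _, _, capacity in links)

print(f"Tier0 -> Tier4 bottleneck: {bottleneck_gbps(LINKS)} Gbps")
```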

  5. ICFA and Global Networks for HENP
  • National and International Networks, with sufficient (rapidly increasing) capacity and capability, are essential for:
  • The daily conduct of collaborative work in both experiment and theory
  • Detector development & construction on a global scale; data analysis involving physicists from all world regions
  • The formation of worldwide collaborations
  • The conception, design and implementation of next generation facilities as “global networks”
  • “Collaborations on this scale would never have been attempted, if they could not rely on excellent networks”

  6. ICFA and International Networking
  • ICFA Statement on Communications in Int’l HEP Collaborations of October 17, 1996; see http://www.fnal.gov/directorate/icfa/icfa_communicaes.html
  • “ICFA urges that all countries and institutions wishing to participate even more effectively and fully in international HEP Collaborations should:
  • Review their operating methods to ensure they are fully adapted to remote participation
  • Strive to provide the necessary communications facilities and adequate international bandwidth”

  7. ICFA Network Task Force: 1998 Bandwidth Requirements Projection (Mbps)
  [Table of projected bandwidth requirements (Mbps) not reproduced in this transcript]
  • NTF: 100–1000 X bandwidth increase foreseen for 1998-2005
  • See the ICFA-NTF Requirements Report: http://l3www.cern.ch/~newman/icfareq98.html

  8. ICFA Standing Committee on Interregional Connectivity (SCIC)
  • Created by ICFA in July 1998 in Vancouver, following ICFA-NTF
  • CHARGE: Make recommendations to ICFA concerning the connectivity between the Americas, Asia and Europe (and network requirements of HENP)
  • As part of the process of developing these recommendations, the committee should:
  • Monitor traffic
  • Keep track of technology developments
  • Periodically review forecasts of future bandwidth needs, and
  • Provide early warning of potential problems
  • Create subcommittees when necessary to meet the charge
  • The chair of the committee should report to ICFA once per year, at its joint meeting with laboratory directors (Feb. 2003)
  • Representatives: Major labs, ECFA, ACFA, NA Users, S. America

  9. ICFA-SCIC Core Membership
  • Representatives from major HEP laboratories: W. Von Reuden (CERN), Volker Guelzow (DESY), Vicky White (FNAL), Yukio Karita (KEK), Richard Mount (SLAC)
  • User Representatives: Richard Hughes-Jones (UK), Harvey Newman (USA), Dean Karlen (Canada)
  • For Russia: Slava Ilyin (MSU)
  • ECFA representatives: Denis Linglin (IN2P3, Lyon), Frederico Ruggieri (INFN Frascati)
  • ACFA representatives: Rongsheng Xu (IHEP Beijing), H. Park, D. Son (Kyungpook Nat’l University)
  • For South America: Sergio F. Novaes (University of Sao Paulo)

  10. SCIC Sub-Committees
  Web Page: http://cern.ch/ICFA-SCIC/
  • Monitoring: Les Cottrell (http://www.slac.stanford.edu/xorg/icfa/scic-netmon), with Richard Hughes-Jones (Manchester), Sergio Novaes (Sao Paulo), Sergei Berezhnev (RUHEP), Fukuko Yuasa (KEK), Daniel Davids (CERN), Sylvain Ravot (Caltech), Shawn McKee (Michigan)
  • Advanced Technologies: Richard Hughes-Jones, with Vladimir Korenkov (JINR, Dubna), Olivier Martin (CERN), Harvey Newman
  • The Digital Divide: Alberto Santoro (Rio, Brazil), with Slava Ilyin, Yukio Karita, David O. Williams; also Dongchul Son (Korea), Hafeez Hoorani (Pakistan), Sunanda Banerjee (India), Vicky White (FNAL)
  • Key Requirements: Harvey Newman; also Charlie Young (SLAC)

  11. Transatlantic Net WG (HN, L. Price): Bandwidth Requirements [*]
  [Table of projected transatlantic bandwidth requirements not reproduced in this transcript]
  [*] BW Requirements Increasing Faster Than Moore’s Law; see http://gate.hep.anl.gov/lprice/TAN
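
A short worked comparison (my arithmetic, combining the NTF projection on slide 7 with the common "2x every 18 months" statement of Moore's Law) backs up the footnote:

```python
# NTF projection: 100-1000x bandwidth growth over the 7 years 1998-2005,
# versus Moore's law taken as 2x every 18 months (one common statement).
bw_low = 100 ** (1 / 7)    # ~1.93x per year
bw_high = 1000 ** (1 / 7)  # ~2.68x per year
moore = 2 ** (12 / 18)     # ~1.59x per year

print(f"bandwidth: {bw_low:.2f}-{bw_high:.2f}x/yr vs Moore: {moore:.2f}x/yr")
```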

  12. History – One Large Research Site
  • Much of the traffic: SLAC ⇔ IN2P3/RAL/INFN, via ESnet + France; Abilene + CERN
  • Current traffic ~400 Mbps; ESnet limitation
  • Projections: 0.5 to 24 Tbps by ~2012

  13. Tier0-Tier1 Link Requirements Estimate: for Hoffmann Report 2001
  • Tier1 ⇔ Tier0 Data Flow for Analysis: 0.5 - 1.0 Gbps
  • Tier2 ⇔ Tier0 Data Flow for Analysis: 0.2 - 0.5 Gbps
  • Interactive Collaborative Sessions (30 Peak): 0.1 - 0.3 Gbps
  • Remote Interactive Sessions (30 Flows Peak): 0.1 - 0.2 Gbps
  • Individual (Tier3 or Tier4) data transfers (limit to 10 flows of 5 Mbytes/sec each): 0.8 Gbps
  • TOTAL Per Tier0 - Tier1 Link: 1.7 - 2.8 Gbps
  NOTE:
  • Adopted by the LHC Experiments; given in the upcoming Hoffmann Steering Committee Report as “1.5 - 3 Gbps per experiment”
  • Corresponds to ~10 Gbps Baseline BW Installed on the US-CERN Link
  • The Hoffmann Panel also discussed the effects of higher bandwidths, for example all-optical 10 Gbps Ethernet across WANs
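
The quoted TOTAL is the sum of the component ranges; a quick check of the table's arithmetic (a sketch, using the low and high end of each row above):

```python
# (low, high) ends of each per-flow estimate above, in Gbps.
flows = {
    "Tier1 <-> Tier0 analysis":         (0.5, 1.0),
    "Tier2 <-> Tier0 analysis":         (0.2, 0.5),
    "interactive collaboration (30)":   (0.1, 0.3),
    "remote interactive (30 flows)":    (0.1, 0.2),
    "Tier3/Tier4 transfers (10 flows)": (0.8, 0.8),
}

low = sum(lo for lo, _ in flows.values())
high = sum(hi for _, hi in flows.values())
print(f"TOTAL per Tier0-Tier1 link: {low:.1f} - {high:.1f} Gbps")  # 1.7 - 2.8
```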

  14. Tier0-Tier1 BW Requirements Estimate: for Hoffmann Report 2001
  • Does Not Include the more recent ATLAS data estimates:
  • 270 Hz at 10^33 instead of 100 Hz
  • 400 Hz at 10^34 instead of 100 Hz
  • 2 MB/Event instead of 1 MB/Event
  • Does Not Allow fast download to Tier3+4 of “small” object collections
  • Example: download 10^7 events of AODs (10^4 Bytes each) ⇒ 100 GBytes; at 5 Mbytes/sec per person (above), that’s 6 hours!
  • This is still a rough, bottom-up, static, and hence conservative model
  • A dynamic distributed DB or “Grid” system with caching, co-scheduling, and pre-emptive data movement may well require greater bandwidth
  • Does Not Include “Virtual Data” operations: derived data copies; data-description overheads
  • Further MONARC Computing Model studies are needed
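
The 6-hour figure and the ATLAS scale-up can both be reproduced directly; a worked check of the slide's arithmetic:

```python
# Download 1e7 AOD events of 1e4 bytes each, at 5 MBytes/sec per person.
volume_bytes = 1e7 * 1e4            # = 1e11 bytes = 100 GB
rate_bytes_per_s = 5e6              # 5 MBytes/sec
hours = volume_bytes / rate_bytes_per_s / 3600
print(f"{volume_bytes / 1e9:.0f} GB at 5 MB/s -> {hours:.1f} h")  # ~5.6, i.e. ~6

# ATLAS revision at L = 10^33: 270 Hz (not 100 Hz) at 2 MB/event
# (not 1 MB/event) multiplies the raw data volume by:
print(f"ATLAS scale-up: {(270 / 100) * (2 / 1):.1f}x")  # 5.4x
```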

  15. ICFA SCIC Meetings [*] and Topics
  • Focus on the Digital Divide This Year: identification of problem areas; work on ways to improve
  • Network Status and Upgrade Plans in Each Country
  • Performance (Throughput) Evolution in Each Country, and Transatlantic
  • Performance Monitoring World-Overview (Les Cottrell, IEPM Project)
  • Specific Technical Topics (Examples): bulk transfer, new protocols; collaborative systems, VoIP
  • Preparation of Reports to ICFA (Lab Directors’ Meetings)
  • Last Report: World Network Status and Outlook - Feb. 2002
  • Next Report: Digital Divide, + Monitoring, Advanced Technologies; Requirements Evolution - Feb. 2003
  [*] Seven meetings in 2002; at KEK on December 13

  16. Network Progress in 2002 and Issues for Major Experiments
  • Backbones & major links advancing rapidly to the 10 Gbps range
  • “Gbps” end-to-end throughput data flows have been tested; will be in production soon (in 12 to 18 months)
  • Transition to multi-wavelengths in 1-3 yrs. in the “most favored” regions
  • Network advances are changing the view of the net’s roles; likely to have a profound impact on the experiments’ Computing Models, and bandwidth requirements
  • More dynamic view: GByte to TByte data transactions; dynamic path provisioning
  • Net R&D driven by advanced integrated applications, such as Data Grids, that rely on seamless LAN and WAN operation, with reliable, quantifiable (monitored), high performance
  • All of the above will further open the Digital Divide chasm. We need to take action
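
To make "GByte to TByte data transactions" concrete, an illustrative calculation (assuming the full line rate is achieved end to end, which was optimistic for WAN TCP in 2003):

```python
def transfer_minutes(size_bytes: float, gbps: float) -> float:
    """Idealized transfer time at the full line rate, in minutes."""
    return size_bytes * 8 / (gbps * 1e9) / 60

for size_bytes, label in [(1e9, "1 GB"), (1e12, "1 TB")]:
    for gbps in (1, 10):
        print(f"{label} at {gbps:>2} Gbps: "
              f"{transfer_minutes(size_bytes, gbps):6.1f} min")
# 1 TB takes ~133 min at 1 Gbps, but ~13 min at 10 Gbps
```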

  17. ICFA SCIC: R&E Backbone and International Link Progress
  • GEANT Pan-European Backbone (http://www.dante.net/geant): now interconnects >31 countries; many trunks 2.5 and 10 Gbps
  • UK: SuperJANET Core at 10 Gbps; 2.5 Gbps NY-London, with 622 Mbps to ESnet and Abilene
  • France (IN2P3): 2.5 Gbps RENATER backbone from October 2002; Lyon-CERN link upgraded to 1 Gbps Ethernet; proposal for dark fiber to CERN by end 2003
  • SuperSINET (Japan): 10 Gbps IP and 10 Gbps Wavelength Core; Tokyo to NY links: 2 X 2.5 Gbps started; peer with ESnet by Feb.
  • CA*net4 (Canada): interconnect customer-owned dark fiber nets across Canada at 10 Gbps, started July 2002; “Lambda-Grids” by ~2004-5
  • GWIN (Germany): 2.5 Gbps core; connect to US at 2 X 2.5 Gbps; support for SILK Project: satellite links to FSU Republics
  • Russia: 155 Mbps links to Moscow (typ. 30-45 Mbps for science); Moscow-Starlight link to 155 Mbps (US NSF + Russia support); Moscow-GEANT and Moscow-Stockholm links 155 Mbps

  18. R&E Backbone and Int’l Link Progress
  • Abilene (Internet2): upgrade from 2.5 to 10 Gbps in 2002; encourage high throughput use for targeted applications; FAST
  • ESnet: upgrade to 10 Gbps “as soon as possible”
  • US-CERN: to 622 Mbps in August; move to STARLIGHT; 2.5G research triangle from 8/02 (STARLIGHT-CERN-NL); to 10G in 2003 [10 Gbps SNV-STARLIGHT link loan from Level(3)]
  • SLAC + IN2P3 (BaBar): typically ~400 Mbps throughput on US-CERN, Renater links; 600 Mbps throughput is the BaBar target for early 2003 (with ESnet and upgrade)
  • FNAL: ESnet link upgraded to 622 Mbps; plans for dark fiber to STARLIGHT proceeding
  • NY-Amsterdam donation from Tyco, September 2002, arranged by IEEAF: 622 Mbps + 10 Gbps research wavelength
  • US National Light Rail proceeding; startup expected this year

  19. 2.5 10 Gbps Backbone > 200 Primary ParticipantsAll 50 States, D.C. and Puerto Rico75 Partner Corporations and Non-Profits23 State Research and Education Nets 15 “GigaPoPs” Support 70% of Members

  20. 2003: OC192 and OC48 Links Coming Into Service; Need to Consider Links to US HENP Labs

  21. National R&E Network Example, Germany: DFN Transatlantic Connectivity 2002
  [Map labels: STM 16, STM 4, STM 16 links]
  • 2 X OC48: NY-Hamburg and NY-Frankfurt
  • Direct peering to Abilene (US) and CANARIE (Canada)
  • UCAID said to be adding another 2 OC48’s, in a proposed Global Terabit Research Network (GTRN)
  • Virtual SILK Highway Project (from 11/01): NATO ($2.5 M) and Partners ($1.1 M) [*]
  • Satellite links to South Caucasus and Central Asia (8 countries)
  • In 2001-2 (pre-SILK) BW was 64-512 kbps
  • Proposed VSAT to get 10-50 X BW for the same cost; see www.silkproject.org
  [*] Partners: CISCO, DESY, GEANT, UNDP, US State Dept., World Bank, UC London, Univ. Groningen

  22. National Research Networks in Japan: SuperSINET
  • Started operation January 4, 2002
  • Support for 5 important areas: HEP, Genetics, Nano-Technology, Space/Astronomy, GRIDs
  • Provides 10 λ’s: 10 Gbps IP connection; direct intersite GbE links; 9 universities connected
  • January 2003: two TransPacific 2.5 Gbps wavelengths (to NY); Japan-US-CERN Grid testbed soon
  [Network map labels: Tohoku U, KEK, NII Chiba, NIFS, Nagoya U, NIG, Osaka U, Kyoto U, NII Hitot., ICR Kyoto-U, ISAS, U Tokyo, IMS, NAO, U-Tokyo; OXC, WDM path, IP router, Internet]

  23. SuperSINET Updated Map: October 2002

  24. APAN Links in Southeast Asia, January 15, 2003
