Energy Sciences Network (ESnet) Futures
Chinese American Networking Symposium, November 30 – December 2, 2004
Joseph Burrescia, Senior Network Engineer
William E. Johnston, ESnet Dept. Head and Senior Scientist


Presentation Transcript


  1. Energy Sciences Network (ESnet) Futures
Chinese American Networking Symposium, November 30 – December 2, 2004
Joseph Burrescia, Senior Network Engineer
William E. Johnston, ESnet Dept. Head and Senior Scientist

  2. What is the Mission of ESnet?
• Enable thousands of DOE, university, and industry scientists and collaborators worldwide to make effective use of unique DOE research facilities and computing resources, independent of time and geographic location
  • Direct connections to all major DOE sites
  • Access to the global Internet (managing 150,000 routes at 10 commercial peering points)
• Provide capabilities not available through commercial networks
  • Architected to move huge amounts of data between a small number of sites
  • High-bandwidth peering to provide access to US, European, Asia-Pacific, and other research and education networks
Objective: Support scientific research by providing seamless and ubiquitous access to the facilities, data, and colleagues

  3. The STAR Collaboration at the Relativistic Heavy Ion Collider (RHIC), Brookhaven National Laboratory
• Brazil: Universidade de Sao Paulo
• China: IHEP – Beijing, IMP – Lanzhou, IPP – Wuhan, USTC, SINR – Shanghai, Tsinghua University
• Great Britain: University of Birmingham
• France: IReS Strasbourg, SUBATECH – Nantes
• Germany: MPI – Munich, University of Frankfurt
• India: IOP – Bhubaneswar, VECC – Kolkata, Panjab University, University of Rajasthan, Jammu University, IIT – Bombay
• Poland: Warsaw University of Technology
• Russia: MEPHI – Moscow, LPP/LHE JINR – Dubna, IHEP – Protvino
• U.S. Laboratories: Argonne, Berkeley, Brookhaven
• U.S. Universities: UC Berkeley, UC Davis, UC Los Angeles, Carnegie Mellon, Creighton University, Indiana University, Kent State University, Michigan State University, City College of New York, Ohio State University, Penn. State University, Purdue University, Rice University, Texas A&M, UT Austin, U. of Washington, Wayne State University, Yale University

  4. Supernova Cosmology Project

  5. CERN / LHC High Energy Physics Data Provides One of Science's Most Challenging Data Management Problems
[Tiered data-distribution diagram, courtesy Harvey Newman, Caltech: the CERN LHC CMS detector (15m x 15m x 22m, 12,500 tons, $700M) produces ~PByte/sec online; event simulation runs at ~100 MBytes/sec; Tier 0+1 event reconstruction at CERN feeds Tier 1 regional centers (German, French, and Italian centers and FermiLab, USA) over ~2.5 Gbits/sec links; Tier 2 centers connect at ~0.6-2.5 Gbps; Tier 3 institutes (~0.25 TIPS) and Tier 4 workstations access physics data caches at 100-1000 Mbits/sec; HPSS mass storage sits at each regional center.]
• 2000 physicists in 31 countries are involved in this 20-year experiment, in which DOE is a major player
• Grid infrastructure spread over the US and Europe coordinates the data analysis
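To give a sense of scale for the link rates in the tier diagram above, here is a small illustrative calculation (not part of the original slides) that converts the slide's sustained 0.6-2.5 Gb/s tier rates into approximate daily data volumes.

```python
# Illustrative arithmetic based on the link rates shown in the tier diagram.
# A sustained 2.5 Gb/s Tier 1 link corresponds to roughly 27 TB per day.

def gbps_to_tb_per_day(rate_gbps: float) -> float:
    """Convert a sustained line rate in Gb/s to TB transferred per day."""
    bytes_per_sec = rate_gbps * 1e9 / 8          # bits/s -> bytes/s
    return bytes_per_sec * 86_400 / 1e12         # bytes/day -> TB/day

for label, rate in [("Tier 1 (2.5 Gb/s)", 2.5),
                    ("Tier 2 low (0.6 Gb/s)", 0.6),
                    ("Tier 2 high (2.5 Gb/s)", 2.5)]:
    print(f"{label}: ~{gbps_to_tb_per_day(rate):.0f} TB/day sustained")
```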

  6. How is this Mission Accomplished?
• ESnet builds a comprehensive IP network infrastructure (routing, IPv4, IP multicast, IPv6) based on commercial circuits
• ESnet purchases telecommunications services ranging from T1 (1.5 Mb/s) to OC192 SONET (10 Gb/s) and uses them to connect core routers and sites to form the ESnet IP network
• ESnet peers at high speeds with domestic and international R&E networks
• ESnet peers with many commercial networks to provide full Internet connectivity

  7. What is ESnet Today?
[Network map: the ESnet core is a packet-over-SONET optical ring with hubs, provided by Qwest, with hubs at Seattle (SEA), Sunnyvale (SNV), El Paso (ELP), Albuquerque (ALB), Chicago (CHI), Atlanta (ATL), New York (NYC), and Washington, DC. It serves 42 end-user sites, including the major DOE laboratories (ANL, BNL, FNAL, LBNL, LLNL, LANL, ORNL, PNNL, SLAC, JLAB, PPPL, NERSC, and others): 22 Office of Science sponsored, 12 NNSA sponsored, 3 jointly sponsored, 6 laboratory sponsored, and other sponsored sites (NSF LIGO, NOAA). Site links range from T1 (1.5 Mb/s) and T3 (45 Mb/s) through OC3 (155 Mb/s), OC12 (622 Mb/s, some ATM), Gigabit Ethernet (1 Gb/s), and OC48 (2.5 Gb/s) up to OC192 (10 Gb/s). High-speed international and R&E peerings include GEANT (Germany, France, Italy, UK, etc.), CERN, SInet (Japan), KDDI (Japan), Japan–Russia (BINP), CA*net4, Taiwan (TANet2 and ASCC), Singaren, Australia, China, Netherlands, Russia StarTap, Abilene, MREN, and StarLight; commercial peering occurs at exchange points such as MAE-East, MAE-West, PAIX, Fix-W, Equinix, and the Chicago and New York NAPs.]

  8. Science Drives the Future Direction of ESnet
• Modern, large-scale science is dependent on networks
  • Data sharing
  • Collaboration
  • Distributed data processing
  • Distributed simulation
  • Data management

  9. Predictive Drivers for the Evolution of ESnet
August 2002 workshop organized by the Office of Science: Mary Anne Scott (Chair), Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White
Workshop panel chairs: Ray Bair, Deb Agarwal, Bill Johnston, Mike Wilde, Rick Stevens, Ian Foster, Dennis Gannon, Linda Winkler, Brian Tierney, Sandy Merola, and Charlie Catlett
• The network and middleware requirements to support DOE science were developed by the OSC science community, representing major DOE science disciplines:
  • Climate simulation
  • Spallation Neutron Source facility
  • Macromolecular crystallography
  • High energy physics experiments
  • Magnetic fusion energy sciences
  • Chemical sciences
  • Bioinformatics
• The network is essential for:
  • long-term (final-stage) data analysis
  • "control loop" data analysis (influencing an experiment in progress)
  • distributed, multidisciplinary simulation
• Available at www.es.net/#research

  10. The Analysis was Driven by the Evolving Process of Science

  11. Evolving Quantitative Science Requirements for Networks

  12. Observed Data Movement in ESnet
• ESnet is currently transporting about 300 terabytes/month (300,000,000 MBytes/month)
• Annual growth over the past five years has been about 2.0x per year
[Chart: ESnet monthly accepted traffic through Sept. 2004, in TBytes/month]
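As a rough illustration of what the observed growth rate implies (this projection is not from the slides), the sketch below compounds the ~300 TB/month Sept. 2004 figure at the stated ~2.0x per year.

```python
# Illustrative projection of ESnet monthly traffic, assuming the observed
# ~2.0x annual growth continues from the ~300 TB/month measured in late 2004.

def projected_traffic_tb(base_tb_per_month: float, growth_factor: float, years: int) -> float:
    """Monthly traffic after `years` of compounding at `growth_factor` per year."""
    return base_tb_per_month * growth_factor ** years

base = 300.0          # TB/month, observed Sept. 2004
for years in (1, 3, 5):
    print(f"+{years} yr: ~{projected_traffic_tb(base, 2.0, years):,.0f} TB/month")
# +1 yr: ~600, +3 yr: ~2,400, +5 yr: ~9,600 TB/month
```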

  13. The Science Traffic on ESnet is Increasing Dramatically: A Small Number of Science Users at Major Facilities Account for a Significant Fraction of all ESnet Traffic
[Chart: ESnet top 20 data flows, 24 hr. avg., 2004-04-20; the largest exceed 1 Terabyte/day]
SLAC (US) → IN2P3 (FR); Fermilab (US) → CERN; SLAC (US) → INFN Padova (IT); Fermilab (US) → U. Chicago (US); U. Toronto (CA) → Fermilab (US); Helmholtz-Karlsruhe (DE) → SLAC (US); CEBAF (US) → IN2P3 (FR); INFN Padova (IT) → SLAC (US); Fermilab (US) → JANET (UK); SLAC (US) → JANET (UK); Argonne (US) → Level3 (US); Fermilab (US) → INFN Padova (IT); Argonne → SURFnet (NL); IN2P3 (FR) → SLAC (US); DOE Lab → ?; DOE Lab → ?
• Since BaBar production started, the top 20 ESnet flows have consistently accounted for > 50% of ESnet's monthly total traffic (~130 of 250 TBy/mo)
• As LHC data starts to move, this will increase a lot (200-2000 times)

  14. Enabling the Future: ESnet's Evolution Near Term and Beyond
• Based on the requirements of the OSC network workshops, ESnet networking must address:
  • A reliable production IP network
  • University and international collaborator connectivity
  • Scalable, reliable, and high-bandwidth site connectivity
  • A network in support of high-impact science (Science Data Network)
    • provisioned circuits with guaranteed quality of service (e.g. dedicated bandwidth)
• Additionally, ESnet must be positioned to incorporate lessons learned from DOE's UltraScience Network, the advanced research network
• Upgrading ESnet to accommodate the anticipated increase from the current 100%/yr traffic growth to 300%/yr over the next 5-10 years is priority number 7 out of 20 in DOE's "Facilities for the Future of Science – A Twenty Year Outlook"

  15. Evolving Requirements for DOE Science Network Infrastructure
[Diagram: requirements shown over four time horizons, with node types S = storage, C = compute, I = instrument, C&C = cache & compute. The near-term (1-3 yr and 2-4 yr) requirements call for guaranteed-bandwidth paths and 1-40 Gb/s end-to-end connectivity among instruments, storage, and compute/cache sites; the longer-term (3-5 yr and 4-7 yr) requirements grow to 100-200 Gb/s end-to-end.]

  16. New ESnet Architecture Needs to Accommodate OSC
• The essential requirements cannot be met with the current, telecom-provided, hub-and-spoke architecture of ESnet
[Diagram: the ESnet core ring with hubs at Sunnyvale (SNV), Chicago (CHI), New York (AOA), Washington, DC (DC), Atlanta (ATL), and El Paso (ELP), with DOE sites attached to the hubs by point-to-point tail circuits]
• The core ring has good capacity and resiliency against single point failures, but the point-to-point tail circuits are neither reliable nor scalable to the required bandwidth

  17. Basis for a New Architecture
• Goals for each site:
  • fully redundant connectivity
  • high-speed access to the backbone
• Meeting these goals requires a two-part approach:
  • Connecting to sites via a MAN ring topology (illustrated in the sketch below) to provide:
    • Dual site connectivity and much higher site bandwidth
  • Employing a second backbone to provide:
    • Multiply connected MAN ring protection against hub failure
    • Extra backbone capacity
    • A platform for provisioned, guaranteed-bandwidth circuits
    • An alternate path for production IP traffic
    • Access to carrier-neutral hubs
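A minimal sketch of why a ring topology gives the redundant site connectivity this slide calls for: in a ring, any single link can fail and every node remains reachable, whereas a failed point-to-point spoke isolates its site. The five-node ring and its names below are hypothetical placeholders, not the actual MAN designs.

```python
# Illustrative check that a ring topology survives any single link failure.
# Site names are hypothetical placeholders, not the actual ESnet MAN layout.

def reachable(links, start):
    """Nodes reachable from `start` over the undirected `links`."""
    seen, stack = {start}, [start]
    while stack:
        n = stack.pop()
        for a, b in links:
            nxt = b if a == n else a if b == n else None
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

sites = ["hub-east", "hub-west", "site-A", "site-B", "site-C"]
ring = [(sites[i], sites[(i + 1) % len(sites)]) for i in range(len(sites))]

# Fail each link in turn and confirm the remaining ring is still fully connected.
for failed in ring:
    remaining = [link for link in ring if link != failed]
    assert reachable(remaining, "hub-east") == set(sites)
print("ring survives every single-link failure")
```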

  18. Bay Area, Chicago, New York MANs
[Diagram: metropolitan area rings added in the Bay Area, Chicago, and New York, attached to the existing ESnet core with hubs at Sunnyvale (SNV), Chicago (CHI), New York (AOA), Washington, DC (DC), Atlanta (ATL), and El Paso (ELP); DOE/OSC sites connect through the MAN rings]

  19. Enhancing the Existing Core
[Diagram: the existing core ring with new hubs added alongside the existing hubs at Sunnyvale, Chicago, New York, Washington, DC, Atlanta, and El Paso; DOE/OSC sites attach through the hubs]

  20. Addition of Science Data Network (SDN) Backbone
[Diagram: a second backbone, the ESnet Science Data Network (SDN), overlaid on the existing core, with connections to Europe, Asia-Pacific, and the USN (UltraScience Net), the existing and new hubs, and the DOE/OSC sites]

  21. ESnet – UltraScience Net Topology

  22. On-Demand Secure Circuits and Advance Reservation System (OSCARS)
• Guaranteed Bandwidth Circuit
  • Multi-Protocol Label Switching (MPLS) and the Resource Reservation Protocol (RSVP) are used to create a Label Switched Path (LSP) between ESnet border routers
  • A Quality-of-Service (QoS) level is assigned to the LSP to guarantee bandwidth
[Diagram: an LSP between ESnet border routers, from Source to Sink; RSVP and MPLS are enabled on the internal interfaces; MPLS labels are attached to packets from the Source, and the labeled traffic is placed in a separate interface queue, alongside the regular production traffic queue, to ensure guaranteed bandwidth]
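The following toy model (illustrative only, not the routers' actual queueing discipline) shows the idea behind placing labeled LSP traffic in its own queue: in each scheduling interval the reserved share is served first, and production traffic gets whatever link capacity remains.

```python
# Toy illustration of a guaranteed-bandwidth queue next to a best-effort queue.
# Numbers and function names are hypothetical; real routers use hardware QoS.

def serve_round(link_capacity_mb, reserved_mb, lsp_queue_mb, prod_queue_mb):
    """Return (lsp_sent, prod_sent) for one scheduling interval, in MB."""
    lsp_sent = min(lsp_queue_mb, reserved_mb)                 # honor the reservation first
    prod_sent = min(prod_queue_mb, link_capacity_mb - lsp_sent)
    return lsp_sent, prod_sent

# 1000 MB/interval link, 400 MB/interval reserved for the LSP.
print(serve_round(1000, 400, lsp_queue_mb=350, prod_queue_mb=900))  # (350, 650)
print(serve_round(1000, 400, lsp_queue_mb=600, prod_queue_mb=900))  # (400, 600)
```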

  23. On-Demand Secure Circuits and Advance Reservation System (OSCARS)
• Reservation Manager components:
  • The Web-Based User Interface (WBUI) will prompt the user for a username/password and forward it to the AAAS
  • The Authentication, Authorization, and Auditing Subsystem (AAAS) will handle access, enforce policy, and generate usage records
  • The Bandwidth Scheduler Subsystem (BSS) will track reservations and map the state of the network (present and future)
  • The Path Setup Subsystem (PSS) will set up and tear down the on-demand paths (LSPs)
[Diagram: user requests enter via the WBUI, or directly from a user application via the AAAS; the Reservation Manager coordinates the AAAS, BSS, and PSS; the PSS issues instructions to set up and tear down LSPs on routers; feedback is returned to the user]
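The sketch below traces the subsystem hand-off described on this slide; all class and method names are illustrative placeholders, not the actual OSCARS implementation.

```python
# Schematic sketch of the OSCARS request flow: WBUI/user app -> AAAS -> BSS -> PSS.
# Class and method names are hypothetical, not the real OSCARS code.
from dataclasses import dataclass

@dataclass
class Reservation:
    user: str
    src: str
    dst: str
    bandwidth_mbps: int
    start: str
    end: str

class AAAS:
    def authorize(self, user: str, password: str) -> bool:
        # Would enforce policy and record the request for auditing (stubbed here).
        return bool(user and password)

class BandwidthScheduler:
    def __init__(self, link_capacity_mbps: int):
        self.capacity = link_capacity_mbps
        self.reserved = []

    def can_accommodate(self, r: Reservation) -> bool:
        # Simplification: treat all reservations as overlapping in time.
        in_use = sum(x.bandwidth_mbps for x in self.reserved)
        return in_use + r.bandwidth_mbps <= self.capacity

    def book(self, r: Reservation) -> None:
        self.reserved.append(r)

class PathSetup:
    def create_lsp(self, r: Reservation) -> str:
        # Would push MPLS/RSVP configuration to the border routers.
        return f"LSP {r.src}->{r.dst} @ {r.bandwidth_mbps} Mb/s"

def request_circuit(user, password, r, aaas, bss, pss) -> str:
    if not aaas.authorize(user, password):
        return "denied: authentication/authorization failed"
    if not bss.can_accommodate(r):
        return "denied: insufficient bandwidth in the requested window"
    bss.book(r)
    return "granted: " + pss.create_lsp(r)

bss = BandwidthScheduler(link_capacity_mbps=10_000)   # hypothetical 10 Gb/s link
r = Reservation("alice", "site-A", "site-B", 2_000, "2004-12-01T00:00", "2004-12-01T06:00")
print(request_circuit("alice", "secret", r, AAAS(), bss, PathSetup()))
```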

  24. ESnet Meeting Science Requirements – 2007/2008
• 10 Gbps enterprise IP traffic
• 40-60 Gbps circuit-based transport
[Map: the planned 2007/2008 topology showing the ESnet IP core and a second core, the ESnet Science Data Network (SDN); metropolitan area rings; hubs at SEA, SNV, DEN, ALB, ELP, SDG, CHI, NYC, DC, and ATL; high-speed cross-connects with Internet2/Abilene at major DOE Office of Science sites; and international links to CERN, Europe, Japan, Asia-Pacific, and Australia. Legend: production IP ESnet core at 10 Gb/s growing to 30-40 Gb/s in future phases, high-impact science core at 2.5-10 Gb/s, lab-supplied links, and major international connections.]
