
ESnet Status Update






Presentation Transcript


  1. ESnet Status Update ESCC July 18, 2007 William E. Johnston ESnet Department Head and Senior Scientist Energy Sciences Network Lawrence Berkeley National Laboratory wej@es.net, www.es.net This talk is available at www.es.net/ESnet4 Networking for the Future of Science

  2. DOE Office of Science and ESnet – the ESnet Mission • ESnet’s primary mission is to enable the large-scale science that is the mission of the Office of Science (SC) and that depends on: • Sharing of massive amounts of data • Supporting thousands of collaborators world-wide • Distributed data processing • Distributed data management • Distributed simulation, visualization, and computational steering • Collaboration with the US and International Research and Education community • ESnet provides network and collaboration services to Office of Science laboratories and many other DOE programs in order to accomplish its mission

  3. Talk Outline I. Current Network Status II. Planning and Building the Future Network - ESnet4 III. Science Collaboration Services - 1. Federated Trust IV. Science Collaboration Services - 2. Audio, Video, Data Teleconferencing

  4. I. ESnet3 Today Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators (Early 2007) [Network map of ESnet3. The legend shows 42 end user sites: Office of Science sponsored (22), NNSA sponsored (13), joint sponsored (3), laboratory sponsored (6), and other sponsored (NSF LIGO, NOAA). Link types include international (high speed), the 10 Gb/s SDN core, the 10 Gb/s and 2.5 Gb/s IP core (packet over SONET optical ring and hubs), MAN rings (≥ 10 Gb/s), lab-supplied links, OC12 ATM (622 Mb/s), OC12 / GigEthernet, OC3 (155 Mb/s), and 45 Mb/s and less. The map also marks ESnet core hubs, commercial peering points (Equinix, MAE-E, PAIX-PA, etc.), high-speed peering points with Internet2/Abilene, and specific R&E network peers, including CERN (USLHCnet, DOE+CERN funded), GÉANT (France, Germany, Italy, UK, etc.), SINet (Japan), CA*net4 (Canada), AARNet (Australia), TANet2/ASCC (Taiwan), GLORIAD (Russia, China), Kreonet2 (Korea), KAREN/REANNZ, SINGAREN, AMPATH (S. America), and others, serving DOE labs and sites such as PNNL, NERSC, SLAC, LBNL, LLNL, SNLL, JGI, ANL, BNL, FNAL, PPPL, Ames, JLab, ORNL, LANL, SNLA, and more.]

  5. ESnet Availability With a goal of “5 nines” (>99.995%) availability for the large science Labs, it becomes clear that ESnet will have to deploy dual routers at the site and core-core attachment points in order to avoid downtime due to router reloads/upgrades. [Availability chart grouping sites into “5 nines” (>99.995%), “4 nines” (>99.95%), and “3 nines” (>99.5%) bands, with dually connected sites indicated.] Note: These availability measures cover only the ESnet infrastructure; they do not include site-related problems. Some sites, e.g. PNNL and LANL, provide the circuits from the site to an ESnet hub, and therefore the ESnet-site demarc is at the ESnet hub (there is no ESnet equipment at the site). In this case, circuit outages between the ESnet equipment and the site are considered site issues and are not included in the ESnet availability metric.
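
To put these tiers in perspective, a minimal sketch (not from the slides) that converts the quoted availability thresholds into allowed downtime per year:

    # Convert the availability tiers quoted above into allowed downtime per year
    # (thresholds as defined on the slide).
    MINUTES_PER_YEAR = 365 * 24 * 60

    tiers = {
        '"3 nines" (99.5%)': 0.995,
        '"4 nines" (99.95%)': 0.9995,
        '"5 nines" (99.995%)': 0.99995,
    }

    for label, availability in tiers.items():
        downtime = (1 - availability) * MINUTES_PER_YEAR
        print(f"{label}: ~{downtime:.0f} minutes of downtime per year")
        # -> roughly 2628, 263, and 26 minutes per year respectively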

  6. Peering Issues • ESnet has experienced congestion at both the West Coast and Midwest Equinix commercial peering exchanges

  7. Commercial Peering Congestion Issues: Temporary Changes and Long-Term Fixes • The OC3 connection between paix-pa-rt1 and snv-rt1 was very congested, with peaks clipped for most of the day • Temporary mitigation • Temporarily forcing West Coast Level3 traffic to eqx-chicago - traffic is now only clipped (if at all) at the peak of the day • Long-term solution • Establish new Level3 peering at eqx-chicago (7/11/07) • Working on establishing a second peering with Global Crossing • Upgrade the current loop (OC3) and fabric (100 Mb/s) to 1 Gb/s • Congestion to AT&T • Long-term solution • Upgraded AT&T peering at eqx-sanjose from OC3 to OC12 (3/15/07) • Established OC12 peering with AT&T at eqx-ashburn (1/29/07) and eqx-chicago (7/11/07) • The Equinix shared fabric at eqx-ashburn is congested • Long-term solution • New Level3 peering at eqx-chicago has helped to relieve congestion • Additional mitigation • Third peering with Google at eqx-chicago, third peering with Yahoo at eqx-chicago • Future mitigation • Establish a second peering with Global Crossing at eqx-chicago • Upgrade the equinix-sanjose and equinix-ashburn fabric connections from 100 Mb/s to 1 Gb/s

  8. II. Planning and Building the Future Network - ESnet4 • Requirements are primary drivers for ESnet – science focused • Sources of Requirements • Office of Science (SC) Program Managers • The Program Offices Requirements Workshops • BES completed • BER in July, 2007 • Others to follow at the rate of 3 a year • Direct gathering through interaction with science users of the network • Example case studies (updated 2005/2006) • Magnetic Fusion • Large Hadron Collider (LHC) • Climate Modeling • Spallation Neutron Source • Observation of the network • Requirements aggregation • Convergence on a complete set of network requirements

  9. 1. Basic Energy Sciences (BES) Network Requirements Workshop • Input from BES facilities, science programs and sites • Light Sources • SNS at ORNL, Neutron Science program • Nanoscience Centers • Combustion Research • Computational Chemistry • Other existing facilities (e.g. National Center for Electron Microscopy at LBL) • Facilities currently undergoing construction (e.g. LCLS at SLAC)

  10. Workshop Process • Three inputs • Discussion of Program Office – goals, future projects, and science portfolio • Discussions with representatives of individual programs and facilities • Group discussion about common issues, future technologies (e.g. detector upgrades), etc. • Additional discussion – ESnet4 • Architecture • Deployment schedule • Future services

  11. BES Workshop Findings (1) • BES facilities are unlikely to provide the magnitude of load that we expect from the LHC • However, significant detector upgrades are coming in the next 3 years • LCLS may provide significant load • SNS data repositories may provide significant load • Theory and simulation efforts may provide significant load • Broad user base • Makes it difficult to model facilities as anything other than point sources of traffic load • Requires wide connectivity • Most facilities and disciplines expect significant increases in PKI service usage

  12. BES Workshop Findings (2) • Significant difficulty and frustration with moving data sets • Problems deal with moving data sets that are small by HEP’s standards • Currently many users ship hard disks or stacks of DVDs • Solutions • HEP model of assigning a group of skilled computer people to address the data transfer problem does not map well onto BES for several reasons • BES is more heterogeneous in science and in funding • User base for BES facilities is very heterogeneous and this results in a large number of sites that must be involved in data transfers • It appears that this is likely to be true of the other Program Offices • ESnet action item – build a central web page for disseminating information about data transfer tools and techniques • Users also expressed interest in a blueprint for a site-local BWCTL/PerfSONAR service (1A)
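
As a rough illustration of the site-local BWCTL service idea (1A), a minimal sketch that wraps a bwctl throughput test in a script. The host names are hypothetical placeholders, and the exact bwctl options can vary by version (here -s is the sender, -c the receiver, -t the test length in seconds):

    # Hypothetical sketch: run a bwctl throughput test between a site-local
    # test host and an ESnet-side test host, then print the report.
    # Both endpoints are placeholders and must be running bwctld.
    import subprocess

    SITE_HOST = "bwctl.mysite.example"    # hypothetical site-local test point
    ESNET_HOST = "bwctl.esnet.example"    # hypothetical ESnet-side test point

    result = subprocess.run(
        ["bwctl", "-s", SITE_HOST, "-c", ESNET_HOST, "-t", "30"],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)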

  13. 2. Case Studies For Requirements • Advanced Scientific Computing Research (ASCR) • NERSC • NLCF • Basic Energy Sciences • Advanced Light Source • Macromolecular Crystallography • Chemistry/Combustion • Spallation Neutron Source • Biological and Environmental Research • Bioinformatics/Genomics • Climate Science • Fusion Energy Sciences • Magnetic Fusion Energy/ITER • High Energy Physics • LHC • Nuclear Physics • RHIC

  14. (2A) Science Networking Requirements Aggregation Summary

  15. Science Network Requirements Aggregation Summary Immediate Requirements and Drivers

  16. (2B) The Next Level of Detail: LHC Tier 0, 1, and 2 Connectivity Requirements Summary [Connectivity diagram showing the USLHC nodes, the Tier 1 centers (TRIUMF - ATLAS T1, Canada; BNL - ATLAS T1; FNAL - CMS T1), Tier 2 sites, Internet2/GigaPoP nodes, ESnet IP core hubs, ESnet SDN/NLR hubs, and the ESnet - Internet2 cross connects, with paths spanning CERN, CANARIE, USLHCNet, GÉANT, the Internet2 / RONs, and the major US hubs; virtual circuits ride the ESnet SDN.] • Direct connectivity T0-T1-T2 • USLHCNet to ESnet to Abilene • Backup connectivity • SDN, GLIF, VCs

  17. (2C) The Next Level of Detail: LHC ATLAS Bandwidth Matrix as of April 2007

  18. LHC CMS Bandwidth Matrix as of April 2007

  19. Large-Scale Data Analysis Systems (Typified by the LHC) have Several Characteristics that Result in Requirements for the Network and its Services • The systems are data intensive and high-performance, typically moving terabytes a day for months at a time • The systems are high duty-cycle, operating most of the day for months at a time in order to meet the requirements for data movement • The systems are widely distributed – typically spread over continental or inter-continental distances • Such systems depend on network performance and availability, but these characteristics cannot be taken for granted, even in well run networks, when the multi-domain network path is considered • The applications must be able to get guarantees from the network that there is adequate bandwidth to accomplish the task at hand • The applications must be able to get information from the network that allows graceful failure and auto-recovery and adaptation to unexpected network conditions that are short of outright failure This slide drawn from [ICFA SCIC]
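
For a sense of what "terabytes a day" means as a sustained network load, a quick back-of-the-envelope conversion (my arithmetic, not a figure from the slide):

    # Sustained rate needed to move a given daily data volume.
    def sustained_gbps(terabytes_per_day: float) -> float:
        bits = terabytes_per_day * 1e12 * 8   # decimal TB -> bits
        return bits / 86400 / 1e9             # per day -> Gb/s

    for tb in (1, 10, 100):
        print(f"{tb:>3} TB/day is roughly {sustained_gbps(tb):.2f} Gb/s sustained")
        # -> ~0.09, ~0.93, and ~9.26 Gb/s respectively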

  20. Enabling Large-Scale Science • These requirements are generally true for systems with widely distributed components to be reliable and consistent in performing the sustained, complex tasks of large-scale science • Networks must provide communication capability that is service-oriented: configurable, schedulable, predictable, reliable, and informative – and the network and its services must be scalable (2D)

  21. 3. Observed Evolution of Historical ESnet Traffic Patterns • ESnet Monthly Accepted Traffic, January 2000 – June 2007 (Terabytes / month; the chart distinguishes the top 100 site-to-site workflows, with site-to-site workflow data not available for the earlier years) • ESnet total traffic passed 2 Petabytes/month about mid-April 2007 • ESnet is currently transporting more than 1 petabyte (1000 terabytes) per month • More than 50% of the traffic is now generated by the top 100 sites – large-scale science dominates all ESnet traffic

  22. ESnet Traffic has Increased by 10X Every 47 Months, on Average, Since 1990 • Log plot of ESnet Monthly Accepted Traffic (Terabytes / month), January 1990 – June 2007 • Milestones: 100 MBy/mo (Aug. 1990); 1 TBy/mo (Oct. 1993, 38 months later); 10 TBy/mo (Jul. 1998, 57 months later); 100 TBy/mo (Nov. 2001, 40 months later); 1 PBy/mo (Apr. 2006, 53 months later)
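
The "10X every 47 months" figure is simply the average spacing of the 10x milestones listed above; a quick check:

    # Average time between successive 10x traffic milestones (from the plot above).
    # Aug 1990 -> Oct 1993 -> Jul 1998 -> Nov 2001 -> Apr 2006
    intervals_months = [38, 57, 40, 53]
    print(sum(intervals_months) / len(intervals_months))   # -> 47.0 months per 10x increase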

  23. Requirements from Network Utilization Observation • In 4 years, we can expect a 10x increase in traffic over current levels without the addition of production LHC traffic • Nominal average load on busiest backbone links is ~1.5 Gbps today • In 4 years that figure will be ~15 Gbps based on current trends • Measurements of this type are science-agnostic • It doesn’t matter who the users are, the traffic load is increasing exponentially • Predictions based on this sort of forward projection tend to be conservative estimates of future requirements because they cannot predict new uses (3A)
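
A minimal sketch of the forward projection behind these figures, using the ~1.5 Gb/s current average load and the observed 10x-per-47-months growth rate from the preceding slides:

    # Project the average backbone link load forward at the observed growth rate.
    GROWTH_FACTOR = 10            # traffic grows ~10x ...
    GROWTH_PERIOD_MONTHS = 47     # ... roughly every 47 months
    CURRENT_LOAD_GBPS = 1.5       # nominal average load on the busiest links today

    def projected_load_gbps(months_ahead: float) -> float:
        return CURRENT_LOAD_GBPS * GROWTH_FACTOR ** (months_ahead / GROWTH_PERIOD_MONTHS)

    print(f"in 4 years: ~{projected_load_gbps(48):.0f} Gb/s")   # ~16 Gb/s, matching the ~15 Gb/s estimate
    print(f"in 8 years: ~{projected_load_gbps(96):.0f} Gb/s")   # ~165 Gb/s, in line with the ~150 Gb/s eight-year figure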

  24. Requirements from Traffic Flow Observations • Most of ESnet science traffic has a source or sink outside of ESnet • Drives requirement for high-bandwidth peering • Reliability and bandwidth requirements demand that peering be redundant • Multiple 10 Gbps peerings today, must be able to add more bandwidth flexibly and cost-effectively • Bandwidth and service guarantees must traverse R&E peerings • Collaboration with other R&E networks on a common framework is critical • Seamless fabric • Large-scale science is now the dominant user of the network • Satisfying the demands of large-scale science traffic into the future will require a purpose-built, scalable architecture • Traffic patterns are different than commodity Internet (3B) (3C)

  25. Summary of All Requirements To-Date Requirements from SC Programs: 1A) Provide “consulting” on system / application network tuning Requirements from science case studies: 2A) Build the ESnet core up to 100 Gb/s within 5 years 2B) Deploy network to accommodate LHC collaborator footprint 2C) Implement network to provide for LHC data path loadings 2D) Provide the network as a service-oriented capability Requirements from observing traffic growth and change trends in the network: 3A) Provide 15 Gb/s core within four years and 150 Gb/s core within eight years 3B) Provide a rich diversity and high bandwidth for R&E peerings 3C) Economically accommodate a very large volume of circuit-like traffic

  26. ESnet4 - The Response to the Requirements I) A new network architecture and implementation strategy • Provide two networks: an IP network and a circuit-oriented Science Data Network • Reduces the cost of handling high-bandwidth data flows • Highly capable routers are not necessary when every packet goes to the same place • Use lower-cost (factor of 5x) switches to route these packets • Rich and diverse network topology for flexible management and high reliability • Dual connectivity at every level for all large-scale science sources and sinks • A partnership with the US research and education community to build a shared, large-scale, R&E-managed optical infrastructure • a scalable approach to adding bandwidth to the network • dynamic allocation and management of optical circuits II) Development and deployment of a virtual circuit service • Develop the service cooperatively with the networks that are intermediate between DOE Labs and major collaborators to ensure end-to-end interoperability III) Develop and deploy service-oriented, user-accessible network monitoring systems IV) Provide “consulting” on system / application network performance tuning
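
To make the virtual circuit service (item II) concrete, a purely illustrative sketch of the parameters such a guaranteed-bandwidth reservation would need to carry; the field names and endpoint identifiers are hypothetical, and this is not the actual ESnet reservation API:

    # Illustrative only: what a guaranteed-bandwidth virtual circuit request
    # might carry. Field names and endpoints are hypothetical.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CircuitRequest:
        src_endpoint: str          # e.g. a site edge router interface (hypothetical)
        dst_endpoint: str
        bandwidth_gbps: float      # guaranteed bandwidth for the reservation
        start: datetime            # reservation window
        end: datetime

    request = CircuitRequest(
        src_endpoint="site-a-edge:xe-0/0/0",   # hypothetical identifiers
        dst_endpoint="site-b-edge:xe-1/0/0",
        bandwidth_gbps=5.0,
        start=datetime(2007, 9, 1),
        end=datetime(2007, 9, 8),
    )
    print(request)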

  27. ESnet4 • Internet2 has partnered with Level 3 Communications Co. and Infinera Corp. for a dedicated optical fiber infrastructure with a national footprint and a rich topology - the “Internet2 Network” • The fiber will be provisioned with Infinera Dense Wavelength Division Multiplexing (DWDM) equipment that uses an advanced, integrated optical-electrical design • Level 3 will maintain the fiber and the DWDM equipment • The DWDM equipment will initially be provisioned to provide 10 optical circuits (lambdas - λs) across the entire fiber footprint (80 λs is the maximum) • ESnet has partnered with Internet2 to: • Share the optical infrastructure • Develop new circuit-oriented network services • Explore mechanisms that could be used for the ESnet Network Operations Center (NOC) and the Internet2/Indiana University NOC to back each other up for disaster recovery purposes

  28. ESnet4 • ESnet will build its next generation IP network and its new circuit-oriented Science Data Network primarily on the Internet2 circuits (λs) that are dedicated to ESnet, together with a few National Lambda Rail and other circuits • ESnet will provision and operate its own routing and switching hardware that is installed in various commercial telecom hubs around the country, as it has done for the past 20 years • ESnet’s peering relationships with the commercial Internet, various US research and education networks, and numerous international networks will continue and evolve as they have for the past 20 years

  29. Internet2 and ESnet Optical Node [Diagram of a combined Internet2/ESnet optical node. The Internet2/Level3 national optical infrastructure (Infinera DTN), with fiber running east/west and north/south, delivers dynamically allocated and routed waves (future) to the ESnet IP core and SDN core switch (M320 and T640 routers shown), a grooming device (Ciena CoreDirector), the ESnet metro-area networks, direct optical connections to RONs, and network testbeds with future access to the control plane. Support devices at the node include measurement, out-of-band access, monitoring, and security systems, along with various equipment and experimental control plane management systems.]

  30. ESnet Metropolitan Area Network Ring Architecture for High Reliability Sites [Diagram of the ESnet MAN ring architecture. A MAN fiber ring, with 2-4 x 10 Gbps channels provisioned initially and expansion capacity to 16-64 channels, connects MAN site switches and USLHCnet switches to the ESnet production IP core hub (IP core router, IP core east/west) and the ESnet SDN core hub (SDN core switches and T320, SDN core east/west). A large science site attaches via an ESnet MAN switch with independent port cards supporting multiple 10 Gb/s line interfaces, an ESnet switch at the site, and the site gateway/edge router and site LAN. The ring delivers the ESnet production IP service, ESnet-managed λ / circuit services, virtual circuits to the site, and SDN circuits to site systems; ESnet-managed virtual circuit services can also be tunneled through the IP backbone.]

  31. ESnet 3 Backbone as of January 1, 2007 [Backbone map. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); MAN rings (≥ 10 Gb/s); lab-supplied links; existing and future ESnet hubs. Hubs: Seattle, New York City, Sunnyvale, Chicago, Washington DC, San Diego, Albuquerque, Atlanta, El Paso.]

  32. ESnet 4 Backbone as of April 15, 2007 [Backbone map. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; existing and future ESnet hubs. Hubs: Seattle, Boston, New York City, Cleveland, Sunnyvale, Chicago, Washington DC, San Diego, Albuquerque, Atlanta, El Paso.]

  33. ESnet 4 Backbone as of May 15, 2007 [Backbone map. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; existing and future ESnet hubs. Hubs: Seattle, Boston, New York City, Cleveland, Sunnyvale, Chicago, Washington DC, San Diego, Albuquerque, Atlanta, El Paso.]

  34. ESnet 4 Backbone as of June 20, 2007 [Backbone map. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; existing and future ESnet hubs. Hubs: Seattle, Boston, New York City, Cleveland, Sunnyvale, Denver, Chicago, Washington DC, Kansas City, San Diego, Albuquerque, Atlanta, El Paso, Houston.]

  35. ESnet 4 Backbone Target August 1, 2007 (Denver-Sunnyvale-El Paso ring installed July 16, 2007) [Backbone map. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; existing and future ESnet hubs. Hubs: Seattle, Boston, New York City, Cleveland, Sunnyvale, Denver, Chicago, Washington DC, Kansas City, Los Angeles, San Diego, Albuquerque, Atlanta, El Paso, Houston.]

  36. ESnet 4 Backbone Target August 30, 2007 [Backbone map. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; existing and future ESnet hubs. Hubs: Seattle, Boston, Boise, New York City, Cleveland, Sunnyvale, Denver, Chicago, Washington DC, Kansas City, Los Angeles, San Diego, Albuquerque, Atlanta, El Paso, Houston.]

  37. ESnet 4 Backbone Target September 30, 2007 [Backbone map. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; existing and future ESnet hubs. Hubs: Seattle, Boston, Boise, New York City, Cleveland, Denver, Sunnyvale, Chicago, Washington DC, Kansas City, Los Angeles, Nashville, San Diego, Albuquerque, Atlanta, El Paso, Houston.]

  38. ESnet4 Roll Out: ESnet4 IP + SDN Configuration, mid-September 2007 [Network map showing ESnet IP switch/router hubs, IP switch only hubs, SDN switch hubs, the ESnet IP core, the ESnet Science Data Network core, SDN core NLR links, lab-supplied links, LHC-related links, MAN links, international IP connections, layer 1 optical nodes at eventual ESnet points of presence, layer 1 optical nodes not currently in ESnet plans, and lab sites. Hubs and optical nodes span Seattle, Portland, Boise, Boston, Chicago, Cleveland, NYC, Denver, Sunnyvale, Philadelphia, Kansas City, Salt Lake City, Pittsburgh, Washington DC, Indianapolis, Raleigh, Tulsa, LA, Nashville, Albuquerque, San Diego, Atlanta, Jacksonville, El Paso, Baton Rouge, and Houston. All circuits are 10 Gb/s unless noted; one southern segment is marked OC48.]

  39. ESnet4 Metro Area Rings, 2007 Configurations [Map of the 2007 MAN configurations overlaid on the ESnet4 backbone: the Long Island MAN (32 AoA NYC, BNL, USLHCNet), the West Chicago MAN (600 W. Chicago, Starlight, FNAL, ANL, USLHCNet), the San Francisco Bay Area MAN (LBNL, NERSC, JGI, SLAC, LLNL, SNLL), the Newport News - ELITE area (JLab, ELITE, ODU, MATP, Wash., DC), and the Atlanta MAN (ORNL, Nashville, 56 Marietta (SOX), 180 Peachtree, Wash., DC), plus Ames. The legend distinguishes ESnet IP switch/router hubs, IP switch only hubs, SDN switch hubs, the ESnet IP core, the Science Data Network core, SDN core NLR links (existing), lab-supplied links, LHC-related links, MAN links, international IP connections, and layer 1 optical nodes. All circuits are 10 Gb/s.]

  40. Note that the major ESnet sites are now directly on the ESnet “core” network [Map of the ESnet4 core with the metro area networks attached directly to the core hubs: the West Chicago MAN (600 W. Chicago, Starlight, FNAL, ANL, USLHCNet), the Long Island MAN (32 AoA NYC, BNL, USLHCNet), the San Francisco Bay Area MAN (LBNL, NERSC, JGI, SLAC, LLNL, SNLL), the Newport News - ELITE area (JLab, ELITE, ODU, MATP, Wash., DC), and the Atlanta MAN (ORNL, Nashville, 56 Marietta (SOX), 180 Peachtree, Wash., DC). Core segments are annotated with their eventual number of 10 Gb/s waves (3 to 5 per segment) and Internet2 circuit numbers; the usual legend applies. For example, the bandwidth into and out of FNAL is equal to, or greater than, the ESnet core bandwidth.]

  41. The Evolution of ESnet Architecture • ESnet to 2005: • A routed IP network with sites singly attached to a national core ring • ESnet from 2006-07: • A routed IP network with sites dually connected on metro area rings or dually connected directly to the core ring • A switched network, the ESnet Science Data Network (SDN) core, providing virtual circuit services for data-intensive science • Rich topology offsets the lack of dual, independent national cores (independent redundancy TBD) • [Diagram elements: other IP networks, ESnet IP core, SDN core, ESnet sites, ESnet hubs / core network connection points, metro area rings (MANs), and circuit connections to other science networks (e.g. USLHCNet)]

  42. ESnet 4 Factoids as of July 16, 2007 Installation to date: • 10 new 10 Gb/s circuits • ~10,000 route miles • 6 new hubs • 5 new routers • 4 new switches • Total of 70 individual pieces of equipment shipped • Over two and a half tons of electronics • 15 round-trip airline tickets for our install team • About 60,000 miles traveled so far… • 6 cities • 5 Brazilian Bar-B-Qs/Grills sampled

  43. Typical ESnet 4 Hub [Rack diagram showing: OWAMP time source, power controllers, secure terminal server, peering router, 10G performance tester, M320 router, and 7609 switch]

  44. (2C) Aggregate Estimated Link Loadings, 2007-08 [Map of the ESnet4 footprint with committed bandwidth (in Gb/s) marked on each segment: figures such as 9, 12.5, and 13 Gb/s on the heaviest links and 2.5-8.5 Gb/s elsewhere, including existing site-supplied circuits at 2.5 Gb/s. The usual legend applies (IP switch/router hubs, IP switch only hubs, SDN switch hubs, IP core, Science Data Network core, SDN core NLR links, lab-supplied links, LHC-related links, MAN links, international IP connections, layer 1 optical nodes, lab sites).]

  45. (2C) ESnet4 2007-8 Estimated Bandwidth Commitments [Map of estimated 2007-8 bandwidth commitments (in Gb/s) overlaid on the ESnet4 topology and metro area networks: commitments from CERN via USLHCNet into Starlight / 600 W. Chicago and 32 AoA NYC (10 and 13 Gb/s, with 29 Gb/s total marked at Chicago), a 5 Gb/s CMS commitment at FNAL on the West Chicago MAN, and 2.5 Gb/s commitments at several other sites. All circuits are 10 Gb/s; the usual legend applies.]

  46. Are These Estimates Realistic? YES! FNAL Outbound CMS Traffic: Max = 1064 MBy/s (8.5 Gb/s), Average = 394 MBy/s (3.2 Gb/s)
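
The Gb/s figures quoted above are just the MBy/s values converted to bits; a one-line check:

    # Convert the plot's MBy/s figures to Gb/s (1 MBy/s = 8 Mb/s).
    for mbytes_per_s in (1064, 394):
        print(f"{mbytes_per_s} MBy/s = {mbytes_per_s * 8 / 1000:.1f} Gb/s")
    # -> 8.5 Gb/s and 3.2 Gb/s, matching the values quoted above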

  47. ESnet4 IP + SDN, 2008 Configuration [Map of the planned 2008 configuration: each core segment is annotated with its number of 10 Gb/s waves (one or two per segment), along with Internet2 circuit numbers and the usual legend (IP switch/router hubs, IP switch only hubs, SDN switch hubs, IP core, Science Data Network core, SDN core NLR links (existing), lab-supplied links, LHC-related links, MAN links, international IP connections, layer 1 optical nodes, lab sites). Hubs span Seattle, Portland, Boise, Boston, Chicago, Cleveland, NYC, Pittsburgh, Denver, Sunnyvale, Philadelphia, Kansas City, Salt Lake City, Washington DC, Indianapolis, Raleigh, Tulsa, LA, Nashville, Albuquerque, San Diego, Atlanta, Jacksonville, El Paso, Baton Rouge, and Houston.]

  48. Estimated ESnet4 2009 Configuration (some of the circuits may be allocated dynamically from a shared pool) [Map of the estimated 2009 configuration: each core segment is annotated with its number of 10 Gb/s waves (two or three per segment), along with Internet2 circuit numbers and the usual legend (hub types, IP core, Science Data Network core, SDN core NLR links, lab-supplied links, LHC-related links, MAN links, international IP connections, layer 1 optical nodes, lab sites).]

  49. (2C) Aggregate Estimated Link Loadings, 2010-11 [Map of estimated 2010-11 loadings: core segments carry 3 to 5 waves of 10 Gb/s each, and aggregate committed loadings in Gb/s are marked on the links, with the largest figures (30, 45, and 50 Gb/s) in the Pacific Northwest and values of 5-20 Gb/s elsewhere. The usual legend applies.]

  50. (2C) ESnet4 2010-11 Estimated Bandwidth Commitments [Map of estimated 2010-11 bandwidth commitments (in Gb/s): CERN/USLHCNet figures of 40 and 65 Gb/s into 600 W. Chicago / Starlight and 32 AoA NYC, with 80-100 Gb/s figures on the associated metro links toward FNAL, ANL, and BNL, a 25 Gb/s CMS commitment, and commitments of 5-25 Gb/s elsewhere. Core segments carry 3 to 5 waves of 10 Gb/s each; the usual legend applies.]
