HENP Grids and Networks: Global Virtual Organizations

  • Harvey B. Newman, Caltech

  • Internet2 Virtual Member Meeting, March 19, 2002

  • http://l3www.cern.ch/~newman/HENPGridsNets_I2Virt031902.ppt


Computing Challenges: Petabytes, Petaflops, Global VOs

  • Geographical dispersion: of people and resources

  • Complexity: the detector and the LHC environment

  • Scale: Tens of Petabytes per year of data

5000+ Physicists

250+ Institutes

60+ Countries

  • Major challenges associated with:

    • Communication and collaboration at a distance

    • Managing globally distributed computing & data resources

    • Remote software development and physics analysis

    • R&D: New Forms of Distributed Systems: Data Grids


The Large Hadron Collider (2006-)

  • The Next-generation Particle Collider

    • The largest superconductor installation in the world

  • Bunch-bunch collisions at 40 MHz, each generating ~20 interactions

    • Only one in a trillion may lead to a major physics discovery

  • Real-time data filtering: Petabytes per second to Gigabytes per second

  • Accumulated data of many Petabytes/Year

  • Large data samples explored and analyzed by thousands of globally dispersed scientists, in hundreds of teams
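As a rough illustration of the data-reduction arithmetic behind these bullets, the short Python sketch below multiplies out the quoted crossing and interaction rates and the petabyte-to-gigabyte filtering step. The raw/recorded byte rates and the ~10^7 live seconds per year are illustrative assumptions, not figures from the talk.

```python
# Back-of-the-envelope sketch of the LHC online data reduction described above.
CROSSING_RATE_HZ = 40e6          # bunch-bunch collisions at 40 MHz
INTERACTIONS_PER_CROSSING = 20   # ~20 interactions per crossing

interaction_rate = CROSSING_RATE_HZ * INTERACTIONS_PER_CROSSING
print(f"Interaction rate: {interaction_rate:.1e} per second")      # ~8e8 /s

# Real-time filtering: Petabytes/sec off the detector to Gigabytes/sec recorded.
RAW_BYTES_PER_S = 1e15       # ~1 PB/s (order of magnitude, assumed)
RECORDED_BYTES_PER_S = 1e9   # ~1 GB/s (order of magnitude, assumed)
print(f"Online rejection factor: ~{RAW_BYTES_PER_S / RECORDED_BYTES_PER_S:.0e}")

# Recording ~1 GB/s for ~1e7 live seconds/year (assumed duty factor)
# accumulates data in the multi-Petabyte-per-year range, as stated.
LIVE_SECONDS_PER_YEAR = 1e7
print(f"Recorded data: ~{RECORDED_BYTES_PER_S * LIVE_SECONDS_PER_YEAR / 1e15:.0f} PB/year")
```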


Four LHC Experiments: The Petabyte to Exabyte Challenge

  • ATLAS, CMS, ALICE, LHCb: Higgs + New Particles; Quark-Gluon Plasma; CP Violation

  • Data stored ~40 Petabytes/Year and UP; CPU 0.30 Petaflops and UP

  • 0.1 Exabyte (2007) to ~1 Exabyte (~2012?) for the LHC Experiments (1 EB = 10^18 Bytes)
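A minimal accumulation sketch, assuming a flat ~40 PB/year from 2007; the flat rate is a simplification (the actual rate was expected to grow), so this only bounds the Exabyte-scale figure from below.

```python
# Flat-rate accumulation sketch: ~40 PB/year starting in 2007 (assumed constant).
RATE_PB_PER_YEAR = 40
total_pb = 0
for year in range(2007, 2013):
    total_pb += RATE_PB_PER_YEAR
    print(f"{year}: ~{total_pb} PB cumulative (~{total_pb / 1000:.2f} EB)")
# Even at a flat 40 PB/year the archive passes 0.1 EB within a few years;
# approaching ~1 EB by ~2012 implies the yearly rate itself must grow.
```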


LHC: Higgs Decay into 4 Muons (Tracker Only); 1000X LEP Data Rate

10^9 events/sec, selectivity: 1 in 10^13 (1 person in a thousand world populations)



Evidence for the Higgs at LEP at M ~115 GeV

The LEP Program Has Now Ended


LHC Data Grid Hierarchy

CERN/Outside Resource Ratio ~1:2; Tier0/(Σ Tier1)/(Σ Tier2) ~1:1:1

  • Experiment / Online System → Tier 0 +1 at CERN (700k SI95; ~1 PB Disk; Tape Robot; HPSS): ~PByte/sec off the detector, ~100-400 MBytes/sec recorded

  • Tier 0 +1 → Tier 1 centers over ~2.5 Gbits/sec links: FNAL (200k SI95; 600 TB), IN2P3 Center, INFN Center, RAL Center, each with HPSS mass storage

  • Tier 1 → Tier 2 centers over 2.5 Gbps links

  • Tier 2 → Tier 3 (Institutes, ~0.25 TIPS each) over ~2.5 Gbps links

  • Tier 3 → Tier 4 (physics data caches and workstations) at 100 - 1000 Mbits/sec

  • Physicists work on analysis “channels”; each institute has ~10 physicists working on one or more channels
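The sketch below (illustrative only; the 10 TB dataset size and the 50% sustained utilization are assumptions, the latter borrowed from the occupancy rule used later in the talk) turns the tier-to-tier link speeds above into rough replication times across one hop of the hierarchy.

```python
# Illustrative model of the tiered bandwidth budget above.
GBPS = 1e9  # bits per second

links_bps = {
    "Online System -> Tier 0+1": 400e6 * 8,   # ~100-400 MBytes/sec, upper end
    "Tier 0+1 -> Tier 1":        2.5 * GBPS,
    "Tier 1 -> Tier 2":          2.5 * GBPS,
    "Tier 2 -> Tier 3":          2.5 * GBPS,
    "Tier 3 -> Tier 4":          1.0 * GBPS,  # 100-1000 Mbits/sec, upper end
}

def transfer_hours(dataset_bytes: float, link_bps: float, utilization: float = 0.5) -> float:
    """Hours to move dataset_bytes over a link sustained at `utilization`."""
    return dataset_bytes * 8 / (link_bps * utilization) / 3600

DATASET_BYTES = 10e12  # an assumed 10 TB sample
for hop, bps in links_bps.items():
    print(f"{hop:28s} {transfer_hours(DATASET_BYTES, bps):6.1f} h for 10 TB at 50% use")
```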


HENP Related Data Grid Projects

  • Projects

    • PPDG I (USA, DOE): $2M, 1999-2001

    • GriPhyN (USA, NSF): $11.9M + $1.6M, 2000-2005

    • EU DataGrid (EU, EC): €10M, 2001-2004

    • PPDG II (CP) (USA, DOE): $9.5M, 2001-2004

    • iVDGL (USA, NSF): $13.7M + $2M, 2001-2006

    • DataTAG (EU, EC): €4M, 2002-2004

    • GridPP (UK, PPARC): >$15M, 2001-2004

    • LCG (Ph1) (CERN MS): 30 MCHF, 2002-2004

  • Many Other Projects of interest to HENP

    • Initiatives in US, UK, Italy, France, NL, Germany, Japan, …

    • US and EU networking initiatives: AMPATH, I2, DataTAG

    • US Distributed Terascale Facility: ($53M, 12 TeraFlops, 40 Gb/s network)


CMS Production: Event Simulation and Reconstruction

  • Production steps: Simulation and Digitization, with and without pile-up (PU), using common production tools (IMPALA) and GDMP for data replication

  • Worldwide Production at 20 Sites: “Grid-Enabled” and Automated

  • Sites include CERN, FNAL, Moscow, INFN (9), Caltech, UCSD, UFL, Imperial College, Bristol, Wisconsin, IN2P3 and Helsinki; site status ranges from fully operational to in progress


Next Generation Networks for Experiments: Goals and Needs

  • Providing rapid access to event samples and subsets from massive data stores

    • From ~400 Terabytes in 2001, ~Petabytes by 2002, ~100 Petabytes by 2007, to ~1 Exabyte by ~2012.

  • Providing analyzed results with rapid turnaround, by coordinating and managing the LIMITED computing, data handling and NETWORK resources effectively

  • Enabling rapid access to the data and the collaboration

    • Across an ensemble of networks of varying capability

  • Advanced integrated applications, such as Data Grids, rely on seamless operation of our LANs and WANs

    • With reliable, quantifiable (monitored), high performance

    • For “Grid-enabled” event processing and data analysis, and collaboration
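A quick sketch of the compound growth implied by the data-volume milestones above; the 2001, 2007 and 2012 points come from the bullet list, while the interpolation between them is mine.

```python
# Growth implied by ~400 TB (2001) -> ~100 PB (2007) -> ~1 EB (~2012).
milestones_pb = {2001: 0.4, 2007: 100, 2012: 1000}

years = sorted(milestones_pb)
for y0, y1 in zip(years, years[1:]):
    factor = milestones_pb[y1] / milestones_pb[y0]
    annual = factor ** (1 / (y1 - y0))
    print(f"{y0}->{y1}: x{factor:.0f} overall, ~x{annual:.1f} per year")
# Roughly 1.6x-2.5x more accessible data every year across the decade.
```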


Baseline BW for the US-CERN Link: HENP Transatlantic WG (DOE+NSF)

Transoceanic Networking Integrated with the Abilene, TeraGrid, Regional Nets and Continental Network Infrastructures in US, Europe, Asia, South America

Evolution typical of major HENP links 2001-2006

US-CERN Link: 2 X 155 Mbps Now; Plans: 622 Mbps in April 2002; DataTAG 2.5 Gbps Research Link in Summer 2002; 10 Gbps Research Link in ~2003 or Early 2004


Transatlantic Net WG (HN, L. Price): Bandwidth Requirements [*]

[*] Installed BW. Maximum Link Occupancy 50% Assumed

See http://gate.hep.anl.gov/lprice/TAN
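A minimal sketch of the 50% maximum-occupancy provisioning rule used in the footnote above; the example sustained-load values are illustrative, not entries from the WG's table.

```python
# Installed bandwidth sized so the link never runs above ~50% occupancy.
def installed_bw_mbps(required_mbps: float, max_occupancy: float = 0.5) -> float:
    """Installed capacity needed so `required_mbps` of load fills at most max_occupancy."""
    return required_mbps / max_occupancy

for need in (155, 622, 2500):   # example sustained-load targets in Mbps (assumed)
    print(f"Sustain {need:5d} Mbps -> install >= {installed_bw_mbps(need):.0f} Mbps")
```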


Links Required to US Labs and Transatlantic [*]

[*] Maximum Link Occupancy 50% Assumed


Daily, Weekly, Monthly and Yearly Statistics on 155 Mbps US-CERN Link

20 - 100 Mbps Used Routinely in ‘01

BaBar: 600 Mbps Throughput in ‘02

BW Upgrades Quickly Followed by Upgraded Production Use



RNP Brazil (to 20 Mbps)

FIU Miami (to 80 Mbps)

STARTAP/Abilene OC3 (to 80 Mbps)



Total U.S. Internet Traffic

[Chart: U.S. Internet traffic, 1970-2010, on a log scale from 10 bps to 100 Pbps. ARPA & NSF data to '96 plus new measurements; historical growth 2.8X/Year, projected forward at 4X/Year; voice crossover: August 2000; an upper curve marks the limit of the same % of GDP as voice. Source: Roberts et al., 2001]
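For illustration, the compound-growth projection quoted in the chart can be reproduced in a few lines; the year-2000 starting level of ~100 Gbps is an assumed round number, not a figure from the chart.

```python
# Project aggregate traffic forward at the two growth rates quoted above.
start_year, start_gbps = 2000, 100.0   # assumed ~100 Gbps aggregate around 2000

for rate, label in ((2.8, "historical 2.8X/Year"), (4.0, "projected 4X/Year")):
    level_gbps = start_gbps * rate ** (2010 - start_year)
    print(f"{label:>22s}: ~{level_gbps / 1e6:.0f} Pbps by 2010")   # 1 Pbps = 1e6 Gbps
```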


AMS-IX Internet Exchange Throughput: Accelerating Growth in Europe (NL)

Monthly Traffic: 2X Growth from 8/00 - 3/01; 2X Growth from 8/01 - 12/01

[Hourly traffic chart, 2/03/02; scale 0 - 6.0 Gbps]


ICFA SCIC December 2001: Backbone and Int’l Link Progress

  • Abilene upgrade from 2.5 to 10 Gbps; additional lambdas on demand planned for targeted applications

  • GEANT Pan-European backbone (http://www.dante.net/geant) now interconnects 31 countries; includes many trunks at OC48 and OC192

  • CA*net4: Interconnect customer-owned dark fiber nets across Canada, starting in 2003

  • GWIN (Germany): Connection to Abilene to 2 X OC48 Expected in 2002

  • SuperSINET (Japan): Two OC12 Links, to Chicago and Seattle; Plan upgrade to 2 X OC48 Connection to US West Coast in 2003

  • RENATER (France): Connected to GEANT at OC48; CERN link to OC12

  • SuperJANET4 (UK): Mostly OC48 links, connected to academic MANs typically at OC48 (http://www.superjanet4.net)

  • US-CERN link (2 X OC3 Now) to OC12 this Spring; OC192 by ~2005; DataTAG research link OC48 Summer 2002; to OC192 in 2003-4.

  • SURFNet (NL) link to US at OC48

  • ESnet 2 X OC12 backbone, with OC12 to HEP labs; plans to connect to STARLIGHT using Gigabit Ethernet
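For reference, the OC-n / STM-n labels used throughout this list map to standard SONET/SDH line rates; the sketch below is a reference table, not content from the talk.

```python
# Standard SONET/SDH line rates behind the OC-n / STM-n labels above.
OC_RATES_MBPS = {
    "OC-3   / STM-1":   155.52,
    "OC-12  / STM-4":   622.08,
    "OC-48  / STM-16": 2488.32,   # ~2.5 Gbps
    "OC-192 / STM-64": 9953.28,   # ~10 Gbps
}

for name, mbps in OC_RATES_MBPS.items():
    print(f"{name:16s} {mbps:8.2f} Mbps  (~{mbps / 1000:.1f} Gbps)")
```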


Key Network Issues & Challenges

  • Net Infrastructure Requirements for High Throughput

    • Packet Loss must be ~Zero (well below 0.01%)

      • I.e. No “Commodity” networks

      • Need to track down uncongested packet loss

    • No Local infrastructure bottlenecks

      • Gigabit Ethernet “clear paths” between selected host pairs are needed now

      • To 10 Gbps Ethernet by ~2003 or 2004

    • TCP/IP stack configuration and tuning Absolutely Required

      • Large Windows; Possibly Multiple Streams

      • New Concepts of Fair Use Must then be Developed

    • Careful Router configuration; monitoring

      • Server and Client CPU, I/O and NIC throughput sufficient

    • End-to-end monitoring and tracking of performance

    • Close collaboration with local and “regional” network staffs

      TCP Does Not Scale to the 1-10 Gbps Range
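To make the "large windows" point concrete, the sketch below computes the bandwidth-delay product a single TCP connection must buffer to fill a long path; the 120 ms round-trip time is an assumed transatlantic value.

```python
# Window needed to keep a high-RTT path full: window >= bandwidth * RTT.
def window_bytes(target_bps: float, rtt_s: float) -> float:
    """Minimum TCP window (bytes) to sustain target_bps over a path with RTT rtt_s."""
    return target_bps * rtt_s / 8

RTT_S = 0.120  # assumed transatlantic round-trip time
for gbps in (0.155, 0.622, 2.5, 10):
    w = window_bytes(gbps * 1e9, RTT_S)
    print(f"{gbps:6.3f} Gbps over {RTT_S * 1000:.0f} ms RTT -> window >= {w / 2**20:6.1f} MiB")
# A default ~64 KB window falls short by orders of magnitude, hence window
# scaling, carefully tuned stacks, and possibly multiple parallel streams.
```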



2.5 Gbps Backbone (Internet2 Abilene)

201 Primary Participants: All 50 States, D.C. and Puerto Rico; 75 Partner Corporations and Non-Profits; 14 State Research and Education Nets; 15 “GigaPoPs” Support 70% of Members


Rapid Advances of Nat’l Backbones: Next Generation Abilene

  • Abilene partnership with Qwest extended through 2006

  • Backbone to be upgraded to 10-Gbps in phases, to be Completed by October 2003

    • GigaPoP Upgrade started in February 2002

  • Capability for flexible λ provisioning in support of future experimentation in optical networking

    • In a multi-λ infrastructure



National R&E Network Example, Germany: DFN TransAtlantic Connectivity Q1 2002

[Network diagram: transatlantic STM-4 and STM-16 links]

  • 2 X OC12 Now: NY-Hamburg and NY-Frankfurt

  • ESNet peering at 34 Mbps

  • Upgrade to 2 X OC48 expected in Q1 2002

  • Direct Peering to Abilene and Canarie expected

  • UCAID will add (?) another 2 OC48’s; Proposing a Global Terabit Research Network (GTRN)

  • FSU Connections via satellite: Yerevan, Minsk, Almaty, Baikal

    • Speeds of 32 - 512 kbps

  • SILK Project (2002): NATO funding

    • Links to Caucasus and Central Asia (8 Countries)

    • Currently 64-512 kbps

    • Propose VSAT for 10-50 X BW: NATO + State Funding



National Research Networks in Japan

  • SuperSINET

    • Started operation January 4, 2002

    • Support for 5 important areas:

      HEP, Genetics, Nano-Technology, Space/Astronomy, GRIDs

    • Provides 10 λ’s:

      • 10 Gbps IP connection

      • Direct intersite GbE links

      • Some connections to 10 GbE in JFY2002

  • HEPnet-J

    • Will be re-constructed with MPLS-VPN in SuperSINET

  • Proposal: Two TransPacific 2.5 Gbps Wavelengths, and KEK-CERN Grid Testbed

[SuperSINET map: OXC/WDM paths and IP routers at Tokyo, Nagoya and Osaka linking Tohoku U, KEK, NII Chiba, NIFS, Nagoya U, NIG, Osaka U, Kyoto U, NII, Hitot., ICR (Kyoto-U), ISAS, U Tokyo, IMS and NAO, with a connection to the Internet]


TeraGrid (www.teragrid.org): NCSA, ANL, SDSC, Caltech

A Preview of the Grid Hierarchy and Networks of the LHC Era

[TeraGrid map: NCSA/UIUC (Urbana), ANL, Univ of Chicago, UIC, Ill Inst of Tech, Starlight / NW Univ (Chicago), Indianapolis (Abilene NOC), Caltech (Pasadena) and SDSC (San Diego), joined by the DTF Backplane (4x, 40 Gbps) over multiple 10 GbE waves (Qwest) and multiple 10 GbE I-WIRE dark fiber, with OC-48 (2.5 Gb/s) Abilene connections and multiple carrier hubs]

  • Solid lines in place and/or available in 2001

  • Dashed I-WIRE lines planned for Summer 2002

Source: Charlie Catlett, Argonne


Throughput Quality Improvements: BW_TCP < MSS / (RTT * sqrt(loss)) [*]

80% Improvement/Year: a Factor of 10 in 4 Years

Eastern Europe Keeping Up

China Recent Improvement

[*] See “The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm,” Mathis, Semke, Mahdavi, Ott, Computer Communication Review 27(3), 7/1997
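The bound in the title can be inverted to show the loss rates a single standard TCP stream can tolerate at these speeds; the MSS and RTT below are assumed typical values, not numbers from the slide.

```python
# Invert BW_TCP < MSS / (RTT * sqrt(loss)) to get the tolerable loss rate.
MSS_BYTES = 1460   # typical Ethernet-path MSS (assumed)
RTT_S = 0.120      # assumed transatlantic round-trip time

def max_tolerable_loss(target_bps: float) -> float:
    """Loss rate at which BW_TCP = MSS/(RTT*sqrt(loss)) just equals target_bps."""
    return (MSS_BYTES * 8 / (RTT_S * target_bps)) ** 2

for gbps in (0.155, 0.622, 2.5, 10):
    print(f"{gbps:6.3f} Gbps needs packet loss < {max_tolerable_loss(gbps * 1e9):.1e}")
# At 2.5-10 Gbps the tolerable loss falls to the 1e-9 - 1e-10 range, which is
# why loss "must be ~zero" and why standard TCP does not scale to these rates.
```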


Internet2 HENP WG [*]

  • Mission: To help ensure that the required

    • National and international network infrastructures(end-to-end)

    • Standardized tools and facilities for high performance and end-to-end monitoring and tracking, and

    • Collaborative systems

  • are developed and deployed in a timely manner, and used effectively to meet the needs of the US LHC and other major HENP Programs, as well as the at-large scientific community.

    • To carry out these developments in a way that is broadly applicable across many fields

  • Formed an Internet2 WG as a suitable framework: Oct. 26 2001

  • [*] Co-Chairs: S. McKee (Michigan), H. Newman (Caltech); Sec’y J. Williams (Indiana)

  • Website: http://www.internet2.edu/henp; also see the Internet2End-to-end Initiative: http://www.internet2.edu/e2e


True End to End Experience

  • User perception

  • Application

  • Operating system

  • Host IP stack

  • Host network card

  • Local Area Network

  • Campus backbone network

  • Campus link to regional network/GigaPoP

  • GigaPoP link to Internet2 national backbones

  • International connections

[E2E chain: EYEBALL → APPLICATION → STACK → JACK → NETWORK → . . .]
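As a toy illustration of the chain above, the throughput a user actually sees is capped by the slowest segment along the path; every component name and rate in this sketch is an assumption for illustration only.

```python
# Toy model: the user-perceived ("eyeball") throughput is bounded by the
# slowest element in the end-to-end chain. All names and rates are assumed.
segment_mbps = {
    "host NIC":               1000,
    "campus LAN":              100,   # e.g. a single Fast Ethernet hop
    "campus backbone":        1000,
    "link to GigaPoP":         622,
    "GigaPoP to Internet2":   2500,
    "international link":      622,
}

bottleneck = min(segment_mbps, key=segment_mbps.get)
print(f"End-to-end ceiling: {segment_mbps[bottleneck]} Mbps, set by the {bottleneck}")
# Monitoring every segment end-to-end is what makes such bottlenecks visible.
```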


Networks, Grids and HENP

  • Grids are starting to change the way we do science and engineering

    • Successful use of Grids relies on high performance national and international networks

  • Next generation 10 Gbps network backbones are almost here: in the US, Europe and Japan

    • First stages arriving in 6-12 months

  • Major transoceanic links at 2.5 - 10 Gbps within 0-18 months

  • Network improvements are especially needed in South America; and some other world regions

    • BRAZIL, Chile; India, Pakistan, China; Africa and elsewhere

  • Removing regional and last-mile bottlenecks, and compromises in network quality, is now on the critical path

  • Getting high (reliable) Grid performance across networks means:

    • End-to-end monitoring; a coherent approach

    • Getting high performance (TCP) toolkits in users’ hands

    • Working in concert with Internet E2E, I2 HENP WG, DataTAG; Working with the Grid projects and GGF

