
Networks, Grids and the Digital Divide in HEP and Global e-Science




  1. Networks, Grids and the Digital Divide in HEP and Global e-Science. Harvey B. Newman. ICFA Workshop on HEP Networking, Grids, and Digital Divide Issues for Global e-Science, Daegu, May 23, 2005

  2. Large Hadron Collider, CERN, Geneva: 2007 Start • pp collisions at √s = 14 TeV, L = 10^34 cm^-2 s^-1 • 27 km Tunnel in Switzerland & France • Experiments: ATLAS and CMS (pp, general purpose; also heavy ions), ALICE (heavy ions), LHCb (B-physics), TOTEM • 5000+ Physicists, 250+ Institutes, 60+ Countries • Physics: Higgs, SUSY, Extra Dimensions, CP Violation, QG Plasma, … the Unexpected

  3. LHC Data Grid Hierarchy: Developed at Caltech • CERN/Outside Resource Ratio ~1:2; Tier0/(ΣTier1)/(ΣTier2) ~1:1:1 • Online System: ~PByte/sec from the experiment; ~150-1500 MBytes/sec to the CERN Center (Tier 0+1: PBs of disk; tape robot) • Tier 1 centers (FNAL, IN2P3, INFN, RAL): 10-40 Gbps links • Tier 2 centers: ~10 Gbps • Tier 3 (institutes): ~1-10 Gbps, with physics data caches • Tier 4 (workstations): 1 to 10 Gbps • Tens of Petabytes by 2007-8, at ~100 Sites; an Exabyte ~5-7 years later • Emerging Vision: A Richly Structured, Global Dynamic System
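The quoted rates translate directly into yearly data volumes. A back-of-the-envelope sketch in Python (the 50% duty cycle is an illustrative assumption, not a figure from the slide):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # ~3.15e7 s

def yearly_volume_pb(rate_mbytes_per_sec, duty_cycle=0.5):
    """Petabytes accumulated per year at a sustained rate.

    duty_cycle is a hypothetical live-fraction for the machine,
    not a number taken from the slide.
    """
    bytes_per_year = rate_mbytes_per_sec * 1e6 * SECONDS_PER_YEAR * duty_cycle
    return bytes_per_year / 1e15

# At the slide's 150-1500 MByte/sec range into the Tier 0 center:
low = yearly_volume_pb(150)    # ~2.4 PB/year
high = yearly_volume_pb(1500)  # ~24 PB/year
```

At the top of the quoted range the accumulation is on the order of 20 PB/year, consistent with the slide's "Tens of Petabytes by 2007-8" once replicas at the ~100 sites are included.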

  4. ICFA and Global Networks for Collaborative Science • National and International Networks, with rapidly increasing capacity and end-to-end capability, are essential for: • The daily conduct of collaborative work in both experiment and theory • Experiment development & construction on a global scale • Grid systems supporting analysis involving physicists in all world regions • The conception, design and implementation of next generation facilities as “global networks” • “Collaborations on this scale would never have been attempted, if they could not rely on excellent networks”

  5. Challenges of Next Generation Science in the Information Age • Flagship Applications • High Energy & Nuclear Physics, AstroPhysics Sky Surveys: TByte to PByte “block” transfers at 1-10+ Gbps • Fusion Energy: Time Critical Burst-Data Distribution; Distributed Plasma Simulations, Visualization, Analysis • eVLBI: Many (quasi-) real time data streams at 1-10 Gbps • BioInformatics, Clinical Imaging: GByte images on demand • Advanced integrated Grid applications rely on reliable, high performance operation of our LANs and WANs • Analysis Challenge: Provide results with rapid turnaround, over networks of varying capability in different world regions Petabytes of complex data explored and analyzed by 100s-1000s of globally dispersed scientists, in 10s-100s of teams
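To make the scale of those "block" transfers concrete, here is a minimal sketch (decimal units, ignoring protocol overhead, which the later Land Speed Record slides show is significant) of how long a terabyte takes at the quoted line rates:

```python
def transfer_time_s(num_bytes, gbps):
    """Seconds to move num_bytes at a line rate of gbps
    (decimal units; protocol overhead ignored)."""
    return num_bytes * 8 / (gbps * 1e9)

one_tb = 1e12
t_1g = transfer_time_s(one_tb, 1)    # 8000 s, over two hours
t_10g = transfer_time_s(one_tb, 10)  # 800 s, about 13 minutes
```

A petabyte-scale "block" at 10 Gbps would still take over a week of sustained transfer, which is why the analysis challenge depends on reliable high-performance WAN operation.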

  6. Huygens Space Probe Lands on Titan - Monitored by 17 telescopes in Au, Jp, CN, US • In October 1997, the Cassini spacecraft left Earth to travel to Saturn • On Christmas day 2004, the Huygens probe separated from Cassini • On 14 January 2005 it started its descent through the dense (methane, nitrogen) atmosphere of Titan (speculated to be similar to that of Earth billions of years ago)  • The signals sent back from Huygens to Cassini were monitored by 17 telescopes in Australia, China, Japan and the US to accurately position the probe to within a kilometre (Titan is ~1.5 billion kilometres from Earth) Courtesy G. McLaughlin

  7. Australian eVLBI data sent over high speed links to the Netherlands • The data from two of the Australian telescopes were transferred to the Netherlands over the SXTransport and IEEAF links, and CA*net4 using UCLP, and were the first to be received by JIVE (Joint Institute for VLBI in Europe), the correlator site • The data was transferred at an average rate of 400Mbps (note 1Gbps was available) • The data from these two telescopes were reformatted and correlated within hours of the end of the landing • This early correlation allowed calibration of the data processor at JIVE, ready for the data from other telescopes to be added • Significant int’l collaborative effort: 9 Organizations G. McLaughlin, D. Riley

  8. ICFA Standing Committee on Interregional Connectivity (SCIC) • Created in July 1998 in Vancouver, following the ICFA-NTF. CHARGE: • Make recommendations to ICFA concerning the connectivity between the Americas, Asia and Europe • As part of the process of developing these recommendations, the committee should: • Monitor traffic on the world’s networks • Keep track of technology developments • Periodically review forecasts of future bandwidth needs, and • Provide early warning of potential problems • Create subcommittees as needed to meet the charge • Representatives: Major labs, ECFA, ACFA, North and South American Users • Chair of the committee reports to ICFA twice per year

  9. SCIC in 2004-2005 (http://cern.ch/icfa-scic) Three 2005 Reports, Presented to ICFA Today • Main Report: “Networking for HENP” [H. Newman et al.]: Includes Updates on the Digital Divide and World Network Status; Brief updates on Monitoring and Advanced Technologies • 18 Appendices: A World Network Overview: Status and Plans for the Next Few Years of Nat’l & Regional Networks, and Optical Network Initiatives • Monitoring Working Group Report [L. Cottrell] Also See: • SCIC Digital Divide Report [A. Santoro et al.] • SCIC 2004 Digital Divide in Russia Report [V. Ilyin] • TERENA (www.terena.nl) 2004 Compendium

  10. SCIC Main Conclusion for 2002-5 • The disparity among regions in HENP could increase even more sharply, as we learn to use advanced networks effectively, and we develop dynamic Grid systems in the “most favored” regions • We must therefore take action, and work to Close the Digital Divide • To make Scientists in All World Regions Full Partners in the Process of Frontier Discoveries • This is essential for the health of our global experimental collaborations, for our field, and for international collaboration in many fields of science.

  11. HEPGRID and Digital Divide Workshop, UERJ, Rio de Janeiro, Feb. 16-20, 2004. Theme: Global Collaborations, Grids and Their Relationship to the Digital Divide. For the past three years the SCIC has focused on understanding and seeking the means of reducing or eliminating the Digital Divide. It proposed to ICFA that these issues, as they affect our field, be brought to our community for discussion. This led to ICFA’s approval, in July 2003, of the Digital Divide and HEP Grid Workshop. • Review of R&E Networks; Major Grid Projects • Perspectives on Digital Divide Issues by Major HEP Experiments, Regional Representatives • Focus on Digital Divide Issues in Latin America; Relate to Problems in Other Regions • Tutorials: C++, Grid Technologies, Grid-Enabled Analysis, Networks, Collaborative Systems • See http://www.lishep.uerj.br A. Santoro. Sponsors: CLAF, CNPQ, FAPERJ, UERJ

  12. International ICFA Workshop on HEP Networking, Grids, and Digital Divide Issues for Global e-Science May 23-27, 2005 Daegu, Korea Dongchul Son Center for High Energy Physics Harvey Newman California Institute of Technology • Focus on Asia-Pacific • Also Latin America, Middle East, Africa Approved by ICFA August 2004

  13. International ICFA Workshop on HEP Networking, Grids and Digital Divide Issues for Global e-Science • Workshop Goals • Review the current status, progress and barriers to effective use of major national, continental and transoceanic networks • Review progress, strengthen opportunities for collaboration, and explore the means to deal with key issues in Grid computing and Grid-enabled data analysis, for high energy physics and other fields of data intensive science • Exchange information and ideas, and formulate plans to develop solutions to specific problems related to the Digital Divide, with a focus on the Asia Pacific region, as well as Latin America, Russia and Africa • Continue to advance a broad program of work on reducing or eliminating the Digital Divide, and ensuring global collaboration, as related to all of the above aspects.

  14. PingER: World View from SLAC, CERN • C. Asia, Russia, SE Europe, L. America, M. East, China: 4-7 yrs behind • India, Africa: 7-8 yrs behind • S.E. Europe, Russia: Catching up • Latin Am., China: Keeping up • India, Mid-East, Africa: Falling Behind R. Cottrell

  15. Connectivity to Africa • Internet Access: More than an order of magnitude lower than the corresponding percentages in Europe (33%) & N. America (70%) • Digital Divide: Lack of infrastructure, especially in the interior; high prices (e.g. $4-10/kbps/mo.); “gray” satellite bandwidth market • Initiatives: EUMEDCONNECT (EU-North Africa); GEANT: 155 Mbps to S. Africa; Nectarnet (Ga. Tech); IEEAF/I2 NSF-Sponsored Initiative

  16. Bandwidth prices in Africa vary dramatically, and are in general many times what they could be if universities purchased in volume • Sample Bandwidth Costs for African Universities: Avg. Unit Cost is 40X the US Avg.; Cost is Several Hundred Times That in Leading Countries • Sample size of 26 universities; Average Cost for VSAT service: Quality, CIR, Rx, Tx not distinguished Roy Steiner, Internet2 2004 Workshop
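The pricing problem is easy to quantify. A small illustrative calculation, using the $4-10 per kbps per month satellite prices quoted on the previous slide (the 512 kbps link size is a hypothetical example, not a figure from the talk):

```python
def monthly_cost_usd(kbps, usd_per_kbps_month):
    """Monthly cost of a link under flat per-kbps pricing."""
    return kbps * usd_per_kbps_month

# Even a modest 512 kbps university link is expensive at
# the cited $4-10/kbps/month satellite prices:
low = monthly_cost_usd(512, 4)    # $2,048/month
high = monthly_cost_usd(512, 10)  # $5,120/month
```

At those unit prices, tens of thousands of dollars per year buy less capacity than a single consumer broadband line in a leading country, which is the 40X-to-several-hundred-X disparity the slide describes.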

  17. Asia Pacific Academic Network Connectivity: APAN Status 7/2004 [Map: current and planned (2004) capacities of links among access and exchange points across the region, from 2 Mbps to 20.9 Gbps; e.g. the JP-TH link went from 2 Mbps to 45 Mbps in 2004] • Connectivity to the US from JP, KR, AU is Advancing Rapidly; Progress within the Region, and to Europe, is Much Slower • Better North/South Linkages within Asia Needed D. Y. Kim

  18. Some APAN Links G. McLaughlin

  19. Digital Divide Illustrated by Network Infrastructures: TERENA NREN Core Capacity • Core capacity goes up in large steps over two years: 10 to 20 Gbps; 2.5 to 10 Gbps; 0.6-1 to 2.5 Gbps • SE Europe, Mediterranean, FSU, Middle East: Less progress, based on older technologies (below 0.15-1.0 Gbps): the Digital Divide will not be closed Source: TERENA

  20. Long Term Trends in Network Traffic Volumes: 300-1000X/10Yrs • ESnet Accepted Traffic 1990-2004: Exponential growth since ’92; annual growth factor increased from 1.7X to 2.0X per year in the last 5 years [W. Johnston] • SLAC Traffic ~400 Mbps; Growth in Steps (ESnet Limit): ~10X/4 Years; Projected ~2 Terabits/s by ~2014 [L. Cottrell] • July 2005: 2 x 10 Gbps links: one for production and one for research • FNAL: 10 to 20 (+40) Gbps by Fall 2005
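The "300-1000X per decade" headline is just the annual growth factor compounded. A minimal sketch:

```python
def project_traffic(base, annual_factor, years):
    """Extrapolate a traffic volume forward at a fixed annual growth factor."""
    return base * annual_factor ** years

# At the 2.0x/year rate seen in the last 5 years, a decade
# of growth compounds to roughly 1000x:
decade_growth = project_traffic(1.0, 2.0, 10)  # 1024.0
# The earlier 1.7x/year rate compounds to only ~200x per decade.
```

Sustained annual factors between roughly 1.8x and 2.0x reproduce the 300-1000X decade range quoted in the slide title.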

  21. CANARIE (Canada) Utilization Trends and UCLPv2 • “Demand for Customer Empowered Nets (CENs) is exceeding our wildest expectations • New version of UCLP will allow easier integration of CENs into E2E nets for specific communities &/or disciplines • UCLPv2 will be based on SOA, web services and workflow to allow easy integration into cyber-infrastructure projects • The Network is no longer a static fixed facility – but can be ‘orchestrated’ with different topologies, routing etc to meet specific needs of end users” [Chart: traffic in Gbps approaching the network capacity limit, Jan 05] W. St. Arnaud

  22. National Lambda Rail (NLR): www.nlr.net • Transition beginning now to optical, multi-wavelength, community-owned or leased “dark fiber” networks for R&E • Initially 4-8 10G Wavelengths; to 40 10G Waves in Future • UltraLight, Internet2 HOPI, Cisco Research & UltraScience Net Initiatives w/HEP • Atlantic & Pacific Wave • Initiatives in: nl, ca, jp, uk, kr; pl, cz, sk, pt, ei, gr, sb/mn … + 30 US States (Ca, Il, Fl, In, …)

  23. GEANT2 Hybrid Architecture Global Connectivity • 10 Gbps + 3x2.5 Gbps to North America • 2.5 Gbps to Japan • 622 Mbps to South America • 45 Mbps to Mediterranean countries • 155 Mbps to South Africa Will be Improved in GEANT2 • Cooperation of 26 NRENs • Implementation on dark fiber, IRU Asset, Transmission & Switching Equipment • Layer 1 & 2 switching, “the Light Path” • Point to Point (E2E) Wavelength services • LHC in Europe: N X 10G T0-T1 Overlay Net H. Doebbling

  24. SXTransport: Au-US 2 X 10G AARNet has dual 10Gbps circuits to the US via Hawaii, dual 622Mbps commodity links G. McLaughlin

  25. JGN2: Japan Gigabit Network (4/04 – 3/08): 20 Gbps Backbone, 6 Optical Cross-Connects [Map: core network nodes (Sapporo, Sendai, Otemachi, Nagoya, Osaka, Okayama, Fukuoka, Okinawa, NICT centers), optical testbeds, and <10G>, <1G> and <100M> access points at universities, prefectural centers and NICT laboratories nationwide, plus a link to the USA] • Connection services at the optical level: 1 GbE and 10 GbE • Optical testbeds: e.g. GMPLS interoperability tests Y. Karita

  26. APAN-KR: KREONET/KREONet2 • KREONET: 11 Regions, 12 POP Centers; Optical 2.5-10G Backbone; SONET/SDH, POS, ATM; National IX Connection • KREONET2: Support for Next Gen. Apps: IPv6, QoS, Multicast; Bandwidth Allocation Services; StarLight/Abilene Connection • International Links: GLORIAD link to 10G to Seattle Aug. 1 (MOST); US: 2 X 622 Mbps via CA*net; GbE via TransPAC; Japan: 2 Gbps; TEIN to GEANT: 155 Mbps • SuperSIREN (7 Research Institutes): Optical 10-40G Backbone; Collaborative Environment Support; High Speed Wireless: 1.25 G D. Son

  27. The Global Lambda Integrated Facility for Research and Education (GLIF) • Architecting an international LambdaGrid infrastructure • Virtual organization supports persistent data-intensive scientific research and middleware development on “LambdaGrids” Many 2.5 - 10G Links Across the Atlantic and PacificPeerings: Pacific & Atlantic Wave; Seattle, LA, Chicago, NYC, HK

  28. Internet2 Land Speed Record (LSR) • Product of transfer speed and distance using standard Internet (TCP/IP) protocols • Single Stream 7.5 Gbps X 16 kkm with Linux: July 2004 • IPv4 Multi-stream record with FAST TCP: 6.86 Gbps X 27 kkm: Nov 2004 • IPv6 record: 5.11 Gbps between Geneva and Starlight: Jan. 2005 • Concentrate now on reliable Terabyte-scale file transfers • Disk-to-disk Marks: 536 MBytes/sec (Windows); 500 MBytes/sec (Linux) • Note System Issues: PCI-X Bus, Network Interface, Disk I/O Controllers, CPU, Drivers • [Chart: Internet2 LSRs in Throughput (Petabit-m/sec); blue = HEP; Nov. 2004 record network: 7.2G X 20.7 kkm] • NB: Computing manufacturers’ roadmaps for 2006: one server pair to one 10G link S. Ravot
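The LSR metric itself is a straightforward product. A small sketch computing the Nov. 2004 IPv4 multi-stream mark from the figures on this slide:

```python
def lsr_petabit_meters_per_s(gbps, kkm):
    """Internet2 LSR metric: throughput times distance, in
    petabit-meters per second (1 kkm = 1000 km = 1e6 m)."""
    return gbps * 1e9 * kkm * 1e6 / 1e15

# Nov. 2004 IPv4 multi-stream record with FAST TCP:
mark = lsr_petabit_meters_per_s(6.86, 27)  # ~185 petabit-m/s
```

The July 2004 single-stream mark (7.5 Gbps X 16 kkm) works out to ~120 petabit-m/s on the same metric.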

  29. SC2004 Bandwidth Record by HEP: High Speed TeraByte Transfers for Physics • Caltech, CERN SLAC, FNAL, UFl, FIU, ESNet, UK, Brazil, Korea;NLR, Abilene, LHCNet, TeraGrid;DOE, NSF, EU, …; Cisco, Neterion, HP, NewiSys, … • Ten 10G Waves, 80 10GbE Ports, 50 10GbE NICs • Aggregate Rate of 101 Gbps • 1.6 Gbps to/from Korea • 2.93 Gbps to/from Brazil UERJ, USP Monitoring NLR, Abilene, LHCNet, SCINet, UERJ, USP, Int’l R&E Nets and 9000+ Grid Nodes Simultaneously I. Legrand

  30. SC2004 KNU Traffic: 1.6 Gbps to/From Pittsburgh Via Transpac (LA) and NLR Monitoring in Daegu Courtesy K. Kwon

  31. SC2004: 2.93 (1.95 + 0.98) Gbps Sao Paulo – Miami – Pittsburgh (via Abilene) • Brazilian T2+T3 HEPGrid: Rio + Sao Paulo • Also 500 Mbps via Red CLARA and GEANT/SURFNet (Madrid) J. Ibarra

  32. HENP Bandwidth Roadmap for Major Links (in Gbps) Continuing Trend: ~1000 Times Bandwidth Growth Per Decade;HEP: Co-Developer as well as Application Driver of Global Nets

  33. Evolving Quantitative Science Requirements for Networks (DOE High Perf. Network Workshop) W. Johnston See http://www.doecollaboratory.org/meetings/hpnpw/

  34. LHCNet, ESnet Plan 2007/2008: 40 Gbps US-CERN, ESnet MANs, IRNC • LHCNet US-CERN: 9/05: 10G CHI + 10G NY; 2007: 20G + 20G; 2009: ~40G + 40G • LHCNet Data Network: 4 x 10 Gbps US-CERN; NSF/IRNC circuit; GVA-AMS connection via SURFnet or GEANT2 • ESnet: Production IP core at 10 Gbps; 2nd (Science Data Network) core at 30-50 Gbps circuit transport; Metropolitan Area Rings; high-speed cross-connects with Internet2/Abilene [Map: ESnet hubs (SEA, SNV, CHI, NYC, DEN, ALB, SDG, ATL, ELP, DC) and major DOE Office of Science sites (FNAL, BNL, IN2P3, CERN), with links to Asia-Pacific, Europe (GEANT2, SURFNet), Australia and Japan] S. Ravot

  35. We Need to Work on the Digital Divide from Several Perspectives • Workshops and Tutorials/Training Sessions • For Example: ICFA DD Workshops, Rio 2/04; HONET (Pakistan) 12/04; Daegu May 2005 • Share Information: Monitoring, BW Progress; Dark Fiber Projects; Prices in different markets • Use Model Cases: Poland, Slovakia, Czech Rep., China, Brazil, … • Encourage, and Work on, Inter-Regional Projects • GLORIAD, Russia-China-Korea-US Optical Ring • Latin America: CHEPREO/WHREN (US-Brazil); RedCLARA • Help with Modernizing the Infrastructure: Design, Commissioning, Development • Provide Tools for Effective Use: Monitoring, Collaboration Systems; Advanced TCP Stacks, Grid System Software • Work on Policies and/or Pricing: pk, br, cn, SE Europe, in, … • Encourage Access to Dark Fiber • Raise World Awareness of Problems, & Opportunities for Solutions

  36. UERJ T2 HEPGRID Inauguration, Dec. 2004: The Team (Santoro et al.) • 100 Dual Nodes; Upgrades Planned • Also Tier3 in Sao Paulo (Novaes) • UERJ Tier2 Now On Grid3 and Open Science Grid (5/13)

  37. Grid3, the Open Science Grid and DISUN • Grid3: A National Grid Infrastructure: 35 sites, 3500 CPUs: Universities + 4 National labs • Part of the LHC Grid; Running since October 2003 • HEP, LIGO, SDSS, Biology, Computer Science; + Brazil (UERJ, USP) • Transition to Open Science Grid (www.opensciencegrid.org) • 7 US CMS Tier2s; Caltech, Florida, UCSD, UWisc form DISUN P. Avery

  38. Science-Driven: HEPGRID (CMS) in Brazil • ICFA DD Workshop 2/04; T2 Inauguration + GIGA/RNP Agreement 12/04 • HEPGRID-CMS/BRAZIL is a project to build a Grid that: • At Regional Level will include CBPF, UFRJ, UFRGS, UFBA, UERJ & UNESP • At International Level will be integrated with the CMS Grid based at CERN; focal points include Grid3/OSG and bilateral projects with the Caltech Group [Diagram: UERJ Regional Tier2 Center linked at Gigabit speeds to T3s (CBPF, UFRJ, UFBA, UFRGS) and the working T2 at UNESP/USP (SPRACE), with 622 Mbps and 2.5-10 Gbps links to the CERN T0+T1 and T1s in the USA, Italy, France and Germany; UERJ: T2→T1, 100→500 Nodes; plus T2s to 100 Nodes]

  39. Rio Tier2 – SPRACE (Sao Paulo) – AMPATH Direct Link at 1 Gbps [Diagram: UERJ T2 and SPRACE connected over 1 Gbps Giga fibers via CC-USP, fibers leased to ANSP (Iqara, Eletropaulo), the NAP of Brazil (Terremark) and AMPATH, using ANSP routers and Caltech/Cisco routers] L. Lopez

  40. Highest Bandwidth Link in NRENs’ Infrastructure, EU & EFTA Countries, & Dark Fiber [Chart: per-country highest link capacity, 0.01G to 10.0G] • Owning (or leasing) dark fiber is an interesting option for an NREN; depends on the national situation • NRENs that own dark fiber can decide for themselves which technology and what speeds to use on it Source: TERENA

  41. Europe: Core Network Bandwidth Increase for Years 2001-2004 and 2004-2006 • Countries With No Increase Already Had 1-2.5G Backbone in 2001 • These are all going to 10G backbones by 2006-7 • Countries Showing the Largest Increase Are: • PIONIER (Poland) from 155 Mbps to 10 Gbps capacity (64X) • SANET (Slovakia) from 4 Mbps to 1 Gbps (250X). Source: TERENA

  42. Cross-Border Dark Fiber: 120 km CBDF costs 4k per month; 1 GE now, 10G planned • 1660 km of Dark Fiber CWDM Links, 1 to 4 Gbps (GbE) • August 2002: Dark Fiber Link to Austria • April 2003: Dark Fiber Link to Czech Republic • 2004: Dark Fiber Link to Poland • Planning 10 Gbps Backbone • >250X growth: 2002-2005 T. Weis

  43. Dark Fiber in Eastern Europe. Poland: PIONIER (10-20G) Network • 2763 km of Lit Fiber Connects 22 MANs; +1286 km (9/05) + 1159 km (4Q/06) [Map: 10 Gb/s (2 lambdas) and 1 Gb/s links among Metropolitan Area Networks in Gdańsk, Poznań, Warszawa, Kraków and other cities; GEANT peering; BASNET at 34 Mb/s; links toward CESNET and SANET (4Q05 Plan)] • Vision: Support Computational Grids; Domain-Specific Grids; Digital Libraries; Interactive TV • Additional Fibers for e-Regional Initiatives Courtesy M. Przybylski

  44. PIONIER Cross-Border Dark Fiber Plan: Locations; Single GEANT PoP in Poznan

  45. CESNET2 (Czech Republic) Network Topology, Dec. 2004 • 2500+ km Leased Fibers (Since 1999) • 2005: 10GE Link Praha-Brno (300 km) in Service; Plan to go to 4 X 10G and higher as needed; More 10GE links planned J. Gruntorad

  46. APAN China Consortium • Established in 1999. The China Education and Research Network (CERNET) and the China Science and Technology Network (CSTNET) are the main advanced networks [Diagram: CERNET linked to CSTNET at 2.5 Gbps] • CERNET 2000: Own dark fiber crossing 30+ major cities and 30,000 kilometers • 2003: 1300+ universities and institutes, over 15 million users • CERNET2: Next Generation R&E Net: Backbone connects 15-20 Giga-POPs at 2.5-10 Gbps (I2-like); Connects 200 Universities and 100+ Research Institutes at 1-10 Gbps; Native IPv6 and Lambda Networking • From 6 to 78 Million Internet Users in China from Jan. – July 2004 J. P. Wu, H. Chen

  47. Brazil (RNP2): Rapid Backbone Progress and the GIGA Project • RNP Connects the regional networks in all 26 states of Brazil • Backbone on major links to 155 Mbps; 622 Mbps Rio – Sao Paulo • 2.5G to 10G Core in 2005 (300X Improvement in 2 Years) • RNP & GIGA: Extend GIGA to the Northeast, with 4000 km of dark fiber by 2008 • The GIGA Project: Dark Fiber experimental network: 700 km of fiber, 7 cities and 20 institutions in Sao Paulo and Rio • GbE to Rio Tier-2, Sao Paulo Tier-3 L. Lopez

  48. DFN (Germany): X-WiN Fiber Network, 13.04.2005 [Map: X-WiN core nodes (HAM, BRE, HAN, POT, FRA, ERL, …), showing existing fiber and fiber from providers such as KPN] • Most of the X-WiN core will be a fibre network (see map); the rest will be provided by wavelengths • Several fibre and wavelength providers • Fibre is relatively cheap – in most cases more economic than (one) wavelength • X-WiN creates many new options besides being cheaper than the current G-WiN core K. Schauerhammer

  49. RoEduNet (Romania), January 2005 • Inter-City Links were 2 to 6 Mbps in 2002; Improved to 155 Mbps in 2003-2004 • GEANT-Bucharest Link: 155 to 622 Mbps • Plan: 3-4 Centers at 2.5 Gbps; Dark Fiber Inter-City Backbone N. Tapus • Compare Pakistan: 56 universities share 155 Mbps internationally T. Ul Haq
