
PIONIER - Polish Optical Internet: The eScience Enabler



Presentation Transcript


  1. PIONIER - Polish Optical Internet: The eScience Enabler Jarek Nabrzyski Poznan Supercomputing and Networking Center naber@man.poznan.pl Digital Divide and HEPGRID Workshop

  2. Poland • Population: 38 million • Area: 312 000 km² • Part of the EU since 01.05.2004 • Temperature today: 0 °C

  3. PIONIER - an idea of „All Optical Network”, facts: • 4Q1999 – program proposal submitted to KBN • 2Q2000 – PIONIER testbed (DWDM, TNC 2001) • 3Q2000 – project accepted (tender for co-operation, negotiations with telcos) • 4Q2001 – Phase I: ~10 mln Euro • contracts with Telbank and Szeptel (1434 km) • 4Q2002 – Phase II: ~18.5 mln Euro • contracts with Telbank and regional power grid companies (1214 km) • contract for equipment: 10GE & DWDM and IP router • 2H2003 – installation of 10GE with DWDM repeaters/amplifiers • 16 MANs connected and 2648 km of fibers installed • contracts with partners (Telbank and HAVE) (1426 km): Phase I ~5 mln Euro • 2004/2005 – 21 MANs connected with 5200 km of fiber

  4. PIONIER - fibers deployment, 1Q2004 (map). Legend: installed fiber, PIONIER nodes, fibers started in 2003, fibers planned in 2004/2005, PIONIER nodes planned in 2004/2005.

  5. How we build fibers • Co-investment with telco operators or self-investment (with right of way: power distribution, railways and public roads) • Average of 16 fibers available (4xG.652 for national backbone, 8xG.652 for regional use, 4xG.655 for long-haul transmission) (2001-2002) • 2 pipes and one cable with 24 fibers available (2003) • Average span length 60 km for national backbone (regeneration possible) • Local loop construction is sometimes difficult (urban areas - average 6 months waiting time for permissions) • Found on time ...

  6. Link optimization • a side effect of an urgent demand for a DCM module ;-) • replacement of G.652 fiber with G.655 (NZDS) fiber • similar cost of G.652 and G.655 fiber • cost reduction via: • lower number of amplifiers/regenerators, • lower number of DCMs • But: • optimization is valid for selected cases only and is wavelength/waveset/link dependent.

  7. Link cost before optimization: approximately 140 kEuro; after optimization: approximately 90 kEuro - cost savings (equipment only): 35%
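The quoted saving can be checked directly from the two equipment figures, assuming both describe the same link before and after optimization:

\[
\frac{140 - 90}{140} \approx 0.357,
\]

i.e. roughly the quoted 35% saving.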

  8. Community demands as a driving force • Academic Internet • international connections: • GEANT 10 Gb/s • TELIA 2 Gb/s • GTS/SPRINT 2 Gb/s • national connections between MANs (10 Gb/s, 622 Mb/s leased lambda) • near future – n x 10 Gb/s • High Performance Computing Centers (FC, GE, 10GE) • Project PROGRESS „Access environment to computational services performed by cluster of SUNs” – SUN cluster (3 sites x 1 Gb/s), results presented at SC2002 and SC2003 • Project SGI „HPC/HPV in Virtual Laboratory on SGI clusters” – SGI cluster (6 sites x 1 Gb/s) • Project CLUSTERIX „National CLUSTER of LInuX Systems” (12 sites x 1 Gb/s) • Project in preparation: National Data Storage system (5 sites x 1 Gb/s)

  9. Community demands as a driving force • Dedicated capacity for European projects • ATRIUM (622 Mb/s) • 6NET (155-622 Mb/s) • VLBI (2 x 1 Gb/s dedicated) • CERN-ATLAS (>1 Gb/s dedicated per site) • near future – FP6 IST projects

  10. Intermediate stage - 10GE over fiber (map): 10 Gb/s links on PIONIER's own fibers connecting the Metropolitan Area Networks (Gdańsk, Koszalin, Olsztyn, Szczecin, Białystok, Bydgoszcz, Toruń, Poznań, Zielona Góra, Warszawa, Łódź, Radom, Wrocław, Częstochowa, Puławy, Kielce, Opole, Lublin, Katowice, Kraków, Rzeszów, Bielsko-Biała), leased channels of 622 Mb/s and 155 Mb/s, and a 10 Gb/s connection to GÉANT.

  11. PIONIER - the economy behind Cost reduction via: • simplified network architecture: IP / ATM / SDH / DWDM → IP / GE / DWDM • lower investment, lower depreciation: ATM / SDH → GE • simplified management

  12. PIONIER - the economy behind... Cost relation (connections between 21 MANs, per year): • 622 Mb/s channels from telco (real cost): 4.8 MEuro • 2.5 Gb/s channels from telco (estimate): 9.6 MEuro • 10 Gb/s channels from telco (estimate): 19.2 MEuro • PIONIER costs (5200 km of fibers, 10GE): 55.0 MEuro • Annual PIONIER maintenance costs: 2.1 MEuro • Return on investment in 3 years! (calculations made for only 1 lambda used)
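The three-year figure follows from the numbers above. A rough break-even estimate, assuming a single 10 Gb/s lambda and constant annual costs, compares the fiber investment plus maintenance against leased 10 Gb/s channels:

\[
55.0 + 2.1\,n \;\le\; 19.2\,n
\quad\Longrightarrow\quad
n \;\ge\; \frac{55.0}{19.2 - 2.1} \approx 3.2 \text{ years}.
\]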

  13. PIONIER – e-Region • Two e-Regions already defined: • Cottbus – Zielona Góra (D-PL) • Ostrava – Bielsko-Biała (CZ-PL) • e-Region objectives: • Creating a rational basis and the possibility of integrated cross-border work between institutions, as defined by e-Europe: (...) education, medicine, natural disasters, information bases, protection of the environment. • Enhancing the ability to co-operate by developing a new generation of services and applications. • Promoting the region in Europe (as a micro-scale instance of the e-Europe concept)

  14. PIONIER – „Porta Optica” • „PORTA OPTICA” - a distributed optical gateway to the eastern neighbours of Poland (project proposal) • A chance for close cooperation in scientific projects, by means of providing multichannel/multilambda Internet connections to the neighbouring countries. • An easy way to extend GEANT to Eastern European countries

  15. PIONIER – cooperation with neighbours (map of neighbouring countries: Germany, Czech Rep., Slovakia, Ukraine, Belarus, Lithuania, Russia), with the two e-Regions and PORTA OPTICA marked.

  16. (diagram) PROGRESS (3), VLBI, ATLAS, HPC and IST project, HPC network (5+3), other projects?

  17. CLUSTERIX - National CLUSTER of LInuX Systems • launched in November 2003, 30 months • its realization is divided into two stages: • - research and development – first 18 months • - deployment – starting after the R&D stage and lasting 12 months • more than 50% funded by the consortium members • consortium: 12 universities and the Polish Academy of Sciences

  18. CLUSTERIX goals • to develop mechanisms and tools that allow the deployment of a production Grid environment with a basic infrastructure comprising local PC clusters, based on 64-bit Linux machines, located in geographically distant independent centers connected by the fast backbone network provided by the Polish Optical Network PIONIER • existing PC clusters, as well as new clusters with both 32- and 64-bit architecture, will be dynamically connected to the basic infrastructure • as a result, a distributed PC cluster of a new generation with a dynamically changing size, fully operational and integrated with the existing services offered by other projects related to the PIONIER program, will be obtained • results in the software infrastructure area will increase the portability and stability of the software and the performance of services and computations in the Grid-type structure

  19. CLUSTERIX - objectives • development of software capable of managing clusters with dynamically changing configuration, i.e. a changing number of nodes, users and available services; one of the most important factors is reducing the management overhead; • new quality of services and applications based on the IPv6 protocols; • a production Grid infrastructure available to the Polish research community; • integration and use of existing services delivered as the outcome of other projects (data warehouse, remote visualization, computational resources of KKO); • taking into account local policies of infrastructure administration and management within independent domains; • an integrated end-user/administrator interface; • providing the required security in a heterogeneous distributed system.

  20. CLUSTERIX: Pilot installation

  21. Architecture

  22. CLUSTERIX: Technologies • the software developed will be based on the Globus Toolkit v.3, using the OGSA (Open Grid Services Architecture) concept • - this technology ensures software compatibility with other environments used for creating Grid systems, and makes the created services easier to reuse • - accepting OGSA as a standard will allow the services to co-operate with other meta-clusters and Grid systems • Open Source technology • - allows anybody to access the project source code, modify it and publish the changes • - makes the software more reliable and secure • - open software is easier to integrate with existing solutions and helps other technologies using Open Source software to develop • Integration with existing software will be used extensively, e.g., the GridLab broker, the Virtual Users Account System
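As a minimal illustration of the Globus-based middleware layer mentioned above, the sketch below submits a trivial job through the classic Globus command-line clients (grid-proxy-init and globus-job-run), which ship with the Globus Toolkit; the access-node name and the Python wrapper are assumptions for illustration only, and the GT3/OGSA service interfaces themselves are not shown.

    import subprocess

    # Create a short-lived proxy credential from the user's grid certificate
    # (prompts for the certificate passphrase).
    subprocess.run(["grid-proxy-init", "-valid", "12:00"], check=True)

    # Run a trivial command on a (hypothetical) CLUSTERIX access node via GRAM.
    result = subprocess.run(
        ["globus-job-run", "access.clusterix.example.pl", "/bin/hostname"],
        check=True, capture_output=True, text=True)
    print(result.stdout.strip())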

  23. CLUSTERIX: R&D • architecture design according to the specific requirements of users • data management • procedures for attaching a local PC cluster of any architecture • design and implementation of the task/resource management system • user account and virtual organization management • security mechanisms in a PC cluster • network resources management • utilization of the IPv6 protocol family • monitoring of cluster nodes and distributed applications • design of a user/administrator interface • design of tools for automated installation/reconfiguration of all nodes within the entire cluster • dynamic load balancing and checkpointing mechanisms • end-user applications
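Most of the items above are management-layer algorithms rather than network features. As a toy sketch of the dynamic load balancing item only (not CLUSTERIX code; all node names are made up), a scheduler can simply pick the least-loaded node from a node set that may change between calls:

    from typing import Dict

    def pick_node(load: Dict[str, float]) -> str:
        """Return the name of the least-loaded node.

        `load` maps node name -> normalized load; nodes may join or leave
        between calls, modelling a cluster with a dynamically changing size.
        """
        if not load:
            raise RuntimeError("no nodes currently attached")
        return min(load, key=load.get)

    # Example: 'wroclaw-03' joined the cluster after the initial configuration.
    current = {"poznan-01": 0.82, "czestochowa-02": 0.41, "wroclaw-03": 0.13}
    print(pick_node(current))  # -> wroclaw-03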

  24. High Performance Computing and Visualisation with the SGI Grid for Virtual Laboratory Applications Project No. 6 T11 0052 2002 C/05836

  25. Project duration: R&D – December 2002 – November 2004; deployment – 1 year. Partners: HPC centers – ACK CYFRONET AGH (Kraków), PSNC (Poznań), TASK (Gdańsk), WCSS (Wrocław); University of Łódź; end user – IMWM (Warsaw); Institute of Bioorganic Chemistry PAS; industry – SGI, ATM S.A. Funds: KBN, SGI

  26. Structure

  27. Added value • Real remote access to the national cluster (... GRID): • ASPs • HPC/HPV • Laboratory instruments • Better usage of licences • Dedicated Application Servers • Better usage of HPC resources • HTC • Emergency Computing Site • IMWM • Production Grid environment • Middleware we will work out

  28. Virtual Laboratory • VERY limited access - main reason: COSTS • Main GOAL - to make it accessible in a common way • Added value: virtual, remote (?)

  29. The Goal • Remote usage of expensive and unique facilities • Better utilisation • Joint ventures and on-line co-operation of scientific teams • Shorter deadlines, faster work • eScience – closer • Equal chances • Tele-work, tele-science

  30. Testbed infrastructure • Pilot installation of NMR spectroscopy • Optical network • HPC, HPV systems • Data mining • ... more than remote access

  31. Remaining R&D activities • Building a nation-wide HPC/HPV infrastructure: • connecting the existing infrastructure with the new testbed • Dedicated Application Servers • Resource management • Data access optimisation • tape subsystems • Access to scientific libraries • Checkpoint restart • kernel level • IA64 architecture • Advanced visualization • distributed • remote visualization • Programming environment supporting the end user • how to simplify the process of making applications parallel

  32. PROGRESS (1) • Duration: December 2001 – May 2003 (R&D) • Budget: ~4.0 MEuro • Project partners: • SUN Microsystems Poland • PSNC IBCh Poznań • Cyfronet AMM, Kraków • Technical University of Łódź • Co-funded by the State Committee for Scientific Research (KBN) and SUN Microsystems Poland

  33. PROGRESS (2) • Deployment: June 2003 – .... • Grid constructors • Computational applications developers • Computing portals operators • Enabling access to global grid through deployment of PROGRESS open source packages

  34. PROGRESS (3) • Cluster of 80 processors • Networked storage of 1.3 TB • Software: ORACLE, HPC Cluster Tools, Sun ONE, Sun Grid Engine, Globus

  35. PROGRESS GPE

  36. http://progress.psnc.pl/ http://progress.psnc.pl/portal/ progress@psnc.pl

  37. EU Projects: Progress and GridLab

  38. What is CrossGrid? • 5th FP, funded by the EU • Time frame: March 2002 – February 2005 • Structure of the project: • WP1 - CrossGrid Applications Development • WP2 - Grid Application Programming Environment • WP3 - New Grid Services and Tools • WP4 - International Testbed Organisation • WP5 - Project Management (including Architecture Team and central Dissemination/Exploitation)

  39. Partners • 21 partners • 2 industry partners • 11 countries • The biggest testbed in Europe

  40. Project structure • WP1 - CrossGrid Applications Development • WP2 - Grid Application Programming Environment • WP3 - New Grid Services and Tools • WP4 - International Testbed Organisation • WP5 - Project Management

  41. Middleware (diagram, after Hoffmann, Reinefeld, Putzer): GRID MIDDLEWARE connecting mobile access, desktops and visualization with supercomputers, PC clusters, data storage, sensors and experiments, over the Internet and networks.

  42. Applications • Surgery planning & visualisation • HEP data analysis - trigger/DAQ chain: 40 MHz (40 TB/sec) → level 1 - special hardware → 75 kHz (75 GB/sec) → level 2 - embedded processors → 5 kHz (5 GB/sec) → level 3 - PCs → 100 Hz (100 MB/sec) → data recording & offline analysis • Flooding control • MIS • Weather & pollution modelling
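The data rates quoted at each trigger level are mutually consistent: dividing throughput by event rate gives the same implied event size at every stage, and the overall rate reduction from collisions to recorded data is five orders of magnitude:

\[
\frac{40\ \mathrm{TB/s}}{40\ \mathrm{MHz}}
= \frac{75\ \mathrm{GB/s}}{75\ \mathrm{kHz}}
= \frac{5\ \mathrm{GB/s}}{5\ \mathrm{kHz}}
= \frac{100\ \mathrm{MB/s}}{100\ \mathrm{Hz}}
= 1\ \mathrm{MB\ per\ event},
\qquad
\frac{40\ \mathrm{MHz}}{100\ \mathrm{Hz}} = 4\times 10^{5}.
\]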

  43. Migrating Desktop - what's the best way to 'travel' across Grids? Roaming Access: take it anywhere, access anyhow, from Microsoft Windows or Linux. http://ras.man.poznan.pl

  44. GridLab: Enabling Applications on the Grid www.gridlab.org Jarek Nabrzyski, Project Coordinator naber@man.poznan.pl office@gridlab.org Poznan Supercomputing and Networking Center

  45. GridLab Project • Funded by the EU (5+ M€), January 2002 – December 2004 • Application and testbed oriented • Cactus Code, Triana Workflow, and all other applications that want to be Grid-enabled • Main goal: to develop a Grid Application Toolkit (GAT) and a set of grid services and tools: • resource management (GRMS), • data management, • monitoring, • adaptive components, • mobile user support, • security services, • portals, • ... and test them on a real testbed with real applications

  46. GridLab Members • PSNC (Poznan) - coordination • AEI (Potsdam) • ZIB (Berlin) • Univ. of Lecce • Cardiff University • Vrije Univ. (Amsterdam) • SZTAKI (Budapest) • Masaryk Univ. (Brno) • NTUA (Athens) • Sun Microsystems • HP • ANL (Chicago, I. Foster) • ISI (LA, C. Kesselman) • UoWisconsin (M. Livny) • collaborating with: • Users! • EU Astrophysics Network • DFN TiKSL/GriKSL • NSF ASC Project • other Grid projects: Globus, Condor, GrADS, PROGRESS, GriPhyn/iVDGL, CrossGrid and all the other European Grid Projects (GRIDSTART), GWEN, HPC-Europa

  47. GridLab Aims • Get Computational Scientists using the “Grid” and Grid services for real, everyday, production work (AEI Relativists, EU Network, Grav Wave Data Analysis, Cactus User Community), all the other potential grid apps • Make it easier for applications to make flexible, efficient, robust, use of the resources available to their virtual organizations • Dream up, prototype, and test new application scenarios which make adaptive, dynamic, wild, and futuristic uses of resources.

  48. What GridLab isn’t • We are not developing low level Grid infrastructure, • We do not want to repeat work which has already been done (want to incorporate and assimilate it …) • Globus APIs, • OGSA, • ASC Portal (GridSphere/Orbiter), • GPDK, • GridPort, • DataGrid, • GriPhyn, • ...

  49. ...need to make it easier to use (diagram): the Application asks "Is there a better resource I could be using?", calls GAT_FindResource( ), and the GAT mediates between the application and The Grid.
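The GAT_FindResource( ) call on the slide stands for the adaptive pattern GridLab promotes: an application periodically asks the GAT whether a better resource exists and migrates if so. The fragment below is only a sketch of that pattern; it does not use the real GAT API, and every name in it (gat.find_resource, app.checkpoint, gat.migrate) is hypothetical.

    def run_adaptively(app, gat):
        """Illustrative adaptive loop only; not the actual GAT interface."""
        current = gat.find_resource(app.requirements)    # "is there a better resource?"
        while not app.finished:
            app.step()                                   # do a slice of work
            better = gat.find_resource(app.requirements)
            if better is not None and better != current:
                app.checkpoint()                         # save state
                gat.migrate(app, better)                 # resume on the better resource
                current = better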
