
Science Grid Program NAREGI And Cyber Science Infrastructure

Science Grid Program NAREGI And Cyber Science Infrastructure. November 1, 2007 Kenichi Miura, Ph.D. Information Systems Architecture Research Division Center for Grid Research and Development National Institute of Informatics Tokyo, Japan.


Presentation Transcript


  1. Science Grid Program NAREGI And Cyber Science Infrastructure November 1, 2007 Kenichi Miura, Ph.D. Information Systems Architecture Research Division Center for Grid Research and Development National Institute of Informatics Tokyo, Japan

  2. Outline
  1. National Research Grid Initiative (NAREGI)
  2. Cyber Science Infrastructure (CSI)

  3. National Research Grid Initiative (NAREGI) Project: Overview
  • Originally started as an R&D project funded by MEXT (FY2003-FY2007)
  • 2 B Yen (~17 M$) budget in FY2003
  • Collaboration of National Labs., Universities and Industry in the R&D activities (IT and Nano-science Apps.)
  • Project redirected as a part of the Next Generation Supercomputer Development Project (FY2006-...)
  MEXT: Ministry of Education, Culture, Sports, Science and Technology

  4. National Research Grid Initiative (NAREGI) Project: Goals
  (1) To develop a Grid Software System (R&D in Grid Middleware and Upper Layer) as the prototype of the future Grid Infrastructure for scientific research in Japan
  (2) To provide a Testbed to prove that the High-end Grid Computing Environment (100+ Tflop/s expected by 2007) can be practically utilized by the nano-science research community over SuperSINET (now SINET3)
  (3) To participate in international collaboration/interoperability (U.S., Europe, Asia-Pacific) → GIN
  (4) To contribute to standardization activities, e.g., OGF

  5. Organization of NAREGI
  Project Leader: Dr. K. Miura; R&D Director: Dr. F. Hirata. Funded by MEXT (Ministry of Education, Culture, Sports, Science and Technology).
  [Organization chart: the Center for Grid Research and Development (National Institute of Informatics) hosts the Grid Middleware Integration and Operation Group and the Grid Middleware and Upper Layer R&D. It collaborates with the Computational Nano Center (Institute for Molecular Science), the Cyber Science Infrastructure (CSI), the computing and communication centers of seven national universities, the Grid Technology Research Center (AIST) and JAEA (ITBL joint research), the Industrial Association for Promotion of Supercomputing Technology, and joint R&D partners (TiTech, Kyushu-U, Osaka-U, Kyushu-Tech., Fujitsu, Hitachi, NEC), plus R&D on grand challenge problems for Grid applications (ISSP, Tohoku-U, AIST, Inst. Chem. Research, KEK, etc.). Deployment is coordinated by the Coordination and Operation Committee; SINET3 is utilized for operation and collaboration.]

  6. NAREGI Software Stack
  [Stack diagram, top to bottom: Grid-Enabled Nano-Applications; Grid Visualization, Grid PSE, Grid Workflow; Grid Programming (Grid RPC, Grid MPI); Super Scheduler, Data Grid, Distributed Information Service, Packaging (built on Globus, Condor, UNICORE → OGSA); Grid VM; High-Performance & Secure Grid Networking over SINET3; Computing Resources at NII, IMS and other research organizations.]

  7. VO and Resources in Beta 2: Decoupling VOs and Resource Providers (Centers)
  [Diagram: application VOs (VO-APL1, VO-APL2) and research-organization VOs (VO-RO1, VO-RO2), each with its own users and VOMS server, are mapped onto resource providers (Grid Centers at RO1, RO2, RO3). Each center runs its own Information Service (IS), Super Scheduler (SS) and GridVM instances, and applies per-VO resource policies for the VOs it serves.]

  8. WP-2: Grid Programming – GridRPC/Ninf-G2 (AIST/GTRC)
  GridRPC
  • Programming model using RPC on the Grid
  • High-level, tailored for scientific computing (c.f. SOAP-RPC)
  • GridRPC API standardization by the GGF GridRPC WG
  Ninf-G Version 2
  • A reference implementation of the GridRPC API
  • Implemented on top of Globus Toolkit 2.0 (3.0 experimental)
  • Provides C and Java APIs
  [Diagram: the numerical library's IDL file is processed by the IDL compiler, which generates the remote executable and its interface information (an LDIF file registered in MDS). The client (1) requests and (2) receives the interface information, (3) invokes the remote executable via GRAM (fork on the server side), and (4) the remote executable connects back to the client.]
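To make the call sequence above concrete, here is a minimal GridRPC client sketch in C against the GGF GridRPC API that Ninf-G implements. The remote function name "example/mat_mult", its argument layout, and the configuration file path are hypothetical placeholders for illustration, not part of NAREGI or Ninf-G.

```c
/* Minimal GridRPC client sketch (GGF GridRPC API, as implemented by Ninf-G).
 * The remote function "example/mat_mult" and its argument signature are
 * hypothetical; a real call must match the IDL registered on the server. */
#include <stdio.h>
#include "grpc.h"

int main(int argc, char *argv[])
{
    grpc_function_handle_t handle;
    double a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, c[4];
    int n = 2;

    /* Read the client configuration (server host, protocol, etc.). */
    if (grpc_initialize("client.conf") != GRPC_NO_ERROR) {
        fprintf(stderr, "grpc_initialize failed\n");
        return 1;
    }

    /* Bind a handle to the remote executable on the default server. */
    grpc_function_handle_default(&handle, "example/mat_mult");

    /* Synchronous remote call: arguments are marshalled, the remote
       executable is launched through the middleware, and results return. */
    if (grpc_call(&handle, n, a, b, c) == GRPC_NO_ERROR)
        printf("c[0] = %f\n", c[0]);

    grpc_function_handle_destruct(&handle);
    grpc_finalize();
    return 0;
}
```

The handle binds the client to a registered remote executable; grpc_call() then ships the arguments, runs the executable on the server side, and returns the results in place.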

  9. WP-2: Grid Programming – GridMPI (AIST and U-Tokyo)
  GridMPI is a library which enables MPI communication between parallel systems in the Grid environment. This makes possible:
  • Jobs whose data size is too large to be executed on a single cluster system
  • Multi-physics jobs in a heterogeneous CPU-architecture environment
  • Interoperability: IMPI (Interoperable MPI)-compliant communication protocol; strict adherence to the MPI standard in the implementation
  • High performance: simple implementation; built-in wrapper to vendor-provided MPI libraries
  [Diagram: two clusters, A and B, each run YAMPII internally and exchange inter-cluster MPI messages through an IMPI server.]
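Because GridMPI adheres strictly to the MPI standard, an ordinary MPI program should run across clusters without source changes; only the way the job is launched (for instance, through the IMPI server bridging the two clusters) differs. A minimal sketch of such a program:

```c
/* Ordinary MPI program; under GridMPI the same source runs unchanged,
 * whether the ranks live in one cluster or are split across two clusters
 * bridged by an IMPI server. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    double local, global;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes a partial value; the reduction crosses
       cluster boundaries transparently when run under GridMPI. */
    local = (double)(rank + 1);
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %f\n", size, global);

    MPI_Finalize();
    return 0;
}
```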

  10. WP-3: User-Level Grid Tools & PSE
  • Grid PSE - deployment of applications on the Grid; support for execution of deployed applications
  • Grid Workflow - workflow language independent of specific Grid middleware; GUI in task-flow representation
  • Grid Visualization - remote visualization of massive data distributed over the Grid; general Grid services for visualization

  11. Workflow-based Grid FMO Simulations of Proteins
  [Workflow diagram: input data and fragment data feed monomer calculations, followed by density exchange, dimer calculations, total energy calculation and visualization; the monomer and dimer steps fan out across NII resources (njs nodes) and IMS resources (dpcd nodes), linked through a data component.] By courtesy of Prof. Aoyagi (Kyushu Univ.)

  12. Scenario for Multi-site MPI Job Execution
  [Diagram: an FMO job (PC cluster, 128 CPUs) and a RISM job (SMP, 64 CPUs) are coupled with GridMPI. The user registers the FMO/RISM sources, deploys them and edits a workflow through the PSE and Workflow Tool (a-c), then submits it (1); the Super Scheduler queries the Distributed Information Service, negotiates with the sites (3), makes reservations (4), concludes sub-job agreements (5), and co-allocates/submits the sub-jobs to the local schedulers through GridVM (6); the IMPI server bridges the MPI sub-jobs at Sites A, B (SMP machine) and C (PC cluster), with monitoring and accounting (10), Grid Visualization for the output files, and a CA at each site.]

  13. Adaptation of Nano-science Applications to the Grid Environment
  [Diagram: the FMO (Fragment Molecular Orbital method) electronic-structure analysis running at NII and the RISM (Reference Interaction Site Model) solvent-distribution analysis running at IMS are coupled over SINET3 through the Grid middleware; GridMPI (and MPICH-G2/Globus) carries the "electronic structure in solutions" exchange, with data transformation between the different meshes of the two codes.]

  14. NAREGI Application: Nanoscience Simulation Scheme
  [Diagram: FMO monomer and dimer calculations on adaptive meshes and 3D-RISM pair-correlation calculations on an evenly-spaced mesh exchange data through a Mediator, which finds correlations between mesh points; 3D-RISM returns the solvent distribution, and FMO returns effective charges on the solute sites.] By courtesy of Prof. Aoyagi (Kyushu Univ.)
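As a purely illustrative sketch of the kind of mesh-to-mesh data exchange the Mediator performs, the following C function resamples a field given on an evenly-spaced 1-D mesh onto arbitrary target points by linear interpolation. The function name and the 1-D simplification are assumptions for illustration; the actual Mediator works on 3-D adaptive and uniform meshes.

```c
/* Illustrative only: resample values given on an evenly-spaced 1-D mesh
 * onto arbitrary target coordinates by linear interpolation. This sketch
 * just shows the basic idea of exchanging data between meshes with
 * different spacing; it is not NAREGI code. */
#include <stddef.h>

void resample_uniform_to_points(const double *src, size_t n_src,
                                double x0, double dx,
                                const double *x_dst, double *dst,
                                size_t n_dst)
{
    for (size_t j = 0; j < n_dst; ++j) {
        double t = (x_dst[j] - x0) / dx;       /* position in source-mesh units */
        if (t <= 0.0) {                        /* clamp outside the source range */
            dst[j] = src[0];
        } else if (t >= (double)(n_src - 1)) {
            dst[j] = src[n_src - 1];
        } else {
            size_t i = (size_t)t;              /* left neighbour index */
            double w = t - (double)i;          /* interpolation weight */
            dst[j] = (1.0 - w) * src[i] + w * src[i + 1];
        }
    }
}
```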

  15. Collaboration in the Data Grid Area
  • High Energy Physics (GIN) - KEK, EGEE
  • Astronomy - National Astronomical Observatory (Virtual Observatory)
  • Bio-informatics - BioGrid Project

  16. NAREGI Data Grid Environment
  [Diagram: a Grid Workflow of jobs (Job 1 ... Job n) sits on top of the Data Grid components: Data Access Management imports data into the workflow and places and registers data on the Grid; Metadata Construction assigns metadata to the data and supports grid-wide DB querying; Data Resource Management stores the data (Data 1 ... Data n) into the distributed file nodes of a grid-wide file system.]

  17. Roadmap of NAREGI Grid Middleware (FY2003-FY2007)
  [Roadmap: prototyping of the NAREGI middleware components on a UNICORE-based R&D framework and application of the component technologies to nano applications with evaluation; development and integration of the α version (internal) and its evaluation on the NII-IMS testbed; development of OGSA-based middleware and development and integration of the OGSA/WSRF-based β version (β1 release, β2 limited distribution) with evaluation on the NAREGI wide-area testbed; deployment and evaluation of the β version by IMS and other collaborating institutes; Version 1.0 release and verification/evaluation of Ver. 1, with a midpoint evaluation along the way.]

  18. Highlights of the NAREGI β Release ('05-'06)
  • Resource and Execution Management - GT4/WSRF-based OGSA-EMS incarnation (the first such incarnation in the world, at α): job management, brokering, reservation-based co-allocation, monitoring, accounting; network traffic measurement and control
  • Security - production-quality CA (NAREGI operates a production-level CA in the APGrid PMA); VOMS/MyProxy-based identity/security/monitoring/accounting
  • Data Grid - WSRF-based grid-wide data sharing with Gfarm (grid-wide seamless data access)
  • Grid-Ready Programming Libraries - standards-compliant GridMPI (MPI-2) and GridRPC (high-performance communication); bridge tools for different types of applications in a concurrent job (support for data-format exchange)
  • User Tools - web-based portal; workflow tool with NAREGI-WFML; WS-based application contents and deployment service (a reference implementation of OGSA-ACS); large-scale interactive Grid visualization

  19. NAREGI Version 1.0: Operability, Robustness, Maintainability
  • To be developed in FY2007
  • More flexible scheduling methods - reservation-based scheduling; coexistence with locally scheduled jobs; support of non-reservation-based scheduling; support of "bulk submission" for parameter-sweep type jobs
  • Improvement in maintainability - more systematic logging using the Information Service (IS)
  • Easier installation procedure - apt-rpm; VM

  20. Science Grid NAREGI - Middleware Version 1.0 Architecture - [architecture diagram]

  21. Network Topology of SINET3
  • 63 edge nodes and 12 core nodes (75 layer-1 switches and 12 IP routers)
  • Deploys Japan's first 40 Gbps (STM256) lines between Tokyo, Nagoya, and Osaka
  • The backbone links form three loops to enable quick service recovery from network failures and efficient use of the network bandwidth
  • International links to Hong Kong and Singapore at 622 Mbps, and 2.4-10 Gbps links toward Los Angeles and New York
  [Map legend: edge nodes (edge L1 switch); core nodes (core L1 switch + IP router); domestic links of 1-20 Gbps and 10-40 Gbps.]

  22. NAREGI Phase 1 Testbed (~3000 CPUs, ~17 Tflops)
  [Diagram: sites connected over SINET3 (10 Gbps MPLS): Center for Grid R&D (NII, ~5 Tflops), Computational Nano-science Center (IMS, ~10 Tflops), TiTech Campus Grid, AIST SuperCluster, Osaka Univ. BioGrid, and small test application clusters at ISSP, Kyoto Univ., Tohoku Univ., KEK, AIST and Kyushu Univ.]

  23. Computer System for Grid Software Infrastructure R&D - Center for Grid Research and Development (5 Tflop/s, 700 GB memory)
  [System diagram: a file server (PRIMEPOWER 900 + ETERNUS3000 + ETERNUS LT160; 1 node/8 CPUs, SPARC64 V 1.3 GHz, 16 GB memory, 10 TB storage, backup up to 36.4 TB); two high-performance distributed-memory compute servers (PRIMERGY RX200, 128 Xeon 3.06 GHz CPUs each plus control node, 130 GB and 65 GB memory, 9.4 TB storage each, InfiniBand 4X at 8 Gbps); two distributed-memory compute servers (Express 5800, 128 Xeon 2.8 GHz CPUs each plus control node, 65 GB memory, 4.7 TB storage, GbE); two distributed-memory compute servers (HPC LinuxNetworx, 128 Xeon 2.8 GHz CPUs each plus control node, 65 GB memory, 4.7 TB storage, GbE); SMP compute servers: PRIMEPOWER HPC2500 (1 node, 64 SPARC64 V 1.3 GHz CPUs, 128 GB memory, 441 GB storage), SGI Altix3700 (1 node, 32 Itanium2 1.3 GHz CPUs, 32 GB memory, 180 GB storage) and IBM pSeries 690 (1 node, 32 POWER4 1.3 GHz CPUs, 64 GB memory, 480 GB storage); all interconnected through L3 switches at 1 Gbps (upgradable to 10 Gbps) and connected to the external network and SINET3.]

  24. Computer System for Nano Application R&D - Computational Nano-science Center (10 Tflop/s, 5 TB memory)
  [System diagram: an SMP compute server (16-way × 50 nodes of POWER4+ 1.7 GHz on a multi-stage crossbar network, 3072 GB memory, 2.2 TB storage) and a distributed-memory compute server of 4 units (818 Xeon 3.06 GHz CPUs plus control nodes, Myrinet2000 at 2 Gbps, 1.6 TB memory, 1.1 TB storage per unit), delivering 5.4 and 5.0 TFLOPS respectively; front-end servers; a file server (16 SPARC64 GP 675 MHz CPUs, 8 GB memory, 30 TB storage, 25 TB backup); a CA/RA server; L3 switch at 1 Gbps (upgradable to 10 Gbps); firewall and VPN link to the Center for Grid R&D over SINET3.]

  25. Future Direction of NAREGI Grid Middleware: Toward a Petascale Computing Environment for Scientific Research
  [Diagram: the Center for Grid Research and Development (National Institute of Informatics) productizes general-purpose Grid middleware for scientific computing - resource management in the Grid environment, the Grid programming environment, the Grid application environment, the data Grid environment, high-performance secure Grid networking, and Grid middleware for large computer centers - feeding the Cyber Science Infrastructure (CSI) science Grid environment, contributing to the international scientific community and standardization, and training IT and application engineers. The Computational Nano-science Center (Institute for Molecular Science) evaluates the Grid system with grid-enabled nano applications and develops computational methods for nanoscience using the latest Grid technology, driven by progress in the latest research and development (nano, biotechnology) and new methodologies for computational science. The Industrial Committee for Super Computing Promotion channels industry requirements for a science Grid for industrial applications (large-scale and high-throughput computation, solicited research proposals from industry to evaluate applications), leading to industrial use for new intellectual product development and vitalization of industry.]

  26. Outline
  1. National Research Grid Initiative (NAREGI)
  2. Cyber Science Infrastructure (CSI)

  27. Cyber Science Infrastructure: Background
  A new information infrastructure is needed in order to boost today's advanced scientific research:
  • Integrated information resources and systems - supercomputers and high-performance computing, software, databases and digital contents such as e-journals, and the "human" element and research processes themselves
  • U.S.A.: Cyber-Infrastructure (CI); Europe: EU e-Infrastructure (EGEE, DEISA, ...)
  A breakthrough in research methodology is required in fields such as nano-science/technology and bioinformatics/life sciences; the key to industry/academia cooperation is moving from 'Science' to 'Intellectual Production'.
  An advanced information infrastructure for research will be the key to international cooperation and competitiveness in future science and engineering. The Cyber Science Infrastructure is a new comprehensive framework of information infrastructure in Japan.

  28. Cyber-Science Infrastructure for R&D
  [Diagram: SuperSINET and beyond - a lambda-based academic networking backbone connecting Hokkaido-U, Tohoku-U, Tokyo-U, NII, Nagoya-U, Kyoto-U, Osaka-U and Kyushu-U (plus TiTech, Waseda-U, KEK, etc.). On top of it, the Cyber-Science Infrastructure (CSI) integrates the NAREGI outputs (deployment of NAREGI middleware), NII-REO (Repository of Electronic Journals and Online Publications), GeNii (Global Environment for Networked Intellectual Information), virtual labs and live collaborations, and UPKI (national research PKI infrastructure), with international infrastructural collaboration, industry/societal feedback, restructuring of university IT research resources, and extensive on-line publication of results.]

  29. Structure of CSI and Role of the Grid Operation Center (GOC)
  [Diagram: within the National Institute of Informatics, the Cyber-Science Infrastructure links the Center for Grid Research and Development (R&D and operational collaboration, R&D/support to operations) with the GOC, which deploys and operates the NAREGI middleware, provides technical support, operates the CA, administers VO users, trains users, and feeds back to the R&D group. Around it sit the working groups for Grid middleware, for inter-university PKI (UPKI system planning/operations) and for networking (planning/operations of the SINET3 networking infrastructure), the academic contents service, and the e-Science community: industrial project VOs, research project VOs, domain-specific research organization VOs (IMS, AIST, KEK, NAO, etc.), university/national supercomputing center VOs, a peta-scale system VO, and research community VOs, with international collaboration (EGEE, TeraGrid, DEISA, OGF, etc.).]

  30. Cyber Science Infrastructure

  31. Expansion Plan of the NAREGI Grid
  [Diagram: the NAREGI Grid middleware federates, from top to bottom, the petascale computing environment, the national supercomputer grid (Tokyo, Kyoto, Nagoya, ...), domain-specific research organizations (IMS, KEK, NAOJ, ...), and the departmental computing resources and laboratory-level PC clusters of domain-specific research communities, with interoperability toward GIN, EGEE, TeraGrid, etc.]

  32. CyberInfrastructure (NSF)
  [Diagram: Track 1 petascale system (NCSA), a leadership-class machine of >1 Pflops; Track 2 systems (TACC, UTK/ORNL, FY2009) at the national level, >500 Tflops; NSF supercomputer centers (SDSC, NCSA, PSC) and local resources of 50-500 Tflops; network infrastructure: TeraGrid. Four important areas (2006-2010): high-performance computing; data, data analysis & visualization; virtual organizations for distributed communities; learning & workforce development. Slogan: Deep - Wide - Open.]

  33. EU's e-Infrastructure (HET)
  [Diagram: Tier 1 - European HPC center(s) of >1 Pflops under the PACE petascale project (2009?); Tier 2 - national/regional centers with Grid collaboration, 10-100 Tflops (DEISA); Tier 3 - local centers (EGEE, EGI); network infrastructure: GEANT2.]
  HET: HPC in Europe Task Force; PACE: Partnership for Advanced Computing in Europe; DEISA: Distributed European Infrastructure for Supercomputing Applications; EGEE: Enabling Grids for E-sciencE; EGI: European Grid Initiative

  34. Summary
  • NAREGI Grid middleware will enable seamless federation of heterogeneous computational resources.
  • Computations in nano-science/technology applications over the Grid are to be promoted, including participation from industry.
  • NAREGI Grid middleware is to be adopted as one of the important components in the new Japanese Cyber Science Infrastructure framework.
  • NAREGI is planned to provide the access and computational infrastructure for the Next Generation Supercomputer System.

  35. Thank you! http://www.naregi.org
