
Building Cyber-Infrastructure and Supporting eScience


Presentation Transcript


  1. Building Cyber-Infrastructure and Supporting eScience Satoshi Sekiguchi Director, Grid Technology Research Center, AIST, JAPAN s.sekiguchi@aist.go.jp

  2. [Diagram: the Grid landscape – a science and engineering platform over high-speed networks (over 1G–10G)]
  • Applications: Grid-enabled business and engineering – medical informatics, eBiz/services, nano-tech informatics, bio-informatics, chem informatics, solution/utility HPC web services, portals/ASPs, personal weather services, Access Grid (JST)
  • Middleware: Grid technology as the science and engineering platform – upper middleware, community infrastructure, operation/human resources (IPA), lower middleware, database/expression, ITBL/SuperSINET, SETI/United Devices
  • Scale targets: peta-scale Grid and ubiquitous Grid – mega computing at 1 PFLOPS (PC x 100M, SC x 10000), distributed storage 10 PB
  • Sensors and devices: GPS, earthquake sensors, wireless networks, tremendous sensor counts, huge data streams, P2P grid, mobile cameras, blood-pressure monitors (METI program)
  • Enabling technologies: server technology, security, IPv6, FTTH/xDSL/wireless AP at 100M-

  3. GRID: Concept and Reality
  • Key concept: Resource Virtualization
  • Computers, storage, sensors, networks, software, people/organizations …
  • WWW – Whenever, Wherever, Whoever
  • Transparency – ubiquitous (?)
  • Similarity – electric power service
  • Plug in at the consumer
  • Public utilities – social infrastructure
  • 99.9…9% availability, reliability, security
  • Stability, assurance, and safety (安定・安心・安全)
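The "resource virtualization" idea above is what a GridRPC-style API exposes to applications: a routine is invoked as if it were local while the middleware locates and uses a remote grid resource. Below is a minimal, hedged sketch assuming the GGF GridRPC C API (implemented by systems such as AIST's Ninf-G, named on slide 15); the configuration file "client.conf" and the remote function "lib/pi_estimate" are hypothetical placeholders, not real endpoints.

```c
/* Minimal GridRPC-style sketch: call a routine on a remote, virtualized
 * grid resource as if it were local. Assumes the GGF GridRPC C API
 * (e.g., Ninf-G); "client.conf" and "lib/pi_estimate" are placeholders. */
#include <stdio.h>
#include "grpc.h"

int main(void)
{
    grpc_function_handle_t handle;
    double result = 0.0;
    long trials = 1000000;

    if (grpc_initialize("client.conf") != GRPC_NO_ERROR) {
        fprintf(stderr, "grpc_initialize failed\n");
        return 1;
    }

    /* Bind to a remote function; the middleware picks the resource. */
    grpc_function_handle_default(&handle, "lib/pi_estimate");

    /* Synchronous remote call: inputs and outputs are marshalled for us. */
    if (grpc_call(&handle, trials, &result) == GRPC_NO_ERROR)
        printf("pi ~= %f\n", result);

    grpc_function_handle_destruct(&handle);
    grpc_finalize();
    return 0;
}
```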

  4. GRID: Consumers scenario (sensors)
  • To provide infrastructure and facilities needed for the next major stages of collaborative research in:
  • genomics and bioscience
  • particle physics
  • astronomy
  • climatology
  • engineering design
  • social sciences
  • medical engineering
  [Images: detector for the LHCb experiment; detector for the ALICE experiment; VLBI: Kashima 34m telescope. Related labels: Bio Grid, JVO]

  5. [Diagram: Correspondences between agencies (US–JP)]
  • US: DOC (with TACC), DOE, NSF
  • Japan: Council S&T Coordination Office (env., nanotech, bio, life, etc.); METI (commerce, industry, energy: NEDO, IPA, AIST – GTRC/Sekiguchi, RICS); MEXT (science, technology & education, incl. sports and culture: JAERI, JST, JAMSTEC/ES center, JSPS, RIKEN, NII, IMS, universities – U.Tokyo, Titech/Matsuoka); SOUMU (CRL)
  • Theme: e-Science & computational science

  6. AIST Grid Projects (Underway)

  7. Other Grid-related Projects (Underway)

  8. National Research Grid Initiative (NAREGI)
  • A new R&D project funded by MEXT
  • FY2003–FY2007
  • ~2B JPY budget in FY2003
  • 1.5B for Grid R&D, 0.5B for nano-tech applications
  • One of the Japanese government's Grid computing projects
  • Selected national labs, universities, and industry are to be involved in the R&D activities

  9. National Research Grid Initiative
  • To develop a Grid software system
  • R&D in Grid middleware and the upper layer
  • Prototyping for a future Grid infrastructure for scientific research
  • To provide a testbed proving that a high-end Grid computing environment
  • 100+ Tflop/s expected by 2007
  • can be practically utilized in nano-science simulations over SuperSINET
  • To contribute to standardization activities, e.g., GGF
  • To participate in international collaboration
  • Europe, U.S., Asia-Pacific

  10. Participating Organizations
  • National Institute of Informatics (NII/MEXT)
  • Site for R&D in Grid software and networking
  • Institute for Molecular Science (IMS/MEXT)
  • Site for R&D in computational nano-sciences and simulation software platform
  • Universities and national laboratories (joint R&D)
  • AIST, Tokyo Inst. Tech., Osaka-u, Kyushu-u, Kyushu Inst. Tech., etc.
  • Research collaboration
  • ITBL Project, national supercomputing centers, etc.
  • Participating vendors
  • IT and chemicals/materials

  11. Inter-university Computer Centers (excl. National Labs), circa 2002
  • Hokkaido University: HITACHI SR8000, HP Exemplar V2500, HITACHI MP5800/160, Sun Ultra Enterprise 4000
  • University of Tsukuba: FUJITSU VPP5000, CP-PACS 2048 (SR8000 proto)
  • Tohoku University: NEC SX-4/128H4 (soon SX-7), NEC TX7/AzusA
  • University of Tokyo: HITACHI SR8000, HITACHI SR8000/MPP
  • Tokyo Inst. Technology (Titech): NEC SX-5/16, Origin2K/256, HP GS320/64
  • Nagoya University: FUJITSU VPP5000/64, FUJITSU GP7000F model 900/64, FUJITSU GP7000F model 600/12
  • Kyoto University: FUJITSU VPP800, FUJITSU GP7000F model 900/32, FUJITSU GS8000
  • Osaka University: NEC SX-5/128M8, HP Exemplar V2500/N
  • Kyushu University: FUJITSU VPP5000/64, HP GS320/32, FUJITSU GP7000F 900/64
  • Others (in institutes)

  12. Proposed Solutions
  • Diversity of resources (松竹梅 "Shou-Chiku-Bai" – pine, bamboo, plum; see the sketch below)
  • 松 ("shou", pine) – ES-like centers: 40–100 TeraFlops x (a few), 100–300 TeraFlops nationwide
  • 竹 ("chiku", bamboo) – medium-sized machines at supercomputing centers: 5–10 TeraFlops x 5, 25–50 TeraFlops aggregate per center, 250–500 TeraFlops total
  • 梅 ("bai", plum) – small clusters and PCs spread throughout campus in a campus Grid: x 5k–10k, 50–100 TeraFlops per center, 0.5–1 PetaFlops nationwide
  • Division of labor between "big" centers like ES and university centers; large, medium, and small resources
  • A national Grid infrastructure to virtualize/federate these resources ($120M approved, 2003–2007)
  Original slide: courtesy of S. Matsuoka
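To make the tier arithmetic above concrete, here is a small illustrative C sketch (not from the slides; the thresholds are invented for illustration only) that routes a job to the pine, bamboo, or plum tier by its aggregate compute demand:

```c
/* Hedged sketch of the "Shou-Chiku-Bai" tier model above: route a job
 * to pine (ES-like center), bamboo (medium center machine), or plum
 * (campus cluster) by its size. Thresholds are illustrative only. */
#include <stdio.h>

typedef enum { PLUM, BAMBOO, PINE } tier_t;

/* Pick a tier from the job's aggregate demand in TeraFlops. */
static tier_t choose_tier(double tflops)
{
    if (tflops >= 10.0)  /* beyond one medium center machine: ES-like */
        return PINE;
    if (tflops >= 0.5)   /* fits a 5-10 TeraFlops center machine */
        return BAMBOO;
    return PLUM;         /* everyday jobs stay on campus clusters */
}

int main(void)
{
    static const char *name[] = { "plum (campus)", "bamboo (center)",
                                  "pine (ES-like)" };
    double jobs[] = { 0.02, 2.0, 40.0 };  /* sample demands, TFLOPS */
    for (int i = 0; i < 3; i++)
        printf("%5.2f TFLOPS -> %s\n", jobs[i], name[choose_tier(jobs[i])]);
    return 0;
}
```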

  13. [Diagram: Everyday Grid – tier model] Sites SITE-A, SITE-B, and SITE-C connected by a high-speed network (over 1G–10G); jobs move up the tiers from 梅 Plum through 竹 Bamboo to 松 Pine as they grow larger.

  14. Center for Grid R&D (Jinbo-cho, Tokyo)

  15. AIST super cluster for e-Science platform
  • Target performance: 10~20 TFLOPS for Linpack
  • A challenge for one of the fastest clusters in the world
  • Development of a 'real-use' cluster system
  • Delivered 1Q ('04)
  • Software
  • SCore system software for cluster management and communication
  • Compilers and utilities: C, C++, FORTRAN (77, 90)
  • MPI, OpenMP
  • Parallel debugger, performance monitoring tools
  • GRID support: Globus, Condor, GridRPC, etc.
  Components
  • Computing nodes (1000~2000 CPUs): Opteron 2 GHz, 4 GB/proc
  • Interactive nodes
  • Network for computation: RDMA support, >3 Gbps/link, high bisection BW (Myrinet …)
  • Network for management and data transfer: Gigabit Ethernet
  • File system: disks of each node used for scratch; shared file system (NAS …)
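Since the slide names MPI as one of the cluster's programming models, a minimal standard-MPI C example follows as a flavor of the workloads such a machine runs; nothing AIST- or SCore-specific is assumed. Each rank integrates a slice of 4/(1+x^2) over [0,1] and MPI_Reduce combines the partial sums into an approximation of pi; it would be built with the cluster's mpicc wrapper and launched with its MPI job launcher.

```c
/* Minimal MPI example of the programming model named above: each rank
 * sums a strided share of midpoint-rule terms for the integral of
 * 4/(1+x^2) on [0,1], and MPI_Reduce gathers the result on rank 0. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    const long n = 10000000;          /* integration intervals */
    int rank, size;
    double local = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank sums every size-th interval's midpoint contribution. */
    const double h = 1.0 / (double)n;
    for (long i = rank; i < n; i += size) {
        double x = h * ((double)i + 0.5);
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* Combine partial sums on rank 0. */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi ~= %.10f with %d ranks\n", pi, size);

    MPI_Finalize();
    return 0;
}
```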

  16. Overviews Original slide: courtesy of IBM Japan

  17. Summary
  • E-Science is a great concept for our future life
  • Scientists, education
  • School, home
  • Commercial scene
  • Grid meets e-Science needs perfectly
  • Resource sharing & virtualization
  • Tele-science, remote collaboration
  • Unfortunately, the Japanese government tends to favor "e-Japan" rather than "e-Science", sigh
  • A program like "e-Science" or "Cyber Infra" is needed; projects have been launched separately
