
Grid Status and Perspective in China




  1. Grid Status and Perspective in China Sun Gongxing IHEP, Beijing

  2. Main Grid Projects in China • NHPCE—the National High Performance Computing Environment (MOST, 863 programme) • VEGA—Grid Project around Dawning SuperServer (ICT,CAS). • SDG—the Scientific Data Grid Project (CNIC, CAS) • ChinaGrid—the MOE Project. • HEP Data Grid (currently IHEP).

  3. Network Infrastructure Overview in China

  4. Overview of International Network Links of China • CN-HK CERNET/HARNET 2Mbps • CN-HK C&SNET 45Mbps • CN-JP CERNET 10Mbps • CN-US CERNET 10Mbps • CN-US CERNET 200Mbps • CN-UK CERNET 2Mbps • CN-US CSTNET 55Mbps • CN IEEAF 10.6Gbps

  5. 1 Network Infrastructures for Science and Technology in China • The CSTNET—the China Science and Technology Network - One of the earliest networks in China, and the first fully functional interconnection with the U.S. - 55Mbps international link (to be upgraded to 100Mbps soon). - Topology as Fig. 1

  6. 2 NSFCNET—Natural Science Foundation of China Network • Sponsor: the Natural Science Foundation of China (NSFC) - A test bed for advanced high-speed network research, native multicast enabled - 6 main nodes (all in Beijing): CNIC of CAS, THU, PKU, BUAA, BUPT, NSFC - Topology as Fig. 2.

  7. 3 The Internet2 Testbed • Supports scientific research and the development and testing of next-generation networks. - High-speed connection with vBNS and Abilene (connected to STAR TAP in Sep. 2000). - 2.5/10Gbps DWDM backbone (dual rings). - Will interconnect 200 nodes, including 100 universities and 100 institutes.

  8. - Networking infrastructure is only part of the effort; more important is the development of applications, particularly grid-enabled applications. - Funding: 400 million Yuan (~40M US$).

  9. 4 CAINONET—China Advanced Info. Optic Net • A key 863-300 project. • Self-owned technologies and products for next-generation inter-networks. • IP/DWDM, 2.5Gbps. • Topology as Fig. 3

  10. Overview of Grid Projects in China

  11. 1 The VEGA Project • Design and implement Grid-level system software. • Grid-oriented super-server—the Dawning 4000. • Service Grid—providing various services to users, for instance computation, messaging, and knowledge services. • Architecture, Grid operating system, application development, etc.

  12. Vega Grid Architecture (diagram: a client's Grid browser issues grid://… requests to a resource router, which dispatches them among multiple Grid servers)

  13. Vega Applications • Case 1 • Global Grid batch system. • Applications based on the GOS APIs. • No central control. • Shell commands: gsub, gstat, gkill, etc. • All registered resources are available to users: CPUs, disk space, networks, data, software, services…
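The batch model on this slide — no central controller, jobs matched against whatever resources are registered — can be sketched in Python. The class and method names below mirror the slide's `gsub`/`gstat`/`gkill` commands but are otherwise invented for illustration; they are not the actual GOS API:

```python
import itertools

class GridBatch:
    """Minimal sketch of a Vega-style global batch system: any registered
    resource can accept a job, and each node keeps only its own job table
    (no central control)."""

    _ids = itertools.count(1)  # job id generator

    def __init__(self):
        self.resources = {}  # resource name -> free CPU slots
        self.jobs = {}       # job id -> {"resource", "cmd", "state"}

    def register(self, name, slots):
        """Make a resource visible to all users of the grid."""
        self.resources[name] = slots

    def gsub(self, command):
        """Submit a job to any resource with a free slot (cf. `gsub`)."""
        for name, free in self.resources.items():
            if free > 0:
                self.resources[name] -= 1
                jid = next(self._ids)
                self.jobs[jid] = {"resource": name, "cmd": command,
                                  "state": "RUNNING"}
                return jid
        raise RuntimeError("no free resource")

    def gstat(self, jid):
        """Query a job's state (cf. `gstat`)."""
        return self.jobs[jid]["state"]

    def gkill(self, jid):
        """Cancel a job and release its slot (cf. `gkill`)."""
        job = self.jobs[jid]
        job["state"] = "KILLED"
        self.resources[job["resource"]] += 1
```

A real deployment would route `gsub` through resource routers rather than a local table; this sketch only shows the user-visible command semantics.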

  14. Global Batch System in Vega

  15. On-line Transaction System • Case 2: • 4 different basic services registered to the Grid. - Weather forecasting. - Airline ticket booking. - Sight-spot ticket booking. - Charging. • Compose these services into a single application.
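The composition pattern here — independently registered basic services combined into one transaction — can be illustrated with a small Python sketch. The four service names follow the slide; the registry mechanism and function signatures are invented for illustration:

```python
# Registry of basic grid services; each is an independent callable.
registry = {}

def register(name):
    """Decorator that records a service under a name, standing in for
    service registration on the grid."""
    def deco(fn):
        registry[name] = fn
        return fn
    return deco

@register("weather")
def weather(city):
    return f"forecast for {city}: clear"

@register("airline")
def airline(route):
    return f"airline ticket booked: {route}"

@register("sight-spot")
def sight_spot(place):
    return f"entry ticket booked: {place}"

@register("charging")
def charging(amount):
    return f"charged {amount} yuan"

def book_trip(city, route, place, amount):
    """Composite transaction: look up each registered service and
    invoke it in order, as one application."""
    return [registry["weather"](city),
            registry["airline"](route),
            registry["sight-spot"](place),
            registry["charging"](amount)]
```

The point of the design is that `book_trip` never hard-codes the services; it only looks them up in the registry, so any registered implementation can be swapped in.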

  16. An Online Transaction

  17. 2 The China National Grid • The National High Performance Computing Environment—the NHPCE. • A key 863 programme (MOST). • Includes 9 HPC sites in the Grid. - Interconnected via the available networks (CERNET & NSTnet). - Sites are equipped with Dawning 2000/3000, Galaxy 3 (20Gflops), Sunway 1 (460Gflops), and PC clusters (8Gflops).

  18. NHPCE Grid Portal (User Mgt)

  19. NHPCE Grid Portal (Job Submit)

  20. NHPCE Grid Portal(Resource Mgt)

  21. NHPCE Grid Portal(System Monitoring)

  22. NHPCE Grid Portal (utilities)

  23. NHPCE Applications • Weather forecasting. • Petroleum reservoir simulation. • Bio-information database and applications. • Numerical wind tunnel simulation. • Automobile collision simulation. • Ship structure analysis. • National scientific databases and applications. • Digital library.

  24. NHPCE Architecture (diagram: HPC nodes, instruments, databases, and information libraries on a gigabit IP network connected to the Internet, serving clients such as PCs, notebooks, telephones, BP and mobile phones, TVs, game consoles, DVD players, video cameras, and audio devices)

  25. 3 The SDG Project • By Dec. 2002 • 31 member institutions • 217 databases • 3.2 TB • Classification • numerical: 46% • text: 18% • spatial: 22% • multimedia: 14%
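The classification shares above can be turned into approximate per-category volumes, assuming the percentages apply to the quoted 3.2 TB total (a back-of-the-envelope check, not a figure from the slide):

```python
TOTAL_TB = 3.2  # SDG holdings as of Dec. 2002, per the slide

# Classification shares from the slide; they sum to exactly 100%.
shares = {"numerical": 0.46, "text": 0.18, "spatial": 0.22, "multimedia": 0.14}

# Approximate volume of each category in TB.
volumes = {kind: round(TOTAL_TB * frac, 2) for kind, frac in shares.items()}
# numerical ≈ 1.47 TB, text ≈ 0.58 TB, spatial ≈ 0.70 TB, multimedia ≈ 0.45 TB
```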

  26. SDG Universal Metadata Tool • Many disciplines in SDG → similarly many (or more) metadata standards; it is impractical to develop a separate tool for every metadata schema. • Entering metadata for existing databases is tedious, so an easy-to-use tool is needed in practice. • Input: a metadata schema (XML DTD). • Output: a Web-based, customizable UI. • LDAP-based storage. • Management functions (add, delete, modify, query); the back end is MDIS.
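The schema-in, form-out idea can be sketched in Python. For simplicity the sketch walks a plain XML description rather than a DTD, and the schema, element names, and field types are all invented examples, not SDG's actual formats:

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata schema for one discipline. A real input would
# be an XML DTD; this simplified XML stands in for it.
SCHEMA = """\
<schema name="astro-catalog">
  <field name="title" type="text"/>
  <field name="instrument" type="text"/>
  <group name="coverage">
    <field name="ra" type="number"/>
    <field name="dec" type="number"/>
  </group>
</schema>
"""

def form_fields(schema_xml):
    """Walk the schema tree and return (dotted path, type) pairs that a
    form generator could render. Tree-shaped metadata flattens to paths
    like 'coverage.ra' — exactly what makes it harder to present than a
    fixed-column table."""
    root = ET.fromstring(schema_xml)
    fields = []

    def walk(node, prefix):
        for child in node:
            path = prefix + child.get("name")
            if child.tag == "group":       # nested group: recurse
                walk(child, path + ".")
            else:                          # leaf: one form field
                fields.append((path, child.get("type")))

    walk(root, "")
    return fields
```

Because the UI is generated from the schema, one tool covers every discipline's metadata standard instead of one hand-built form per schema, which is the slide's main argument.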

  27. SDG Universal Metadata Tool (diagram: MD schema → install & configure → processing (Java bean) → MDIS (LDAP); an XML engine renders the user page from interim XML; universal, extensible, customizable) • Metadata is tree-like and more flexible than fixed-column tables, which makes it difficult to handle in a web UI. • XML files are used to store interim results.

  28. Applications • Virtual Observatory • Astronomical data is huge & well documented. • Most is online & sharable. • So, the Internet is becoming the world's best telescope. • Data integration and federation (by many different instruments, from many different places, at many different times). • Easy to use for astronomers. • LAMOST (aperture: 4m, by 2004). • Collaboration between NAO and CNIC.

  29. 4 The China Grid • Now being proposed; to be funded by MOE (the Ministry of Education). Its bandwidth will be 2.5Gbps, connecting 100 universities across China. • To be interconnected with NSFCNET. • Discussion is under way: when? how?

  30. 5 HEP DataGrid in China • (1) BES experiment: tau-charm physics; an international collaboration with 18 domestic universities and 4 foreign universities. • The major upgrade program BEPCII has been approved by the Chinese Government. • Increases luminosity by a factor of 100. • Will produce about 3 PB of data in 3-5 years.
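To put the quoted volume in context, a back-of-the-envelope sustained rate (taking 4 years as the midpoint of the slide's 3-5 year range — an assumption, not a figure from the talk):

```python
PB = 10**15  # bytes

total_bytes = 3 * PB           # quoted BEPCII data volume
years = 4                      # assumed midpoint of "3-5 years"
seconds = years * 365 * 24 * 3600

rate_MB_s = total_bytes / seconds / 10**6          # ≈ 24 MB/s sustained
per_day_TB = total_bytes / (years * 365) / 10**12  # ≈ 2 TB per day
```

Roughly 2 TB arriving every day is what motivates a data grid rather than a single computing site.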

  31. (2) YangBaJing Cosmic-Ray Lab in Tibet • China-Japan collaboration: air-shower array detector, since 1990. • China-Italy collaboration: RPC carpet detector of 10,000 m², under construction. • Several domestic and foreign universities participate in both experiments. • Data volume: several hundred TB.

  32. (3) Experiments at the LHC: CMS and ATLAS • Chinese physicists are participating in MC simulations, and will participate in physics analysis. - Build at least a Tier-2 Regional Center. - An NSFC delegation visited CERN in April this year to discuss the collaboration.

  33. (4) Belle and BaBar (5) Alpha Magnetic Spectrometer (AMS)

  34. The IHEP Campus Grid • Building a campus Grid computing environment at IHEP. • Making full use of the 7 PC farms built at IHEP, which currently run separately for their respective experiments, leading to inefficient utilization. • Topology as Fig. • IhepGrid monitoring system. • Rebuilding higher-bandwidth connections among these PC farms.

  35. Topological map of IHEP Computing Environment

  36. The PC Farms Hardware Configuration at Computing Center

  37. The CMS and ATLAS PC Farm Monitor (RAID)

  38. The IhepGrid Monitoring System

  39. China HEP Top CA

  40. The Connection of PC Farms at IHEP Computing Center

  41. Components of Campus Grid Computing Environment at IHEP
