
Development of Russian Grid Segment in the frames of EU DataGRID, LCG and EGEE projects


Presentation Transcript


1. Development of Russian Grid Segment in the frames of EU DataGRID, LCG and EGEE projects. V.A. Ilyin (SINP MSU), V.V. Korenkov (JINR, Dubna). NEC’2003, Varna, 19 September 2003.

2. RDCC for LHC. Creation of a regional centre for experimental data processing of the Large Hadron Collider (LHC) in Russia. The project is planned for the years 1999-2007; the first stage, 1999-2001, is the creation of a prototype of the centre. By the year 2005 the computing resources of the regional centre and the throughput of the links to CERN should provide:
• total processor performance: 2·10⁶ MIPS
• disk space for data storage: 50 TB
• robotic mass storage: 5·10⁴ TB
• communication channel CERN - regional centre: 622 Mbps
The distributed regional centre is expected to be created on the basis of the infrastructure of four centres: SINP MSU, ITEP, IHEP and JINR (the slide also marks the Institute of Nuclear Physics SB RAS). A unified computer network will be constructed for all Russian institutes participating in the LHC project.

3. The opportunity of Grid technology. [Diagram: the MONARC project LHC Computing Model (2001, evolving): CERN Tier1 at the centre; Tier1 national/regional centres in the UK, Russia, France, Germany, Italy and the USA; CERN Tier2 and regional groups; Tier3 physics department resources at labs and universities (Lab b, Lab c, Lab m, Uni a, Uni b, Uni n, Uni x, Uni y, ...); and individual desktops.]

4. DataGrid Architecture (with side labels for the applications, middleware and Globus layers):
• Local Computing: Local Application, Local Database
• Grid Application Layer: Data Management, Metadata Management, Object to File Mapping, Job Management
• Collective Services: Information & Monitoring, Replica Manager, Grid Scheduler
• Underlying Grid Services: Database Services, Computing Element Services, Storage Element Services, Replica Catalog, Authorization, Authentication & Accounting, Logging & Book-keeping
• Grid Fabric services: Fabric Resource Management, Configuration Management, Monitoring and Fault Tolerance, Node Installation & Management, Fabric Storage Management

5. EDG overview: structure, work packages. The EDG collaboration is structured in 12 Work Packages:
• WP1: Work Load Management System
• WP2: Data Management
• WP3: Grid Monitoring / Grid Information Systems
• WP4: Fabric Management
• WP5: Storage Element
• WP6: Testbed and demonstrators
• WP7: Network Monitoring
• WP8: High Energy Physics Applications (applications)
• WP9: Earth Observation (applications)
• WP10: Biology (applications)
• WP11: Dissemination
• WP12: Management

6. The Russian HEP institutes IHEP (Protvino), ITEP (Moscow), JINR (Dubna), SINP MSU, TC “Science and Society” (Moscow), Keldysh IAM (Moscow), RCC MSU and PNPI (St. Petersburg) participated in the first European Grid project, EU DataGRID (WP6, WP8, WP10), successfully deploying the EDG middleware and taking part in the EDG testbeds. These activities built up experience in working with a modern Grid environment and led to the integration of the Russian Grid segment into the European Grid infrastructure.

7. Activities of Russian institutes in the EDG Project:
• information service (GIIS)
• certification service (Certification Authority)
• data management (GDMP; OmniBack & OmniStorage)
• monitoring
• Metadispetcher
• mass event production for the CMS and ATLAS experiments
• DOLLY: a proposed solution to integrate CMS mass event production into the Grid infrastructure

8. GIIS. The technology of GIIS information servers, which collect the information on local computing and data-storage resources (published by the Globus GRIS service at each node of the distributed system) and pass it dynamically up to a higher-level GIIS server, has been put into practice. In this way a hierarchical GRIS-GIIS information service has been built and tested. A common GIIS information server (ldap://lhc-fs.sinp.msu.ru:2137) has been set up; it forwards the information on the local resources of the Russian centres to the information server of the EU DataGrid project (ldap://testbed1.cern.ch:2137).
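
The GRIS/GIIS servers of that era were ordinary LDAP services, so the hierarchy could be queried with any LDAP client. A minimal sketch in Python using the ldap3 library (a modern stand-in for the 2003-era tooling; the host, port and base DN are taken from the slide, and the service itself has long been decommissioned):

```python
# Hypothetical sketch: query the country-level GIIS over LDAP.
from ldap3 import Server, Connection, SUBTREE

server = Server("lhc-fs.sinp.msu.ru", port=2137)
conn = Connection(server, auto_bind=True)  # anonymous bind, as MDS allowed

# Ask the GIIS for every entry under the Russian branch of the tree.
conn.search(
    search_base="dc=ru, o=grid",
    search_filter="(objectClass=*)",
    search_scope=SUBTREE,
)
for entry in conn.entries:
    print(entry.entry_dn)  # one DN per published resource entry
```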

9. Russian National GIIS hierarchy:
• CERN top-level WP6 GIIS: testbed001.cern.ch:2137
• Country-level GIIS (dc=ru, o=grid): lhc-fs.sinp.msu.ru:2137
• Institute-level GIISes feeding it:
  • dc=sinp, dc=ru, o=grid: SINP MSU, Moscow
  • dc=jinr, dc=ru, o=grid: JINR, Dubna
  • dc=srcc, dc=ru, o=grid: SRCC MSU, Moscow
  • dc=ihep, dc=ru, o=grid: IHEP, Protvino
  • dc=itep, dc=ru, o=grid: ITEP, Moscow
  • dc=tcss, dc=ru, o=grid: TCSS, Moscow
  • dc=kiam, dc=ru, o=grid: KIAM, Moscow
  • dc=?, dc=ru, o=grid: St. Petersburg
• SRCC MSU, KIAM and TCSS participate only in the Russian DataGrid project and are not involved in the CERN projects.

10. CA. A Certification Authority (CA) centre for the Russian Grid segment has been created at SINP MSU. The certificates of this centre are accepted by all the participants of the EU DataGRID project. A scheme for confirming certificate requests with an electronic signature has been set up with the assistance of Registration Authority (RA) centres located at the other institutes. The programs for creating and checking the electronic signature, and a package automating the operation of the certification centre, have been developed. The proposed CA+RA scheme and the program package have been accepted by CERN and the other participants of the EU DataGrid project.
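
At the bottom of the CA+RA scheme sits an ordinary X.509 signature check: a certificate is trusted if it was signed by the CA's key. A minimal sketch using Python's cryptography library (a modern stand-in for the Globus/OpenSSL tooling of 2003; the file names are hypothetical):

```python
# Hypothetical sketch: verify that a user certificate was signed by the CA.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

with open("ca-cert.pem", "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())
with open("user-cert.pem", "rb") as f:
    user_cert = x509.load_pem_x509_certificate(f.read())

# Raises InvalidSignature if the CA key did not sign this certificate.
ca_cert.public_key().verify(
    user_cert.signature,
    user_cert.tbs_certificate_bytes,
    padding.PKCS1v15(),                     # RSA certificates of that era
    user_cert.signature_hash_algorithm,
)
print("OK, issued by:", user_cert.issuer.rfc4514_string())
```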

11. GDMP. GDMP (Grid Data Mirroring Package), a program for replication of files and databases, has been installed and tested. GDMP was created for remote operations with distributed databases. It uses Grid certificates and works in a client-server scheme, i.e. changes in a database are replicated dynamically: the server periodically notifies the clients about changes in the database, and the clients fetch the updated files using the GSI-FTP command. GDMP is actively used for replication and is considered a candidate Grid standard for replicating changes in distributed databases.
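
The notify-and-fetch pattern described above can be illustrated with a toy sketch; this is not GDMP's actual API, just the idea of comparing a published catalogue of file versions against a local replica and pulling only what changed:

```python
# Toy sketch of the GDMP-style replication pattern; transfer() stands in
# for the real GSI-FTP transfer, and all names here are invented.

def changed_files(server_catalogue: dict, local_catalogue: dict) -> list:
    """Files that are new on the server or newer than the local copy."""
    return [
        name
        for name, version in server_catalogue.items()
        if local_catalogue.get(name, -1) < version
    ]

def transfer(name: str) -> None:
    print(f"gsiftp fetch: {name}")  # placeholder for the actual transfer

def replicate(server_catalogue: dict, local_catalogue: dict) -> None:
    for name in changed_files(server_catalogue, local_catalogue):
        transfer(name)
        local_catalogue[name] = server_catalogue[name]

# Example: only run01.db is stale locally, so only it is fetched.
replicate({"run01.db": 2, "run02.db": 1}, {"run01.db": 1, "run02.db": 1})
```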

12. First experience with common usage of the mass storage system in Dubna (ATL-2640).
OmniBack usage: tests on transferring data from Protvino (sirius-b.ihep.su, OS Digital UNIX Alpha Systems 4.0) to the ATL-2640 mass storage system in Dubna (dtmain.jinr.ru, OS HP-UX 11.0) were run to measure the throughput and stability of the system, including the communication channels and the mass storage (OmniBack disk agent in Protvino, OmniBack tape agent in Dubna). No abnormal terminations were recorded. The average transfer speed over all attempts was 480 KB/s, or about 1.68 GB/h; the maximum was 623 KB/s and the minimum 301 KB/s. (The distance between Dubna and Protvino is about 250 km; the Protvino-Moscow link is 8 Mbps.)
OmniStorage usage: storage of the data obtained during CMS Monte-Carlo mass production runs is provided with OmniStorage: the data volumes from SINP MSU (~1 TB) have been transferred to the ATL-2640 in Dubna, with access to the data via scp.
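
As a quick consistency check of the quoted rates (the slide does not say whether decimal or binary prefixes were meant, so both conversions are shown):

```python
# Convert the quoted average rate to GB/h under both conventions.
avg_kb_s = 480
seconds_per_hour = 3600
decimal_gb_h = avg_kb_s * seconds_per_hour / 1000**2   # ~1.73 GB/h
binary_gb_h = avg_kb_s * seconds_per_hour / 1024**2    # ~1.65 GB/h
print(f"{decimal_gb_h:.2f} GB/h (decimal), {binary_gb_h:.2f} GB/h (binary)")
# Both land close to the ~1.68 GB/h figure quoted on the slide.
```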

13. Monitoring. A complex of work on monitoring network resources, computing nodes, services and applications has been carried out. JINR staff members take part in the development of monitoring facilities for computing clusters with a large number of nodes (10,000 and more) used in the EU DataGrid infrastructure. Within the Monitoring and Fault Tolerance task they take part in the creation of a Correlation Engine system, which serves for promptly detecting abnormal states on cluster nodes and taking measures to prevent them. A Correlation Engine prototype is installed at CERN and at JINR for recording abnormal node states.
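
A toy sketch of the Correlation Engine idea: scan per-node metrics and flag nodes whose state looks abnormal so that a handler can react. The metric names and thresholds are invented for illustration:

```python
# Hypothetical thresholds; a real system would correlate many metrics.
THRESHOLDS = {"load": 50.0, "disk_used_pct": 95.0, "daemons_down": 1}

def abnormal_states(nodes: dict) -> dict:
    """Map node name -> list of metrics at or over their threshold."""
    alarms = {}
    for node, metrics in nodes.items():
        exceeded = [m for m, limit in THRESHOLDS.items()
                    if metrics.get(m, 0) >= limit]
        if exceeded:
            alarms[node] = exceeded
    return alarms

nodes = {
    "lxnode001": {"load": 12.0, "disk_used_pct": 40.0, "daemons_down": 0},
    "lxnode002": {"load": 71.5, "disk_used_pct": 97.2, "daemons_down": 0},
}
for node, metrics in abnormal_states(nodes).items():
    print(f"ALERT {node}: {', '.join(metrics)}")  # e.g. notify the operator
```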

14. Metadispetcher. The Metadispetcher program has been installed in the Russian EU DataGrid segment in cooperation with the Keldysh Institute of Applied Mathematics. Metadispetcher serves for planning job starts in a distributed Grid computing environment. The program has been tested and then modified to provide effective data transfer by means of the Globus toolkit.
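
A toy sketch of metascheduling in the spirit of Metadispetcher: pick the resource with the smallest estimated start cost, counting both the queue and the time to stage in the input data. All names and numbers are invented for illustration:

```python
# Hypothetical resource table: queue depth plus transfer rate from the UI.
RESOURCES = {
    "jinr-ce": {"queued_jobs": 4,  "mb_per_s_from_ui": 0.5},
    "sinp-ce": {"queued_jobs": 12, "mb_per_s_from_ui": 2.0},
}
AVG_JOB_MINUTES = 30.0  # assumed mean runtime of a queued job

def start_cost(res: dict, input_mb: float) -> float:
    """Estimated minutes until the job can start: queue wait + stage-in."""
    wait = res["queued_jobs"] * AVG_JOB_MINUTES
    stage_in = input_mb / res["mb_per_s_from_ui"] / 60.0
    return wait + stage_in

def dispatch(input_mb: float) -> str:
    return min(RESOURCES, key=lambda n: start_cost(RESOURCES[n], input_mb))

print(dispatch(input_mb=900.0))  # a large input favours the faster link
```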

15. A task of mass event generation (CMKIN) for the CMS experiment at LHC: the solution proposed. [Diagram of the DOLLY scheme: RefDB at CERN supplies jobs and environment to DOLLY on the UI (with a mySQL DB, BOSS and IMPALA); jobs go through the EDG-RB to a CE with a batch manager; the job executer and CMKIN run on worker nodes WN1, WN2, ..., WNn sharing data over NFS.]

16. Fundamental goal of the LCG: to help the experiments' computing projects get the best, most reliable and accurate physics results from the data coming from the detectors. Phase 1 (2002-05): prepare and deploy the environment for LHC computing. Phase 2 (2006-08): acquire, build and operate the LHC computing service.

17. The protocol between CERN, Russia and JINR on participation in the LCG Project was approved in 2003. The tasks of the Russian institutes in the LCG:
• LCG software testing;
• evaluation of new Grid technologies (e.g. Globus Toolkit 3) in the context of their use in the LCG;
• event generators repository and database of physical events: support and development;
• LCG infrastructure creation in Russia.
Since April 2003, groups for the directions listed above have been created and have begun their work.

18. The virtual LHC Computing Centre. [Diagram: building a Grid from collaborating computer centres, which together serve the experiment virtual organisations (Alice VO, CMS VO).]

19.-21. Russian LCG Portal. [Three slides of portal screenshots.]

22.-24. Monitoring Facilities. [Three slides of monitoring screenshots.]

25. EGEE. The EGEE (Enabling Grids for E-science in Europe) project has been accepted by the European Commission (6th Framework Programme). The aim of the project is to create a global pan-European computing infrastructure of the Grid type. The main goal for Russia is the integration of the Russian Grid segments, created during the past two years, into the European Grid infrastructure to be developed in the framework of the EGEE project.

26. Russian Data Intensive GRID (RDIG) Consortium: the EGEE Federation. Eight Russian institutes formed the RDIG (Russian Data Intensive GRID) consortium as a national federation in the EGEE project:
• IHEP: Institute of High Energy Physics (Protvino);
• IMPB RAS: Institute of Mathematical Problems in Biology (Russian Academy of Science, Pushchino);
• ITEP: Institute of Theoretical and Experimental Physics (Moscow);
• JINR: Joint Institute for Nuclear Research (Dubna);
• KIAM RAS: Keldysh Institute of Applied Mathematics (Russian Academy of Science, Moscow);
• PNPI: Petersburg Nuclear Physics Institute (Russian Academy of Science, Gatchina);
• RRC KI: Russian Research Center “Kurchatov Institute” (Moscow);
• SINP MSU: Skobeltsyn Institute of Nuclear Physics (Moscow State University, Moscow).
The Russian memorandum on the creation of a Grid-type computing infrastructure for distributed processing of huge data volumes was signed in September 2003 by the directors of the eight institutes.

27. Russian Contribution to EGEE. RDIG as an operational and functional part of the EGEE infrastructure (CIC, ROC, RC; integration with EGEE). Specific service activities:
• SA1 - Creation of Infrastructure
• SA2 - Network Activities
• NA2 - Dissemination and Outreach
• NA3 - User Training and Induction
• NA4 - Application Identification and Support
