
High Performance Computing Center at JINR 1997



  1. High Performance Computing Center at JINR 1997 LCTA “Status Report” Release 1, 1997

  2. Telecommunication systems:
  * External communication channels (Internet);
  * High-speed JINR backbone.
  Systems for powerful computations and mass data processing:
  * General high-performance server;
  * Clusters of workstations of JINR laboratories and experiments;
  * Computing farms (PC farms).
  Data storage system:
  * File server system based on AFS;
  * Mass storage system;
  * Information servers and database servers.
  Software support systems:
  * System for application creation and maintenance;
  * Visualization systems.

  3. [Network diagram: the Route Server, an Auxiliary Network Manager and the Network Manager connected to a central ATM switch over 155 Mb/s links, with numbered 10-100 Mb/s links (ports 1-12) fanning out to the segments.]

  4. Statistics for Convex

  5. A Computing Farm for Simulation and Mass Processing of Homogeneous Physical Data
  The main goal: to construct at JINR a PC farm offering new possibilities for simulating physics experiments and for mass data processing and analysis.
  The main result: JINR physicists will be able to participate on an equal footing in the data analysis of the large international experimental programmes in high-energy physics.
  The main requirements:
  * CPU time (on a Pentium 200): >10^9 s/year
  * RAM per processor: up to 128 MB
  * HDD capacity: 200-500 GB
  * Mass storage system: 4-12 DLT drives
  The reason for this choice: the simplest and cheapest solution for these very specific problems in high-energy physics.
  Customers: Laboratory of Particle Physics; Laboratory of High Energy Physics.

  6. Two stages of the project realization:
  1. PC-farm segment (2 years): 32 Pentium Pro 200 processors (4 for interactive use); 128 MB per processor; 200 GB of HDD storage (RAID and SCSI); 4 DLT drives; a 16-port 100 Mbit Ethernet switch; OS Solaris x86 v2.6 or Linux; Andrew File System (AFS); Network Queueing System (NQS); Fortran, C++. The total cost of this stage is 187,600 USD. Several funding sources are needed to realize this stage of the project: the "Perspective Information Technologies" subprogramme fund of the Russian Ministry of Science and Technologies, the JINR budget, and special funds of the physics collaborations.
  2. Full PC farm (3-4 years): 128 Pentium processors (8 for interactive use); 128-256 MB per processor; 500 GB of HDD storage; a mass storage system with 12 DLT drives; new Ethernet technology; new software technology. The estimated equipment cost of this stage is about 260,000 USD. Additional funding sources are needed.
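A back-of-the-envelope check (not part of the original slides) shows how the >10^9 s/year CPU-time request of slide 5 maps onto the two farm stages above; the utilisation factor is an assumption, everything else comes from the slides:

```python
# Rough PC-farm sizing check. Processor counts and the CPU-time
# request are taken from slides 5 and 6; the 90% average batch
# utilisation is an assumed, illustrative figure.
SECONDS_PER_YEAR = 365 * 24 * 3600      # ~3.15e7 s

cpu_time_needed = 1e9                   # s/year on a Pentium 200 (slide 5)
utilisation = 0.9                       # assumed average batch utilisation

batch_cpus_stage1 = 32 - 4              # stage 1: 4 of 32 CPUs are interactive
capacity_stage1 = batch_cpus_stage1 * SECONDS_PER_YEAR * utilisation

batch_cpus_stage2 = 128 - 8             # stage 2: 8 of 128 CPUs are interactive
capacity_stage2 = batch_cpus_stage2 * SECONDS_PER_YEAR * utilisation

print(f"stage 1: {capacity_stage1:.2e} CPU-s/year")
print(f"stage 2: {capacity_stage2:.2e} CPU-s/year")
```

Under these assumptions the first stage delivers roughly 8x10^8 CPU-s/year, just short of the stated request, while the full 128-processor farm exceeds it several times over.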

  7. SUN/SPARC Cluster at JINR as an OS Solaris Environment Server
  * The cluster ultra.jinr.dubna.su runs under OS Solaris 2.5.1.
  * JINR site licenses for Fortran F77 4.0 and C++ 4.1 give all JINR specialists a complete working environment under OS Solaris, including the use of current versions of CERNLIB.
  * The F77 4.0 and C++ 4.1 site licenses are available for any JINR host running OS Solaris.
  * The latest versions of many FSF/GNU products widely used at JINR are properly installed on the cluster and can be installed from it on any other JINR host running OS Solaris.

  8. CMS Cluster at JINR
  Hardware: 3 SPARC stations (one Ultra 140 SPARCstation and two SPARCstation 20s)
  Disk space: 24 GB
  Software: OS Solaris 2.5.1; C 4.0, C++ 4.1 and F77 4.0 compilers
  Number of users: 37
  The computational environment is the same as on the CERN CMS cluster (cms.cern.ch).
  The CMS cluster at JINR supports both simulation and data-processing tasks.

  9. MAIN DATA BASES
  1. "Physics Information Servers and Data Bases":
  * Accelerators DB
  2. JINR Library reference DB system
  3. DB of JINR IP addresses
  4. Russian WWW servers DB
  5. Central services DB for administrative and economic activities:
  * Publishing Department DB
  * Staff
  * Topical Plan
  * etc.

  10. GENERAL WWW SUPPORT
  * "Physics Information Servers and Data Bases":
  * Physics servers and services around the world, giving access to a physics encyclopedia, educational resources, news in physics, data and tables of physical constants, etc.
  * Physics institutions around the world.
  * Physics conferences, workshops and summer schools.
  * Publishing offices' home pages, giving access to book and journal catalogs, their tables of contents, abstracts of articles, etc.
  * The High Energy Physics (HEP) section: current news, educational materials, information about HEP centers, program and service packages for data analysis and modelling, detector simulation, event generators, etc.
  * Low and Medium Energy Physics.
  * CMS-RDMS web site.
  * Main JINR web site:
  * General info
  * Main documents
  * Journal "JINR Rapid Communications"
  * Newspaper "DUBNA"
  * JINR Information Bulletin
  * JINR Scientific Programme for the Years 1997-1999
  * etc.
  * LCTA main web site.

  11. RDMS CMS WWW SERVER
  The CMS information system relies heavily on the World Wide Web (WWW). The web server http://sunct2.jinr.dubna.su was designed at LCTA and contains information on the activities of the RDMS CMS collaboration. It was adopted as the official web server of the RDMS CMS collaboration by the RDMS CMS Collaboration Board in June 1997. The CERN CMS web servers CMSDOC and CMSINFO now link to http://sunct2.jinr.dubna.su. LCTA is responsible for the further development and support of the RDMS CMS web server.

  12. PARTICIPATION IN RUSSIAN INTERDEPARTMENTAL PROGRAMMES
  1. "Creation of a National Network of Telecommunications for Science and Higher Schools":
  * development of the RBNet network and terrestrial channels (section 3.3.2 of the programme, jointly with ROSNIIROS);
  * development of the RUHep network: creation of the Dubna-Moscow link (jointly with the Research Institute for Nuclear Physics, MSU);
  * head organization within the BAPHYS project: creation of a database network for nuclear physics.
  2. Working out an interdepartmental programme "Creation of High-Performance Computer Centres in Russia" (funds have been allocated for organizing the HPCC at JINR).
  3. A project "A Computing Farm for Solving the Problems of Modeling and Mass Processing of Homogeneous Physical Information" has been worked out for participation in the programme "Advanced Information Technologies".

  13. PLAN FOR THE YEAR 1998
  * Finish the ATM project: creation of ATM-based backbones in all JINR laboratories.
  * Provide reliable operation of the external communication links by selecting the service provider through a tender.
  * Given the limited funds, the most reasonable strategy for developing JINR's external computer communications is to help create a Moscow node of the high-speed European backbone (the TEN-34 project), as well as a high-speed link to the USA, within the framework of the projects for developing the national network of computer telecommunications for science and higher schools.
  * Start up the first stage of the HPCC at JINR: SPP200 + DLT robot + mass memory system.
  * Realize the first step in creating the computing farm.
  * Work out a project for an electronic message-exchange complex within JINR, based on a unified database of all registered JINR users.
  * Install an electronic library server based on the HPC servers (full-text databases, photo-archive databases, physics databases, etc.).
  * Design a centralized backup system for the general JINR servers using the HPC facilities.
  * Pursue a policy of software standardization and licensing to create workplaces for users of the JINR networking information-computing infrastructure (NICE95/NT).
  * Further develop algorithms and methods for the research under way at JINR in the field of nuclear physics.

  14. Development Strategy
  * Link to Europe/USA: from 2 Mbps to 155 Mbps.
  * JINR LAN: full transfer to ATM.
  * Mass storage: from gigabytes to petabytes.
  * Tele/videoconferencing: at least one room in each laboratory.
  * PC farms for the large experiments ATLAS, CMS, STAR, etc.
  * High-performance computing (non-budgetary sources).
  * OO and CASE tools; standard software, algorithms and methods.

  15. In accordance with the Memorandum of Understanding on collaboration between the Laboratory of Computing Techniques and Automation of the Joint Institute for Nuclear Research (Dubna, Russia) and the SPS/LEP Division of the European Organization for Nuclear Research (Geneva, Switzerland), the following was done during 1997:
  1. TDM monitoring system: http://hpslz24.cern.ch:8080/cgi-bin/tdm_protect.vi
  2. MOPOS timing-diagnostics system: http://hpslz24.cern.ch:8080/examples/ex_mopos.html
  3. LabVIEW on the web: http://hpslz24.cern.ch:8080/
  4. TCP/IP-based message handler (LV_mhm): http://hpslz24.cern.ch:8080/others/rem_exec.html
  5. Synchronisation of a LabVIEW application with an accelerator's cycle: http://hpslz24.cern.ch:8080/sync/LV_sync.html
  6. Object-oriented technology in LabVIEW programming (report at ICALEPCS'97, Beijing, 1-7.11.1997): http://hpslz24.cern.ch:8080/others/LV_oop.html
  Future developments and projects for 1998 (a few points have already been completed):
  1. Beam loss monitoring system: http://hpslz24.cern.ch:8080/examples/BML.html
  2. Common LabVIEW environment for the operational machines: http://hpslz24.cern.ch:8080/others/LV_slops.html
  3. Evaluation of ACE: http://hpslz24.cern.ch:8080/others/LV_ace.html
