Computing in HEP (plan)


Presentation Transcript


  1. Computing in HEP (plan) • Introduction • The scale of computing in LHC • Regional Computing • Russian Regional Computing Facility • Institute Level Computing • Computing cluster at PNPI • Computing cluster at Stony Brook • Conclusion. Reporter: Andrei.Chevel@pnpi.spb.ru -- From: PNPI

  2. Introduction • PNPI - St. Petersburg Nuclear Physics Institute (http://pnpi.spb.ru) • High Energy Physics Division • Neutron Physics Department • Theoretical Physics Division • Molecular and Biological Physics Division

  3. CSD responsibilities • HEPD Centralized Computing (http://www.pnpi.spb.ru/comp_home.html) • Computing Cluster • Computing Server • HEPD Local Area Network • HEPD and PNPI Connectivity (excluding the terrestrial channel) • HEPD and PNPI InfoStructure

  4. High Energy Physics Division Computer LAN • About 200 hosts • Most of them are 10 Mbit/s • The central part of the LAN consists of 100 Mbit/s segments (full duplex) • Based on 3Com Switch 3300 units • The LAN is distributed over 5 buildings • Effective distance is about 800 m • Fiber optic cable is used between buildings

  5. (figure slide)

  6. (figure slide)

  7. (figure slide)

  8. Related Proposals • CERN GRID site (http://grid.web.cern.ch/grid/) • Particle Physics Data Grid (http://www.cacr.caltech.edu/ppdg/) • High Energy Physics Grid Initiative (http://nicewww.cern.ch/~les/grid/welcome.html) • MONARC Project (http://www.cern.ch/MONARC/)

  9. What is the inspiration? • Last year (1999) that book became famous. • Immediately afterwards, a range of proposals was submitted to various agencies and funds. • The main idea is to create a World Wide Computing Infrastructure.

  10. (figure slide)

  11. RRCF in St. Petersburg • Possible partners: • Petersburg Nuclear Physics Institute; • St. Petersburg University; • St. Petersburg Technical University; • Institute for High Performance Computing & Data Bases; • RUNNET?

  12. Russian Regional Computing Facility • Regional Computing Centre for LHC • CSD takes part in the Russian activity of creating the Russian Regional Computing Centre for LHC (see http://www.pnpi.spb.ru/RRCF/RRCF)

  13. RRCF in St. Petersburg (figure slide)

  14. RRCF in St. Petersburg: Computing Power • Total computing capacity about 90K SPECint95 (http://nicewww.cern.ch/~les/monarc/base_config.html) • A Pentium III/700 delivers about 35 SPECint95 • About 2570 processors • or about 640 four-processor machines • or 640 / 4 institutes = about 160 machines per institute (see the sizing sketch below)
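
The sizing on this slide is straightforward arithmetic; a minimal Python sketch reproducing it follows. The 90K SPECint95 target and the 35 SPECint95 rating of a Pentium III/700 are the slide's figures; the four-processor boxes and the split over four partner institutes are the slide's assumptions.

    # Back-of-the-envelope sizing for the RRCF, using the figures from the slide.
    total_capacity = 90_000   # SPECint95: total target capacity
    per_cpu = 35              # SPECint95: approximate rating of one Pentium III/700
    cpus_per_box = 4          # slide's assumption: 4-processor machines
    institutes = 4            # slide's assumption: four partner institutes

    cpus = total_capacity / per_cpu        # ~2571 processors
    boxes = cpus / cpus_per_box            # ~643 four-processor machines
    per_institute = boxes / institutes     # ~161 machines per institute

    print(f"{cpus:.0f} CPUs -> {boxes:.0f} machines -> {per_institute:.0f} per institute")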

  15. Computing Cluster (http://www.pnpi.spb.ru/pcfarm) • Nodes: PC/II/400/128 (EIDE 6 GB), PC/III/450/256 (EIDE 6 GB), PC/II/266/256 (EIDE 5 GB) • Server: dual PII/450/512 (Ultra2 Wide SCSI 18 GB) • Network: 100 Mb switch • Software: RedHat 6.1, CODINE, CERNlib, ROOT (a batch-submission sketch follows below)
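
Work on such a cluster would normally go through the CODINE batch system listed above rather than being run interactively on a node. The sketch below is an illustration only, assuming a qsub-style submission interface like the one CODINE provides; the job script name and the command it runs are placeholders, not taken from the PNPI setup.

    import subprocess

    # Hypothetical job script: one analysis step to be run on whichever node is free.
    job_script = "run_analysis.sh"                      # assumed name, not from the slide
    with open(job_script, "w") as f:
        f.write("#!/bin/sh\n")
        f.write("./analysis input.dat output.hbook\n")  # placeholder command

    # Hand the script to the batch system; qsub queues it for execution on the cluster.
    subprocess.run(["qsub", job_script], check=True)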

  16. BCFpc hardware (picture)

  17. Another Example of a Computing Cluster • University at Stony Brook, Department of Chemistry, Laboratory of Nuclear Chemistry • DEC Alpha 4100 (500 MHz) • 32 machines (dual PIII/500, 256 MB) • Tape robot for 3 TB • RAID array of 1.5 TB

  18. Problems • Security: against attacks from the Internet and the intranet; against unplanned data loss • Keeping the software base up to date across the whole cluster • Saving network bandwidth by keeping a local cache of experimental data (see the cache sketch below) • An appropriate batch system
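
The bandwidth point above amounts to a cache-on-miss policy: fetch a data file from the remote store only if it is not already on local disk. A minimal sketch follows; the cache directory and remote URL are purely hypothetical.

    import os
    import shutil
    import urllib.request

    CACHE_DIR = "/data/cache"              # hypothetical local cache area
    REMOTE = "http://example.org/expdata"  # hypothetical remote data server

    def get_data_file(name):
        """Return a local path for a data file, fetching it only on a cache miss."""
        local_path = os.path.join(CACHE_DIR, name)
        if not os.path.exists(local_path):             # miss: pull the file once
            os.makedirs(CACHE_DIR, exist_ok=True)
            url = f"{REMOTE}/{name}"
            with urllib.request.urlopen(url) as resp, open(local_path, "wb") as out:
                shutil.copyfileobj(resp, out)
        return local_path                              # hit: served from local disk

Every later job that asks for the same file reads it from local disk, which is exactly the bandwidth saving the slide is after.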

  19. Possible Plan for a Small Physics Laboratory • Install the hardware: 3-5 PCs (about 0.8-1.3 GHz, 0.5 GB of main memory), a DLT stacker based on DLT-8000 drives, a switch with a 1 Gbit uplink, etc. • Install the software: Linux, CERNlib, Objectivity/DB, GLOBUS, etc. • Prepare and test logical connectivity to CERN and to the Regional Computing Facility for LHC (see the connectivity sketch below)
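
The last step, testing logical connectivity, can start as simply as checking that the relevant hosts accept TCP connections. A minimal sketch; the endpoint list is illustrative only, and the RRCF host in particular is hypothetical.

    import socket

    # Illustrative endpoints: a CERN web server and a placeholder for the regional centre.
    endpoints = [
        ("www.cern.ch", 80),
        ("rrcf.example.org", 80),   # hypothetical RRCF host
    ]

    for host, port in endpoints:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as err:
            print(f"{host}:{port} NOT reachable ({err})")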

  20. Conclusion • A relatively new situation for many small and mid-range laboratories (like PNPI): • The main direction of HEP computing at PNPI is to create a good front end to high-performance computing facilities plus MSS (mass storage) outside the Institute. • We have to pool all available financial, technical, and administrative resources and plan to work in close collaboration.
