
Grid Efforts in Belle




1. Grid Efforts in Belle
Hideyuki Nakazawa (National Central University, Taiwan), Belle Collaboration, KEK

2. Outline
• Belle experiment
• Computing system in Belle
• LCG at KEK and Belle VO status
• Introduction of SRB
• Summary

3. Belle Experiment
[Aerial photo of KEK: Mt. Tsukuba, the 3 km KEKB ring, the Linac, and the Belle hall]
A "B factory" experiment at KEK (Japan).
• KEKB accelerator: asymmetric e+e- collider, 3.5 GeV on 8 GeV, 3 km circumference, 22 mrad crossing angle, continuous injection
• Belle detector: general-purpose detector with 7 sub-detectors

4. Belle Collaboration
13 countries, 57 institutes, ~400 collaborators; large contribution from Taiwan.
Aomori U., BINP, Chiba U., Chonnam Nat'l U., U. of Cincinnati, Ewha Womans U., Frankfurt U., Gyeongsang Nat'l U., U. of Hawaii, Hiroshima Tech., IHEP Beijing, IHEP Moscow, IHEP Vienna, ITEP, Kanagawa U., KEK, Korea U., Krakow Inst. of Nucl. Phys., Kyoto U., Kyungpook Nat'l U., EPF Lausanne, Jozef Stefan Inst. / U. of Ljubljana / U. of Maribor, U. of Melbourne, Nagoya U., Nara Women's U., National Central U., Nat'l Kaohsiung Normal U., National Taiwan U., National United U., Nihon Dental College, Niigata U., Osaka U., Osaka City U., Panjab U., Peking U., U. of Pittsburgh, Princeton U., RIKEN, Saga U., USTC, Seoul National U., Shinshu U., Sungkyunkwan U., U. of Sydney, Tata Institute, Toho U., Tohoku U., Tohoku Gakuin U., U. of Tokyo, Tokyo Inst. of Tech., Tokyo Metropolitan U., Tokyo U. of Agri. and Tech., Toyama Nat'l College, U. of Tsukuba, Utkal U., VPI, Yonsei U.

5. Luminosity
KEKB produces a large number of B mesons: 1 fb^-1 ~ 10^6 BB pairs.
• Peak luminosity: 1.7118 x 10^34 cm^-2 s^-1
• 1 fb^-1 ~ 1 TB / day
• Integrated luminosity: 710 fb^-1
• Crab cavities installed, being tuned now. Luminosity doubled?
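As a cross-check of the "1 fb^-1 ~ 10^6 BB" rule of thumb, a back-of-the-envelope estimate; the ~1.1 nb Y(4S) production cross section is a standard value not quoted on the slide:

    N(BB) ~ L_int x sigma(e+e- -> Y(4S))
          ~ 1 fb^-1 x 1.1 nb
          = 1 fb^-1 x 1.1x10^6 fb
          ~ 1.1 x 10^6 BB pairs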

6. History of the Belle computing system

7. Overview of the B Computer
[Diagram: workgroup servers (some reserved for Grid), storage, the online reconstruction farm, and computing servers]

8. Belle System
• Storage system (disk): 1 PB
• Storage system (HSM): 3.5 PB
• Computing servers: ~42,500 SPECint2000

9. Data Production at Belle
• Raw data from the online reconstruction farm, plus "DST" data from production, are stored on the HSM (~1 PB); DST production uses ~2 THz (to finish in 2 months).
• MC generation and detector simulation use ~2.5 THz (to finish in 6 months).
• Loose selection criteria produce "MDST" data (four-vectors, PID info, etc.) on non-HSM storage: 120 TB for hadronic events plus other skims, at 500 fb^-1.
• Users' analyses run on the MDST data.

10. Why Grid in Belle?
Maybe we should start thinking about the Grid (just my feeling):
• No urgent requirement
• Belle is shifting to precision and exotic measurements
• More MC statistics are necessary for precision measurements
• New skims for exotic processes
• A lesson in the de facto standard

11. Grid Introduction Strategy
• Strong support from the KEK CRC
• Start with MC production to accumulate experience, then gradually shift to handling experimental data (a submission sketch follows this list)
• Recruitment: some collaborators already running LCG are preparing to join the Belle VO
• Experiencing the Grid's potential may change Belle's perception of it?
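To make the MC-production starting point concrete, a minimal sketch of submitting one task to the belle VO with the gLite 3.0 WMS command-line tools mentioned on the following slides; the JDL file, wrapper script, and arguments are hypothetical illustrations, not Belle's actual production setup.

$ cat mcprod.jdl
Executable          = "run_mcprod.sh";
Arguments           = "exp55 run100";
StdOutput           = "mcprod.out";
StdError            = "mcprod.err";
InputSandbox        = {"run_mcprod.sh"};
OutputSandbox       = {"mcprod.out", "mcprod.err"};
VirtualOrganisation = "belle";

$ voms-proxy-init --voms belle          # obtain a belle VO proxy
$ glite-wms-job-submit -a -o jobids.txt mcprod.jdl
$ glite-wms-job-status -i jobids.txt    # poll until the job is Done
$ glite-wms-job-output -i jobids.txt    # retrieve the output sandbox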

12. LCG Deployment at KEK
Operation is supported by great efforts of the APROC members at ASGC.
JP-KEK-CRC-01
• Since Nov. 2005; registered in the GOC, in operation as WLCG
• Site role: practice for the production system JP-KEK-CRC-02; test use among university groups in Japan
• Resources and components: SL 3.0.5 with gLite 3.0 or later; CPU: 14; storage: ~1.5 TB; FTS, FTA, RB, MON, BDII, LFC, CE, SE
• Supported VOs: belle, apdg, g4med, ppj, dteam, ops and ail
JP-KEK-CRC-02
• Since early 2006; registered in the GOC, in operation as WLCG
• Site role: more stable services, based on KEK-1 experience
• Resources and components: SL or SLC with gLite 3.0 or later; CPU: 48; storage: ~1 TB (w/o HPSS); full components
• Supported VOs: belle, apdg, g4med, atlasj, ppj, ilc, dteam, ops and ail
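For orientation, a user can ask the gLite information system what these sites expose to a VO; a minimal sketch (lcg-infosites is standard on gLite UIs, though its output format varies by release):

$ lcg-infosites --vo belle ce    # computing elements visible to the belle VO
$ lcg-infosites --vo belle se    # storage elements and their free/used space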

13. Belle VO
• 9 sites
• Belle software is installed at 3 sites (KEK x2, ASGC)
• ~60 CPUs
• 2 TB storage
• MC production ongoing
• Installation manual ready
• GFAL works with the Belle software (see the sketch below)
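GFAL gives the Belle software POSIX-like access to Grid storage, and the companion lcg-utils expose the same replica operations on the command line. A minimal sketch, with a hypothetical SE hostname and LFC paths:

$ voms-proxy-init --voms belle
# register a local MC output file on a storage element and in the catalogue
$ lcg-cr --vo belle -d se01.kek.jp -l lfn:/grid/belle/mc/exp55/mcprod-100.mdst file:$PWD/mcprod-100.mdst
# list replicas, then copy one back to local disk
$ lcg-lr --vo belle lfn:/grid/belle/mc/exp55/mcprod-100.mdst
$ lcg-cp --vo belle lfn:/grid/belle/mc/exp55/mcprod-100.mdst file:/tmp/mcprod-100.mdst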

14. Total Number of Jobs at KEK in 2006
[Bar charts of jobs per VO at JP-KEK-CRC-01 (scale up to ~1,400) and JP-KEK-CRC-02 (scale up to ~400), with the Belle contribution highlighted]

15. Total CPU Time at KEK in 2006 (normalized by 1 kSI2K)
[Bar charts of CPU time in hrs·kSI2K per VO at JP-KEK-CRC-01 (scale up to ~12,000) and JP-KEK-CRC-02 (scale up to ~4,000), with the Belle contribution highlighted]

16. Logical Site Overview
[Network diagram: the KEK-CC Grid LAN hosts KEK-1, KEK-2 and the SRB, SRB-DSI, and MCAT servers behind the KEK firewall, reached from SuperSINET through the KEK DMZ. On the Belle side, users "scp input" from local files/HSM onto the work servers and "scp output" back; SRB-DSI bridges the Grid and SRB worlds.]
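For reference, the SRB client side is the set of "Scommands", which have a POSIX-like feel; a minimal sketch of moving one file through the MCAT-managed store, with a hypothetical zone/collection path:

$ Sinit                                           # authenticate to the SRB server / MCAT
$ Sput mdst-exp55.dat /KEK/home/belle.kek/mdst/   # upload into a collection
$ Sls /KEK/home/belle.kek/mdst                    # browse the collection
$ Sget /KEK/home/belle.kek/mdst/mdst-exp55.dat .  # fetch the file back
$ Sexit                                           # end the session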

17. SRB Introduction Schedule
[Timeline: planning, preparation, and construction phases involving the Grid, Belle, networking, and KEKCC/IBM teams, covering the MCAT, SRB, firewall, and SRB-DSI setup, followed by a test connection and the start of operation]

18. Belle Grid Deployment: Future Plan
Preliminary design, to be deployed in the future: federate with Japanese universities.
• KEK hosts the Belle experiment and acts as Tier-0.
• Universities with reasonable resources run full LCG sites (Tier-1); universities without resources run a UI only.
• Central services such as VOMS, LFC and FTS are provided by KEK, which also covers the web information and support services (a usage sketch follows this list).
• Grid operation is shared with 1-2 staff members at each full LCG site.
[Diagram: JP-KEK-CRC-02/03 as Tier-0 at the centre, surrounded by Tier-1 universities and UI-only universities]
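A minimal sketch of what a university UI user would do against the KEK-hosted central services under this design; all hostnames and paths are hypothetical placeholders:

$ voms-proxy-init --voms belle    # authenticate against the KEK-hosted VOMS
$ export LFC_HOST=lfc.cc.kek.jp   # point at the KEK-hosted file catalogue
$ lfc-ls -l /grid/belle/mc/exp55
# schedule a bulk replica transfer via the KEK-hosted FTS
$ glite-transfer-submit -s https://fts.cc.kek.jp:8443/glite-data-transfer-fts/services/FileTransfer srm://se01.kek.jp/belle/mc/exp55/mcprod-100.mdst srm://se.univ.example.jp/belle/mc/exp55/mcprod-100.mdst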

19. Summary
• The Belle VO has been launched
• Belle software is installed at 3 sites
• The KEK sites are mainly used by Belle
• MC production is ongoing
• SRB is being introduced

20. Additional (Belle's) Resources
We now have a high-performance computer system, but we did not suddenly switch to the "less expensive" system; we had been testing such systems for several years:
• Linux-based PC clusters
• S-ATA disk based RAID drives (20 units / 20 TB)
• S-AIT tape drives
These resources have been essential for Belle (production/analysis): 350 TB of disk, 1.5 PB of tape, 934 CPUs (for comparison, the B computer: 1000 TB of disk, 3.5 PB of tape, 2280 CPUs).

21. Belle Grid Deployment Plan
We are planning a two-phase deployment for the Belle experiment.
• Phase 1: Belle users use the VO in JP-KEK-CRC-02, shared with other VOs. JP-KEK-CRC-02 is part of the "Central Computing System" maintained by the IBM corporation. Available resources: CPU: 72 processors (Opteron); SE: 200 TB (with HPSS).
• Phase 2: deployment of JP-KEK-CRC-03 as the Belle production system, using part of the "B Factory Computer System" resources. Available resources (maximum estimate): CPU: 2200 CPUs; SE: 1 PB (disk) + 3.5 PB (HSM). This system will be maintained by the CRC and the NetOne corporation.

22. Computing Servers
• DELL PowerEdge 1855: Xeon 3.6 GHz x2, 1 GB memory; made in Taiwan [Quanta]
• WG: 80 servers (for login), Linux (RHEL)
• CS: 1128 servers, Linux (CentOS)
• Total: 45,662 SPEC CINT2000 Rate, equivalent to 8.7 THz
• 1 enclosure = 10 nodes / 7U space; 1 rack = 50 nodes
• CPU will be increased by x2.5 (i.e. to ~110,000 SPEC CINT2000 Rate) in 2009

23. Storage System (Disk)
• Total 1 PB with 42 file servers (1.5 PB in 2009)
• SATA II 500 GB disks x ~2000 (~1.8 failures/day?)
• 3 types of RAID (to avoid problems): SystemWorks MASTER RAID B1230, 16 drives/3U/8 TB (made in Taiwan); ADTX ArrayMasStor LP, 15 drives/3U/7.5 TB; Nexsan SATABeast, 42 drives/4U/21 TB
• HSM = 370 TB, non-HSM = 630 TB

24. Storage System (Tape)
• HSM: PetaSite (SONY): 3.5 PB, 60 drives, 13 servers; S-AIT, 500 GB/volume, 30 MB/s per drive; PetaServe software
• Backup: 90 TB, 12 drives, 3 servers; LTO-3, 400 GB/volume; NetVault software
