GRID Deployment Status and Plan at KEK


Presentation Transcript


  1. GRID Deployment Status and Plan at KEK
  ISGC2007, Takashi Sasaki, KEK Computing Research Center

  2. Who are we?
  • KEK (High Energy Accelerator Research Organization)
    • An Inter-University Research Institute Corporation since 2004; previously a national institute
    • Still 100% funded by the government
  • Mission
    • Carry out research related to accelerator science: particle physics, materials science, life science and so on
    • Support collaborating universities
      • Universities are free to make their own choices and can be independent of us
  Takashi.Sasaki@kek.jp

  3. High Energy Physics in Japan
  • Major high energy physics activities in Japan
    • Terminated in JFY2005: K2K experiment at Kamiokande and KEK
    • Active experiments: Belle experiment at KEK; KamLAND at Kamioka; CDF at Fermilab, USA
    • Under construction: J-PARC; T2K experiment at Tokai and Kamioka; ATLAS and ALICE at the LHC
    • Future plans: SuperB Factory; International Linear Collider (ILC)
  • HEPnet-J: the High Energy Physics Network in Japan
    • KEK provides the network facility, HEPnet-J, on SINET/SuperSINET (NII).
  [Map: HEPnet-J sites on SINET/SuperSINET: KEK (BELLE), J-PARC (JAEA, Tokai), Kamioka (SuperKamiokande, KamLAND), ICRR/Tokyo, U. of Tokyo, Tokyo Inst. of Tech., Tokyo Metropolitan U., Waseda U., Tsukuba U., Tohoku U., Niigata U., Kanazawa U., Shinshu U., Nagoya U., Kyoto U., Osaka U., Osaka City U., Kobe U., Okayama U., Hiroshima U., Nara Women's U., RIKEN; overseas links to CDF at FNAL and ATLAS at the LHC]
  Takashi.Sasaki@kek.jp

  4. [Map: sites of KEK, the High Energy Accelerator Research Organization: J-PARC at Tokai; the B-Factory, LC Test Facility and Photon Factory at Tsukuba; Narita Airport shown for reference]
  Takashi.Sasaki@kek.jp

  5. Belle Experiment
  • 13 countries, 57 institutes, ~400 collaborators
  • Observation of CPV in the B meson system
  • KEKB e+ e- collider: 8 GeV e-, 3.5 GeV e+
  [Diagram: KEKB ring with the SCC RF (HER) and ARES RF cavities (LER) and the e+ source; the decay B0 -> J/psi KS is marked]
  Takashi.Sasaki@kek.jp

  6. Strategy on GRID deployment
  • Highest priority is support for Belle and ILC
    • LCG deployment
    • Help universities to get started
  • R&D
    • Preparing for the ILC: Japan has a strong will to host the ILC
    • NAREGI: the Japanese flagship project to develop GRID middleware
      • May substitute for LCG/gLite in the future
      • GRID interoperability is the key
  Takashi.Sasaki@kek.jp

  7. LCG at KEK
  • Related computer systems
    • Central Information System, since Feb. 20, 2006: Central Computer System, GRID System, Mail System, etc.
    • B Factory Computer System, since Mar. 23, 2006: CPU: 45,662 SI2K; disk: 1 PB; tape: 3.5 PB
  • 1st phase
    • KEK Grid CA
      • Approved by the APGrid PMA and in operation since Jan. 2006; the third official Grid CA in Japan
      • The NAREGI CA software was modified for use at KEK
      • KEK employees and their collaborators are eligible for this service
    • LCG and SRB for production use are available on the GRID System
      • Not for public use, but for supporting projects
      • WN: 36 nodes x 2 = 72 CPUs
      • Storage: disk (2 TB) + HPSS (~200 TB)
      • Supported VOs: Belle, APDG, Atlas_J
      • In operation since May 2006
  • 2nd phase
    • Since Jan. 2007, LCG is being deployed for Belle production (a job-submission sketch follows this slide).
  Takashi.Sasaki@kek.jp
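  To make the job path concrete, here is a minimal sketch of how a Belle user on a UI node could submit work to these resources. It assumes the gLite 3.0-era commands voms-proxy-init and glite-job-submit; the JDL contents, script and file names are hypothetical examples, not the actual Belle production setup.

    #!/usr/bin/env python
    # Minimal sketch: submit a (hypothetical) Belle job to the KEK LCG site.
    # Assumes a gLite 3.0-era UI with voms-proxy-init and glite-job-submit.
    import subprocess

    JDL = """\
    Executable          = "belle_sim.sh";
    Arguments           = "run001";
    StdOutput           = "sim.out";
    StdError            = "sim.err";
    InputSandbox        = {"belle_sim.sh"};
    OutputSandbox       = {"sim.out", "sim.err"};
    VirtualOrganisation = "belle";
    """

    with open("belle_sim.jdl", "w") as f:
        f.write(JDL)

    # Obtain a VOMS proxy for the belle VO (prompts for the key passphrase),
    # then submit through the resource broker; the job identifier is printed.
    subprocess.check_call(["voms-proxy-init", "--voms", "belle"])
    subprocess.check_call(["glite-job-submit", "belle_sim.jdl"])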

  8. LCG Sites at KEK
  • JP-KEK-CRC-01
    • Since Nov. 2005
    • Site role: experimental
    • Resources and components: SL 3.0.5 with gLite 3.0 or later; CPU: 14; storage: ~1.5 TB; FTS, FTA, RB, MON, BDII, LFC, CE, SE
    • Supported VOs: belle, apdg, g4med, ppj, dteam, ops and ail
  • JP-KEK-CRC-02
    • Since early 2006
    • Site role: production
    • Resources and components: SL or SLC with gLite 3.0 or later; CPU: 48; storage: ~1 TB (w/o HPSS); full components
    • Supported VOs: belle, apdg, g4med, atlasj, ppj, ilc, dteam, ops and ail
  • We depend on services by the APROC members at ASGC, Taiwan.
  Takashi.Sasaki@kek.jp

  9. Belle GRID
  • LCG services are provided by the Central Computer System
    • Only the storage system of the B Factory Computer System is used
      • Almost the same size as the LHC Tier-1 centers
      • Its CPUs will be served in the future
  • JP-KEK-CRC-02 is the production site
  • Belle data are exported through SRB-DSI (see the transfer sketch after this slide)
    • Very useful for exporting existing data
    • An SRB-SRM interface is still desirable
    • iRODS?
  Takashi.Sasaki@kek.jp
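  Since SRB-DSI plugs the SRB catalog into a Globus GridFTP server, a remote collaborator can pull existing Belle files with standard grid tools. A minimal sketch, assuming a valid grid proxy and the Globus globus-url-copy client; the gateway host, port and path names are hypothetical:

    #!/usr/bin/env python
    # Sketch: fetch a Belle file exported through the SRB-DSI GridFTP gateway.
    # The gateway host and the SRB collection path below are hypothetical.
    import subprocess

    SRC = "gsiftp://dsi01.cc.kek.jp:2811/KEK/home/belle.kek/mdst/run001.mdst"
    DST = "file:///data/belle/run001.mdst"

    # globus-url-copy speaks GridFTP to the gateway, which resolves the
    # path in the SRB catalog (MCAT) and streams the file back.
    subprocess.check_call(["globus-url-copy", SRC, DST])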

  10. LCG at KEK (2): Data Transfer in the Belle GRID
  [Diagram: the B Factory system (file servers, SRB server with MCAT, local and SRB files on disk/HSM) connects through SRB-DSI to the GRID system (JP-KEK-CRC-01 and JP-KEK-CRC-02, LCG/gLite CPUs, grid files on HPSS), with remote sites at ASGC (Taiwan), Krakow, Melbourne and Nagoya]
  Takashi.Sasaki@kek.jp

  11. B Factory Computer System
  • New B Factory Computer System since March 23, 2006
  • History of the B Factory Computer System
  • Moore's law (doubling every 1.5 years): 1.5 years = x2.0, 4 years = x~6.3, 5 years = x~10 (worked out after this slide)
  Takashi.Sasaki@kek.jp
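  The growth factors quoted above follow directly from the assumed 1.5-year doubling time; a quick check:

    \[
      f(t) = 2^{\,t/1.5}, \qquad
      f(1.5) = 2, \qquad
      f(4) = 2^{4/1.5} \approx 6.3, \qquad
      f(5) = 2^{5/1.5} \approx 10.1
    \]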

  12. Total Number of Jobs at KEK in 2006
  [Bar charts: monthly job counts in 2006; JP-KEK-CRC-01 scale up to ~1,400 jobs, JP-KEK-CRC-02 up to ~1,000 jobs; BELLE dominates at both sites]
  Takashi.Sasaki@kek.jp

  13. Total CPU Time at KEK in 2006 (normalized to 1 kSI2K)
  [Bar charts: monthly CPU time in hrs kSI2K; JP-KEK-CRC-01 scale up to ~12,000, JP-KEK-CRC-02 up to ~4,000; BELLE dominates at both sites]
  Takashi.Sasaki@kek.jp

  14. JLCG
  • Sponsored by the e-Science promotion program of the National Institute of Informatics
  • An LCG federation among Japanese universities and KEK
    • Nagoya: ATLAS, Belle
    • Tohoku University: ILC, KamLAND
    • University of Tsukuba: ATLAS, ILC
    • Kobe University: ATLAS, ILC
    • Hiroshima Institute of Technology: ATLAS, information science
    • KEK: Belle, ILC, ATLAS, ...
  • KEK acts as the GOC
  Takashi.Sasaki@kek.jp

  15. NAREGI at KEK
  • NAREGI: the National Research GRID Initiative, the Japanese flagship project to develop GRID middleware
  • What do we expect?
    • Better quality
    • Better functionality
      • Job submission with NAREGI is currently almost 3 times faster than with gLite 3
    • Better support
  • We may substitute NAREGI for LCG/gLite if NAREGI works as advertised
    • GRID interoperability is the key; we are working together with NAREGI on this issue
  • Support for operation and application adoption is necessary
    • LCG is excellent in its support for deployment and operations
  Takashi.Sasaki@kek.jp

  16. NAREGI at KEK (2)
  • NAREGI beta test bed: 6 nodes + 3 nodes (DG)
  • Installation
    • NAREGI beta 1.0: Nov. 2006 (Information, PSE, WFT, GVS)
    • NAREGI-DG: Feb. 2007
  • Tests by application
    • P152 (heavy ion) experiment data analysis: data processing, then the results are stored to SRB through the SRB-DSI gateway (see the sketch after this slide)
    • Belle event simulation
  • We found several inconvenient differences between gLite and NAREGI and asked the NAREGI group to improve them
  [Diagram: test bed layout; nrg00 is the portal node with the Super Scheduler (SS), Information Server (IS) and GridVM server (GVM); nrg01-nrg05 host GridVM engine nodes, the user management server (VOMS) and Gfarm FS; nrg09-nrg11 host the DG MCS (metadata) and DG RMS (resource management)]
  Takashi.Sasaki@kek.jp
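  The P152 analysis step above (process locally, then archive the output into SRB) might look like the following sketch. It assumes the SRB Scommands client (Sinit, Sput, Sexit) configured via ~/.srb/.MdasEnv; file and collection names are hypothetical.

    #!/usr/bin/env python
    # Sketch of the "process, then store to SRB" step of the P152 analysis.
    # Assumes the SRB Scommands client; the names below are hypothetical.
    import subprocess

    def process(raw, hist):
        # Placeholder for the actual heavy-ion data processing step.
        with open(raw, "rb") as src, open(hist, "wb") as dst:
            dst.write(src.read())

    def store_to_srb(local_file, collection):
        subprocess.check_call(["Sinit"])                          # open an SRB session
        subprocess.check_call(["Sput", local_file, collection])   # upload the result
        subprocess.check_call(["Sexit"])                          # close the session

    if __name__ == "__main__":
        process("p152_run42.raw", "p152_run42.hist")
        store_to_srb("p152_run42.hist", "/KEK/home/p152.kek/results")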

  17. GRID at KEK
  [Diagram: the GRID environment at KEK as of Feb. 9, 2007; the gLite/EGEE sites JP-KEK-CRC-01 and JP-KEK-CRC-02 and the NAREGI test bed (naregi-kek) share data through the SRB server (MCAT) and SRB-DSI; local files, SRB files and grid files reside on HPSS]
  Takashi.Sasaki@kek.jp

  18. Plan
  • More production use of LCG
    • Belle: collaboration with ASGC, National Central University (Taiwan), U. of Melbourne, Krakow and so on
    • ILC GRID: collaboration with CC-IN2P3, starting with CALICE
  • NAREGI-LCG (gLite) interoperability
    • More local tests and discussion with NAREGI
  • Seeking a possible scenario for hosting a Tier-1 center in Japan in the future (2012?)
    • Good practice for hosting the Tier-0 center of the ILC
  Takashi.Sasaki@kek.jp

  19. Collaboration in Asia Pacific
  • The Belle GRID was established through the collaboration of people across the Asia Pacific region
    • ASGC, KNU (Korea), NCU (Taiwan) and U. of Melbourne
  • We depend very much on the efforts of ASGC for the operation of LCG
    • We sent our staff to ASGC for training
  • We seek further collaboration in this region
    • LCG operation
    • Anyone interested in NAREGI deployment? (anybody, not only people in this region)
  Takashi.Sasaki@kek.jp

  20. Summary
  • KEK Grid CA (member of the APGrid PMA)
  • LCG: JP-KEK-CRC-01 and 02
  • Belle experiment
    • B Factory Computer System, Tier-1 scale
    • LCG deployment: the simulation data are shared among the collaborating institutions; in the near future, the experimental data will be shared and mutual job submission enabled
  • JLCG: Tohoku U. / Tsukuba U. / Kobe U. / Hiroshima Tech. / Nagoya U. / KEK
  • NAREGI at KEK
    • NAREGI beta test bed; tested with a data analysis program; usage of SRB through SRB-DSI; installation of NAREGI-DG in progress
  • From now on: tests of NAREGI-DG and of the interoperability between gLite and NAREGI
  Takashi.Sasaki@kek.jp

  21. Acknowledgements
  • We thank ASGC for their great support of the LCG/gLite operation
    • A.E. Chen and Hide Nakazawa (NCU Taiwan), Glenn Moloney and Marco La Rosa (U. of Melbourne), and Simon Lin and his staff at ASGC helped us with the Belle GRID. We are very grateful to them.
  • We thank the people of NAREGI for their support
  Takashi.Sasaki@kek.jp
