CCJ: Computing Center in Japan for spin physics at RHIC

T. Ichihara, Y. Watanabe, S. Yokkaichi, O. Jinnouchi, N. Saito,

H. En’yo, M. Ishihara, Y. Goto(1), S. Sawada(2), H. Hamagaki(3)

RIKEN, RBRC(1), KEK(2), CNS(3)

Presented on 17 October 2001 at the First Joint Meeting of the Nuclear Physics Division of the APS and JPS (Hawaii 2001)


RIKEN CCJ : Overview

  • Scope

    • Center for the analysis of RHIC spin physics

    • Principal site of computing for PHENIX simulation

      • PHENIX CC-J aims to cover most of the simulation tasks of the whole PHENIX experiment

    • Regional computing center in Japan

  • Size

    • Data amount: handling 225 TB/year

    • Disk storage: ~20 TB; tape library: ~500 TB

    • CPU performance: 13k SPECint95 (~300 Pentium 3/4 CPUs)

  • Schedule

    • R&D for the CC-J started in April ’98 at RBRC in BNL

    • Construction began in April ’99 over a three-year period

    • CCJ started operation in June 2000

    • CCJ will reach full scale in 2002



    System Requirement for CCJ

    • CPU (SPECint95)

      • Simulation: 8200

      • Sim. reconstruction: 1300

      • Sim. analysis: 170

      • Theor. model: 800

      • Data analysis: 2000

      • Total: 12470 SPECint95 (= 120k SPECint2000)

    • Data accessibility (exchange of 225 TB/year)

      • Data duplication facility (tape cartridges)

      • Wide Area Network (IMnet/APAN/ESnet)

    • Software environment

      • AFS, Objectivity/DB, batch queuing system

    • Annual data amount: DST 150 TB, micro-DST 45 TB, simulated data 30 TB; total 225 TB

    • Hierarchical storage system (HPSS): handles 225 TB/year, total I/O bandwidth 112 MB/s

    • Disk storage system: 15 TB capacity, all RAID, I/O bandwidth 520 MB/s
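As a quick sanity check on these figures (our arithmetic, not part of the presentation), the 225 TB/year requirement implies a fairly modest sustained transfer rate compared with the quoted 112 MB/s HPSS bandwidth:

```python
# Sustained transfer rate implied by the annual data volume.
# The 225 TB/year and 112 MB/s figures come from the slide;
# the calculation itself is only an illustration.
TB = 1e12  # decimal terabyte, in bytes
MB = 1e6

annual_volume = 225 * TB                 # DST + micro-DST + simulated data
seconds_per_year = 365 * 24 * 3600

sustained_rate = annual_volume / seconds_per_year / MB
print(f"sustained rate: {sustained_rate:.1f} MB/s")   # ~7.1 MB/s

# The 112 MB/s HPSS bandwidth thus leaves an order-of-magnitude
# headroom for peak load and repeated reads during analysis.
headroom = 112 / sustained_rate
print(f"headroom factor: {headroom:.1f}x")
```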


    Requirements as a Regional Computing Center

    • Software Environment

      • The software environment of the CCJ needs to be compatible with the PHENIX software environment at the RHIC Computing Facility (RCF) at BNL

        • AFS (/afs/rhic)

          • Remote access over the network is very slow and unstable (AFS server at BNL)

          • Mirrored daily at CCJ and accessed via NFS from the Linux farms

        • Objectivity/DB

          • Remote access over the network is very slow and unstable (AMS server at BNL)

          • Local AMS server at CCJ (DB updated regularly)

        • Batch queuing system: Load Sharing Facility (LSF 4.1)

        • RedHat Linux 6.1 (kernel 2.2.19 with NFSv3 enabled), gcc-2.95.3, etc.

        • Veritas File System (journaling file system on Solaris): frees ~TB disks from fsck

    • Data Accessibility

      • Need to exchange 225 TB/year of data with the BNL RCF

        • Most of the data exchange is carried out with SD3 tape cartridges

          • Data duplicating facilities at both sites (10 MB/s effective performance, with shipment by air)

        • Part of the data exchange is carried out over the Wide Area Network (WAN)

          • CC-J will use the Asia-Pacific Advanced Network (APAN) for the US-Japan connection

            • http://www.apan.net/

          • APAN currently has ~100 Mbps bandwidth for the Japan-US (STAR TAP) connection

            • 3 MB/s transfer performance was obtained using the bbftp protocol
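Illustrative arithmetic (ours, using the rates quoted above) shows why tape cartridges carry most of the exchange:

```python
# Time to move the full annual volume through each channel, at the
# effective rates quoted on the slide (10 MB/s tape duplication with
# air shipment, 3 MB/s bbftp over the WAN).  Illustration only.
TB = 1e12
DAY = 86400.0

annual_volume = 225 * TB
tape_rate = 10e6   # bytes/s, duplication facility + air shipment
wan_rate = 3e6     # bytes/s, bbftp with 3 parallel sessions

tape_days = annual_volume / tape_rate / DAY
wan_days = annual_volume / wan_rate / DAY
print(f"tape: {tape_days:.0f} days/year, WAN: {wan_days:.0f} days/year")
# The WAN alone would need well over two years of continuous transfer
# to move one year's data, so it can only handle a fraction of it.
```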





    File Transfer over WAN

    • RIKEN - IMnet - APAN (~100 Mbps) - STAR TAP - ESnet - BNL

      • Round-trip time (RTT) for RIKEN-BNL: 170 ms

      • RFC 1323 (TCP Extensions for High Performance, May 1992) describes the use of large TCP window sizes (> 64 KB)

      • FTP performance: 290 KB/s (64 KB window size), 640 KB/s (512 KB window)

        • A large TCP window size is necessary to obtain a high transfer rate
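These window-size figures follow the standard bandwidth-delay bound, throughput ≤ window / RTT. A small sketch (our illustration, not part of the presentation):

```python
# A single TCP stream cannot exceed window_size / RTT, because at most
# one window of unacknowledged data can be in flight per round trip.
RTT = 0.170  # seconds, the quoted RIKEN-BNL round-trip time

def max_throughput_kb_s(window_kb: float) -> float:
    """Upper bound on one TCP stream's throughput, in KB/s."""
    return window_kb / RTT

print(max_throughput_kb_s(64))    # ~376 KB/s bound; 290 KB/s was measured
print(max_throughput_kb_s(512))   # ~3000 KB/s bound; only 640 KB/s was
                                  # measured, so at large windows other
                                  # bottlenecks dominate
```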

          BBFTP (http://doc.in2p3.fr/bbftp/)

      • Software designed to quickly transfer files across a wide area network

      • Written for the BaBar experiment to transfer large files

      • Transfers data through several parallel TCP streams

      • RIKEN-BNL bbftp performance (10 TCP streams per session)

        • 1.5 MB/s (1 session), 3 MB/s (3 sessions)

    MRTG graph during bbftp transfer between RIKEN and BNL
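A rough model (our assumption, not from the presentation) of why parallel streams help: each stream is individually window/RTT-bound, so adding streams raises the aggregate until the shared path saturates. The per-stream rate below is inferred from the quoted 1.5 MB/s single-session figure with 10 streams.

```python
# Aggregate-throughput model for parallel TCP streams.  Illustrative
# only: per_stream_rate is inferred from the measurements above, and
# the usable link capacity is an assumption (~100 Mbps minus overhead).
per_stream_rate = 1.5 / 10       # MB/s per stream, inferred from 1 session
link_capacity = 100 / 8 * 0.9    # MB/s, assumed usable share of the link

def aggregate_rate(sessions: int, streams_per_session: int = 10) -> float:
    """Modelled aggregate MB/s for several bbftp sessions."""
    return min(sessions * streams_per_session * per_stream_rate,
               link_capacity)

print(aggregate_rate(1))  # matches the 1.5 MB/s single-session figure
print(aggregate_rate(3))  # the model gives 4.5 MB/s, vs. 3 MB/s measured:
                          # contention on the shared path limits scaling
```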


    Analysis for the first PHENIX experiment with CCJ

    • Quark Matter 2001 conference (Jan 2001)

      • 21 papers using CCJ / total 33 PHENIX papers

    • Simulation

      • 100k event simulation/reconst. (Hayashi)

      • DC Simulation (Jane)

      • Muon Mock Data Challenge (MDC) (Satohiro): 100k Au+Au events (June 2001)

    • DST Production

      • 1 TB of Raw Data (Year-1) was transferred via WAN from BNL to RIKEN

      • PHENIX official DST (v03) production (Sawada) (Dec. 2000)

        • about 40% of DSTs produced at CCJ and transferred to BNL RCF

    • Detector calibration/simulation

      • TOF (Kiyomochi, Chujo, Suzuki)

      • EMCal (Torii, Oyama, Matsumoto)

      • RICH (Akiba, Shigaki, Sakaguchi)

    • Physics/simulation

      • Single-electron spectrum (Akiba, Hachiya)

      • Hadron particle ratios/spectra (Ohnishi, Chujo)

      • Photon [π0 spectrum] (Oyama, Sakaguchi)


    CCJ Operation

    • Operation, maintenance and development of the CC-J are carried out under the charge of the CCJ Planning and Coordination Office (PCO).


    Summary

    • Construction of the CCJ started in 1999

    • The CCJ operation started in June 2000 at 1/3 scale.

      • 43 user accounts created.

    • Hardware upgrades and software improvements:

      • 224 Linux CPUs (90k SPECint2000), 21 TB disk, 100 TB HPSS tape library

      • Data Duplicating Facility at BNL RCF started operation in Dec 2000.

    • PHENIX Year-1 experiment: summer 2000

      • Analysis of the first PHENIX experiment with CCJ

        • Official DST (v03) production

        • Simulations

        • Detector calibration/simulation

        • Physics analysis/simulation

        • Quark Matter 2001 conference (Jan 2001)

          • 21 papers using CCJ / total 33 PHENIX papers

    • PHENIX Year 2 (2001) Run started in August 2001

    • Spin experiments at RHIC will start in December 2001 !!

