CC-J : Computing Center in Japan for RHIC Physics



  1. CC-J : Computing Center in Japan for RHIC Physics Takashi Ichihara (RIKEN and RIKEN BNL Research Center) Presented on 28/09/2000 at RBRC Review at BNL

  2. Contents
     1. Overview
     2. Concept of the system
     3. System requirement
     4. Other requirement as a regional center
     5. Project plan and current status
     6. Activity since last review
     7. Plan in this year
     8. Current Configuration of the CC-J
     9. Components of the CC-J (photo)
     10. CC-J Operation
     11. Summary

  3. PHENIX CC-J : Overview
  • PHENIX Regional Computing Center in Japan (CC-J) at RIKEN Wako
  • Scope
    • Principal site of computing for PHENIX simulation
    • The PHENIX CC-J aims at covering most of the simulation tasks of the whole PHENIX experiment
    • Regional Asian computing center
    • Center for the analysis of RHIC spin physics
  • Architecture
    • Essentially follows the architecture of the RHIC Computing Facility (RCF) at BNL
  • Construction
    • R&D for the CC-J started in April '98 at RBRC
    • Construction began in April '99 over a three-year period
    • 1/3 scale of the CC-J started operation in June 2000

  4. Concept of the CC-J System

  5. System Requirement for the CC-J
  • CPU (SPECint95)
    • Simulation: 8200
    • Sim. reconst.: 1300
    • Sim. ana.: 170
    • Theor. model: 800
    • Data analysis: 1000
    • Total: 11470 SPECint95 (= 120K SPECint2000)
  • Annual data amount
    • DST: 150 TB
    • micro-DST: 45 TB
    • Simulated data: 30 TB
    • Total: 225 TB
  • Hierarchical storage system (HPSS)
    • Handles data amount of 225 TB/year
    • Total I/O bandwidth: 112 MB/s
  • Disk storage system
    • 15 TB capacity, all RAID
    • I/O bandwidth: 520 MB/s
  • Data duplication facility
    • Export/import of DST and simulated data
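As a sanity check on the figures above, the annual total and the implied average ingest rate can be recomputed. This is a minimal sketch using only numbers from the slide; decimal units (1 TB = 10^6 MB) are an assumption:

```python
# Back-of-the-envelope check of the storage requirement figures.
# All input numbers come from the slide; decimal units are assumed.

annual_data_tb = {"DST": 150, "micro-DST": 45, "Simulated data": 30}
total_tb = sum(annual_data_tb.values())
assert total_tb == 225  # matches the quoted 225 TB/year

# Average rate needed to absorb one year's data, in MB/s
seconds_per_year = 365 * 24 * 3600
avg_mb_per_s = total_tb * 1e6 / seconds_per_year  # 1 TB = 1e6 MB
print(f"Average ingest rate: {avg_mb_per_s:.1f} MB/s")  # ~7 MB/s
```

The average ingest rate of roughly 7 MB/s sits well below the quoted 112 MB/s HPSS I/O bandwidth, which is consistent with the system being sized for bursts and re-reads rather than the yearly average alone.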

  6. Other Requirements as a Regional Computing Center
  • Software environment
    • The software environment of the CC-J should be compatible with the PHENIX offline software environment at the RHIC Computing Facility (RCF) at BNL
    • AFS accessibility (/afs/rhic)
    • Objectivity/DB accessibility
  • Data accessibility
    • Needs to exchange 225 TB/year of data with the RCF
    • Most of the data exchange will be done with SD3 tape cartridges (50 GB/volume)
    • Some of the data exchange will be done over the WAN
    • The CC-J will use the Asia-Pacific Advanced Network (APAN) for the US-Japan connection (http://www.apan.net/)
    • APAN currently has 70 Mbps bandwidth for the Japan-US connection
    • Expecting that 10-30% of the APAN bandwidth (7-21 Mbps) can be used for this project: 75-230 GB/day (27-82 TB/year) will be transferred over the WAN
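The WAN transfer estimate above can be reproduced with a short calculation. This sketch assumes decimal units and takes the 70 Mbps APAN figure and the 10-30% usable fraction from the slide:

```python
# Reproduce the slide's WAN transfer estimate:
# 10-30% of APAN's 70 Mbps Japan-US link -> GB/day and TB/year.

apan_mbps = 70  # quoted APAN Japan-US bandwidth

for frac in (0.10, 0.30):
    mbps = apan_mbps * frac           # usable bandwidth, Mbit/s
    mb_per_s = mbps / 8               # Mbyte/s
    gb_per_day = mb_per_s * 86400 / 1000
    tb_per_year = gb_per_day * 365 / 1000
    print(f"{mbps:.0f} Mbps -> {gb_per_day:.0f} GB/day -> {tb_per_year:.0f} TB/year")
```

This yields roughly 76-227 GB/day and 28-83 TB/year, which agrees (to rounding) with the slide's 75-230 GB/day and 27-82 TB/year.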

  7. Project plan and current status of the CC-J

  8. Activity since the last review (in these 15 months)
  • Construction of the CC-J, phases 1 and 2
    • Phase 1 and 1/3 of phase 2 hardware installed
    • High Performance Storage System (HPSS) with 100 TB tape library
    • CPU farm of 128 processors (about 4000 SPECint95)
    • 3.5 TB RAID disk, two Gigabit Ethernet switches, etc.
    • A tape drive and a workstation installed at the BNL RCF for tape duplication
  • Software environment for PHENIX computing: established
    • AFS local mirroring, Linux software environment, LSF, etc.: ready to use
  • CC-J operation
    • The CC-J started operation in June 2000
    • 40 user accounts created so far
    • 100K-event simulation (130 GeV/nucleon, Au+Au min. bias) in progress for (1) retracted geometry, zero field, (2) standard geometry, zero field, (3) full 2D field, (4) full 3D field, (5) half 3D field
    • Real raw data (2 TB) of the PHENIX experiment transferred to the CC-J via WAN
    • Large ftp performance (641 KB/s = 5 Mbps) was obtained (RTT = 170 ms)
    • Data analysis by regional users is in progress
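The quoted ftp figures can be put in perspective with a rough estimate. This sketch uses only the slide's numbers (641 KB/s, 170 ms RTT, 2 TB of raw data); the single-stream TCP relation throughput ≈ window / RTT is a standard rule of thumb, not something stated on the slide:

```python
# Rough context for the observed trans-Pacific ftp performance.
# Inputs (641 KB/s, 170 ms RTT, 2 TB) are taken from the slide.

rate_kb_s = 641   # observed ftp throughput, KB/s
rtt_s = 0.170     # round-trip time over the Pacific, s

# Unit check: 641 KB/s * 8 bits/byte ~= 5.1 Mbps, the quoted "5 Mbps"
mbps = rate_kb_s * 8 / 1000
print(f"Throughput: {mbps:.1f} Mbps")

# For a single window-limited TCP stream, throughput ~ window / RTT,
# so the observed rate implies an effective window of about 109 KB
window_kb = rate_kb_s * rtt_s
print(f"Implied TCP window: {window_kb:.0f} KB")

# Time to move the 2 TB raw-data sample at this rate (2 TB = 2e9 KB)
days = 2e9 / rate_kb_s / 86400
print(f"2 TB at {rate_kb_s} KB/s: about {days:.0f} days")  # ~36 days
```

The roughly month-long transfer time for 2 TB at this rate illustrates why the bulk of the 225 TB/year exchange is planned for tape cartridges rather than the WAN.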

  9. Plan in this year
  • Hardware upgrade
    • 6 TB RAID disk, tape drives and cache disks for HPSS
    • 64 Linux CPU farm nodes, SUN servers for data mining & NFS serving
  • System development
    • Data duplicating facility (to be operational)
    • Objectivity/DB accessing method (to be established)
  • Simulation production
    • 100K Au+Au simulations (continuing)
    • Other simulations to be proposed by the PHENIX PWGs (spin etc.)
  • Data analysis
    • Official Data Summary Tapes (DST) will be produced at the RCF soon
    • DST: transferred to the CC-J via the duplication facility (by tape)
    • Micro-DST production (data mining) for data analysis

  10. Current configuration of the CC-J

  11. Component of the PHENIX CC-J at RIKEN

  12. CC-J Operation • Operation, maintenance and development of the CC-J are carried out under the charge of the CC-J Planning and Coordination Office (PCO).

  13. Summary
  • The construction of the PHENIX Computing Center in Japan (CC-J) at the RIKEN Wako campus, which will extend over a three-year period, began in April 1999.
  • The CC-J is intended as the principal site of computing for PHENIX simulation, a regional PHENIX Asian computing center, and a center for the analysis of RHIC spin physics.
  • The CC-J will handle about 220 TB of data per year, and the total CPU performance is planned to be 10K SPECint95 (100K SPECint2000) in 2002.
  • The CPU farm of 128 processors (RH 6.1, kernel 2.2.14/16 with NFSv3) is stable.
  • Copying data over the WAN: large ftp performance (641 KB/s = 5 Mbps) was obtained across the Pacific Ocean (RTT = 170 ms).
  • CC-J operation started in June 2000 at 1/3 scale.
  • 39 user accounts created.
  • The 100K-event simulation project started in September 2000.
  • Part of the real raw data (about 2 TB) of the PHENIX experiment was transferred to the CC-J via WAN, and data analysis by regional users is in progress.
