CuCu 200 GeV analysis and ALICE/PHENIX computing

  1. CuCu 200 GeV analysis / ALICE/PHENIX computing
     DongJo Kim, Sep 11, 2006

  2. Run Info
     • Cu+Cu at 200 GeV (~3.0 nb^-1 sampled), 100k segments
     • 547 good runs
     • LVL1 trigger: ERTLL1 4x4b & BBCLL1
     • plus LVL2 algorithm: L2EmcHighPtTileTrigger
     • plus vertex cut: |z_vertex| < 30 cm
     (the full event selection is sketched below)
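To make the selection concrete, here is a minimal sketch of the cuts listed above. The struct fields and helper function are illustrative assumptions, not the actual PHENIX/Fun4All API:

    // Minimal event-selection sketch for the cuts listed above. The struct
    // fields and function names are hypothetical, not the real framework API.
    #include <cmath>

    struct EventInfo {
      bool ertll1_4x4b;      // LVL1: ERT 4x4b trigger fired
      bool bbcll1;           // LVL1: BBC trigger fired
      bool l2EmcHighPtTile;  // LVL2: L2EmcHighPtTileTrigger fired
      double zVertex;        // collision vertex z position, in cm
    };

    bool acceptEvent(const EventInfo& evt) {
      if (!(evt.ertll1_4x4b && evt.bbcll1)) return false;  // LVL1 coincidence
      if (!evt.l2EmcHighPtTile) return false;              // LVL2 algorithm
      return std::fabs(evt.zVertex) < 30.0;                // |z_vertex| < 30 cm
    }

In the real analysis the trigger decisions come from the trigger data and the vertex from the BBC; the sketch only shows the order and logic of the cuts.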

  3. Pi0 extraction
     • 1.5 < pT < 2.0 GeV/c
     • Central events (0-5%)
     • Mixed-event background (technique sketched below)
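The mixed-event background is the standard combinatorial-background method for pi0 -> gamma gamma; here is a self-contained sketch of it in outline, with all names illustrative rather than taken from the analysis code:

    // Sketch of the mixed-event background method for pi0 -> gamma gamma.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Photon { double px, py, pz, e; };  // EMCal cluster, treated as massless

    double invMass(const Photon& a, const Photon& b) {
      const double e  = a.e  + b.e;
      const double px = a.px + b.px;
      const double py = a.py + b.py;
      const double pz = a.pz + b.pz;
      const double m2 = e * e - px * px - py * py - pz * pz;
      return m2 > 0.0 ? std::sqrt(m2) : 0.0;
    }

    // Foreground: all photon pairs within one event (i < j avoids double counting).
    void fillSameEvent(const std::vector<Photon>& evt, std::vector<double>& m) {
      for (std::size_t i = 0; i < evt.size(); ++i)
        for (std::size_t j = i + 1; j < evt.size(); ++j)
          m.push_back(invMass(evt[i], evt[j]));
    }

    // Background: pairs built across two different events from the same
    // centrality and vertex-z class; these contain no real pi0 signal.
    void fillMixedEvent(const std::vector<Photon>& evtA,
                        const std::vector<Photon>& evtB,
                        std::vector<double>& m) {
      for (const Photon& g1 : evtA)
        for (const Photon& g2 : evtB)
          m.push_back(invMass(g1, g2));
    }

The mixed distribution is then normalized to the same-event one away from the pi0 peak and subtracted; what remains around ~135 MeV/c^2 is the raw pi0 yield.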

  4. Shower merging of pi0s
     • The probability of losing a pi0 because its two decay photons merge into a single EMCal cluster (see the kinematics note below)
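For context (not on the slide): merging sets in because the minimum opening angle between the two decay photons, reached for the symmetric decay, is theta_min = 2 arcsin(m_pi0 / E_pi0), so the photon separation at the calorimeter face shrinks roughly as 1/E. A small self-contained estimate, where the ~5 m EMCal radius is an assumed round number of about the right scale:

    // Why merging matters at high pT: the two decay photons get closer
    // together as the pi0 energy grows, theta_min = 2 * asin(m_pi0 / E_pi0).
    #include <cmath>
    #include <cstdio>

    int main() {
      const double mPi0 = 0.13498;  // pi0 mass, GeV/c^2
      const double rEmc = 5.0;      // EMCal radius in m (assumed round number)
      for (double e = 2.0; e <= 16.0; e *= 2.0) {
        const double thetaMin = 2.0 * std::asin(mPi0 / e);   // rad
        const double sepCm = rEmc * thetaMin * 100.0;        // small-angle arc, cm
        std::printf("E = %4.0f GeV: theta_min = %.4f rad, separation ~ %4.1f cm\n",
                    e, thetaMin, sepCm);
      }
      return 0;
    }

Around 2 GeV the photons land tens of cm apart, but by ~16 GeV the separation is down to a few cm, approaching the tower scale, so the two showers begin to reconstruct as one cluster.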

  5. Well-calibrated pi0s (MB sample below 6 GeV, ERT sample above)

  6. Cross section (MB/LVL2)
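The slide gives only the title, but a common way to use an MB/LVL2 comparison (an interpretation, not stated on the slide) is to measure the LVL2 trigger efficiency per pT bin from events where both decisions are known, and then correct the triggered spectrum with it:

    \varepsilon_{\mathrm{LVL2}}(p_T)
      = \frac{N_{\pi^0}^{\mathrm{MB\wedge LVL2}}(p_T)}{N_{\pi^0}^{\mathrm{MB}}(p_T)},
    \qquad
    \left.\frac{dN}{dp_T}\right|_{\mathrm{corrected}}
      = \frac{1}{\varepsilon_{\mathrm{LVL2}}(p_T)}
        \left.\frac{dN}{dp_T}\right|_{\mathrm{LVL2}} .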

  7. Spectra

  8. Correlation functions
     • Just one run so far: 164k MB events (construction sketched below)
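Two-particle correlation functions are typically built as the ratio of same-event to mixed-event pair distributions in delta-phi; a minimal sketch of that standard construction, with illustrative names (not the analysis code itself):

    // Delta-phi correlation with mixed-event pair-acceptance correction.
    #include <cstddef>
    #include <vector>

    const double kPi = 3.14159265358979323846;

    // Fold delta-phi into the conventional plotting range [-pi/2, 3pi/2).
    double foldDphi(double dphi) {
      while (dphi < -kPi / 2.0)       dphi += 2.0 * kPi;
      while (dphi >= 3.0 * kPi / 2.0) dphi -= 2.0 * kPi;
      return dphi;
    }

    // C(dphi) = N_same(dphi) / N_mixed(dphi), with the mixed-event histogram
    // scaled so its integral matches the same-event one. Inputs are binned
    // pair counts; mixed pairs come from different events of the same
    // centrality/vertex class, so the ratio removes the pair acceptance.
    std::vector<double> correlation(const std::vector<double>& same,
                                    const std::vector<double>& mixed) {
      double nSame = 0.0, nMixed = 0.0;
      for (double s : same)  nSame  += s;
      for (double m : mixed) nMixed += m;
      std::vector<double> c(same.size(), 0.0);
      for (std::size_t i = 0; i < same.size(); ++i)
        if (mixed[i] > 0.0) c[i] = (same[i] / mixed[i]) * (nMixed / nSame);
      return c;
    }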

  9. ToDo
     • Finalize the production code
     • Test the later code developments for handling centrality and reaction plane
     • PWG/CNT files: how much space is needed?
     • Files have not arrived at dCache yet
     • Should be ready for the analysis train in a week

  10. ALICE/PHENIX computing
     • Data reconstruction (Tier-1)
       • ALICE Tier-1 centre as part of the NDGF-coordinated Nordic Tier-1 centre
       • Faster network
       • Large storage (HPSS, big disks)
       • Fast access to data (no overload, etc.)
       • More CPUs
       • Proposal: 300 CPUs, 170 TB disk, 70 TB/year tape (tapes only, since there is no tape robot)
     • Analysis (Tier-2)
       • The Tier-1 can also be used
       • Similar setup on a smaller scale: ~100 CPUs / 10 TB

  11. ToDo
     • Test the ALICE Grid middleware
       • Installation and testing should be done soon at an available site
       • JYFL does not currently have a fast enough network; can we do this at CSC now?
     • Install the ALICE computing software for testing and for joining the Tier-1 test
       • CPUs and storage are needed
       • Several server machines: DB servers (calibration and file catalogue for data and Grid), AFS/CORBA servers to manage the software
