
Grid setup for CMS experiment



Presentation Transcript


  1. Grid setup for CMS experiment. Youngdo Oh, Center for High Energy Physics, Kyungpook National University (on behalf of the HEP Data Grid Working Group). 2003.8.22, the 2nd International Workshop on High Energy Physics Data Grid.

  2. Installation of EDG • The DataGrid fabric consists of a farm of centrally managed machines with multiple functionalities: Computing Elements (CE), Worker Nodes (WN), Storage Elements (SE), Resource Broker (RB), User Interface (UI), Information Service (IS), network links, … Each of these is a Linux machine. [Diagram: an LCFG server, fed by RPM and profile repositories, installs the CE/WN (PC cluster), UI, SE (GDMP) and RB.] Modified slide from A. Ghiselli, INFN, Italy

  3. EDG Testbed at CHEP • The user submits a job from the User Interface (UI) and requests the data held on the Storage Element (SE) from the UI. • The Resource Broker checks the available resources for the given job. • The job is submitted to a Worker Node, and the output is sent back to the user.
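  A minimal sketch of the kind of JDL file a user submits from the UI is shown below; the attribute names follow the usual EDG JDL conventions, while the executable and sandbox file names are only placeholders.

      Executable    = "/bin/hostname";
      StdOutput     = "hostname.out";
      StdError      = "hostname.err";
      OutputSandbox = {"hostname.out", "hostname.err"};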

  4. EDG Testbed [Diagram: EDG testbed layout at KNU/CHEP (real-user UI, RB, CE, SE with a big disk, WN with grid-security and VO-user map on disk, GDMP client with the new VO, K2K and CDF CPUs) linked over NFS and GSIFTP, with maximum security, to SNU (GDMP server with the new VO, VO LDAP server) in operation and SKKU in preparation.] • The EU DataGrid (EDG) test beds are operated at KNU and at SNU. • The Globus Simple CA is managed at KNU and at SNU to sign certificates. • In addition to the default VOs in EDG, a cdf VO has been constructed. • One CDF Linux machine is embedded in the EDG testbed as a WN by installing the PBS batch server, so CDF jobs run on the EDG testbed.
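  The "map on disk" in the diagram refers to the mapping of VO members onto local accounts, conventionally kept in /etc/grid-security/grid-mapfile. A hedged illustration follows; the DNs and local account names are hypothetical, and the leading dot denotes an EDG-style pool account.

      "/C=KR/O=CHEP/OU=KNU/CN=Alice Example" .cdf
      "/C=KR/O=CHEP/OU=SNU/CN=Bob Example"   cms001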

  5. Mass Storage • Software: HSM (Hierarchical Storage Management) • 1 TB FastT200: RAID 5 • 4 x 3494 tape drives: 48 TB

  6. Performance of Mass Storage • Clusters at CHEP, over NFS: 10 MB/s (writing), 61 MB/s (reading); over FTP: 50 MB/s (writing), 38 MB/s (reading) • Cluster at KT, ~200 km away from CHEP, accessed over NFS
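  Figures like these can be reproduced with a simple large-file transfer; a rough sketch with dd is given below (the mount point is hypothetical, and the test file should be larger than the client's RAM so the read back is not served from the page cache).

      # sequential write, then read back, over the NFS mount
      time dd if=/dev/zero of=/mnt/hsm/testfile bs=1M count=2048
      time dd if=/mnt/hsm/testfile of=/dev/null bs=1M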

  7. Installation using PXE & Kickstart • LCFGng was installed successfully. • However, there are minor problems when configuring clients with LCFGng; we are communicating with the LCFGng team. • A toy installation server working like LCFG: Kickstart is Red Hat's automatic installation tool, and user applications can be added to it. => Using Kickstart together with PXE and DHCP, any kind of application can be installed while Linux itself is installed automatically. We will use this kind of system to set up and maintain the various grid software on the CHEP cluster (a sketch of such a Kickstart file follows).
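  A minimal Kickstart sketch is shown below; the install server name, partition sizes and the grid RPM pulled in the %post section are all hypothetical, and only the essential directives are kept.

      # ks.cfg - minimal sketch: automated Red Hat install plus grid software
      install
      text
      url --url http://install.chep.example/redhat
      lang en_US
      keyboard us
      rootpw changeme
      timezone Asia/Seoul
      bootloader --location=mbr
      clearpart --all
      part /    --size 8000
      part swap --size 1024
      %packages
      @ Base
      %post
      # fetch an additional grid RPM from the local repository (hypothetical package)
      rpm -Uvh http://install.chep.example/grid-rpms/edg-wn-config-1.0-1.i386.rpm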

  8. Network to CERN • The traffic through TEIN is almost saturated: slow, less than 2 Mbps. • So we are using KOREN => StarTap => CERN, which normally gives 1 Mbps ~ 20 Mbps.

  9. Web interface for job management [Diagram: initial screen; a web browser talks to a web server (web service, proxy information), which talks to the EDG system.]

  10. Web interface for job management (cont.) [Screenshots: the EDG menu, editing a job, loading a JDL file.]

  11. Web interface for job management (cont.) [Screenshots of the job-management steps: dg-job-list-match, dg-job-submit, dg-job-status, dg-job-get-output.]
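  The web interface wraps the standard EDG job-management commands; a typical sequence on the UI looks roughly like the sketch below (the JDL file name is illustrative, <job-id> stands for the identifier returned by the submit step, and exact command names and options vary between EDG releases).

      dg-job-list-match  myjob.jdl     # list the CEs that match the job requirements
      dg-job-submit      myjob.jdl     # submit the job; prints a job identifier
      dg-job-status      <job-id>      # follow the job through the RB, CE and WN
      dg-job-get-output  <job-id>      # retrieve the output sandbox when the job is done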

  12. iVDGL & LCG-1 • Gatekeeper (the CE in EDG terms) installed • Batch system: Condor • UI and WN under installation • No well-defined RB and SE in iVDGL yet; UF and Fermilab are preparing something similar • A general installation & setup script will be ready after successful installation of all components. • We are waiting for a stable LCG-1 release.

  13. Plan & conclusion • Updating CMS Grid software to LCG-1 • In order for HSM to be an efficient part of SE, nfs between HSM and clusters should be turned, or we should think of other solution. • The network between CHEP and CERN shows sometimes unstable and low bandwidth. • For management of various grid system, efficient management server(LCFG, or kickstart) will be ready. • More institute will be included in CMS grid system.
