LHCb ‘use-case’ - distributed MC production
F Harris (Oxford), E van Herwijnen (CERN), G Patrick (RAL)

Presentation Transcript


  1. LHCb ‘use-case’ - distributed MC production
http://lhcb-comp.web.cern.ch/lhcb-comp/grid/Default.htm
F Harris (Oxford), E van Herwijnen (CERN), G Patrick (RAL)
F Harris, Datagrid Testbed meeting at Milan

  2. Overview of presentation
• The LHCb distributed MC production system
• Where can GRID technology help? - our requirements
• Current production centres and GRID Testbeds

  3. LHCb working production system (and forward look to putting it on the GRID)
• Construct job script and submit via Web, remote or local at CERN (GRID certification)
• Generate events, write log to Web (globus-run), copy output to mass store (globus-rcp, gsi-ftp), call servlet (at CERN)
• Mass store: e.g. RAL Atlas data store, CERN shift system
• Get token on shd18 (certification), copy data to shift, copy data to tape (gsi-ftp); CERN or remote
• Find next free tape slot, call servlet to copy data from mass store to tape at CERN, update bookkeeping db (Oracle); CERN only
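As a rough illustration of the workflow above, a minimal Python sketch of one production step, assuming a wrapper script drives the grid tools named on the slide (globus-run, globus-rcp) and then calls the CERN servlet. The job script name, file names, host and servlet URL are hypothetical placeholders, not the actual LHCb tooling.

    # Sketch of one production step as described on slide 3: generate events,
    # copy the output to the mass store, then call the CERN servlet that stages
    # the data to tape and updates the bookkeeping database.
    # Command names are taken from the slide; paths, hostnames and the servlet
    # URL are illustrative placeholders, and the exact invocations depend on the
    # local Globus 1.1.3 installation.
    import subprocess
    import urllib.request  # the production-control tools of the era were Java servlets called over HTTP

    def run(cmd):
        """Run a grid command and abort the production step if it fails."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def produce(job_script, output_file, mass_store_host, servlet_url):
        # 1. Generate events on the farm (slide lists 'globus-run').
        run(["globus-run", job_script])                      # hypothetical invocation

        # 2. Copy the output to the mass store (slide lists globus-rcp / gsi-ftp).
        run(["globus-rcp", output_file,
             f"{mass_store_host}:/mass-store/lhcb/{output_file}"])   # placeholder path

        # 3. Call the CERN servlet: it gets a token on shd18, copies the data to
        #    shift and tape, and updates the Oracle bookkeeping database.
        with urllib.request.urlopen(f"{servlet_url}?file={output_file}") as reply:
            print(reply.read().decode())

    if __name__ == "__main__":
        produce("bd_jpsik.job", "bd_jpsik_0001.dst",
                "shd18.cern.ch",
                "http://lhcb-bookkeeping.cern.ch/servlet/CopyAndRegister")  # hypothetical URL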

  4. Problems of the production system
Main issue:
• We are forced to copy all data back to CERN
Reasons for this:
• Standard cataloguing tools do not exist, so we cannot keep track of the data at the sites where it is produced
• Absence of smart analysis job-submission tools that move executables to where the input data is stored
Steps that make the production difficult:
• Authorisation (jobs can be submitted only from trusted machines)
• Copying data (generated both inside and outside CERN) into the CERN mass store (many fragile steps)
• Updating the bookkeeping database at CERN (the Oracle interface is non-standard)

  5. Where can the GRID help?
• A very transparent way of authorising users on remote computers
• Data set cataloguing tools, to avoid unnecessary replication (LHCb has expertise and is willing to share experience)
• If replication is required, fast and reliable replication tools
• Analysis job-submission tools: interrogate the data set catalogue and specify where the job should be run (the executable may need to be sent to the data); read different datasets from different sites into an interactive application
• A standard/interface for submitting and monitoring production jobs on any node on the GRID
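To make the job-placement requirement concrete, a small Python sketch of a catalogue lookup that decides where an analysis job should run. The catalogue layout, dataset names and site names are invented for illustration; the slide only states the requirement, not an interface.

    # Sketch of the analysis-job submission idea on slide 5: interrogate a data
    # set catalogue and run the job where the input data already sits, instead
    # of replicating everything back to CERN.
    from dataclasses import dataclass

    @dataclass
    class Replica:
        site: str        # e.g. "CERN", "RAL", "Liverpool", "Lyon"
        path: str        # location of the data set at that site

    # Hypothetical catalogue: data set name -> known replicas.
    CATALOGUE = {
        "bbar-inclusive-raw": [Replica("RAL", "/atlas-datastore/lhcb/bbar.rawh"),
                               Replica("CERN", "/shift/lhcb/bbar.rawh")],
    }

    def choose_site(dataset: str, preferred: str = "CERN") -> Replica:
        """Pick a replica: prefer the submitting site if it holds the data,
        otherwise send the executable to wherever the data is stored."""
        replicas = CATALOGUE.get(dataset, [])
        if not replicas:
            raise LookupError(f"{dataset!r} not in catalogue")
        for r in replicas:
            if r.site == preferred:
                return r
        return replicas[0]

    if __name__ == "__main__":
        target = choose_site("bbar-inclusive-raw", preferred="Liverpool")
        print(f"submit analysis executable to {target.site}, read {target.path}")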

  6. Current and ‘imminent’ production centres
CERN
• Samples (several channels) for the Trigger TDR on PCSF (~10^6 events)
RAL
• 50k events of type 411400 (Bd -> J/psi K (e+e-)), DST2, for the Trigger
• 250k inclusive bbar + 250k minimum bias, RAWH, DST2, no cuts
Liverpool
• 2 million MDST2 events after L0 and L1 cuts
Lyon
• Plan to do 250k inclusive bbar events without cuts (January)
Nikhef and Bologna
• Will generate samples for Detector and Trigger studies (? Mar/April)

  7. Initial LHCb-UK GRID “Testbed” (schematic of existing and planned sites)
• CERN: pcrd25.cern.ch, lxplus009.cern.ch
• RAL: CSF (120 Linux CPUs), IBM 3494 tape robot
• RAL DataGrid Testbed
• Liverpool: MAP (300 Linux CPUs)
• Institutes: RAL (PPD), Bristol, Imperial College, Oxford
• Glasgow/Edinburgh: “Proto-Tier 2”

  8. Initial Architecture
• Based around existing production facilities (separate Datagrid testbed facilities will eventually exist)
• Intel PCs running Linux Redhat 6.1
• Mixture of batch systems (LSF at CERN, PBS at RAL, FCS at MAP)
• Globus 1.1.3 everywhere
• Standard file transfer tools (e.g. globus-rcp, GSIFTP)
• GASS servers for secondary storage?
• Java tools for controlling production, bookkeeping, etc.
• MDS/LDAP for bookkeeping database(s)
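Since the slide proposes MDS/LDAP for the bookkeeping database(s), here is a hedged sketch of what such a lookup could look like, using a generic LDAP search (the ldap3 Python library) as a stand-in for Globus MDS. The server name, base DN, object class and attribute names are assumptions, not a schema that actually existed.

    # Sketch of an LDAP-based bookkeeping query ("MDS/LDAP for bookkeeping
    # database(s)", slide 8). All names below are invented placeholders.
    from ldap3 import Server, Connection, ALL

    def find_datasets(host="mds.lhcb-testbed.example", base="o=lhcb,o=grid"):
        server = Server(host, get_info=ALL)
        conn = Connection(server, auto_bind=True)   # anonymous bind for the sketch
        # Look up all bookkeeping entries describing Monte Carlo data sets.
        conn.search(base,
                    "(objectClass=lhcbDataset)",            # hypothetical object class
                    attributes=["datasetName", "site", "events"])
        return [(str(e.datasetName), str(e.site), int(str(e.events)))
                for e in conn.entries]

    if __name__ == "__main__":
        for name, site, events in find_datasets():
            print(f"{name}: {events} events at {site}")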

  9. Other LHCb countries (and institutes) developing Tier1/2/3 centres and GRID plans
• Germany, Poland, Spain, Switzerland, Russia
• See talk at the WP8 meeting of Nov 16
• Several institutes have installed Globus, or are about to (UK institutes, Clermont-Ferrand, Marseille, Bologna, Santiago, …)

  10. Networking bottlenecks? Need to study/measure for data transfer and replication within the UK and to CERN.
[Schematic only, showing link speeds: SuperJANET III backbone 155 Mbit/s (SuperJANET IV 2.5 Gbit/s); RAL 34 Mbit/s; MAN links 155 and 622 Mbit/s (622 Mbit/s from March 2001); campus 100 Mbit/s; university departments; London; CERN via TEN-155.]
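A quick back-of-the-envelope helper for the study/measure point: estimated replication times at the link speeds quoted in the schematic. The 100 GB sample size and the 50% usable-bandwidth assumption are illustrative, not figures from the slide.

    # Rough transfer-time estimates at the link speeds quoted on slide 10.
    def transfer_hours(gigabytes: float, mbit_per_s: float, efficiency: float = 0.5) -> float:
        """Transfer time in hours, assuming only `efficiency` of the nominal
        bandwidth is usable (shared links, protocol overhead)."""
        bits = gigabytes * 8e9
        return bits / (mbit_per_s * 1e6 * efficiency) / 3600

    if __name__ == "__main__":
        for label, mbps in [("RAL access link, 34 Mbit/s", 34),
                            ("SuperJANET III backbone, 155 Mbit/s", 155),
                            ("622 Mbit/s MAN link", 622)]:
            print(f"{label}: {transfer_hours(100, mbps):.1f} h for 100 GB")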
