
Using Grid Prototype Infrastructure for QCD Background Study to the H → γγ Process on Alliance Resources


Presentation Transcript


1. Using Grid Prototype Infrastructure for QCD Background Study to the H → γγ Process on Alliance Resources
Vladimir Litvin, Harvey Newman, Sergey Schevchenko (Caltech CMS)
Scott Koranda, Bruce Loftis, John Towns (NCSA)
Miron Livny, Peter Couvares, Todd Tannenbaum, Jamie Frey (Wisconsin Condor)

2. CMS Physics: The CMS detector at the LHC will probe the fundamental forces of our Universe and search for the as-yet-undetected Higgs boson. The detector is expected to come online in 2007.

  3. CMS Physics

4. Leveraging Alliance Grid Resources
• The Caltech CMS group is using Alliance Grid resources today for detector simulation and data-processing prototyping
• Even during this simulation and prototyping phase, the computational and data challenges are substantial

5. Goal: to simulate the QCD background
• The QCD jet-jet background cross section is huge (~10¹⁰ pb). Previous studies estimated the rate R_jet at which a jet can be misidentified as a photon and, due to limited CPU power, simply squared it (R_jet²) to obtain the QCD jet-jet background rate. Hence, correlations within an event were not taken into account in previous studies.
• Previous simulations were done with a simplified geometry, and non-Gaussian tails in the resolution were not adequately simulated.
• Our goal is to perform a full simulation of a relatively large QCD sample, measure the rate of diphoton misidentification, and compare it with the other types of background.
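For reference, the factorized estimate used in those earlier studies can be written out explicitly (a sketch in standard notation; σ_jj denotes the jet-jet cross section and L_int the integrated luminosity):

    % Naive estimate: each jet is assumed to fake a photon independently,
    % so the per-jet misidentification rate enters squared.
    \[
    N_{\gamma\gamma}^{\mathrm{fake}} \;\approx\; \sigma_{jj}\, L_{\mathrm{int}}\, R_{\mathrm{jet}}^{2}
    \]
    % A full simulation instead measures the per-event probability directly,
    % which picks up the jet-jet correlations that factorization discards.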

6. Generation of the QCD background
• The QCD jet cross section depends strongly on the p_T of the parton in the hard interaction, and it is huge. We need a reasonable preselection at the generator level before passing events through the full detector simulation.
• An optimal choice of the p_T cut is needed. Our choice is p_T = 35 GeV.
• p_T = 35 GeV is a safe cut: we do not lose a significant fraction of events that could fake the Higgs signal at the preselection level.

7. Generator-level cuts
QCD background. Standard CMS cuts: E_T1 > 40 GeV, E_T2 > 25 GeV, |η_1,2| < 2.5; at least one pair of any two neutral particles (π⁰, γ, e, η, η′, ω, K_S⁰) with E_T1 > 37.5 GeV, E_T2 > 22.5 GeV, |η_1,2| < 2.5, and m_inv in the 80-160 GeV range. Rejection factor at the generator level: ~3000.
Photon bremsstrahlung background. Standard CMS cuts: E_T1 > 40 GeV, E_T2 > 25 GeV, |η_1,2| < 2.5; at least one neutral particle (π⁰, γ, e, η, η′, ω, K_S⁰) with E_T > 37.5 GeV and |η| < 2.5. Rejection factor at the generator level: ~6.
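The rejection factors translate directly into the size of the sample that must be fully simulated (simple arithmetic on the numbers quoted above):

    \[
    \sigma_{\mathrm{presel}} \;\approx\; \frac{\sigma_{jj}}{3000}
    \;\approx\; \frac{10^{10}\ \mathrm{pb}}{3000}
    \;\approx\; 3\times 10^{6}\ \mathrm{pb}
    \]
    % Only about 1 in 3000 generated QCD jet-jet events survives the
    % generator-level cuts and needs the expensive full detector simulation.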

8. Challenges of a CMS Run
A CMS run naturally divides into two phases:
• Monte Carlo detector response simulation
• 100s of jobs per run, each generating ~1 GB
• all data passed to the next phase and archived
• reconstruction of physics from the simulated data
• 100s of jobs per run
• jobs coupled via Objectivity database access
• ~100 GB of data archived
Specific challenges:
• each run generates ~100 GB of data to be moved and archived
• many, many runs are necessary
• simulation & reconstruction jobs run at different sites
• large human effort for starting & monitoring jobs and moving data

9. Tools
• Generation level: PYTHIA 6.152 (CTEQ 4L structure functions) - http://www.thep.lu.se/~torbjorn/Pythia.html
• Full detector simulation: CMSIM 121 (includes the full silicon version of the tracker) - http://cmsdoc.cern.ch/cmsim/cmsim.html
• Reconstruction: ORCA 5.2.0 with pileup at L = 2×10³³ cm⁻²s⁻¹ (~30 pileup events per signal event) - http://cmsdoc.cern.ch/orca

10. Analysis Chain (figure: the full analysis chain)

11. Meeting the Challenge with Globus and Condor
• Globus middleware is deployed across the entire Alliance Grid
• remote access to computational resources
• dependable, robust, automated data transfer
• Condor
• strong fault tolerance, including checkpointing and migration
• job scheduling across multiple resources
• layered over Globus as a "personal batch system" for the Grid

12. CMS Run on the Alliance Grid
• Caltech CMS staff prepare input files on a local workstation
• Pushing "one button" launches the master Condor job, which runs at Caltech
• Input files are transferred by the master Condor job to the Wisconsin Condor pool (~700 CPUs) using Globus GASS file transfer
(diagram: Caltech workstation sends input files via Globus GASS to the WI Condor pool)

13. CMS Run on the Alliance Grid
• The master Condor job at Caltech launches a secondary Condor job on the Wisconsin pool
• The secondary Condor job launches 100 Monte Carlo jobs on the Wisconsin Condor pool (a sketch of such a submit file follows below)
• each runs 12-24 hours
• each generates ~1 GB of data
• Condor handles checkpointing & migration
• no staff intervention is needed
(diagram: master Condor job at Caltech, secondary Condor job on the WI pool, 100 Monte Carlo jobs on the Wisconsin Condor pool)
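A minimal sketch of the kind of submit file the secondary job might use for the 100 Monte Carlo jobs, assuming Condor's standard universe (which is what provides the transparent checkpointing and migration mentioned above); the executable and file names here are hypothetical, not taken from the actual production setup:

    # Hypothetical submit file: 100 Monte Carlo jobs in one cluster.
    # The standard universe links the job against Condor's checkpoint
    # library, enabling checkpointing and migration without staff action.
    universe    = standard
    executable  = cms_montecarlo             # hypothetical binary
    arguments   = -seed $(Process)           # vary the random seed per job
    input       = mc_input_$(Process).dat    # hypothetical per-job input
    output      = mc_$(Process).out
    error       = mc_$(Process).err
    log         = mc.log
    queue 100                                # $(Process) runs 0..99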

14. CMS Run on the Alliance Grid
• When each Monte Carlo job completes, its data are automatically transferred to UniTree at NCSA
• each file is ~1 GB
• transfers use the Globus-enabled FTP client "gsiftp" (an example command follows below)
• NCSA UniTree runs a Globus-enabled FTP server
• authentication to the FTP server is done on the user's behalf using a digital certificate
(diagram: 100 data files, ~1 GB each, transferred via gsiftp from the Wisconsin Condor pool to NCSA UniTree with its Globus-enabled FTP server)
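For illustration, a transfer like the one described above could be issued with globus-url-copy, the standard Globus-enabled FTP client of that era (the host name and paths here are hypothetical):

    # Hypothetical: move one ~1 GB output file to NCSA UniTree over
    # gsiftp, authenticating with the user's Globus digital certificate.
    globus-url-copy \
        file:///scratch/cms/mc_042.fz \
        gsiftp://unitree.ncsa.uiuc.edu/cms/run90/mc_042.fz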

15. CMS Run on the Alliance Grid
• When all Monte Carlo jobs are complete, the secondary Condor job reports completion to the master at Caltech
• The master Condor job at Caltech launches a job to stage the data from NCSA UniTree to the NCSA Linux cluster
• the job is launched via the Globus jobmanager on the cluster (an example follows below)
• data are transferred using Globus-enabled FTP
• authentication is done on the user's behalf using a digital certificate
(diagram: the secondary Condor job on the WI pool reports completion to the master Condor job at Caltech; the master starts a job via the Globus jobmanager on the NCSA Linux cluster, and gsiftp fetches the data from UniTree)
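One way such a staging job can be launched through a Globus jobmanager is with the globus-job-run client (a standard Globus command; the contact string, jobmanager name, and script path below are hypothetical):

    # Hypothetical: run a staging script on the NCSA Linux cluster
    # via its Globus jobmanager (GRAM), authenticated by certificate.
    globus-job-run cluster.ncsa.uiuc.edu/jobmanager-pbs \
        /home/cms/stage_from_unitree.csh run90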

16. CMS Run on the Alliance Grid
• The master Condor job at Caltech launches the physics reconstruction jobs on the NCSA Linux cluster
• the jobs are launched via the Globus jobmanager on the cluster
• the master Condor job continually monitors them and logs progress locally at Caltech
• no user intervention is required
• authentication is done on the user's behalf using a digital certificate
(diagram: the master Condor job at Caltech starts reconstruction jobs via the Globus jobmanager on the NCSA Linux cluster)

17. CMS Run on the Alliance Grid
• When the reconstruction jobs complete, the data are automatically archived to NCSA UniTree
• data are transferred using Globus-enabled FTP
• After the data are transferred, the run is complete and the master Condor job at Caltech emails notification to the staff
(diagram: data files transferred via gsiftp from the NCSA Linux cluster to UniTree for archiving)

18. Condor Details for Experts
• Use Condor-G (Condor + Globus)
• allows Condor to submit jobs to a remote host via a Globus jobmanager
• any Globus-enabled host is reachable (with authorization)
• Condor jobs run in the "globus" universe
• use familiar Condor classads for submitting jobs:

    universe          = globus
    globusscheduler   = beak.cs.wisc.edu/jobmanager-condor-INTEL-LINUX
    environment       = CONDOR_UNIVERSE=scheduler
    executable        = CMS/condor_dagman_run
    arguments         = -f -t -l . -Lockfile cms.lock -Condorlog cms.log -Dag cms.dag -Rescue cms.rescue
    input             = CMS/hg_90.tar.gz
    remote_initialdir = Prod2001
    output            = CMS/hg_90.out
    error             = CMS/hg_90.err
    log               = CMS/condor.log
    notification      = always
    queue
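Such a description is handed to Condor in the usual way, e.g. condor_submit cms_run.sub (the file name here is hypothetical); Condor-G then forwards the job to the Globus jobmanager named in globusscheduler, in this case the Condor jobmanager on beak.cs.wisc.edu.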

19. Condor Details for Experts
• Exploit Condor DAGMan
• DAG = directed acyclic graph
• submission of Condor jobs based on dependencies
• job B runs only after job A completes, job D runs only after job C completes, job E only after A, B, C & D complete, etc.
• includes both pre- and post-job script execution for data staging, cleanup, or the like:

    Job jobA_632 Prod2000/hg_90_gen_632.cdr
    Job jobB_632 Prod2000/hg_90_sim_632.cdr
    Script pre jobA_632 Prod2000/pre_632.csh
    Script post jobB_632 Prod2000/post_632.csh
    PARENT jobA_632 CHILD jobB_632

    Job jobA_633 Prod2000/hg_90_gen_633.cdr
    Job jobB_633 Prod2000/hg_90_sim_633.cdr
    Script pre jobA_633 Prod2000/pre_633.csh
    Script post jobB_633 Prod2000/post_633.csh
    PARENT jobA_633 CHILD jobB_633
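A DAG like this is started with condor_submit_dag cms.dag; DAGMan itself is the condor_dagman scheduler-universe job shown on the previous slide, and after a failure it writes a rescue file (cms.rescue) recording the completed nodes, so that only the unfinished work is redone on resubmission.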

  20. Monte Carlo Samples Simulated and Reconstructed

  21. CPU timing

22. All cuts except isolation are applied
• Distributions are normalized to L_int = 40 pb⁻¹

23. Isolation
Tracker isolation. Isolation cut: the number of tracks with p_T > 1.5 GeV in a ΔR = 0.30 cone around the photon candidate must be zero. We are still optimizing the p_T threshold and the cone sizes.
ECAL isolation. The sum of E_T in a cone around the photon candidate, computed from the E_T of ECAL clusters. Isolation cut: the E_T sum in a ΔR = 0.30 cone around the photon candidate must be less than 0.8 GeV.
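For completeness, the cone variable used in both isolation cuts is the standard collider-physics separation (a standard definition, not specific to this study):

    \[
    \Delta R \;=\; \sqrt{(\Delta\eta)^{2} + (\Delta\phi)^{2}}
    \]
    % Δη, Δφ: pseudorapidity and azimuthal-angle separations between the
    % track (or ECAL cluster) and the photon candidate.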

  24. Background Cross Section

25. Conclusions
• The goal of this study is to increase the efficiency of computer-resource use and to minimize human intervention during simulation and reconstruction
• "Proof of concept": it is possible to build a distributed system based on Globus and Condor (MOP is operational now)
• A lot of work lies ahead to make this system as automatic as possible
• Important results have been obtained for the Higgs boson search in the two-photon decay mode
• the main background is the one with a prompt photon plus a bremsstrahlung photon or an isolated π⁰, which is ~50% of the total background; the QCD background is reduced to ~15% of the total background
• More precise studies need much more CPU time
