
Run on 2010 data

Liming Zhang, May 07, 2010


  1. Run on 2010 data Liming Zhang May 07, 2010

  2. Look for Data Set (I)
  • Type "lhcb_bkk" on hepc1 (thanks to JC) or lxplus (CERN computer) to launch the bookkeeping browser for data sets. A Grid certificate is needed on hepc1 or lxplus; for details see https://twiki.cern.ch/twiki/bin/view/LHCb/FAQ/Certificate#Get_or_renew_a_certificate
  • Data under LHCb/COLLISION10/Real Data+RecoDST-2010-02/9000000, from the old Brunel processing (v36r2), is stable and running OK now, but it will eventually be removed.
  • People should now look at data under LHCb/Collision10 (the new name).
  • Newly processed data is under LHCb/Collision10/RealData+RecoStripping-0x/9000000/. The MiniBias.DST stream contains almost all the data; you can also look at a specific stripping stream.
  • Then save the data list to datalist.py.
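The saved datalist.py is an ordinary Gaudi options file. As a rough sketch of what the bookkeeping export looks like (the LFN below is a made-up placeholder, not a real 2010 file), it sets the EventSelector input along these lines:

```python
# Sketch of a bookkeeping-exported datalist.py.
# The LFN below is a hypothetical placeholder, not a real data file.
from Gaudi.Configuration import EventSelector

EventSelector().Input = [
    "DATAFILE='LFN:/lhcb/data/2010/DST/00001234/0000/00001234_00000001_1.dst'"
    " TYP='POOL_ROOTTREE' OPT='READ'",
]
```

This file is then passed to DaVinci alongside your own options (for example as a second entry in j.application.optsfile when submitting with Ganga, as on the later slides).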

  3. Look for Data Set (II)
  • Right-click "Real Data+RecoStripping-01" (for example) to get a "More Information" button; click it to find the DDDB and CondDB tags that you will use in your DaVinci script.

  4. An example of a DaVinci script for data

    from Gaudi.Configuration import *
    from Configurables import GaudiSequencer, DecayTreeTuple, CheckPV
    from Configurables import PhysDesktop, CombineParticles, FilterDesktop, OfflineVertexFitter

    # Your selection and ntuple-filling code
    importOptions("/afs/cern.ch/user/l/lzhang/WORK/data10/D0MuX/D0MuX.py")
    importOptions("/afs/cern.ch/user/l/lzhang/WORK/data10/D0MuX/D0MuXTree.py")
    b2D0MuXSeq = GaudiSequencer( 'b2D0MuXSeq' )
    tuple = DecayTreeTuple( 'tupleb2D0Mu' )
    tuplews = DecayTreeTuple( 'tupleb2D0Muws' )

    # Select only beam-crossing data
    from Configurables import LoKi__ODINFilter
    _bxFilter = LoKi__ODINFilter('BxFilter', Code = " ODIN_BXTYP == LHCb.ODIN.BeamCrossing ")

    # Trigger: L0 settings (not used by me; alternatively, use DecayTreeTuple
    # to save the trigger information and cut on it there)
    from Configurables import L0Filter
    l0filter = L0Filter()
    l0filter.OrChannels = ["CALO"]

    # Main sequencer (bx filter, CheckPV and my selection)
    MainSeq = GaudiSequencer( 'MainSeq' )
    MainSeq.Members += [ _bxFilter, CheckPV(), b2D0MuXSeq ]

    # DaVinci setup
    from Configurables import DaVinci
    DaVinci().Simulation = False
    DaVinci().EvtMax = -1      # Number of events

    # Access the database on the Grid (data only). The released database
    # covers data only up to April 17; data taken after that date needs the
    # Grid database (the following three lines). Earlier data doesn't need them.
    from Configurables import CondDB
    CondDB(UseOracle = True)
    importOptions("$APPCONFIGOPTS/DisableLFC.py")

    DaVinci().UserAlgorithms = [ MainSeq, tuple, tuplews ]
    DaVinci().DataType = "2010"
    # DB tags from the bookkeeping
    DaVinci().DDDBtag = "head-20100407"
    DaVinci().CondDBtag = "head-20100414"
    DaVinci().TupleFile = "DVNtuples.root"
    DaVinci().PrintFreq = 1000
    DaVinci().HltThresholdSettings = 'Physics_320Vis_300L0_10Hlt1_Hlt2_Feb10'

  5. Submit jobs using Ganga from lxplus
  • Set up Ganga each time you run:
        SetupProject Ganga v505r3
        lhcb-proxy-init
  • Copy code from $DAVINCIROOT/job/DaVinci_Ganga.py, then comment or uncomment lines in it. The following is an example:

    j = Job( application = DaVinci( version = 'v25r2p3' ) )
    j.name = 'MyDaVinci'
    appOpts = j.application.user_release_area + '/DaVinci_' + j.application.version + '/Phys/DaVinci/options/'
    workdir = '/afs/cern.ch/user/l/lzhang/WORK/data10/D0MuX/'
    j.application.optsfile = [ File( workdir + 'run.py' ), File('/afs/cern.ch/user/l/lzhang/data10_MiniBias.py') ]
    j.splitter = DiracSplitter( filesPerJob = 20, maxFiles = 10000 )
    j.outputdata = ['DVNtuples.root']
    j.backend = Dirac()
    j.submit()

  • Run it by typing "ganga DaVinci_Ganga.py"
  • https://twiki.cern.ch/twiki/bin/view/LHCb/LHCbComputing#Distributed_Analysis_Ganga

  6. Output of job
  • Log files are located under ~/gangadir/workspace/$USER/LocalXML
  • The file you put in "j.outputdata" will be uploaded to Grid storage. To download it to your local disk:
  • Store the following code in a file, then run it in ganga by typing "ganga *.py #", where # is the job number:

    import os, sys
    jid = sys.argv[1]
    j = jobs(jid)
    for js in j.subjobs:
        print js.id, js.status, js.backend.status
        if js.status == 'completed':
            ds = js.backend.getOutputDataLFNs()
            f = ds.files
            f[0].download()
            name = str(js.id) + ".root"
            os.rename('DVNtuples.root', name)
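The rename step above keeps each subjob's DVNtuples.root from overwriting the previous one. A minimal standalone illustration of that pattern (no Ganga assumed here; a temporary directory and fake subjob IDs stand in for the real downloads):

```python
import os, tempfile

# Simulate the per-subjob rename: each download produces 'DVNtuples.root',
# which is immediately renamed to '<subjob id>.root' so it is not overwritten.
workdir = tempfile.mkdtemp()
names = []
for subjob_id in [0, 1, 2]:                 # fake subjob IDs for illustration
    src = os.path.join(workdir, 'DVNtuples.root')
    with open(src, 'w') as fh:              # stand-in for f[0].download()
        fh.write('ntuple %d' % subjob_id)
    dst = os.path.join(workdir, '%d.root' % subjob_id)
    os.rename(src, dst)
    names.append(os.path.basename(dst))

print(names)   # -> ['0.root', '1.root', '2.root']
```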

  7. Check job status
  • Method 1: Browser — https://lhcbweb.pic.es/DIRAC/LHCb-Production/lhcb_user/jobs/JobMonitor/display#
  • Method 2: in ganga, type "jobs[id]", where id is the number of your job
  • Resubmit failed jobs:

    j = jobs[5]
    for js in j.subjobs:
        if js.status == 'failed':
            js.backend.resubmit()

  • http://twiki.lal.in2p3.fr/bin/view/LHCb/LHCbGangaTutorial#Check_job_status
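Before resubmitting, it can help to tally subjob states first. A sketch of that variation (Ganga's subjob objects are mocked here with plain dicts, since no ganga session is assumed):

```python
from collections import Counter

# Stand-in for j.subjobs: in a real ganga session these would be subjob
# objects with a .status attribute; plain dicts are used here to illustrate.
subjobs = [{'status': 'completed'}, {'status': 'failed'},
           {'status': 'running'}, {'status': 'failed'}]

tally = Counter(js['status'] for js in subjobs)
print(dict(tally))   # -> {'completed': 1, 'failed': 2, 'running': 1}

# Indices of the subjobs that would be resubmitted
to_resubmit = [i for i, js in enumerate(subjobs) if js['status'] == 'failed']
print(to_resubmit)   # -> [1, 3]
```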
