
Status of work

This report summarizes the author's work: the QCD high pT group's October exercise (dataset skimming, CMS grid usage, data validation, and dataset publication), dijet angular distribution studies, and Tracking DPG service work (pixel test beam, pixel track reconstruction, and pixel charge calibration).


Presentation Transcript


  1. Status of work
  NSF Report 2009-2010. Suvadeep Bose (date of joining: 25 Sept 2009)
  Outline:
  • October Exercise (~2 weeks) [JetMET/QCD PAG]
  • Dijet angular distributions (~2 months) [QCD PAG]
  • Pixel Test Beam at FNAL (~1 month) [Tracking DPG]
  • Pixel track reconstruction (~1 month) [Tracking DPG]
  • Data-based pixel charge calibration (Vcal calibration) (~1 month) [Tracking DPG]

  2. Compact Muon Solenoid (CMS)

  3. October Physics Exercise (Oct 5 – 19, 2009)
  The QCD high pT group was asked to produce a common skim based on single-jet trigger bits. The goals of the exercise were:
  • Test the full workflow of data handling; estimate the time needed and identify problems.
  • Estimate the reduced event content and the total storage space needed.
  • Explore the feasibility of a common skim between the low pT and high pT QCD groups.
  • Verify that the event content is adequate for all analyses.
  Three persons were appointed for the QCD group; I was responsible for the high pT subgroup.
  Outline of the work performed during the exercise (an illustrative grid configuration follows after this list):
  • Skim the secondary datasets.
  • Run on the CMS grid (CRAB jobs) and report the outcome of the jobs.
  • Store the skimmed data at a remote site (T2).
  • Validate the skimmed set against the Secondary Dataset (SD).
  • Publish in the local DBS (Data Bookkeeping System).
  • Transfer the dataset from one T2 to another.
  • Publish the datasets into the global DBS of CMS.
  [Figure: validation of the skimmed datasets.]
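  For illustration, a minimal grid-submission configuration in the CRAB2 crab.cfg style of that period. This is a sketch only: the dataset path, parameter-set file, site name, publication name, and DBS URL below are placeholders, not the actual values used during the exercise.

    [CRAB]
    jobtype = cmssw
    scheduler = glite

    [CMSSW]
    # placeholder dataset and CMSSW parameter set, not the real ones
    datasetpath = /ExampleSD/ExamplePeriod-v1/RECO
    pset = qcdHighPtSkim_cfg.py
    total_number_of_events = -1
    events_per_job = 50000

    [USER]
    return_data = 0
    copy_data = 1
    storage_element = T2_Example_Site
    publish_data = 1
    publish_data_name = QCDHighPt_OctoberExercise_skim
    dbs_url_for_publication = <local DBS write URL>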

  4. QCD and Dijets
  • Hard collisions between protons produce events containing two high energy jets (dijets).
  • Measurement of the distribution of the scattering angle between the dijet axis and the proton beam, in the dijet centre-of-mass frame, provides a fundamental test of QCD and a sensitive probe of new physics.
  • Dijet angular distributions reflect the dynamics of the hard scattering of quarks and gluons, and are expected to be fairly insensitive to the momentum distributions of the partons within the proton.
  • As in Rutherford scattering, dijet angular distributions from QCD processes are peaked in the forward direction.
  • In contrast, many sources of new physics (e.g. quark compositeness) produce more isotropic dijet angular distributions.

  5. What is compositeness?
  • Quarks may not be fundamental particles but rather agglomerations of smaller constituents called “preons”.
  • Compositeness becomes visible above a characteristic energy scale Λ; below this scale quarks appear point-like.
  • Λ characterizes both the strength of the preon coupling and the physical size of the composite state.
  • A signature of quark compositeness can be seen in the angular distribution of dijets.
  • At small CM scattering angles, the dijet angular distribution predicted by leading-order QCD is proportional to the Rutherford cross-section.
  • By convention the angular distribution is measured in the flattened variable χ, defined below.
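  For reference, the standard definition of χ in terms of the rapidities y1, y2 of the two leading jets, or equivalently the CM scattering angle θ* (written here in LaTeX):

    \chi \, = \, e^{|y_1 - y_2|} \, = \, \frac{1 + |\cos\theta^*|}{1 - |\cos\theta^*|}

  In this variable the Rutherford-like t-channel QCD cross-section dσ/dχ is approximately flat, so deviations at low χ, where more isotropic new-physics signals would pile up, are easy to see.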

  6. Dijet Angular Distribution
  The dominant QCD process is “t-channel” (Rutherford-like) scattering, for which the cross-section expressed in terms of χ is almost flat, allowing for easier comparison between theory and measurement.
  [Diagram: inclusive dijet production, p + p → jet + jet + X, where X can be anything including additional jets; η1 (η2) is the rapidity of the leading (2nd leading) jet, and θ* is the scattering angle in the centre-of-mass frame.]
  [Plot: DØ dijet angular distribution.]
  • Earlier experiments at the Tevatron (DØ and CDF) used measurements of dijet angular distributions to test the predictions of QCD.
  • They found no evidence of quark substructure when compared to standard QCD predictions.
  • Their limits place the contact-interaction scale Λ above 2.2 TeV.

  7. Dijet Angular Distribution in CMS
  The Monte Carlo sample:
  • QCD dijet samples generated with the PYTHIA event generator for a centre-of-mass energy of 7 TeV (3.5 TeV per proton beam at the LHC).
  • The LHC start-up trigger menu (8E29) was used.
  • Jets were reconstructed from calorimeter energies using the CMS default cone algorithm with cone size 0.5.
  • The dijet system consists of the two jets with the highest transverse momentum in the event (the leading jets).
  • We measure inclusive dijet events, defined as pp → 2 jets + X.
  • The dijet mass is the invariant mass of the two leading jets: Mjj² = (E1 + E2)² − |p1 + p2|², where Ei and pi are the energy and momentum vector of the i-th leading jet.
  • Samples were selected using single-jet triggers in the transverse-momentum (pT) region where the triggers are more than 99% efficient.
  • Jets are primarily measured in the calorimeters and are corrected with the appropriate jet energy corrections (relative η and absolute pT). A sketch of the dijet-mass computation follows below.
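  As a concrete illustration of the dijet-mass definition above, a minimal Python sketch (not CMSSW code; the jet kinematics below are made up):

    import math

    def four_vector(pt, eta, phi, e):
        """Return (E, px, py, pz) for a jet with the given kinematics."""
        px = pt * math.cos(phi)
        py = pt * math.sin(phi)
        pz = pt * math.sinh(eta)
        return (e, px, py, pz)

    def dijet_mass(jet1, jet2):
        """Mjj = sqrt((E1+E2)^2 - |p1+p2|^2) for the two leading jets."""
        e = jet1[0] + jet2[0]
        px = jet1[1] + jet2[1]
        py = jet1[2] + jet2[2]
        pz = jet1[3] + jet2[3]
        return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

    # Example: two back-to-back 200 GeV jets at eta = +/-0.5 (massless jets,
    # so E = pT * cosh(eta)); this gives Mjj ~ 451 GeV.
    j1 = four_vector(200.0, 0.5, 0.0, 200.0 * math.cosh(0.5))
    j2 = four_vector(200.0, -0.5, math.pi, 200.0 * math.cosh(0.5))
    print(dijet_mass(j1, j2))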

  8. Trigger thresholds (8E29 menu)
  [Plots: trigger menu for LHC start-up [8E29]; corrected calojet pT spectra for the Jet30U/Jet15U and Jet50U/Jet30U trigger combinations.]
  * e.g. Jet15U means a 15 GeV threshold on the uncorrected jet transverse momentum.

  9. Event selections
  • To select events with high trigger efficiency, and to focus on the region of the CMS detector with the most high transverse momentum jets, events are selected within the barrel region.
  • Let y1 be the rapidity of the leading jet and y2 the rapidity of the 2nd leading jet. We define
    yboost = 0.5 (y1 + y2) and y* = 0.5 |y1 − y2|,
    and require |yboost| < 1.5 and y* < 1.5, so that |yboost| + y* < 3.0. Since χ = exp(2y*), the y* cut implies χ < e³ ≈ 20 for the dijet angular variable.
  • The second leading jet momentum (corrected) is required to be greater than 50 GeV, to avoid any possibility of a soft jet appearing as one of the dijets.
  A sketch of these selections follows below.
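  A minimal sketch of the selection above, assuming only the cuts listed on this slide (not code from the analysis framework):

    import math

    def passes_selection(y1, y2, pt2_corrected):
        """Apply the angular and pT cuts of the dijet analysis."""
        y_boost = 0.5 * (y1 + y2)     # longitudinal boost of the dijet system
        y_star = 0.5 * abs(y1 - y2)   # half the rapidity separation
        chi = math.exp(2.0 * y_star)  # dijet angular variable, < e^3 ~ 20 here
        return (abs(y_boost) < 1.5 and y_star < 1.5
                and pt2_corrected > 50.0)  # GeV, rejects soft second jets

    print(passes_selection(0.4, -0.6, 120.0))  # True: central, hard dijet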

  10. Selecting mass bins
  • We demand that a single-jet trigger be > 99% efficient in each mass bin.
  • Bins of dijet mass Mjj are then determined for each trigger.
  [Table: dijet mass ranges assigned to the Jet15U, Jet30U and Jet50U triggers; the visible boundaries are 350 and 510 GeV.]
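  Purely illustrative: one way the per-mass-bin trigger assignment could look, assuming (from the boundaries visible on the slide) that 350 and 510 GeV separate the Jet15U/Jet30U and Jet30U/Jet50U regions; the actual bin edges are in the original table.

    def trigger_for_mass(mjj):
        """Pick the single-jet trigger assumed >99% efficient at this Mjj."""
        if mjj < 350.0:       # GeV; boundary from the slide, assignment assumed
            return "Jet15U"
        if mjj < 510.0:       # GeV
            return "Jet30U"
        return "Jet50U"

    print(trigger_for_mass(420.0))  # -> Jet30U under the stated assumption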

  11. Dijet angular distribution (χ)
  [Plots: expected dijet angular distributions at √s = 7 TeV for an integrated luminosity L = 10 pb⁻¹, with the total statistical uncertainty.]
  Future plans:
  • Compare with other Monte Carlo samples (MADGRAPH, HERWIG).
  • Study the systematic uncertainties coming from the acceptance criteria, from unfolding to particle level, and from the jet energy scale.
  • Compare with samples produced with a contact-interaction scale Λ.
  • Compare with the NLO QCD prediction.
  • Finally, when we have data, tune the Monte Carlo with data.

  12. Pixel Testbeam at FNAL (Nov 30 – Dec 24, 2009)
  Beam conditions: 120 GeV proton beam, spot size ~10 cm², intensity needed: 10–100k particles per 4 s spill.
  [Diagram, top view: beam incident on the DUT followed by the CMS pixel telescope planes in alternating X/Y orientations.]
  • The telescope had 8 pixel planes (4 in each of two orthogonal orientations).
  • Detector Under Test (DUT): diamond.
  • Planes consisted of CMS PSI46 plaquettes mounted on Al supports; 56 Read Out Chips (ROCs) in total.
  • Pixel dimensions: 100 μm × 150 μm.
  • Data acquisition using CAPTAN.
  [Plot: accumulated beam spot.]
  • 4-hour shifts every day, with breaks of days in between due to irregular beam supply.
  • Data analysis to begin shortly.

  13. Data-based pixel charge calibration
  • The calibration pulse is coupled capacitively to the preamplifier of each pixel.
  • The coupling capacitance varies from pixel to pixel by 15–20% RMS, as determined by:
    - direct calibration with sources,
    - the spread in the apparent calibration parameters,
    - the observed width of the cluster charge distributions in CRAFT.
  • This is entirely consistent with expectations from the ROC fabrication process.
  • If left uncorrected, it effectively adds an additional source of noise:
    - ~2000 e of effective noise,
    - which limits the achievable resolution.
  (For details of the VCAL calibration work, please refer to the last slide in the backup.)
  [Plot: study done by P. Trub on a test sample (Ag).]

  14. The procedure for VCAL calibration
  • The problem can be fixed by irradiating all 66M pixels with a known source of ionizing radiation before they become radiation damaged, and by comparing their responses to generate 66M relative gain corrections fi.
  • Use LHC tracks to illuminate the pixels:
    - use the 2-D cluster generator to predict the charge Qpred seen in each pixel.
  • Fit the measured charge Qmeas = fi · Qpred + pi:
    - translation, angle, and Lorentz-drift scans cover Qpred from threshold to ~20 ke;
    - the relative gains fj/fi determine the corrections needed.
  • Several approaches to building estimators for fi need to be investigated. The estimator should be:
    - linear in the true fi;
    - statistically powerful;
    - robust;
    - fast enough to run over 66M pixels in finite time.
  A sketch of the per-pixel fit follows below.
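  A minimal sketch of the per-pixel fit Qmeas = fi · Qpred + pi by ordinary least squares, for one pixel with paired predicted/measured charges (illustrative numbers, not the actual estimator chosen for the 66M-pixel problem):

    def fit_gain_correction(q_pred, q_meas):
        """Return (f_i, p_i) from paired (predicted, measured) charges."""
        n = len(q_pred)
        sx = sum(q_pred)
        sy = sum(q_meas)
        sxx = sum(x * x for x in q_pred)
        sxy = sum(x * y for x, y in zip(q_pred, q_meas))
        f_i = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # relative gain
        p_i = (sy - f_i * sx) / n                        # offset
        return f_i, p_i

    # Example: a pixel whose true gain is 5% high with a small offset;
    # charges span threshold to ~20 ke, as in the scans described above.
    pred = [3000.0, 6000.0, 10000.0, 15000.0, 20000.0]  # electrons
    meas = [1.05 * q + 200.0 for q in pred]
    print(fit_gain_correction(pred, meas))  # ~ (1.05, 200.0)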

  15. Pixel track reconstruction
  • There exists standalone software which builds track candidates from pixel pairs and triplets; it uses them for seeding but also for fast primary vertex (PV) reconstruction.
  • It needs to be tested whether this software works with real data, and its efficiency, ghost rate, etc. need to be measured.
  • Plans for work with pixel tracks:
    - pixel efficiency studies
    - pixel track studies
    - pixel vertexing studies
    - pixel seeding studies
  • The immediate goal is to look into pixel pair/triplet efficiencies and ghost rates, triplet track parameter resolutions, and pixel-track-based PV reconstruction efficiency, based on Monte Carlo for now and later on data (7 TeV). The efficiency and ghost-rate definitions are sketched below.
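  For clarity, a small sketch of the usual efficiency and ghost-rate definitions the studies above would measure (standard definitions, not code from the standalone software): a reconstructed track is a "ghost" if it matches no simulated track.

    def efficiency(n_sim_matched, n_sim_total):
        """Fraction of simulated tracks that were reconstructed."""
        return n_sim_matched / n_sim_total

    def ghost_rate(n_reco_unmatched, n_reco_total):
        """Fraction of reconstructed tracks with no simulated match."""
        return n_reco_unmatched / n_reco_total

    print(efficiency(930, 1000), ghost_rate(25, 955))  # illustrative numbers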

  16. Summary of work
  • Analysis work: QCD dijet angular distributions (7 TeV, 10 TeV). The framework is running for PYTHIA samples. Next steps: work with samples from other event generators and compare, and study systematics. The final goal is an early paper with 7 TeV data (summer 2010).
  • Service work in the QCD PAG (in collaboration with the JetMET POG): was responsible as a "data manager" for QCD high pT; skimmed the secondary dataset useful for the QCD high pT group during the October Physics exercise (Oct 5 – 19, 2009).
  • Service work in the Tracking DPG:
    - Pixel test beam at Fermilab: testing the diamond detector. Data taking finished just before Christmas; data analysis to start soon.
    - Data-based pixel charge calibration (started): preparing code to read 7 TeV Monte Carlo, and will also try to run over the 900 GeV and 2360 GeV LHC data taken in December 2009.
    - Track reconstruction using only pixels (pairs and triplets): preparing to revive old code in the current framework; to study the track efficiency of pixel-only data, especially useful for low pT tracks; to study primary vertex reconstruction and seeding.

  17. Backup: VCAL calibration - details
  To make pixel clusters, and then hits out of pixel clusters, we need to know the charge on each pixel. The charge is measured in ADC counts and then transformed into electrons using the "gain/pedestal" calibration. This calibration injects a "certain" amount of charge into each pixel; the injected charge is controlled by a parameter called VCAL. As the injected charge (VCAL) is varied, the chip response (in ADC counts) increases approximately linearly with it. The gain/pedestal calibration determines the slope and pedestal of this dependence, which are then used to transform the charge measured in ADC counts into electrons.
  The problem is that the injected charge (controlled by VCAL) is not well known. For a given value of VCAL the actual injected charge has a large spread:
    Q (in electrons) = slope * VCAL + offset,
  where slope = 65.5 with RMS = 8.9, and offset = -414 with RMS = 574.
  According to studies done already, this pixel-to-pixel charge spread (effectively noise) degrades the pixel hit resolution by up to 20%. To improve this, it was suggested to use real collision-data tracks to refine the gain/pedestal calibration. The suggestion is to gather enough data that 100-1000 tracks pass through each pixel (a back-of-the-envelope calculation suggests this would be a matter of days or weeks). For each pixel one would plot the observed charge versus the expected charge; the slope and offset of this plot would be the corrections to the existing calibration, improving the pixel charge measurement.
  Given that we have 66 million pixels this is not an easy job. One has to do feasibility studies, create the software workflow in CMSSW, write Root scripts that fit 66 million histograms, take the slope and offset from these fits, correct the gain and pedestal that exist in the DB, make a new DB gain/pedestal calibration, apply the new calibration using the corrected gain/pedestal, and study the effects. The VCAL-to-electrons conversion is sketched below.
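  A minimal sketch of the VCAL-to-electrons conversion with the nominal parameters quoted above; the per-pixel slope and offset scatter around these values (RMS 8.9 and 574 respectively), which is exactly the spread the data-based calibration aims to remove.

    NOMINAL_SLOPE = 65.5     # electrons per VCAL unit (nominal, from the slide)
    NOMINAL_OFFSET = -414.0  # electrons (nominal, from the slide)

    def vcal_to_electrons(vcal, slope=NOMINAL_SLOPE, offset=NOMINAL_OFFSET):
        """Injected charge in electrons for a given VCAL setting."""
        return slope * vcal + offset

    print(vcal_to_electrons(100))  # ~6136 e for the nominal parameters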
