Corfù, September 6th, 2005 High-Pt Physics: from the Tevatron to LHC • Introduction: the Tevatron, CDF and D0 in Run II • Tools for high-Pt physics: jets, leptons, b-tags, and all that • Higgs boson searches and prospects • Top quark physics searches and prospects • Electroweak physics searches and prospects • Not discussing today: • Precision QCD measurements • Searches for new physics / BSM / SUSY • Conclusions and perspectives Tommaso Dorigo University of Padova and INFN
What this talk is not • Not a showroom • skipping / forgetting / ignoring many interesting new results • some analyses only briefly mentioned • not giving a complete panorama • Not a fair balance between CDF and D0 • actually totally unfair • mostly focusing on CDF • Not a snapshot of where we stand • rather, a view of the issues we are facing in high-Pt physics in preparation for the LHC
The Tevatron in Run II • Massive upgrade with respect to Run I, to increase L by 1.5 orders of magnitude • Main Injector, antiproton Recycler • crossing time from 3.5 µs to 396 ns • increased antiproton yield and transfer efficiency • After a labored start in 2001-2002, the Tevatron is now working excellently • So far collected more than 800 pb-1 /experiment • Peak instantaneous luminosity by now regularly above 10^32 • less downtime, fewer stops for beam studies needed – just fine smooth running • In 2005-2006 crucial upgrades are being worked on to complete the picture • electron cooling • stacktail bandwidth upgrade • Two foreseen plans for data accumulation • Base plan: the minimal objective • Design plan: if everything works great
Run II: where we are right now Have been following design curve! Upgrades continuing – electron cooling of antiprotons is critical. As L increases, CDF and D0 catching up by modifying trigger tables, improving DAQ Design curve means 8 fb-1 by 2009! WE ARE HERE
The CDF Detector CDF significantly upgraded from Run 1: • New L00+SVX+ISL silicon detector • New central tracker • Extended muon coverage to |η|<1.5 • New end-plug calorimeters • SVT measures impact parameters to 45 µm at Level 2! The challenge is now smooth operation for many years of running…
The D0 Detector • Massively upgraded from Run 1 to include: • 77,000-channel scintillating fiber tracking • 2.0 Tesla solenoid • 800,000-channel silicon detector (4 barrel layers, 2-sided disks) • Extended muon coverage (MDT) • Tracker working well despite its small volume (R = 1/3 R_CDF) • High-performance b-tagging to |η|<2.0
The most common animals: Jets In hadronic interactions, jets of hadrons are the most common things one can observe They are common, but are they obvious to define ? “Obvious: something you may think about for 20 years and maybe understand” After 20 years of studies of pQCD, we think we understand what is going on… What we measure in our detectors is the combination of a multitude of effects Disentangling them is the key to understanding each of them better
Identification and measurement of hadronic jets Both CDF and D0 mainly use a cone algorithm (R=0.4 or 0.5) to identify localized depositions of energy in their calorimeters and measure hard partons Other algorithms (midpoint, Kt) are mainly used in QCD studies • When faced with the measurement of the kinematics of hard parton emissions, one has to deal with two distinct issues: • SCALE: calibrating the energy response, to minimize the average measurement error on a sample of jets • RESOLUTION: improving the precision of the energy measurement, decreasing the measurement error on an individual jet • The first issue is fundamental for precision mass measurements of hadronically decaying objects (e.g. top quarks) • The second issue is critical for the successful identification of low-S/N signals (e.g. Higgs bosons)
The JetClu Algorithm • Was initially designed to meet specifications from the Snowmass Accord (1992) • A seeded, iterative cone algorithm, with R=0.7 • custom prescription for splitting and merging • Several drawbacks • not the best option for QCD measurements • pQCD uses a larger cone (Rsep=1.3) to emulate the experimental procedure • not collinear safe, not IR safe (see next slide) • But also strong points • conceptually simple • sensible choice for 2 TeV physics • makes it easier to compute corrections and systematics • Start from an Et-ordered list of seed towers (Et>1 GeV) • do preclustering by creating a list of cones centered on seed towers, removing seeds as they are absorbed into cones • then add to each cone the Et of the towers within it, recompute the barycenter, and move the cone, iterating to convergence • if two cones share too much energy (>75%) they are merged
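The seed-and-iterate steps above can be sketched in a few lines of Python. This is a toy illustration of the cone iteration only, under assumptions of my own (tower lists as (Et, η, φ) tuples, a simple duplicate check instead of the full splitting/merging prescription); it is not CDF code:

```python
import math

def delta_phi(p1, p2):
    """Signed azimuthal difference wrapped into (-pi, pi]."""
    d = (p1 - p2) % (2 * math.pi)
    return d - 2 * math.pi if d > math.pi else d

def iterate_cone(towers, seed_eta, seed_phi, R=0.7, max_iter=100):
    """Iterate a cone of radius R from a seed position to a stable
    Et-weighted barycenter; returns (Et, eta, phi) or None if empty."""
    eta, phi = seed_eta, seed_phi
    for _ in range(max_iter):
        members = [(et, e, p) for (et, e, p) in towers
                   if (e - eta) ** 2 + delta_phi(p, phi) ** 2 < R ** 2]
        if not members:
            return None
        et_tot = sum(et for et, _, _ in members)
        new_eta = sum(et * e for et, e, _ in members) / et_tot
        new_phi = phi + sum(et * delta_phi(p, phi)
                            for et, _, p in members) / et_tot
        if abs(new_eta - eta) < 1e-5 and abs(delta_phi(new_phi, phi)) < 1e-5:
            break
        eta, phi = new_eta, new_phi
    return et_tot, new_eta, new_phi

def jetclu_like(towers, seed_et=1.0, R=0.7):
    """Run the cone iteration from every seed tower (Et > seed_et),
    Et-ordered; overlapping-cone splitting/merging is omitted here."""
    seeds = sorted((t for t in towers if t[0] > seed_et), reverse=True)
    jets = []
    for _, eta, phi in seeds:
        cone = iterate_cone(towers, eta, phi, R)
        # keep the cone only if it did not converge onto an existing jet
        if cone and all(abs(cone[1] - j[1]) > 1e-3 or
                        abs(delta_phi(cone[2], j[2])) > 1e-3 for j in jets):
            jets.append(cone)
    return jets
```

Two nearby towers are swallowed by one iterated cone, while a distant tower forms its own jet; seeds inside an already-found cone converge to the same barycenter and are discarded by the duplicate check.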
Shortcomings of standard cone algorithms Infrared safety The jet multiplicity changes if an arbitrarily soft emission is detected between two partons: the cone algorithm does not give a stable answer in the IR limit Collinear safety Replacing a massless parton by the sum of two collinear particles, a jet may fail detection due to the lack of a seed, and the jet multiplicity changes Fixed-order pQCD calculations contain uncanceled divergences…
The Midpoint Algorithm • Conceived to remove some of the problems of JetClu when compared to theoretical calculations • IR safety is mended by introducing imaginary seeds at the midpoint of each pair of jets close in angle, and iterating to convergence
The Kt algorithm QCD appears to separate partons into different jets according to their relative transverse momentum The Kt algorithm is therefore preferred by theory, and comparisons between experimental measurements and theoretical calculations are more straightforward
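The relative-transverse-momentum ordering can be made concrete with a minimal sketch of sequential recombination, the idea behind the Kt algorithm. This assumes simple (pt, η, φ) inputs and a pt-weighted recombination scheme of my choosing; real implementations use four-vectors and much faster data structures:

```python
import math

def delta_phi(p1, p2):
    d = (p1 - p2) % (2 * math.pi)
    return d - 2 * math.pi if d > math.pi else d

def kt_cluster(particles, R=1.0):
    """Inclusive kt clustering on (pt, eta, phi) tuples; returns final jets."""
    objs = list(particles)
    jets = []
    while objs:
        # beam distance d_iB = pt_i^2
        d_beam = min((p[0] ** 2, i) for i, p in enumerate(objs))
        # pairwise distance d_ij = min(pt_i, pt_j)^2 * dR_ij^2 / R^2
        d_pair = (float("inf"), -1, -1)
        for i in range(len(objs)):
            for j in range(i + 1, len(objs)):
                dr2 = ((objs[i][1] - objs[j][1]) ** 2
                       + delta_phi(objs[i][2], objs[j][2]) ** 2)
                d = min(objs[i][0], objs[j][0]) ** 2 * dr2 / R ** 2
                if d < d_pair[0]:
                    d_pair = (d, i, j)
        if d_beam[0] <= d_pair[0]:
            jets.append(objs.pop(d_beam[1]))   # closest to the beam: final jet
        else:
            _, i, j = d_pair                   # merge the closest pair
            pt = objs[i][0] + objs[j][0]
            eta = (objs[i][0] * objs[i][1] + objs[j][0] * objs[j][1]) / pt
            phi = objs[i][2] + objs[j][0] / pt * delta_phi(objs[j][2], objs[i][2])
            objs.pop(j); objs.pop(i)           # j > i, so this pop order is safe
            objs.append((pt, eta, phi))
    return jets
```

Because the smallest distance is always recombined first, soft and collinear splittings are absorbed before anything else happens, which is exactly why the algorithm is IR- and collinear-safe.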
Calibration of Jet Energy • To calibrate the energy measurement in CDF we use a detector-dependent correction, a scale correction, and a treatment of additional small physical effects • η-dependent correction: dijet balancing • multiple-interaction correction: f(Nvtx) • absolute scale correction: E/p of single tracks is used to tune the MC, which is then used to derive “calhad” corrections • last, out-of-cone and underlying-event corrections are made • Systematic errors reduced to 3% (data/MC comparisons, γ-jet balancing) • Calorimeter stability, MC (fragmentation, simulation of single-particle response) • Understanding of out-of-cone radiation and UE • Simulation of the response function versus jet rapidity • D0 has an almost-compensated calorimeter (e/π < 1.05, linear with energy); non-uniformities and gaps among cryostats need to be corrected • EM part is calibrated with Z→ee decays • Uranium noise measured in situ; other offset corrections address pile-up (energy from previous interactions) and underlying event • Response is measured as a function of rapidity and Et with γ-jet events • showering correction: Et flux vs ΔR off jet cones
CDF Jet Energy Corrections Pt_corr = (Pt_raw × f_rel − MI) × f_abs − UE + OOC
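The correction chain applies left to right: relative (η-dependent) response, multiple-interaction subtraction, absolute scale, underlying event, out-of-cone. A trivial sketch; the numeric values in the usage below are placeholders of mine, not CDF calibration constants:

```python
def corrected_pt(pt_raw, f_rel, mi, f_abs, ue, ooc):
    """CDF-style jet Pt correction chain, as written on the slide.

    pt_raw : raw cone Pt from the calorimeter
    f_rel  : eta-dependent (relative) response correction from dijet balancing
    mi     : average Et from multiple interactions, a function of N_vertices
    f_abs  : absolute scale correction (calorimeter to particle level)
    ue     : underlying-event Et inside the cone, subtracted
    ooc    : out-of-cone energy added back to reach the parton level
    """
    return (pt_raw * f_rel - mi) * f_abs - ue + ooc

# placeholder example: 50 GeV raw jet with illustrative correction factors
pt = corrected_pt(50.0, f_rel=1.02, mi=1.0, f_abs=1.1, ue=1.5, ooc=3.0)
```

Note the ordering matters: MI is subtracted before the absolute scale is applied (it is a calorimeter-level offset), while UE and OOC are particle/parton-level terms applied after.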
The b-Jet Energy Scale Issue • b-jets are different from generic jets • large mass of the leading hadron • semileptonic decays • hard fragmentation • Originally thought to be the most pressing issue for precision Mtop measurements • after the demonstration of auto-calibration with W→jj the picture is brighter • residual systematics of the b-JES on the top mass estimated at below the percent level • but that is a MC extrapolation… Need to measure the b-JES anyhow! • To calibrate b-jets, CDF exploits the SVT to trigger on Z→bb events in Run II • extract signal, fit, get the scale from the Z mass (more on that later) • But this technique is unfeasible at the LHC • background cross section is huge • rate of any b-jet trigger impossible to handle • Another possibility is searching for γ-b events • balancing the photon in the transverse plane with the jet, one obtains a calibration • but the b-fraction of the jets is typically 40-50% even after a tight b-tag by secondary vertex identification • D0 and CDF currently studying this technique – expect results soon
b-jet calibration with γ+b events • Use the MPF method: • select back-to-back γ-jet events • determine R_had from the missing-Et projection • apply b-tagging, separating into different samples for more handles • the resulting sample is a mixture of b, charm and light quarks • also use a tighter b-tag exploiting the mass of the tracks in the secondary vertex • can fit for R_b
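The missing-Et projection step is compact enough to write out. A sketch assuming ideal back-to-back γ-jet kinematics, with the photon response taken as unity (function and argument names are mine):

```python
import math

def mpf_response(pt_gamma, phi_gamma, met_x, met_y):
    """Missing-Et Projection Fraction estimate of the hadronic response.

    In a back-to-back photon+jet event any missing Et along the photon
    axis is attributed to jet mismeasurement, so
        R_had = 1 + (MET . n_gamma) / pt_gamma,
    with n_gamma the unit vector along the photon's transverse direction.
    """
    nx, ny = math.cos(phi_gamma), math.sin(phi_gamma)
    return 1.0 + (met_x * nx + met_y * ny) / pt_gamma
```

For example, a 30 GeV photon at φ=0 recoiling against a jet measured at 24 GeV yields MET = (−6, 0) and hence R_had = 0.8, i.e. the measured 24/30 of the true jet Et.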
What to do at the LHC ? • Z→bb signal extraction is unfeasible • γ-b balancing techniques might work – studies are ongoing • Calibrations using top quark decays are possible, but one would prefer an independent determination • I have a suggestion: use Zγ→bbγ events Advantages: • Automatically selects a qq initial state, boosting the S/N by an order of magnitude at typical Tevatron energies – surely more at the LHC – with respect to the inclusive Z→bb vs gg→bb ratio • The typical gg→bb initial state does not produce photons! • Can fully exploit the dedicated detectors for H→γγ • The resolution on Eγ is so good that one can determine the b-jet scale by just looking at the jet-jet ANGLE! Disadvantages: • Statistical power is limited by the small cross section
Improving the jet energy resolution • Calibrating the calorimeter response to streams of hadrons is one of the foundations of mass measurements • It corrects the average systematic offset of the measurement • But the precision of an individual jet’s energy determination is no less a foundation • Separation of reconstructed hadronic resonances (W, Z, top, Higgs, other fancier animals) critically depends on it • Even continuous-Q2 distributions benefit from a more precise measurement • Less well known is that top mass measurements benefit greatly from improved resolutions even in high-S/N samples
Tools for the improvement of the Et resolution • CDF has taken seriously the challenge to improve the jet Et resolution • Triggered by HSWG studies (more later) • The issue is complex: resolution can be improved in different ways depending on event characteristics, jet rapidity, flavor of the parton… • focus is the improvement of the dijet mass resolution through a more precise jet Et measurement • also focusing on b-jets • Three candidate algorithms identified and studied: • H1 algorithm: use the tracker for central charged hadrons • Track+Cal algorithm: categorize calorimeter towers, disentangle photon response, use the tracker for charged tracks • Hyperball algorithm: holistic approach to the problem. Use ALL information on the jet measurement, exploit the intercorrelation between jet observables and the Et measurement error
The Hyperball algorithm: statement of the problem • From an idea developed for the HSWG, 2003 • Imagine one measures a scalar quantity (say the Et of a jet), which is subject to all sorts of biases • Alongside Et, one measures heaps of other characteristics of the jet • several quantities in the calorimeter • track Pt information • photon clusters in the Strip Chambers • b-tagging information • etcetera • Many of the latter carry information about the biases of the Et measurement • for instance, a charged fraction larger than one speaks of an undermeasurement of the calorimeter Et • Simple-minded approaches to removing biases neglect cross-correlations • First I correct for the charged fraction, then for the presence of muons, then for the missing Et along the jet direction… In the end the computed biases wash each other out to some extent • How to correct for all these biases at once ?
Basics of the algorithm • Main hypothesis: a scalar field ΔEt: R^N → R exists and is continuous • Its value is the average error (positive or negative) in the jet Et measurement performed in the calorimeter, as a function of all thinkable jet observables: ΔEt = Et_meas − Et_true • Cannot determine ΔEt with infinite precision • How to best measure it ? • Hyperball method: • Fill R^N with MC b-jets (we know ΔEt for them!) • Need to locally average the value of ΔEt • What does locality mean ? • close to the point to be estimated • similar value of the most important variables • variables with smaller correlation matter less in the averaging • Generalized distance in R^N: D²(x,y) = Σᵢ wᵢ (xᵢ − yᵢ)² • Use D to find the MC points closest to the point where the scalar field is needed • Need to determine the weight vector w such that the closest MC points provide the best estimate of <ΔEt> • Geometrically this means determining the shape of the hyperellipsoids
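Operationally, the local average amounts to a k-nearest-neighbour regression under the weighted metric D. A toy sketch, assuming a fixed weight vector and neighbour count (the real method optimizes the metric, i.e. the hyperellipsoid shape):

```python
def hyperball_estimate(x, mc_points, weights, k=50):
    """Estimate the local average Et bias <dEt> at observable-vector x.

    x         : list of N jet observables for the jet to correct
    mc_points : list of (features, dEt) pairs from MC b-jets,
                where dEt = Et_meas - Et_true is known
    weights   : per-dimension metric weights w_i defining the generalized
                distance D^2(x, y) = sum_i w_i * (x_i - y_i)^2
    k         : number of nearest MC neighbours to average over
    """
    def d2(a, b):
        return sum(w * (ai - bi) ** 2 for w, ai, bi in zip(weights, a, b))

    # the k MC jets closest to x under the weighted metric
    nearest = sorted(mc_points, key=lambda p: d2(x, p[0]))[:k]
    return sum(de for _, de in nearest) / len(nearest)
```

The estimated bias is then subtracted from the measured Et; tuning the weights trades off "closeness" in strongly-correlated variables against indifference to weakly-correlated ones, exactly the locality notion on the slide.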
Promising results… Work in progress! • Applied the algorithm to b-jets (QCD direct production) • Resolution improves by about 30% throughout the Et spectrum studied • That means we can really get back the 10% relative resolution we promised for H→bb decay searches • Still lots to improve: • refine the list of variables used • use more MC for the ΔEt estimates • optimize everything
Identification of High Pt Leptons Most high-Pt final states studied at the Tevatron involve the detection of leptons: - easy to trigger on - high signal purity - easy to calibrate using standard candles (W, Z bosons) The Tevatron experiments are exploiting these signatures to the fullest, producing lots of precision Electroweak physics measurements with them Tau leptons are also beginning to contribute appreciably, especially to new physics searches which may be generation-dependent
Tagging b-jets Identifying b-jets is of paramount importance for low-mass Higgs boson searches. Three methods are well-tested and used: • Soft lepton tagging • Secondary vertex tagging • Jet Probability tagging For double-tag searches, efficiency factors get squared! To retain signal, both CDF and D0 have loose and tight tagging options Efficiency drops at low jet Et and high rapidity but is 45-50% for central b-jets from Higgs decay Mistag rates are typically kept at 0.5% SV tagging: tracks with a significant impact parameter are used in an iterative fit to identify the secondary vertex inside the jet
Secondary vertex tagging This event display shows how charged tracks are used to fit for secondary vertices in jets from a ttbar candidate (single-lepton decay) Decay lengths for 50 GeV b-jets are typically of the order of a few millimeters, and they can be easily reconstructed with tracks having at least 3 associated hits in the silicon detectors (σ_d ≈ 20 µm)
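The few-millimeter figure follows from a quick boost estimate. A back-of-the-envelope check, with assumed inputs of mine: a b hadron carrying roughly 80% of the jet's energy, m_B ≈ 5.28 GeV, cτ ≈ 460 µm:

```python
# Rough check of the quoted decay length for a b hadron in a 50 GeV jet.
# Assumed numbers (not from the slide): the b hadron carries ~80% of the
# jet energy, m_B ~ 5.28 GeV, proper decay length c*tau ~ 460 um.
m_b = 5.28          # GeV, b-hadron mass
p_b = 0.8 * 50.0    # GeV, typical b-hadron momentum in a 50 GeV jet
ctau_cm = 0.046     # cm, b-hadron proper decay length (~460 um)

boost = p_b / m_b                    # beta*gamma = p/m
decay_length_mm = boost * ctau_cm * 10.0   # mean lab-frame decay length, mm
```

With σ_d ≈ 20 µm on the track impact parameters, a ~3.5 mm flight distance is a very significant displacement, which is why the iterative vertex fit works so well.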
The last resort – or the main one? The Monte Carlo Simulation • The technology of reproducing the known behavior of high-energy interactions has reached exquisite heights • Several choices are now available which model QCD (let alone EW interactions) very successfully • full matrix-element computations and parton-shower modeling agree better by the day • But they are tuned with the data… Will they stand the test of a ×7 jump in CM energy ? • Unfortunately we are still critically dependent on PDF fits • an even larger extrapolation into the unknown at the LHC (more on that later) • Almost every analysis of high-Pt processes now relies heavily on Monte Carlo simulations • Let’s not forget that gross mistakes result from relying too much on MC to extrapolate into the unknown • Need to keep a cool head • Lesson for the future from the past: the use of data is fundamental at the start of a new endeavour, as it was in CDF and D0 in the early days of top searches • method 1 vs method 2 • likelihood methods vs pure counting experiments
“Simulation” From the Latin “simulacrum”… My Webster’s offers the following: 1) The act of simulating; pretense; feigning. 2) A simulated resemblance 3) An imitation or counterfeit 4) The use of a computer to calculate, by means of extrapolation, the effect of a given physical process
SM Higgs: Production and Decay At the Tevatron, about five 120 GeV Higgs bosons are produced in a typical day of running (it will be 15/day in two years). Direct production occurs mostly via gluon-gluon fusion diagrams. Associated production through a virtual W or Z boson provides sensitivity in the region where the LHC will have more trouble. At higher mass, the WW(*) final state becomes dominant. Even the WH→WWW(*) process is promising despite the low yield, due to the striking signature of missing Et plus three leptons, two of which may be of the same charge but different flavor.
What we know about the Higgs • Although they did not directly observe it, the LEP experiments have collected a wealth of information on the Higgs boson through comparisons of EW observables to EW theory + radiative corrections • From theory we know its couplings, its decay modes, and how its mass impacts the W and top masses. • If it exists, then we know its mass with about 60 GeV accuracy, and the direct search limit already cuts away a large part of the allowed mass region • Latest LEP results: MH = 126 +73 −48 GeV, MH < 280 GeV @ 95% CL (Winter ‘05), now being updated for the new Mtop…
Higgs Sensitivity WG Predictions In 2003 the Tevatron chances for Higgs discovery were re-evaluated Idea: with available data and operating detectors, one can better assess the Tevatron reach Surprisingly, the new results meet or exceed the 1998 SUSY/Higgs WG ones. • Keys to success: • mass resolution improvements; • optimized b-tagging; • shape information vs counting. [Plot: required luminosity (fb-1) vs MH, for the base and design plans]
Can we see dijet resonances if they are there? • A low-mass Higgs search entails believing that we can: • appropriately reconstruct hadronically-decaying objects • accurately understand our background shapes • All of that can be proven if we see the Z→bb decay in our data. The S/N is no higher than 1/5 at the most in the signal region • a good testing ground for the Higgs! • can be used to test/improve the dijet mass resolution with advanced algorithms We barely saw it in Run 1… Can we use it in Run 2 ?
CDF sees Z→bb decays in Run 2! Double b-tagged events with no extra jets and a back-to-back topology are the signal-enriched sample: Et3 < 10 GeV, Δφ12 > 3 Among 85,784 selected events CDF finds 3400±500 Z→bb decays - signal size ok - resolution as expected - jet energy scale ok! This is proof that we are in business with small-S/N jet resonances! CDF expects to stringently constrain the b-jet energy scale with this dataset
A few additional notes • b-jet Et scale = dominant systematics in Run I top mass measurements • top decay is a two-body one • very nearly linear relationship between Eb and Mt • At the Tevatron, the b-jet energy scale systematic is approximately σ(Mt) [GeV] ~ σ(Eb) [%] • At the LHC, the typical top quark boost softens the dependence: σ(Mt) [GeV] ~ 0.7 σ(Eb) [%] • for light-quark jets σ(Mt) [GeV] ~ 0.3 σ(Eb) [%] • In Run II we are demonstrating that by measuring with precision the JES of light-quark jets using W→jj, the part of σ(Mt) due to the modeling of b-jets (decays, fragmentation, color connection) can be reduced to below 1% (more later). • The Z→bb signal becomes important mainly as a testing ground for algorithms targeting the jet resolution improvements • In any case Z→bb decays may contribute appreciably to b-JES determinations: already with 300 pb-1 one gets a statistical error well below 2%
Search for WH in Run 2 To search for WH→lνbb events a detailed understanding of the composition of the W+jets sample is mandatory. In the 2-jet bin CDF finds 187 events with a b-tag, where 175±26 are expected, mostly from Wbb production and mistags. A fit to the dijet mass distribution allows the extraction of a 95% CL limit of 5 pb on SM WH production. The obtained limit is consistent both with a priori predictions and with expectations based on HSWG results.
Results with double-tagged events When two jets are required to be b-tagged, backgrounds are strongly reduced and mostly Wbb and ttbar remain The data are still in good agreement with expectations The extracted limit on WH production is 3-10 pb for MH=110-150 GeV
WH Search in D0 D0 also studies its W+2-jet bin with b-tagging in 384 pb-1 of high-Pt leptons from Run 2 data. The dijet mass distribution shows no anomaly with 1 b-tag. The 2-tag distribution is divided into search windows to set limits on Higgs production. They find 4 events with two b-tags in the mass window centered on 115 GeV (exp. 2.4±0.6) 95% CL limits on σ(WH)×B(H→bb) are set at 7 to 9 pb for MH=105-135 GeV By-product: a 95% CL limit is set on Wbb production (ΔR>0.75, Pt>20 GeV) at 4.6 pb.
High Mass Searches: H→WW(*) The SM production of WW pairs has been measured by CDF in Run 1 and by both CDF and D0 in Run 2: excellent agreement with NLO. To search for Higgs boson decays, events with two high-Pt leptons (e, µ) and large missing Et are selected; the tt background is rejected with a jet veto. Then both experiments use the helicity-preferred alignment of the charged leptons in φ to discriminate against known backgrounds.
CDF results on H→WW CDF searches for H→WW events by selecting two tight leptons (ee, eµ, µµ) with Et_e (Pt_µ) > 20 GeV and missing Et > 25 GeV (50 GeV if Δφ_ll < 20°). A strict jet veto (Et < 15 GeV if |η| < 2.5) rejects top candidates. Finally, a small dilepton mass is required (M_ll < 55-80 GeV for MH = 140-180 GeV). 8 events are observed in 184 pb-1 of Run 2 data with the M_ll < 80 GeV cut, with an expected background of 8.9±1.0. A likelihood fit to the Δφ_ll distribution is performed to extract a limit on the H→WW cross section as a function of its mass. The result excludes σ(H→WW)×B(WW→llνν) > 5.6 pb for MH=160 GeV.
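The cut flow above can be expressed as a simple selection function. This is a sketch with a hypothetical event representation of my own (leptons and jets as (Et, η) tuples, angles in degrees); it is not the actual CDF analysis code:

```python
def passes_hww_selection(leptons, jets, met, m_ll, dphi_ll):
    """Sketch of the H->WW cuts listed on the slide.

    leptons : list of (Et, eta); exactly two tight leptons required
    jets    : list of (Et, eta)
    met     : missing Et in GeV
    m_ll    : dilepton invariant mass in GeV (cut shown for MH ~ 160)
    dphi_ll : azimuthal angle between the leptons, in degrees
    """
    if len(leptons) != 2 or any(et <= 20.0 for et, _ in leptons):
        return False
    # harder missing-Et cut when the leptons are nearly collinear in phi
    met_cut = 50.0 if dphi_ll < 20.0 else 25.0
    if met <= met_cut:
        return False
    # jet veto against ttbar: no jet with Et > 15 GeV inside |eta| < 2.5
    if any(et > 15.0 and abs(eta) < 2.5 for et, eta in jets):
        return False
    # spin-0 Higgs prefers small dilepton mass
    return m_ll < 80.0
```

The small-Δφ/small-M_ll preference is the helicity argument of the previous slide: the two leptons from a spin-0 parent tend to emerge close together in azimuth.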
Higgs Physics: perspectives The Higgs boson is being hunted at the Tevatron in all advantageous search channels. D0 and CDF are competing – that’s good! – but will soon start to also combine their results. No surprises with the analyzed 200 pb-1 samples, but we have already three times more data on tape to look at! We are on track to surpass the LEP2 lower limit on MH by 2007 By the end of 2009, the Tevatron might be able to see a MH=115 GeV Higgs at 5σ, or exclude it all the way to 180 GeV. …but that will require both cunning and the Tevatron delivering according to the design plan! What I feel I can promise at 95% CL: exclusion up to 135 GeV, 3σ evidence at 115 GeV.
Implications for LHC The LHC starts collecting physics data in April 2008 if everything works as it should, the Higgs is discovered by CMS and ATLAS in 2009 (a few fb-1 should suffice) However, fits prefer a Higgs mass in the region favoring the Tevatron and hampering the LHC…. Let’s hypothesize MH=115 GeV. Three possible scenarios: • Scenario A: Tevatron design, LHC delays: first hints from CDF and D0 (3σ, early 2008) allow the LHC to put their chips in the right place; confirmation, common discovery (as Adone did for the J/ψ)? Seems improbable… • Scenario B: Tevatron design, LHC in time: the Tevatron “confirms” the first signal from the LHC • Scenario C: Tevatron base plan (or killed), LHC whatever: you know the story.
The Top Quark at the Tevatron • The top quark just turned 10! Run I results: • σ(tt) = 5.7±1.6 pb (D0), 6.5±1.4 pb (CDF) (@1.8 TeV) • Mt = 178.0±2.7±3.3 GeV (D0+CDF) • many other measurements – but still imprecise – of Vtb, BR, spin; limits on single production, non-SM production and decays. • From the “discovery” mode the Tevatron soon moved to using top quarks as a perfect pQCD laboratory • As new data pours in, the plan is the same: first, cross section measurements are performed; then the mass, then the kinematics and the search for anomalies, and lastly, the measurement of intrinsic physical properties • That modus operandi allows optimizing the output of physics results as analysis tools get perfected and more sophisticated: • high-Pt lepton identification • b-tagging • precise measurement of the jet energy scale
Production of top at the Tevatron • At the Tevatron, production of ttbar pairs occurs by qq annihilation (85%) or gluon fusion (15%) – proportions inverted WRT the LHC! • The theoretical cross section (NNLO) is 6.1 pb: about 1 in 10^10 collisions, roughly 2 events per hour • Single top production is not irrelevant (3 pb), but its signature is far less characteristic: so far only upper limits on single top production have been obtained
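The two rate figures on the slide follow directly from the cross section. An order-of-magnitude check, assuming a typical instantaneous luminosity of 10^32 cm⁻² s⁻¹ and an inelastic ppbar cross section of ~60 mb (both values are my assumptions, not stated on the slide):

```python
# Order-of-magnitude check of the ttbar production numbers.
PB_TO_CM2 = 1e-36   # 1 picobarn in cm^2
MB_TO_CM2 = 1e-27   # 1 millibarn in cm^2

sigma_tt = 6.1 * PB_TO_CM2      # NNLO ttbar cross section from the slide
lumi = 1e32                     # assumed inst. luminosity, cm^-2 s^-1
sigma_inel = 60.0 * MB_TO_CM2   # assumed inelastic ppbar cross section

rate_per_hour = sigma_tt * lumi * 3600.0   # ttbar events per hour
fraction = sigma_tt / sigma_inel           # ttbar share of all collisions

print(f"ttbar rate: ~{rate_per_hour:.1f} / hour")       # ~2 per hour
print(f"fraction of collisions: ~{fraction:.0e}")       # ~1e-10
```

Both numbers match the slide: a couple of ttbar pairs per hour, one in about 10^10 inelastic collisions.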