
Benchmark for Vertex/Tracker




Presentation Transcript


  1. Benchmark for Vertex/Tracker
  19 Mar. 2005 @LCWS2005
  Y. Sugimoto, KEK

  2. Purpose of Benchmarks for Detector R&D
  • Step 1:
    • See physics output (precision) as a function of detector performance
    • Set detector performance goals
    • Show the justification for detector R&D
    • Quick simulation
  • Step 2:
    • Detector design optimization to achieve the performance goals
    • Full simulation
  • Step 3:
    • Show the overall performance of the detector system
    • Full simulation / quick simulation

  3. Michael Peskin’s List

  4. Preferable benchmark processes
  • More demanding
  • More sensitive to detector performance
  • More direct (less dependent on analysis details or on other sub-detectors)

  5. Performances to be studied
  • Tracker:
    • Momentum resolution: d(1/Pt)
    • Two-track separation → loss of hits → degraded d(1/Pt)
    • Particle ID (dE/dx)
    • V-particle / kink-track reconstruction
  • Vertexing:
    • Impact parameter resolution (b-, c-, τ-tag efficiency)
    • Vertex charge measurement
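As a rough aid to reading the d(1/Pt) figures quoted on the following slides, the constant term d(1/Pt) translates into a relative momentum resolution to first order as sigma(Pt)/Pt = Pt × d(1/Pt). A minimal sketch (the function name and the Pt value in the example are illustrative, not from the talk):

```python
def rel_pt_resolution(d_inv_pt, pt):
    """Relative resolution sigma(pt)/pt from the constant term d(1/pt).

    d_inv_pt: resolution on 1/pt in GeV^-1; pt in GeV.
    To first order, sigma(pt)/pt = pt * sigma(1/pt).
    """
    return pt * d_inv_pt

# e.g. d(1/Pt) = 1e-4 GeV^-1 (the JLC-I figure quoted later) at Pt = 50 GeV:
print(rel_pt_resolution(1e-4, 50.0))  # ~0.005, i.e. 0.5%
```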

  6. Momentum resolution
  • Legacy channel: e+e- → ZH, Z → μ+μ-
  • Requirement: the width of the recoil-mass peak should be dominated by beam effects (δEb, beamstrahlung, and ISR), not by the tracker
  • Just showing peaks is not very convincing
  • We should show the accuracies of the measurements (MH, branching ratios, etc.) as a function of detector performance (by A. Miyamoto)
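The recoil-mass technique behind this benchmark follows from energy-momentum conservation: M_recoil² = s + M_μμ² − 2√s·E_μμ. A minimal sketch; the √s, MZ, and MH values in the closure check are illustrative, not numbers from the talk:

```python
import math

def recoil_mass(sqrt_s, p_mumu):
    """Mass recoiling against the mu+mu- system in e+e- -> ZH, Z -> mu+mu-.

    p_mumu: summed (E, px, py, pz) four-vector of the two muons in GeV,
    in the e+e- centre-of-mass frame (beam crossing angle ignored).
    """
    E, px, py, pz = p_mumu
    m2_mumu = E*E - px*px - py*py - pz*pz           # dimuon invariant mass^2
    m2_rec = sqrt_s**2 + m2_mumu - 2.0 * sqrt_s * E
    return math.sqrt(max(m2_rec, 0.0))

# Closure check with illustrative masses: a Z (91.19 GeV) recoiling against
# a 120 GeV Higgs at sqrt(s) = 250 GeV should give M_recoil = 120 GeV.
mz, mh, rs = 91.19, 120.0, 250.0
ez = (rs**2 + mz**2 - mh**2) / (2.0 * rs)           # Z energy from kinematics
pz = math.sqrt(ez**2 - mz**2)
print(recoil_mass(rs, (ez, 0.0, 0.0, pz)))          # -> 120.0 (up to rounding)
```

In practice the width of this peak reflects both the tracker's d(1/Pt) and the beam effects listed above, which is exactly the comparison the slide calls for.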

  7. Momentum resolution
  • Higgs rare decay (?): e+e- → ZH, H → μ+μ-
  • Br(H → μ+μ-) ~ 3×10^-4
  • O(10) events with L = 500 fb^-1
  • The peak above background may be seen with an excellent tracker?
  (from GLC report)

  8. H → μμ
  • Simulation with JLC-I detector (d(1/Pt) = 1×10^-4 GeV^-1)
  • No further background processes? (by K. Fujii)

  9. Momentum resolution
  • Smuon pair production
  • The smuon and LSP masses are determined from the endpoints of the muon momentum spectrum
  • The accuracy of the masses is determined by the momentum resolution of the tracker
  • Tim Barklow's presentation showed δm has no dependence on tracker performance???
  (from GLC report)
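The endpoint method referred to here is standard two-body decay kinematics: for e+e- → μ̃+μ̃-, μ̃ → μ χ̃1⁰, the muon-energy endpoints are E± = (√s/4)(1 − mχ²/mμ̃²)(1 ± β), with β the smuon velocity. Inverting the two endpoints yields both masses. A sketch with illustrative mass values (150 GeV smuon, 100 GeV LSP — assumed, not from the talk):

```python
import math

def endpoints(sqrt_s, m_smu, m_lsp):
    """Muon-energy endpoints for e+e- -> smuon pairs, smuon -> mu + LSP."""
    beta = math.sqrt(1.0 - 4.0 * m_smu**2 / sqrt_s**2)   # smuon velocity
    e0 = (sqrt_s / 4.0) * (1.0 - m_lsp**2 / m_smu**2)
    return e0 * (1.0 - beta), e0 * (1.0 + beta)

def masses_from_endpoints(sqrt_s, e_min, e_max):
    """Invert the endpoint formulas for the smuon and LSP masses."""
    m_smu = sqrt_s * math.sqrt(e_min * e_max) / (e_min + e_max)
    r = 1.0 - 2.0 * (e_min + e_max) / sqrt_s              # (m_lsp/m_smu)^2
    return m_smu, m_smu * math.sqrt(max(r, 0.0))

# Round trip at sqrt(s) = 500 GeV with the illustrative masses:
e_min, e_max = endpoints(500.0, 150.0, 100.0)
print(masses_from_endpoints(500.0, e_min, e_max))  # -> (150.0, 100.0), up to rounding
```

Smearing the endpoints by the tracker's momentum resolution and repeating the inversion is one direct way to quantify the δm dependence the slide questions.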

  10. V-particle / kink reconstruction
  • GMSB:
    • The slepton (NLSP) can be long-lived and decay inside the tracker volume
    • Kinked track(s) are observed
    • dE/dx measurement helps identify the (low-β) slepton
  • Too exotic?
  • How to parameterize the performance?

  11. Impact parameter resolution
  • e+e- → ZH (Higgs branching-ratio measurement)
  • Flavor tagging usually depends on the analysis (and on other sub-detectors)
  • Comparisons between different values of the detector parameters should be done within the same analysis
  • Any benchmark that is more direct?
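A common way to parameterize impact-parameter resolution for such same-analysis comparisons is σb = a ⊕ b/(p·sin^{3/2}θ), with the two terms added in quadrature. A sketch; the default a and b values below are illustrative placeholder goals, not numbers from this talk:

```python
import math

def sigma_b_um(p_gev, theta, a=5.0, b=10.0):
    """Impact-parameter resolution in microns, parameterized as
    sigma_b = a (+) b / (p * sin^(3/2) theta), summed in quadrature.

    a [um]: asymptotic (point-resolution) term; b [um*GeV]: multiple-
    scattering term; p in GeV, theta = polar angle of the track.
    Default a, b are illustrative, not this talk's numbers.
    """
    ms = b / (p_gev * math.sin(theta) ** 1.5)
    return math.hypot(a, ms)

# A 1 GeV track at normal incidence (theta = 90 degrees):
print(sigma_b_um(1.0, math.pi / 2.0))  # -> sqrt(5^2 + 10^2) ~ 11.18 um
```

Scanning a and b while holding the flavor-tagging analysis fixed is one way to make the comparison the slide asks for.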

  12. Vertex Charge
  • Chargino pair production
  • To identify the W charge and obtain the differential cross section of chargino pair production, the vertex charge (e.g. D+ vs. D-) has to be determined
  • To distinguish D0 from D0-bar, particle ID by dE/dx is important for tagging K+/K-
  • W/Z separation requires good jet energy resolution (PFA)
  [Diagram residue: chargino pair production and decay, χ̃1± → W± χ̃1⁰ with χ̃2⁰/Z in the chain, via s-channel plus a t-channel diagram]
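The vertex-charge measurement mentioned above amounts to summing the charges of the tracks attached to the reconstructed secondary vertex. A minimal sketch (the decay mode in the example is chosen for illustration):

```python
def vertex_charge(track_charges):
    """Vertex charge: sum of the charges of the tracks attached to the
    reconstructed secondary vertex (distinguishes e.g. D+ from D-)."""
    return sum(track_charges)

# D+ -> K- pi+ pi+ : attached track charges (-1, +1, +1)
print(vertex_charge([-1, +1, +1]))  # -> 1, tagging a D+
```

For a neutral D0/D0-bar the vertex charge sums to zero, which is why the slide turns to dE/dx-based K+/K- tagging for that case.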

  13. Chargino pair production
  • Simulation with JLC-I detector (jet energy resolution = 40%/√E)
  • Preliminary (by K. Fujii)

  14. Summary
  • Candidates for benchmark processes for tracker/vertex in Step 1
  [Slide table of candidate processes (including a PFA entry) not recoverable from the transcript]
