
Methods of Experimental Particle Physics

Alexei Safonov

Lecture #18


Today's Lecture

  • Presentations:

    • D0 calorimeter by Jeff

  • Trigger


    Collisions at LHC

    Bunch spacing: 7.5 m (25 ns)
    Bunch crossings: 40 million (4 × 10^7) Hz
    Proton collisions: 1 billion (10^9) Hz
    Parton collisions
    New particles (Higgs, SUSY, ...): from ~1 Hz down to 10 micro-Hz (10^-5 Hz)
    Collision energy: 14 TeV (14 000 × the proton mass)
    Protons fly at 99.999999% of the speed of light
    Bunches per beam: 2808
    Protons per bunch: 100 billion (10^11)

    • Finding anything at a hadron collider requires first getting rid of enormous backgrounds due to QCD multi-jet production

      • Can’t even write all these events to disk; need a trigger (will talk about this later)

    Two counter-rotating 7 TeV proton beams collide head-on.


    Triggering and QCD

    • There is a reason why QCD is called a strong interaction

    • The cross-sections for strong processes are large

    • Most are “soft” QCD events and are not very interesting:

      • We already know about jets, so now they are more of an obstacle

    • Need a device that allows discarding non-interesting events and keeping interesting ones

      • They may look alike: even though jets and leptons usually look different, occasionally a jet can look like a lepton.

      • The initial rate is so large that “occasionally” can still be very frequent in absolute terms.


    Making Discoveries Come Faster

    • Because interesting events are rare, need to make a lot of non-interesting events first

    • Either increase the number of particles per bunch or make more bunches; both create challenges:

      • Many particles per bunch:

        • You end up with a lot of overlapping events (called “pile-up”) within the same crossing, which makes it difficult to disentangle things: lower efficiency and less discrimination between signal and background

      • Many bunches:

        • The short time between collisions means the detector must be able to “recover” from the previous collision very quickly, and you also need to be able to read the detector out very fast

    • Both impose technological limitations on the detector electronics design

      • And also on the computing resources

    • In real life you have to pursue both, keeping a balance between cost and effectiveness (a rough pile-up estimate is sketched below)
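    A rough way to quantify the pile-up trade-off above: the mean number of overlapping interactions per crossing is roughly the inelastic cross-section times the instantaneous luminosity divided by the bunch-crossing rate. A minimal sketch, assuming illustrative nominal-LHC numbers (the cross-section and revolution frequency are assumptions, not taken from the slides):

```python
# Rough pile-up estimate (illustrative values, not exact machine parameters).
SIGMA_INELASTIC_CM2 = 80e-27     # ~80 mb inelastic pp cross-section (assumed)
LUMINOSITY_CM2_S = 1e34          # design instantaneous luminosity (assumed)
BUNCHES_PER_BEAM = 2808          # from the "Collisions at LHC" slide
REVOLUTION_FREQ_HZ = 11245       # LHC revolution frequency (assumed)

interaction_rate_hz = SIGMA_INELASTIC_CM2 * LUMINOSITY_CM2_S    # ~1e9 Hz proton collisions
crossing_rate_hz = BUNCHES_PER_BEAM * REVOLUTION_FREQ_HZ        # ~3.2e7 colliding crossings/s
mean_pileup = interaction_rate_hz / crossing_rate_hz

print(f"Interaction rate ~ {interaction_rate_hz:.1e} Hz, mean pile-up ~ {mean_pileup:.0f}")
```

    Doubling the luminosity by adding particles per bunch doubles the pile-up, while adding bunches instead raises the crossing rate and keeps pile-up lower, which is exactly the trade-off listed above.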


    Many Bunches

    • The LHC time between collisions is 25 ns:

      • The detector needs to recover from the previous interaction and be ready, otherwise you get “dead-time”

      • Pressure on the readout:

        • 1 MB of data every 25 ns requires a bandwidth of 320 terabit per second, which is an insane number for current technologies
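    The quoted bandwidth follows directly from the event size and the bunch spacing; a quick check of the arithmetic, assuming (as above) 1 MB per crossing:

```python
# Readout bandwidth if a ~1 MB event had to be shipped out every 25 ns.
EVENT_SIZE_BYTES = 1e6        # ~1 MB per event (from the slide)
BUNCH_SPACING_S = 25e-9       # 25 ns between crossings

bandwidth_bit_s = EVENT_SIZE_BYTES * 8 / BUNCH_SPACING_S
print(f"{bandwidth_bit_s:.2e} bit/s = {bandwidth_bit_s / 1e12:.0f} terabit per second")
# -> 3.20e14 bit/s, i.e. ~320 terabit per second
```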


    Many Overlapping Events

    • Very high occupancies of hits and particles relative to the detector granularity

    • Many detector design and performance challenges

      • Even if your detector can “operate”, if the data quality is poor you won’t be able to do much at the analysis stage


    Many Overlapping Events

    • Very high occupancies of hits and particles relative to the detector granularity

      • Many challenges:

        • Nice and inexpensive detectors, e.g. gas chambers, become inefficient due to their long drift times

        • Calorimeter measurements become useless at low granularity, as deposits sit on top of each other; at high granularity it is still a problem, as you never know which interaction a specific deposit came from

        • The only measurements relatively immune to this come from tracking, since tracking lets you distinguish which track came from which vertex (see the sketch after this list)

    • But you can’t do a physics analysis based on tracks only

      • Or can you?
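    A toy illustration of why tracking is relatively immune to pile-up: each track points back to a position along the beamline, so it can be attributed to a specific collision vertex, which calorimeter deposits cannot. This is only a sketch with made-up numbers, not an actual reconstruction algorithm:

```python
# Toy track-to-vertex association by closest approach in z (illustrative only).
vertices_z = [-3.2, 0.1, 2.7, 5.5]           # reconstructed vertex z positions [cm] (made up)
tracks_z0 = [0.08, 2.65, -3.1, 5.4, 0.15]    # track z at the beamline [cm] (made up)

for z0 in tracks_z0:
    # Assign each track to the vertex closest in z.
    i = min(range(len(vertices_z)), key=lambda k: abs(vertices_z[k] - z0))
    print(f"track at z0 = {z0:+.2f} cm -> vertex {i} (z = {vertices_z[i]:+.2f} cm)")
```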


    Detectors and Pile-Up

    • An illustration of overlapping signals

      • Can get rid of it by using very “fast” and finely segmented detectors

        • But the cost will skyrocket


    Triggering Basics

    • Two paths:

      • Recognize non-interesting events and discard them

        • Not very practical: there are many ways in which non-interesting events can look, and it is hard to identify all the possible modes

          • If you don’t, whatever is left can still be way too much

      • Recognize interesting events and keep them

        • More practical as you can build more sophisticated requirements targeting specific topologies

          • Build many “triggers” going after specific types of events, and discard events that are not flagged by any of the triggers (see the sketch after this list)

          • The more exclusive you go, the less likely it is for a background event to pass your requirements

        • But also dangerous: you may miss a discovery

          • In this approach you must know what you are looking for

          • One has to strike a balance between exclusive and inclusive so as not to miss something that could be important
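    The "keep what looks interesting" strategy amounts to an OR of many specific selections: an event is written out if at least one trigger path fires. A minimal sketch; the path names, event variables, and thresholds below are invented for illustration:

```python
# Hypothetical trigger paths: each is a predicate on a simplified event record.
trigger_paths = {
    "single_electron_pt30": lambda ev: any(pt > 30 for pt in ev.get("electron_pt", [])),
    "triple_muon_pt5":      lambda ev: sum(pt > 5 for pt in ev.get("muon_pt", [])) >= 3,
    "missing_et_100":       lambda ev: ev.get("missing_et", 0) > 100,
}

def fired_paths(event):
    """Return the paths that fired; the event is kept only if this list is non-empty."""
    return [name for name, fires in trigger_paths.items() if fires(event)]

event = {"electron_pt": [12, 41], "muon_pt": [7, 6, 9], "missing_et": 35}
print(fired_paths(event))   # -> ['single_electron_pt30', 'triple_muon_pt5']
```

    The more exclusive each path is, the lower its background rate, but, as noted above, an event that satisfies none of the paths is lost forever.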


    Boundary Conditions

    • On the input:

      • Bunch Crossing rate: 40 MHz

      • Interaction rate: 1-10 GHz (depends on how many overlapping events)

      • Data rate: hundreds of terabits per second

    • On the output:

      • Need to write events on disk so that one can analyze the data

      • With some reasonable assumptions on how much you can spend, the likely writing rate is 100-300 crossings per second (100-300 Hz)

        • Multiply by the 1 MB event size to get a few gigabits per second (see the sketch below)

    What’s in between?

    The trigger!
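    Putting the input and output numbers together shows the size of the job the trigger has to do. A quick sketch using the rates quoted above (the 1 MB event size is the same assumption as before):

```python
# Required rejection factor and output bandwidth from the boundary conditions above.
INPUT_CROSSING_RATE_HZ = 40e6    # 40 MHz bunch crossings
OUTPUT_RATE_HZ = 300             # upper end of the 100-300 Hz writing rate
EVENT_SIZE_BYTES = 1e6           # ~1 MB per event (assumed)

rejection = INPUT_CROSSING_RATE_HZ / OUTPUT_RATE_HZ
output_gbit_s = OUTPUT_RATE_HZ * EVENT_SIZE_BYTES * 8 / 1e9
print(f"Keep only ~1 in {rejection:.0f} crossings")        # ~1 in 130,000
print(f"Output bandwidth ~ {output_gbit_s:.1f} Gbit/s")    # ~2.4 Gbit/s
```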


    How to Build a Trigger

    • Need to bring the rate closer to something manageable but can’t lose data:

      • Solution is to delay full readout until you know the event is interesting

      • Make “pipelines” in the front-end electronics that hold the data, and “go parallel” (a latency-budget sketch follows this list)

        • Can do if the electronics is very segmented (each piece serves some small portion of the coverage of a specific detector system)

          • Like one muon chamber

        • Unless you go overboard segmenting your readout (which would be very expensive), the rates are still too high for any kind of commercial computer, so you need to use fast electronics

      • Can use a fraction of the data (say, reduce the granularity to reduce the rate) or build a more elaborate electronics system
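    The front-end pipeline depth sets the time budget for the first trigger decision: an event can only wait in the buffer for as many bunch crossings as the pipeline is deep. A minimal sketch; the 128-crossing depth is an assumed, plausible value, not a number from the slides:

```python
# Level-1 latency budget implied by the front-end pipeline depth (illustrative).
BUNCH_SPACING_S = 25e-9   # 25 ns between crossings
PIPELINE_DEPTH = 128      # crossings the front-end buffer can hold (assumed)

latency_budget_us = PIPELINE_DEPTH * BUNCH_SPACING_S * 1e6
print(f"Level-1 must accept or reject within ~{latency_budget_us:.1f} microseconds")  # ~3.2 us
```

    A deeper pipeline buys more decision time but costs more buffer memory on every channel, which is the cost balance discussed below under Algorithmic Considerations.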


    Trigger Designs

    • Conventional trigger systems use 3 levels:

      • Ultra fast electronics (ASICs/FPGAs) and fast connections

      • Slower but smarter electronics (or super-fast processors)

      • Conventional computer farm




    Algorithmic Considerations

    • The idea is always the same:

      • Do something fast and dirty first to quickly recognize “junk” (and stop processing the event as soon as you do)

      • More intelligent (and thus slower) algorithms go later

        • The rate is already reduced by the “fast and dirty” stage, so you can spend more time per event without creating a bottleneck that leads to dead-time (a minimal cascade is sketched after this list)

      • The deeper the storage pipelines, the more time you have to make a decision

        • But your system becomes more and more expensive

          • Need to strike a balance
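    The "fast and dirty first, smarter later" idea is simply a cascade of filters, each cheaper than the stage it feeds. A minimal sketch; the stage names, variables, and thresholds are invented for illustration:

```python
# Illustrative filter cascade: cheap stages run first and a rejection stops all further work.
def coarse_energy_cut(ev):    return ev["calo_energy_sum"] > 50      # fast and dirty
def refined_objects(ev):      return ev["n_good_objects"] >= 1       # slower, smarter
def full_reconstruction(ev):  return ev["offline_like_score"] > 0.9  # slowest, most precise

STAGES = [coarse_energy_cut, refined_objects, full_reconstruction]

def keep(event):
    for stage in STAGES:
        if not stage(event):
            return False   # rejected early: no time spent on the expensive stages
    return True            # survived every stage: write to disk

event = {"calo_energy_sum": 72, "n_good_objects": 0, "offline_like_score": 0.3}
print(keep(event))   # -> False, rejected at the second stage without running the third
```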


    Parallelization

    • Tree-like structure of decision making:


    Level-1

    • CMS and ATLAS do not have tracking in Level-1

      • Nothing to be proud of: we can more or less survive now, but it won’t last long. The only reason we do it is that we can’t handle the data rates of the current trackers at Level-1



    CMS Level-1 and DAQ

    • Current system design:


    General HLT Sequence:

    • Conventionally it is broken into L2, L2.5, and L3:

      • L2: repeat L1 algorithms at full segmentation

        • Fast, and can eliminate the events that are easy to eliminate

      • L2.5: add only limited tracking information:

        • Pixel detector hits or (later within L2.5) tracks and vertices

        • Not as fast, but allows large rejections (although with limited tracking capabilities: reduced resolution and potential efficiency losses)

      • L3: add full tracking and particle flow

        • Slow, but hopefully the number of incoming events is already small enough for it to work


    Trigger Table

    • A typical experiment has hundreds of what are called trigger paths

      • Each path is a sequence of requirements at Level-1 and HLT

        • Essentially you are looking for a specific object (a very energetic electron) or a topology (three muons with high pT), but you can use earlier paths as building blocks for your trigger path

        • Each path has its “owners”, who maintain it and continuously improve the trigger

        • An analysis usually uses one or a few of these trigger paths

      • A special group usually deals with allocating available bandwidth among trigger paths

        • Reviews proposed triggers and their physics motivation, and suggests modifications (say, to improve background rejection or to make a trigger usable for more than one purpose)

        • Allocates available bandwidth to specific paths based on physics priorities

        • The result of such an allocation is a “Trigger Table” (a toy example is sketched after this list)

          • Dynamic: needs change and different triggers’ rates grow at different speeds, so frequent rebalancing is needed
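    Conceptually, a trigger table is just the list of paths with the rates they are allowed to contribute, which the coordination group has to fit inside the total output budget. A toy sketch; the path names and rates are invented:

```python
# Toy trigger table: check the allocated paths against the output budget (illustrative).
OUTPUT_BUDGET_HZ = 300.0           # total writing rate available (from an earlier slide)

trigger_table_hz = {               # path name -> expected rate in Hz (made up)
    "single_electron_pt30": 110.0,
    "triple_muon_pt5":       15.0,
    "dijet_mass_1000":       60.0,
    "missing_et_100":        90.0,
}

total = sum(trigger_table_hz.values())
print(f"Requested {total:.0f} Hz of a {OUTPUT_BUDGET_HZ:.0f} Hz budget")
if total > OUTPUT_BUDGET_HZ:
    print("Over budget: tighten thresholds or rebalance the allocation")
```

    In reality paths overlap, so their rates do not simply add, and the rates themselves drift as conditions change, which is why the table needs the frequent rebalancing mentioned above.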


    Data Storage

    • Once the trigger has made a decision to keep the event, the data is written on disk

      • The data is sent in several streams based on the type of objects and physics

        • This way your analysis only needs to filter events from one stream instead of looping over ten times more events (see the sketch after this list)

    • Then the data gets manipulated before it becomes available for analysis

      • Within a few days these events move around

        • From Tier-0 to Tier-1 and further as full event reconstruction is performed

          • Various “standard” formats:

            • Some information that is rarely used gets dropped to make events smaller in size, but a full event record is kept somewhere (at one of the Tier-1 centers)

        • Eventually the data in one of the lighter formats moves to Tier-2 centers, where it can be accessed by analyzers
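    The streaming idea can be pictured as routing each accepted event into one or more output datasets according to which trigger paths fired. A toy sketch; the stream and path names are invented:

```python
# Toy routing of accepted events into streams based on the trigger paths that fired.
STREAM_DEFINITIONS = {                    # stream -> trigger paths feeding it (made up)
    "SingleElectron": ["single_electron_pt30"],
    "MultiMuon":      ["triple_muon_pt5"],
    "MET":            ["missing_et_100"],
}

def streams_for(fired):
    """Return the streams an event is written to, given the trigger paths that fired."""
    return [s for s, paths in STREAM_DEFINITIONS.items() if any(p in fired for p in paths)]

print(streams_for(["single_electron_pt30", "missing_et_100"]))  # -> ['SingleElectron', 'MET']
```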


    Next time

    • Monte Carlo event generators

    • Detector emulation

    • Many of the slides in this lecture were borrowed from one of the lectures on triggers by Wesley Smith (UW-Madison)

