
A Fast Hardware Tracker for the ATLAS Trigger System

Mark Neubauer (1), Laura Sartori (2), on behalf of the ATLAS TDAQ Collaboration
(1) University of Illinois at Urbana-Champaign, USA – msn@illinois.edu
(2) Istituto Nazionale di Fisica Nucleare, and Marie Curie Fellow OIF, Italy – sartori@fnal.gov

The Current ATLAS Trigger System

[Figure: the three-level trigger dataflow. LVL1 runs in hardware, with pipeline memories (~2 μs, fixed latency), derandomizers, and readout drivers; LVL2 runs in software on readout buffers (~1-10 ms, variable latency); full-event buffers feed the processor sub-farms, with ~10-100 MB/s output.]

Why Add a Hardware Tracker?

• Controlling trigger rates at high-energy hadron-collider experiments in a way that maintains the physics capabilities is very challenging.
• The trigger system should be flexible and robust, with sufficient redundancy and operating margin.
• Providing high-quality track reconstruction by the start of LVL2 processing is an important element in achieving these goals.
• Physics motivation for high-luminosity (hence large interaction pile-up) LHC running: 3rd-generation fermions will play an important role in LHC physics, given the central role that mass likely plays in electroweak symmetry breaking. Bottom quarks and τ-leptons produce very specific tracking signatures which can be exploited in triggering. At high luminosity, sophisticated algorithms run at LVL2 will be needed to suppress backgrounds, yet in the current system the LVL2 farm is burdened with reconstructing tracks within Regions-of-Interest (ROIs).
• Global tracking will be important at high interaction pile-up (high luminosity):
  • primary vertexing, with subsequent charged-lepton association, for improved isolation to reject hadronic-jet backgrounds;
  • b-decay events with final-state hadrons identified outside of ROIs;
  • additional combined triggers (e.g. lepton + track);
  • track-based missing transverse energy and jets at the trigger level, possibly combined with calorimeter information.
• We propose to enable early rejection of background events, and to free LVL2 execution time for sophisticated algorithms, by moving track reconstruction into a hardware system (FTK) with massively parallel processing that delivers global track reconstruction with nearly offline resolution near the start of LVL2 processing.

Proposed FTK in the Trigger System

• The FTK system receives data from the silicon ReadOut Drivers (RODs); the ROD output is duplicated by a dual-output board.
• Tracks reconstructed by the FTK processors are written into ReadOut Buffers (ROBs) for use at the beginning of LVL2 trigger processing.
• FTK operates in parallel with the silicon-tracker readout following each LVL1 trigger.

The ATLAS Inner Detector (ID)

[Figure: ID layout, roughly 6.2 m long and 2.1 m in diameter.]

• The ID comprises ~5,800 silicon pixel and strip (SCT) detectors and ~1,000 modules of straw tubes.
• FTK uses the pixel and SCT hit information to infer charged-particle tracks.
• FTK processes 8 overlapping φ-regions in parallel.

Pattern Recognition using Associative Memory (AM)

A large bank of pre-computed hit patterns is used for very fast tracking, trading memory for speed. The matching works like bingo: each stored pattern is a card, arriving hits are the called numbers, and a pattern whose coarse hits ("superstrips") are all present declares a candidate road. A minimal sketch follows this section.

FTK Functional Diagram & Architecture

• After pattern recognition, a linearized track fit is performed using the full-resolution silicon hits within each road (second sketch below).
• We are currently evaluating three possible architectures, which balance good tracking efficiency against an acceptable number of track fits per event:
  • Two pattern banks offset by ½ superstrip (SS = binned silicon hits), requiring the road to be found in both. The reduced effective SS width decreases the number of hit combinations, with only a small (×2) increase in pattern-bank size (third sketch below).
  • Tree Search Processor (TSP): a binary search over smaller SSs after the AM stage; the reduced SS size again means fewer combinations to fit.
  • Split architecture: first find SCT-only tracks, then fit with the pixel hits added (à la CDF SVT).
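A minimal sketch, in Python, of the AM-style matching just described. The layer count, superstrip width, and majority-logic threshold below are illustrative assumptions, not FTK design values; a real AM bank compares every stored pattern against each incoming hit in parallel in custom silicon, so this loop only mimics the logic, not the speed.

```python
NUM_LAYERS = 7   # assumed number of silicon layers used for matching
SS_WIDTH = 16    # assumed strips per superstrip

def to_superstrip(layer, strip):
    """Bin a full-resolution hit into its coarse superstrip ID."""
    return (layer, strip // SS_WIDTH)

def find_roads(pattern_bank, hits, min_layers=NUM_LAYERS - 1):
    """Return every pattern ('road') with at least min_layers superstrips hit.

    pattern_bank: list of patterns, each a tuple of per-layer superstrip IDs.
    hits:         list of (layer, strip) full-resolution hits.
    """
    hit_superstrips = {to_superstrip(layer, strip) for layer, strip in hits}
    roads = []
    for pattern in pattern_bank:
        matched = sum(ss in hit_superstrips for ss in pattern)
        if matched >= min_layers:   # majority logic: tolerate one missing layer
            roads.append(pattern)
    return roads

# Toy bank with a single pattern, and hits that land inside it.
bank = [tuple((layer, layer) for layer in range(NUM_LAYERS))]
hits = [(layer, layer * SS_WIDTH + 3) for layer in range(NUM_LAYERS)]
print(find_roads(bank, hits))   # the lone pattern fires as a road
```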
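The linearized fit itself reduces to precomputed linear algebra: within a narrow road, each helix parameter is well approximated as a linear function of the hit coordinates, p = C·x + q, and the fit quality as χ² = |S·x + h|², with the constants computed in advance for each detector region, so the per-road work is a handful of multiply-accumulates. A sketch under those assumptions, with random placeholder values (not real fit constants) and assumed dimensions:

```python
import numpy as np

N_COORDS = 11   # assumed hit coordinates per track (pixel + SCT layers)
N_PARAMS = 5    # helix parameters: d0, z0, phi, cot(theta), curvature

rng = np.random.default_rng(0)
C = rng.normal(size=(N_PARAMS, N_COORDS))             # placeholder fit constants
q = rng.normal(size=N_PARAMS)
S = rng.normal(size=(N_COORDS - N_PARAMS, N_COORDS))  # placeholder constraints
h = rng.normal(size=N_COORDS - N_PARAMS)

def linear_fit(x):
    """Fit one coordinate vector: fixed multiply-adds, no iteration."""
    params = C @ x + q        # helix parameters, p = C x + q
    residuals = S @ x + h     # N_COORDS - N_PARAMS constraint residuals
    chi2 = float(residuals @ residuals)
    return params, chi2

params, chi2 = linear_fit(rng.normal(size=N_COORDS))
print(params, chi2)
```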
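And a toy illustration of the first option, the two pattern banks offset by half a superstrip: requiring a road in both banks pins a hit to the overlap of its two bins, so the effective superstrip width is halved even though each bank keeps the full width. The width below is an arbitrary illustrative value.

```python
SS_WIDTH = 16   # illustrative superstrip width in strips

def bank_a_bin(strip):
    return strip // SS_WIDTH                    # bank A: edges at 0, 16, 32, ...

def bank_b_bin(strip):
    return (strip + SS_WIDTH // 2) // SS_WIDTH  # bank B: edges shifted by 8

# Strips 3 and 12 share bank-A bin 0, but their bank-B bins differ,
# so the (A, B) bin pair localizes each hit to half a superstrip.
for strip in (3, 12):
    print(strip, bank_a_bin(strip), bank_b_bin(strip))   # -> (0, 0) and (0, 1)
```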
Some Expected Performance

[Figures: (1) FTK tracking resolution comparable to that of offline tracking, for simulated muons with pT > 1 GeV and |η| < 2.5; (2) light-quark rejection versus b-tagging efficiency, comparing FTK and offline tracking.]

IEEE09, March 25-31, Orlando, Florida, USA
