
High Level Triggering

Fred Wickens



High Level Triggering (HLT)

  • Introduction to triggering and HLT systems

    • What is Triggering

    • What is High Level Triggering

    • Why do we need it

  • Case study of ATLAS HLT (+ some comparisons with other experiments)

  • Summary



Simple trigger for spark chamber set-up



Dead time

  • Experiments frozen from trigger to end of readout

    • Trigger rate with no deadtime = R per sec.

    • Dead time / trigger = t sec.

    • For 1 second of live time, elapsed time = 1 + Rt seconds

    • Live time fraction = 1/(1 + Rt)

    • Real trigger rate = R/(1 + Rt) per sec.
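
As a rough check of the formulas above, a minimal sketch in Python (the values of R and t are illustrative assumptions, not numbers from the slides):

    # Dead-time arithmetic from the bullets above; R and t are invented for illustration.
    R = 1000.0      # trigger rate with no dead time [Hz]
    t = 500e-6      # dead time per trigger [s]

    live_fraction = 1.0 / (1.0 + R * t)   # fraction of time the experiment is live
    real_rate = R / (1.0 + R * t)         # triggers actually recorded per second

    print(f"live-time fraction = {live_fraction:.2%}")   # ~66.7%
    print(f"real trigger rate  = {real_rate:.0f} Hz")    # ~667 Hz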


Trigger systems 1980’s and 90’s

  • bigger experiments → more data per event

  • higher luminosities → more triggers per second

    • both led to increased fractional deadtime

  • use multi-level triggers to reduce dead-time

    • first level - fast detectors, fast algorithms

    • higher levels can use data from slower detectors and more complex algorithms to obtain better event selection/background rejection



Trigger systems 1990’s and 2000’s

  • Dead-time was not the only problem

  • Experiments focussed on rarer processes

    • Need large statistics of these rare events

    • But increasingly difficult to select the interesting events

    • DAQ system (and off-line analysis capability) under increasing strain - limiting useful event statistics

      • This is a major issue at hadron colliders, but will also be significant at ILC

  • Use the High Level Trigger to reduce the requirements for

    • The DAQ system

    • Off-line data storage and off-line analysis


Summary of ATLAS Data Flow Rates

  • From detectors: > 10^14 Bytes/sec

  • After Level-1 accept: ~10^11 Bytes/sec

  • Into event builder: ~10^9 Bytes/sec

  • Onto permanent storage: ~10^8 Bytes/sec (~10^15 Bytes/year)



TDAQ Comparisons



The evolution of DAQ systems



Typical architecture 2000+


Level 1 (Sometimes called Level-0 - LHCb)

  • Time: one to a few microseconds

  • Standard electronics modules for small systems

  • Dedicated logic for larger systems

    • ASIC - Application Specific Integrated Circuits

    • FPGA - Field Programmable Gate Arrays

  • Reduced granularity and precision

    • calorimeter energy sums

    • tracking by masks

  • Event data stored in front-end electronics (at the LHC a pipeline is used, as the time between collisions is shorter than the Level-1 decision time)



Level 2

  • 1) few microseconds (10-100)

    • hardwired, fixed algorithm, adjustable parameters

  • 2) few milliseconds (1-100)

    • Dedicated microprocessors, adjustable algorithm

      • 3-D, fine grain calorimetry

      • tracking, matching

      • Topology

    • Different sub-detectors handled in parallel

      • Primitives from each detector may be combined in a global trigger processor or passed to next level



Level 2 - cont’d

  • 3) few milliseconds (10-100) - 2008

    • Processor farm with Linux PC’s

    • Partial events received with high-speed network

    • Specialised algorithms

    • Each event allocated to a single processor, large farm of processors to handle rate

    • If separate Level 2, data from each event stored in many parallel buffers (each dedicated to a small part of the detector)



Level 3

  • millisecs to seconds

  • processor farm

    • microprocessors/emulators/workstations

    • Now standard server PC’s

  • full or partial event reconstruction

    • after event building (collection of all data from all detectors)

  • Each event allocated to a single processor, large farm of processors to handle rate



Summary of Introduction

  • For many physics analyses, the aim is to obtain the highest possible statistics for a given process

    • We cannot afford to handle or store all of the data a detector can produce!

  • What does the trigger do

    • select the most interesting events from the myriad of events seen

      • i.e. obtain better use of the limited output bandwidth

      • Throw away less interesting events

      • Keep all of the good events (or as many as possible)

    • But note must get it right - any good events thrown away are lost for ever!

  • High level trigger allows much more complex selection algorithms


Case study of the ATLAS HLT system

Concentrate on issues relevant for ATLAS (CMS has very similar issues), but try to address some more general points



Starting points for any HLT system

  • physics programme for the experiment

    • what are you trying to measure

  • accelerator parameters

    • what rates and structures

  • detector and trigger performance

    • what data is available

    • what trigger resources do we have to use it


Physics at the LHC

Interesting events are buried in a sea of soft interactions

B physics

High energy QCD jet production

top physics

Higgs production



The LHC and ATLAS/CMS

  • LHC has

    • design luminosity 10^34 cm^-2 s^-1 (in 2008 from 10^30 to 10^32?)

    • bunch separation 25 ns (bunch length ~1 ns)

  • This results in

    • ~ 23 interactions / bunch crossing

      • ~ 80 charged particles (mainly soft pions) / interaction

      • ~2000 charged particles / bunch crossing

  • Total interaction rate ~10^9 sec^-1

    • b-physics: fraction ~10^-3 → ~10^6 sec^-1

    • t-physics: fraction ~10^-8 → ~10 sec^-1

    • Higgs: fraction ~10^-11 → ~10^-2 sec^-1
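
A back-of-the-envelope sketch of how the rates above follow from the total interaction rate and the quoted fractions (numbers taken from this slide):

    # rate = total interaction rate x fraction
    total_rate = 1e9   # interactions per second
    fractions = {"b-physics": 1e-3, "t-physics": 1e-8, "Higgs": 1e-11}

    for process, frac in fractions.items():
        print(f"{process:10s}: {total_rate * frac:g} per second")
    # b-physics: 1e+06, t-physics: 10, Higgs: 0.01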



Physics programme

  • Higgs signal extraction important but very difficult

  • Also there is lots of other interesting physics

    • B physics and CP violation

    • quarks, gluons and QCD

    • top quarks

    • SUSY

    • ‘new’ physics

  • Programme will evolve with: luminosity, HLT capacity and understanding of the detector

    • low luminosity (2008 - 2009)

      • high PT programme (Higgs etc.)

      • b-physics programme (CP measurements)

    • high luminosity (2010?)

      • high PT programme (Higgs etc.)

      • searches for new physics



Trigger strategy at LHC

  • To avoid being overwhelmed use signatures with small backgrounds

    • Leptons

    • High mass resonances

    • Heavy quarks

  • The trigger selection looks for events with:

    • Isolated leptons and photons,

    • τ-, central- and forward-jets

    • Events with high ET

    • Events with missing ET



Example Physics signatures


ARCHITECTURE

Trigger / DAQ: three logical levels, hierarchical data-flow

  • LVL1 - Fastest: only Calo and Mu, hardwired; on-detector electronics (pipelines); ~2.5 μs; input 40 MHz, ~1 PB/s (equivalent)

  • LVL2 - Local: LVL1 refinement + track association; event fragments buffered in parallel; ~40 ms

  • LVL3 - Full event: “offline” analysis; full event in processor farm; ~4 sec; output ~200 Hz (~300 MB/s) to physics



Selected (inclusive) signatures



Trigger design - Level-1

  • Level-1

    • sets the context for the HLT

    • reduces triggers to ~75 kHz

    • has a very short time budget

      • few micro-sec (ATLAS/CMS ~2.5 - much used in cable delays!)

  • Detectors used must provide data very promptly, must be simple to analyse

    • Coarse grain data from calorimeters

    • Fast parts of muon spectrometer (i.e. not precision chambers)

    • NOT precision trackers - too slow, too complex

    • (LHCb does use some simple tracking data from their VELO detector to veto events with more than 1 primary vertex)

    • (CMS plans track trigger for sLHC - L1 time => ~6 micro-s)

    • Proposed FP420 detectors provide data too late



ATLAS Level-1 trigger system

  • Calorimeter and muon

    • trigger on inclusive signatures

      • muons;

      • em/tau/jet calo clusters; missing and sum ET

  • Hardware trigger

    • Programmable thresholds

    • Selection based on multiplicities and thresholds



ATLAS em cluster trigger algorithm

“Sliding window” algorithm repeated for each of ~4000 cells
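
The slide does not spell the algorithm out; below is a minimal toy sketch of the sliding-window idea over a coarse tower map (the window size, threshold and local-maximum condition are assumptions for illustration, not the actual Level-1 firmware logic):

    import numpy as np

    def sliding_window_clusters(et, window=2, threshold=5.0):
        """Toy Level-1 em cluster search: keep 2x2 windows whose ET sum passes a
        threshold and is a local maximum among neighbouring windows."""
        sums = np.zeros((et.shape[0] - window + 1, et.shape[1] - window + 1))
        for i in range(sums.shape[0]):
            for j in range(sums.shape[1]):
                sums[i, j] = et[i:i + window, j:j + window].sum()
        clusters = []
        for i in range(sums.shape[0]):
            for j in range(sums.shape[1]):
                neighbourhood = sums[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
                if sums[i, j] > threshold and sums[i, j] >= neighbourhood.max():
                    clusters.append((i, j, sums[i, j]))
        return clusters

    towers = np.random.exponential(0.5, size=(16, 16))   # fake tower ET map [GeV]
    towers[7:9, 7:9] += 10.0                              # inject one "electron"
    print(sliding_window_clusters(towers))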



ATLAS Level 1 Muon trigger

RPC - Trigger Chambers - TGC

Measure muon momentum with very simple tracking in a few planes of trigger chambers

RPC: Resistive Plate Chambers; TGC: Thin Gap Chambers; MDT: Monitored Drift Tubes


Level-1 Selection

  • The Level-1 trigger is an “or” of a large number of inclusive signals, set to match the current physics priorities and beam conditions

  • Precision of cuts at Level-1 is generally limited

  • Adjust the overall Level-1 accept rate (and the relative frequency of different triggers) by

    • Adjusting thresholds

    • Pre-scaling higher-rate triggers (e.g. only accept every 10th trigger of a particular type)

      • Can be used to include a low rate of calibration events

  • Menu can be changed at the start of run

    • Pre-scale factors may change during the course of a run
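
A minimal sketch of how a pre-scale could be applied (the counter-based scheme and the factor of 10 are illustrative assumptions, not the actual Level-1 implementation):

    class Prescaler:
        """Accept only every N-th trigger of a given type (toy pre-scale)."""
        def __init__(self, factor):
            self.factor = factor
            self.count = 0

        def accept(self):
            self.count += 1
            return self.count % self.factor == 0

    minbias_prescale = Prescaler(10)   # keep 1 trigger in 10 of this type
    accepted = sum(minbias_prescale.accept() for _ in range(1000))
    print(accepted)                    # -> 100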



Example Level-1 Menu for 2x10^33


Trigger design - Level-2

  • Level-2 reduces triggers to ~2 kHz

    • Note CMS does not have a physically separate Level-2 trigger, but the HLT processors include a first stage of Level-2 algorithms

  • Level-2 trigger has a short time budget

    • ATLAS ~40 milli-sec average

      • Note: for Level-1 the time budget is a hard limit for every event; for the High Level Trigger it is the average that matters, so some events can take several times the average, provided they are a minority

  • Full detector data is available, but to minimise resources needed:

    • Limit the data accessed

    • Only unpack detector data when it is needed

    • Use information from Level-1 to guide the process

    • Analysis proceeds in steps with possibility to reject event after each step

    • Use custom algorithms


Regions of Interest

  • The Level-1 selection is dominated by local signatures (i.e. within a Region of Interest - RoI)

    • Based on coarse granularity data from calo and mu only

  • Typically, there are 1-2 RoI/event

  • ATLAS uses RoIs to reduce the network bandwidth and processing power required



Trigger design - Level-2 - cont’d

  • Processing scheme

    • extract features from sub-detector data in each RoI

    • combine features from one RoI into object

    • combine objects to test event topology

  • Precision of Level-2 cuts

    • Emphasis is on very fast algorithms with reasonable accuracy

      • Do not include many corrections which may be applied off-line

    • Calibrations and alignment available for the trigger are not as precise as those available off-line


ARCHITECTURE (Trigger / DAQ data flow)

  • Calo, Mu trigger chambers and other detectors: 40 MHz, ~1 PB/s; FE pipelines hold data for the 2.5 μs LVL1 latency

  • LVL1 accept at 75 kHz; Read-Out Drivers (RODs) send data over Read-Out Links (120 GB/s) to Read-Out Buffers (ROBs) in the Read-Out Sub-systems (ROSs)

  • RoI Builder (ROIB) and LVL2 supervisors (L2SV) steer the LVL2 processors (L2Ps), which request RoI data (1-2% of the event, ~2 GB/s) over the LVL2 network (L2N); ~10 ms per event; LVL2 accept at ~2 kHz

  • Event Builder (EB): ~3 GB/s

  • Event Filter (EF): processors (EFPs) on the EF network (EFN); ~1 sec per event; output ~200 Hz, ~300 MB/s to storage



CMS Event Building

  • CMS perform Event Building after Level-1

  • This simplifies the architecture, but places much higher demand on technology:

    • Network traffic ~100 GB/s

      • Use Myrinet instead of GbE for the EB network

      • Plan a number of independent slices with barrel shifter to switch to a new slice at each event

    • Time will tell which philosophy is better


Example for Two electron trigger

LVL1 triggers on two isolated e.m. clusters with pT > 20 GeV (possible signature: Z→ee)

Signature evolution in time:

  • Level-1 seed: EM20i + EM20i

  • Step 1 (cluster shape): ecand + ecand

  • Step 2 (track finding): e + e

  • Step 3 (pT > 30 GeV): e30 + e30

  • Step 4 (isolation): e30i + e30i

HLT Strategy:

  • Validate step-by-step

  • Check intermediate signatures

  • Reject as early as possible

Sequential/modular approach facilitates early rejection
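
A minimal sketch of this step-wise validation with early rejection (the step names mirror the two-electron chain above; the per-candidate flags and cuts are placeholders, not the ATLAS algorithms):

    def run_chain(event, steps):
        """Refine the Level-1 candidates step by step; reject as early as possible."""
        candidates = event["lvl1_rois"]                # start from the Level-1 seeds
        for name, passes in steps:
            candidates = [c for c in candidates if passes(c)]
            if len(candidates) < 2:                    # need two electrons at every step
                return False, name                     # rejected at this step
        return True, "accepted"

    steps = [
        ("cluster shape", lambda c: c["shower_shape_ok"]),
        ("track finding", lambda c: c["has_track"]),
        ("pt > 30 GeV",   lambda c: c["pt"] > 30.0),
        ("isolation",     lambda c: c["isolated"]),
    ]

    event = {"lvl1_rois": [
        {"shower_shape_ok": True, "has_track": True, "pt": 42.0, "isolated": True},
        {"shower_shape_ok": True, "has_track": True, "pt": 35.0, "isolated": True},
    ]}
    print(run_chain(event, steps))                     # -> (True, 'accepted')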


Trigger design - Event Filter / Level-3

  • The Event Filter reduces triggers to ~200 Hz

  • Event Filter budget ~ 4 sec average

  • Full event detector data is available, but to minimise resources needed:

    • Only unpack detector data when it is needed

    • Use information from Level-2 to guide the process

    • Analysis proceeds in steps with possibility to reject event after each step

    • Use optimised off-line algorithms


Electron slice at the EF

  • TrigCaloRec (wrapper of CaloRec) → EFCaloHypo

  • EF tracking (wrapper of newTracking) → EFTrackHypo

  • TrigEgammaRec (wrapper of EgammaRec) matches electromagnetic clusters with tracks and builds egamma objects → EFEgammaHypo



HLT Processing at LHCb


Trigger design - HLT strategy

  • Level 2

    • confirm Level 1, some inclusive, some semi-inclusive, some simple topology triggers, vertex reconstruction (e.g. two particle mass cuts to select Zs)

  • Level 3

    • confirm Level 2, more refined topology selection,near off-line code



Example HLT Menu for 2x10^33



Example B-physics Menu for 10^33

LVL1 :

  • MU6 rate 24kHz (note there are large uncertainties in cross-section)

  • In case of larger rates use MU8 => ~1/2 × rate

  • 2MU6

    LVL2:

  • Run muFast in LVL1 RoI ~ 9kHz

  • Run ID recon. in muFast RoI mu6 (combined muon & ID) ~ 5kHz

  • Run TrigDiMuon seeded by mu6 RoI (or MU6)

  • Make exclusive and semi-inclusive selections using loose cuts

    • B(mumu), B(mumu)X, J/psi(mumu)

  • Run IDSCAN in Jet RoI, make selection for Ds(PhiPi)

    EF:

  • Redo muon reconstruction in LVL2 (LVL1) RoI

  • Redo track reconstruction in Jet RoI

  • Selections for B(mumu) B(mumuK*) B(mumuPhi), BsDsPhiPi etc.



LHCb Trigger Menu


Matching problem

[Diagram: phase-space regions of the physics channel, the off-line selection and the on-line (trigger) selection, surrounded by background]



Matching problem (cont.)

  • ideally

    • off-line algorithms select phase space which shrink-wraps the physics channel

    • trigger algorithms shrink-wrap the off-line selection

  • in practice, this doesn’t happen

    • need to match the off-line algorithm selection

      • For this reason many trigger studies quote trigger efficiency with respect to events which pass the off-line selection

    • BUT off-line can change algorithm, re-process and recalibrate at a later stage

  • SO, make sure on-line algorithm selection is well known, controlled and monitored
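
A small sketch of the bookkeeping behind "efficiency with respect to the off-line selection" (the event flags are invented for illustration):

    # Trigger efficiency quoted w.r.t. events passing the off-line selection.
    events = [
        {"passes_offline": True,  "passes_trigger": True},
        {"passes_offline": True,  "passes_trigger": False},
        {"passes_offline": False, "passes_trigger": True},
        {"passes_offline": True,  "passes_trigger": True},
    ]

    offline = [e for e in events if e["passes_offline"]]
    both = [e for e in offline if e["passes_trigger"]]
    print(f"trigger efficiency w.r.t. off-line selection = {len(both) / len(offline):.0%}")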



Selection and rejection

  • as selection criteria are tightened

    • background rejection improves

    • BUT event selection efficiency decreases


Selection and rejection

  • Example of an ATLAS Event Filter (i.e. Level-3) study of the effectiveness of various discriminants used to select 25 GeV electrons from a background of dijets



Other issues for the Trigger

  • Efficiency and Monitoring

    • In general need high trigger efficiency

    • Also for many analyses need a well known efficiency

      • Monitor efficiency by various means

        • Overlapping triggers

        • Pre-scaled samples of triggers in tagging mode (pass-through)

  • Final detector calibration and alignment constants are not available immediately - keep them as up-to-date as possible and allow for the lower precision in the trigger cuts when defining trigger menus and in subsequent analyses

  • Code used in the trigger needs to be very robust - minimal memory leaks, low crash rate, fast

  • Beam conditions and HLT resources will evolve over several years (for both ATLAS and CMS)

    • In 2008 luminosity low, but also HLT capacity will be < 50% of full system (funding constraints)



Summary

  • High-level triggers allow complex selection procedures to be applied as the data is taken

    • Thus allow large numbers of events to be accumulated, even in presence of very large backgrounds

    • Especially important at LHC - but significant at most accelerators

  • The trigger stages - in the ATLAS example

    • Level 1 uses inclusive signatures

      • muons; em/tau/jet calo clusters; missing and sum ET

    • Level 2 refines Level 1 selection, adds simple topology triggers, vertex reconstruction, etc

    • Level 3 refines Level 2 adds more refined topology selection

  • Trigger menus need to be defined, taking into account:

    • Physics priorities, beam conditions, HLT resources

      • Include items for monitoring trigger efficiency and calibration

  • Must get it right - any events thrown away are lost for ever!



Additional Foils



The evolution of DAQ systems



ATLAS Detector



ATLAS event - tracker end-view



Trigger functional design

  • Level 1: Input 40 MHz, Accept 75 kHz, Latency 2.5 μs

    • Inclusive triggers based on fast detectors

    • Muon, electron/photon, jet, sum and missing ET triggers

    • Coarse(r) granularity, low(er) resolution data

    • Special purpose hardware (FPGAs, ASICs)

  • Level 2: Input 75 (100) kHz, Accept O(1) kHz, Latency ~10 ms

    • Confirm Level 1 and add track information

    • Mainly inclusive but some simple event topology triggers

    • Full granularity and resolution available

    • Farm of commercial processors with special algorithms

  • Event Filter: Input O(1) kHz, Accept O(100) Hz, Latency ~secs

    • Full event reconstruction

    • Confirm Level 2; topology triggers

    • Farm of commercial processors using near off-line code


ATLAS Trigger / DAQ Data Flow

  • UX15 (detector cavern): ATLAS detector; Read-Out Drivers (RODs) on dedicated links; data of events accepted by the first-level trigger pushed at ≤ 100 kHz as 1600 fragments of ~1 kByte each over 1600 Read-Out Links

  • USA15: first-level trigger, RoI Builder and Timing Trigger Control (TTC); ~150 PCs (VME) hosting the Read-Out Subsystems (ROSs)

  • SDX1 (surface): dual-socket server PCs (~30 / ~100 / ~500 / ~1600 nodes in the various farms) - LVL2 supervisor and LVL2 farm (second-level trigger), DataFlow Manager, Event Builder SubFarm Inputs (SFIs), Event Filter (EF), SubFarm Outputs (SFOs) with local storage, and pROS (stores LVL2 output); Gigabit Ethernet network switches carry Regions of Interest, event data requests, requested event data and delete commands

  • Event data pulled: partial events @ ≤ 100 kHz (LVL2), full events @ ~3 kHz (event building); event rate to data storage at the CERN computer centre ~200 Hz



Event’s Eye View - step-1

  • At each beam crossing latch data into detector front end

  • After processing, data put into many parallel pipelines - moves along the pipeline at every bunch crossing, falls out the far end after 2.5 microsecs

  • Also send calo + mu trigger data to Level-1
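
A toy model of such a front-end pipeline (the depth of 100 bunch crossings, i.e. ~2.5 μs at 25 ns, is an assumption for illustration): data shifts in at every crossing and is lost once it falls off the far end, unless a Level-1 Accept reads it out in time.

    from collections import deque

    PIPELINE_DEPTH = 100                       # assumed depth in bunch crossings
    pipeline = deque(maxlen=PIPELINE_DEPTH)    # oldest entry drops off automatically

    def bunch_crossing(bcid, detector_data):
        """Latch the data for this crossing into the pipeline."""
        pipeline.append((bcid, detector_data))

    def level1_accept(bcid):
        """Read out a crossing - only possible while it is still in the pipeline."""
        for stored_bcid, data in pipeline:
            if stored_bcid == bcid:
                return data
        return None                            # too late: the data has fallen out

    for bc in range(200):
        bunch_crossing(bc, {"adc": bc % 7})

    print(level1_accept(150) is not None)      # True  - still within the last 100 crossings
    print(level1_accept(50) is not None)       # False - dropped out of the pipeline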



Event’s Eye View - step-2

  • The Level-1 Central Trigger Processor combines the information from the Muon and Calo triggers and when appropriate generates the Level-1 Accept (L1A)

  • The L1A is distributed in real-time via the TTC system to the detector front-ends to send data from the accepted event to the detector ROD’s (Read-Out Drivers)

    • Note must arrive before data has dropped out of the pipe-line - hence hard dead-line of 2.5 micro-secs

    • The TTC system (Trigger, Timing and Control) is a CERN system used by all of the LHC experiments. Allows very precise real-time data distribution of small data packets

  • Detector ROD’s receive data, process and reformat it as necessary and send via fibre links to TDAQ ROS



Event’s Eye View - Step-3

  • At L1A the different parts of LVL1 also send RoI data to the RoI Builder (RoIB), which combines the information and sends as a single packet to a Level-2 Supervisor PC

    • The RoIB is implemented as a number of VME boards with FPGAs to identify and combine the fragments coming from the same event from the different parts of Level-1



Step-4

ATLAS Level-2 Trigger


Region of Interest Builder (RoIB) passes formatted information to one of the LVL2 supervisors.

LVL2 supervisor selects one of the processors in the LVL2 farm and sends it the RoI information.

LVL2 processor requests data from the ROSs as needed (possibly in several steps), produces an accept or reject and informs the LVL2 supervisor. Result of processing is stored in pseudo-ROS (pROS) for an accept.

This reduces network traffic to ~2 GB/s, cf. ~150 GB/s if a full event build were done at this rate.

LVL2 supervisor passes decision to the DataFlow Manager (controls Event Building).




Step-5

ATLAS Event Building


For each accepted event the DataFlow Manager selects a Sub-Farm Input (SFI) and sends it a request to take care of the building of a complete Event.

The SFI sends requests to all ROSs for data of the event to be built. Completion of building is reported to the DataFlow Manager.

For rejected events and for events for which event building has completed, the DataFlow Manager sends "clears" to the ROSs (for 100 - 300 events together).

Network traffic for Event Building is ~5 GB/s




Step-6

ATLAS Event Filter


A process (EFD) running in each Event Filter farm node collects each complete event from the SFI and assigns it to one of a number of Processing Tasks in that node

The Event Filter uses more sophisticated algorithms (near or adapted off-line) and more detailed calibration data to select events based on the complete event data

Accepted events are sent to SFO (Sub-Farm Output) node to be written to disk




Step-7

ATLAS Data Output


The SFO nodes receive the final accepted events and write them to disk

The events include ‘Stream Tags’ to support multiple simultaneous files (e.g. Express Stream, Calibration, b-physics stream, etc)

Files are closed when they reach 2 GB or at end of run

Closed files are finally transmitted via GbE to the CERN Tier-0 for off-line analysis




ATLAS HLT Hardware

First 4 racks of HLT processors, each rack contains

  • ~30 HLT PCs (very similar to Tier-0/1 compute nodes)

  • 2 Gigabit Ethernet Switches

  • a dedicated Local File Server



ATLAS TDAQ Barrack Rack Layout


Naming Convention

  • First Level Trigger (LVL1) signatures in capitals, e.g. MU20I = name (MU), threshold (20), isolated (I)

  • HLT in lower case, e.g. mu20i_passEF = name, threshold, isolated, with _passEF meaning the EF runs in tagging mode (pass-through)

  • New in 13.0.30: the threshold is the cut value applied (previously it was the ~95% efficiency point)

  • More details: see https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerPhysicsMenu
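
A small sketch of parsing names that follow this convention (the regular expression is an assumption covering only the patterns shown here, not an official parser):

    import re

    # <name><threshold>[i|I][_modifier]; capitals = LVL1 item, lower case = HLT chain
    PATTERN = re.compile(r"^(?P<name>[A-Za-z]+?)(?P<threshold>\d+)(?P<iso>[iI])?(?:_(?P<modifier>\w+))?$")

    for item in ["MU20I", "mu20i_passEF", "EM18I", "2MU6"]:
        m = PATTERN.match(item)
        print(item, m.groupdict() if m else "multiplicity prefix not handled here")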



Min Bias Triggers

Min. Bias Trigger available for the first time in 13.0.30.3

  • Based on space-point (SP) counting

  • Trigger if: > 40 SCT space points or > 900 Pixel clusters

To be done: add MBTS trigger

MBTS – Scintillators on the inside of endcap calorimeter giving LVL1 info.



Electron Menu Coverage for L = 10^31 cm^-2 s^-1

16 LVL1 Thresholds for EM (electron, photon) & HA (tau)

EM3, EM7, EM13, EM13I, EM18, EM18I, EM23I, EM100




Photon Menus for 10^31

Total rate (including overlaps) ~10 Hz



Muon Triggers

Six LVL1 thresholds : MU4, MU6, MU10, MU15, MU20, MU40

Isolation can be applied at the HLT


B-physics

LVL1 + Muon at HLT

  • 2mu4 : 2.5 Hz

  • mu4 & mu6 pre-scaled : 4 Hz

    LVL1 + ID & MU at HLT:

  • mu4_DsPhiPi_FS, MU4_Jpsimumu_FS, MU4_Upsimumu_FS,

  • MU4_Bmumu_FS, MU4_BmumuX_FS

Loose selections ~10Hz



Tau Triggers

16 LVL1 Thresholds for EM (electron, photon) & HA (tau)

HA5, HA6, HA9I, HA11I, HA16I, HA25, HA25I, HA40



Single Jet Triggers

  • Strategy:

  • Initially use LVL1 selection with no active HLT selection and b-jet trigger in tagging mode

  • 8 LVL1 Jet thresholds:

    • Highest un-prescaled, value determined by rate considerations (Aim for ~20Hz)

    • Other thresholds set to equalize bandwidth across the ET spectrum

    • Lowest threshold used to provide RoI for Bphysics trigger.
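
A sketch of the "equalize bandwidth" idea: given an assumed raw rate for each jet threshold, choose pre-scale factors so that every threshold contributes roughly the same recorded rate (all item names and rates here are invented for illustration):

    # Hypothetical raw LVL1 rates per jet threshold [Hz]; aim for ~20 Hz recorded each.
    raw_rates = {"J10": 2.0e5, "J18": 4.0e4, "J23": 1.0e4, "J35": 2.0e3,
                 "J42": 8.0e2, "J70": 1.0e2, "J120": 20.0}
    target = 20.0   # Hz per threshold

    for item, rate in raw_rates.items():
        prescale = max(1, round(rate / target))   # highest threshold ends up un-prescaled
        print(f"{item:5s} raw {rate:9.0f} Hz  prescale {prescale:6d}  recorded ~{rate / prescale:5.1f} Hz")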



Jet Triggers (contd)

Trigger Rates for Forward Jets

Trigger Rates for multi-jets



Bjet Triggers

  • Jets tagged as B-jets at HLT based on track information

  • Will allow lower LVL1 jet thresholds to be used

  • For initial running the Bjet triggers will be in tagging mode. Active selection will be switched on once the detector & trigger are understood.



Missing ET, Total SumET

8 LVL1 Missing ET thresholds


Combined Triggers

  • Menu contains a large number of combined signatures

Total Rate 46 Hz


Total Rates

[Table: total trigger rates at LVL1, LVL2 and the EF]

