NOνA Event Building, Buffering & Filtering From within the DAQ System


[Figure: Far detector readout — 11,520 Front End Boards (368,640 detector channels) send 5 ms data blocks to the Data Concentrator Modules; buffer nodes hold the data and feed the Data Driven Triggers system through a shared memory DDT event stack.]


M. Fischler, C. Green, J. Kowalkowski, A. Norman, M. Paterno, R. Rechenmacher, Fermi National Accelerator Laboratory







[Figure: readout chain (continued) — data are zero suppressed at the 180 Data Concentrator Modules and sent over 1 Gb/s COTS Ethernet to the Event Builder buffer nodes as a minimum bias 0.75 GB/s stream; the buffer nodes receive data driven trigger decisions, and the NuMI beam arrives 14 mrad off-axis after 810 km.]

The NOνA Experiment & DAQ Readout

The Power of Data Driven Triggering

[Figure: trigger handling on the buffer nodes — trigger reception feeds a data slice pointer table, and individual trigger sources are combined in a grand trigger OR. Inset: νµ CC candidate event.]

The NOνA experiment is designed to probe the oscillations of neutrinos and anti-neutrinos to determine the transition probabilities for νµ → νe and ν̄µ → ν̄e.

These transition rates will allow NOνA to determine the neutrino mass hierarchy and tightly constrain the CP violating phase δ. The NOνA measurements will improve our understanding of the mixing angles θ13 and θ23. To perform these measurements, NOνA has been designed with a 15 kton far detector located 810 km away from Fermilab and a smaller near detector at Fermilab.

This enormous detector, with over 368,000 channels of synchronized readout electronics, operates in a free-running “trigger-less” readout mode. All of the channels are continuously digitized, and the data are accumulated through a multi-tiered readout chain into a large farm of high speed computers. The hit data are buffered for a minimum of 20 seconds in the farm while they await beam timing data from Fermilab.

Beam spill information from the NuMI (Neutrinos at the Main Injector) beamline is generated independently at FNAL and transmitted to the Ash River detector site. When the beam information is received, the relevant data blocks that time-match the beam spill are extracted with zero bias and recorded to permanent storage.
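The zero-bias extraction described above amounts to a time-window overlap test against the buffered data blocks. A minimal sketch in Python; the function name, block length, and spill window width are illustrative, not the actual NOνA DAQ interfaces:

```python
# Sketch of zero-bias time matching: given a beam spill time received
# asynchronously from FNAL, select the buffered 5 ms data blocks whose
# time span overlaps the spill window. Names and the ~10 us spill
# window are illustrative, not NOvA's actual code or parameters.

BLOCK_LEN = 0.005  # each buffered data block covers 5 ms


def blocks_matching_spill(block_starts, spill_time, spill_window=10e-6):
    """Return start times of blocks overlapping [spill_time, spill_time + spill_window)."""
    spill_end = spill_time + spill_window
    return [t0 for t0 in block_starts
            if t0 < spill_end and t0 + BLOCK_LEN > spill_time]


# Example: blocks every 5 ms; a spill at t = 0.0112 s falls inside the
# block that starts at 0.010 s.
starts = [i * BLOCK_LEN for i in range(10)]
print(blocks_matching_spill(starts, 0.0112))
```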

The rest of the data is examined using real time filtering to determine which portions of the 4.3 GB/s data stream are of interest.

  • The NOνA detector utilizes a simple XY range stack geometry. This allows the application of very efficient and powerful global tracking algorithms to identify event topologies that are of interest to physics analyses.
  • The NOνA online physics program needs the ability to produce data driven triggers based on the identification and global reconstruction characteristics of four broad classes of event topology:
  • Linear tracks
  • Multi-track vertices
  • EM shower objects
  • Correlated hit clusters
  • Being able to generate DDTs based on these topologies allows NOνA to:
  • Improve detector calibrations using “horizontal” muons
  • Verify detector/beam timing with contained neutrino events
  • Search for exotic phenomena in non-beam data (magnetic monopole, WIMP annihilation signatures)
  • Search for directional cosmic neutrino sources
  • Detect nearby supernovae

[Figure: trigger flow — the Global Trigger Processor combines beam spill indicators (arriving asynchronously from FNAL at 0.5-0.9 Hz), calibration pulser triggers (50-91 Hz), and data driven trigger decisions, then broadcasts triggers to the 140-200 buffer node computers (3200 compute cores), which perform a data time window search and send the triggered data output to the data logger. Inset: multi-prong neutrino candidate (cosmic ray).]

NOνA Far Detector Stats:
  • Weight: 15 kton
  • Height: 5 stories (53 × 53 ft)
  • Detection cells: 368,640
  • “Largest plastic structure built by man.”

Neutrino event topologies of interest observed in the NOνA prototype near detector during the 2011-2012 run. These topologies exhibit the core features used for identification with the NOνA-DDT.

Prototype near detector, in operation at FNAL from 2010 to present.

DAQ Topology

DAQ Event Buffering & Building

Realtime Event Filtering

ARTDAQ is a software product, under active development at Fermilab, that provides tools to build programs that combine realtime event building and filtering of events as part of an integrated DAQ system.

The NOνA Data Driven Trigger (NOνA-DDT) demonstrator module uses the high speed event feeding and event filtering components of ARTDAQ to provide a real-world proof of concept application that meets the NOνA trigger requirements.

NOνA-DDT takes already-built 5 ms wide raw data windows from shared memory and injects them into the art framework, where trigger decision modules are run and a decision message is broadcast to the NOνA Global Trigger system.

The art framework allows standard NOνA reconstruction concepts to be applied to the online trigger decision process. The common framework additionally allows online reconstruction & filtering algorithms to be applied in the offline environment.

This allows the identical code used in the DAQ to be used in offline simulation and testing, for determination of selection efficiencies, reconstruction biases, and overall trigger performance.

Because the DAQ system performs a 200-to-1 synchronized burst transmission of data between the data concentrator modules (DCMs) and the Event Builder buffer nodes, a large amount of network and data transmission buffering is required to handle the data rate.

The buffering has been tuned by adjusting:

  • the size of data packets, to match a single complete time window
  • the receive window on the buffer nodes, to exploit the buffering capability of the high speed switched fabric (a Cisco 4948 network switch array) between the DCMs and the buffer nodes.
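The receive-side half of this tuning is the familiar kernel socket-buffer adjustment. A minimal sketch in Python; the 16 MB request is an illustrative value, not NOνA's setting:

```python
# Sketch of receive-side buffer tuning: request a large kernel receive
# buffer so one socket can absorb a synchronized burst from many
# senders. The 16 MB figure is illustrative, not NOvA's actual value.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask for a large receive buffer; the OS may clamp the request
# (on Linux, to net.core.rmem_max, and it reports a doubled value).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16 * 1024 * 1024)
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"receive buffer granted: {granted} bytes")
sock.close()
```

Checking the granted size matters because a silently clamped buffer reintroduces exactly the burst-loss problem the tuning is meant to avoid.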

The NOνADAQ system is designed to support continuous readout of over 368,000 detector channels. NOνA accomplishes this by creating a hierarchical readout structure that transmits raw data to aggregation nodes which perform multiple stages of event building and organization before retransmitting the data in real time to the next stage of the DAQ chain. In this manner data moves from Front End digitizer boards (FEBs) to DCMs to Event Building & Buffering nodes and finally into a data logging station.

The flow of data out of the buffer nodes is controlled by a Global Trigger process that issues extraction directives based on time windows identified to be of interest. These windows can represent beam spills, calibration triggers, or data driven trigger decisions.

The Buffer nodes used for DAQ processing are a farm of high performance commodity computers. Each node has a minimum of 16 compute cores and 32 GB of available memory for data processing and buffering. This permits them to run both the event builder process and the full multi-threaded data driven triggering suite, and buffer a minimum of 20 seconds of full, continuous, zero-bias data readout.
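A back-of-envelope check, using only figures quoted in the text (4.3 GB/s aggregate stream, 200 buffer nodes, 20 s minimum depth), shows why this buffering fits comfortably in each node's 32 GB:

```python
# Back-of-envelope check of the buffer sizing quoted above, using
# figures from the text: a 4.3 GB/s aggregate stream shared across
# 200 buffer nodes, each holding at least 20 s of data.
total_rate_gb_s = 4.3
nodes = 200
buffer_depth_s = 20

per_node_rate_mb_s = total_rate_gb_s * 1024 / nodes        # ~22 MB/s per node
per_node_buffer_gb = per_node_rate_mb_s * buffer_depth_s / 1024

print(f"per-node input rate: {per_node_rate_mb_s:.1f} MB/s")
print(f"20 s of buffering:   {per_node_buffer_gb:.2f} GB per node")
# Well under 32 GB per node, leaving headroom for the event builder
# and the multi-threaded trigger suite.
```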

Network (switch) buffering optimization as a function of far detector average DCM data rates and acquisition window time.

Data Driven Trigger Buffering

  • As timeslices of detector data are collected and built, they are written into two distinct buffer pools:
  • The Global Trigger pool contains timeslices waiting for up to 20 s to be extracted by a global trigger and sent to the data logger.
  • The DDT pool is a separate set of IPC shared memory buffers designed to be the input queues for the data driven trigger analysis processes.
  • The DDT shared memory interface is optimized as a one-writer-many-readers design:
  • The shared memory buffer has N “large” (fixed size) segments, which the writer rotates through to reduce collisions with reader processes.
  • The writing process places guard indicators before and after the detector data. These indicators allow reader processes to tell whether the data segment being read may have been overwritten during readout.
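The guard-indicator scheme above is a seqlock-style pattern: the writer stamps matching serial numbers around the payload, so a lock-free reader can detect a segment that was recycled mid-read. A single-process sketch of the idea; class and field names are illustrative, not the actual NOνA shared-memory layout:

```python
# Illustration of guard indicators around a shared-memory segment
# (one writer, many readers, no locks). Names are hypothetical.

class GuardedSegment:
    def __init__(self):
        self.guard_front = 0   # stamped before the payload is written
        self.guard_back = 0    # stamped after the payload is written
        self.payload = []

    def write(self, payload, serial):
        self.guard_front = serial    # front guard first
        self.payload = list(payload)
        self.guard_back = serial     # back guard last: equal guards => consistent

    def read(self):
        # Read one guard, copy the payload, read the other guard.
        # If the guards disagree, a writer rotated into this segment
        # during the copy and the data must be discarded.
        front = self.guard_front
        copy = list(self.payload)
        back = self.guard_back
        return copy, front == back


seg = GuardedSegment()
seg.write([11, 22, 33], serial=1)
data, valid = seg.read()        # clean read: guards match

seg.guard_front = 2             # simulate a new write starting mid-read
_, valid_torn = seg.read()      # guards now disagree -> reject the copy
print(data, valid, valid_torn)  # → [11, 22, 33] True False
```

The design trades a small chance of a discarded read for fully lock-free readers, which is what lets many trigger processes scan the same buffers without stalling the writer.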

NOνA-DDT Performance

The NOνA-DDT prototype system implements a real-world trigger algorithm designed to identify tracks in the NOνA detector. The algorithm calculates the pairwise intersections of the Hough trajectories for the detector hits and identifies the resulting track candidate peaks in Hough space. This is a real-world example of an N² complexity algorithm, and it is of particular interest due to its ability to identify candidate slow particle tracks (i.e. magnetic monopoles with β > 10⁻⁵) in the far detector.
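The pairwise form of the transform can be sketched compactly: every pair of hits defines one (θ, ρ) line hypothesis, and collinear hits pile up in the same accumulator bin. Bin widths and the hit format here are illustrative, not the tuned NOνA-DDT parameters:

```python
# Sketch of pairwise Hough voting: each pair of 2D hits casts one vote
# for the (theta, rho) of the line through the pair; peaks in the
# accumulator are track candidates. N hits -> N*(N-1)/2 pairs, which
# is the N^2 cost mentioned above. Parameters are illustrative.
import math
from collections import Counter
from itertools import combinations


def hough_peaks(hits, theta_bin=0.05, rho_bin=2.0):
    """Return the most-voted (theta, rho) bins for a list of (x, y) hits."""
    acc = Counter()
    for (x1, y1), (x2, y2) in combinations(hits, 2):
        # Normal angle of the line through the two hits, then its
        # signed distance from the origin in the normal form
        # rho = x*cos(theta) + y*sin(theta).
        theta = math.atan2(x2 - x1, y1 - y2)
        rho = x1 * math.cos(theta) + y1 * math.sin(theta)
        acc[(round(theta / theta_bin), round(rho / rho_bin))] += 1
    return acc.most_common(3)


# Five hits on the line y = 2x + 1 plus one noise hit: the top bin
# collects all 10 votes from the collinear pairs.
hits = [(x, 2 * x + 1) for x in range(5)] + [(10.0, 3.0)]
print(hough_peaks(hits))
```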

The NOνA-DDT Hough algorithm, without parallelization, was run on the NOνA near detector DAQ trigger cluster. It was able to reach decisions in, on average, 98 ± 14 ms (compared to the target goal of 60 ms). It achieved this level of performance using only 1/16th of the available CPU resources of each cluster node. This initial test shows that the system, after parallelization and optimization, will be able to meet and exceed the NOνA trigger requirements.

Hough transform execution time for 5 ms time windows of NOνA near detector readout. No parallelization was applied to the algorithm.

The ARTDAQ framework overhead for processing 5 ms time windows of NOνA near detector readout. The overhead includes unpacking and formatting of the DAQHit data objects.

Fermilab Scientific Computing Division

Fermi National Accelerator Laboratory, Batavia, Illinois, USA.

Read the Full paper at: