1 / 19

Parallel Neutrino Triggers using GPUs for an underwater telescope KM3-NEMO




Presentation Transcript


  1. Parallel Neutrino Triggers using GPUs for an underwater telescope KM3-NEMO Bachir Bouhadef, Mauro Morganti, G. Terreni (KM3-Italy Collaboration) INFN Pisa & Physics Department of Pisa

  2. Outline • NEMO Phase II Tower • DAQ system for the NEMO Tower • Showing the feasibility of a CPU-GPU DAQ for online muon-track selection • Proposing a method for parallelizing the online trigger software of the NEMO Phase II Tower • Testing muon triggers on GPUs • A proposal for KM3NeT-It Tower trigger data handling

  3. NEMO Tower Phase II: 8 floors, 4 PMTs/floor (32 PMTs), 8 m floor arm length. KM3-NEMO Tower: 14 floors (Floor 1 to Floor 14), 6 PMTs/floor (84 PMTs), 6 m floor arm. Background: 32 PMTs × 55 kHz ≈ 1.7 Mhit/s.

  4. Trigger and Data Acquisition System (NEMO Phase II). Diagram from T. Chiarusi & F. Simeone, VLVNT 2013.

  5. Trigger and Data Acquisition System (NEMO Phase II), onshore. (Diagram: hits flow through a Gbit switch to Hit Managers HM 0/HM 1 and Trigger CPUs TCPU 0/TCPU 1, organized in time slices TS 0, TS 1, ...) A TS is a Time Slice of 200 ms. The trigger in the TCPU: time-sorting the PMT hits and applying a charge threshold and time coincidences (simple coincidence, SC; floor coincidence, AFC).
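The TCPU trigger steps just described (time-sorting the hits, a charge cut, then a coincidence search) can be sketched as follows. This is a minimal illustration: the hit layout, charge threshold, and 20 ns coincidence window are assumed values, not parameters from the slides.

```python
# Sketch of the TCPU trigger logic: time-sort hits, apply a charge
# threshold, then look for simple coincidences (two different PMTs
# hit within a short window). All numeric values are assumptions.
COINC_WINDOW_NS = 20.0   # assumed coincidence window
CHARGE_THRESHOLD = 1.5   # assumed charge cut (arbitrary units)

def tcpu_trigger(hits):
    """hits: list of (time_ns, pmt_id, charge). Returns coincident pairs."""
    # Keep only hits above the charge threshold, sorted by time.
    good = sorted((h for h in hits if h[2] > CHARGE_THRESHOLD),
                  key=lambda h: h[0])
    pairs = []
    for i, (t0, pmt0, _) in enumerate(good):
        for t1, pmt1, _ in good[i + 1:]:
            if t1 - t0 > COINC_WINDOW_NS:
                break                      # hits are time-ordered
            if pmt1 != pmt0:               # simple coincidence: 2 PMTs
                pairs.append((t0, t1, pmt0, pmt1))
    return pairs

hits = [(100.0, 1, 2.0), (110.0, 2, 3.0), (500.0, 3, 0.5), (505.0, 4, 2.5)]
print(tcpu_trigger(hits))   # one pair: PMTs 1 and 2, 10 ns apart
```

On the real TCPU the same scan runs over each 200 ms time slice; the early `break` works because the hits are time-ordered.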

  6. Why GPUs? A scalable programming model: a GPU uses blocks and threads for parallel programming. (Diagram: SC/AFC triggers for a tower detection unit, D.U.)

  7. A new parallel trigger for muon detection based on the time differences of muon hits. Most of a muon track's hits fall within a time window (TW). At least 5 hits on different PMTs are needed to reconstruct the muon track, so we look for a number of hits N within a fixed time window.
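The "N hits within a fixed time window" condition can be checked with a two-pointer sweep over time-sorted hit times. A minimal sketch, assuming the hits are already on distinct PMTs (the PMT-distinctness check is omitted) and with illustrative N and window values:

```python
def n_hits_in_window(times, n=5, window_ns=1000.0):
    """True if any sliding window of width window_ns contains
    at least n of the given hit times (in nanoseconds)."""
    times = sorted(times)
    lo = 0
    for hi in range(len(times)):
        # shrink the window from the left until it fits
        while times[hi] - times[lo] > window_ns:
            lo += 1
        if hi - lo + 1 >= n:
            return True
    return False

print(n_hits_in_window([0, 200, 400, 600, 800]))    # True: 5 hits span 800 ns
print(n_hits_in_window([0, 200, 400, 600, 1500]))   # False: only 4 fit
```

The sweep is O(n log n) from the sort; on the GPU each thread would run it over its own time interval.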

  8. (Figure slide.)

  9. We propose a DAQ system using a CPU-GPU architecture. (Diagram: onshore Gbit switch and storage unit.)

  10. The TCPU is replaced by a TGPU-CPU; every second, each TGPU-CPU receives 5 time slices of 200 ms each.

  11. CPU work. Step 1: 5 TTSs (tower time slices TTS0 to TTS4) arrive from the network thread; 5 CPU threads put the PMT hits into the correct time intervals, the hits are time-ordered per thread, and the structure is ready to be treated in the GPU. Step 2: since the number of hits per interval is not fixed, we must prepare a new structure for the GPU. The number of threads must be a multiple of 5 and of 32. We cannot predict how many hits each thread gets, so we fix a maximum number of hits per thread using the nominal rate × 3 or × 6. We have also considered the edge effect between threads and between TTSs. Threads with only a few hits should be avoided by choosing the optimal thread time interval.
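The fixed-capacity structure of Step 2 can be sketched as follows: hits are binned into equal thread time intervals, each bucket is capped using the nominal rate times a safety factor, and hits near a boundary are duplicated into the previous bucket so that windows crossing the edge are not lost. The interval layout, overlap width, and safety factor here are illustrative assumptions, not the slide's actual parameters.

```python
def build_gpu_structure(hits, slice_ns, n_threads, rate_hz, safety=3,
                        overlap_ns=1000.0):
    """hits: time-sorted (time_ns, pmt_id, charge) within one time slice.
    Returns one fixed-capacity bucket of hits per GPU thread."""
    dt = slice_ns / n_threads                      # thread time interval
    # max hits per thread from the nominal rate, x3 (or x6) safety factor
    cap = max(1, int(rate_hz * (dt + overlap_ns) * 1e-9 * safety))
    buckets = [[] for _ in range(n_threads)]
    for t, pmt, q in hits:
        i = min(int(t // dt), n_threads - 1)
        if len(buckets[i]) < cap:
            buckets[i].append((t, pmt, q))
        # edge effect: hits close to the left boundary are copied into
        # the previous bucket too, so cross-boundary windows survive
        if i > 0 and t - i * dt < overlap_ns and len(buckets[i - 1]) < cap:
            buckets[i - 1].append((t, pmt, q))
    return buckets
```

With a 200 ms slice, 10 threads, and a 55 kHz rate the cap comes out around 3300 hits per thread; in practice `n_threads` would be a multiple of 5 and 32 as the slide requires.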

  12. GPU work. Step 1: sort all PMT hits within each TTS using a classical algorithm (Shell sort). Step 2: the structure is ready for trigger tagging. In the L1 trigger, all possible triggers can be implemented; according to the L0+L1 efficiencies, the best one is chosen to tag the event to be saved. Trigger definitions:
  • 1 (L0): N7TW1000 (at least 7 hits within a 1000 ns window)
  • (L0): SC or AFC
  • 2 (L1): N7TW1000 & SC
  • 3 (L1): N7TW1000 & AFC
  • 4 (L2): N7TW1000 & SC & AFC
  • 5 (L2): N7TW1000 & ((SC & AFC) || (SC>1) || (AFC>1))
  • 6 (L2): N7TW1000 & (SC || AFC)
  • 7 (L0): Charge > Charge_THRHD
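The numbered combinations above are plain boolean logic over precomputed primitives. A sketch, assuming N7TW1000 arrives as a flag and SC/AFC as coincidence counts; the unnumbered SC-or-AFC entry and the charge cut (trigger 7) are left out since they need the raw hits, not these primitives:

```python
def trigger_tags(n7tw1000, sc, afc):
    """n7tw1000: bool (>= 7 hits in 1000 ns); sc, afc: coincidence
    counts in the event. Returns the set of trigger numbers that fire."""
    tags = set()
    if n7tw1000:
        tags.add(1)                                          # L0
        if sc >= 1:
            tags.add(2)                                      # L1
        if afc >= 1:
            tags.add(3)                                      # L1
        if sc >= 1 and afc >= 1:
            tags.add(4)                                      # L2
        if (sc >= 1 and afc >= 1) or sc > 1 or afc > 1:
            tags.add(5)                                      # L2
        if sc >= 1 or afc >= 1:
            tags.add(6)                                      # L2
    return tags

print(trigger_tags(True, sc=2, afc=0))   # {1, 2, 5, 6}
```

Evaluating all combinations at once is cheap, which is what lets the efficiencies of the different L0/L1/L2 definitions be compared before choosing one.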

  13. Muon trigger tests on GPUs: the GPU cards used. (Table of GPU cards.)

  14. Muon trigger tests on GPUs. Trigger execution time for one tower of 32 PMTs at ~55 kHz: using 2 different GPU cards and one second of raw data for 32 PMTs at a background rate of ~55 kHz, applying all triggers. The black times are measured with the CPU idle; the red times are measured while the CPU executes other processes.

  15. Muon trigger tests on GPUs. Using the same two GPU cards, we use one second of data from 84 PMTs (6 PMTs × 14 floors) at a background rate of 55 kHz, applying all triggers. The black numbers are the times measured with the CPU idle; the red numbers are the times measured while the CPU executes other processes.

  16. We have also tested a new trigger that combines 7 different hits within 1000 ns with a time-space correlation with respect to the first hit. On the GPU this trigger was applied to data from a tower of 84 PMTs; the measured trigger time is 350 ms on a Tesla C2050. We are also working on muon-track reconstruction in GPUs. Sorting on a Tesla C2050: 100 × 182 threads (Alessio Bacciarelli).
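A time-space correlation with respect to the first hit is typically a causality cut: two hits can come from the same light front only if their spatial separation is compatible with their time difference and the light velocity in water. The slides do not give the exact cut, so this is a sketch under that assumption, with an assumed refractive index and tolerance:

```python
import math

# assumed light group velocity in seawater (n ~ 1.38), in m/ns
C_WATER_M_PER_NS = 0.2998 / 1.38

def causally_related(hit0, hit1, tol_ns=20.0):
    """hit = (t_ns, x, y, z), time in ns, position in metres.
    True if hit1 is compatible with light from the same track as hit0."""
    dt = abs(hit1[0] - hit0[0])
    dr = math.dist(hit0[1:], hit1[1:])
    # light must have had time to cover the separation (plus tolerance)
    return dr <= C_WATER_M_PER_NS * (dt + tol_ns)

# PMTs 10 m apart: light needs ~46 ns, so a 50 ns gap passes, 5 ns fails
print(causally_related((0.0, 0, 0, 0), (50.0, 0, 0, 10)))
print(causally_related((0.0, 0, 0, 0), (5.0, 0, 0, 10)))
```

Applied against the first hit of a 1000 ns candidate window, the cut rejects random background coincidences that the pure time-window trigger would accept.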

  17. Proposed CPU-GPU DAQ for the 8 KM3-Ita towers.

  18. Conclusion and future work - GPUs can also find a place in optical neutrino telescopes. - Other, more selective online algorithms can still be applied. - Muon-track reconstruction is underway. - Both the Tesla C2050 and the GTX TITAN would be a good choice.

  19. Thank you for your attention. I would like to thank all members of the KM3NeT-It Collaboration.
