
The CMS Modular Track Finder (MTF7) Trigger Board



  1. The CMS Modular Track Finder (MTF7) Trigger Board D. Acosta, G. Brown, A. Carnes, M. Carver, D. Curry, G.P. Di Giovanni, I. Furic, A. Kropivnitskaya, A. Madorsky, D. Rank, C. Reeves, B. Scurlock, S. Wang University of Florida/Physics, Gainesville, FL, USA M. Matveev, P. Padley Rice University, Houston, TX, USA

  2. CMS Endcap Muon System
  (Diagram: φ and θ/η views of the endcap muon system)
  • η coverage: 1.2 to 2.4

  3. Muon Trigger structure rework
  (Diagram: Barrel TF, Overlap TF and Endcap TF)
  • The Overlap TF is now separated from the Endcap and Barrel TFs

  4. CMS Endcap Muon Trigger
  • Each of the two endcaps is split into 6 sectors of 60° each
  • Each sector is served by one Sector Processor (SP)
  • 12 SPs in total in the entire system
  • The CMS trigger requires us to identify distinct muons
  • Each SP can build up to 3 muon tracks per BX
  (Diagram: one 60° trigger sector)

  5. Endcap Muon Trigger upgrade
  (Data flow for one 60° muon endcap trigger sector:)
  • Trigger information (wiregroup patterns and strip hits) arrives as trigger primitives: 2 per chamber, 18 per station, 90 total
  • Upgraded Port Cards: one per station, 1/6 filtering
  • Fibers (~100 m) carry 18 primitives per station, 90 total
  • Upgraded Sector Processor: complete 3-D tracks assembled from primitives, up to 3 tracks per BX

  6. Motivations for upgrade
  • The current Endcap Trigger is adequate for the LHC luminosity before the upgrade
  • The following improvements are needed for the upgrade:
  • Transverse momentum (Pt) assignment:
    • Final Pt assignment is currently done with a 2 MB LUT
    • Its address space is already over-saturated
    • A bigger Pt assignment memory is needed (see the sketch after this list)
  • Trigger primitive bandwidth:
    • With the luminosity upgrade, we expect ~7 trigger primitives per sector per BX on average
    • The current system selects only the 3 best primitives in each sector, which is inadequate for the upgrade
    • We need to import preferably all primitives on each BX: 18 per sector
    • We also need to import other data (Resistive Plate Chambers)
  • The above means: higher input bandwidth and a bigger FPGA
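
  To make the address-space argument concrete, the sketch below (illustrative C, not project code) compares the address widths implied by the two LUT sizes; the byte-wide entries assumed for the current 2 MB LUT are an assumption made only for this comparison:

  ```c
  #include <stdio.h>

  /* Every extra input (address) bit doubles a LUT, which is why a fixed
     2 MB table saturates. Entry widths are assumptions for illustration:
     byte-wide for the current LUT, 18-bit words for the upgraded one
     (512M x 18, see the PT LUT module slide). */
  int main(void)
  {
      unsigned long long cur_entries = 2ULL << 20;    /* 2M entries   */
      unsigned long long new_entries = 512ULL << 20;  /* 512M entries */

      int cur_bits = 0, new_bits = 0;
      while ((1ULL << cur_bits) < cur_entries) cur_bits++;  /* -> 21 */
      while ((1ULL << new_bits) < new_entries) new_bits++;  /* -> 29 */

      printf("current LUT:  %d address bits\n", cur_bits);
      printf("upgraded LUT: %d address bits\n", new_bits);
      printf("extra bits available for track parameters: %d\n",
             new_bits - cur_bits);
      return 0;
  }
  ```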

  7. Block diagram
  • From the MPCs: 60 12-core fibers, 8 cores used in each; 90 trigger primitives per 60° sector at 3.2 Gbps
  • From the RPCs: up to 216 fibers at 1.6 Gbps (may be concentrated to higher bandwidth and fewer fibers)
  • Optical plant (fanouts and splitters) distributes the inputs; data also go to the Overlap Track Finder
  • Sector Processors: 12 units, one 60° sector each; the best 3 muons in each sector go to the Muon Sorter
  • Muon Sorter: sends the 8 best muons to the Global Muon Trigger

  8. uTCA chassis
  • Sector Processor (SP): occupies 2 uTCA slots; 12 units in the system
  • Muon Sorter (MS): hardware identical to the SP; 1 unit in the system
  • All chassis use the AMC13 (designed by Boston University) for clocking and DAQ: 3 units
  • Plan to control the boards via PCI Express; they will be made compatible with IPbus as well
  • Layout: Chassis #1: 5 SPs; Chassis #2: 5 SPs; Chassis #3: 2 SPs + MS

  9. Hardware prototype
  • 2012 prototype, based on a Virtex-6 FPGA
  • Modular design makes future partial upgrades easier
  (Photos: custom backplane, core logic module, optical module, Pt LUT module)

  10. Virtex-6 Core logic module
  (Board photo labels: custom backplane connector, PT LUT module connector, Control FPGA with its JTAG and SD card connector, Core logic FPGA, FMM connector, MMC CPU with USB console and JTAG, uTCA connector, 1 Gb FLASH for main FPGA firmware storage)
  • MMC = Module Management Controller
  • Estimated power consumption: ~50 W (assuming nearly full FPGAs; PT LUT mezzanine not included)

  11. Virtex-6 Core logic module: Features
  • Serial I/O (aggregate bandwidth tallied in the sketch after this list):
    • 53 GTX receivers (up to 4.8 Gbps)
    • 8 GTH receivers (10 Gbps)
    • 12 GTX transmitters (up to 4.8 Gbps)
    • 2 GTH transmitters (10 Gbps)
  • MMC: Wisconsin design (see this link for details)
  • Configuration memory for the Core FPGA: PC28F00AP30EFA, a 1 Gb parallel FLASH
    • Can be used to store other information in addition to firmware: permanent configuration settings, multiple firmware versions
  • SD card slot: can also be used to store Core FPGA firmware and settings
  • Fast Monitoring (FMM) connector, compatible with the current FMM system
  • Control interfaces: PCI Express and IPbus
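
  As a quick cross-check of the serial I/O budget, this sketch (illustrative C) totals the link bandwidth, assuming every link runs at its maximum quoted rate:

  ```c
  #include <stdio.h>

  /* Aggregate serial bandwidth of the Virtex-6 core logic module,
     assuming every link runs at the maximum rate listed above. */
  int main(void)
  {
      double rx = 53 * 4.8 + 8 * 10.0;  /* receivers, Gbps    -> 334.4 */
      double tx = 12 * 4.8 + 2 * 10.0;  /* transmitters, Gbps ->  77.6 */
      printf("max input bandwidth:  %.1f Gbps\n", rx);
      printf("max output bandwidth: %.1f Gbps\n", tx);
      return 0;
  }
  ```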

  12. Optical module
  (Board photo labels: optical transmitters (2 of 3 installed), optical receivers (2 of 7 installed), custom backplane connector, backplane redrivers, uTCA connector, MMC)

  13. Optical module
  • Receivers: 7 12-channel RX (Avago AFBR-820BEZ), 84 RX channels
  • Transmitters: 3 12-channel TX (Avago AFBR-810BEZ), 28 TX channels used (12+12+4)
  • All of them are 10 Gbps parts (channel totals are tallied in the sketch below)
  • There is not enough space on the front panel to accommodate all of them:
    • TX parts are located inside the board and connect with short fibers to MPO fiber couplers on the front panel
    • Tight, but enough space to fit the couplers on top of the AFBR-820 parts
    • Receivers are on the front panel to minimize the number of fiber-to-fiber transitions on the inputs
  • Control: Wisconsin MMC design, no FPGA
  • Compatible with the future Virtex-7 design of the Core logic board
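
  For completeness, the channel totals quoted above can be tallied as follows (illustrative C; the 10 Gbps per-channel figure comes from the part ratings on this slide):

  ```c
  #include <stdio.h>

  /* Channel bookkeeping for the optical module: 7 x 12-channel receivers,
     3 x 12-channel transmitters with 12+12+4 = 28 channels used, and a
     10 Gbps rating per channel. */
  int main(void)
  {
      int rx = 7 * 12;       /* 84 RX channels */
      int tx = 12 + 12 + 4;  /* 28 of 36 physical TX channels */
      printf("RX: %d channels, %d Gbps raw capacity\n", rx, rx * 10);
      printf("TX: %d channels used, %d Gbps raw capacity\n", tx, tx * 10);
      return 0;
  }
  ```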

  14. PT LUT module
  (Board photo labels: clock synthesis and distribution, base board connector, glue logic FPGA (Spartan-6), DC-DC converters)
  • RLDRAM3 memory: 16 chips, 8 on each side (clamshell topology)
  • Total size: 512M x 18 bits ≈ 1 GB (see the capacity check below)
  • Upgrade to 2 GB possible with bigger RLDRAM3 chips (no board redesign)
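
  A capacity check for the quoted size (illustrative C): 512M x 18 bits is 1.125 GiB, and the slide's ≈1 GB matches exactly if only 16 of the 18 bits per word are counted as payload, though that reading is an assumption:

  ```c
  #include <stdio.h>

  /* PT LUT capacity: 512M addresses x 18-bit words. */
  int main(void)
  {
      unsigned long long words = 512ULL << 20;   /* 512M addresses */
      unsigned long long bits  = words * 18ULL;  /* 9216 Mbit      */
      printf("total: %llu Mbit = %.3f GiB\n",
             bits >> 20, (double)bits / 8.0 / (1ULL << 30));
      printf("16-bit payload only: %.3f GiB\n",
             (double)(words * 16ULL) / 8.0 / (1ULL << 30));
      return 0;
  }
  ```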

  15. Optical communication test
  • At 3.2 Gbps:
    • 47 input channels
    • Transmission from: loopback, Muon Port Card, earlier VME prototype (2010)
    • For the MPC and the VME prototype, the clock was synchronized with the VME crate over a twisted-pair LVDS connection to the uTCA backplane
  • At 10 Gbps:
    • 6 input channels, asynchronous clock
    • Transmission from: loopback, earlier 10 Gbps prototype (2006)
  • Results: zero errors for hours (an illustrative test-pattern sketch follows)
  (Figure: eye pattern at 10 Gbps, GTH receiver input)
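
  The slides do not name the bit pattern used for these tests; a pseudo-random binary sequence (PRBS) is the usual choice for stressing serial links, so here is a minimal PRBS-7 (x^7 + x^6 + 1) generator/checker sketch in C, purely illustrative:

  ```c
  #include <stdio.h>

  /* PRBS-7 generator: the transmitter sends this sequence, and the
     receiver runs an identical generator and counts mismatches. */
  static unsigned prbs7_next(unsigned *state)
  {
      unsigned bit = ((*state >> 6) ^ (*state >> 5)) & 1u;
      *state = ((*state << 1) | bit) & 0x7Fu;
      return bit;
  }

  int main(void)
  {
      unsigned tx = 0x7F, rx = 0x7F;  /* both ends seeded identically */
      unsigned long errors = 0;

      for (unsigned long i = 0; i < 1000000; i++) {
          unsigned b = prbs7_next(&tx);
          if (i == 500000) b ^= 1;    /* inject one error to show detection */
          errors += (b != prbs7_next(&rx));
      }
      printf("bit errors detected: %lu\n", errors);  /* prints 1 */
      return 0;
  }
  ```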

  16. PT LUT tests
  • Parameters:
    • RLDRAM clock: 200 MHz
    • Address & control: 200 Mbps per bit
    • Data: 400 Mbps per bit
  • RLDRAM can tolerate a clock of up to ~1 GHz, but:
    • Such a clock is hard to implement in an FPGA
    • It is needed mostly for burst-oriented applications
    • It does not change the latency of random address access
    • A lower clock frequency means lower power consumption
  • Tests performed (random data, full 1 GB space; see the sketch below):
    • Writing into consecutive addresses
    • Reading from consecutive addresses
    • Reading from random addresses
  • No errors detected, except for soldering defects in one RLDRAM chip
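
  A host-side sketch of the test sequence above (illustrative C): the real tests ran against the RLDRAM3 through the Spartan-6 glue logic over the full 512M x 18-bit space, so here a small malloc'd array stands in for the memory, and the address-derived pattern is an assumption standing in for whatever random-data generator was actually used:

  ```c
  #include <stdint.h>
  #include <stdlib.h>
  #include <stdio.h>

  #define TEST_DEPTH (1u << 20)  /* scaled down from 512M for the sketch */
  #define DATA_MASK  0x3FFFFu    /* 18-bit data word */

  /* Reproducible pseudo-random word derived from the address, so readback
     can be verified without keeping a second copy of the data. */
  static uint32_t pattern(uint32_t addr)
  {
      uint64_t x = (uint64_t)(addr + 1) * 0x9E3779B97F4A7C15ULL;
      return (uint32_t)(x >> 32) & DATA_MASK;
  }

  int main(void)
  {
      uint32_t *mem = malloc(TEST_DEPTH * sizeof *mem); /* RLDRAM3 stand-in */
      uint32_t errors = 0;
      if (!mem) return 1;

      for (uint32_t a = 0; a < TEST_DEPTH; a++)   /* write consecutive addresses */
          mem[a] = pattern(a);

      for (uint32_t a = 0; a < TEST_DEPTH; a++)   /* read consecutive addresses */
          errors += (mem[a] != pattern(a));

      srand(1);
      for (uint32_t i = 0; i < TEST_DEPTH; i++) { /* read random addresses */
          uint32_t a = (uint32_t)rand() % TEST_DEPTH;
          errors += (mem[a] != pattern(a));
      }

      printf("errors: %u\n", errors);
      free(mem);
      return errors != 0;
  }
  ```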

  17. PCI Express tests: Setup
  (Diagram: host motherboard with PC adapter ↔ fiber (up to 50 m) ↔ AMC113 (uTCA PCIe adapter) ↔ 2012 prototype)
  • Multiple PC adapters tested
  • Best results with the HIB35-x4, which is also the least expensive

  18. PCI Express performance
  • Theoretical PCIe performance (generation 2, single lane):
    • Link bitrate: 5 Gbps
    • After 8b/10b encoding: 4 Gbps
    • After PCIe packet overhead: ~3.56 Gbps
  • On top of that: software, the hardware overhead of the particular chipset on the host system, and the hardware overhead of the device itself
  • Our test design has very small overhead
  • Results of performance tests at UF (sustained performance, all overheads included):
    • 5 m fiber: reading 2.4 Gbps, writing 2.88 Gbps
    • 50 m fiber: reading 2.3 Gbps, writing 2.88 Gbps
  (A worked version of the overhead arithmetic follows below.)
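
  The overhead chain above can be reproduced with back-of-envelope arithmetic (illustrative C). The 128-byte payload and 16-byte per-packet overhead are assumptions chosen to show how a figure near 3.56 Gbps arises; real overhead depends on TLP header size, framing and flow-control traffic:

  ```c
  #include <stdio.h>

  /* Back-of-envelope reproduction of the Gen2 x1 PCIe numbers above. */
  int main(void)
  {
      double line_rate   = 5.0;                      /* Gbps, Gen2 x1 */
      double after_8b10b = line_rate * 8.0 / 10.0;   /* 4.0 Gbps      */

      double payload  = 128.0;  /* bytes per packet (assumed) */
      double overhead = 16.0;   /* bytes per packet (assumed) */
      double effective = after_8b10b * payload / (payload + overhead);

      printf("after 8b/10b:          %.2f Gbps\n", after_8b10b);
      printf("after packet overhead: %.2f Gbps\n", effective);  /* ~3.56 */
      return 0;
  }
  ```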

  19. Plans for Virtex-7 design
  • Core logic module FPGAs:
    • Core logic: XC7VX690T-FFG1927 or XC7VX550T-FFG1927
    • Control: XC7K70-FB676
  • Inputs:
    • 80 Virtex-7 GTH links (10 Gbps) directly to the Core FPGA: minimal latency, all available receivers designated for trigger data, maximum flexibility
    • 4 additional Kintex GTX links (6.4 Gbps) to the Control FPGA, delivered to the Core FPGA via a parallel channel (longer latency)
  • Outputs: 28 Virtex-7 GTH links (10 Gbps)
  • Control: PCI Express Gen 2, 2 lanes; IPbus
  • Status: PCB layout nearly done

  20. Core logic module FPGA interconnections
  (Diagram labels: Core FPGA (XC7VX690T or XC7VX550T) with the PT LUT connector, custom backplane connector, 80 inputs and 28 outputs at 10 Gbps, and DAQ TX; Control FPGA (XC7K70) with 4 inputs at 6.4 Gbps, PCI Express (2 lanes), IPbus, DAQ control (RX), and 4 TX to the AMC13 via the uTCA backplane connector; a parallel data exchange channel between the two FPGAs carries control, DAQ control, and 4 extra links)

  21. Conclusions
  • A Virtex-6 based prototype has been built and tested:
    • 53 inputs at up to 4.8 Gbps
    • 8 inputs at 10 Gbps
    • RLDRAM3-based Pt LUT (1 GB)
    • PCI Express
    • IPbus
  • The Virtex-7 base board prototype is in its final design phase

  22. Backup

  23. CSC trigger data sharing: Endcap and Overlap processors
  (Diagram: ME1 with subsectors 1 and 2, and ME2,3,4, with chambers numbered 1 to 9 and the Endcap TF and Overlap TF regions marked)
  • Multiple chambers have to be shared between the Endcap and Overlap TFs (shown in yellow in the figure)

  24. CSC trigger data sharing: Neighbor sector
  (Diagram: ME1 with subsectors 1 and 2, and ME2,3,4, with chambers numbered 1 to 9 and the neighbor-sector TF regions marked)
  • Several chambers are shared with the neighbor Sector Processor for better sector overlap coverage (shown in pink in the figure)
  • Most of the shared chambers are shared between 2 SPs; some chambers are shared between 4 SPs

  25. Optical components
  • Components from Fibertronics: MPO fanout, 4-way splitter, 2-way splitter

  26. Optical plant (one sector)
  (Diagram: 19" 1U rack-mount enclosure containing slack spools, splitters, LC-LC adapters, MTP fanouts, and MTP connectors for inputs and outputs; components not shown in full quantities)
