
PCIe based readout


Presentation Transcript


  1. PCIe based readout. U. Marconi, INFN Bologna. CERN, May 2013

  2. Electronics: basic assumptions
  • Data transfer from the front-end boards to the readout boards at 40 MHz: ~40 Tb/s over 12000 optical links, using 3.2 Gb/s GBT serializers.
  • Zero suppression performed at the front-end board.
  • Readout boards for buffering and data format conversion, from custom to industrial standard (LAN protocol).
  [Diagram: the LHCb readout board, an AMC40 on an ATCA carrier board, with 24 inputs at 3.2 Gb/s and 12 outputs at 10 Gb/s; AMC40 data throughput ~100 Gb/s; ~600 AMC40 boards in total.]
  DIS2013: The LHCb upgrade
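The slide's numbers can be checked with simple arithmetic. The sketch below (C, values taken from the slide; the computed board count of 500 is the bare minimum, so the quoted ~600 evidently includes margin) recomputes the aggregate and per-board rates:

```c
/* Back-of-envelope check of the slide's link budget (values from the slide). */
#include <stdio.h>

int main(void)
{
    const double gbt_gbps    = 3.2;   /* GBT serializer line rate */
    const int    n_links     = 12000; /* front-end -> readout optical links */
    const int    in_per_amc  = 24;    /* AMC40 inputs  */
    const int    out_per_amc = 12;    /* AMC40 outputs */
    const double out_gbps    = 10.0;  /* output link rate */

    printf("aggregate FE -> readout: %.1f Tb/s\n", n_links * gbt_gbps / 1e3); /* 38.4 */
    printf("per-AMC40 input:  %.1f Gb/s\n", in_per_amc * gbt_gbps);           /* 76.8 */
    printf("per-AMC40 output: %.1f Gb/s\n", out_per_amc * out_gbps);          /* 120  */
    printf("minimum AMC40 boards: %d\n", n_links / in_per_amc);               /* 500  */
    return 0;
}
```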

  3. DAQ Network
  Implementing the InfiniBand protocol on the AMC40 FPGAs is not an easy task, though … Why not try PCIe Gen3 instead? One would need just a suitable receiver card on the server … This changes the architecture, of course.

  4. PCIe Gen3 extension: Avago-PLX test setup. Not available for testing …

  5. PCIe-IB-ETH uniform cluster
  [Diagram: the event builder feeds the event filter over a high-speed network.]
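To make the event-builder role concrete, here is a minimal sketch (C, not from the talk; all names, the 12-source count, and the fragment layout are illustrative assumptions): each builder unit collects one fragment per readout unit for a given event number and releases the event once complete.

```c
/* Illustrative event-builder core: gather one fragment per source. */
#include <stdint.h>
#include <stdio.h>

#define N_SOURCES 12 /* hypothetical number of readout units per builder */

struct fragment {
    uint32_t event_id;       /* common event number */
    uint16_t source_id;      /* readout unit that sent the fragment */
    uint16_t size;           /* payload bytes actually used */
    uint8_t  payload[1024];  /* illustrative fixed-size buffer */
};

struct event {
    uint32_t event_id;
    uint32_t received;       /* bitmask of sources seen so far */
    struct fragment frags[N_SOURCES];
};

/* Store a fragment; return 1 once all sources have contributed. */
static int add_fragment(struct event *ev, const struct fragment *f)
{
    ev->frags[f->source_id] = *f;
    ev->received |= 1u << f->source_id;
    return ev->received == (1u << N_SOURCES) - 1;
}

int main(void)
{
    struct event ev = { .event_id = 42 };
    for (uint16_t s = 0; s < N_SOURCES; s++) {
        struct fragment f = { .event_id = 42, .source_id = s, .size = 0 };
        if (add_fragment(&ev, &f))
            printf("event %u complete\n", ev.event_id);
    }
    return 0;
}
```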

  6. PCIe Gen3 extension
  • PCIe Gen3 bandwidth: 12 × 8 = 96 Gb/s
  [Diagram, custom design: event fragments from the FEE enter the 24 AMC-40 inputs; the Stratix V PCIe3 hard IP blocks drive 2 × 12 optical fibres to a PCIe receiver card in the event-builder RU/BU unit, where a PLX PEX 8733 PCIe switch with DMA fans x4/x8 PCIe3 links out to the PCIe connectors.]
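The 96 Gb/s figure is the raw rate of 12 Gen3 lanes at 8 GT/s. A quick check (assuming the standard Gen3 128b/130b line encoding) shows the encoded payload ceiling sits just below it:

```c
/* PCIe Gen3 bandwidth over 12 lanes, raw vs. after 128b/130b encoding. */
#include <stdio.h>

int main(void)
{
    const double gtps   = 8.0;           /* Gen3 transfer rate per lane */
    const double encode = 128.0 / 130.0; /* 128b/130b encoding efficiency */
    const int    lanes  = 12;

    printf("raw:            %.1f Gb/s\n", lanes * gtps);          /* 96.0 */
    printf("after encoding: %.1f Gb/s\n", lanes * gtps * encode); /* 94.5 */
    return 0;
}
```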

  7. CPU-to-CPU connection through PCIe
  • The PXF51002 is a low-profile dual QSFP+ PCIe adapter that plugs into a x16 PCIe slot on the motherboard.

  8. PCIe x16 Gen3 Switch-based Cable Adapter • Under test at LAL

  9. PXF51002 based solution
  • PCIe Gen3 bandwidth: 12 × 8 = 96 Gb/s
  [Diagram: event fragments from the FEE enter the 24 AMC-40 inputs; the Stratix V x8 PCIe3 hard IP block feeds a PXF51002 adapter (PEX 8725 PCIe switch), whose QSFP+ ports drive 3 × 4 optical fibres to a second PXF51002 in the event-builder RU/BU unit, fanning x4 PCIe3 links out to the PCIe connectors.]

  10. One Stop Systems based solution
  • PCIe Gen3 bandwidth: 12 × 8 = 96 Gb/s
  [Diagram: event fragments from the FEE enter the 24 AMC-40 inputs; the Stratix V x16 PCIe3 hard IP block (12 of 16 lanes used) feeds a PEX 8733 based PCIe x16 Gen3 switch-based cable adapter, which carries the x16 link over optical fibres to the event-builder RU/BU unit's PCIe connectors.]

  11. Stratix V: number of PCIe hard IP blocks

  12. I/O performance of PC servers
  • Dual-socket server main-boards with four x16-lane slots and two x8-lane slots: the total theoretical I/O of a dual-socket system is 1280 Gb/s.
  • Test setup:
  • GTX 680 GPU (PCIe Gen3 x16)
  • 2 × Mellanox InfiniBand FDR adapters (PCIe Gen3 x8)
  • Results:
  • It is possible to transfer more than 100 Gb/s to/from the GPU.
  • Using InfiniBand, the PC can simultaneously transfer 2 × 56 Gb/s to/from the network over the two InfiniBand cards.
  http://lbonupgrade.cern.ch/wiki/index.php/I/O_performance_of_PC_servers
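The 1280 Gb/s figure follows from counting lanes and both directions of the full-duplex links (assuming 8 Gb/s raw per Gen3 lane):

```c
/* Theoretical I/O of a dual-socket board: 4 x16 + 2 x8 PCIe Gen3 slots. */
#include <stdio.h>

int main(void)
{
    const int    lanes = 4 * 16 + 2 * 8; /* 80 lanes in total */
    const double gbps  = 8.0;            /* raw Gen3 rate per lane */

    printf("one direction: %.0f Gb/s\n", lanes * gbps);     /* 640  */
    printf("full duplex:   %.0f Gb/s\n", 2 * lanes * gbps); /* 1280 */
    return 0;
}
```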

  13. Clock isolation
  • Typically, when employing optical fibre, the two ends of the link do not reside in the same enclosure, so they share neither the same reset nor the same system clock.
  • Because the interface is optical, there is a reduced need for EMI suppression on the link: keep the optical link in constant-frequency mode.
  • In a system that uses SSC clocking, the SSC must be completely disabled at the host. If disabling the SSC is not possible, a clock isolation adapter card is required to isolate the SSC clock: an appropriate PLX switch can provide SSC isolation.
  • PLX integrated spread spectrum clock (SSC) isolation provides the capability to isolate the clock domains of two systems. SSC isolation gives designers the flexibility to develop products with asynchronous clock sources, removing the need for a single clock source for all PCIe components in a system.
  • When the switch is enabled, its Port 0 operates in the spread-spectrum-clocking domain and the other ports operate in the constant-frequency-clock domain.
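SSC enablement itself is set in hardware and is not directly software-visible, but Linux does expose a related clocking hint: the Common Clock Configuration bit of the PCIe Link Control register, which records whether both ends of a link are assumed to share one reference clock. The sketch below is an aside, not from the talk; the device path is a placeholder and reading the full config space needs root.

```c
/* Read the Common Clock Configuration bit (Link Control, bit 6) from
 * PCIe config space via sysfs. Illustrative only; run as root. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Placeholder device: substitute the adapter actually under test. */
    const char *path = "/sys/bus/pci/devices/0000:01:00.0/config";
    uint8_t cfg[256];

    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return 1; }
    if (fread(cfg, 1, sizeof cfg, f) != sizeof cfg) {
        fprintf(stderr, "short read (run as root?)\n");
        return 1;
    }
    fclose(f);

    /* Walk the capability list (pointer at 0x34) to the PCI Express
     * capability (ID 0x10); bound the walk against malformed lists. */
    uint8_t pos = cfg[0x34];
    for (int i = 0; i < 48 && pos && cfg[pos] != 0x10; i++)
        pos = cfg[pos + 1];
    if (!pos || cfg[pos] != 0x10) {
        fprintf(stderr, "no PCIe capability found\n");
        return 1;
    }

    /* Link Control register lives at offset 0x10 in the capability. */
    uint16_t lnkctl = cfg[pos + 0x10] | (uint16_t)(cfg[pos + 0x11] << 8);
    printf("Common Clock Configuration: %s\n",
           (lnkctl & (1u << 6)) ? "set (common refclk assumed)"
                                : "clear (separate refclks)");
    return 0;
}
```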

  14. Summary
  • PCIe Gen3 appears to be a viable solution to inject data from the AMC40 into the EFF servers.
  • We are ready to start testing PCIe Gen3 CPU-to-CPU connections, relying on commercial PCIe cable adapters linked with optical fibres.
  • The next step is to replace one of the CPUs with a Stratix V FPGA.
