
The APV Emulator (APVE)




  1. The APV Emulator (APVE): what does the APVE do & why?
  • Task 1: the APVE protects against buffer overflow.
  • The APV25 has a 10-event buffer in de-convolution mode.
  • Readout of an event = 7 µs; triggers arrive as a Poisson process with mean period = 10 µs.
  • Finite buffer + random triggers => possibility of buffer overflow.
  • Overflow => dead Tracker (APV reset required).
  • Task 2: the APVE detects a group of APVs losing sync.
  • The FED detects a small portion of APVs within the Tracker losing sync.
  System Meeting: Greg Iles
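The overflow risk stated above can be made concrete with a small Monte Carlo sketch (mine, not from the slides): the APV25 buffer is modelled as a fixed-service queue with capacity 10, Poisson triggers with a 10 µs mean period, and a 7 µs readout per event. Function and parameter names are illustrative.

```python
# Hypothetical sketch: estimate how often a Poisson trigger stream overflows
# a 10-deep buffer that drains one event per fixed 7 us readout.
import random
from collections import deque

def overflow_probability(n_triggers=200_000, depth=10,
                         mean_period_us=10.0, readout_us=7.0, seed=1):
    random.seed(seed)
    t = 0.0                 # current time (us)
    departures = deque()    # readout-completion times of buffered events
    last_depart = 0.0
    overflows = 0
    for _ in range(n_triggers):
        t += random.expovariate(1.0 / mean_period_us)   # next trigger arrival
        while departures and departures[0] <= t:        # drain finished readouts
            departures.popleft()
        if len(departures) >= depth:
            overflows += 1                              # buffer full: overflow
        else:
            # Readout starts when the previous one finishes (or immediately).
            last_depart = max(last_depart, t) + readout_us
            departures.append(last_depart)
    return overflows / n_triggers
```

With a 70% average load the overflow fraction is small but non-zero, which is exactly why a throttle is needed: a single overflow deadens the Tracker until an APV reset.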

  2. What does the APVE do & why?
  • Primary task: prevent buffer overflow in the APVs.
  • It takes too long to send a 'buffers full' signal from the APVs in the Tracker to the Trigger Control System (TCS).
  • We therefore require an APV close to the TCS.
  • Secondary task: synchronisation check.
  • All APV data frames include the memory-cell (pipeline) address used to store the L1A data in the APV.
  • The pipeline address is sent to all FEDs to ensure that all APVs are in sync.
  [Diagram: the TCS ("Inhibit L1A?") sends Reset and L1A (min period = 75 ns) to the Tracker and the APVE, which returns Busy; an APV in de-convolution mode (buffers 1: Full, 2: Full, 3–10: Empty) emits a data frame (period = 7000 ns) carrying the pipeline address to the FED ("Data OK?").]

  3. How does the APVE work?
  • L1A throttle:
  • A counter keeps track of the number of filled APV buffers.
  • L1A => increments the counter.
  • Output frame => decrements the counter.
  • Reset => clears the counter.
  • The APVE must receive the same L1As and Resets as the APVs within the Tracker, or the system fails.
  • When the counter reaches preset values it asserts Warn, followed by Busy.
  • Synchronisation check:
  • The header on the APV data frame provides the pipeline address.
  [Diagram: Reset and L1A feed the APVE, which contains a real APV25; header recognition on the APV data frame generates the frame-output signal; the buffer counter counts up on L1A, down on frame output, and clears on Reset; an "assert busy?" comparison drives Busy, and the header supplies the pipeline address.]
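The counter logic above is simple enough to sketch directly. This is an illustrative Python model (class and method names are mine); the Warn/Busy thresholds of 5 and 8 triggers are taken from slide 9.

```python
# Sketch of the APVE buffer counter: up on L1A, down on each output frame,
# cleared on Reset; Warn then Busy asserted at preset fill levels.
class BufferCounter:
    def __init__(self, depth=10, warn_at=5, busy_at=8):
        self.depth, self.warn_at, self.busy_at = depth, warn_at, busy_at
        self.count = 0

    def l1a(self):
        if self.count < self.depth:
            self.count += 1          # an L1A consumes a buffer location

    def frame_out(self):
        if self.count > 0:
            self.count -= 1          # an output frame frees a buffer location

    def reset(self):
        self.count = 0               # a Reset empties the pipeline

    @property
    def warn(self):
        return self.count >= self.warn_at

    @property
    def busy(self):
        return self.count >= self.busy_at
```

Because the counter only mirrors the real APVs, it is correct only if the APVE sees exactly the same L1A and Reset stream as the Tracker, which is the constraint the slide emphasises.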

  4. Task 1: L1A throttle
  • Timing critical:
  • Set by the control loop from the L1A inhibit gate within the Global or Local TCS to the APVE and back again.
  • Ideally Busy needs to be asserted in under 75 ns.
  • Otherwise we lose an event-buffer location in the APV for every 75 ns of delay.
  • Loss of buffers increases dead time.
  • Require fast access to the GTCS/LTCS to receive L1A/Reset and send Fast Control signals.
  • The TCS prefers a single set of Fast Control signals from the Tracker.
  • Fast Merge Module (FMM) signals to go via the APVE.
  [Diagram: the L1A inhibit gate (inside the TCS) sends L1A & Reset to the Tracker and to the APVE, which answers "Buffers full?" with Busy.]
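The 75 ns budget translates into buffer arithmetic: since the minimum L1A period is 75 ns, each additional 75 ns of control-loop latency is one more trigger that can arrive before Busy takes effect, so one buffer location must be held in reserve. A back-of-envelope sketch (function names are mine):

```python
# Each full 75 ns of loop latency costs one reserved buffer location,
# because one more L1A (min period 75 ns) can slip in before Busy lands.
def reserved_buffers(loop_latency_ns, min_l1a_period_ns=75):
    return loop_latency_ns // min_l1a_period_ns

def usable_depth(depth=10, loop_latency_ns=75):
    """Buffer locations left for normal operation after the latency reserve."""
    return depth - reserved_buffers(loop_latency_ns)
```

So a 150 ns loop leaves only 8 of the 10 APV buffers usable, and every lost location pushes the Busy threshold down and the dead time up.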

  5. Deadtime

  6. Control structure
  • WARNING: the timing structure of the L1As and Resets received by the APVE and by the APVs within the Tracker must be identical.
  • How are Resets issued by the TCS transformed into '101' resets for the APV? The same question applies to '11' calibrates.
  • The APVE will NOT operate if the TTCvi is used as the source of Resets & L1As.
  • Applying timing constraints to the control structure:
  • At present we envisage that the APVE receives L1A and Reset from both the Global and Local TCS.
  [Diagram: the GTCS and LTCS feed the APVE and the TTCvi; Resets & L1As pass via the TTCex/TTCtx to the FEC and over the CCU ring (?) to the APVs; the Fast Merge Module (FMM) carries fast control; the APV pipeline address goes to the FED.]

  7. Current progress
  • Development system built and tested:
  • Fast Control functions (i.e. Busy, Warn and Out-Of-Sync).
  • Programmable (e.g. Busy asserted when 'X' many APV buffers remain empty).
  • History of signals recorded:
  • Busy, Warn, Out-Of-Sync (i.e. when asserted, when negated) for 'X' many occasions.
  • Total time asserted for Busy, Warn and Out-Of-Sync.
  • Interfaces such as VME, Wishbone and I2C.
  • TCS system implemented for testing.
  [Diagram: clock, trigger & reset inputs into the APV FPGA.]

  8. Reset
  [Timing diagram: a '101' reset and a '1' trigger on the pre-TCS T1, Reset and Trigger lines; the inhibit blocks the trigger; the post-TCS T1 shows the first tick after the APV reset; APV output, Busy ("busy asserted") and Warn ("warn asserted") traces follow.]
  • A time penalty is incurred in the FPGA to distinguish triggers ('1') from resets ('101') and calibrates ('11'), unless we receive the trigger and reset signals separately.
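The time penalty arises because, on a single line, a lone '1' cannot be classified until the following ticks are seen. A hedged Python sketch of such a decoder (the function is mine; the '1'/'11'/'101' encodings are from the slide):

```python
# Why the FPGA must wait: on one serial line a '1' is a trigger, '11' a
# calibrate, and '101' a reset, so classifying a '1' needs two more ticks.
def decode_t1(bits):
    """Classify a T1 bit stream into 'trigger' / 'calibrate' / 'reset' events."""
    events, i = [], 0
    while i < len(bits):
        if bits[i] == 0:
            i += 1
            continue
        # Saw a '1': look ahead two ticks before deciding (the time penalty).
        nxt = bits[i + 1] if i + 1 < len(bits) else 0
        nxt2 = bits[i + 2] if i + 2 < len(bits) else 0
        if nxt == 1:
            events.append('calibrate'); i += 2        # '11'
        elif nxt2 == 1:
            events.append('reset'); i += 3            # '101'
        else:
            events.append('trigger'); i += 1          # lone '1'
    return events
```

Receiving trigger and reset on separate signals removes the look-ahead entirely, which is why the slide prefers that option.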

  9. End of reset period
  [Timing diagram: '1' triggers on the pre-TCS T1, Reset and Trigger lines; the post-TCS T1 shows the APV ready to receive triggers; APV output, Busy and Warn traces follow.]
  • Busy is asserted after 8 triggers; Warn is asserted after 5 triggers.

  10. Buffer empties
  [Timing diagram: '1' triggers on the pre-TCS T1, Reset and Trigger lines and on the post-TCS T1; the APV output shows an APV frame with its digital header; Busy and Warn traces follow.]
  • Busy is negated when an APV buffer empties, providing space for another event to be stored. It is asserted once more after a further trigger is sent to the APV.

  11. Future: simulate the APV in real time
  • VHDL code to simulate the APV in real time has been written and tested in ModelSim (a VHDL logic-simulation package), but not yet downloaded into an FPGA.
  • An FPGA is sufficiently fast:
  • The APV pipeline logic, a possible obstacle to FPGA implementation, has been tested in a Xilinx Spartan-2.
  • ... and sufficiently large:
  • At a size of 200k gates the design is too big for our Spartan-2 (100k gates), but will fit in a Virtex-2.

  12. Task 2: Sync (global)
  • The FED detects individual APVs losing sync.
  • If more than 65 (?) APVs are out of sync, the FED can generate the wrong pipeline address => incorrect data to the DAQ.
  • This should happen very rarely.
  • The APVE detects a group of APVs losing sync.
  • How important is getting the pipeline address to the FED?

  13. Pipeline address via network
  • At present: check the data after it has been sent to the DAQ, in software, at a frequency of every few seconds.
  • Cons: requires lots of software, and the pipeline address travels a complex route to the FED.
  [Diagram: an APVE crate (CPU plus ~10 APVEs) feeds FED crates, each with a CPU and ~20 FEDs.]

  14. Pipeline address via serial link
  • Possibly: a direct (optical) serial link to each FED crate, with the pipeline address distributed to the FEDs via a single trace on the VME back-plane.
  • Cons:
  • Additional hardware which needs to be designed, built and tested.
  • Reliability & maintenance for the duration of the LHC.
  [Diagram: an APVE crate (CPU plus ~10 APVEs) with serial links to FED crates, each with a CPU, an APVP receiver and the FEDs.]

  15. Outstanding issues
  • Where do we get L1A and Reset from, if not the LTCS and GTCS?
  • If not the LTCS & GTCS, what is the time penalty of obtaining these signals further down the command chain?
  • Where does the merge of fast feedback signals take place, if at all?
  • The APVE needs to be the last stage in this process, or very near it, because it is timing critical.
  • What is the time penalty imposed by going through the FMM?
  • At present the APVE design assumes it sits after the FMM module and has 4 inputs (Ready/Error/Busy, Warn and Out-Of-Sync) that are OR'ed with the APVE fast feedback signals.
  • How do we get the pipeline address to the FEDs?
  • At present we intend to check the data after it has been sent to the DAQ, in software, at a frequency of every few seconds.
  • A serial link to each FED crate's VME bus is possibly a simpler, more robust option, but more awkward to implement.

  16. Conclusions
  • Need to finalise the location of the APVE in the command (Reset/L1A) and fast feedback (Busy/Warn etc.) control structure.
  • Many aspects, such as obtaining fast access to the TCS, have already been discussed with those responsible.
  • When the details are finalised we will be able to finish the board schematics, then manufacture and test the APVE.
  • Initially we envisage 4 VME boards, one for each partition, located in the Global Trigger rack.

  17. APVE I/O
  • Perhaps also: fibre-optic serial links to each FED crate to deliver the pipeline address (approx. 10 per APVE).
  [Diagram: the FPGA receives Reset/L1A from the Global and Local TCS and the LHC clock, exposes a VME A24/D16 interface, and drives Global TCS, Local TCS and FMM Fast Control outputs.]

  18. Alternative control structures
  [Diagram: two alternative control structures, each involving the GTCS, LTCS 0 (and, in the second, LTCS 1 with an L1A/Fast Control source select), the APVE, TTCvi, FMM, FEC, APVs and FED. Key: Reset & L1As; Fast control; Pipeline address.]
