FGT Readout/DAQ


  1. FGT Readout/DAQ: 2012 repairs, upgrades, work to complete (Gerard Visser)

  2. Broken hardware to be repaired
  • 1 of 24 cable/FPP sets is reported not working (Ben tested with the “golden FEE”). Likely a cable connector problem. We simply skipped it since we needed only 19.
  • 1 ARC module was swapped out on 2/7/12 to deal with problems; this resolved the issue, but I think it was actually a design (firmware) problem, subsequently resolved. So this module is expected to be OK to use and is currently in the clean hut.
  • 1 of 19 FEE groups in the run 12 FGT has definite hardware problems with the I2C line; this could be broken hardware in the FEE or a cable/connector problem.
  • 1 of 19 FEE groups (4AB) in the run 12 FGT shows crazy pedestals; the ARM was swapped with no change, so probably a broken interconnect board.
  • 1 APV (2BC-5) has a hardware problem with its analog output. Probably an FEE or cable/connector problem. The ARM board will be swapped to (re-)check.

  3. ARM status & production
  • Status summary:
    • 5 modules at IU; 2 of these need PS transformers installed
    • 1 module at ANL, needs a lot of final assembly work (will swap for one of the 5)
    • 2 modules in the clean hut
    • 10 modules in STAR
  • Front panels to be designed, fabricated, and installed
  • IST/GMT new ARM production:
    • Revise design regarding the patch wire and possible changes in the backplane clock interface
    • Fabricate about 45 modules (36 IST, 2 GMT, plus spares)
    • Work to start about Oct. 1st

  4. APV chip I2C VIL,MAX catastrophe
  A bit of a disconnect here...
  • I2C measurements on the IC APV test setup (Mark Raymond, m.raymond@ic.ac.uk, CMS Tracker Electronics): with the APV address 0100001 followed by the R/W bit (0 indicating a write cycle), the APV acknowledges its own address when the SDA low level is 0.25 V; at 0.35 V it fails to decode its own address and so does not generate an acknowledge. On that setup, I2C transactions fail if the SDA line is not pulled lower than ~0.35 V.
  • ARM as built (G. Visser, Fall 2010): assumed a typical VIL,MAX of about 0.7 V.

  5. New ARC Module
  • Increase the buffer to 1 GB. This will support a 256 kB (per RDO) event size with the full 4095 tokens under ‘decoupled busy’ (as we have in ETOW, ESMD, BTOW, BSMD, and currently in FGT/IST). See the arithmetic sketch after this list.
  • A 1 GB buffer is possible using an XC6SLX45 with 2x 4 Gb (8-bit) DDR3 attached. GV has made some preliminary design instantiating that; it looks OK. Parts are available. More than 1 GB is a higher level of complexity.
  • 256 kB per RDO allows up to 10 timebins of full FGT non-ZS data.
  • Add a TCD cable busy driver “just in case”. I think Tonko would like to see that included. I would like to continue with the ‘decoupled busy’ as in the calorimeters.
  • Register all backplane signals with a hard register and a clocking scheme copied from the ARM. This will allow maximum rate on the backplane and eliminate any source of variability in backplane timing. I think that such things have cost us (me, at least) significant headaches.
  • Fix all patch issues (well, most/all will be superseded by the new design with the new FPGA type anyway).
  • GV strongly prefers to eliminate the PIC and do all configuration through the SIU (as is done in BSMD); let’s just connect ethernet to the FPGA to keep that available for any diagnostics, etc.
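For concreteness, a minimal arithmetic check of the buffer sizing (plain C). The 256 kB per-RDO event size and the 4095-token figure are taken from the bullets above; everything else is just unit conversion:

```c
#include <stdio.h>

int main(void) {
    /* Figures from the slide: 256 kB per RDO per event, full 4095 tokens,
       with 'decoupled busy' so all tokens may be buffered at once. */
    const unsigned long event_bytes = 256UL * 1024UL;   /* 256 kB */
    const unsigned long tokens      = 4095UL;

    unsigned long long needed = (unsigned long long)event_bytes * tokens;
    printf("worst-case buffer: %.2f GB\n",
           needed / (1024.0 * 1024.0 * 1024.0));
    /* ~1.00 GB, i.e. what 2x 4 Gb (x8) DDR3 on the XC6SLX45 provides. */
    return 0;
}
```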

  6. ARM Firmware: Bugs / Issues
  • “Length hack”: Because buffered APV readout isn’t implemented (see #1 on the next slide), an APV which is to be skipped because it isn’t present/detected still generates 3 data words, and furthermore has the wrong length count in the header. The DAQ reader software has a workaround for this (which won’t work for the full IST, by the way). A sketch of the reader-side workaround follows this list.
  • There is an inconsistency in the resetting of event sequence numbers by the ARC, ARM main, and ARM FE FPGAs. I think it is the ARM main that needs to be revised. This is not important if resets come in the usual way, but it can/does cause some trouble in the test stand.
  • (Possible): Some trouble is possible, and needs to be checked, with changes of the APV clock timing. We are running (with the “Eun waveform fit”) at the default fine timing, so no problem for now, but this should be investigated and fixed if necessary.
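To illustrate the shape of the reader-side workaround for the “length hack”, here is a hedged sketch in C. The field names, word counts, and function are placeholders for illustration only, not the actual FGT data format or the STAR DAQ reader code; it only shows the idea of ignoring the (wrong) header length for absent APVs and stepping over their fixed-size stub instead:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical illustration of the reader-side "length hack" workaround.
 * Layout and word counts are placeholders, not the real FGT data format.  */
#define APV_STUB_WORDS 3   /* an absent/undetected APV still emits 3 words */

/* Returns the number of 32-bit words actually consumed for one APV block. */
static size_t consume_apv_block(const uint32_t *buf, size_t nwords,
                                unsigned header_len, int apv_present)
{
    (void)buf;   /* payload decoding omitted in this sketch */
    if (!apv_present) {
        /* Header length is wrong for skipped APVs, so ignore it and step
         * over the fixed-size stub instead (the documented workaround).   */
        return (nwords >= APV_STUB_WORDS) ? APV_STUB_WORDS : nwords;
    }
    /* Present APV: trust the header, but never run past the buffer.       */
    return (header_len <= nwords) ? header_len : nwords;
}
```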

  7. ARM Firmware: Incomplete features
    1. Buffered scratchpad readout of the APV. Currently the ADC data is written directly to the channel DAQ FIFO on the fly as it comes in. Need to write through a scratchpad memory. (This enables true length in the header, a null data record, and waveform ZS.)
    2. Serial control register interface from main → FE FPGA. In particular, software control of the number of timebins and the data format options.
    3. Zero suppression. Requires #1 and #2 above; also requires a threshold table & software to fill it in. (A minimal sketch of the basic idea follows this list.)
    4. Reorder the APV data in channel number sequence. (This was planned/proposed but not done; now I think maybe better not, or at least there is no pressing need to do it? Discuss...)
    5. FEE power supply monitoring.
    6. Background FEE temperature monitoring (including during runs). (Currently only monitored at configuration time.)
    7. Speed up the FE → main FPGA datapath from 66 to 100 MB/s.
    8. Speed up the backplane datapath from 132 to 200 MB/s or more (cf. the ARC discussion).
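For reference, a minimal sketch of the zero-suppression idea named in item 3: pedestal-subtracted ADC values compared against a per-channel threshold table. The channel/timebin counts, data types, and keep-the-whole-channel policy are illustrative assumptions; the real logic would live in the ARM FE FPGA firmware, not in C:

```c
#include <stddef.h>
#include <stdint.h>

/* Zero-suppression sketch: keep a channel only if some timebin rises above
 * pedestal + threshold. Counts and types are illustrative assumptions.    */
#define N_CH 128   /* channels per APV */
#define N_TB   7   /* timebins read out (illustrative) */

size_t zero_suppress(const uint16_t adc[N_CH][N_TB],
                     const uint16_t ped[N_CH],
                     const uint16_t thr[N_CH],   /* threshold table */
                     uint16_t kept_ch[N_CH])
{
    size_t nkept = 0;
    for (size_t ch = 0; ch < N_CH; ch++) {
        for (size_t tb = 0; tb < N_TB; tb++) {
            if (adc[ch][tb] > ped[ch] + thr[ch]) {
                kept_ch[nkept++] = (uint16_t)ch;
                break;   /* one timebin over threshold keeps the channel */
            }
        }
    }
    return nkept;   /* number of channels surviving suppression */
}
```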

  8. DAQ software: Incomplete features
  • [GV] Masks are needed at the FEE board level. Otherwise, temperature and configuration reads produce irrelevant errors.
  • [TL] The standard machinery of pedestal and rms calculation in pedestal runs, and of calculating ZS thresholds, needs to be implemented for FGT. This will certainly follow the existing model from other detectors, e.g. BSMD. I’m sure this will not require much time to finish. (A sketch of the calculation follows this list.)
  • [TL, GV] The ZS thresholds have to be pushed to the ARM modules.
  • [TL, GV] The DAQ reader needs updates to deal with FGT ZS data from hardware.
  • [GV] The DAQ reader and online DAQ need further header / data integrity checks.
  • [FGT team] Not really a DAQ topic, but highly relevant: we can and should see whether we are ready for ZS by mocking it up offline, now, with run 12 non-ZS data. I don’t think it would be a lot of work. Discuss...
  • [GV] Controls to be added for ‘less common’ things: change the APV signal polarity (for IST), change the number of timebins, ...
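As a reminder of what that standard machinery amounts to, here is a hedged C sketch of per-channel pedestal/rms accumulation from a pedestal run and a ZS threshold at ped + N_SIGMA × rms. The N_SIGMA value and the struct layout are assumptions for illustration, not the BSMD/STAR implementation:

```c
#include <math.h>

/* Pedestal machinery sketch: per-channel mean and RMS from a pedestal run,
 * then a ZS threshold at ped + N_SIGMA * rms. N_SIGMA is an assumption.   */
#define N_SIGMA 4.0

struct ped_acc { double sum, sum2; long n; };

static void ped_fill(struct ped_acc *a, double adc)
{
    a->sum += adc;
    a->sum2 += adc * adc;
    a->n++;
}

static void ped_finish(const struct ped_acc *a,
                       double *ped, double *rms, double *zs_thr)
{
    *ped = (a->n > 0) ? a->sum / a->n : 0.0;
    double var = (a->n > 0) ? a->sum2 / a->n - (*ped) * (*ped) : 0.0;
    *rms = (var > 0.0) ? sqrt(var) : 0.0;
    *zs_thr = *ped + N_SIGMA * (*rms);   /* value to be pushed to the ARM */
}
```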

  9. Clean room operations
  • How many cables (power & signal pairs) are needed?
  • How many patch panel boards? Can we use interim boards? How do we support the boards?
  • How do we provide a decent ground and, in general, ensure the FGT works as well in the cleanroom as in STAR (and vice versa)?
  • The readout system needs to support the cosmic ray test stand and 1 full disk at a time of FGT in the WSC? I think so. Not simultaneously? I hope not.
  • NO HARDWARE WILL BE REMOVED FROM STAR TO THE CLEANROOM EXCEPT FOR ARM & ARC MODULES. Cables, ABC, patch panels, and the crate STAY IN STAR.
  • Intersections with IST & GMT testing (beneficial and otherwise)? Discuss...
  • Do we need the 2nd FGT crate (now at ANL) here?
