CSCs in the Mid-Week Global Run (MWGR), 4 – 5 March 2009

1. CSCs in the Mid-Week Global Run (MWGR), 4 – 5 March 2009
• Wednesday spent replacing the Global Trigger crate
• Started global running Wednesday late afternoon
• Reached 99.99 kHz L1A rate
  • Including CSC, DT, HB
• FMM errors from CSC FED 754 → had to be masked out of the run
  • FIFO overflow from a CFEB problem on ME-1/1/7, combined with a specific error-handling deficiency in the DDU firmware
  • To be fixed with a DDU firmware upgrade…
• CSC Track Finder
  • DAQ processes not properly killed by CSC Run Control → put the “CSC” Function Manager into Error
  • Trigger Supervisor Cell crashed occasionally
    • A memory leak in the Track Finder trigger supervisor cell used up all memory on this machine. Under investigation…
G. Rakness (UCLA)

2. Online Measurements of L1A Latency from the CSC Point of View
Both measurements taken with tmb_lct_cable_delay = 2:

AFF-to-L1A counter bin   CRAFT 2008   MWGR 4 – 5 Mar 2009
127                      0            0
128                      0            0
129                      3            3
130                      11           13
131                      38           34
132                      46           46
133                      2            4
134                      0            0
135                      0            0
average                  131.33       131.35

→ L1A latency in the MWGR 4 – 5 March 2009 = L1A latency at the end of CRAFT 2008
https://cmsdaq.cern.ch/elog/CSC/7818
https://cmsdaq.cern.ch/elog/CSC/7062
G. Rakness (UCLA)
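
The quoted averages are simply the count-weighted mean bin of each distribution. A minimal Python sketch (non-zero bin contents transcribed from the table above) reproduces both numbers:

```python
# Count-weighted mean bin of the AFF-to-L1A counter distributions
# (non-zero bin contents taken from the table above).
craft = {129: 3, 130: 11, 131: 38, 132: 46, 133: 2}   # CRAFT 2008
mwgr  = {129: 3, 130: 13, 131: 34, 132: 46, 133: 4}   # MWGR 4-5 Mar 2009

def mean_bin(counts):
    """Return sum(bin * entries) / sum(entries)."""
    return sum(b * n for b, n in counts.items()) / sum(counts.values())

print(f"CRAFT average = {mean_bin(craft):.2f}")   # 131.33
print(f"MWGR  average = {mean_bin(mwgr):.2f}")    # 131.35
```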

3. HE readout with CSC trigger
• [Plots by Pawel de Barbaro comparing HE readout in CRAFT (Nov 2008) with the MWGR (4 – 5 Mar 2009)]
• Shift of 1 – 1.5 bx?
https://cmsdaq.cern.ch/elog/CSC/6952
https://cmsdaq.cern.ch/elog/Shift/4950
G. Rakness (UCLA)

4. Configuration Database Check
• Performed a (nearly) comprehensive check of the peripheral crate configuration database:
  • loading values to the database
  • extracting values from the database
G. Rakness (UCLA)

5. Peripheral Crate Configuration Procedure
1) Load software with configuration values from an xml file (or the database)
  • This is the function we need to check.
2) Write values from software onto the userPROMs
3) Hard Reset
  • The FPGA loads configuration values from the userPROM
4) Check Configuration
  • Compares expected values (in software) with values read back (from hardware)
Note: if step 2 is skipped and a different set of values is loaded at step 1 than what is on the userPROM, then step 4 will fail → this is how we check consistency (or not) from one set of parameters to another. (A minimal sketch of the check follows below.)
G. Rakness (UCLA)
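
To illustrate step 4: the check amounts to a field-by-field comparison between the expected values held in software and the values read back from the hardware. The sketch below is illustrative only; the function is invented, not the real peripheral-crate software API (tmb_lct_cable_delay is a parameter quoted earlier in this report, alct_drift_delay is a hypothetical stand-in):

```python
def check_configuration(expected, read_back):
    """Compare expected parameter values (from software) with the values
    read back from the hardware after a hard reset.  Returns the list of
    mismatches; an empty list means the configuration is consistent."""
    mismatches = []
    for name, want in expected.items():
        got = read_back.get(name)   # None if the parameter was never read
        if got != want:
            mismatches.append((name, want, got))
    return mismatches

# Example: one parameter differs -> the check reports exactly that field.
expected  = {"tmb_lct_cable_delay": 2, "alct_drift_delay": 2}
read_back = {"tmb_lct_cable_delay": 2, "alct_drift_delay": 3}
print(check_configuration(expected, read_back))
# -> [('alct_drift_delay', 2, 3)]
```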

6. Configurations used in the test
• We work our way through 3 different configurations:
  • MWGR = xml config used in the MWGR 4 – 5 March
  • DB = database config, Config ID = 2000001
    • Supposed to be identical to MWGR
  • Test = xml config
    • Differs from the MWGR xml config by one parameter per chamber → using this ensures that we are not doing a “dummy” check
G. Rakness (UCLA)

7. Steps
1) Load MWGR xml to the userPROM
2) Check Test xml vs. MWGR xml (on the userPROM)
  • Many differences expected → shook out some bugs in the configuration-check function
3) Load Test xml to the userPROM
4) Check DB vs. Test xml (on the userPROM)
  • Many differences expected → the inverse of the differences seen in step 2
5) Load the userPROM from the DB
6) Check MWGR xml vs. DB (on the userPROM)
Conclusion: it works. (A simulated outline of the sequence follows below.)
G. Rakness (UCLA)
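
A simulated outline of the six-step sequence, reusing the check_configuration() sketch from the previous slide. Plain dicts stand in for the userPROM contents, and the configurations are invented two-parameter placeholders (here the Test config differs by a single parameter in total, rather than one per chamber):

```python
# Simulated three-way cross-check: MWGR xml <-> Test xml <-> DB config.
mwgr_xml  = {"tmb_lct_cable_delay": 2, "alct_drift_delay": 2}
test_xml  = {"tmb_lct_cable_delay": 2, "alct_drift_delay": 3}  # one parameter changed
db_config = dict(mwgr_xml)                # DB is supposed to be identical to MWGR

userprom = dict(mwgr_xml)                           # step 1: load MWGR xml
assert check_configuration(test_xml, userprom)      # step 2: differences expected

userprom = dict(test_xml)                           # step 3: load Test xml
assert check_configuration(db_config, userprom)     # step 4: inverse differences expected

userprom = dict(db_config)                          # step 5: load userPROM from the DB
assert not check_configuration(mwgr_xml, userprom)  # step 6: no differences -> it works
```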

8. Caveats
• Number of parameters verified to be correct → 140
• Parameters not checked include:
  • Firmware versions for all components
    • I.e., we did not load incorrect versions to the hardware or the database
  • TMB::ccb_ignore_startstop
    • Bug in the original configuration check
G. Rakness (UCLA)

9. In other news, the RPC endcap folks have been working on their analysis…
• The following slide was presented today at the CRAFT analysis workshop in Torino by C. Carrillo (Uniandes)
• Agenda at: http://indico.cern.ch/conferenceDisplay.py?confId=50961
G. Rakness (UCLA)

10. CSC Segment Extrapolation Efficiency for the EndCap
http://webcms.ba.infn.it/~webrpc/efficiency/rereco/70195/indexEndCap.html
[Plots: efficiency vs. RPC pad bit for Run 67539, and RPC segment efficiency for Run 70195, stations RE+1/2 and RE+1/3]
C. Carrillo (Uniandes), G. Rakness (UCLA)

11. To do
• Take a local run
  • We have been ~off since the MWGR… this is not a good way to go
• Get the TMB – ALCT loopback software going here at CERN…
  • Self-frustration for not getting it going
• EMu meeting Sunday – Monday
  • Bora Akgun is giving the Commissioning talk… I have been coaching him…
G. Rakness (UCLA)
