
HMPID status and LS1 plans


Presentation Transcript


  1. HMPID status and LS1 plans A. Di Mauro 27/06/12

  2. 1) Review current detector issues:
• -- What are the current hardware problems that create data-taking inefficiencies and what is the planned follow-up?
• During period LHC12c (149 runs, 118 h of data taking with TPC) we had 6 CDH errors, the last one during run 182021 (June 5). Since then we have had more than 38 h of data taking without errors. We are inspecting the connections of the faulty DDLs. During LS1, if the detector is taken to the surface, faulty DDLs could be replaced if necessary.
• -- What are the rate limits of your detector and what are the detailed reasons?
• The typical dead time is 210 µs in pp, mainly related to the performance of the readout chain (DILOGIC).
• -- We have to make sure that we are on track for excellent performance of the ALICE detector in this year's p-A run:
• The predicted p-A luminosities range from 3x10^28 to 10^29 cm^-2 s^-1, which translates into interaction rates of 60 kHz to 200 kHz.
• Considering p-A multiplicities that are 3.5 times larger than in p-p, this corresponds to equivalent proton rates of 210 to 700 kHz (see the rate sketch below). This means that our detector has to cope with the current particle rates. How can your subdetector deal with these rates?
• We performed a test ramping up one module when V0tot was at 500 kHz; the total current went up to 50 nA (trip limit is 12 µA). During the solenoid trip, when V0tot reached 1.2 MHz, all detectors were on and we had no trips (the total current per module was 0.6 µA, see next slide).
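  A minimal sketch of the rate arithmetic quoted above. The inelastic p-Pb cross section of ~2 b is an assumption (not stated on the slide) chosen so that the quoted luminosities reproduce the quoted 60-200 kHz interaction rates; the 3.5 multiplicity factor is taken from the slide.

  # Rate arithmetic from slide 2 (minimal sketch).
  SIGMA_PPB_CM2 = 2.0e-24       # assumed inelastic p-Pb cross section (~2 b), not from the slide
  MULT_FACTOR = 3.5             # p-A vs p-p multiplicity factor, from the slide

  for lumi in (3e28, 1e29):     # predicted p-A luminosities in cm^-2 s^-1
      rate_hz = lumi * SIGMA_PPB_CM2            # interaction rate = L x sigma
      equiv_pp_hz = rate_hz * MULT_FACTOR       # equivalent proton rate
      print(f"L = {lumi:.0e}: {rate_hz/1e3:.0f} kHz p-A, "
            f"~{equiv_pp_hz/1e3:.0f} kHz equivalent p-p")
  # -> 60 kHz / 210 kHz and 200 kHz / 700 kHz, matching the numbers quoted above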

  3. RICH-0 currents on solenoid trip (16/06/12 @ 13:23)

  4. 2) Plans for LS1: What are the planned interventions and needs at P2?
• -- Review of the cooling system -- are upgrades and improvements necessary? NO
• -- Review of the gas system -- are there upgrades planned? NO. Can we reduce the gas consumption? It has already been reduced significantly, but we can check whether another 10-20% can be gained.
• -- Are there new cables to be pulled between the CRs and the cavern for your subdetector? YES, 14 cables for remote programming of FPGAs via JTAG.
• -- What kind of hardware interventions are planned? SEE NEXT SLIDES
• -- Is there any change or extension of hardware foreseen that would need: NO
   -- extension of the DCS (in particular more PCs)
   -- installation of additional network outlets
   -- additional rack space
• -- Are the currently implemented hardware interlocks sufficient to protect your subdetector, or is a review/modification/extension required? The present HW interlock scheme seems fine.

  5. Detector parts to be repaired/replaced
• 4 “broken” radiators:
• RICH3-Rad0
• RICH4-Rad0 and Rad1
• RICH6-Rad1
• 3 HV sectors at low voltage (the inefficiency covers an area larger than the 24 pads of an HV sector, since MIPs hitting adjacent sectors, within a ring radius or more from the faulty sector edges, will have fewer photons):
• RICH0-HV3 (~1/3 inefficiency)
• RICH4-HV0 (overlaps with Rad0)
• RICH5-HV1 (~1/3 inefficiency)
• Overall loss: (1.333 + 2/3)/7 ≈ 28.6% (see the worked computation below)
• The status of the CsI PCs will be checked by a summer student; until 2011 no or very limited ageing was observed
• 151 out of 161280 channels are masked, i.e. less than 0.1%, but some could be recovered by replacing the FEE card
• DDLs which give problems could also be replaced
• When opening chambers, O-rings might need to be replaced as well
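  A small worked version of the loss arithmetic above. The interpretation that each radiator counts as 1/3 of a module (3 radiators per module) is an assumption, chosen to be consistent with the (1.333 + 2/3)/7 expression on the slide.

  # Acceptance-loss arithmetic from slide 5 (minimal sketch).
  broken_radiators = 4 * (1 / 3)   # RICH3-Rad0, RICH4-Rad0, RICH4-Rad1, RICH6-Rad1
  low_hv_sectors   = 2 * (1 / 3)   # RICH0-HV3 and RICH5-HV1 (~1/3 each);
                                   # RICH4-HV0 overlaps with Rad0, so not double-counted
  n_modules = 7

  overall_loss = (broken_radiators + low_hv_sectors) / n_modules
  print(f"Overall loss: {overall_loss:.1%}")        # ~28.6 %

  masked = 151 / 161280
  print(f"Masked channels: {masked:.3%}")           # ~0.094 %, i.e. below 0.1 %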

  6. [Figure: map of the 7 HMPID modules (0-6) marking the leaking radiators and the faulty HV sectors]

  7. Can we install the yellow platform with Mini-Frame in place?

  8. stephane.maridor@cern.ch

  9. Collision between left-side extension and services on the MF stephane.maridor@cern.ch

  10. COLLISION ! stephane.maridor@cern.ch

  11. Remove left-side extension stephane.maridor@cern.ch

  12. Without the left-side extension there is still a small conflict with some pipes on the MF, to be checked and disconnected if necessary stephane.maridor@cern.ch

  13. Still a small conflict with some pipes on the MF, to be checked and disconnected if necessary stephane.maridor@cern.ch

  14. Estimation of timescale (for 5 modules): the intervention could last 10 months, from Dec 2012 to Sept 2013 (LS1 ends in Sept 2014). Not included: new CsI PCs (but that goes in parallel), production of new protective boxes for the CsI PCs, check/refurbishment of all radiator fittings in opened modules, a more complex radiator validation/testing procedure, ...

  15. Estimation of manpower
• The names are, in some cases, symbolic or ideal options; real availability has to be checked
• In summary, for technical support:
• Yannick (+ Mimmo?): 100% for 2 months, then 20% for 7 months, then 100% for 1 month
• Jaap and Pieter: 100% for 2 months, then 60% for 7 months, then 100% for 1 month
• Help for Cosimo in radiator production? Bari student/technician?
• Help for Paolo in FEE dismounting/mounting? Me, Giacomo?
