
ATLAS Operations 2008 Getting ready / First beam / combined running & Shutdown planning



Presentation Transcript


  1. ATLAS Operations 2008: Getting ready / First beam / combined running & Shutdown planning. T. Wengler, DESY seminar, 19 Jan 09

  2. The ATLAS collaboration: 37 countries, 169 institutes, 2500 authors

  3. The ATLAS Detector. Solenoid field: 2 T; toroid field: 4 T

  4. Inner Detector Barrel

  5. The LHC complex: the full LHC is needed for any beam to pass through ATLAS

  6. Milestone commissioning weeks: commissioning of the integrated system in steps M1-M6 through 2007-2008 (schedule chart). Each week extended the previous one, starting from the barrel calorimeters and barrel muon system, then adding the end-cap calorimeters, end-cap muon system, barrel SCT and TRT, the SCT read-out, L1 Mu/Calo, the Pixel read-out, and finally the SCT detectors.

  7. M6 highlights 1/3 (figures: reconstructed cosmic SCT-TRT-Muon event; phi difference between SCT+TRT track segments and the RPC)
• SCT barrel (~1/2 of the SCT), stable cooling throughout M6
• TRT: 25% of barrel A, 10% of barrel C, top + bottom sections
• RPC sectors 7/8, TGC 3 sectors/side, MDT barrel sectors 1-12, Big Wheels, 650 chambers

  8. M6 highlights 2/3 (figures: L1Calo trigger seen in Tile; L1Calo trigger seen in LAr; EMB pulse shape). Extensive testing of the calorimeters and L1Calo during M6 and after, including trigger timing studies.

  9. M6 highlights 3/3 (figures: all TRT reconstructed tracks; TRT tracks triggered by the High Level Trigger). Analysis of one M6 cosmics commissioning run, March 2008:
• The trigger requires TRT tracks reconstructed online within the pixel volume (equivalent to d0 ≤ 25 cm)
• Triggered events (red) end up in one stream file, non-triggered events (blue) in another: this proves that trigger and streaming are working (a sketch of this selection and streaming logic follows below)
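The selection-and-streaming logic described on this slide can be illustrated with a minimal sketch. This is not ATLAS HLT/TDAQ code: only the d0 ≤ 25 cm requirement and the idea of routing triggered and non-triggered events into two different stream files come from the slide; the class, function and stream names below are hypothetical.

```python
# Illustrative sketch of the trigger + streaming logic described on the slide.
# Not ATLAS HLT/TDAQ code: the d0 <= 25 cm cut and the two-stream idea come
# from the slide; all names here are made up for illustration.

from dataclasses import dataclass
from typing import List

D0_MAX_CM = 25.0  # "within the pixel volume" requirement quoted on the slide


@dataclass
class TrtTrack:
    d0_cm: float  # transverse impact parameter of the online-reconstructed track


def passes_cosmic_trigger(tracks: List[TrtTrack]) -> bool:
    """Accept the event if any online TRT track points into the pixel volume."""
    return any(abs(t.d0_cm) <= D0_MAX_CM for t in tracks)


def stream_event(event_id: int, tracks: List[TrtTrack], triggered_file, other_file) -> None:
    """Write triggered events to one stream file and non-triggered events to another."""
    target = triggered_file if passes_cosmic_trigger(tracks) else other_file
    target.write(f"{event_id}\n")


if __name__ == "__main__":
    import io

    triggered, non_triggered = io.StringIO(), io.StringIO()
    # Two toy events: one with a track inside the pixel volume, one without.
    stream_event(1, [TrtTrack(d0_cm=8.0)], triggered, non_triggered)
    stream_event(2, [TrtTrack(d0_cm=60.0)], triggered, non_triggered)
    print("triggered stream:", triggered.getvalue().split())
    print("other stream:", non_triggered.getvalue().split())
```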

  10. Schedule: May-June 10

  11. Milestone commissioning weeks (continued): commissioning of the integrated system with M7 and M8 running the full system except the CSC, ahead of first beam in 2008; a possible M9 as a combined cosmics run with the full system in 2009.

  12. Magnet system: another important milestone. The magnets and their cryogenics, powering and dump systems have been fully commissioned to nominal field strength.

  13. Closure of the LHC beam pipe: the final piece. The LHC beam pipe ring was closed on 16th June 2008; ATLAS was ready for data taking in August 2008.

  14. ACR (ATLAS Control Room) for 2008 running, desk by desk:
• ID Gen: two screens grouped into an ID general access station (evaporative cooling etc.), no separate shifter
• Shift Leader: Shift Leader tasks, LHC communications, DCS central operation
• SLIMOS: CIC, safety, access, CCC/TC communication
• Level-1: L1Calo, L1Muon, CTP hardware & read-out, overall timing
• Trigger: configuration, performance
• Run Control: main RunControl
• DAQ/HLT: DAQ/Dataflow, HLT/DAQ farms
• Data Quality: central DQ, Online/Offline/Tier-0
• Detector desks: TRT, SCT, Pixel, Tile, LAr (two shifters with separate access), Muon detectors (TGC, RPC, MDT, CSC)
• Fwd Detectors: Lucid, ZDC (ALFA …), luminosity
• Exp. Supervisor: chair + laptop

  15. Going into beam operation, Sep 10th. ATLAS was ready for first beam:
• Muon system (MDT, RPC, TGC) on at reduced HV
• LAr on (except FCAL HV), Tile on
• TRT on, SCT at reduced HV, Pixel off
• BCM, LUCID, minimum-bias scintillators (MBTS), beam pickups (BPTX)
• L1 trigger processor and DAQ up and running, HLT available (but used for streaming only)
LHC start-up scenario (LHC beam-loss monitors and tertiary collimators at 140 m, BPTX at 175 m from the interaction point):
• Go step-by-step, stopping the beam on collimators and re-aligning with the centre
• Open the collimator, keep going
• The last collimators before beam passes through ATLAS are the tertiary collimators (protection of the triplets)
• A splash event comes from these collimators for each beam shot with the collimator closed

  16. Strategy for first beam / first signals:
• Splash events come first: only ~10 shots expected, not enough time to set up the BPTX
• Rely on small-radius triggers with well-defined cosmics timing (L1Calo + MBTS) to catch the first events!
• Then time in the beam pickups (BPTX, our timing reference; a rough flight-time estimate is sketched after this list) rapidly, to trigger on through-going beam
• First signal seen in ATLAS: multi-turns! RF-captured beam triggered on the BPTX for > 20 min on day 2!
• Worked out nicely, with very stable data taking
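For scale, the BPTX pickups sit about 175 m from the interaction point (see the previous slide), so with the bunches travelling essentially at the speed of light the pickup signal precedes the bunch arrival at ATLAS by a fixed, known delay of roughly 0.6 μs. The minimal sketch below just works out that number; the distance is from the slide, the rest is textbook arithmetic.

```python
# Rough timing arithmetic for the BPTX beam pickups (~175 m from the IP,
# see slide 15): with the beam at essentially the speed of light, the pickup
# signal leads the bunch arrival at ATLAS by a fixed delay, which is why the
# BPTX makes a convenient timing reference.

C_M_PER_S = 299_792_458.0   # speed of light
BPTX_DISTANCE_M = 175.0     # distance quoted on slide 15
BC_NS = 25.0                # LHC bunch-crossing period

delay_ns = BPTX_DISTANCE_M / C_M_PER_S * 1e9
print(f"BPTX -> IP flight time: {delay_ns:.0f} ns  (~{delay_ns / BC_NS:.1f} BC)")
```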

  17. The first events – did we get them? “First beam event seen in ATLAS” (not quite!). Zoom into the first beam ‘splash’ activity with beam 1 on a closed collimator, recorded in run 87764. Of 11 beam ‘splashes’ (solid blue), 9 were triggered (ATLAS events are dashed black, the event numbers are indicated).

  18. Timing of the trigger inputs: an intense programme of timing adjustments of the trigger inputs, with only about 30 hours of (any kind of) beam in 2008 … 1 BC = 25 ns (see the sketch of the unit conversion below).
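Since 1 BC = 25 ns, a measured timing offset of a trigger input is naturally expressed as an integer bunch-crossing delay plus a residual fine phase. The sketch below is a generic illustration of that bookkeeping, not the actual ATLAS timing procedure or tooling; the example offsets are made up.

```python
# Generic illustration of expressing a trigger-input timing offset in
# bunch-crossing (BC) units, 1 BC = 25 ns. Not the actual ATLAS timing tools.

BC_NS = 25.0  # LHC bunch-crossing period in nanoseconds


def offset_in_bc(offset_ns: float) -> tuple[int, float]:
    """Split an offset in ns into (integer BC delay, remaining fine phase in ns)."""
    bc_delay = round(offset_ns / BC_NS)
    fine_phase_ns = offset_ns - bc_delay * BC_NS
    return bc_delay, fine_phase_ns


if __name__ == "__main__":
    for offset in (3.0, 27.5, -60.0):  # example measured offsets (made up)
        bc, phase = offset_in_bc(offset)
        print(f"offset {offset:+6.1f} ns -> {bc:+d} BC and {phase:+5.1f} ns fine phase")
```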

  19. The first has been delivered to the CCC. Delivered!

  20. LHC incident on Sept. 19th (see the R. Aymar slides attached below for full details):
• Powering tests in sector 3-4 for 5.5 TeV operation
• A resistive zone developed in the bus between dipole C24 and quadrupole Q24
• In < 1 s the power converter tripped and the energy dump started; systems reacted as expected up to this point
• Electrical arc in the bus punctured the helium enclosure
• Helium released into the insulation vacuum of the cryostat; damage to the super-insulation of some magnets
• Pressure rise too fast for the valves; pressure wave in the insulation vacuum
• Safety valves started venting helium into the tunnel
• Displacements where the wave hit the vacuum barriers
• Both beam pipes cut open and contaminated
• About 60 magnets to be removed for repairs etc.
• No more beam in 2008

  21. Q27R3 R. Aymar, PECFA, 28 Nov 08

  22. QQBI.27R3 R. Aymar, PECFA, 28 Nov 08

  23. Magnet movements Sector 3-4 R. Aymar, PECFA, 28 Nov 08

  24. LHC incident on Sept. 19th, follow-up (see the R. Aymar slides attached below for full details):
• Prevention of similar incidents: search for indicators of faulty connections in the commissioning data; retests of all suspicious locations; improvements to the Quench Protection System
• Mitigation of consequences in case of similar incidents: increase the number and discharge capacity of the relief valves; reinforce the external anchoring of the cryostats at the locations of the vacuum barriers

  25. Shutdown intervention schedule ATLAS reacted with a revised running and shutdown plan:

  26. Running 2008 (timeline, weeks 41-51, Oct-Dec): 24/7 combined running with solenoid + toroid, later solenoid only, until the TGC n-pentane supply is stopped and the opening of the detector starts; after that, sub-system (group) commissioning.
• Until 27 Oct: full shift coverage as agreed for this period, daily 9:30 run meeting
• Afterwards: weekly run meetings only; SLIMOS + SL(RC); TDAQ/sys-admin on-call; effort from sub-systems as needed for the ongoing activity; RPC sectors 12-14 available for combined runs

  27. Run priorities until Oct 27th:
1. Collect enough data with the small wheels to check the efficiencies of chambers potentially damaged by overpressure, before the n-pentane supply stops. Implications: record all TGC-triggered events; enable full TGC trigger coverage for some time (partial blocking of the trigger with good timing for the ID end-caps) ✔
2. Collect large samples of good ID tracks with the solenoid on/off. Implications: a high recording rate of the L1 trigger = small event size (HLT not fully efficient for these conditions) ✔
3. Collect large samples of good muon spectrometer tracks with the toroid field off/on/off. Implications: same as 2. ✔
4. Run the combined system for cross-detector studies and stability: all (available) systems in read-out ✔

  28. Data overview

  29. High rate running:
• Re-visited at the end of the combined period
• Stability (i.e. people not touching essential parts of their DAQ and FE) had started to erode somewhat …
• Still managed to do several high-rate runs, with random triggers at L1, filtered at the HLT
• Today’s evidence suggests we can run combined with about 40 kHz L1A
• No indication of bottlenecks downstream at nominal rates
• Also tested stop-less recovery (automatic disabling of blocked read-out links): switched on without problems, but no systematic testing (provoked failures) during combined running

  30. After the end of the combined run:
• Moved immediately into system commissioning
• ID running 24/7 shifts for flat-out commissioning work into December
• Frequent L1Calo/Calo runs with and without the CTP, including 2 weeks of overnight/weekend runs from Nov 3rd
• RPC trigger tests and threshold scans
• …
Major events still in 2008:
• ID combined run (RPC trigger, TRT fast-OR) 26th Nov – 1st Dec
• New major TDAQ release at the beginning of December (testing by sub-detectors starts before the end of 2008)
Test programme:
• Bare-bones run-transition timing (all monitoring etc. turned off)
• Check the situation on the number of messages in MRS

  31. Collection of cosmic-ray data:
• The full cosmic rate (dominated by the RPC) at L1 is ~500 Hz
• The nominal ATLAS recording rate is 200 Hz @ 1.5 MB/event
• For cosmics with the full system, events are 3-13 MB, dominated by the number of LAr samples read out → we need the HLT for cosmics! (see the bandwidth estimate sketched below)
• Example (figure): rate of Pixel tracks, showing the full L1 rate recorded (no LAr), the new TRT fast-OR at L1, and the HLT cosmics-filtering learning curve
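The conclusion that the HLT is needed for cosmics follows from simple bandwidth arithmetic with the numbers quoted above (500 Hz L1 cosmic rate, 200 Hz @ 1.5 MB/event nominal budget, 3-13 MB cosmic events); the short sketch below just works through it.

```python
# Back-of-the-envelope bandwidth check using the numbers quoted on the slide.
# Nominal recording budget: 200 Hz * 1.5 MB/event = 300 MB/s; cosmic events
# with the full system are 3-13 MB, so recording the full ~500 Hz L1 cosmic
# rate would exceed the budget by a large factor -> HLT filtering is needed.

NOMINAL_RATE_HZ = 200.0
NOMINAL_EVENT_MB = 1.5
L1_COSMIC_RATE_HZ = 500.0

budget_mb_per_s = NOMINAL_RATE_HZ * NOMINAL_EVENT_MB  # 300 MB/s

for event_size_mb in (3.0, 13.0):
    raw_mb_per_s = L1_COSMIC_RATE_HZ * event_size_mb
    max_rate_hz = budget_mb_per_s / event_size_mb
    print(f"{event_size_mb:>4.0f} MB/event: raw L1 throughput {raw_mb_per_s:6.0f} MB/s "
          f"({raw_mb_per_s / budget_mb_per_s:4.1f}x the budget); "
          f"within budget only up to ~{max_rate_hz:3.0f} Hz")
```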

  32. Pixel alignment An example of using cosmic-ray data for detector alignment

  33. The TRT as … a bubble chamber and a fixed-target detector: a cosmic event with field on (occupancy ~1%) vs. a beam splash event (occupancies up to 30%)

  34. Transition radiation in the TRT: transition-radiation (TR) photons are generated at the radiator foils (a boundary between two materials with different dielectric constants). The effect sets in at γ = E/m ≈ 10³ ➞ mostly useful for electron ID. In the TRT, the TR photons are absorbed in the chamber gas ➞ large pulse ➞ passes the high threshold. The turn-on of TR from cosmics is observed at about γ = 10³, as expected (see the arithmetic sketched below).
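The γ ≈ 10³ threshold translates directly into the energies at which TR switches on for different particle species, which is why it is mainly an electron-ID handle; a minimal sketch of that arithmetic follows (the particle masses are standard PDG values, not taken from the slide).

```python
# Energy at which transition radiation turns on (gamma = E/m ~ 1e3),
# using standard particle masses; illustrates why TR is mainly an
# electron-ID handle at typical LHC track momenta.

GAMMA_THRESHOLD = 1.0e3

MASSES_GEV = {          # approximate PDG masses in GeV
    "electron": 0.000511,
    "muon": 0.1057,
    "pion": 0.1396,
}

for name, mass in MASSES_GEV.items():
    e_threshold = GAMMA_THRESHOLD * mass
    print(f"{name:>8s}: TR sets in around E ~ {e_threshold:6.1f} GeV")
```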

  35. Muon spectrometer: an “X-ray” of the ATLAS cavern taken with cosmic muons (elevators and access shafts are visible). Very good correlation between RPC (trigger chamber) and MDT (precision chamber) hits.
• Optical alignment system: end-cap at 40 μm ➞ o.k.; barrel at 250 μm ➞ in progress
• Dead channels < 1%

  36. Calorimetry (figures): Tile cell uniformity measured with a horizontal beam; pedestal stability in a LAr EM layer over a 5-month period (at the 1 MeV level); L1Calo trigger tower (x) vs. full calorimeter readout (y), the spread will decrease with full calibration

  37. How much of ATLAS is operational?
• Inner Detector: the critical item was the evaporative cooling plant (for Pixel/SCT) after its failure on May 1st 2008 (repair finished end of July 2008); about 2.5% dead channels due to cooling/module problems, and a similar amount of dead channels in the TRT
• Calorimeters: for LAr ~0.01% dead channels in the EM calorimeter and ~0.1% in the HEC, plus an LV power supply affecting 1/4 of one endcap (under repair); no dead channels in the FCAL; about 1% dead channels in the Tile calorimeter
• Muon spectrometer: low noise and a low number of dead channels; MDT ~1.5%, CSC 0.5%, TGC 0.03%, RPC 6% (commissioning still ongoing)
• Magnets: toroids and solenoid fully operational
• Trigger and DAQ: all L1 trigger inputs and the central L1 trigger fully operational; HLT farm and algorithms, and the DAQ system, fully operational (will be scaled up according to need)
In all cases we expect to recover a significant fraction of the already small number of missing channels during the shutdown.

  38. Conclusions:
• We had a very successful start-up: we caught the first beam events and managed to go through part of the single-beam commissioning programme on days 1+2 with beam
• We are starting to be able to operate and steer ATLAS as one system: on-the-fly L1 pre-scale changes and read-out recovery are extremely useful
• Only the CSC is now missing from regular combined data taking: the CSC chambers are up and running, the read-out ROD issues are being addressed, and first tracks have been seen in combined runs
• Pixel was off on day one for safety reasons, but has seen many cosmic tracks in combined running since
• Timing needs to be completed: as much as possible with test pulses & cosmics over the next months, but beam is needed to finalise it; we will be in good shape for collisions
• The operation model is working: the control room is working and the operational structure is sound
• Main work for operations: bring down overheads (and bring up efficiency) for run starts/stops, recovery from power cuts, DQ tools / turn-around, …

  39. Additional Slides

  40. Inner Detector

  41. Calorimeter system

  42. Muon system

  43. Beam RF capture: ‘mountain range’ plot, each line = one bunch orbit, showing correct phasing and correct reference (LHC RF group)

  44. Beam splash events in LAr and Tile: energy in LAr EM layer 2; average cell energy in Tile

  45. LHC incident on 19th Sep 2008. Incident during powering: The magnet circuits in the seven other sectors of the LHC had been fully commissioned to their nominal currents (corresponding to a beam energy of 5.5 TeV) before the first beam injection on 10 September 2008. For the main dipole circuit, this meant powering in stages up to a current of 9.3 kA. The dipole circuit of sector 3-4, the last one to be commissioned, had only been powered to 7 kA prior to 10 September 2008. After the successful injection and circulation of the first beams at 0.45 TeV, commissioning of this sector up to the 5.5 TeV beam energy level was resumed as planned and according to established procedures. On the morning of 19 September 2008, the current was being ramped up to 9.3 kA in the main dipole circuit at the nominal rate of 10 A/s when, at a value of 8.7 kA, a resistive zone developed in the electrical bus in the region between dipole C24 and quadrupole Q24. No resistive voltage appeared on the dipoles of the circuit, so that a quench of any magnet can be excluded as the initial event. In less than 1 s, when the resistive voltage had grown to 1 V and the power converter, unable to maintain the current ramp, had tripped off, the energy discharge switch opened, inserting dump resistors in the circuit to produce a fast power abort. In this sequence of events, the quench detection, power converter and energy discharge systems behaved as expected. R. Aymar, PECFA, 28 Nov 08

  46. Summary Report on the analysis of the 19th September 2008 incident at the LHC R. Aymar, PECFA, 28 Nov 08 Sequence of events and consequences Within the first second, an electrical arc developed and punctured the helium enclosure, leading to release of helium into the insulation vacuum of the cryostat. The spring-loaded relief discs on the vacuum enclosure opened when the pressure exceeded atmospheric, thus relieving the helium to the tunnel. They were however unable to contain the pressure rise below the nominal 0.15 MPa absolute in the vacuum enclosures of subsector 23-25, thus resulting in large pressure forces acting on the vacuum barriers separating neighboring subsectors, which most probably damaged them. These forces displaced dipoles in the subsectors affected from their cold internal supports, and knocked the Short Straight Section cryostats housing the quadrupoles and vacuum barriers from their external support jacks at positions Q23, Q27 and Q31, in some locations breaking their anchors in the concrete floor of the tunnel. The displacement of the Short Straight Section cryostats also damaged the “jumper” connections to the cryogenic distribution line, but without rupture of the transverse vacuum barriers equipping these jumper connections, so that the insulation vacuum in the cryogenic line did not degrade.

  47. Summary Report on the analysis of the 19th September 2008 incident at the LHC. R. Aymar, PECFA, 28 Nov 08. Inspection and diagnostics: The number of magnets to be repaired is at most 5 quadrupoles (in Short Straight Sections) and 24 dipoles, but more (42 dipoles and 15 quadrupoles) will have to be removed from the tunnel for cleaning and exchange of the multilayer insulation. Spare magnets and spare components are available in adequate types and sufficient quantities to allow replacement of the damaged ones. The extent of contamination of the beam vacuum pipes is not yet fully mapped, but is known to be limited; in-situ cleaning is being considered to keep the number of magnets to be removed to a minimum. The plan for removal/reinstallation, transport and repair of magnets in sector 3-4 is being established and integrated with the maintenance and consolidation work to be performed during the winter shutdown. It should be available for the next Council meeting in December. The corresponding manpower resources have been secured.

  48. Summary Report on the analysis of the 19th September 2008 incident at the LHC. R. Aymar, PECFA, 28 Nov 08. Follow-up actions (preliminary):
• Two different goals: to prevent any other occurrence of this type of initial event, and to mitigate its consequences should it nevertheless happen again. Precursors of the incident in sector 3-4 are being scrutinized in the electrical and calorimetric data recorded on all sectors, which remain cold, in order to spot any other problem of the same nature in the machine.
• An improvement of the quench detection system is currently being tested, before being implemented.
• The relief devices on the cryostat vacuum vessels will be increased in discharge capacity and in number.
• The external anchoring of the cryostats at the locations of the vacuum barriers will be reinforced to guarantee mechanical stability.
• Until now, no other interconnection resistance has been identified as above specification, but two (?) connections inside the cold masses (which have been tested successfully to 9 T) have been measured to be higher than specified.
