
Summary of the mini Workshop on ATLAS Counting Room (ATCR)




1. Summary of the mini Workshop on ATLAS Counting Room (ATCR) - Beniamino Di Girolamo, CERN PH/ATD - ATLAS Week Plenary Meeting, 26 February 2004

2. Mini Workshop Program (Part I): Experience and future; ATLAS-LHC interfaces

3. Mini Workshop Program (Part II): Experience and discussions from WG; ATLAS systems requirements

4. Summary
• Summarizing has been difficult: many slides and lively discussions
• Going back to the original material is suggested for more detailed answers to specific questions
• All slides are available at http://agenda.cern.ch/fullAgenda.php?ida=a04353

5. Goal (M. Nessi)
• Start discussions inside ATLAS on the detector operating model
• Survey the experience gained so far inside and outside HEP in this domain
• Collect first requirements and needs
• Define a "control room" ATLAS project, from commissioning to steady running of the experiment

6. Experience talks: J. Proudfoot on D0 & CDF
• Experience and hints are useful because their needs are very close to our future needs:
• Access system based on microchip-equipped badges
• Shift crew: two different approaches
  • D0: based on sub-detector partitions
  • CDF: not based on sub-detector partitions
• In both cases, a geographical division of station functions with a shift captain in the middle
• Binary decision making where possible
• Monitoring of data with extremely simple automatic flagging of the quality, refined manually as soon as possible; results are logged in a database
• It takes 5 minutes to start a run and the same to stop one; a run is stopped only when the record-size limit on disk is reached
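The simple automatic data-quality flagging with later manual refinement and database logging described above can be sketched as follows. This is a minimal illustration assuming hypothetical histogram-summary fields, thresholds and an SQLite table; it is not the actual D0 or CDF monitoring code.

```python
import sqlite3
import time

# Binary (GOOD/BAD) automatic flagging of monitoring data, refined manually
# later, with every result logged to a database. All names and thresholds
# here are invented for illustration.

def auto_flag(histogram_summary):
    """Very simple binary decision: GOOD unless an obvious problem is seen."""
    if histogram_summary["entries"] < 1000:        # too little data to judge
        return "BAD"
    if histogram_summary["dead_channels"] > 50:    # clearly broken readout
        return "BAD"
    return "GOOD"

def log_flag(db, run, subsystem, flag, source):
    db.execute(
        "INSERT INTO dq_flags (run, subsystem, flag, source, timestamp) "
        "VALUES (?, ?, ?, ?, ?)",
        (run, subsystem, flag, source, time.time()),
    )
    db.commit()

db = sqlite3.connect("dq_flags.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS dq_flags "
    "(run INTEGER, subsystem TEXT, flag TEXT, source TEXT, timestamp REAL)"
)

summary = {"entries": 250000, "dead_channels": 3}   # e.g. from online monitoring
log_flag(db, run=1234, subsystem="calorimeter",
         flag=auto_flag(summary), source="auto")

# A shifter or expert can later refine the automatic flag manually:
log_flag(db, run=1234, subsystem="calorimeter", flag="GOOD", source="manual")
```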

7. It works efficiently: nothing special, just keep future upgrades in mind and leave plenty of extra space. [Photos: D0 detector monitoring stations, arranged by detector subsystem; D0 data acquisition control and monitoring station] (J. Proudfoot)

8. Revamped between Run 1 and Run 2
• CDF "Slow Controls" station: low & high voltage, cooling, beam losses
• [Photo: CDF data acquisition system control and monitoring station]
(J. Proudfoot)

9. Additional hints: different choices
• D0 shift crew: captain, DAQ, calorimeter/muon, central tracker, global monitor, mechanical support
• CDF shift crew: SciCo, DAQ ACE, Monitoring ACE, CO
  • Plus 1 operations manager (either in the control room or on call)
  • Plus 1 offline shifter on the day shift
  • Shift crew focus is to take data, not to solve specific problems of sub-detectors
• CDF choice: people brought into the game by choosing among the best people available, not always tied to institute responsibilities; no volunteers; 95% resident at Fermilab
(J. Proudfoot)

10. Recipes (CDF)
• ACE: 12 weeks, half time
• Scientific Coordinator (SciCo): three 8-day shifts per year
• Consumer Operator (CO): one 8-day shift, maybe every 2 years
• Everyone sends in availability for 6 months to a year, and someone (DeeDee Hahn) works out the schedule
• Training: everyone gets safety training (rad worker, LOTO, controlled and supervised access) before coming on shift; a CDF training officer (DeeDee Hahn) gives the training, though it can also be taken through Fermilab ES&H
• One day of overlap shift for the CO and SciCo, so they get on-the-job training
• ACEs have 2 weeks of overlap, plus 2 half days of classroom training
• There is a large amount of web-based training material
(J. Proudfoot)

11. Remote access
• No possibility to do remote shifts
• Remote checks and monitoring by experts
• A lot of material on the web
• Critical systems not available for remote login
Efficiency
• 85% efficiency of the operations, struggling to go up to 91%
(J. Proudfoot)

12. Experience from ESO: A. Wallander
• Model of operations:
  • Local visitor: classical highly interactive operation
  • Remote operations: negative experience, stopped
  • Support service operations:
    • All observations fully specified in advance (months) and stored in a queue
    • Execution done by professionals (staff astronomer and operator) with minimum human interaction

13. Commissioning and operation experience (ESO)
• Commissioning of a new telescope site at Paranal
• Commissioning plan with well-defined tasks
• Strong team on site (temporary relocation, missions)
• Day-to-day decisions on site (fast decision making)
• Strict access control (no remote engineering)
• IMPORTANT: strict policy on standards
  • All PCs are from firm X and run OS Y
  • Everybody becomes an expert because everybody uses the same material
(A. Wallander)

14. Paranal Observatory aerial view; La Silla [photos] (A. Wallander)

15. Operation (A. Wallander)

16. A different remote control experience: far remote control of accelerators - F. Willeke
• Completely different strategy from ESO
• Targeted at the Linear Collider
• From past experience: an LC will be continuously in commissioning
  • It is also a way to keep attention high
• Far remote control strategy:
  • Not to save money, but to keep expertise in various places, not only at a central site
  • Could follow time-zone switches to change shifts
• Sociological aspects: not discussed, but under careful analysis

17. Collaboration models: a site lab instead of a host lab [diagram comparing the HERA/LHC model (host laboratory, partner labs and institutes, in-kind contributions, project) with the GAN model (site laboratory, partner labs, in-kind contributions, special responsibilities, project)] (F. Willeke)

18. Experience from HERA, LEP, SLC, ...
• Maintenance, troubleshooting and repair: essentially "REMOTE FACILITIES":
  • Problems diagnosed remotely before intervention
  • Interventions by non-experts successful in 90% of the cases
  • Expert help via telephone or via remote access suffices
  • Unscheduled presence of experts on site is an exception
• Commonality with ESO: very reliable equipment
  • If an on-site intervention by the remote expert is needed, it may take a week
  • Therefore careful MTBF analysis and spares policy
(F. Willeke)

19. CERN Control Centre - D. Manglunki
• Integrate (NOT aggregate) the functions of the MCR, PCR, QCR, TCR, ... into ONE CERN Control Centre: the CCC

20. System requirements
• Standardised consoles for AB, AT, TS:
  • The system allows any operation from anywhere
• Reconfigurable room
• Fixed displays / CATV
• Access systems: presently 4 different ones, some hardwired
• Fast analog signal observation and processing (FFT, BTF, ...)
• Administration PC
• Telephone
• Intercom
(D. Manglunki)

21. Current building extension plan: a 625 m2 control room at ground level with a 5.6 m ceiling height
• 40 console modules, including 4 access systems
• 40 fixed displays
• Reconfigurable working space
• Easy access
• Comfortable light, acoustics, and temperature
• Outside view; combine visibility and privacy
• Operators' services (kitchen, meeting room, rest room, toilets, showers, lockers, ...)
• [Floor-plan labels: reception, telecom, servers, ventilation, visitors' balcony, repair lab, meeting room, videoconference/remote MD, temporary offices (staged)]
(D. Manglunki)

22. Relations with the LHC
• N. Ellis: signal exchange
  • These signals can be made available in the ATCR
  • Issue: the policy of possible actions on sub-detectors based on information from the machine
  • Mutual machine <-> experiment interlock
• B. Chauchaix:
  • Overview of the safety system
  • Implications for ATLAS

23. ATLAS to machine - illustration (N. Ellis)

24. Controlled and interlocked areas (system overview)
• Access control takes place upon entering or leaving zones:
  • Non-interlocked areas
  • Service zones
  • Beam zones

25. ATLAS specifics - implementation at Point 1 (ATLAS)
• Monitored access - simple card reader (person ID):
  • Site entrance
  • ATLAS control room - SCX1
  • Entrance of SDX1
  • Computer barrack inside SDX1
• PAD at surface level - shaft PX15:
  • All people entering must be identified with ID + biometrics
  • No safety token needed
  • Log and display in the ATCR the IDs of the people who entered
  • Date and time of entry and access duration
  • Number of people present (a maximum of ~100 people)
• Personnel access to UX15 (cavern) via a Personnel Access Device (PAD) at ULX15 & UPX16
• Material access to UX15 via a Material Access Device at ULX15
• Tracing radioactive material - an INB obligation:
  • At the ULX15 & UPX16 access points
  • At PX14 and PX16 (when opened)
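As an illustration of the PAD bookkeeping described above (log the ID, the date and time of entry, the access duration, and the number of people present against the ~100-person limit, for display in the ATCR), here is a minimal sketch. It is purely hypothetical and not the actual ATLAS access-control system.

```python
import time

MAX_OCCUPANCY = 100          # "a maximum of ~100 people" in the cavern

class AccessLog:
    """Track who is underground, since when, and for how long."""

    def __init__(self):
        self.inside = {}      # person_id -> entry timestamp
        self.history = []     # (person_id, entry_time, duration_in_seconds)

    def enter(self, person_id):
        # ID + biometric identification is assumed to have succeeded already.
        if len(self.inside) >= MAX_OCCUPANCY:
            raise RuntimeError("occupancy limit reached")
        self.inside[person_id] = time.time()

    def leave(self, person_id):
        entry = self.inside.pop(person_id)
        self.history.append((person_id, entry, time.time() - entry))

    def occupancy(self):
        return len(self.inside)

log = AccessLog()
log.enter("badge-0042")
print("people currently underground:", log.occupancy())
log.leave("badge-0042")
print("last access duration [s]:", round(log.history[-1][2], 1))
```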

26. Subdetector operational model
• Functions to be done by the shift crew (non-experts):
  • Normal data-taking operation; monitoring of DAQ/DCS, radiation levels, ...
  • Monitor DCS warnings and errors
  • Monitor MINBIAS rates
  • Monitor calibration triggers
  • Monitor basic histograms
  • Call LAr experts
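A hypothetical sketch of the non-expert shift checks listed above: poll a few quantities, compare them against simple limits, and call the expert on call when something is out of range. The data sources, limits and rate window are invented for illustration.

```python
EXPECTED_MINBIAS_RATE_HZ = (10e3, 100e3)    # illustrative rate window

def check_shift_status(dcs_warnings, minbias_rate_hz, calib_triggers_ok, histograms_ok):
    """Return a list of problems to report to the expert on call."""
    problems = []
    if dcs_warnings:
        problems.append("DCS warnings/errors: " + ", ".join(dcs_warnings))
    low, high = EXPECTED_MINBIAS_RATE_HZ
    if not low <= minbias_rate_hz <= high:
        problems.append(f"MINBIAS rate out of range: {minbias_rate_hz:.0f} Hz")
    if not calib_triggers_ok:
        problems.append("calibration triggers missing")
    if not histograms_ok:
        problems.append("basic histograms look abnormal")
    return problems

problems = check_shift_status([], minbias_rate_hz=55e3,
                              calib_triggers_ok=True, histograms_ok=True)
if problems:
    print("Call the experts on call:", "; ".join(problems))
else:
    print("All shift checks OK")
```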

27. Subdetector operational model
• Functions to be done by subdetector people locally at IP1 / the experimental area:
  • In normal data-taking: monitoring as above, but in more detail
  • In local calibration running / dedicated studies (hardware or software): control the ID from the control room or US(A)15, depending on the tasks being performed
  • Maintain USA15 electronics - Tilecal
  • Maintain DAQ code (ROD crates) - Tilecal
  • LAr:
    • (Initially, all actions are here at the ATLAS pit area)
    • Detailed monitoring of histograms, DCS, etc. for each LAr sub-detector (EMB, EMEC, HEC, FCAL)
    • Detailed status and checks of the FEC and BEC electronics systems
    • Need the ability to control local runs (pulsing, calibration) as part of these checks and diagnostics
    • Local repairs in USA15 and the EMF (electronics maintenance facility)

28. Subdetector operational model
• LVL1:
  • Quiet work area
  • Stations for preparing trigger configurations, etc.
  • Stations for in-depth (offline) prompt analysis of trigger performance, efficiencies, backgrounds, etc.
  • Local stations in USA15
  • Also some tables, chairs, etc.
  • Lab space: work area, full test rig (or elsewhere on site?)
  • Space for the use of laptops; wireless networking
  • Storage cupboards for spares, tools, etc.

29. Subdetector operational model
• Functions to be done by subdetector people from their CERN offices:
  • Monitoring and offline tasks not impacting the detector hardware
  • Don't expect to 'take control' from offices at CERN (ID)
  • Calibration coefficient calculations (Tilecal)
  • Database updates
  • Monitor Tilecal performance
  • Physics analysis
  • LAr:
    • (This is at a later stage, during more stable running)
    • Monitor subsystem performance for typical events/triggers
    • Inform the shift crew if something is abnormal
    • Expert control possible

30. Subdetector operational model
• Functions to be done by subdetector people at home institutes:
  • Monitoring and offline tasks not impacting the detector hardware (ID, LAr and Tilecal)
  • Note: off-site but on-call experts will be very important - they communicate with the local team at IP1 (ID)
  • Same as those at CERN offices
  • Monitor Tilecal global performance
  • Physics analysis

31. Subdetector needs at IP1
• In the UX15 cavern:
  • ID:
    • Not defined in detail, but (wireless?) network access for laptops at the UX15 platforms and close to patch panels PP2 and PP3 will be needed
    • Already (and even more importantly) during the commissioning phase, and in shutdowns
  • Tilecal:
    • Cabinet(s) for storage, working place (table)
    • Visual alarms (needed for Cs scans; part of the DSS?)
  • LAr:
    • Access to front-end crates (FEC) during standard (long) access times
    • Scaffolding must be provided
    • Tools from LAr experts
    • Access to ELMBs in the cryo platform area during short accesses

32. Subdetector needs at IP1
• In the US15 and USA15 areas:
  • ID:
    • Again, not defined in detail; PC/network access (also in gas and cooling areas)
    • Local DAQ running from terminals in the rack areas
    • Throughout the experiment lifetime, but especially during commissioning and initial running
    • ID has around 50 racks to commission and keep running - no small task
    • Mobile phone coverage in all underground areas
  • Tilecal:
    • Electronics test items: scopes, ...
    • Space to work
    • Monitors and keyboards in/near racks
    • Cabinet(s) for storage (tools, cables, power supplies, ...), books, documentation, etc.
  • LAr:
    • Nothing in US15
    • USA15:
      • Special permanent cupboards for LAr-specific equipment, documentation, tooling, ...
      • Carts to move heavy equipment

33. Subdetector needs at IP1
• On the surface, outside the control room:
  • ID:
    • Each ID system will need workplaces, preferably close together, bearing in mind combined studies and the need for communication between systems
    • 2-4 workplaces with 2-4 PCs each - certainly more than one
    • Three ID subdetectors, barrel + endcaps
    • All will want to calibrate/test/develop at the same time when there is no beam
  • Tilecal:
    • Electronics room
    • Mechanical workshop
    • Analysis area with PCs (including the Tilecal DCS control station), general work area
    • Meeting room(s)
    • Cabinets for storage
  • LAr:
    • No LAr-specific work area is needed if all requested workstation places in the ATCR are provided
    • If not possible, then equivalent LAr-specific places for workstations are needed
    • A few small meeting/discussion rooms with whiteboards for detailed technical discussions

34. Control room functionality
• General-purpose equipment:
  • Coffee machine, fridge, small kitchen with water, office supplies nearby, telephone, fax machine and paid telephones for outgoing calls nearby, video conferencing nearby, printers, whiteboards, bottled water, vending machines (nearby), coat rack
• Permanent displays:
  • Audible effects: warnings, end of run, ...
  • Magnet systems, general cooling and gas status
  • Detector status and alarms, primary services (water, electricity, gas)
  • Beam conditions and radiation levels
  • LVL1, HLT, data-logging parameters
  • Event display
  • Webcam/video of different parts of the detector in the UX cavern and of the electronics in USA15
  • Note: will need to duplicate (some of) this information for local running
  • Need to be able to launch additional local displays if work is not being operated from the control room
  • LAr cryogenics status

35. Control room functionality
• Tilecal: workstations for
  • Single-event display
  • Histograms
  • Analysis of data
  • General purpose for the shift crew (e.g. mail)
  • Run control
  • HLT development
• LAr:
  • One workstation for each sub-detector for detailed data-quality tests, event display, and the ability to drive local pulsing/calibration runs, ...
    • EM barrel
    • EM end-cap
    • Hadronic end-cap
    • Forward calorimeter
  • One for detailed FEC/BEC electronics monitoring
  • One for each critical DCS system:
    • LAr DCS SCS + temperature + purity displays
    • LAr HV displays, monitoring and control
    • LAr FEC LV and PS monitoring and control

36. Separated functionality for shift work [layout sketch: control area for the shift, detector terminals, additional terminal room, meeting table with computer screen, visitors' area] (G. Mikenberg)

37. Separate control and safety functions
• Safety functions should be on permanent display in a part of the control room and be constantly supervised.
• The safety elements include power, magnet, cryogenics, cooling and gas, as well as gas and fire alarms.
• DAQ, detector power and histograms should be controllable and displayed at various terminals in the control room.
(G. Mikenberg)

38. Experts on call should be able to perform work via the network
• The best way to solve a problem is to have it handled by an expert, not by the shift crew.
• Once a problem is found, it should be reported to the expert on call.
• The experts should be able to find the problem via the network.
• A secure access system with a firewall should be available for controlling sub-detectors from outside.
(G. Mikenberg)

39. HLT requirements

40. HLT requirements

41. Prompt reco
• The goal is to minimize the latency between events being available at the output of the Event Filter and being available for physics analysis
• Propose that prompt reconstruction be operated as an extension of the TDAQ/HLT system:
  • Operators in the control room, and office space nearby
  • Good communication with the primary operators
  • Rapid feedback of problems, in both directions
• Hardware requirements in the control room are not large:
  • ~2 workstations
  • Multiple slave displays

42. BE hierarchy functions
• Sub-detector operation (USA15 or surface):
  • Full operation of the detector
  • Summary status of the detector
  • Archiving of summary information
  • Coordination and synchronization of services or sections of the detector
  • Verification of commands
  • Logging of commands
  • Execution of automatic procedures or actions
  • Receive commands from the DAQ
  • Export data to the DAQ
  • Send messages to the DAQ
  • Connect to services in the layer above
• Subsystem operation (USA15 or US15):
  • Hardware monitoring and control
  • Read/write data from/to the front-end
  • Calculations (calibration, conversions, etc.)
  • Triggering of automatic actions (incl. feedback)
  • Archiving of raw data into the PVSS DB
• [Hierarchy diagram labels: Global operation - LHC, ATLAS, DAQ, CERN, Magnet, DSS; SCS, CIC, Pixel, Tilecal, MDT, ...; LCS - EB-, B-, B+, EB+, Cooling, LV, HV, standalone (SAlone)]
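The back-end hierarchy above (commands flowing down from global operation to the sub-detector and subsystem levels, with status and logging coming back up) can be illustrated with a generic control-tree sketch. This is not the PVSS-based ATLAS DCS implementation; node names and states are illustrative only.

```python
class ControlNode:
    """One node of a hierarchical control tree (global / sub-detector / subsystem)."""

    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.state = "READY"
        self.command_log = []                  # commands are verified and logged

    def send_command(self, command):
        self.command_log.append(command)
        for child in self.children:            # propagate to the layer below
            child.send_command(command)
        self.state = command

    def summary_status(self):
        """Summary of this node and everything below it (worst case wins)."""
        states = [self.state] + [child.summary_status() for child in self.children]
        return "ERROR" if "ERROR" in states else self.state

tilecal = ControlNode("Tilecal", [ControlNode("EB-"), ControlNode("B-"),
                                  ControlNode("B+"), ControlNode("EB+")])
atlas = ControlNode("ATLAS", [tilecal, ControlNode("Pixel")])

atlas.send_command("START")
print(atlas.summary_status())                  # -> START (no errors below)
```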

43. Operator tools (1b/7): status display [screenshot]

44. Services (1/3)
• Detector Safety System (DSS):
  • Highly reliable system for detector equipment safety
  • PLC-based front-end, independent from the DCS
  • Back-end implemented with the same tools as the DCS
  • Graphical interface in the control room
• Underground access control:
  • List of people in the different ATLAS zones
  • Retrieved from the access-grant system or from the Find People in ATLAS (FPiA) system
(F. Varela)

45. Services (2/3)
• Web server:
  • The DCS is on a private network
  • Publishes information that can be checked via the web
  • Allows for a limited set of actions
• Remote access to the DCS:
  • Regulates access to the DCS via remote login
  • Authentication will be provided
  • Access granted by the shift operator (?)
  • Sessions will be logged
  • Allowed actions to be decided
(F. Varela)
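A hypothetical sketch of the remote-access policy outlined above: the expert authenticates, the shift operator explicitly grants the session, only a limited set of actions is allowed, and everything is logged. The actual authentication mechanism and the list of allowed actions were still to be decided at the time of the workshop.

```python
import time

ALLOWED_ACTIONS = {"read_status", "read_histograms", "acknowledge_alarm"}  # illustrative

class RemoteDcsSession:
    def __init__(self, expert, operator_approved):
        if not operator_approved:
            raise PermissionError("shift operator has not granted access")
        self.expert = expert
        self.log = []                         # every session action is logged

    def run(self, action):
        if action not in ALLOWED_ACTIONS:
            self.log.append((time.time(), self.expert, action, "DENIED"))
            raise PermissionError(f"action '{action}' not allowed remotely")
        self.log.append((time.time(), self.expert, action, "OK"))
        return f"{action} executed"

session = RemoteDcsSession("lar.expert@cern.ch", operator_approved=True)
print(session.run("read_status"))
```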

46. Services (3/3)
• Databases:
  • Where will the configuration and conditions DB servers be?
• GSM:
  • Alarms in the system will be reported to the expert via SMS
• Reporting system:
  • Produces statistics of the incidents that occurred in a given time interval
• Miscellaneous: web browser, e-mail, etc.
(F. Varela)
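The GSM/SMS alarm reporting and the incident statistics mentioned above could look roughly like the following sketch. The phone numbers and the send_sms() gateway are placeholders, not the real ATLAS services.

```python
import time
from collections import Counter

ON_CALL = {"LAr": "+41760000001", "Tilecal": "+41760000002"}   # hypothetical numbers
incidents = []                                  # (timestamp, subsystem, message)

def send_sms(number, text):
    print(f"SMS to {number}: {text}")           # stand-in for a real SMS gateway

def report_alarm(subsystem, message):
    """Log the incident and notify the on-call expert of that subsystem."""
    incidents.append((time.time(), subsystem, message))
    send_sms(ON_CALL[subsystem], f"[{subsystem}] {message}")

def incident_statistics(since):
    """Count incidents per subsystem over the requested time interval."""
    return Counter(subsys for t, subsys, _ in incidents if t >= since)

report_alarm("LAr", "HV trip on EMB module 12")
report_alarm("Tilecal", "cooling temperature high")
print(incident_statistics(since=time.time() - 24 * 3600))
```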

47. Summarizing
• Will we adopt the CDF or the D0 model?
• Many requests for space for tools and documentation in USA15
• Requests for the possibility of working easily in USA15
  • Chairs and small tables near the racks
• The number of workstations is going to infinity
  • Not everybody will work at the same time
  • Functions will be configurable on workstations
  • Remote monitoring in nearby barracks
  • Otherwise... remember: 40 consoles in 625 m2 ...
• The function of the control room will evolve with time:
  • Commissioning
  • Sub-detector debugging with the first events
  • "1st year of beam" ~ 3 years?
  • Stable operations
• Remote access for monitoring from the very beginning, to help debugging
  • Building the concept into the design to allow further uses
• Other points will be summarized in the next two talks by Marzio and Ilias
