The CMS RPC Detector Control System: First operational experiences. Giovanni Polese, Lappeenranta University of Technology, on behalf of the CMS RPC Collaboration. CHEP'09, 17th International Conference on Computing in High Energy and Nuclear Physics, 21 - 27 March 2009, Prague, Czech Republic.
Part of the CMS muon system, the RPC detector provides fast time resolution (a few ns) and good spatial resolution (of the order of a cm), which assure robustness and redundancy for the muon trigger.
The RPC in CMS
6 layers of RPCs are embedded in the barrel iron yoke closely following the DT segmentation.
The forward region is instrumented with four layers of RPCs covering up to |η| = 2.1.
A total of 480 + 432 RPC chambers at startup.
RPC Power system
Every RPC chamber is equipped with 2 independent HV channels (one per layer, up to 12 kV) and 2 LV channels for the front-end boards. In addition, 4 LV channels per sector are needed to supply the very front-end part of the trigger chain.
The system is composed of 912 HV + 1580 LV channels, controlled by 4 different computers to balance the CPU load.
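The channel-to-computer distribution can be sketched as a simple balanced partition. This is an illustrative assumption (names such as `partition_channels` and the round-robin policy are hypothetical), not the actual DCS configuration:

```python
# Hypothetical sketch: spreading the 912 HV + 1580 LV channels
# across 4 control PCs to balance the OPC-server CPU load.
from itertools import cycle

def partition_channels(channels, n_hosts=4):
    """Round-robin assignment of channel names to control hosts."""
    hosts = {f"pc{i}": [] for i in range(1, n_hosts + 1)}
    for host, ch in zip(cycle(hosts), channels):
        hosts[host].append(ch)
    return hosts

hv = [f"HV{i:04d}" for i in range(912)]
lv = [f"LV{i:04d}" for i in range(1580)]
layout = partition_channels(hv + lv)
# 2492 channels in total, 623 per control PC
```

With 2492 channels over 4 hosts, each host ends up with 623 channels; in practice the real assignment also follows the detector geography, not just the count.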
Power System Software challenge
≈20k parameters, corresponding to a raw data rate of about 100 MB/hour.
Communication with the hardware is based on the OPC server developed by CAEN, following an event-driven approach in which the most significant parameters are handled with a 2 s refresh time.
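The event-driven idea can be sketched as follows. This is a minimal illustration (the `Parameter` class and deadband logic are assumptions, not the CAEN OPC API): a value is published only when it changes significantly between refresh ticks, rather than on every poll.

```python
# Sketch of event-driven parameter handling (hypothetical interface):
# a monitored value is published only when it moves beyond a deadband,
# checked on each 2 s refresh cycle.
class Parameter:
    def __init__(self, name, deadband=0.0):
        self.name = name
        self.deadband = deadband
        self.last_published = None

    def on_refresh(self, value, publish):
        """Called every refresh tick; publishes only significant changes."""
        if (self.last_published is None
                or abs(value - self.last_published) > self.deadband):
            self.last_published = value
            publish(self.name, value)

events = []
vmon = Parameter("vmon", deadband=5.0)    # monitored HV, deadband in volts
for reading in (9500.0, 9502.0, 9510.0):  # middle reading is within deadband
    vmon.on_refresh(reading, lambda n, v: events.append((n, v)))
# events -> [("vmon", 9500.0), ("vmon", 9510.0)]
```

Filtering at the source like this is what keeps ≈20k parameters down to a manageable archived data rate.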
Hot startup time of the entire system (OFF -> ON) is about 470 s, depending mainly on the detector operation mode requests (i.e. ramp-up settings).
Max load per OPC server in our configuration: 480 channels.
Max parameters set simultaneously per channel during operation: 3.
All monitored temperatures are below 22 °C (maximum accepted value: 24 °C).
The gas quality and the mixture composition are of primary importance in the operation of the RPC system.
The gas system is controlled by a CERN centralized system, the LHC GCS project, which acquires data from the Programmable Logic Controllers (PLCs) and supervises them with a dedicated control system.
Main information, such as gas flow and gas mixture composition, is acquired by the RPC DCS via the DIP protocol and correlated online with other operational parameters, assuring a general overview of the detector status and helping to optimize its behavior.
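The acquire-and-correlate step can be sketched with a callback-style subscriber. The class, topic names, and thresholds below are hypothetical placeholders (the real system uses the DIP bindings of the SCADA framework), but they show the pattern: cache the latest gas publications, then cross-check them against other operational readings.

```python
# Hypothetical sketch of correlating gas data (received via DIP-style
# callbacks) with an HV current reading; names and thresholds are invented.
class GasMonitor:
    def __init__(self):
        self.latest = {}

    def on_dip_update(self, topic, value):
        """Publication callback: cache the newest gas parameter."""
        self.latest[topic] = value

    def correlate(self, hv_current_uA):
        """Flag an anomalous HV current when the gas flow looks nominal."""
        flow = self.latest.get("gas/flow", 0.0)
        if flow > 0.9 and hv_current_uA > 20.0:
            return "WARNING"   # high current not explained by low gas flow
        return "OK"

gm = GasMonitor()
gm.on_dip_update("gas/flow", 1.0)
status = gm.correlate(hv_current_uA=25.0)  # -> "WARNING"
```

Correlating the two data sources online is what turns a bare current alarm into actionable information about the chamber conditions.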
All the auxiliary systems under RPC DCS control are handled centrally by a unique RPC Supervisor application, which summarizes and correlates the status of the entire detector and publishes it to the central DCS.
Based on the same FSM logic, the central DCS sends commands to, and reads back alarms and messages from, the RPC DCS, publishing the RPC status condition to the CMS Run Control.
To synchronize operation with the RPC data acquisition, a direct connection to the RPC Run Control is foreseen, allowing the RPC system to be operated in standalone mode during the commissioning and calibration phases.
The RPC DCS software architecture has been developed following a hierarchical design, creating two tree-like structures: a geographical view and a hardware view.
The hierarchical tree structure allows only vertical data flow: commands move downwards, while alarms and state changes propagate upwards.
Commands are propagated through the RPC FSM tree down to the devices, where they are interpreted accordingly as hardware commands.
In addition, the status of the system is described by alarm messages that define how well the system is working (OK, WARNING, ERROR, FATAL) and alert about possible changes of conditions.
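The tree behaviour described above can be sketched in a few lines: commands flow from a node down to leaf devices, while each parent summarizes its children by the worst reported severity. This is a minimal illustration (the `Node` class and state names used as commands' effects are assumptions, not the actual SCADA/FSM implementation):

```python
# Minimal FSM-tree sketch: commands propagate downwards, states upwards;
# a parent node summarizes its children by the worst severity.
SEVERITY = {"OK": 0, "WARNING": 1, "ERROR": 2, "FATAL": 3}

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.state = "OK"

    def command(self, cmd):
        """Propagate a command down the tree to the leaf devices."""
        if self.children:
            for c in self.children:
                c.command(cmd)
        else:
            self.state = "OK" if cmd == "ON" else "WARNING"

    def summary(self):
        """Propagate status upwards: a parent takes its worst child state."""
        if not self.children:
            return self.state
        return max((c.summary() for c in self.children), key=SEVERITY.get)

barrel = Node("Barrel", [Node("Wheel0"), Node("Wheel+1")])
barrel.command("ON")        # command flows down to both wheels
top = barrel.summary()      # state flows back up -> "OK"
```

A single FATAL leaf is enough to drive the top-level summary to FATAL, which is exactly the behaviour needed to surface one bad channel through the whole hierarchy.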