
Dimuon Forward Spectrometer February-March 2008 Cosmic Run



  1. Dimuon Forward Spectrometer February-March 2008 Cosmic Run • General Planning • Day by Day report • People • Shift timetable

  2. General Planning • The muon tracking run will try to get enough manpower to provide 2 weeks (Feb. 25th to Mar. 9th) of shifts • ~7 days seem clearly feasible • the remaining days depend strongly on the manpower involved in the commissioning and noise tests of St345 • Planning • run with St1 and St2 in 0-suppressed mode • try to include Ch5 when the dipole is switched on

  3. Monday 4th • Test of the new configuration in the trigger dispatching crate (full cfg is 1 FFT and 5 FTDs). Slots numbered from 0 (FFT) to 5 • 1 FTD in slot 1 works fine • 2 FTDs (slots 1 and 3, or 1 and 2) don't work: a few triggers are sent (no data coming up), then busy → Contact the pool to try to get a VME crate (including power supply). 5pm meeting: the L3 doors have been closed; restricted access Thursday and Friday. The Kr source will be in place tomorrow for the TPC.

  4. Tuesday 5th • All dimuon LDCs now have the same rpm versions of LC2, ECS and DA • rpm -qa | grep LC/ECS/DA to check • The trigger problem is solved: the backplane cable connecting the FFT and the FTDs was not plugged in properly • trigger sent into 2 FTDs works fine • Problem installing the trigger dispatching software (driver-kernel incompatibility) • Anton/Klauss are investigating → Need to think about the MUON interlock (if the ventilation goes off or the gas stops flowing…). Stations 1 and 2 read simultaneously with 0-supp (using ECS): 130 000 events, no errors. 4pm meeting: L3 field on Monday and Tuesday → do muon standalone noise runs.

  5. Wednesday 6th • Took some test runs with Stations 1 and 2 using ECS (all of St1&2 raised up to 1200V using the DCS FSM) • no major problem seen (one weird crash of the FSM in the evening) • sometimes busy from the LTU: sending TTCinit from the VME-crate CTP emulator solves the problem • calibration tested: need to increase the execution time allotted by ECS • Still not possible to install the software for the trigger dispatching → The MUON interlock (if the ventilation goes off or the gas stops flowing…) will be discussed on Friday. The gas flow has stopped! Being investigated… → The DCS-ECS integration session is now planned for Wed. 13th → need to discuss the status/readiness of Chamber 5. 4pm meeting: nothing noted. 5pm meeting: nothing noted.

  6. Thursday 7th • Still not possible to install the software for the trigger dispatching (correct drivers missing; should be provided by CERN) • The problem with the crash of the CAEN HV unit occurred again → it freezes the corresponding FSM: no explanation so far… → Proposal for magnet operation: Mon 11: 09:00-12:00 +0.2T, 12:00-15:00 +0.5T, 15:00-18:00 -0.5T; Tue 12: 09:00-12:00 -0.2T, 12:00-15:00 -0.5T, 14:00-18:00 +0.5T → Chamber 5 almost ready: scaffolding is needed to finish local commissioning. 4pm meeting: magnet planning. 5pm meeting: nothing noted.

  7. Friday 8th • DCS integration: no major issues preventing participation in the cosmic run (a few small ones) • the ventilation-to-LV and gas-to-HV interlocks will be wired next week • the LV current and voltage limits in DCS should be set properly (each station should take care of this for its hardware) • Clear progress from Sylvain and Gerard on the trigger dispatching software. • ECS script timeout problems solved by Franco (new values set): set pulse 20 → 60 s, compute gain 20 → 180 s, compute threshold 60 → 180 s • Successful calibration with St1+St2 → Proposal for magnet operation: Tuesday, stable 09:00-12:00 -0.2T, 12:00-15:00 -0.5T, 14:00-18:00 +0.5T. 4pm meeting: no control of the magnet on Monday (see above). The trigger questionnaire is to be answered. 5pm meeting: nothing noted.

  8. Saturday/Sunday 9th and 10th • Testing high voltage ramps for St1 and St2 • Dedicated electronics testing of St1. 4pm meeting: nothing noted. 5pm meeting: nothing noted.

  9. Monday 11th • Started training sessions this morning, and started writing the basis of the documentation • Upgrade of the ECS scripts started (on-the-fly generation of LC2 config files) → GMS measurements were done during the magnet tests → see Raphael for results. 4pm meeting: no news; preparing Wednesday's ECS-DCS integration. 5pm meeting: full (positive) current in L3: 30kA. No movements observed by survey, but some seen with the BCAMs: 2mm vertically between the ITS and the absorber. Some noises (metallic up to 5kA, loud after 10kA). One of the magnet doors will be opened on Thursday.

  10. Tuesday 12th • Continued the training sessions this morning and the writing of the basis of the documentation (Volodia, Danish) • Saw a problem with the trigger rate when 4 CROCUS are connected to one FTD • it doesn't appear with 3 CROCUS, or with 4 CROCUS on 2 FTDs → Pedestals and calibration taken with L3 at full field (see run list). 4pm meeting: nothing. 5pm meeting: skipped.

  11. Wednesday 13th • ECS-DCS integration session: it didn't work in the morning, but Ivan fixed the problem. • still an incompatibility between the ECS and the DCS FSM; should be fixed easily • prepare trigger/dead-time measurements for DAQ; next session Friday 22nd in the afternoon. • New ECS scripts (including better validation of the commands sent to LC2) and a new LC2 rpm (less verbose in the DAQ infoBrowser) → Ask the trigger group why the random pulser is not implemented for muon_trk → ALICE shift booking is now available: alicesms.cern.ch. 4pm meeting: magnet test (inverse polarity not reached yesterday; some trips?). Magnet off tomorrow until week 9. DAQ upgrade tomorrow (GDC memory and file cluster); DAQ upgrade Friday morning (DATE and DB). 5pm meeting: access permitted in L3, but HV is present.

  12. Thursday 14th • The pedestal and calibration runs seem to be transferred successfully with the Shuttle • Laurent is testing pedestal subtraction and calibration • New ECS script; some work done, but the validation script is very strict… need to better understand the MARC returned status bits. → Take some DAQ runs to measure dead time versus the number of DDLs in a FFT → Should fix a day during week 8 with Andre for the hardwired interlock. 4pm meeting: the magnet test yesterday seemed a bit more stable than Tuesday's, but why? The instability is not understood. The magnet will be powered up in week 9. ACORDE: 55/60 modules are working → trigger rate ~87.3 Hz. DCS integration had some problems with the DIM servers; recovery is difficult once they are stopped… 5pm meeting: skipped.

  13. Friday 15th • New ECS scripts to be committed • still some work needed • new getCrocusCmdRes.x command → increase the script execution times: see Franco Carena for the ECS modification • MCH fully integrated in DCS (thanks a lot to Ivan): the control of the DCS is now done in the DCA panel → Small problem with monitoring with MOOD after the DAQ upgrade → Small problem with the trigger cables; one station has a recurrent problem with one of them. Loose cable → trigger in busy!! 4.30pm meeting: DAQ prepares clusters for week 9 so that different trigger rates can be applied. DCS has 9 detectors integrated. 5pm meeting: no meeting.

  14. Monday 18th • New LC2 and ECS rpms installed → increased script execution times: see Franco Carena for the ECS modification → understanding of the validator is not complete: MARC status bits!? • Status of Chamber 5: scaffolding installed (to be removed Friday, "fast" commissioning). → The old FTD is in place to test the trigger rate problem: no better results, plus an instability with busy (TTCrst is not handled properly in this FTD) → The SMS web site (http://alicesms.cern.ch) was sent to the MCH groups. 16h30 meeting: CTP input; MTR can communicate, but the global MTR output is flat?! Investigating. TOF will participate in the CTP input. 17h00 meeting: all power supplies (Marathon LV and VME) need to be dismounted and sent to Germany for repair (1 week). That's 140+24 for ALICE; 34 LV and 1 VME for MCH. Probably during week 10 or 11. Need to map serial numbers versus positions.

  15. Tuesday 19th • Crash of the CAEN PS (needed a manual shutdown) + DCS + DCA (the ECS-DCS bridge died) → 3rd crash of the CAEN PS: Ivan will send them a report → boot 1.3, firmware 2.00.02. The problem with OPC is also seen by the TOF people. More info tomorrow • All fibers & trigger cables are present for Ch5; commissioning ongoing → Problem with the DCA-DCS FSM; only ready_locked allows taking pedestal runs. F. Carena will modify it for next week. 16h30 meeting: DAQ upgrade of the cluster filer, so no data recording for 2 hours from 8.00 Thursday. The DCS DB reboot planned for Wednesday is cancelled. MTR will provide a cosmic trigger from 1/2 of the detector (missing DARC). Radiation test in the cavern starting at 21h00: no access. 17h00 meeting: (news merged with the above)

  16. Wednesday 20th • Crash of the CAEN PS → upgrade to firmware version > 2.30.00 according to TOF (the latest working unofficial release is 3.01.01) • Standalone runs with St1+St2 (HV on) to test stability. Some problems related to the electronics (changed a bridge and/or MANUs) → Tried to connect CROCUS 2568 from St3 (S/N 3008 m3): link OK. 16h30 meeting: nothing important. 17h00 meeting: skipped.

  17. Thursday 21st • Problem with the DCS bridge (TRD contamination…): no stable conditions to run. Started a fake DCS, changed the DIM DNS, and things went OK in the afternoon → upgrade to firmware version > 2.30.00 according to TOF (the latest working unofficial release is 3.01.01) • CROCUS 2568 (Chamber 5) responds correctly to the LC2 config. → Interlocks (see next slide) → the trigger rate problem doesn't seem compatible with a problem in the trigger dispatching crate. Sent an email to date-support about a possible misbehavior of the LDC…?? 16h30 meeting: according to the trigger questionnaire, muon tracking is not ready for the cosmic run. No hardwired interlock for us this run; alarms will be sufficient. 17h00 meeting: nothing noted.

  18. Interlocks: status with A. Augustinus (21/02/2008 morning) • GAS: at what level is the interlock plugged in? After the main CO2+Ar output, or after the main and the rescue (CO2) outputs? → probably not necessary to send the interlock (ready to plug in, the cables are present); just retrieve the "interlock" signal from the DSS (Detector Safety System) • Ventilation: 3 DSS signals for muon tracking • St1 + St2 pulsed air • St4 + St5 extracted air • St3 pulsed air → Need to pull cables (3) from CR4 to the LV power supplies → Need to map the correspondence interlock → PS (see the sketch below) → Need to decide about the delay
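To illustrate the interlock-to-PS mapping that still has to be defined, here is a minimal Python sketch; the signal names, power-supply identifiers and delay value are hypothetical placeholders, not the actual DSS configuration:

    # Hypothetical DSS-signal -> LV power-supply mapping for the three
    # ventilation signals listed above. All names/values are placeholders.
    DSS_SIGNALS = {
        "st12_pulsed_air":    ["LV_PS_St1", "LV_PS_St2"],   # St1 + St2 pulsed air
        "st45_extracted_air": ["LV_PS_St4", "LV_PS_St5"],   # St4 + St5 extracted air
        "st3_pulsed_air":     ["LV_PS_St3"],                # St3 pulsed air
    }

    # Delay (in seconds) before the interlock cuts the LV; the real value
    # was still to be decided at the time of the meeting.
    INTERLOCK_DELAY_S = 30  # placeholder

    def power_supplies_to_cut(dss_signal):
        """Return the LV power supplies that a given DSS signal should cut."""
        return DSS_SIGNALS.get(dss_signal, [])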

  19. Friday 22nd • FFT board (sent with DHL) not received… • Still waiting for the CAEN firmware… (Eugenio? from TOF) • Readout rate performance tests done with DAQ; see logbook → the trigger rate problem is solved: "The configuration was 1 LDC with 3 DDLs and 1 LDC with 4 DDLs. We checked the Muon TRK LTU and it was indeed showing a dead time of 4 ms/event. We checked the DDL LEDs: GREEN on the LDC with 3 DDLs and ORANGE on the LDC with 4 DDLs. It was the membanks being too small for the Muon TRK LDC. After increasing them to the value used for the TPC, it is working fine." -P. Vande Vyvre- (see the rate check below) → After discussion with the trigger people, MCH seems OK for the cosmic run. Do not forget to answer the trigger questionnaire asap! 16h30 meeting: skipped. 17h00 meeting: skipped; preparing the readout rate performance tests for DAQ. After an unexplained spark in the TPC (yesterday), they decided not to participate in the cosmic run!
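As a back-of-the-envelope check of the figure quoted above, a fixed dead time of 4 ms/event caps the sustainable trigger rate at 250 Hz; a minimal sketch (the dead time is the only number taken from the slide):

    # A fixed per-event dead time caps the sustainable trigger rate at
    # 1 / deadtime.
    deadtime_s = 4e-3  # 4 ms/event, as read from the Muon TRK LTU (see above)
    max_rate_hz = 1.0 / deadtime_s
    print("Max sustainable trigger rate: %.0f Hz" % max_rate_hz)  # -> 250 Hz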

  20. Saturday 23rd • Switched the CROCUS of Chamber 5 from local to dispatching mode. → the current and voltage limits are not set in DCS → needs to be done! → problem with RORC 7152 (minor 2) channel 0 (channel 1 OK): the DAQ team will look into it. The bottom-right CROCUS has not been updated. • The number of events for pedestal runs is now set to 400 in the DAQ configuration → Problem with the DCS-ECS bridge; the DCA detector lock was red but owned by nobody…? It came back after killing the bridge and starting the dummy FSM. → When raising the HV after it has stayed off for some time, currents appear (~20 nA) at HV = 1200V. Should the gas purity be checked? 16h30 meeting: detector integration in the global partition starting at 8.00. 17h00 meeting: no meeting.

  21. Sunday 24th • Problems with the CDH since yesterday (after we connected the Ch5 trigger cables): error 384 in the CDH; DAQ tolerates 10 of them, then stops! → Did the problem solve itself?!? The VME crate was off?? Restarted it. Then St2 always saw busy. LTU proxy restart → nothing changed. Rebooted the CROCUS: taking data with all CROCUS but 2565, and with CROCUS 2565 alone, was fine. Run with 4 CROCUS: no more CDH errors → It is apparently related to a problem with 2565 already seen by the St2 team. Sometimes the init fails; it succeeds after some "rest" time… • Configuration (default, ped., calib.) done on LDCs S1, S2, S4, S5 • connection problem with S3: report sent to alice-date-support → Reference ped. and calib. runs taken and transferred to CASTOR. Ped. with 400 events per run: COMPUTE_THRESHOLDS ≈ 70 s (see the sketch below). Calib. with 400 events per run (10 runs) + gain computation ≈ 22 min! 16h30 meeting: no meeting. 17h00 meeting: no meeting.
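For context, COMPUTE_THRESHOLDS derives the zero-suppression thresholds from the pedestal data. A minimal sketch of the idea, assuming per-channel pedestal samples and the 4-sigma cut used for the global runs listed later (the array shapes and values are illustrative, not the actual detector-algorithm code):

    import numpy as np

    # Illustrative pedestal data: 400 events (as noted above) x N channels
    # of raw ADC samples; the channel count and noise level are invented.
    rng = np.random.default_rng(0)
    pedestal_samples = rng.normal(loc=120.0, scale=1.1, size=(400, 1024))

    # Per-channel pedestal mean and sigma from the pedestal run.
    ped_mean = pedestal_samples.mean(axis=0)
    ped_sigma = pedestal_samples.std(axis=0)

    # 4-sigma zero-suppression threshold per channel ("0-sup at 4 sigmas").
    thresholds = ped_mean + 4.0 * ped_sigma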

  22. Monday 25th • Global run start: • the first step is to include all detectors in the global DCS. Cycle standby-beam_tuning-ready-beam_tuning-standby: OK for MCH • DRORC (minor 2) in aldaqpc090 replaced (71528020) and already changed in the DATE DB. → problem with CROCUS 2565: Sylvain suggested switching the electronics LV off before trying the init sequence. → run in the global partition with HV @ 1500V (1600V tomorrow); no major problems (a few busy-on…). 16h30 meeting: the L3 magnet test was qualified as disastrous by Lars: a large number of trips due to a lot of metallic items found in the magnet (e.g. a nice pair of pliers). Dipole: 1kV seems fine (more tomorrow). Week 9: step-by-step integration of the detectors in DCS (11 this morning)/ECS. Alignment data during week 10 for ITS and TOF. Because of the absence of the TPC in the run, HLT might not join the global run. The TPC is most probably NOT damaged. 17h00 meeting: access is supervised after 18.00. Call Werner or Lars for interventions in the cavern.

  23. Tuesday 26th • DCS test in the morning: some weird behavior • after we took back control, everything seemed normal. → Back in the global partition (17h00) with 2 LDCs and 7 DDLs. Runs OK. After a problem with CDH errors, we ended up in a state where even standalone with one DDL was not working (busy problem). The recovery procedure needs to be clarified… standalone runs starting at 21h00 (22220 and 22232). It now seems pretty clear that running in the global partition causes unexpected stops due to the other detectors: "l'enfer, c'est les autres" ("hell is other people"). The muon readout does not handle this properly and cannot be restarted easily; protection against these unexpected stops needs to be implemented to ensure stable running in the global partition. 16h30 meeting: quite difficult to start the global partition. Two 1-hour runs with 10 detectors not recorded, 21907 and 21928 (21861 recorded). P. Vande Vyvre requested again one expert per detector, 24/7: crucial for debugging the data taking with ECS. A real gas alarm demonstrated that some interlocks were bypassed (nothing bad). The DCS instability test didn't work. The detector in busy is now displayed on a screen. 17h00 meeting: no meeting.

  24. Wednesday 27th • No DCS test in the morning. • Some tests on Chamber 5 with LC2 to modify the configuration (faulty patch bus) • The new COMPUTE_THRESHOLDS script failed (seg. fault); Alberto and Frederic are working on it. → Included in the global run (13h30), run 22347. Around 18h, the usual crash of the DAQ DB, plus the CDH errors reappeared. Back to normal around 21h. In global with HV on; see run list. → The CDH errors (1 every 3445 events) stopped for no reason (trigger crate? influence of HLT?) → A few very good and long runs during the night with HV on. 16h30 meeting: DAQ runs stop mostly because of CDH errors (HMPID, V0, MTR, MCH), busy (V0, TOF) and a change of state of the FSM (SDD). S/EOR is not yet ready and requires some thought. DAQ asked for a basic set of commands to reconfigure each detector. DAQ upgrade in w14. DCS test: try to limit the chaos due to a restart of the DIM DNS server (go to lab.test). Tomorrow until noon, S/EOR tests. HLT wants to set up the muon chain. Gas problem for MTR: the isobutane was not warm enough → HV trip during the night. L3: not a successful day; they will try to lower the sensitivity of the power supplies. 17h00 meeting: no meeting.

  25. Friday 29th • In the global partition during the night, with a 3-hour run! Stability seems OK; no stops due to us. See run list. • A few problems: an HV trip in the morning caused a crash of the CAEN OPC server. It needed a manual restart in CR4, and Ivan had to restart the project. • The trigger VME tripped again (see the configuration with L. Wallet) • Pedestal and calibration runs taken with the magnet on and off: no effect visible "by eye". • During the magnet tests, no major problems were seen. Some (non-recorded) runs were taken during the ramp-down. Seems OK… → Joined the global partition around midnight with run 23536 and following. Runs very fine (no errors, no busy), very stable. Clean 0-supp and readout time around 280 μs. → The right HV crate crashed at 5h30 (needed a manual restart); for the first time, I could take control back from DCS. 16h30 meeting: ramping of the 2 magnets started very quickly. The leakage current didn't show the structure observed yesterday. Tilt of the absorber: a fixation point is moving. Still need to power up the magnets on Monday. Tomorrow, a trigger test instead of the DCS test. DAQ: ECS HI stops sometimes and restarting is difficult. 6 hours of stable running during the last night. The muon trigger has seen physics triggers! Next week, the muon chambers and trigger may run together. 17h00 meeting: no meeting.

  26. Saturday March 1st • Some problem with the trigger crate in the afternoon; we needed to go into the cavern and push the reset button?! • 3 FTDs are now full, with all the trigger cables of St1, 2 and 3 • 2 standalone runs at 40 MHz of 30 min (~2.5×10^6 events each) • no CDH errors seen • no busy state → Did not join the global partition! We need more people to get trained and take shifts; almost all the other subsystems comply with the 3×8-hour shifts per day… 16h30 meeting: tomorrow morning, trigger (cluster) tests. Both magnets on Monday for testing, no stable conditions (alignment data taking starts in the evening). A cluster for muon trigger and tracking is foreseen in the global partition; some tests tomorrow. Some CDH problems may come from the trigger when testing the trigger classes/clusters. DCS: the integration of the TPC failed, but some DAQ readout was performed (with different CDH checks). The muon trigger will read out inside the detector next week. 6.3% of the ACORDE triggers give a track in the TPC, in agreement with simulations (~10%). 17h00 meeting: no meeting.

  27. Sunday March 2nd • No night shift; the detector was turned off. In the morning, data taking started without a single problem. Several 10^6 events in standalone. • Trigger cluster tests in the morning: most probably, having clusters of detectors triggered by different triggers will not be ready. Not sure it will be available during the production week: • this means muon tracking will be triggered by ACORDE and/or the muon trigger. It is not even sure yet that a multiple trigger input sent to the global partition will work… → First tracks in Stations 1 and 2 (see the end of the summary) → A few standalone runs during the day (all statistics on https://alice-logbook.cern.ch/logbook/date_online.php?p_cont=es&p_lhcp=LHC08aS) → Joined the global partition around 23h30; problem with a trip that broke the MCH_DCS (a DIM DNS problem is also suspected). Running without high voltage. The DIM DNS server had to be restarted… end of shift! 16h30 meeting: the production run should start tomorrow (most probably in the evening, after the magnet tests). During production, 15 minutes at the beginning of each run will be dedicated to calibration (a pedestal run, for instance, for MCH). 17h00 meeting: no meeting.

  28. Monday March 3rd • Test of the DCS recovery procedure: some problems with our finite state machine (the right part restarted OK but not the left). • Some modifications by Ivan for better handling of trips; it doesn't crash the CAEN HV OPC anymore! When a trip occurs there is still an oscillating tripped/error status, but recovery is quite easy. • A few standalone runs during the day. The system seems quite stable • A problem has been seen several times over the last week: the power supply of the trigger dispatching crate switches off, for no reason… → Taking data with the muon trigger at ~0.2 Hz; clusters are visible in 30 to 40% of the events! At least 2 events (MOOD display) with 1 hit per plane… Tracks are probably there. 16h30 meeting: next cosmic run at the end of April; a preparation meeting to be held soon. DCS: last night's problem with the DIM DNS server was caused by an excessive number of connections (MCH, HMPID, TRD). The recovery procedure didn't work because the TOF FSM had been changed since its validation… Worker nodes may be locked during the production run to avoid any unauthorized modification of the FSM. CTP: no trigger classes/clusters available for this run. 80h of running with ACORDE single muons. No time left for MTR+MCH?! We will use the slots when ITS is not ready. 17h00 meeting: no meeting.

  29. Tuesday March 4th • The good runs with the muon trigger stopped around 4.00, when the global trigger was switched to ACORDE. • In the morning, muon tracking couldn't enter the Alice_All global partition! Running in standalone and in the global Alice_test (@ 40 MHz) shows no problems?! • the problem certainly has another origin, not determined so far. The global partition is now being "rebuilt", including the detectors one by one. → The runs are dedicated to ITS alignment (with the TOF diagonal trigger at 1 Hz?!). The minimum set of detectors is included in the global partition. 16h30 meeting: an official email about the DCS "lock" procedure is to be sent soon. Provide the list of DCA commands to the DAQ people so that they can configure the detector. 17h00 meeting: no meeting.

  30. Wednesday March 5th • Did not join the global partition: standalone test run 25402 with 10^6 events to test stability. • The global partition with ITS is very unstable; it seems unrealistic that they will get enough tracks for alignment. • It is still priority 1 → During the 4h30 meeting, the decision was taken to run ITS only with the ACORDE trigger (a minimal set of detectors, in order not to stress the system). The question now is: what do we do? Does the alignment of the ITS (with half of the SSD) without the TPC deserve such a high priority that other detectors are excluded from the runs? In any case, "these data" will be requested again next run, again with the highest priority… 16h30 meeting: DAQ/ECS: some weird SMI states seen; they cause the ECS to crash. 17h00 meeting: no meeting.

  31. Thursday March 6th • Tried to join the global partition: the problem seen on Tuesday is still there. After a call, Anton increased the time between the TTCrx reset and the SOD by 2 additional seconds: • "I added the 2 seconds wait before sending SOD in ctp_proxy. Ivan is prepared for the test. He should be able to see in our logs if the right sequence of steps is executed, i.e.: go to GLOBAL in ltuproxy (TTCrx reset is part of this step); Start partition in ctpproxy; now there is a 4 seconds (before it was 2 secs) sleep before an attempt to send the SOD. Bye, Anton" • The problem was finally solved by simply increasing the time (at the ECS/DAQ level) allowed for receiving the start of data. • Another problem with the DIM DNS server in the morning: it stopped all activities. Since MCH was one of the suspects, our DCS was restarted on another DIM server (alidcscom051). → Restarting the global partition with SPD, SDD and SSD took all day. No (muon_trigger) trigger time for us…! 16h30 meeting: the DAQ/ECS problem with SMI seems related to the start/stop of the DCS FSM and the appearance of unknown states. Protection will be added asap (next cosmic run). The DCS questionnaire was sent around: it needs instructions for what to monitor and how to recover. 17h00 meeting: no meeting.

  32. Friday March 7th • Big problems with the ITS partition from 1.00; it couldn't be restarted before the morning (by changing the DAQ configuration). • Finally running different partitions with different trigger inputs. Works fine at all levels… • Now taking data with the muon trigger (~0.1 Hz) and V0 in the Alice_Multi partition (global DAQ) → The run list has been updated with a lot of runs with the muon trigger. All the data are usable for offline analysis… so please do! 16h30 meeting: 3 partitions assembled by the DAQ team. A recurrent error in the SPD has been solved by setting a DAQ parameter that removes everything arriving before the SOD. From Monday evening, DAQ is off until Tuesday afternoon. A solution for dim_smi will be available next week. ITS: SPD has 92/120 modules, SSD 61/144 and SDD 100/260… 17h00 meeting: no meeting.

  33. Saturday March 8th • The OPC connections to the right HV crate were lost during the night (the HV remained at 1600V, so run 25836 was kept running) • shortly after the restart of the crate in CR4 (around 8.00), the trigger crate went off. Could there be a correlation here? • DCS is running as a service with automatic restart • If OPC crashes again: power-cycle the crate; the OPC server can then be stopped and restarted from the DCS shift desk. Finally, restart the DCS FSM. • A short try to power up and boot/init CH3 Left: the init at the CROCUS level doesn't work. → Very stable running all day. Several runs of several hours without any problems. 16h30 meeting: not attended. 17h00 meeting: no meeting.

  34. Sunday March 9th • Run 25985 was still going on in the morning; more than 8 hours…! • The cosmic test run will stop at 12 for muon tracking and muon trigger. • "Dear Alice members, The Alice cosmic run will be stopped as planned on Sunday at 16.00. We will then bring the detectors into a safe state so that installation can continue unhindered on Monday morning. After a long battle we are now taking data which should allow us to exercise alignment in the ITS and which should even give us tracks in the TOF, TRD and HMPID. In order to celebrate this we will have an end of run party on Sunday 9 March at about 5 pm at point 2. Paul Kuijer" → Lots of events displayed by the HLT (I. Das) on the big screen of the ACR. On some of them, tracks are visible without any doubt! All chambers of Stations 1 and 2 have hits, and all the trigger chambers as well; all hits perfectly aligned! Our best achievement of the run: a nice set of tracks (eye-reconstructed for the moment) in the trigger and tracking chambers. 16h30 meeting: no meeting. 17h00 meeting: no meeting.

  35. People @CERN • Volodia Nikulin (w 5, 6, 7 and 10) • Mamu (leaves on the 17th) • Sanjoy (leaves on the 29th) • Danish (leaves on March 6th) • Indranil (arrived on the 16th, leaves in April) • Guillaume Batigne (w9) • Frederic Lefevre (w9) • Alberto Baldisseri (w9) • Andry Rakotozafindrabe (w9) • Hervé Borel (w9) Priority to the St345 commissioning: • Jean-Luc Charvet (w10) • Corrado Cicalo (w10) • Elisabetta Siddi (w10) • Sebastien Herlant (w10) • Hervé Borel (w10) → possibility to free one person for the shifts: it did not happen • Magdalena Malek (end of w9 + w10)

  36. Shift schedule Book your shifts as soon as possible: http://alicesms.cern.ch/ http://alicesms.cern.ch/alicesms/shifts/dashboard Documentation is in progress: http://ipnweb.in2p3.fr/~suire/

  37. Recorded runs in Global Partition The complete run list is available in the alice-logbook: https://alice-logbook.cern.ch/. Here are the most interesting ones for dimuon tracking: • 21392: MCH. 14068 events @ ~90 Hz (ACORDE trigger) with 0-sup at 4 sigmas, Stations 1 and 2 (HV = 100V → no signal) • associated pedestal run #21386 • 21861: 8 detectors including MCH and MTR. 47572 events with the ACORDE trigger. Stations 1 and 2 (HV = 1500V, maybe some low signal) with 0-sup at 4 sigmas. • associated pedestal run #21824 • 22492: 11 detectors including MCH and MTR. 116 814 events with the ACORDE trigger. Stations 1 and 2 (HV = 1600V → some signal may be there) with 0-sup at 4 sigmas. A few problems with the St1 0-supp. • associated pedestal run #22489 The following runs (22495 and 22498) were also taken with high voltage on. Some problems with the low voltages (trips) caused some errors (or missing data). Let's consider this a "reality" test for the code ;-)

  38. Recorded runs in Global Partition • 22505: 11 detectors including MCH and MTR. 15284 events with the ACORDE trigger. Stations 1 and 2 (HV = 1600V) with 0-sup at 4 sigmas. • associated pedestal run #22502 • 22506: 11 detectors including MCH and MTR. ACORDE trigger. Stations 1 and 2 (HV = 1600V) with 0-sup at 4 sigmas. • associated pedestal run #22504 • 22939: 8 detectors including MCH and MTR. Stations 1 and 2 (HV = 1600V) with 0-sup at 4 sigmas. Stopped upon TOF and HMPID request. • associated pedestal run #22932 • 22962: 8 detectors including MCH and MTR. Stations 1 and 2 (HV = 1600V) with 0-sup at 4 sigmas. Stopped because of MCH… maybe? • associated pedestal run #22932 • 22986: 8 detectors including MCH and MTR. Stations 1 and 2 (HV = 1600V) with 0-sup at 4 sigmas. A 3-hour run! • associated pedestal run #22974

  39. Recorded runs in Global Partition • 23536, 23545, 23548, 23549, 23579 (2 hours): MCH and MTR. Stations 1 and 2 (HV = 1600V) with 0-sup at 4 sigmas. Very clean chambers (readout time = 280 μs) • associated pedestal run #23533 • 23637: MCH and MTR. Stations 1 and 2 (HV = 1600V) with 0-sup at 4 sigmas. Very clean chambers (readout time = 280 μs) • associated pedestal run #23632 • 24567: MCH Stations 1 and 2 (HV = 0V) with 0-sup at 4 sigmas (readout time = 300 μs) • associated pedestal run #24562 • 24836, 24841, 24849: MCH Stations 1 and 2 (HV = 1600V) + MTR with 0-sup at 4 sigmas. Muon Tracking trigger!! • associated pedestal run #24830 • 24897: MCH Stations 1 and 2 (HV = 1600V) + MTR with 0-sup at 4 sigmas. Muon Tracking trigger!! (some problems with the electronics) • associated pedestal run #24830

  40. Recorded runs in Global Partition • 24906: MCH Stations 1 and 2 (HV = 1600V) + MTR with 0-sup at 4 sigmas. Muon Tracking trigger!! • associated pedestal run #24904 • 24908: MCH Stations 1 and 2 (HV = 1600V) + MTR with 0-sup at 4 sigmas. Muon Tracking trigger!! Very clean • associated pedestal run #24907 • 25754, 25757, 25758: MCH Stations 1 and 2 (HV = 1600V) + MTR with 0-sup at 4 sigmas. Muon Tracking trigger!! • associated pedestal run #25745 • 25775, 25779, 25793: MCH Stations 1 and 2 (HV = 1600V) + MTR with 0-sup at 4 sigmas. Muon Tracking trigger!! • associated pedestal run #25769 • 25821: MCH Stations 1 and 2 (HV = 1600V) + MTR with 0-sup at 4 sigmas. Muon Tracking trigger!! Very clean, and then a problem with DE 101 bending at the end of the run • associated pedestal run #25800

  41. Recorded runs in Global Partition • 25836, 25862: MCH Stations 1 and 2 (HV = 1600V) + MTR with 0-sup at 4 sigmas. Muon Tracking trigger!! Still a problem with DE 101 bending + lost connection to HV (kept at 1600V) • associated pedestal run #25832 • 25898, 25910, 25911: MCH Stations 1 and 2 (HV = 1600V) + MTR with 0-sup at 4 sigmas. Muon Tracking trigger!! Very clean (DE 101 bending removed from readout) • associated pedestal run #25894 • 25944: MCH Station 1 (HV = 1650V) and Station 2 (HV = 1650V) + MTR with 0-sup at 4 sigmas. Muon Tracking trigger!! Do not use: a few HV trips during the readout → decrease HV • associated pedestal run #25938 • 25960: MCH Stations 1 and 2 (HV = 1600V) + MTR with 0-sup at 4 sigmas. Muon Tracking trigger!! Very clean (DE 101 bending removed from readout) • associated pedestal run #25948

  42. Recorded runs in Global Partition • 25985: MCH Stations 1 and 2 (HV = 1600V) + MTR with 0-sup at 4 sigmas. Muon Tracking trigger!! Very clean (DE 101 bending removed from readout), except at the very end, with a problem on DE 200 bending. A 9-hour run… • associated pedestal run #25948 • 26024: MCH Stations 1 and 2 (HV = 1600V) + MTR with 0-sup at 4 sigmas. Muon Tracking trigger!! Very clean (DE 101 bending removed from readout). A trip occurred while ramping up the HV of DE 102 • associated pedestal run #26019 • That's all folks

  43. Run list (not in global): see more details in http://alice-logbook.cern.ch • Local test runs (pulser trigger @ 100 Hz) • 18375: pedestal run with magnetic field • 18378, 18379, 18380, 18381, 18384, 18388, 18389, 18392, 18394, 18398: calibration runs with magnetic field • Pedestal reference runs: root recording, transferred to CASTOR • 21542: full Station 1 with HV = 1200V (beam_tuning state) • 21626: full Station 2 with HV = 1200V (beam_tuning state) • Calibration reference runs (scan value in parentheses; see the sketch below): root recording, transferred to CASTOR • 21712(0), 21713(400), 21714(800), 21716(1200), 21717(1600), 21718(2000), 21719(2400), 21720(2800), 21721(3200), 21722(3600): full Station 2 with HV = 1200V (beam_tuning state) • 21732(0), 21733(400), 21734(800), 21735(1200), 21736(1600), 21737(2000), 21738(2400), 21740(2800), 21741(3200), 21742(3600): full Station 1 with HV = 1200V (beam_tuning state)
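The ten values in parentheses step from 0 to 3600 in increments of 400, matching the 10 calibration runs per station mentioned on the Sunday 24th slide. Assuming these are the pulser amplitude (DAC) steps of the gain scan, here is a minimal sketch of what a gain computation does with such a scan (the ADC response values are invented):

    import numpy as np

    # The scan settings read off the run list: 0 to 3600 in steps of 400.
    dac_scan = np.arange(0, 4000, 400)

    # Hypothetical mean ADC response of one channel at each scan step
    # (invented numbers, just to illustrate a linear gain fit).
    mean_adc = 120.0 + 0.25 * dac_scan

    # The gain is the slope of the ADC-vs-DAC response; the offset is
    # the pedestal level.
    gain, offset = np.polyfit(dac_scan, mean_adc, deg=1)
    print("gain = %.3f ADC/DAC count, offset = %.1f ADC" % (gain, offset))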

  44. Magnet effect on measured noise (L. Aphecetche): comparison of runs 23097 and 23355. [Figure: per-channel noise comparison; the plot carries a "15 %" scale label.]

  45. Magnet effect on measured noise (L. Aphecetche): comparison of runs 23097 and 23355. The mean noise is ~1.1 ADC counts; the relative difference between the runs with and without the magnets (L3 + dipole) is negligible.

  46. Run 22506 — occupancy numbers

             |  HRAW22506  |  HCALZ22506 |  HCALG22506 |  HCALC22506 |
  Chamber 0  |    0.23 %   |    0.06 %   |    0.04 %   |    0.04 %   |
    DE 0100  |    0.19 %   |    0.04 %   |    0.04 %   |    0.04 %   |
    DE 0101  |    0.29 %   |    0.11 %   |    0.04 %   |    0.04 %   |
    DE 0102  |    0.19 %   |    0.04 %   |    0.04 %   |    0.04 %   |
    DE 0103  |    0.24 %   |    0.06 %   |    0.05 %   |    0.05 %   |
  Chamber 1  |    0.26 %   |    0.06 %   |    0.06 %   |    0.06 %   |
    DE 0200  |    0.55 %   |    0.16 %   |    0.14 %   |    0.14 %   |
    DE 0201  |    0.17 %   |    0.04 %   |    0.04 %   |    0.04 %   |
    DE 0202  |    0.16 %   |    0.03 %   |    0.03 %   |    0.03 %   |
    DE 0203  |    0.16 %   |    0.03 %   |    0.03 %   |    0.03 %   |
  Chamber 2  |    0.08 %   |    0.01 %   |    0.00 %   |    0.00 %   |
    DE 0300  |    0.14 %   |    0.03 %   |    0.00 %   |    0.00 %   |
    DE 0303  |    0.20 %   |    0.02 %   |    0.00 %   |    0.00 %   |
  Chamber 3  |    0.15 %   |    0.04 %   |    0.00 %   |    0.00 %   |
    DE 0400  |    0.18 %   |    0.06 %   |    0.00 %   |    0.00 %   |
    DE 0401  |    0.16 %   |    0.03 %   |    0.00 %   |    0.00 %   |
    DE 0402  |    0.12 %   |    0.02 %   |    0.00 %   |    0.00 %   |
    DE 0403  |    0.12 %   |    0.03 %   |    0.00 %   |    0.00 %   |
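For reference, occupancy here is the fraction of pads above threshold, averaged over the events of the run; a minimal sketch of that computation (the event and pad counts below are invented, not those of run 22506):

    import numpy as np

    # Illustrative hit data: one boolean fired-pad mask per event.
    # Both counts are invented placeholders.
    n_events, n_pads = 2000, 4096
    rng = np.random.default_rng(1)
    fired = rng.random((n_events, n_pads)) < 0.0023  # ~0.23 % raw occupancy

    # Occupancy = fired pads / (pads x events), in percent.
    occupancy_pct = 100.0 * fired.sum() / (n_events * n_pads)
    print("occupancy = %.2f %%" % occupancy_pct)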

  47. Run 23549 (HLT analysis, I. Das), ACORDE trigger. Fired pads are isolated with a sharp 10 ADC cut on 0-suppressed, pedestal-subtracted data.

  48. Run 23549 (HLT analysis, I. Das), ACORDE trigger. More @ http://www.cern.ch/indranil.das/dimuon-collaboration A straight line passing through the fired pads points clearly to ACORDE (see the sketch below).
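To make the two steps above concrete (the 10 ADC cut, then the straight line through the fired pads), here is a minimal sketch; the pad positions and ADC values are invented, and the fit is a plain least-squares line, not the actual HLT algorithm:

    import numpy as np

    # Step 1: sharp 10 ADC cut on 0-suppressed, pedestal-subtracted data.
    # The ADC values and (z, y) pad positions are invented placeholders.
    adc = np.array([3.0, 25.0, 18.0, 2.0, 40.0, 31.0])
    z = np.array([-526.2, -526.2, -687.1, -687.1, -696.5, -703.5])  # cm
    y = np.array([12.0, 11.4, 15.1, 90.0, 19.3, 19.8])              # cm
    keep = adc > 10.0  # the fired pads

    # Step 2: least-squares straight line y = a*z + b through the fired pads.
    a, b = np.polyfit(z[keep], y[keep], deg=1)
    print("track candidate: y = %.4f * z + %.1f" % (a, b))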

  49. Run 24841 (HLT analysis, I. Das), muon trigger. More @ http://idas.web.cern.ch/idas/dimuon-collaboration/trigger_tracker A straight line passing through the fired pads/strips points towards the center of ALICE (muon trigger decision algorithm).

  50. Run 26024 (HLT analysis, I. Das), muon trigger. http://www.cern.ch/Indranil.Das/dimuon-collaboration/HLT-OnlineDisplay
