
Summary from Ringberg ID workshop








1. Summary from Ringberg ID workshop
• Not a straightforward summary of all the Ringberg talks - try to draw out messages and things for follow-up
• Cannot mention everything - sorry if your talk / favourite subject not mentioned!
• No attributions - everything is taken from the workshop
• See the workshop slides for more information: http://indico.cern.ch/conferenceDisplay.py?confId=23706
• Progress - some of the Ringberg highlights
• Integration and robustness
• Resources
• Choices and complexity
• Getting ready for data
Richard Hawkings (CERN), ATLAS CAT physics meeting, 9/5/08

2. Progress - Tracking software
• Complete ‘NewTracking’ chain in place
• Good performance in general
• Work still ongoing in dense jets (cf. b-tag)
• CPU under control (4 sec/event), but need to tune for pileup
• One output track collection produced - simple for track consumers
• All options integrated (incl. DQ and perfmon) in InDetRecExample
• Very modular - plug-and-play (e.g. fitters)
• New application in low-pT tracking (<500 MeV)
• Optional additional pass after standard tracking, to pick up unused hits (see the sketch below)
• Particularly important for min-bias studies
• Still some issues before startup
• Handling broad/split/flipped RIOs
• Tuning, robustness, optimisation
• EDM issues (later)
[Figures: low-pT efficiency, low-pT fake rate]
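As an illustration of the two-pass idea (not the actual NewTracking code), a minimal Python sketch: the dedicated low-pT pass seeds only from hits the standard pass did not assign to any track, and both outputs are merged into a single collection. All names here (run_pattern_recognition, Track, Hit) are hypothetical.

```python
# Hypothetical sketch of a two-pass tracking strategy: the standard pass runs
# first, then a dedicated low-pT pass seeds only from the leftover hits.
# Names (run_pattern_recognition, hit_ids) are illustrative, not ATLAS code.

def run_two_pass_tracking(hits, run_pattern_recognition):
    # Pass 1: standard tracking, tuned for pT > 500 MeV
    standard_tracks = run_pattern_recognition(hits, min_pt=500.0)

    used = set()
    for track in standard_tracks:
        used.update(track.hit_ids)

    # Pass 2: low-pT tracking on unused hits only (important for min-bias)
    leftover = [h for h in hits if h.id not in used]
    low_pt_tracks = run_pattern_recognition(leftover, min_pt=100.0)

    # One merged output collection, simple for downstream track consumers
    return standard_tracks + low_pt_tracks
```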

3. Progress - cosmic reconstruction
• ID software chain reconstructing cosmics!
• Combined tracks in SCT, TRT and muons
• Standard tracking chain + dedicated cosmic tracking
• Many lessons learned …
• Realism: dead/noisy modules - so far by hand!
• Effects due to ‘strange’ geometry - glancing tracks, odd angles
• Pathological events - many seeds, high multiplicity
• Alignment is good enough to find tracks!
• Need to understand overlap between track algorithms - only 1/3 of tracks found by both tracking chains
• Preparation for next activities
• Cosmics with magnetic field and pixels
• Single beam: halo and beam-gas
[Figure: track classes]

4. Progress - bytestream and trigger
• Simplification and (offline) speed-up of bytestream converters
• Allows use of same code at LVL2 and EF/offline
• Code developed in MIG2 - ready, need muons
• HLT tracking software participating in both Mx cosmic and TDAQ technical runs
• Algorithms adapted for high cosmic efficiency
• Pre-recorded events to test dataflow, rates, etc.
• MC studies with release 14 …
• Exploring effects of different track fitters in EF
• Comparisons of LVL2 and EF track efficiency
• Comparisons of EF/offline
[Figure: M6 cosmic]

5. Progress - conversions, V0 and material
• Tracking framework now quite mature, algorithms to maximise efficiency
• Forward tracking seeded from Si, then backtracking from TRT, standalone TRT, TRT + single Si space point
• All in a coherent framework, ready for analysis
• Algorithms to find conversions and V0s
• A lot of commonality - can they be merged?
• Effort starting on mapping ID material via γ→ee conversions
• Using photons from π0 decays in min-bias events - enough statistics in a few months
• Challenges ahead …
• Normalisation (use beampipe?) - see the sketch below
• Efficiency determination
• Various ideas - exploit KS, isospin
• Exploit particle ID and calo info
• Cuts to improve purity/resolution
[Figure: conversion r_vtx resolution]
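A rough sketch of how the beampipe normalisation idea could work: the beampipe thickness in radiation lengths is known from construction, so efficiency-corrected conversion yields in any other layer can be scaled against it. This is only an illustration of the idea raised at the workshop, not a validated procedure; all names and numbers are placeholders.

```python
# Illustrative sketch: estimate the material (in radiation lengths) of a
# detector layer from photon-conversion counts, normalised to the beampipe
# whose X/X0 is known from construction. Efficiency corrections are assumed
# to come from elsewhere; the beampipe value below is a placeholder.

def layer_x_over_x0(n_conv_layer, eff_layer, n_conv_beampipe, eff_beampipe,
                    beampipe_x_over_x0=0.0045):
    # For thin material the conversion probability per photon is roughly
    # proportional to X/X0, so the ratio of efficiency-corrected yields
    # (same photon flux) gives the ratio of material thicknesses.
    corrected_layer = n_conv_layer / eff_layer
    corrected_beampipe = n_conv_beampipe / eff_beampipe
    return beampipe_x_over_x0 * corrected_layer / corrected_beampipe
```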

6. Progress - muon reconstruction
• Long effort to provide a common tracking framework for ID and muons, bearing fruit
• Moore/MuId performance good in release 14
• Small fake rate even with cavern background
• Use of the common tracking framework naturally supports global track fits - important to get the best tracking resolution
• ID dominates resolution up to ~50 GeV!
• Bewildering array of muon reco packages
• Standalone, +ID, +calo, seeded from MS/ID
• Modular approach important - try to combine the best features of each rather than choosing between two monolithic chains
• Explore tag/probe efficiency with J/ψ, ϒ, Z (see the sketch below)
• One muon identified in ID+MS, second in MS, search for corresponding ID track
• Interesting possibility to adapt b-tag System8
[Figures: J/ψ, pT > 4 GeV; Z→μμ, ID/MS track fit]
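The tag-and-probe idea mentioned above fits in a few lines; a minimal sketch, assuming hypothetical event/track containers, counting how often an MS-only probe muon has a matching ID track.

```python
# Minimal tag-and-probe sketch (hypothetical data model): select Z->mumu
# candidates where the tag muon is reconstructed in both ID and MS, the probe
# only in the MS, and measure how often an ID track matches the probe.
import math

def delta_r(a, b):
    dphi = abs(a.phi - b.phi)
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(a.eta - b.eta, dphi)

def id_efficiency(events, match_dr=0.05):
    n_probe, n_matched = 0, 0
    for ev in events:
        for tag, probe in ev.z_candidates:      # tag: ID+MS muon, probe: MS-only
            n_probe += 1
            if any(delta_r(trk, probe) < match_dr for trk in ev.id_tracks):
                n_matched += 1
    return n_matched / n_probe if n_probe else float("nan")
```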

7. Progress - alignment on Monte Carlo
• CSC exercise very valuable for the alignment group
• Data with initial (unrealistically) large misalignments - can ‘bootstrap’ and align silicon and TRT in a consistent way
• Reality check - strategies and implementation work - although the event sample (multimuons) is unrealistic
• Many lessons learned, e.g. follow-up of weak modes
• New work: 3x3 distortion ‘matrix’, effect on ID performance
• Some issues remain - Z width ‘mystery’
• Follow-up with FDR1
• Semi-realistic single-track alignment stream
• Time pressure for fast turnaround - CAF-like computing infrastructure (though I/O problems)
• ‘Reasonable’ alignment constants produced
• Again, many lessons learned - need a realistic starting point, auxiliary event samples, how to run the computing
• FDR2 comes next - with full CAF setup

8. Progress - alignment with real data
• SR1 cosmic runs already gave a first idea of the internal alignments of different parts (barrel, endcaps) with ~100k cosmic tracks
• First runs in the pit (M-ID, M6) have low statistics, O(10k tracks), but do not show big differences to the alignment corrections from SR1
• Also gathering all survey data on ID parts
• Big effort made surveying modules, barrels, disks
• How to use this optimally in track-based alignment?
• Need to complete DB upload, error treatment
• Overall ID installation surveyed to 0.1-0.3 mm
• Relatively good shape to start track alignment
[Figures: TRT M6, SCT M6]

9. Progress - FSI system
• Laser interferometry system for the SCT
• Measure grid-line lengths to 1 μm over 1 m
• 852 grid lines inside the SCT volume - build up a 3D picture of how SCT structures move
• Potentially, scan every ~10 minutes
• Hardware commissioning well advanced
• Very impressive optical system in SR1
• First results for some grid lines
• To be improved with full laser commissioning
• How to use it and incorporate it into alignment
• Need full analysis software to routinely perform scans, convert to a geometrical profile of SCT movements, validate results
• Combination with track alignment - define stable periods? apply short-term corrections?
• Need experience with real SCT motions
• How to relate to integrated ID alignment including pixels and TRT, which have no FSI?
[Figure: one grid line, 20 minutes]

10. Integration - conditions data
• Lots of conditions data written online
• Configuration information from DAQ
• Calibration information from online calibration
• DCS data (to offline COOL)
• Need to start using this in offline reconstruction
• Optimal parameters for reconstruction
• Knowledge of dead/noisy modules (see the sketch below)
• Essential to have this working for startup
• Online/offline consistency checks
• Still requires a lot of work
• Make sure data is written routinely online
• Make it available in Athena to all clients
• In an integrated way - e.g. ISCT_Conditions
• Using the standard DB tools (IOVDbSvc)
• Database access issues for real data
• Available from Oracle at CERN/T1
• More difficult beyond T1s (some ideas…)
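To make the "integrated way" concrete, here is a minimal sketch of how a reconstruction client could consume dead/noisy module information. The ConditionsService class and its methods are hypothetical illustrations of the idea, not the real ISCT_Conditions or IOVDbSvc interfaces.

```python
# Hypothetical sketch of a reconstruction client using an integrated
# conditions interface: skip hits on noisy modules and tell the track
# scoring about dead ones. Not the actual Athena API.

class ConditionsService:
    def __init__(self, noisy_modules, dead_modules):
        self._noisy = set(noisy_modules)   # mask these hits entirely
        self._dead = set(dead_modules)     # no hits expected, don't penalise tracks

    def is_noisy(self, module_id):
        return module_id in self._noisy

    def is_dead(self, module_id):
        return module_id in self._dead

def prepare_clusters(raw_clusters, conditions):
    # Drop clusters on noisy modules before pattern recognition
    return [c for c in raw_clusters if not conditions.is_noisy(c.module_id)]

def expected_hits(crossed_modules, conditions):
    # Track scoring should not count holes on modules known to be dead
    return [m for m in crossed_modules if not conditions.is_dead(m)]
```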

11. Integration - prompt calibration and alignment
• Prompt calibration/alignment in 24h is extremely challenging
• Need to integrate Si alignment, TRT calibration, TRT alignment and beamspot finding in one process
• Using the ID calibration stream and perhaps express/cosmics
• Start of discussions about how to do this …
• FDR1 was a first instructive attempt …
• Si and TRT alignment only, offsite computing
• Spur to develop a ‘control framework’ to manage iterations - needs to be further enhanced (see the sketch below)
• Needs O(50 CPUs) and associated I/O and storage to complete in 24 hours - a big computing system
• Next priorities
• Fully integrate TRT calibration and alignment
• Beamspot-finding, interplay with alignment
• Use of cosmics and perhaps express-stream data
• … Very challenging to be ready for FDR2
• Is a 24h turnaround realistic for first data…?
[Photos: discussion with beer; sober design]
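A minimal sketch of the kind of loop such a control framework has to manage: accumulate residuals, solve for constants, apply them, and repeat until convergence or until the 24-hour budget runs out. All function names are hypothetical; the CPU count and tolerance are placeholders.

```python
# Illustrative control-framework loop for prompt alignment: iterate
# accumulate -> solve -> update until the constants stop moving or the
# wall-clock budget is exhausted. Functions and thresholds are hypothetical.
import time

def run_prompt_alignment(events, accumulate, solve, apply_constants,
                         max_iterations=10, tolerance_mm=0.001,
                         wall_clock_budget_s=24 * 3600):
    start = time.time()
    for iteration in range(max_iterations):
        residual_sums = accumulate(events)    # farmed out to O(50) CPUs
        corrections = solve(residual_sums)    # solve for alignment constants
        apply_constants(corrections)
        if max(abs(c) for c in corrections) < tolerance_mm:
            return iteration + 1              # converged
        if time.time() - start > wall_clock_budget_s:
            break                             # publish best-effort constants
    return None
```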

12. Robustness
• Expect the unexpected - as in M6
• Beware highly combinatoric algorithms, guard against unreasonably high occupancy (see the sketch below)
• Need the full conditions data chain to be working
• Mask noisy modules - inform reconstruction of dead ones, so track scoring etc. is aware
• Might well have dead areas in silicon due to cooling problems
• Hopefully all detectors will be working at some level from startup (they are all installed)
• An optimist said: ‘If we get it working at all, it will take a lot to kill it’ …
• A pessimist said: ‘The SCT barrel is a sad story, as you know’
• We have a good ‘toolkit’ of modular tracking software, just in case major parts are not working
• Robustness in Tier-0 reconstruction
• Keep going if at all possible - proper use of error & return codes etc.
[Figures: noisy SCT modules; hadronic shower in TRT - ‘This will never happen - will it?’]
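As an illustration of the occupancy guard and the "keep going" use of return codes, a short sketch, with placeholder thresholds and hypothetical function names:

```python
# Sketch of an occupancy guard for a combinatorial algorithm: if the event is
# pathologically busy, skip the expensive step and return a recoverable status
# code instead of letting the job die. Thresholds are placeholders.
RECOVERABLE, SUCCESS = 1, 0

def run_seeding(space_points, make_seeds, max_space_points=100_000):
    if len(space_points) > max_space_points:
        # Combinatorics would explode; flag the event and keep the job alive
        return RECOVERABLE, []
    return SUCCESS, make_seeds(space_points)
```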

13. Resources - event sizes
• Global ATLAS problem for AOD/ESD size
• AOD in release 14.0.0 was 860 kB
• Increased dramatically just as the release closed
• Since reduced by a factor ~2 with a crash effort
• AOD too big - fewer events on disk, multiplied access problems, reduced processing speed
• ID is a significant part of ESD and AOD
• Technical tricks (double to float), bit packing (see the sketch below)
• Make sure every bit counts, avoid overheads
• Think carefully about what really needs to be stored
• Flexibility - tailor content to particle type, e.g. store more for tracks identified as leptons
• Pileup is lurking around the corner…
• Big effect e.g. on vertexing EDM - optimisation will be required …
• Depending on LHC strategy, we may have significant pileup very soon
[Figures: full ATLAS AOD; ID alone; vertex EDM]
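To make the "every bit counts" point concrete, a small sketch of bit packing: a local hit coordinate stored as a 16-bit integer instead of an 8-byte double, given a known range and acceptable precision. The coordinate range and precision here are placeholders, not the actual ATLAS persistency values.

```python
# Illustration of bit packing for persistency: store a local hit position as
# a 16-bit integer instead of a double, given a known coordinate range.
# Range and resulting granularity below are placeholders.
import struct

RANGE_MM = 40.0                     # assumed local coordinate range [-40, 40) mm
SCALE = 65536 / (2 * RANGE_MM)

def pack_local_x(x_mm):
    code = int((x_mm + RANGE_MM) * SCALE)
    return struct.pack("<H", max(0, min(65535, code)))   # 2 bytes instead of 8

def unpack_local_x(buf):
    (code,) = struct.unpack("<H", buf)
    return code / SCALE - RANGE_MM                        # ~1.2 micron granularity
```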

14. Resources - simulation
• FATRAS fast tracking simulation now mature
• Good reuse of existing Tracking components
• Geometry, extrapolators; interactions using G4 modules or parameterisation (nuclear interactions)
• Fast track simulation, with optional full reconstruction
• Now integrating more and more parts of the digitisation model for added realism
• Impressive validation results
• Better than AtlFast I (parameterisation), faster than G4 full simulation (which is used by AtlFast II)
• Also becoming usable for muon tracking
• ATLAS simulation crisis
• CPU time x4-5 too big (with latest G4 models), event size x3 too big for the computing model
• NB - already expecting to simulate only 20% of the data
• FATRAS (+ shower libraries) could be part of the solution
• Need to make the simulation community aware
• Be prepared to tune FATRAS with full data
• Still a need for G4 for some studies, optimise the balance
[Figure: G4 + reconstruction efficiency]

15. Resources - calibration model
• ID calibration plans becoming clearer
• Calibrations to be done at ROD level, between fills: all detectors have tasks
• Calibration in the CAF
• TRT R-t/t0 calibration, e/π separation, pixel Lorentz angle, depletion depth, charge sharing, alignment of ID parts and integrated ID alignment, beamspot, …
• Up to O(100 CPUs) making use of the calibration and express streams at the CAF
• Need better definition/separation of monitoring and calibration tasks
• Concentrate the CAF on things which can be improved in the ~24h before bulk reconstruction
• Other tasks might be better done as monitoring in Tier-0, or even in the RODs (pixel map?)
• Offline is ‘easier’ than online, but CAF/Tier-0 is quasi-online - cannot fall behind
• Don’t forget calibration for reprocessing - at Tier-1/2, institutes …
• This is a big system - all requests use scarce resources (CPU, disk, I/O) and need to be well-justified and matched to available resources
• Also consider optimal ordering, and what can run in parallel
• Flexibility to adapt to the stability of real data

16. Choices and optimisation
• In many areas, blessed with N alternatives
• Vertexing algorithms, track fitters, pattern recognition strategies, muon reconstruction
• Obvious benefits, but also drawbacks
• Multiplication of effort (develop, validate, maintain, understand/use downstream)
• CPU, memory and event size penalties - probably the most critical at present…
• Delicate balance between the two, but time to ‘baseline’ whilst keeping other options open
• Favour ‘simple and straightforward’?
• Once we see real data, criteria may change!
• To achieve this, important to have
• Common interfaces / EDM wherever possible
• Ability to compare apples with apples
• Feedback from the clients - performance groups
• Many examples (also from trigger, e.g. EF fitters)
[Figures: tau reconstruction; tracks in jets (b-tag); secondary vertex]

17. EDM - ‘break-down of factorisation’
• For practical reasons, reconstruction/EDM is ‘linear’: PRDs → tracks/vertices → combined performance → analysis
• Examples of ‘pushing the limits’ due to real physics needs - getting the most from ATLAS
• Broad/split/flipped ROTs/clusters indicated by tracking
• Discussions - solution in sight for 14.2.0
• Kinematic fitting: B-physics and high-pT physics
• Support ‘extended’ track parameters with vertex / mass / error matrix in the tracking EDM (B-physics) - see the sketch below
• Can this be connected to kinematic fitting in the analysis world..? The jury is out (3 possibilities)
• Complex issues in the e/γ EDM (due to material)
• Need to go back and forth between ID and calo to best identify conversions (cut on electron ET, not pT, due to bremsstrahlung)
• Need N:N ID:calo matches to cover all e/γ cases in the EDM - tracks, clusters, error matrices, kinematic fits
• Did not conclude on this - will be a long process!
[Figures: charged/neutral tracks in tracking EDM, adding mass; Z→ee faking H→4e; zoo of e/γ objects]
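For illustration only, the ‘extended’ track parameters mentioned above could be pictured as the usual five perigee parameters plus a mass hypothesis with a 6x6 covariance, so kinematic-fit information travels with the track. This is a hypothetical data structure, not the actual tracking EDM class.

```python
# Hypothetical sketch of 'extended' track parameters: five perigee parameters
# plus a mass hypothesis and a 6x6 error matrix from a kinematic/vertex fit.
from dataclasses import dataclass
from typing import List

@dataclass
class ExtendedTrackParameters:
    d0: float                        # transverse impact parameter
    z0: float                        # longitudinal impact parameter
    phi0: float                      # azimuthal angle at perigee
    theta: float                     # polar angle
    q_over_p: float                  # signed inverse momentum
    mass: float                      # mass hypothesis from the fit
    covariance: List[List[float]]    # 6x6 error matrix including the mass term
```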

18. Getting ready for data - monitoring
• Subdetector and ID global monitoring well advanced
• Online & offline monitoring tested in cosmics
• Offline monitoring tested in FDR exercises
• Collisions look very different from cosmics - FDR is the first opportunity to do physics monitoring
• Some concerns / things to be improved …
• CPU and memory usage, code stability
• Archiving to COOL, DQMF tools
• Number and relevance of histograms
• Tuning, warning thresholds need real data
• Alignment monitoring is also well advanced
• Tested in FDR1, spotted some problems
• ‘Alignment’ monitoring has expanded to include combined-performance and physics monitoring
• Discussions on restructuring at Ringberg
• Difference between short- and long-term plots (what can only be checked with lots of data)
[Figures: TRT online monitoring (M6); TRT calibration problem in FDR1]

19. Getting ready for data - pre-collisions
• Already started to work with cosmics in the pit, need statistics and pixels!
• Reconstruction works well, very useful for debugging, alignment/calibration
• Record as many as possible before/during LHC commissioning
• Beam-gas events (single-beam running)
• Rates and triggering are very uncertain - a few Hz at most?? Will detectors be on?
• Tracks distributed along z - potentially useful
• Beam-halo events (1- & 2-beam running?)
• Again, rates and triggering are uncertain
• Horizontal cosmics - clearly useful if they can be triggered by MB scintillators
• In both cases, very hard to quantify in advance or justify a request - a ‘wait and see’ approach?
• Interactions with displaced IP (37 cm)
• Again, potentially useful; a more concrete scenario for simulation if effort is available
• Will this be done at start-up - come back later?
[Figure: old study of beam-gas (Athens, 2003), RH / M. Boonekamp, reconstruction with iPatRec, rate ~25 Hz]

20. When collisions come …
• Or … ‘I have nothing to offer you but blood, sweat, toil and tears’ …
• Do everything we can now to be prepared - make sure all software tools are in place
• Detector operation, finding tracks, calibration, alignment, material, physics …
• Is your black hole shelter ready? (This is not an OTSMU task)

21. Towards understanding the first data …
• Begin by looking at basic distributions
• Low-mass dimuons (already studied in FDR1)
• E/p distributions (studies starting)
• Will have to disentangle many effects
• Misalignment, material, B-field, simulation …
• Already some hints of how hard this will be
• Previous experiments have used ‘fudge factors’
• Scaling error matrices, smearing tracks (see the sketch below)
• Starting to develop the tools for this - can have significant effects, e.g. on b-tagging performance
• But will need to start simply …
[Figures: low-mass μμ; b-tagging, effect of error scaling; additional material; E/p - fraction in tails]
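A minimal sketch of what such ‘fudge factors’ amount to in practice: inflate the quoted track errors by a data/MC scale factor and, for simulation, smear the track parameters by the corresponding extra resolution. The scale factors below are placeholders, not measured values.

```python
# Sketch of the 'fudge factor' approach: scale the track error matrix and
# smear MC track parameters so that pulls come out right against data.
import random

def scale_covariance(cov, scale_factors):
    # cov[i][j] -> s_i * s_j * cov[i][j]
    n = len(cov)
    return [[scale_factors[i] * scale_factors[j] * cov[i][j]
             for j in range(n)] for i in range(n)]

def smear_parameter(value, sigma, scale):
    # Add the extra resolution implied by inflating sigma -> scale * sigma
    extra = sigma * (scale**2 - 1.0) ** 0.5 if scale > 1.0 else 0.0
    return value + random.gauss(0.0, extra)
```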

22. Conclusion: a few concerns / challenges / opportunities
• Where we are suffering and struggling …
• Software process and release preparation / validation
• Persistency - complex, manpower-intensive, limiting
• Alignment package restructuring falling behind
• Fragmentation of effort in some areas - need to agree a baseline and focus on it
• Where more effort is needed
• Global alignment issues (ID-MS, ID-calo)
• Triggering on cosmics in LHC fills - vital for constraining weak modes for alignment
• Communication with physics/performance groups - role of the Tracking group
• Some known unknowns
• How well will the detector work - hardware, cooling, occupancies, backgrounds
• How well will the accelerator work - uptime/fill length, luminosity development
• These will have a big effect on our data processing and reprocessing capability
• Data distribution to outside institutes - the CAF will not do everything
• Make sure everyone can contribute to understanding the first data
