WLCG LHCC mini-review LHCb Summary

Outline
  • Activities in 2008: summary
  • Status of DIRAC
  • Activities in 2009: outlook
  • Resources in 2009-10

Tier1s (re-)configuration
  • LFC mirror, ConditionsDB replication
    • DB replication using 3D from CERN to all Tier1s
      • In place for the whole year
    • LFC service for scalability and redundancy
      • Some problems at GridKa
  • Site SE migration (winter 08-09)
    • RAL (dCache to Castor2)
      • T0D1 migration went rather smoothly (FTS copy of files)
      • T1D0 migration extremely painful (staging tape by tape)
        • Took several months
    • PIC (Castor1 to dCache for T1D0)
      • Went very smoothly without file copy (file migration to Enstore)
    • CNAF (Castor2 to StoRM for TxD1)
      • Performed in May
  • SURLs migrated to SRM v2.2 end-points (October); see the sketch below
    • Needed since the DC06 datasets have been in use for 2.5 years now
    • No dependency on SRM v1
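
To make the end-point change concrete, here is a minimal sketch of the SURL rewrite involved; the host name, port and storage path are illustrative assumptions, not actual LHCb catalogue entries.

```python
# Minimal sketch (not LHCb's actual migration tooling) of rewriting an
# SRM v1-style SURL into the SRM v2.2 web-service form.  The host, port and
# storage path below are illustrative assumptions.

def to_srm_v2(surl_v1: str, port: int = 8443) -> str:
    """Rewrite srm://host/path into srm://host:port/srm/managerv2?SFN=/path."""
    assert surl_v1.startswith("srm://")
    rest = surl_v1[len("srm://"):]
    host, _, path = rest.partition("/")
    host = host.split(":")[0]          # drop any explicit port in the old SURL
    return f"srm://{host}:{port}/srm/managerv2?SFN=/{path}"

old = "srm://srm-lhcb.cern.ch/castor/cern.ch/grid/lhcb/data/file.raw"
print(to_srm_v2(old))
# srm://srm-lhcb.cern.ch:8443/srm/managerv2?SFN=/castor/cern.ch/grid/lhcb/data/file.raw
```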

Pit 8 to Tier0-Castor transfers
  • First weeks in February: continuous transfers at low rate
  • As of 18 Feb: nominal rate (70 MB/s) with ~50% duty cycle (a rough daily-volume estimate is sketched below)
    • A few longer stops for SW upgrades
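
To put these numbers in perspective, a back-of-the-envelope estimate based only on the figures quoted above (70 MB/s nominal, ~50% duty cycle) is sketched below; it is not an official LHCb figure.

```python
# Back-of-the-envelope estimate from the numbers quoted above
# (assumption: 1 MB = 10**6 bytes).
nominal_rate_mb_s = 70          # pit 8 -> Tier0 nominal rate
duty_cycle = 0.5                # ~50% duty cycle as of 18 Feb
seconds_per_day = 86_400

average_rate = nominal_rate_mb_s * duty_cycle            # 35 MB/s sustained
daily_volume_tb = average_rate * seconds_per_day / 1e6   # ~3.0 TB/day
print(f"{average_rate} MB/s average, ~{daily_volume_tb:.1f} TB/day")
```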

Tier1 Transfers

Migration

Castor migration

May CCRC: reconstruction

  • 41.2k reconstruction jobs submitted
  • 27.6k jobs proceeded to the Done state
  • Done/created = 27.6k / 41.2k ≈ 67%

cosmics and first beam data
Cosmics and first beam data
  • Cosmics used for detector commissioning
    • Of course very few detectors are hit!
    • Allows internal time alignment of subdetectors
      • Using readout of consecutive 25 ns slots
    • Partial time alignment between detectors
      • Shifting the timing by 12.5 ns and equalising the populations in consecutive bins… (a toy illustration follows this list)
    • All subdetectors included in global runs as of end August
  • TED data
    • LHCb was first to see tracks coming from the injection line!
    • Single shots with ~2 muons/cm², but only once every 48 s!
    • First tracks in the VeLo
    • Allowed rough global detector time alignment (~2ns)
  • 10th September
    • Only muons, calorimeters and, for a short time, the OT
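
As a toy illustration of the bin-equalisation idea mentioned above (not the actual LHCb alignment code), the sketch below scans timing shifts in 12.5 ns steps and keeps the one that best balances the hit populations of two consecutive 25 ns readout slots; the hit model and all numbers are invented.

```python
# Toy illustration of coarse inter-detector time alignment: shift the sampling
# phase in 12.5 ns steps and keep the setting that best equalises the hit
# populations in two consecutive 25 ns slots.  All numbers are invented.
import random

def hit_times(true_offset_ns, n=10_000, spread_ns=4.0):
    """Simulated hit times relative to the 25 ns clock edge."""
    return [random.gauss(true_offset_ns, spread_ns) for _ in range(n)]

def bin_asymmetry(times, shift_ns):
    """|N(previous slot) - N(next slot)| after applying a timing shift."""
    early = sum(1 for t in times if (t - shift_ns) < 0)
    late = len(times) - early
    return abs(early - late)

times = hit_times(true_offset_ns=11.0)
candidate_shifts = [i * 12.5 for i in range(-2, 3)]   # 12.5 ns steps
best = min(candidate_shifts, key=lambda s: bin_asymmetry(times, s))
print(f"best coarse shift: {best} ns")                # expect 12.5 ns here
```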

DIRAC3 put in production
  • Production activities
    • Started in July
    • Simulation, reconstruction, stripping
      • Includes file distribution strategy, failover mechanism
      • File access using local access protocol (rootd, rfio, (gsi)dcap, xrootd)
      • Commissioned alternative method: copy to local disk (see the sketch after this list)
        • Drawbacks: non-guaranteed space, lower CPU efficiency, additional network traffic (file possibly copied from a remote site)
    • Failover using VOBOXes
      • File transfers (delegated to FTS)
      • LFC registration
      • Internal DIRAC operations (bookkeeping, job monitoring…)
  • Analysis
    • Started in September
    • Ganga available for DIRAC3 in November
    • DIRAC2 de-commissioned on January 12th
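
The input-data access strategy described above can be summarised roughly as follows. This is only a sketch, not DIRAC code; open_with_protocol and copy_to_scratch are hypothetical helpers standing in for the real plumbing.

```python
# Sketch of the input-data access strategy described above (not DIRAC code):
# try the site's local access protocols first, then fall back to copying the
# file to local scratch disk.
PROTOCOL_PREFERENCE = ["xrootd", "dcap", "gsidcap", "rfio", "rootd"]

def access_input_file(lfn, site_protocols, open_with_protocol, copy_to_scratch):
    """Return a handle (direct protocol access) or a local path (copy fallback)."""
    for proto in PROTOCOL_PREFERENCE:
        if proto not in site_protocols:
            continue
        try:
            return open_with_protocol(proto, lfn)   # direct remote access
        except IOError:
            continue                                # try the next protocol
    # Fallback: copy to local disk -- non-guaranteed space, extra network
    # traffic, but works when no local protocol is usable.
    return copy_to_scratch(lfn)
```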

Issues in 2008
  • Data Management
    • Site configuration (non-scaling)
    • SRM v2.2 still not fully mature (e.g. pinning)
    • Many issues with StorageWare (mainly dCache)
  • Workload Management
    • Moved to the gLite WMS, but still many issues with it (e.g. mix-up of identities); better scaling behaviour than the LCG-RB though
    • LHCb moved to using “generic pilot jobs” (i.e. the pilot can execute workload from any user or production); see the sketch after this list
      • Not switching identity yet (gLexec / SCAS not available)
      • Not a show-stopper: required by sites, not by LHCb
  • Middleware deployment
    • LHCb distributes the client middleware
      • From distribution in the LCG-AA
      • Necessary to ensure bug fixes are available
      • Allows multiple platforms (OS, architecture, Python version)
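
A minimal sketch of the generic-pilot idea follows (assumed task-queue API, not the real DIRAC pilot): the pilot advertises its local capabilities and then pulls whatever matching payload exists, regardless of which user or production submitted it.

```python
# Minimal sketch of a "generic pilot job" (not the actual DIRAC pilot):
# check the local environment, then repeatedly ask a central task queue for
# *any* matching payload -- user analysis or production alike.
import platform

def pilot_loop(task_queue, max_payloads=10):
    capabilities = {
        "platform": platform.machine(),
        "os": platform.system(),
        "local_se": "SITE-disk",          # assumed site storage-element tag
    }
    for _ in range(max_payloads):
        payload = task_queue.match_payload(capabilities)   # hypothetical API
        if payload is None:
            break                          # nothing to do: pilot exits cleanly
        # Without gLexec/SCAS the payload runs under the pilot identity,
        # as noted in the bullet above.
        payload.run()
```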

LHCb Computing Operations
  • Production manager
    • Schedules production work, sets up and checks workflows, reports to LHCb operations
  • Computing shifters
    • Computing Operations shifter (pool of ~12 shifters)
      • Covers 14 h/day, 7 days/week
    • Data Quality shifter
      • Covers 8 h/day, 7 days/week
    • Both are based in the LHCb Computing Control room (2-R-014)
  • Daily DQ and Operations meetings
    • Week days (twice a week during shutdowns)
  • Grid Expert on-call
    • On duty for a week
    • Runs the operations meetings
  • Grid Team (~6 FTEs needed, ~2 missing)
    • Shared responsibilities (WMS, DMS, SAM, Bookkeeping…)

Plans for 2009
  • Commissioning for 2009-10 data taking (FEST’09)
    • See next slides
  • Simulation
    • Replacing DC06 datasets
      • Signal and background samples (~300 Mevts)
      • Minimum bias for L0 and HLT commissioning (~100 Mevts)
      • Used for CP-violation performance studies
      • Nominal LHC settings (7 TeV, 25 ns, 2×10³² cm⁻²s⁻¹)
    • Tuning stripping and HLT for 2010
      • 4-5 TeV, 50 ns (no spill-over), 10³² cm⁻²s⁻¹
      • Benchmark channels for first physics studies
        • B→µµ, Γs, B→Dh, Bs→J/ψϕ, B→K*µµ …
      • Large minimum bias samples (~1 min of LHC running)
      • Stripping performance required: ~ 50 Hz for benchmark channels
      • Tune HLT: efficiency vs retention, optimisation
    • Preparation for very first physics
      • 2 TeV, low luminosity
      • Large minimum bias sample (part used for FEST’09)

FEST’09
  • Aim
    • Replace the non-existent 2008 beam data with MC
    • Points to be tested
      • L0 (Hardware trigger) strategy
        • Emulated in software
      • HLT strategy
        • First data (loose trigger)
        • Higher lumi/energy data (b-physics trigger)
      • Online detector monitoring
        • Based on event selection from the HLT, e.g. J/ψ events
        • Automatic detection of detector problems
      • Data streaming (see the sketch after this list)
        • Physics stream (all triggers) and calibration stream (subset of triggers, typically 5 Hz)
      • Alignment and calibration loop
        • Trigger re-alignment
        • Run alignment processes
        • Validate new alignment (based on calibration stream)
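
A sketch of the streaming rule described above, assuming illustrative trigger-line names: every accepted event goes to the physics stream, and events that also fire a small set of calibration lines are written to the lower-rate calibration stream as well.

```python
# Illustrative streaming rule (trigger-line names are assumptions, not the
# real LHCb line names): all accepted events -> physics stream; events firing
# a calibration line -> also the calibration stream (typically a few Hz).
CALIBRATION_LINES = {"Hlt2JPsiMuMu", "Hlt2CalibTracking"}   # assumed names

def route_event(fired_lines):
    """Return the list of streams an accepted event should be written to."""
    streams = ["physics"]                       # all triggers
    if fired_lines & CALIBRATION_LINES:
        streams.append("calibration")           # subset of triggers, ~5 Hz
    return streams

print(route_event({"Hlt2JPsiMuMu", "Hlt2Topo2Body"}))   # ['physics', 'calibration']
```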

FEST’09 runs
  • FEST activity
    • Define running conditions (rate, HLT version + config)
    • Start runs from the Control System
      • Events are injected and follow the normal path
    • File export to Tier0 and distribution to Tier1s
    • Automatic reconstruction jobs at CERN and Tier1s
      • Commission Data Quality green-light
  • Short test periods
    • Typically one full week, plus ½ to 1 day every week for tests
    • Depending on results, take an interval of a few weeks for fixing problems
  • Vary conditions (an illustrative parameter set is sketched after this list)
    • L0 parameters
    • Event rates
    • HLT parameters
    • Trigger calibration and alignment loop
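
For concreteness, a hypothetical description of one set of FEST’09 running conditions is sketched below; all keys and values are illustrative only, not actual LHCb settings.

```python
# Hypothetical run-condition description for a FEST'09 test week, simply to
# make the list of knobs above concrete; every value is an assumption.
fest_run_conditions = {
    "event_rate_hz": 2000,            # injected event rate
    "l0": {"emulation": True, "thresholds": "nominal-2009"},
    "hlt": {"version": "assumed-tag", "config": "loose-trigger"},
    "calibration_loop": {"enabled": True, "stream_rate_hz": 5},
}
```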

Resources (very preliminary)
  • Consider 2009-10 as a whole (new LHC schedule)
    • Real data
      • Split the year in two parts (see the estimate below):
        • 0.5×10⁶ s at low luminosity (LHC phase 1)
        • 3×10⁶ s at higher luminosity (1×10³² cm⁻²s⁻¹) (LHC phase 2)
      • Trigger rate independent of luminosity and energy: 2 kHz
    • Simulation: 2×10⁹ events (nominal year) over the 2 years
  • New assumptions for (re-)processing and analysis
    • More re-processings during LHC-phase1
    • Add calibration checks (done at CERN)
    • Envision more analysis at CERN with first data
      • Increase from 25% (TDR) to 50% (phase1) and 35% (phase2)
      • Include SW development and testing (LXBATCH)
    • Adjust event sizes and CPU needs to current estimates
      • Important effort to reduce data size (packed format for rDST, DST, µDST…)
      • Use new HEP-SPEC06 benchmarking
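
Combining the numbers above gives the rough scale of the event samples; the short sketch below does only the event-count bookkeeping (event sizes and CPU needs are deliberately left out, since they are being re-estimated).

```python
# Rough event-count bookkeeping implied by the assumptions above.
trigger_rate_hz = 2_000
live_seconds = 0.5e6 + 3e6           # LHC phase 1 + phase 2
real_data_events = trigger_rate_hz * live_seconds    # 7.0e9 events
simulated_events = 2e9                               # over 2009-10
print(f"real data: {real_data_events:.1e} events, simulation: {simulated_events:.1e}")
```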

Resources (cont’d)
  • CERN usage
    • Tier0:
      • Real data recording, export to Tier1s
      • First pass reconstruction of ~85% of raw data
      • Reprocessing (in the future, foresee using also the Online HLT farm)
    • CAF (“Calibration and Alignment Facility”)
      • Dedicated LXBATCH resources
      • Detector studies, alignment and calibration
    • CAF (“CERN Analysis Facility”)
      • Part of Grid distributed analysis facilities (estimate 40% in 2009-10)
      • Histograms and interactive analysis (lxplus, desktops/laptops)
  • Tier1 usage
    • Reconstruction
      • First pass during data taking, reprocessing
    • Analysis facilities
      • Grid distributed analysis
      • Local storage for users’ data (LHCb_USER SRM space)

Conclusions
  • 2008
    • CCRC very useful for LHCb (although, given LHCb’s low throughput, running simultaneously with the other experiments was not essential)
    • DIRAC3 fully commissioned
      • Production in July
      • Analysis in November
      • As of now, called DIRAC
    • Last processing on DC06
      • Analysis will continue in 2009
    • Commission simulation and reconstruction for real data
  • 2009-10
    • Large simulation requests for replacing DC06, preparing 2009-10
    • FEST’09: ~1 week a month and 1 day a week
    • Resource requirements being prepared for WLCG workshop in March and C-RRB in April
