Development of LHCb Computing Model (F Harris)

  1. Development of LHCb Computing Model (F Harris). Overview of the proposed workplan to produce a ‘baseline computing model for LHCb’.

  2. WHY are we worrying NOW about this? • HOFFMANN REVIEW (starting Jan 2000) • How will the LHC experiments do their computing? Answers due in late 2000 • The basic logical data flow model, patterns of use, resources for tasks • The preferred distributed resource model (CERN, regions, institutes) • Computing MoUs in 2001 • Countries (UK, Germany, …) are planning now for ‘development’ facilities

  3. Proposed project organisation to do the work: Tasks and Deliverables (1) • Logical data flow model (all datasets and processing tasks) • Data Flow Model Specification • Resource requirements • Data volumes, rates, CPU needs by task (these are the essential parameters for model development) - measure current status and predict the future (see the sketch below) • URD giving distributions • Use Cases • Map demands for reconstruction, simulation, analysis, calibration and alignment onto the model (e.g. physics groups working) • Document ‘patterns of usage’ and resulting demands on resources - ‘workflow specification’
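
A hedged, back-of-envelope illustration (Python) of how the ‘data volumes, rates, CPU needs by task’ parameters feed the model. The 200 Hz rate and 200 kB raw event size are taken from slide 4; the live seconds per year and the CPU seconds per event are illustrative assumptions, not LHCb figures.

# Back-of-envelope resource estimate for one processing task.
# Rate and raw event size are from slide 4; the live time per year and the
# per-event reconstruction cost are assumed placeholders, not LHCb numbers.

RATE_HZ = 200              # trigger rate (slide 4)
RAW_EVENT_KB = 200         # raw event size in kB (slide 4)
LIVE_SECONDS = 1.0e7       # assumed effective data-taking seconds per year
CPU_SEC_PER_EVENT = 5.0    # assumed reconstruction time per event on one CPU

events_per_year = RATE_HZ * LIVE_SECONDS
raw_tb_per_year = events_per_year * RAW_EVENT_KB / 1e9      # kB -> TB
cpus_to_keep_up = events_per_year * CPU_SEC_PER_EVENT / LIVE_SECONDS

print(f"events per year   : {events_per_year:.2e}")
print(f"raw data per year : {raw_tb_per_year:.0f} TB")
print(f"CPUs to keep up   : {cpus_to_keep_up:.0f}")

With these placeholder inputs the estimate is about 2e9 events, 400 TB of raw data and 1000 CPUs per year of running; the point is the parameterisation, not the particular numbers.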

  4. LHCb datasets and processing stages (CPU and storage requirements must be updated). [Diagram: data flow at 200 Hz with per-event dataset sizes of 200 kB, 100 kB, 10 kB, 150 kB, 70 kB, 0.1 kB and 0.1 kB.]
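
As a hedged companion to the figure above, the loop below (Python) turns each per-event size listed on the slide into an annual volume at 200 Hz. Which size belongs to which dataset is not recoverable from this transcript, so the labels are positional, and the 1e7 live seconds per year is an assumption.

# Annual storage implied by each per-event size quoted on slide 4.
# Dataset labels are positional (the mapping to raw/DST/tag etc. is not
# recoverable here); 1e7 live seconds per year is an assumed placeholder.

RATE_HZ = 200
LIVE_SECONDS = 1.0e7
EVENT_SIZES_KB = [200, 100, 10, 150, 70, 0.1, 0.1]   # as listed on the slide

events = RATE_HZ * LIVE_SECONDS
for i, size_kb in enumerate(EVENT_SIZES_KB, start=1):
    tb_per_year = events * size_kb / 1e9              # kB -> TB
    print(f"dataset {i}: {size_kb:>6} kB/event -> {tb_per_year:8.1f} TB/year")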

  5. A general view of Analysis (G Corti) (? patterns of group and user analysis). [Diagram: Data Acquisition delivers Raw Data, while the MC generator and Detector Simulation deliver simulated Raw Data plus MC “truth” data; Reconstruction turns Raw Data into the DST (reconstructed particles, primary vertex); (Group) Analysis and Physicist Analysis then work from the DST towards ‘reconstructed’ physics channels.]
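
A minimal sketch (Python) of the processing chain read off the diagram, written as stage/dataset edges. The stage and dataset names follow the slide where possible; the intermediate group-level output name is a placeholder, and the structure is illustrative rather than the actual GAUDI/LHCb design.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    inputs: tuple
    outputs: tuple

# Chain as read off the slide; 'group ntuples' is an assumed placeholder name.
CHAIN = [
    Stage("Data Acquisition", (), ("Raw Data",)),
    Stage("Detector Simulation", ("MC generator events",), ("Raw Data", "MC truth")),
    Stage("Reconstruction", ("Raw Data",), ("DST",)),
    Stage("Group Analysis", ("DST",), ("group ntuples",)),
    Stage("Physicist Analysis", ("group ntuples",), ("Physics channels",)),
]

for s in CHAIN:
    sources = ', '.join(s.inputs) or '(external)'
    print(f"{sources:>22} -> {s.name} -> {', '.join(s.outputs)}")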

  6. Status of simulated event production since June (E van Herwijnen)

  7. Tasks and Deliverables (2) • Resource distribution • Produce a description of the distribution of LHCb institutes, regional centres and resources (equipment and people), and their connectivity • Resource map with network connectivity • List of people and equipment… • Special requirements for remote working • (OS platforms, s/w distribution, videoconferencing…) • URD on ‘Remote working…’ • Technology tracking • (Follow PASTA; data management s/w, GAUDI data management…) • Technology trend figures • Capabilities of data management s/w

  8. http://nicewww.cern.ch/~les/monarc/lhc_farm_mock_up/index.htm Mock-up of an Offline Computing Facility for an LHC Experiment at CERN (Les Robertson, July 99, with ‘old’ experiment estimates) • Purpose • Investigate the feasibility of building LHC computing facilities using current cluster architectures and conservative assumptions about technology evolution • scale & performance • technology • power • footprint • cost • reliability • manageability
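
In the same spirit as the mock-up, the sketch below (Python) shows how ‘conservative assumptions about technology evolution’ turn a required capacity into a box count: extrapolate per-box performance with an assumed doubling time, then divide. Every number here is an assumption for illustration, not a figure from the Robertson study.

# Extrapolate per-box CPU performance and size the farm accordingly.
# All inputs are illustrative assumptions (arbitrary CPU units).

def boxes_needed(required, per_box_now, years_ahead, doubling_years=1.5):
    """Boxes needed once per-box performance has grown for years_ahead."""
    per_box_future = per_box_now * 2 ** (years_ahead / doubling_years)
    return required / per_box_future, per_box_future

required_capacity = 1.0e6    # assumed installed capacity target
per_box_today = 100.0        # assumed per-box performance today
n_boxes, per_box_2006 = boxes_needed(required_capacity, per_box_today, years_ahead=6)
print(f"per-box performance in ~2006 : {per_box_2006:.0f}")
print(f"boxes needed                 : {n_boxes:.0f}")

The doubling time is the lever to stress-test: a conservative 2-year doubling instead of 1.5 years roughly doubles the number of boxes, and hence the footprint, power and cost, for the same target capacity.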

  9. [Figure-only slide; no text transcribed.]

  10. CMS Offline Farm at CERN circa 2006 (lmr, for the MONARC study, April 1999). [Diagram of the farm and its networks: ~1400 processor boxes in 160 clusters and 40 sub-farms; 5400 disks in 340 arrays; 100 tape drives; CERN storage network 12 Gbps; farm network 480 Gbps*; LAN-WAN routers 250 Gbps; LAN-SAN routers; 0.8 Gbps from the DAQ; further link figures of 0.8, 1.5, 3*, 5, 8 and 12* Gbps. *Assumes all disk and tape traffic stays on the storage network; double these numbers if all disk and tape traffic goes through the LAN-SAN routers.]
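
Some simple arithmetic (Python) on the figures quoted in the diagram makes the aggregate numbers more concrete; the per-box and per-array values below are derived here and do not appear on the slide.

# Derived ratios from the farm diagram figures (not quoted on the slide).
boxes, clusters, sub_farms = 1400, 160, 40
disks, arrays, tape_drives = 5400, 340, 100
farm_network_gbps = 480            # starred figure: see note in the diagram

print(f"boxes per cluster      : {boxes / clusters:.1f}")
print(f"clusters per sub-farm  : {clusters / sub_farms:.1f}")
print(f"disks per array        : {disks / arrays:.1f}")
print(f"farm bandwidth per box : {farm_network_gbps / boxes * 1000:.0f} Mbps")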

  11. Tasks and Deliverables (3) • Candidate computing model evaluation • Map data and tasks to facilities (try different scenarios) • Develop a spreadsheet model with key parameters to get ‘average answers’ (a minimal sketch follows below) • Develop a simulation model with distributions • Evaluate the different models (performance, cost, risk…) • Establish a BASELINE MODEL • >> BASELINE COMPUTING MODEL together with cost, performance and risk analysis
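
A minimal ‘spreadsheet-style’ model of the kind proposed above (Python): a handful of key parameters, average answers, and a comparison of two resource-distribution scenarios. All parameters, unit costs and the overhead factor are illustrative assumptions, not LHCb or MONARC figures.

# Toy parametric model comparing a CERN-centric and a regional scenario.
# Every number below is an assumed placeholder for illustration only.

SCENARIOS = {
    "CERN-centric":   {"cern_fraction": 0.8},   # share of resources at CERN
    "Regional model": {"cern_fraction": 0.4},
}

TOTAL_CPU_UNITS   = 1.0e6    # assumed total CPU requirement (arbitrary units)
TOTAL_DISK_TB     = 500.0    # assumed total disk requirement
COST_PER_CPU_UNIT = 0.05     # assumed cost per CPU unit (arbitrary currency)
COST_PER_DISK_TB  = 20.0     # assumed cost per TB of disk
SPLIT_OVERHEAD    = 0.15     # assumed extra cost of fully distributed resources

for name, p in SCENARIOS.items():
    f = p["cern_fraction"]
    overhead = 1.0 + SPLIT_OVERHEAD * (1.0 - f)      # more splitting, more overhead
    cost = (TOTAL_CPU_UNITS * COST_PER_CPU_UNIT + TOTAL_DISK_TB * COST_PER_DISK_TB) * overhead
    print(f"{name:14s}  CERN share {f:.0%}  total cost ~ {cost:,.0f}")

A spreadsheet gives these ‘average answers’ quickly; the simulation model with distributions is what exposes queueing, network contention and other effects that averages hide.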

  12. The Structure of the Simulation Program (I Legrand). [Diagram of the MONARC simulation built around a SIMULATION ENGINE: a GUI with user directory, config files, initializing data and activity (job) definitions; the MONARC package (Regional Center, Farm, AMS, CPU Node, Disk, Tape) plus auxiliary tools, graphics and statistics; the Processing package (Job, Active Jobs, Physics Activities, Init Functions, Dynamically Loadable Modules); parameters and prices; the Network package (LAN, WAN, Messages); and the Data Model package (Data Container, Database, Database Index).]
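
A hedged, minimal discrete-event sketch (Python) of the kind of question such a simulation answers: jobs compete for CPU nodes at a centre and we record completion times. This illustrates the idea only; the real MONARC program is a much richer Java framework with the network, data model and cost packages shown above.

# Tiny discrete-event illustration: n_jobs submitted at t=0 share n_cpus
# CPU nodes; job lengths are drawn from an exponential distribution.
# Parameters are arbitrary placeholders.

import heapq
import random

def simulate(n_jobs=20, n_cpus=4, mean_job_hours=2.0, seed=1):
    random.seed(seed)
    cpu_free_at = [0.0] * n_cpus          # time at which each CPU node is next free
    heapq.heapify(cpu_free_at)
    finish_times = []
    for _ in range(n_jobs):
        start = heapq.heappop(cpu_free_at)              # earliest free CPU
        end = start + random.expovariate(1.0 / mean_job_hours)
        heapq.heappush(cpu_free_at, end)
        finish_times.append(end)
    return finish_times

times = simulate()
print(f"last job done after {max(times):.1f} h; mean completion {sum(times)/len(times):.1f} h")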

  13. Proposed composition and organisation of the working group • Contacts from each country • Contacts from other LHCb projects (can/will have multi-function people…) • DAQ • Reconstruction • Analysis • MONARC • IT (PASTA +?) • Project plan (constraint: timescales to match requests from the review…) • Monthly meetings? (with videoconferencing) • 1st meeting the week after LHCb week (first try at planning the execution of tasks) • Documentation • All on WWW (need a WEBMASTER)
