
What Can The Grid Do For Us?

Roger Jones, Lancaster University. Higgs-Maxwell meeting, RSE, Edinburgh, 7 February 2007.


Presentation Transcript


  1. What Can The Grid Do For Us? Roger Jones, Lancaster University. Higgs-Maxwell meeting, RSE, Edinburgh, 7/02/07

  2. Computing at the LHC: heavy ions, SUSY, top quark, exotics, Higgs, Standard Model, B physics

  3. The 4 LHC Experiments
  • ATLAS: general purpose: origin of mass, supersymmetry, micro black holes, where did the antimatter go? 2,000 scientists from 34 countries.
  • CMS: general-purpose detector like ATLAS; muon tracking, electromagnetic calorimeter, central tracking and hadron calorimeter. ATLAS and CMS have large and similar computing needs.
  • LHCb: to study the differences between matter and antimatter, producing over 100 million b and b-bar mesons each year. Smaller, LEP-like computing requirements.
  • ALICE: heavy-ion collisions, to create quark-gluon plasmas; 50,000 particles in each collision. Quite large computing needs, but a different time structure.

  4. The LHC Data Challenge. A particle collision = an event. Starting from this event… we are looking for this “signature”. Selectivity: 1 in 10^13, like looking for 1 person in a thousand world populations, or for a needle in 20 million haystacks! We need:
  • Detectors to record
  • Triggers to select
  • Computing and software to process/reconstruct/simulate
  • Computing, software & physicists to refine the selection and analyse

  5. LHC Data. The LHC will produce 40 million collisions per second per experiment. After filtering, ~500 collisions per second will be of interest. With about a Megabyte of data digitised for each collision, that is a recording rate of 0.5 Gigabytes/sec. Around 10^10 collisions are recorded each year, i.e. 10 Petabytes/year of data.
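  A quick back-of-the-envelope check of those numbers (a minimal sketch; the ~1 MB event size, ~500 Hz selected rate and ~10^10 events/year are the figures quoted on the slide):

    # Back-of-the-envelope check of the LHC data rates quoted above.
    events_per_second = 500      # selected collisions per second, after filtering
    event_size_mb = 1.0          # ~1 Megabyte digitised per collision
    events_per_year = 1e10       # collisions recorded each year

    recording_rate_gb_s = events_per_second * event_size_mb / 1000   # MB/s -> GB/s
    yearly_volume_pb = events_per_year * event_size_mb / 1e9         # MB -> PB
    implied_live_seconds = events_per_year / events_per_second

    print(f"recording rate ~ {recording_rate_gb_s:.1f} GB/s")        # ~0.5 GB/s
    print(f"yearly volume  ~ {yearly_volume_pb:.0f} PB/year")        # ~10 PB/year
    print(f"implied live time ~ {implied_live_seconds:.0e} s/year")  # ~2e7 s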

  6. The Solution: The Grid. Note: this is truly high-performance computing, but it requires more; the Grid is not designed for tightly-coupled problems, but it has many spin-offs.

  7. Why the Grid? There are several reasons why a Grid is the solution:
  • The expense requires input from every source possible
  • People will fund if the resources are local
  • Many agencies wanted to fund Grid development
  • Building a gigantic computing centre would be a real challenge
  • The user community is worldwide, and we should avoid a ‘map reference lottery’ when doing analysis
  • A distributed system is complex, but has redundancy
  • A shared Grid system makes better use of resources

  8. Grids – 3 Different Kinds
  • Computational Grid: lots of fast processors spread over a large physical area, interlinked by fast networks. Effectively a huge multiprocessor computer; shared memory is more difficult but do-able.
  • Data Grid: lots of databases linked by fast networks. Needs effective access to mass stores, and database query tools spanning sites and database systems. Examples: genomics, the Sloan Sky Survey, social sciences.
  • Sensor or Control Grid: wide-area sensor networks or remote control, connected by fast networks. Examples: flood-plain monitoring, accelerator control rooms.
  HEP needs a hybrid of the first two.

  9. Components. Whichever model you have, you need:
  • Hardware to run things on
  • Middleware to glue it together
  • Workload Management System (WMS)
  • Database of known files
  • Information system for available resources
  • Authentication and authorisation
  • File replication
  • Front ends to hide complexity from the users

  10. Using A Grid. This is the world without Grids: a confused and unhappy user. Sites are not identical:
  • Different Computers
  • Different Storage
  • Different Files
  • Different Usage Policies
  Each site consists of a storage element and a compute element. So let's introduce some grid infrastructure: security and an information system. Now the user knows what machines are out there and can communicate with them… however, where to submit the job is too complex a decision for the user alone. What is needed is an automated system: the Workload Management System (Resource Broker). The user submits the job and its input sandbox with

    edg-job-submit myjob.jdl

  where myjob.jdl describes the job:

    JobType = "Normal";
    Executable = "/sum.exe";
    InputData = "LF:testbed0-00019";
    DataAccessProtocol = "gridftp";
    InputSandbox = {"/home/user/WP1testC", "/home/file*", "/home/user/DATA/*"};
    OutputSandbox = {"sim.err", "test.out", "sim.log"};
    Requirements = other.GlueHostOperatingSystemName == "linux" &&
                   other.GlueHostOperatingSystemRelease == "Red Hat 6.2" &&
                   other.GlueCEPolicyMaxWallClockTime > 10000;
    Rank = other.GlueCEStateFreeCPUs;

  The Resource Broker consults the data location service and decides on the execution location, with Logging & Bookkeeping tracking the job. The output is retrieved with

    edg-job-get-output <dg-job-id>

  Now a happy user.
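  To make the Requirements and Rank expressions concrete, here is a minimal, purely illustrative Python sketch of the kind of matchmaking the Resource Broker performs: keep the compute elements that satisfy the requirements, then rank the survivors by free CPUs. The compute elements and their attribute values are invented for the example; the real broker works from the GLUE information system.

    # Illustrative Resource Broker matchmaking (compute elements are invented).
    compute_elements = [
        {"name": "ce01.example.org", "OSName": "linux", "OSRelease": "Red Hat 6.2",
         "MaxWallClockTime": 36000, "FreeCPUs": 12},
        {"name": "ce02.example.org", "OSName": "linux", "OSRelease": "Red Hat 6.2",
         "MaxWallClockTime": 7200, "FreeCPUs": 40},    # fails the wall-clock requirement
        {"name": "ce03.example.org", "OSName": "linux", "OSRelease": "Red Hat 6.2",
         "MaxWallClockTime": 50000, "FreeCPUs": 3},
    ]

    def satisfies_requirements(ce):
        # Mirrors the JDL Requirements expression above.
        return (ce["OSName"] == "linux"
                and ce["OSRelease"] == "Red Hat 6.2"
                and ce["MaxWallClockTime"] > 10000)

    candidates = [ce for ce in compute_elements if satisfies_requirements(ce)]
    best = max(candidates, key=lambda ce: ce["FreeCPUs"])   # Rank = free CPUs
    print("job would be dispatched to", best["name"])       # ce01.example.org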

  11. LHC Computing Grid (LCG). By late 2007: 100,000 CPUs at ~300 institutions worldwide, building on software being developed in advanced grid technology projects in both Europe and the USA. Currently running on around 200 sites. November 2006 snapshot: 177 sites, 27,981 CPUs, 12.8 PB of storage.

  12. The Grid Model with Tiers: the LHC Computing Facility (diagram). CERN sits at the centre, linked to national Tier-1 centres (USA: Brookhaven; UK; Taipei: ASCC; France; Italy; Germany; NL; …). The Tier-1s serve Tier-2 centres, labs and universities (in the UK: the LondonGrid, SouthGrid, NorthGrid and ScotGrid federations, including Lancs), down to physics department desktops.

  13. Facilities at CERN
  Tier-0:
  • Prompt first-pass processing of the express/calibration & physics streams with old calibrations: calibration, monitoring
  • Calibration tasks on prompt data
  • 24-48 hours later (longer for ALICE), process the full physics data streams with reasonable calibrations
  • Implies large data movement from T0 to T1s
  CERN Analysis Facility:
  • Access to ESD/reco and RAW/calibration data on demand
  • Essential for early calibration
  • Detector optimisation/algorithmic development

  14. Facilities Away from CERN
  Tier-1:
  • Responsible for a fraction of the RAW data
  • Reprocessing of RAW with better calibrations/algorithms
  • Group-based analysis/skimming
  ~60 Tier-2 centres distributed worldwide:
  • Monte Carlo simulation, producing ESD and AOD that are shipped to Tier-1 centres
  • On-demand user physics analysis of shared datasets (LHCb will do this at Tier-1s)
  • Limited access to ESD and RAW data sets
  • Simulation
  Tier-3 centres distributed worldwide:
  • Physics analysis
  • Data private and local; summary datasets

  15. Setting the Scale: ATLAS requirements at the start of 2008 and in 2010 (requirements table not reproduced here). For scale, 1 MSI2k is roughly 700 new PCs.
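  As a rough illustration of the SI2k unit, a minimal sketch using the slide's own rule of thumb (1 MSI2k ≈ 700 contemporary PCs) and capacities quoted later in the talk on the computing-model slide (Event Filter ~159 kSI2k, Tier-0 ~5 MSI2k, a Tier-2 centre ~200 kSI2k):

    # Rough conversion from SPECint2000 capacity to "new PC" equivalents,
    # using the slide's rule of thumb: 1 MSI2k ~ 700 PCs.
    PCS_PER_MSI2K = 700

    def pcs(capacity_ksi2k):
        """Approximate number of circa-2007 PCs for a capacity given in kSI2k."""
        return capacity_ksi2k / 1000 * PCS_PER_MSI2K

    for name, ksi2k in [("Event Filter", 159), ("Tier-0", 5000), ("a Tier-2 centre", 200)]:
        print(f"{name}: {ksi2k} kSI2k ~ {pcs(ksi2k):.0f} PCs")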

  16. Analysis computing model. The analysis model is typically broken into two components:
  • Scheduled central production of augmented analysis output (AOD), tuples & TAG collections from ESD/reco, done at Tier-1s
  • Chaotic user analysis of augmented AOD streams, tuples, new selections etc., plus individual user simulation and CPU-bound tasks matching the official MC production, done at Tier-2s (except for LHCb)

  17. GridPP – the UK’s contribution to LCG
  • 19 UK Universities, CCLRC (RAL & Daresbury) and CERN
  • Funded by PPARC: £33m over 2001-2007
  • Deployment of hardware, middleware and applications

  18. The HEP Challenges
  • Running and planned experiments: pre-existing systems must be evolved to Grid frameworks (“I wouldn’t start from here!”)
  • Multiple Grid deployments are spanned by the experiments
  • Existing software and data frameworks must be interfaced into the Grids: user interfaces, metadata systems
  • Integration and large-scale tests: production systems, data analysis (challenging!)
  • Seek commonalities between experiment needs: common projects (GANGA, SAMGrid), synergies between projects

  19. Common Project: GANGA
  • GANGA: user interface to the Grid for the ATLAS and LHCb experiments
  • Configures and submits applications to the Grid
  • Tailored for programs in the experiments’ Gaudi/Athena software framework, but easily adapted for others (e.g. the BaBar experiment)
  • Typical applications are private simulation and reconstruction jobs, and analysis packages for event selection and physics analysis of distributed data

  20. GANGA – Single front-end, Multiple back-ends
  • Three interfaces: GUI, Python Command Line Interface (c.f. pAthena), and Python scripts
  • The GUI aids job preparation; job splitting and merging work
  • Plug-ins allow definition of applications, e.g. Athena and Gaudi, AthenaMC, generic executable, ROOT
  • And of back-ends, currently: Fork, LSF, PBS, Condor, LCG/gLite, DIAL, DIRAC & PANDA
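  As a flavour of the Python interface, a minimal sketch of defining and submitting a job from the GANGA command line (illustrative only; class and attribute names vary between GANGA versions, and the executable used here is just a placeholder):

    # Inside the GANGA Python/IPython session, a job is an object whose
    # application and back-end are plug-ins, as described above.
    j = Job()
    j.application = Executable(exe='/bin/echo', args=['hello grid'])  # placeholder application
    j.backend = LCG()          # could equally be Local(), LSF(), Condor(), DIRAC(), ...
    j.submit()                 # the back-end plug-in handles the actual submission
    # j.status, j.peek() etc. can then be used to follow the job.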

  21. Production systems: SAMGrid. At the Tevatron Collider in Chicago, many of the questions for the LHC are already being addressed.
  • Experiment: DØ. Lower energies; less data, but still large
  • Grid tools allowed a large-scale reprocessing, which led to DØ being able to uncover direct evidence for Bs particles and antiparticles ‘mixing’
  • SAM (Sequential Access to Metadata) was developed by DØ; Runjob handles job workflow and associated metadata
  • SAMGrid evolves the system to Grid middleware

  22. Data Management
  • We are building data processing centres, not big CPUs: the hard problems are all in the storage and IO, at Tier-2 as well as Tier-1. We need to nail the ‘local IO problem’ very soon.
  • The experiments must provide the high-level data management tools. Examples: DDM from ATLAS, PhEDEx from CMS.
  • Two simple goals: manage the prioritized transfer of files from multiple sources to multiple sinks, and provide information on the cost (latency and rate) of any given transfer to enable scheduling.
  • E.g. PhEDEx manages large-scale transfers for CMS: large-scale meaning O(1000+) files per dataset, hundreds of TB under management so far, reaching nearly 20 TB a month. The main issues are with the underlying fabric.
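  The ‘two simple goals’ can be made concrete with a small, purely illustrative sketch (this is not DDM or PhEDEx code; the link latencies, rates and dataset sizes are invented placeholders): estimate the cost of each queued source-to-sink transfer from a latency-plus-size/rate model and serve the queue in priority order.

    # Illustrative prioritized transfer scheduling with a simple cost model.
    import heapq

    # (source, sink) -> (latency in s, sustained rate in MB/s); invented numbers.
    links = {
        ("CERN", "RAL"):   (5.0, 200.0),
        ("CERN", "Lancs"): (8.0, 60.0),
        ("RAL",  "Lancs"): (2.0, 80.0),
    }

    def transfer_cost(source, sink, size_mb):
        """Estimated time in seconds to move size_mb over the given link."""
        latency, rate = links[(source, sink)]
        return latency + size_mb / rate

    # Queue entries: (priority, dataset, source, sink, size in MB); lower = more urgent.
    queue = [
        (0, "calibration-stream", "CERN", "RAL",   50_000),
        (1, "physics-stream-A",   "CERN", "Lancs", 500_000),
        (2, "user-skim",          "RAL",  "Lancs", 20_000),
    ]
    heapq.heapify(queue)

    while queue:
        prio, name, src, dst, size = heapq.heappop(queue)
        print(f"[prio {prio}] {name}: {src} -> {dst}, "
              f"estimated {transfer_cost(src, dst, size):.0f} s")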

  23. Demonstrating Data Management Performance. The LHC needs to sustain ~GB/s data transfers per experiment, over months, into the disk/MSS systems at each national centre. We are just starting to demonstrate this, and tuning at each site is vital and painful. Example (plot not reproduced): FTS transfers at RAL during CSA06.
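  To see why sustained rather than peak rates are the issue, a minimal sketch of the volumes implied by roughly 1 GB/s held for a month (1 GB/s is just the order of magnitude quoted above):

    # Volumes implied by a sustained ~GB/s transfer rate.
    rate_gb_s = 1.0                     # order of magnitude, per experiment
    seconds_per_day = 86_400

    per_day_tb = rate_gb_s * seconds_per_day / 1000      # GB -> TB
    per_month_pb = per_day_tb * 30 / 1000                # TB -> PB

    print(f"~{per_day_tb:.0f} TB/day, ~{per_month_pb:.1f} PB/month sustained")
    # roughly 86 TB/day and 2.6 PB/month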

  24. The Hard Graft Starts Now. No more neat ideas, please!
  • In 2007/8, that is!
  • The focus now is (dull, tedious, hard) integration, deployment, testing, documentation
  • The excitement will come from the physics!
  But also many ‘big unsolved problems’ for later:
  • How can we store data more efficiently?
  • How can we compute more efficiently?
  • How should we use virtualisation?
  • How do we use really high-speed comms?
  • Will anyone ever make videoconferencing work properly?

  25. …Ask what you can do for the Grid! LHC computing is a team sport • But which one? • A horrendous cliché, but the key to our success! • We are all working towards the same goal…

  26. Credo for Success: keep local stuff local
  • Aim of the Grid: avoid central points of congestion
  • Present coherent local interfaces, but reduce global state
  • This also applies to contacts with sites: ‘users’ / ‘experiments’ must work with sites directly
  • This requires real effort from the local experiment group; ‘up the chain of command and down again’ does not work
  • NB: also applies to ‘strategic’ discussions and resource planning

  27. The Year Coming: the hard work starts here! HEP New Year’s resolutions:
  • We must be (ruthlessly) pragmatic to reach first data
  • We must work closely with each of our computing centres
  • We must stick to the plan and avoid surprises
  • We must give recognition to those working behind the scenes
  • We will do physics in 2007!

  28. Summary
  • We have come a long way
  • Grids are not an option, they are a necessity
  • Scheduled production is largely solved
  • Chaotic analysis, data management & serving many users are the mountains we are climbing now
  • Users are important to getting everything working
  • ‘No Pain, No Gain!’

  29. One Last Thing…

  30. The Computing Model (diagram)
  • Detector to Event Builder at ~Pb/sec; Event Builder to Event Filter (~159 kSI2k) at 10 GB/sec. Some data for calibration and monitoring go to the institutes, and calibrations flow back (450 Mb/sec).
  • Tier 0 at CERN (~5 MSI2k, with MSS): calibration and first processing; ships ~300 MB/s per Tier-1 per experiment.
  • Tier 1: regional centres with MSS, e.g. the UK Regional Centre (RAL), US Regional Centre, Italian Regional Centre and Spanish Regional Centre (PIC); reprocessing and group analysis; 622 Mb/s links onwards.
  • Tier 2: centres of ~200 kSI2k each, e.g. the Northern Tier of Lancaster (~0.25 TIPS), Liverpool, Manchester and Sheffield; analysis and simulation. The average Tier 2 has ~25 physicists working on one or more channels; roughly 3 Tier 2s should hold the full AOD, TAG & relevant Physics Group summary data; Tier 2s do the bulk of simulation.
  • Physics data cache feeding desktop workstations at 100-1000 MB/s.

  31. ATLAS “average” Tier-1 Internal Data Flow (2008) (diagram: flows between tape, the Tier-0 disk buffer, the CPU farm, disk storage, the other Tier-1s and each Tier-2; plus simulation and analysis data flow). Characteristic per-stream figures from the diagram:

  Data stream          File size    Rate        Files/day   Bandwidth   Volume/day
  RAW                  1.6 GB       0.02 Hz     1.7K        32 MB/s     2.7 TB
  ESD                  0.5 GB       0.02 Hz     1.7K        10 MB/s     0.8 TB
  AOD                  10 MB        0.2 Hz      17K         2 MB/s      0.16 TB
  AODm                 500 MB       0.036 Hz    3.1K        18 MB/s     1.44 TB
  AODm                 500 MB       0.04 Hz     3.4K        20 MB/s     1.6 TB
  AODm                 500 MB       0.004 Hz    0.34K       2 MB/s      0.16 TB
  RAW+ESD2+AODm2       (combined)   0.044 Hz    3.74K       44 MB/s     3.66 TB
  (combined flow)      -            1 Hz        85K         720 MB/s    -
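  A minimal sketch checking that the quoted bandwidths, file counts and daily volumes follow from the file sizes and rates above (agreement is to the rounding used on the slide):

    # Consistency check: bandwidth = file size x rate; daily figures x 86 400 s.
    flows = [
        # (stream, file size in MB, rate in Hz)
        ("RAW",  1600, 0.02),
        ("ESD",   500, 0.02),
        ("AOD",    10, 0.2),
        ("AODm",  500, 0.04),
    ]

    SECONDS_PER_DAY = 86_400
    for stream, size_mb, rate_hz in flows:
        bandwidth_mb_s = size_mb * rate_hz
        files_per_day = rate_hz * SECONDS_PER_DAY
        volume_tb_day = bandwidth_mb_s * SECONDS_PER_DAY / 1e6    # MB -> TB
        print(f"{stream}: {bandwidth_mb_s:.0f} MB/s, "
              f"{files_per_day/1000:.1f}K files/day, {volume_tb_day:.2f} TB/day")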
