
Third LCB Workshop Distributed Computing and Regional Centres Session Harvey B. Newman (CIT)

Marseilles, September 29, 1999
http://l3www.cern.ch/~newman/marseillessep29.ppt
http://l3www.cern.ch/~newman/marseillessep29/index.htm





Presentation Transcript


1. Third LCB Workshop: Distributed Computing and Regional Centres Session
• Harvey B. Newman (CIT)
• Marseilles, September 29, 1999
• http://l3www.cern.ch/~newman/marseillessep29.ppt
• http://l3www.cern.ch/~newman/marseillessep29/index.htm

2. LHC Computing: Different from Previous Experiment Generations
• Geographical dispersion of people and resources
• Complexity of the detector and the LHC environment
• Scale: Petabytes per year of data (1800 physicists, 150 institutes, 32 countries)
• Major challenges associated with:
  • Coordinated use of distributed computing resources
  • Remote software development and physics analysis
  • Communication and collaboration at a distance
• R&D: new forms of distributed systems

3. HEP Bandwidth Needs & Price Evolution
• HEP GROWTH, 1989-1999: a factor of one to several hundred on the principal transoceanic links; a factor of up to 1000 in domestic academic and research nets
• HEP NEEDS, 1999-2006: continued study by ICFA-SCIC; the 1998 results of the ICFA-NTF show a factor of one to several hundred (2X per year)
• COSTS (to vendors): optical fibers and WDM now give a factor of > 2/year reduction. Limits of transmission speed, electronics, protocol speed? Price to HEP?
• Complex market, but an increased budget is likely to be needed. Reference BW/price evolution: ~1.5 times/year
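
To make the two growth rates above concrete, here is a minimal sketch (my arithmetic, not part of the slide) of the compounding gap between need growing at ~2X per year and bandwidth-per-unit-price improving at ~1.5X per year; the 1999-2006 horizon is the only other input.

```java
// Illustrative only: compounding of the ~2x/year bandwidth need against the
// ~1.5x/year reference BW/price evolution quoted on this slide.
public class BandwidthBudgetSketch {
    public static void main(String[] args) {
        double needGrowthPerYear = 2.0;      // ICFA-NTF result: ~2x per year
        double priceEvolutionPerYear = 1.5;  // reference BW/price evolution
        int years = 7;                       // 1999 -> 2006
        double needFactor = Math.pow(needGrowthPerYear, years);
        double costFactor = Math.pow(needGrowthPerYear / priceEvolutionPerYear, years);
        System.out.printf("Need grows by ~%.0fx over %d years%n", needFactor, years);
        System.out.printf("Budget at constant technology pricing grows by ~%.1fx%n", costFactor);
    }
}
```

With these inputs the budget pressure compounds to roughly a factor of 7-8 over the period, which is the sense in which an increased budget is "likely to be needed".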

4. Cost Evolution: CMS 1996 Versus 1999 Technology Tracking Team
• Compare the CMS 1996 estimates to the 1999 Technology Tracking Team projections for 2005:
• CPU: unit cost will be close to the early prediction
• Disk: will be more expensive (by ~2x) than the early prediction
• Tape: currently zero to 10% annual cost decrease (potential problem)
[Chart: CMS 1996 estimates]

5. LHC (and HENP) Computing and Software Challenges
• Software: modern languages, methods and tools are the key to managing complexity
• FORTRAN: the end of an era; OBJECTS: a coming of age
• “TRANSPARENT” access to data: location and storage-medium independence
• Data Grids: a new generation of data-intensive, network-distributed systems for analysis
  • A deep heterogeneous client/server hierarchy of up to 5 levels
  • An ensemble of tape and disk mass stores
  • LHC: object database federations
• Interaction of the software and data handling architectures: the emergence of new classes of operating systems

6. Four Experiments: the Petabyte to Exabyte Challenge
• ATLAS, CMS, ALICE, LHCb
• Higgs and new particles; quark-gluon plasma; CP violation
• Data written to tape: ~5 Petabytes/year and up (1 PB = 10^15 bytes)
• 0.1 to 1 Exabyte (1 EB = 10^18 bytes) (~2010 to ~2020?), total for the LHC experiments

7. To Solve: the HENP “Data Problem”
• While the proposed future computing and data handling facilities are large by present-day standards, they will not support FREE access, transport or reconstruction for more than a minute portion of the data.
• Need effective global strategies to handle and prioritise requests, based on both policies and marginal utility
• Strategies must be studied and prototyped to ensure viability: acceptable turnaround times; efficient resource utilization
• Problems to be explored: how to
  • Meet the demands of hundreds of users who need transparent access to local and remote data, in disk caches and tape stores
  • Prioritise hundreds to thousands of requests from local and remote communities
  • Ensure that the system is dimensioned “optimally” for the aggregate demand

8. MONARC: Models Of Networked Analysis At Regional Centers
• Caltech, CERN, Columbia, FNAL, Heidelberg, Helsinki, INFN, IN2P3, KEK, Marseilles, MPI, Munich, Orsay, Oxford, Tufts
• GOALS:
  • Specify the main parameters characterizing the Model's performance: throughputs, latencies
  • Develop “Baseline Models” in the “feasible” category
  • Verify resource requirement baselines (computing, data handling, networks)
• COROLLARIES:
  • Define the Analysis Process
  • Define RC architectures and services
  • Provide guidelines for the final Models
  • Provide a simulation toolset for further Model studies
[Diagram: Model circa 2006. CERN: 6x10^7 MIPS, 2000 TB, robot; Tier1 (e.g. FNAL/BNL): 4x10^6 MIPS, 200 TB, robot; University: n x 10^6 MIPS, 100 TB, robot; desktops at each site; 622 Mbit/s links throughout; optional air freight]
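
A quick sanity check (my arithmetic; the 50% usable fraction is an assumption echoing the "1/2 of 622 Mbps" figure used later in the baseline) of what one of the model's 622 Mbit/s links can carry in a year:

```java
// Rough annual capacity of a 622 Mbit/s CERN-Tier1 link, assuming ~half is
// usable for bulk data movement. Numbers are order-of-magnitude only.
public class LinkCapacitySketch {
    public static void main(String[] args) {
        double linkMbps = 622.0;       // model link speed (megabits per second)
        double usableFraction = 0.5;   // assumed share available for bulk transfers
        double bytesPerSecond = linkMbps * 1e6 / 8.0 * usableFraction;
        double secondsPerYear = 3.15e7;
        double tbPerYear = bytesPerSecond * secondsPerYear / 1e12;
        System.out.printf("Usable throughput: ~%.0f MB/s%n", bytesPerSecond / 1e6);
        System.out.printf("Annual capacity:   ~%.0f TB/year%n", tbPerYear);
    }
}
```

At ~39 MB/s sustained this is on the order of a petabyte per year at best, comparable to the annual raw data volume alone, which is one motivation for keeping bulk raw data at CERN and replicating the much smaller ESD/AOD/TAG products.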

9. MONARC General Conclusions on LHC Computing
• Following discussions of computing and network requirements, technology evolution and projected costs, support requirements, etc.:
• The scale of LHC “Computing” is such that it requires a worldwide effort to accumulate the necessary technical and financial resources
• The uncertainty in the affordable network BW implies that several scenarios of computing resource distribution must be developed
• A distributed hierarchy of computing centres will lead to better use of the financial and manpower resources of CERN, the Collaborations, and the nations involved than a highly centralised model focused at CERN
• Hence: the distributed model also provides better use of physics opportunities at the LHC by physicists and students
• At the top of the hierarchy is the CERN Centre, with the ability to perform all analysis-related functions, but not the capacity to carry them all out
• At the next step in the hierarchy is a collection of large, multi-service “Tier1 Regional Centres”, each with 10-20% of the CERN capacity devoted to one experiment
• There will be Tier2 or smaller special-purpose centres in many regions

10. Grid-Hierarchy Concept
• Matched to the worldwide-distributed collaboration structure of the LHC experiments
• Best suited for the multifaceted balance between:
  • Proximity of the data to centralized processing resources
  • Proximity to end-users for frequently accessed data
  • Efficient use of limited network bandwidth (especially transoceanic, and in many world regions) through organized caching/mirroring/replication
  • Appropriate use of (world-)regional and local computing and data handling resources
  • Effective involvement of scientists and students in each world region, in the data analysis and the physics

11. MONARC Phase 1 and 2 Deliverables
• September 1999: benchmark test validating the simulation (milestone completed)
• Fall 1999: a Baseline Model representing a possible (somewhat simplified) solution for LHC Computing
  • Baseline numbers for a set of system and analysis process parameters: CPU times, data volumes, frequency and site of jobs and data, ...
  • Reasonable “ranges” of parameters
  • “Derivatives”: how the effectiveness depends on some of the more sensitive parameters
• Agreement of the experiments on the reasonableness of the Baseline Model
• Chapter on Computing Models in the CMS and ATLAS Computing Technical Progress Reports

12. MONARC and Regional Centres
• MONARC RC Representative meetings in April and August
• Regional Centre planning is well advanced, with an optimistic outlook, in the US (FNAL for CMS; BNL for ATLAS), France (CCIN2P3) and Italy; proposals to be submitted late this year or early next
• Active R&D and prototyping underway, especially in the US, Italy and Japan; and in the UK (LHCb), Russia (MSU, ITEP), Finland (HIP/Tuovi)
• Discussions in the national communities also underway in Japan, Finland, Russia, the UK and Germany
• Varying situations, according to the funding structure and outlook
• Need for more active planning outside of the US, Europe, Japan and Russia; important for R&D and overall planning
• There is a near-term need to understand the level and sharing of support for LHC computing between CERN and the outside institutes, to enable the planning in several countries to advance
• MONARC + CMS/SCB assumption: “traditional” 1/3 : 2/3 sharing

13. MONARC Working Groups & Chairs
• “Analysis Process Design”: P. Capiluppi (Bologna, CMS)
• “Architectures”: Joel Butler (FNAL, CMS)
• “Simulation”: Krzysztof Sliwa (Tufts, ATLAS)
• “Testbeds”: Lamberto Luminari (Rome, ATLAS)
• “Steering” & “Regional Centres Committee”: Laura Perini (Milan, ATLAS), Harvey Newman (Caltech, CMS)

14. MONARC Architectures WG
• Discussion and study of site requirements:
  • Analysis task division between CERN and the RCs
  • Facilities required with different analysis scenarios and network bandwidths
  • Support required to (a) sustain the Centre, and (b) contribute effectively to the distributed system
• Reports:
  • Rough sizing estimates for a large LHC experiment facility
  • Computing architectures of existing experiments: LEP, FNAL Run 2, CERN fixed target (NA45, NA48), FNAL fixed target (KTeV, FOCUS)
  • Regional Centres for LHC Computing (functionality & services)
  • Computing architectures of future experiments (in progress): BaBar, RHIC, COMPASS
  • Conceptual designs, drawings and specifications for a candidate site architecture

15. Comparisons with an LHC-Sized Experiment: CMS or ATLAS
• [*] Total CPU for CMS or ATLAS: ~1.5-2 x 10^6 SI95 (current concepts; perhaps for 10^33 luminosity)

16. Architectural Sketch: One Major LHC Experiment, at CERN (L. Robertson)
• Mass-market commodity PC farms
• LAN-SAN and LAN-WAN “stars” (switch/routers)
• Tapes (many drives for ALICE): an archival medium only?

17. MONARC Architectures WG: Lessons and Challenges for LHC
• SCALE: ~100 times more CPU and ~10 times more data than CDF at Run 2 (2000-2003)
• DISTRIBUTION: mostly achieved in HEP only for simulation. For analysis (and some re-processing), it will not happen without advance planning and commitments
• REGIONAL CENTRES: require coherent support, continuity, and the ability to keep the code base, calibrations and job parameters up to date
• HETEROGENEITY: of facility architecture and mode of use, and of operating systems, must be accommodated
• FINANCIAL PLANNING: analysis of the early planning for the LEP era showed a definite tendency to underestimate the requirements (by more than an order of magnitude), partly due to budgetary considerations

18. Regional Centre Architecture: Example by I. Gaines
[Diagram: data import over the network from CERN and from Tier 2 & simulation centres, and on tapes; data export to CERN, Tier 2 centres and local institutes; tape mass storage & disk servers; database servers; desktops]
• Production Reconstruction (Raw/Sim → ESD): scheduled, predictable; experiment/physics groups
• Production Analysis (ESD → AOD, AOD → DPD): scheduled; physics groups
• Individual Analysis (AOD → DPD and plots): chaotic; physicists
• Support: physics software development, R&D systems and testbeds, info servers, code servers, web servers, telepresence servers, training, consulting, help desk
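
The three workload classes in this example differ mainly in what they read, what they write, and how predictable they are; the following sketch just tabulates that reading of the diagram (the grouping and wording are mine).

```java
// A compact restatement (my tabulation of the Gaines example) of the three
// workload classes and their data-tier transformations at a Regional Centre.
public class RegionalCentreWorkloads {
    public static void main(String[] args) {
        String[][] workloads = {
            // name,                       input,     output,          scheduling,               users
            {"Production Reconstruction", "Raw/Sim", "ESD",           "scheduled, predictable", "experiment / physics groups"},
            {"Production Analysis",       "ESD",     "AOD, DPD",      "scheduled",              "physics groups"},
            {"Individual Analysis",       "AOD",     "DPD and plots", "chaotic",                "physicists"},
        };
        for (String[] w : workloads) {
            System.out.printf("%-26s %-8s -> %-14s [%s; %s]%n", w[0], w[1], w[2], w[3], w[4]);
        }
    }
}
```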

19. MONARC Architectures WG: Regional Centre Facilities & Services
• Regional Centres should provide:
  • All technical and data services required to do physics analysis
  • All physics objects, tags and calibration data, and a significant fraction of the raw data
  • Caching or mirroring of calibration constants
  • Excellent network connectivity to CERN and to the region's users
  • Manpower to share in the development of common maintenance, validation and production software
  • A fair share of post- and re-reconstruction processing
  • Manpower to share in the work on common R&D projects
  • Service to members of other regions on a (?) best-effort basis
  • Excellent support services for training, documentation and troubleshooting, at the Centre or at remote sites served by it
  • A long-term commitment to staffing, hardware evolution and support for R&D, as part of the distributed data analysis architecture

20. MONARC Analysis Process WG
• “How much data is processed by how many people, how often, in how many places, with which priorities ...”
• Analysis Process design, initial steps:
  • Consider the number and type of processing and analysis jobs, frequency, number of events, data volumes, CPU, etc.
  • Consider physics goals, triggers, signal and background rates
• Studies covered reconstruction, selection/sample reduction (one or more passes), analysis and simulation
• Lessons from existing experiments are limited: each case is tuned to the detector, run conditions, physics goals and technology of its time
• Limited studies so far, from the user rather than the system point of view; more will follow as feedback from the simulations is obtained
• Limitations on CPU dictate a largely “Physics Analysis Group”-oriented approach to reprocessing of data, with regional (“local”) support for individual activities; this implies dependence on the RC hierarchy

21. MONARC Analysis Process: Initial Sharing Assumptions
• Assume similar computing capacity is available outside CERN for re-processing and data analysis [*]
• There is no allowance for event simulation and reconstruction of simulated data, which it is assumed will be performed entirely outside CERN [*]
• Investment, services and infrastructure should be optimised to reduce overall costs [TCO]
• Tape sharing makes sense if ALICE needs much more tape capacity at a different time of the year
• [*] The first two assumptions would likely result in “at least” a 1/3 : 2/3 CERN : outside ratio of resources (i.e., the outside share is likely to be larger)

22. MONARC Analysis Process Example

23. MONARC Analysis Process Baseline: Group-Oriented Analysis

24. MONARC Baseline Analysis Process: ATLAS/CMS Reconstruction Step

25. MONARC Analysis Model Baseline: Event Sizes and CPU Times
• Sizes:
  • Raw data: 1 MB/event
  • ESD: 100 KB/event
  • AOD: 10 KB/event
  • TAG or DPD: 1 KB/event
• CPU time in SI95 seconds (without ODBMS overhead, ~20%):
  • Creating ESD (from Raw): 350
  • Selecting ESD: 0.25
  • Creating AOD (from ESD): 2.5
  • Creating TAG (from AOD): 0.5
  • Analyzing TAG or DPD: 3.0
  • Analyzing AOD: 3.0
  • Analyzing ESD: 3.0
  • Analyzing RAW: 350
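
As a worked example of how these unit costs translate into capacity (my arithmetic, assuming ~10^9 raw events/year from ~1 PB at 1 MB/event and a nominal 3.15e7 seconds per year):

```java
// Hedged worked example using the baseline numbers above: sustained CPU for a
// single full reconstruction pass over one year of raw data.
public class ReconstructionCpuSketch {
    public static void main(String[] args) {
        double rawEventsPerYear = 1e9;      // ~1 PB / 1 MB per event (assumed)
        double si95SecPerEvent = 350;       // creating ESD from Raw (baseline)
        double odbmsOverhead = 1.20;        // ~20% ODBMS overhead quoted above
        double secondsPerYear = 3.15e7;
        double sustainedSI95 = rawEventsPerYear * si95SecPerEvent * odbmsOverhead / secondsPerYear;
        System.out.printf("Sustained CPU for one reconstruction pass: ~%.0f kSI95%n", sustainedSI95 / 1000);
        // ~13 kSI95 sustained for one pass; re-processing, selection, the analysis
        // load and peak-vs-average factors drive the totals toward the ~520 kSI95
        // CERN-centre figure in the following slides.
    }
}
```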

26. MONARC Analysis Model Baseline: ATLAS or CMS at the CERN Centre
• CPU power: 520 kSI95
• Disk space: 540 TB
• Tape capacity: 3 PB, 400 MB/sec
• Link speed to an RC: 40 MB/sec (1/2 of 622 Mbps)
• Raw data: 100%, 1-1.5 PB/year
• ESD data: 100%, 100-150 TB/year
• Selected ESD: 100%, 20 TB/year [*]
• Revised ESD: 100%, 40 TB/year [*]
• AOD data: 100%, 2 TB/year [*]
• Revised AOD: 100%, 4 TB/year [*]
• TAG/DPD: 100%, 200 GB/year
• Simulated data: 100%, 100 TB/year (repository)
• [*] Covering all analysis groups; each selects ~1% of the total ESD or AOD data for a typical analysis

27. MONARC Analysis Model Baseline: ATLAS or CMS at the CERN Centre, with LHCb (Preliminary)
• ATLAS/CMS column as on the previous slide: 520 kSI95 CPU; 540 TB disk; 3 PB tape at 400 MB/sec; 40 MB/sec link to an RC; raw data 1-1.5 PB/year; ESD 100-150 TB/year; selected ESD 20 TB/year; revised ESD 40 TB/year; AOD 2 TB/year; revised AOD 4 TB/year; TAG/DPD 200 GB/year; simulated data 100 TB/year (repository), all held at 100%
• LHCb (preliminary) figures shown alongside: 300 kSI95 ?, 200 TB/yr, 140 TB/yr, ~1-10 TB/yr, ~70 TB/yr
• Some of these basic numbers require further study

28. MONARC Analysis Model Baseline: ATLAS or CMS “Typical” Tier1 RC
• CPU power: ~100 kSI95
• Disk space: ~100 TB
• Tape capacity: 300 TB, 100 MB/sec
• Link speed to Tier2: 10 MB/sec (1/2 of 155 Mbps)
• Raw data: 1%, 10-15 TB/year
• ESD data: 100%, 100-150 TB/year
• Selected ESD: 25%, 5 TB/year [*]
• Revised ESD: 25%, 10 TB/year [*]
• AOD data: 100%, 2 TB/year [**]
• Revised AOD: 100%, 4 TB/year [**]
• TAG/DPD: 100%, 200 GB/year
• Simulated data: 25%, 25 TB/year (repository)
• [*] Covering five analysis groups; each selects ~1% of the total ESD or AOD data for a typical analysis
• [**] Covering all analysis groups
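
To relate these "typical" Tier1 figures to the 1/3 : 2/3 CERN : outside sharing assumption quoted earlier, here is a rough sketch; the number of Tier1 centres per experiment and the Tier2 contribution are pure assumptions for illustration.

```java
// Illustrative only: CERN's share of total CPU for one experiment as a
// function of the (assumed) number of ~100 kSI95 Tier1 centres, with Tier2
// centres assumed to add a further 50% on top of the Tier1 total.
public class SharingRatioSketch {
    public static void main(String[] args) {
        double cernKSI95 = 520;     // CERN-centre baseline (slide 26)
        double tier1KSI95 = 100;    // "typical" Tier1 from this slide
        double tier2Boost = 1.5;    // assumed Tier1 + Tier2 multiplier
        for (int nTier1 = 4; nTier1 <= 7; nTier1++) {
            double outside = nTier1 * tier1KSI95 * tier2Boost;
            double cernShare = cernKSI95 / (cernKSI95 + outside);
            System.out.printf("%d Tier1 centres: outside ~%4.0f kSI95, CERN share ~%2.0f%%%n",
                              nTier1, outside, 100 * cernShare);
        }
    }
}
```

With half a dozen or more Tier1 centres plus Tier2s per experiment, CERN's share falls toward the ~1/3 level assumed by MONARC and CMS/SCB.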

29. MONARC Analysis Process WG: A Short List of Upcoming Issues
• Priorities, schedules and policies: production vs. analysis-group vs. individual activities; allowed percentage of access to higher data tiers (TAG / physics objects / reconstructed / RAW)
• Improved understanding of the Data Model and the ODBMS, including MC production and simulated data storage and access
• Mapping the Analysis Process onto heterogeneous distributed resources
• Determining the role of institutes' workgroup servers and desktops in the Regional Centre hierarchy
• Understanding how to manage persistent data: e.g. storage / migration / transport / re-compute strategies
• Deriving a methodology for Model testing and optimisation
• Metrics for evaluating the global efficiency of a Model: cost vs. throughput; turnaround; reliability of data access

30. MONARC Testbeds WG
• Measurements of key parameters governing the behavior and scalability of the Models
• A simple testbed configuration was defined and implemented:
  • Sun Solaris 2.6, C++ compiler version 4.2
  • Objectivity 5.1 with the /C++, /stl, /FTO and /Java options
  • Set up at CNAF, FNAL, Genova, Milano, Padova, Roma, KEK, Tufts and CERN
• Four “use case” applications using Objectivity: ATLASFAST++, GIOD/JavaCMS, the ATLAS 1 TB Milestone, CMS test beams
• System performance tests; simulation validation milestone carried out: see I. Legrand's talk

31. MONARC Testbed Systems

32. MONARC Testbeds WG: Isolation of Key Parameters
• Some parameters have been measured, installed in the MONARC simulation models, and used in the first round of Model validation:
  • The Objectivity AMS response-time function, and its dependence on object clustering, page size, data class hierarchy and access pattern
  • Mirroring and caching (e.g. with the Objectivity DRO option)
  • Scalability of the system under “stress”: performance as a function of the number of jobs, relative to single-job performance
  • Performance and bottlenecks for a variety of data access patterns: frequency of following TAG → AOD, AOD → ESD and ESD → RAW associations; data volume accessed remotely
  • Fraction on tape and on disk, as a function of network bandwidth; use of QoS

33. MONARC Simulation
• A CPU- and code-efficient approach to the simulation of distributed systems has been developed for MONARC. It:
  • provides an easy way to map the distributed data processing, transport and analysis tasks onto the simulation
  • can handle dynamically any Model configuration, including very elaborate ones with hundreds of interacting complex objects
  • can run on real distributed computer systems, and may interact with real components
• The Java (JDK 1.2) environment is well suited for developing a flexible, distributed, process-oriented simulation.
• The simulation program is still under development, and dedicated measurements to evaluate realistic parameters and “validate” the simulation program are in progress.
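
For readers unfamiliar with this style of simulation, the following is a minimal discrete-event sketch in Java, far simpler than the MONARC toolset and not derived from it: jobs with assumed CPU costs are scheduled onto a small farm and completion times are tracked.

```java
import java.util.ArrayDeque;
import java.util.PriorityQueue;
import java.util.Queue;

// Toy discrete-event simulation: jobs compete for a fixed number of CPUs.
// Farm size and job mix are invented for the example.
public class FarmSimulationSketch {
    public static void main(String[] args) {
        int cpus = 4;                                               // assumed farm size
        double[] jobCpuSeconds = {350, 350, 350, 3, 3, 3, 3, 350};  // e.g. reco + analysis jobs
        Queue<Double> pending = new ArrayDeque<>();
        for (double j : jobCpuSeconds) pending.add(j);

        PriorityQueue<Double> running = new PriorityQueue<>();      // completion times of busy CPUs
        double now = 0;
        while (!pending.isEmpty() || !running.isEmpty()) {
            // start jobs while CPUs are free
            while (running.size() < cpus && !pending.isEmpty()) {
                running.add(now + pending.poll());
            }
            // advance time to the next job completion
            now = running.poll();
            System.out.printf("t=%6.1f s: job finished (%d running, %d pending)%n",
                              now, running.size(), pending.size());
        }
        System.out.printf("All jobs done at t=%.1f s%n", now);
    }
}
```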

34. Example: Physics Analysis at Regional Centres
• Similar data processing jobs are performed in several RCs
• Each Centre has the “TAG” and “AOD” databases replicated
• The main Centre provides the “ESD” and “RAW” data
• Each job processes AOD data, and also a fraction of the ESD and RAW

35. Example: Physics Analysis

36. Simple Validation Measurements: the AMS Data Access Case
[Plots: simulation vs. measurements for a client accessing a raw-data DB over the LAN from an AMS server with 4 CPUs]

37. MONARC Strategy and Tools for Phase 2
• Strategy: vary the system capacity and network performance parameters over a wide range
  • Avoid complex, multi-step decision processes that could require protracted study; keep these for a possible Phase 3
  • The majority of the workload should be satisfied in an acceptable time: up to minutes for interactive queries, up to hours for short jobs, up to a few days for the whole workload
  • Determine requirements “baselines” and/or flaws in certain Analysis Processes in this way
  • Perform a comparison of a “CERN-tralised” Model (everything at CERN) and suitable variations of Regional Centre Models
• Tools and operations to be designed in Phase 2:
  • Query estimators
  • Affinity evaluators, to determine the proximity of multiple requests in space or time
  • Strategic algorithms for caching, reclustering, mirroring, or pre-emptively moving data (or jobs, or parts of jobs)
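
As a toy illustration of what a "query estimator" might weigh (entirely my own construction, with invented capacities and link speeds), compare running a request at the centre that already holds the data against pulling the data to the requesting site:

```java
// Illustrative query estimator: turnaround = data movement time + CPU time.
public class QueryEstimatorSketch {
    static double turnaroundSec(double cpuSi95Sec, double cpuShareKSI95,
                                double dataToMoveGB, double linkMBps) {
        double moveSec = (dataToMoveGB > 0) ? dataToMoveGB * 1000 / linkMBps : 0;
        double cpuSec = cpuSi95Sec / (cpuShareKSI95 * 1000);
        return moveSec + cpuSec;
    }

    public static void main(String[] args) {
        // Assumed request: analyse 1e7 AOD events (10 KB and 3 SI95.s each, from the baseline)
        double cpuSi95Sec = 1e7 * 3.0;
        double dataGB = 1e7 * 10 / 1e6;   // 10 KB/event -> ~100 GB

        double runRemote = turnaroundSec(cpuSi95Sec, 5, 0, 0);        // data already there, 5 kSI95 share
        double pullLocal = turnaroundSec(cpuSi95Sec, 20, dataGB, 10); // 10 MB/s WAN, 20 kSI95 share
        System.out.printf("Run where the data is : %.0f s%n", runRemote);
        System.out.printf("Pull data and run here: %.0f s%n", pullLocal);
        System.out.println(runRemote < pullLocal ? "-> schedule remotely" : "-> move the data");
    }
}
```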

38. MONARC Phase 2: Detailed Milestones
• July 1999: complete Phase 1; begin the second cycle of simulations with more refined Models

39. MONARC Possible Phase 3
• TIMELINESS and USEFUL IMPACT:
  • Facilitate the efficient planning and design of mutually compatible site and network architectures and services, among the experiments, the CERN Centre and the Regional Centres
  • Provide modelling consultancy and service to the experiments and Centres
  • Provide a core of advanced R&D activities, aimed at LHC computing system optimisation and production prototyping
  • Take advantage of this year's work on distributed data-intensive computing for HENP in other “next generation” projects [*]
• For example in the US: the “Particle Physics Data Grid” (PPDG) of DoE/NGI; “A Physics Optimized Grid Environment for Experiments” (APOGEE), proposed to DoE/HENP; the joint “GriPhyN” proposal to NSF by ATLAS/CMS/LIGO
• [*] See H. Newman, http://www.cern.ch/MONARC/progress_report/longc7.html

40. MONARC Phase 3
• Possible technical goal: system optimisation, i.e. maximise throughput and/or reduce long turnaround times
  • Include long and potentially complex decision processes in the studies and simulations
  • Potential for substantial gains in the work performed or the resources saved
• Phase 3 system design elements:
  • RESILIENCE, resulting from flexible management of each data transaction, especially over WANs
  • FAULT TOLERANCE, resulting from robust fall-back strategies to recover from abnormal conditions
  • SYSTEM STATE & PERFORMANCE TRACKING, to match and co-schedule requests and resources, and to detect or predict faults
• Synergy with PPDG and other advanced R&D projects
• Potential importance for scientific research and industry: simulation of distributed systems for data-intensive computing

41. MONARC Status: Conclusions
• MONARC is well on its way to specifying baseline Models representing cost-effective solutions to LHC Computing.
• Initial discussions have shown that LHC computing has a new scale and level of complexity.
• A Regional Centre hierarchy of networked centres appears to be the most promising solution.
• A powerful simulation system has been developed, and we are confident of delivering a very useful toolset for further Model studies by the end of the project.
• Synergy with other advanced R&D projects has been identified. This may be of considerable mutual benefit.
• We will deliver important information, and example Models:
  • very timely for the Hoffmann Review and the discussions of LHC Computing over the next months
  • in time for the Computing Progress Reports of ATLAS and CMS

42. LHC Data Models: RD45
• HEP data models are complex!
  • A rich hierarchy of hundreds of complex data types (classes)
  • Many relations between them
  • Different access patterns (multiple viewpoints)
• LHC experiments rely on OO technology:
  • OO applications deal with networks of objects (and containers)
  • Pointers (or references) are used to describe relations
• Existing solutions do not scale
• Solution suggested by RD45: an ODBMS coupled to a Mass Storage System
[Diagram: an Event object linked to Tracker and Calorimeter objects, which own TrackList and HitList collections of Track and Hit objects, with Tracks referencing their Hits]
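
A skeletal Java sketch of the kind of object network this slide depicts; class and member names follow the diagram, but the code is illustrative rather than an actual experiment data model. In the RD45 approach the in-memory references become persistent associations in the ODBMS federation.

```java
import java.util.ArrayList;
import java.util.List;

// Toy version of the Event -> Tracker/Calorimeter -> Track/Hit object network.
public class EventModelSketch {
    static class Hit { double x, y, z; }                  // position, etc.
    static class Track {
        final List<Hit> hits = new ArrayList<>();         // relation: Track -> its Hits
    }
    static class Tracker {
        final List<Track> trackList = new ArrayList<>();
        final List<Hit> hitList = new ArrayList<>();
    }
    static class Calorimeter {
        final List<Hit> hitList = new ArrayList<>();
    }
    static class Event {
        final Tracker tracker = new Tracker();
        final Calorimeter calorimeter = new Calorimeter();
    }

    public static void main(String[] args) {
        Event event = new Event();
        Hit h = new Hit();
        Track t = new Track();
        t.hits.add(h);                        // the track references its hit
        event.tracker.hitList.add(h);         // the tracker also owns the hit
        event.tracker.trackList.add(t);
        System.out.println("Tracks: " + event.tracker.trackList.size()
                         + ", hits: " + event.tracker.hitList.size());
    }
}
```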

43. System View of Data Analysis by 2005
• A multi-Petabyte object database federation, backed by a networked set of archival stores
  • High availability and immunity from corruption
• “Seamless” response to database queries
  • Location independence; storage brokers; caching
  • Clustering and reclustering of objects
  • Transfer only “useful” data: tape/disk, across networks, disk/client
• Access and processing flexibility
  • Resource and application profiling, state tracking, co-scheduling
  • Continuous retrieval/recalculation/storage decisions
  • Trade off data storage, CPU and network capabilities to optimize performance and costs

44. Online CMS Analysis and Persistent Object Store
• Data organized in a(n object) “hierarchy”: Raw, Reconstructed (ESD), Analysis Objects (AOD), Tags
• Data distribution:
  • All raw, reconstructed and master parameter DBs at CERN
  • All event TAGs and AODs at all Regional Centres
  • Selected reconstructed data sets at each Regional Centre
  • HOT (frequently accessed) data moved to the RCs
[Diagram: CMS data flow from L1, L2/L3 and “L4” filtering, slow control and detector monitoring into the persistent object store (Object Database Management System), together with simulation, calibrations, group analyses and user analysis; common filters and pre-emptive object creation; on-demand object creation]
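
The distribution rules above can be summarised as a simple placement table; the following sketch (illustrative, with wording of my own) just prints that mapping.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Compact restatement of the data placement rules listed on this slide.
public class DataDistributionSketch {
    public static void main(String[] args) {
        Map<String, List<String>> placement = new LinkedHashMap<>();
        placement.put("Raw + master parameter DBs", List.of("CERN"));
        placement.put("Reconstructed (ESD), full set", List.of("CERN"));
        placement.put("Selected reconstructed data sets", List.of("CERN", "each Regional Centre"));
        placement.put("TAG and AOD", List.of("CERN", "all Regional Centres"));
        placement.put("HOT (frequently accessed) data", List.of("moved to Regional Centres on demand"));
        placement.forEach((data, sites) -> System.out.printf("%-35s -> %s%n", data, sites));
    }
}
```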

45. GIOD Summary (Caltech/CERN/FNAL/HP/SDSC)
• GIOD has:
  • Constructed a Terabyte-scale set of fully simulated events and used these to create a large OO database
  • Learned how to create large database federations
  • Completed the “100” (in fact up to 170) MByte/sec CMS milestone
  • Developed prototype reconstruction and analysis codes, and Java 3D OO visualization prototypes, that work seamlessly with persistent objects over networks
  • Deployed facilities and database federations as useful testbeds for Computing Model studies
[Diagram: Hit, Track and Detector objects]

46. BaBar OOFS: Putting the Pieces Together (designed & developed at SLAC)
[Diagram: layered storage stack]
• User application → Objectivity database
• Protocol layer: ams
• Filesystem logical layer: oofs
• Filesystem physical layer: ooss
• Filesystem implementations and services: vfs (Veritas), hpss (IBM/DOE), security
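
The point of the layering is that each level depends only on the interface below it; this schematic Java sketch (my illustration, not SLAC code, with invented class and method names) shows how the protocol layer can stay ignorant of whether bytes ultimately come from a Unix filesystem or from HPSS.

```java
// Schematic layering sketch inspired by the OOFS stack on this slide.
public class OofsLayeringSketch {
    interface PhysicalStore { byte[] read(String path); }     // physical layer (ooss role)
    static class UnixVfsStore implements PhysicalStore {      // e.g. a vfs/Veritas-backed store
        public byte[] read(String path) { return new byte[0]; }
    }
    static class HpssStore implements PhysicalStore {         // e.g. an HPSS-backed (tape) store
        public byte[] read(String path) { return new byte[0]; }
    }
    static class LogicalFilesystem {                          // logical layer (oofs role)
        private final PhysicalStore store;
        LogicalFilesystem(PhysicalStore store) { this.store = store; }
        byte[] open(String logicalName) { return store.read("/mapped/" + logicalName); }
    }
    static class AmsServer {                                  // protocol layer (ams role)
        private final LogicalFilesystem fs;
        AmsServer(LogicalFilesystem fs) { this.fs = fs; }
        byte[] handleRequest(String dbFile) { return fs.open(dbFile); }
    }

    public static void main(String[] args) {
        AmsServer ams = new AmsServer(new LogicalFilesystem(new HpssStore()));
        System.out.println("Served " + ams.handleRequest("events.db").length + " bytes");
    }
}
```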

47. Hierarchical Secure AMS
[Diagram: a client served by several AMS servers (with vfs), an HPSS client and Redwood tape drives; dynamic selection and dynamic load balancing; tapes are transparent, and tape resources can reside anywhere]
• Defer Request Protocol:
  • Transparently delays the client while the data is made available
  • Accommodates high-latency storage systems (e.g., tape)
• Request Redirect Protocol:
  • Redirects the client to an alternate AMS
  • Provides for dynamic replication and real-time load balancing
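
From the client's perspective the two protocols might look like the following toy loop (the protocol names come from the slide; the response shapes, host names and behaviour are invented for illustration):

```java
// Toy client handling of "defer" and "redirect" responses from an AMS-like server.
public class AmsClientSketch {
    enum Status { OK, DEFER, REDIRECT }
    record Response(Status status, int retryAfterSec, String redirectHost) {}

    // Stand-in for a real server: first defers (tape staging), then redirects, then serves.
    static int calls = 0;
    static Response requestPage(String host, String db) {
        calls++;
        if (calls == 1) return new Response(Status.DEFER, 30, null);
        if (calls == 2) return new Response(Status.REDIRECT, 0, "ams2.example.org");
        return new Response(Status.OK, 0, null);
    }

    public static void main(String[] args) {
        String host = "ams1.example.org";
        while (true) {
            Response r = requestPage(host, "raw_events.db");
            switch (r.status()) {
                case DEFER -> System.out.println("Deferred " + r.retryAfterSec()
                        + " s while data is staged (a real client would wait, then retry)");
                case REDIRECT -> { host = r.redirectHost(); System.out.println("Redirected to " + host); }
                case OK -> { System.out.println("Page served from " + host); return; }
            }
        }
    }
}
```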

48. Regional Centers Concept: A Data Grid Hierarchy
• LHC Grid Hierarchy example:
  • Tier 0: CERN
  • Tier 1: national “regional” centre
  • Tier 2: regional centre
  • Tier 3: institute workgroup server
  • Tier 4: individual desktop
• Total: 5 levels

49. Background: Why “Grids”? (I. Foster, ANL/Chicago)
• For transparent, rapid access and delivery of Petabyte-scale data (and multi-TIPS computing resources)
• Because the resources needed to solve complex problems are rarely colocated:
  • Advanced scientific instruments
  • Large amounts of storage
  • Large amounts of computing
  • Groups of smart people
• For a variety of reasons:
  • Resource allocations are not optimized for any one application
  • Required resource configurations change
  • Different views of priorities and truth

50. Grid Services Architecture [*]
• Applications: a rich set of HEP data-analysis related applications
• Application toolkits: remote data toolkit, remote computation toolkit, remote visualization toolkit, remote collaboration toolkit, remote sensors toolkit, ...
• Grid services: protocols, authentication, policy, resource management, instrumentation, resource discovery, etc.
• Grid fabric: data stores, networks, computers, display devices, ...; associated local services
• [*] Adapted from Ian Foster: there are computing grids, data grids, access (collaborative) grids, ...
