
The Particle Physics Data Grid Collaboratory Pilot



Presentation Transcript


  1. The Particle Physics Data Grid Collaboratory Pilot. Richard P. Mount, for the PPDG Collaboration. DOE SciDAC PI Meeting, January 15, 2002

  2. Observation of CP violation in the B0 meson system (announced July 5, 2001). sin(2β) = 0.59 ± 0.14 (statistical) ± 0.05 (systematic). 32 million B0 – anti-B0 pairs studied; these are the July 2001 plots, after months of analysis.

  3. The Top Quark Discovery (1995)

  4. Quarks Revealed: Structure inside Protons and Neutrons. Richard Taylor (SLAC), 1990 Nobel Prize in Physics

  5. Scope and Goals
  • Who:
    • OASCR (Mary Anne Scott) and HENP (Vicky White)
    • Condor, Globus, SRM, SRB (PI: Miron Livny, U. Wisconsin)
    • High Energy and Nuclear Physics experiments: ATLAS, BaBar, CMS, D0, JLAB, STAR (PIs: Richard Mount, SLAC, and Harvey Newman, Caltech)
    • Project coordinators: Ruth Pordes, Fermilab, and Doug Olson, LBNL
  • Experiment data handling requirements today: petabytes of storage, teraops/s of computing, thousands of users, hundreds of institutions, 10+ years of analysis ahead.
  • Focus of PPDG:
    • Vertical integration of Grid middleware components into HENP experiments' ongoing work
    • Pragmatic development of common Grid services and standards: data replication, storage and job management, monitoring and planning

  6. The Novel Ideas
  • End-to-end integration and deployment of experiment applications using existing and emerging Grid services.
  • Deployment of Grid technologies and services in production (24x7) environments with stressful performance needs.
  • Collaborative development of Grid middleware and extensions between application and middleware groups, leading to pragmatic and least-risk solutions.
  • HENP experiments extend their adoption of common infrastructures to higher layers of their data analysis and processing applications.
  • Much attention paid to integration, coordination, interoperability and interworking, with emphasis on incremental deployment of increasingly functional working systems.

  7. Impact and Connections
  Impact:
  • Make Grids usable and useful for the real problems facing international physics collaborations and for the average scientist in HENP.
  • Improve the robustness, reliability and maintainability of Grid software through early use in production application environments.
  • Common software components that have general applicability, and contributions to standard Grid middleware.
  Connections:
  • DOE Science Grid will deploy and support Certificate Authorities and develop policy documents.
  • Security and Policy for Group Collaboration provides the Community Authorization Service.
  • SDM/SRM working with PPDG on common storage interface APIs and software components.
  • Connections with other SciDAC projects (HENP and non-HENP).

  8. Challenge and Opportunity

  9. The Growth of “Computational Physics” in HENP
  [Chart: from ~10 people and ~100k lines of code in 1971 to ~500 people and ~7 million lines of code (BaBar) in 2001, spanning Detector and Computing Hardware, Feature Extraction and Simulation, Physics Analysis and Results, Large-Scale Data Management, and Worldwide Collaboration (Grids).]

  10. The Collaboratory Past
  • 30 years ago, an HEP “collaboratory” involved air freight of bubble chamber film (e.g. CERN to Cambridge).
  • 20 years ago: tens of thousands of tapes; 100 physicists from all over Europe (or the US); air freight of tapes and 300-baud modems.
  • 10 years ago: tens of thousands of tapes; 500 physicists from the US, Europe, the USSR, the PRC, …; 64 kbps leased lines and air freight.

  11. The Collaboratory Present and Future
  • Present: tens of thousands of tapes; 500 physicists from the US, Europe, Japan, the FSU, the PRC, …; dedicated intercontinental links at up to 155/622 Mbps; home-brewed, experiment-specific data/job distribution software (if you're lucky).
  • Future (~2006): tens of thousands of tapes; 2000 physicists in a worldwide collaboration; many links at 2.5/10 Gbps; the Grid.

  12. End-to-End Applications & Integrated Production Systems
  The challenge: to allow thousands of physicists to share data and computing resources for scientific processing and analyses.
  • PPDG focus:
    • Robust data replication (a replication sketch follows below)
    • Intelligent job placement and scheduling
    • Management of storage resources
    • Monitoring and information of global services
  • Relies on Grid infrastructure:
    • Security and policy
    • High-speed data transfer
    • Network management
  Operators and users; resources (computers, storage, networks) put to good use by the experiments.
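  To make the robust-data-replication focus concrete, here is a minimal sketch of a site-to-site copy driven through the Globus GridFTP client, globus-url-copy. It assumes the Globus Toolkit client tools are installed and a valid grid proxy exists; the host names, file paths, and retry policy are illustrative assumptions, not PPDG's actual replication service.

```python
# Minimal sketch: replicate one file between two sites over GridFTP.
# Assumes the Globus globus-url-copy client and a valid grid proxy;
# host names and paths below are hypothetical placeholders.
import subprocess

SOURCE = "gsiftp://datastore.example-site-a.edu/babar/run2/events-0001.root"
DEST = "gsiftp://replica.example-site-b.fr/babar/run2/events-0001.root"

def replicate(src, dst, streams=4, retries=2):
    """Copy src to dst with parallel TCP streams, retrying on failure."""
    cmd = ["globus-url-copy", "-vb", "-p", str(streams), src, dst]
    for _ in range(retries):
        if subprocess.run(cmd).returncode == 0:
            return True
    return False

if __name__ == "__main__":
    if not replicate(SOURCE, DEST):
        raise SystemExit("replication failed after retries")
```

  A production replica service (SRB, GDMP, or the Globus replication services named on later slides) would additionally register the new copy in a catalog so that jobs can discover the nearest replica.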

  13. Project Activities to Date: “One-to-One” Experiment / Computer Science Developments
  • Replicated data sets for science analysis:
    • BaBar – SRB
    • CMS – Globus, European Data Grid
    • STAR – Globus
    • JLAB – SRB http://www.jlab.org/hpc/WebServices/GGF3_WS-WG_Summary.ppt
  • Distributed Monte Carlo simulation job production and management (a submission sketch follows this list):
    • ATLAS – Globus, Condor http://atlassw1.phy.bnl.gov/magda/dyShowMain.pl
    • D0 – Condor
    • CMS – Globus, Condor, EDG – SC2001 demo http://www-ed.fnal.gov/work/SC2001/mop-animate-2.html
  • Storage management interfaces:
    • STAR – SRM
    • JLAB – SRB
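  As an illustration of what distributed Monte Carlo production through Condor-G looks like, the sketch below writes a submit description and hands a batch of 50 jobs to condor_submit. The gatekeeper host, executable, and arguments are hypothetical, and the exact submit keywords vary between Condor versions; this is not taken from any experiment's production scripts.

```python
# Hedged sketch: submit a batch of Monte Carlo jobs via Condor-G.
import subprocess
from pathlib import Path

# Condor-G style submit description: jobs are routed through a Globus
# gatekeeper to a remote batch system. Host, executable, and job count
# are illustrative placeholders.
SUBMIT = """\
universe        = globus
globusscheduler = gatekeeper.example.edu/jobmanager-pbs
executable      = run_mc_simulation.sh
arguments       = --events 10000 --seed $(Process)
output          = mc_$(Process).out
error           = mc_$(Process).err
log             = mc_production.log
queue 50
"""

def submit_batch(workdir="mc_batch"):
    """Write the submit file and hand the batch to condor_submit."""
    out = Path(workdir)
    out.mkdir(exist_ok=True)
    (out / "mc.submit").write_text(SUBMIT)
    subprocess.run(["condor_submit", "mc.submit"], cwd=out, check=True)

if __name__ == "__main__":
    submit_batch()
```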

  14. Cross-Cut (All-Collaborator) Activities
  • Certificate Authority policy and authentication: working with the SciDAC Science Grid, SciDAC Security and Policy for Group Collaboration, and ESnet to develop policies and procedures. PPDG experiments will act as early testers and adopters of the CA (a proxy-check sketch follows this list). http://www.envisage.es.net/
  • Monitoring of networks, computers, storage and applications: collaboration with GriPhyN. Developing use cases and requirements; evaluating and analysing existing systems with many components (D0 SAM, Condor pools, etc.). SC2001 demo: http://www-iepm.slac.stanford.edu/pinger/perfmap/iperf/anim.gif
  • Architecture components and interfaces: collaboration with GriPhyN. Defining services and interfaces for analysis, comparison, and discussion against other architecture definitions such as the European Data Grid. http://www.griphyn.org/mail_archive/all/doc00012.doc
  • International test beds: iVDGL and experiment applications.
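  From a user's point of view, adopting the common CA means every Grid action is preceded by a short-lived proxy credential derived from a CA-issued certificate. The sketch below assumes the standard Globus grid-proxy-init and grid-proxy-info tools and an arbitrary two-hour threshold; it shows the kind of pre-flight check an experiment's submission scripts might perform, not a PPDG-mandated procedure.

```python
# Hedged sketch: make sure a usable grid proxy exists before submitting work.
import subprocess

MIN_SECONDS = 2 * 3600  # arbitrary illustrative threshold: two hours

def proxy_seconds_left():
    """Return remaining proxy lifetime in seconds, or 0 if there is none."""
    result = subprocess.run(["grid-proxy-info", "-timeleft"],
                            capture_output=True, text=True)
    if result.returncode != 0:
        return 0
    return int(result.stdout.strip())

def ensure_proxy():
    """Interactively create a 12-hour proxy if the current one is too short."""
    if proxy_seconds_left() < MIN_SECONDS:
        subprocess.run(["grid-proxy-init", "-valid", "12:00"], check=True)

if __name__ == "__main__":
    ensure_proxy()
```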

  15. Common Middleware Services
  • Robust file transfer and replica services:
    • SRB replication services
    • Globus replication services
    • Globus robust file transfer
    • GDMP application replication layer: a common project between European Data Grid Work Package 2 and PPDG
  • Distributed job scheduling and resource management: Condor-G, DAGMan, GRAM; SC2001 demo with GriPhyN (a workflow sketch follows this list) http://www-ed.fnal.gov/work/sc2001/griphyn-animate.html
  • Storage resource interface and management: common API with EDG, SRM
  • Standards committees: Internet2 HENP Working Group, Global Grid Forum
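  As a sketch of how DAGMan ties these scheduling services together, the example below chains three hypothetical jobs (simulate, reconstruct, replicate) into a DAG and submits it with condor_submit_dag. The job names and submit files are placeholders; DAGMan only needs the dependency structure to order the jobs and retry failures.

```python
# Hedged sketch: a simple produce-then-replicate workflow under DAGMan.
import subprocess
from pathlib import Path

# The submit files named here are hypothetical placeholders.
DAG = """\
JOB  SIMULATE     simulate.submit
JOB  RECONSTRUCT  reconstruct.submit
JOB  REPLICATE    replicate.submit
PARENT SIMULATE    CHILD RECONSTRUCT
PARENT RECONSTRUCT CHILD REPLICATE
"""

def run_workflow():
    """Write the DAG description and submit it to DAGMan."""
    Path("analysis.dag").write_text(DAG)
    subprocess.run(["condor_submit_dag", "analysis.dag"], check=True)

if __name__ == "__main__":
    run_workflow()
```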

  16. Grid Realities: BaBar Offline Computing Equipment, Bottom-Up Cost Estimate (December 2001) (based only on costs we already expect; to be revised annually)

  17. Grid Realities

  18. PPDG World
  [Diagram relating an experiment, the HENP Grid, SciDAC connections, and PPDG.]
