
Submit locally and run globally – The GLOW and OSG Experience


Presentation Transcript


  1. Submit locally and run globally – The GLOW and OSG Experience

  2. What impact does our computing infrastructure have on our scientists?

  3. 1992

  4. Oklahoma Supercomputing Symposium 2003 – Supercomputing in Social Science, Maria Marta Ferreyra, Carnegie Mellon University

  5. What would happen if many families in the largest U.S. metropolitan areas received vouchers for private schools? • I completed my dissertation with Condor’s help • The contributions from my research were made possible by Condor • These questions could not have been answered otherwise

  6. Research question • vouchers allow people to choose the type of school they want • vouchers may affect where families choose to live ⇒ the problem has many moving parts (a general equilibrium problem)

  7. Why Condor was a great match for my needs (cont.) • I did not have to alter my code • I did not have to pay • Since 19 March 2001 I have used 462,667 hours (about 53 years with one 1 GHz processor)
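As a rough check of that figure (a back-of-the-envelope conversion, not part of the original slide), one processor running around the clock gives

    \frac{462{,}667\ \text{CPU-hours}}{24 \times 365\ \text{hours/year}} \approx 52.8\ \text{years},

which matches the quoted “about 53 years” on a single 1 GHz processor.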

  8. The search for SUSY* • Sanjay Padhi is a UW Chancellor Fellow working in the group of Prof. Sau Lan Wu, located at CERN (Geneva) • Using Condor technologies he established a “grid access point” in his office at CERN • Through this access point he managed to harness, in 3 months (12/05–2/06), more than 500 CPU-years from the LHC Computing Grid (LCG), the Open Science Grid (OSG), the Grid Laboratory Of Wisconsin (GLOW), and locally owned group desktop resources. *Super-Symmetry

  9. Claims for “benefits” provided by Distributed Processing Systems • High Availability and Reliability • High System Performance • Ease of Modular and Incremental Growth • Automatic Load and Resource Sharing • Good Response to Temporary Overloads • Easy Expansion in Capacity and/or Function. “What is a Distributed Data Processing System?”, P.H. Enslow, Computer, January 1978

  10. Democratization of Computing: You do not need to be a super-person to do super-computing

  11. High Throughput Computing We first introduced the distinction between High Performance Computing (HPC) and High Throughput Computing (HTC) in a seminar at the NASA Goddard Flight Center in July of 1996 and a month later at the European Laboratory for Particle Physics (CERN). In June of 1997 HPCWire published an interview on High Throughput Computing.

  12. HTC For many experimental scientists, scientific progress and quality of research are strongly linked to computing throughput. In other words, they are less concerned about instantaneous computing power. Instead, what matters to them is the amount of computing they can harness over a month or a year --- they measure computing power in units of scenarios per day, wind patterns per week, instruction sets per month, or crystal configurations per year.

  13. High Throughput Computing is a 24-7-365 activity: FLOPY ≠ (60*60*24*7*52)*FLOPS
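The numeric factor is simply the number of seconds in a 52-week year,

    60 \times 60 \times 24 \times 7 \times 52 = 31{,}449{,}600\ \text{seconds},

so the right-hand side is what a machine delivering a given FLOPS rate would produce if it ran flat out all year. Read this way, the slide’s point is that real yearly throughput (FLOPY) rarely reaches that product, which is why HTC treats computing as a round-the-clock, year-long activity.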

  14. “ … We claim that these mechanisms, although originally developed in the context of a cluster of workstations, are also applicable to computational grids. In addition to the required flexibility of services in these grids, a very important concern is that the system be robust enough to run in “production mode” continuously even in the face of component failures. … “ Miron Livny & Rajesh Raman, "High Throughput Resource Management", in “The Grid: Blueprint for a New Computing Infrastructure”, 1998.

  15. HTC leads to a “bottom up” approach to building and operating a distributed computing infrastructure

  16. My jobs should run … • … on my laptop if it is not connected to the network • … on my group resources if my grid certificate has expired • … on my campus resources if the meta-scheduler is down • … on my national grid if the trans-Atlantic link was cut by a submarine

  17. Taking HTC to the next level – The Open Science Grid (OSG)

  18. What is OSG? The Open Science Grid is a US national distributed computing facility that supports scientific computing via an open collaboration of science researchers, software developers and computing, storage and network providers. The OSG Consortium is building and operating the OSG, bringing resources and researchers from universities and national laboratories together and cooperating with other national and international infrastructures to give scientists from a broad range of disciplines access to shared resources worldwide.

  19. The OSG Project • Co-funded by DOE and NSF at an annual rate of ~$6M for 5 years starting FY-07 • 16 institutions involved – 4 DOE labs and 12 universities • Currently the main stakeholders are from physics: the US LHC experiments, LIGO, the STAR experiment, the Tevatron Run II experiments, and astrophysics experiments • A mix of DOE-lab and campus resources • Active “engagement” effort to add new domains and resource providers to the OSG consortium

  20. OSG PEP - Organization

  21. OSG Project Execution Plan (PEP) – FTEs • Facility operations: 5.0 • Security and troubleshooting: 4.5 • Software release and support: 6.5 • Engagement: 2.0 • Education, outreach & training: 2.0 • Facility management: 1.0 • Extensions in capability and scale: 9.0 • Staff: 3.0 • Total FTEs: 33

  22. Part of the OSG Consortium (diagram: Contributors and Project members)

  23. OSG Principles • Characteristics: • Provide guaranteed and opportunistic access to shared resources. • Operate a heterogeneous environment, both in services available at any site and for any VO, with multiple implementations behind common interfaces. • Interface to campus and regional grids. • Federate with other national/international grids. • Support multiple software releases at any one time. • Drivers: • Delivery to the schedule, capacity and capability of LHC and LIGO: • Contributions to/from and collaboration with the US ATLAS, US CMS, and LIGO software and computing programs. • Support for/collaboration with other physics/non-physics communities. • Partnerships with other grids – especially EGEE and TeraGrid. • Evolution by deployment of externally developed new services and technologies.

  24. Grid of Grids – from Local to Global (diagram: National, Campus, and Community grids)

  25. Who are you? • A resource can be accessed by a user via the campus, community or national grid. • A user can access a resource with a campus, community or national grid identity.

  26. 32 Virtual Organizations – participating groups: • 3 with >1000 jobs max. (all particle physics) • 3 with 500-1000 max. (all outside physics) • 5 with 100-500 max. (particle, nuclear, and astrophysics)

  27. OSG Middleware Layering (top to bottom): • Applications: CMS Services & Framework; ATLAS Services & Framework; CDF, D0 SamGrid & Framework; LIGO Data Grid • OSG Release Cache: VDT + Configuration, Validation, VO management • Virtual Data Toolkit (VDT): Common Services – NMI + VOMS, CEMon (common EGEE components), MonaLisa, Clarens, AuthZ • NSF Middleware Initiative (NMI): Condor, Globus, MyProxy • Infrastructure

  28. OSG Middleware Deployment (flow): Domain science requirements → OSG stakeholders and middleware developer (joint) projects (Condor, Globus, Privilege, EGEE, etc.) → test on “VO specific grid” → integrate into VDT release → deploy on OSG integration grid → provision in OSG release & deploy to OSG production.

  29. Inter-operability with campus grids. At this point we have three operational campus grids – Fermi, Purdue and Wisconsin. We are working on adding Harvard (Crimson) and Lehigh. FermiGrid is an interesting example of the challenges we face when making the resources of a campus grid (in this case a DOE laboratory) accessible to the OSG community.

  30. What is FermiGrid? • Integrates resources across most (soon all) owners at Fermilab. • Supports jobs from Fermilab organizations so they can run on any/all accessible FermiGrid campus and national Open Science Grid resources. • Supports jobs from the OSG to be scheduled onto any/all Fermilab sites. • Unified and reliable common interface and services for the FermiGrid gateway – including security, job scheduling, user management, and storage. • More information is available at http://fermigrid.fnal.gov

  31. Job Forwarding and Resource Sharing • The gateway currently interfaces to 5 Condor pools with diverse file systems and >1000 job slots; plans are to grow to 11 clusters (8 Condor, 2 PBS and 1 LSF) • Job scheduling policies and in-place agreements for sharing allow fast response to changes in resource needs by Fermilab and OSG users • The gateway provides a single bridge between the OSG wide-area distributed infrastructure and the FermiGrid local sites; it consists of a Globus gatekeeper and a Condor-G • Each cluster has its own Globus gatekeeper • Storage and job execution policies are applied through site-wide managed security and authorization services

  32. Access to FermiGrid (diagram: OSG general users, Fermilab users, and OSG “agreed” users enter through the FermiGrid gateway (GT-GK + Condor-G), which forwards jobs through per-cluster Globus gatekeepers (GT-GK) to the DZero, CDF, CMS, and shared Condor pools)

  33. The Crimson Grid is • a scalable collaborative computing environment for research at the interface of science and engineering • a gateway/middleware release service to enable campus/community/national/global computing infrastructures for interdisciplinary research • a test bed for faculty & IT-industry affiliates within the framework of a production environment for integrating HPC solutions for higher education & research • a campus resource for skills & knowledge sharing for advanced systems administration & management of switched architectures

  34. CrimsonGrid Role as a Campus Grid Enabler

  35. Homework? (diagram: CrimsonGrid, ATLAS, campus grids, OSG, OSG Tier II)

  36. HTC on the UW campus (or what you can do with $1.5M?)

  37. Grid Laboratory of Wisconsin • 2003 initiative funded by NSF (MRI)/UW at ~$1.5M • Six initial GLOW sites: • Computational Genomics (Chemistry) • Amanda, IceCube (Physics/Space Science) • High Energy Physics/CMS (Physics) • Materials by Design (Chemical Engineering) • Radiation Therapy (Medical Physics) • Computer Science • Diverse users with different deadlines and usage patterns.

  38. Example Uses • Chemical Engineering • Students do not know where the computing cycles are coming from - they just do it - largest user group • ATLAS • Over 15 million proton collision events simulated at 10 minutes each • CMS • Over 70 million events simulated, reconstructed and analyzed (total ~10 minutes per event) in the past year • IceCube / Amanda • Data filtering used 12 CPU-years in one month • Computational Genomics • Prof. Shwartz asserts that GLOW has opened up a new paradigm of work patterns in his group • They no longer think about how long a particular computational job will take - they just do it
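For a sense of scale (a rough estimate, not a number from the slides), the ATLAS simulation alone corresponds to roughly

    15{,}000{,}000 \times 10\ \text{minutes} = 2.5 \times 10^{6}\ \text{CPU-hours} \approx 285\ \text{CPU-years}

of computing.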

  39. GLOW Usage 4/04–9/05 (usage chart) • Leftover cycles available for “others” • Takes advantage of “shadow” jobs • Takes advantage of checkpointing jobs • Over 7.6 million CPU-hours (865 CPU-years) served!

  40. UW Madison Campus Grid • Condor pools in various departments, made accessible via Condor ‘flocking’ • Users submit jobs to their own private or department Condor scheduler. • Jobs are dynamically matched to available machines. • Crosses multiple administrative domains. • No common uid-space across campus. • No cross-campus NFS for file access. • Users rely on Condor remote I/O, file-staging, AFS, SRM, gridftp, etc.
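For the curious, flocking of this kind is driven by a pair of Condor configuration knobs; the sketch below uses invented host names, and the actual UW campus settings are not shown in the slides.

    # condor_config on a department submit machine (host names are examples only):
    # jobs that find no match locally may flock to these other pools
    FLOCK_TO = condor.cs.wisc.edu, condor.hep.wisc.edu

    # condor_config on a pool that accepts flocked jobs:
    # list the submit machines allowed to flock in
    FLOCK_FROM = submit.chem.wisc.edu, submit.physics.wisc.edu

Because there is no common uid-space or cross-campus file system, such flocked jobs rely on the remote I/O and file-staging mechanisms listed above.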

  41. Housing the Machines • Condominium Style • centralized computing center • space, power, cooling, management • standardized packages • Neighborhood Association Style • each group hosts its own machines • each contributes to administrative effort • base standards (e.g. Linux & Condor) to make sharing of resources easy • GLOW has elements of both, but leans towards neighborhood style

  42. The value of the big G • Our users want to collaborate outside the bounds of the campus (e.g. ATLAS and CMS are international). • We also don’t want to be limited to sharing resources with people who have made identical technological choices. • The Open Science Grid (OSG) gives us the opportunity to operate at both scales, which is ideal.

  43. Submitting Jobs within UW Campus Grid (diagram: a UW HEP user runs condor_submit to a local schedd (job caretaker), which matches jobs through the HEP, CS, and GLOW matchmakers, using flocking, to a startd (job executor)) • Supports the full feature-set of Condor: matchmaking, remote system calls, checkpointing, MPI, suspension VMs, preemption policies
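To make the submission step concrete, here is a minimal vanilla-universe submit description file of the kind a campus user might hand to condor_submit; the executable and file names are invented for illustration.

    # analysis.sub (hypothetical example)
    universe   = vanilla
    executable = simulate_scenario
    arguments  = --seed $(Process)
    output     = scenario_$(Process).out
    error      = scenario_$(Process).err
    log        = scenario.log
    # queue 100 independent jobs; the matchmakers place them on idle
    # machines in the local pool or, via flocking, elsewhere on campus
    queue 100

The user then runs condor_submit analysis.sub; the schedd keeps the jobs queued and acts as their caretaker until the matchmakers find slots for them.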

  44. Submitting jobs through OSG to UW Campus Grid (diagram: an Open Science Grid user runs condor_submit to a schedd (job caretaker) whose condor gridmanager contacts the Globus gatekeeper at UW; a schedd behind the gatekeeper then flocks the jobs through the HEP, CS, and GLOW matchmakers to a startd (job executor))
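On the OSG side, such a workload would typically be described as a grid-universe (Condor-G) job aimed at the campus gatekeeper; a sketch under the assumption of a pre-WS Globus (GT2) gatekeeper, with an example contact string rather than the real UW one.

    # osg_to_uw.sub (hypothetical example)
    universe      = grid
    # "gt2" selects pre-WS Globus GRAM; host and jobmanager are examples only
    grid_resource = gt2 gatekeeper.example.wisc.edu/jobmanager-condor
    executable    = simulate_scenario
    output        = job.out
    error         = job.err
    log           = job.log
    queue

The condor gridmanager shown in the diagram is the component that actually talks to the remote gatekeeper on behalf of such jobs.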

  45. Routing Jobs from UW Campus Grid to OSG (diagram: jobs from condor_submit sit in the schedd (job caretaker); the Grid JobRouter transforms selected jobs so the condor gridmanager can send them through a Globus gatekeeper to the OSG, while the HEP, CS, and GLOW matchmakers continue to serve local jobs) • Combining both worlds: simple, feature-rich local mode; when possible, transform to a grid job for traveling globally
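The JobRouter is driven by a routing table in the Condor configuration; the entry below is a rough sketch in the JOB_ROUTER_ENTRIES syntax with a placeholder site, since the actual GLOW routing table is not given in the slides.

    # condor_config fragment (illustrative route only)
    # take idle vanilla jobs and create grid-universe copies bound for an OSG site
    JOB_ROUTER_ENTRIES = \
      [ name = "OSG_Example_Site"; \
        GridResource = "gt2 gatekeeper.example.org/jobmanager-condor"; \
        MaxIdleJobs = 10; \
      ]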

  46. GLOW Architecture in a Nutshell • One big Condor pool • But a backup central manager runs at each site (Condor HAD service) • Users submit jobs as members of a group (e.g. “CMS” or “MedPhysics”) • Computers at each site give highest priority to jobs from the same group (via machine RANK; see the sketch below) • Jobs run preferentially at the “home” site, but may run anywhere when machines are available
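That group-priority policy is expressed as a machine RANK in the startd configuration; a minimal sketch, assuming jobs advertise their GLOW group in a custom ClassAd attribute (the attribute name Group is chosen here for illustration).

    # condor_config on a CMS-owned execute machine (attribute name is illustrative)
    # prefer jobs that declare themselves part of the owning group
    RANK = (TARGET.Group =?= "CMS")

Jobs from that group would then carry a matching custom attribute in their submit files (e.g. +Group = "CMS"), so they outrank visiting jobs on their home machines while still being able to run anywhere else when slots are free.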

  47. Accommodating Special Cases • Members have the flexibility to make arrangements with each other when needed • Example: granting 2nd priority • Opportunistic access • Long-running jobs which can’t easily be checkpointed can be run as bottom feeders that are suspended, instead of being killed, by higher-priority jobs • Computing on Demand • tasks requiring low latency (e.g. interactive analysis) may quickly suspend any other jobs while they run

  48. Elevating from GLOW to OSG (diagram: the job queue of a schedd holds Job 1, Job 2, Job 3, Job 4, Job 5, …; a “Schedd On The Side” watches the queue and creates a routed copy, Job 4*, for the grid)

  49. The Grid Universe (diagram: a queue of “Random Seed” jobs in Schedd X is either matched by the negotiator to local startds in the vanilla universe or sent through a gatekeeper to site X in the grid universe) • easier to live with private networks • may use non-Condor resources • restricted Condor feature set (e.g. no standard universe over grid) • must pre-allocate jobs between vanilla and grid universe
