Software Support for High Performance Problem Solving on the Grid

An overview of the GrADS project, sponsored by the NSF NGS program. Ken Kennedy, Center for High Performance Software, Rice University. The project aims to build a national problem-solving system on the Grid, providing software support for application development. Challenges include presenting a high-level application development interface, designing applications for adaptability, and monitoring and controlling performance.

Presentation Transcript


  1. Software Support for High Performance Problem Solving on the Grid An Overview of the GrADS Project Sponsored by NSF NGS Ken Kennedy Center for High Performance Software Rice University http://www.cs.rice.edu/~ken/Presentations/GrADSOverview.pdf

  2. Principal Investigators • Francine Berman, UCSD • Andrew Chien, UCSD • Keith Cooper, Rice • Jack Dongarra, Tennessee • Ian Foster, Chicago • Dennis Gannon, Indiana • Lennart Johnsson, Houston • Ken Kennedy, Rice • Carl Kesselman, USC ISI • John Mellor-Crummey, Rice • Dan Reed, UIUC • Linda Torczon, Rice • Rich Wolski, UCSB

  3. Other Contributors • Dave Angulo, Chicago • Henri Casanova, UCSD • Holly Dail, UCSD • Anshu Dasgupta, Rice • Sridhar Gullapalli, USC ISI • Charles Koelbel, Rice • Anirban Mandal, Rice • Gabriel Marin, Rice • Mark Mazina, Rice • Celso Mendes, UIUC • Otto Sievert, UCSD • Martin Swany, UCSB • Satish Vadhiyar, Tennessee • Shannon Whitmore, UIUC • Asim YarKhan, Tennessee

  4. National Distributed Problem Solving [Diagram: databases and supercomputers at dispersed sites, linked into a single national problem-solving system]

  5. GrADS Vision • Build a National Problem-Solving System on the Grid • Transparent to the user, who sees a problem-solving system • Software Support for Application Development on Grids • Goal: Design and build programming systems for the Grid that broaden the community of users who can develop and run applications in this complex environment • Challenges: • Presenting a high-level application development interface • If programming is hard, the Grid will not reach its potential • Designing and constructing applications for adaptability • Late mapping of applications to Grid resources • Monitoring and control of performance • When should the application be interrupted and remapped?

  6. Today: Globus • Developed by Ian Foster and Carl Kesselman • Grew from the I-Way (SC-95) • Basic Services for distributed computing • Resource discovery and information services • User authentication and access control • Job initiation • Communication services (Nexus and MPI) • Applications are programmed by hand • Many applications • User responsible for resource mapping and all communication • Existing users acknowledge how hard this is

  7. Today: Condor • Support for matching application requirements to resources • User and resource provider write ClassAd specifications • System matches ClassAds for applications with ClassAds for resources • Selects the “best” match based on a user-specified priority (see the sketch below) • Can extend to the Grid via Globus (Condor-G) • What is missing? • User must handle application mapping tasks • No dynamic resource selection • No checkpoint/migration (resource re-selection) • Performance matching is simplistic • Priorities are statically coded into ClassAds
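
To make the matchmaking idea concrete, here is a toy sketch in Python rather than actual ClassAd syntax; the attribute names (Memory, Arch, Mips) mirror common ClassAd attributes, but the machines and their values are invented:

    # Toy illustration of ClassAd-style matchmaking (not real ClassAd syntax):
    # a job advertises Requirements and a Rank expression; the matchmaker
    # pairs it with the highest-ranked machine that satisfies them.
    job = {
        "Requirements": lambda m: m["Memory"] >= 512 and m["Arch"] == "x86",
        "Rank":         lambda m: m["Mips"],        # prefer faster machines
    }

    machines = [                                    # invented machine ads
        {"Name": "opus1",   "Arch": "x86",   "Memory": 256,  "Mips": 500},
        {"Name": "torc3",   "Arch": "x86",   "Memory": 1024, "Mips": 450},
        {"Name": "cypher2", "Arch": "alpha", "Memory": 1024, "Mips": 600},
    ]

    candidates = [m for m in machines if job["Requirements"](m)]
    best = max(candidates, key=job["Rank"]) if candidates else None
    print(best["Name"] if best else "no match")     # -> torc3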

  8. GrADS Strategy • Goal: Reduce the work of preparing an application for Grid execution • Provide generic versions of key components currently built into applications • E.g., scheduling, application launch, performance monitoring • Key Issue: What is in the application and what is in the system? • GrADS: Application = configurable object program • Code, mapper, and performance modeler

  9. GrADSoft Architecture [Diagram: problem-solving components, libraries, and the source application feed a whole-program compiler that produces a configurable object program; a resource negotiator/scheduler and a binder place the program on the Grid runtime system through negotiation, and a real-time performance monitor returns performance feedback and problem reports to the software components]

  10. Configurable Object Program • Goal: Provide minimum needed to automate resource selection and program launch • Code • Today: MPI program • Tomorrow: more general representations • Mapper • Defines required resources and affinities to specialized resources • Given a set of resources, maps computation to those resources • “Optimal” performance, given all requirements met • Performance Model • Given a set of resources and mapping, estimates performance • Serves as objective function for Resource Negotiator/Scheduler
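
A minimal sketch of what such a three-part configurable object program might look like; the class, the round-robin mapper, and the LU-style cost model below are illustrative assumptions, not the GrADS implementation (which wrapped MPI binaries):

    # Hypothetical shape of a configurable object program: code plus a
    # mapper and a performance model the Resource Negotiator can treat
    # as an objective function. Names and cost model are invented.
    from dataclasses import dataclass

    @dataclass
    class ConfigurableObjectProgram:
        binary: str                                 # e.g. an MPI executable

        def mapper(self, resources, n_blocks=16):
            """Map row-blocks of the computation to resources round-robin."""
            return {i: resources[i % len(resources)] for i in range(n_blocks)}

        def performance_model(self, resources, mapping, n):
            """Estimate run time (s) for problem size n: LU-style flop count
            over aggregate speed, plus a toy per-pair coordination term.
            (The mapping argument is ignored in this simplified model.)"""
            flops = 2.0 * n**3 / 3.0
            speed = sum(r["mflops"] for r in resources) * 1e6
            return flops / speed + 0.001 * len(resources) * (len(resources) - 1)

    app = ConfigurableObjectProgram("./lu_mpi")
    cluster = [{"mflops": 500}] * 8
    print(app.performance_model(cluster, app.mapper(cluster), n=10_000))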

  11. GrADSoft Architecture: Execution Environment [Same diagram as slide 9, highlighting the execution environment: the resource negotiator/scheduler, binder, Grid runtime system, and real-time performance monitor]

  12. Execution Cycle • Configurable Object Program is presented • Space of feasible resources must be defined • Mapping strategy and performance model provided • Resource Negotiator solicits acceptable resource collections • Performance model is used to evaluate each • Best match is selected and contracted for • Execution begins • Binder tailors program to resources • Carries out final mapping according to mapping strategy • Inserts sensors and actuators for performance monitoring • Contract monitoring is performed continuously during execution • Soft violation detection based on fuzzy logic
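
A toy rendering of the negotiation step, assuming an exhaustive search over resource subsets (a real negotiator would search heuristically); the machine list and cost model are invented:

    # Toy negotiation: enumerate candidate resource collections, score each
    # with the performance model, and contract for the best. Exhaustive
    # search is fine at this scale; real negotiators search heuristically.
    from itertools import combinations

    machines = [{"name": f"node{i}", "mflops": 400 + 50 * (i % 3)}
                for i in range(6)]

    def model(resources, n=8000):
        speed = sum(r["mflops"] for r in resources) * 1e6
        # compute time shrinks with aggregate speed; coordination overhead
        # grows with the number of machines enlisted
        return (2.0 * n**3 / 3.0) / speed + 2.0 * len(resources)

    best = min((list(c) for k in range(1, len(machines) + 1)
                        for c in combinations(machines, k)), key=model)
    print(len(best), "machines,", round(model(best), 1), "s estimated")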

  13. GrADS Program Execution System [Diagram: an Application Manager (one per application) coordinates the scheduler/resource negotiator with its performance model, the GrADS Information Repository, and the binder, which performs mapping and sensor insertion on the configurable application before launch onto Grid resources and services; a contract monitor observes the running application]

  14. GrADSoft Architecture: Program Preparation System [Same diagram as slide 9, highlighting the program preparation system: the problem-solving components, libraries, and whole-program compiler that produce the configurable object program]

  15. Program Preparation Tools • Goal: provide tools to support the construction of Grid-ready applications (in the GrADS framework) • Performance modeling • Challenge: synthesis and integration of performance models • Combine expert knowledge, trial execution, and scaled projections • Focus on binary analysis, derivation of scaling factors • Mapping • Construction of mappers from parallel programs • Mapping of task graphs to resources (graph clustering) • Integration of mappers and performance modelers from components • High-level programming interfaces • Problem-solving systems: integration of components

  16. Generation of Mappers • Start from a parallel program • Typically written using a communication library (e.g., MPI) • Can be composed from library components • Construct a task graph • Vertices represent tasks • Edges represent data sharing • Read-read: undirected edges • Read-write in any order: directed edges (dependences) • Weights represent volume of communication • Identify opportunities for pipelining • Use a clustering algorithm to match tasks to resources (see the sketch below) • One option: global weighted fusion
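
A greedy sketch of the clustering step, assuming tasks merge along their heaviest communication edges until the cluster count matches the resource count; this is a simpler stand-in for the global weighted fusion the slide names, and the task graph is invented:

    # Greedy task-graph clustering: repeatedly merge the endpoints of the
    # heaviest communication edge until the number of clusters equals the
    # number of resources, keeping big data transfers within one machine.
    tasks = ["A", "B", "C", "D", "E", "F"]
    edges = {("A", "B"): 30, ("B", "C"): 5, ("C", "D"): 25,   # weights =
             ("D", "E"): 4, ("E", "F"): 20, ("A", "F"): 3}    # MB shared

    def greedy_map(tasks, edges, n_resources):
        cluster = {t: i for i, t in enumerate(tasks)}   # one task per cluster
        for (u, v), _ in sorted(edges.items(), key=lambda kv: -kv[1]):
            cu, cv = cluster[u], cluster[v]
            if cu != cv and len(set(cluster.values())) > n_resources:
                for t, c in cluster.items():            # fuse cv into cu
                    if c == cv:
                        cluster[t] = cu
        return cluster

    print(greedy_map(tasks, edges, n_resources=3))
    # -> A,B on one resource; C,D on another; E,F on the third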

  17. Constructing Scalable, Portable Models • Construct application signatures • Measure static characteristics • Measure dynamic characteristics for multiple executions • computation • memory access locality • message frequency and size • Determine sensitivity of aggregate dynamic characteristics to • data size • processor count • machine characteristics • Build the model by integrating these measurements (see the sketch below)
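
One plausible reading of "scaled projections": fit the coefficients of an assumed scaling form to trial-execution timings, then extrapolate. The functional form and all timings below are invented for illustration:

    # Fit an assumed scaling form  t(n, p) = a*n^3/p + b*n^2/sqrt(p) + c*p
    # (compute + communication + coordination) to trial timings, then
    # project to configurations that were never run. Data are invented.
    import numpy as np

    trials = [(1000, 2, 4.1), (2000, 4, 17.0), (2000, 8, 9.5),
              (4000, 8, 68.0), (4000, 16, 36.0)]       # (n, p, seconds)

    A = np.array([[n**3 / p, n**2 / np.sqrt(p), p] for n, p, _ in trials])
    t = np.array([s for _, _, s in trials])
    (a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)

    def predict(n, p):
        return a * n**3 / p + b * n**2 / np.sqrt(p) + c * p

    print(f"projected t(8000, 32) = {predict(8000, 32):.1f} s")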

  18. High Level Programming • Rationale • programming is hard, and getting harder with new platforms • professional programmers are in short supply • high performance will continue to be important • Strategy: Make the End User a Programmer • professional programmers develop components • users integrate components using: • problem-solving environments (PSEs) based on scripting languages (possibly graphical) • examples: Visual Basic, Tcl/Tk, AVS, Khoros • Achieving High Performance • translate scripts and components to common intermediate language • optimize the resulting program using whole-program compilation

  19. Whole-Program Compilation [Diagram: a user script and a component library are translated into common intermediate code, which a global optimizer and code generator turn into executable code] Problem: long compilation times, even for short scripts! Problem: the library designer's expert knowledge about specialization is lost.

  20. Telescoping Languages [Diagram: the L1 class library is digested offline by a compiler generator, which could run for hours and which understands library calls as primitives; the generated L1 compiler, together with a script translator and the vendor compiler, then turns a user script into an optimized application]

  21. Telescoping Languages: Advantages • Compile times can be reasonable • More compilation time can be spent on libraries • Script compilations can be fast • Components reused from scripts may be included in libraries • High-level optimizations can be included • Based on specifications of the library designer • Properties often cannot be determined by compilers • Properties may be hidden after low-level code generation • User retains substantive control over language performance • Mature code can be built into a library and incorporated into language • Reliability can be improved • Specialization by compilation framework, not user
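
A toy illustration of designer-supplied specialization: the library records a property-triggered rewrite, and the compiler substitutes the specialized variant. Here the dispatch happens at run time for simplicity, whereas a telescoping compiler would perform it during script compilation; all names are invented:

    # Toy designer-driven specialization: the library carries a rule
    # "if the matrix is diagonal, use the O(n) solver"; user scripts just
    # call solve() and get the fast path when the property holds.
    import numpy as np

    def solve_general(A, b):            # O(n^3) general path
        return np.linalg.solve(A, b)

    def solve_diagonal(A, b):           # O(n) specialized path
        return b / np.diag(A)

    SPECIALIZATIONS = [                 # (property test, variant) pairs
        (lambda A: np.count_nonzero(A - np.diag(np.diag(A))) == 0,
         solve_diagonal),
    ]

    def solve(A, b):                    # the call a user script makes
        for prop, variant in SPECIALIZATIONS:
            if prop(A):                 # a telescoping compiler would prove
                return variant(A, b)    # this at script-compilation time
        return solve_general(A, b)

    A = np.diag([2.0, 4.0, 8.0])
    print(solve(A, np.array([2.0, 4.0, 8.0])))   # -> [1. 1. 1.], O(n) path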

  22. Applications • Matlab Compiler • Automatically generated from LAPACK or ScaLAPACK • Matlab SP* • Based on signal processing library • Optimizing Component Integration System • DOE Common Component Architecture — High component invocation costs • Generator for ARPACK • Avoid recoding developer version by hand • System for Analysis of Cancer Experiments* • Based on S+ (collaboration with M.D. Anderson Cancer Center) • Flexible Data Distributions in HPF • Data distribution == collection of interfaces that meet specs • Generator for Grid Computations* • GrADS: automatic generation of NetSolve

  23. Testbeds • Goal: • Provide vehicle for experimentation with the dynamic components of the GrADS software framework • MacroGrid (Carl Kesselman) • Collection of processors running Globus and GrADS framework • Consistent software environment • At all 9 GrADS sites (but 3 are really useful) • Availability listed on web page • Permits experimentation with real applications • MicroGrid (Andrew Chien) • Cluster of processors (currently Compaq Alphas and x86 clusters) • Runs standard Grid software (Globus, Nexus, GrADS middleware) • Permits simulation of varying loads and configurations • Stress GrADS components (Performance modeling and control)

  24. Research Strategy • Applications Studies • Prototype a series of applications using components of envisioned execution system • ScaLAPACK and Cactus demonstration projects • Move from Hand Development to Automated System • Identify key components that can be isolated and built into a Grid execution system • e.g., prototype reconfiguration system • Use experience to elaborate design of software support systems • Experiment • Use testbeds to evaluate results and refine design

  25. Progress Report • Testbeds Working • Preliminary Application Studies Complete • ScaLAPACK and Cactus • GrADS functionality built in

  26. ScaLAPACK Across 3 Clusters [Chart: execution time in seconds (0–3500) versus matrix size (0–20,000) for processor selections drawn from the OPUS, TORC, and CYPHER clusters, including 5 OPUS; 8 OPUS; 8 OPUS + 6 CYPHER; 6 OPUS + 5 CYPHER; 8 OPUS + 2 TORC + 6 CYPHER; 8 OPUS + 4 TORC + 4 CYPHER; and 2 OPUS + 4 TORC + 6 CYPHER]

  27. Largest Problem Solved • Matrix of size 30,000 • 7.2 GB for the data • 32 processors to choose from at UIUC and UT • Not all machines had 512 MB of memory; some had as little as 128 MB • The performance model chose 17 processors in 2 clusters from UT • Computation took 84 minutes • 3.6 Gflop/s total • 210 Mflop/s per processor • ScaLAPACK on a dedicated cluster of 17 processors would get about 50% of peak • Processors are 500 MHz, or 500 Mflop/s peak • This Grid computation achieved about 20% less than dedicated ScaLAPACK (see the check below)
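
The quoted rates are consistent with the standard 2n³/3 flop count for LU factorization; a quick check:

    # Check the quoted rates against the standard 2n^3/3 flop count for LU.
    # (30,000^2 double-precision entries is also the quoted 7.2 GB.)
    n, procs, minutes = 30_000, 17, 84
    flops = 2 * n**3 / 3                      # ~1.8e13 operations
    gflops = flops / (minutes * 60) / 1e9
    print(f"{gflops:.1f} Gflop/s total, "
          f"{1000 * gflops / procs:.0f} Mflop/s per processor")
    # -> 3.6 Gflop/s total, 210 Mflop/s per processor (42% of 500 Mflop/s peak)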

  28. PDSYEVX – Timing Breakdown

  29. Cactus [Diagram: run spanning the SDSC IBM SP (1024 procs; 5x12x17 = 1020 used) and the NCSA Origin Array (256+128+128 procs; 5x12x(4+2+2) = 480 used), with Gig-E at 100 MB/sec within sites and an OC-12 line between them (but only 2.5 MB/sec realized)] • Solved equations for gravitational waves (real code) • Tightly coupled; communication required through derivatives • Must communicate 30 MB/step between machines • Each time step takes 1.6 sec • Used 10 ghost zones along the direction spanning the machines: communicate every 10 steps (see the sketch below) • Compression/decompression on all data passed in this direction • Achieved 70-80% scaling, ~200 GF (only 14% scaling without these tricks) • Gordon Bell Award winner at SC’2001
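
A sketch of the wide-ghost-zone trick on a 1-D 3-point stencil, with two subdomains standing in for the two machines; Cactus's 3-D scheme and the compression step are omitted, and all sizes are illustrative:

    # Wide ghost zones: with G ghost cells per side, neighbors exchange
    # boundary data only every G steps, recomputing redundantly in between,
    # trading extra flops for far fewer (expensive) messages.
    import numpy as np

    N, G, STEPS = 32, 10, 40        # interior cells, ghost width, total steps
    left  = np.random.rand(N + 2 * G)   # layout: [ghosts | interior | ghosts]
    right = np.random.rand(N + 2 * G)

    def exchange(a, b):
        # In Cactus this is the expensive inter-machine message; here a copy.
        a[G + N:] = b[G:2 * G]      # a's right ghosts <- b's leftmost interior
        b[:G]     = a[N:G + N]      # b's left ghosts  <- a's rightmost interior

    def step(u, k):
        # 3-point smoothing. After k steps since the last exchange, only
        # cells [k, size-k) are still trustworthy, so shrink the update region.
        lo, hi = k, u.size - k
        u[lo + 1:hi - 1] = (0.5 * u[lo + 1:hi - 1]
                            + 0.25 * (u[lo:hi - 2] + u[lo + 2:hi]))

    exchange(left, right)
    for s in range(STEPS):
        k = s % G
        step(left, k)
        step(right, k)
        if k == G - 1:              # ghost zones exhausted: exchange again
            exchange(left, right)

    # 5 exchanges (including the initial fill) instead of 40: one message
    # every G steps, at the price of redundant work in the ghost regions.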

  30. Progress Report • Testbeds Working • Application Studies Complete • ScaLAPACK and Cactus • GrADS functionality built in • Prototype Execution System Complete • All components of Execution System (except rescheduling/migration) • Six applications working in new framework • Demonstrations at SC02 • ScaLAPACK, FASTA, Cactus, GrADSAT • In NPACI, NCSA, Argonne, Tennessee, Rice booths • Prototype Program Preparation Tools Under Development • Preliminary experiments with black-box performance model construction • Prototype mapper generator complete • Generated Grid version of the HPF application Tomcatv

  31. SC02 Demo Applications • ScaLAPACK • LU decomposition of large matrices • Cactus • Solver for gravitational wave equations • Collaboration with Ed Seidel’s GridLab • FASTA • Biological sequence matching on distributed databases • Smith-Waterman • Another sequence-matching application, using an exact dynamic-programming algorithm • Tomcatv • Vectorized mesh generation written in HPF • Satisfiability (GrADSAT) • An NP-complete problem useful in circuit design and verification

  32. Summary • Goal: • Design and build programming systems for the Grid that broaden the community of users who can develop and run applications in this complex environment • Strategy: • Build an execution environment that automates the most difficult tasks • Maps applications to available resources • Manages adaptation to varying loads and changing resources • Automate the process of producing Grid-ready programs • Generate performance models and mapping strategies semi-automatically • Construct programs using high-level domain-specific programming interfaces

  33. Resources • GrADS Web Site • http://hipersoft.rice.edu/grads/ • Contains: • Planning reports • Technical reports • Links to papers
