

Computational Grids and Computational Economy: Nimrod/G Approach
Nimrod/G: Economic/Market-based Resource Management and Scheduling (for Parametric Modeling) on the Global Computational Grid
Project Team: David Abramson, Rajkumar Buyya, and Jonathan Giddy


Presentation Transcript


1. Computational Grids and Computational Economy: Nimrod/G Approach
Nimrod/G: Economic/Market-based Resource Management and Scheduling (for Parametric Modeling) on the Global Computational Grid
Project Team: David Abramson, Rajkumar Buyya, and Jonathan Giddy

2. Parametric Modeling
Study the behaviour of output variables against a range of different input scenarios.
Execute one application repeatedly for many combinations of input parameters.
Coarse-grained SPMD (single program, multiple data) model.

for i in (10, 20, 30, 40, 50, 60, 70, 80, 90, 100):
    for j in ('v', 'w', 'x', 'y', 'z'):
        myprog $i $j > output.$i.$j
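The parameter sweep in the slide above can be made concrete in Python. This sketch only enumerates the 50 parameter combinations the nested loop would generate; the actual `myprog` invocation is the user's application and is omitted:

```python
import itertools

# Coarse-grained SPMD: the same program runs once per parameter
# combination, so the experiment is just the cross product below.
i_values = range(10, 101, 10)           # 10, 20, ..., 100
j_values = ['v', 'w', 'x', 'y', 'z']

jobs = list(itertools.product(i_values, j_values))
# 50 independent jobs; each tuple (i, j) becomes one invocation:
#   myprog <i> <j> > output.<i>.<j>
```

Because each job is independent, the list can be dispatched to any number of machines in any order, which is what makes this model a natural fit for a grid scheduler.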

3. Working with Small Clusters
Nimrod (1994– ): DSTC-funded project; designed for department-level clusters; proof of concept.
Clustor (Activetools) (1997– ): commercial version of Nimrod; re-engineered.
Features: workstation orientation; access to idle workstations; random allocation policy; password security.

4. Execution Architecture
[diagram: the root machine performs parameter substitution on input files, dispatches jobs to the computational nodes, and collects the output files]

  5. Clustor Tools

  6. Dispatch cycle using Clustor...

7. Sample Applications of Clustor
Bioinformatics: protein modeling
Sensitivity experiments on smog formation
Parametric study of laser detuning
Combinatorial optimization: simulated annealing
Ecological modeling: control strategies for cattle tick
Electronic CAD: field-programmable gate arrays
Computer graphics: ray tracing
High-energy physics: searching for rare events
Physics: laser-atom collisions
VLSI design: SPICE simulations

8. Clustor Limitations
Manual resource location: static file of machine names
No resource scheduling: first come, first served
No cost model: all machines cost alike
Single access mechanism

9. Requirements
Users and system managers want to know:
where it will run
when it will run
how much it will cost
that access is secure
homogeneous access

  10. Towards Grid Computing…. Source: www.globus.org & updated

11. Why "The Grid"?
New applications based on high-speed coupling of people, computers, databases, instruments, etc.:
Computer-enhanced instruments
Collaborative engineering
Browsing of remote datasets
Use of remote software
Data-intensive computing
Very large-scale simulation
Large-scale parameter studies
Source: www.globus.org

12. The Grid Vision
To offer "dependable, consistent, pervasive access to [high-end] resources":
Dependable: can provide performance and functionality guarantees
Consistent: uniform interfaces to a wide variety of resources
Pervasive: ability to "plug in" from anywhere
Source: www.globus.org

13. Challenging Issues
• Authenticate once
• Specify simulation (code, resources, etc.)
• Locate resources
• Negotiate authorization, acceptable use, etc.
• Acquire resources
• Initiate computation
• Steer computation
• Access remote datasets
• Collaborate on results
• Account for usage
[diagram: these steps span multiple administrative domains]
Source: www.globus.org

14. Standards & Commodity Tech
Where appropriate, exploit standards and commodity technology in core infrastructure: LDAP, SSL, X.509, GSS-API, GAA-API, HTTP, FTP, XML, etc. This provides leverage.
Interface with other common standards: CORBA, Java/Jini, DCOM, the Web, etc. While our core infrastructure may not be built on one of these distributed architectures, we must cleanly interface with them.
Source: www.globus.org

15. The Globus Project
Basic research in grid-related technologies: resource management, QoS, networking, storage, security, adaptation, policy, etc.
Development of the Globus toolkit: core services for grid-enabled tools & applications
Construction of a large grid testbed, GUSTO: the largest grid testbed in terms of sites & applications
Application experiments: tele-immersion, distributed computing, etc.
Source: www.globus.org

16. Layered Architecture (Grid Components)
Applications
High-level services and tools: GlobusView, Testbed Status, Nimrod/G, DUROC, MPI, MPI-IO, CC++, globusrun
Core services: Nexus, GRAM, Metacomputing Directory Service, Globus Security Interface, Heartbeat Monitor, Gloperf, GASS
Local services: Condor, MPI, TCP, UDP, LSF, Easy, NQE, AIX, Irix, Solaris
Source: www.globus.org

17. Core Globus Services
Communication infrastructure (Nexus, IO)
Information services (MDS)
Network performance monitoring (Gloperf)
Process monitoring (HBM)
Remote file and executable management (GASS and GEM)
Resource management (GRAM)
Security (GSI)
Source: www.globus.org

18. Nimrod/G Architecture
[diagram: Nimrod/G clients connect to the parametric engine, which works with the schedule advisor, resource discovery, persistent info store, and dispatcher; these in turn use the grid directory services and grid middleware services running over the GUSTO testbed]

19. Nimrod/G Interactions
[diagram: on the root node, the parametric engine and scheduler perform resource location via the MDS server; the dispatcher requests resource allocation through the gatekeeper node's GRAM server and local queuing system; on the computational node, a job wrapper runs the user process, which accesses files via the GASS server]
Additional services used implicitly: GSI (authentication & authorization), Nexus (communication)

20. Global vs. Cost-Based Resource Allocation
Global resource allocation: global information is hard to get and quickly out of date; load balancing; fairness to multiple users
Cost-based resource allocation: global limits are easy to set and fairly stable; load profiling

21. Computational Economy
Resource selection based on real money and market mechanisms
A large number of sellers and buyers (resources may be dedicated or shared)
Negotiation: tenders/bids; select those offers that meet the requirements
Trading and advance resource reservation
Schedule computations on those resources that meet all requirements
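A minimal sketch of the tender/bid selection step, assuming a hypothetical bid format: the field names, site names, and prices below are illustrative, not Nimrod/G's actual negotiation protocol.

```python
# Hypothetical bids: each resource owner quotes a price per CPU-hour
# (all values here are made up for illustration).
bids = [
    {"resource": "siteA", "price": 5.0, "arch": "sparc"},
    {"resource": "siteB", "price": 2.5, "arch": "intel"},
    {"resource": "siteC", "price": 4.0, "arch": "intel"},
]

def select_offers(bids, required_arch, max_price):
    """Keep only offers that meet the requirements, cheapest first."""
    ok = [b for b in bids
          if b["arch"] == required_arch and b["price"] <= max_price]
    return sorted(ok, key=lambda b: b["price"])

offers = select_offers(bids, required_arch="intel", max_price=4.5)
# siteB (2.5) is preferred over siteC (4.0); siteA fails both tests
```

The key design point is that selection is driven by the buyer's constraints and the sellers' quoted prices rather than by a global load-balancing view.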

22. Cost Model
Non-uniform costing: varies from time to time, from one user to another, and with usage duration
Encourages use of local resources first: a user can access remote resources, but pays a penalty in higher cost.
Example cost table:
          Machine 1   Machine 5
User 1        1           3
User 5        2           1
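The per-user, per-machine cost table on this slide can be expressed as a small lookup; the unit (price per CPU-hour) is an assumption for illustration, since the slide gives only relative numbers.

```python
# Cost table from the slide, read as cost[user][machine]
# (rate per CPU-hour is an assumed unit, not from the source).
cost = {
    "User 1": {"Machine 1": 1, "Machine 5": 3},
    "User 5": {"Machine 1": 2, "Machine 5": 1},
}

def job_cost(user, machine, cpu_hours):
    """Total cost: the remote machine charges this user a higher rate."""
    return cost[user][machine] * cpu_hours

# For User 1, 10 CPU-hours cost 10 on local Machine 1
# but 30 on remote Machine 5: the penalty that keeps work local.
```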

23. A Nimrod/G User Console
[screenshot: console showing the deadline, cost, and available machines]

  24. Some early results

25. Related Work
AppLeS (UC San Diego): application-level scheduling, case-by-case
NetSolve (UTK/ORNL): API for creating farms
DISCWorld (U. Adelaide): remote information access
Millennium (UC Berkeley): remote execution environment on clusters; supports a computational economy

26. Conclusions
The Nimrod/G architecture offers a scalable model for resource management and scheduling on computational grids.
It supports a computational economy.
The current model, which supports parametric computing, can be extended to parallel jobs or other computational models.
We plan to use advance resource reservation so that a user can say: "I am willing to pay $…, can you complete my job by this time…?"
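With an advance reservation in hand, the "I am willing to pay this much, can you finish by this time" query could reduce to a simple feasibility check. This is an illustrative sketch under assumed names and units, not the Nimrod/G implementation:

```python
def can_accept(rate, hours, budget, deadline_hours):
    """Feasibility of a reserved slot: it must finish before the
    deadline AND its total price must fit the user's budget.
    rate is an assumed price per CPU-hour; all names are illustrative."""
    return hours <= deadline_hours and rate * hours <= budget

# 10 hours at rate 2.5 costs 25: within a budget of 30
# and a 12-hour deadline, so the reservation is accepted.
ok = can_accept(rate=2.5, hours=10, budget=30, deadline_hours=12)
```

In practice the scheduler would run this check against every candidate reservation offer and accept the cheapest feasible one; rejecting all offers answers the user's question with "no".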

27. Further Information
Nimrod/G: www.csse.monash.edu.au/~davida/nimrod.html
Active Tools (Clustor): www.activetools.com

  28. Closed systems
