
Summary for Chapter 5 --Distributed Process Scheduling



Presentation Transcript


  1. Summary for Chapter 5 -- Distributed Process Scheduling. Student: Zhibo Wang. Professor: Yanqing Zhang.

  2. Chapter Outline • Three process models: precedence, communication, and disjoint • A system performance model that illustrates the relationship among the algorithm, scheduling, and architecture • Static scheduling: precedence and communication models • Dynamic scheduling: load sharing and balancing for disjoint and interacting process models • Implementation: remote service and execution, and process migration • Real-time scheduling and synchronization

  3. Process Models A process model depicts the relationship among the algorithm, scheduling, and architecture, and describes interprocess communication. There are three basic types of model: • Precedence process model • Communication process model • Disjoint process model The precedence model is described by a DAG (Directed Acyclic Graph).

  4. Process Models Precedence process model • This model is best applied to concurrent processes generated by concurrent-language constructs such as fork/join. • A useful objective is to minimize the completion time, including both computation and communication.
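To make the precedence model concrete, here is a minimal Python sketch (not from the slides; task names and costs are invented) that computes the minimum completion time of a task DAG as its longest path, assuming unlimited processors and no communication cost:

```python
def makespan(cost, edges):
    """cost: {task: execution time}; edges: iterable of (pred, succ) pairs.
    Returns the DAG's critical-path length, i.e. the best possible
    completion time with unlimited processors and free communication."""
    preds = {t: [] for t in cost}
    succs = {t: [] for t in cost}
    for u, v in edges:
        succs[u].append(v)
        preds[v].append(u)
    finish = {}
    ready = [t for t in cost if not preds[t]]   # tasks with no predecessors
    while ready:
        t = ready.pop()
        # a task may start once all its predecessors have finished
        finish[t] = max((finish[p] for p in preds[t]), default=0) + cost[t]
        for s in succs[t]:
            if all(p in finish for p in preds[s]):
                ready.append(s)
    return max(finish.values())

# hypothetical fork/join program: A forks into B and C, which join at D
print(makespan({'A': 2, 'B': 3, 'C': 1, 'D': 2},
               [('A', 'B'), ('A', 'C'), ('B', 'D'), ('C', 'D')]))  # -> 7
```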

  5. Process Models Communication process model • In this model, processes coexist and communicate asynchronously. • Edges in this model represent the need for communication between processes.

  6. Process Models Disjoint process model • In this model, processes run independently and complete in finite time. • Processes are mapped to processors to maximize processor utilization and minimize the turnaround time of the processes.

  7. System Performance Model Partitioning a task into multiple processes for execution can result in a speedup of the total task completion time. The speedup factor S is a function S = F(algorithm, system, schedule). The unified speedup model integrates three major components • Algorithm development • System architecture • Scheduling policy with the objective of minimizing the total completion time (makespan) of a set of interacting processes.

  8. Speedup The speedup can be expressed as S = (RC / RP) · n / (1 + ρ), where • n -- number of processors • ρ -- efficiency loss when the algorithm is implemented on a real machine • RC -- relative concurrency • RP -- relative processing requirement Speedup depends on: • the design and efficiency of the scheduling algorithm • the architecture of the system
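Assuming the textbook's decomposition S = (RC/RP) · n/(1 + ρ), a tiny helper makes the roles of the four parameters explicit (illustrative only; the sample values are invented):

```python
def speedup(n, rc, rp, rho):
    """Unified speedup model: S = (RC / RP) * n / (1 + rho).
    n   -- number of processors
    rc  -- relative concurrency (how evenly the work keeps processors busy)
    rp  -- relative processing requirement of the concurrent algorithm
    rho -- efficiency loss on a real machine (scheduling + communication)"""
    return (rc / rp) * n / (1.0 + rho)

# ideal case: perfect concurrency, no extra work, no loss -> linear speedup
print(speedup(4, rc=1.0, rp=1.0, rho=0.0))  # -> 4.0
# half the concurrency and 100% overhead eat the whole gain
print(speedup(4, rc=0.5, rp=1.0, rho=1.0))  # -> 1.0
```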

  9. System Performance model If processes are not constrained by precedence relations and are free to be redistributed or moved around among processors in the system, performance can be further improved by sharing the workload • statically - load sharing • dynamically - load balancing

  10. Static Process Scheduling • Scheduling a set of partially ordered tasks on a nonpreemptive multiprocessor system of identical processors to minimize the overall finishing time (makespan) • Except for some very restricted cases, scheduling to optimize makespan is NP-complete • Most research is oriented toward using approximate or heuristic methods to obtain a near-optimal solution to the problem • A good heuristic distributed scheduling algorithm is one that can best balance and overlap computation and communication

  11. Static Process Scheduling • Static process scheduling tries to find a near-optimal solution to the problem. • There are two extreme cases of work assignment. • The mapping of processes to processors is done before execution; once a process starts, it stays at its processor until completion and is never preempted. • The main drawback of static process scheduling is that it is non-adaptive: decisions are fixed before run time and cannot respond to load changes.

  12. Precedence Process Model • Computational model: this model describes scheduling for a 'program' that consists of several subtasks; the schedulable unit is the subtask. • The primary objective of task scheduling is to achieve maximal concurrency for task execution within a program. • Scheduling goal: minimize the makespan.

  13. Precedence Process Model Algorithms: • List Scheduling (LS): communication overhead is not considered. Uses a simple greedy heuristic: no processor remains idle if there is some available task it could process. • Extended List Scheduling (ELS): applies the scheduling results of LS but accounts for communication delays. • Earliest Task First scheduling (ETF): the earliest schedulable task (with communication delay considered) is scheduled first. • What are the scheduling results for the above example with two processors? How about four processors?
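The greedy LS heuristic can be sketched in a few lines of Python (illustrative; it ignores communication costs, so it corresponds to LS rather than ELS or ETF, and the example task set is invented):

```python
import heapq

def list_schedule(cost, edges, m):
    """Greedy list scheduling (LS) on m identical processors, no
    communication cost: no processor stays idle while a task is ready.
    cost: {task: time}; edges: (pred, succ) pairs. Returns finish times."""
    preds = {t: set() for t in cost}
    for u, v in edges:
        preds[v].add(u)
    free = [(0, p) for p in range(m)]        # (time processor becomes free, id)
    heapq.heapify(free)
    finish, unscheduled = {}, list(cost)
    while unscheduled:
        t_free, p = heapq.heappop(free)      # earliest-free processor
        ready = [t for t in unscheduled if preds[t] <= finish.keys()]
        def start(t):                        # earliest start on this processor
            return max([t_free] + [finish[u] for u in preds[t]])
        task = min(ready, key=start)
        finish[task] = start(task) + cost[task]
        heapq.heappush(free, (finish[task], p))
        unscheduled.remove(task)
    return finish

# hypothetical fork/join program on 2 processors
fin = list_schedule({'A': 1, 'B': 2, 'C': 2, 'D': 1},
                    [('A', 'B'), ('A', 'C'), ('B', 'D'), ('C', 'D')], m=2)
print(max(fin.values()))  # -> 4 (B and C overlap on the two processors)
```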

  14. Communicating Process Model • There are no precedence constraints among processes. • Modeled by an undirected graph G: nodes represent processes, and the weight on an edge is the amount of communication between the two connected processes. • Scheduling goal: maximize resource utilization.
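Given a placement of processes on processors, the communication cost that scheduling tries to reduce is simply the total weight of graph edges crossing processor boundaries. A small illustrative helper (process names and weights are invented):

```python
def comm_cost(weights, placement):
    """weights: {(u, v): messages} for an undirected graph;
    placement: {process: processor}. Returns total traffic between
    processes placed on different processors (co-located pairs are free)."""
    return sum(w for (u, v), w in weights.items()
               if placement[u] != placement[v])

weights = {('A', 'B'): 4, ('B', 'C'): 2, ('A', 'C'): 1}
# placing the heavily communicating pair A,B together keeps cost low
print(comm_cost(weights, {'A': 0, 'B': 0, 'C': 1}))  # -> 3
print(comm_cost(weights, {'A': 0, 'B': 1, 'C': 1}))  # -> 5
```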

  15. Dynamic Load Sharing and Balancing • Load balancing is a technique to distribute work across many computers, processes, or other resources to obtain optimal resource utilization. • The controller reduces process idling through load sharing (joining the shortest queue) and equalizes queue sizes through load balancing. • Further, processes can be allowed to move from longer queues to shorter queues through load redistribution.
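The join-the-shortest-queue rule mentioned above can be sketched in a few lines (illustrative; queues are plain lists of job identifiers):

```python
def join_shortest_queue(queues, job):
    """Load sharing: dispatch the incoming job to the processor whose
    queue is currently shortest, reducing the chance of an idle processor
    while another has work waiting. Returns the chosen queue."""
    shortest = min(queues, key=len)
    shortest.append(job)
    return shortest

queues = [[1, 2], [3], [4, 5, 6]]
join_shortest_queue(queues, 7)
print(queues)  # -> [[1, 2], [3, 7], [4, 5, 6]]
```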

  16. Sender-Initiated Algorithm • It is activated by a sender that wishes to off-load some of its computation by migrating processes from a heavily loaded sender to a lightly loaded receiver. • Transferring a process from a sender to a receiver requires three basic decisions: • Transfer policy: when does a node become a sender? • Selection policy: how does the sender choose a process for transfer? • Location policy: which node should be the target receiver?
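The three policies can be sketched as simple threshold rules; the threshold value, probe limit, and newest-process selection rule below are illustrative assumptions, not taken from the text:

```python
import random

def is_sender(queue_len, threshold=3):
    # Transfer policy: a node becomes a sender when its queue
    # exceeds a threshold (threshold value is illustrative).
    return queue_len > threshold

def select_process(queue):
    # Selection policy: one common choice is the most recently arrived
    # process, which has accumulated the least state to move.
    return queue[-1]

def find_receiver(loads, threshold=3, probe_limit=3, rng=random):
    # Location policy: probe up to probe_limit randomly chosen nodes and
    # accept the first one whose load is below the threshold.
    for node in rng.sample(list(loads), min(probe_limit, len(loads))):
        if loads[node] < threshold:
            return node
    return None   # no lightly loaded node found among the probes

print(is_sender(5))                        # -> True
print(find_receiver({'a': 9, 'b': 9}))     # -> None (everyone is busy)
```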

  17. Receiver-Initiated Algorithm • These are pull models, in which a receiver can pull a process from another node to its site for execution. • They are more stable than sender-initiated algorithms. • At high system load, process migrations are few and a sender can be found easily. • Receiver-initiated algorithms perform better than sender-initiated algorithms at high system loads. • Both algorithms can be combined, depending on RT and ST.

  18. Distributed Process Implementation Depending on how the request messages are interpreted, there are three main application scenarios: • Remote Service • The message is interpreted as a request for a known service at the remote site. • Remote Execution • The messages contain a program to be executed at the remote site. • Process Migration • The messages represent a process being migrated to a remote site for continuing the execution.

  19. Remote Service • As remote procedure calls at the language level • As remote commands at the operating system level • As interpretive messages at the application level

  20. Remote Execution • The purpose of remote service is to access a service on the remote host; unlike remote service, a remotely executed process maintains the view of its originating system. • Some implementation issues: • Load-sharing algorithms • Location independence • System heterogeneity • Protection and security

  21. Load-Sharing Algorithm • Each process server is responsible for maintaining load information. • The list of participating hosts is broadcast. • The selection procedure is performed by a centralized broker process. • Once a remote host is selected: • The client process server indicates the resource requirements to the process server at the remote site. • If the client is authenticated and its resource requirements can be met, the server grants permission for remote execution. • The transfer of the code image follows, and the server creates the remote process and its stub. • The client initializes the process forked at the remote site.

  22. Location Independence • Processes created by remote execution require coordination to accomplish a common task. • So it is necessary to support logical views of the processes. • Each remote process is represented by an agent process at the originating host. • It then appears as though the process is running on a single machine.

  23. System Heterogeneity • If remote execution is invoked on a heterogeneous host, the program must be recompiled, which is an overhead issue. • Solution: use a canonical, machine-independent intermediate language for program execution.

  24. Process Migration • The message represents a process being migrated to a remote site for continued execution. • Process migration facility • State and context transfer: transfers both the computation state and the communication state of the process.

  25. Real-Time Scheduling • Systems that ensure certain actions are taken within specified time constraints are called real-time systems. They can be classified as: • Static vs. dynamic • Preemptive vs. non-preemptive • Global vs. local

  26. Rate Monotonic • It is easy to implement. • It sorts tasks by the lengths of their periods (shorter period means higher priority). • It also makes very good priority assignments. • Rate monotonic is an optimal static priority assignment algorithm.

  27. Deadline Monotonic: in real-time systems, some tasks need to complete execution a short time after being requested. Earliest Deadline First: applies dynamic priority scheduling to achieve better CPU utilization. Real-Time Synchronization: a set of tasks that cooperate to achieve a goal will need to share information and resources, or in other words, synchronize with other tasks.
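The classical schedulability checks for these policies can be sketched in Python. The Liu-Layland bound for rate monotonic is a sufficient test only (a task set failing it may still be schedulable), and the task sets below are hypothetical:

```python
def rm_schedulable(tasks):
    """Sufficient test for rate-monotonic scheduling (Liu & Layland):
    total utilisation <= n * (2^(1/n) - 1).
    tasks: list of (computation_time, period) pairs."""
    n = len(tasks)
    u = sum(c / p for c, p in tasks)
    return u <= n * (2 ** (1 / n) - 1)

def edf_schedulable(tasks):
    """Earliest Deadline First (deadlines equal to periods):
    schedulable exactly when total utilisation <= 1."""
    return sum(c / p for c, p in tasks) <= 1

light = [(1, 4), (1, 8)]   # utilisation 0.375
heavy = [(3, 4), (1, 8)]   # utilisation 0.875
print(rm_schedulable(light))   # -> True  (0.375 <= ~0.828)
print(rm_schedulable(heavy))   # -> False (sufficient test inconclusive)
print(edf_schedulable(heavy))  # -> True  (EDF uses the CPU more fully)
```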

  28. References [1] http://en.wikipedia.org/wiki/ [2] Randy Chow and Theodore Johnson, "Distributed Operating Systems & Algorithms", Addison-Wesley (all diagrams). [3] Dejan S. Milojicic, Fred Douglis, Yves Paindaveine, Richard Wheeler, and Songnian Zhou, "Process Migration", ACM Computing Surveys (CSUR), Volume 32, Issue 3 (September 2000). [4] S. Cheng, J. A. Stankovic, and K. Ramamritham, "Scheduling Algorithms for Hard Real-Time Systems: A Brief Survey", pp. 6-7 in Hard Real-Time Systems: Tutorial, IEEE (1988). [5] Rajgopal Kannan, "Distributed Process Scheduling: Issues in Distributed Scheduling", Advanced Operating Systems, Louisiana State University, www.csc.lsu.edu/

  29. Any Questions?
