
Chapter 6: CPU Scheduling

Presentation Transcript


  1. Chapter 6: CPU Scheduling • Basic Concepts • Scheduling Criteria • Scheduling Algorithms • Multiple-Processor Scheduling • Real-Time Scheduling • Algorithm Evaluation Operating System Concepts

  2. Basic Concepts • Maximum CPU utilization obtained with multiprogramming • CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait (Fig. 6.1). • CPU burst distribution is generally characterized as exponential or hyperexponential, with many short CPU bursts (I/O-bound), and a few long CPU bursts (CPU-bound) (Fig. 6.2). Operating System Concepts

  3. Alternating Sequence of CPU And I/O Bursts Operating System Concepts

  4. Histogram of CPU-burst Times Operating System Concepts

  5. CPU Scheduler • CPU scheduler (short-term scheduler) selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. • The records in the queues are generally process control blocks (PCBs) of the processes. • CPU scheduling decisions may take place when a process: 1. Switches from running to waiting state. 2. Switches from running to ready state. 3. Switches from waiting to ready. 4. Terminates. • Scheduling under 1 and 4 is nonpreemptive. • All other scheduling is preemptive. Operating System Concepts

  6. CPU Scheduler • Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state. This method was used by Windows 3.1 and the Macintosh operating system. • Under preemptive scheduling, a running process can be forced to give up the CPU before its current burst completes. • Preemptive scheduling can leave shared or kernel data in an inconsistent state if a process is preempted partway through updating it. • Sections of kernel code affected by interrupts must be guarded from simultaneous use, and those sections should be kept as short as possible. Operating System Concepts

  7. Dispatcher • Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves: • switching context • switching to user mode • jumping to the proper location in the user program to restart that program • Dispatch latency – time it takes for the dispatcher to stop one process and start another running. Operating System Concepts

  8. Scheduling Criteria • CPU utilization – keep the CPU as busy as possible • Throughput – # of processes that complete their execution per time unit • Turnaround time – amount of time to execute a particular process • Waiting time – amount of time a process has been waiting in the ready queue • Response time – amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment) Operating System Concepts
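
A minimal numeric illustration of how these criteria relate (the numbers are made up, not from the slides): for a process that does no I/O, turnaround time = completion time − arrival time, and waiting time = turnaround time − total CPU burst.

```c
/* Minimal sketch relating the criteria above for a single process that does no I/O.
   The numbers are illustrative, not taken from the slides. */
#include <stdio.h>

int main(void) {
    int arrival = 0, completion = 30, cpu_burst = 24;
    int turnaround = completion - arrival;   /* time from submission to completion */
    int waiting = turnaround - cpu_burst;    /* time spent in the ready queue */
    printf("turnaround = %d, waiting = %d\n", turnaround, waiting);   /* 30 and 6 */
    return 0;
}
```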

  9. Optimization Criteria • Maximize CPU utilization • Maximize throughput • Minimize turnaround time • Minimize waiting time • Minimize response time • It is often more important to minimize the variance in response time than the average, so that the system has reasonable and predictable response times. • CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. Operating System Concepts

  10. First-Come, First-Served (FCFS) Scheduling • Process / Burst Time: P1 24, P2 3, P3 3 • Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: | P1 | P2 | P3 | with boundaries 0, 24, 27, 30 • Waiting time for P1 = 0; P2 = 24; P3 = 27 • Average waiting time: (0 + 24 + 27)/3 = 17 Operating System Concepts
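
A minimal C sketch (not from the slides) of how the FCFS waiting times above can be computed, assuming all three processes arrive at time 0 in the order P1, P2, P3:

```c
/* Minimal sketch: FCFS waiting times for the burst times above,
   assuming all processes arrive at time 0 in the order given. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};            /* P1, P2, P3 */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];                /* every later process also waits for this burst */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);   /* 17.00 */
    return 0;
}
```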

  11. FCFS Scheduling (Cont.) • Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: | P2 | P3 | P1 | with boundaries 0, 3, 6, 30 • Waiting time for P1 = 6; P2 = 0; P3 = 3 • Average waiting time: (6 + 0 + 3)/3 = 3 • Much better than the previous case. • Convoy effect: short processes wait behind a long process. Operating System Concepts

  12. Shortest-Job-First (SJF) Scheduling • Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time. • The real difficulty with the SJF algorithm is knowing the length of the next CPU request. • SJF scheduling is used frequently in long-term scheduling. • The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts. Operating System Concepts

  13. Shortest-Job-First (SJF) Scheduling • Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time. • The real difficulty with the SJF algorithm is knowing the length of the next CPU request. • Two schemes: • nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst. • preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF). • SJF is optimal – it gives the minimum average waiting time for a given set of processes. Operating System Concepts

  14. Example of Non-Preemptive SJF • Process / Arrival Time / Burst Time: P1 0.0 7, P2 2.0 4, P3 4.0 1, P4 5.0 4 • SJF (non-preemptive) Gantt chart: | P1 | P3 | P2 | P4 | with boundaries 0, 7, 8, 12, 16 • Average waiting time = (0 + 6 + 3 + 7)/4 = 4 Operating System Concepts
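
A hedged sketch of the non-preemptive SJF schedule above: at every completion point, pick the shortest burst among the jobs that have already arrived. It assumes the CPU is never idle, which holds for this data set because P1 arrives at time 0.

```c
/* Sketch: non-preemptive SJF over the arrival/burst table above. */
#include <stdio.h>

int main(void) {
    double arrival[] = {0.0, 2.0, 4.0, 5.0};   /* P1..P4 */
    double burst[]   = {7.0, 4.0, 1.0, 4.0};
    int n = 4, done[4] = {0};
    double t = 0.0, total_wait = 0.0;

    for (int finished = 0; finished < n; finished++) {
        int pick = -1;
        for (int i = 0; i < n; i++)            /* shortest burst among arrived, unfinished jobs */
            if (!done[i] && arrival[i] <= t &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        total_wait += t - arrival[pick];       /* time this job spent in the ready queue */
        t += burst[pick];
        done[pick] = 1;
    }
    printf("average waiting time = %.2f\n", total_wait / n);   /* expect 4.00 */
    return 0;
}
```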

  15. Example of Preemptive SJF • Process / Arrival Time / Burst Time: P1 0.0 7, P2 2.0 4, P3 4.0 1, P4 5.0 4 • SJF (preemptive, SRTF) Gantt chart: | P1 | P2 | P3 | P2 | P4 | P1 | with boundaries 0, 2, 4, 5, 7, 11, 16 • Average waiting time = (9 + 1 + 0 + 2)/4 = 3 Operating System Concepts
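
A hedged sketch of the preemptive variant (SRTF), stepping the clock one time unit at a time and always running the arrived job with the shortest remaining time. It assumes the CPU is never idle while jobs remain, which holds for this data set.

```c
/* Sketch: shortest-remaining-time-first (preemptive SJF) over the same table. */
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 2, 4, 5};             /* P1..P4 */
    int remain[]  = {7, 4, 1, 4};
    int wait[4] = {0}, n = 4, left = n;

    for (int t = 0; left > 0; t++) {
        int pick = -1;
        for (int i = 0; i < n; i++)           /* shortest remaining time among arrived jobs */
            if (remain[i] > 0 && arrival[i] <= t &&
                (pick < 0 || remain[i] < remain[pick]))
                pick = i;
        for (int i = 0; i < n; i++)           /* every other arrived, unfinished job waits */
            if (i != pick && remain[i] > 0 && arrival[i] <= t)
                wait[i]++;
        if (--remain[pick] == 0)              /* pick is never -1 here: P1 arrives at t = 0 */
            left--;
    }
    printf("average waiting time = %.2f\n",
           (wait[0] + wait[1] + wait[2] + wait[3]) / 4.0);   /* expect 3.00 */
    return 0;
}
```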

  16. Determining Length of Next CPU Burst • Can only estimate the length. • Can be done by using the length of previous CPU bursts, using exponential averaging. Operating System Concepts

  17. Prediction of the Length of the Next CPU Burst Operating System Concepts

  18. Examples of Exponential Averaging • α = 0: τ_{n+1} = τ_n – recent history does not count. • α = 1: τ_{n+1} = t_n – only the actual last CPU burst counts. • If we expand the formula τ_{n+1} = α t_n + (1 − α) τ_n, we get: τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^{n+1} τ_0 • Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor. Operating System Concepts
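
A small C sketch of this recurrence, τ_{n+1} = α t_n + (1 − α) τ_n, using α = 1/2. The initial guess τ_0 = 10 and the measured burst lengths are example values, not data taken from the slides.

```c
/* Sketch: exponential-average prediction of the next CPU burst,
   tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, with alpha = 0.5.
   tau_0 and the burst lengths below are illustrative values. */
#include <stdio.h>

int main(void) {
    double alpha = 0.5, tau = 10.0;                 /* tau_0: initial guess */
    double bursts[] = {6, 4, 6, 4, 13, 13, 13};     /* measured burst lengths t_0, t_1, ... */
    int n = sizeof bursts / sizeof bursts[0];

    for (int i = 0; i < n; i++) {
        printf("predicted %5.2f, observed %5.2f\n", tau, bursts[i]);
        tau = alpha * bursts[i] + (1.0 - alpha) * tau;   /* update the estimate */
    }
    printf("next prediction: %5.2f\n", tau);
    return 0;
}
```

With α = 1/2, recent and past history are weighted equally, which is a common default choice.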

  19. Priority Scheduling • A priority number (integer) is associated with each process. • The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority). • Can be preemptive or nonpreemptive. • SJF is priority scheduling where the priority is the predicted next CPU burst time. • Problem: starvation – low-priority processes may never execute. • Solution: aging – as time progresses, increase the priority of the process. Operating System Concepts
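
A minimal sketch of one way to implement aging (the interval and data layout are illustrative, not from the slides): every clock tick, waiting processes accumulate time, and every few ticks their priority number is decreased, i.e., their priority is raised.

```c
/* Sketch of aging: each clock tick, boost the priority of every waiting process
   (smaller number = higher priority) so nothing starves indefinitely.
   The boost interval and process data are illustrative. */
#include <stdio.h>

struct proc { int pid; int priority; int waiting_ticks; };

static void age(struct proc *ready, int n, int boost_interval) {
    for (int i = 0; i < n; i++) {
        ready[i].waiting_ticks++;
        if (ready[i].waiting_ticks % boost_interval == 0 && ready[i].priority > 0)
            ready[i].priority--;                 /* move one step toward highest priority */
    }
}

int main(void) {
    struct proc ready[] = {{1, 6, 0}, {2, 3, 0}};
    for (int tick = 1; tick <= 30; tick++)
        age(ready, 2, 10);                       /* boost every 10 ticks */
    printf("P1 priority is now %d, P2 is %d\n", ready[0].priority, ready[1].priority);
    return 0;
}
```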

  20. Example of Priority Scheduling • Process / Burst Time / Priority: P1 5.0 6, P2 2.0 1, P3 1.0 3, P4 4.0 5, P5 2.0 2, P6 2.0 4 • Priority Gantt chart: | P2 | P5 | P3 | P6 | P4 | P1 | with boundaries 0, 2, 4, 5, 7, 11, 16 • Average waiting time = (0 + 2 + 4 + 5 + 7 + 11) / 6 = 4.83 Operating System Concepts

  21. Round Robin (RR) • Each process gets a small unit of CPU time (a time quantum), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. • If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units. • Performance: • q large ⇒ RR behaves like FIFO. • q small ⇒ q must still be large with respect to the context-switch time; otherwise the overhead is too high. Operating System Concepts

  22. Example of RR with Time Quantum = 20 • Process / Burst Time: P1 53, P2 17, P3 68, P4 24 • The Gantt chart is: | P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 | with boundaries 0, 20, 37, 57, 77, 97, 117, 121, 134, 154, 162 • Typically, RR gives a higher average turnaround time than SJF, but better response time. Operating System Concepts
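
A small sketch (not from the slides) that reproduces this schedule, assuming all four processes arrive at time 0. With all arrivals at time 0, cycling over the process array in order dispatches the processes in the same order a real FIFO ready queue would.

```c
/* Sketch: round-robin with a 20 ms quantum over the burst times above,
   assuming all arrivals at time 0. */
#include <stdio.h>

int main(void) {
    int remain[] = {53, 17, 68, 24};    /* P1..P4 */
    int n = 4, q = 20, t = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remain[i] == 0)
                continue;
            int slice = remain[i] < q ? remain[i] : q;   /* run for a quantum or until done */
            printf("t=%3d: P%d runs for %d\n", t, i + 1, slice);
            t += slice;
            remain[i] -= slice;
            if (remain[i] == 0)
                left--;
        }
    }
    printf("all done at t=%d\n", t);    /* expect 162 */
    return 0;
}
```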

  23. Time Quantum and Context Switch Time Operating System Concepts

  24. Turnaround Time Varies With The Time Quantum Operating System Concepts

  25. Multilevel Queue • The ready queue is partitioned into separate queues, e.g., foreground (interactive) and background (batch). • Each queue has its own scheduling algorithm: foreground – RR; background – FCFS. • Scheduling must also be done between the queues: • Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation. • Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS. Operating System Concepts

  27. Multilevel Queue Scheduling Operating System Concepts

  28. Multilevel Feedback Queue • A process can move between the various queues; aging can be implemented this way. • Multilevel-feedback-queue scheduler defined by the following parameters: • number of queues • scheduling algorithms for each queue • method used to determine when to upgrade a process • method used to determine when to demote a process • method used to determine which queue a process will enter when that process needs service Operating System Concepts

  29. Example of Multilevel Feedback Queue • Three queues: • Q0 – time quantum 8 milliseconds • Q1 – time quantum 16 milliseconds • Q2 – FCFS • Scheduling: • A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1. • At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2. Operating System Concepts
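
A minimal sketch of just the demotion rule in this example (it ignores arrivals, preemption by new Q0 jobs, and everything else a real scheduler handles): a job's queue level is determined by how much CPU time it has consumed so far.

```c
/* Sketch of the demotion rule above: a job gets 8 ms in Q0, then up to 16 more ms
   in Q1, and anything still unfinished runs FCFS in Q2. */
#include <stdio.h>

static int queue_for(int cpu_used_ms) {
    if (cpu_used_ms < 8)  return 0;     /* still within its Q0 quantum */
    if (cpu_used_ms < 24) return 1;     /* 8 ms used in Q0 + up to 16 ms more in Q1 */
    return 2;                           /* demoted to the FCFS queue */
}

int main(void) {
    int samples[] = {3, 8, 20, 24, 100};            /* illustrative CPU-time totals */
    for (int i = 0; i < 5; i++)
        printf("after %3d ms of CPU time -> Q%d\n", samples[i], queue_for(samples[i]));
    return 0;
}
```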

  30. Multilevel Feedback Queues Operating System Concepts

  31. Multiple-Processor Scheduling • CPU scheduling more complex when multiple CPUs are available. • Homogeneous processors are identical in terms of their functionality within a multiprocessor system. • Load sharing can occur if several identical processors are available. • Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing. Operating System Concepts

  32. Real-Time Scheduling • Hard real-time systems – required to complete a critical task within a guaranteed amount of time. • The scheduler either admits the process, guaranteeing that it can be completed on time, or rejects the request; this is known as resource reservation. • Usually, hard real-time systems are composed of special-purpose software running on dedicated hardware. • Soft real-time computing – requires that critical processes receive priority over less fortunate ones. • Implementing soft real-time functionality requires: • The system must have priority scheduling. • The dispatch latency must be small. Operating System Concepts

  33. Real-Time Scheduling • To keep dispatch latency low, system calls must be preemptible: either insert preemption points in long system calls, or make the entire kernel preemptible. • A high-priority process may end up waiting for a lower-priority one to finish; this situation is known as priority inversion. • Priority inversion can be solved via the priority-inheritance protocol, in which the processes holding the needed resources inherit the high priority until they are finished with them. • The conflict phase of dispatch latency has two components: • Preemption of any process running in the kernel. • Release by low-priority processes of resources needed by the high-priority process. Operating System Concepts
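
A minimal sketch of the priority-inheritance idea (field and function names are illustrative, not from the slides): while a low-priority task holds a resource that a high-priority task is blocked on, the holder temporarily runs at the blocked task's priority and drops back when it releases the resource.

```c
/* Sketch of priority inheritance: the holder of a contended resource temporarily
   inherits the priority of the highest-priority waiter (smaller number = higher). */
#include <stdio.h>

struct task { const char *name; int base_prio; int eff_prio; };

static void block_on(struct task *waiter, struct task *holder) {
    if (waiter->eff_prio < holder->eff_prio)      /* waiter is more urgent */
        holder->eff_prio = waiter->eff_prio;      /* holder inherits its priority */
}

static void release(struct task *holder) {
    holder->eff_prio = holder->base_prio;         /* drop back when the resource is freed */
}

int main(void) {
    struct task low  = {"low",  10, 10};
    struct task high = {"high",  1,  1};
    block_on(&high, &low);
    printf("%s now runs at priority %d\n", low.name, low.eff_prio);   /* 1 */
    release(&low);
    printf("%s back at priority %d\n", low.name, low.eff_prio);       /* 10 */
    return 0;
}
```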

  34. Dispatch Latency Operating System Concepts

  35. Algorithm Evaluation • Deterministic modeling – takes a particular predetermined workload and defines the performance of each algorithm for that workload. • Queueing models: • knowing arrival rates and service rates, we can compute utilization, average queue length, average wait time, and so on. • This area of study is called queueing-network analysis. • To get a more accurate evaluation of scheduling algorithms, we can use simulations. Simulations involve programming a model of the computer system. • Implementation – The only completely accurate way to evaluate a scheduling algorithm. Operating System Concepts

  36. Deterministic Modeling • Process / Burst Time: P1 10, P2 29, P3 3, P4 7, P5 12 (all arriving at time 0, in that order) • For the FCFS algorithm, the average waiting time is (0 + 10 + 39 + 42 + 49) / 5 = 28 milliseconds. • With the nonpreemptive SJF algorithm, the average waiting time is (10 + 32 + 0 + 3 + 20) / 5 = 13 milliseconds. • With the RR algorithm (time quantum 10 milliseconds), the average waiting time is (0 + 32 + 20 + 23 + 40) / 5 = 23 milliseconds. Operating System Concepts

  37. Evaluation of CPU Schedulers by Simulation Operating System Concepts

  38. Comparisons of Evaluation Methods • Deterministic modeling is too specific, and requires too much exact knowledge, to be useful. • Queueing models: • Little's formula: n = λ × W, where n is the average queue length, W is the average waiting time, and λ is the average arrival rate. • Queueing models are often only an approximation of a real system. • A more detailed simulation provides more accurate results, but the design, coding, and debugging of the simulator can be a major task. • The major difficulty of implementation is its cost. Operating System Concepts
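
A worked example of Little's formula, with numbers chosen purely for illustration: if processes arrive at an average rate of λ = 7 per second and the queue contains n = 14 processes on average, then the average waiting time per process is W = n / λ = 14 / 7 = 2 seconds.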

  39. Solaris 2 Scheduling • Solaris 2 uses priority-based process scheduling. • It has four scheduling classes, which are, in order of priority: real time, system, time sharing, and interactive. • The scheduler converts the class-specific priorities into global priorities and selects the thread with the highest global priority to run. • The selected thread runs on the CPU until one of the following occurs: • it blocks, • it uses up its time slice (if it is not a system thread), • it is preempted by a higher-priority thread. Operating System Concepts

  40. Solaris 2 Scheduling Operating System Concepts

  41. Windows 2000 Priorities • Windows 2000 schedules threads using a priority-based, preemptive scheduling algorithm. • The portion of the Windows 2000 kernel that handles scheduling is called the dispatcher. • Priorities are divided into two classes: the variable class contains threads with priorities from 1 to 15, and the real-time class contains threads with priorities from 16 to 31. • Windows 2000 distinguishes between the foreground process and background processes; a foreground process is given a time quantum three times longer than that of a background process. Operating System Concepts

  42. Windows 2000 Priorities Operating System Concepts

  43. An Example: Linux • Linux provides two separate process-scheduling algorithms: • One is a time-sharing algorithm for fair, preemptive scheduling among multiple processes. • The other is designed for real-time tasks, where absolute priorities are more important than fairness. • Linux uses a prioritized, credit-based algorithm for time-sharing processes. • Linux implements the two real-time scheduling classes required by POSIX.1b: first-come, first-served (FCFS) and round-robin (RR). Operating System Concepts
