Chapter 5, CPU Scheduling

  1. Chapter 5, CPU Scheduling

  2. 5.1 Basic Concepts • The goal of multi-programming is to maximize the utilization of the CPU as a system resource by having a process running on it at all times • Supporting multi-programming means encoding the ability in the O/S to switch between currently running jobs • Switching between jobs can be non-preemptive or preemptive

  3. Simple, non-preemptive scheduling means that a new process can be scheduled on the CPU only when the current job has begun waiting (for I/O, for example) • Non-preemptive means that the O/S will not preempt the currently running job in favor of another one • I/O is the classic case of waiting, and it is the scenario that is customarily used to explain scheduling concepts

  4. The CPU-I/O Burst Cycle • A CPU burst refers to the period of time when a given process occupies the CPU before making an I/O request or taking some other action which causes it to wait • CPU bursts are of varying length and can be plotted in a distribution by length

  5. Overall system activity can also be plotted as a distribution of CPU and other activity bursts by processes • The distribution of CPU burst lengths tends to be exponential or hyperexponential

  6. The CPU scheduler = the short term scheduler • Under non-preemptive scheduling, when the processor becomes idle, a new process has to be picked from the ready queue and have the CPU allocated to it • Note that the ready queue doesn’t have to be FIFO, although that is a simple, initial assumption • It does tend to be some sort of linked data structure with a queuing discipline which implements the scheduling algorithm
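
To make that last point concrete, here is a minimal sketch of a FIFO ready queue of PCBs in C. It is an invented illustration, not code from the slides: the struct fields and function names are hypothetical, and a real PCB carries much more state.

    /* Hypothetical sketch: a ready queue of PCBs with a FIFO queuing
       discipline.  All names here are invented for illustration. */
    #include <stddef.h>

    struct pcb {
        int pid;              /* process id                         */
        struct pcb *next;     /* link to the next PCB in the queue  */
    };

    struct ready_queue {
        struct pcb *head;     /* next process to be dispatched      */
        struct pcb *tail;     /* most recently enqueued process     */
    };

    /* Enqueue at the tail: the queuing discipline is what implements
       the scheduling algorithm; FIFO is only the simplest choice.   */
    void rq_enqueue(struct ready_queue *rq, struct pcb *p)
    {
        p->next = NULL;
        if (rq->tail)
            rq->tail->next = p;
        else
            rq->head = p;
        rq->tail = p;
    }

    /* Dequeue at the head: the short-term scheduler allocates the
       CPU to this process.                                          */
    struct pcb *rq_dequeue(struct ready_queue *rq)
    {
        struct pcb *p = rq->head;
        if (p) {
            rq->head = p->next;
            if (rq->head == NULL)
                rq->tail = NULL;
        }
        return p;
    }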

  7. Preemptive scheduling • Preemptive scheduling is more advanced than non-preemptive scheduling • It can take into account factors besides I/O waiting when deciding which job should be given the CPU • A list of scheduling decision points is given next; it is worthwhile to understand what each of them means

  8. Scheduling decisions can be made at these points: • A process goes from the run state to the wait state (e.g., I/O wait, wait for a child process to terminate) • A process goes from the run state to the ready state (e.g., as the result of an interrupt) • A process goes from the wait state to the ready state (e.g., I/O completes) • A process terminates

  9. Scheduling has to occur at points 1 and 4. • If it only occurs then, this is non-preemptive or cooperative scheduling • If scheduling is also done at points 2 and 3, this is preemptive scheduling

  10. Points 1 and 4 are given in terms of the job that will give up the CPU • Points 2 and 3 relate to which process might become available to run and could therefore preempt the currently running process • A small sketch of the distinction follows
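
The rule from slides 8–10 can be summarized in a few lines of C. This is an invented sketch, not anything from the slides; the enum values and the preemptive flag are hypothetical names.

    /* Hypothetical sketch of the four scheduling decision points.
       All names are invented for illustration. */
    enum sched_point {
        RUN_TO_WAIT,   /* 1: running process blocks, e.g., on I/O */
        RUN_TO_READY,  /* 2: running process is interrupted       */
        WAIT_TO_READY, /* 3: a waiting process's I/O completes    */
        TERMINATED     /* 4: running process exits                */
    };

    /* Scheduling must occur at points 1 and 4; a preemptive
       scheduler may also reschedule at points 2 and 3. */
    int should_reschedule(enum sched_point p, int preemptive)
    {
        if (p == RUN_TO_WAIT || p == TERMINATED)
            return 1;          /* no running process: must pick one */
        return preemptive;     /* points 2 and 3: preemptive only   */
    }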

  11. Historically, simple systems existed without timers, just like they existed without mode bits, for example • It is possible to write a simple, non-preemptive operating system for multi-programming without multi-tasking • Without a timer or other signaling, jobs could only be switched when one was waiting for I/O

  12. However, recall that much of the discussion in the previous chapters assumed the use of interrupts, timers, etc., to trigger a context switch • This implies preemptive scheduling • Preemptive schedulers are more difficult to write than non-preemptive schedulers, and they raise complex technical questions

  13. The problem with preemption comes from data sharing between processes • If two concurrent processes share data, preemption of one or the other can lead to inconsistent data, lost updates in the shared data, etc.

  14. Note that kernel data structures hold state for user processes • The user processes do not directly dictate what the kernel data structures contain, but by definition, the kernel holds the state of more than one user process at a time

  15. This means that the kernel data structures themselves have the characteristic of data shared between processes • As a consequence, in order to be correctly implemented, preemptive scheduling has to prevent inconsistent state in the kernel data structures

  16. Concurrency is rearing its ugly head again, even though it still hasn’t been thoroughly explained • The point is that concurrency is inherent to a preemptive scheduler • Therefore, a complete explanation of operating systems eventually requires a complete explanation of concurrency issues

  17. The idea that the O/S is based on shared data about processes can be explained concretely by considering the movement of PCB’s from one queue to another • If an interrupt occurs while one system process is moving a PCB, and the PCB has been removed from one queue, but not yet added to another, this is an error state • In other words, the data maintained internally by the O/S is now wrong/broken/incorrect…

  18. Possible solutions to the problem • So the question becomes: can the scheduler be coded so that inconsistent queue state cannot occur? • One solution would be to allow switching only when a process blocks for I/O • The idea is that interrupts will be queued rather than handled immediately (a queuing mechanism will be needed)

  19. This means that processes will run to a point where they can be moved to an I/O queue and the next process will not be scheduled until that happens • This solves the problem of concurrency in preemptive scheduling in a mindless way • This solution basically means backing off to non-preemptive scheduling

  20. Other solutions to the problem • 1. Only allow switching after a system call runs to completion. • In other words, make kernel processes uninterruptible. • If the code that moves PCB’s around can’t be interrupted, inconsistent state can’t result. • This solution also assumes a queuing system for interrupts.

  21. 2. Make certain code segments in the O/S uninterruptible • This is the same idea as the previous one, but with finer granularity • It increases concurrency because interrupts can at least occur during parts of kernel code, not just at the boundaries of kernel calls • A sketch of this idea follows
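
As an invented illustration of solutions 1 and 2 (none of these names come from the slides), the sketch below makes the PCB move from slide 17 an uninterruptible code segment. The interrupt-control functions are stand-ins for whatever mechanism the hardware actually provides.

    /* Hypothetical sketch: guarding a PCB move between kernel queues
       by disabling interrupts around the critical section.  All
       names are invented for illustration. */

    struct pcb { int pid; struct pcb *next; };
    struct queue { struct pcb *head; };

    /* Stand-ins for the hardware's interrupt control (e.g., cli/sti
       on x86); real kernels may use locks instead. */
    static void disable_interrupts(void) { /* hardware-specific */ }
    static void enable_interrupts(void)  { /* hardware-specific */ }

    /* Between the unlink and the relink, p is on neither queue.  If
       an interrupt handler walked the queues at that moment, it
       would see the inconsistent state described on slide 17.
       Disabling interrupts makes the whole move effectively atomic. */
    void move_pcb(struct queue *from, struct queue *to, struct pcb *p)
    {
        disable_interrupts();          /* enter uninterruptible segment */

        struct pcb **pp = &from->head; /* unlink p from 'from' */
        while (*pp && *pp != p)
            pp = &(*pp)->next;
        if (*pp)
            *pp = p->next;

        p->next = to->head;            /* relink p at the head of 'to' */
        to->head = p;

        enable_interrupts();           /* leave uninterruptible segment */
    }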

  22. Note that interruptibility of the kernel is related to the problem of real time operating systems • If certain code blocks are not interruptible, you are not guaranteed a fixed, maximum response time to any particular system request or interrupt that you generate

  23. You may have to wait an indeterminate amount of time while the uninterruptible code finishes processing • This violates the requirement for a hard real-time system

  24. Scheduling and the dispatcher • The dispatcher = the module called by the short term scheduler which • Switches context • Switches to user mode • Jumps to the proper location in the user program in order to resume it • Speed is desirable • Dispatch latency refers to the time lost in the switching process

  25. Scheduling criteria • There are various algorithms for scheduling • There are also various criteria for evaluating them • Performance is always a trade-off • No single scheduling algorithm can optimize all of the criteria at once

  26. Criteria • CPU utilization. The higher, the better. 40%-90% is realistic • Throughput = processes completed / unit time • Turnaround time = total time for any single process to complete • Waiting time = total time spent waiting in O/S queues • Response time = time between submission and first visible sign of response to the request—important in interactive systems
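
The relationships among these quantities can be shown with a small invented example; the struct and field names are hypothetical, and the numbers are made up to match the FCFS example later in the chapter.

    /* Hypothetical sketch: computing turnaround and waiting time
       from a completed schedule.  All names and numbers invented. */
    #include <stdio.h>

    struct proc {
        int arrival;     /* time the process entered the system */
        int burst;       /* total CPU time it needed             */
        int completion;  /* time it finished                     */
    };

    int main(void)
    {
        /* P1, P2, P3 run FCFS with bursts 24, 3, 3, arriving at 0 */
        struct proc p[] = { {0, 24, 24}, {0, 3, 27}, {0, 3, 30} };

        for (int i = 0; i < 3; i++) {
            int turnaround = p[i].completion - p[i].arrival;
            int waiting    = turnaround - p[i].burst; /* time queued */
            printf("P%d: turnaround %d ms, waiting %d ms\n",
                   i + 1, turnaround, waiting);
        }
        return 0;
    }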

  27. Depending on the criterion, you may want to: • Strive to attain an absolute maximum or minimum (utilization, throughput) • Minimize or maximize the average (turnaround, waiting) • Minimize or maximize the variance (for time-sharing, minimize the variance, for example)

  28. 5.3 Scheduling Algorithms • 5.3.1 First-Come, First-Served (FCFS) • 5.3.2 Shortest-Job-First (SJF) • 5.3.3 Priority • 5.3.4 Round Robin (RR) • 5.3.5 Multilevel Queue • 5.3.6 Multilevel Feedback Queue

  29. Reality involves a steady stream of many, many CPU bursts • Reality involves balancing a number of different performance criteria or measures • Examples of the different scheduling algorithms will be given below based on a very few processes and a limited number of bursts • The examples will be illustrated using Gantt charts • The scheduling algorithms will be evaluated and compared based on a simple measure of average waiting time

  30. FCFS Scheduling • The name, first-come, first-served, should be self-explanatory • This is an older, simpler scheduling algorithm • It is non-preemptive • It is not suitable for interactive time sharing • It can be implemented with a simple FIFO queue of PCB’s

  31. Consider the following scenario • Process Burst length • P1 24 ms. • P2 3 ms. • P3 3 ms.

  32. Gantt chart order: P1, P2, P3 • Avg. wait time = (0 + 24 + 27) / 3 = 17 ms.

  33. Compare with a different arrival order: • P2, P3, P1

  34. Avg. wait time = (0 + 3 + 6) / 3 = 3 ms.
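
Both results can be reproduced with a few lines of C. The burst lengths are the ones from the slides; the code itself is an invented sketch.

    /* Hypothetical sketch: average wait time under FCFS.  Under
       FCFS, each process waits for the sum of the bursts ahead of
       it in the queue. */
    #include <stdio.h>

    double fcfs_avg_wait(const int bursts[], int n)
    {
        int wait = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += wait;   /* wait time of process i          */
            wait += bursts[i];    /* later processes wait this burst */
        }
        return (double)total_wait / n;
    }

    int main(void)
    {
        int order1[] = {24, 3, 3};   /* arrival order P1, P2, P3 */
        int order2[] = {3, 3, 24};   /* arrival order P2, P3, P1 */
        printf("P1,P2,P3: %.1f ms\n", fcfs_avg_wait(order1, 3)); /* 17.0 */
        printf("P2,P3,P1: %.1f ms\n", fcfs_avg_wait(order2, 3)); /*  3.0 */
        return 0;
    }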

  35. Additional comments on performance analysis • It is clear that average wait time varies greatly depending on the arrival order of processes and their varying burst lengths • As a consequence, it is also possible to conclude that for any given set of processes and burst lengths, arbitrary FCFS scheduling does not result in a minimal or optimal average wait time

  36. FCFS scheduling is subject to the convoy effect • There is the initial arrival order of process bursts • After that, the processes enter the ready queue after I/O waits, etc. • Let there be one CPU bound job (long CPU burst) • Let there be many I/O bound jobs (short CPU bursts)

  37. Scenario: • The CPU bound job holds the CPU • The other jobs finish their I/O waits and enter the ready queue • Each of the other jobs is scheduled, FCFS, and is quickly finished with the CPU due to an I/O request • The CPU bound job then takes the CPU again

  38. CPU utilization may be high (good) under this scheme • The CPU bound job is a hog • The I/O bound jobs spend a lot of their time waiting • Therefore, the average wait time will tend to be high • Recall that FCFS is not preemptive, so once the jobs have entered, scheduling only occurs when a job voluntarily enters a wait state due to an I/O request or some other condition

  39. SJF Scheduling • The name, shortest-job-first, is not quite self-explanatory • Various ideas involved deserve explanation • Recall that these thumbnail examples of scheduling are based on bursts, not the overall job time • For scheduling purposes, it is the length of the next burst that is important • There is no perfect way of predicting the length of the next burst

  40. Implementing SJF in reality involves devising formulas for predicting the next burst length based on past performance • SJF can be a non-preemptive algorithm. The assumption now is that all processes are available at time 0 for scheduling and the shortest is chosen • A more descriptive name for the algorithm is “shortest next CPU burst” scheduling

  41. SJF can also be implemented as a preemptive algorithm. The assumption is that jobs enter the ready queue at different times. If a job with a shorter burst enters the queue when a job with a longer burst is running, the shorter job preempts the longer one • Under the preemptive scenario a more descriptive name for the algorithm would be “shortest remaining time first” scheduling
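
Here is an invented sketch of the preemptive variant; the arrival times and burst lengths are made up for illustration. At every millisecond the simulation gives the CPU to the arrived process with the shortest remaining time, so a newly arrived short job preempts a longer one.

    /* Hypothetical sketch: shortest-remaining-time-first simulation.
       All numbers and names are invented for illustration. */
    #include <stdio.h>

    int main(void)
    {
        int arrival[]   = {0, 1, 2, 3};   /* when P1..P4 arrive */
        int remaining[] = {8, 4, 9, 5};   /* their CPU bursts   */
        int n = 4, done = 0, t = 0;

        while (done < n) {
            int best = -1;
            for (int i = 0; i < n; i++)   /* shortest remaining time */
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (best < 0 || remaining[i] < remaining[best]))
                    best = i;
            if (best < 0) { t++; continue; }  /* CPU idle */
            remaining[best]--;                /* run for 1 ms */
            t++;
            if (remaining[best] == 0) {
                printf("P%d finishes at %d ms\n", best + 1, t);
                done++;
            }
        }
        return 0;
    }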

  42. Non-preemptive Example • Consider the following scenario: • Process burst length • P1 6 ms. • P2 8 ms. • P3 7 ms. • P4 3 ms.

  43. SJF order: P4, P1, P3, P2 • Average wait time = (0 + 3 + 9 + 16) / 4 = 7 ms.

  44. SJF average wait time is lower than the average wait time for FCFS scheduling of the same processes: FCFS average wait time = (0 + 6 + 14 + 21) / 4 = 10.25 ms.
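
Since non-preemptive SJF with all jobs available at time 0 is just FCFS on the bursts sorted into ascending order, both averages can be checked with a short invented sketch:

    /* Hypothetical sketch: FCFS vs. non-preemptive SJF on the
       slides' burst lengths.  The code is invented for illustration. */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    static double avg_wait(const int bursts[], int n)
    {
        int wait = 0, total = 0;
        for (int i = 0; i < n; i++) {
            total += wait;        /* wait time of process i */
            wait += bursts[i];
        }
        return (double)total / n;
    }

    int main(void)
    {
        int bursts[] = {6, 8, 7, 3};   /* P1, P2, P3, P4 as given */
        printf("FCFS: %.2f ms\n", avg_wait(bursts, 4));   /* 10.25 */
        qsort(bursts, 4, sizeof bursts[0], cmp_int);      /* SJF sort */
        printf("SJF:  %.2f ms\n", avg_wait(bursts, 4));   /* 7.00  */
        return 0;
    }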

  45. In theory, SJF is optimal for average wait time performance • Always doing the shortest burst first minimizes the aggregate wait time for all processes • This is only theoretical because burst length can’t be known • In a batch system user estimates might be used • In an interactive system user estimates make no sense

  46. Devising a formula for predicting burst time • The only basis for such a formula is past performance • What follows is the definition of an exponential average function for this purpose • Let t_n = actual, observed length of the nth CPU burst for a given process • Let τ_{n+1} = predicted value of the next burst • Let α be given such that 0 ≤ α ≤ 1 • Then define τ_{n+1} as follows: • τ_{n+1} = α·t_n + (1 − α)·τ_n

  47. Explanation: • α is a weighting factor: how important is the most recent actual performance versus the performance before that? • To get an idea of the function it serves, consider α = 0 (the prediction never changes, so actual history is ignored), α = ½ (recent and older history are weighted equally), and α = 1 (only the most recent actual burst counts)

  48. τ_n appears in the formula. It is the previous prediction • It includes real past performance because • τ_n = α·t_{n−1} + (1 − α)·τ_{n−1} • Ultimately this expansion depends on the initial predicted value, τ_0 • Some arbitrary constant can be used, a system average can be used, etc.

  49. Expanding the formula • This illustrates why it is known as an exponential average • It gives a better feel for the role of the components in the formula • τ_{n+1} = α·t_n + (1 − α)(α·t_{n−1} + (1 − α)(… α·t_0 + (1 − α)·τ_0) …) • = α·t_n + (1 − α)·α·t_{n−1} + (1 − α)^2·α·t_{n−2} + … + (1 − α)^n·α·t_0 + (1 − α)^{n+1}·τ_0
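
The recurrence is straightforward to compute. The sketch below uses the formula from slide 46; the values of α, τ_0, and the observed bursts are invented for illustration.

    /* Hypothetical sketch of the exponential average predictor.
       alpha, tau (tau_0), and the burst samples are invented. */
    #include <stdio.h>

    /* tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n */
    static double next_prediction(double alpha, double t_n, double tau_n)
    {
        return alpha * t_n + (1.0 - alpha) * tau_n;
    }

    int main(void)
    {
        double alpha = 0.5;      /* weighting factor               */
        double tau   = 10.0;     /* tau_0: arbitrary initial guess */
        double bursts[] = {6, 4, 6, 4, 13, 13, 13};  /* observed t_n */

        for (int i = 0; i < 7; i++) {
            printf("predicted %5.2f ms, observed %2.0f ms\n",
                   tau, bursts[i]);
            tau = next_prediction(alpha, bursts[i], tau);
        }
        return 0;
    }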
