
Chapter 5: Process Scheduling



  1. Chapter 5: Process Scheduling

  2. Chapter 5: Process Scheduling • Basic Concepts • Scheduling Criteria • Scheduling Algorithms • Multiple-Processor Scheduling • Real-Time Scheduling • Thread Scheduling

  3. Basic Concepts • Process scheduling is the basis of a multiprogrammed operating system • Terminology: CPU scheduling, process scheduling, and kernel-thread scheduling are used interchangeably; we use the term process scheduling

  4. Basic Concepts • In a single-processor system • Only one process can run at a time • Any others must wait until the CPU is free and can be rescheduled • When the running process goes to the waiting state, the OS may select another process and assign it the CPU to improve CPU utilization • Every time one process has to wait, another process can take over use of the CPU • Process scheduling is • a fundamental function of the operating system • selecting a process from the ready queue and assigning it the CPU • Maximum CPU utilization is obtained with multiprogramming through process scheduling

  5. Diagram of Process State from ch.3 • It is important to realize that only one process can be running on any processor at any instant. • Many processes may be in the ready and waiting states.

  6. Process Scheduling from ch.3 • Two types of queues: one ready queue and a set of device queues • Two types of resources: CPU and I/O • Arrows indicate the flow of processes in the system

  7. CPU - I/O Burst Cycle • Process execution consists of • a cycle of CPU execution (CPU burst) and I/O wait (I/O burst) • Processes alternate between these two states • Process execution begins with a CPU burst, which is followed by an I/O burst, and so on. • Eventually, the final CPU burst ends with a system call to terminate execution. • The CPU burst distribution • varies greatly from process to process and from computer to computer

  8. Alternating Sequence of CPU & I/O Bursts • (figure: a process execution alternates between CPU bursts and I/O bursts over time)

  9. Histogram of CPU-burst Times • The CPU burst distribution is generally characterized • as exponential or hyper-exponential • with a large number of short CPU bursts and a small number of long CPU bursts • An I/O-bound process has many short CPU bursts • A CPU-bound process might have a few long CPU bursts.

  10. Process Scheduler • is one of the OS modules. • selects one of the processes in memory that are ready to execute, and allocates the CPU to the selected process. • CPU scheduling decisions may take place when a process: 1. switches from running to waiting state: I/O request, or invocation of wait() for the termination of a child process 2. switches from running to ready state: when an interrupt occurs 3. switches from waiting to ready: at completion of I/O 4. terminates • Scheduling under 1 and 4 is non-preemptive • Scheduling under 2 and 3 is preemptive

  11. Non-preemptive vs. Preemptive • Non-preemptive scheduling • Once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU • either by terminating or by switching to the waiting state. • used by Windows 3.x • Preemptive scheduling • The currently running process can be switched with another at any time • because an interrupt can occur at any time • Most modern operating systems provide this scheme (Windows XP, Mac OS X, UNIX) • incurs a cost associated with access to data shared among processes • affects the design of the OS kernel • Certain operating systems (UNIX) wait either for a system call to complete or for an I/O block to take place before doing a context switch • protect critical kernel code by disabling and enabling interrupts at the entry and exit of that code.

  12. Dispatcher • The dispatcher module is a part of the process scheduler • gives control of the CPU to the process selected by the short-term scheduler; this involves: • switching context • switching to user mode • jumping to the proper location in the user program to restart that program • Dispatch latency – the time it takes for the dispatcher to stop one process and start another running

  13. Context Switch from ch. 3

  14. Scheduling Criteria • Based on the scheduling criteria, the performance of various scheduling algorithms can be evaluated. • Different scheduling algorithms have different properties. • CPU utilization – the ratio (%) of time that the CPU is busy per time unit. • Throughput – # of processes that complete their execution per time unit. • Turnaround time – the interval from the time of submission of a process to the time of completion; the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O. • Waiting time – the amount of time a process has been waiting in the ready queue, which is what the scheduling algorithm directly affects. • Response time – in an interactive system, the amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments)

  15. Optimization Criteria • It is desirable to maximize: • CPU utilization • Throughput • It is desirable to minimize: • Turnaround time • Waiting time • Response time • However, in some circumstances, it is desirable to optimize the minimum or maximum values rather than the average. • For interactive systems, it is more important to minimize the variance in response time than to minimize the average response time.

  16. Process Scheduling Algorithms • First-Come, First-Served Scheduling (FCFS) • Shortest-Job-First Scheduling (SJF) • Priority Scheduling • Round-Robin Scheduling • Our main measure of comparison is the average waiting time; CPU utilization, throughput, and turnaround time are also used.

  17. First-Come, First-Served (FCFS) Scheduling • Process / Burst Time: P1 24, P2 3, P3 3 • Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: P1 [0-24], P2 [24-27], P3 [27-30] • Waiting time for P1 = 0; P2 = 24; P3 = 27 • Average waiting time: (0 + 24 + 27)/3 = 17

  18. FCFS Scheduling • Suppose that the processes arrive in the order P2, P3, P1 • The Gantt chart for the schedule is: P2 [0-3], P3 [3-6], P1 [6-30] • Waiting time for P1 = 6; P2 = 0; P3 = 3 • Average waiting time: (6 + 0 + 3)/3 = 3 • Much better than the previous case
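
The arrival-order effect above is easy to reproduce. The following minimal Python sketch (not part of the original slides; the helper name is ours) computes FCFS waiting times assuming all processes arrive at time 0, as in the two examples:

    # FCFS waiting-time sketch: every process is assumed to arrive at time 0.
    def fcfs_waiting_times(bursts):
        """Return each process's waiting time, in the given arrival order."""
        waits, clock = [], 0
        for burst in bursts:
            waits.append(clock)        # a process waits until all earlier ones finish
            clock += burst
        return waits

    print(fcfs_waiting_times([24, 3, 3]))   # order P1, P2, P3 -> [0, 24, 27], average 17
    print(fcfs_waiting_times([3, 3, 24]))   # order P2, P3, P1 -> [0, 3, 6],   average 3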

  19. FCFS Scheduling • (timeline figure: three processes P0, P1, P2, each alternating CPU and I/O bursts, scheduled FCFS over roughly 0-54 time units)

  20. FCFS Scheduling • The FCFS scheduling algorithm is non-preemptive • Once the CPU has been allocated to a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O. • is particularly troublesome for time-sharing systems. • Convoy effect occurs • when there is one CPU-bound process with a long CPU burst and many I/O-bound processes with short CPU bursts. • All the I/O-bound processes wait for the CPU-bound process to get off the CPU, while the I/O devices sit idle. • When the CPU-bound process finally performs I/O, the I/O-bound processes quickly finish their short CPU bursts and go back to doing I/O, leaving the CPU idle. • results in low CPU and device utilization

  21. Shortest-Job-First (SJF) Scheduling • Associate with each process the length of its next CPU burst • Use these lengths to schedule the process with the shortest time • Two schemes: • non-preemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst • preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt; this scheme is known as Shortest-Remaining-Time-First (SRTF) • SJF is optimal – it gives the minimum average waiting time for a given set of processes

  22. Example of Non-Preemptive SJF • Process / Arrival Time / Burst Time: P1 0.0 7, P2 2.0 4, P3 4.0 1, P4 5.0 4 • SJF (non-preemptive) Gantt chart: P1 [0-7], P3 [7-8], P2 [8-12], P4 [12-16] • Average waiting time = (0 + 6 + 3 + 7)/4 = 4
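
The non-preemptive SJF example can be checked with a short simulation. This is an illustrative sketch, not the textbook's code; it assumes the (name, arrival, burst) workload given on the slide and picks the shortest available burst whenever the CPU becomes free:

    # Non-preemptive SJF sketch: whenever the CPU becomes free, run the ready
    # process with the shortest next burst. Tuples are (name, arrival, burst).
    def sjf_nonpreemptive(procs):
        remaining = sorted(procs, key=lambda p: p[1])     # order by arrival time
        clock, waits = 0, {}
        while remaining:
            ready = [p for p in remaining if p[1] <= clock]
            if not ready:                                 # CPU idle until next arrival
                clock = min(p[1] for p in remaining)
                continue
            job = min(ready, key=lambda p: p[2])          # shortest burst wins
            name, arrival, burst = job
            waits[name] = clock - arrival                 # time spent in the ready queue
            clock += burst
            remaining.remove(job)
        return waits

    procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
    print(sjf_nonpreemptive(procs))   # P1=0, P2=6, P3=3, P4=7 -> average 4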

  23. Example of Preemptive SJF • Process / Arrival Time / Burst Time: P1 0.0 7, P2 2.0 4, P3 4.0 1, P4 5.0 4 • SJF (preemptive) Gantt chart: P1 [0-2], P2 [2-4], P3 [4-5], P2 [5-7], P4 [7-11], P1 [11-16] • Average waiting time = (9 + 1 + 0 + 2)/4 = 3
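
The preemptive variant (SRTF) can be sketched as a unit-time simulation: at every time unit the process with the shortest remaining time among those that have arrived gets the CPU. Again an illustration under the slide's workload, not a definitive implementation:

    # Preemptive SJF (SRTF) sketch, simulated one time unit at a time.
    def srtf(procs):                                      # procs: (name, arrival, burst)
        left = {name: burst for name, _, burst in procs}  # remaining burst time
        arrival = {name: arr for name, arr, _ in procs}
        finish, clock = {}, 0
        while left:
            ready = [n for n in left if arrival[n] <= clock]
            if not ready:                                 # nothing has arrived yet: idle
                clock += 1
                continue
            n = min(ready, key=lambda name: left[name])   # shortest remaining time
            left[n] -= 1
            clock += 1
            if left[n] == 0:
                finish[n] = clock
                del left[n]
        # waiting time = finish - arrival - burst
        return {name: finish[name] - arr - burst for name, arr, burst in procs}

    procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
    print(srtf(procs))   # P1=9, P2=1, P3=0, P4=2 -> average 3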

  24. Preemptive SJF Scheduling • (timeline figure: the same P0/P1/P2 workload of alternating CPU and I/O bursts, scheduled by preemptive SJF over 0-50 time units)

  25. Non-preemptive SJF Scheduling • (timeline figure: the same P0/P1/P2 workload scheduled by non-preemptive SJF over 0-50 time units, including an idle interval of 6 time units)

  26. Determining Length of Next CPU Burst • can only estimate the length • can be done by using the lengths of previous CPU bursts, with exponential averaging: τ(n+1) = α·t(n) + (1 − α)·τ(n), where 0 ≤ α ≤ 1 • The value of t(n), the length of the nth CPU burst, contains our most recent information • τ(n), the predicted value, stores the past history • The parameter α controls the relative weight of recent and past history in our prediction.

  27. Prediction of the Length of the Next CPU Burst • In this example, τ(0) = 10, α = ½ • τ(1) = α·t(0) + (1 − α)·τ(0) = ½ × 6 + ½ × 10 = 8 • τ(2) = α·t(1) + (1 − α)·τ(1) = ½ × 4 + ½ × 8 = 6

  28. Examples of Exponential Averaging • α = 0 • τ(n+1) = τ(n) = τ(n−1) = … = τ(0) • Recent history does not count • α = 1 • τ(n+1) = t(n) • Only the actual last CPU burst counts • If we expand the formula, we get: τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + … + (1 − α)^j·α·t(n−j) + … + (1 − α)^(n+1)·τ(0) • Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
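
A few lines of Python make the recurrence concrete. The helper below is hypothetical (not from the slides); it reproduces the slide-27 numbers for τ(0) = 10, α = ½ and observed bursts 6 and 4:

    # Exponential averaging: tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)
    def predict_bursts(actual_bursts, tau0=10.0, alpha=0.5):
        """Return predictions tau(0), tau(1), ... given observed bursts t(0), t(1), ..."""
        taus = [tau0]
        for t in actual_bursts:
            taus.append(alpha * t + (1 - alpha) * taus[-1])   # fold in the newest burst
        return taus

    # Slide-27 numbers: tau(0) = 10, t(0) = 6, t(1) = 4  ->  tau(1) = 8, tau(2) = 6
    print(predict_bursts([6, 4]))   # [10.0, 8.0, 6.0]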

  29. Priority Scheduling • A priority number (integer) is associated with each process • The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority) • Preemptive or non-preemptive • SJF is priority scheduling where the priority is the predicted next CPU burst time (shorter predicted burst ≡ higher priority) • Problem ≡ Starvation – low-priority processes may never execute • Solution ≡ Aging – as time progresses, increase the priority of waiting processes

  30. Example of Non-Preemptive Priority • Process / Burst Time / Priority: P1 10 3, P2 1 1, P3 2 4, P4 1 5, P5 5 2 • Priority scheduling (non-preemptive) Gantt chart: P2 [0-1], P5 [1-6], P1 [6-16], P3 [16-18], P4 [18-19] • Average waiting time = (0 + 1 + 6 + 16 + 18)/5 = 8.2
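
The slide-30 schedule can be reproduced with a small sketch (illustrative only; all processes are assumed to arrive at time 0, and a smaller number means a higher priority):

    # Non-preemptive priority sketch: all processes arrive at time 0 and a smaller
    # number means a higher priority. Tuples are (name, burst, priority).
    def priority_nonpreemptive(procs):
        waits, clock = {}, 0
        for name, burst, _ in sorted(procs, key=lambda p: p[2]):   # run in priority order
            waits[name] = clock
            clock += burst
        return waits

    procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
    print(priority_nonpreemptive(procs))   # P2=0, P5=1, P1=6, P3=16, P4=18 -> average 8.2

    # With aging (slide 29), the scheduler would periodically decrease the priority
    # number of processes that have been waiting, so they cannot starve forever.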

  31. Preemptive Priority Scheduling • (timeline figure: the same P0/P1/P2 workload scheduled by preemptive priority, with priorities P0 = 1, P1 = 2, P2 = 3; note that the slot marked as P2's I/O is effectively idle CPU time, since the CPU is idle while a process does I/O)

  32. Non-preemptive Priority Scheduling • (timeline figure: the same workload scheduled by non-preemptive priority; this is a special case in which the preemptive and non-preemptive schedules are identical, i.e., the same result as on the previous slide)

  33. Round Robin (RR) Scheduling • Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. • After this time has elapsed, the process is preempted and added to the end of the ready queue. • If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time, in chunks of at most q time units at a time. • No process waits more than (n − 1) × q time units. • Performance depends on the size of the time quantum. • q large ⇒ RR behaves the same as FIFO • q small ⇒ provides high concurrency: each of the n processes appears to have its own processor running at 1/n the speed of the real processor

  34. Example of RR with Time Quantum = 20 • Process / Burst Time: P1 53, P2 17, P3 68, P4 24 • The Gantt chart is: P1 [0-20], P2 [20-37], P3 [37-57], P4 [57-77], P1 [77-97], P3 [97-117], P4 [117-121], P1 [121-134], P3 [134-154], P3 [154-162] • Typically, higher average turnaround than SJF, but better response
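
A round-robin simulation is also only a few lines. The sketch below (our own helper, not from the slides) reproduces the quantum-20 Gantt chart above; context-switch cost is ignored:

    # Round-robin sketch with a fixed time quantum; all processes arrive at time 0
    # and context-switch cost is ignored. Tuples are (name, burst).
    from collections import deque

    def round_robin(procs, quantum):
        queue = deque(procs)                       # ready queue in arrival order
        finish, gantt, clock = {}, [], 0
        while queue:
            name, left = queue.popleft()
            run = min(quantum, left)               # run one quantum or until completion
            gantt.append((name, clock, clock + run))
            clock += run
            if left - run > 0:
                queue.append((name, left - run))   # back to the end of the ready queue
            else:
                finish[name] = clock
        waits = {name: finish[name] - burst for name, burst in procs}
        return gantt, waits

    gantt, waits = round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], quantum=20)
    print(gantt)    # P1 0-20, P2 20-37, P3 37-57, P4 57-77, P1 77-97, ..., P3 154-162
    print(waits)    # P1=81, P2=20, P3=94, P4=97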

  35. RR Scheduling • Time Quantum = 2 • (timeline figure: the P0/P1/P2 workload under RR with q = 2; the 9th P1 slot and the 10th P2 slot could be swapped, as could the 20th P0 slot and the 21st P2 slot, without affecting the result)

  36. RR Scheduling • Time Quantum = 5 • (timeline figure: the P0/P1/P2 workload under RR with q = 5, including an idle interval while P1 does I/O)

  37. Time Quantum and Context Switch Time • The effect of context switching on the performance of RR scheduling, shown for one process that needs 10 time units: • quantum = 12 time units: the process finishes in less than one quantum, 0 context switches • quantum = 6 time units: requires 2 quanta, 1 context switch • quantum = 1 time unit: requires 10 quanta, 9 context switches

  38. Round Robin (RR) Scheduling • The time quantum q must be large with respect to the context-switch time, otherwise the overhead is too high • If the context-switch time is 10% of the time quantum, then about 10% of the CPU time will be spent on context switching • Most modern operating systems have time quanta ranging from 10 to 100 milliseconds • The time required for a context switch is typically less than 10 microseconds; thus the context-switch time is a small fraction of the time quantum.
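
The "about 10%" figure can be approximated as follows. This sketch assumes the worst case in which every quantum expires and each expiration costs one full context switch, so the overhead fraction is switch / (quantum + switch); the numbers are illustrative:

    # Fraction of CPU time lost to context switching, assuming every quantum
    # expires and each expiration costs one context switch of length 'switch'.
    def switch_overhead(quantum, switch):
        return switch / (quantum + switch)

    print(f"{switch_overhead(quantum=10, switch=1):.1%}")        # 9.1%  ("about 10%")
    print(f"{switch_overhead(quantum=100_000, switch=10):.4%}")  # 0.0100% (100 ms quantum, 10 us switch)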

  39. Turnaround Time Varies with the Time Quantum • The turnaround time depends on the size of the time quantum • The average turnaround time can be improved if most processes finish their next CPU burst in a single time quantum • Comparing RR against SJF on the same example workload: if quantum = 6 or 7, average turnaround time = 10.5; if quantum = 1, average turnaround time = 11

  40. Scheduling Algorithms with Multiple Queues • Multi-level Queue Scheduling • Multi-level Feedback Queue Scheduling

  41. Multilevel Queue • The ready queue is partitioned into separate queues: foreground (interactive) and background (batch) • The processes are permanently assigned to one queue, generally based on some property or process type. • Each queue has its own scheduling algorithm • foreground – RR • background – FCFS • Scheduling must also be done between the queues • Fixed-priority scheduling – serve all from foreground, then from background; possibility of starvation. • Time-slice scheduling – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR, 20% to background in FCFS

  42. Multilevel Queue Scheduling • No process in the batch queue could run unless the queues with high priority were all empty. • If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.

  43. Multi-level Queue Scheduling • Two-level queues: SJF with priority 0, FCFS with priority 1 • Fixed-priority, preemptive scheduling between the queues • (timeline figure: the P0/P1/P2 workload, with P0 in the priority-1 FCFS queue and P1, P2 in the priority-0 SJF queue, over 0-50 time units)

  44. Multilevel Feedback Queue • A process can move between the various queues; aging can be implemented in this way • Multilevel-feedback-queue scheduler defined by the following parameters: • number of queues • scheduling algorithms for each queue • method used to determine when to upgrade a process • method used to determine when to demote a process • method used to determine which queue a process will enter when that process needs service

  45. Example of Multilevel Feedback Queue • Three queues: • Q0 – RR with time quantum 8 milliseconds • Q1 – RR with time quantum 16 milliseconds • Q2 – FCFS • Scheduling • A new job enters queue Q0, which is served RR. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1. • At Q1 the job is again served RR and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2. • In queue Q2 the job is served FCFS.
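
A simplified version of this three-queue policy can be sketched as below. The workload is an assumption for illustration (it is not from the slides), all jobs are taken to arrive at time 0, and preemption of a lower queue by newly arriving jobs is not modeled:

    # Simplified multilevel feedback queue matching the three queues above:
    # Q0 = RR with quantum 8, Q1 = RR with quantum 16, Q2 = FCFS.
    from collections import deque

    def mlfq(procs, quanta=(8, 16)):
        """procs: (name, burst) pairs, all arriving at time 0. Returns the schedule."""
        queues = [deque(procs), deque(), deque()]
        schedule, clock = [], 0
        while any(queues):
            level = next(i for i, q in enumerate(queues) if q)       # highest non-empty queue
            name, left = queues[level].popleft()
            run = left if level == 2 else min(quanta[level], left)   # Q2 runs to completion
            schedule.append((name, level, clock, clock + run))
            clock += run
            if left - run > 0:
                queues[level + 1].append((name, left - run))         # demote to the next queue
        return schedule

    # Illustrative burst lengths (assumed, not taken from the slides):
    for entry in mlfq([("A", 30), ("B", 6), ("C", 20)]):
        print(entry)   # ('A', 0, 0, 8), ('B', 0, 8, 14), ('C', 0, 14, 22), ('A', 1, 22, 38), ...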

  46. Multilevel Feedback Queues

  47. Multi-level Feedback Queue Scheduling • Three-level queues: RR with quantum 3, RR with quantum 6, and FCFS • (timeline figure: the P0/P1/P2 workload scheduled under this multilevel feedback queue over 0-50 time units)

  48. Multiple-Processor Scheduling • CPU scheduling is more complex when multiple CPUs are available • The processors within a system may be homogeneous or heterogeneous • Asymmetric multiprocessing vs. symmetric multiprocessing (SMP) • Symmetric multiprocessing (SMP) – each processor makes its own scheduling decisions. • Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing. • Load sharing on an SMP system • keeps the workload evenly distributed across all processors in the SMP system. • Push migration vs. pull migration

  49. Real-Time Scheduling • Hard real-time systems – required to complete a critical task within a guaranteed amount of time • Soft real-time computing – requires that critical processes receive priority over less fortunate ones

  50. Thread Scheduling • On operating systems that support kernel-level threads, it is kernel-level threads, not processes, that are scheduled by the operating system. • Local scheduling – how the threads library decides which thread to put onto an available LWP • Global scheduling – how the kernel decides which kernel thread to run next
