
Chapter 5: Process Scheduling

Lecture 5

10.14.2008

Hao-Hua Chu


Announcement

  • Nachos assignment #2 out on course webpage

    • Due in two weeks

    • TA will talk about this assignment today.


Persuasive computing

  • Using computing to change human behaviors

    • Baby Think It Over

    • Tectrix VR Bike

    • Slot Machine


Example: Playful Tray (NTU)

  • How to encourage good eating habits in young children?

  • Sense to recognize behavior

    • Weight sensor underneath the tray to sense eating actions

    • Eating actions as game input

  • Play to engage behavior change

    • Interactive games: coloring a cartoon character or penguin fishing


Outline

  • The scheduling problem

  • Defining evaluation criteria

    • CPU utilization, throughput, turnaround time, waiting time, response time

  • Scheduling algorithms

    • FCFS, SJF, RR, Multilevel queue, Multilevel feedback queue

    • Preemptive vs. Non-preemptive

  • Evaluating scheduling algorithms

    • Deterministic model, queueing model, simulation, implementation

  • Others

    • Multiple-processor scheduling, processor affinity, load balancing, thread scheduling


The Scheduling Problem for the CPU Scheduler

  • How to maximize CPU utilization in multiprogramming?

    • Multiprogramming

    • CPU utilization – other performance parameters?

  • Consider that a process follows the CPU–I/O Burst Cycle

    • Process execution consists of a cycle of CPU execution and I/O wait

    • A typical CPU burst distribution (next slides): Web browser, web server




Problem-Solving Skill

  • Before designing your solutions…

  • Model the problem

    • Put the problem in a form that can be solved (using an algorithm)

  • Define the evaluation metrics

    • “Measure” how successful a solution is

  • Now design your solutions


Modeling the Scheduling Problem

  • For the CPU scheduler

    • Selects from among the processes in memory (ready queue) that are ready to execute, and allocates the CPU to one of them.

  • When does the CPU scheduler make a scheduling decision?


CPU Scheduling Decisions

  • CPU scheduling decisions may take place when a process:

    1. Switches from running to waiting state: how does it happen?

    2. Switches from running to ready state

    3. Switches from waiting to ready

    4. Terminates

  • Scheduling under 1 and 4 is nonpreemptive

  • All other scheduling is preemptive

  • Preemption: forcibly taking the CPU away from the running process before it finishes its burst


Dispatcher

  • Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:

    • switching context

    • switching to user mode

    • jumping to the proper location in the user program to restart that program

  • Dispatch latency – time it takes for the dispatcher to stop one process and start another running


Scheduling Criteria

  • CPU utilization – keep the CPU as busy as possible

  • Throughput – # of processes that complete their execution per time unit

  • Turnaround time – amount of time to execute a particular process

  • Waiting time – amount of time a process has been waiting in the ready queue

  • Response time – amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment)


More Scheduling Criteria

  • Max CPU utilization

  • Max throughput

  • Min turnaround time

  • Min waiting time

  • Min response time

[Figure: Gantt chart with P1 (0–24), P2 (24–27), P3 (27–30), annotating, for P2, its submitted time, response time, waiting time, and turnaround time.]

First-Come, First-Served (FCFS) Scheduling

Process  Burst Time
P1       24
P2       3
P3       3

  • Suppose that the processes arrive in the order: P1, P2, P3. The Gantt chart for the schedule is:

    | P1 (0–24) | P2 (24–27) | P3 (27–30) |

  • Waiting time for P1 = 0; P2 = 24; P3 = 27

  • Average waiting time: (0 + 24 + 27)/3 = 17


FCFS Scheduling (Cont.)

Suppose that the processes arrive in the order P2, P3, P1

  • The Gantt chart for the schedule is:

    | P2 (0–3) | P3 (3–6) | P1 (6–30) |

  • Waiting time for P1 = 6;P2 = 0; P3 = 3

  • Average waiting time: (6 + 0 + 3)/3 = 3

  • Why is this much better than the previous case?

    • Convoy effect: short processes stuck waiting behind a long process


Shortest-Job-First (SJF) Scheduling

  • Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time

  • Two schemes:

    • Nonpreemptive: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst

    • Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF)

  • SJF is optimal: it gives the minimum average waiting time for a given set of processes


Example of Non-Preemptive SJF

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

  • SJF (non-preemptive) Gantt chart:

    | P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16) |

  • Average waiting time = (0 + 6 + 3 + 7)/4 = 4


Example of Preemptive SJF

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

  • SJF (preemptive, i.e. SRTF) Gantt chart:

    | P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16) |

  • Average waiting time = (9 + 1 + 0 +2)/4 = 3



Shortest-Job-First (SJF) is Optimal

  • SJF is optimal

    • It gives the minimum average waiting time for a given set of processes

  • What is the strong assumption in SJF?

    • The next CPU burst time must be known a priori

  • How to estimate (guess) the CPU burst time?

  • Is SJF still optimal if the estimated CPU burst time is incorrect?

  • What is the other assumption in SJF?

    • All processes have equal importance to users.


Determining Length of Next CPU Burst

  • Can only estimate the length

  • Can be done by using the length of previous CPU bursts, using exponential averaging



Examples of Exponential Averaging

  • The estimate of the next burst is τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the length of the nth actual CPU burst and 0 ≤ α ≤ 1

  • α = 0

    • τ(n+1) = τ(n)

    • Recent history does not count

  • α = 1

    • τ(n+1) = t(n)

    • Only the actual last CPU burst counts

  • If we expand the formula, we get:

    τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + … + (1 − α)^j·α·t(n−j) + … + (1 − α)^(n+1)·τ(0)

  • Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor


Priority Scheduling

  • A priority number (integer) is associated with each process

  • The CPU is allocated to the process with the highest priority (smallest integer = highest priority)

    • Non-preemptive

    • Preemptive: preempt when a newly arrived process has higher priority than the running process

  • SJF is actually a priority scheduling algorithm

    • What is the used priority?

    • The predicted next CPU burst time


Priority Scheduling

  • What are the potential problems with priority scheduling?

    • Can a process starve (remain ready but never be given the CPU)?

    • Starvation – low-priority processes may never execute

  • How to solve starvation?

    • Aging – as time progresses, increase the priority of waiting processes

  • Priority scheduling is not fair to all processes

    • How to create a (preemptive) scheduling algorithm that is fair to all processes?

    • That is, one that shares the processor fairly among all processes


Round Robin (RR)

  • Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds.

    • After this time has elapsed, the process is preempted and added to the end of the ready queue.

  • If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once.

    • What is the maximum waiting time for a process?

    • (n-1)q time units


Example of RR with Time Quantum = 20

Process  Burst Time
P1       53
P2       17
P3       68
P4       24

  • The Gantt chart is:

    | P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) | P3 (97–117) | P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162) |

  • How does preemptive RR compare with preemptive SJF?

    • Average turnaround time? Average waiting time?



Round Robin (RR)

  • How does the size of time quanta (q) affect performance?

    • Effects on context switching time & turnaround time?

    • Large q?

      • q large → approaches FCFS; less context-switching overhead

    • Small q?

      • q small → more context-switching overhead, and turnaround time may increase

      • For example: 3 processes of 10 time units each. q=1 (avg turnaround time = 29) vs. q=10 (avg turnaround time = 20)


Effect of Time Quanta Size on Context Switch Time

  • Smaller time quantum size leads to more context switches


Effect of Time Quanta Size on Turnaround Time

  • Smaller time quanta “may” increase turnaround time.


Round Robin & Shortest-Job-First

  • Their strong assumption is that all processes have equal importance to users.

    • background processes (batch – e.g. virus checking or video compression) vs. foreground processes (interactive – e.g. a word processor).

      • Foreground processes: waiting/response time is important (excessive context switching is okay).

      • Background processes: waiting time is not important (excessive context switching decreases throughput)

  • What is the solution?

    • Is it possible to have multiple scheduling algorithms (running at the same time) for different types of processes?


Multilevel Queue

  • Ready queue is partitioned into separate queues:

    • foreground (interactive)

    • background (batch)

  • Each queue has its own scheduling algorithm

    • foreground – RR, smaller time quantum

    • background – FCFS, larger time quantum

  • Scheduling must be done between the queues

    • Fixed priority scheduling: serve all from foreground then from background.

    • Time slice: each queue gets a certain amount of CPU time

      • 80% to foreground in RR and 20% to background in FCFS



Multilevel Queue

  • Multilevel queue assumes processes are permanently assigned to a queue at the start.

  • Two problems: not flexible & starvation

    • It is difficult to predict the nature of a process at the start (the level of interactivity, CPU-bound, I/O-bound, etc.)

    • With fixed-priority scheduling, a low-priority process may never be executed.

  • Good thing: low overhead

  • How to solve these two problems with a bit more overhead?


Multilevel Feedback Queue

  • A process can move between the various queues; aging can be implemented this way

  • Multilevel-feedback-queue scheduler defined by the following parameters:

    • number of queues

    • scheduling algorithms for each queue

    • method used to determine when to upgrade a process

    • method used to determine when to demote a process

    • method used to determine which queue a process will enter when that process needs service


Example of Multilevel Feedback Queue

  • Feedback: based on the observed process’s CPU burst, enables upgrade or downgrade between queues.

    • CPU-bound process downgrades to low priority queue.

    • I/O-bound process upgrades to high priority queue.

  • Three queues:

    • Q0 – RR with time quantum 8 milliseconds

    • Q1 – RR time quantum 16 milliseconds

    • Q2 – FCFS


Example of Multilevel Feedback Queue

  • Scheduling

    • A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.

    • At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.

  • Comparison to Multilevel Queue:

    • Benefits: flexibility & no starvation

    • Cost: scheduling overhead



Multiple-Processor Scheduling

  • Scheduling more complex for multi-processor systems

    • Asymmetric multiprocessing vs. symmetric multiprocessing (SMP)

      • Asymmetric: master vs. slave processor, only the master processor controls CPU scheduler

      • Symmetric: each processor has its own CPU scheduler

      • Tradeoff: synchronization overhead (shared ready queue) vs. concurrency

    • Processor Affinity

      • A higher cost in process migration to another processor, why?

        • A process’s cache memory on a specific processor

    • Load Balancing

      • Keep load evenly distributed on processors

      • Conflicts with processor affinity

    • Hyperthreading – logical CPUs, similar to physical CPUs


Thread Scheduling

  • Two scopes of scheduling

    • Global Scheduling: the CPU scheduler decides which kernel thread to run next

    • Local Scheduling: the user-level threads library decides which thread to put onto an available LWP


How to evaluate scheduling algorithms?

  • Deterministic modeling – takes a particular predetermined workload and defines the performance of each algorithm for that workload

  • Queueing models – analytical model based on probability

    • Distributions of process arrival time, CPU burst, I/O time, completion time, etc.

  • Simulation

    • Randomly generated CPU load or collect real traces

  • Implementation


Pthread Scheduling API

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 5

void *runner(void *param); /* thread entry point, defined below */

int main(int argc, char *argv[])
{
    int i;
    pthread_t tid[NUM_THREADS];
    pthread_attr_t attr;

    /* get the default attributes */
    pthread_attr_init(&attr);

    /* set the contention scope to PROCESS or SYSTEM */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    /* set the scheduling policy - SCHED_FIFO, SCHED_RR, or SCHED_OTHER */
    pthread_attr_setschedpolicy(&attr, SCHED_OTHER);

    /* create the threads */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], &attr, runner, NULL);


Pthread Scheduling API

    /* now join on each thread */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);

    return 0;
}

/* Each thread begins control in this function */
void *runner(void *param)
{
    printf("I am a thread\n");
    pthread_exit(0);
}


Operating System Examples

  • Solaris scheduling

  • Windows XP scheduling

  • Linux scheduling



Solaris Dispatch Table

  • Multilevel feedback queue for the interactive and time-sharing classes



Linux Scheduling

  • Two algorithms: time-sharing and real-time

  • Time-sharing

    • Prioritized credit-based – process with most credits is scheduled next

    • Credit subtracted when timer interrupt occurs

    • When credit = 0, another process chosen

    • When all processes have credit = 0, recrediting occurs

      • Based on factors including priority and history

  • Real-time

    • Soft real-time

    • POSIX.1b compliant – two classes

      • FCFS and RR

      • Highest priority process always runs first




Review

  • The scheduling problem

  • Defining evaluation criteria

    • CPU utilization, throughput, turnaround time, waiting time, response time

  • Scheduling algorithms

    • FCFS, SJF, RR, Multilevel queue, Multilevel feedback queue

    • Preemptive vs. Non-preemptive

  • Evaluating scheduling algorithms

    • Deterministic model, queueing model, simulation, implementation

  • Others

    • Multiple-processor scheduling, processor affinity, load balancing, thread scheduling


