Saurav Karmakar
CSC 4320/6320 Operating Systems
Lecture 5: CPU Scheduling
Chapter 5: CPU Scheduling
  • Basic Concepts
  • Scheduling Criteria
  • Scheduling Algorithms
  • Thread Scheduling
  • Multiple-Processor Scheduling
  • Operating Systems Examples
  • Algorithm Evaluation
Objectives
  • To introduce CPU scheduling, which is the basis for multiprogrammed operating systems
  • To describe various CPU-scheduling algorithms
  • To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system
Basic Concepts
  • In a uniprocessor system, only one process may run at a time
  • Objective of multiprogramming: keep some process running at all times, to maximize CPU utilization
  • When one process has to wait, the operating system takes the CPU away from that process and gives the CPU to another process
  • Almost all computer resources are scheduled before use
CPU-I/O Burst Cycle
  • The success of CPU scheduling rests on an observed property of processes:
    • Process execution is a CPU–I/O burst cycle
      • It consists of alternating periods of CPU execution and I/O wait
  • Without multiprogramming, the CPU sits idle while a process waits for I/O
  • Instead, the OS can give the CPU to another process
  • CPU burst distribution
    • Distribution of frequency vs. duration of CPU bursts
    • Exponential or hyperexponential in nature
CPU Scheduler
  • Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them: the short-term scheduler
    • The ready queue is not necessarily a FIFO queue
  • CPU scheduling decisions may take place when a process:

1. Switches from running to waiting state

2. Switches from running to ready state

3. Switches from waiting to ready state

4. Terminates

  • Scheduling under 1 and 4 leaves us no choice
  • But options 2 and 3 do.
CPU Scheduler
  • Nonpreemptive: once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state
  • Scheduling under 1 and 4 is nonpreemptive/cooperative
  • All other scheduling is preemptive
  • Preemptive scheduling incurs some costs:
    • Access to shared data
    • Effects on the design of the operating system kernel
    • Effects of interrupts
Dispatcher
  • Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
    • Switching context
    • Switching to user mode
    • Jumping to the proper location in the user program to restart that program
  • Dispatch latency – time it takes for the dispatcher to stop one process and start another running
First-Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3

  • Suppose that the processes arrive in the order: P1, P2, P3. The Gantt chart for the schedule is:

| P1                     | P2 | P3 |
0                        24   27   30

  • Waiting time for P1 = 0; P2 = 24; P3 = 27
  • Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)

Suppose that the processes arrive in the order: P2, P3, P1

  • The Gantt chart for the schedule is:

| P2 | P3 | P1                     |
0    3    6                        30

  • Waiting time for P1 = 6; P2 = 0; P3 = 3
  • Average waiting time: (6 + 0 + 3)/3 = 3
  • Much better than the previous case
  • So the average waiting time varies substantially if the burst times of the processes vary – it is generally quite long
  • FCFS is non-preemptive in nature  troublesome for time-sharing systems
CPU Scheduling: FCFS Algorithm

Process   Arrival Time   Service Time
1         0              8
2         1              4
3         2              9
4         3              5

| P1      | P2   | P3       | P4    |
0         8     12        21      26

Average wait = ( (8-0) + (12-1) + (21-2) + (26-3) )/4
             = 61/4 = 15.25

(The times measured here are residence times at the CPU.)

Shortest-Job-First (SJF) Scheduling
  • Associate with each process the length of its next CPU burst
  • Use these lengths to schedule the process with the shortest time
  • If two processes have the same next CPU burst length, FCFS scheduling is used to break the tie
  • Scheduling depends on the length of the next CPU burst (the lower, the better)
  • SJF is optimal – gives minimum average waiting time for a given set of processes
    • Difficulty: knowing the length of the next CPU request
Example of SJF

Process   Burst Time
P1        6
P2        8
P3        7
P4        3

  • SJF scheduling chart:

| P4 | P1    | P3     | P2      |
0    3      9       16        24

  • Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
  • By moving a short process before a long one, the waiting time of the short process decreases more than it increases the waiting time of the long process. Consequently, the average waiting time decreases.
  • SJF scheduling is used frequently in long-term scheduling.
Determining Length of Next CPU Burst
  • How to implement it in the short-term scheduler?
  • Can only estimate the length
  • Can be done by using the lengths of previous CPU bursts, with exponential averaging:
    • tn = actual length of the nth CPU burst
    • τn+1 = predicted value for the next CPU burst
    • α, 0 ≤ α ≤ 1
    • Define: τn+1 = α tn + (1 - α) τn
Examples of Exponential Averaging
  • α = 0
    • τn+1 = τn
    • Recent history does not count
  • α = 1
    • τn+1 = tn
    • Only the actual last CPU burst counts
  • If we expand the formula, we get:

τn+1 = α tn + (1 - α) α tn-1 + … + (1 - α)^j α tn-j + … + (1 - α)^(n+1) τ0

  • Since both α and (1 - α) are less than or equal to 1, each successive term has less weight than its predecessor
Preemptive SJF: Shortest-Remaining-Time-First Algorithm

Process   Arrival Time   Service Time
1         0              8
2         1              4
3         2              9
4         3              5

Preemptive shortest job first:

| P1 | P2   | P4    | P1      | P3       |
0    1     5      10       17        26

Average wait = ( (10-1) + (1-1) + (17-2) + (5-3) )/4
             = 26/4 = 6.5

Priority Scheduling
  • SJF is a special case of the general priority scheduling algorithm
      • Priority is the inverse of the (predicted) next CPU burst
  • A priority number (integer) is associated with each process
  • The CPU is allocated to the process with the highest priority (smallest integer  highest priority)
    • Preemptive
    • Nonpreemptive
  • Problem  Starvation – low-priority processes may never execute
  • Solution  Aging – as time progresses, increase the priority of the process
Priority Scheduling
  • Average Waiting Time = (0+1+6+16+18)/5 = 8.2
What about fairness ?
  • What about it?
    • Strict fixed-priority scheduling between queues is unfair (run highest, then next, etc):
      • long running jobs may never get CPU
      • In Multics, shut down machine, found 10-year-old job
      • Must give long-running jobs a fraction of the CPU even when there are shorter jobs to run
    • Tradeoff: fairness gained by hurting avg response time!
  • How to implement fairness?
    • Could give each queue some fraction of the CPU
      • What if one long-running job and 100 short-running ones?
      • Like express lanes in a supermarket—sometimes express lanes get so long, get better service by going into one of the other lines
Fair Share Scheduling
  • Problems with priority-based systems
    • Priorities are absolute: no guarantees when multiple jobs have the same priority
    • No encapsulation and modularity
    • Behavior of a system module is unpredictable: a function of absolute priorities assigned to tasks in other modules
  • Solution: fair-share scheduling
    • Each job has a share: some measure of its relative importance
    • A share denotes the user's portion of system resources as a fraction of the total usage of those resources
    • e.g., if user A's share is twice that of user B, then, in the long term, A will receive twice as many resources as B
  • Traditional implementations
    • keep track of per-process CPU utilization (a running average)
    • reprioritize processes to ensure that everyone is getting their share
    • are slow!
Lottery Scheduling
  • Yet another alternative here.
    • Give each job some number of lottery tickets
    • On each time slice, randomly pick a winning ticket
    • On average, CPU time is proportional to number of tickets given to each job
  • How to assign tickets?
    • To approximate SRTF, short running jobs get more, long running jobs get fewer
    • To avoid starvation, every job gets at least one ticket (everyone makes progress)
  • Advantage over strict priority scheduling:
    • behaves gracefully as load changes
    • Adding or deleting a job affects all jobs proportionally, independent of how many tickets each job possesses
Lottery Scheduling Example
  • Assume short jobs get 10 tickets, long jobs get 1 ticket
Round Robin (RR)
  • Designed specially for time-sharing systems.
  • Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
    • The ready queue is generally a circular queue.
    • Avg wait time is often long
  • If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
  • Performance
    • q large  behaves like FIFO
    • q small  approaches processor sharing; q must be large with respect to context-switch time, otherwise overhead is too high
Example of RR with Time Quantum = 4

Process   Burst Time
P1        24
P2        3
P3        3

  • The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7   10   14   18   22   26   30

  • Avg waiting time = ((10-4) + 4 + 7)/3 = 5.66
  • Typically, higher average turnaround than SJF, but better response
Multilevel Queue
  • Ready queue is partitioned into separate queues: foreground (interactive) and background (batch)
  • Each queue has its own scheduling algorithm
    • foreground – RR
    • background – FCFS
  • Scheduling must also be done between the queues
    • Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
    • Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR, 20% to background in FCFS
Multilevel Feedback Queue
  • A process can move between the various queues; aging can be implemented this way
  • Multilevel-feedback-queue scheduler defined by the following parameters:
    • number of queues
    • scheduling algorithms for each queue
    • method used to determine when to upgrade a process
    • method used to determine when to demote a process
    • method used to determine which queue a process will enter when that process needs service
Example of Multilevel Feedback Queue
  • Three queues:
    • Q0 – RR with time quantum 8 milliseconds
    • Q1 – RR with time quantum 16 milliseconds
    • Q2 – FCFS
  • Scheduling
    • A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
    • At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
Thread Scheduling: Contention Scope
  • The contention scope of a user thread defines how it is mapped to a kernel thread.
  • A system (global) contention scope user thread is directly mapped to one kernel thread.
    • All user threads in a 1:1 thread model have system contention scope.
  • A process (local) contention scope user thread shares a kernel thread with other process contention scope user threads in the process.
    • All user threads in an M:1 thread model have process contention scope.
Contention Scope
  • In an M:N thread model, user threads can have either system or process contention scope – mixed scope
  • The concurrency level is a property of M:N threads libraries.
  • It defines the number of VPs (virtual processors) used to run the process contention scope user threads.
  • This number cannot exceed the number of process contention scope user threads, and is usually dynamically set by the threads library.
Pthread Scheduling
  • The Pthread API allows specifying either PCS or SCS during thread creation
    • PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling
    • PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling
  • Two functions for setting and getting the scope:
    • pthread_attr_setscope(pthread_attr_t *attr, int scope)
    • pthread_attr_getscope(pthread_attr_t *attr, int *scope)
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NUM_THREADS 5

void *runner(void *param); /* forward declaration */

int main(int argc, char *argv[])
{
    int i;
    pthread_t tid[NUM_THREADS];
    pthread_attr_t attr;

    /* get the default attributes */
    pthread_attr_init(&attr);

    /* set the contention scope to PROCESS or SYSTEM */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    /* set the scheduling policy - FIFO, RR, or OTHER */
    pthread_attr_setschedpolicy(&attr, SCHED_OTHER);

    /* create the threads */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], &attr, runner, NULL);

    /* now join on each thread */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);

    return 0;
}

/* Each thread will begin control in this function */
void *runner(void *param)
{
    printf("I am a thread\n");
    pthread_exit(0);
}

Multiple-Processor Scheduling
  • CPU scheduling is more complex when multiple CPUs are available
  • Here we consider homogeneous processors within a multiprocessor
  • Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing
  • Symmetric multiprocessing (SMP) – each processor is self-scheduling; all processes in a common ready queue, or each processor has its own private queue of ready processes
  • Processor affinity
    • A process has affinity for the processor on which it is currently running
    • Avoids invalidating and repopulating caches due to migration
  • Soft affinity (e.g., Solaris)
  • Hard affinity (e.g., Linux, Solaris)
Multiple-Processor Scheduling: Load Balancing
  • Distributing the load across processors
  • Necessary on systems where each processor has its own private queue
  • Approaches:
    • Push migration
    • Pull migration
      • Generally implemented together
      • Load balancing often counteracts the benefit of processor affinity
Multicore Processors
  • Recent trend: place multiple processor cores on the same physical chip
  • Faster and consumes less power
  • Multiple threads per core also growing
    • Takes advantage of memory stalls to make progress on another thread while the memory retrieval happens
Virtualization
  • Virtualization creates the illusion of dedicated CPUs (resources) for the guest operating systems running in virtual machines.
  • E.g., running a time-sharing system or a real-time OS in a virtual machine
  • Thus it can undo the guest's good scheduling efforts.
Algorithm Evaluation
  • Need to make our criteria more specific by adding constraints
  • Approaches:
    • Analytic evaluation
    • Deterministic modeling – takes a particular predetermined workload and defines the performance of each algorithm for that workload
      • It requires exact input, and the outcome is bound to the defined cases.
Deterministic Modelling

Process   Burst Time
P1        10
P2        29
P3        3
P4        7
P5        12

Compare on this workload: FIFO, non-preemptive SJF, and RR with TQ = 10.
Queueing Models
  • Analysis based on the distributions of CPU bursts and arrivals
  • Considers a computer system as a network of servers, where each server has a queue of waiting processes
  • Computes utilization, average queue length, and average wait time from arrival and service rates
  • This is queueing-network analysis
  • Little's formula: n = λ × W
  • Has limitations on the classes of algorithms to which it can be applied
  • Mathematical equations do not always represent realistic situations
Evaluation of CPU Schedulers by Simulation
  • Implementation: high cost for evaluation
  • Dynamic nature of coders/users
  • Best to have options for fine-tuning.
Linux Scheduling
  • Preemptive, priority-based implementation
  • Two priority ranges: time-sharing and real-time
  • Real-time range from 0 to 99; nice value range from 100 to 140
List of Tasks Indexed According to Priorities
  • Kernel maintains a list of all runnable tasks in a runqueue data structure
  • Each processor maintains its own runqueue
  • Each runqueue contains two priority arrays: active & expired