Scheduling
Presentation Transcript
Scheduling

operating systems


There are a number of issues that affect the way work is scheduled on the CPU.



Batch vs. Interactive



Batch System vs. Interactive System

Scheduling Issues

In a batch system, there are no users impatiently waiting at terminals for a quick response. But on the large mainframe systems where batch jobs usually run, CPU time is still a precious resource.

Metrics for a batch system include:

* Throughput – the number of jobs per hour that can be run
* Turnaround – the average time for a job to complete
* CPU utilization – keep the CPU busy all of the time



Batch System vs. Interactive System

Scheduling Issues

In interactive systems the goal is to minimize response times.

Proportionality – complex things should take more time than simple things. Closing a window should be immediate. Making a dial-up connection would be expected to take a longer time.

One or two users should not be able to hog the CPU.



Single vs Multiple User



Single User vs. Multi-User Systems

Scheduling is far less complex on a single-user system:

* In today's personal computers (single-user systems), it is rare for a user to run multiple processes at the same time.
* On a personal computer, most wait time is for user input.
* CPU cycles on a personal computer are cheap.



Compute vs. I/O Bound Programs

Most programs exhibit a common behavior: they compute for a while, then they do some I/O.

Compute bound: relatively long bursts of CPU activity with short intervals waiting for I/O.

I/O bound: relatively short bursts of CPU activity with frequent long waits for I/O.



When to Schedule

Job Scheduling: when new jobs enter the system, select jobs from a queue of incoming jobs and place them on a process queue, where they will be subject to Process Scheduling.

The goal of the job scheduler is to put jobs in a sequence that will use all of the system's resources as fully as possible.

Example: What happens if several I/O bound jobs are scheduled at the same time?



Process Scheduling (or Short-Term Scheduling): for all jobs on the process queue, process scheduling determines which job gets the CPU next, and for how long. It decides when processing should be interrupted, and when a process completes or should be terminated.



Preemptive vs. Non-Preemptive Scheduling

Non-preemptive scheduling starts a process running and lets it run until it either blocks or voluntarily gives up the CPU.

Preemptive scheduling starts a process and only lets it run for a maximum of some fixed amount of time.


Scheduling Criteria

Pick the criteria that are important to you. One algorithm cannot maximize all criteria.

Turnaround time – complete programs quickly

Response Time – quickly respond to interactive requests

Deadlines – meet deadlines

Predictability – simple jobs should run quickly, complex jobs should take longer

Throughput – run as many jobs as possible over a time period


Scheduling Criteria (continued)

CPU Utilization – maximize how the CPU is used

Fairness – give everyone an equal share of the CPU

Enforcing Priorities – give CPU time based on priority

Enforcing Installation Policies – give CPU time based on policy

Balancing Resources – maximize use of files, printers, etc.


[Diagram: process state transitions. New processes (PCBs) enter the Ready List; the Scheduler moves a PCB from the ready state to running on the CPU; a running process is pre-empted or voluntarily yields and returns to the Ready List, or requests a resource and moves to the blocked state until the Resource Mgr allocates the resource and returns it to the Ready List.]


[Diagram: a process entering the ready state. The Enqueuer places the PCB on the Ready List; within the Scheduler, the Dispatcher selects the next PCB and the Context Switcher switches it onto the CPU.]


The cost of a context switch

Assume that your machine has 32-bit registers and that the context switch uses normal load and store ops. Let's assume that it takes 50 nanoseconds to store the contents of a register in memory.

If our machine has 32 general-purpose registers and 8 status registers, it takes

(32 + 8) * 50 nanoseconds = 2 microseconds

to store all of the registers.


Another 2 microseconds are required to load the registers for the new process. Keep in mind that the dispatcher itself is a process that requires a context switch, so we could estimate the total time required to do a context switch as 8 microseconds or more.

On a 1 GHz machine, register operations take about 2 nanoseconds. If we divide our 8 microseconds by 2 nanoseconds, we could execute upwards of 4000 register instructions while a context switch is going on.

This only accounts for saving and restoring registers. It does not account for any time required to load memory.
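The arithmetic above can be checked with a short sketch. The 50 ns store time and 2 ns register-op figures are the assumed values from the slide, not measurements:

```python
# Assumed figures from the discussion above, not measured values.
REG_STORE_NS = 50                 # time to store one register to memory
GP_REGS, STATUS_REGS = 32, 8

save_ns = (GP_REGS + STATUS_REGS) * REG_STORE_NS   # 2000 ns to save one register set
restore_ns = save_ns                               # another 2000 ns to load the new set
total_ns = 2 * (save_ns + restore_ns)              # dispatcher needs its own switch, doubling the cost
print(total_ns / 1000, "microseconds")             # 8.0 microseconds

# At ~2 ns per register operation (1 GHz machine), a switch costs
# roughly this many register instructions:
print(total_ns // 2)                               # 4000
```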




Optimal Scheduling

Given a set of processes where the CPU time required for each to complete is known beforehand, it is possible to select the best possible scheduling of the jobs, if

- we assume that no other jobs will enter the system
- we have a pre-emptive scheduler
- we have a specific goal (e.g. throughput) to meet

This is done by considering every possible ordering of time slices for each process, and picking the "best" one. This could take more time than actually running the threads!

But this is not very realistic – why not? Are there any examples where these requirements hold?


Scheduling Model

P = {pi | 0 < i < n}

P is a set of processes. Each process pi in the set is represented by a descriptor {pi,j} that specifies a list of threads. Each thread contains a state field S(pi,j) such that S(pi,j) is one of {running, blocked, ready}.


Some Common Performance Metrics

Service Time τ(pi,j)
The amount of time a thread needs to be in the running state until it is completed.

Wait Time W(pi,j)
The time the thread spends waiting in the ready state before its first transition to the running state. *

Turnaround Time T(pi,j)
The amount of time between the moment the thread first enters the ready state and the moment the thread exits the running state for the last time.

* Silberschatz uses a different definition of wait time.


Some Common Performance Metrics (continued)

Response Time
In an interactive system, one of the most important performance metrics is response time: the time that it takes for the system to respond to some user action.


System Load

If λ is the mean arrival rate of new jobs into the system, and μ is the mean service rate, then the fraction of the time that the CPU is busy can be calculated as

ρ = λ / μ

This assumes no time for context switching and that the CPU has sufficient capacity to service the load.

Note: This is not the same lambda (λ) we saw a few slides back.


For example, given an average arrival rate of 10 threads per minute and an average service time of 3 seconds,

λ = 10 threads per minute
μ = 20 threads per minute (60 / 3)
ρ = 10 / 20 = 50%

What can you say about this system if the arrival rate, λ, is greater than the service rate, μ?
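A minimal sketch of the ρ = λ / μ calculation; the function name and the overload check are illustrative, not from the slides:

```python
def utilization(arrival_rate, service_rate):
    """Fraction of time the CPU is busy: rho = lambda / mu (same time unit for both)."""
    if arrival_rate >= service_rate:
        # The queue grows without bound; utilization is pinned at 100%.
        raise ValueError("arrival rate exceeds service rate: system is overloaded")
    return arrival_rate / service_rate

# 10 threads/minute arriving, 3 s service time => mu = 60 / 3 = 20 threads/minute
print(utilization(10, 60 / 3))   # 0.5
```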


Scheduling Algorithms

First-Come First Served

Shortest Job First

Priority Scheduling

Deadline Scheduling

Shortest Remaining Time Next

Round Robin

Multi-level Queue

Multi-level Feedback Queue


First-Come First-Served

The simplest of scheduling algorithms. The Ready List is a FIFO queue. When a process enters the ready queue, its PCB is linked to the tail of the queue. When the CPU is free, the scheduler picks the process that is at the head of the queue.

First-come first-served is a non-preemptive scheduling algorithm. Once a process gets the CPU, it keeps it until it either finishes, blocks for I/O, or voluntarily gives up the CPU.


When a process blocks, the next process in the queue is run. When the blocked process becomes ready, it is added back into the end of the ready list, just as if it were a new process.


Waiting times in a First-Come First-Served system can vary substantially and can be very long. Consider three jobs with the following service times (no blocking):

i   τ(pi)
1   24ms
2   3ms
3   3ms

If the processes arrive in the order p1, p2, and then p3:

Gantt chart: | P1 (0–24) | P2 (24–27) | P3 (27–30) |

Compute each thread's turnaround time:

T(p1) = τ(p1) = 24ms
T(p2) = τ(p2) + T(p1) = 3ms + 24ms = 27ms
T(p3) = τ(p3) + T(p2) = 3ms + 27ms = 30ms

Average turnaround time = (24 + 27 + 30) / 3 = 81 / 3 = 27ms


For the same jobs arriving in the order p1, p2, p3:

i   τ(pi)
1   24ms
2   3ms
3   3ms

Gantt chart: | P1 (0–24) | P2 (24–27) | P3 (27–30) |

Compute each thread's wait time:

W(p1) = 0
W(p2) = T(p1) = 24ms
W(p3) = T(p2) = 27ms

Average wait time = (0 + 24 + 27) / 3 = 51 / 3 = 17ms
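The turnaround and wait calculations above can be automated with a small sketch; it assumes all jobs arrive at time 0 in list order, as in the example:

```python
def fcfs_metrics(service_times):
    """Turnaround and wait times for jobs run first-come first-served,
    all arriving at time 0 in list order."""
    turnaround, wait, clock = [], [], 0
    for tau in service_times:
        wait.append(clock)          # time spent in the ready list before running
        clock += tau
        turnaround.append(clock)    # completion time == turnaround when arrival is 0
    return turnaround, wait

t, w = fcfs_metrics([24, 3, 3])
print(t, sum(t) / len(t))   # [24, 27, 30] 27.0
print(w, sum(w) / len(w))   # [0, 24, 27] 17.0
```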


Note how re-ordering the arrival times can significantly alter the average turnaround time and average wait time! Consider the same jobs (24ms, 3ms, 3ms), now arriving in the order p2, p3, p1:

Gantt chart: | P2 (0–3) | P3 (3–6) | P1 (6–30) |

Compute each thread's turnaround time:

T(p2) = τ(p2) = 3ms
T(p3) = τ(p3) + T(p2) = 3ms + 3ms = 6ms
T(p1) = τ(p1) + T(p3) = 24ms + 6ms = 30ms

Average turnaround time = (3 + 6 + 30) / 3 = 39 / 3 = 13ms


With the same re-ordered arrivals (p2, p3, p1):

Gantt chart: | P2 (0–3) | P3 (3–6) | P1 (6–30) |

Compute each thread's wait time:

W(p2) = 0
W(p3) = T(p2) = 3ms
W(p1) = T(p3) = 6ms

Average wait time = (0 + 3 + 6) / 3 = 9 / 3 = 3ms


Try your hand at calculating the average turnaround and average wait times:

i   τ(pi)
1   350ms
2   125ms
3   475ms
4   250ms
5   75ms




Solution – the jobs run FCFS in arrival order:

Gantt chart: | P1 (0–350) | P2 (350–475) | P3 (475–950) | P4 (950–1200) | P5 (1200–1275) |

Average turnaround = (350 + 475 + 950 + 1200 + 1275) / 5 = 850ms

Average wait time = (0 + 350 + 475 + 950 + 1200) / 5 = 595ms


The Convoy Effect

Assume a situation where there is one CPU-bound process and many I/O-bound processes. What effect does this have on the utilization of system resources?


[Diagram sequence: the single CPU-bound process holds the CPU and runs a long time, while the I/O-bound processes, having finished their I/O, pile up in the ready queue. When the CPU-bound process finally blocks, the I/O-bound processes each run briefly and block for I/O again, leaving the CPU largely idle until the CPU-bound process runs again.]

Remember, first-come first-served scheduling is non-preemptive.


Shortest Job Next Scheduling

Shortest Job Next scheduling is also a non-preemptive algorithm. The scheduler picks the job from the ready list that has the shortest expected CPU time.

It can be shown that the Shortest Job Next algorithm gives the shortest average waiting time. However, there is a danger. What is it?

Starvation for longer processes, as long as there is a supply of short jobs.


Consider the case where the following jobs are in the ready list:

Process   τ(pi)
1   6ms
2   8ms
3   7ms
4   3ms

Scheduling according to predicted processor time:

Gantt chart: | P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |

Average turnaround time = (3 + 9 + 16 + 24) / 4 = 13ms


With the same jobs and schedule:

Gantt chart: | P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |

Average wait time = (0 + 3 + 9 + 16) / 4 = 7ms

If we were using FCFS scheduling, the average wait time would have been 10.25ms.
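A sketch of the Shortest Job Next choice: serve the ready list in order of predicted service time. The function name is illustrative:

```python
def sjn_wait(service_times):
    """Average wait time when the ready list is served shortest job first
    (all jobs present at time 0, no blocking)."""
    wait, clock = [], 0
    for tau in sorted(service_times):   # shortest predicted burst first
        wait.append(clock)
        clock += tau
    return sum(wait) / len(wait)

print(sjn_wait([6, 8, 7, 3]))   # 7.0, vs. 10.25 for the FCFS arrival order
```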


There is a practical issue involved in actually implementing SJN – can you guess what it is?

You don't know how long the job will really take!


For batch jobs, the user can estimate how long the process will take and provide this as part of the job parameters. Users are motivated to be as accurate as possible, because if a job exceeds the estimate, it could be kicked out of the system.

In a production environment, the same jobs are run over and over again (for example, a payroll program), so you could easily base the estimate on previous runs of the same program.


For interactive systems, it is possible to predict the time for the next CPU burst of a process based on its history. Consider the following:

S(n+1) = (1/n) * Σ (i = 1 to n) Ti

Where

Ti = processor execution time for the ith burst
Si = predicted time for the ith instance


To avoid recalculating the average each time, we can write this as

S(n+1) = (1/n) Tn + ((n-1)/n) Sn

It is common to weight more recent instances more than earlier ones, because they are better predictors of future behavior. This is done with a technique called exponential averaging:

S(n+1) = α Tn + (1 - α) Sn


Priority scheduling

In priority scheduling, each job has a given priority, and the Scheduler always picks the job with the highest priority to run next. If two jobs have equal priority, they are run in FCFS order.

Priorities range across some fixed set of values. It is up to the scheduler to define whether or not the lowest value is also the lowest priority.

Note that shortest job next scheduling is really a case of priority scheduling, where the priority is the inverse of the predicted time of the next cpu burst.

Priority scheduling can either be preemptive or not.

Priority Scheduling


Priorities can be assigned internally or externally.

Internally assigned priorities are based on various characteristics of the process, such as

* memory required
* number of open files
* average CPU burst time
* I/O bound or CPU bound

Externally assigned priorities are based on things such as

* the importance of the job
* the funds being used to pay for the job
* the department that owns the job
* other, often political, factors


Deadline Scheduling

Real-time systems are characterized by having threads that must complete execution prior to some time deadline.

The critical performance measurement of such a system is whether the system will be able to meet the scheduling deadlines for all such threads. Measures of turnaround time and wait time are irrelevant.


In order to manage deadline scheduling, the system must have complete knowledge of the maximum service time for each process.

In periodic scheduling, a thread has a recurring service time and deadline, so the deadline must be met for each period in the thread's life. A process is admitted to the ready list only if it can be guaranteed that the system can supply the specified service time before each deadline imposed by all of the processes.

Example: In a streaming media application, a recurring thread must meet its service criteria and deadline in order to prevent jitter and latency in audio or video processing.


What problem do you think priority scheduling suffers from?

Starvation!

There is a legend that says when MIT shut down their IBM 7094 in 1973, they found a low-priority job that had been in the queue since 1967, and had never been run!

Can you think of a scheme to deal with this problem?


Round Robin Scheduling

With round robin scheduling, jobs are stored in a FCFS queue. A clock is provided that generates an interrupt at periodic intervals. When an interrupt occurs, the running process is stopped and placed at the end of the ready list. The scheduler then picks the front job from the ready list and dispatches it.

Round Robin is pre-emptive!


Time Quantums

The rules are:

1. No process ever gets more than one time quantum in a row.
2. If the process takes more than one time quantum, it is stopped when its time quantum runs out and another process is run. The process moves to the tail of the list.
3. If a process does not use up its time quantum (normally because it blocks), then the scheduler returns it to the tail of the list when it is unblocked and picks another process to run.

The most significant design factor in round robin scheduling is the Time Quantum, the amount of time that each process is allowed to run. Why?


What happens if the time quantum is very, very long?

If the time quantum is very long, then the performance of round robin scheduling approaches that of FCFS scheduling. Interactive users would complain bitterly!


What happens if the time quantum is very short?

Too short – efficiency goes down because of context switches.

Too long – interactive users lose responsiveness and the system appears sluggish.

Suppose that the time to do a context switch on a machine is 1ms and the quantum time is 4ms. Remember that no useful work gets done during a context switch, so 20% of the CPU time is wasted doing context switches.


Relative Treatment of CPU Bound and I/O Bound Processes

I/O bound processes tend to run in short bursts, not using their entire time quantum. CPU bound processes use their entire time quantum every time they are run. The net effect is that CPU bound processes get an unfair proportion of the CPU (but maybe that's the way it should be).


Round Robin Scheduling

In the following example, we will assume a time quantum (time slice) of 4ms. We will not take the time for a context switch into account.

Process   τ(pi)
1   24ms
2   3ms
3   3ms


Process queue.  i

1 24ms

2 3ms

3 3ms

4

16

20

24

28

32

36

8

12

Time slice is 4ms


Process queue.  i

1 24ms

2 3ms

3 3ms

Process 1 starts

It’s Time slice expires

4

16

20

24

28

32

36

8

12

Time slice is 4ms


Process queue.  i

1 24ms

2 3ms

3 3ms

Process 2 starts

It finishes its work

P1

4

16

20

24

28

32

36

8

12

7

Time slice is 4ms


Process queue.  i

1 24ms

2 3ms

3 3ms

Process 3 starts

It finishes its work

P1

P2

4

16

20

24

28

32

36

8

12

10

7

Time slice is 4ms


Process queue.  i

1 24ms

2 3ms

3 3ms

Process 1 starts

Its time slice expires

P1

P3

P2

4

16

20

24

28

32

36

8

12

10

7

Time slice is 4ms


Round Robin Scheduling

Gantt chart: | p1 (0–4) | p2 (4–7) | p3 (7–10) | p1 (10–14) | p1 (14–18) | p1 (18–22) | p1 (22–26) | p1 (26–30) |

Average wait time = (0 + 4 + 7) / 3 = 3.66ms

Significant improvement in wait time! Wait time for this set of jobs using FCFS was 17ms.
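The wait times above can be reproduced with a small round robin simulation. As in the example, all jobs arrive at time 0 and context switches are ignored:

```python
from collections import deque

def round_robin_wait(service_times, quantum=4):
    """Wait time (time until first dispatch) per job under round robin."""
    queue = deque(range(len(service_times)))
    remaining = list(service_times)
    first_run = [None] * len(service_times)
    clock = 0
    while queue:
        i = queue.popleft()
        if first_run[i] is None:
            first_run[i] = clock            # wait ends at the job's first dispatch
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:                # quantum expired: back to the tail
            queue.append(i)
    return first_run

print(round_robin_wait([24, 3, 3]))   # [0, 4, 7] -> average 3.66ms
```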


With the same schedule:

Gantt chart: | p1 (0–4) | p2 (4–7) | p3 (7–10) | p1 (10–14) | p1 (14–18) | p1 (18–22) | p1 (22–26) | p1 (26–30) |

Average turnaround time = (30 + 7 + 10) / 3 = 15.66ms

Round robin does little to improve average turnaround time: with FCFS, turnaround for this set of jobs was 27ms (13ms with the short jobs run first).


Round Robin Scheduling

Now consider the case where the context switch plus the scheduler time takes 2ms.

Process   τ(pi)
1   24ms
2   3ms
3   3ms

Gantt chart (2ms switch between runs): | p1 (0–4) | p2 (6–9) | p3 (11–14) | p1 (16–20) | p1 (22–26) | p1 (28–32) | p1 (34–38) | p1 (40–44) |

Average wait time = (0 + 6 + 11) / 3 = 6.66ms

Without context switching this was 3.66ms.


With the same 2ms context switch plus scheduler overhead:

Gantt chart: | p1 (0–4) | p2 (6–9) | p3 (11–14) | p1 (16–20) | p1 (22–26) | p1 (28–32) | p1 (34–38) | p1 (40–44) |

Average turnaround time = (44 + 9 + 14) / 3 = 22.33ms

Without context switching this was 15.66ms – about a 42% difference.


Shortest Remaining Time Next

This is the pre-emptive version of Shortest Job Next. It allocates the processor to the job closest to completion. Note that a running job can be pre-empted if a new job with a shorter completion time arrives on the queue.

This algorithm is not suited for interactive systems, since it requires knowledge of the time required for a process to complete.


Arrival Time    0     1     2     3
Job #           1     2     3     4
Service Time    6ms   3ms   1ms   4ms

Job 1 arrives at time 0 and starts to run.

Job 1 runs for 1ms. When Job 2 arrives on the queue, its time to completion (3ms) is less than the time required to complete Job 1 (5ms left), so Job 1 is pre-empted by Job 2.

Job 2 runs for 1ms. When Job 3 arrives on the queue, its time to completion (1ms) is less than the time required to complete Job 2 (2ms left), so Job 2 is pre-empted by Job 3.

Job 3 runs to completion at time 3. The scheduler now looks to see which job has the shortest time to completion. Job 4 has arrived, but Job 2 has the shortest remaining time (2ms), so Job 2 runs to completion at time 5.

Job 4 now has the shortest remaining time (4ms vs. Job 1's remaining 5ms) and runs to completion at time 9. There is only one job left, Job 1. It is scheduled and runs to completion at time 14.

Gantt chart: | P1 (0–1) | P2 (1–2) | P3 (2–3) | P2 (3–5) | P4 (5–9) | P1 (9–14) |

Average turnaround time = (14 + 4 + 1 + 6) / 4 = 6.25ms
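The whole walk-through can be checked with a tick-by-tick simulation; the 1 ms granularity matches the example:

```python
def srtn_turnaround(arrivals, services):
    """Turnaround times under Shortest Remaining Time Next, 1 ms ticks."""
    n = len(arrivals)
    remaining = list(services)
    finish = [0] * n
    clock = 0
    while any(r > 0 for r in remaining):
        ready = [i for i in range(n) if arrivals[i] <= clock and remaining[i] > 0]
        if not ready:
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # re-chosen every tick => preemptive
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
    return [finish[i] - arrivals[i] for i in range(n)]

t = srtn_turnaround([0, 1, 2, 3], [6, 3, 1, 4])
print(t, sum(t) / len(t))   # [14, 4, 1, 6] 6.25
```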


Multi-Level Queues

Multi-level queues are useful when all processes to be run can be put into one of a small set of categories, for example:

System Processes
Interactive Processes
Batch Processes
Student Processes

Scheduling between queues is usually fixed-priority pre-emptive scheduling – for example, student jobs won't run at all unless all of the other queues are empty.

Each queue may have its own scheduling algorithm. For example, interactive processes may run round robin, while batch processes are run FCFS.


Multilevel Feedback Queues

Like multilevel queues, but with the ability of jobs to move from one queue to another. This separates jobs according to their CPU burst characteristics. For example, jobs that tend to use too much CPU time will be moved to a lower priority queue.

Similarly, jobs that wait too long to get the CPU will move to a higher priority queue. This type of aging tends to eliminate starvation.
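A toy sketch of the demotion rule. The queue levels, quanta, and job names are invented for illustration, and real schedulers also promote long-waiting jobs (aging), which this sketch omits:

```python
from collections import deque

QUANTA = [4, 8, 16]   # assumed quantum per priority level (0 = highest)

def mlfq_trace(jobs, levels=3):
    """Trace (job, level, ms run) steps: a job that uses its whole
    quantum is demoted one level; short bursts finish near the top."""
    queues = [deque() for _ in range(levels)]
    for name, burst in jobs:
        queues[0].append((name, burst))
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, remaining = queues[level].popleft()
        run = min(QUANTA[level], remaining)
        trace.append((name, level, run))
        remaining -= run
        if remaining > 0:                                    # burned the full quantum
            queues[min(level + 1, levels - 1)].append((name, remaining))
    return trace

for step in mlfq_trace([("cpu_hog", 20), ("short", 3)]):
    print(step)   # cpu_hog sinks to lower queues; short finishes at level 0
```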


Lottery Scheduling

Processes get lottery tickets for various system resources, such as the CPU. When a scheduling decision has to be made, the OS picks a lottery ticket at random. The holder of the ticket gets the resource.

More important (higher priority) processes can be given more lottery tickets to increase their chance of winning.

Cooperating processes can exchange tickets to boost one process's chance of winning and lessen another's.
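A minimal sketch of a lottery draw; the ticket counts and process names are invented for illustration:

```python
import random

def lottery_pick(tickets, rng):
    """Return the process holding the winning ticket, weighted by ticket count."""
    draw = rng.randrange(sum(tickets.values()))
    for pid, count in tickets.items():
        if draw < count:
            return pid
        draw -= count

holdings = {"A": 75, "B": 20, "C": 5}     # A should win ~75% of decisions
rng = random.Random(42)                   # seeded for a repeatable demonstration
wins = {pid: 0 for pid in holdings}
for _ in range(10_000):
    wins[lottery_pick(holdings, rng)] += 1
print(wins)   # counts roughly proportional to ticket holdings
```

Giving a process more tickets raises its priority smoothly, and ticket transfers between cooperating processes fall out of the same mechanism.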


Sun Solaris (a Unix OS)

Solaris has four scheduling classes:

* Time Sharing
* Real-Time
* System
* Interactive

Time Sharing and Interactive use a multi-level feedback queue; scheduling is done on threads. Each priority is assigned a different time quantum. At the end of a time quantum, the priority of a thread is lowered by 10.


Windows XP

Windows XP uses a quantum-based, multiple-priority feedback scheduling algorithm. Scheduling is done on threads, not processes.

When a thread's time quantum expires, its priority is lowered by 1. When a thread moves to the ready list after being blocked, its priority is boosted. The largest boost is for threads waiting for keyboard input.


Linux 2.5

Linux 2.5 uses a quantum-based, multiple-priority feedback scheduling algorithm.

When a thread's time quantum expires, its priority is lowered by some amount. When a thread moves to the ready list after being blocked, its priority is boosted, based on how long it was in the waiting state.

