OMSE 510: Computing Foundations 7: More IPC & Multithreading

Chris Gilmore <grimjack@cs.pdx.edu>

Portland State University/OMSE

Material Borrowed from Jon Walpole’s lectures

Today
  • Classical IPC Problems
  • Monitors
  • Message Passing
  • Scheduling
Classical IPC problems
  • Producer Consumer (bounded buffer)
  • Dining philosophers
  • Sleeping barber
  • Readers and writers
Producer consumer problem
  • Also known as the bounded buffer problem
  • Producer and consumer are separate threads

[Diagram: a ring of 8 buffers; the producer inserts at InP, the consumer removes at OutP]

Is this a valid solution?

[Diagram: circular buffer with slots 0, 1, 2, …, n-1]

Global variables:
  char buf[n]
  int InP = 0   // place to add
  int OutP = 0  // place to get
  int count

thread producer {
  while(1) {
    // Produce char c
    while (count == n) { no_op }
    buf[InP] = c
    InP = InP + 1 mod n
    count++
  }
}

thread consumer {
  while(1) {
    while (count == 0) { no_op }
    c = buf[OutP]
    OutP = OutP + 1 mod n
    count--
    // Consume char
  }
}

How about this?

[Diagram: circular buffer with slots 0, 1, 2, …, n-1]

Global variables:
  char buf[n]
  int InP = 0   // place to add
  int OutP = 0  // place to get
  int count

thread producer {
  while(1) {
    // Produce char c
    if (count == n) { sleep(full) }
    buf[InP] = c
    InP = InP + 1 mod n
    count++
    if (count == 1) wakeup(empty)
  }
}

thread consumer {
  while(1) {
    while (count == 0) { sleep(empty) }
    c = buf[OutP]
    OutP = OutP + 1 mod n
    count--
    if (count == n-1) wakeup(full)
    // Consume char
  }
}

Does this solution work?

Global variables:
  semaphore full_buffs = 0;
  semaphore empty_buffs = n;
  char buf[n];
  int InP, OutP;

thread producer {
  while(1) {
    // Produce char c...
    down(empty_buffs)
    buf[InP] = c
    InP = InP + 1 mod n
    up(full_buffs)
  }
}

thread consumer {
  while(1) {
    down(full_buffs)
    c = buf[OutP]
    OutP = OutP + 1 mod n
    up(empty_buffs)
    // Consume char...
  }
}
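The semaphore solution above runs essentially unchanged in Python. This is a minimal sketch using threading.Semaphore; it assumes a single producer and a single consumer, which is why InP and OutP need no additional lock (each is written by only one thread). Names mirror the slide's pseudocode.

```python
import threading

N = 8
buf = [None] * N
in_p = 0
out_p = 0
full_buffs = threading.Semaphore(0)    # down() blocks until a slot is full
empty_buffs = threading.Semaphore(N)   # starts at n: all slots empty

consumed = []

def producer(items):
    global in_p
    for c in items:
        empty_buffs.acquire()          # down(empty_buffs)
        buf[in_p] = c
        in_p = (in_p + 1) % N
        full_buffs.release()           # up(full_buffs)

def consumer(count):
    global out_p
    for _ in range(count):
        full_buffs.acquire()           # down(full_buffs)
        c = buf[out_p]
        out_p = (out_p + 1) % N
        empty_buffs.release()          # up(empty_buffs)
        consumed.append(c)             # consume char

items = list(range(20))
t1 = threading.Thread(target=producer, args=(items,))
t2 = threading.Thread(target=consumer, args=(len(items),))
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)                        # items arrive in FIFO order
```

With more than one producer or consumer, the index updates themselves become shared state and would also need a mutex.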
Producer consumer problem
  • What is the shared state in the last solution?
  • Does it apply mutual exclusion? If so, how?

[Diagram: 8 buffers with InP and OutP; producer and consumer are separate threads]

Definition of Deadlock

A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause.

  • Usually the event is the release of a currently held resource
  • None of the processes can …
    • be awakened
    • run
    • release resources
Deadlock conditions
  • A deadlock situation can occur if and only if the following conditions hold simultaneously
    • Mutual exclusion condition – resource assigned to one process
    • Hold and wait condition – a process can hold one resource while waiting for another
    • No preemption condition
    • Circular wait condition – chain of two or more processes (must be waiting for resource from next one in chain)
Resource acquisition scenarios

Thread A:
  acquire (resource_1)
  use resource_1
  release (resource_1)

Example:
  var r1_mutex: Mutex
  ...
  r1_mutex.Lock()
  Use resource_1
  r1_mutex.Unlock()

Resource acquisition scenarios

Thread A:
  acquire (resource_1)
  use resource_1
  release (resource_1)

Another Example:
  var r1_sem: Semaphore
  ...
  r1_sem.Down()
  Use resource_1
  r1_sem.Up()

Resource acquisition scenarios

Thread A:                    Thread B:
  acquire (resource_1)         acquire (resource_2)
  use resource_1               use resource_2
  release (resource_1)         release (resource_2)

No deadlock can occur here!

Resource acquisition scenarios: 2 resources

Thread A:                    Thread B:
  acquire (resource_1)         acquire (resource_1)
  acquire (resource_2)         acquire (resource_2)
  use resources 1 & 2          use resources 1 & 2
  release (resource_2)         release (resource_2)
  release (resource_1)         release (resource_1)

No deadlock can occur here!

Resource acquisition scenarios: 2 resources

Thread A:                    Thread B:
  acquire (resource_1)         acquire (resource_2)
  use resource 1               use resource 2
  release (resource_1)         release (resource_2)
  acquire (resource_2)         acquire (resource_1)
  use resource 2               use resource 1
  release (resource_2)         release (resource_1)

No deadlock can occur here!

Resource acquisition scenarios: 2 resources

Thread A:                    Thread B:
  acquire (resource_1)         acquire (resource_2)
  acquire (resource_2)         acquire (resource_1)
  use resources 1 & 2          use resources 1 & 2
  release (resource_2)         release (resource_1)
  release (resource_1)         release (resource_2)

Deadlock is possible!
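The last scenario deadlocks precisely because A and B acquire the two resources in opposite orders, creating a circular wait. The standard fix is a global acquisition order: every thread takes resource_1 before resource_2. A small sketch (thread names follow the slides; the worker function is illustrative):

```python
import threading

resource_1 = threading.Lock()
resource_2 = threading.Lock()
log = []

def worker(name):
    # Both threads acquire resource_1 before resource_2, so the circular
    # wait (A holds 1 and wants 2 while B holds 2 and wants 1) can never form.
    with resource_1:
        with resource_2:
            log.append(name)   # use resources 1 & 2

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(log))             # both threads ran to completion
```

Breaking any one of the four deadlock conditions works; ordering attacks circular wait because it is usually the cheapest to enforce.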

Examples of deadlock
  • Deadlock occurs in a single program
    • Programmer creates a situation that deadlocks
    • Kill the program and move on
    • Not a big deal
  • Deadlock occurs in the Operating System
    • Spin locks and locking mechanisms are mismanaged within the OS
    • Threads become frozen
    • System hangs or crashes
    • Must restart the system and kill all applications
Dining philosophers problem
  • Five philosophers sit at a table
  • One fork between each philosopher
  • Why do they need to synchronize?
  • How should they do it?

Each philosopher is modeled with a thread:

while(TRUE) {
  Think();
  Grab first fork;
  Grab second fork;
  Eat();
  Put down first fork;
  Put down second fork;
}

Is this a valid solution?

#define N 5

Philosopher() {
  while(TRUE) {
    Think();
    take_fork(i);
    take_fork((i+1) % N);
    Eat();
    put_fork(i);
    put_fork((i+1) % N);
  }
}

Working towards a solution …

#define N 5

Philosopher() {
  while(TRUE) {
    Think();
    take_fork(i);          // replace this pair with take_forks(i)
    take_fork((i+1) % N);
    Eat();
    put_fork(i);           // replace this pair with put_forks(i)
    put_fork((i+1) % N);
  }
}

Working towards a solution …

#define N 5

Philosopher() {
  while(TRUE) {
    Think();
    take_forks(i);
    Eat();
    put_forks(i);
  }
}

Picking up forks

int state[N]
semaphore mutex = 1
semaphore sem[i]

// only called with mutex set!
test(int i) {
  if (state[i] == HUNGRY &&
      state[LEFT] != EATING &&
      state[RIGHT] != EATING) {
    state[i] = EATING;
    up(sem[i]);
  }
}

take_forks(int i) {
  down(mutex);
  state[i] = HUNGRY;
  test(i);
  up(mutex);
  down(sem[i]);
}

Putting down forks

int state[N]
semaphore mutex = 1
semaphore sem[i]

// only called with mutex set!
test(int i) {
  if (state[i] == HUNGRY &&
      state[LEFT] != EATING &&
      state[RIGHT] != EATING) {
    state[i] = EATING;
    up(sem[i]);
  }
}

put_forks(int i) {
  down(mutex);
  state[i] = THINKING;
  test(LEFT);
  test(RIGHT);
  up(mutex);
}

Dining philosophers
  • Is the previous solution correct?
  • What does it mean for it to be correct?
  • Is there an easier way?
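One easier deadlock-free variant, sketched here in Python rather than the semaphore/state solution above: give the forks a global order and have each philosopher pick up the lower-numbered fork first. With a total order on locks, a circular wait cannot form. All names are illustrative.

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # Acquire forks in global (numeric) order: breaks circular wait.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1      # Eat()

threads = [threading.Thread(target=philosopher, args=(i, 10)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)                       # every philosopher finished all rounds
```

This trades some concurrency for simplicity; the test/state solution above lets the maximum number of philosophers eat at once.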
The sleeping barber problem
  • Barber:
    • While there are people waiting for a hair cut, put one in the barber chair, and cut their hair
    • When done, move to the next customer
    • Else go to sleep, until someone comes in
  • Customer:
    • If barber is asleep wake him up for a haircut
    • If someone is getting a haircut wait for the barber to become free by sitting in a chair
    • If all chairs are full, leave the barbershop
Designing a solution
  • How will we model the barber and customers?
  • What state variables do we need?
    • .. and which ones are shared?
    • …. and how will we protect them?
  • How will the barber sleep?
  • How will the barber wake up?
  • How will customers wait?
  • What problems do we need to look out for?
Is this a good solution?

const CHAIRS = 5
var customers: Semaphore
    barbers: Semaphore
    lock: Mutex
    numWaiting: int = 0

Customer Thread:
  Lock(lock)
  if numWaiting < CHAIRS
    numWaiting = numWaiting + 1
    Up(customers)
    Unlock(lock)
    Down(barbers)
    GetHaircut()
  else -- give up & go home
    Unlock(lock)
  endIf

Barber Thread:
  while true
    Down(customers)
    Lock(lock)
    numWaiting = numWaiting - 1
    Up(barbers)
    Unlock(lock)
    CutHair()
  endWhile
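The pseudocode above translates almost line for line into Python. A sketch, with one liberty taken for determinism: the haircut counter is updated before Up(barbers), so the totals are settled by the time every customer thread returns. Counter names are illustrative.

```python
import threading

CHAIRS = 5
customers = threading.Semaphore(0)
barbers = threading.Semaphore(0)
lock = threading.Lock()
num_waiting = 0
served = 0
turned_away = 0

def customer():
    global num_waiting, turned_away
    lock.acquire()
    if num_waiting < CHAIRS:
        num_waiting += 1
        customers.release()        # Up(customers): wake a sleeping barber
        lock.release()
        barbers.acquire()          # Down(barbers): wait for the barber
        # GetHaircut()
    else:                          # all chairs full: give up & go home
        turned_away += 1
        lock.release()

def barber():
    global num_waiting, served
    while True:
        customers.acquire()        # Down(customers): sleep if nobody waits
        with lock:
            num_waiting -= 1
            served += 1            # CutHair()
        barbers.release()          # Up(barbers)

threading.Thread(target=barber, daemon=True).start()
threads = [threading.Thread(target=customer) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(served + turned_away)        # every customer was served or turned away
```

How many are turned away depends on scheduling; only the total is deterministic.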

The readers and writers problem
  • Multiple readers and writers want to access a database (each one is a thread)
  • Multiple readers can proceed concurrently
  • Writers must synchronize with readers and other writers
    • only one writer at a time !
    • when someone is writing, there must be no readers !

Goals:

    • Maximize concurrency.
    • Prevent starvation.
Designing a solution
  • How will we model the readers and writers?
  • What state variables do we need?
    • .. and which ones are shared?
    • …. and how will we protect them?
  • How will the writers wait?
  • How will the writers wake up?
  • How will readers wait?
  • How will the readers wake up?
  • What problems do we need to look out for?
Is this a valid solution to readers & writers?

var mut: Mutex = unlocked
    db: Semaphore = 1
    rc: int = 0

Reader Thread:
  while true
    Lock(mut)
    rc = rc + 1
    if rc == 1
      Down(db)
    endIf
    Unlock(mut)
    ... Read shared data ...
    Lock(mut)
    rc = rc - 1
    if rc == 0
      Up(db)
    endIf
    Unlock(mut)
    ... Remainder Section ...
  endWhile

Writer Thread:
  while true
    ... Remainder Section ...
    Down(db)
    ... Write shared data ...
    Up(db)
  endWhile

Readers and writers solution
  • Does the previous solution have any problems?
    • is it “fair”?
    • can any threads be starved? If so, how could this be fixed?
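A runnable rendering of the reader/writer code above, assuming the same first-reader-locks / last-reader-unlocks protocol. It demonstrates safety (writers never interleave with readers), not fairness: with enough readers the writers can starve, which is exactly the issue raised above.

```python
import threading

mut = threading.Lock()
db = threading.Semaphore(1)
rc = 0
data = 0
snapshots = []

def reader(n):
    global rc
    for _ in range(n):
        with mut:
            rc += 1
            if rc == 1:
                db.acquire()       # first reader in locks out writers
        snapshots.append(data)     # ... read shared data ...
        with mut:
            rc -= 1
            if rc == 0:
                db.release()       # last reader out lets writers in

def writer(n):
    global data
    for _ in range(n):
        db.acquire()
        data += 1                  # ... write shared data ...
        db.release()

threads = [threading.Thread(target=reader, args=(50,)) for _ in range(3)]
threads += [threading.Thread(target=writer, args=(100,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(data)                        # no writer update is ever lost
```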
Monitors
  • It is difficult to produce correct programs using semaphores
    • correct ordering of up and down is tricky!
    • avoiding race conditions and deadlock is tricky!
    • boundary conditions are tricky!
  • Can we get the compiler to generate the correct semaphore code for us?
    • what are suitable higher level abstractions for synchronization?
Monitors
  • Related shared objects are collected together
  • Compiler enforces encapsulation/mutual exclusion
    • Encapsulation:
      • Local data variables are accessible only via the monitor’s entry procedures (like methods)
    • Mutual exclusion
      • A monitor has an associated mutex lock
      • Threads must acquire the monitor’s mutex lock before invoking one of its procedures
Monitors and condition variables
  • But we need two flavors of synchronization
    • Mutual exclusion
      • Only one at a time in the critical section
      • Handled by the monitor’s mutex
    • Condition synchronization
      • Wait until a certain condition holds
      • Signal waiting threads when the condition holds
Monitors and condition variables
  • Condition variables (cv) for use within monitors
    • wait(cv)
      • thread blocked (queued) until condition holds
      • monitor mutex released!!
    • signal(cv)
      • signals the condition and unblocks (dequeues) a thread
Monitor structures

[Diagram of a monitor: shared data and condition variables x and y are local to the monitor, each condition variable with an associated list of waiting threads; a monitor entry queue holds the list of threads waiting to enter the monitor; “entry” methods can be called from outside the monitor, but only one is active at any moment; the monitor also contains local methods and initialization code]

Monitor example for mutual exclusion

monitor: BoundedBuffer
  var buffer : ...;
      nextIn, nextOut : ...;
  entry deposit(c: char)
    begin
      ...
    end
  entry remove(var c: char)
    begin
      ...
    end
end BoundedBuffer

process Producer
  begin
    loop
      <produce char “c”>
      BoundedBuffer.deposit(c)
    end loop
  end Producer

process Consumer
  begin
    loop
      BoundedBuffer.remove(c)
      <consume char “c”>
    end loop
  end Consumer

Observations
  • That’s much simpler than the semaphore-based solution to producer/consumer (bounded buffer)!
  • … but where is the mutex?
  • … and what do the bodies of the monitor procedures look like?
Monitor example with condition variables

monitor: BoundedBuffer
  var buffer : array[0..n-1] of char
      nextIn, nextOut : 0..n-1 := 0
      fullCount : 0..n := 0
      notEmpty, notFull : condition

  entry deposit(c: char)
    begin
      if (fullCount = n) then
        wait(notFull)
      end if
      buffer[nextIn] := c
      nextIn := nextIn+1 mod n
      fullCount := fullCount+1
      signal(notEmpty)
    end deposit

  entry remove(var c: char)
    begin
      if (fullCount = 0) then
        wait(notEmpty)
      end if
      c := buffer[nextOut]
      nextOut := nextOut+1 mod n
      fullCount := fullCount-1
      signal(notFull)
    end remove
end BoundedBuffer
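A Python rendering of the BoundedBuffer monitor above, as a sketch: the class's lock plays the monitor mutex, and two threading.Condition objects share it. Note that Python's condition variables have Mesa-style (hint) semantics, so each wait sits in a while loop that rechecks the condition.

```python
import threading

class BoundedBuffer:
    def __init__(self, n):
        self.buffer = [None] * n
        self.n = n
        self.next_in = self.next_out = self.full_count = 0
        self.lock = threading.Lock()                  # the monitor mutex
        self.not_empty = threading.Condition(self.lock)
        self.not_full = threading.Condition(self.lock)

    def deposit(self, c):
        with self.lock:                               # enter the monitor
            while self.full_count == self.n:
                self.not_full.wait()                  # releases the mutex
            self.buffer[self.next_in] = c
            self.next_in = (self.next_in + 1) % self.n
            self.full_count += 1
            self.not_empty.notify()                   # signal(notEmpty)

    def remove(self):
        with self.lock:
            while self.full_count == 0:
                self.not_empty.wait()
            c = self.buffer[self.next_out]
            self.next_out = (self.next_out + 1) % self.n
            self.full_count -= 1
            self.not_full.notify()                    # signal(notFull)
            return c

bb = BoundedBuffer(4)
out = []
t = threading.Thread(target=lambda: out.extend(bb.remove() for _ in range(10)))
t.start()
for i in range(10):
    bb.deposit(i)
t.join()
print(out)                                            # FIFO order preserved
```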

Condition variables

“Condition variables allow processes to synchronize based on some state of the monitor variables.”

Condition variables in producer/consumer

Two conditions are used: “NotFull” and “NotEmpty”

  • Operations Wait() and Signal() allow synchronization within the monitor
  • When a producer thread adds an element...
    • A consumer may be sleeping
    • Need to wake the consumer... Signal
Condition synchronization semantics
  • “Only one thread can be executing in the monitor at any one time.”
  • Scenario:
    • Thread A is executing in the monitor
    • Thread A does a signal waking up thread B
    • What happens now?
    • Signaling and signaled threads can not both run!
    • … so which one runs, which one blocks, and on what queue?
Monitor design choices
  • Condition variables introduce a problem for mutual exclusion
    • only one process active in the monitor at a time, so what to do when a process is unblocked on signal?
    • must not block holding the mutex, so what to do when a process blocks on wait?
  • Should signals be stored/remembered?
    • signals are not stored
    • if signal occurs before wait, signal is lost!
  • Should condition variables count?
Monitor design choices
  • Choices when A signals a condition that unblocks B
    • A waits for B to exit the monitor or blocks again
    • B waits for A to exit the monitor or block
    • Signal causes A to immediately exit the monitor or block (… but awaiting what condition?)
  • Choices when A signals a condition that unblocks B & C
    • B is unblocked, but C remains blocked
    • C is unblocked, but B remains blocked
    • Both B & C are unblocked … and compete for the mutex?
  • Choices when A calls wait and blocks
    • a new external process is allowed to enter
    • but which one?
Option 1: Hoare semantics
  • What happens when a Signal is performed?
    • signaling thread (A) is suspended
    • signaled thread (B) wakes up and runs immediately
  • Result:
    • B can assume the condition is now true/satisfied
    • Hoare semantics give strong guarantees
    • Easier to prove correctness
  • When B leaves monitor, A can run.
      • A might resume execution immediately
      • ... or maybe another thread (C) will slip in!
Option 2: MESA Semantics (Xerox PARC)
  • What happens when a Signal is performed?
    • the signaling thread (A) continues.
    • the signaled thread (B) waits.
    • when A leaves monitor, then B runs.
  • Issue: What happens while B is waiting?
    • can another thread (C) run after A signals, but before B runs?
  • In MESA semantics a signal is more like a hint
    • Requires B to recheck the state of the monitor variables (the invariant) to see if it can proceed or must wait some more
Code for the “deposit” entry routine (Hoare Semantics)

monitor BoundedBuffer
  var buffer: array[n] of char
      nextIn, nextOut: int = 0
      cntFull: int = 0
      notEmpty: Condition
      notFull: Condition

  entry deposit(c: char)
    if cntFull == N
      notFull.Wait()
    endIf
    buffer[nextIn] = c
    nextIn = (nextIn+1) mod N
    cntFull = cntFull + 1
    notEmpty.Signal()
  endEntry

  entry remove()
    ...
endMonitor

Code for the “deposit” entry routine (MESA Semantics)

monitor BoundedBuffer
  var buffer: array[n] of char
      nextIn, nextOut: int = 0
      cntFull: int = 0
      notEmpty: Condition
      notFull: Condition

  entry deposit(c: char)
    while cntFull == N
      notFull.Wait()
    endWhile
    buffer[nextIn] = c
    nextIn = (nextIn+1) mod N
    cntFull = cntFull + 1
    notEmpty.Signal()
  endEntry

  entry remove()
    ...
endMonitor

Code for the “remove” entry routine (Hoare Semantics)

monitor BoundedBuffer
  var buffer: array[n] of char
      nextIn, nextOut: int = 0
      cntFull: int = 0
      notEmpty: Condition
      notFull: Condition

  entry deposit(c: char)
    ...

  entry remove()
    if cntFull == 0
      notEmpty.Wait()
    endIf
    c = buffer[nextOut]
    nextOut = (nextOut+1) mod N
    cntFull = cntFull - 1
    notFull.Signal()
  endEntry
endMonitor

Code for the “remove” entry routine (MESA Semantics)

monitor BoundedBuffer
  var buffer: array[n] of char
      nextIn, nextOut: int = 0
      cntFull: int = 0
      notEmpty: Condition
      notFull: Condition

  entry deposit(c: char)
    ...

  entry remove()
    while cntFull == 0
      notEmpty.Wait()
    endWhile
    c = buffer[nextOut]
    nextOut = (nextOut+1) mod N
    cntFull = cntFull - 1
    notFull.Signal()
  endEntry
endMonitor

“Hoare Semantics”

What happens when a Signal is performed?
  • The signaling thread (A) is suspended.
  • The signaled thread (B) wakes up and runs immediately.
  • B can assume the condition is now true/satisfied.

From the original Hoare Paper:

“No other thread can intervene [and enter the monitor] between the signal and the continuation of exactly one waiting thread.”

“If more than one thread is waiting on a condition, we postulate that the signal operation will reactivate the longest waiting thread. This gives a simple neutral queuing discipline which ensures that every waiting thread will eventually get its turn.”

Implementing Hoare Semantics
  • Thread A holds the monitor lock
  • Thread A signals a condition that thread B was waiting on
  • Thread B is moved back to the ready queue?
    • B should run immediately
    • Thread A must be suspended...
    • the monitor lock must be passed from A to B
  • When B finishes it releases the monitor lock
  • Thread A must re-acquire the lock
    • Perhaps A is blocked, waiting to re-acquire the lock
Implementing Hoare Semantics
  • Problem:
    • Possession of the monitor lock must be passed directly from A to B and then back to A
    • Simply ending monitor entry methods with monLock.Unlock() will not work

    • A’s request for the monitor lock must be expedited somehow
Implementing Hoare Semantics
  • Implementation Ideas:
    • Consider a thread like A that hands off the mutex lock to a signaled thread, to be “urgent”.
      • Thread C is not “urgent”
    • Consider two wait lists associated with each MonitorLock
      • UrgentlyWaitingThreads
      • NonurgentlyWaitingThreads
    • Want to wake up urgent threads first, if any
Brinch-Hansen Semantics
  • Hoare Semantics
    • On signal, allow signaled process to run
    • Upon its exit from the monitor, signaler process continues.
  • Brinch-Hansen Semantics
    • Signaler must immediately exit following any invocation of signal
    • Restricts the kind of solutions that can be written
    • … but monitor implementation is easier
Reentrant code
  • A function/method is said to be reentrant if...

A function that has been invoked may be invoked again before the first invocation has returned, and will still work correctly

  • Recursive routines are reentrant
  • In the context of concurrent programming...

A reentrant function can be executed simultaneously by more than one thread, with no ill effects

Reentrant Code
  • Consider this function...

integer seed;

integer rand() {
  seed = seed * 8151 + 3423
  return seed;
}

  • What if it is executed by different threads concurrently?
    • The results may be “random”
    • This routine is not reentrant!
When is code reentrant?
  • Some variables are
    • “local” -- to the function/method/routine
    • “global” -- sometimes called “static”
  • Access to local variables?
    • A new stack frame is created for each invocation
    • Each thread has its own stack
  • What about access to global variables?
    • Must use synchronization!
Making this function threadsafe

integer seed;
semaphore m = 1;

integer rand() {
  down(m);
  seed = seed * 8151 + 3423
  tmp = seed;
  up(m);
  return tmp;
}

Making this function reentrant

integer rand( *seed ) {
  *seed = *seed * 8151 + 3423
  return *seed;
}
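The same idea in Python, as a sketch: the generator state is passed in and handed back instead of living in a shared global, so concurrent callers with separate seeds cannot interfere and no lock is needed. The constants come from the slide.

```python
def rand(seed):
    # Pure function of its argument: no shared state, hence reentrant.
    return seed * 8151 + 3423

# Two callers with independent state get independent, repeatable results:
a = rand(7)
b = rand(7)
print(a == b, a)   # identical results from identical seeds: True 60480
```

The threadsafe version above serializes access to one shared sequence; the reentrant version removes the sharing altogether.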

Message Passing
  • Interprocess Communication
    • via shared memory
    • across machine boundaries
  • Message passing can be used for synchronization or general communication
  • Processes use send and receive primitives
    • receive can block (like waiting on a Semaphore)
    • send unblocks a process blocked on receive (just as a signal unblocks a waiting process)
Producer-consumer with message passing
  • The basic idea:
    • After producing, the producer sends the data to consumer in a message
    • The system buffers messages
      • The producer can out-run the consumer
      • The messages will be kept in order
    • But how does the producer avoid overflowing the buffer?
      • After consuming the data, the consumer sends back an “empty” message
    • A fixed number of messages (N=100)
    • The messages circulate back and forth.
Producer-consumer with message passing

const N = 100 -- Size of message buffer
var em: char

for i = 1 to N -- Get things started by
  Send(producer, &em) -- sending N empty messages
endFor

thread producer
  var c, em: char
  while true
    // Produce char c...
    Receive(consumer, &em) -- Wait for an empty msg
    Send(consumer, &c) -- Send c to consumer
  endWhile
end

thread consumer
  var c, em: char
  while true
    Receive(producer, &c) -- Wait for a char
    Send(producer, &em) -- Send empty message back
    // Consume char...
  endWhile
end
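A Python sketch of this scheme using two queue.Queue objects as the mailboxes: data messages flow producer to consumer, empty messages flow back, and the N initial empties bound how far the producer can run ahead. Queue and variable names are illustrative.

```python
import queue
import threading

N = 100
to_consumer = queue.Queue()
to_producer = queue.Queue()
for _ in range(N):
    to_producer.put("empty")       # prime the system with N empty messages

received = []

def producer(items):
    for c in items:
        to_producer.get()          # Receive: wait for an empty msg
        to_consumer.put(c)         # Send c to consumer

def consumer(count):
    for _ in range(count):
        c = to_consumer.get()      # Receive: wait for a char
        to_producer.put("empty")   # Send empty message back
        received.append(c)         # Consume char...

items = list("hello message passing")
t1 = threading.Thread(target=producer, args=(items,))
t2 = threading.Thread(target=consumer, args=(len(items),))
t1.start(); t2.start()
t1.join(); t2.join()
print("".join(received))           # messages are kept in order
```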

Design choices for message passing
  • Option 1: Mailboxes
    • System maintains a buffer of sent, but not yet received, messages
    • Must specify the size of the mailbox ahead of time
    • Sender will be blocked if the buffer is full
    • Receiver will be blocked if the buffer is empty
Design choices for message passing
  • Option 2: No buffering
    • If Send happens first, the sending thread blocks
    • If Receive happens first, the receiving thread blocks
    • Sender and receiver must Rendezvous (i.e., meet)
    • Both threads are ready for the transfer
    • The data is copied / transmitted
    • Both threads are then allowed to proceed
Barriers
  • (a) Processes approaching a barrier
  • (b) All processes but one blocked at barrier
  • (c) Last process arrives; all are let through
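The three phases above map directly onto Python's threading.Barrier: each thread blocks in wait() until the last arrival, at which point all are released together. A minimal sketch:

```python
import threading

n = 4
barrier = threading.Barrier(n)
after = []

def worker(i):
    # ... approach the barrier (phase a) ...
    barrier.wait()     # blocked until all n threads arrive (phases b and c)
    after.append(i)    # code here runs only once everyone has arrived

threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
for t in threads: t.start()
for t in threads: t.join()
print(len(after))      # all threads passed the barrier together
```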
Quiz
  • What is the difference between a monitor and a semaphore?
    • Why might you prefer one over the other?
  • How do the wait/signal methods of a condition variable differ from the up/down methods of a semaphore?
  • What is the difference between Hoare and Mesa semantics for condition variables?
    • What implications does this difference have for code surrounding a wait() call?
Scheduling

Responsibility of the Operating System

[Process state model diagram: New Process, Ready, Running, Blocked, Termination]

CPU scheduling criteria
  • CPU Utilization – how busy is the CPU?
  • Throughput – how many jobs finished/unit time?
  • Turnaround Time – how long from job submission to job termination?
  • Response Time – how long (on average) does it take to get a “response” from a “stimulus”?
  • Missed deadlines – were any deadlines missed?
Scheduler options
  • Priorities
    • May use priorities to determine who runs next
      • amount of memory, order of arrival, etc..
    • Dynamic vs. Static algorithms
      • Dynamically alter the priority of the tasks while they are in the system (possibly with feedback)
      • Static algorithms typically assign a fixed priority when the job is initially started.
  • Preemptive vs. Nonpreemptive
    • Preemptive systems allow the task to be interrupted at any time so that the O.S. can take over again.
Scheduling policies
  • First-Come, First Served (FIFO)
  • Shortest Job First (non-preemptive)
  • Shortest Job First (with preemption)
  • Round-Robin Scheduling
  • Priority Scheduling
  • Real-Time Scheduling
First-Come, First-Served (FIFO)
  • Start jobs in the order they arrive (FIFO queue)
  • Run each job until completion
first come first served fifo1
First-Come, First-Served (FIFO)
  • Start jobs in the order they arrive (FIFO queue)
  • Run each job until completion

Arrival Processing

Process Time Time

1 0 3

2 2 6

3 4 4

4 6 5

5 8 2

first come first served fifo2
First-Come, First-Served (FIFO)
  • Start jobs in the order they arrive (FIFO queue)
  • Run each job until completion

Total time taken,

from submission to

completion

Arrival Processing Turnaround

Process Time Time Delay Time

1 0 3

2 2 6

3 4 4

4 6 5

5 8 2

first come first served fifo3

0

5

10

15

20

First-Come, First-Served (FIFO)
  • Start jobs in the order they arrive (FIFO queue)
  • Run each job until completion

Total time taken,

from submission to

completion

Arrival Processing Turnaround

Process Time Time Delay Time

1 0 3

2 2 6

3 4 4

4 6 5

5 8 2

Arrival Times of the Jobs

first come first served fifo4

0

5

10

15

20

First-Come, First-Served (FIFO)
  • Start jobs in the order they arrive (FIFO queue)
  • Run each job until completion

Total time taken,

from submission to

completion

Arrival Processing Turnaround

Process Time Time Delay Time

1 0 3

2 2 6

3 4 4

4 6 5

5 8 2

first come first served fifo5

0

5

10

15

20

First-Come, First-Served (FIFO)
  • Start jobs in the order they arrive (FIFO queue)
  • Run each job until completion

Total time taken,

from submission to

completion

Arrival Processing Turnaround

Process Time Time Delay Time

1 0 3

2 2 6

3 4 4

4 6 5

5 8 2

first come first served fifo6

0

5

10

15

20

First-Come, First-Served (FIFO)
first come first served fifo17

First-Come, First-Served (FIFO)
  • Start jobs in the order they arrive (FIFO queue)
  • Run each job until completion
  • Turnaround time: total time taken, from submission to completion

[Gantt chart, time axis 0–20]

Process   Arrival Time   Processing Time   Delay   Turnaround Time
   1           0                3             0           3
   2           2                6             1           7
   3           4                4             5           9
   4           6                5             7          12
   5           8                2            10          12
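As a sanity check, the FIFO table above can be reproduced with a short simulation. This is an illustrative sketch (the function name and structure are ours, not from the lecture):

```python
# Sketch of the FIFO (first-come, first-served) calculation behind the
# table above. Jobs are (process id, arrival time, processing time);
# each job waits for all earlier arrivals to finish, then runs to completion.
def fifo_schedule(jobs):
    clock = 0
    results = {}
    for pid, arrival, processing in sorted(jobs, key=lambda j: j[1]):
        start = max(clock, arrival)
        delay = start - arrival                  # time spent waiting in the queue
        clock = start + processing               # job runs to completion
        results[pid] = (delay, clock - arrival)  # (delay, turnaround)
    return results

jobs = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]
print(fifo_schedule(jobs))
# → {1: (0, 3), 2: (1, 7), 3: (5, 9), 4: (7, 12), 5: (10, 12)}
```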

shortest job first
Shortest Job First
  • Select the job with the shortest (expected) running time
  • Non-Preemptive
shortest job first15

Shortest Job First
  • Select the job with the shortest (expected) running time
  • Non-Preemptive

[Gantt chart, time axis 0–20]

Process   Arrival Time   Processing Time   Delay   Turnaround Time
   1           0                3             0           3
   2           2                6             1           7
   3           4                4             7          11
   4           6                5             9          14
   5           8                2             1           3
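The same job mix under non-preemptive SJF can be simulated similarly. A sketch (helper name assumed; whenever the CPU frees up, the shortest ready job runs to completion):

```python
# Non-preemptive shortest-job-first over (pid, arrival, processing) jobs.
def sjf_schedule(jobs):
    pending = sorted(jobs, key=lambda j: j[1])
    clock, results = 0, {}
    while pending:
        ready = [j for j in pending if j[1] <= clock]
        if not ready:                      # CPU idle: jump to the next arrival
            clock = pending[0][1]
            ready = [j for j in pending if j[1] <= clock]
        # Pick the shortest ready job; ties broken by arrival order.
        job = min(ready, key=lambda j: (j[2], j[1]))
        pending.remove(job)
        pid, arrival, processing = job
        delay = clock - arrival
        clock += processing
        results[pid] = (delay, clock - arrival)   # (delay, turnaround)
    return results
```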

shortest remaining time

Shortest Remaining Time
  • Preemptive version of SJF
  • Same job mix as before

shortest remaining time15

Shortest Remaining Time
  • Preemptive version of SJF

[Gantt chart, time axis 0–20]

Process   Arrival Time   Processing Time   Delay   Turnaround Time
   1           0                3             0           3
   2           2                6             7          13
   3           4                4             0           4
   4           6                5             9          14
   5           8                2             0           2
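Preemption changes the bookkeeping: the scheduler re-evaluates the ready set at every time unit. A unit-step simulation sketch (assumed helper; ties broken in favor of the earlier arrival) reproduces the turnaround column:

```python
# Preemptive shortest-remaining-time over (pid, arrival, processing) jobs,
# simulated in unit time steps.
def srt_schedule(jobs):
    remaining = {pid: proc for pid, _, proc in jobs}
    arrival = {pid: arr for pid, arr, _ in jobs}
    finish, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= clock]
        if not ready:                 # nothing has arrived yet: idle one tick
            clock += 1
            continue
        # Run the job with the least work left; ties go to the earlier arrival.
        pid = min(ready, key=lambda p: (remaining[p], arrival[p]))
        remaining[pid] -= 1
        clock += 1
        if remaining[pid] == 0:
            del remaining[pid]
            finish[pid] = clock
    return {p: finish[p] - arrival[p] for p in finish}   # turnaround times
```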

round robin scheduling
Round-Robin Scheduling
  • Goal: Enable interactivity
    • Limit the amount of CPU that a process can have at one time.
  • Time quantum
    • Amount of time the OS gives a process before intervention
    • The “time slice”
    • Typically: 1 to 100ms
round robin scheduling28

Round-Robin Scheduling

[Gantt chart, time axis 0–20]

Process   Arrival Time   Processing Time   Delay   Turnaround Time
   1           0                3             1           4
   2           2                6            10          16
   3           4                4             9          13
   4           6                5             9          14
   5           8                2             5           7
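The slides do not state the time quantum, but a quantum of 1, with newly arriving jobs queued ahead of the preempted job, reproduces the table above. This sketch assumes exactly that policy:

```python
from collections import deque

# Round-robin over (pid, arrival, processing) jobs. On quantum expiry, jobs
# that arrived during the slice enter the ready queue before the preempted job.
def round_robin(jobs, quantum=1):
    jobs = sorted(jobs, key=lambda j: j[1])
    remaining = {pid: proc for pid, _, proc in jobs}
    arrivals = deque(jobs)
    queue, clock, finish = deque(), 0, {}
    while arrivals or queue:
        while arrivals and arrivals[0][1] <= clock:
            queue.append(arrivals.popleft()[0])
        if not queue:                         # CPU idle: jump to next arrival
            clock = arrivals[0][1]
            continue
        pid = queue.popleft()
        run = min(quantum, remaining[pid])
        clock += run
        remaining[pid] -= run
        while arrivals and arrivals[0][1] <= clock:   # arrivals during slice
            queue.append(arrivals.popleft()[0])
        if remaining[pid]:
            queue.append(pid)                 # preempted: back of the queue
        else:
            finish[pid] = clock
    return {pid: finish[pid] - arr for pid, arr, _ in jobs}  # turnarounds
```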

round robin scheduling29
Round-Robin Scheduling
  • Effectiveness of round-robin depends on
    • The number of jobs, and
    • The size of the time quantum
  • A large number of jobs increases the time between successive runs of any one job
    • Slow responses
  • A larger time quantum also increases the time between successive runs of a job
    • Slow responses
  • A smaller time quantum gives more responsive scheduling, but also more context-switch overhead!
priority scheduling
Priority scheduling
  • Assign a priority (number) to each process
  • Schedule processes based on their priority
  • Higher-priority processes get more CPU time
  • Managing priorities
    • Can use “nice” to reduce your priority
    • Can periodically adjust a process’s priority
      • Prevents starvation of a lower-priority process
      • Can improve performance of I/O-bound processes by basing priority on the fraction of the last quantum used
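The starvation-prevention idea above can be sketched with two hypothetical helpers, pick_next and age (neither is from the lecture); lower numbers mean higher priority, as in Unix:

```python
# Pick the runnable process with the best (lowest-numbered) priority.
def pick_next(processes):
    # processes: {pid: priority}; lower number = higher priority (Unix-style).
    return min(processes, key=processes.get)

# Aging: periodically boost the priority of processes that have been
# waiting, so low-priority jobs cannot starve forever.
def age(processes, waiting, boost=1):
    for pid in waiting:
        processes[pid] = max(0, processes[pid] - boost)
```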
multi level queue scheduling
Multi-Level Queue Scheduling
  • Multiple queues, each with its own priority
    • Equivalently: each priority has its own ready queue
  • Within each queue... round-robin scheduling
  • Simplest approach:
    • A process’s priority is fixed and unchanging

[Figure: queues from high priority to low priority feeding the CPU]

multi level feedback queue scheduling2
Multi-Level Feedback Queue Scheduling
  • Problem: Fixed priorities are too restrictive
    • Processes exhibit varying ratios of CPU to I/O times.
  • Dynamic Priorities
    • Priorities are altered over time, as process behavior changes!
  • Issue: When do you change the priority of a process and how often?
  • Solution: Let the amount of CPU used be an indication of how a process is to be handled
    • Expired time quantum → more processing needed
    • Unexpired time quantum → less processing needed
  • Adjusting quantum and frequency vs. adjusting priority?
multi level feedback queue scheduling3
Multi-Level Feedback Queue Scheduling
  • n priority levels, round-robin scheduling within a level
  • Quanta increase as priority decreases
  • Jobs are demoted to lower priorities if they do not complete within the current quantum

[Figure: queues from high priority to low priority feeding the CPU, with demotion between levels]

multi level feedback queue scheduling4
Multi-Level Feedback Queue Scheduling
  • Details, details, details...
    • Starting priority?
      • High priority vs. low priority
    • Moving between priorities:
      • How long should the time quantum be?
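One way to make the demotion rule concrete is a sketch like the following (the class name, level count, and doubling quanta are our assumptions, not details from the slides):

```python
from collections import deque

class MLFQ:
    # n priority levels; round-robin within a level; quanta grow as
    # priority decreases; a job that uses its whole quantum is demoted.
    def __init__(self, levels=3, base_quantum=1):
        self.queues = [deque() for _ in range(levels)]
        self.quanta = [base_quantum * 2 ** i for i in range(levels)]

    def add(self, pid):
        self.queues[0].append(pid)            # new jobs start at high priority

    def run_once(self, remaining):
        # Run the first job of the highest non-empty queue for one quantum.
        for level, q in enumerate(self.queues):
            if q:
                pid = q.popleft()
                used = min(self.quanta[level], remaining[pid])
                remaining[pid] -= used
                if remaining[pid] == 0:
                    return pid, used, None    # finished before quantum expiry
                # Used the full quantum: demote (unless already lowest).
                new_level = min(level + 1, len(self.queues) - 1)
                self.queues[new_level].append(pid)
                return pid, used, new_level
        return None                           # nothing to run
```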
lottery scheduling
Lottery Scheduling
  • Scheduler gives each thread some lottery tickets
  • To select the next process to run...
    • The scheduler randomly selects a lottery number
    • The winning process gets to run
  • Example (100 tickets outstanding):
    • Thread A gets 50 tickets → 50% of CPU
    • Thread B gets 15 tickets → 15% of CPU
    • Thread C gets 35 tickets → 35% of CPU
  • Flexible, fair, and responsive
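Ticket-based selection is easy to sketch. lottery_pick is a hypothetical helper (not from the lecture); the injectable rng makes the draw testable:

```python
import random

# Draw one winning ticket and find the thread that holds it. Each thread's
# expected share of the CPU is proportional to its ticket count.
def lottery_pick(tickets, rng=random):
    # tickets: {thread: ticket count}
    total = sum(tickets.values())
    draw = rng.randrange(total)
    for thread, count in tickets.items():
        if draw < count:
            return thread
        draw -= count
```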

a brief look at real time systems
A Brief Look at Real-Time Systems
  • Assume processes are relatively periodic
    • Fixed amount of work per period (e.g. sensor systems or multimedia data)
  • Two “main” types of schedulers...
  • Rate-Monotonic Schedulers
    • Assign a fixed, unchanging priority to each process
    • No dynamic adjustment of priorities
      • Less aggressive allocation of processor
  • Earliest-Deadline-First Schedulers
    • Assign dynamic priorities based upon deadlines
a brief look at real time systems3
A Brief Look at Real-Time Systems
  • Typically real-time systems involve several steps (that aren’t in traditional systems)
  • Admission control
    • All processes must ask for resources ahead of time.
If sufficient resources exist, the job is “admitted” into the system.
  • Resource allocation
    • Upon admission...
    • the appropriate resources need to be reserved for the task.
  • Resource enforcement
    • Carry out the resource allocations properly
rate monotonic schedulers
Rate Monotonic Schedulers
  • For preemptable, periodic processes (tasks)
  • Assigns a fixed priority to each task
    • T = The period of the task
    • C = The amount of processing per task period
  • In RMS scheduling, the question to answer is...
    • What priority should be assigned to a given task?
  • Example: Process P1 has T = 1 second and C = 1/2 second per period

rate monotonic schedulers2

Rate Monotonic Schedulers
  • RMS answer: assign higher priority to the task with the shorter period
  • [Figure: timelines for tasks P1 and P2, comparing the assignments P1PRI > P2PRI and P2PRI > P1PRI]
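A common sufficient schedulability test for RMS is the Liu & Layland utilization bound. This sketch applies it to P1 from the slide plus a second, hypothetical task:

```python
# Liu & Layland sufficient test for rate-monotonic scheduling:
# a task set of n periodic tasks is schedulable if total utilization
# sum(C/T) does not exceed n * (2**(1/n) - 1), which tends to ~0.693.
def rms_schedulable(tasks):
    # tasks: list of (C, T) = (processing per period, period)
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# P1 from the slide (C = 0.5, T = 1) plus a hypothetical task (C = 0.25, T = 2):
# utilization 0.625 is under the two-task bound of ~0.828.
print(rms_schedulable([(0.5, 1.0), (0.25, 2.0)]))
```

Note the test is only sufficient: a task set over the bound may still be schedulable, and an exact answer requires response-time analysis.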