CPU Scheduling & Deadlock

Operating Systems Lecture 4

Process Management

  • Concept of a Process

  • Context-Change

  • Process Life Cycle

  • Process Creation

  • Process Spawning

OS1 - Lecture 2 – Scheduling – Paul Flynn

Process State Diagrams

  • Three State Model

    • Ready

    • Running

    • Blocked

  • Five State Model

    • Ready

    • Running

    • Blocked

    • Ready Suspended

    • Blocked Suspended

CPU Scheduling

  • Scheduling the processor among all ready processes

  • The goal is to achieve:

    • High processor utilization

    • High throughput

      • number of processes completed per unit time

    • Low response time

      • time elapsed from the submission of a request until the first response is produced

Classification of Scheduling Activity

  • Long-term: which process to admit?

  • Medium-term: which process to swap in or out?

  • Short-term: which ready process to execute next?

Queuing Diagram for Scheduling

Long-Term Scheduling

  • Determines which programs are admitted to the system for processing

  • Controls the degree of multiprogramming

  • Attempts to keep a balanced mix of processor-bound and I/O-bound processes

    • CPU usage

    • System performance

Medium-Term Scheduling

  • Makes swapping decisions based on the current degree of multiprogramming

    • Controls which jobs remain resident in memory and which must be swapped out to reduce the degree of multiprogramming

Short-Term Scheduling

  • Selects from among ready processes in memory which one is to execute next

    • The selected process is allocated the CPU

  • It is invoked on events that may lead to choosing another process for execution:

    • Clock interrupts

    • I/O interrupts

    • Operating system calls and traps

    • Signals

Characterization of Scheduling Policies

  • The selection function determines which ready process is selected next for execution

  • The decision mode specifies the instants in time the selection function is exercised

    • Nonpreemptive

      • Once a process is in the running state, it will continue until it terminates or blocks for I/O

    • Preemptive

      • Currently running process may be interrupted and moved to the Ready state by the OS

      • Prevents one process from monopolizing the processor

Short-Term Scheduler: Dispatcher

  • The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler

  • The functions of the dispatcher include:

    • Switching context

    • Switching to user mode

    • Jumping to the location in the user program to restart execution

  • The dispatch latency must be minimal

The CPU-I/O Cycle

  • Processes require alternate use of processor and I/O in a repetitive fashion

  • Each cycle consists of a CPU burst followed by an I/O burst

    • A process terminates on a CPU burst

  • CPU-bound processes have longer CPU bursts than I/O-bound processes

Short-Term Scheduling Criteria

  • User-oriented criteria

    • Response Time: Elapsed time between the submission of a request and the receipt of a response

    • Turnaround Time: Elapsed time between the submission of a process and its completion

  • System-oriented criteria

    • Processor utilization

    • Throughput: number of processes completed per unit time

    • Fairness

Scheduling Algorithms

  • First-Come, First-Served Scheduling

  • Shortest-Job-First Scheduling

    • Also referred to as Shortest Job Next

  • Highest Response Ratio Next (HRN)

  • Shortest Remaining Time (SRT)

  • Round-Robin Scheduling

  • Multilevel Feedback Queue Scheduling

Process Mix Example

[Table of process arrival and service times omitted in the source.]

Service time = total processor time needed in one (CPU-I/O) cycle

Jobs with long service time are CPU-bound jobs and are referred to as “long jobs”

First Come First Served (FCFS)

  • Selection function: the process that has been waiting the longest in the ready queue (hence, FCFS)

  • Decision mode: non-preemptive

    • a process runs until it blocks for an I/O
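As a sketch of the policy, the following serves jobs strictly in arrival order. The process data is illustrative, not taken from the slides:

```python
# Minimal non-preemptive FCFS sketch (process data is illustrative).
def fcfs(processes):
    """processes: list of (name, arrival, burst) tuples."""
    time, schedule = 0, []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)              # CPU may sit idle until arrival
        finish = start + burst
        schedule.append((name, start, finish, finish - arrival))  # turnaround
        time = finish
    return schedule

# A long job arriving first makes the short jobs wait (the convoy effect):
for row in fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]):
    print(row)
```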

FCFS drawbacks

  • Favors CPU-bound processes

    • A CPU-bound process monopolizes the processor

    • I/O-bound processes have to wait until completion of CPU-bound process

      • I/O-bound processes may have to wait even after their I/Os are completed (poor device utilization)

    • Better I/O device utilization could be achieved if I/O-bound processes had higher priority

Shortest Job First (Shortest Process Next)

  • Selection function: the process with the shortest expected CPU burst time

    • I/O-bound processes will be selected first

  • Decision mode: non-preemptive

  • The required processing time, i.e., the CPU burst time, must be estimated for each process
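The burst-time estimate mentioned above is commonly computed as an exponential average of a process's past bursts. The sketch below is illustrative (the process names and the alpha value are assumptions, not from the slides) and shows both the estimator and the SJF selection:

```python
# Exponential-average burst estimate (a common textbook estimator):
#   new_estimate = alpha * last_burst + (1 - alpha) * old_estimate
def update_estimate(old_estimate, last_burst, alpha=0.5):
    return alpha * last_burst + (1 - alpha) * old_estimate

def sjf_pick(ready):
    """ready: list of (name, estimated_burst); pick the shortest estimate."""
    return min(ready, key=lambda p: p[1])[0]

print(sjf_pick([("P1", 6.0), ("P2", 8.0), ("P3", 3.0)]))   # shortest estimate wins
print(update_estimate(10.0, 6.0))                          # estimate drifts toward recent bursts
```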

SJF / SPN Critique

  • Possibility of starvation for longer processes

  • Lack of preemption is not suitable in a time sharing environment

  • SJF/SPN implicitly incorporates priorities

    • Shortest jobs are given preferences

    • CPU-bound processes have lower priority, but a process doing no I/O could still monopolize the CPU if it is the first to enter the system

Highest Response Ratio Next (HRN)

  • Based on SJF, with a priority formula introduced

  • A priority P is computed for each ready process: P = (Time Waiting + Run Time) / Run Time

  • The process with the HIGHEST priority will be selected for running

  • Non-Preemptive

  • Reduces the SJF bias against long jobs
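The formula P = (Time Waiting + Run Time) / Run Time can be sketched directly. The example data is hypothetical and shows how a long job that has waited a while overtakes a freshly arrived short job:

```python
# HRN sketch: highest response ratio next (process data is illustrative).
def response_ratio(waiting, run_time):
    return (waiting + run_time) / run_time

def hrn_pick(ready):
    """ready: list of (name, time_waiting, run_time); highest ratio wins."""
    return max(ready, key=lambda p: response_ratio(p[1], p[2]))[0]

# "short" has ratio (1+2)/2 = 1.5; "long" has ratio (20+10)/10 = 3.0,
# so the long job that has waited longer is selected:
print(hrn_pick([("short", 1, 2), ("long", 20, 10)]))
```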



Priority Scheduling

  • Implemented by having multiple ready queues to represent each level of priority

  • The scheduler selects a process of higher priority over one of lower priority

  • Lower-priority processes may suffer starvation

  • To alleviate starvation, allow dynamic priorities

    • The priority of a process changes based on its age or execution history

Round Robin

  • Selection function: same as FCFS

  • Decision mode: preemptive

    • a process is allowed to run until the time slice period (quantum, typically from 10 to 100 ms) has expired

    • a clock interrupt occurs and the running process is put on the ready queue
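A minimal sketch of the preemptive quantum mechanics (the quantum and process data are illustrative): each process runs for at most one quantum, then rejoins the tail of the ready queue if it still needs CPU time.

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst). Returns finish order."""
    queue = deque(processes)
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            finished.append(name)                      # done within this slice
        else:
            queue.append((name, remaining - quantum))  # preempted, re-queued
    return finished

# Short jobs finish early instead of waiting behind the long P1:
print(round_robin([("P1", 5), ("P2", 2), ("P3", 4)], quantum=2))
```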

RR Time Quantum

  • Quantum must be substantially larger than the time required to handle the clock interrupt and dispatching

  • Quantum should be larger than the typical interaction time

    • but not much larger, to avoid penalizing I/O bound processes

Round Robin: critique

  • Still favors CPU-bound processes

    • An I/O-bound process uses the CPU for less than the time quantum before blocking for I/O

    • A CPU-bound process runs for all its time slice and is put back into the ready queue

      • May unfairly get in front of blocked processes

Multilevel Feedback Scheduling

  • Preemptive scheduling with dynamic priorities

  • N ready queues, RQ0 through RQN, with decreasing priorities

  • The dispatcher selects a process for execution from RQi only if RQ0 through RQi-1 are empty

Multilevel Feedback Scheduling

  • New processes are placed in RQ0

  • They are moved to RQ1 after the first quantum, to RQ2 after the second quantum, … and to RQN after the Nth quantum

  • I/O-bound processes remain in higher priority queues.

    • CPU-bound jobs drift downward.

    • Hence, long jobs may starve
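The demotion scheme above can be sketched with three queues. The quantum values and process data are illustrative assumptions; the dispatcher always serves the highest-priority non-empty queue, and a process that exhausts its quantum is demoted one level.

```python
from collections import deque

def mlfq(processes, quanta=(1, 2, 4)):
    """processes: list of (name, burst). Returns finish order."""
    queues = [deque() for _ in quanta]
    for p in processes:
        queues[0].append(p)                  # all new work starts in RQ0
    finished = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name, remaining = queues[level].popleft()
        q = quanta[level]
        if remaining <= q:
            finished.append(name)
        else:
            demoted = min(level + 1, len(queues) - 1)   # bottom queue loops
            queues[demoted].append((name, remaining - q))
    return finished

# The interactive job finishes in RQ0 while the CPU hog drifts downward:
print(mlfq([("cpu_hog", 10), ("interactive", 1)]))
```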

Multiple Feedback Queues

Different RQs may have different quantum values

Algorithm Comparison

  • Which one is the best?

  • The answer depends on many factors:

    • the system workload (extremely variable)

    • hardware support for the dispatcher

    • relative importance of performance criteria (response time, CPU utilization, throughput...)

    • The evaluation method used (each has its limitations...)




3.1. Resource

3.2. Introduction to deadlocks

3.3. The ostrich algorithm

3.4. Deadlock detection and recovery

3.5. Deadlock avoidance

3.6. Deadlock prevention

3.7. Other issues

Deadlock Recap

  • A process is deadlocked if it is waiting for an event that will never occur

  • The most common situation involves two processes: each is holding a resource required by the other while also requesting the resource the other holds

Deadlock Example

  • Airline booking example: Alan wants to book a seat on flight AB123, so he locks that file. At the same time, Brian wants to book a seat on flight AB456 and locks that file. Alan then wants to book a return flight on AB456, and Brian wants to book a return flight on AB123. Now each user has a file locked and is requesting the file locked by the other.

Deadlock Example cont

  • Alan locks AB123

  • Brian locks AB456

  • Alan requests AB456

  • Brian requests AB123
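The four steps above can be modeled as a tiny hold/request table (a sketch using the example's user and file names; the two-way check below is a simplification that works only for two processes):

```python
# Snapshot of the system after the four steps:
holds    = {"Alan": "AB123", "Brian": "AB456"}   # file each user has locked
requests = {"Alan": "AB456", "Brian": "AB123"}   # file each user now wants

owner = {f: user for user, f in holds.items()}   # file -> current holder

# Alan waits for Brian's file and Brian waits for Alan's: circular wait.
deadlocked = (owner[requests["Alan"]] == "Brian"
              and owner[requests["Brian"]] == "Alan")
print("deadlock!" if deadlocked else "ok")
```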

Conditions for deadlock

  • There are 4 conditions necessary for deadlock

    • Mutual exclusion

    • Resource holding

    • No preemption

    • Circular wait

Conditions explained

  • Mutual exclusion: only one process can use a resource at a time

  • Resource holding: a process can hold a resource while requesting another

  • No preemption: resources cannot be forcibly removed from a process

  • Circular wait: a closed chain exists in which each process holds a resource required by another

Dealing with deadlock

  • Three ways of dealing with deadlocks

    • Deadlock prevention

    • Deadlock avoidance

    • Deadlock detection

Deadlock prevention

  • Preventing any one of the 4 conditions from occurring will prevent deadlock

  • Mutual exclusion – cannot be prevented

  • Resource holding – to prevent this, a process must be allocated all its resources at once (one-shot allocation), which is very inefficient: a process may have to wait for resources it might not need, and may hold resources for a long time without using them

Deadlock prevention cont

  • No preemption – this condition can be denied in two ways:

    • If a process holding a resource requests another, it can be forced to give up the resource it is holding.

    • A resource required by one process and held by a second can be forcibly removed from the second. This is not possible with serially reusable resources.

Deadlock prevention

  • Circular wait – this condition can be prevented if resources are organised in a particular order and processes are required to request them in that order.
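A sketch of the ordering rule applied to the earlier airline example (the lock names and ordering list are illustrative): because every user acquires the file locks in the same global order, one user can never hold a later lock while waiting for an earlier one, so the circular wait cannot form.

```python
import threading

lock_ab123 = threading.Lock()
lock_ab456 = threading.Lock()
GLOBAL_ORDER = [lock_ab123, lock_ab456]   # the single agreed acquisition order

def acquire_in_order(needed):
    """Acquire the needed locks, always following GLOBAL_ORDER."""
    for lock in GLOBAL_ORDER:
        if lock in needed:
            lock.acquire()

def release_all(needed):
    for lock in needed:
        lock.release()

# Both Alan and Brian would need both files; each must take AB123 before
# AB456, so neither can hold one lock while waiting for the other in
# reverse order.
needed = [lock_ab123, lock_ab456]
acquire_in_order(needed)
release_all(needed)
print("booked both flights without deadlock")
```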

Deadlock avoidance

  • Deadlock prevention guarantees that deadlock will not occur, because we deny one of the 4 necessary conditions, but it is inefficient

  • Deadlock avoidance instead attempts to predict the possibility of deadlock as each resource request is made

  • For example, if process A requests a resource held by process B, make sure that process B is not waiting for a resource held by A

Banker's Algorithm

  • The most common method of deadlock avoidance is the banker's algorithm. It uses a banking analogy: a banker will only grant a loan if he can still meet the needs of his customers, based on their projected future loan requirements.

  • Example: three processes (P1, P2 and P3) and 10 resources available. The table shows their requirements.

Example cont.

The total of the maximum needs (21) exceeds the 10 available resources, so all maximum claims cannot be met at one time. We need a sequence of allocations that will allow all processes to finish. If we start with P2, it will release 5 resources when finished. If we then allow P1 to run, it will release 8 resources, and so P3 will be able to finish.
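The safety check at the heart of the banker's algorithm can be sketched as below. The current holdings are illustrative assumptions (the source's table is not reproduced here), chosen so that the maxima sum to 21 and the safe order P2, then P1, then P3 emerges; the check adds back each finishing process's current allocation.

```python
def safe_sequence(available, allocated, maximum):
    """Return a safe completion order, or None if the state is unsafe."""
    need = {p: maximum[p] - allocated[p] for p in maximum}
    done, order = set(), []
    while len(done) < len(maximum):
        progress = False
        for p in maximum:
            if p not in done and need[p] <= available:
                available += allocated[p]    # p finishes and releases its holdings
                done.add(p)
                order.append(p)
                progress = True
        if not progress:
            return None                      # no process can finish: unsafe state
    return order

allocated = {"P1": 4, "P2": 3, "P3": 1}      # currently held (illustrative)
maximum   = {"P1": 8, "P2": 5, "P3": 8}      # declared maxima, summing to 21
print(safe_sequence(available=2, allocated=allocated, maximum=maximum))
```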

Problems with deadlock avoidance

  • Each process has to pre-declare its maximum resource requirements. This is not realistic for interactive systems.

  • The avoidance algorithm must be executed every time a resource request is made. For a multi-user system with a large number of user processes, the processing overhead would be severe.

Deadlock detection

  • A deadlock detection strategy accepts the risk of deadlock occurring and periodically executes a procedure to detect any deadlocks in place.

  • Breaking a deadlock implies that a process must be aborted or resources preempted from processes, either of which could result in loss of work.
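Such a detection procedure is often sketched as cycle-finding in a wait-for graph, where an edge P to Q means P is waiting for a resource held by Q. The graphs below are illustrative; the first encodes the Alan/Brian example.

```python
def has_cycle(wait_for):
    """Depth-first search for a cycle in a wait-for graph."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True                      # back edge found: a cycle exists
        if node in done:
            return False
        visiting.add(node)
        for nxt in wait_for.get(node, []):
            if dfs(nxt):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(p) for p in wait_for)

print(has_cycle({"Alan": ["Brian"], "Brian": ["Alan"]}))   # circular wait: deadlock
print(has_cycle({"P1": ["P2"], "P2": []}))                 # a plain wait, no cycle
```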