Chapter 4 - Processor Management

Introduction - Processor Management

  • Process Manager
    • Job Scheduling
    • Process Scheduling
    • Interrupt Management

Introduction - Processor Management

  • Single-user systems
    • the processor is busy only when the user is executing a job
    • at other times it is idle

Introduction - Processor Management

  • Multiprogramming
    • Many users with many jobs on the system
    • The processor must be allocated to each job in a fair and efficient manner

Introduction - Processor Management

  • Processor (CPU, Central Processing Unit)
      • CPU: performs calculations & executes programs
      • Process: a single instance of an executable program in execution (e.g., one running calculation)
      • Job or Program: unit of work (instructions) submitted by the user

Processor Management

  • Multiprogramming requires that the processor be:
    • allocated to each job or each process
    • deallocated at an appropriate moment
Process Management

A single processor can be shared by several jobs or processes if the OS has:

  • a scheduling policy
  • a scheduling algorithm

to determine when to stop one job & proceed with another

Job Scheduling vs. Process Scheduling
  • Process Manager is two submanagers:
    • Job Scheduler

in charge of job scheduling

    • Process Scheduler

in charge of process scheduling


Job Scheduler vs. Process Scheduler

Job Scheduler

  • high-level scheduler
  • selects jobs from a queue of incoming jobs and places them into the process queue
  • goal: sequence jobs so that system resources are used as fully as possible
  • strives for a balanced mix between jobs that require large amounts of I/O interaction & jobs that require large amounts of computation

Process Scheduler

Takes over after a job has been placed on the READY queue by the Job Scheduler

  • (low-level scheduler) assigns the CPU to execute each job in the READY queue
  • alternates between CPU cycles & I/O cycles
  • I/O-bound jobs: e.g., printing a series of documents
  • CPU-bound jobs: e.g., finding 100 prime numbers

Process Scheduler

  • In highly interactive environments, a 3rd layer is added
  • middle-level scheduler

swaps jobs out of and back into memory so that active jobs can be completed faster

See Fig. 4.1 p.79


Job and Process Status

As a job moves through the system, it’s always in one of five states (or at least three):

  • HOLD
  • READY
  • RUNNING
  • WAITING
  • FINISHED

See Fig. 4.2 p.79
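The five states above can be sketched as a small state machine. This is an illustrative sketch, not the book’s figure: the state names follow the slide, while the exact transition set (e.g., RUNNING back to READY on preemption) is an assumption based on the usual job-status flow.

```python
# Hedged sketch: job states and an assumed set of legal transitions.
# HOLD -> READY -> RUNNING, with RUNNING able to be preempted (READY),
# block on I/O (WAITING), or terminate (FINISHED).
TRANSITIONS = {
    "HOLD":     {"READY"},
    "READY":    {"RUNNING"},
    "RUNNING":  {"READY", "WAITING", "FINISHED"},
    "WAITING":  {"READY"},
    "FINISHED": set(),
}

def move(state, new_state):
    """Return new_state if the transition is legal, else raise ValueError."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

For example, a job admitted by the Job Scheduler goes `move("HOLD", "READY")`, and a running job that issues an I/O request goes `move("RUNNING", "WAITING")`.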


Process Control Blocks

Each process in the system is represented by a data structure, the Process Control Block (PCB), that contains basic job information:

  • Process Identification
  • Process Status
  • Process State
  • Accounting

See Fig. 4.3 p.80
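A PCB can be sketched as a record with one field per category listed above. The field contents shown here are illustrative assumptions; real PCBs hold many more fields (register contents, queue pointers, and so on).

```python
# Hedged sketch of a PCB; field names mirror the four categories on the
# slide. The concrete contents of each field are assumptions.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                        # Process Identification
    status: str = "HOLD"                            # Process Status (current queue/state)
    state: dict = field(default_factory=dict)       # Process State (register contents, etc.)
    accounting: dict = field(default_factory=dict)  # Accounting (CPU time used, etc.)
```

The Job Scheduler would create such a record when it accepts a job, and every scheduler that touches the job afterwards updates it.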


PCBs and Queuing

  • PCB
    • created when Job Scheduler accepts the job & is updated as the job progresses from beginning to end of its execution
  • Queues
    • use PCBs to track jobs
    • PCBs are lists of information linked to form queues

See Fig. 4.4 p.82


Process Scheduling Policies

Before the OS can schedule jobs in a multiprogramming environment, it needs to resolve three system limitations:

  • there are a finite number of resources (disk drives, printers, tape drives)
  • some resources, once they’re allocated, can’t be shared with another job (printers)
  • some resources require operator intervention - they can’t be reassigned automatically from job to job (such as tape drives)

Process Scheduling Policies

What is a “good” process scheduling policy?

  • Maximize
    • throughput
    • CPU efficiency
  • Minimize
    • response time
    • turnaround time
    • waiting time
  • Ensure fairness for all jobs

Process Scheduling Policies

“Good” process scheduling policy contradictions

  • If a system favors one type of user, it may harm other users or fail to use resources efficiently
  • System designer determines which criteria is most important for a specific system
    • “maximize CPU utilization while minimizing response time and balancing the use of all system components through a mix of I/O-bound and CPU-bound jobs”

Process Scheduling Policies

Problem: Job claims the CPU for a very long time before issuing an I/O request

Solution: Process Scheduler uses a timing mechanism and periodically interrupts running processes when a predetermined slice of time has expired.

  • In multiprogramming environments, an I/O request is called a “natural wait” because it naturally frees the processor to be allocated to another job

Process Scheduling Policies

Scheduling Strategies

  • Preemptive Scheduling Policy
    • interrupts the processing of a job and transfers the CPU to another job, widely used in time-sharing environments
  • Nonpreemptive Scheduling Policy
    • functions without external interrupts to the job
    • once a job captures the processor and begins execution, it remains RUNNING until it issues an I/O request (natural wait) or until finished (with exceptions made for infinite loops)

Process Scheduling Algorithms

  • Based on a specific policy, Process Scheduler relies on a process scheduling algorithm to allocate the CPU and move jobs through the system
  • Most current systems, with emphasis on interactive use and response time, use an algorithm that takes care of immediate requests of interactive users

Six common process scheduling algorithms


Process Scheduling Algorithms

  • First Come First Served (FCFS)
    • nonpreemptive scheduling algorithm
    • handles jobs according to their arrival
    • the earlier jobs arrive, the sooner they’re served
    • simple algorithm to implement; it uses a FIFO queue
    • good for batch systems; not good for interactive systems due to its slow response time
    • turnaround time is unpredictable

See Fig. 4.5 & 4.6 p.85
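The FCFS behavior above can be sketched as a FIFO simulation. The job tuples (name, arrival time, CPU burst) are illustrative, not taken from the figures; the sketch shows why turnaround time is unpredictable when a long job arrives first.

```python
# Hedged sketch of FCFS: jobs are served strictly in arrival order from
# a FIFO queue and run to completion (nonpreemptive).
# Turnaround time = completion time - arrival time.
from collections import deque

def fcfs(jobs):
    """jobs: iterable of (name, arrival, cpu_burst) tuples."""
    queue = deque(sorted(jobs, key=lambda j: j[1]))  # FIFO by arrival
    clock, turnaround = 0, {}
    while queue:
        name, arrival, burst = queue.popleft()
        clock = max(clock, arrival) + burst          # run to completion
        turnaround[name] = clock - arrival
    return turnaround

# Short jobs B and C wait behind long job A -> poor average turnaround.
times = fcfs([("A", 0, 15), ("B", 1, 2), ("C", 2, 1)])
```

With these made-up numbers, job C needs only 1 time unit of CPU but waits behind 17 units of earlier work, illustrating the unpredictable turnaround the slide mentions.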


Process Scheduling Algorithms

  • Shortest Job Next (SJN)
    • nonpreemptive scheduling algorithm (also known as Shortest Job First, or SJF)
    • handles jobs based on the length of their CPU cycle
    • good for batch systems, where the CPU time required to run each job can be estimated in advance
    • not good for interactive systems, which can’t estimate in advance the CPU time required to run their jobs
    • optimal only when all of the jobs are available at the same time & the CPU estimates are available & accurate

See Fig. 4.7 p.86
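SJN for the simple case the slide describes (all jobs available at the same time, CPU estimates known) can be sketched in a few lines. The job list is illustrative; running shortest-first minimizes average turnaround in exactly this case.

```python
# Hedged sketch of SJN: all jobs arrive at time 0 with known CPU
# estimates; jobs run shortest-first, each to completion (nonpreemptive).
def sjn(jobs):
    """jobs: iterable of (name, cpu_burst) pairs, all available at time 0."""
    clock, turnaround = 0, {}
    for name, burst in sorted(jobs, key=lambda j: j[1]):  # shortest first
        clock += burst
        turnaround[name] = clock   # arrival is 0, so turnaround == finish time
    return turnaround

times = sjn([("A", 5), ("B", 2), ("C", 6), ("D", 4)])
```

Note the sketch breaks down as soon as jobs arrive at different times or the estimates are wrong, which is the limitation the last bullet above points out.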


Process Scheduling Algorithms

  • Priority Scheduling
    • nonpreemptive scheduling algorithm
    • used in batch systems, although it gives some users slower turnaround time
    • gives preferential treatment to important jobs
    • allows programs with highest priority to be processed first
    • priorities can be assigned by system administrators
    • jobs are usually linked to one of several READY queues

Process Scheduling Algorithms

  • Priority Scheduling: priorities can be determined by the Process Manager based on characteristics intrinsic to the job, such as:
    • memory requirements
    • number & type of peripheral devices
    • total CPU time
    • amount of time already spent in the system

Process Scheduling Algorithms

  • Shortest Remaining Time (SRT)
    • preemptive version of the SJN algorithm
    • processor is allocated to the job closest to completion
    • requires advance knowledge of the CPU time required to finish each job; can’t be used for interactive systems
    • often used in batch systems when it is desirable to give preference to short jobs
    • context switching is required: a preempted job’s processing information must be saved in its PCB so the job can resume later

See Fig. 4.8 & 4.9 p.88-89
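SRT can be sketched as a time-stepped simulation: at every time unit the ready job with the least remaining CPU time runs, so a newly arrived short job preempts a longer one. The job set is illustrative, not the figures’ data.

```python
# Hedged sketch of SRT (preemptive SJN): one time unit per loop
# iteration; the ready job closest to completion always runs next.
def srt(jobs):
    """jobs: iterable of (name, arrival, cpu_burst) tuples.
    Returns turnaround time (finish - arrival) per job."""
    remaining = {name: burst for name, _, burst in jobs}
    arrival = {name: arr for name, arr, _ in jobs}
    clock, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                 # CPU idle until next arrival
            clock += 1
            continue
        job = min(ready, key=lambda n: remaining[n])  # closest to completion
        remaining[job] -= 1           # run for one time unit
        clock += 1
        if remaining[job] == 0:
            del remaining[job]
            finish[job] = clock
    return {n: finish[n] - arrival[n] for n in finish}
```

In this sketch each unit-by-unit decision is a potential preemption point; a real implementation would also charge for the context switches the slide mentions, which this sketch ignores.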


Process Scheduling Algorithms

  • Round Robin
    • preemptive process scheduling algorithm
    • used in interactive systems because it is easy to implement
    • based on a predetermined slice of time that’s given to each job to ensure the CPU is shared equally among all active processes & is not monopolized by any one job

Process Scheduling Algorithms

  • Round Robin

the time slice is called a time quantum & its size is crucial to system performance

    • Two general rules of thumb for selecting the “proper” time quantum
      • it should be long enough to allow 80% of the CPU cycles to run to completion
      • it should be at least 100 times longer than the time required to perform one context switch

These rules are used in some systems, but they are flexible

See Fig. 4.10&11 p.90-91
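Round Robin with a fixed time quantum can be sketched as a rotating FIFO queue. The job list and quantum below are illustrative, not the figures’ data: each job runs for at most one quantum, and a preempted job goes to the back of the READY queue.

```python
# Hedged sketch of Round Robin: all jobs arrive at time 0; each turn a
# job runs for at most `quantum` time units, then is preempted and
# requeued if unfinished.
from collections import deque

def round_robin(jobs, quantum):
    """jobs: iterable of (name, cpu_burst) pairs. Returns finish times."""
    queue = deque(jobs)
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining - run > 0:
            queue.append((name, remaining - run))  # preempted: back of queue
        else:
            finish[name] = clock
    return finish

finish = round_robin([("A", 8), ("B", 4), ("C", 9), ("D", 5)], quantum=4)
```

As with the SRT sketch, context-switch cost is ignored here; the second rule of thumb above exists precisely because that cost is not zero.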


Process Scheduling Algorithms

  • Multiple Level Queues
    • not really a separate scheduling algorithm
    • it works in conjunction with other schemes
    • it is found in systems with jobs that can be grouped according to common characteristics
    • priority scheduling is one kind of multiple level queue with different queues for each priority level

Process Scheduling Algorithms

  • Multiple Level Queues

Four primary methods of movement

    • no movement between queues
    • movement between queues
    • variable time quantum per queue
    • aging
      • guards against indefinite postponement, where a job’s execution is delayed indefinitely because it is repeatedly preempted so other jobs can be processed, which may lead to “starvation”
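Aging can be sketched as follows: every time a waiting job is passed over, its effective priority improves, so no job can be postponed forever. The representation (lower number = higher priority) and the aging step of 1 per pass are illustrative assumptions.

```python
# Hedged sketch of aging in a priority scheduler. Lower number = higher
# priority; each time a job is passed over, its `age` grows, improving
# its effective priority and preventing starvation.
def pick_next(ready):
    """ready: mutable list of dicts with 'name', 'priority', 'age'.
    Removes and returns the name of the job chosen to run."""
    best = min(ready, key=lambda j: j["priority"] - j["age"])
    for job in ready:
        if job is not best:
            job["age"] += 1   # passed over: drift toward higher priority
    ready.remove(best)
    return best["name"]
```

With this rule, a priority-9 job waiting among a stream of priority-1 jobs is eventually selected once its age closes the gap, instead of being preempted indefinitely.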

A Word About Interrupts

  • Interrupts are a way for an OS to get the attention of the CPU
  • In Chapter 3, the Memory Manager issued interrupts to accommodate job requests
  • Other interrupts are caused by events internal to the process
  • I/O interrupts are issued when a READ or WRITE command is issued

A Word About Interrupts

  • Internal interrupts (synchronous interrupts) occur as a result of an arithmetic operation or job instruction being processed.
  • Illegal arithmetic operations
    • attempts to divide by zero
    • floating-point operations generating an overflow or underflow
  • Illegal job instructions
    • attempts to access protected or nonexistent storage
    • attempts to use an undefined operation code or operate on invalid data

A Word About Interrupts

  • The interrupt handler is the control program that handles the interruption sequence. When the OS detects a nonrecoverable error, the interrupt handler follows this sequence:
    • the type of interrupt is described & stored, to be passed on to the user as an error message
    • the state of the interrupted process is saved
    • the interrupt is processed
    • the processor resumes normal operation


  • Process Manager allocates the CPU among all system users
  • Distinction between job scheduling & process scheduling
    • job scheduling: selection of jobs based on their characteristics
    • process scheduling: instant-by-instant allocation of the CPU


  • Interrupts are generated & resolved by interrupt handler
  • Scheduling algorithms have unique
    • characteristics
    • objectives
    • applications
  • A system designer can choose the best policies & algorithm after evaluating their strengths & weaknesses

See Tab 4.1 p.95