Chapter Four: Processor Management

Slide 1

Chapter Four: Processor Management

Introduction

Job Scheduling Versus Process Scheduling

Process Scheduler

Process Identification

Process Status

Process State

Accounting

Process Scheduling Policies

Process Scheduling Algorithms

Cache Memory

Interrupts

Job Scheduling

Process Scheduling

Interrupt Management

Slide 2

How Does Processor Manager Allocate CPU(s) to Jobs?

  • Processor Manager performs job scheduling, process scheduling, and interrupt management.

  • In single-user systems, processor is busy only when user is executing a job—at all other times it is idle.

    • Processor management is simple.

  • In multiprogramming environment, processor must be allocated to each job in a fair and efficient manner.

    • Requires scheduling policy and a scheduling algorithm.

Slide 3

Some Important Terms

  • Program – inactive unit, such as a file stored on a disk.

    • To an operating system, a program or job is a unit of work that has been submitted by user.

    • “Job” is usually associated with batch systems.

  • Process (task) – active entity, which requires a set of resources, including a processor and special registers to perform its function.

    • A single instance of an executable program.

Slide 4

Important Terms - 2

  • Thread of control (thread) – a portion of a process that can run independently.

  • Processor (CPU, central processing unit) – part of machine that performs calculations and executes programs.

  • Multiprogramming requires that the processor be “allocated” to each job or to each process for a period of time and “deallocated” at an appropriate moment.

Slide 5

Job Scheduling vs. Process Scheduling

  • Processor Manager has 2 sub-managers:

  • Job Scheduler

    • in charge of job scheduling.

  • Process Scheduler

    • in charge of process scheduling.

Slide 6

Job Scheduler

  • High-level scheduler.

  • Selects jobs from a queue of incoming jobs.

  • Places them in process queue (batch or interactive), based on each job’s characteristics.

  • Goal is to put jobs in a sequence that uses all system’s resources as fully as possible.

  • Strives for balanced mix of jobs with large I/O interaction and jobs with lots of computation.

    • Tries to keep most system components busy most of time.

Slide 7

Process Scheduler

  • Low-level scheduler – assigns the CPU to execute processes of those jobs placed on ready queue by Job Scheduler.

  • After a job has been placed on the READY queue by Job Scheduler, the Process Scheduler takes over.

    • Determines which jobs will get CPU, when, and for how long.

    • Decides when processing should be interrupted.

    • Determines which queues the job should be moved to during execution.

    • Recognizes when a job has concluded and should be terminated.

Slide 8

CPU Cycles and I/O Cycles

  • To schedule CPU, Process Scheduler uses common trait among most computer programs: they alternate between CPU cycles and I/O cycles.

Slide 9

Poisson Distribution Curve

  • I/O-bound jobs (such as printing a series of documents) have many brief CPU cycles and long I/O cycles.

  • CPU-bound jobs (such as finding the first 300 prime numbers) have long CPU cycles and shorter I/O cycles.

  • Total effect of all CPU cycles, from both I/O-bound and CPU-bound jobs, approximates a Poisson distribution curve.

    • Figure 4.1

Slide 10

Processor Manager: Middle-Level Scheduler

  • In a highly interactive environment there’s a third layer

    • called middle-level scheduler.

  • Removes active jobs from memory to reduce degree of multiprogramming and allows jobs to be completed faster.

Slide 11

Job and Process Status

  • Job status -- one of the 5 states that a job takes as it moves through the system.

    • HOLD

    • READY

    • WAITING

    • RUNNING

    • FINISHED

Slide 12

Job and Process Status

[Figure: state transition diagram. A job is admitted from HOLD to READY, dispatched from READY to RUNNING, moved from RUNNING back to READY by an interrupt, from RUNNING to WAITING on an I/O or event wait, from WAITING to READY on I/O or event completion, and from RUNNING to FINISHED on exit. The HOLD-to-READY transition is handled by the Job Scheduler; the remaining transitions are handled by the Process Scheduler.]

Slide 13

Transition Among Process States

  • HOLD to READY : Job Scheduler using a predefined policy.

  • READY to RUNNING : Process Scheduler using some predefined algorithm

  • RUNNING back to READY : Process Scheduler according to some predefined time limit or other criterion.

  • RUNNING to WAITING : Process Scheduler and is initiated by an instruction in the job.

  • WAITING to READY : Process Scheduler and is initiated by signal from I/O device manager that I/O request has been satisfied and job can continue.

  • RUNNING to FINISHED : Process Scheduler or Job Scheduler.
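A minimal way to encode these transitions (illustrative only; the state and event names are assumptions, not the book's notation):

```python
# Illustrative transition table: maps (current state, event) to the next state.
TRANSITIONS = {
    ("HOLD",    "admit"):       "READY",     # Job Scheduler, predefined policy
    ("READY",   "dispatch"):    "RUNNING",   # Process Scheduler, predefined algorithm
    ("RUNNING", "timeout"):     "READY",     # time limit or other criterion
    ("RUNNING", "io_wait"):     "WAITING",   # initiated by an instruction in the job
    ("WAITING", "io_complete"): "READY",     # signal from the I/O device manager
    ("RUNNING", "exit"):        "FINISHED",  # Process Scheduler or Job Scheduler
}

def next_state(state, event):
    # Raises KeyError for any transition the list above does not allow.
    return TRANSITIONS[(state, event)]
```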

Slide 14

Process Control Block (PCB)

  • Process Control Block (PCB) -- data structure that contains basic info about the job

    • Process identification

    • Process status (HOLD, READY, RUNNING, WAITING)

    • Process state (process status word, register contents, main memory info, resources, process priority)

    • Accounting (CPU time, total amount of time in the system, I/O operations, number of input records read, etc.)
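As an illustration, a PCB can be modeled as a small record; the field names below are assumptions chosen to mirror the four groups of information listed above, not any particular operating system's layout:

```python
from dataclasses import dataclass, field

# Illustrative PCB record (field names are assumptions).
@dataclass
class ProcessControlBlock:
    pid: int                                        # process identification
    status: str = "HOLD"                            # HOLD / READY / RUNNING / WAITING
    registers: dict = field(default_factory=dict)   # process state: saved register contents
    priority: int = 0                               # process state: scheduling priority
    cpu_time_used: int = 0                          # accounting: CPU time consumed
    io_operations: int = 0                          # accounting: I/O operations issued
```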

Slide 15

PCBs and Queuing

  • PCB of job created when Job Scheduler accepts it

    • updated as job goes from beginning to termination.

  • Queues use PCBs to track jobs.

    • PCBs, not jobs, are linked to form queues.

    • E.g., PCBs for every ready job are linked on READY queue; all PCBs for jobs just entering system are linked on HOLD queue.

    • Queues must be managed by process scheduling policies and algorithms.

Slide 16

Process Scheduling Policies

  • Before operating system can schedule all jobs in a multiprogramming environment, it must resolve three limitations of system:

    • finite number of resources (such as disk drives, printers, and tape drives)

    • some resources can’t be shared once they’re allocated (such as printers)

    • some resources require operator intervention (such as tape drives).

Slide 17

A Good Scheduling Policy

  • Maximize throughput by running as many jobs as possible in a given amount of time.

  • Maximize CPU efficiency by keeping the CPU busy 100% of the time.

  • Ensure fairness for all jobs by giving everyone an equal amount of CPU and I/O time.

  • Minimize response time by quickly turning around interactive requests.

  • Minimize turnaround time by moving entire jobs in and out of the system quickly.

  • Minimize waiting time by moving jobs out of the READY queue as quickly as possible.
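Several of these goals conflict (for example, maximizing throughput can work against fairness), so they are usually measured per job. A small sketch of the usual per-job definitions (the function and parameter names are assumptions):

```python
# Illustrative per-job metric definitions; all times are on the same clock.
def job_metrics(arrival, first_run, finish, cpu_time):
    turnaround = finish - arrival      # total time the job spent in the system
    waiting = turnaround - cpu_time    # time spent in queues rather than running
    response = first_run - arrival     # delay before the job first received the CPU
    return turnaround, waiting, response

print(job_metrics(arrival=0, first_run=3, finish=20, cpu_time=12))   # (20, 8, 3)
```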

Slide 18

Interrupts

  • There are instances when a job claims CPU for a very long time before issuing an I/O request.

    • builds up READY queue & empties I/O queues.

    • Creates an unacceptable imbalance in the system.

  • Process Scheduler uses a timing mechanism to periodically interrupt running processes when a predetermined slice of time has expired.

    • suspends all activity on the currently running job and reschedules it into the READY queue.

Slide 19

Preemptive & Non-preemptive Scheduling Policies

  • Preemptive scheduling policy interrupts processing of a job and transfers the CPU to another job.

  • Non-preemptive scheduling policy functions without external interrupts.

    • Once a job captures processor and begins execution, it remains in RUNNING state uninterrupted.

    • Until it issues an I/O request (natural wait) or until it is finished (exception for infinite loops).

Slide 20

Process Scheduling Algorithms

  • First Come First Served (FCFS)

  • Shortest Job Next (SJN)

  • Priority Scheduling

  • Shortest Remaining Time (SRT)

  • Round Robin

  • Multiple Level Queues

Slide 21

First Come First Served (FCFS)

  • Non-preemptive.

  • Handles jobs according to their arrival time -- the earlier they arrive, the sooner they’re served.

  • Simple algorithm to implement -- uses a FIFO queue.

  • Good for batch systems; not so good for interactive ones.

  • Turnaround time is unpredictable.

Slide 22

FCFS Example

Process   CPU Burst

A 15 milliseconds

B 2 milliseconds

C 1 millisecond

  • If they arrive in order of A, B, and C.

    What does the time line look like?

    What’s the average turnaround time?
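One way to work it out: with all three jobs arriving at time 0 and served in order A, B, C, job A holds the CPU from 0 to 15 ms, B from 15 to 17 ms, and C from 17 to 18 ms, so the turnaround times are 15, 17, and 18 ms and the average is (15 + 17 + 18) / 3 ≈ 16.7 ms. A minimal sketch that reproduces the arithmetic (illustrative; assumes all jobs arrive at time 0):

```python
# Minimal FCFS sketch: jobs are served strictly in arrival order.
def fcfs_turnaround(jobs):
    clock = 0
    turnaround = {}
    for name, burst in jobs:
        clock += burst               # the job runs its entire CPU burst
        turnaround[name] = clock     # turnaround = finish time - arrival time (0)
    return turnaround

times = fcfs_turnaround([("A", 15), ("B", 2), ("C", 1)])
print(times)                                   # {'A': 15, 'B': 17, 'C': 18}
print(sum(times.values()) / len(times))        # average turnaround ≈ 16.67 ms
```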

Slide 23

Shortest Job Next (SJN)

  • Non-preemptive.

  • Handles jobs based on length of their CPU cycle time.

    • Use lengths to schedule process with shortest time.

  • Optimal – gives minimum average waiting time for a given set of processes.

    • optimal only when all of the jobs are available at the same time and the CPU estimates are available and accurate.

  • Doesn’t work in interactive systems because users don’t estimate in advance CPU time required to run their jobs.
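For comparison with the FCFS example, a minimal SJN sketch (illustrative; assumes all jobs arrive at time 0 and that accurate CPU-burst estimates are available, per the caveats above):

```python
# Minimal SJN sketch: serve the shortest estimated CPU burst first.
def sjn_average_waiting(jobs):
    clock = 0
    total_wait = 0
    for name, burst in sorted(jobs, key=lambda job: job[1]):   # shortest burst first
        total_wait += clock        # the job waited until the CPU became free
        clock += burst
    return total_wait / len(jobs)

# Same jobs as the FCFS example: service order becomes C (1), B (2), A (15).
print(sjn_average_waiting([("A", 15), ("B", 2), ("C", 1)]))    # ≈ 1.33 ms average wait
```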

Slide 24

Priority Scheduling

  • Non-preemptive.

  • Gives preferential treatment to important jobs.

    • Programs with highest priority are processed first.

    • Aren’t interrupted until CPU cycles are completed or a natural wait occurs.

  • If 2+ jobs with equal priority are in READY queue, processor is allocated to one that arrived first (first come first served within priority).

  • Many different methods of assigning priorities are used by the system administrator or by the Processor Manager.

Slide 25

Shortest Remaining Time (SRT)

  • Preemptive version of the SJN algorithm.

  • Processor allocated to job closest to completion.

    • This job can be preempted if a newer job in READY queue has a “time to completion” that's shorter.

  • Can’t be implemented in interactive system -- requires advance knowledge of CPU time required to finish each job.

  • SRT involves more overhead than SJN.

    • OS monitors CPU time for all jobs in READY queue and performs “context switching”.
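A rough SRT simulation (illustrative only; the job names, arrival times, and bursts below are made up). Running the ready job with the least remaining time each tick is what lets a newly arrived shorter job preempt the current one, at the cost of the extra monitoring and context switching noted above:

```python
# Rough SRT simulation. jobs: list of (name, arrival, burst).
def srt_finish_times(jobs):
    arrival = {name: arr for name, arr, _ in jobs}
    remaining = {name: burst for name, _, burst in jobs}
    clock, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                             # CPU idle until the next arrival
            clock += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1                   # run for one time unit
        clock += 1
        if remaining[current] == 0:
            finish[current] = clock
            del remaining[current]
    return finish

print(srt_finish_times([("A", 0, 6), ("B", 1, 3), ("C", 2, 1)]))   # {'C': 3, 'B': 5, 'A': 10}
```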

Slide 26

Context Switching Is Required by All Preemptive Algorithms

  • When Job A is preempted

    • All of its processing information must be saved in its PCB for later (when Job A’s execution is continued).

    • Contents of Job B’s PCB are loaded into appropriate registers so it can start running again (context switch).

  • Later, when Job A is once again assigned to the processor, another context switch is performed.

    • Info from preempted job is stored in its PCB.

    • Contents of Job A’s PCB are loaded into appropriate registers.

Slide 27

Round Robin

  • Preemptive.

  • Used extensively in interactive systems because it’s easy to implement.

  • Isn’t based on job characteristics but on a predetermined slice of time that’s given to each job.

    • Ensures CPU is equally shared among all active processes and isn’t monopolized by any one job.

  • Time slice is called a time quantum

    • size crucial to system performance (100 ms to 1-2 secs)
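A compact round-robin sketch (illustrative; the job names and quantum value are assumptions). Each turn a job gets at most one time quantum; if it is not finished it is preempted and re-queued, which is what keeps any single job from monopolizing the CPU:

```python
from collections import deque

# Compact round-robin sketch. jobs: list of (name, CPU burst), in arrival order.
def round_robin(jobs, quantum):
    ready = deque(jobs)
    clock, finish = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining == run:
            finish[name] = clock                       # job finished: release it
        else:
            ready.append((name, remaining - run))      # preempt and re-queue
    return finish

print(round_robin([("A", 8), ("B", 4), ("C", 9)], quantum=4))   # {'B': 8, 'A': 16, 'C': 21}
```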

Slide 28

If Job’s CPU Cycle < Time Quantum

  • If it is the job’s last CPU cycle and the job is finished, then all resources allocated to it are released and the completed job is returned to the user.

  • If CPU cycle was interrupted by I/O request, then info about the job is saved in its PCB & it is linked at end of the appropriate I/O queue.

    • Later, when I/O request has been satisfied, it is returned to end of READY queue to await allocation of CPU.

Slide 29

Time Slices Should Be ...

  • Long enough to allow 80 % of CPU cycles to run to completion.

  • At least 100 times longer than time required to perform one context switch.

  • Flexible -- depends on the system.

Slide 30

Multiple Level Queues

  • Not a separate scheduling algorithm.

  • Works in conjunction with several other schemes where jobs can be grouped according to a common characteristic.

  • Examples:

    • Priority-based system with different queues for each priority level.

    • Put all CPU-bound jobs in 1 queue and all I/O-bound jobs in another. Alternately select jobs from each queue to keep system balanced.

    • Put batch jobs in a “background queue” and interactive jobs in a “foreground queue”; treat the foreground queue more favorably than the background queue.
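A minimal two-queue sketch of the last example (illustrative; the queue names and the "foreground always first" policy are assumptions):

```python
from collections import deque

foreground = deque()     # interactive jobs
background = deque()     # batch jobs

def enqueue(job, interactive):
    # Group jobs by a common characteristic: interactive vs. batch.
    (foreground if interactive else background).append(job)

def pick_next():
    if foreground:
        return foreground.popleft()    # interactive work is served first
    if background:
        return background.popleft()    # batch work runs only when no interactive job waits
    return None
```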

Slide 31

Policies To Service Multi-level Queues

  • No movement between queues.

  • Move jobs from queue to queue.

  • Move jobs from queue to queue, increasing the time quantum for “lower” queues.

  • Give special treatment to jobs that have been in system for a long time (aging).

Slide 32

Cache Memory

  • Cache memory -- quickly accessible memory that’s designed to alleviate speed differences between a very fast CPU and slower main memory.

  • Stores copy of frequently used data in an easily accessible memory area instead of main memory.
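The same idea can be illustrated in software with a memoizing cache (an analogy only; a hardware CPU cache differs in detail): results of a slow fetch are kept in a small, fast store so repeated requests skip the slow path.

```python
from functools import lru_cache
import time

@lru_cache(maxsize=128)
def read_block(block_number):
    time.sleep(0.01)                  # stand-in for a slow main-memory/disk access
    return f"data-{block_number}"

read_block(7)    # slow: first access takes the slow path
read_block(7)    # fast: served from the cache
```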

Slide 33

Types of Interrupts

  • Page interrupts to accommodate job requests.

  • Time quantum expiration.

  • I/O interrupts, issued when the job issues a READ or WRITE command.

  • Internal interrupts (synchronous interrupts) result from arithmetic operation or job instruction currently being processed.

  • Illegal arithmetic operations (e.g., divide by 0).

  • Illegal job instructions (e.g., attempts to access protected storage locations).

Slide 34

Interrupt Handler

  • Describe & store type of interrupt (passed to user as error message).

  • Save state of interrupted process (value of program counter, mode specification, and contents of all registers).

  • Process the interrupt

    • Send error message & state of interrupted process to user.

    • Halt program execution

    • Release any resources allocated to the job.

    • Job exits the system.

  • Processor resumes normal operation.

Slide 35

Key Terms

aging

cache memory

context switching

CPU-bound

first come first served

high-level scheduler

I/O-bound

indefinite postponement

internal interrupts

interrupt

interrupt handler

Job Scheduler

job status

low-level scheduler

middle-level scheduler

multiple level queues

multiprogramming

non-preemptive scheduling policy

Slide 36

Key Terms - 2

preemptive scheduling policy

priority scheduling

process

Process Control Block

Process Scheduler

process scheduling algorithm

process scheduling policy

process status

processor

queue

response time

round robin

shortest job next (SJN)

shortest remaining time

synchronous interrupts

time quantum

time slice

turnaround time

