
Lecture Topics: 11/15


Presentation Transcript


  1. Lecture Topics: 11/15 • CPU scheduling: • Scheduling goals and algorithms

  2. Review: Process State • A process can be in one of several states (new, ready, running, waiting, terminated). • The OS keeps track of process state by maintaining a queue of PCBs for each state. • The ready queue contains PCBs of processes that are waiting to be assigned to the CPU.

  3. The Scheduling Problem • Need to share the CPU between multiple processes in the ready queue. • OS needs to decide which process gets the CPU next. • Once a process is selected, OS needs to do some work to get the process running on the CPU.

  4. How Scheduling Works • The scheduler is responsible for choosing a process from the ready queue. The scheduling algorithm implemented by this module determines how process selection is done. • The scheduler hands the selected process off to the dispatcher which gives the process control of the CPU.
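
As a rough illustration of the scheduler/dispatcher split described on this slide, here is a minimal sketch in Python. The names (ready_queue, pick_next, dispatch) are hypothetical, not from any real OS, and a real dispatcher would also save and restore register state and switch address spaces.

```python
from collections import deque

ready_queue = deque()          # PCBs of processes waiting for the CPU

def pick_next(queue):
    # Scheduler: choose a PCB according to the scheduling algorithm.
    # FCFS here: simply take the process at the head of the ready queue.
    return queue.popleft()

def dispatch(pcb):
    # Dispatcher: hand control of the CPU to the selected process.
    # A real OS would perform a full context switch; here we only
    # record the state change.
    pcb["state"] = "running"
    print(f"dispatching {pcb['pid']}")

# One scheduling decision: the scheduler selects, the dispatcher runs it.
ready_queue.append({"pid": 1, "state": "ready"})
dispatch(pick_next(ready_queue))
```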

  5. When Does The OS Make Scheduling Decisions? • Scheduling decisions are always made: • when a process is terminated, and • when a process switches from running to waiting. • Scheduling decisions are made when an interrupt occurs only in a preemptive system.

  6. How is the CPU Shared? • Non-preemptive scheduling: • The scheduler must wait for a running process to voluntarily relinquish the CPU (process either terminates or blocks). • Why would you choose this approach? • What are the disadvantages of using this scheme?

  7. How is the CPU Shared? • Preemptive scheduling: • The OS can force a running process to give up control of the CPU, allowing the scheduler to pick another process. • What are the advantages of using this scheme? • What are the implementation challenges presented by this approach?

  8. Scheduling Goals • Maximize throughput and resource utilization. • Need to overlap CPU and I/O activities. • Minimize response time, waiting time and turnaround time. • Share the CPU in a fair way. • It may be difficult to meet all of these goals at once; sometimes tradeoffs are needed.

  9. CPU and I/O Bursts • Typical process execution pattern: use the CPU for a while (CPU burst), then do some I/O operations (I/O burst). • CPU-bound processes perform I/O operations infrequently and tend to have long CPU bursts. • I/O-bound processes spend less time doing computation and tend to have short CPU bursts.

  10. Scheduling Algorithms • First Come First Served (FCFS) • Scheduler selects the process at the head of the ready queue; typically non-preemptive. • Example: 3 processes arrive at the ready queue in the following order: P1 (CPU burst = 24 ms), P2 (CPU burst = 3 ms), P3 (CPU burst = 3 ms)

  11. Scheduling Algorithms • First Come First Served (FCFS) • Example: What’s the average waiting time? (worked out in the sketch below) • FCFS pros & cons: • Simple to implement. • Average waiting time can be large if processes with short CPU bursts get stuck behind processes with long CPU bursts.
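
The Gantt chart shown on the slide is not reproduced in the transcript, but with the burst times from the previous slide the answer can be worked out directly. A minimal sketch, assuming all three processes arrive at time 0 in the order P1, P2, P3:

```python
# FCFS: each process waits for the bursts of everything ahead of it.
bursts = [("P1", 24), ("P2", 3), ("P3", 3)]    # arrival order from the slide

start = 0
waits = {}
for name, burst in bursts:
    waits[name] = start        # time spent in the ready queue before running
    start += burst

print(waits)                                   # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(waits.values()) / len(waits))        # average waiting time = 17.0 ms
```

If the same processes arrived in the order P2, P3, P1 instead, the average waiting time would drop to (0 + 3 + 6) / 3 = 3 ms, which is exactly the situation the cons bullet above describes.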

  12. Scheduling Algorithms • Round Robin (RR) • FCFS + preemptive scheduling. • Ready queue is treated as a circular queue. • Each process gets the CPU for a time quantum (or time slice), typically 10 - 100 ms. • A process runs until it blocks, terminates, or uses up its time slice.

  13. Scheduling Algorithms • Round Robin (RR) • Example: Assuming a time quantum of 4 ms, what’s the average waiting time? (worked out in the sketch below) • Issue: What’s the right value for the time quantum? • Too long => poor response time • Too short => poor throughput
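
Assuming the example uses the same three processes as the FCFS slides (P1 = 24 ms, P2 = 3 ms, P3 = 3 ms, all arriving at time 0) and a 4 ms quantum, the schedule can be simulated directly. A minimal sketch:

```python
from collections import deque

def rr_waiting_times(procs, quantum):
    # Simulate round robin for processes that all arrive at time 0.
    # procs is a list of (name, cpu_burst); returns {name: waiting_time}.
    queue = deque(procs)
    remaining = dict(procs)
    finish = {}
    clock = 0
    while queue:
        name, _ = queue.popleft()
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = clock
        else:
            queue.append((name, remaining[name]))
    # waiting time = turnaround - burst; turnaround = finish since arrival is 0
    return {name: finish[name] - burst for name, burst in procs}

waits = rr_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(waits)                                   # {'P1': 6, 'P2': 4, 'P3': 7}
print(sum(waits.values()) / len(waits))        # ≈ 5.7 ms, versus 17 ms for FCFS
```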

  14. Scheduling Algorithms • Round Robin (RR) • RR pros & cons: • Works well for short jobs; typically used in timesharing systems. • High overhead due to frequent context switches. • Increases average waiting time, especially if CPU bursts are the same length and need more than one time quantum.

  15. Scheduling Algorithms • Priority Scheduling • Select the process with the highest priority. • Priority based on some attribute of the process (e.g., memory requirements, owner of process, etc.) • Issue: • Starvation: low priority jobs may wait indefinitely. Can prevent starvation by aging (increase process priority as a function of the waiting time).
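
The slide doesn't fix a particular aging function, so the following is only a minimal sketch of the idea: a waiting process's effective priority improves the longer it sits in the ready queue, so even low-priority jobs are eventually selected.

```python
def effective_priority(base_priority, waiting_time, aging_rate=1):
    # Lower number = higher priority (a common convention).
    # Aging: subtract a bonus that grows with time spent waiting.
    return base_priority - aging_rate * waiting_time

# Scheduler step: pick the ready process with the best effective priority.
ready = [
    {"pid": "A", "base": 1, "waiting": 2},    # high priority, arrived recently
    {"pid": "B", "base": 10, "waiting": 15},  # low priority, waiting a long time
]
chosen = min(ready, key=lambda p: effective_priority(p["base"], p["waiting"]))
print(chosen["pid"])   # "B": aging has lifted it past the newer high-priority job
```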

  16. Scheduling Algorithms • Shortest Job First (SJF) • Special case of priority scheduling (priority = expected length of CPU burst) • Non-preemptive version: scheduler chooses the process with the least amount of work to do (shortest CPU burst). • Preemptive version: scheduler chooses the process with the shortest remaining time to completion -- a running process can be interrupted.

  17. Scheduling Algorithms • Shortest Job First (SJF) • Example: What’s the average waiting time? • Issue: How do you predict the future? • In practice, systems use past process behavior to predict the length of the next CPU burst (see the sketch below).
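
The slide doesn't say which predictor is meant, but a common textbook choice is exponential averaging of previous bursts: the next estimate is alpha * (last actual burst) + (1 - alpha) * (previous estimate). A minimal sketch:

```python
def predict_next_burst(prev_estimate, last_burst, alpha=0.5):
    # Exponential averaging: weight the most recent actual burst against
    # the running estimate. alpha controls how quickly the prediction
    # reacts to changes in the process's behavior.
    return alpha * last_burst + (1 - alpha) * prev_estimate

# Example: the estimate starts at 10 ms and the observed bursts are 6, 4, 6, 4 ms.
tau = 10.0
for t in [6, 4, 6, 4]:
    tau = predict_next_burst(tau, t)
    print(round(tau, 2))   # 8.0, 6.0, 6.0, 5.0
```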

  18. Scheduling Algorithms • Shortest Job First (SJF) • SJF pros & cons: • Better average response time. • It’s the best you can do to minimize average response time; the algorithm can be proven optimal. • Difficult to predict the future. • Unfair: possible starvation (many short jobs can keep long jobs from making any progress).

  19. Scheduling Algorithms • Multi-level Queues • Maintain multiple ready queues based on process “type” (e.g., interactive queue, CPU bound queue, etc.). • Each queue has a priority; a process is permanently assigned to a queue. • May use a different scheduling algorithm in each queue. • Need to decide how to schedule between queues.

  20. Scheduling Algorithms • Multi-level Feedback Queues • Adaptive algorithm: Processes can move between queues based on past behavior. • Round robin scheduling in each queue -- time slice increases in lower priority levels. • Queue assignment: • Put new process in highest priority queue. • If CPU burst takes longer than one time quantum, drop one level. • If CPU burst completes before timer expires, move up one level.
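
A minimal sketch of the queue-movement rules on this slide, assuming three priority levels; the specific time slices are illustrative, not from the slide:

```python
NUM_LEVELS = 3
QUANTUM_MS = [8, 16, 32]   # time slice grows at lower priority levels

def initial_level():
    # New processes start in the highest priority queue (level 0).
    return 0

def next_level(level, used_full_quantum):
    # Feedback rules after a CPU burst:
    #   used the whole time slice  -> drop one level (looks CPU bound)
    #   blocked before it expired  -> move up one level (looks interactive)
    if used_full_quantum:
        return min(level + 1, NUM_LEVELS - 1)
    return max(level - 1, 0)

level = initial_level()
level = next_level(level, used_full_quantum=True)    # CPU-bound burst: 0 -> 1
level = next_level(level, used_full_quantum=False)   # interactive burst: 1 -> 0
print(level, QUANTUM_MS[level])                      # back at level 0, 8 ms slice
```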
