
CSE451 Scheduling Autumn 2002



  1. CSE451 Scheduling Autumn 2002 Gary Kimura Lecture #6 October 11, 2002

  2. Today • Finish comparing user and kernel threads • Whether processes or threads are the basic execution unit, the OS needs some way of scheduling each one's time on the CPU • Why? • To make better utilization of the system

  3. Goals for Multiprogramming • In a multiprogramming system, we try to increase utilization and throughput by overlapping I/O and CPU activities. • This requires several OS policy decisions: • Determine the multiprogramming level -- the number of jobs loaded in primary memory • Decide what job is to run next to guarantee good service • These decisions are long-term and short-term scheduling decisions, respectively. • Short-term scheduling executes more frequently; changes of multiprogramming level are more costly.

  4. Scheduling • The scheduler is a main component in the OS kernel • It chooses processes/threads to run from the ready queue. • The scheduling algorithm determines how jobs are scheduled. • In general, the scheduler runs: • When a process/thread switches from running to waiting • When an interrupt occurs • When a process/thread is created or terminated • In a preemptive system, the scheduler can interrupt a process/thread that is running. • In a non-preemptive system, the scheduler waits for a running process/thread to explicitly block

  5. Preemptive and Non-preemptive Systems • In a non-preemptive system the OS will not stop a running job until the job either exits or does an explicit wait • In a preemptive system the OS can potentially stop a job midstream in its execution in order to run another job • Quite often preemption is caused by a timer going off when the current job's time-slice or quantum is exhausted

  6. Preemptive and Non-preemptive Systems (continued) • I cannot overemphasize the need to understand the difference between preemptive and non-preemptive systems • Preemptive systems also come in various degrees • Preemptive user but non-preemptive kernel • Preemptive user and kernel • This affects your choice of scheduling algorithm, OS complexity, and system performance

  7. Scheduling Goals • Scheduling algorithms can have many different goals (which sometimes conflict) • maximize CPU utilization • maximize job throughput (#jobs/s) • minimize job turnaround time (Tfinish – Tstart) • minimize job waiting time (Avg(Twait): average time spent on wait queue) • minimize response time (Avg(Tresp): average time spent on ready queue) • Goals may depend on type of system • batch system: strive to maximize job throughput and minimize turnaround time • interactive systems: minimize response time of interactive jobs (such as editors or web browsers)
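The metrics on this slide follow from simple arithmetic; a minimal sketch with hypothetical times (the values below are made up for illustration, not from the slides):

```python
# One job's timeline (hypothetical, in ms):
# arrives at time 0, first gets the CPU at 2, finishes at 10,
# having used 6 ms of actual CPU time.
t_start, t_first_run, t_finish, cpu = 0, 2, 10, 6

turnaround = t_finish - t_start      # total time in the system
waiting = turnaround - cpu           # time spent off the CPU (wait queues)
response = t_first_run - t_start     # time until the job first runs

print(turnaround, waiting, response)  # 10 4 2
```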

  8. Scheduler Non-goals • Schedulers typically try to prevent starvation • starvation occurs when a process is prevented from making progress, because another process has a resource it needs • A poor scheduling policy can cause starvation • e.g., if a high-priority process always prevents a low-priority process from running on the CPU • Synchronization can also cause starvation • we’ll see this in a future class • roughly, if somebody else always gets a lock I need, I can’t make progress

  9. Various Scheduling Algorithms • First-come First-served • Shortest Job First • Priority scheduling • Round Robin • Multi-level queue • We’ll examine each in turn

  10. First-Come First-Served (FCFS/FIFO) • First-come first-served (FCFS) • jobs are scheduled in the order that they arrive • “real-world” scheduling of people in lines • e.g. supermarket, bank tellers, McDonald's, … • typically non-preemptive • no context switching at supermarket! • jobs treated equally, no starvation • except possibly for infinitely long jobs • Sounds perfect! • what’s the problem?
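As a sketch, FCFS waiting times can be simulated in a few lines of Python (the burst times are hypothetical, and all jobs are assumed to arrive at time 0):

```python
# FCFS: jobs run to completion in arrival order.
def fcfs_wait_times(bursts):
    """Return per-job waiting times (ms) under non-preemptive FCFS."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each job waits for all earlier jobs to finish
        clock += burst
    return waits

# A long job ahead of short ones inflates the average waiting time:
print(fcfs_wait_times([24, 3, 3]))  # [0, 24, 27] -> average wait 17 ms
print(fcfs_wait_times([3, 3, 24]))  # [0, 3, 6]   -> average wait 3 ms
```

The two calls show the "small jobs waiting behind long ones" problem from the next slide: the same workload gives very different average waits depending on arrival order.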

  11. FCFS picture • [Figure: Gantt charts comparing two FCFS orderings of jobs A, B, and C] • Problems: • average response time and turnaround time can be large • e.g., small jobs waiting behind long ones • results in high turnaround time • may lead to poor overlap of I/O and CPU

  12. Shortest Job First (SJF) • Shortest job first (SJF) • choose the job with the smallest expected CPU burst • one can prove this minimizes average waiting time • Can be preemptive or non-preemptive • preemptive is called shortest remaining time first (SRTF) • Sounds perfect! • what’s the problem here?
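A non-preemptive SJF sketch under the same assumptions as the FCFS example (hypothetical bursts, all jobs arriving at time 0) shows the improvement in average waiting time:

```python
# Non-preemptive SJF: always run the shortest remaining burst first.
def sjf_wait_times(bursts):
    """Return per-job waiting times (ms) under non-preemptive SJF."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:          # run jobs from shortest to longest burst
        waits[i] = clock
        clock += bursts[i]
    return waits

print(sjf_wait_times([24, 3, 3]))  # [6, 0, 3] -> average wait 3 ms
```

For the same workload, FCFS in arrival order gives waits [0, 24, 27] (average 17 ms); SJF cuts the average to 3 ms by running the two short jobs first.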

  13. SJF Problem • Problem: impossible to know size of future CPU burst • from your theory class, equivalent to the halting problem • can you make a reasonable guess? • yes, for instance looking at past as predictor of future • but, might lead to starvation in some cases!
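One common "past as predictor of future" guess is exponential averaging of observed bursts. This is a standard textbook technique, not something specified on the slide; the α and initial guess below are illustrative:

```python
# Exponential averaging: tau_next = alpha * last_burst + (1 - alpha) * tau_prev
def predict_burst(history, alpha=0.5, initial_guess=10.0):
    """Predict the next CPU burst (ms) from past bursts."""
    tau = initial_guess
    for burst in history:
        # Blend the newest measurement with the running estimate;
        # larger alpha weights recent behavior more heavily.
        tau = alpha * burst + (1 - alpha) * tau
    return tau

print(predict_burst([6, 4, 6, 4]))  # 5.0
```

With α = 0.5 each older burst's influence halves every step, so the estimate tracks recent behavior while smoothing out noise.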

  14. Priority Scheduling • Assign priorities to jobs • choose job with highest priority to run next • if tie, use another scheduling algorithm to break (e.g. FCFS) • to implement SJF, priority = expected length of CPU burst • Abstractly modeled as multiple “priority queues” • put ready job on queue associated with its priority • Sounds perfect! • what’s wrong with this?
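The "pick highest priority, break ties FCFS" rule maps naturally onto a heap. A minimal sketch (job names and priority values are hypothetical; lower number = higher priority here):

```python
import heapq

# Ready queue as a min-heap of (priority, arrival_seq, name) tuples;
# the arrival sequence number breaks priority ties in FCFS order.
ready = []
for seq, (name, prio) in enumerate([("editor", 1), ("batch", 5), ("shell", 1)]):
    heapq.heappush(ready, (prio, seq, name))

run_order = [heapq.heappop(ready)[2] for _ in range(len(ready))]
print(run_order)  # ['editor', 'shell', 'batch']
```

Both priority-1 jobs run before the priority-5 job, and "editor" beats "shell" only because it arrived first.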

  15. Priority Scheduling: problem • The problem: starvation • if there is an endless supply of high priority jobs, no low-priority job will ever run • Solution: “age” processes over time • increase priority as a function of wait time • decrease priority as a function of CPU time • many ugly heuristics have been explored in this space
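The aging idea on this slide can be sketched as a toy rule (the rates below are an assumption for illustration, one of the "ugly heuristics" the slide alludes to):

```python
# Toy aging rule (assumption): +1 priority per 10 ticks spent waiting,
# -1 per 10 ticks of CPU consumed; higher value runs sooner.
def effective_priority(base, wait_ticks, cpu_ticks):
    return base + wait_ticks // 10 - cpu_ticks // 10

# A low-priority job that has waited a long time climbs past its base:
print(effective_priority(1, 50, 0))   # 6
# A high-priority job that has hogged the CPU sinks:
print(effective_priority(5, 0, 30))   # 2
```

Under a rule like this the starved low-priority job eventually outranks the CPU hog, which is exactly the anti-starvation property the slide asks for.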

  16. Round Robin (RR) • Round Robin scheduling (RR) • ready queue is treated as a circular FIFO queue • each job is given a time slice, called a quantum • job executes for duration of quantum, or until it blocks • time-division multiplexing (time-slicing) • great for timesharing • no starvation • can be preemptive or non-preemptive • Sounds perfect! • what’s wrong with this?
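The circular-queue-plus-quantum behavior can be simulated directly (hypothetical bursts, all jobs assumed to arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return per-job completion times (ms) under round robin."""
    ready = deque((i, b) for i, b in enumerate(bursts))
    clock, finish = 0, [0] * len(bursts)
    while ready:
        i, remaining = ready.popleft()
        run = min(quantum, remaining)    # run one quantum, or less if done
        clock += run
        if remaining > run:
            ready.append((i, remaining - run))  # re-queue at the tail
        else:
            finish[i] = clock
    return finish

print(round_robin([24, 3, 3], quantum=4))  # [30, 7, 10]
```

Note the short jobs finish early (7 and 10 ms) instead of waiting behind the 24 ms job as they would under FCFS; the long job pays for that with a later completion.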

  17. RR problems • Problems: • what do you set the quantum to be? • no setting is “correct” • if small, then context switch often, incurring high overhead • if large, then response time degrades (behaves more like FCFS) • treats all jobs equally • if I run 100 copies of SETI@home, it degrades your service • how can I fix this?

  18. Multi-level Queues • Multi-level Queues • Probably one of the most common methods used • Implement multiple ready queues based on job priority • Multiple queues allow us to rank each job’s scheduling priority relative to other jobs in the system • Windows NT/2000 has 32 priority levels • Each running job is given a time slice or quantum • After each time slice the next job of highest priority is given a chance to run • Jobs can migrate up and down the priority levels based on various activities
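A toy multi-level queue might look like the sketch below. The four levels, the job names, and the "demote on quantum expiry" migration rule are all assumptions for illustration (real systems such as Windows NT/2000 use 32 levels and more elaborate migration heuristics):

```python
from collections import deque

NUM_LEVELS = 4  # toy version; Windows NT/2000 uses 32 priority levels

class MultiLevelQueue:
    """Toy multi-level queue: dispatch from the highest non-empty level;
    a job that exhausts its quantum can be demoted one level (assumption)."""

    def __init__(self):
        self.levels = [deque() for _ in range(NUM_LEVELS)]

    def add(self, job, level=0):
        self.levels[level].append(job)

    def next_job(self):
        # Scan from highest priority (level 0) down; FCFS within a level.
        for level, q in enumerate(self.levels):
            if q:
                return level, q.popleft()
        return None

    def demote(self, job, level):
        # Push a quantum-exhausting job one level down (bounded at the bottom).
        self.levels[min(level + 1, NUM_LEVELS - 1)].append(job)

mlq = MultiLevelQueue()
mlq.add("interactive", level=0)
mlq.add("batch", level=2)
print(mlq.next_job())  # (0, 'interactive')
```

The interactive job always wins the dispatch while it stays at level 0; CPU-bound jobs drift downward via `demote`, which is one way jobs "migrate up and down the priority levels" as the slide describes.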

  19. Multiple Processors • Multiple processors (i.e., more than one CPU on the system) changes some of the scheduling strategies • There are both heterogeneous systems and homogeneous systems • There is also the notion of asymmetric multiprocessing and symmetric multiprocessing • SMP is great but when we look at synchronization and locks we’ll understand why it is difficult to actually pull off.

  20. Next time • With so much potentially going on in the system how do we synchronize all of it?
