Chapter 7 Scheduling


Presentation Transcript


  1. Chapter 7 Scheduling • Chien-Chung Shen, CIS, UD • cshen@cis.udel.edu

  2. Scheduling • Low-level mechanisms - context switching • High-level policies – scheduling policies • Appears in many disciplines – assembly lines for efficiency • How to develop a framework for studying scheduling policy? • assumptions of workload • metrics of performance • approaches

  3. Workload Assumptions • Workload – the set of processes (jobs) running in the system • critical to building policies – the more you know about the workload, the more fine-tuned your policy can be • unrealistic initially, but will be relaxed later • Each job runs for the same amount of time • All jobs arrive at the same time • All jobs only use the CPU (i.e., they perform no I/O) • The run-time of each job is known – the scheduler knows everything!
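
A minimal sketch of how a workload under these assumptions might be represented, for use in the policy sketches that follow; the Job type and its field names are illustrative, not from the chapter:

```python
# Illustrative (assumed) representation of a workload: each job records an
# arrival time and a run time, which the scheduler is assumed to know.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    arrival: float   # time the job enters the system
    runtime: float   # total CPU time needed (known to the scheduler by assumption)

# Under the initial assumptions, all jobs arrive at time 0 and run equally long.
workload = [Job("A", 0, 10), Job("B", 0, 10), Job("C", 0, 10)]
```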

  4. Scheduling Metrics • To compare different scheduling policies • Metric – something we use to measure system behavior • Turnaround time – the time at which the job completes minus the time at which the job arrived in the system • a performance metric • Fairness metrics vs. performance metrics – often at odds with each other • Jain's Fairness Index
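
A small sketch of the metrics just named, assuming the standard definitions (turnaround = completion minus arrival; Jain's index = (Σx)² / (n·Σx²)); the function names are illustrative:

```python
# Illustrative metric helpers (names are assumptions, not from the slides).
def turnaround(completion_time, arrival_time):
    # Turnaround time: completion time minus arrival time (a performance metric).
    return completion_time - arrival_time

def jains_index(allocations):
    # Jain's Fairness Index: (sum x_i)^2 / (n * sum x_i^2).
    # 1.0 means a perfectly fair allocation; 1/n means one job got everything.
    n = len(allocations)
    return sum(allocations) ** 2 / (n * sum(x * x for x in allocations))

print(turnaround(20, 0))                  # 20
print(jains_index([10, 10, 10]))          # 1.0 (perfectly fair)
print(round(jains_index([30, 0, 0]), 2))  # 0.33 (one job got all the CPU)
```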

  5. First In First Out (FIFO) • First come, first served (FCFS) • Each job runs 10 sec. • Average turnaround time = ? = (10+20+30)/3 = 20
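
The average above can be reproduced in a few lines; this is a hedged sketch (not the chapter's code) that runs jobs in arrival order and averages their turnaround times:

```python
# FIFO sketch: jobs arrive at time 0 in the given order; each job's
# turnaround equals the sum of all run times up to and including its own.
def fifo_avg_turnaround(runtimes):
    clock, total = 0, 0
    for rt in runtimes:
        clock += rt        # this job completes after everything ahead of it
        total += clock     # turnaround = completion time - arrival time (0)
    return total / len(runtimes)

print(fifo_avg_turnaround([10, 10, 10]))  # (10 + 20 + 30) / 3 = 20.0
```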

  6. First In First Out (FIFO) • What kind of workload could you construct to make FIFO perform poorly? • Average turnaround time = ? = (100+110+120)/3 = 110 • Convoy effect (like a single full cart holding up a supermarket checkout line)
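
Using the same FIFO sketch, a single long job that arrives first produces the convoy effect; the 100/10/10 workload below is the one implied by the averages on the slide:

```python
# Convoy effect sketch: one 100-second job ahead of two 10-second jobs.
def fifo_avg_turnaround(runtimes):
    clock, total = 0, 0
    for rt in runtimes:
        clock += rt
        total += clock
    return total / len(runtimes)

print(fifo_avg_turnaround([100, 10, 10]))  # (100 + 110 + 120) / 3 = 110.0
print(fifo_avg_turnaround([10, 10, 100]))  # (10 + 20 + 120) / 3 = 50.0 -- short jobs first
```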

  7. Shortest Job First (SJF) • Average turnaround time = ? = (10+20+120)/3 = 50 • Assuming all jobs arrive at the same time, SJF is optimal
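
With all jobs arriving at the same time, SJF is simply "sort by run time, then run FIFO"; a hedged sketch:

```python
# SJF sketch (all jobs arrive at time 0): run shortest jobs first.
def sjf_avg_turnaround(runtimes):
    clock, total = 0, 0
    for rt in sorted(runtimes):   # shortest job first
        clock += rt
        total += clock
    return total / len(runtimes)

print(sjf_avg_turnaround([100, 10, 10]))  # (10 + 20 + 120) / 3 = 50.0
```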

  8. Shortest Time-to-Completion First • SJF with A arriving at 0, and B, C arriving at 10 • How would you do better? • Preemption – preemptive SJF, also known as Shortest Time-to-Completion First (STCF), which is also optimal once jobs are allowed to arrive at any time
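
A sketch of STCF under the slide's scenario (A arrives at 0, B and C at 10); the 100/10/10 run times are an assumption carried over from the earlier slides:

```python
# STCF (preemptive SJF) sketch: at every time unit, run the ready job with
# the least remaining time; jobs are (name, arrival, runtime) tuples.
def stcf_avg_turnaround(jobs):
    remaining = {name: rt for name, _, rt in jobs}
    arrival = {name: arr for name, arr, _ in jobs}
    t, total = 0, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:                  # nothing has arrived yet; jump ahead
            t = min(arrival[n] for n in remaining)
            continue
        n = min(ready, key=lambda x: remaining[x])   # shortest time-to-completion
        remaining[n] -= 1              # run it for one time unit
        t += 1
        if remaining[n] == 0:          # job finished; record its turnaround
            del remaining[n]
            total += t - arrival[n]
    return total / len(jobs)

print(stcf_avg_turnaround([("A", 0, 100), ("B", 10, 10), ("C", 10, 10)]))
# (130 + 10 + 20) / 3 ~= 53.3, versus ~103.3 for non-preemptive SJF here
```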

  9. Metric: Response Time • Interactive applications • Response time = time from when the job arrives in a system to the first time it is scheduled • vs. turnaround time
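
A minimal sketch of the two metrics side by side, assuming the standard definitions (response = first run minus arrival; turnaround = completion minus arrival); the numbers below are illustrative:

```python
# Illustrative comparison of response time vs. turnaround time.
def response_time(first_run, arrival):
    return first_run - arrival          # arrival to first time scheduled

def turnaround_time(completion, arrival):
    return completion - arrival         # arrival to completion

# A job arrives at t=0, first runs at t=5, and completes at t=15.
print(response_time(5, 0), turnaround_time(15, 0))  # 5 15
```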

  10. Round Robin • Instead of running jobs to completion, RR runs a job for a time slice (scheduling quantum) and then switches to the next job in the ready queue; it repeats until all jobs are finished • Time slicing – the length of the time slice must be a multiple of the timer-interrupt period • RR response time = 1 • SJF response time = 5 • What is the effect of the length of the time slice on response time?
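
The response-time numbers on the slide appear to match the textbook-style example of three 5-second jobs arriving together with a 1-second time slice; a sketch under that assumption:

```python
# Assumed workload: three 5-second jobs (A, B, C) arriving at t=0; RR slice = 1 s.
def rr_avg_response(runtimes, slice_len=1):
    # With equal-length jobs arriving together, job i first runs at i * slice_len.
    n = len(runtimes)
    return sum(i * slice_len for i in range(n)) / n

def sjf_avg_response(runtimes):
    # Non-preemptive SJF: job i first runs only after all shorter jobs finish.
    first_runs, clock = [], 0
    for rt in sorted(runtimes):
        first_runs.append(clock)
        clock += rt
    return sum(first_runs) / len(runtimes)

print(rr_avg_response([5, 5, 5]))   # (0 + 1 + 2) / 3 = 1.0
print(sjf_avg_response([5, 5, 5]))  # (0 + 5 + 10) / 3 = 5.0
```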

  11. Round Robin • Amortization – used in systems when there is a fixed cost to some operation • by incurring that cost less often, the total cost to the system is reduced • What is the effect of the length of the time slice on response time? • The shorter the time slice, the better (shorter) the response time • but if it is too short, the cost of context switching dominates • The length of the time slice is thus a trade-off – long enough to amortize the cost of switching without making it so long that the system is no longer responsive • Any policy (such as RR) that is fair (evenly divides the CPU among active processes on a small time scale) will perform poorly on metrics such as turnaround time
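
A back-of-the-envelope sketch of the amortization argument, with an assumed 1 ms context-switch cost (the number is illustrative, not from the slide):

```python
# Fraction of CPU time lost to context switching for a given time-slice length.
def switch_overhead(time_slice_ms, switch_cost_ms=1.0):
    return switch_cost_ms / (time_slice_ms + switch_cost_ms)

print(f"{switch_overhead(10):.1%}")   # ~9.1% lost with a 10 ms slice
print(f"{switch_overhead(100):.1%}")  # ~1.0% lost with a 100 ms slice (cost amortized)
```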

  12. Incorporating I/O • A process that issues an I/O request is blocked waiting for I/O completion • When the I/O completes, an interrupt is raised, and the OS runs and moves the process that issued the I/O from the blocked state back to the ready state • Letting the CPU sit idle while a job waits for I/O is a poor use of the CPU • Instead, overlap CPU and I/O – treat each CPU burst as an independent job • higher CPU utilization • Interactive jobs get run frequently; while they are performing I/O, other CPU-intensive jobs run
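
A worked sketch of the overlap argument with assumed numbers (not from the slide): A alternates 10 ms of CPU with 10 ms of I/O over five CPU bursts, while B needs 50 ms of pure CPU; letting B run during A's I/O waits raises utilization:

```python
# Assumed workload: A = five 10 ms CPU bursts, each followed by 10 ms of I/O
# (except after the last burst); B = 50 ms of CPU with no I/O.
A_BURSTS, CPU_MS, IO_MS, B_CPU_MS = 5, 10, 10, 50

a_finish = A_BURSTS * CPU_MS + (A_BURSTS - 1) * IO_MS      # A done at 90 ms

# No overlap: run A to completion (CPU idle during A's I/O), then run B.
no_overlap = a_finish + B_CPU_MS                           # 90 + 50 = 140 ms

# Overlap: B uses A's four 10 ms I/O gaps, then finishes the rest after A.
b_left = max(0, B_CPU_MS - (A_BURSTS - 1) * IO_MS)
overlap = a_finish + b_left                                # 90 + 10 = 100 ms

print(no_overlap, overlap)  # 140 100 -> overlapping CPU and I/O finishes sooner
```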
