
Advanced Operating Systems



  1. Advanced Operating Systems Lecture 6: Scheduling University of Tehran, Dept. of EE and Computer Engineering By: Dr. Nasser Yazdani

  2. How to use resources efficiently • Sharing the CPU and other resources of the system. • References: • Surplus Fair Scheduling: A Proportional-Share CPU Scheduling Algorithm for Symmetric Multiprocessors • Scheduler Activations: Effective Kernel Support for the User-Level Management of Parallelism • Condor: A Hunter of Idle Workstations • Virtual-Time Round-Robin: An O(1) Proportional Share Scheduler • A SMART Scheduler for Multimedia Applications • Linux CPU scheduling

  3. Outline • Scheduling • Scheduling policies • Scheduling on multiprocessors • Thread scheduling

  4. What is Scheduling? • OS policies and mechanisms to allocate resources to entities. • An OS often has many pending tasks: threads, async callbacks, device input. • The order may matter for policy, correctness, or efficiency. • Providing sufficient control is not easy: mechanisms must allow the policy to be expressed. • A good scheduling policy ensures that the most important entity gets the resources it needs.

  5. Why Scheduling? • This topic was popular in the days of time sharing, when there was a shortage of resources. • It seemed irrelevant in the era of PCs and workstations, when resources were plentiful. • Now the topic is back from the dead to handle massive Internet servers with paying customers, where some customers are more important than others.

  6. Resources to Schedule? • Resources you might want to schedule: CPU time, physical memory, disk and network I/O, and I/O bus bandwidth. • Entities that you might want to give resources to: users, processes, threads, web requests, or MIT accounts.

  7. Key problems? • Gap between desired policy and available mechanism. The desired policies often include elements that are not implementable. Furthermore, there are often many conflicting goals (low latency, high throughput, and fairness), and the scheduler must make trade-offs between them. • Interaction between different schedulers. One has to take a systems view: just optimizing the CPU scheduler may do little for the overall desired policy.

  8. Scheduling Policy Examples • Allocate cycles in proportion to money. • Maintain high throughput under high load. • Never delay a high-priority thread by more than 1 ms. • Maintain good interactive response. • Can we enforce such policies with the thread scheduler alone?

  9. General plan • Understand where scheduling is occurring. • Expose scheduling decisions, allow control. • Account for resource consumption, to allow intelligent control.

  10. Parallel Computing • Speedup: the final measure of success • Parallelism vs. concurrency • Actual vs. possible by the application • Granularity • Size of the concurrent tasks • Reconfigurability • Number of processors • Communication cost • Preemption vs. non-preemption • Co-scheduling • Some things are better scheduled together

  11. Best place for scheduling? • The application is in the best position to know its own scheduling requirements • Which threads run best simultaneously • Which are on the critical path • But the kernel must make sure all play fairly • MACH scheduling • Lets a process provide hints to discourage running • Possible to hand off the processor to another thread • Makes it easier for the kernel to select the next thread • Allows interleaving of concurrent threads • Leaves low-level scheduling in the kernel • Based on higher-level info from application space

  12. Example • Give each process an equal share of CPU time: interrupt every 10 msec and select another process in round-robin fashion. This works if processes are compute-bound. What if a process gives up some of its 10 ms to wait for input? • How long should the quantum be? Is 10 msec the right answer? A shorter quantum gives better interactive performance but lowers overall system throughput. • What if an environment computes for 1 msec and then sends an IPC to the file-server environment? Shouldn't the file server get more CPU time, since it operates on behalf of all other environments? • Potential improvement: track "recent" CPU use (e.g., over the last second) and always run the environment with the least recent CPU use, as sketched below. (Still, if you sleep long enough you lose.) Another solution: directed yield; specify on the yield which environment you are donating the remainder of the quantum to (e.g., the file server, so it can compute on the environment's behalf).
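A minimal sketch of the "least recent CPU use" idea above, assuming an exponentially decaying usage account (the class names, decay factor, and tick structure are illustrative, not from the lecture):

```python
# Sketch: run the environment with the least "recent" CPU use.
# Recent use decays each tick so that old consumption is forgotten.

DECAY = 0.5        # assumed decay factor applied once per tick
QUANTUM_MS = 10    # the quantum from the slide

class Proc:
    def __init__(self, name):
        self.name = name
        self.recent_ms = 0.0       # decayed account of recent CPU time

def pick_next(runnable):
    return min(runnable, key=lambda p: p.recent_ms)

def charge(proc, used_ms, all_procs):
    proc.recent_ms += used_ms      # bill the process that just ran
    for p in all_procs:            # decay everyone's history
        p.recent_ms *= DECAY

procs = [Proc("editor"), Proc("file-server")]
charge(procs[1], 10, procs)        # file-server used a full quantum
print(pick_next(procs).name)       # editor
```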

  13. Scheduling is a System Problem • The thread/process scheduler can't enforce policies by itself. • It needs cooperation from: • All resource schedulers. • Software structure. • Conflicting goals may limit effectiveness.

  14. Goals • Low latency • People typing at editors want fast response • Network services can be latency-bound, not CPU-bound • High throughput • Minimize context switches to avoid wasting CPU: TLB misses, cache misses, even page faults • Fairness

  15. Scheduling Approaches • FIFO: + fair, - high latency • Round robin: + fair, + low latency, - poor throughput • STCF/SRTCF (shortest time / shortest remaining time to completion first): + low latency, + high throughput, - unfair (starvation)

  16. Shortest Job First (SJF) • Two types: • Non-preemptive • Preemptive • Requirement: job execution times need to be known in advance • Provably optimal for average waiting time if all jobs are available simultaneously • Is SJF optimal if the jobs are not all available simultaneously?
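A small worked example of why running the shortest job first helps: with three jobs arriving together, SJF ordering minimizes the average wait (the job lengths are made up for illustration):

```python
# Non-preemptive SJF vs. the worst-case order, for jobs of length
# 1, 3, and 8 that all arrive at t = 0.
def avg_wait(run_order):
    t, waits = 0, []
    for burst in run_order:
        waits.append(t)       # each job waits for everything before it
        t += burst
    return sum(waits) / len(waits)

print(avg_wait([1, 3, 8]))    # 1.67 (SJF order: waits 0, 1, 4)
print(avg_wait([8, 3, 1]))    # 6.33 (longest first: waits 0, 8, 11)
```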

  17. Preemptive SJF • Also called Shortest Remaining Time First • Schedule the job with the shortest remaining time required to complete • Requirement: remaining execution times need to be known in advance
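A minimal SRTF simulation, one time unit per step (the arrival times, burst lengths, and tick-based structure are assumptions for illustration):

```python
# Preemptive SJF (SRTF): at each tick, run the arrived job with the
# shortest remaining time; a newly arrived shorter job preempts.
import heapq

def srtf(jobs):                      # jobs: list of (arrival, burst, name)
    jobs = sorted(jobs)
    t, i, ready, finish = 0, 0, [], {}
    while i < len(jobs) or ready:
        while i < len(jobs) and jobs[i][0] <= t:
            arrival, burst, name = jobs[i]
            heapq.heappush(ready, (burst, name))
            i += 1
        if not ready:                # idle until the next arrival
            t = jobs[i][0]
            continue
        remaining, name = heapq.heappop(ready)
        t += 1                       # run the shortest job for one tick
        if remaining == 1:
            finish[name] = t
        else:
            heapq.heappush(ready, (remaining - 1, name))
    return finish

print(srtf([(0, 7, "A"), (2, 4, "B"), (4, 1, "C")]))  # C at 5, B at 7, A at 12
```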

  18. Interactive Scheduling • Usually preemptive • Time is sliced into quanta (time intervals) • Decision made at the beginning of each quantum • Performance criteria: • Minimum response time • Best proportionality • Representative algorithms: • Priority-based • Round-robin • Multi-queue & multilevel feedback • Shortest process time • Guaranteed scheduling • Lottery scheduling • Fair-share scheduling

  19. Priority Scheduling • Each job is assigned a priority, with FCFS within each priority level. • Select the highest-priority job over lower ones. • Rationale: higher-priority jobs are more mission-critical • Example: DVD movie player vs. sending email • Problems: • May not give the best AWT (average waiting time) • Indefinite blocking, or starvation of a process
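A sketch of priority scheduling with FCFS within each level, using a binary heap as the ready queue (the process names and priority values are hypothetical):

```python
# Priority scheduling: highest priority first, FCFS among equals.
import heapq, itertools

counter = itertools.count()        # arrival order breaks ties (FCFS)
ready = []

def enqueue(priority, name):
    # heapq is a min-heap, so smaller numbers mean higher priority here.
    heapq.heappush(ready, (priority, next(counter), name))

def dispatch():
    priority, order, name = heapq.heappop(ready)
    return name

enqueue(1, "dvd-player")
enqueue(5, "send-email")
enqueue(1, "shell")
print(dispatch(), dispatch(), dispatch())   # dvd-player shell send-email
```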

  20. Set Priority • Two approaches • Static (for systems with well-known and regular application behaviors) • Dynamic (otherwise) • Priority may be based on: • Cost to user • Importance of user • Aging • Percentage of CPU time used in the last X hours

  21. Pitfall: Priority Inversion • Low-priority thread X holds a lock. • High-priority thread Y waits for the lock. • Medium-priority thread Z preempts X. • Y is indefinitely delayed despite its high priority. • The same happens when a higher-priority process needs to read or modify kernel data currently being accessed by a lower-priority process: • The higher-priority process must wait! • But the lower-priority process cannot proceed quickly, due to scheduling. • Solution: priority inheritance • While a lower-priority process holds a resource, it inherits the high priority until it is done with the resource in question; then its priority reverts to its natural value.
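A toy sketch of priority inheritance (the Lock/Thread classes and the blocking protocol are simplified assumptions; a real kernel would also queue the waiter):

```python
# Priority inheritance: while low-priority X holds a lock that
# high-priority Y wants, X runs at Y's priority, so a
# medium-priority Z cannot preempt it.
class Thread:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority
        self.base_priority = priority

class Lock:
    def __init__(self):
        self.holder = None

    def acquire(self, thread):
        if self.holder is None:
            self.holder = thread
            return True
        # Boost the holder to the waiter's priority (inheritance).
        self.holder.priority = max(self.holder.priority, thread.priority)
        return False                 # caller must block and retry

    def release(self):
        self.holder.priority = self.holder.base_priority   # revert
        self.holder = None

x, y = Thread("X", 1), Thread("Y", 10)
lock = Lock()
lock.acquire(x)      # low-priority X takes the lock
lock.acquire(y)      # Y blocks; X inherits priority 10
print(x.priority)    # 10, so Z at priority 5 can no longer delay Y
```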

  22. Pitfall: Long Code Paths • Large-granularity locks are convenient. • Non-preemptable threads are an extreme case. • May delay high-priority processing.

  23. Pitfall: Efficiency • Efficient disk use requires unfairness: • Shortest-seek-first vs. FIFO • Read-ahead vs. data needed now • Efficient paging policy creates delays: • The OS may swap out my idle Emacs to free memory. • What happens when I type a key? • The thread scheduler doesn't control these.

  24. Pitfall: Multiple Schedulers • Every resource with multiple waiting threads has a scheduler: • Locks, disk driver, memory allocator. • The schedulers may not cooperate or even be explicit.

  25. Example: UNIX • Goals: • Simple kernel concurrency model. • Limited pre-emption. • Quick response to device interrupts. • Many kinds of execution environments. • Some transitions are not possible. • Some transitions can't be controlled.

  26. UNIX Environments [diagram: each process has a user half and a kernel half; beneath them run the timer and network soft interrupts, and below those the device and timer interrupts]

  27. UNIX: Process User Half • Interruptible. • Pre-emptable via timer interrupt. • We don't trust user processes. • Enters the kernel half via system calls and faults: • Save user state on the stack. • Raise the privilege level. • Jump to a known point in the kernel. • Each process has a stack and saved registers.

  28. UNIX: Process Kernel Half • Executes system calls for its user process. • May involve many steps separated by sleep(). • Interruptible. • May postpone interrupts in critical sections. • Not pre-emptable: • Simplifies concurrent programming. • No context switch until a voluntary sleep(). • No user process runs if a kernel half is runnable. • Each kernel half has a stack and saved registers. • Many processes may be sleep()ing in the kernel.

  29. UNIX: Device Interrupts • Device hardware asks the CPU for an interrupt • To signal new input or completion of output. • Cheaper than polling, lower latency. • Interrupts take priority over user/kernel halves: • Save current state on the stack. • Mask other interrupts. • Run the interrupt handler function. • Return and restore state. • The real-time clock is a device.

  30. UNIX: Soft Interrupts • Device interrupt handlers must be short. • Expensive processing is deferred to a soft interrupt. • Can't do it in the kernel half: the process is not known. • Example: TCP protocol input processing. • Example: periodic process scheduling. • Devices can interrupt a soft interrupt. • Soft interrupts have priority over user and kernel processes • But are only entered on return from a device interrupt. • Similar to an async callback. • Can't be a high-priority thread, since there is no pre-emption.

  31. UNIX Environments [diagram: transitions among user half, kernel half, soft interrupt, and device interrupt, labeled "transfer with choice", "transfer, limited choice", and "transfer, no choice"]

  32. Pitfall: Server Processes • User-level servers schedule requests: • X11, DNS, NFS. • They usually don't know about the kernel's scheduling policy. • Network packet scheduling also interferes.

  33. Pitfall: Hardware Schedulers • The memory system is scheduled among CPUs. • The I/O bus is scheduled among devices. • The interrupt controller chooses the next interrupt. • Hardware doesn't know about OS policy. • The OS often doesn't understand the hardware.

  34. Time Quantum • Time slice too large: • FIFO behavior • Poor response time • Time slice too small: • Too many context switches (overhead) • Inefficient CPU utilization • Heuristic: 70-80% of jobs should block within one time slice • Typical time slice: 10 to 100 ms • Time spent in the system depends on the size of the job.

  35. Multi-Queue Scheduling • Hybrid between priority and round-robin • Processes are assigned to one queue permanently • Scheduling between queues: • Fixed priorities • % of CPU spent on each queue • Example: • System processes • Interactive programs • Background processes • Student processes • Addresses the starvation and indefinite blocking problems

  36. Multi-Queue Scheduling: Example [figure: three queues with fixed CPU shares of 50%, 30%, and 20%; a sketch follows below]
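A minimal sketch of the fixed-share idea, assuming the shares map to system/interactive/background queues (the mapping and the 10-quantum period are assumptions):

```python
# Fixed-share multi-queue scheduling: out of every 10 quanta, give
# 5 to the system queue, 3 to interactive, and 2 to background;
# within each queue, round-robin or FCFS picks the actual process.
shares = {"system": 5, "interactive": 3, "background": 2}

def schedule_period():
    order = []
    for queue, quanta in shares.items():
        order += [queue] * quanta
    return order

print(schedule_period())
# ['system'] * 5 + ['interactive'] * 3 + ['background'] * 2
```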

  37. Multi-Processor Scheduling: Load Sharing • Decides: • Which process to run? • How long does it run? • Where to run it (on which CPU)? [cartoon: processes 1 through n competing to ride the CPU]

  38. Multi-Processor Scheduling Choices • Self-scheduled • Each CPU dispatches a job from the ready queue • Master-slave • One CPU schedules the other CPUs • Asymmetric • One CPU runs the kernel and the others run the user applications • One CPU handles the network and the others handle applications

  39. Gang Scheduling for Multi-Processors • A collection of processes belonging to one job • All the processes run at the same time • If one process is preempted, all processes of the gang are preempted • Helps eliminate the time a process spends waiting for the other processes in its parallel computation

  40. Scheduling Approaches • Multilevel feedback queues (see the sketch below) • A job starts in the highest-priority queue • If its time slice expires, lower its priority by one level • If its time slice does not expire, raise its priority by one level • Age long-running jobs
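A compact multilevel-feedback-queue sketch (the promote/demote rules follow the bullets above; the three levels and everything else are illustrative assumptions):

```python
# MLFQ: jobs that burn a whole slice sink one level; jobs that
# block before the slice expires rise one level.
from collections import deque

NUM_LEVELS = 3
queues = [deque() for _ in range(NUM_LEVELS)]   # 0 = highest priority

class Job:
    def __init__(self, name):
        self.name, self.level = name, 0

def admit(job):
    queues[0].append(job)            # new jobs start at the top

def pick_next():
    for q in queues:                 # scan from highest priority down
        if q:
            return q.popleft()
    return None

def reschedule(job, used_full_slice):
    if used_full_slice:
        job.level = min(job.level + 1, NUM_LEVELS - 1)   # demote
    else:
        job.level = max(job.level - 1, 0)                # promote
    queues[job.level].append(job)

admit(Job("compiler"))
j = pick_next()
reschedule(j, used_full_slice=True)  # CPU-bound: drops to level 1
print(j.level)                       # 1
```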

  41. Lottery Scheduling • Claim: priority-based schemes are ad hoc • Lottery scheduling: • A randomized scheme • Based on a currency abstraction • Idea: • Processes own lottery tickets • The CPU randomly draws a ticket and executes the corresponding process
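A minimal lottery draw (the ticket counts and process names are illustrative):

```python
# Lottery scheduling: the winner of a uniform random draw over all
# outstanding tickets runs next.
import random

tickets = {"A": 75, "B": 20, "C": 5}    # A should win ~75% of draws

def draw():
    winner = random.randrange(sum(tickets.values()))
    for proc, n in tickets.items():
        if winner < n:
            return proc
        winner -= n

wins = {p: 0 for p in tickets}
for _ in range(10_000):
    wins[draw()] += 1
print(wins)    # roughly proportional to 75 : 20 : 5
```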

  42. Properties of Lottery Scheduling • Guarantees fairness through probability • Guarantees no starvation, as long as each process owns at least one ticket • To approximate SRTCF: • Short jobs get more tickets • Long jobs get fewer

  43. Partially Consumed Tickets • What if a process is chosen but does not consume its entire time slice? • The process receives compensation tickets • Idea: • Get chosen more frequently • But run with a shorter time slice
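A hedged sketch of the compensation idea: a process that used only a fraction f of its quantum is valued at 1/f times its tickets until it next wins, so it wins more often but runs shorter (the inflation rule follows the lottery-scheduling design; the function shape is an assumption):

```python
# Compensation tickets: inflate a client's tickets by 1/f when it
# consumed only fraction f of its quantum.
def effective_tickets(base_tickets, used_ms, quantum_ms):
    f = used_ms / quantum_ms
    if f <= 0 or f >= 1.0:
        return base_tickets          # used nothing or everything
    return base_tickets / f          # e.g. f = 0.2 -> 5x the tickets

print(effective_tickets(100, 2, 10))    # 500.0
```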

  44. Ticket Currencies • Load insulation: • A process can dynamically change its ticketing policies without affecting other processes • Need to convert currencies before transferring tickets
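A sketch of currency conversion: if a currency is funded with some amount of base tickets and has issued its own local tickets, each local ticket is worth funding/issued base units (the names and numbers here are hypothetical):

```python
# Ticket currencies: convert local tickets to base units before
# comparing or transferring across processes.
def base_value(amount, currency_funding, currency_issued):
    return amount * currency_funding / currency_issued

# A currency funded with 100 base tickets that issued 200 local
# tickets values each local ticket at 0.5 base units:
print(base_value(50, currency_funding=100, currency_issued=200))   # 25.0
```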

  45. Condor • Identifies idle workstations and schedules background jobs on them • Guarantees that jobs will eventually complete • Analysis of workstation usage patterns • Only about 30% of capacity is actually used • Remote capacity allocation algorithms • Up-Down algorithm • Allows fair access to remote capacity • Remote execution facilities • Remote Unix (RU)

  46. Condor Issues • Leverage: a performance measure • The ratio of the capacity consumed by a job remotely to the capacity consumed on the home station to support remote execution • Checkpointing: save the state of a job so that its execution can be resumed later • Transparent placement of background jobs • Automatic restart if a background job fails • Users expect to receive fair access • Small overhead

  47. Condor: Scheduling • A hybrid of centralized static and distributed approaches • Each workstation keeps its own state information and schedule • A central coordinator assigns capacity to workstations • Workstations use that capacity to schedule

  48. Real-time Systems • The issues are scheduling and interrupts • Must complete a task by a particular deadline • Examples: • Accepting input from real-time sensors • Process-control applications • Responding to environmental events • How does one support real-time systems? • If deadlines are short, often use a dedicated system • Give real-time tasks absolute priority • Do not support virtual memory • Use early binding

  49. Real-time Scheduling • To initiate a job, one must specify: • A deadline • An estimate/upper bound on the resources needed • The system accepts or rejects the job • If accepted, the system agrees that it can meet the deadline • It places the job in a calendar, blocking out the resources it will need and planning when they will be allocated • Some systems support priorities • But this can violate the real-time guarantees of already-accepted jobs
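One standard accept/reject test (an assumption here; the lecture does not name one) is the EDF utilization bound for periodic tasks: admit a task set only if the total utilization sum(Ci/Ti) stays at or below 1. A sketch:

```python
# EDF admission control: admit a new periodic task only if total
# CPU utilization stays at or below 100%.
def admit(tasks, new_task):
    # tasks: list of (cost_ms, period_ms); new_task has the same shape.
    utilization = sum(c / t for c, t in tasks + [new_task])
    return utilization <= 1.0

accepted = [(10, 50), (20, 100)]     # U = 0.2 + 0.2 = 0.4
print(admit(accepted, (30, 60)))     # 0.4 + 0.50 = 0.90 -> True
print(admit(accepted, (40, 60)))     # 0.4 + 0.67 = 1.07 -> False
```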

  50. User-level Thread Scheduling [figure: possible schedulings with a 50-msec process quantum, where each thread runs 5 msec per CPU burst]
