
Presentation Transcript


  1. Govindrao Wanjari College of Engineering & Technology, Nagpur. Department of CSE, Session: 2017-18, Branch/Sem: CSE/4th Sem. “SCHEDULING IN OS”. Subject: OS. Subject Teacher: Prof. V. P. Lonkar

  2. OPERATING SYSTEMS SCHEDULING Jerry Breecher

  3. CPU Scheduling What Is In This Chapter? • This chapter is about how to get a process attached to a processor. • It centers on scheduling algorithms that use the CPU efficiently. • The design of a scheduler must ensure that all users get their fair share of the resources.

  4. CPU Scheduling What Is In This Chapter? • Basic Concepts • Scheduling Criteria • Scheduling Algorithms • Multiple-Processor Scheduling • Real-Time Scheduling • Thread Scheduling • Operating Systems Examples • Java Thread Scheduling • Algorithm Evaluation

  5. Scheduling Concepts • Multiprogramming: a number of programs can be in memory at the same time, which allows overlap of CPU and I/O. Jobs (batch) are programs that run without user interaction; user (time-shared) programs may have user interaction. Process is the common name for both. • CPU-I/O burst cycle: characterizes process execution, which alternates between CPU and I/O activity. CPU times are generally much shorter than I/O times. • Preemptive scheduling: an interrupt causes the currently running process to give up the CPU and be replaced by another process. CPU SCHEDULING

  6. The Scheduler • Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them • CPU scheduling decisions may take place when a process: 1. Switches from running to waiting state 2. Switches from running to ready state 3. Switches from waiting to ready state 4. Terminates • Scheduling under 1 and 4 is nonpreemptive • All other scheduling is preemptive CPU SCHEDULING

  7. The Dispatcher • Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves: • switching context • switching to user mode • jumping to the proper location in the user program to restart that program • Dispatch latency – time it takes for the dispatcher to stop one process and start another running CPU SCHEDULING

  8. Criteria For Performance Evaluation • Note usage of the words DEVICE, SYSTEM, REQUEST, JOB. • UTILIZATION The fraction of time a device is in use. (in-use time / total observation time) • THROUGHPUT The number of job completions in a period of time. (jobs / second) • SERVICE TIME The time required by a device to handle a request. (seconds) • QUEUEING TIME Time on a queue waiting for service from the device. (seconds) • RESIDENCE TIME The time spent by a request at a device. (seconds) • RESIDENCE TIME = SERVICE TIME + QUEUEING TIME. • RESPONSE TIME Time used by a system to respond to a user job. (seconds) • THINK TIME The time spent by the user of an interactive system to figure out the next request. (seconds) • The goal is to optimize both the average and the amount of variation (but beware: users also value predictability). CPU SCHEDULING
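To make these definitions concrete, here is a minimal Python sketch (the numbers are made up for illustration, not taken from the deck) that computes utilization, throughput, and residence time from a trace of requests:

    # Hypothetical trace: each request is (queueing_time, service_time), in seconds.
    requests = [(2.0, 1.0), (0.5, 1.5), (4.0, 1.0)]
    observation_time = 10.0                          # total window observed

    busy_time = sum(s for _, s in requests)          # device in use while serving
    utilization = busy_time / observation_time       # fraction of time in use
    throughput = len(requests) / observation_time    # completions per second
    residence = [q + s for q, s in requests]         # RESIDENCE = QUEUEING + SERVICE

    print(utilization, throughput, sum(residence) / len(residence))
    # 0.35 0.3 3.333...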

  9. Scheduling Behavior Most Processes Don’t Use Up Their Scheduling Quantum! CPU SCHEDULING

  10. Scheduling Algorithms • FIRST-COME, FIRST SERVED: • (FCFS) same as FIFO • Simple and fair, but performance can be poor: average queueing time may be long. • What are the average queueing and residence times for this scenario? • How do average queueing and residence times depend on the ordering of these processes in the queue? CPU SCHEDULING

  11. Scheduling Algorithms • EXAMPLE DATA:
  Process   Arrival Time   Service Time
  1         0              8
  2         1              4
  3         2              9
  4         3              5
  FCFS Gantt chart (residence time at the CPU): P1 (0-8), P2 (8-12), P3 (12-21), P4 (21-26)
  Average wait = ( (8-0) + (12-1) + (21-2) + (26-3) ) / 4 = 61/4 = 15.25 CPU SCHEDULING
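A minimal Python sketch (not from the original deck) that replays this FCFS schedule and reproduces the 15.25 figure; "wait" here is the residence (turnaround) time, completion minus arrival, as on the slide:

    jobs = [(1, 0, 8), (2, 1, 4), (3, 2, 9), (4, 3, 5)]   # (pid, arrival, service)

    clock = 0
    total_wait = 0
    for pid, arrival, service in jobs:        # already in arrival order
        clock = max(clock, arrival) + service # each job runs to completion
        total_wait += clock - arrival         # residence time at the CPU
    print(total_wait / len(jobs))             # 15.25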

  12. Scheduling Algorithms • SHORTEST JOB FIRST: • Optimal for minimizing queueing time, but impossible to implement exactly, since burst lengths are not known in advance. Instead, it tries to predict the process to schedule based on previous history. • Predicting the time the process will use on its next schedule: • t( n+1 ) = w * t( n ) + ( 1 - w ) * T( n ) • Here: t(n+1) is the predicted time of the next burst, • t(n) is the time of the burst just measured, • T(n) is the average of all previous bursts, • w is a weighting factor emphasizing either the most recent burst or the history. CPU SCHEDULING
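A small sketch of this prediction formula in Python; with a weighting factor like w = 0.5, the running prediction stands in for T(n), folding all earlier bursts into one exponentially decaying average (the burst lengths below are made up):

    def predict_next(prediction, actual, w=0.5):
        # t(n+1) = w * t(n) + (1 - w) * T(n): blend the burst just measured
        # with the decayed history of earlier bursts.
        return w * actual + (1 - w) * prediction

    prediction = 10.0                      # initial guess for the first burst
    for actual in [6.0, 4.0, 6.0, 13.0]:
        prediction = predict_next(prediction, actual)
        print(prediction)                  # 8.0, 6.0, 6.0, 9.5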

  13. CPU SCHEDULING Scheduling Algorithms PREEMPTIVE ALGORITHMS: • Yank the CPU away from the currently executing process when a higher priority process is ready. • Can be applied to both Shortest Job First and Priority scheduling. • Avoids "hogging" of the CPU. • On time-sharing machines, this type of scheme is required because the CPU must be protected from a runaway low-priority process. • Give short jobs a higher priority: perceived response time is thus better. • What are the average queueing and residence times? Compare with FCFS.

  14. Scheduling Algorithms • EXAMPLE DATA:
  Process   Arrival Time   Service Time
  1         0              8
  2         1              4
  3         2              9
  4         3              5
  Preemptive Shortest Job First Gantt chart: P1 (0-1), P2 (1-5), P4 (5-10), P1 (10-17), P3 (17-26)
  Average wait = ( (5-1) + (10-3) + (17-0) + (26-2) ) / 4 = 52/4 = 13.0 CPU SCHEDULING
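A unit-time Python sketch of this preemptive (shortest-remaining-time-first) schedule; it checks for preemption every time unit rather than only on arrivals, which yields the identical timeline here:

    jobs = {1: (0, 8), 2: (1, 4), 3: (2, 9), 4: (3, 5)}   # pid: (arrival, service)
    remaining = {pid: s for pid, (a, s) in jobs.items()}
    finish = {}

    t = 0
    while remaining:
        ready = [p for p in remaining if jobs[p][0] <= t]  # CPU is never idle here
        pid = min(ready, key=lambda p: remaining[p])       # shortest remaining time
        remaining[pid] -= 1                                # run one time unit
        t += 1
        if remaining[pid] == 0:
            del remaining[pid]
            finish[pid] = t

    print(sum(finish[p] - jobs[p][0] for p in jobs) / 4)   # 13.0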

  15. Scheduling Algorithms • PRIORITY BASED SCHEDULING: • Assign each process a priority. Schedule highest priority first. All processes within the same priority are FCFS. • Priority may be determined by the user or by some default mechanism. The system may determine the priority based on memory requirements, time limits, or other resource usage. • Starvation occurs if a low priority process never runs. Solution: build aging into a variable priority (see the sketch below). • Delicate balance between giving favorable response to interactive jobs and not starving batch jobs. CPU SCHEDULING
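A toy sketch of aging (conventions assumed, not from the slide: lower number = higher priority, and the per-tick boost amount is arbitrary):

    from dataclasses import dataclass

    @dataclass
    class Proc:
        pid: int
        priority: int        # lower number = higher priority (assumed)

    def age(ready_queue, boost=1):
        # Each scheduling tick, every waiting process drifts toward high
        # priority, so a low-priority process cannot starve forever.
        for p in ready_queue:
            p.priority = max(0, p.priority - boost)

    queue = [Proc(1, 10), Proc(2, 3)]
    age(queue)
    print([p.priority for p in queue])   # [9, 2]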

  16. CPU SCHEDULING Scheduling Algorithms ROUND ROBIN: • Use a timer to cause an interrupt after a predetermined time. Preempts the task if it exceeds its quantum. • Train of events: dispatch; time slice occurs OR process suspends on an event; put the process on some queue and dispatch the next. • Use the numbers from the earlier example to find queueing and residence times (quantum = 4 sec; worked out on slide 18). • Definitions: • Context Switch Changing the processor from running one task (or process) to another. Implies changing memory. • Processor Sharing Use of a small quantum such that each process runs frequently at speed 1/n. • Reschedule latency How long it takes from when a process requests to run until it finally gets control of the CPU.

  17. CPU SCHEDULING Scheduling Algorithms ROUND ROBIN: • Choosing a time quantum • Too short - inordinate fraction of the time is spent in context switches. • Too long - reschedule latency is too great. If many processes want the CPU, then it's a long time before a particular process can get the CPU. This then acts like FCFS. • Adjust so most processes won't use their slice. As processors have become faster, this is less of an issue.

  18. Scheduling Algorithms • EXAMPLE DATA:
  Process   Arrival Time   Service Time
  1         0              8
  2         1              4
  3         2              9
  4         3              5
  Note: this example violates the rule of thumb for quantum size, since most processes don't finish in one quantum.
  Round Robin, quantum = 4, no priority-based preemption: P1 (0-4), P2 (4-8), P3 (8-12), P4 (12-16), P1 (16-20), P3 (20-24), P4 (24-25), P3 (25-26)
  Average wait = ( (20-0) + (8-1) + (26-2) + (25-3) ) / 4 = 73/4 = 18.25 CPU SCHEDULING
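A Python sketch that replays this round-robin timeline (one assumption, consistent with the chart above: processes arriving during a slice enter the queue ahead of the preempted process):

    from collections import deque

    jobs = [(1, 0, 8), (2, 1, 4), (3, 2, 9), (4, 3, 5)]   # (pid, arrival, service)
    arrival = {p: a for p, a, s in jobs}
    remaining = {p: s for p, a, s in jobs}
    pending = deque(jobs)                  # not yet arrived, in arrival order
    queue, finish, t, quantum = deque(), {}, 0, 4

    def admit(now):
        while pending and pending[0][1] <= now:
            queue.append(pending.popleft()[0])

    admit(t)
    while queue or pending:
        if not queue:                      # CPU idle until the next arrival
            t = pending[0][1]
            admit(t)
        pid = queue.popleft()
        run = min(quantum, remaining[pid])
        t += run
        remaining[pid] -= run
        admit(t)                           # arrivals during the slice enter first
        if remaining[pid] > 0:
            queue.append(pid)              # preempted: back of the queue
        else:
            finish[pid] = t

    print(sum(finish[p] - arrival[p] for p in arrival) / 4)   # 18.25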

  19. CPU SCHEDULING Scheduling Algorithms MULTI-LEVEL QUEUES: • Each queue has its scheduling algorithm. • Then some other algorithm (perhaps priority based) arbitrates between queues. • Can use feedback to move between queues • Method is complex but flexible. • For example, could separate system processes, interactive, batch, favored, unfavored processes

  20. CPU SCHEDULING Using Priorities Here’s how the priorities are used in Windows

  21. CPU SCHEDULING Scheduling Algorithms MULTIPLE PROCESSOR SCHEDULING: • Different rules apply for homogeneous or heterogeneous processors. • Load sharing distributes the work so that all processors have an equal amount to do. • Each processor can schedule from a common ready queue (equal machines) OR can use a master-slave arrangement. Real Time Scheduling: • Hard real-time systems are required to complete a critical task within a guaranteed amount of time. • Soft real-time computing requires that critical processes receive priority over less critical ones.

  22. CPU SCHEDULING Linux Scheduling Two algorithms: time-sharing and real-time • Time-sharing • Prioritized, credit-based: the process with the most credits is scheduled next • Credits are subtracted when a timer interrupt occurs • When credit = 0, another process is chosen • When all runnable processes have credit = 0, recrediting occurs • Based on factors including priority and history • Real-time • Soft real-time • POSIX.1b compliant: two classes, FCFS and RR • Highest priority process always runs first
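A rough sketch of the credit scheme described above, modeled on older Linux kernels where recrediting was approximately credits = credits/2 + priority (the details here are assumptions, not from the slide):

    class Task:
        def __init__(self, name, priority):
            self.name, self.priority, self.credits = name, priority, 0

    def pick_next(tasks):
        # Timer interrupts subtract credits elsewhere; here we only choose.
        runnable = [t for t in tasks if t.credits > 0]
        if not runnable:                   # everyone at 0: recredit
            for t in tasks:
                t.credits = t.credits // 2 + t.priority   # priority + history
            runnable = tasks
        return max(runnable, key=lambda t: t.credits)     # most credits wins

    tasks = [Task("editor", 5), Task("batch", 2)]
    print(pick_next(tasks).name)           # editor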

  23. CPU SCHEDULING Algorithm Evaluation How do we decide which algorithm is best for a particular environment? • Deterministic modeling – takes a particular predetermined workload and defines the performance of each algorithm for that workload. • Queueing models.

  24. CPU SCHEDULING WRAPUP We've looked at a number of different scheduling algorithms. Which one works best is application dependent. A general-purpose OS will use priority-based, round-robin, preemptive scheduling. A real-time OS will use priority-based scheduling without preemption.

  25. Govindrao Wanjari College of Engineering & Technology, Nagpur. Department of CSE, Session: 2017-18, Branch/Sem: CSE/4th Sem. “DEADLOCKS”. Subject: OS. Subject Teacher: Prof. V. P. Lonkar

  26. Review: What Can Go Wrong With Threads? • Safety hazards • “Program does the wrong thing” • Liveness hazards • “Program never does the right thing” • Performance hazards • Program is too slow due to excessive synchronization

  27. A Liveness Hazard: Starvation • A thread is ready, but the scheduler does not select it to run • Can be caused by shortest-job-first scheduling

  28. Deadlock • Each thread is waiting on a resource held by another thread • So, there is no way to make progress (figure: Thread 1, Thread 2, and Thread 3 each waiting on another)

  29. Deadlock, Illustrated

  30. Necessary Conditions for Deadlock • Mutual exclusion • Resources cannot be shared • e.g., buffers, locks • Hold and wait (subset property) • A thread holds a subset of its resource needs • And, the thread is waiting for more resources • No preemption • Resources cannot be taken away • Circular wait • A needs a resource that B has • B needs a resource that A has

  31. Resource Allocation Graph with a Deadlock P => R: request edge R => P: assignment edge Silberschatz, Galvin and Gagne 2002

  32. Cycle Without Deadlock P => R: request edge R => P: assignment edge Why is there no deadlock here? Silberschatz, Galvin and Gagne 2002

  33. Graph Reduction • A graph can be reduced by a thread if all of that thread's requests can be granted • Eventually, all resources held by the reduced thread will be freed • Miscellaneous theorems (Holt, Havender): • There are no deadlocked threads iff the graph is completely reducible • The order of reductions is irrelevant • (Detail: resources with multiple units)

  34. Approaches to Deadlock • Avoid threads • Deadlock prevention • Break up one of the four necessary conditions • Deadlock avoidance • Stay live even in the presence of the four conditions • Detect and recover

  35. Approach #1: Avoid Threads • Brain-dead solution: avoid deadlock by avoiding threads • Example: GUI frameworks • Typically use a single event-dispatch thread (figure: a mouse click flows from the Operating System through the GUI Framework to the Application, which changes the background color)

  36. Approach #2: Prevention Can we eliminate: • Mutual exclusion? • Hold and wait (subset property)? • No Preemption? • Circular waiting?

  37. Lock Ordering • We can avoid circular waiting by acquiring resources in a fixed order • Example: Linux rename operation • Each open file has a semaphore • Both semaphores must be held during file operations • Linux always uses the same order for down operations • Semaphore at the lowest memory address goes first

  38. Rename Example • Process 1: rename("foo.txt", "bar.txt"); • Process 2: rename("bar.txt", "foo.txt"); (figure: both processes down() the foo semaphore, then the bar semaphore, and up() them in the reverse order; because the down() order is the same for both, neither can hold one semaphore while waiting for the other)
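A sketch of the same ordering discipline in Python, using id() as a stand-in for "semaphore at the lowest memory address goes first" (the real kernel compares the semaphores' addresses):

    import threading

    def lock_pair(a, b):
        # Acquire two locks in a fixed global order, so no two threads can
        # ever hold one lock each while waiting for the other.
        first, second = (a, b) if id(a) <= id(b) else (b, a)
        first.acquire()
        second.acquire()
        return first, second

    foo_sem, bar_sem = threading.Lock(), threading.Lock()

    # Both "rename" directions now take the semaphores in the same order.
    first, second = lock_pair(foo_sem, bar_sem)
    second.release()
    first.release()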

  39. Approach #3: Avoidance • Intuition: the four conditions do not always lead to deadlock • "necessary but not sufficient" • We can stay live if we know the resource needs of the processes

  40. Banker's Algorithm Overview • Basic idea: ensure that we always have an "escape route" • The resource graph is reducible • This can be enforced with the banker's algorithm: • When a request is made • Pretend you granted it • Pretend all other legal requests were made • Can the graph be reduced? • If so, allocate the requested resource • If not, block the thread
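A single-resource-type sketch of this check in Python ("pretend to grant, then ask whether the graph still reduces"); the variable names and example numbers are mine, not from the slide:

    def is_safe(available, allocation, need):
        # Try to reduce the graph: repeatedly find a thread whose remaining
        # need fits in what is available, let it finish, reclaim its units.
        alloc, need, done = list(allocation), list(need), [False] * len(allocation)
        while True:
            for i, d in enumerate(done):
                if not d and need[i] <= available:
                    available += alloc[i]      # thread i finishes and releases
                    done[i] = True
                    break
            else:
                return all(done)               # nobody else can proceed

    def request(amount, i, available, allocation, need):
        if amount > need[i] or amount > available:
            return False                       # illegal, or must wait anyway
        trial_alloc, trial_need = allocation[:], need[:]
        trial_alloc[i] += amount
        trial_need[i] -= amount
        return is_safe(available - amount, trial_alloc, trial_need)

    # From a safe state, grant only requests that keep an escape route:
    print(request(1, 0, 3, [5, 2, 2], [5, 2, 7]))  # True  (still reducible)
    print(request(1, 2, 3, [5, 2, 2], [5, 2, 7]))  # False (would strand 0 and 2)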

  41. Deadlock Avoidance in Practice • Static provisioning • Ensure that each active process/thread can acquire some minimum set of resources • Use admission control to maintain this invariant • This is closely related to the idea of thrashing • Each thread needs some minimum amount of resources to run efficiently

  42. Approach #4: Detection and Recovery • Not commonly used • Detection is expensive • Recovery is tricky • Possible exception: databases

  43. Deadlock Detection • Detection algorithm (sketch) • Keep track of who is waiting for resources • Keep track of who holds resources • Assume that all runnable processes release all their resources • Does this unblock a waiting process? • If yes, release that process’s resources • If processes are still blocked, we are deadlocked
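A Python sketch of this detection algorithm for a single resource type (names and numbers are mine): repeatedly "release" any process whose outstanding request can now be satisfied; whoever remains blocked is deadlocked.

    def find_deadlocked(available, holds, wants):
        # holds[p]: units p owns; wants[p]: units p is still waiting for.
        blocked, progress = set(holds), True
        while progress:
            progress = False
            for p in list(blocked):
                if wants[p] <= available:   # p could run to completion...
                    available += holds[p]   # ...releasing everything it holds
                    blocked.discard(p)
                    progress = True
        return blocked                      # nonempty => these are deadlocked

    print(find_deadlocked(0, {"A": 1, "B": 1}, {"A": 1, "B": 1}))  # {'A', 'B'}
    print(find_deadlocked(1, {"A": 1, "B": 1}, {"A": 1, "B": 1}))  # set()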

  44. Recovery • Must “back out” of a resource allocation decision • By preemption or by killing the process • This is particularly problematic for lock-based deadlocks • System can be in an inconsistent state

  45. Govindrao Wanjari College of Engineering & Technology, Nagpur. Department of CSE, Session: 2017-18, Branch/Sem: CSE/4th Sem. “MEMORY MANAGEMENT”. Subject: OS. Subject Teacher: Prof. V. P. Lonkar

  46. 7.1 Basics (figure: a memory "BROKER" sits between the CPU and memory, mapping each CPU-generated address to a memory address) What functionalities do we want to provide? • Improved resource utilization • Independence and protection • Liberation from resource limitations • Sharing of memory by concurrent processes

  47. Overall Goals of Memory Manager • Require minimal hardware support • Keep impact on memory accesses low • Keep memory management overhead low (for allocation and de-allocation of memory)

  48. 7.2 Simple Schemes for Memory Management 1. Separation of user and kernel (figure: a fence register divides memory into kernel space at low addresses and user space at high addresses; each CPU-generated address is compared against the fence register, and an address that fails the check causes a trap instead of a memory access)

  49. 7.2 Simple Schemes for Memory Management 2. Static Relocation • Memory bounds for process set at linking time when executable is created. • Memory bounds in PCB & hardware registers • Process can be swapped in & out (to same location) • Process is non-relocatable • A version exists where limits can be changed at load time. These remain fixed until completion

  50. 7.2 Simple Schemes for Memory Management 2. Static Relocation (figure: memory holds the kernel at low addresses and processes P1 ... Pn above it; each CPU-generated address is compared against the process's lower-bound and upper-bound registers, and an address outside either bound causes a trap rather than a memory access)
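A sketch (in software, for illustration only; the check in slides 48 and 50 is done by hardware on every access, and the addresses below are made up):

    def translate(addr, lower, upper):
        # Static relocation: the address is used unchanged, but every
        # CPU-generated address must fall inside the process's bounds.
        if not (lower <= addr < upper):
            raise MemoryError("trap: address out of bounds")
        return addr

    print(hex(translate(0x2100, lower=0x2000, upper=0x3000)))   # 0x2100
    # translate(0x1f00, 0x2000, 0x3000) would trap into the kernel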
