
CGS 3763 Operating Systems Concepts Spring 2013



Presentation Transcript


  1. CGS 3763 Operating Systems Concepts Spring 2013 Dan C. Marinescu Office: HEC 304 Office hours: M-Wd 11:30 AM - 12:30 PM

  2. Lecture 27 – Friday, March 22, 2013
  • Last time: CPU scheduling; process synchronization
  • Today: answers to student questions; process synchronization; semaphores; monitors; thread coordination with a bounded buffer
  • Next time: process synchronization
  • Reading assignment: Chapter 6 of the textbook

  3. Student questions from Monday, March 11th:
  • How does the CPU decide which type of scheduling to use? In what applications would the different CPU scheduling techniques be applicable? Can a system utilize any of the algorithms, or is it built with a specific one? If Round Robin is the fairest scheduler, why are other types used?
  • How do you determine the length of the next CPU burst for one thread? Why is the weighting factor typically 0.5 when estimating the length of the next CPU burst?
  • What is the difference between SRTF and SJF?
  • What is the importance of exponential averaging? (A short worked example of the estimate follows this list.)
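A brief, hedged note on the exponential-averaging questions above, using the standard estimate from the textbook's CPU-scheduling chapter (tau_n is the previous prediction, t_n the most recent measured burst, alpha the weighting factor):

    \tau_{n+1} = \alpha \, t_n + (1 - \alpha) \, \tau_n

With alpha = 0.5, the most recent burst and the accumulated history are weighted equally. For example, if tau_n = 10 ms and the last measured burst was t_n = 6 ms, the next prediction is tau_{n+1} = 0.5 * 6 + 0.5 * 10 = 8 ms.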

  4. Student questions from Wednesday, March 13th:
  • Priority inversion: how does a thread acquire a lock? How do locks work?
  • How are priorities set or determined by the scheduler? How does the computer know whether one process has a higher priority than another?
  • Is an error reported when starvation happens?
  • Can a process age to absolutely zero priority? If it does, is it ignored, or is it sent back to the waiting state?
  • In priority scheduling, would SJF take precedence over RR?

  5. Student questions from Friday, March 15th:
  • What method does the system use to decide which core runs a specific process? Since cores may finish different processes at different times, does it pre-allocate where a process will go?
  • What is the average maximum temperature at which a processor can function?
  • Does a higher clock rate always indicate that a computer is inefficient?
  • What happens if one boosts a clock rate too much?
  • For NUMA, what happens if two processes or threads try to access the same data at the same time?
  • What is a system contention scope?
  • What are the drawbacks of fair-share scheduling?
  • Discuss the purpose of the light-weight process (LWP).
  • What is a homogeneous processor?
  • What are soft and hard processor affinity?

  6. Contention scope
  • Contention scope determines which threads compete with one another for the CPU.
  • User-level threads (many-to-one and many-to-many models) are scheduled by the thread library; this gives process contention scope (PCS): competition is among the threads belonging to the same process.
  • Kernel-level threads are scheduled by the system scheduler; this gives system contention scope (SCS): competition is among all threads in the system.
  (A hedged pthreads sketch of requesting a contention scope follows below.)
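A minimal, hedged sketch of how a pthreads program can request a contention scope when creating a thread. The worker function is a hypothetical placeholder, and some implementations (Linux, for example) accept only PTHREAD_SCOPE_SYSTEM, so a PCS request can fail.

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) {      /* hypothetical thread body */
        (void)arg;
        return NULL;
    }

    int main(void) {
        pthread_attr_t attr;
        pthread_t tid;
        int rc;

        pthread_attr_init(&attr);
        /* Request system contention scope (SCS): the new thread competes with
           all threads in the system. PTHREAD_SCOPE_PROCESS would request PCS. */
        rc = pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
        if (rc != 0)
            fprintf(stderr, "pthread_attr_setscope failed: %d\n", rc);

        pthread_create(&tid, &attr, worker, NULL);
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }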

  7. [Slide content (likely a figure) not captured in the transcript.]

  8. System model
  • Resource types R1, R2, ..., Rm (CPU cycles, memory space, I/O devices).
  • Each resource type Ri has Wi instances.
  • Resource access model: request, use, release. (A minimal sketch of this pattern follows below.)
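A minimal sketch of the request / use / release pattern for one resource instance, here a hypothetical shared counter guarded by a POSIX mutex:

    #include <pthread.h>

    static pthread_mutex_t resource_lock = PTHREAD_MUTEX_INITIALIZER;
    static long shared_counter = 0;            /* hypothetical shared resource */

    void use_resource(void) {
        pthread_mutex_lock(&resource_lock);    /* request: block until access is granted */
        shared_counter++;                      /* use: exclusive access to the resource  */
        pthread_mutex_unlock(&resource_lock);  /* release: let other threads proceed     */
    }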

  9. Wait-for-graph: a directed graph (each edge connecting one vertex to another has a direction associated with it). The vertices are the locks and the threads. (A minimal cycle-detection sketch over such a graph follows below.)
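A minimal sketch of a wait-for graph stored as an adjacency matrix, with a depth-first search that reports whether the graph contains a cycle (i.e., a potential deadlock). The vertex layout and the sample edges are assumptions made for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    /* Vertices 0..1 are threads T0, T1; vertices 2..3 are locks L0, L1 (assumed). */
    #define NV 4
    static bool adj[NV][NV];          /* adj[u][v]: directed edge u -> v */

    /* Depth-first search; a back edge to a vertex on the current path means a cycle. */
    static bool dfs(int u, bool visited[], bool on_stack[]) {
        visited[u] = on_stack[u] = true;
        for (int v = 0; v < NV; v++) {
            if (!adj[u][v]) continue;
            if (on_stack[v]) return true;                      /* cycle found */
            if (!visited[v] && dfs(v, visited, on_stack)) return true;
        }
        on_stack[u] = false;
        return false;
    }

    static bool has_cycle(void) {
        bool visited[NV] = { false }, on_stack[NV] = { false };
        for (int u = 0; u < NV; u++)
            if (!visited[u] && dfs(u, visited, on_stack)) return true;
        return false;
    }

    int main(void) {
        /* T0 waits for L1, L1 is held by T1, T1 waits for L0, L0 is held by T0. */
        adj[0][3] = adj[3][1] = adj[1][2] = adj[2][0] = true;
        printf("deadlock: %s\n", has_cycle() ? "yes" : "no");
        return 0;
    }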

  10. Simultaneous conditions for deadlock
  • Mutual exclusion: only one process at a time can use a resource.
  • Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
  • No preemption: a resource can be released only voluntarily by the process holding it (presumably after that process has finished).
  • Circular wait: there exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0. The circular wait is reflected by a cycle in the wait-for graph. (A two-mutex example of circular wait follows below.)
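A minimal sketch showing how circular wait can arise with two pthreads mutexes acquired in opposite order. The sleep() calls only make the bad interleaving likely, so whether the program actually deadlocks depends on scheduling.

    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *t1(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock_a);   /* holds A ...                        */
        sleep(1);
        pthread_mutex_lock(&lock_b);   /* ... and waits for B                */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    static void *t2(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock_b);   /* holds B ...                        */
        sleep(1);
        pthread_mutex_lock(&lock_a);   /* ... and waits for A: circular wait */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, t1, NULL);
        pthread_create(&b, NULL, t2, NULL);
        pthread_join(a, NULL);         /* with the bad interleaving, never returns */
        pthread_join(b, NULL);
        return 0;
    }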

  11. [Slide content (likely a figure) not captured in the transcript.]

  12. Semaphores
  • Abstract data structure introduced by Dijkstra to reduce the complexity of thread coordination. A semaphore s has two components:
  • C: a count giving the status of the contention for the resource guarded by s
  • L: a list of threads waiting for the semaphore s
  • Counting semaphore: for a resource with multiple instances. Supports two operations: V, or signal(), increments the count C; P, or wait(), decrements the count C.
  • Binary semaphore: C is either 0 or 1.
  (A POSIX counting-semaphore sketch follows below.)
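A minimal sketch of a counting semaphore guarding a pool of identical resource instances, using the POSIX semaphore API. The pool size and the placeholder for using the resource are assumptions.

    #include <semaphore.h>
    #include <pthread.h>

    #define POOL_SIZE 3                 /* assumed number of resource instances */
    static sem_t pool;                  /* counting semaphore, C starts at 3    */

    static void *worker(void *arg) {
        (void)arg;
        sem_wait(&pool);                /* P: blocks when all instances are taken */
        /* ... use one instance of the resource here ... */
        sem_post(&pool);                /* V: return the instance to the pool     */
        return NULL;
    }

    int main(void) {
        sem_init(&pool, 0, POOL_SIZE);  /* 0: shared among threads, not processes */
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        sem_destroy(&pool);
        return 0;
    }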

  13. The P and V counting semaphore operations
  • The value of the semaphore S is the number of units of the resource that are currently available.
  • The P operation forces a thread to sleep until a resource protected by the semaphore becomes available, at which time the resource is immediately claimed.
  • wait(): decrements the value of the semaphore variable by 1. If the value becomes negative, the process executing wait() is blocked, i.e., added to the semaphore's queue.
  • The V operation is the inverse: it makes a resource available again after the thread has finished using it.
  • signal(): increments the value of the semaphore variable by 1. If the pre-increment value was negative (meaning there are threads waiting for the resource), it moves a blocked thread from the semaphore's waiting queue to the ready queue.

  14. The wait (P) and signal (V) operations

  P(s)  (wait):
      if s.C > 0 then s.C--;
      else join s.L;

  V(s)  (signal):
      if s.L is empty then s.C++;
      else release a process from s.L;
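A hedged C sketch corresponding to the slide's P/V pseudocode, with a pthreads mutex protecting the count and a condition variable standing in for the explicit waiting list s.L. This version always increments in V and then wakes one waiter, which has the same effect as the slide's "if L is empty" test; the type and function names are illustrative assumptions.

    #include <pthread.h>

    typedef struct {
        int             C;      /* count: available units of the resource */
        pthread_mutex_t m;      /* protects C                             */
        pthread_cond_t  L;      /* stands in for the waiting list s.L     */
    } semaphore;

    void sem_P(semaphore *s) {                 /* wait */
        pthread_mutex_lock(&s->m);
        while (s->C == 0)                      /* no unit available:        */
            pthread_cond_wait(&s->L, &s->m);   /*   join the waiting "list" */
        s->C--;                                /* claim one unit            */
        pthread_mutex_unlock(&s->m);
    }

    void sem_V(semaphore *s) {                 /* signal */
        pthread_mutex_lock(&s->m);
        s->C++;                                /* return one unit             */
        pthread_cond_signal(&s->L);            /* release one waiting thread  */
        pthread_mutex_unlock(&s->m);
    }

A semaphore guarding, say, three resource instances can then be declared as: semaphore s = { 3, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };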
