
Operating System Concept


Presentation Transcript


  1. Operating System Concept • Instructor: Prof. 周立德 • TA: 林昱宏

  2. Ch7 Process Synchronization • Shared Memory and Race Condition • Critical Section • Design Critical Section • SW solution • HW solution • Semaphore • Monitor • Synchronization Problem • Bounded Buffer • Reader and Writer • Dining Philosophers

  3. Shared Memory and Race Condition • Def: Processes communicate with each other by accessing a shared memory region; the OS provides only the shared memory and no additional support. • The programmer's responsibility • Must supply a synchronization mechanism that enforces mutually exclusive access • Race Condition • Under shared-memory communication, if mutually exclusive access to shared variables is not enforced, the final value of a shared variable can depend on the relative execution order of the processes; different interleavings yield different, unpredictable results.
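The lost update behind a race condition can be made concrete by treating `counter = counter + 1` as three machine steps (load, add, store) and replaying two different interleavings. The sketch below is an illustrative simulation, not code from the slides; the `run` helper and its schedule encoding are assumptions.

```python
def run(schedule):
    """Simulate two processes each doing counter = counter + 1,
    executing their load/add/store steps in the given order."""
    counter = 0
    regs = {0: 0, 1: 0}   # per-process CPU register
    step = {0: 0, 1: 0}   # next step index per process
    for p in schedule:    # p is the process id taking the next step
        s = step[p]
        if s == 0:
            regs[p] = counter      # load
        elif s == 1:
            regs[p] += 1           # add
        elif s == 2:
            counter = regs[p]      # store
        step[p] += 1
    return counter

# Serial execution: correct result 2.
print(run([0, 0, 0, 1, 1, 1]))   # 2
# Interleaved: both load 0, both store 1, so one update is lost.
print(run([0, 1, 0, 1, 0, 1]))   # 1
```

The second schedule shows why the final value depends on the relative execution order: both processes load the same initial value before either stores.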

  4. Critical Section • Def: The set of statements in a process that access shared variables; all other code is called the Remainder Section • Arch. • Repeat • Entry Section • C.S. • Exit Section • R.S. • Until False • C.S. design means designing the Entry Section and the Exit Section to satisfy: • Mutual Exclusion • Progress • Bounded Waiting

  5. Critical Section(Cont.) • Mutual Exclusion • If process Pi is executing in its critical section, then no other processes can be executing in their critical sections. • Progress • If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely. • Bounded Waiting • A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. • Assume that each process executes at a nonzero speed • No assumption concerning relative speed of the n processes.

  6. Design Critical Section • SW Solution • Algo. 1 • Mutual is OK • Progress is not OK • A process that does not want to enter the C.S. can block one that does • Bounded Waiting is OK • Algo. 2 • Mutual is OK • Progress is not OK • Possibility of deadlock exists (each process can defer to the other indefinitely) • Bounded Waiting is OK • Algo. 3 • All conditions are satisfied

  7. Design Critical Section (Cont.) • HW instr. • Test-and-Set • Mutual Exclusion is OK • Progress is OK • Bounded Waiting is not OK • Other processes may starve (the same process may enter the C.S. repeatedly) • SWAP • Mutual Exclusion is OK • Progress is OK • Bounded Waiting is not OK

  8. Design Critical Section (Cont.) • Semaphore (Binary) • Solution to C.S. design and synchronization • Data type • Two atomic operations • P(S): while S ≤ 0 do no-op; S--; // called wait(S) • V(S): S++; // called signal(S)
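As a sketch of how P and V behave, the class below implements a counting semaphore on top of Python's `threading.Condition`, replacing the slide's busy-wait with a blocking wait. The class, method, and worker names are illustrative assumptions, not part of the slides.

```python
import threading

class Semaphore:
    """Counting semaphore sketch: P blocks while the value is 0."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def P(self):                      # wait(S)
        with self._cond:
            while self._value == 0:
                self._cond.wait()
            self._value -= 1

    def V(self):                      # signal(S)
        with self._cond:
            self._value += 1
            self._cond.notify()

mutex = Semaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(1000):
        mutex.P()                     # entry section
        counter += 1                  # critical section
        mutex.V()                     # exit section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                        # 4000: no lost updates
```

With the semaphore protecting the increment, the final count is always 4 × 1000, which is exactly the mutual-exclusion guarantee the slide describes.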

  9. Design Critical Section (Cont.) • Critical Section Design • mutex : semaphore = 1 (init. value) • Process i • Repeat • P(mutex) • C.S. • V(mutex) • R.S. • Until False • All conditions are satisfied

  10. Design Critical Section (Cont.) • Monitor • Def: an abstract data type used to resolve synchronization problems • Shared Data • Operations (Procedures) • Initialization Code • Mutual exclusion is provided automatically; the programmer need only concentrate on synchronization

  11. Synchronization Problem • Bounded Buffer – Producer and Consumer • Shared variables • mutex : semaphore = 1 • empty : semaphore = n • full : semaphore = 0 • Producer • Repeat • Produce an item • wait(empty) • wait(mutex) • Add item to buffer • signal(mutex) • signal(full) • Until False • Consumer • Repeat • wait(full) • wait(mutex) • Retrieve an item from buffer • signal(mutex) • signal(empty) • Consume item • Until False
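The producer/consumer pseudocode above maps directly onto `threading.Semaphore`. The sketch below follows the same wait/signal pattern; the buffer capacity, item count, and variable names are assumed for illustration.

```python
import threading
from collections import deque

N = 5                                  # buffer capacity (assumed)
buffer = deque()
mutex = threading.Semaphore(1)         # mutual exclusion on the buffer
empty = threading.Semaphore(N)         # counts empty slots
full = threading.Semaphore(0)          # counts filled slots
consumed = []

def producer():
    for item in range(20):
        empty.acquire()                # wait(empty)
        mutex.acquire()                # wait(mutex)
        buffer.append(item)            # add item to buffer
        mutex.release()                # signal(mutex)
        full.release()                 # signal(full)

def consumer():
    for _ in range(20):
        full.acquire()                 # wait(full)
        mutex.acquire()                # wait(mutex)
        consumed.append(buffer.popleft())
        mutex.release()                # signal(mutex)
        empty.release()                # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(20)))     # True: FIFO order, nothing lost
```

Note the wait(empty)/wait(mutex) order in the producer: taking mutex first and then blocking on empty could leave the consumer unable to drain the buffer.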

  12. Synchronization Problem • Readers and Writers • wrt : semaphore = 1 • readcount : int = 0 • mutex : semaphore = 1 • Writer • Repeat • wait(wrt) • Perform writing • signal(wrt) • Until False • Reader • Repeat • wait(mutex) • readcount = readcount + 1 • If readcount == 1 then • wait(wrt) • signal(mutex) • Perform reading • wait(mutex) • readcount = readcount – 1 • If readcount == 0 then • signal(wrt) • signal(mutex) • Until False
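The first-readers-writers protocol above translates almost line for line to Python semaphores: the first reader locks out writers and the last reader readmits them. The thread counts, the written values, and the `reads` log below are assumed for illustration.

```python
import threading

wrt = threading.Semaphore(1)       # held by a writer, or by the reader group
mutex = threading.Semaphore(1)     # protects readcount
readcount = 0
data = 0
reads = []

def writer(value):
    global data
    wrt.acquire()                  # wait(wrt)
    data = value                   # perform writing
    wrt.release()                  # signal(wrt)

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:             # first reader locks out writers
        wrt.acquire()
    mutex.release()
    reads.append(data)             # perform reading
    mutex.acquire()
    readcount -= 1
    if readcount == 0:             # last reader readmits writers
        wrt.release()
    mutex.release()

threads = [threading.Thread(target=writer, args=(v,)) for v in (1, 2)]
threads += [threading.Thread(target=reader) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(all(r in (0, 1, 2) for r in reads))   # every read saw a complete value
```

Because readers never overlap a writer, each read observes one of the fully written values; the exact values depend on scheduling, which is the point of the protocol.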

  13. Synchronization Problem • Dining Philosophers • chopstick[5] : semaphore • Process i • Repeat • wait(chopstick[i]) • wait(chopstick[(i+1) mod 5]) • Eating • signal(chopstick[i]) • signal(chopstick[(i+1) mod 5]) • Thinking • Until False • Deadlock may occur (every philosopher may pick up the left chopstick at the same time)

  14. Synchronization Problem (Cont.) • Type dining-ph = Monitor • Var • state : array[5] of (thinking, hungry, eating) • self : array[5] of condition • Procedure entry pickup(i) • state[i] = hungry • test(i) • If (state[i] != eating) then • self[i].wait • Procedure test(k) • If (state[(k+4) mod 5] != eating AND state[k] == hungry AND state[(k+1) mod 5] != eating) then • state[k] = eating • self[k].signal

  15. Synchronization Problem(Cont.) • Procedure putdown(i) • State[i]=thinking • Test((i+4) mod 5) • Test((i+1)mod 5) • Init. Code • For i=0 to 4 • State[i]=thinking • Usage • dp:dining-ph • Philosopher (i) • Repeat • dp.pickup(i) • Eating • dp.putdown(i) • Thinking • Until False

  16. Ch 8 Deadlock • Deadlock • Resource Allocation Graph • Basic facts • Handling Deadlocks • Combined Approach to Deadlock Handling

  17. Ch 8 Deadlock • Def: A set of processes in the system wait for one another in a circular chain, so that none of them can proceed; CPU utilization and throughput drop as a result • Four necessary conditions for deadlock • Mutual exclusion • Only one process at a time can use a resource. • Hold and wait • A process holding at least one resource is waiting to acquire additional resources held by other processes. • No preemption • A resource can be released only voluntarily by the process holding it, after that process has completed its task. • Circular wait • There exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

  18. Resource Allocation Graph with deadlock

  19. Basic Facts • No cycle → no deadlock • A cycle may exist without deadlock (when resource types have multiple instances) • If all resources are single-instance, then a cycle implies a deadlock

  20. Handling Deadlocks • Deadlock prevention • Deadlock avoidance • Pros • no deadlock • Cons • lower utilization and throughput • Deadlock detection and recovery • Pros • relatively higher utilization and throughput • Cons • the system may actually enter a deadlock state • recovery cost is high

  21. Handling Deadlocks • Deadlock prevention • Break one of the four necessary conditions • Mutual Exclusion • Impossible to break (some resources are intrinsically non-sharable) • Hold and wait • Require that a process may hold resources only if it can acquire all the resources it needs at once; otherwise it holds none • Alternatively, allow a process to hold some resources, but require it to release everything it holds before requesting any more • No preemption • Allow preemption: a higher-priority process may take resources away from other processes to complete its work • Circular wait • Number every resource; processes must request resources in increasing order of resource number
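The resource-numbering rule for breaking circular wait can be sketched with two locks that every thread acquires in the same fixed order. The lock and task names below are assumptions for illustration; with a consistent order, no cycle of waits can form, so both tasks always finish.

```python
import threading

lock_a = threading.Lock()          # resource #1
lock_b = threading.Lock()          # resource #2
log = []

def task(name):
    # Both tasks need both resources; both request #1 before #2,
    # so a circular wait (T1 holds #1 waiting for #2 while
    # T2 holds #2 waiting for #1) is impossible.
    for lk in (lock_a, lock_b):    # ascending resource number
        lk.acquire()
    log.append(name)               # work that needs both resources
    for lk in (lock_b, lock_a):    # release in reverse order
        lk.release()

t1 = threading.Thread(target=task, args=("T1",))
t2 = threading.Thread(target=task, args=("T2",))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(log))                 # ['T1', 'T2']: both completed
```

If one task instead acquired lock_b before lock_a, the classic two-lock deadlock would become possible under unlucky scheduling.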

  22. Handling Deadlocks • Deadlock Avoidance • When a process requests an available resource, the OS considers: • how many resources each process currently holds • how many more resources each process needs to complete its job • the resources currently available in the system • Execute the Banker's algorithm (which includes the safety algorithm) • If the resulting state is safe, grant the request • Otherwise reject the request; the process must wait and retry later

  23. Data Structures for the Banker's Algorithm Let n = number of processes, and m = number of resource types. • Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available. • Max: n × m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj. • Allocation: n × m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj. • Need: n × m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task. Need[i,j] = Max[i,j] – Allocation[i,j].

  24. Safety Algorithm 1. Let Work and Finish be vectors of length m and n, respectively. Initialize: Work := Available; Finish[i] := false for i = 1, 2, …, n. 2. Find an i such that both: (a) Finish[i] = false (b) Needi ≤ Work. If no such i exists, go to step 4. 3. Work := Work + Allocationi; Finish[i] := true; go to step 2. 4. If Finish[i] = true for all i, then the system is in a safe state.

  25. Resource-Request Algorithm for Process Pi Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj. 1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim. 2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not available. 3. Pretend to allocate the requested resources to Pi by modifying the state as follows: Available := Available – Requesti; Allocationi := Allocationi + Requesti; Needi := Needi – Requesti • If safe → the resources are allocated to Pi. • If unsafe → Pi must wait, and the old resource-allocation state is restored.

  26. Algo. Summary • Check Requesti ≤ Needi • Check Requesti ≤ Available • Build the allocation sheet for the pretended state • Run the safety algorithm • Goal of the safety algorithm • Find at least one safe sequence; the OS allocates resources following that sequence so that all processes can complete their jobs
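The safety check in the summary can be written out directly from the step 1–4 description above. The sketch below is a minimal implementation; the five-process, three-resource instance fed to it is an assumed textbook-style example, not data from the slides.

```python
def is_safe(available, allocation, need):
    """Safety algorithm sketch: repeatedly find a process whose
    Need fits in Work, let it finish, and reclaim its Allocation."""
    n, m = len(allocation), len(available)
    work = list(available)        # Work := Available
    finish = [False] * n
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):          # Work := Work + Allocation_i
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return all(finish), sequence

# Assumed example instance: 5 processes, 3 resource types.
available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_claim = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[max_claim[i][j] - allocation[i][j] for j in range(3)]
        for i in range(5)]                  # Need = Max - Allocation

safe, seq = is_safe(available, allocation, need)
print(safe, seq)   # True [1, 3, 4, 0, 2]
```

To check a request, one would first verify Requesti ≤ Needi and Requesti ≤ Available, pretend to allocate, and then call `is_safe` on the modified state, granting the request only if it returns True.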

  27. Detection Algorithm 1. Let Work and Finish be vectors of length m and n, respectively. Initialize: (a) Work := Available (b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] := false; otherwise, Finish[i] := true. 2. Find an index i such that both: (a) Finish[i] = false (b) Requesti ≤ Work. If no such i exists, go to step 4.

  28. Detection Algorithm (Cont.) 3. Work := Work + Allocationi; Finish[i] := true; go to step 2. 4. If Finish[i] = false for some i, 1 ≤ i ≤ n, then the system is in a deadlock state. Moreover, if Finish[i] = false, then Pi is deadlocked. The algorithm requires on the order of m × n² operations to detect whether the system is in a deadlocked state.

  29. Recovery Algo. • Kill processes • Kill all deadlocked processes • Kill processes one at a time until the deadlock cycle is eliminated • Resource Preemption • Selecting a victim – minimize cost. • Rollback – return to some safe state, restart the process from that state. • Starvation – the same process may always be picked as victim; include the number of rollbacks in the cost factor.

  30. Combined Approach to Deadlock Handling • Combine the three basic approaches • prevention • avoidance • detection allowing the use of the optimal approach for each class of resources in the system. • Use the most appropriate technique for handling deadlocks within each class.

  31. Ch9 Memory Management • Address binding of instructions and data to memory addresses can happen at three different stages • Compile time • Load time • Execution time

  32. Dynamic Loading • A routine is not loaded until it is called • Better memory-space utilization; an unused routine is never loaded. • Useful when large amounts of code are needed to handle infrequently occurring cases. • No special support from the operating system is required; implemented through program design.

  33. Dynamic Linking • Linking postponed until execution time. • A small piece of code, the stub, is used to locate the appropriate memory-resident library routine. • The stub replaces itself with the address of the routine, and executes the routine. • The operating system must check whether the routine is in the process's memory address space. • E.g., dynamic-link libraries (DLLs)

  34. Logical vs. Physical Address Space • The concept of a logical address space that is bound to a separate physical address space is central to proper memory management. • Logical address – generated by the CPU; also referred to as virtual address. • Physical address – address seen by the memory unit. • Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.

  35. Swapping • A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution. • Backing store – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images. • Roll out, roll in – swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed. • Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped. • Modified versions of swapping are found on many systems, e.g., UNIX and Microsoft Windows.

  36. Memory allocation • First-fit: Allocate the first hole that is big enough. • Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole. • Worst-fit: Allocate the largest hole; must also search the entire list. Produces the largest leftover hole. • First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
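The three placement strategies differ only in which qualifying hole they pick. The sketch below makes that concrete on an assumed free-hole list; the function name, hole sizes, and request size are illustrative, not from the slides.

```python
def allocate(holes, request, strategy):
    """Return the index of the hole chosen for `request` under the
    given placement strategy, or None if no hole is big enough."""
    # Candidate holes that can satisfy the request, as (size, index).
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]   # lowest address
    if strategy == "best":
        return min(candidates)[1]                       # smallest adequate hole
    if strategy == "worst":
        return max(candidates)[1]                       # largest hole

holes = [100, 500, 200, 300, 600]       # free hole sizes in KB (assumed)
print(allocate(holes, 212, "first"))    # 1: the 500 KB hole comes first
print(allocate(holes, 212, "best"))     # 3: the 300 KB hole fits tightest
print(allocate(holes, 212, "worst"))    # 4: the 600 KB hole is largest
```

Best-fit leaves an 88 KB sliver and worst-fit a 388 KB remainder, illustrating the "smallest/largest leftover hole" claims above.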

  37. Fragmentation • External Fragmentation: under contiguous allocation, no single free block is large enough to satisfy the process's size requirement, even though the free blocks together total at least the process size; because they are not contiguous, allocation still fails and memory is wasted • Internal Fragmentation: the space allocated to a process exceeds what the process needs; the leftover space inside the allocation is unused by this process yet unusable by any other • First-fit and best-fit have no internal fragmentation, but do suffer external fragmentation

  38. Paging • Divide physical memory into fixed-sized blocks called frames • Divide logical memory into blocks of same size called pages. • Keep track of all free frames. • To run a program of size n pages, need to find n free frames and load program. • Set up a page table to translate logical to physical addresses. • Internal fragmentation.
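The page-table translation the last bullet describes splits a logical address into a page number and an offset, then swaps the page number for a frame number. The sketch below assumes a 4 KB page size and a toy page table; both are illustrative values, not from the slides.

```python
PAGE_SIZE = 4096                   # 4 KB pages (assumed)
page_table = {0: 5, 1: 9, 2: 1}    # page number -> frame number (assumed)

def translate(logical_addr):
    """Map a logical address to a physical address via the page table."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]       # a missing entry would trap (page fault)
    return frame * PAGE_SIZE + offset

print(translate(0))       # 20480: page 0, offset 0 -> frame 5
print(translate(4100))    # 36868: page 1, offset 4 -> frame 9
```

The offset passes through unchanged, which is why equal-sized pages and frames make the hardware translation a simple split-and-substitute.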

  39. Paging (Cont.)

  40. Page Table • Register • Memory and PTBR (Page Table Base Register) • TLB (Translation Lookaside Buffer) register • Effective Memory Access Time • p × (TLB access time + memory access time) + (1 – p) × (TLB access time + 2 × memory access time) • p : TLB hit ratio
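The effective access time formula above can be worked through with concrete numbers. The 20 ns TLB time, 100 ns memory time, and 80% hit ratio below are assumed example values, not from the slides.

```python
def emat(hit_ratio, tlb_time, mem_time):
    """Effective memory access time per the slide's formula:
    TLB hit costs one memory access, a miss costs two
    (one for the page table, one for the data)."""
    hit = tlb_time + mem_time
    miss = tlb_time + 2 * mem_time
    return hit_ratio * hit + (1 - hit_ratio) * miss

# Assumed numbers: 20 ns TLB, 100 ns memory, 80% hit ratio.
print(emat(0.80, 20, 100))   # 140.0 ns
```

A higher hit ratio pulls the effective time toward the 120 ns hit cost; at 98% the same parameters give about 122 ns, which is why TLB hit rate dominates paging performance.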

  41. Multi-level paging • Def: paging the page table • With multi-level paging, a large page table can be divided into smaller pieces that are stored separately in memory

  42. Inverted page table • The table has one entry per frame of physical memory: with n frames, the inverted page table has n entries, each recording <process id, page no.>

  43. Segmentation • Memory-management scheme that supports the user's view of memory. • Physical memory • viewed as one sufficiently large contiguous block • Logical memory • viewed as a collection of segments, whose sizes need not be equal

  44. Paged segment memory management

  45. Page vs. Segment

  46. Ch10 Virtual Memory • Virtual memory – separation of user logical memory from physical memory. • Only part of the program needs to be in memory for execution. • Logical address space can therefore be much larger than physical address space. • Need to allow pages to be swapped in and out. • Virtual memory can be implemented via: • Demand paging • Demand segmentation

  47. Demand paging • Built on paging memory management with a lazy swapper: a process need not have all of its pages loaded to execute; only the pages it needs are brought in. Accessing a page that is not in memory raises a page fault, and the missing page must be loaded so the process can continue

  48. Valid-Invalid Bit • With each page table entry a valid–invalid bit is associated (1 → in memory, 0 → not in memory) • Initially the valid–invalid bit is set to 0 on all entries. • Example of a page table snapshot (frame # and valid–invalid bit columns). • During address translation, if the valid–invalid bit in the page table entry is 0 → page fault.

  49. Page Fault • The first reference to a page that is not in memory traps to the OS → page fault • The OS looks at another table to decide: • invalid reference → abort. • just not in memory → continue: • Get an empty frame. • Swap the page into the frame. • Reset the tables; set the validation bit to 1. • Restart the interrupted instruction; tricky cases include: • block move • auto increment/decrement addressing

  50. What happens if there is no free frame? • Page replacement – find some page in memory that is not really in use, and swap it out. • Need a page-replacement algorithm • performance – want an algorithm that results in the minimum number of page faults. • The same page may be brought into memory several times.
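Counting page faults under a replacement policy makes the "minimum number of page faults" goal measurable. The sketch below implements simple FIFO replacement; the reference string and frame counts are assumed example inputs, not from the slides.

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement: on a fault with
    all frames full, evict the page that was loaded earliest."""
    frames = deque()
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()          # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # assumed reference string
print(fifo_page_faults(refs, 3))   # 9
print(fifo_page_faults(refs, 4))   # 10
```

Note that this reference string faults more often with four frames than with three, the classic Belady's anomaly for FIFO; it is one reason replacement algorithms are compared by fault counts over reference strings.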
