
Process Synchronization

Process Synchronization. ICS 240: Operating Systems Instructor: William McDaniel Albritton Information and Computer Sciences Department at Leeward Community College


Presentation Transcript


  1. Process Synchronization ICS 240: Operating Systems Instructor: William McDaniel Albritton Information and Computer Sciences Department at Leeward Community College Original slides by Silberschatz, Galvin, and Gagne from Operating System Concepts with Java, 7th Edition with some modifications Also includes material by Dr. Susan Vrbsky from the Computer Science Department at the University of Alabama

  2. Background • Cooperating and Concurrent Processes • Executions overlap in time and they need to be synchronized • Cooperating processes may share a logical address space (such as the same code and data segments) as well as share data through files or messages • Concurrent access to shared data may result in data inconsistency • Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes

  3. Example • Suppose that we wanted to provide a solution to the producer-consumer problem that has a single shared buffer • We can do so by having an integer count that keeps track of the number of items in the buffer (an array) • Initially, count is set to 0 • count is incremented by the producer after it inserts a new object into the buffer array and is decremented by the consumer after it removes an object from the buffer

  4. Simulating Shared Memory in Java • Both the Producer and Consumer share the same BoundedBuffer object • This emulates shared memory, as Java does not support shared memory directly

  5. Producer-Consumer Problem • Paradigm for cooperating processes, producer process produces information that is consumed by a consumer process • unbounded-buffer places no practical limit on the size of the buffer • Used in Factory.java example program in Chapter 4 on Threads • bounded-buffer assumes that there is a fixed buffer size • Used in Factory.java example program for Chapter 6 on Process Synchronization

  6. Producer-Consumer Problem • Producer & Consumer objects share the same BoundedBuffer object • Class BoundedBuffer has a buffer array which is an array of Objects • So the buffer array can hold any type of object • Class BoundedBuffer is implemented as a circular array (buffer) with two indexes • Index in: next free position in buffer array • Index out: first filled position in buffer array

  7. Producer-Consumer Problem • Class BoundedBuffer has a count variable • Keeps track of the number of items currently in the buffer array • Variable BUFFER_SIZE is the maximum size of the buffer array • Buffer array is empty if count==0 • Buffer array is full if count==BUFFER_SIZE • while loop is used to block the producer & consumer when they cannot use the buffer array
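  A minimal, hedged sketch of the BoundedBuffer state described above (the field names follow the slides, but the exact code of the textbook's Factory.java example is not reproduced in this transcript, so details may differ):

  public class BoundedBuffer {
      private static final int BUFFER_SIZE = 5; // assumed maximum size, for illustration only
      private int count = 0;                    // number of items currently in the buffer
      private int in = 0;                       // index of the next free position
      private int out = 0;                      // index of the first filled position
      private Object[] buffer = new Object[BUFFER_SIZE]; // circular array of Objects
  }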

  8. Bounded-Buffer – Shared-Memory Solution • An interface for buffers • Interfaces are used to enforce the method names of a class • Makes programs more flexible
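  A hedged sketch of such an interface (the name Buffer and the method signatures are assumptions based on the surrounding slides, not the textbook's exact code):

  public interface Buffer {
      void insert(Object item); // called by the producer to add an item
      Object remove();          // called by the consumer to take an item
  }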

  9. Bounded-Buffer – Shared-Memory Solution

  10. Producer-Consumer Problem • Producer invokes (calls) the insert() method • Puts an item into the buffer • In the program, the item is a Date object • Class java.util.Date represents a specific instant in time, with millisecond precision • The toString() method has format: “dow mon dd hh:mm:ss zzz yyyy” • For example:“Wed Mar 12 22:30:09 GMT-10:00 2008” • Consumer invokes (calls) the remove() method • Takes an item (Date object) from the buffer
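  A small, hedged illustration of java.util.Date and its toString() format (the class name DateDemo is made up for this example; the printed value depends on when and where it runs):

  import java.util.Date;

  public class DateDemo {
      public static void main(String[] args) {
          Date now = new Date();   // the current instant, with millisecond precision
          System.out.println(now); // e.g. "Wed Mar 12 22:30:09 GMT-10:00 2008"
      }
  }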

  11. Bounded-Buffer - insert() method • Producer calls this method
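  The slide's code is not reproduced in this transcript. Continuing the hedged BoundedBuffer sketch from slide 7, an insert() method that busy-waits while the buffer is full might look like this (details may differ from the textbook's version):

  // Producer calls this method to add an item to the circular buffer
  public void insert(Object item) {
      while (count == BUFFER_SIZE) {
          ; // do nothing -- the buffer is full, so the producer spins (busy waiting)
      }
      buffer[in] = item;           // put the item in the next free slot
      in = (in + 1) % BUFFER_SIZE; // advance the index circularly
      ++count;                     // one more item in the buffer
  }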

  12. Bounded-Buffer - remove() method • Consumer calls this method
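  Likewise, a hedged sketch of a remove() method that busy-waits while the buffer is empty (again following the description on slide 7, not the exact textbook code):

  // Consumer calls this method to take an item from the circular buffer
  public Object remove() {
      while (count == 0) {
          ; // do nothing -- the buffer is empty, so the consumer spins (busy waiting)
      }
      Object item = buffer[out];     // take the oldest item
      out = (out + 1) % BUFFER_SIZE; // advance the index circularly
      --count;                       // one less item in the buffer
      return item;
  }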

  13. Java Threads • Java threads are managed by the JVM (Java Virtual Machine) • The JVM can be thought of as a software computer that runs inside a hardware computer • Java threads may be created by: • Implementing the Runnable interface
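  A minimal, hedged sketch of creating a thread by implementing Runnable (the class name Worker is made up for this example):

  public class Worker implements Runnable {
      public void run() {
          System.out.println("running in " + Thread.currentThread().getName());
      }

      public static void main(String[] args) {
          Thread t = new Thread(new Worker()); // wrap the Runnable in a Thread
          t.start();                           // the JVM creates the thread and calls run()
      }
  }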

  14. Java Thread Methods & States • The start() method allocates memory for a new thread in the JVM, and calls the run() method • The sleep() method causes the currently executing thread to sleep (cease execution) for the specified number of milliseconds
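  A hedged example of these two methods together (the class name SleepDemo is illustrative only):

  public class SleepDemo implements Runnable {
      public void run() {
          try {
              Thread.sleep(1000); // this thread ceases execution for about 1000 ms
          } catch (InterruptedException e) {
              // sleep() can be interrupted; a real program would handle this properly
          }
          System.out.println("woke up");
      }

      public static void main(String[] args) {
          Thread t = new Thread(new SleepDemo());
          t.start(); // allocates a new thread in the JVM and calls run()
      }
  }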

  15. Producer-Consumer Problem • In the Factory.java example program, the methods insert() and remove() called by the producer and consumer may not function correctly when the methods are executed concurrently • This is because of something called a race condition

  16. Race Condition • A race condition is a situation in which several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place • In other words, outcome depends on order in which the instructions are executed • To understand how this works, we need to think a little about machine language, which manipulates the registers in the CPU • When we compile a program, this converts the source code (Java code in the *.java file) to machine code (bytecode in the *.class file)

  17. Race Condition • ++count could be implemented in the CPU as
  register1 = count
  register1 = register1 + 1
  count = register1
  • --count could be implemented in the CPU as
  register2 = count
  register2 = register2 - 1
  count = register2

  18. Race Condition • Consider this arbitrary order (note that other combinations are possible) of execution with initial value of count = 5
  S0: producer register1 = count {register1 = 5}
  S1: producer register1 = register1 + 1 {register1 = 6}
  S2: consumer register2 = count {register2 = 5}
  S3: consumer register2 = register2 - 1 {register2 = 4}
  S4: producer count = register1 {count = 6}
  S5: consumer count = register2 {count = 4}
  • We end up with the incorrect value of count = 4 • Other combinations of statements can also give us correct or incorrect results

  19. Race Condition • To prevent incorrect results when sharing data, we need to make sure that only one process at a time manipulates the shared data • In this example, the variable count should be accessed and changed only by one process at a time

  20. Critical-Section Problem • Race Condition - When there is concurrent access to shared data and the final outcome depends upon order of execution. • Entry Section - Code that requests permission to enter its critical section. • Critical Section - Section of code where shared data is accessed. • Exit Section - Code that is run after completing the critical section to signal that the process has finished running its critical section • Remainder Section - Code that is run after completing the exit section

  21. Structure of a Typical Process
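  The figure on this slide is not reproduced in the transcript. Its general shape can be sketched as follows (the class and method names are placeholders, not code from the slides):

  public class TypicalProcess {
      public void run() {
          while (true) {
              entrySection();     // request permission to enter the critical section
              criticalSection();  // access the shared data
              exitSection();      // signal that the critical section is finished
              remainderSection(); // everything else
          }
      }

      // placeholder methods for illustration only
      private void entrySection() { }
      private void criticalSection() { }
      private void exitSection() { }
      private void remainderSection() { }
  }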

  22. Solution to Critical-Section Problem • Solution to critical-section problem must satisfy the following three requirements • Mutual Exclusion • If a process is executing in its critical section, then no other processes can be executing in their critical sections • In other words, only one process can enter its critical section at a time • Assume that each process has one critical section, and that several processes have the same critical section

  23. Solution to Critical-Section Problem • Solution to critical-section problem must satisfy the following three requirements • Progress • If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely • In other words, a decision on which process will be next must be made only by the processes that are trying to enter their critical section • In short, the system should make progress on entering the critical section

  24. Solution to Critical-Section Problem • Solution to critical-section problem must satisfy the following three requirements • Bounded Waiting • A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted • In other words, once a process wants to get into its critical section, other processes are restricted in the number of times they can get into their critical sections • In short, there should be a bound on how long a process has to wait

  25. Solution to Critical-Section Problem • If your solution satisfies the 3 requirements: • Mutual exclusion, progress, and bounded waiting • You will have no: • Starvation • there exists a process that never gets into its critical section • Deadlock • two or more processes are waiting for an event that will not occur

  26. Peterson’s Solution • Peterson’s Solution solves the Critical-Section Problem for TWO processes only • 2 processes (P0 and P1) share 2 variables • int turn; • boolean readyFlag[2]; • The variable turn indicates whose turn it is to enter its critical section • If turn == 0, then P0 is allowed to enter its critical section • If turn == 1, then P1 is allowed to enter its critical section

  27. Peterson’s Solution for Process Pi

  28. Peterson’s Solution • Peterson’s Solution continued • 2 processes (P0 and P1) share 2 variables: • int turn; • boolean readyFlag[2]; • The readyFlag array is used to indicate if a process is ready to enter its critical section • If readyFlag[0] == true, the process P0 is ready to enter its critical section • readyFlag[i] == true implies that Pi is either ready to enter the critical section, or running its critical section • If readyFlag[0] == false, the process P0 is not trying to enter its critical section (it is either outside it or has finished it)

  29. Algorithm Description • Algorithm for Peterson’s Solution • When Pi (where i is 0 or 1) is ready to enter the critical section • Pi assigns readyFlag[i] = true • This statement says that this process wants to enter its critical section • Pi assigns turn = j (where j is the other process, so j = 1 - i) • This statement allows the other process to enter its critical section

  30. Algorithm Description • Comments on the algorithm • If both processes try to enter their critical sections at the same time, then turn will be set to 0 or 1 at roughly the same time • Since both processes share the turn variable, only one assignment will last, as one process will quickly overwrite the value from the other process • For example, P0 wants to enter its critical section, so turn=1 • In the next nanosecond, P1 wants to enter its critical section, so turn=0 • So in this case, P0 is allowed to enter its critical section first

  31. Peterson’s Solution for Process Pi

  32. Algorithm Description • Initial values:
  boolean[] readyFlag = {false, false}; // readyFlag[0] = false, readyFlag[1] = false
  int turn = 1;

  33. Algorithm Description
  • Code for P1
  while (true) {
      readyFlag[1] = true;   // P1 ready
      turn = 0;              // P0 can go
      while (readyFlag[0] == true && turn == 0) {
          // do nothing
      }
      // critical section
      readyFlag[1] = false;  // done
      // remainder section
  }
  • Code for P0
  while (true) {
      readyFlag[0] = true;   // P0 ready
      turn = 1;              // P1 can go
      while (readyFlag[1] == true && turn == 1) {
          // do nothing
      }
      // critical section
      readyFlag[0] = false;  // done
      // remainder section
  }

  34. Correctness of Solution • Criteria 1: mutual exclusion • Pi can only enter its critical section if either readyFlag[j]==false or turn==i • Either case will make the condition of the 2nd while statement false, so the process can enter its critical section • If both processes want to enter their critical sections at the same time, both readyFlag[0]==true and readyFlag[1]==true, but either turn==0 or turn==1, so only one process at a time can enter its critical section (mutual exclusion)

  35. Correctness of Solution • Criteria 2 & 3: progress and bounded waiting • Pi can be prevented from entering its critical section only if it is stuck in the 2nd while loop with readyFlag[j]==true and turn==j • If Pj does not want to enter its critical section, then readyFlag[j]==false, so Pi can then enter its critical section • If Pj does want to enter its critical section, then readyFlag[j]==true & either turn==i or turn==j • If turn==i, then Pi will enter its critical section • If turn==j, then Pj will enter its critical section

  36. Correctness of Solution • Criteria 2 & 3: progress and bounded waiting • When Pj exits its critical section • Pj will assign readyFlag[j]=false, so Pi can enter its critical section • If Pj wants to enter its critical section again, then it will assign readyFlag[j]=true and turn=i • So Pi can enter its critical section • Therefore, Pi will eventually enter its critical section (progress) after waiting for Pj to enter and finish its critical section at most one time (bounded waiting)

  37. Locks • Solutions to the critical-section problem all have one minimum requirement • This requirement is a lock • Locks prevent race conditions • This is because a process must acquire the lock before entering its critical section and release the lock after exiting its critical section

  38. Critical Section Using Locks • General solution to the critical-section problem emphasizing the use of locks to prevent race conditions • Algorithm:
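  The algorithm figure is not reproduced in this transcript. A hedged sketch of the same structure, using java.util.concurrent.locks.ReentrantLock as a stand-in for the generic lock on the slide (the class name LockedCounter is made up for this example):

  import java.util.concurrent.locks.ReentrantLock;

  public class LockedCounter {
      private final ReentrantLock lock = new ReentrantLock();
      private int count = 0;

      public void increment() {
          lock.lock();        // acquire the lock (entry section)
          try {
              ++count;        // critical section: only one thread at a time gets here
          } finally {
              lock.unlock();  // release the lock (exit section)
          }
          // remainder section
      }
  }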

  39. Synchronization Hardware • Many systems provide hardware support for critical section code • This makes programming easier • Also makes the overall system more efficient

  40. Synchronization Hardware • Uniprocessors (single processor system) • When shared variables are being modified, the processor disables interrupts • Currently running code executes without preemption • Unfortunately, this approach is inefficient on multiprocessor systems • The reason is that messages have to be passed to all processors whenever shared variables are being modified • All these excess messages slow down the operating system

  41. Synchronization Hardware • Modern machines provide special atomic hardware instructions • Atomic means that a certain group of instructions cannot be interrupted • In other words, several instructions form one uninterruptible unit • For example, the machine instructions for ++count can be implemented atomically in the CPU as one uninterruptible unit:
  register1 = count
  register1 = register1 + 1
  count = register1
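  In Java, the same idea is exposed through the java.util.concurrent.atomic classes; a hedged illustration (not from the original slides) of an increment that executes as one uninterruptible unit:

  import java.util.concurrent.atomic.AtomicInteger;

  public class AtomicCount {
      private final AtomicInteger count = new AtomicInteger(0);

      public void produce() {
          count.incrementAndGet(); // load, add 1, and store as a single atomic operation
      }

      public void consume() {
          count.decrementAndGet(); // likewise atomic, so no race condition on count
      }
  }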

  42. Semaphore • Invented by Edsger W. Dijkstra in 1965 • Very famous Dutch computer scientist • Invented many algorithms and famously argued against the GOTO statement • For example, you may have studied Dijkstra’s algorithm, which solves the shortest-path problem • “Computer Science is no more about computers than astronomy is about telescopes.” • Focused on the theory of computer science • Programmers should use every trick and tool possible for the complex task of programming

  43. Semaphore • A semaphore is a synchronization tool that does not require busy waiting • Busy waiting (spinning, or spinlock) is continual looping • Continually checks to see if a condition is true • For example, a process that is waiting for a lock to become available is doing busy waiting • Semaphores are simple and powerful • Can be used to solve wide variety of synchronization problems

  44. Semaphore • A semaphore has (1) an integer variable (value) and (2) a queue of processes • The integer variable (value) is modified by two atomic methods: acquire() and release() • Originally called P() and V() • P = proberen = Dutch for “to test” • V = verhogen = Dutch for “to increment” • acquire() is used before the critical section to prevent access if other processes are using it • release() is used after the critical section to allow other processes to access it

  45. Semaphore as General Synchronization Tool • This code uses a binary semaphore (where value == 0 or value == 1) to control access to the critical section
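  The slide's code is not reproduced in this transcript. A hedged sketch of the same pattern, using the standard java.util.concurrent.Semaphore as a stand-in for the book's Semaphore class (the class name SemaphoreDemo is made up for this example):

  import java.util.concurrent.Semaphore;

  public class SemaphoreDemo implements Runnable {
      private static final Semaphore sem = new Semaphore(1); // binary semaphore, value starts at 1

      public void run() {
          try {
              sem.acquire();      // blocks if another thread is in its critical section
              try {
                  // critical section
                  System.out.println(Thread.currentThread().getName() + " in critical section");
              } finally {
                  sem.release();  // lets a waiting thread proceed
              }
          } catch (InterruptedException e) {
              Thread.currentThread().interrupt(); // acquire() was interrupted
          }
          // remainder section
      }

      public static void main(String[] args) {
          new Thread(new SemaphoreDemo()).start();
          new Thread(new SemaphoreDemo()).start();
      }
  }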

  46. Semaphore Types • Counting semaphore • Integer value can range over an unrestricted domain • Binary semaphore • Integer value can only be 0 or 1 • Can be simpler to implement • Also known as mutex locks • Mutex is short for mutual exclusion

  47. Semaphore Implementation • Implementation of acquire() • Implementation of release()

  48. Semaphore Implementation • Two more operations • block() – place the process invoking the operation on the appropriate waiting queue • suspend the process invoking it (wait) • wakeup() – remove one of the processes in the waiting queue and place it in the ready queue • resume one process
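  The implementation code on slide 47 is not reproduced in this transcript. A hedged sketch that combines the integer value from slide 44 with the block()/wakeup() idea above, using Java's built-in wait()/notify() to play those roles (so the details differ from the textbook's pseudocode; the class name SimpleSemaphore is made up):

  public class SimpleSemaphore {
      private int value;

      public SimpleSemaphore(int value) {
          this.value = value;
      }

      // acquire(): wait until value is positive, then decrement it
      public synchronized void acquire() throws InterruptedException {
          while (value == 0) {
              wait();   // plays the role of block(): suspend the calling thread
          }
          value--;
      }

      // release(): increment value and wake one waiting thread
      public synchronized void release() {
          value++;
          notify();     // plays the role of wakeup(): resume one waiting thread
      }
  }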

  49. Possible Problems with Semaphore • Prone to programmer errors • For example, swapping the acquire() and release() calls by mistake • Starvation is possible • Starvation is indefinite blocking of a process • For example, a process may never be removed from the semaphore queue in which it is waiting

  50. Possible Problems with Semaphore • May have deadlock if synchronization is not specified correctly • Deadlock is when two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes
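  A hedged illustration of such a deadlock (a common textbook-style example, not taken from these slides): each thread acquires its first semaphore and then waits forever for the one the other thread holds.

  import java.util.concurrent.Semaphore;

  public class DeadlockDemo {
      static final Semaphore s = new Semaphore(1);
      static final Semaphore q = new Semaphore(1);

      public static void main(String[] args) {
          new Thread(() -> {
              try {
                  s.acquire();         // P0 holds S ...
                  Thread.sleep(100);
                  q.acquire();         // ... and waits for Q, which P1 holds
                  q.release();
                  s.release();
              } catch (InterruptedException e) { /* ignored in this sketch */ }
          }).start();

          new Thread(() -> {
              try {
                  q.acquire();         // P1 holds Q ...
                  Thread.sleep(100);
                  s.acquire();         // ... and waits for S, which P0 holds
                  s.release();
                  q.release();
              } catch (InterruptedException e) { /* ignored in this sketch */ }
          }).start();
      }
  }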
