
Presentation Transcript


  1. School of Computing Science, Simon Fraser University
     CMPT 300: Operating Systems I
     Inter-Process Communication and Synchronization
     Keval Vora

  2. Cooperating Processes
     Why do processes cooperate?
     • Information sharing
     • Computation speed-up
     • Modularity, convenience
     Interprocess Communication (IPC) methods:
     • Shared memory
     • Message passing

  3. Shared Memory
     • One process creates a shared memory segment
     • Other processes attach the shared memory to their own address space
     • Shared memory is treated as regular memory
     • Synchronization is needed to prevent conflicts
     • POSIX: shm_open(), mmap()
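
     A minimal sketch of the POSIX calls named above; the segment name "/cmpt300_shm" and its size are illustrative assumptions (on some systems, link with -lrt):

     #include <fcntl.h>
     #include <stdio.h>
     #include <sys/mman.h>
     #include <unistd.h>

     #define SIZE 4096

     int main(void) {
         int fd = shm_open("/cmpt300_shm", O_CREAT | O_RDWR, 0666);  // create/open segment
         if (fd == -1) { perror("shm_open"); return 1; }
         if (ftruncate(fd, SIZE) == -1) { perror("ftruncate"); return 1; }  // set its size
         char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);  // attach to address space
         if (p == MAP_FAILED) { perror("mmap"); return 1; }
         sprintf(p, "hello from process %d", getpid());  // used like regular memory
         munmap(p, SIZE);
         close(fd);
         shm_unlink("/cmpt300_shm");  // remove the segment name
         return 0;
     }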

  4. Message Passing
     • Process A sends a message to B via the kernel: send(msg), receive(msg)
     • Naming: direct vs. indirect (ports, mailboxes)
     • Blocking vs. non-blocking
     • Buffering
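
     A small illustration of kernel-mediated message passing, sketched here with a POSIX pipe between a parent (sender) and a child (receiver); the message text is an assumption:

     #include <stdio.h>
     #include <string.h>
     #include <sys/wait.h>
     #include <unistd.h>

     int main(void) {
         int fd[2];
         if (pipe(fd) == -1) { perror("pipe"); return 1; }  // kernel-managed channel
         pid_t pid = fork();
         if (pid == 0) {                                     // child: receiver
             char buf[64];
             close(fd[1]);                                   // close unused write end
             ssize_t n = read(fd[0], buf, sizeof(buf) - 1);  // blocks until a message arrives
             if (n > 0) {
                 buf[n] = '\0';
                 printf("child received: %s\n", buf);
             }
         } else {                                            // parent: sender
             close(fd[0]);                                   // close unused read end
             const char *msg = "hello via the kernel";
             write(fd[1], msg, strlen(msg));                 // send(msg)
             close(fd[1]);
             wait(NULL);
         }
         return 0;
     }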

  5. IPC: Shared Memory vs. Message Passing
     Shared memory
     • Pros: fast (memory speed); convenient for programmers (just regular memory)
     • Cons: need to manage conflicts (tricky for distributed systems)
     Message passing
     • Pros: no conflicts; easy to exchange messages, especially in distributed systems
     • Cons: high overhead and slow; messages must be prepared; kernel involvement (sender -> kernel -> receiver); several system calls

  6. Process Synchronization

  7. Producer-Consumer Problem
     • Two processes (threads) share a buffer
     • One places items into the buffer (producer); must wait if the buffer is full
     • The other takes items from the buffer (consumer); must wait if the buffer is empty
     [Figure: circular buffer; the producer inserts at "in", the consumer removes at "out"]

  8. Producer-Consumer Problem
     How do we coordinate producer and consumer? Keep a counter of the number of items in the buffer:
     • Producer increments it after creating an item; waits if the buffer is full
     • Consumer decrements it after using an item; waits if the buffer is empty

  9. Producer & Consumer Threads
     Producer:
     while (true) {
         while (count == BUFFER_SIZE)
             ;  // do nothing
         buffer[in] = nextProduced;
         in = (in + 1) % BUFFER_SIZE;
         count++;
     }

  10. Producer & Consumer Threads
      Producer:
      while (true) {
          while (count == BUFFER_SIZE)
              ;  // do nothing
          buffer[in] = nextProduced;
          in = (in + 1) % BUFFER_SIZE;
          count++;
      }
      Consumer:
      while (true) {
          while (count == 0)
              ;  // do nothing
          nextConsumed = buffer[out];
          out = (out + 1) % BUFFER_SIZE;
          count--;
      }
      What can go wrong here?

  11. Race Condition
      In the Producer thread, count++ is implemented as
          reg1 = count
          reg1 = reg1 + 1
          count = reg1
      In the Consumer thread, count-- is implemented as
          reg2 = count
          reg2 = reg2 - 1
          count = reg2
      • Consumer and Producer threads run independently
      • The CPU scheduler decides when to switch among them
      • The switch can happen at ANY moment (at any instruction)

  12. Race Condition
      Example: initially, count is 5
      • Producer creates an item (count becomes 6)
      • Consumer consumes an item (count goes back to 5)
      • In the end, count should be 5
      A possible execution:
      t0: Producer executes reg1 = count      {reg1 = 5}
      t1: Producer executes reg1 = reg1 + 1   {reg1 = 6}
          (context switch)
      t2: Consumer executes reg2 = count      {reg2 = 5}
      t3: Consumer executes reg2 = reg2 - 1   {reg2 = 4}
      t4: Consumer executes count = reg2      {count = 4}
          (context switch)
      t5: Producer executes count = reg1      {count = 6, wrong final value}
      This is a Race Condition!
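
      The same effect is easy to reproduce with two threads; a minimal sketch (the thread function and iteration count are assumptions) where unsynchronized count++ loses updates:

      #include <pthread.h>
      #include <stdio.h>

      #define ITERS 1000000

      long count = 0;                  // shared, unprotected counter

      void *worker(void *arg) {
          (void)arg;
          for (int i = 0; i < ITERS; i++)
              count++;                 // read-modify-write: not atomic
          return NULL;
      }

      int main(void) {
          pthread_t t1, t2;
          pthread_create(&t1, NULL, worker, NULL);
          pthread_create(&t2, NULL, worker, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          // Expected 2 * ITERS, but interleaved increments usually produce less.
          printf("count = %ld (expected %d)\n", count, 2 * ITERS);
          return 0;
      }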

  13. Race Condition
      • Occurs when multiple processes manipulate shared data concurrently and the result depends on the order of manipulation
      • Data inconsistency may arise
      How to handle race conditions?
      • Mark the code segment that manipulates shared data as a critical section (CS)
      • If a process is executing its CS, no other process can execute its CS

  14. Critical-Section (CS) Problem
      A solution to the CS problem must satisfy three requirements:
      • Mutual Exclusion: if a process is executing in its CS, then no other process can execute in its CS
      • Progress: if no process is executing in its CS and there exist processes wanting to enter their CS, then selecting a process to enter its CS cannot be postponed indefinitely
      • Bounded Waiting: a bound must exist on the number of times other processes are allowed to enter their CS after a process has requested to enter its CS

  15. Solutions for the CS Problem
      On uniprocessor systems:
      • Disable interrupts while running the CS; the currently running code executes without preemption
      • Problems? Users can make the CS arbitrarily large -> unresponsive system
      Alternatives:
      • Solutions using software only
      • Solutions using hardware support

  16. Peterson's Solution
      • Software solution; no hardware support
      • Solution for two processes
      • Assumes LOAD and STORE instructions are atomic (i.e., cannot be interrupted); may not always be true on modern computers
      Shared variables: turn and flag[2]
      • turn indicates whose turn it is to enter the CS
      • flag array indicates whether a process is ready to enter the CS
      Code for process i (the other process is j):
      do {
          flag[i] = true;
          turn = j;
          while (flag[j] && turn == j)
              ;  // busy wait
          // critical section
          flag[i] = false;
          // remainder section
      } while (true);
      Does this algorithm satisfy the three requirements?
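
      A runnable sketch of the algorithm with two pthreads; the volatile qualifiers and the iteration count are assumptions made for illustration, and (as the slide notes) modern compilers and hardware may reorder these accesses unless real atomics or memory fences are used:

      #include <pthread.h>
      #include <stdio.h>

      volatile int flag[2] = {0, 0};   // flag[i]: process i wants to enter its CS
      volatile int turn = 0;           // whose turn it is when both want in
      int shared = 0;                  // variable protected by the algorithm

      void *proc(void *arg) {
          int i = *(int *)arg, j = 1 - i;
          for (int k = 0; k < 100000; k++) {
              flag[i] = 1;             // I am ready
              turn = j;                // yield priority to the other process
              while (flag[j] && turn == j)
                  ;                    // busy wait
              shared++;                // critical section
              flag[i] = 0;             // leave the critical section
          }
          return NULL;
      }

      int main(void) {
          pthread_t t0, t1;
          int id0 = 0, id1 = 1;
          pthread_create(&t0, NULL, proc, &id0);
          pthread_create(&t1, NULL, proc, &id1);
          pthread_join(t0, NULL);
          pthread_join(t1, NULL);
          printf("shared = %d (expected 200000)\n", shared);
          return 0;
      }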

  17. Synchronization Hardware
      Modern machines provide special atomic (non-interruptible) instructions, e.g.:
      • test_and_set(): test a memory word and set its value
      • compare_and_swap(): swap the contents of two memory words if a condition is satisfied
      The above are abstract instructions; specific instructions depend on the architecture
      They enable easier software synchronization

  18. test_and_set() Hardware Instruction
      Implemented in hardware:
      bool test_and_set(bool *target) {
          bool rv = *target;
          *target = TRUE;
          return rv;
      }
      Using it for mutual exclusion (shared variable lock is initialized to FALSE):
      do {
          while (test_and_set(&lock))
              ;
          // critical section
          lock = FALSE;
          // remainder section
      } while (true);
      Does this algorithm satisfy the three requirements? Indefinite waiting is possible!
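
      On real hardware the same pattern is exposed through atomic operations; a sketch of the spinlock above using C11's atomic_flag, whose test-and-set is guaranteed atomic (the lock name and function names are assumptions):

      #include <stdatomic.h>

      atomic_flag lock = ATOMIC_FLAG_INIT;    // starts clear (FALSE)

      void acquire(void) {
          while (atomic_flag_test_and_set(&lock))
              ;                               // spin until the previous value was clear
      }

      void release(void) {
          atomic_flag_clear(&lock);           // set back to FALSE
      }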

  19. compare_and_swap() Hardware Instruction
      Implemented in hardware:
      int compare_and_swap(int *value, int expected, int new_value) {
          int temp = *value;
          if (*value == expected)
              *value = new_value;
          return temp;
      }
      Using it for mutual exclusion (shared variable lock is initialized to 0):
      do {
          while (compare_and_swap(&lock, 0, 1))
              ;
          // critical section
          lock = 0;
          // remainder section
      } while (true);
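
      A comparable sketch with C11 atomics, where atomic_compare_exchange_strong plays the role of compare_and_swap (the lock variable and function names are assumptions):

      #include <stdatomic.h>

      atomic_int lock = 0;                    // 0 = free, 1 = held

      void acquire(void) {
          int expected = 0;
          // Atomically: if lock == 0, set it to 1; otherwise keep retrying.
          while (!atomic_compare_exchange_strong(&lock, &expected, 1))
              expected = 0;                   // the failed call overwrote expected with the current value
      }

      void release(void) {
          atomic_store(&lock, 0);
      }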

  20. Mutual Exclusion (Mutex) Locks
      • Hardware instructions are not easy to use; they may not even be accessible to application programmers
      • The OS provides software tools: mutex locks, semaphores
      Mutex locks: atomic acquire() and release()
      do {
          // acquire lock
          // critical section
          // release lock
          // remainder section
      } while (true);
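
      For instance, with the Pthreads mutex API (the shared counter and thread function are illustrative assumptions):

      #include <pthread.h>
      #include <stdio.h>

      pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
      long counter = 0;

      void *worker(void *arg) {
          (void)arg;
          for (int i = 0; i < 1000000; i++) {
              pthread_mutex_lock(&m);      // acquire
              counter++;                   // critical section
              pthread_mutex_unlock(&m);    // release
          }
          return NULL;
      }

      int main(void) {
          pthread_t t1, t2;
          pthread_create(&t1, NULL, worker, NULL);
          pthread_create(&t2, NULL, worker, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          printf("counter = %ld\n", counter);   // now reliably 2000000
          return 0;
      }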

  21. Mutex Locks (Spinlocks)
      • A process waiting on the lock keeps spinning, i.e., it wastes CPU cycles
      • Spinlocks are typically used on multiprocessor systems: a thread keeps spinning on one processor (waiting for the lock) while another thread runs its CS on another processor and eventually releases the lock for the spinning thread
      • Advantage? No context switch occurs while the thread is spinning; useful especially when the CS is short
      acquire() {
          while (!available)
              ;  // busy wait
          available = false;
      }
      release() {
          available = true;
      }

  22. Semaphores
      A semaphore S is an integer variable accessed through two atomic operations: wait() and signal()
      wait(S) {
          while (S <= 0)
              ;  // busy wait
          S--;
      }
      signal(S) {
          S++;
      }
      What should S be initialized to?
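
      In practice these operations map onto an OS API; a minimal sketch with POSIX unnamed semaphores, where sem_wait corresponds to wait() and sem_post to signal() (the initial value 1 and the thread function are assumptions):

      #include <pthread.h>
      #include <semaphore.h>
      #include <stdio.h>

      sem_t s;

      void *worker(void *arg) {
          (void)arg;
          sem_wait(&s);                  // wait(S): blocks while S == 0, then decrements
          printf("in critical section\n");
          sem_post(&s);                  // signal(S): increments and wakes a waiter
          return NULL;
      }

      int main(void) {
          sem_init(&s, 0, 1);            // 0 = shared between threads, initial value 1
          pthread_t t1, t2;
          pthread_create(&t1, NULL, worker, NULL);
          pthread_create(&t2, NULL, worker, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          sem_destroy(&s);
          return 0;
      }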

  23. Semaphores
      Two types:
      • Binary semaphore: can be 0 or 1 (similar to a mutex)
      • Counting semaphore: can be any integer value (more general)
      Uses:
      • Ensuring mutual exclusion
      • Synchronizing steps in different processes
      • Controlling access to finite resources (e.g., shared buffers)
      do {
          wait(S);
          // critical section
          signal(S);
          // remainder section
      } while (true);

  24. Semaphore Implementation
      A non-busy-waiting semaphore has:
      • A value (of type integer)
      • A waiting queue
      Two internal operations:
      • block(): suspends the process that invokes it (places the process in the waiting queue)
      • wakeup(): resumes the execution of a blocked process (removes one process from the waiting queue and places it in the ready queue)
      typedef struct {
          int value;
          struct process *list;
      } semaphore;

  25. Semaphore Implementation
      wait(semaphore *S) {
          S->value--;
          if (S->value < 0) {
              add this process to S->list;
              block();
          }
      }
      signal(semaphore *S) {
          S->value++;
          if (S->value <= 0) {
              remove a process P from S->list;
              wakeup(P);
          }
      }
      Issues?

  26. Semaphore Implementation
      • No two processes must execute wait and signal on the same semaphore at the same time
      • wait and signal themselves become critical sections (must be protected)
        • Disable interrupts (uniprocessor systems only)
        • Busy waiting or spinlocks (multiprocessor systems)
      • Busy waiting is not completely eliminated; it is just shifted to the short wait and signal code

  27. Mutual Exclusion using Semaphores
      Semaphore lock; // initialized to 1
      Process 1:
          wait(lock);
          // Critical Section 1
          signal(lock);
      Process 2:
          wait(lock);
          // Critical Section 2
          signal(lock);
      • The first process to execute wait decrements lock to 0
      • The other process waits until lock becomes > 0, which happens only when the first process executes signal
      • The value of the semaphore indicates the state of the CS: lock = 1 means no process is inside and none is waiting; lock = 0 means a process is inside and others may be waiting

  28. Synchronizing Steps using Semaphores
      Suppose we want S2 in process P2 to be executed only after S1 in process P1
      • No mutual exclusion or CS is needed here!
      Semaphore synch; // initialized to 0
      Process P1:
          Statement S1;
          signal(synch);
      Process P2:
          wait(synch);
          Statement S2;
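
      The same ordering idea with two pthreads and a POSIX semaphore initialized to 0 (the thread names and printed messages are illustrative assumptions):

      #include <pthread.h>
      #include <semaphore.h>
      #include <stdio.h>

      sem_t synch;                       // initialized to 0: S2 must wait for S1

      void *p1(void *arg) {
          (void)arg;
          printf("S1\n");                // statement S1
          sem_post(&synch);              // signal(synch)
          return NULL;
      }

      void *p2(void *arg) {
          (void)arg;
          sem_wait(&synch);              // wait(synch): blocks until p1 signals
          printf("S2\n");                // statement S2 always runs after S1
          return NULL;
      }

      int main(void) {
          sem_init(&synch, 0, 0);
          pthread_t t1, t2;
          pthread_create(&t2, NULL, p2, NULL);   // start the waiter first on purpose
          pthread_create(&t1, NULL, p1, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          sem_destroy(&synch);
          return 0;
      }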

  29. Semaphores Issues?
      Let S and Q be two semaphores initialized to 1
      P0:                        P1:
          wait(S);                   wait(Q);
          wait(Q);                   wait(S);
          ...                        ...
          signal(S);                 signal(Q);
          signal(Q);                 signal(S);
      • Deadlock: if P0 acquires S while P1 holds Q, the processes will wait for each other indefinitely

  30. Be Careful When Using Semaphores
      • Accessing multiple semaphores in an incorrect order across processes may cause deadlock
      • signal(mutex) ... wait(mutex): multiple processes can access the CS at the same time
      • wait(mutex) ... wait(mutex): processes may block forever
      • Forgetting wait(mutex) or signal(mutex): various problems, inconsistent data, ...

  31. Classical Problems of Synchronization
      • Bounded-Buffer (Producer-Consumer) Problem
      • Readers-Writers Problem
      • Dining-Philosophers Problem
      These problems are:
      • Abstractions that can be used to model many other resource-sharing problems
      • Used to test newly proposed synchronization schemes

  32. Bounded-Buffer Problem
      • Buffer of size N
      • Want to coordinate production and consumption of items
      • Recall the issues? Violation of buffer structure (count++/count--), producing when full, consuming when empty
      Solution? We define three semaphores:
      • mutex, initialized to 1
      • full, initialized to 0
      • empty, initialized to N

  33. Bounded Buffer Problem (cont'd)
      Producer:
      while (true) {
          wait(empty);
          wait(mutex);
          // add item to buffer
          signal(mutex);
          signal(full);
      }
      Consumer:
      while (true) {
          wait(full);
          wait(mutex);
          // remove item from buffer
          signal(mutex);
          signal(empty);
      }
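
      A runnable sketch of this scheme with pthreads and POSIX semaphores; the buffer size, item count, and circular-buffer details are assumptions made for the example:

      #include <pthread.h>
      #include <semaphore.h>
      #include <stdio.h>

      #define N 5
      #define ITEMS 20

      int buffer[N];
      int in = 0, out = 0;
      sem_t empty_slots, full_slots, mutex;

      void *producer(void *arg) {
          (void)arg;
          for (int i = 0; i < ITEMS; i++) {
              sem_wait(&empty_slots);          // wait(empty): block if buffer is full
              sem_wait(&mutex);
              buffer[in] = i;                  // add item to buffer
              in = (in + 1) % N;
              sem_post(&mutex);
              sem_post(&full_slots);           // signal(full)
          }
          return NULL;
      }

      void *consumer(void *arg) {
          (void)arg;
          for (int i = 0; i < ITEMS; i++) {
              sem_wait(&full_slots);           // wait(full): block if buffer is empty
              sem_wait(&mutex);
              int item = buffer[out];          // remove item from buffer
              out = (out + 1) % N;
              sem_post(&mutex);
              sem_post(&empty_slots);          // signal(empty)
              printf("consumed %d\n", item);
          }
          return NULL;
      }

      int main(void) {
          sem_init(&empty_slots, 0, N);        // empty initialized to N
          sem_init(&full_slots, 0, 0);         // full initialized to 0
          sem_init(&mutex, 0, 1);              // mutex initialized to 1
          pthread_t p, c;
          pthread_create(&p, NULL, producer, NULL);
          pthread_create(&c, NULL, consumer, NULL);
          pthread_join(p, NULL);
          pthread_join(c, NULL);
          return 0;
      }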

  34. Readers-Writers Problem
      • A data set is shared among multiple processes
      • Readers: only read; do not perform any updates
      • Writers: can read and write
      Problem: allow multiple concurrent readers and no writer, or one writer and no readers
      Shared data:
      • The data set
      • Semaphore rw_mutex, initialized to 1
      • Semaphore mutex, initialized to 1
      • Integer read_count, initialized to 0
      Solution?

  35. Readers-Writers Problem (cont'd)
      Reader:
      while (true) {
          wait(mutex);
          read_count++;
          if (read_count == 1)
              wait(rw_mutex);      // first reader locks out writers
          signal(mutex);
          // perform reading
          wait(mutex);
          read_count--;
          if (read_count == 0)
              signal(rw_mutex);    // last reader lets writers in
          signal(mutex);
      }
      Writer:
      while (true) {
          wait(rw_mutex);
          // perform writing
          signal(rw_mutex);
      }
      Issues? Starvation: writers may wait indefinitely while readers keep arriving

  36. Readers-Writers Problem (cont'd)
      • Some systems implement readers-writers locks (Solaris, Linux, Pthreads API)
      • A process can ask for a readers-writers lock in either read or write mode
      When would you use readers-writers locks?
      • Applications where it is easy to identify readers-only and writers-only processes
      • Applications with more readers than writers
      Tradeoff: cost vs. concurrency
      • They require more overhead to establish than semaphores
      • They provide higher concurrency by allowing multiple readers
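
      A short sketch with the Pthreads readers-writers lock mentioned above; the shared variable and the reader/writer functions are assumptions:

      #include <pthread.h>
      #include <stdio.h>

      pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
      int shared_data = 0;

      void *reader(void *arg) {
          (void)arg;
          pthread_rwlock_rdlock(&rw);          // many readers may hold this at once
          printf("read %d\n", shared_data);
          pthread_rwlock_unlock(&rw);
          return NULL;
      }

      void *writer(void *arg) {
          (void)arg;
          pthread_rwlock_wrlock(&rw);          // exclusive: no readers, no other writers
          shared_data++;
          pthread_rwlock_unlock(&rw);
          return NULL;
      }

      int main(void) {
          pthread_t r1, r2, w;
          pthread_create(&w, NULL, writer, NULL);
          pthread_create(&r1, NULL, reader, NULL);
          pthread_create(&r2, NULL, reader, NULL);
          pthread_join(w, NULL);
          pthread_join(r1, NULL);
          pthread_join(r2, NULL);
          return 0;
      }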

  37. Dining Philosophers Problem
      • Philosophers alternate between eating and thinking
      • To eat, a philosopher needs two chopsticks (at her left and right)
      • Models multiple processes sharing multiple resources
      • Write a program for each philosopher such that no starvation / deadlock occurs
      Solution?
      • Bowl of rice (data set)
      • Array of semaphores: chopstick[5], each initialized to 1

  38. Dining-Philosophers Problem: Philosopher i
      while (true) {
          wait(chopstick[i]);
          wait(chopstick[(i + 1) % 5]);
          // Eat
          signal(chopstick[i]);
          signal(chopstick[(i + 1) % 5]);
      }
      Issues? If all philosophers pick up their left chopsticks at the same time: deadlock!
      Solutions?
      • Pick up chopsticks only if both are available
      • Asymmetric: an odd philosopher picks up the left chopstick first, an even one picks up the right first (see the sketch below)
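
      A minimal sketch of the asymmetric variant with POSIX semaphores; having each philosopher eat only once and merely print a message are simplifying assumptions:

      #include <pthread.h>
      #include <semaphore.h>
      #include <stdio.h>

      #define N 5
      sem_t chopstick[N];                          // each initialized to 1

      void *philosopher(void *arg) {
          int i = *(int *)arg;
          int left = i, right = (i + 1) % N;
          // Asymmetry breaks the circular wait: odd philosophers take the left
          // chopstick first, even philosophers take the right first.
          int first  = (i % 2 == 1) ? left : right;
          int second = (i % 2 == 1) ? right : left;
          sem_wait(&chopstick[first]);
          sem_wait(&chopstick[second]);
          printf("philosopher %d eats\n", i);      // Eat
          sem_post(&chopstick[second]);
          sem_post(&chopstick[first]);
          return NULL;
      }

      int main(void) {
          pthread_t t[N];
          int id[N];
          for (int i = 0; i < N; i++)
              sem_init(&chopstick[i], 0, 1);
          for (int i = 0; i < N; i++) {
              id[i] = i;
              pthread_create(&t[i], NULL, philosopher, &id[i]);
          }
          for (int i = 0; i < N; i++)
              pthread_join(t[i], NULL);
          return 0;
      }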

  39. Monitors
      • High-level abstraction for process synchronization
      • The compiler (not the programmer) takes care of mutual exclusion
      • Only one process may be active within the monitor at a time
      monitor name {
          // shared variable declarations
          function P1 (...) { ... }
          ...
          function Pn (...) { ... }
          initialization_code (...) { ... }
      }

  40. Condition Variables
      Condition x;
      • x.wait(): suspends the calling process
      • x.signal(): resumes one of the processes (if any) that invoked x.wait(); if no process is suspended, signal() has no effect
      • Typically used to suspend/awake processes
      Example: Pthreads provides pthread_cond_wait(), pthread_cond_signal(), pthread_cond_broadcast()
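
      A small sketch of the Pthreads condition-variable calls named above, used here to make one thread wait until another sets a flag (the flag and the thread functions are assumptions):

      #include <pthread.h>
      #include <stdio.h>

      pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
      pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
      int ready = 0;                            // shared condition

      void *waiter(void *arg) {
          (void)arg;
          pthread_mutex_lock(&m);
          while (!ready)                        // always recheck: wakeups can be spurious
              pthread_cond_wait(&cond, &m);     // atomically releases m and suspends
          printf("condition is true, proceeding\n");
          pthread_mutex_unlock(&m);
          return NULL;
      }

      void *signaler(void *arg) {
          (void)arg;
          pthread_mutex_lock(&m);
          ready = 1;
          pthread_cond_signal(&cond);           // wake one waiter (if any)
          pthread_mutex_unlock(&m);
          return NULL;
      }

      int main(void) {
          pthread_t t1, t2;
          pthread_create(&t1, NULL, waiter, NULL);
          pthread_create(&t2, NULL, signaler, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          return 0;
      }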

  41. monitor DiningPhilos {
          enum {THINKING, HUNGRY, EATING} state[5];
          condition self[5];

          void pickup(int i) {
              state[i] = HUNGRY;
              test(i);
              if (state[i] != EATING)
                  self[i].wait();
          }

          void putdown(int i) {
              state[i] = THINKING;
              test((i + 4) % 5);   // check left neighbor
              test((i + 1) % 5);   // check right neighbor
          }

          void test(int i) {
              if ((state[i] == HUNGRY) &&
                  (state[(i + 4) % 5] != EATING) &&
                  (state[(i + 1) % 5] != EATING)) {
                  state[i] = EATING;
                  self[i].signal();
              }
          }

          initialization_code() {
              for (int i = 0; i < 5; i++)
                  state[i] = THINKING;
          }
      }
      Philosopher i:
          DiningPhilos.pickup(i);
          // EAT
          DiningPhilos.putdown(i);
      Deadlock? Starvation?

  42. Synchronization & Priorities
      • What if a higher-priority process needs to access a resource that is being accessed by a lower-priority process? The higher-priority process has to wait
      • Say there are 3 processes PL, PM, and PH, with priorities L < M < H
      • While PH waits on the resource held by PL, PM becomes runnable, so it preempts PL. Any problem?
      • PM is now affecting the waiting time of PH: Priority Inversion!
      Solution? Priority Inheritance
      • PL inherits priority H while PH is waiting on the resource held by PL
      • PM then cannot preempt PL
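
      On systems that support it, Pthreads exposes priority inheritance as a mutex attribute; a hedged sketch (PTHREAD_PRIO_INHERIT is an optional POSIX feature, so availability depends on the platform):

      #include <pthread.h>

      pthread_mutex_t m;

      void init_pi_mutex(void) {
          pthread_mutexattr_t attr;
          pthread_mutexattr_init(&attr);
          // A low-priority holder of this mutex temporarily inherits the priority
          // of the highest-priority thread blocked on it.
          pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
          pthread_mutex_init(&m, &attr);
          pthread_mutexattr_destroy(&attr);
      }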

  43. Summary
      Synchronization: techniques to coordinate access to shared data
      • Race condition: multiple processes manipulate shared data and the result depends on execution order
      • Critical-section problem; three requirements: mutual exclusion, progress, bounded waiting
      • Software: Peterson's algorithm
      • Hardware: test_and_set(), compare_and_swap()
      • Busy waiting (spinlocks)
      • Mutexes
      • Semaphores
      • Monitors: high-level constructs (compiler-supported)
      • Classical synchronization problems
