
Advanced Operating Systems Lecture 6 - Process Handling, Threads, and Process Synchronization

This lecture covers topics such as reentrant and thread-safe code, signal handling, thread pools, Pthreads, Linux threads, Java threads, process synchronization, and concurrent programming concepts.






  1. Advanced Operating Systems - Fall 2009 • Lecture 6 – January 28, 2009 • Dan C. Marinescu • Email: dcm@cs.ucf.edu • Office: HEC 439 B. • Office hours: M, Wd 3 – 4:30.

  2. Last, Current, Next Lecture • Last time: • Process handling by the kernel • Inter-process communication • Threads • Today • More about threads • Process synchronization • Next time: • Atomic transactions

  3. Re-entrant and thread-safe code • Reentrant code → code that is safe to execute concurrently. • Conditions for code to be reentrant: • No static (global) non-constant data. • Must not return the address of static (global) non-constant data. • Must work only on the data provided to it by the caller. • Must not rely on locks to singleton resources. • Must not call non-reentrant code. • I/O code is generally not reentrant because it requires access to shared resources. • Every reentrant function is thread-safe; not every thread-safe function is reentrant. To make a non-reentrant function reentrant one must change its external interface. To make non-thread-safe code thread-safe one need only change its internal implementation.

  4. Examples of non-reentrant code int global_variable = 1; int f() { global_variable = global_variable + 2; return global_variable; } If multiple threads call f() concurrently the result is unpredictable. int g() { return f() + 2; } g() calls the non-reentrant function f(), so g() is also non-reentrant.

  5. The corresponding reentrant code int f(int i) { return i + 2; } int g(int i) { return f(i) + 2; }

  6. Signal Handling • Signals → used in UNIX systems to notify a process that a particular event has occurred • A signal handler is used to process signals • A signal is generated by a particular event • The signal is delivered to a process • The signal is handled • Options: • Deliver the signal to the thread to which the signal applies • Deliver the signal to every thread in the process • Deliver the signal to certain threads in the process • Assign a specific thread to receive all signals for the process

  7. Thread Pools • Create a number of threads in a pool where they await work • Advantages: • Usually slightly faster to service a request with an existing thread than to create a new thread • Allows the number of threads in the application(s) to be bounded by the size of the pool • Thread Specific Data • Allows each thread to have its own copy of data • Useful when you do not have control over the thread creation process (i.e., when using a thread pool)
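Thread-specific data as described above can be sketched with the Pthreads key API; each thread sees only the value it stored under the key. The helper names (set_tsd, get_tsd, tsd_demo) are made up for illustration:

```c
#include <pthread.h>

static pthread_key_t tsd_key;
static pthread_once_t tsd_once = PTHREAD_ONCE_INIT;

static void make_key(void) {
    pthread_key_create(&tsd_key, NULL);   /* no destructor in this sketch */
}

/* Each thread that calls set_tsd stores a value that only it will see. */
void set_tsd(void *value) {
    pthread_once(&tsd_once, make_key);    /* create the key exactly once */
    pthread_setspecific(tsd_key, value);
}

void *get_tsd(void) {
    pthread_once(&tsd_once, make_key);
    return pthread_getspecific(tsd_key);  /* this thread's copy, or NULL */
}

/* Returns the value this thread stored, demonstrating per-thread data. */
void *tsd_demo(void) {
    set_tsd((void *)0x1234);
    return get_tsd();
}
```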

  8. Pthreads • Common in UNIX operating systems (Solaris, Linux, Mac OS X) • POSIX standard (IEEE 1003.1c) specifies the API for thread creation and synchronization • Implementation is up to the developers of the library.
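A minimal creation/join sketch with the Pthreads API, assuming a POSIX system (the worker and run_in_thread names are made up for illustration):

```c
#include <pthread.h>

/* worker runs in a second thread and doubles the integer it is given. */
static void *worker(void *arg) {
    int *n = arg;
    *n *= 2;
    return NULL;
}

/* Create one thread, wait for it to finish, and return the result. */
int run_in_thread(int x) {
    pthread_t tid;
    int value = x;
    if (pthread_create(&tid, NULL, worker, &value) != 0)
        return -1;                  /* creation failed */
    pthread_join(tid, NULL);        /* wait for the worker to finish */
    return value;
}
```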

  9. Windows XP Threads • One-to-one mapping • Each thread contains • A thread id • Register set • Separate user and kernel stacks • Private data storage area • The register set, stacks, and private storage area are known as the context of the thread • The primary data structures of a thread include: • ETHREAD (executive thread block) • KTHREAD (kernel thread block) • TEB (thread environment block)

  10. Linux Threads and Java Threads • Linux refers to them as tasks rather than threads • Thread creation: the clone() system call • clone() allows a child task to share the address space of the parent task (process) • Java threads: • are managed by the JVM • created by: • Extending the Thread class • Implementing the Runnable interface

  11. States of a Java Thread

  12. Process Synchronization • The Critical-Section Problem • Peterson’s Solution • Synchronization Hardware • Semaphores • Classic Problems of Synchronization • Monitors • Synchronization Examples • Atomic Transactions

  13. Concepts • Concurrency → multiple activities are carried out at the same time, e.g., concurrent processes. • Busy waiting → repeatedly checking whether a condition is satisfied. • Deadlock → two or more activities are waiting indefinitely for an event that can be caused by only one of the waiting activities. • Starvation → an activity cannot get access to the resources it needs.

  14. Synchronization • Activity coordination → critical in any environment where entities must cooperate with one another. • Synchronization requires the coordination of events • Applications in: • Physics • Communications • Computer Science • Multimedia • Music/sports

  15. Dining-Philosophers • Shared data • Bowl of rice (data set) • Semaphore chopstick [5] initialized to 1

  16. Dining-Philosophers (Cont.) • Philosopher i: while (true) { wait (chopstick[i]); wait (chopstick[(i + 1) % 5]); // eat signal (chopstick[i]); signal (chopstick[(i + 1) % 5]); // think }

  17. Bounded buffer • Shared state between producer and consumer • Count – the number of items written by the producer and yet to be read by the consumer

  18. Producer while (true) { /* produce an item and put in nextProduced */ while (count == BUFFER_SIZE) ; // do nothing buffer[in] = nextProduced; in = (in + 1) % BUFFER_SIZE; count++; }

  19. Consumer while (true) { while (count == 0) ; // do nothing nextConsumed = buffer[out]; out = (out + 1) % BUFFER_SIZE; count--; /* consume the item in nextConsumed */ }

  20. Race conditions • The compiler translates producer process P's instruction count++ as: R1 = count; R1 = R1 + 1; count = R1 • The compiler translates consumer process C's instruction count-- as: R2 = count; R2 = R2 - 1; count = R2 • Initially count = 5. • P is scheduled to run. • P executes R1 = count (R1 = 5). • P executes R1 = R1 + 1 (R1 = 6). • An interrupt occurs; P is suspended and C is dispatched. • C executes R2 = count (R2 = 5). • C executes R2 = R2 - 1 (R2 = 4). • An interrupt occurs; C is suspended and P is dispatched. • P executes count = R1 (count = 6). • P writes the new item into the buffer. • P finishes, is suspended, and C is dispatched. • C executes count = R2 (count = 4). • C reads the new item from the buffer. • The final value of count depends on the interleaving, so the result is wrong.

  21. Critical Section • Important concept in concurrent programming • A piece of code that accesses a shared resource (data structure or device) that must not be concurrently accessed by more than one thread of execution. • Critical sections may occur in: • User programs → the user's program may be affected by the error • System programs → the system may end up in an erroneous state.

  22. Conditions for Critical-Section 1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections protecting the same resource 2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely 3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted • Assume that each process executes at a nonzero speed • No assumption concerning relative speed of the N processes

  23. Peterson’s Solution • Two-process solution: Pi and Pj • Assume that the LOAD and STORE instructions are atomic → cannot be interrupted. • The two processes share two variables: • int turn; • Boolean flag[2] • The variable turn indicates whose turn it is to enter the critical section. • The flag array is used to indicate if a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready!

  24. Algorithm for Process Pi while (true) { flag[i] = TRUE; turn = j; while ( flag[j] && turn == j); CRITICAL SECTION flag[i] = FALSE; NON-CRITICAL SECTION }
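The algorithm above is correct only if the loads and stores really are atomic and executed in program order; on modern hardware that assumption must be made explicit. A hedged sketch using C11 sequentially-consistent atomics and two Pthreads threads (run_peterson and the iteration count are illustrative, not from the slides):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Sequentially-consistent atomics stand in for the slide's assumption
 * of atomic, non-reordered LOAD and STORE instructions. */
static atomic_bool flag[2];
static atomic_int  turn;
static int counter;                   /* protected by the Peterson lock */

static void lock(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);     /* I am ready */
    atomic_store(&turn, j);           /* give the other process priority */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                             /* busy wait */
}

static void unlock(int i) {
    atomic_store(&flag[i], false);
}

static void *body(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        lock(i);
        counter++;                    /* critical section */
        unlock(i);
    }
    return NULL;
}

int run_peterson(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    counter = 0;
    pthread_create(&t0, NULL, body, &id0);
    pthread_create(&t1, NULL, body, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return counter;                   /* 200000 only if mutual exclusion held */
}
```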

  25. Another version of Peterson’s Solution #define FALSE 0 #define TRUE 1 #define N 2 /* we have two processes, 0 and 1 */ int turn; /* variable indicating whose turn it is: process 0 or process 1 */ int interested[N]; /* initially FALSE; shows if the process wishes to enter its CS */ void enter_CS(int process) { int other; /* id of the other process */ other = 1 - process; interested[process] = TRUE; turn = process; while (turn == process && interested[other] == TRUE) ; /* busy wait */ } void leave_CS(int process) { interested[process] = FALSE; }

  26. Synchronization Hardware • Disable interrupts → do not allow preemption of the currently running process • Works only on uniprocessors • Not scalable • Atomic hardware instructions. Atomic = non-interruptible • TestAndSet → test a memory word and set its value • Swap → swap the contents of two memory words • The two instructions allow synchronization among several processors. • A process may enter the critical section only if a lock acted upon by these instructions allows it. • Shared Boolean variable initially set to the unlocked position: lock = FALSE • Locked when a process enters the critical section: lock = TRUE • Once finished, the process unlocks the lock: lock = FALSE

  27. TestAndSet Instruction TestAndSet first reads the lock, returns its state, and then sets it to the locked state. boolean TestAndSet (boolean *target) { boolean rv = *target; *target = TRUE; return rv; }

  28. TestAndSet use with a shared variable “lock” If lock == FALSE then TestAndSet: sets lock = TRUE and allows the calling process to enter the critical section. Upon exiting the critical section the process resets the lock: lock = FALSE. while (true) { while (TestAndSet (&lock)) ; /* do nothing → busy wait */ // critical section lock = FALSE; // remainder section }

  29. Swap Instruction Each process has its own key and the critical section is protected by a lock initially set to FALSE. void Swap (boolean *a, boolean *b) { boolean temp = *a; *a = *b; *b = temp; }

  30. Solution using Swap • A process may enter the critical section only if the lock is open (lock == FALSE) and its key can lock the lock (key == TRUE) • while (true) { key = TRUE; while (key == TRUE) // busy wait Swap (&lock, &key); // critical section lock = FALSE; // remainder section }

  31. Two ways for a process to wait • Busy waiting → the CPU running the process keeps testing a condition. • Suspend the process → allow the CPU to execute another process until someone sends a signal that the condition has been set.

  32. No busy waiting for producer-consumer producer { while (TRUE) { produce_item; if (count == N) sleep(); write_item; count++; if (count == 1) wakeup(consumer); } } consumer { while (TRUE) { if (count == 0) sleep(); read_item; count--; if (count == N-1) wakeup(producer); } }

  33. Problems with this solution • A wakeup signal sent to a process that is not yet sleeping is lost. • Scenario: • The buffer is empty and the consumer finds that count = 0. • The consumer is suspended by the scheduler before calling sleep(). • The scheduler activates the producer. • The producer adds an item to the buffer and sets count = 1. • The producer, believing that the count was previously 0 and that the consumer has therefore put itself to sleep, sends a wakeup signal to the consumer. • The wakeup signal is lost because the consumer is not yet asleep. • When the consumer runs next it tests the value of the counter it read previously, finds it to be 0, and goes to sleep. • Eventually the producer fills up the buffer and also goes to sleep.

  34. Semaphore • Introduced by Dijkstra in 1965 • Does not require busy waiting • Semaphore S – integer variable • Two standard operations modify S: wait() and signal() • Originally called P() and V() • Less complicated • Can only be accessed via two indivisible (atomic) operations • wait (S) { while (S <= 0) ; // no-op S--; } • signal (S) { S++; }
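POSIX exposes this abstraction as sem_t. A small single-threaded sketch of the counting behaviour (assuming Linux, where unnamed semaphores created with sem_init are supported; sem_demo is a made-up name for illustration):

```c
#include <semaphore.h>

/* Returns 1 if a third wait on a semaphore initialised to 2 would
 * block (i.e. the count has reached 0), demonstrating wait/signal. */
int sem_demo(void) {
    sem_t s;
    int blocked;
    sem_init(&s, 0, 2);                 /* counting semaphore, value 2 */
    sem_wait(&s);                       /* value 1 */
    sem_wait(&s);                       /* value 0 */
    blocked = (sem_trywait(&s) == -1);  /* would block: value is 0 */
    sem_post(&s);                       /* signal: value back to 1 */
    sem_destroy(&s);
    return blocked;
}
```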

  35. Semaphore as General Synchronization Tool • Counting semaphore – integer value can range over an unrestricted domain • Binary semaphore (mutex lock) – integer value can range only between 0 and 1; simpler to implement. • Can implement a counting semaphore S as a binary semaphore • Provides mutual exclusion • Semaphore S; // initialized to 1 • wait (S); Critical Section signal (S);

  36. Semaphore Implementation • Must guarantee that no two processes can execute wait () and signal () on the same semaphore at the same time • Implementation becomes the critical section problem where the wait and signal code are placed in the critical section. • Could now have busy waiting in critical section implementation • But implementation code is short • Little busy waiting if critical section rarely occupied • Applications may spend lots of time in critical sections and therefore this is not a good solution.

  37. Semaphore Implementation with no Busy waiting • With each semaphore there is an associated waiting queue. A semaphore therefore has two data items: • value (of type integer) • pointer to a list of waiting processes • The two operations on semaphore S, wait(S) and signal(S), are implemented using: • block – place the process invoking the operation on the appropriate waiting queue. • wakeup – remove one of the processes in the waiting queue and place it in the ready queue.

  38. Semaphore Implementation with no Busy waiting • Implementation of wait: wait (S) { value--; if (value < 0) { add this process to the waiting queue; block(); } } • Implementation of signal: signal (S) { value++; if (value <= 0) { remove a process P from the waiting queue; wakeup(P); } }
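One way to realize the blocking wait/signal pair above on POSIX is a mutex plus a condition variable. This sketch differs from the slide's bookkeeping in one detail: value never goes negative, waiters simply block in the while loop (the csem type and function names are illustrative, not from the slides):

```c
#include <pthread.h>

/* A counting semaphore built from a mutex and a condition variable. */
typedef struct {
    int value;
    pthread_mutex_t m;
    pthread_cond_t  c;
} csem;

void csem_init(csem *s, int v) {
    s->value = v;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->c, NULL);
}

void csem_wait(csem *s) {
    pthread_mutex_lock(&s->m);
    while (s->value == 0)              /* block, do not busy wait */
        pthread_cond_wait(&s->c, &s->m);
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void csem_signal(csem *s) {
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->c);        /* wake one blocked waiter */
    pthread_mutex_unlock(&s->m);
}

/* Single-threaded demo: two waits then one signal on a semaphore of 2. */
int csem_demo(void) {
    csem s;
    csem_init(&s, 2);
    csem_wait(&s);                     /* value 1 */
    csem_wait(&s);                     /* value 0 */
    csem_signal(&s);                   /* value 1 */
    return s.value;
}
```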

  39. Deadlock and Starvation • Deadlock → two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes • Let S and Q be two semaphores initialized to 1: • P0: wait (S); wait (Q); … signal (S); signal (Q); • P1: wait (Q); wait (S); … signal (Q); signal (S); • Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.

  40. Semaphores for Classical Synchronization Problems • Bounded-Buffer • Readers and Writers • Dining-Philosophers

  41. Semaphores for Bounded-Buffer Problem • N buffers, each can hold one item • Semaphores: • mutex (binary semaphore) initially set to 1 (allow access) • full (counting semaphore – counts the number of full buffers) initially set to 0. • empty (counting semaphore – counts the number of empty buffers) initially set to N.

  42. The producer process • while (true) { produce_Item wait (empty); /* decrement the count of empty slots */ wait (mutex); /* wait for permission to access buffer */ AddItemToBuffer; signal (mutex); /* signal that the buffer can be accessed */ signal (full); /* increment count of full slots */ }

  43. The consumer process while (true) { wait (full); /* decrement the count of full slots */ wait (mutex); /* wait for permission to access buffer */ removeItemFromBuffer signal (mutex); /* signal that the buffer can be accessed */ signal (empty); /* increment the count of empty slots */ }
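Putting the two loops above together, a runnable sketch with POSIX unnamed semaphores and two Pthreads threads (assumed Linux; run_bounded_buffer, the buffer size, and the item count are illustrative choices):

```c
#include <pthread.h>
#include <semaphore.h>

#define BUFFER_SIZE 4
#define N_ITEMS     100

static int buffer[BUFFER_SIZE];
static int in, out;
static sem_t empty, full, mutex;       /* as on the slide */

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&empty);              /* wait for a free slot */
        sem_wait(&mutex);              /* wait for permission to access buffer */
        buffer[in] = i;
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);              /* signal that the buffer can be accessed */
        sem_post(&full);               /* one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    long sum = 0;
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&full);               /* wait for an item */
        sem_wait(&mutex);
        sum += buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty);              /* one more empty slot */
    }
    *(long *)arg = sum;
    return NULL;
}

long run_bounded_buffer(void) {
    pthread_t p, c;
    long sum = 0;
    in = out = 0;
    sem_init(&empty, 0, BUFFER_SIZE);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, &sum);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return sum;                        /* 0 + 1 + ... + 99 = 4950 */
}
```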

  44. Readers-Writers • A data set is shared among a number of concurrent processes • Readers – only read the data set; they do not perform any updates • Writers – can both read and write. • Problem – allow multiple readers to read at the same time, but only a single writer may access the shared data at any time. • Shared Data • Data set • Semaphore mutex initialized to 1. • Semaphore wrt initialized to 1. • Integer readcount initialized to 0.

  45. The writer process while (true) { wait (wrt) ; // writing is performed signal (wrt) ; }

  46. Reader process while (true) { wait (mutex) ; readcount++ ; if (readcount == 1) wait (wrt) ; signal (mutex) ; // reading is performed wait (mutex) ; readcount-- ; if (readcount == 0) signal (wrt) ; signal (mutex) ; }

  47. Problems with Semaphores • Incorrect uses of semaphore operations: • signal (mutex) … wait (mutex) • wait (mutex) … wait (mutex) • Omitting wait (mutex) or signal (mutex) (or both) • What if someone does not stop at a traffic light and crosses an intersection on red? • Is there a way to enforce the traffic lights?

  48. Monitors • Programming language construct. • The compiler handles calls to monitors differently than other calls. • Only one process may be active within the monitor at a time. monitor monitor-name { // shared variable declarations procedure P1 (…) { … } … procedure Pn (…) { … } initialization code (…) { … } }

  49. Schematic view of a Monitor

  50. Blocking a process when it cannot proceed • Condition variables • condition x, y; • Two operations on a condition variable: • x.wait() – a process that invokes the operation is suspended. • x.signal() – resumes one of the processes (if any) that invoked x.wait()
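With Pthreads, the monitor lock maps to a mutex and a condition variable maps to pthread_cond_t: pthread_cond_wait plays the role of x.wait() and atomically releases the monitor lock while the process is suspended, and pthread_cond_signal plays the role of x.signal(). A small sketch (run_monitor_demo is an illustrative name, not from the slides):

```c
#include <pthread.h>

static pthread_mutex_t mon = PTHREAD_MUTEX_INITIALIZER;  /* monitor lock */
static pthread_cond_t  x   = PTHREAD_COND_INITIALIZER;   /* condition variable */
static int ready;

static void *waiter(void *arg) {
    pthread_mutex_lock(&mon);
    while (!ready)                    /* x.wait(): releases mon while blocked */
        pthread_cond_wait(&x, &mon);
    *(int *)arg = 1;                  /* observed the signalled condition */
    pthread_mutex_unlock(&mon);
    return NULL;
}

int run_monitor_demo(void) {
    pthread_t t;
    int saw = 0;
    ready = 0;
    pthread_create(&t, NULL, waiter, &saw);
    pthread_mutex_lock(&mon);
    ready = 1;
    pthread_cond_signal(&x);          /* x.signal(): resume one waiter */
    pthread_mutex_unlock(&mon);
    pthread_join(t, NULL);
    return saw;
}
```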
