
Chapter 6, Process Synchronization, Overheads, Part 2




  1. Chapter 6, Process Synchronization, Overheads, Part 2

  2. Part 2 of the Chapter 6 overheads covers these sections: • 6.6 Classic Problems of Synchronization • 6.7 Monitors • 6.8 Java Synchronization

  3. 6.6 Classic Problems of Synchronization • These problems exist in operating systems and other systems which have concurrency • Because they are well-understood, they are often used to test implementations of concurrency control • Some of these problems should sound familiar because the book has already brought them up as examples of aspects of operating systems (without yet discussing all of the details of a correct, concurrent implementation)

  4. The book discusses the following three problems • The bounded-buffer problem • The readers-writers problem • The dining philosophers problem

  5. The book gives Java code to solve these problems • For the purposes of the immediate discussion, these examples are working code • There is one slight possible source of confusion: the examples use a home-made Semaphore class

  6. In the current version of the Java API, there is also a Semaphore class • If you look in the API documentation, you’ll discover that that class has quite a number of methods and is more complicated than the simple presentation of semaphores given earlier • The home-made Semaphore class is much simpler
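As an aside, the standard library class behaves like a counting semaphore as well. A minimal sketch (not the book's code) using `java.util.concurrent.Semaphore`, which is part of the Java API:

```java
import java.util.concurrent.Semaphore;

public class ApiSemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        // A counting semaphore from the standard library with 2 permits
        Semaphore sem = new Semaphore(2);
        sem.acquire();            // take one permit (blocks if none are left)
        sem.acquire();            // take the second permit
        System.out.println(sem.availablePermits()); // 0
        sem.release();            // give a permit back
        System.out.println(sem.availablePermits()); // 1
    }
}
```

The API class adds many methods beyond acquire() and release() (fairness options, tryAcquire(), bulk permits), which is why it is more complicated than the simple presentation of semaphores given earlier.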

  7. The home-made class will be noted at the end of the presentation of code—but its contents will not be explained in detail • Only after covering the coming section on synchronization syntax in Java would it be possible to understand how the authors have implemented concurrency control in their own semaphore class

  8. The Bounded Buffer Problem • Operating systems implement general I/O using buffers and message passing between buffers • Buffer management is a real element of O/S construction • This is a shared resource problem • The buffer and any variables keeping track of buffer state (such as the count of contents) have to be managed so that contending processes (threads) keep them consistent

  9. Various pieces of code were given in previous chapters for the bounded buffer problem • Now the book gives code which is multi-threaded and also does concurrency control using a semaphore • When looking at it, the existence and placement of semaphores should be noted • The code is given on the following overheads, and more commentary will come afterwards

  10. /**
      * BoundedBuffer.java
      *
      * This program implements the bounded buffer with semaphores.
      * Note that the use of count only serves to output whether
      * the buffer is empty or full.
      */
      import java.util.*;

      public class BoundedBuffer implements Buffer
      {
          private static final int BUFFER_SIZE = 2;

          private Semaphore mutex;
          private Semaphore empty;
          private Semaphore full;
          private int count;
          private int in, out;
          private Object[] buffer;

  11.     public BoundedBuffer()
          {
              // buffer is initially empty
              count = 0;
              in = 0;
              out = 0;
              buffer = new Object[BUFFER_SIZE];

              mutex = new Semaphore(1);
              empty = new Semaphore(BUFFER_SIZE);
              full = new Semaphore(0);
          }

  12.     // producer calls this method
          public void insert(Object item) {
              empty.acquire();
              mutex.acquire();

              // add an item to the buffer
              ++count;
              buffer[in] = item;
              in = (in + 1) % BUFFER_SIZE;

              if (count == BUFFER_SIZE)
                  System.out.println("Producer Entered " + item + " Buffer FULL");
              else
                  System.out.println("Producer Entered " + item + " Buffer Size = " + count);

              mutex.release();
              full.release();
          }

  13.     // consumer calls this method
          public Object remove() {
              full.acquire();
              mutex.acquire();

              // remove an item from the buffer
              --count;
              Object item = buffer[out];
              out = (out + 1) % BUFFER_SIZE;

              if (count == 0)
                  System.out.println("Consumer Consumed " + item + " Buffer EMPTY");
              else
                  System.out.println("Consumer Consumed " + item + " Buffer Size = " + count);

              mutex.release();
              empty.release();

              return item;
          }
      }

  14. There is more code to the full solution. • It will be given later, but the first thing to notice is that there are three semaphores • All of the previous discussions just talked about protecting a single critical section with a single semaphore • The book has introduced a new level of complexity out of the blue by using this classic problem as an illustration

  15. There is a semaphore, mutex, for mutual exclusion on buffer operations • There are also two more semaphores, empty and full • These semaphores are associated with the idea that the buffer has to be protected from trying to insert into a full buffer or remove from an empty one • In other words, they deal with the concepts, given in an earlier chapter, of blocking sends/receives or writes/reads

  16. When looking at the code, the ordering of the calls to acquire and release the semaphores may not be immediately clear • The book offers no cosmic theory to explain the ordering of the calls to acquire and release • The example is simply given, and it’s up to us to try to sort out how the calls interact in a way that accomplishes the desired result

  17. mutex is a binary semaphore • It is initialized to 1 • 1 and 0 are sufficient to enforce mutual exclusion

  18. The empty semaphore is a counting semaphore • It is initialized to BUFFER_SIZE • That means that there are up to BUFFER_SIZE slots of the shared buffer array that are empty and available to have messages inserted into them

  19. empty.acquire() can be called BUFFER_SIZE times before the shared buffer is full and the semaphore can’t be acquired anymore • The name empty is a bit of a misnomer—it doesn’t mean completely empty—it keeps track of a count of how many elements of the buffer are empty
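This counting behavior can be demonstrated directly with the standard `java.util.concurrent.Semaphore` class (a sketch, not the book's code; tryAcquire() is a non-blocking acquire that reports failure instead of waiting):

```java
import java.util.concurrent.Semaphore;

public class EmptyCountDemo {
    public static void main(String[] args) throws InterruptedException {
        final int BUFFER_SIZE = 2;
        Semaphore empty = new Semaphore(BUFFER_SIZE);

        for (int i = 0; i < BUFFER_SIZE; i++) {
            empty.acquire();                    // one empty slot claimed per insert
        }

        // All slots are now claimed; a further blocking acquire would wait.
        System.out.println(empty.tryAcquire()); // false
    }
}
```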

  20. The full semaphore is also a counting semaphore • It is initialized to 0 • The full semaphore counts how many slots of the shared buffer array have been filled with messages that are available to be removed • Initially, there are no elements in the buffer array

  21. This means that a call to remove() on the shared buffer won’t find anything until a call to insert() on the buffer has been made • This is because the code for insert() includes a call to full.release()

  22. The name full is a bit of a misnomer—it doesn’t mean completely full—it keeps track of a count of how many elements in the buffer are full • The diagram on the next overhead illustrates the meaning of the empty and full semaphores

  23. In the code, the calls to acquire() and release() on mutex are simply paired, top and bottom, in the insert() and remove() methods of the buffer • The calls to acquire() and release() on the empty and full semaphores are crossed between the insert() and remove() methods

  24. We have seen a criss-crossing of semaphore calls already in the example where semaphores were used to enforce the execution sequence of two different blocks of code • Informally, the logic of this example might be expressed as, “You can’t remove unless someone has inserted,” and vice-versa.
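The sequencing idea can be sketched with a semaphore initialized to 0 (hypothetical names, not the book's code): the acquiring thread cannot proceed until another thread has released, so "remove" necessarily follows "insert".

```java
import java.util.concurrent.Semaphore;

public class OrderingDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore inserted = new Semaphore(0); // 0 permits: "remove" must wait

        Thread consumer = new Thread(() -> {
            try {
                inserted.acquire();            // blocks until the producer releases
                System.out.println("removed");
            } catch (InterruptedException e) { }
        });
        consumer.start();

        System.out.println("inserted");
        inserted.release();                    // now the consumer may proceed
        consumer.join();
    }
}
```

No matter how the threads are scheduled, "inserted" is always printed before "removed".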

  25. In particular in this example: • empty.acquire() is called at the top of insert() • empty.release() is called at the bottom of remove() • full.acquire() is called at the top of remove() • full.release() is called at the bottom of insert()

  26. The bodies of both insert() and remove() between the calls on empty and full are protected by calls to acquire() and release() on mutex • Since there is just one, shared mutex semaphore, that means that the bodies of the two methods together form one critical section • Only one thread at a time can be in either insert() or remove()

  27. The diagram on the following overhead illustrates the pairing of calls to mutex, making the common critical section • More importantly, it graphically shows how the calls on the other semaphores are criss-crossed

  28. It bears repeating that the book doesn’t give a cosmic theory explaining the placement of the calls to acquire() and release() • The example is given in totality • Someone figured this solution out, and all we can do is accept it as given, and try to see how it accomplishes what it does

  29. The rest of the book code to make this a working example follows

  30. /**
      * An interface for buffers
      */
      public interface Buffer
      {
          /**
           * insert an item into the Buffer.
           * Note this may be either a blocking
           * or non-blocking operation.
           */
          public abstract void insert(Object item);

          /**
           * remove an item from the Buffer.
           * Note this may be either a blocking
           * or non-blocking operation.
           */
          public abstract Object remove();
      }

  31. /**
      * This is the producer thread for the bounded buffer problem.
      */
      import java.util.*;

      public class Producer implements Runnable
      {
          public Producer(Buffer b) {
              buffer = b;
          }

          public void run()
          {
              Date message;

              while (true) {
                  System.out.println("Producer napping");
                  SleepUtilities.nap();

                  // produce an item & enter it into the buffer
                  message = new Date();
                  System.out.println("Producer produced " + message);
                  buffer.insert(message);
              }
          }

          private Buffer buffer;
      }

  32. /**
      * This is the consumer thread for the bounded buffer problem.
      */
      import java.util.*;

      public class Consumer implements Runnable
      {
          public Consumer(Buffer b) {
              buffer = b;
          }

          public void run()
          {
              Date message;

              while (true)
              {
                  System.out.println("Consumer napping");
                  SleepUtilities.nap();

                  // consume an item from the buffer
                  System.out.println("Consumer wants to consume.");
                  message = (Date)buffer.remove();
              }
          }

          private Buffer buffer;
      }

  33. /**
      * This creates the buffer and the producer and consumer threads.
      */
      public class Factory
      {
          public static void main(String args[]) {
              Buffer server = new BoundedBuffer();

              // now create the producer and consumer threads
              Thread producerThread = new Thread(new Producer(server));
              Thread consumerThread = new Thread(new Consumer(server));

              producerThread.start();
              consumerThread.start();
          }
      }

  34. /**
      * Utilities for causing a thread to sleep.
      * Note, we should be handling interrupted exceptions
      * but choose not to do so for code clarity.
      */
      public class SleepUtilities
      {
          /**
           * Nap between zero and NAP_TIME seconds.
           */
          public static void nap() {
              nap(NAP_TIME);
          }

          /**
           * Nap between zero and duration seconds.
           */
          public static void nap(int duration) {
              int sleeptime = (int) (duration * Math.random());
              try { Thread.sleep(sleeptime * 1000); }
              catch (InterruptedException e) { }
          }

          private static final int NAP_TIME = 5;
      }

  35. The book’s Semaphore class follows • Strictly speaking, the example was written to use this home-made class • Presumably the example would also work with objects of the Java API Semaphore class • The keyword “synchronized” in the given class is what makes it work • This keyword will be specifically covered in the section of the notes covering Java synchronization

  36. /**
      * Semaphore.java
      *
      * A basic counting semaphore using Java synchronization.
      */
      public class Semaphore
      {
          private int value;

          public Semaphore(int value) {
              this.value = value;
          }

          public synchronized void acquire() {
              while (value <= 0) {
                  try {
                      wait();
                  }
                  catch (InterruptedException e) { }
              }
              value--;
          }

          public synchronized void release() {
              ++value;
              notify();
          }
      }

  37. The Readers-Writers Problem • The author explains this in general terms of a database • The database is the resource shared by >1 thread

  38. At any given time the threads accessing a database may fall into two different categories, with different concurrency requirements • Readers: Reading is an innocuous activity • Writers: Writing (updating) is an activity which changes the state of a database

  39. In database terminology, you control access to a data item by means of a lock • If you own the lock, you have access to the data item • Depending on the kind of lock you either have sole access or shared access to the data item

  40. This may be somewhat confusing because the term locking has appeared here, there, and everywhere • We seek him here, we seek him there, / Those Frenchies seek him everywhere. / Is he in heaven? Is he in hell? / That damned, elusive Pimpernel

  41. Leslie Howard, the Scarlet Pimpernel

  42. Database management systems have much in common with operating systems • Among the things they have in common are the need for locking and the use of the term locking for this construct • The database management system may or may not be tightly integrated with the operating system • Either way, the application level locking in the database is supported by system level locking

  43. Recall the analogy used earlier to explain locks • The desired data item is like the car, the lock is like the title • If you possess the title, you own the car, allowing you to legally take possession of the car • If you possess the lock on a data item, you are allowed to access the data item

  44. Application level locking in a database adds a new twist: • There are two kinds of locks • An exclusive lock: This is the kind of lock discussed so far. • A writer needs an exclusive lock which means that all other writers and readers are excluded when the writer has the lock

  45. A shared lock: This is actually a new locking concept • This is the kind of lock that readers need. • The idea is that >1 reader can access the data at the same time, as long as writers are excluded
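Java's standard library offers this distinction directly in `java.util.concurrent.locks.ReentrantReadWriteLock`. The following sketch (not from the book) shows multiple readers holding the shared lock at once while a writer is excluded:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedLockDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        lock.readLock().lock();                         // first reader
        lock.readLock().lock();                         // second reader at the same time: OK
        System.out.println(lock.getReadLockCount());    // 2
        System.out.println(lock.writeLock().tryLock()); // false: writer is excluded

        lock.readLock().unlock();
        lock.readLock().unlock();
        System.out.println(lock.writeLock().tryLock()); // true: no readers remain
    }
}
```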

  46. Readers don’t change the data, so by themselves, they can’t cause concurrency control problems which are based on inconsistent state • They can get in trouble if they are intermixed with writing operations that do change database state

  47. The book gives two different possible approaches to the readers-writers problem • It should be noted that neither of the book’s approaches prevents starvation • In other words, you might say that these solutions are application level implementations of synchronization which are not entirely correct, because they violate the bounded waiting condition

  48. First Readers-Writers Approach • No reader will be kept waiting unless a writer has already acquired a lock • Readers don’t wait on other readers • Readers don’t wait on waiting writers • Readers have priority • Writers have to wait • Writers may starve
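The classic structure for this first approach (a sketch following the usual textbook formulation, not the book's own code) uses a counter of active readers plus two semaphores; only the first reader in and the last reader out touch the db semaphore, so readers never wait on other readers:

```java
import java.util.concurrent.Semaphore;

// First readers-writers structure: readers have priority, writers may starve.
public class ReadersWriters {
    private final Semaphore mutex = new Semaphore(1); // protects readCount
    private final Semaphore db = new Semaphore(1);    // exclusive access to the data
    private int readCount = 0;

    public void read() throws InterruptedException {
        mutex.acquire();
        readCount++;
        if (readCount == 1) db.acquire(); // first reader locks out writers
        mutex.release();

        // ... read the shared data ...

        mutex.acquire();
        readCount--;
        if (readCount == 0) db.release(); // last reader lets writers in
        mutex.release();
    }

    public void write() throws InterruptedException {
        db.acquire();
        // ... update the shared data ...
        db.release();
    }

    public static void main(String[] args) throws InterruptedException {
        ReadersWriters rw = new ReadersWriters();
        rw.read();
        rw.write();
        System.out.println("done");
    }
}
```

A steady stream of arriving readers keeps readCount above zero indefinitely, so a waiting writer never gets db — this is the starvation the book warns about.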
