
CPS 310 second midterm exam, 11/6/2013 Your name please:

Part 1. Sleeping late (80 points)

The "missed wakeup problem" occurs when a thread calls an internal sleep() primitive to block, and another thread calls wakeup() to awaken the sleeping thread in an unsafe fashion. For example, consider the following pseudocode snippets for two threads:

    Sleeper thread:
        S1: Thread sleeper = self();
            listMx.lock();
            list.put(sleeper);
            listMx.unlock();
        S2: sleeper.sleep();

    Waker thread:
        W1: listMx.lock();
            Thread sleeper = list.get();
            listMx.unlock();
        W2: sleeper.wakeup();

(a) What could go wrong? Outline how this code is vulnerable to the missed wakeup problem, and illustrate with an example schedule.

One possible schedule is [S1, S2, W1, W2]. This is the intended behavior: the sleeper puts itself (a reference to its Thread object) on a list and sleeps, and the waker retrieves the sleeping thread from the list and then wakes that sleeper up.

These snippets could also execute in some schedule with W1 < S1 (W1 happens before S1) for the given sleeper. In this case, the waker does not retrieve the sleeper from the list, so it does not try to wake it up. It wakes up some other sleeping thread, or the list is empty, or whatever.

The schedule of interest is [S1, W1, W2, S2]. In this case, the sleeper is on the list, and the waker retrieves that sleeper from the list and issues a wakeup call on that sleeper, as in the first schedule. But the sleeper is not asleep, and so the wakeup call may be lost or it may execute incorrectly. This is the missed wakeup problem.

Note that these raw sleep/wakeup primitives, as defined, are inherently unsafe and vulnerable to the missed wakeup problem. That is why we have discussed them only as "internal" primitives to illustrate blocking behavior: we have not studied them as part of any useful concurrency API.
The point of the question is that monitors and semaphores are designed to wrap sleep/wakeup in safe higher-level abstractions that allow threads to sleep for events and wake other threads when those events occur. Both abstractions address the missed wakeup problem, but they resolve the problem in different ways.
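The lost-wakeup schedule can be made concrete with a small single-threaded simulation. This sketch is not from the exam; it simply models the unsafe semantics the problem statement assumes, namely that a raw wakeup() delivered to a thread that is not asleep is silently lost. The names RawThread and run_schedule are illustrative:

```python
# Hypothetical model of raw sleep/wakeup: a wakeup() delivered to a
# thread that is not asleep is simply lost (the unsafe semantics the
# exam describes).

class RawThread:
    def __init__(self):
        self.asleep = False
        self.runnable = True

    def sleep(self):            # step S2
        self.asleep = True
        self.runnable = False

    def wakeup(self):           # step W2
        if self.asleep:         # only a sleeping thread can be awakened...
            self.asleep = False
            self.runnable = True
        # ...otherwise the wakeup is silently lost

def run_schedule(schedule):
    """Replay a schedule of the labeled steps; return True iff the
    sleeper is runnable at the end."""
    sleeper, lst, target = RawThread(), [], None
    for step in schedule:
        if step == "S1":
            lst.append(sleeper)           # sleeper puts itself on the list
        elif step == "S2":
            sleeper.sleep()
        elif step == "W1":
            target = lst.pop() if lst else None
        elif step == "W2" and target is not None:
            target.wakeup()
    return sleeper.runnable

print(run_schedule(["S1", "S2", "W1", "W2"]))  # True: intended behavior
print(run_schedule(["S1", "W1", "W2", "S2"]))  # False: wakeup lost, sleeper stuck
```

Replaying [S1, W1, W2, S2] shows the wakeup landing before the sleep and vanishing, leaving the sleeper blocked forever.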

CPS 310 second midterm exam, 11/6/2013, page 2 of 7

(b) How does blocking with monitors (condition variables) avoid the missed wakeup problem? Illustrate how the code snippets in (a) might be implemented using monitors, and outline why it works.

Monitors (condition variables) provide a higher-level abstraction: instead of using raw sleep and wakeup, we use wait() and signal/notify(). These primitives serve the desired purpose, but the wait() primitive is integrated with the locking, so that the sleeper holds the mutex right up until it sleeps. The implementation of wait() takes care of releasing the mutex atomically with the sleep. For example:

    Sleeper thread:
        listMx.lock();
        sleeper++;
        listCv.wait();
        sleeper--;
        listMx.unlock();

    Waker thread:
        listMx.lock();
        if (sleeper > 0)
            listCv.signal();
        listMx.unlock();

In these snippets we presume that the condition variable listCv is bound to the mutex listMx. Various languages show this with various syntax. I didn't require it for full credit.

In this example, the sleeper's snippet may execute before or after the waker's, but it is not possible for the waker to see a sleeper's count (sleeper > 0) and then fail to wake a/the sleeper up. The missed wakeup problem cannot occur. You can add a list to this example, like the snippets in part (a), but you don't need one: the condition variable itself maintains an atomic internal list of threads waiting on that CV. I gave full credit for answers that showed proper use of a monitor (condition variable) for the sleeper to wait and the waker to signal. Note that condition variables do not allow the waker to control which sleeper it wakes up, if there is more than one sleeper waiting.

(c) Now we want to design a scheme that is safe from the missed wakeup problem, but using semaphores only. The first step is to implement locks (i.e., mutexes such as the listMx in the snippets of (a)). Implement locks using semaphores.
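The monitor snippets above translate directly into a real threading library. Here is a sketch (not part of the exam) using Python's threading module; the retry loop in the waker exists only so the demo terminates no matter which thread runs first:

```python
import threading
import time

listMx = threading.Lock()
listCv = threading.Condition(listMx)   # CV bound to the mutex listMx
sleepers = 0                           # count of threads waiting on listCv
woken = []

def sleeper_thread(name):
    global sleepers
    with listMx:
        sleepers += 1
        listCv.wait()          # releases listMx atomically while sleeping
        sleepers -= 1
        woken.append(name)

def waker_thread():
    # Retry until a sleeper is visible. The key monitor property: the
    # waker can never observe sleepers > 0 and then fail to wake someone,
    # so no wakeup is missed.
    while True:
        with listMx:
            if sleepers > 0:
                listCv.notify()
                return
        time.sleep(0.01)

s = threading.Thread(target=sleeper_thread, args=("t1",))
w = threading.Thread(target=waker_thread)
s.start(); w.start()
s.join(); w.join()
print(woken)                   # ['t1']
```

Note that the sleeper's count is only ever examined with listMx held, which is exactly what makes the check-then-signal sequence safe.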
As always: "any kind of pseudocode is fine, as long as its meaning is clear." A lock/mutex is equivalent to a binary semaphore. Initialize the semaphore to 1 (free). Lock/Acquire is a P/Down, and Unlock/Release is a V/Up. Any attempt to acquire a lock blocks the caller iff the lock is held (0).

    Lock:   m.P();
    Unlock: m.V();

CPS 310 second midterm exam, 11/6/2013, page 3 of 7

(d) Next implement sleep() and wakeup() primitives using semaphores. These primitives are used as in the code snippets in part (1a) above. Note that sleep() and wakeup() operate on a specific thread. Your implementation should be "safe" in that it is not vulnerable to the missed wakeup problem.

The idea here is to allocate a semaphore for each thread. Initialize it to 0. The thread sleeps with a P() on its semaphore. Another thread can wake a sleeping thread T up with a V() on T's semaphore. Thus each call to sleep() consumes a wakeup() before T can run again. If a wakeup on T is scheduled before the corresponding sleep, then the wakeup is "remembered" and T's next call to sleep simply returns. Note, however, that with this implementation a wakeup is remembered even if the sleep occurs far in the future, and the semaphore records any number of wakeups. Thus it is suitable only if the use of sleep/wakeup is restricted so that a wakeup is issued only after T has declared its intention to sleep, as in the example snippets.

    for each thread: thread.s.init(0);
    thread.sleep():  thread.s.P();
    thread.wakeup(): thread.s.V();

A note on Part 1. This problem was intended to be easy for you and easy for me to grade. Many students understood the point of the exercise to be to reimplement the code snippets, demonstrating some safe use of raw sleep and wakeup, e.g., by keeping some kind of counts or flags, or wrapping the sleep call in a lock. But none of those attempts work: that's why we have the abstractions of monitors and semaphores. For example, wrapping the sleep call in a lock leads to deadlock: the sleeper does not release the lock when it sleeps, so the waker cannot acquire the lock to wake the sleeper up. That is why monitors release the lock atomically in the wait() call. Answers to part (b) that did not show how to use a proper wait() generally received 5/20 points at most.
Part (c) clearly just asked for an implementation of mutexes using semaphores. For (c), many answers gave me superfluous code, which I generally just ignored if the key idea was in there somewhere. In part (d) many students did not indicate clearly that each thread needs its own semaphore. I was forgiving if the solution was otherwise correct and open to that interpretation. Note that giving each thread its own semaphore is generally a useful trick: for example, it is the key to the difficult problem of implementing condition variables using semaphores, as discussed at length in Andrew Birrell's 2003 paper on that problem.
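The per-thread-semaphore trick from part (d) is easy to demonstrate. A short Python sketch (illustrative names, not exam code) shows the crucial property: a wakeup issued before the corresponding sleep is remembered rather than lost:

```python
import threading

class SleepyThread:
    """Part (d): give each thread its own semaphore, initialized to 0."""
    def __init__(self):
        self.s = threading.Semaphore(0)

    def sleep(self):
        self.s.acquire()    # P(): consumes one wakeup, blocking until it arrives

    def wakeup(self):
        self.s.release()    # V(): a wakeup issued early is remembered

t = SleepyThread()
t.wakeup()      # wakeup is scheduled before the corresponding sleep...
t.sleep()       # ...so the sleep simply returns: no missed wakeup
print("sleeper not stuck")
```

With raw sleep/wakeup this schedule would hang forever; here the semaphore's count carries the early wakeup across to the later sleep.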

CPS 310 second midterm exam, 11/6/2013, page 4 of 7

Part 2. Piling on the heap, again (60 points)

For Lab #1 you built a heap manager for use in single-threaded processes. This question addresses the problem of how to adapt a heap manager for use in a multi-threaded process, in which multiple threads may invoke the heap manager API (malloc/free) concurrently. You may answer with reference to your code for Lab #1, or to an idealized heap manager implementation (one that works).

(a) If you use the heap manager with no added concurrency control, what could go wrong? Give three examples. Please be brief but specific. Feel free to illustrate with pseudocode or drawings.

1. malloc/malloc race: two threads allocate the same free block, leading to a doubly allocated block.
2. malloc/malloc race: two threads split the same free block, leading to one or more corrupt headers.
3. malloc/free race: a malloc call selects a free block B1 and races with a free call on an adjacent block B2: the free code coalesces B2 with B1 just as B1 is being allocated, leading to one or more corrupt headers.
4. Free list races: if the heap manager uses any kind of free block list, then the list can be corrupted by concurrent insertions and deletions.
5. Etc.

The answers I got were generally at this level of detail. A few answers drew corrupted linked lists and such, which was appreciated. But I gave the points for any clear answer addressing realistic cases.

Some answers suggested that a malloc could race with a free, such that the malloc call does not "see" the newly freed block and returns an error because there are no free blocks. But this can occur even with a thread-safe heap manager: since those malloc and free calls are concurrent, they could complete in either order.
Some answers suggested various problems that could occur if the application itself was not thread-safe, e.g., if two threads tried to free the same block, or one thread references a block after another thread frees it. But these are bugs in the application, and cannot be fixed by controlling concurrency in the heap manager.

CPS 310 second midterm exam, 11/6/2013, page 5 of 7

(b) Outline a concurrency control scheme for your heap manager and briefly discuss its tradeoffs. You may illustrate with pseudocode if you wish. Be sure to address the following questions. How many locks? What data is covered by the locks? What are the performance implications of your locking scheme? Do you need any condition variables? If so, how would you use them, i.e., under what conditions would a thread wait or signal? Is deadlock or starvation a concern? Does your scheme require any changes to the heap abstraction, i.e., any changes to the API or to a program that uses the heap manager?

The easy answer here was just to "slap a lock on it". Simply declaring both methods as "synchronized" is good enough. This solution is free from deadlock. It is also free from starvation, provided that the locking primitives are sufficiently fair for practical use (a safe assumption). It requires no changes to the heap abstraction.

Some answers added a condition variable for memory exhaustion. When the heap runs out of memory, a thread calling malloc() could wait() until some free() calls come in and free up enough space for the malloc to proceed. This is OK, but it does create a deadlock hazard. For example, it will deadlock if the thread that calls malloc() is the same thread that is supposed to free(), and/or if the thread holds some other lock or resource in the application at the time that it calls malloc().

Regarding performance: holding a single global lock for the duration of malloc() and free() serializes these operations. It makes it impossible for two or more threads to malloc/free memory concurrently. If the program spends a lot of time in malloc and free, then this could be a problem. But if the heap primitives are fast, it is likely "good enough" unless we are using tens of cores. Ask yourself: how much of its time does each thread spend in malloc/free?
If it is, say, 1%, then contention on that lock would at most double that cost, even on an application using tens of cores (say, 50).

Some answers tried to address the performance concern by discussing possibilities for finer-grained locking, e.g., on different regions of the heap, or multiple free lists. These were generally good answers and I gave them some extra points. But it is tricky to do this without introducing a race. For example, a few solutions suggested using separate locks for malloc() and free(). This is a very bad mistake.

I was surprised by the number of answers that used the soda machine producer/consumer template, with a second condition variable that blocks free() calls until... something. Why? But at least I know you studied...
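The "slap a lock on it" scheme fits in a few lines. Here is a hypothetical stand-in heap (a bare free-block list, not Lab #1 code) with one global lock covering all heap metadata, exercised concurrently to show that the metadata stays intact:

```python
import threading

class ThreadSafeHeap:
    """One global lock serializes malloc/free. The lock covers all heap
    metadata; here a simple free-block list stands in for headers,
    free lists, and coalescing state."""
    def __init__(self, blocks):
        self._lock = threading.Lock()
        self._free = list(blocks)

    def malloc(self):
        with self._lock:
            return self._free.pop() if self._free else None

    def free(self, block):
        with self._lock:
            self._free.append(block)

heap = ThreadSafeHeap(range(8))

def churn():
    # Allocate and release repeatedly, racing with the other threads.
    for _ in range(10_000):
        b = heap.malloc()
        if b is not None:
            heap.free(b)

threads = [threading.Thread(target=churn) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(heap._free))   # all 8 blocks intact, none lost or duplicated
```

Without the lock, the pop/append pairs could race exactly as in the malloc/malloc and free-list examples of part (a); with it, every block survives the churn.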

CPS 310 second midterm exam, 11/6/2013, page 6 of 7

Part 3. Sharing the plumbing (60 points)

The Computer Science building at the Old School was built for a bunch of men. But today many of the computer scientists are women. Unfortunately, the building has only one restroom. In keeping with the Old Traditions, the community has decided to coordinate use of the restroom by the following policy: the restroom may be visited concurrently by up to N individuals, provided they are of the same gender (M or F).

This problem asks you to implement a module to coordinate use of the restroom, for use in a simulation. Each individual is represented by a thread. A thread requests use of the restroom by calling one of the methods arriveM() or arriveF(), according to its gender. A thread waits in its arrive*() method until the restroom is available for its use. A thread departs the restroom by calling the method departM() or departF(). All threads are well-behaved. Implement pseudocode for your coordination scheme. Be sure your solution is free of races and avoids starvation and deadlock.

Here is an initial cut at a solution:

    synchronized arriveM() {
        while (females || occupants == N)
            wait();
        males = true;
        occupants++;
    }

    synchronized departM() {
        occupants--;
        if (occupants == 0)
            males = false;
        notifyAll();
    }

The arriveF and departF methods are symmetric with the male logic, and may be omitted. This solution gets the basics right: it locks as needed, loops before leaping, respects capacity constraints, tracks the gender in control of the restroom, transfers control at some reasonable time, and wakes up anyone who needs to know about a departing user. But it is vulnerable to starvation. The problem is that these males do not respect arriving females. Even so, this is a 50-point answer.
A properly synchronized solution loses another 10 points if it does not respect capacity constraints, or 10 points if it leaves users hanging on a change of control (signal/notify instead of broadcast/notifyAll).
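For concreteness, the initial (50-point) solution can be transcribed into Python's threading module. This sketch is a stand-in for the exam's synchronized/wait/notifyAll pseudocode, with arriveF/departF written out; it still has the starvation problem just described:

```python
import threading

class Restroom:
    """The basic scheme: up to n occupants, all of one gender."""
    def __init__(self, n):
        self.n = n
        self.cv = threading.Condition()   # one monitor lock + CV
        self.occupants = 0
        self.males = False
        self.females = False

    def arrive_m(self):
        with self.cv:
            while self.females or self.occupants == self.n:
                self.cv.wait()            # loop before you leap
            self.males = True
            self.occupants += 1

    def depart_m(self):
        with self.cv:
            self.occupants -= 1
            if self.occupants == 0:
                self.males = False
            self.cv.notify_all()          # broadcast on any departure

    def arrive_f(self):                   # symmetric with the male logic
        with self.cv:
            while self.males or self.occupants == self.n:
                self.cv.wait()
            self.females = True
            self.occupants += 1

    def depart_f(self):
        with self.cv:
            self.occupants -= 1
            if self.occupants == 0:
                self.females = False
            self.cv.notify_all()

r = Restroom(2)
r.arrive_m(); r.arrive_m()               # two males enter (at capacity)

entered = []
f = threading.Thread(target=lambda: (r.arrive_f(), entered.append("F")))
f.start(); f.join(0.2)
print(f.is_alive())                      # True: the female is blocked

r.depart_m(); r.depart_m()               # restroom empties...
f.join()
print(entered)                           # ['F']: ...and the female enters
```

The demo shows the gender-exclusion and handoff-on-empty behavior; exercising the starvation scenario would require a stream of arriving males, which is exactly what the improved solution below the fold guards against.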

CPS 310 second midterm exam, 11/6/2013, page 7 of 7

There are many possible solutions for sharing the restroom fairly, or at least avoiding starvation. For example, we could extend the initial cut as follows (the additions are the waiting counts and the change-of-control checks):

    synchronized arriveM() {
        mWaiting++;
        while (females || occupants == N || (males && fWaiting > 0))
            wait();
        mWaiting--;
        males = true;
        occupants++;
    }

    synchronized departM() {
        occupants--;
        if (occupants == 0) {
            males = false;
            if (fWaiting > 0)
                females = true;
        }
        notifyAll();
    }

The idea here is that a waiting user of the out-gender forces a change of control at the earliest opportunity: no more users of the in-gender enter the restroom until the waiting out-gender gets a turn.

This solution is not perfect. Both solutions have a thundering herd problem when the restroom is at capacity. For example, if a crowd of males is waiting to enter the full restroom, they will all wake up and contend for any free space that opens up. A signal might be better than a broadcast in this situation. Also, like all problems of this nature, there is a scheduling policy choice that is open to different balances of fairness and efficiency. With this solution, large crowds of arriving males and females may share the restroom in strict alternation by gender, eliminating any possibility of concurrent access by users of the same gender. This might be fair, but it is not efficient. A better solution might do something reasonable like allow up to N members of a given gender to enter the restroom even if there are members of the other gender waiting. "Left as an exercise."
