
Logical Concurrency Control From Sequential Proofs


Presentation Transcript


  1. Logical Concurrency Control From Sequential Proofs By: Deshmukh, Ramalingam, Ranganath and Vaswani Presented by: Omer Toledano

  2. Overview • Using a sequential proof to develop locking schemes for concurrency control. • Improving the scheme to achieve linearizability.

  3. Example – Compute with Cache • Assume we have a function f that we are trying to calculate. • f is a computationally intensive function, so we decide to cache its last result.

  4. Example – Compute with Cache • Specification: • We want to create a function called “Compute” that returns f(num). • The implementation of “Compute” will use a cache for the last result to improve performance.

  5. Example - Code
int lastNum = 0;
int lastRes = f(0);
/* @return f(num) */
int Compute(int num) {
  int res;
  if (lastNum == num) {
    res = lastRes;
  } else {
    res = f(num);
    lastNum = num;
    lastRes = res;
  }
  return res;
}

  6. Proof Model

  7. Proving Specification – True Branch
int lastNum = 0;
int lastRes = f(0);
/* @return f(num) */
int Compute(int num) {
  int res;
  // lastRes == f(lastNum)
  if (lastNum == num) {
    // lastRes == f(lastNum) && lastNum == num
    res = lastRes;
    // lastRes == f(lastNum) && lastNum == num && res == lastRes
  } else { … }
  // res == f(num) && lastRes == f(lastNum)
  return res;
}

  8. Proving Specification – False Branch
int lastNum = 0;
int lastRes = f(0);
/* @return f(num) */
int Compute(int num) {
  int res;
  // lastRes == f(lastNum)
  if (lastNum == num) { … } else {
    // lastRes == f(lastNum) && lastNum != num
    res = f(num);
    // res == f(num)
    lastNum = num;
    // res == f(num) && lastNum == num
    lastRes = res;
    // res == f(num) && lastRes == res && lastNum == num
  }
  // res == f(num) && lastRes == f(lastNum)
  return res;
}

  9. Is this function thread safe? • No! Consider three concurrent calls: Compute(5), Compute(5), Compute(7).
int lastNum = 0;
int lastRes = f(0);
/* @return f(num) */
int Compute(int num) {
  int res;
  if (lastNum == num) {
    res = lastRes;
  } else {
    res = f(num);
    lastNum = num;
    lastRes = res;
  }
  return res;
}

  10. Consider: Compute(5), Compute(5), Compute(7)
int Compute(5) {
  int res;
  // lastRes == f(lastNum)
  if (lastNum == num) {
    // lastRes == f(lastNum) && lastNum == num
    res = lastRes;
  } else {
    // interleaved here: Compute(7)
    res = f(num);
    // res == f(num)
    lastNum = num;
    // res == f(num) && lastNum == num
    lastRes = res; // res == f(7)
  }
  return res;
}
In this scenario the result of the second Compute(5) would be wrong: Compute(7) invalidates lastRes == f(5) between the test and the read.

  11. How would you fix that?
int Compute(int num) {
  int res;
  // acquire(l)
  if (lastNum == num) {
    res = lastRes;
  } else {
    // release(l)
    res = f(num);
    // acquire(l)
    lastNum = num;
    lastRes = res;
  }
  // release(l)
  return res;
}

  12. What changed in the concurrent setting? • At every stage we asserted a set of predicates based on the precondition and the current command. • In the concurrent setting we saw that some of the predicates were invalidated while executing the command, thus yielding a wrong answer.

  13. Goals • We want to find a way to transform sequentially correct code to concurrently correct code using the same proof.

  14. Motivation • It’s much easier to write a correct sequential program than a correct concurrent one, so we would like to automate the “thread proofing” process. • Sequential proofs can shed light on the “true” critical sections and on what makes them “critical” (predicate invalidation), and hopefully yield smaller critical sections.

  15. Algorithm - Idea • Define a set of locks that correspond to the predicates generated by the sequential proof. • Think of the program as a graph where each vertex is the conjunction of predicates that must hold at that point of the program, and the edges are program commands.

  16. Algorithm – Idea Cont. • Assume we are at some point of the program, with two vertices u, v and an edge e = (u, v). • We acquire all the locks corresponding to the predicates that are new at v. • We release every lock whose predicate is no longer needed at v.

  17. Algorithm - Example
int Compute(int num) {
  int res;
  // lastRes == f(lastNum)
  if (lastNum == num) {
(u) // lastRes == f(lastNum) && lastNum == num
(e)   res = lastRes;
(v) // lastRes == f(lastNum) && lastNum == num && res == lastRes
At v only one new predicate is added (res == lastRes), so we must take its lock before executing the command e:
(u) // lastRes == f(lastNum) && lastNum == num
(e)   /* acquire(l: res == lastRes) */ res = lastRes;
(v) // lastRes == f(lastNum) && lastNum == num && res == lastRes

  18. Correctness of Algorithm • Input: a library L with embedded assertions satisfied by all sequential executions of L. • Output: a library L’ obtained by augmenting L with concurrency control, such that every execution of L’ is “safe”.

  19. Definitions

  20. Definitions

  21. Proof

  22. Proof – Cont.

  23. Is that enough? • No! What about deadlocks? A deadlock can happen when: • While holding the lock on p, one thread tries to acquire the lock on q. • At some other point, while holding the lock on q, another thread tries to acquire the lock on p. • This causes a deadlock, since each is already holding the lock the other needs. • To solve this we define an equivalence relation that merges all such locks into one merged lock.

  24. Algorithm – are all locks necessary?
int Compute(int num) {
  int res;
  // acquire(l: lastRes == f(lastNum))
  // lastRes == f(lastNum)
  if (lastNum == num) {
    // acquire(l: lastNum == num)
    // lastRes == f(lastNum) && lastNum == num
    res = lastRes;
  } else { … }
The lock for lastNum == num is redundant: it is always acquired while another lock is already held, and released when that other lock is released.

  25. Optimizations • As shown on the last slide, the algorithm can introduce redundant locking, e.g. generate a lock l that is always held whenever another lock q is held; such a lock can be eliminated. • Also, if a predicate is never invalidated, we don’t need to acquire its lock before executing commands.

  26. Optimizations – Cont. • Use read-write locks: • When a thread only wants to “preserve” a predicate, it can acquire a read lock (shared with other reader threads). • If it wants to invalidate the predicate, it needs to acquire a write lock.

  27. Another problem?
int x = 0;
int Increment() {
  int tmp;
  // x == x_in
  tmp = x;
  tmp = tmp + 1;
  // going to invalidate x == x_in
  x = tmp;
  return tmp;
}

  28. Another problem?
int x = 0;
int Increment() {
  int tmp;
  // acquire(l)
  tmp = x;
  // release(l)
  tmp = tmp + 1;
  // acquire(l)
  x = tmp;
  // release(l)
  return tmp;
}

  29. What can happen? • Two concurrent calls to Increment() can both read x == 0 and both return 1. • After both increments, x equals 1 instead of 2. • In general we can get “dirty reads” and “lost updates”.

  30. Improvement • We change our locking scheme to solve the previous problem: if some branch starting at a program point is going to falsify a predicate, we acquire that predicate’s lock at that point too.
int x = 0;
int Increment() {
  int tmp;
  // acquire(l)
  tmp = x;
  tmp = tmp + 1;
  x = tmp;
  // release(l)
  return tmp;
}

  31. Is that enough? • What about return values?
int x = 0, y = 0;
IncX() {
  // acquire(l: x == x_in)
  x = x + 1;
  (ret11, ret12) = (x, y);
  // release(l: x == x_in)
}
IncY() {
  // acquire(l: y == y_in)
  y = y + 1;
  (ret21, ret22) = (x, y);
  // release(l: y == y_in)
}
IncX() returns (1, 1) and IncY() returns (1, 1).

  32. This is not linearizable
int x = 0, y = 0;
IncX() {
  // acquire(l: x == x_in)
  x = x + 1;
  (ret11, ret12) = (x, y);
  // release(l: x == x_in)
}
IncY() {
  // acquire(l: y == y_in)
  y = y + 1;
  (ret21, ret22) = (x, y);
  // release(l: y == y_in)
}
In any sequential order of the two calls, their return values would have to be different.

  33. Solution • We have to determine whether the execution of a statement s can potentially affect the return value of another procedure invocation. • We do so by computing whether a statement s can falsify a predicate on another procedure’s return value, and locking accordingly.

  34. Results • Evaluating on real-world examples and benchmarks, the authors showed that their generated programs achieved the same or better results than hand-written synchronization. • The improvement came from introducing more locks, which minimized the critical sections and protected them with separate locks.

  35. Results • In the last section they presented an extension that guarantees linearizability with respect to a sequential specification, a weaker requirement than atomicity that permits more concurrency. • It achieves linearizability without two-phase locking.

  36. Conclusions • This algorithm helps automate the “thread proofing” process and achieves good results. • It gives a better understanding of the root causes of critical sections, letting us separate them with different locks for more concurrency.

  37. Conclusions • The logical point of view also helps us understand which invariants need to be preserved.

  38. Questions?
