Performance of memory reclamation for lockless synchronization


Presentation Transcript


  1. Performance of memory reclamation for lockless synchronization
  By Thomas E. Hart, Paul E. McKenney, Angela Demke Brown, Jonathan Walpole
  Handwaved about by Jim Cotillier
  CS510 – Concurrent Systems

  2. The Problem
  • Why not just stick with classical locks?
    • Performance issues (blocking)
    • CAS-class instruction overhead
    • Susceptible to:
      • Deadlock
      • Priority inversion
      • Convoying
  • Lockless synchronization addresses this, but is exposed to Read/Reclaim Races (see the sketch below)
    • Reclamation of shared data elements without coordination with all contenders leads to an inconsistent global state
    • Such ex post facto references to deleted data yield unpredictable results
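
A minimal C sketch of such a race; the list layout and the interleaving comments are illustrative, not taken from the paper:

    #include <stdlib.h>

    struct node { int key; struct node *next; };
    struct node *head;               /* shared, lockless list          */

    void reader(void)                /* Thread A */
    {
        struct node *n = head;       /* load a shared reference        */
        /* ... A is preempted here while B removes and frees n ... */
        int k = n->key;              /* ex post facto read of a zombie */
        (void)k;                     /* the result is unpredictable    */
    }

    void remover(void)               /* Thread B */
    {
        struct node *n = head;
        head = n->next;              /* unlink n without coordinating  */
        free(n);                     /* with A, then reclaim it at once */
    }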

  3. Uncoordinated reclamation…

  4. Some Approaches to Solutions
  • QSBR -- Quiescent-State-Based Reclamation
  • EBR/NEBR -- Epoch-Based Reclamation
  • HPBR -- Hazard-Pointer-Based Reclamation
  • LFRC -- Lock-Free Reference Counting
  • Functionality is provided by a client/library interface
  • But no single, invariant set of interface semantics exists across all schemes

  5. QSBR
  • Permits the reclamation of data only after a time interval, called a Grace Period, elapses
  • QSBR defines a Grace Period as a temporal interval (a, b) such that any data element deleted before a can be reclaimed after b
  • A Quiescent State is a state of a thread T in which T holds no references to shared elements, active or deleted (zombie)
  • Any interval in which each thread passes through a Quiescent State is a QSBR Grace Period

  6. Three-thread QSBR example…

  7. QSBR Fuzzy Barriers
  • Protect access to “protected” code which no thread should execute before all other threads reach a specified point
  • Do not absolutely block, a la hard barriers; they only prevent execution of “protected” code until the barrier opens
  • Thus, they can be used to synchronize reclamation

  8. Using QSBR
  • The client explicitly declares a Quiescent State: … and thereby enters a fuzzy barrier (sketched below)
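
A minimal sketch of the usage pattern, assuming a QSBR library that exposes the quiescent_state() call named on the next slide; the list type and helper functions are illustrative:

    struct list;
    struct node;
    struct node *list_search(struct list *l, int key);  /* assumed helper */
    int  next_key(void);                                /* assumed helper */
    void process(struct node *n);                       /* assumed helper */
    void quiescent_state(void);    /* the QSBR library call named below  */

    void reader_loop(struct list *l)
    {
        for (;;) {
            struct node *n = list_search(l, next_key()); /* lockless read */
            if (n)
                process(n);    /* no shared references held past here    */
            quiescent_state(); /* declare quiescence: enter the fuzzy    */
        }                      /* barrier so deferred frees can proceed  */
    }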

  9. Problem: thread failure
  • A dead thread cannot call quiescent_state(), and can thus force QSBR to block…

  10. EBR (Fraser)
  • Uses Grace Periods, like QSBR
  • But does not rely upon explicit client Quiescent State declarations, as QSBR does
  • Encapsulates lockless operations within Critical Sections…
    • …which the client explicitly declares, via the functions critical_enter() and critical_exit() (sketched below)
  • Counts the number of Critical Region invocations, and then attempts to enter a fuzzy barrier to reclaim memory
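
A minimal sketch of a lockless lookup bracketed by the critical_enter()/critical_exit() calls named above; the list type and helpers are illustrative:

    struct list;
    struct node;
    struct node *list_search(struct list *l, int key);  /* assumed helper */
    void process(struct node *n);                       /* assumed helper */
    void critical_enter(void);     /* EBR library calls named above      */
    void critical_exit(void);

    void ebr_lookup(struct list *l, int key)
    {
        critical_enter();      /* pin this thread to the current epoch   */
        struct node *n = list_search(l, key);   /* lockless traversal    */
        if (n)
            process(n);        /* must finish before critical_exit():    */
        critical_exit();       /* no reference may escape the section    */
    }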

  11. Linked list search using EBR

  12. EBR Epochs
  • Epochs are modeled on ℤ₃, the group of equivalence classes of the integers modulo 3
  • Epochs are hierarchical: Global and Local
  • Each epoch has an associated zombie-element list
  • The fuzzy barrier for reclamation is entered upon entry to each new epoch
  • A thread entering a Critical Region updates its Local Epoch (LE) to match the Global Epoch (GE)
  • After M (a magic number) LE updates, a thread will attempt to increment the GE

  13. EBR Epochs Cont’d.
  • A GE update attempt succeeds only if the LE of each thread currently in a CR matches the GE
  • Since threads update their LE only at the start of a CR, when every such thread’s LE equals the GE, every lockless operation that was in progress the last time the GE changed has completed
  • Thankfully, a grace period has expired! (see the sketch below)
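
A sketch of the GE-advance check this implies; the names (global_epoch, threads[], local_epoch) and the fixed thread count are assumptions for illustration, not the paper’s code:

    #include <stdatomic.h>
    #include <stdbool.h>

    #define EPOCHS   3           /* epochs are counted in Z/3Z           */
    #define NTHREADS 8           /* illustrative fixed thread count      */

    struct ebr_thread {          /* per-thread state (assumed layout)    */
        atomic_bool in_critical; /* currently inside a critical region   */
        atomic_int  local_epoch; /* epoch observed at critical_enter()   */
    };

    static atomic_int        global_epoch;
    static struct ebr_thread threads[NTHREADS];

    /* Try to advance the GE; succeeds only when every thread inside a
     * critical region has caught up to the current epoch. */
    static bool try_advance_global_epoch(void)
    {
        int ge = atomic_load(&global_epoch);
        for (int t = 0; t < NTHREADS; t++) {
            if (atomic_load(&threads[t].in_critical) &&
                atomic_load(&threads[t].local_epoch) != ge)
                return false;    /* a lagging reader blocks the advance  */
        }
        /* All active readers have observed ge: a grace period has
         * expired, so garbage deferred two epochs ago is reclaimable. */
        atomic_compare_exchange_strong(&global_epoch, &ge,
                                       (ge + 1) % EPOCHS);
        return true;
    }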

  14. EBR Epoch Cycle

  15. NEBR – a Modest EBR Improvement
  • EBR must pay for the expensive fences at the beginning and end of each CS
  • Modeled a little after QSBR: have the application set/reset a “critical section(s) may be in here” flag
    • NEBR then does not “automatically” do this in each CS
    • “Application independence” dies in favor of performance
  • Reduces EBR’s overhead modestly, bringing it closer to QSBR’s
  • NEBR is attractive, as the programmer’s responsibilities are limited to marking sections that might contain lockless operations (sketched below)
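
A sketch of what NEBR-style marking might look like; the nebr_set_active()/nebr_clear_active() names are assumptions, not the paper’s interface:

    void nebr_set_active(void);   /* assumed flag calls: "lockless       */
    void nebr_clear_active(void); /* operations may occur in here"       */
    void handle_request(void);    /* may contain lockless searches       */

    void worker_batch(int n)
    {
        nebr_set_active();        /* one fence for the whole region,     */
        for (int i = 0; i < n; i++)
            handle_request();     /* instead of one fence pair per EBR   */
        nebr_clear_active();      /* critical section                    */
    }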

  16. HPBR/SMR (Michael)
  • Each thread T has K (a magic number) Hazard Pointers, used to protect elements from reclamation by other threads
    • Thus, for N threads, H = NK HPs exist in toto
    • K is small: often 2 (queues and lists) or 1 (stacks)
  • T caches removed elements privately in a list P of size R (another magic number)
  • After R removals, T reclaims each element in P that does not have a corresponding HP (see the scan sketch below)
  • If T fails, a maximum of K + R removed elements can be leaked
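
A sketch of the scan-and-reclaim step, assuming a shared hazard-pointer table hp_table[] and a private cache structure; both names and the sizes are illustrative:

    #include <stdatomic.h>
    #include <stdlib.h>

    #define NTHREADS 8
    #define K        2                   /* hazard pointers per thread   */
    #define H        (NTHREADS * K)      /* total hazard pointers        */

    extern _Atomic(void *) hp_table[H];  /* all threads' HPs (assumed)   */

    struct plist { void *elem[64]; size_t count; };  /* private cache,   */
                                                     /* R <= 64 here     */
    /* After R removals, free every cached element no HP protects. */
    void scan_and_reclaim(struct plist *p)
    {
        size_t i = 0;
        while (i < p->count) {
            void *e = p->elem[i];
            int hazardous = 0;
            for (int h = 0; h < H; h++)
                if (atomic_load(&hp_table[h]) == e) { hazardous = 1; break; }
            if (!hazardous) {
                free(e);                          /* safe to reclaim     */
                p->elem[i] = p->elem[--p->count]; /* compact the cache   */
            } else {
                i++;                              /* retry on next scan  */
            }
        }
    }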

  17. HPBR Paradigm

  18. HPBR Paradigm Cont’d.
  • Hazardous References are references to shared elements that may now be zombies or ABA situations
  • Algorithms using HPBR must identify a Hazardous Reference, set a Hazard Pointer, and then check for element removal (sketched below)
  • If an element has not been removed, it continues to be referentially safe
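
A sketch of that identify/set/re-check protocol for a single load; the node layout is illustrative:

    #include <stdatomic.h>

    struct node { int key; _Atomic(struct node *) next; };

    struct node *hp_protected_load(_Atomic(struct node *) *src,
                                   _Atomic(struct node *) *hp_slot)
    {
        struct node *n;
        do {
            n = atomic_load(src);
            atomic_store(hp_slot, n);     /* publish the hazard pointer  */
            /* Re-read: if *src still equals n, the element was not
             * removed before the HP became visible, so it is safe.
             * (C11 seq_cst atomics supply the needed fence here.) */
        } while (atomic_load(src) != n);
        return n;                         /* safe until hp_slot is cleared */
    }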

  19. LFRC (Valois; Detlefs, et al.)
  • Threads track the instantaneous count of references to each element
  • When the count reaches 0, the element can be reclaimed (see the sketch below)
  • The many variations on this scheme may or may not allow element types to change upon reclamation
    • Type invariance may be required (Valois); type independence requires DCAS (Detlefs, et al.)
  • Zombies may consume unbounded memory
  • Performance may be worse than lock-based schemes
    • CAS and FAA (Intel: LOCK XADD) are very expensive
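
A sketch of the release side of per-element counting; the hard part of LFRC, safely acquiring a count on an element the thread does not yet own, is what drives the variations above and is omitted here:

    #include <stdatomic.h>
    #include <stdlib.h>

    struct node { atomic_int refs; struct node *next; int key; };

    /* Drop one reference; the last reference out reclaims the element. */
    void node_release(struct node *n)
    {
        /* One atomic decrement (e.g. LOCK XADD on Intel) per release.  */
        if (atomic_fetch_sub(&n->refs, 1) == 1)
            free(n);                      /* count hit 0: reclaim       */
    }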

  20. Summary of Schemes
  • QSBR -- detects grace periods using application-specified quiescent states
  • EBR -- detects grace periods using application-independent epochs
  • HPBR -- uses per-thread Hazard Pointers to synchronize reclamation
  • LFRC -- uses per-element reference counts to synchronize reclamation

  21. Performance Factors…
  • Performance depends on many factors:
    • The memory consistency model and its constraints
    • Workload, contention and thread scheduling
  • A sequentially consistent memory model is still generally assumed by the lock-free literature
    • But hardware trends are toward weaker models
    • The coder then needs to rely on fences (memory barriers), which add artificial overhead
  • HPBR, EBR and LFRC require per-operation fences, but QSBR does not; this is shown to be a distinct advantage

  22. Performance Factors Cont’d.
  • Thread preemption
    • Can start when the number of threads > the number of CPUs
    • Descheduled threads are blocked threads, as far as reclamation schemes are concerned
    • Anything that prevents a Grace Period from closing is bad
  • Threads may sometimes need to borrow memory from a locked, global pool
    • A thread may be preempted whilst holding such a lock, setting up a thread convoy on memory
  • HPBR bounds memory stress and has an advantage here

  23. The μBenchmark

  24. The μBenchmark Cont’d.
  • Master thread flow logic (sketched below):
    • Create N children
    • Start a timer
    • When the timer expires, stop the children
  • Average execution time per measured operation = test duration / number of operations
  • Net CPU time = execution time × number of threads
  • If the thread count > the CPU count, report execution time; otherwise report CPU time
  • Driver parameters were selected so as not to be biased toward any particular reclamation scheme
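
A pthreads sketch of this master/children flow; the names (run_flag, do_measured_operation) and the fixed parameters are illustrative, not the paper’s driver:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    #define N 8                           /* illustrative child count   */

    static atomic_bool  run_flag = true;
    static atomic_ulong total_ops;

    static void do_measured_operation(void)
    {
        /* stand-in for the test-specific lockless operation */
    }

    static void *worker(void *arg)        /* child thread               */
    {
        (void)arg;
        unsigned long ops = 0;
        while (atomic_load(&run_flag)) {  /* run until the master stops us */
            do_measured_operation();
            ops++;
        }
        atomic_fetch_add(&total_ops, ops);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[N];
        unsigned duration = 1;            /* seconds; illustrative      */

        for (int i = 0; i < N; i++)       /* create N children          */
            pthread_create(&tid[i], NULL, worker, NULL);
        sleep(duration);                  /* start a timer...           */
        atomic_store(&run_flag, false);   /* ...then stop the children  */
        for (int i = 0; i < N; i++)
            pthread_join(tid[i], NULL);

        /* average execution time per measured operation */
        printf("%g ns/op\n",
               duration * 1e9 / (double)atomic_load(&total_ops));
        return 0;
    }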

  25. The μBenchmark Cont’d.
  • CAS implemented on POWER via larx/stcx (LL/SC)
  • Fences implemented via eieio (“Enforce In-order Execution of I/O”)
  • Spin locks implemented via CAS and fences
  • HPBR Hazard Pointers were statically allocated
    • Some algorithms may require unbounded HP counts
  • The choice of placement of QSBR QS declarations may not be obvious in some algorithms

  26. Performance Measurement Guidelines
  • Measure the base costs first
    • Single-threaded execution, small data structures
    • No contention, no preemption, no traversal of long lists
    • Non-blocking queues, single-element linked lists…
  • Then move toward complexity
    • Pedagogical approach: try to change only one factor at a time
  • Consider the read-only (R/O), write-only (W/O) and read/write (R/W) cases for each of the examined reclamation schemes

  27. Base Performance Costs

  28. Scalability with Fractional Workload

  29. Scalability with Traversal Length

  30. Scalability of LFRC

  31. No Preemption; R/O Workload

  32. No Preemption; W/O Workload

  33. Preemption; R/O Workload

  34. Preemption; W/O Workload

  35. Memory Stress Busy Wait

  36. Hash Tables; Update Fraction Workload

  37. No Preemption; R/O Workload with NEBR

  38. Case Study—RCU API in Linux
  • RCU concepts (“Read-Copy Update”):
    • Lockless concurrent reads with deferred destruction of zombie elements
    • Writers may not prevent readers from accessing shared data
    • Writers must coordinate with each other in some way
      • RCU does not specify what way
  • RCU neither blocks nor fails for readers
  • Preemptible kernels necessitate the use of rcu_read_lock() and rcu_read_unlock() to toggle kernel preemption
    • …so that context switches do not occur at intolerable times (see the sketch below)
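
A minimal reader/updater sketch using the real Linux RCU primitives (rcu_read_lock(), rcu_dereference(), rcu_assign_pointer(), synchronize_rcu()); the struct and the update policy are illustrative:

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct foo { int data; };
    static struct foo __rcu *gp;      /* RCU-protected global pointer   */

    int reader_gets_data(void)
    {
        int d;
        rcu_read_lock();              /* begin read-side critical section */
        d = rcu_dereference(gp)->data;/* assumes gp has been published  */
        rcu_read_unlock();            /* the reader never blocks or fails */
        return d;
    }

    void updater_sets_data(int v)     /* updaters coordinate among      */
    {                                 /* themselves; RCU does not say how */
        struct foo *newp = kmalloc(sizeof(*newp), GFP_KERNEL);
        struct foo *oldp = rcu_dereference_protected(gp, 1);

        if (!newp)
            return;
        newp->data = v;
        rcu_assign_pointer(gp, newp); /* publish the new version        */
        synchronize_rcu();            /* wait for a grace period...     */
        kfree(oldp);                  /* ...then reclaim the zombie     */
    }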

  39. Case Study—RCU Cont’d.
  • QSBR is a natural choice for memory reclamation
  • EBR could be used as well, but would not offer any advantages over QSBR
  • RCU is best targeted at read-mostly data structures
    • Rare updates imply rare reclamation

  40. Case Study—RCU Cont’d.
  • The SysV IPC subsystem was implemented in Linux via CR-QSBR
    • It implements semaphores, message queues and shared memory
  • Applications use an integer Accessor ID (AID) to access the in-kernel data structures (essentially a “resource handle”)
  • The dynamic, mostly-read AID/resource array, formerly spinlocked in stock Linux, was protected here through CR-QSBR instead, and benchmarked

  41. Case Study—RCU Cont’d. Semopbench, 8-CPU, 700 MHz Intel P-III

  42. Case Study—RCU Cont’d. DBT1 Database Benchmark Raw Results

  43. Case Study—RCU Cont’d. DBT1 database benchmark results (TPS)

  44. Conclusions
  • Reclamation has a huge effect on lockless algorithm performance
    • So one must tune it to the design of the application
  • Both QSBR and EBR can suffer in the face of memory exhaustion
  • HPBR and EBR have higher base costs than QSBR, due to fences
    • The NEBR enhancements modestly improve EBR
  • LFRC has the highest overhead, due to its per-element atomic instruction requirement

  45. Conclusions Cont’d.
  • HPBR scales poorly as the traversal length increases
  • QSBR is, overall, the best-performing reclamation scheme
    • …and the best suited to an OS kernel environment
  • Lockless approaches using QSBR can outperform locking approaches by a large margin

  46. Rantings -- STAE
  • STAE – Specified Thread Abnormal Exit
  • The user provides Exit code to be run when a thread error is trapped
  • The Exit is driven by the etrap interrupt logic; it is called immediately after an etrap (e.g., SEGV) is detected
  • The Exit has full access to the environment of the failing thread; it may modify any data, etc.
  • The Exit may:
    • Allow the failing thread to die (the status quo)
    • Resuscitate the failing thread by telling the dispatcher to restart it at an Exit-specified point in its code
    • Call a completely new program to run in place of the failing thread (with all of the failing thread’s credentials and context)

  47. Rantings -- PLO
  • PLO – Perform Locked Operation (IBM z platform)
  • A meta-instruction that atomically encapsulates all of CAL, CAS, DCAS, CASAS, CASADS and CASATS into single-instruction global atomicity
  • 32-, 64-, and 128-bit operands are supported
  • Acquires a global hardware interlock unique to PLO
  • Is very powerful and flexible, but is so complex that it may require a pre-built parameter list just to “program” it!
    • It usually needs to be coded with a zillion operands :-)
  • Its proprietary μalgorithm has to be huge, but whether its utility outstrips its cost enough to yield a net gain in performance has not yet been answered (AFAIK)

  48. Questions/Musings
  • Suppose DCAS were “improved” so that it used an order of magnitude fewer clocks than it does today
    • To what extent could macroscopically faster hardware atomicity affect the utility of these lockless schemes?
  • Could the STAE formalism provably solve the failed-thread blocking problem in QSBR?
    • If you believe the answer is yes, then, based on the empirical data in this paper, would the paradigm (QSBR + STAE) satisfy Ockham’s Razor and thus become the overall best solution to the lockless reclamation problem?
