
Locality in Concurrent Data Structures


Presentation Transcript


  1. Locality in Concurrent Data Structures Hagit Attiya, Technion

  2. Abstract Data Types (ADT) An abstract representation of data & a set of methods (operations) for accessing it; an ADT is given by a signature and a specification

  3. Implementing High-Level ADT • From lower-level ADTs: high-level operations translate into primitives on base objects • Obvious: read, write • Common: compare&swap (CAS), LL/SC, Double-CAS (DCAS) • Generic: read-modify-write (RMW), kRMW, kCAS, … • Low-level operations can be implemented from more primitive operations • A hierarchy of implementations
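To make the hierarchy concrete, here is a minimal Java sketch (not from the slides) of a high-level ADT built only from read and CAS on a single base object: a lock-free stack in the style of Treiber. All class and method names are illustrative.

    import java.util.concurrent.atomic.AtomicReference;

    // Illustrative example: a high-level Stack ADT built from read/CAS on one base object.
    final class LockFreeStack<T> {
        private static final class Node<E> {
            final E value; final Node<E> next;
            Node(E value, Node<E> next) { this.value = value; this.next = next; }
        }
        private final AtomicReference<Node<T>> top = new AtomicReference<>(null);

        void push(T value) {
            while (true) {                        // retry loop around the CAS primitive
                Node<T> oldTop = top.get();       // read primitive on the base object
                Node<T> newTop = new Node<>(value, oldTop);
                if (top.compareAndSet(oldTop, newTop)) return;  // CAS primitive
            }
        }

        T pop() {
            while (true) {
                Node<T> oldTop = top.get();
                if (oldTop == null) return null;  // empty stack
                if (top.compareAndSet(oldTop, oldTop.next)) return oldTop.value;
            }
        }
    }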

  4. Example: Binary Operations Atomically read and modify two memory locations (esp. DCAS) • Simplifies the writing of concurrent ADTs • kCAS is even better Virtual Locking: Simulate DCAS / kCAS with a single-item CAS • Acquire virtual locks on the nodes in the data set • Perhaps in some order • Handle conflicts with blocking operations (those holding a required data item)

  5. Virtual locking [Turek, Shasha, Prakash, 1992] [Barnes] • Acquire locks by increasing addresses • Guarantees that the implementation is nonblocking • Help the blocking operation to complete (recursively) • May result in long helping chains
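The following Java fragment is an illustrative sketch of the virtual-locking idea from the last two slides, not the published algorithms: a DCAS-like update acquires single-word CAS "locks" on both cells in increasing id order (standing in for addresses), applies the writes, and releases. For brevity a blocked thread simply spins; the nonblocking variants described above would help the lock holder instead. All names are made up for the example.

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.AtomicReference;

    // Illustrative sketch: a DCAS-like update on two distinct cells, built from
    // single-word CAS "virtual locks" acquired in a fixed global order.
    final class TwoCellUpdate {
        private static final AtomicLong NEXT_ID = new AtomicLong();

        static final class Cell {
            final long id = NEXT_ID.getAndIncrement();   // stands in for the memory address
            volatile long value;
            final AtomicReference<Thread> owner = new AtomicReference<>(null); // virtual lock
            Cell(long v) { value = v; }
        }

        // If a.value == expA and b.value == expB, atomically (with respect to other
        // calls of this method) write newA and newB; assumes a and b are distinct cells.
        static boolean dcas(Cell a, long expA, long newA, Cell b, long expB, long newB) {
            Cell first  = a.id < b.id ? a : b;           // lock by increasing "address"
            Cell second = a.id < b.id ? b : a;
            Thread me = Thread.currentThread();
            while (!first.owner.compareAndSet(null, me))  { }  // spin until the lock is free
            while (!second.owner.compareAndSet(null, me)) { }
            boolean applied = (a.value == expA && b.value == expB);
            if (applied) { a.value = newA; b.value = newB; }
            second.owner.set(null);                      // release the virtual locks
            first.owner.set(null);
            return applied;
        }
    }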

  6. Virtual Locking: Reducing Contention [Shavit, Touitou, 1995] • Release locks when the operation is blocked • Help an immediate neighbor (only) & retry… • Short helping chains • But long delay chains

  7. Randomization: How Often Does This Happen? [Ha, Tsigas, Wattenhofer, Wattenhofer, 2005] • Operations choose locking order at random • Chains’ length depends on log n / log log n • Also experimentally • Better, and yet… • Similar analysis for chains’ length when ops choose items at random • Depends on the operations’ density [Dragojevic, Guerraoui & Kapalka, 2008]

  8. Color-Based Virtual Locking (Binary) [Attiya, Dagan, 1996] • Operations on two data items (e.g., DCAS) • Colors define the locking order • Inspired by the left-right dining philosophers algorithm [Lynch, 1980] • Color the items when the operation starts • Non-trivial… [Cole, Vishkin, 1986] • Bound the length of delay chains • But the coloring stage is complicated

  9. Color-Based Virtual Locking (Fixed k) [Afek, Merritt, Taubenfeld, Touitou, 1997] • Implements operations on k items, for a fixed k • Based on the memory addresses, the conflict graph is decomposed into trees & items are legally colored [Goldberg, Plotkin, Shannon] • Need to have the data set from the beginning • Recursive, with A&D at the basis and in each step • Even more complicated

  10. Virtual Locking: More Concurrency, Simpler [Attiya, Hillel, 2008] • Acquire locks in arbitrary order • No need to know the data set (or its size) in advance • No pre-computation. Possible ways to handle conflicts between operations contending for a data item: • Wait for the other operation • Help the other operation • Reset the other operation

  11. Conflict Resolution, How? Depends on the operations’ progress: the more advanced operation wins • How to gauge progress? • What to do on a tie?

  12. Who’s More Advanced? • The operation that locked more data items • If a less advanced operation needs an item • help the conflicting operation or • wait (blocking in a limited radius) • If a more advanced operation needs an item • reset the conflicting operation and claim the item

  13. What about Ties? Ties happen when two transactions locked the same number of items • A transaction has a descriptor and a lock • Use DCAS to race for locking the two descriptors • Winner calls the shots…
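As a summary of the last three slides, here is an illustrative Java sketch of the conflict-resolution policy; it is not the actual [Attiya, Hillel, 2008] code. Progress is gauged by the number of locked items, and a tie is left to a DCAS race on the two descriptors, only indicated by a comment here.

    // Illustrative sketch of the conflict-resolution policy (not the published algorithm).
    final class ConflictPolicy {
        enum Action { HELP_OR_WAIT, RESET_OTHER, RACE_ON_DESCRIPTORS }

        static final class OpDescriptor {
            volatile int itemsLocked;   // progress measure: how many data items this op has locked
        }

        // Called when 'mine' needs an item currently held by 'other'.
        static Action resolve(OpDescriptor mine, OpDescriptor other) {
            if (mine.itemsLocked > other.itemsLocked) {
                return Action.RESET_OTHER;     // more advanced: reset the conflicting op, claim the item
            } else if (mine.itemsLocked < other.itemsLocked) {
                return Action.HELP_OR_WAIT;    // less advanced: help the other op, or wait in a bounded radius
            } else {
                // Tie: both operations locked the same number of items.
                // Break it by using DCAS to lock both descriptors; the winner calls the shots.
                return Action.RACE_ON_DESCRIPTORS;
            }
        }
    }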

  14. Measuring Concurrency: Data Items and Operations • A data structure is a collection of items • An operation accesses a data set (not necessarily a pair) • A set of operations induces a conflict graph: nodes represent items, edges connect items of the same operation

  15. Spatial Relations between Operations • Disjoint access: non-adjacent edges, distance is infinite • Overlapping operations: adjacent edges, distance is 0 • Chains of operations: paths, distance is the length of the path (in the example, 2) • d-neighborhood of an operation: all operations at distance ≤ d
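To make the distance notion concrete, here is a small Java sketch, not from the talk, that computes the d-neighborhood of an operation by breadth-first search over operations whose data sets overlap. It assumes overlapping operations are at distance 0 and each further link along a chain adds 1; the talk's exact normalization may differ.

    import java.util.*;

    // Illustrative sketch: operations, their data sets, and a d-neighborhood computed by BFS.
    final class ConflictGraph {
        static final class Op {
            final Set<String> dataSet;   // items (by name) that this operation accesses
            Op(String... items) { dataSet = new HashSet<>(Arrays.asList(items)); }
            boolean overlaps(Op other) { return !Collections.disjoint(dataSet, other.dataSet); }
        }

        // All operations within distance d of 'center' (its d-neighborhood).
        static Set<Op> neighborhood(Op center, Collection<Op> allOps, int d) {
            Set<Op> reached = new HashSet<>();
            reached.add(center);
            Set<Op> frontier = new HashSet<>(reached);
            for (int dist = 0; dist <= d; dist++) {      // dist 0 = operations overlapping 'center'
                Set<Op> next = new HashSet<>();
                for (Op op : allOps)
                    for (Op f : frontier)
                        if (!reached.contains(op) && op.overlaps(f)) next.add(op);
                reached.addAll(next);
                frontier = next;
            }
            return reached;
        }
    }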

  16. Interference between Operations • Disjoint access: non-overlapping operations should not interfere (no interference) • Overlapping operations: interference is inevitable • Avoiding interference between disjoint operations provides more concurrency & yields better throughput

  17. Measuring Concurrency: Locality • d failure locality [Choi, Singh, 1992]: some operation completes in the d-neighborhood unless a failure occurs in the d-neighborhood • d-local nonblocking: some operation completes in the d-neighborhood even if a failure occurs in the d-neighborhood

  18. Quantitative Measures of Locality [Afek, Merritt, Taubenfeld, Touitou, 1997] • Distance in the conflict graph between overlapping operations that interfere • d-local step complexity: only operations at distance ≤ d delay each other • d-local contention: only operations at distance ≤ d access the same memory location

  19. In Retrospect

  20. Customized Virtual Locking • Can be viewed as software transactional memory • High overhead • Or handled by specialized algorithms • Ad-hoc and very delicate, mistakes happen • Instead, design algorithms in a systematic manner… • Lock the items that have to be changed & apply a sequential implementation on these items • Lock items by colors to increase concurrency • No need to re-color at the start of each operation, since we work with a specific data structure • We manage the data structure from its infancy • The data sets of operations are predictable

  21. Example: Doubly-Linked Lists • An important special case underlying many distributed data structures • E.g., a priority queue used as a job queue • Insert and Remove operations • Sometimes only at the ends (deques) • The data set is an item and its left / right neighbors (or left / right anchor)

  22. Built-In Coloring for Doubly-Linked Lists [Attiya, Hillel, 2006] • Always maintain the list items legally colored • Adjacent items have different colors • Adjust colors when inserting or removing items • No need to color from scratch in each operation! • Constant locality: operations that access disjoint data sets do not delay each other
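A minimal Java sketch of the coloring invariant, illustrative only; the actual [Attiya, Hillel, 2006] algorithm also handles locking, concurrency, and the temporary color mentioned on the next slide. With three colors, an inserted node can always pick a color different from both neighbors, so the list never has to be recolored from scratch.

    // Illustrative sketch of the coloring invariant for a doubly-linked list.
    // Not the published algorithm: locking, temporary colors and concurrency are omitted.
    final class ColoredList {
        static final class Node {
            int color;            // invariant: color differs from left.color and right.color
            Node left, right;
        }

        // Three colors suffice: two neighbors exclude at most two of them.
        static int pickColor(Node left, Node right) {
            for (int c = 0; c < 3; c++)
                if ((left == null || left.color != c) && (right == null || right.color != c))
                    return c;
            throw new AssertionError("unreachable: two neighbors exclude at most two colors");
        }

        // Sequential insert between 'left' and 'right'; the concurrent version would
        // first lock the affected nodes before splicing.
        static Node insertBetween(Node left, Node right) {
            Node n = new Node();
            n.color = pickColor(left, right);
            n.left = left; n.right = right;
            if (left != null) left.right = n;
            if (right != null) right.left = n;
            return n;
        }
    }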

  23. Insert Operation • New items are assigned a temporary color • Remove from the ends is similar to Insert • Locks three items, one of them an anchor

  24. Removing from the Middle • Complicated: need to lock three list items • Possibly two with the same color • A chain of Remove operations may lead to a long delay chain in a symmetric situation

  25. To the Rescue: DCAS Again • Use DCAS to lock equally colored nodes

  26. In Treatment • DCAS??? • Supported in hardware, or at least with hardware transactional memory • Software implementation (e.g., A&D) • Software transactional memory?! • Speculative execution • Read accesses • Blocking?!!! • Not so bad w/ low failure locality • Can be translated to nonblocking locality

  27. Randomization: How Often Does This Happen? [Ha, Tsigas, Wattenhofer, Wattenhofer, 2005] • Operations choose locking order at random • Chains’ length depends on log n / log log n • Better, and yet… • Similar analysis for chains’ length when ops choose items at random • Depends on the operations’ density [Dragojevic, Guerraoui & Kapalka, 2008]
