Scalable Synchronous Queues

  1. Scalable Synchronous Queues By William N. Scherer III, Doug Lea, and Michael L. Scott Presented by Ran Isenberg

  2. Introduction • Queues are a way for different threads to communicate and exchange information. • In a thread-safe, concurrent, asynchronous queue, consumers typically wait for producers to make data available. • In a synchronous queue, producers similarly wait for consumers to take the data: the two sides “pair up”.

  3. Hanson’s Synchronous Queue
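
  The figure on this slide is not reproduced in the transcript. Below is a sketch in the spirit of Hanson's design (class and field names are mine): a single hand-off slot guarded by three semaphores.

    import java.util.concurrent.Semaphore;

    // Sketch of a Hanson-style synchronous queue: one hand-off slot guarded
    // by three semaphores (names are illustrative, not Hanson's).
    class HansonStyleSynchronousQueue<E> {
        private E item;
        private final Semaphore send = new Semaphore(1);  // admits one producer at a time
        private final Semaphore recv = new Semaphore(0);  // signals "an item is available"
        private final Semaphore sync = new Semaphore(0);  // signals "the item was consumed"

        public void put(E e) {
            send.acquireUninterruptibly();   // wait until the hand-off slot is free
            item = e;
            recv.release();                  // wake one waiting consumer
            sync.acquireUninterruptibly();   // wait until the consumer has taken the item
        }

        public E take() {
            recv.acquireUninterruptibly();   // wait for a producer's item
            E e = item;
            sync.release();                  // let the paired producer return
            send.release();                  // admit the next producer
            return e;
        }
    }

  Each transfer costs three semaphore operations on the producer side and three on the consumer side, which is the six-operation total the Java 5.0 comparison below refers to.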

  4. Hanson’s Pros & Cons Pros: • High design-level tractability. • Semaphores target wakeups at only the single producer or consumer thread that an operation has unblocked. Cons: • May fall victim to priority inversion. • Poor performance: it employs three separate blocking semaphores (locks). Possible solution? Use non-blocking synchronization!

  5. Java 5.0 synchronous queue The Java SE 5.0 synchronous queue uses a pair of queues (in fair mode; stacks in unfair mode) to separately hold waiting producers and consumers.

  6. Java 5.0 synchronous queue (Cont.)

  7. Java 5 Version – Pros & Cons Pros: • Uses a pair of queues to separately hold waiting producers and consumers. • Allows producers to publish data items as they arrive instead of having to first awaken after blocking on a semaphore, so a consumer that finds waiting data need not block. • One transfer (a put paired with a take) requires only three synchronization operations, compared with the six incurred by Hanson’s. Cons: • Relies on one lock to protect access to both queues, which creates a narrow bottleneck at runtime.
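
  For reference, this is the public API the slide describes. A minimal usage example of java.util.concurrent.SynchronousQueue; passing true selects the fair (queue-based) mode:

    import java.util.concurrent.SynchronousQueue;

    public class SynchronousQueueDemo {
        public static void main(String[] args) throws InterruptedException {
            SynchronousQueue<String> q = new SynchronousQueue<>(true);   // fair mode

            // Producer: put() blocks until a consumer arrives to take the item.
            new Thread(() -> {
                try { q.put("hello"); } catch (InterruptedException ignored) { }
            }).start();

            // Consumer: take() blocks until a producer hands an item over;
            // the two calls "pair up".
            System.out.println(q.take());
        }
    }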

  8. Non-Blocking Synchronization • Non-blocking concurrent objects avoid mutual exclusion and locks. • Instead, they use atomic CAS (compare-and-swap) operations to make sure the object’s invariants still hold afterwards. Totalized operations: • Operations that never block; if their preconditions aren’t met, they return a failure code. • Example: dequeuing from an empty queue will not cause the thread to block; it returns a failure code instead.
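
  A concrete example of a totalized operation: java.util.concurrent.ConcurrentLinkedQueue.poll() never blocks; on an empty queue it returns null as its failure code.

    import java.util.concurrent.ConcurrentLinkedQueue;

    public class TotalizedDequeueDemo {
        public static void main(String[] args) {
            ConcurrentLinkedQueue<String> q = new ConcurrentLinkedQueue<>();

            String item = q.poll();   // empty queue: returns null immediately, no blocking
            if (item == null) {
                // Precondition (non-empty queue) not met. The caller must decide what
                // to do next -- typically retry in a loop, with no guarantee of ever
                // succeeding before other retrying threads.
            }
        }
    }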

  9. Non-Blocking Synchronization (Cont.) What isn’t promised to any thread that keeps retrying a totalized operation? It is never guaranteed to succeed, and requests are not served in arrival order. So, how can we fix this?

  10. Dual Data Structures • We could register a request (reservation) for a hand-off partner. • The reservation is made in a non-blocking manner. • Fulfillment is detected by checking whether a pointer in the reservation has changed. • The structure may contain both data and reservations.

  11. Dual Data Structures (Cont.) Totalized methods are now split into two partial methods: reserve and follow-up. Main advantage over totalized versions of partial methods: the order of reservations is preserved. (A conceptual sketch follows below.)
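
  A minimal conceptual sketch of a reservation and its fulfillment (illustrative names only; the concrete queue and stack versions appear on the following slides):

    import java.util.concurrent.atomic.AtomicReference;

    class Reservation<E> {
        // Starts out null; a producer fulfills the reservation by CASing this
        // pointer to the datum, and the waiting consumer detects the change.
        private final AtomicReference<E> item = new AtomicReference<>(null);

        // Follow-up (consumer side): spin until the pointer changes from
        // null to non-null, then return the handed-off datum.
        E awaitFulfillment() {
            E e;
            while ((e = item.get()) == null) {
                Thread.onSpinWait();   // busy-wait hint (Java 9+)
            }
            return e;
        }

        // Fulfillment (producer side): succeeds for exactly one producer.
        boolean fulfill(E e) {
            return item.compareAndSet(null, e);
        }
    }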

  12. Main Properties The algorithms I will present are: • Contention-free. • Lock-free. • Scalable. • Available in fair and unfair implementations. • Equipped with well-defined linearization points. • Waiting is done by spinning (busy waiting).

  13. Synchronous Dual Queue • Fair implementation. • A linked list with head & tail pointers. • Waiting is accomplished by spinning until a pointer changes from null to non-null. • A dummy node always sits at the front; the remaining nodes, if any, are all of the same type: reservation or data. • Dequeue & enqueue are symmetric except for the direction of the data flow. • Let’s take a look at the enqueue operation.

  14. Synchronous Dual Queue (Cont.) • First case: the queue is empty or contains only data. Add a new data node at the end of the queue and spin. • Second case (interesting): the queue contains only reservations. Fulfill the reservation at the head of the queue instead of appending.

  15. First case code:
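
  The code on this slide is an image; here is a simplified sketch of the first case instead. The node layout and names are my own, and the paper's actual code differs in detail (it folds the cases into one loop and handles cleanup and consistency checks more carefully).

    import java.util.concurrent.atomic.AtomicReference;

    // Simplified synchronous dual queue state (illustrative names).
    class DualQueueSketch {
        static final class QNode {
            final boolean isData;                          // true: data node, false: reservation
            final AtomicReference<Object> item;            // datum, or null for an unfulfilled reservation
            final AtomicReference<QNode> next = new AtomicReference<>(null);
            QNode(Object item, boolean isData) {
                this.item = new AtomicReference<>(item);
                this.isData = isData;
            }
        }

        final AtomicReference<QNode> head;                 // always points at the dummy node
        final AtomicReference<QNode> tail;

        DualQueueSketch() {
            QNode dummy = new QNode(null, true);
            head = new AtomicReference<>(dummy);
            tail = new AtomicReference<>(dummy);
        }

        // First case: the queue is empty or holds only data nodes (the caller
        // has already checked this). Append a new data node after the tail,
        // then spin until a consumer empties its item slot.
        void enqueueAppendAndWait(Object e) {
            QNode offer = new QNode(e, true);
            while (true) {
                QNode t = tail.get();
                QNode n = t.next.get();
                if (t != tail.get()) continue;             // stale snapshot: re-read
                if (n != null) {                           // tail is lagging: help advance it
                    tail.compareAndSet(t, n);
                    continue;
                }
                if (t.next.compareAndSet(null, offer)) {   // link the new data node
                    tail.compareAndSet(t, offer);          // swing the tail (may fail harmlessly)
                    while (offer.item.get() != null) {     // spin: a consumer CASes the item to null
                        Thread.onSpinWait();
                    }
                    QNode h = head.get();                  // help advance head past the consumed node
                    if (h.next.get() == offer) head.compareAndSet(h, offer);
                    return;
                }
            }
        }
    }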

  16. Synchronous Dual Queue (Cont.) Second case code:
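
  Again a simplified sketch rather than the paper's code, reusing the QNode, head, and tail definitions from the previous sketch. Instead of appending, the enqueuer hands its item to the oldest reservation:

    // Second case: the queue holds waiting reservations.
    // Hand the item directly to the node just past the dummy head. Returns
    // false if the snapshot was stale or another producer won the race, in
    // which case the caller re-checks the cases and retries.
    boolean enqueueFulfill(Object e) {
        QNode h = head.get();
        QNode t = tail.get();
        QNode n = h.next.get();                        // oldest reservation (head is the dummy)
        if (h != head.get() || h == t || n == null || n.isData) {
            return false;                              // queue changed or nothing to fulfill
        }
        boolean handedOff = n.item.compareAndSet(null, e);   // wake the spinning consumer
        head.compareAndSet(h, n);                      // advance head; n becomes the new dummy
        return handedOff;                              // false: another producer fulfilled n first
    }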

  17. Synchronous Dual Stack • A singly linked list with only a head (top) pointer. • Not fair. • Doesn’t have a dummy node at the top. • May contain either data or reservation nodes, plus, temporarily, a single node of the opposite type at the head. • Push & pop are symmetric except for the direction of the data flow. • Let’s take a look at the push operation.

  18. Synchronous Dual Stack (Cont.) • First case: the stack is empty or contains only data. Add a new data node at the head of the stack and spin. • Second case (interesting): the stack has a reservation at the head. Add a new data (“fulfilling”) node at the head, find a reservation to fulfill, and remove both nodes. • Third case: a fulfilling node is at the head. Help the fulfillment process.

  19. First case code:
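
  The code on this slide is also an image; here is a simplified sketch of the first case. The mode flags, match field, and helper structure are my own simplifications of the design described in the paper.

    import java.util.concurrent.atomic.AtomicReference;

    // Simplified synchronous dual stack state (illustrative names).
    class DualStackSketch {
        static final int REQUEST = 1, DATA = 2, FULFILLING = 4;   // node modes

        static final class SNode {
            final int mode;
            final Object item;                     // datum for DATA nodes, null for reservations
            final AtomicReference<SNode> next;     // node below this one
            final AtomicReference<SNode> match = new AtomicReference<>(null);  // set when fulfilled
            SNode(Object item, int mode, SNode next) {
                this.item = item;
                this.mode = mode;
                this.next = new AtomicReference<>(next);
            }
        }

        final AtomicReference<SNode> top = new AtomicReference<>(null);   // top of the stack

        // First case: the stack is empty or holds only data nodes (the caller
        // has already checked this). Push a new data node and spin until a
        // consumer matches it.
        void pushAndWait(Object e) {
            while (true) {
                SNode h = top.get();
                SNode node = new SNode(e, DATA, h);
                if (top.compareAndSet(h, node)) {
                    while (node.match.get() == null) {            // spin: a consumer CASes match
                        Thread.onSpinWait();
                    }
                    SNode m = node.match.get();                   // the consumer's fulfilling node
                    if (top.get() == m) top.compareAndSet(m, node.next.get());  // help pop the pair
                    return;
                }
                // CAS failed: the stack changed underneath us; retry with a fresh snapshot.
            }
        }
    }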

  20. Second case code:
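
  A sketch of the second case, reusing the SNode class, mode flags, and top field from the previous sketch:

    // Second case: a waiting reservation sits at the top of the stack.
    // Push a "fulfilling" data node on top of it, match it with a reservation
    // below, and pop both nodes together. Returns false if the case no longer
    // applies or a race is lost, in which case the caller retries.
    boolean pushFulfill(Object e) {
        SNode h = top.get();
        if (h == null || h.mode != REQUEST) return false;      // case no longer applies
        SNode f = new SNode(e, DATA | FULFILLING, h);
        if (!top.compareAndSet(h, f)) return false;            // lost the race to the top
        while (true) {
            SNode m = f.next.get();                            // candidate reservation to fulfill
            if (m == null) {                                   // the reservations below were all taken
                top.compareAndSet(f, null);                    // pop our fulfilling node and retry
                return false;
            }
            if (m.match.compareAndSet(null, f)) {              // hand-off: the spinning consumer reads f.item
                top.compareAndSet(f, m.next.get());            // pop both f and m
                return true;
            }
            f.next.compareAndSet(m, m.next.get());             // m already matched elsewhere: skip past it
        }
    }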

  21. Third case code:
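
  A sketch of the helping step, again reusing the definitions above. A thread that finds another thread's fulfilling node at the top completes that hand-off before retrying its own operation, which is what keeps the algorithm lock-free:

    // Third case: another thread's fulfilling node is at the top of the stack.
    // Help complete its hand-off so the structure keeps making progress, then
    // let the caller retry its own operation.
    void helpFulfill() {
        SNode h = top.get();
        if (h == null || (h.mode & FULFILLING) == 0) return;    // nothing to help with
        SNode m = h.next.get();                                 // the node being fulfilled
        if (m == null) {
            top.compareAndSet(h, null);                         // the fulfiller's partner vanished: pop it
        } else if (m.match.compareAndSet(null, h) || m.match.get() == h) {
            top.compareAndSet(h, m.next.get());                 // pop the matched pair on its behalf
        } else {
            h.next.compareAndSet(m, m.next.get());              // m was matched by someone else: unlink it
        }
    }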

  22. Results

  23. Results (Cont.)

  24. Conclusion • I presented two non-blocking synchronous queues that use dual data structures: a fair version (the dual queue) and an unfair version (the dual stack). • The algorithms are all lock-free. • There is little performance cost for fairness. • High scalability can be achieved with synchronization methods that don’t use locks. • The algorithms are part of the Java 6 class libraries (java.util.concurrent).
