
CS510 Concurrent Systems Tyler Fetters


Presentation Transcript


  1. CS510 Concurrent Systems Tyler Fetters

  2. A Methodology for Implementing Highly Concurrent Data Objects

  3. Agenda • Definitions • Author Biography • Overview • Small Objects • Class Exercise • Large Objects • Summary & Contributions

  4. Definitions • Non-Blocking • Wait-Free • Concurrent Objects • load_linked (LL) • store_conditional (SC) • Small Object • Large Object • Linearizable

  5. Author Biography • Ph.D. in CS from MIT • Professor: • Carnegie Mellon University • Brown University • Awards: • 2003 & 2012 Dijkstra Prize • 2004 Gödel Prize • 2013 W. Wallace McDowell Award

  6. Overview • Provides a framework for transforming sequential data structures into concurrent ones • Requires writing operations as stylized sequential operations • Increases ease of reasoning • Uses LL and SC as core primitives (said to be “universal”, in terms of their ability to reach consensus in a wait-free manner for an unlimited number of threads)

  7. Overview Cont. • Implement data objects as stylized sequential programs without explicit synchronization* • Apply synchronization and memory management techniques • In theory, this transforms any sequential object into a non-blocking or wait-free concurrent object.

  8. Overview Cont. • Linearizability is used as the basic correctness condition for the implementation • This doesn’t mean a concurrent version that allows other values to occur is incorrect. • Thus, non-linearizable algorithms are not necessarily incorrect.

  9. Small Objects • At a high level: • Reads the memory pointer using load_linked • Copies the version into another block • Applies the sequential operation to the copy • Calls store_conditional to swing the pointer from the old version to the new • On success, the transformation is complete • On failure, retry

  10. Small Objects Cont. - Code

  Preventing the race condition: copy the old version into the new block, then re-read the check values. If the two check values do not match, the copy may be inconsistent, so we loop and retry. If they DO match, we can perform our dequeue operation on the copy, then try to publicize the new version via store_conditional, which can fail and send us around the loop again. Lastly, recycle the old block as the new block for the next operation and return the priority queue result.

  int Pqueue_deq(Pqueue_type **Q) {
    ...
    while (1) {
      old_pqueue = load_linked(Q);
      old_version = &old_pqueue->version;
      new_version = &new_pqueue->version;
      first = old_pqueue->check[1];
      copy(old_version, new_version);
      last = old_pqueue->check[0];
      if (first == last) {
        result = pqueue_deq(new_version);
        if (store_conditional(Q, new_version)) break;
      }
    }
    new_pqueue = old_pqueue;
    return result;
  }

  11. Small Objects Cont. – Back Off

  When the consistency check or the store_conditional fails, back off for a random amount of time before retrying:

  ...
    if (first == last) {
      result = pqueue_deq(new_version);
      if (store_conditional(Q, new_version)) break;
    }
    if (max_delay < DELAY_LIMIT) max_delay = 2 * max_delay;
    delay = random() % max_delay;
    for (i = 0; i < delay; i++);
  } /* end while */
  new_pqueue = old_pqueue;
  return result;
  }

  12. Small Objects Cont. - Performance Small Object, Non-Blocking (naive) Small Object, Non-Blocking (back-off)

  13. Small Objects Cont. – Wait Free • Operation combining – before trying to do work on the concurrent object, a thread records what it is trying to do. • It then reads what all the other threads are doing and tries to complete their work for them. • Once it has done all of their work, it does its own. • Gains failure tolerance at the cost of efficiency.

  14. Small Objects Cont. – Wait Free • Use operation combining to transform the non-blocking object into a wait-free one: • A process starts an operation. • It records the call in an Invocation. • Upon completion of the operation, the result is recorded in a Result.

  15. Small Objects Cont. – Wait Free

  Record the operation and flip the toggle bit; before publicizing, apply pending operations to the NEW version.

  ...
  announce[P].op_name = DEQ_CODE;
  new_toggle = announce[P].toggle = !announce[P].toggle;
  if (max_delay > 1) max_delay = max_delay >> 1;
  while (((*Q)->responses[P].toggle != new_toggle) ||
         ((*Q)->responses[P].toggle != new_toggle)) {
    old_pqueue = load_linked(Q);
    old_version = &old_pqueue->version;
    new_version = &new_pqueue->version;
    first = old_pqueue->check[1];
    memcpy(new_version, old_version, sizeof(pqueue_type));
    last = old_pqueue->check[0];
    if (first == last) {
      result = pqueue_deq(new_version);
      apply(announce, new_version);
      if (store_conditional(Q, new_version)) break;
    }
    if (max_delay < DELAY_LIMIT) max_delay = 2 * max_delay;
    delay = random() % max_delay;
    for (i = 0; i < delay; i++);
  }
  new_pqueue = old_pqueue;
  return result;
  }

  16. Small Objects Cont. – Wait Free Small Object, Non-Blocking (back-off) Small Object, Wait Free (back-off)

  17. Class Exercise • Sequential code for removing the head node of a linked list • Apply synchronization and memory management techniques • Add exponential back-off for performance

  18. Class Exercise

  typedef struct {
    bool non_empty;
    int  ret_val;
  } Res;

  Res linkList_removeHead(LinkList_type *l) {
    node *temp;
    Res value;
    value.non_empty = false;
    if (l->head != NULL) {
      value.non_empty = true;
      temp = l->head->next;
      value.ret_val = l->head->value;
      l->head = temp;
    }
    return value;
  }

  19. Class Exercise

  Res linkList_removeHead(LinkList_type **L) {
    ...
    while (1) {
      old_linkList = load_linked(L);
      old_version = &old_linkList->version;
      new_version = &new_linkList->version;
      first = old_linkList->check[1];
      copy(old_version, new_version);
      last = old_linkList->check[0];
      if (first == last) {
        result = linkList_removeHead(new_version);
        if (store_conditional(L, new_version)) break;
      }
      if (max_delay < DELAY_LIMIT) max_delay = 2 * max_delay;
      delay = random() % max_delay;
      for (i = 0; i < delay; i++);
    }
    new_linkList = old_linkList;
    return result;
  }

  20. Large Objects • Per-process pool of memory • 3 states: committed, allocated and freed • Operations: • set_alloc moves a block from committed (freed?) to allocated and returns its address • set_free moves a block to freed • set_prepare marks blocks in allocated as consistent • set_commit sets committed to the union of freed and committed • set_abort sets freed and allocated to the empty set

  21. Summary & Contributions • Foundation for transforming sequential implementations (small and large objects) into concurrent ones • Could in principle be performed by a compiler • Maintains a “reasonable” level of performance • Utilizes LL and SC as base primitives • Addresses the issue of conceptual complexity (thoughts?)

  22. Sources • A Methodology for Implementing Highly Concurrent Data Objects – Slides from Tina Swenson – 2010 • http://en.wikipedia.org/wiki/Maurice_Herlihy • http://cs.brown.edu/~mph/ • http://web.cecs.pdx.edu/~walpole/class/cs510/papers/10.pdf
