
Scalable Reader Writer Synchronization



Presentation Transcript


  1. Scalable Reader Writer Synchronization John M. Mellor-Crummey, Michael L. Scott

  2. Outline • Abstract • Introduction • Simple Reader Writer Spin Lock • Reader Preference Lock • Fair Lock • Locks with Local-Only-Spinning • Fair Lock • Reader Preference Lock • Writer Preference Lock • Empirical Results & Conclusions • Summary Scalable Reader Writer Synchronization

  3. Abstract – readers & writers • All processes request mutually exclusive access to the same memory section. • Multiple readers can access the memory section at the same time. • Only one writer can access the memory section at a time. (figure: writers and readers accessing a shared memory section)

  4. Abstract (continued) • Mutual exclusion locks are implemented using busy waiting. • Busy-wait locks cause memory and network contention, which degrades performance. • The problem: busy waiting is implemented globally (every process busy waits on the same variable / memory location), creating a global bottleneck instead of a local one. • The global bottleneck created by the busy waiting prevents an efficient, larger-scale (scalable) implementation of mutual exclusion.

  5. Outline • Abstract • Introduction • Simple Reader Writer Spin Lock • Reader Preference Lock • Fair Lock • Locks with Local-Only-Spinning • Fair Lock • Reader Preference Lock • Writer Preference Lock • Empirical Results & Conclusions • Summary Scalable Reader Writer Synchronization

  6. The purpose of the paper Presenting reader/writer locks that exploit local-spin busy waiting, in order to reduce memory and network contention. Global – every process busy waits (spins) on the same location. Local – each process busy waits (spins) on a different memory location.
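To make the global/local distinction concrete, here is a minimal C sketch (not from the paper; the names `tas_lock`, `tas_acquire`, `tas_release` are mine) of a test-and-set lock, the classic example of global spinning: every waiter spins on the same shared word, so each release invalidates that word in every waiter's cache.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical illustration: a test-and-set spin lock.
   All waiters spin on the single shared `locked` word (global spinning),
   which is exactly the contention the paper's local-spin locks avoid. */
typedef struct { atomic_bool locked; } tas_lock;

static void tas_acquire(tas_lock *l) {
    while (atomic_exchange(&l->locked, true))
        ;  /* every waiter hammers the same memory location */
}

static void tas_release(tas_lock *l) {
    atomic_store(&l->locked, false);
}
```

A local-spin lock instead gives each waiter its own flag word to spin on, as the MCS lock shown on the next slides does.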

  7. Definitions • Fair lock • readers wait for earlier writers • writers wait for any earlier process (reader or writer) • no starvation • Reader preference lock • writers wait as long as there are reader requests • possible writer starvation • minimizes the delay for readers • maximizes throughput • Writer preference lock • readers wait as long as there are writers waiting • possible reader starvation • prevents the system from using outdated information

  8. The MCS lock The MCS (Mellor-Crummey and Scott) lock is a queue-based lock with local-only spinning

  9. The MCS lock – acquire lock (figure: the acquiring process swaps new_node in as the new tail of the queue; labels: lock, tail, new_node)

  10. The MCS lock – release lock (figure: the releasing process finds its successor through my_node; labels: lock, tail, my_node)

  11. The MCS lock – release lock (figure: the successor takes over the lock; labels: lock, tail, my_node)

  12. The MCS lock – release lock The spin is local, since each process spins (busy waits) on its own node
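The queue mechanics of slides 9–12 can be sketched in C with C11 atomics. This is a hedged reconstruction of the MCS mutual-exclusion lock, not the paper's exact pseudocode; names such as `mcs_acquire` are mine.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct mcs_node {
    _Atomic(struct mcs_node *) next;  /* my successor in the queue */
    atomic_bool locked;               /* I spin only on this field */
} mcs_node;

typedef struct { _Atomic(mcs_node *) tail; } mcs_lock;

static void mcs_acquire(mcs_lock *l, mcs_node *me) {
    atomic_store(&me->next, NULL);
    /* swap myself in as the new tail; the old tail is my predecessor */
    mcs_node *pred = atomic_exchange(&l->tail, me);
    if (pred != NULL) {
        atomic_store(&me->locked, true);
        atomic_store(&pred->next, me);      /* link in behind it */
        while (atomic_load(&me->locked))
            ;                               /* local spin on my own node */
    }
}

static void mcs_release(mcs_lock *l, mcs_node *me) {
    mcs_node *succ = atomic_load(&me->next);
    if (succ == NULL) {
        /* no visible successor: try to swing the tail back to empty */
        mcs_node *expected = me;
        if (atomic_compare_exchange_strong(&l->tail, &expected, NULL))
            return;
        /* someone is mid-enqueue; wait for them to link themselves in */
        while ((succ = atomic_load(&me->next)) == NULL)
            ;
    }
    atomic_store(&succ->locked, false);     /* hand the lock to them */
}
```

Release writes only to the successor's own node, which is why the spinning stays local.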

  13. Outline • Abstract • Introduction • Simple Reader Writer Spin Lock • Reader Preference Lock • Fair Lock • Locks with Local-Only-Spinning • Fair Lock • Reader Preference Lock • Writer Preference Lock • Empirical Results & Conclusions • Summary Scalable Reader Writer Synchronization

  14. Simple Reader-Writer Locks This section presents centralized (not local) algorithms for busy-wait reader-writer locks.

  WRITER:
    start_write(lock)
    writing_critical_section
    end_write(lock)

  READER:
    start_read(lock)
    reading_critical_section
    end_read(lock)

  15. Outline • Abstract • Introduction • Simple Reader Writer Spin Lock • Reader Preference Lock • Fair Lock • Locks with Local-Only-Spinning • Fair Lock • Reader Preference Lock • Writer Preference Lock • Empirical Results & Conclusions • Summary Scalable Reader Writer Synchronization

  16. Reader Preference Lock • A reader preference lock is used in several cases: • when there are many write requests, and preference for readers is required to prevent them from starving. • when the throughput of the system is more important than how up to date the information is.

  17. Reader Preference Lock • The lowest bit indicates whether a writer is writing. • The upper bits count the processes interested in reading or currently reading. • When a reader arrives, it increments the counter and waits until the writer bit is cleared. • Writers wait until the whole word is 0. (figure: the 32-bit lock word – bit 0 is the writer flag, bits 31..1 are the reader counter)

  18. Reader Preference Lock A writer can write only when no reader is interested or reading and no writer is writing. (figure: start writing sets the writer flag from an all-zero word; end writing clears it) Notice that everything is done on the same 32-bit memory location.

  19. Reader Preference Lock Readers always get to the front of the line, ahead of any writer other than the one already writing. (figure: start reading increments the reader counter; end reading decrements it) Again, notice that everything is done on the same 32-bit memory location.
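The single-word reader preference lock of slides 17–19 can be sketched as follows. This is a reconstruction in C11 atomics under the bit layout described above, with my own names; the paper expresses it in pseudocode with fetch_and_add and compare_and_swap.

```c
#include <stdatomic.h>

#define WAFLAG  0x1u   /* bit 0: a writer is writing              */
#define RC_INCR 0x2u   /* bits 31..1: interested/reading readers  */

typedef struct { atomic_uint word; } rp_lock;

static void start_read(rp_lock *l) {
    atomic_fetch_add(&l->word, RC_INCR);   /* announce interest first... */
    while (atomic_load(&l->word) & WAFLAG)
        ;                                  /* ...then wait out the writer */
}

static void end_read(rp_lock *l) {
    atomic_fetch_sub(&l->word, RC_INCR);
}

static void start_write(rp_lock *l) {
    /* a writer may proceed only from the all-zero word:
       no writer writing, no reader interested or reading */
    unsigned expected = 0;
    while (!atomic_compare_exchange_weak(&l->word, &expected, WAFLAG))
        expected = 0;
}

static void end_write(rp_lock *l) {
    atomic_fetch_sub(&l->word, WAFLAG);
}
```

Because a reader bumps the counter before checking the writer flag, any arriving reader blocks all later writers – the reader preference. Every operation still touches the same 32-bit word, which is the global hot spot the later sections remove.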

  20. Outline • Abstract • Introduction • Simple Reader Writer Spin Lock • Reader Preference Lock • Fair Lock • Locks with Local-Only-Spinning • Fair Lock • Reader Preference Lock • Writer Preference Lock • Empirical Results & Conclusions • Summary Scalable Reader Writer Synchronization

  21. Fair Lock • A fair lock is used when the system must maintain a balance between keeping the information up to date and remaining responsive (the system should respond to data requests within a reasonable amount of time)

  22. Fair Lock • The lock keeps 2 pairs of counters: • completed readers/writers: those who finished reading/writing • total readers/writers: those who finished + outstanding requests • prev/ticket: the value a process waits on in line • a writer's ticket = total readers + total writers • a reader's ticket = total writers (because it can read together with the other readers) (figure: the counter words – total readers/total writers over completed readers/completed writers; ticket = prev)

  23.–28. Fair Lock (figure sequence: a worked example stepping the total and completed reader/writer counters as readers and writers take tickets, wait for them to come up, and complete)

  29. Fair Lock Again, notice that everything is done in the same centralized memory location – 3 counters.
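The ticket scheme of slides 22–29 can be sketched with two packed counter words. This is a reconstruction with assumed names and a 16-bits-per-field layout, intended to match the rules above (a reader's ticket is the count of earlier writers; a writer's ticket is the count of all earlier processes):

```c
#include <stdatomic.h>

/* Low 16 bits count readers, high 16 bits count writers. */
#define RD_ONE  0x00000001u
#define WR_ONE  0x00010000u
#define RD_MASK 0x0000ffffu

typedef struct {
    atomic_uint requests;     /* total readers/writers that ever asked */
    atomic_uint completions;  /* readers/writers that have finished    */
} fair_rw;

static void start_read(fair_rw *l) {
    /* my ticket: the number of writers that requested before me */
    unsigned prev_writers =
        atomic_fetch_add(&l->requests, RD_ONE) & ~RD_MASK;
    /* wait until exactly those writers have completed */
    while ((atomic_load(&l->completions) & ~RD_MASK) != prev_writers)
        ;
}

static void end_read(fair_rw *l) {
    atomic_fetch_add(&l->completions, RD_ONE);
}

static void start_write(fair_rw *l) {
    /* my ticket: every reader and writer that requested before me */
    unsigned prev = atomic_fetch_add(&l->requests, WR_ONE);
    while (atomic_load(&l->completions) != prev)
        ;
}

static void end_write(fair_rw *l) {
    atomic_fetch_add(&l->completions, WR_ONE);
}
```

Readers wait only on the writer half of `completions`, so concurrent readers overlap; writers wait on the whole word. Note that all processes still spin on the same centralized counters, which slide 30 identifies as the problem.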

  30. Spin On A Global Location • The last 2 algorithms implement busy waiting by spinning on the same memory location. • When many processes spin on the same location, it becomes a hot spot in the system. • Interference from still-waiting (spinning) processes increases the time it takes processes that have finished waiting to release the lock. • Interference from still-waiting processes also degrades the performance of processes trying to access the same memory area (not just the same exact location)

  31. Outline • Abstract • Introduction • Simple Reader Writer Spin Lock • Reader Preference Lock • Fair Lock • Locks with Local-Only-Spinning • Fair Lock • Reader Preference Lock • Writer Preference Lock • Empirical Results & Conclusions • Summary Scalable Reader Writer Synchronization

  32. Locks with Local Only Spinning • This is the main section of the paper: it contains implementations of reader/writer locks that busy wait on local locations (not all on the same location). • Why not just use the previously mentioned MCS algorithm? • too much serialization for the readers, which could read at the same time • too long a code path for this purpose; it can be done more efficiently

  33. Outline • Abstract • Introduction • Simple Reader Writer Spin Lock • Reader Preference Lock • Fair Lock • Locks with Local-Only-Spinning • Fair Lock • Reader Preference Lock • Writer Preference Lock • Empirical Results & Conclusions • Summary Scalable Reader Writer Synchronization

  34. Fair Lock (local spinning only) • Writing can be done when all previous read and write requests have been satisfied. • Reading can be done when all previous write requests have been satisfied. • Like the MCS algorithm, a queue is used. • A reader can begin reading when its predecessor is an active reader or when the previous writer has finished. • A writer can write when its predecessor is done and there are no active readers.

  35. Fair Lock (local spinning only) (figure: the data structures)
  node: type (reader/writer), next (pointer), blocked (boolean: free/blocked), successor_type (reader/writer)
  lock: tail (pointer to a node, nil), reader_count (counter, 0), next_writer (pointer to a node, nil)

  36.–42. Fair Lock (local spinning only) (figure sequence: a writer's new_node is appended at lock.tail; when readers are active, lock.next_writer records the waiting writer and reader_count tracks the readers; the busy wait is on my own node)

  43.–50. Fair Lock (local spinning only) (figure sequence: successive readers and writers link their nodes behind their predecessors, recording successor_type, and on release unblock their successors through their own nodes; lock.tail always points at the last node)
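The data structures of slide 35 translate directly to C. This is only a declaration sketch with names following the slides (the full acquire/release protocol for the local-spin fair lock is in the paper and omitted here):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef enum { READER, WRITER } role;
typedef enum { SUCC_NONE, SUCC_READER, SUCC_WRITER } succ_role;

typedef struct rw_qnode {
    role kind;                           /* reader or writer            */
    _Atomic(struct rw_qnode *) next;     /* my successor in the queue   */
    atomic_bool blocked;                 /* I spin only on this field   */
    _Atomic succ_role successor_type;    /* what kind of node follows   */
} rw_qnode;

typedef struct {
    _Atomic(rw_qnode *) tail;        /* last node in the queue (nil)     */
    atomic_uint reader_count;        /* readers currently in the section */
    _Atomic(rw_qnode *) next_writer; /* first writer waiting on readers  */
} rw_qlock;

static void rw_qlock_init(rw_qlock *l) {
    atomic_store(&l->tail, NULL);
    atomic_store(&l->reader_count, 0);
    atomic_store(&l->next_writer, NULL);
}
```

Each process spins only on the `blocked` flag of its own node; `reader_count` and `next_writer` let the last reader hand the lock to the first waiting writer.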
