
Computer architecture II




  1. Computer Architecture II, Lecture 8

  2. Today:
  • Cache coherency
    • Write-through (last class)
    • Write-back
    • Invalidation-based: MESI
    • Update-based: Dragon
  • Consistency models
    • Program order
    • Difference between coherency and consistency
    • Sequential consistency
    • Relaxing sequential consistency

  3. Invalidation-Based Write-Through: State Transition Diagram
  [State diagram over states V and I: PrRd/- and PrWr/BusWr loop on V; PrRd/BusRd takes I to V; BusWr/- takes V to I; PrWr/BusWr is issued from I; processor-initiated and snooper-initiated transitions are drawn separately]
  • One transition diagram per cache block
  • Block states: V (valid), I (invalid)
    • V: the block contains a correct copy of main memory
    • I: the block is not in the cache
  • Notation a/b
    • a: observed event
    • b: action taken on the event
  • Events
    • The processor can read/write the block (PrRd, PrWr)
    • The bus can read/write the block (BusRd, BusWr)
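The V/I controller above is small enough to write down as a lookup table. This is an illustrative sketch, not the lecture's code; in particular, keeping PrWr in I (write-no-allocate) is an assumption, since the slide does not show that transition's destination state.

```python
# Sketch of the two-state write-through invalidation controller.
# Each cache block runs this FSM independently.
VI_TABLE = {
    ("V", "PrRd"):  ("V", None),      # read hit: no bus traffic
    ("V", "PrWr"):  ("V", "BusWr"),   # write-through to memory
    ("V", "BusWr"): ("I", None),      # another cache wrote: invalidate
    ("I", "PrRd"):  ("V", "BusRd"),   # read miss: fetch the block
    ("I", "PrWr"):  ("I", "BusWr"),   # assumed write-no-allocate
    ("I", "BusWr"): ("I", None),      # irrelevant: we hold no copy
}

def vi_next(state, event):
    """Return (next_state, bus_action) for one cache block."""
    return VI_TABLE[(state, event)]
```

For example, a write observed on the bus knocks out a valid local copy: `vi_next("V", "BusWr")` gives `("I", None)`.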

  4. Invalidation-Based Write-Through (cont.)
  [Same V/I state diagram as slide 3]
  • A write invalidates the copies in all other caches (no local change of state)
  • There can be multiple simultaneous readers of a block, but a write invalidates them
  • Implementation
    • Hardware state bits are associated only with blocks that are in the cache
    • Other blocks can be seen as being in the invalid (not-present) state in that cache

  5. Problems with Write-Through
  • High bandwidth requirements
    • Every write from every processor goes to the shared bus and memory
    • Write-through is unpopular for SMPs
  • Write-back caches absorb most writes as cache hits
    • Write hits don't go on the bus
    • But now how do we ensure write propagation and serialization?
    • Need more sophisticated protocols: large design space

  6. Write-Back Snoopy Protocols
  • No need to change the processor, main memory, or cache
  • Extend the cache controller (beyond the V and I states) and exploit the bus (which provides serialization)
  • The dirty state now also indicates exclusive ownership
    • Exclusive: the only cache with a valid copy (main memory may have one too)
    • Owner: responsible for supplying the block upon a request for it
  • 2 types of protocols
    • Invalidation-based
    • Update-based

  7. Basic MSI Write-Back Invalidation Protocol
  • States
    • Invalid (I)
    • Shared (S): in one or more caches
    • Dirty or Modified (M): in one cache only
  • Processor events
    • PrRd (read)
    • PrWr (write)
  • Bus transactions
    • BusRd (read): asks for a copy with no intent to modify
    • BusRdX (read exclusive): asks for a copy with intent to modify
    • BusWB (write back): updates memory
  • Actions
    • Update the state, perform a bus transaction, flush the value onto the bus

  8. MSI State Transition Diagram
  [State diagram: PrRd/- and PrWr/- loop on M; BusRd/Flush takes M to S; BusRdX/Flush takes M to I; PrWr/BusRdX takes S (and I) to M; PrRd/- and BusRd/- loop on S; BusRdX/- takes S to I; PrRd/BusRd takes I to S]
  • Replacements and write-backs are not shown
  • Rd/Wr in M and Rd in S cause no bus transaction
  • But Rd/Wr starting in I, and Wr in S, do cause bus transactions
    • and data is sent on BusRdX
    • the data transfer can be spared when the cache already has the latest data: use an upgrade (BusUpgr) instead of BusRdX
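A minimal sketch of the MSI table and of the bus snooping it implies, replaying the read/write trace of slide 9; the helper `step` and the processor names are illustrative, not from the lecture.

```python
# MSI controller as a table keyed by (state, observed event);
# values are (next state, action placed on the bus).
MSI = {
    ("M", "PrRd"):   ("M", None),
    ("M", "PrWr"):   ("M", None),
    ("M", "BusRd"):  ("S", "Flush"),   # supply dirty data, downgrade
    ("M", "BusRdX"): ("I", "Flush"),   # supply data, then invalidate
    ("S", "PrRd"):   ("S", None),
    ("S", "PrWr"):   ("M", "BusRdX"),  # BusUpgr could spare the data transfer
    ("S", "BusRd"):  ("S", None),
    ("S", "BusRdX"): ("I", None),
    ("I", "PrRd"):   ("S", "BusRd"),
    ("I", "PrWr"):   ("M", "BusRdX"),
    ("I", "BusRd"):  ("I", None),
    ("I", "BusRdX"): ("I", None),
}

def step(states, proc, event):
    """Apply one processor event; others snoop the resulting bus transaction."""
    new = dict(states)
    new[proc], bus_txn = MSI[(states[proc], event)]
    if bus_txn in ("BusRd", "BusRdX"):
        for p in states:
            if p != proc:
                new[p], _ = MSI[(states[p], bus_txn)]
    return new

# Replay the trace of slide 9 (every cache starts with the block invalid):
s = {"P1": "I", "P2": "I", "P3": "I"}
s = step(s, "P1", "PrRd")   # P1: S
s = step(s, "P3", "PrRd")   # P1: S, P3: S
s = step(s, "P3", "PrWr")   # P3: M, P1 invalidated
s = step(s, "P1", "PrRd")   # P3 flushes; both back to S
s = step(s, "P2", "PrRd")
print(s)   # {'P1': 'S', 'P2': 'S', 'P3': 'S'}
```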

  9. Example: Write-Back (MSI) Protocol
  [Figure: P1, P2, P3 with caches on a bus shared with memory and I/O devices; memory initially holds u = 5; the trace shows each cache's block state (S or M) and value of u after every step]
  • P1 reads u
  • P3 reads u
  • P3 writes u = 7
  • P1 reads u
  • P2 reads u

  10. MESI (4-State) Invalidation Protocol
  • Problem with the MSI protocol
    • Reading and then modifying data takes 2 bus transactions, even with no sharing
      • even in a sequential program!
    • BusRd (I->S) followed by BusRdX or BusUpgr (S->M)
  • Add an exclusive state: a non-modified block that resides only in the local cache can be written locally without a bus transaction
  • States
    • invalid
    • exclusive or exclusive-clean (only this cache has a copy, and it is not modified)
    • shared (two or more caches may have copies)
    • modified (dirty)

  11. MESI State Transition Diagram
  [State diagram: M, E, S, I. PrRd/- and PrWr/- loop on M; PrWr/- silently takes E to M; BusRd/Flush takes M and E to S; BusRdX/Flush takes M, E, and S to I; PrWr/BusRdX takes S and I to M; PrRd/BusRd(S) takes I to S when the shared line is asserted, and to E when it is not]
  • Goal: no bus transaction on a write in E
  • When does I go to E, and when to S?
    • I -> E on PrRd if no other processor has a copy
    • I -> S otherwise
  • S: an additional signal on the bus
    • BusRd(S): on a BusRd, if some other processor holds the block it asserts S (drives it to 1)
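The I -> E versus I -> S decision, and the silent E -> M upgrade it pays for, can be sketched with two hypothetical helpers covering only the transitions discussed above:

```python
# Sketch of the MESI read-miss path: the S line sampled on BusRd
# decides whether the block enters E (no other copy) or S.
def mesi_read_miss(other_copies):
    """State entered from I on PrRd; `other_copies` models the S line."""
    return ("S", "BusRd") if other_copies else ("E", "BusRd")

def mesi_write(state):
    """A write in E upgrades to M silently; in S or I it needs the bus."""
    if state in ("E", "M"):
        return ("M", None)          # no bus transaction: the MESI payoff
    return ("M", "BusRdX")          # S or I: must first gain exclusivity

# A sequential program touching private data: read, then write.
state, txn1 = mesi_read_miss(other_copies=False)   # I -> E via BusRd
state, txn2 = mesi_write(state)                    # E -> M, silently
# MSI would have needed two bus transactions; MESI needs one.
print(state, txn1, txn2)   # M BusRd None
```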

  12. Dragon Write-Back Update Protocol
  • 4 states
    • Exclusive-clean, or exclusive (E): this cache and memory have the block
    • Shared clean (Sc): this cache, others, and maybe memory have it, but I'm not the owner
    • Shared modified (Sm): this cache and others have it, but not memory, and I'm the owner
      • Sm and Sc can coexist in different caches, with only one Sm
    • Modified, or dirty (M): this cache only
  • No invalid state
    • If a block is in the cache, it cannot be invalid
    • If not present in the cache, it can be viewed as being in a not-present or invalid state
  • New processor events: PrRdMiss, PrWrMiss
    • Introduced to specify actions when the block is not present in the cache
  • New bus transaction: BusUpd
    • Broadcasts the single written word on the bus; updates the caches that hold a copy

  13. Dragon State Transition Diagram
  [State diagram: E, Sc, Sm, M. PrRdMiss/BusRd(S) enters Sc if the shared line is asserted, E otherwise; PrWrMiss/(BusRd(S); BusUpd) enters Sm if shared, M otherwise; PrWr/BusUpd(S) moves Sc and Sm to Sm when shared and to M when not; BusUpd/Update keeps Sc in Sc and moves Sm to Sc; BusRd/Flush loops on Sm and takes M to Sm; PrRd/- loops on every state; PrWr/- loops on M and takes E to M]
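A sketch of the same diagram as a transition function; `shared` stands for the bus S line, and the signal handling is deliberately simplified (an assumption, not the lecture's exact controller):

```python
# Simplified Dragon controller: (state, event) -> (next state, bus action).
def dragon_next(state, event, shared=False):
    if event == "PrRdMiss":                     # block not present
        return ("Sc" if shared else "E", "BusRd(S)")
    if event == "PrWrMiss":
        return ("Sm" if shared else "M", "BusRd(S);BusUpd")
    if event == "PrWr" and state in ("Sc", "Sm"):
        # broadcast the word; remain an owner only if others still share
        return ("Sm" if shared else "M", "BusUpd(S)")
    table = {
        ("E",  "PrRd"):   ("E",  None),
        ("E",  "PrWr"):   ("M",  None),       # silent upgrade
        ("E",  "BusRd"):  ("Sc", None),
        ("M",  "PrRd"):   ("M",  None),
        ("M",  "PrWr"):   ("M",  None),
        ("M",  "BusRd"):  ("Sm", "Flush"),    # supply data, keep ownership
        ("Sc", "PrRd"):   ("Sc", None),
        ("Sc", "BusRd"):  ("Sc", None),
        ("Sc", "BusUpd"): ("Sc", "Update"),   # take the broadcast word
        ("Sm", "PrRd"):   ("Sm", None),
        ("Sm", "BusRd"):  ("Sm", "Flush"),
        ("Sm", "BusUpd"): ("Sc", "Update"),   # another cache became owner
    }
    return table[(state, event)]
```

Note the contrast with MESI: a write while others share the block (`dragon_next("Sc", "PrWr", shared=True)`) updates the copies rather than invalidating them, so no state ever becomes invalid.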

  14. Invalidate versus Update
  • The basic question is one of program behavior
    • Is a block written by one processor read by others before it is rewritten?
  • Invalidation:
    • Yes => the readers will take a miss
    • No => multiple writes without additional traffic
  • Update:
    • Yes => readers will not miss if they previously had a copy
      • a single bus transaction updates all copies
    • No => multiple useless updates, even to dead copies
  • Invalidate or update may be better depending on the application
    • Invalidation protocols are much more popular
    • Some systems provide both, or even hybrids

  15. Today: Consistency Models
  • Program order
  • Difference between coherency and consistency
  • Sequential consistency
  • Relaxing sequential consistency

  16. Program Order (an Example)

  P1:              P2:
  (1a) A = 1;      (2a) print B;
  (1b) B = 2;      (2b) print A;

  • Program order: the order in which instructions appear in the source code
    • It may be changed by a compiler
    • We will assume the order the programmer sees (what you see in the example above, not how the assembly code would look)
  • Sequential program order
    • P1: 1a->1b
    • P2: 2a->2b
  • Parallel program order: an arbitrary interleaving of the sequential orders of P1 and P2, e.g.
    • 1a->1b->2a->2b
    • 1a->2a->1b->2b
    • 2a->1a->1b->2b
    • 2a->2b->1a->1b

  17. Program Order (cont.)

  Initially A = 0, B = 0
  P1:              P2:
  (1a) A = 1;      (2a) print B;
  (1b) B = 2;      (2b) print A;

  • What are the possible intuitive printings of the program?
  • A compiler, or out-of-order execution on a superscalar processor, may reorder 1a and 1b of P1 as long as they do not affect the result of the program on P1
    • This would produce non-intuitive results
  • Now assume that the compiler/superscalar processor does not reorder
    • P1 will "see" the results of the writes A=1 and B=2 in program order
  • But
    • when will P2 see the results of the writes A=1 and B=2?
    • when will P2 see the result of the write A=1?
  • We say that a processor P2 "sees" the result of a write by P1, or that the write operation of P1 completes with respect to P2
  • Coherence => writes to one location become visible to all processors in the same order
    • But here we have 2 locations!

  18. Setup for Memory Consistency

  /* Assume the initial value of A is 0 */
  P1:                      P2:
  A = 1;
  Barrier ---------------- Barrier
                           print A;

  • Coherence => writes to one location become visible to all in the same order
  • Nothing is said about
    • when a write becomes visible to another processor
      • use event synchronization (such as the barrier above) to ensure that
    • the order in which consecutive writes to different locations are seen by other processors

  19. Second Example

  /* Assume the initial values of A and flag are 0 */
  P1:               P2:
  1.a A = 1;        2.a while (flag == 0); /* spin idly */
  1.b flag = 1;     2.b print A;

  • The intuition is not guaranteed by coherence
    • Coherence refers to one location: return the last value written to A, or to flag
    • It says nothing about the order in which the modifications of A and flag are seen by P2
  • Intuitively we expect memory to
    • respect the order between accesses to different locations issued by a given process (1.b seen after 1.a)
  • Conclusion: coherence is not enough!
    • it pertains only to a single location
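One way to see why coherence alone is not enough: model only what coherence guarantees, namely that each of P1's writes eventually becomes visible, and let the two writes reach P2 in either order. The helper below is purely illustrative:

```python
from itertools import permutations

def p2_prints(visibility_order):
    """Value P2 prints, given the order in which P1's writes reach it."""
    mem = {"A": 0, "flag": 0}
    for var, val in visibility_order:
        mem[var] = val
        if mem["flag"] != 0:       # the spin loop (2.a) exits here
            return mem["A"]        # 2.b prints whatever A is now
    return None                    # flag never became visible

writes = [("A", 1), ("flag", 1)]   # 1.a and 1.b
results = {p2_prints(list(order)) for order in permutations(writes)}
print(results)   # {0, 1}: the non-intuitive 0 is allowed
```

If flag becomes visible first, P2 escapes the spin loop and still reads A = 0, which is exactly the behavior a consistency model must rule out (or permit) explicitly.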

  20. Back to the Second Example

  /* Assume the initial values of A and B are 0 */
  P1:              P2:
  (1a) A = 1;      (2a) print B;
  (1b) B = 2;      (2b) print A;

  • What's the intuition? If 2a prints 2, will 2b print 1?
  • We need an ordering model for clear semantics
    • across different locations as well
    • so programmers can reason about which results are possible
  • This is the memory consistency model

  21. Memory Consistency Model
  • Specifies constraints on the order in which memory operations (from any process) can appear to execute with respect to one another
    • Which orders are preserved?
    • Given a load, which are the possible values it can return?
  • Without it, we can't tell much about the execution of a shared-address-space program
  • Implications for both programmer and system designer
    • The programmer uses it to reason about correctness and possible results
    • The system designer can use it to constrain how much accesses may be reordered by the compiler or hardware
  • A contract between programmer and system

  22. Sequential Consistency
  • Total order achieved by interleaving the accesses from different processes
    • Maintains program order; memory operations from all processes appear to issue, execute, and complete atomically with respect to the others
    • as if there were no caches and a single memory
  • "A multiprocessor is sequentially consistent if the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program." [Lamport, 1979]

  23. SC Example

  /* Assume the initial values of A and B are 0 */
  P1:              P2:
  (1a) A = 1;      (2a) print B;   -> prints B = 2
  (1b) B = 2;      (2b) print A;   -> prints A = 0

  • What matters is the order in which operations appear to execute, not the chronological order of events
  • Possible outcomes for (A, B): (0,0), (1,0), (1,2)
  • What about (0,2)?
    • program order => 1a->1b and 2a->2b
    • A = 0 implies 2b->1a, which implies 2a->1b
    • B = 2 implies 1b->2a, which leads to a contradiction
  • What about 1b->1a->2b->2a?
    • it appears just like 1a->1b->2a->2b => fine!
    • the execution order 1b->2a->2b->1a is not fine: it would produce (0,2)
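The argument can also be checked by brute force: enumerate all sequentially consistent interleavings of the two program orders and collect the printed pairs. A small illustrative sketch:

```python
# Enumerate every SC interleaving of the example and collect the
# (A, B) values printed by P2; (0, 2) never shows up.
def interleavings(a, b):
    """All merges of sequences a and b that preserve each one's order."""
    if not a or not b:
        yield list(a) + list(b)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

P1 = [("wr", "A", 1), ("wr", "B", 2)]          # 1a, 1b
P2 = [("rd", "B"), ("rd", "A")]                # 2a, 2b

outcomes = set()
for seq in interleavings(P1, P2):
    mem, printed = {"A": 0, "B": 0}, {}
    for op in seq:
        if op[0] == "wr":
            mem[op[1]] = op[2]
        else:
            printed[op[1]] = mem[op[1]]
    outcomes.add((printed["A"], printed["B"]))

print(sorted(outcomes))   # [(0, 0), (1, 0), (1, 2)]
```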

  24. Back to the First Example

  P1:              P2:
  (1a) A = 1;      (2a) print B;
  (1b) B = 2;      (2b) print A;

  • Sequential program order
    • P1: 1a->1b
    • P2: 2a->2b
  • Parallel program order: an arbitrary interleaving of the sequential orders of P1 and P2
    • 1a->1b->2a->2b (the intuitive one)
    • 1a->2a->1b->2b
    • 1a->2a->2b->1b
    • 2a->1a->1b->2b
    • 2a->1a->2b->1b
    • 2a->2b->1a->1b
  • But 1a->1b->2b->2a is also acceptable for SC!

  25. Implementing SC
  • Two kinds of requirements
  • Program order
    • memory operations issued by a process must appear to execute (become visible to the others and to itself) in program order
  • Atomicity
    • in the overall hypothetical total order, one memory operation should appear to complete with respect to all processes before the next one is issued
    • this guarantees that the total order is consistent across processes

  26. Summary of Sequential Consistency
  [Figure: every ordered pair of ordinary accesses (read->read, read->write, write->read, write->write) is preserved]
  • Maintain order between shared accesses in each thread
    • reads and writes wait for the previous reads and writes to complete

  27. Do We Really Need SC?
  • SC has strong requirements
  • SC may prevent compiler optimizations (code reorganization) and architectural optimizations (out-of-order execution in a superscalar)
  • Many programs execute correctly even without "strong" ordering
    • explicit synchronization operations order the key accesses

  initial: A = 0, B = 0
  P1:                      P2:
  A := 1;
  B := 3.1415
  barrier ---------------- barrier
                           ... = A;
                           ... = B;

  28. Does SC Eliminate Synchronization?
  • No, it is still needed
    • Critical sections (e.g., inserting an element into a doubly-linked list)
    • Barriers (e.g., enforcing an order on a variable access)
    • Events (e.g., waiting for a condition to become true)
  • SC only ensures interleaving semantics of individual memory operations

  29. Is SC Hardware Enough?
  • No, the compiler can violate ordering constraints
    • Register allocation to eliminate memory accesses
    • Common subexpression elimination
    • Instruction reordering
    • Software pipelining
  • Unfortunately, programming languages and compilers are largely oblivious to memory consistency models

  Before register allocation:          After register allocation:
  P1        P2                         P1        P2
  B = 0     A = 0                      r1 = 0    r2 = 0
  A = 1     B = 1                      A = 1     B = 1
  u = B     v = A                      u = r1    v = r2
                                       B = r1    A = r2
  (u,v) = (0,0) disallowed under SC    (u,v) = (0,0) may occur here

  30. What Orderings Are Essential?
  • The stores to A and B must complete before the unlock
  • The loads of A and B must be performed after the lock
  • Conclusion: we may relax the sequential consistency semantics

  initial: A = 0, B = 0
  P1:                      P2:
  A := 1;
  B := 3.1415
  unlock(L)                lock(L)
                           ... = A;
                           ... = B;

  31. Hardware-Centric Models
  [Figure: for each model, which of the read->read, read->write, write->read, and write->write orderings are preserved]
  • Processor Consistency (Goodman 89)
  • Total Store Ordering (Sindhu 90)
  • Partial Store Ordering (Sindhu 90)
  • Causal Memory (Hutto 90)
  • Weak Ordering (Dubois 86)

  32. Relaxing Write-to-Read (PC, TSO)
  • Why?
    • Hardware may hide the latency of a write
      • a write miss sits in the write buffer; later reads hit, and may even bypass the write
    • the write to flag is not visible until the write to A is visible
  • PC: non-atomic writes (a write need not complete with respect to all other processors at once)
  • Examples: Sequent Balance, Encore Multimax, VAX 8800, SPARCcenter, SGI Challenge, Pentium Pro

  initial: A = 0, flag = 0, y = 0
  P1:               P2:
  (a) A = 1;        (c) while (flag == 0) {}
  (b) flag = 1;     (d) y = A;

  33. Comparing with SC
  [Figure: four example programs, (a)-(d), each starting from A = 0, B = 0]
  • Different results
    • (a), (b): same for SC, TSO, and PC
    • (c): PC allows A=0 (no write atomicity: A=1 may complete wrt P2 but not wrt P3)
    • (d): TSO and PC allow A=B=0 (the read executes before the write)
  • Mechanism for ensuring SC semantics: MEMBAR (Sun SPARC V9)
    • a subsequent read waits until all previous writes complete
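Example (d) can be modeled with per-processor FIFO store buffers, the usual hardware reading of TSO: a read forwards from its own buffer but cannot see the other processor's still-buffered write. A sketch (the drain-timing flag is an artificial knob for illustration):

```python
# Model of example (d) under TSO: P1 runs {A = 1; u = B} and
# P2 runs {B = 1; v = A}, each behind a FIFO store buffer.
def run(drain_before_reads):
    mem = {"A": 0, "B": 0}
    buf = {1: [], 2: []}

    def write(cpu, var, val):
        buf[cpu].append((var, val))        # the write enters the store buffer

    def read(cpu, var):
        for v, val in reversed(buf[cpu]):  # store forwarding from own buffer
            if v == var:
                return val
        return mem[var]                    # other buffers are invisible

    def drain(cpu):
        for v, val in buf[cpu]:
            mem[v] = val
        buf[cpu].clear()

    write(1, "A", 1)                       # P1: A = 1
    write(2, "B", 1)                       # P2: B = 1
    if drain_before_reads:                 # SC-like timing
        drain(1); drain(2)
    u = read(1, "B")                       # P1: u = B
    v = read(2, "A")                       # P2: v = A
    return u, v

print(run(drain_before_reads=False))  # (0, 0): allowed under TSO, not SC
print(run(drain_before_reads=True))   # (1, 1)
```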

  35. MEMBAR Example

  /* initially A = 0, B = 0 */
  P1:              P2:
  A = 1;           B = 1;
  membar;          membar;
  print B;         print A;

  • MEMBAR (Sun SPARC V9) restores SC semantics where needed
    • a subsequent read waits until all previous writes complete

  36. Relaxing Write-to-Read and Write-to-Write (PSO)
  • Why?
    • Bypass multiple missing writes in the cache
    • Overlap several write operations => good performance
  • But now even example (a) breaks
  • Use MEMBAR: a subsequent write waits until all previous writes have completed

  37. Relaxing All Orders
  • Retain control and data dependences within each thread
  • Why?
    • allow multiple overlapping read operations
    • reads may be bypassed by writes
    • hide read latency (for read misses)
  • Two important models
    • Weak ordering
    • Release consistency

  38. Weak Ordering
  [Figure: blocks of ordinary reads/writes separated by synchronization operations]
  • A synchronization operation waits for all previous memory operations to complete
  • Between synchronization operations, the completion order is arbitrary

  39. Release Consistency
  • Differentiates between synchronization operations
    • acquire: a read operation performed to gain access to a set of operations or variables
    • release: a write operation that grants access to other processors
  • An acquire must complete wrt all processors before the accesses that follow it
    • Lock(TaskQ) before newTask->next = Head; ..., UnLock(TaskQ)
  • A release must wait until the accesses that precede it (including the acquire) complete
    • UnLock(TaskQ) waits for Lock(TaskQ), ..., Head = newTask->next;
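The TaskQ pattern maps directly onto a lock whose acquire/release carry exactly these semantics; Python's threading.Lock is used here purely as an illustration (the queue and task names follow the slide, the rest is assumed):

```python
import threading

# Lock(TaskQ)/UnLock(TaskQ) from the slide, as acquire/release of a
# threading.Lock: writes made inside the critical section are
# guaranteed visible to the next thread that acquires the lock.
task_q = []                       # stands in for the Head list
task_q_lock = threading.Lock()

def add_task(task):
    with task_q_lock:             # acquire: completes before the body runs
        task_q.insert(0, task)    # newTask->next = Head; Head = newTask
    # leaving the block is the release: it waits for the writes above

def take_task(out):
    with task_q_lock:             # acquire
        if task_q:
            out.append(task_q.pop(0))

done = []
t1 = threading.Thread(target=add_task, args=("newTask",))
t1.start(); t1.join()
t2 = threading.Thread(target=take_task, args=(done,))
t2.start(); t2.join()
print(done)   # ['newTask']
```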

  40. Release Consistency (cont.)
  • Intuition:
    • The programmer inserts acquire/release operations around code that shares variables
    • The acquire has to complete before the following instructions
      • because the other processors must know that a critical section has been entered
    • The acquire and the code before the acquire can be reordered
    • The code before the release has to complete
      • because the critical-section modifications must become visible to the others
    • The release and the code after the release can be reordered

  41. Preserved Orderings
  [Figure: under Weak Ordering, blocks 1, 2, 3 of read/write operations are separated by Synch operations, and each Synch is ordered after the previous block and before the next; under Release Consistency, only the Acquire is ordered before its following block and the Release after its preceding block, allowing the blocks to overlap more]
  • A block contains the instructions of one processor that may be reordered among themselves
  • Intuitive results and good performance if data races are eliminated through synchronization
