
ECE 669 Parallel Computer Architecture Lecture 18 Scalable Parallel Caches


Presentation Transcript


1. ECE 669 Parallel Computer Architecture, Lecture 18: Scalable Parallel Caches

2. Overview
• Most cache protocols are more complicated than two-state
• Snooping is not effective for network-based systems
• Consider three alternative cache-coherence approaches
  - Full-map, limited directory, chained
• Caching will affect network performance
• LimitLESS protocol
  - Gives the appearance of a full map
• Practical issues of processor-memory system interaction

3. Context for Scalable Cache Coherence
• Scalable networks: many simultaneous transactions
• Realizing programming models through network transaction protocols
  - efficient node-to-network interface
  - interprets transactions
• Scalable distributed memory
• Caches naturally replicate data
  - coherence through bus-snooping protocols
  - consistency
• Need cache-coherence protocols that scale!
  - no broadcast or single point of order

4. Generic Solution: Directories
• Maintain the state vector explicitly
  - associated with each memory block
  - records the state of the block in each cache
• On a miss, communicate with the directory
  - determine the location of cached copies
  - determine the action to take
  - conduct the protocol to maintain coherence

5. A Cache-Coherent System Must:
• Provide a set of states, a state-transition diagram, and actions
• Manage the coherence protocol
  (0) Determine when to invoke the coherence protocol
  (a) Find information about the state of the block in other caches to determine the action
      - whether it needs to communicate with other cached copies
  (b) Locate the other copies
  (c) Communicate with those copies (invalidate/update)
• (0) is done the same way on all systems
  - the state of the line is maintained in the cache
  - the protocol is invoked if an "access fault" occurs on the line
• Different approaches are distinguished by (a) to (c)

6. Coherence in Small Machines: Snooping Caches
[Figure: dual-ported caches snooping a shared bus; a broadcast of address "a" on a write matches another cache's directory entry and purges that copy]
• Broadcast the address on a shared write
• Everyone listens (snoops) on the bus to see if any of their own addresses match
• How do you know when to broadcast, invalidate, ...?
  - State associated with each cache line

7. State Diagram for Ownership Protocols
[Figure: three-state diagram (invalid, read-clean, write-dirty). A local read takes invalid to read-clean; a local write broadcasts the address and takes either state to write-dirty; a remote write (replace) returns a block to invalid; a remote read demotes write-dirty to read-clean.]
• Maintained for each shared-data cache block
• In an ownership protocol, the writer owns the exclusive copy (a minimal state-machine sketch follows)
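A minimal C sketch of the three-state ownership machine described above; the state and event names are illustrative assumptions, not taken from any particular protocol specification.

    /* Three-state ownership protocol: a local write must broadcast the
     * address so other caches can invalidate their copies. */
    typedef enum { INVALID, READ_CLEAN, WRITE_DIRTY } line_state_t;
    typedef enum { LOCAL_READ, LOCAL_WRITE, REMOTE_READ, REMOTE_WRITE } event_t;

    /* Returns the next state; *broadcast is set when the transition
     * requires broadcasting the address on the bus. */
    line_state_t next_state(line_state_t s, event_t e, int *broadcast)
    {
        *broadcast = 0;
        switch (s) {
        case INVALID:
            if (e == LOCAL_READ)  return READ_CLEAN;
            if (e == LOCAL_WRITE) { *broadcast = 1; return WRITE_DIRTY; }
            return INVALID;
        case READ_CLEAN:
            if (e == LOCAL_WRITE)  { *broadcast = 1; return WRITE_DIRTY; }
            if (e == REMOTE_WRITE) return INVALID;       /* replaced/invalidated */
            return READ_CLEAN;
        case WRITE_DIRTY:
            if (e == REMOTE_READ)  return READ_CLEAN;    /* give up exclusive ownership */
            if (e == REMOTE_WRITE) return INVALID;
            return WRITE_DIRTY;
        }
        return s;
    }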

8. Maintaining Coherence in Large Machines
• Software
• Hardware - directories
• Software coherence typically yields weak coherence, i.e. coherence only at sync points (or fence points)
• E.g., when using critical sections for shared operations:

    GET_FOO_LOCK
    /* MUNGE WITH FOOs */
    Foo1 = X
    ... = Foo2
    Foo3 = ...
    RELEASE_FOO_LOCK

• How do you make this work?

9. Situation
[Figure: two processors share foo1-foo4, whose home is a remote memory; the writer flushes the lines and waits for completion before unlocking]
• Flush foo* from the cache, wait until done (sketched below)
• Issues
  - Lock?
  - Must be conservative - lose some locality
  - But can exploit application characteristics (e.g. TSP, chaotic relaxation) and allow some inconsistency
  - Need special processor instructions
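A hedged sketch of what "flush foo*, wait till done" might look like around a critical section; GET_FOO_LOCK/RELEASE_FOO_LOCK and flush_line() stand in for the machine-specific lock and cache-flush/fence primitives the slides allude to, and none of these names are a real API.

    /* Software-enforced coherence at a sync point: write back and
     * invalidate the shared lines before releasing the lock, so the
     * next processor entering the critical section sees fresh values. */
    extern void GET_FOO_LOCK(void);
    extern void RELEASE_FOO_LOCK(void);
    extern void flush_line(volatile void *addr);   /* write back + invalidate one line */

    volatile int foo1, foo2, foo3, foo4;

    void munge_with_foos(int x)
    {
        GET_FOO_LOCK();
        foo1 = x;             /* munge with foos */
        int y = foo2;
        foo3 = y;
        flush_line(&foo1);    /* flush foo* from the cache ... */
        flush_line(&foo2);
        flush_line(&foo3);
        flush_line(&foo4);
        /* ... and wait until the flushes complete before unlocking */
        RELEASE_FOO_LOCK();
    }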

10. Scalable Approach: Directories
• Every memory block has associated directory information
  - keeps track of copies of cached blocks and their states
  - on a miss, find the directory entry, look it up, and communicate only with the nodes that have copies, if necessary
  - in scalable networks, communication with the directory and the copies is through network transactions
• Many alternatives for organizing directory information

11. Basic Operation of a Directory
• Read from main memory by processor i:
  - If the dirty bit is OFF: { read from main memory; turn p[i] ON; }
  - If the dirty bit is ON: { recall the line from the dirty processor (its cache state goes to shared); update memory; turn the dirty bit OFF; turn p[i] ON; supply the recalled data to i; }
• Write to main memory by processor i:
  - If the dirty bit is OFF: { supply data to i; send invalidations to all caches that have the block; turn the dirty bit ON; turn p[i] ON; ... }
• k processors
• With each cache block in memory: k presence bits, 1 dirty bit
• With each cache block in a cache: 1 valid bit and 1 dirty (owner) bit
(A sketch in C follows.)
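A minimal C sketch of the directory actions listed above, assuming k = 64 so the presence bits fit in one word; recall_line(), send_inval(), and supply_data() are placeholders for network transactions, not a real interface.

    #include <stdint.h>
    #include <stdbool.h>

    #define K 64                        /* number of processors (assumed) */

    typedef struct {
        uint64_t presence;              /* one presence bit p[i] per processor */
        bool     dirty;                 /* set when exactly one cache owns the block */
    } dir_entry_t;

    extern void recall_line(int owner, long block);  /* owner writes back, goes to shared */
    extern void send_inval(int sharer, long block);
    extern void supply_data(int requester, long block);

    void dir_read(dir_entry_t *d, long block, int i)
    {
        if (d->dirty) {                 /* recall the line from the dirty processor */
            for (int p = 0; p < K; p++)
                if (d->presence & (1ULL << p)) { recall_line(p, block); break; }
            d->dirty = false;           /* memory is now up to date */
        }
        d->presence |= 1ULL << i;       /* turn p[i] ON */
        supply_data(i, block);
    }

    void dir_write(dir_entry_t *d, long block, int i)
    {
        if (!d->dirty) {                /* the dirty-bit-ON case is elided, as on the slide */
            for (int p = 0; p < K; p++)
                if (p != i && (d->presence & (1ULL << p)))
                    send_inval(p, block);
            d->presence = 1ULL << i;    /* turn p[i] ON */
            d->dirty = true;
            supply_data(i, block);
        }
    }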

12. Scalable Dynamic Schemes
• Limited directories
• Chained directories
• LimitLESS schemes - use software
• Other approach: full map (not scalable)

13. Scalable Hardware Schemes
[Figure: directories (DIR) distributed with the memories (MEM) at each node, alongside cache (C) and processor (P); the schemes are limited, chained, and LimitLESS directories]
• General directories: on a write, check the directory; if the block is shared, send invalidate messages
• Distribute the directories with the memories
  - Directory bandwidth scales in proportion to memory bandwidth
  - Most directory accesses happen during memory accesses, so not too many extra network requests (except on a write to a variable others are reading)

14. Memory Controller (Directory) State Diagram for a Memory Block
[Figure: three directory states - "uncached", "1 write copy", and "1 or more read copies". A write request installs the single write copy and sends invalidations; a read request from processor i adds a read copy and updates the pointers; replacement/update returns the block to uncached.]

15. [Figure: distributed-memory machine - a memory (M), cache (C), and processor (P) at each node, connected by a network]

16. Limited Directories: Exploit Worker-Set Behavior
[Figure: limited directory entries held with each node's memory, connected by the network]
• Invalidate one copy if a 5th processor comes along (sometimes a broadcast-invalidate bit can be set instead)
• Rarely do more than 2 processors share a block
• Insight: the set of 4 pointers can be managed like a fully associative 4-entry cache over the virtual space of all pointers (sketched in C below)
• But what do you do about widely shared data?
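One way the four pointers could be managed like a small fully associative cache, sketched in C; the entry layout and the overflow policy are assumptions, not a specific machine's design.

    #include <stdbool.h>

    #define NPTRS 4

    typedef struct {
        int  ptr[NPTRS];    /* node IDs of up to four sharers */
        int  nptrs;         /* pointers currently in use */
        bool broadcast;     /* alternative: mark the block broadcast-invalidate */
    } lim_dir_entry_t;

    extern void send_inval(int node, long block);

    /* Record a new sharer; when a 5th sharer arrives, either evict one
     * pointer (invalidating that copy) or set the broadcast bit. */
    void add_sharer(lim_dir_entry_t *d, long block, int node)
    {
        for (int i = 0; i < d->nptrs; i++)
            if (d->ptr[i] == node) return;        /* already tracked */
        if (d->nptrs < NPTRS) {
            d->ptr[d->nptrs++] = node;
        } else {
            send_inval(d->ptr[0], block);         /* evict like a 4-entry cache ... */
            d->ptr[0] = node;
            /* ... or instead: d->broadcast = true; */
        }
    }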

17. LimitLESS Directories: Limited directories Locally Extended through Software Support
[Figure: a 5th request arrives at a node's limited directory, which traps that node's processor to record the extra sharer (steps 1-3)]
• Trap the processor when the 5th request arrives
• The processor extends the directory into local memory (sketched below)
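A rough sketch of the LimitLESS overflow path: the hardware keeps a few pointers, and the trap handler appends any further sharers to a list kept in the node's local memory. The structure and names are illustrative, not the actual Alewife implementation.

    #include <stdlib.h>

    #define HW_PTRS 4

    typedef struct overflow_node {
        int                   sharer;
        struct overflow_node *next;
    } overflow_node_t;

    typedef struct {
        int              hw_ptr[HW_PTRS];  /* pointers held in the hardware directory */
        int              hw_used;
        overflow_node_t *sw_list;          /* software extension in local memory */
    } limitless_entry_t;

    /* Trap handler run on the node's processor when the 5th request arrives. */
    void overflow_trap(limitless_entry_t *d, int new_sharer)
    {
        overflow_node_t *n = malloc(sizeof *n);
        n->sharer  = new_sharer;
        n->next    = d->sw_list;
        d->sw_list = n;                    /* hardware pointers are left untouched */
    }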

18. LimitLESS: All-Software Coherence
[Figure: zero-pointer LimitLESS - local and remote requests reaching a node's memory always trap through the communication controller to the processor, so every coherence action is handled in software]

19. Chained Directories
[Figure: on a write, invalidations follow the chain of cached copies from node to node across the network]
• Simply a different data structure for the directory: link all cache entries holding the block (sketched below)
• But longer latencies
• Also more complex hardware
• Must handle replacement of elements in the chain due to misses!
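A sketch of a chained directory as a linked list threaded through the cache lines; walking the chain on a write shows where the longer latency comes from. Field and helper names are assumptions.

    #define NIL (-1)

    typedef struct {
        int head;                  /* node ID of the first sharer, or NIL */
    } chain_dir_entry_t;

    typedef struct {
        int next;                  /* next sharer in the chain, or NIL */
        /* ... tag, state, data ... */
    } chain_cache_line_t;

    extern chain_cache_line_t *line_at(int node, long block);
    extern void send_inval(int node, long block);

    /* On a write, invalidate every copy by following the chain; the
     * latency grows with the number of sharers. */
    void invalidate_chain(chain_dir_entry_t *d, long block)
    {
        int node = d->head;
        while (node != NIL) {
            int next = line_at(node, block)->next;  /* read the link before invalidating */
            send_inval(node, block);
            node = next;
        }
        d->head = NIL;
    }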

20. Doubly Linked Chains
[Figure: the chain of cached copies linked in both directions across the nodes]
• Of course, these data structures can also be implemented in software with messages, as in LimitLESS

21. [Figure: full-map directory - each memory block's DIR entry holds a presence pointer for every node's cache]

22. Full Map: Problem
[Figure: a write (1) checks the full-map directory, invalidations (2) go to every sharer, acknowledgments (3) return, and write permission is granted (4)]
• Does not scale - needs N pointers
• Directories are distributed, so they are not a bottleneck
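A rough illustration of why N pointers do not scale (the numbers are assumptions, not from the lecture): with 64-byte blocks and one presence bit per node, a 1024-node machine needs 1024 bits, i.e. 128 bytes, of directory state per block - a 200% storage overhead that grows linearly with machine size.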

23. Hierarchical Caches
[Figure: hierarchical organization, e.g. KSR (which actually uses rings) - processors with caches under a shared higher-level MEMORY, with disks(?) above]

24. Hierarchical, e.g. KSR (continued)
[Figure: a processor issues RD A; copies of A are cached at each level of the hierarchy on the way down]

25. Hierarchical, e.g. KSR (continued)
[Figure: a later reference to A ("A?") searches up the hierarchy until it finds a cached copy]

26. Widely Shared Data
[Figure: a software combining tree - the writer sets a flag and waiting processors spin on, then propagate, the write down the tree]
• 1. Synchronization variables
• 2. Read-only objects (instructions, read-only data)
  - can mark these and bypass the coherence protocol
• 3. But the biggest problem: frequently read but rarely written data that does not fall into known patterns like synchronization variables
(A combining-tree sketch follows.)
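A sketch in C of the software combining tree in the figure: each processor spins on its own tree node's flag and forwards the release to its children, so no single flag is read by every processor. The structure and fanout are assumptions.

    #define FANOUT 4

    typedef struct tree_node {
        volatile int      flag;
        struct tree_node *child[FANOUT];   /* NULL where a child is absent */
    } tree_node_t;

    /* The writer sets the root flag once. */
    void release_root(tree_node_t *root)
    {
        root->flag = 1;
    }

    /* Each waiting processor is assigned one tree node: it spins on a
     * locally cached flag, then propagates the release one level down. */
    void wait_and_forward(tree_node_t *me)
    {
        while (!me->flag)
            ;                              /* spin locally, no hot spot */
        for (int i = 0; i < FANOUT; i++)
            if (me->child[i])
                me->child[i]->flag = 1;
    }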

27. All-Software Coherence
[Chart: execution time (Mcycles, 0.00-1.60) for all-software, LimitLESS1, LimitLESS2, LimitLESS4, and all-hardware protocols]

28. All-Software Coherence (continued)
[Chart: cycle performance for block matrix multiplication on 16 processors - execution time (megacycles, 0.0-9.0) vs. matrix size (4x4 to 128x128), comparing the LimitLESS4 protocol with LimitLESS0 (all software)]

29. All-Software Coherence (continued)
[Chart: performance for a single global barrier (first INVR to last RDATA) - execution time (10,000-cycle units, 0.0-5.0) for 2 to 64 nodes, comparing LimitLESS4 with LimitLESS0 (all software)]

30. Summary
• Tradeoffs in caching are an important issue
• The LimitLESS protocol provides a software extension to hardware caching
• Goal: maintain coherence while minimizing network traffic
• Full map is not scalable and is too costly
• Distributed memory makes caching more of a challenge
