
DDM – A Cache Only Memory Architecture

This presentation discusses the basics of Cache-Only Memory Architectures, focusing on the Data Diffusion Machine (DDM) and its coherence protocol. It explores examples of replacement, reading, and writing, as well as memory overhead and simulated performance. The strengths and weaknesses of DDM architecture are also discussed, along with alternatives to DDM.

Presentation Transcript


  1. DDM – A Cache Only Memory Architecture Hagersten, Landin, and Haridi (1991) Presented by Patrick Eibl

  2. Outline • Basics of Cache-Only Memory Architectures • The Data Diffusion Machine (DDM) • DDM Coherence Protocol • Examples of Replacement, Reading, Writing • Memory Overhead • Simulated Performance • Strengths and Weaknesses • Alternatives to DDM Architecture

  3. The Big Idea: UMA → NUMA → COMA • UMA: centralized shared memory feeds data through the network to individual caches; uniform access time to all memory • NUMA: shared memory is distributed among processors (e.g., DASH); data can move from its home memory to other caches as needed • COMA: no notion of a “home” for data; data moves to wherever it is needed, and the individual memories behave like caches

  4. COMA: The Basics • Individual memories are called Attraction Memories (AM) – each processor “attracts” its working data set • AM also contains data that has never been accessed (+/-?) • Uses shared memory programming model, but with no pressure to optimize static partitioning • Limited duplication of shared memory • The Data Diffusion Machine is the specific COMA presented here

  5. Data Diffusion Machine • Directory hierarchy allows scaling to an arbitrary number of processors • Branch factors and bottlenecks are a consideration • Hierarchy can be split into different address domains to improve bandwidth
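
As a rough illustration of how the directory hierarchy scales, the sketch below counts how many directory levels a full tree needs for a given processor count. The branch factor of 8 and the counting model are assumptions for illustration, not configurations taken from the paper.

```python
def directory_levels(num_processors: int, branch_factor: int = 8) -> int:
    """Directory levels needed above the processor buses in a full k-ary
    DDM-style tree (illustrative model; real configurations may differ)."""
    levels, capacity = 0, branch_factor      # one bus of processors, no directories yet
    while capacity < num_processors:
        levels += 1                          # add a directory level
        capacity *= branch_factor            # each level multiplies reachable processors by k
    return levels

# With these assumed parameters, 32 processors fit under one directory level
# and 256 under two; depth (and thus latency) grows only logarithmically.
for n in (32, 256):
    print(n, "processors ->", directory_levels(n), "directory level(s)")
```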

  6. Coherence Protocol • Transient states support a split-transaction bus • Fairly standard protocol, with the important exception of replacement, which must be managed carefully (example to come) • Sequential consistency is guaranteed, but at the cost that writes must wait for an acknowledge before continuing • Item States: I (Invalid), E (Exclusive), S (Shared), R (Reading), W (Waiting), RW (Reading and Waiting) • Bus Transactions: e (erase), x (exclusive), r (read), d (data), i (inject), o (out)
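
The item states and bus transactions listed above translate directly into code; the following is a minimal sketch (the names come from the slide, but the read-miss transition shown is a simplified illustration, not the full protocol specification).

```python
from enum import Enum

class ItemState(Enum):
    I = "Invalid"
    E = "Exclusive"
    S = "Shared"
    R = "Reading"               # transient: read outstanding
    W = "Waiting"               # transient: waiting for erase acknowledge
    RW = "Reading and Waiting"  # transient: lost a write race, must re-read

class BusTransaction(Enum):
    ERASE = "e"      # invalidate other copies
    EXCLUSIVE = "x"  # request exclusive ownership
    READ = "r"       # ask for a copy of an item
    DATA = "d"       # carries the item itself
    INJECT = "i"     # last copy looking for a new home
    OUT = "o"        # copy leaving an attraction memory

def read_miss(state: ItemState):
    """Simplified example transition: a read miss on an Invalid item moves it
    to the transient Reading state and puts a read transaction on the bus."""
    if state is ItemState.I:
        return ItemState.R, BusTransaction.READ
    return state, None  # hit or already in flight: no bus traffic in this sketch
```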

  7. Replacement Example (legend: o = out, i = inject, I = Invalid, S = Shared) [Figure: directory hierarchy above the attraction-memory caches and processors, showing the out and inject transactions] 1. A block needs to be brought into a full AM, necessitating a replacement and an out transaction 2. The out propagates up until it finds another copy of the block in state S, R, W, or RW 3. If the out reaches the top it is converted to an inject, meaning this is the last copy of the data and it needs a new home 4. The inject finds space in a new AM 5. The data is transferred to its new home 6. States change accordingly
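
A minimal sketch of the replacement walk described above, assuming a simple bottom-to-top list of directories; the data structures and names are hypothetical, chosen only to make the out/inject steps concrete.

```python
def replace(item, directories, attraction_memories, capacity):
    """directories: list of {item: set of states known below}, bottom to top.
    attraction_memories: list of dicts mapping item -> state.
    capacity: maximum number of items an attraction memory can hold."""
    # Steps 1-2: the evicted item leaves as an "out" and climbs the hierarchy;
    # it is absorbed as soon as some directory knows of another copy (S/R/W/RW).
    for directory in directories:
        if directory.get(item, set()) & {"S", "R", "W", "RW"}:
            return "out absorbed: another copy already exists"
    # Step 3: the out reached the top, so this was the last copy -> convert to "inject".
    # Steps 4-6: the inject looks for an attraction memory with free space,
    # the data moves there, and the states are updated accordingly.
    for am in attraction_memories:
        if len(am) < capacity:
            am[item] = "S"
            return "injected into a new attraction memory"
    return "no space found (a further replacement would be forced)"

# Example: no other copy exists, so the item is injected into the second AM.
print(replace("x", [{}, {}], [{"a": "S", "b": "S"}, {"c": "S"}], capacity=2))
```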

  8. Multilevel Read Example (legend: r = read, d = data, I = Invalid, R = Reading, A = Answering, S = Shared) [Figure: two read requests climbing the directory hierarchy and one data reply returning] 1. First cache issues a read request 2. The read propagates up the hierarchy; meanwhile a second cache issues a request for the same block 3. The first read reaches a directory with the block in shared state; the second request encounters the outstanding read for the same block, and its directory simply waits for the data reply from the first request 4. Directories change to the answering state while waiting for the data 5. The data moves back along the same path, changing states to shared as it goes
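
The combining behaviour in this example is the key point: a directory that already has a read outstanding for an item does not forward a second read, it simply waits for the single data reply. The sketch below is illustrative only; the class and its states are hypothetical simplifications of the protocol.

```python
class Directory:
    """Toy directory node: tracks per-item state "I", "S", "R" (reading)
    or "A" (answering) and forwards reads to its parent."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.state = name, parent, {}

    def read(self, item):
        if self.state.get(item) in ("R", "A"):
            # A read for this item is already in flight: combine with it and
            # wait for the one data reply instead of issuing another read.
            return f"{self.name}: combined with outstanding read"
        if self.state.get(item) == "S":
            # A copy exists below this directory: answer from there.
            self.state[item] = "A"
            return f"{self.name}: answering, waiting for data from below"
        self.state[item] = "R"          # mark the read as outstanding
        return self.parent.read(item) if self.parent else f"{self.name}: top reached"

top = Directory("top")
left = Directory("left", parent=top)
top.state["x"] = "S"                    # the top knows a copy exists somewhere
print(left.read("x"))                   # first read climbs to the top
print(left.read("x"))                   # second read for "x" is combined locally
```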

  9. Multilevel Write Example (legend: e = erase, x = exclusive, I = Invalid, R = Reading, W = Waiting, E = Exclusive, S = Shared) [Figure: two competing write requests in the directory hierarchy] 1. First cache issues a write request; its erase propagates up the hierarchy and back down, invalidating all other copies 2. A second cache issues a write to the same block 3. The second exclusive request encounters the other write to the same block; the first one wins because it arrived first, and the other erase is propagated back down 4. The top of the hierarchy responds with an acknowledge; the state of the second cache is changed to RW, and it will issue a read request before another erase (not shown) 5. The ACK propagates back down, changing states from Waiting to Exclusive
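
The write race resolves in favour of whichever erase reaches the top of the hierarchy first; the loser's copy has been erased, so it drops to Reading-and-Waiting and must re-read the item before retrying. A hedged sketch of just that arbitration step (cache names and the function are hypothetical):

```python
def arbitrate_writes(requests):
    """requests: caches that issued an erase/exclusive for the same item,
    in the order their requests reach the top of the hierarchy."""
    states = {}
    winner = requests[0]        # the first erase to arrive wins the race
    for loser in requests[1:]:
        states[loser] = "RW"    # Reading-and-Waiting: its copy was erased, so it
                                # must re-read the item and then retry its own erase
    # The acknowledge propagates back down and the winner goes from Waiting
    # to Exclusive; it may now perform its write.
    states[winner] = "E"
    return states

print(arbitrate_writes(["cache_A", "cache_B"]))   # {'cache_B': 'RW', 'cache_A': 'E'}
```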

  10. Memory Overhead • Inclusion is necessary for directories, but not for data • Directories only need state bits and address tags • For the two sample configurations given, overhead was 6% for a one-level 32-processor machine and 16% for a two-level 256-processor machine • A larger item size reduces the overhead
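
The overhead figure is essentially (state bits + tag bits) per item relative to the item size, since directories never store the data itself. A back-of-the-envelope sketch follows; the bit widths are assumptions for illustration and are not the 6% / 16% configurations from the paper, which also account for multiple directory levels.

```python
def directory_overhead(item_size_bits: int, tag_bits: int, state_bits: int = 4) -> float:
    """Per-level directory storage per item as a fraction of the item itself:
    directories keep only state bits and an address tag, never the data."""
    return (state_bits + tag_bits) / item_size_bits

# Assumed numbers purely for illustration: a 16-byte item with a 12-bit tag costs
# (4 + 12) / 128 = 12.5% per directory level; doubling the item size to 32 bytes
# halves that, which is why a larger item size reduces the overhead.
print(f"{directory_overhead(item_size_bits=128, tag_bits=12):.1%}")   # 12.5%
print(f"{directory_overhead(item_size_bits=256, tag_bits=12):.1%}")   # 6.2%
```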

  11. Simulated Performance • Limited success on programs in which each processor operates on the entire shared data set • MP3D was rewritten to improve performance by exploiting the fact that data has no home • OS, hardware, and emulator were still in development at the time • A different DDM topology was used for each program (-)

  12. Strengths • Each processor attracts the data it’s using into its own memory space • Data doesn’t need to be duplicated at a home node • Ordinary shared memory programming model • No need to optimize static partitioning (there is none) • Directory hierarchy scales reasonably well • Good when data is moved around in smaller chunks

  13. Weaknesses • Attraction memories hold data that isn't being used, making them bigger and slower • A different DDM hierarchy topology was used for each program in the simulations • Does not fully exploit large spatial locality; software wins in that case (S-COMA) • The branching hierarchy is prone to bottlenecks and hotspots • No way to know where data is except by an expensive tree traversal (NUMA wins here)

  14. Alternatives to COMA/DDM • Flat-COMA: blocks are free to migrate, but have home nodes with directories corresponding to their physical addresses • Simple-COMA: allocation is managed by the OS and done at page granularity • Reactive-NUMA: switches between S-COMA and NUMA with a remote cache on a per-page basis • Good summary of COMAs: http://ieeexplore.ieee.org/iel5/2/16679/00769448.pdf?tp=&isnumber=&arnumber=769448
