
Main Memory Update Policy

This presentation covers main memory update policies such as write-through, write-back, and clean and dirty blocks. It also discusses memory fetch policies and how read and write operations are handled efficiently.


Presentation Transcript


  1. Main Memory Update Policy
  Update: about 8~16% of memory references are writes.
  • Write-through: write data through to main memory as soon as it is placed in any cache. Reliable, but poor performance.
  • Write-back (copy-back): modifications are written to the cache and then written to main memory later. Fast: some data may be overwritten before it is written back, and so never needs to be written at all. Poor reliability: unwritten data will be lost whenever a machine crashes.
  • Clean and dirty blocks: a dirty bit indicates whether a line has been modified while in the cache. When a "dirty" line is replaced, it must be written back to main memory.
  • Write buffer: a queue that holds data while the data is waiting to be written to memory.
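  A minimal sketch in C of the two update policies on a write hit, using a toy one-line cache; the structure names, sizes, and the evict() helper are illustrative assumptions, not the course's simulator.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define LINE_WORDS 4

    struct line {
        uint32_t tag;
        uint32_t data[LINE_WORDS];
        int      valid;
        int      dirty;              /* dirty bit: set when the line is modified in the cache */
    };

    static uint32_t memory[1024];    /* toy word-addressed main memory */

    /* Write-through: update the cache and main memory at the same time. */
    static void write_through(struct line *l, uint32_t addr, uint32_t value)
    {
        l->data[addr % LINE_WORDS] = value;
        memory[addr] = value;        /* every write reaches memory: reliable, but slower */
    }

    /* Write-back: update only the cache and set the dirty bit;
       memory is updated later, when the dirty line is replaced. */
    static void write_back(struct line *l, uint32_t addr, uint32_t value)
    {
        l->data[addr % LINE_WORDS] = value;
        l->dirty = 1;                /* memory is now stale for this line */
    }

    /* On replacement, a dirty line must be written back to memory first. */
    static void evict(struct line *l, uint32_t old_block_addr)
    {
        if (l->valid && l->dirty)
            memcpy(&memory[old_block_addr], l->data, sizeof l->data);
        l->valid = 0;
        l->dirty = 0;
    }

    int main(void)
    {
        struct line l = { .tag = 0, .valid = 1, .dirty = 0 };
        write_through(&l, 5, 0xAAAA);   /* memory[5] is updated immediately     */
        write_back(&l, 6, 0xBBBB);      /* memory[6] stays stale until eviction */
        evict(&l, 4);                   /* the dirty line is written back here  */
        printf("mem[5]=%x mem[6]=%x\n", (unsigned)memory[5], (unsigned)memory[6]);
        return 0;
    }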

  2. Main Memory Fetch Policy
  • Demand fetch: fetch a block when it is needed and is not already in the cache, i.e. fetch the required block on a miss.
  • Prefetch: fetch blocks before they are requested. A simple prefetch strategy is to prefetch the (i+1)th block when the ith block is first referenced, on the expectation that it is likely to be needed if the ith block is needed.
  • Selective fetch: do not always fetch blocks; depending on some defined criterion, use main memory rather than the cache to hold the information.
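  A sketch in C of demand fetch versus the simple (i+1)-block prefetch described above; the present[] array and fetch_block() helper are hypothetical stand-ins for the real cache machinery.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_BLOCKS 64

    static bool present[NUM_BLOCKS];     /* which blocks are currently in the cache */

    static void fetch_block(int i)       /* hypothetical: copy block i into the cache */
    {
        present[i] = true;
        printf("fetched block %d\n", i);
    }

    /* Demand fetch: bring a block in only when a reference misses on it. */
    static void access_demand(int i)
    {
        if (!present[i])
            fetch_block(i);
    }

    /* Prefetch: on a miss for block i, also fetch block i+1 on the
       expectation that it will be needed soon. */
    static void access_prefetch(int i)
    {
        if (!present[i]) {
            fetch_block(i);
            if (i + 1 < NUM_BLOCKS && !present[i + 1])
                fetch_block(i + 1);
        }
    }

    int main(void)
    {
        access_demand(3);        /* miss: fetches block 3 only        */
        access_prefetch(10);     /* miss: fetches blocks 10 and 11    */
        access_prefetch(11);     /* hit: already prefetched, no fetch */
        return 0;
    }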

  3. How to Handle Read/Write
  Read is easy:
  • Send the address to the appropriate cache. The address comes either from the PC (for an instruction read) or from the ALU (for a data access).
  • If the cache signals hit: the requested word is available on the data lines.
  • If the cache signals miss: send the full address to main memory. When the memory returns the data, write it into the cache.
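  A minimal sketch in C of that read path for a direct-mapped cache; the geometry (8 sets, 4-word lines) and names are assumptions for illustration.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_SETS   8
    #define LINE_WORDS 4

    struct line { bool valid; uint32_t tag; uint32_t data[LINE_WORDS]; };

    static struct line cache[NUM_SETS];
    static uint32_t    memory[1 << 12];          /* toy word-addressed main memory */

    /* Read one word: on a hit return it from the cache; on a miss fetch the
       whole block from memory, install it in the cache, then return the word. */
    static uint32_t read_word(uint32_t addr)
    {
        uint32_t offset = addr % LINE_WORDS;
        uint32_t index  = (addr / LINE_WORDS) % NUM_SETS;
        uint32_t tag    = addr / (LINE_WORDS * NUM_SETS);
        struct line *l  = &cache[index];

        if (l->valid && l->tag == tag)            /* cache signals hit */
            return l->data[offset];

        /* cache signals miss: send the full address to main memory,
           write the returned block into the cache, then finish the read */
        uint32_t base = addr - offset;
        for (uint32_t i = 0; i < LINE_WORDS; i++)
            l->data[i] = memory[base + i];
        l->valid = true;
        l->tag   = tag;
        return l->data[offset];
    }

    int main(void)
    {
        memory[42] = 0xDEADBEEF;
        printf("first read:  %x\n", (unsigned)read_word(42));   /* miss, block filled */
        printf("second read: %x\n", (unsigned)read_word(42));   /* hit                */
        return 0;
    }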

  4. A read miss is easy to handle quickly: reading the tag and reading the block can be done simultaneously, before we know whether it is a hit. A write is usually slower: reading the tag and writing the block cannot be done simultaneously (except for one-word-line caches, as in the DEC 3100).

  5. For multiple-word lines, a write on a write miss becomes a read-modify-write cycle: read the original block, modify a portion of it, then write the block into the cache. The tag comparison cannot be done in parallel with the write, so it is slower.
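  A sketch in C of that read-modify-write cycle, assuming write-allocate and a 4-word line; the structure and helper names are illustrative only.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define LINE_WORDS 4

    struct line { bool valid, dirty; uint32_t tag; uint32_t data[LINE_WORDS]; };

    static uint32_t memory[1 << 12];     /* toy word-addressed main memory */

    /* Write miss on a multi-word line: we cannot simply overwrite one word,
       because the other words in the line would still belong to the old block. */
    static void write_miss(struct line *l, uint32_t addr, uint32_t tag, uint32_t value)
    {
        uint32_t offset = addr % LINE_WORDS;
        uint32_t base   = addr - offset;

        /* 1. read: fetch the whole block that the written word belongs to */
        memcpy(l->data, &memory[base], sizeof l->data);

        /* 2. modify: update only the addressed word */
        l->data[offset] = value;

        /* 3. write: the line is now a consistent copy of the new block */
        l->tag   = tag;
        l->valid = true;
        l->dirty = true;
    }

    int main(void)
    {
        struct line l = { 0 };
        memory[40] = 1; memory[41] = 2; memory[42] = 3; memory[43] = 4;
        write_miss(&l, 42, 10, 99);      /* word 2 of the block at address 40 */
        return 0;
    }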

  6. Example: assume blocks x and y map to the same cache set and the cache initially holds x. [Figure: main memory holds block x (words x1-x4) and block y (words y1-y4); the cache line shows tag x with data x1, x2, x3, x4.]

  7. Assume that before the write the cache contains line x. Writing word y3 of block y (with value z) is a miss. [Figure: if the new word is simply written into the line, the cache ends up holding tag y with data x1, x2, z, x4.] If we are not careful after the write miss, a later write-back of this line will destroy y1, y2, and y4 in memory!

  8. There are two options on a write miss:
  • Write allocate (also called fetch on write): the block is loaded into the cache, followed by the write-hit actions above. This is similar to a read miss.
  • No write allocate (also called write around): the block is modified in the lower level (main memory) and is not loaded into the cache.
  Think about what you do when you have a write miss!
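  A sketch in C contrasting the two options; it also shows why the hazard on slide 7 is avoided: write allocate fetches the block before writing, while no write allocate leaves the cache untouched and sends the write around it. Names and sizes are assumptions for illustration.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define LINE_WORDS 4

    struct line { bool valid, dirty; uint32_t tag; uint32_t data[LINE_WORDS]; };

    static uint32_t memory[1 << 12];     /* toy word-addressed main memory */

    /* Write allocate (fetch on write): load the block, then act as a write hit. */
    static void write_allocate(struct line *l, uint32_t addr, uint32_t tag, uint32_t value)
    {
        uint32_t offset = addr % LINE_WORDS;
        memcpy(l->data, &memory[addr - offset], sizeof l->data);  /* fetch the block    */
        l->data[offset] = value;                                  /* then the write hit */
        l->tag = tag; l->valid = true; l->dirty = true;
    }

    /* No write allocate (write around): update main memory only;
       the cache line is left untouched. */
    static void no_write_allocate(uint32_t addr, uint32_t value)
    {
        memory[addr] = value;
    }

    int main(void)
    {
        struct line l = { 0 };
        write_allocate(&l, 42, 10, 7);   /* the block enters the cache      */
        no_write_allocate(43, 8);        /* the write goes around the cache */
        return 0;
    }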

  9. When May Write-Through Be Better?
  • When a 2-level cache is used: a small on-chip CPU cache plus a large off-chip cache.
  • Consistency is easier.
  • Memory traffic is absorbed by the 2nd-level cache.

  10. Write-back vs. Write-through
  • Speed: write-back is fast.
  • Traffic: in general, copy-back is better, if there is more than one write hit to the line. This makes it attractive for multiprocessors in this sense.
  • Cache consistency: write-through is better.
  • Logic: copy-back is more complicated.
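  A back-of-envelope traffic comparison in C for the case of repeated writes to one line; the numbers (5 write hits, a 4-word line) are assumptions chosen only to make the traffic bullet concrete.

    #include <stdio.h>

    int main(void)
    {
        int writes_to_line = 5;          /* assumed: 5 write hits to one cache line */
        int line_words     = 4;          /* assumed: 4-word line                    */

        int wt_words = writes_to_line;   /* write-through: one memory word per write        */
        int wb_words = line_words;       /* write-back: one block write-back on replacement */

        printf("write-through traffic: %d words\n", wt_words);
        printf("write-back traffic:    %d words\n", wb_words);
        return 0;
    }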

  11. Write-back vs. Write-through (cont'd)
  • Buffering: needed for both, but copy-back needs only 1 entry, while a depth of about 4 is best for write-through. Management is complicated because when a reference is made, the buffer must be consulted.
  • Reliability: write-through is better, because main memory has error detection.
  "There is no clear choice" in terms of performance ...
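  A minimal sketch in C of the write buffer mentioned on slides 1 and 11: a small FIFO of pending (address, data) pairs, with the read-time lookup that makes buffer management complicated. The 4-entry depth follows the slide; everything else is an illustrative assumption.

    #include <stdint.h>
    #include <stdio.h>

    #define WB_DEPTH 4                       /* 4 entries, as suggested for write-through */

    struct wb_entry { uint32_t addr, data; };

    static struct wb_entry buf[WB_DEPTH];
    static int head, count;

    static uint32_t memory[1 << 12];         /* toy word-addressed main memory */

    /* Enqueue a write; if the buffer is full the CPU would normally stall,
       so here we simply retire the oldest entry to memory first. */
    static void wb_write(uint32_t addr, uint32_t data)
    {
        if (count == WB_DEPTH) {             /* buffer full: drain oldest entry */
            memory[buf[head].addr] = buf[head].data;
            head = (head + 1) % WB_DEPTH;
            count--;
        }
        buf[(head + count) % WB_DEPTH] = (struct wb_entry){ addr, data };
        count++;
    }

    /* A read must consult the buffer, or it could miss a not-yet-retired write. */
    static uint32_t wb_read(uint32_t addr)
    {
        for (int i = count - 1; i >= 0; i--) {          /* newest match wins */
            struct wb_entry *e = &buf[(head + i) % WB_DEPTH];
            if (e->addr == addr)
                return e->data;
        }
        return memory[addr];
    }

    int main(void)
    {
        wb_write(100, 1);
        wb_write(100, 2);
        printf("read 100 -> %u (from the buffer, not memory)\n", (unsigned)wb_read(100));
        return 0;
    }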

  12. Advantage
