
Topics in Memory System Design



Presentation Transcript


  1. Topics in Memory System Design

  2. Reading List • Slides: Topic7x • Henn & Patt: Chapter 7 • Other papers as assigned in class or in homework

  3. [Diagram: main processor, memory management unit, high-speed cache, main memory, and backing store]

  4. Program Characteristics and Memory Organization • Random vs. sequential access: a trade-off between performance/cost and technology • Locality in memory access patterns motivates a hierarchy in memory design • cache • virtual memory

  5. Random Access Memory [Figure: the structure of a random-access memory (RAM): address register, data register, memory bus (to/from the processor), and memory cells at addresses 0 through N-1. Key: fixed access time]

  6. Memory Performance Bandwidth = number of bits per second that can be accessed = (bits/word) x (words/cycle) x (cycles/sec) So, how do we improve bandwidth? The "Von Neumann Bottleneck"
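
A quick back-of-the-envelope check of the formula above; the figures used (64-bit words, one word per cycle, a 100 MHz memory clock) are hypothetical and chosen only to make the units visible.

```c
#include <stdio.h>

/* Bandwidth = (bits/word) x (words/cycle) x (cycles/sec).
 * All numbers below are hypothetical, for illustration only. */
int main(void) {
    double bits_per_word   = 64.0;    /* bits per word            */
    double words_per_cycle = 1.0;     /* words delivered per cycle */
    double cycles_per_sec  = 100e6;   /* 100 MHz memory clock      */

    double bandwidth = bits_per_word * words_per_cycle * cycles_per_sec;
    printf("Bandwidth = %.2e bits/sec\n", bandwidth);  /* prints 6.40e+09 */
    return 0;
}
```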

  7. How to Improve Memory System Performance? • Reduce cycle time • Increase word size • Concurrency • Efficient memory design
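
A sketch of how each lever scales the bandwidth product from the previous slide; all figures are hypothetical, and "concurrency" is read here as fetching from several interleaved banks per cycle.

```c
#include <stdio.h>

/* Each improvement multiplies one factor of the bandwidth product.
 * Baseline and "improved" values are hypothetical examples. */
int main(void) {
    double baseline = 64.0  * 1.0 * 100e6;  /* 64-bit word, 1 word/cycle, 100 MHz  */
    double faster   = 64.0  * 1.0 * 200e6;  /* reduce cycle time: 200 MHz clock    */
    double wider    = 128.0 * 1.0 * 100e6;  /* increase word size: 128-bit word    */
    double banked   = 64.0  * 4.0 * 100e6;  /* concurrency: 4 interleaved banks    */

    printf("baseline: %.1e bits/s\n", baseline);
    printf("faster clock: %.1e, wider word: %.1e, 4 banks: %.1e bits/s\n",
           faster, wider, banked);
    return 0;
}
```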

  8. Cache • Almost all higher-performance microprocessors on the market use caches • Why not Cray vector architectures?

  9. The improvements in IC technology affected not only DRAMs, but also SRAMs, making the cost of caches much lower. Caches are one of the most important ideas in computer architecture because they can substantially improve performance by the use of memory. The growing gap between DRAM cycle times and processor cycle times, as the next figure shows, is a key motivation for caches. If we are to run processors at the speeds they are capable of, we must have higher-speed memories to provide data. [Jouppi & Hennessy 91]

  10. [Figure: the growing gap between DRAM cycle times and processor cycle times]

  11. Latency in a Single System [Plot: the ratio of memory access time to CPU time, annotated "THE WALL"]

  12. Locality of Reference "Program references tend to be clustered in time."

  13. Regions with High Access Probabilities • PC vicinity • Stack frame (local) • "Nearest" subroutines • Active data

  14. [Plot: probability of reference vs. address of reference]

  15. [Plot: probability of reference vs. address of reference] Problem: we often predict a bigger page size than is actually needed

  16. The success of cache memories has been explained by reference to the "property of locality" [Denn72]. The property of locality has two aspects, temporal and spatial. Over short periods of time, a program distributes its memory references nonuniformly over its address space, and which portions of the address space are favored remain largely the same for long periods of time.

  17. This first property, called temporal locality, or locality by time, means that the information which will be in use in the near future is likely to be in use already. This type of behavior can be expected from program loops in which both data and instructions are reused.
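
A minimal C sketch of where this behavior comes from; the function name and inputs (sum_array, a, n) are hypothetical. The accumulator and the loop's instructions are referenced on every iteration, so they are reused shortly after being brought into the cache.

```c
/* Temporal locality: `sum` and the loop body are re-referenced on
 * every iteration, so once cached they keep being found in the cache. */
double sum_array(const double *a, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        sum += a[i];   /* same accumulator and same instructions, every pass */
    }
    return sum;
}
```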

  18. The second property, locality by space, means that portions of the address space which are in use generally consist of a fairly small number of individually contiguous segments of that address space. Locality by space, then, means that the loci of reference of the program in the near future are likely to be near the current loci of reference.

  19. This type of behavior can be expected from common knowledge of programs; related data items (variables, arrays) are usually stored together, and instructions are mostly executed sequentially. Since the cache memory buffers segments of information that have been recently used, the property of locality implies that needed information is also likely to be found in the cache. [Smith82, p475]
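
A minimal C sketch of locality by space, with hypothetical names (sum_rows, ROWS, COLS): traversing a row-major array with the innermost loop over the rightmost index touches contiguous addresses, so each cache block fetched for one element also supplies the next few.

```c
/* Spatial locality: in C, a[i][j] and a[i][j+1] are adjacent in memory,
 * so a row-by-row traversal walks contiguous addresses.
 * ROWS and COLS are hypothetical sizes. */
#define ROWS 256
#define COLS 256

long sum_rows(long a[ROWS][COLS]) {
    long total = 0;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)   /* innermost index walks contiguous memory */
            total += a[i][j];
    return total;
}
```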

  20. Information in use in the near future is likely to consist of the information in current use (locality by time, i.e., temporal) and the information logically adjacent to that currently in use (locality by space, i.e., spatial).
