Cache Memory

Cheng-Chang Yang

Generally speaking, faster memory is more expensive than slower memory.

  • To provide the best performance at a reasonable cost, memory is organized as a hierarchy.
Memory Hierarchy
  • The basic types of memory in a hierarchical memory system include:
  • Registers
  • Cache Memory
  • Main Memory
  • Secondary Memory: hard disk, CD…
What is Cache Memory?
  • Cache memory speeds up memory accesses by keeping recently used data closer to the CPU than main memory (see the sketch below).
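
A minimal sketch of this idea in Python (the memory contents, function name, and sizes are illustrative assumptions, not from the slides):

```python
# Minimal sketch of the cache idea: check a small, fast store first and
# fall back to (slow) main memory on a miss. All names are illustrative.
main_memory = {addr: addr * 2 for addr in range(1024)}  # stand-in for RAM
cache = {}  # small, fast store for recently used data

def read(addr):
    if addr in cache:             # cache hit: served quickly
        return cache[addr]
    value = main_memory[addr]     # cache miss: slow main-memory access
    cache[addr] = value           # keep a copy close to the "CPU"
    return value
```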
Cache and Main Memory
  • The Level 2 cache is slower and larger than the Level 1 cache, and the Level 3 cache is slower and larger than the Level 2 cache.
    • Access speed: Level 1 > Level 2 > Level 3
    • Capacity: Level 1 < Level 2 < Level 3
Cache Mapping Function
  • How do we determine which main memory block currently occupies a given cache line?
  • Direct Mapping
  • Associative Mapping
  • Set Associative Mapping
Direct Mapped Cache
  • Maps each block of main memory into only one possible cache line (see the sketch below).
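
A minimal Python sketch of the usual placement computation, under an assumed line count: the block number modulo the number of cache lines selects the single candidate line, and the remaining high-order bits form the tag stored alongside the data.

```python
# Direct mapping sketch: each main-memory block maps to exactly one line.
# NUM_LINES and the address split are illustrative assumptions.
NUM_LINES = 8

def direct_map(block_number):
    line = block_number % NUM_LINES   # the only line this block may occupy
    tag = block_number // NUM_LINES   # distinguishes blocks sharing a line
    return line, tag

# Blocks 3, 11, and 19 all compete for line 3; only their tags differ.
for b in (3, 11, 19):
    print(b, direct_map(b))
```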
Associative Mapping
  • Instead of placing main memory blocks in specific cache locations based on the main memory address, we could allow a block to go anywhere in the cache.
  • In this way, the cache would have to fill up before any blocks are moved out.
  • This is how an associative mapping cache works.
Associative Mapping
  • Associative mapping overcomes the disadvantage of direct mapping by permitting each main memory block to be loaded into any line of the cache (see the sketch below).
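
A minimal sketch of the lookup this implies, with an illustrative line count. Real hardware compares all the tags in parallel; the loop below is only a software stand-in.

```python
# Fully associative sketch: a block may occupy any line, so a lookup must
# compare the requested tag against every line's tag.
NUM_LINES = 4
lines = [None] * NUM_LINES  # each entry is (tag, data) or None

def lookup(tag):
    for entry in lines:               # hardware does this in parallel
        if entry is not None and entry[0] == tag:
            return entry[1]           # hit
    return None                       # miss

def fill(tag, data):
    for i, entry in enumerate(lines):
        if entry is None:             # the cache fills up before any eviction
            lines[i] = (tag, data)
            return
    raise RuntimeError("cache full: a replacement algorithm must pick a victim")
```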
AssociativeMapping
  • We must determine which block to move out of the cache.
  • A simple first-in, first-out (FIFO) algorithm would work; however, many replacement algorithms can be used, and these are discussed later.
Set Associative Mapping
  • The problem with direct mapping is eased by allowing a few choices for block placement.
  • At the same time, the hardware cost is reduced by decreasing the size of the associative search.
  • Set associative mapping is a compromise that combines direct mapping and associative mapping while reducing their disadvantages (see the sketch below).
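
A minimal sketch of a 2-way set-associative lookup under assumed sizes: direct mapping chooses the set, then a small associative search chooses the way within it.

```python
# Set-associative sketch: direct mapping picks the set; associative search
# picks the way within it. NUM_SETS and WAYS are illustrative assumptions.
NUM_SETS, WAYS = 4, 2
sets = [[None] * WAYS for _ in range(NUM_SETS)]  # each entry: (tag, data)

def lookup(block_number):
    index = block_number % NUM_SETS   # direct-mapped part: choose the set
    tag = block_number // NUM_SETS
    for entry in sets[index]:         # associative part: search a few ways
        if entry is not None and entry[0] == tag:
            return entry[1]           # hit
    return None                       # miss: a replacement algorithm picks a way
```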
Replacement Algorithms
  • For direct mapping there is only one possible line for any block, so no choice is possible.
  • For associative and set associative mapping, a replacement algorithm is needed:
  • Least recently used (LRU)
  • First in, first out (FIFO)
  • Least frequently used (LFU)
  • Random
Replacement Algorithms
  • The least recently used (LRU) algorithm keeps track of the last time each block was accessed and evicts the block that has been unused for the longest period of time (see the sketch below).
  • The first in, first out (FIFO) algorithm selects and removes the block that has been in the cache the longest.
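
A minimal LRU sketch using Python's collections.OrderedDict (the capacity is an illustrative assumption, and hardware tracks recency very differently):

```python
from collections import OrderedDict

# LRU sketch: move a block to the "recent" end on every access and evict
# from the "old" end when full. CAPACITY is an illustrative assumption.
CAPACITY = 4
cache = OrderedDict()  # keys ordered from least to most recently used

def access(block, data):
    if block in cache:
        cache.move_to_end(block)   # touched: now the most recently used
        return cache[block]
    if len(cache) >= CAPACITY:
        cache.popitem(last=False)  # evict the least recently used block
    cache[block] = data            # FIFO differs only in never reordering
    return data                    # entries on a hit
```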
Replacement Algorithms
  • The least frequently used (LFU) algorithm replaces the block in the set that has experienced the fewest references (see the sketch below).
  • The most effective is least recently used (LRU).
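
For comparison, a minimal LFU sketch under the same illustrative assumptions (ties between equally referenced blocks are broken arbitrarily):

```python
# LFU sketch: count references per resident block and evict the block
# with the fewest references when the cache is full.
CAPACITY = 4
cache = {}   # block -> data
counts = {}  # block -> reference count

def access(block, data):
    if block in cache:
        counts[block] += 1                    # one more reference
        return cache[block]
    if len(cache) >= CAPACITY:
        victim = min(counts, key=counts.get)  # fewest references
        del cache[victim], counts[victim]
    cache[block] = data
    counts[block] = 1
    return data
```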
Reference
  • Internet Source
    • Wikipedia (http://en.wikipedia.org/wiki/Cache_memory)
  • Book
    • Computer Organization and Embedded Systems (6th ed.)
    • Computer Organization and Architecture (8th ed.)