
Cache Memory

Cheng-Chang Yang



Memory Hierarchy

  • The basic types of memory in a hierarchical memory system include:

  • Registers

  • Cache Memory

  • Main Memory

  • Secondary memory: hard disk, CD, …


What is Cache Memory?

  • Cache memory speeds up memory accesses by storing recently used data closer to the CPU, instead of in main memory.




Cache and Main Memory

  • The Level 2 cache is slower and larger than the Level 1 cache, and the Level 3 cache is slower and larger than the Level 2 cache.

    • Transmission speed: Level 1 > Level 2 > Level 3

    • Transmission capacity: Level 1 < Level 2 < Level 3


Flow Chart (Cache Read Operation)

RA: read address
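
The flow chart image itself is not reproduced in this transcript. As a minimal sketch of the read operation it depicts (check the cache for the block containing RA; deliver the word on a hit; fetch the block from main memory into the cache on a miss), here is a Python simulation. The dict-based cache and the block size are illustrative assumptions, not details from the slides.

```python
# Minimal sketch of the cache read flow, assuming a dict-based cache
# keyed by block number and a list standing in for main memory.

def read(cache, main_memory, ra, block_size=4):
    """Read the word at address RA, going through the cache first."""
    block_number = ra // block_size           # which block contains RA
    if block_number in cache:                 # hit: word is already cached
        block = cache[block_number]
    else:                                     # miss: fetch the whole block
        start = block_number * block_size
        block = main_memory[start:start + block_size]
        cache[block_number] = block           # load the block into the cache
    return block[ra % block_size]             # deliver the requested word

# Usage: a 16-word "main memory" and an initially empty cache.
memory = list(range(100, 116))
cache = {}
print(read(cache, memory, 5))   # miss: block 1 is loaded, returns 105
print(read(cache, memory, 6))   # hit: block 1 is already cached, returns 106
```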


Cache Mapping Function

  • How do we determine which main memory block a given cache line currently holds?

  • Direct Mapping

  • Associative Mapping

  • Set Associative Mapping


Direct Mapped Cache

  • Maps each block of main memory into only one possible cache line.
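
As a rough illustration, direct mapping computes the single permitted line as the block number modulo the number of cache lines, keeping the remaining bits as a tag. The line count here is an assumption for the example, not a value from the slides.

```python
# Sketch of direct mapping: block j of main memory may only occupy
# cache line (j mod NUM_LINES); the tag distinguishes blocks that
# share a line. NUM_LINES is an illustrative value.

NUM_LINES = 8

def direct_map(block_number):
    line = block_number % NUM_LINES    # the one line this block may use
    tag = block_number // NUM_LINES    # identifies which block is resident
    return line, tag

# Blocks 3 and 11 collide on line 3 and would evict each other.
print(direct_map(3))    # (3, 0)
print(direct_map(11))   # (3, 1): same line, different tag
```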


Associative Mapping

  • Instead of placing main memory blocks in specific cache locations based on their main memory addresses, we could allow a block to go anywhere in the cache.

  • In this way, the cache would have to fill up completely before any block is evicted.

  • This is how an associative mapping cache works.


Associative Mapping

  • Associative mapping overcomes the disadvantages of direct mapping by permitting each main memory block to be loaded into any line of the cache.
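
A minimal sketch of that lookup, assuming a tiny fully associative cache where the whole block number serves as the tag (the sizes are illustrative): the price of placing a block anywhere is that a hit requires comparing the tag against every line.

```python
# Sketch of fully associative lookup: a block may occupy any line,
# so every line's tag must be searched. NUM_LINES is illustrative.

NUM_LINES = 4
lines = [None] * NUM_LINES   # each line holds a tag (block number) or None

def lookup(block_number):
    """Return the line index on a hit, or None on a miss."""
    for i, tag in enumerate(lines):
        if tag == block_number:    # compare against the tag in every line
            return i
    return None

def insert(block_number):
    """Place a block in any free line; a full cache needs a replacement policy."""
    for i, tag in enumerate(lines):
        if tag is None:
            lines[i] = block_number
            return i
    raise RuntimeError("cache full: a replacement algorithm must choose a victim")

insert(7); insert(12)
print(lookup(12))   # 1: found only after searching the lines
print(lookup(99))   # None: miss
```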


Associative Mapping

  • We must determine which block to evict from the cache.

  • A simple first-in first-out (FIFO) algorithm would work. However, there are many replacement algorithms that can be used; these are discussed later.


Set Associative Mapping

  • The collision problem of direct mapping is eased by allowing a few choices for block placement.

  • At the same time, the hardware cost is reduced by decreasing the size of the associative search.

  • Set associative mapping is a compromise that combines direct mapping and associative mapping while reducing their disadvantages.
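
A minimal sketch of that compromise, assuming a 2-way set associative cache with four sets (both parameters are illustrative): the set is chosen as in direct mapping, but the block may occupy either way within it, so the associative search is limited to two tags.

```python
# Sketch of 2-way set associative placement: a block maps to exactly
# one set (direct-mapping style) but may occupy either way within it
# (a small associative search). NUM_SETS and WAYS are illustrative.

NUM_SETS = 4
WAYS = 2
sets = [[None] * WAYS for _ in range(NUM_SETS)]   # each entry holds a tag

def lookup(block_number):
    s = block_number % NUM_SETS         # direct-mapped choice of set
    tag = block_number // NUM_SETS
    for way in range(WAYS):             # associative search within the set
        if sets[s][way] == tag:
            return s, way               # hit
    return None                         # miss

def insert(block_number):
    s = block_number % NUM_SETS
    tag = block_number // NUM_SETS
    for way in range(WAYS):
        if sets[s][way] is None:        # take any free way in the set
            sets[s][way] = tag
            return
    sets[s][0] = tag                    # set full: a replacement algorithm
                                        # would choose the victim

# Blocks 5, 13, and 9 all map to set 1; with two ways, only the
# third insert forces an eviction.
insert(5); insert(13); insert(9)
```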


Replacement Algorithms

  • For direct mapping there is only one possible line for any block, and no choice is possible.

  • For associative and set associative mapping, a replacement algorithm is needed:

  • Least recently used (LRU)

  • First in first out (FIFO)

  • Least frequently used (LFU)

  • Random


Replacement Algorithms

  • The least recently used (LRU) algorithm keeps track of the last time each block was accessed and evicts the block that has been unused for the longest period of time.

  • The first-in first-out (FIFO) algorithm selects the block that has been in the cache the longest and removes it from cache memory.


Replacement Algorithms

  • The least frequently used (LFU) algorithm replaces the block in the set that has experienced the fewest references.

  • The most effective is least recently used (LRU).
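
As a minimal sketch of LRU (the capacity and the OrderedDict bookkeeping are illustrative assumptions, not from the slides), the cache records access order and evicts the entry that has gone unused the longest:

```python
# Minimal LRU sketch: OrderedDict keeps blocks in access order, oldest
# first, so eviction pops from the front. Capacity is illustrative.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # least recently used first

    def access(self, block_number):
        if block_number in self.blocks:
            self.blocks.move_to_end(block_number)   # hit: now most recent
            return "hit"
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)         # evict least recently used
        self.blocks[block_number] = True
        return "miss"

cache = LRUCache(2)
print(cache.access(1), cache.access(2), cache.access(1), cache.access(3))
# miss miss hit miss -- block 2 (not 1) is evicted when block 3 arrives
```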


Reference

  • Internet Source

    • Wikipedia

      (http://en.wikipedia.org/wiki/Cache_memory)

  • Books

    • Computer Organization and Embedded Systems (6th edition)

    • Computer Organization and Architecture (8th edition)


End

