Presentation Transcript



Cache Memory

Cheng-Chang Yang



  • Generally speaking, faster memory is more expensive than slower memory.

  • To provide the best performance at a reasonable cost, memory is organized as a hierarchy.



Memory Hierarchy

  • The basic types of memory in a hierarchical memory system include:

  • Registers

  • Cache Memory

  • Main Memory

  • Secondary Memory: hard disk, CD…



What is Cache Memory?

  • Cache memory speeds up memory accesses by storing recently used data closer to the CPU, rather than fetching it from main memory every time.


Cache and Main Memory

  • The Level 2 cache is slower and larger than the Level 1 cache, and the Level 3 cache is slower and larger than the Level 2 cache.

    • Access speed: Level 1 > Level 2 > Level 3

    • Capacity: Level 1 < Level 2 < Level 3



Flow Chart (Cache Read Operation)

(Flow chart of the cache read operation; RA = read address.)
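
Since the flow chart itself is not reproduced in this transcript, here is a minimal Python sketch of the read path it describes: on a hit the word is delivered from the cache, and on a miss the block is fetched from main memory, loaded into the cache, and then delivered. The dictionary-based cache and the function name are illustrative assumptions, not part of the original slides.

    # Sketch of the cache read flow (illustrative only, not from the slides).
    def read(cache, main_memory, ra):          # ra: read address
        if ra in cache:                        # hit: word is already in the cache
            return cache[ra]
        word = main_memory[ra]                 # miss: access main memory
        cache[ra] = word                       # load the fetched block into the cache
        return word                            # deliver the word to the CPU

    main_memory = {0x1000: 42}
    cache = {}
    print(read(cache, main_memory, 0x1000))    # miss: fetched from main memory
    print(read(cache, main_memory, 0x1000))    # hit: served from the cache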



Cache Mapping Function

  • How do we determine which main memory block currently occupies a given cache line?

  • Direct Mapping

  • Associative Mapping

  • Set Associative Mapping


Direct Mapped Cache

  • Maps each block of main memory into only one possible cache line.
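
A minimal sketch of the direct-mapped placement rule, assuming a cache of eight lines: the line is chosen by taking the block number modulo the number of lines, and the remaining bits form the tag that identifies which block currently occupies that line. The constant and names are assumptions for illustration.

    NUM_LINES = 8                              # assumed number of cache lines

    def direct_map(block_number):
        line = block_number % NUM_LINES        # the only line this block may occupy
        tag = block_number // NUM_LINES        # identifies which block is resident
        return line, tag

    # Blocks 3 and 11 both map to line 3, so loading one evicts the other.
    print(direct_map(3))                       # (3, 0)
    print(direct_map(11))                      # (3, 1)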



Associative Mapping

  • Instead of placing main memory blocks in specific cache locations based on the main memory address, we could allow a block to go anywhere in the cache.

  • In this way, the cache would have to fill up before any block is evicted.

  • This is how an associative-mapped cache works.



Associative Mapping

  • Associative mapping overcomes the disadvantages of direct mapping by permitting each main memory block to be loaded into any line of the cache.
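
As a minimal sketch, assuming the cache is represented simply as a list of resident tags, a fully associative lookup compares the block's tag against every line, since the block may reside anywhere:

    # Fully associative lookup: search every line for a matching tag (illustrative).
    def associative_lookup(cache_lines, tag):
        for index, line_tag in enumerate(cache_lines):
            if line_tag == tag:                # tag matches: hit in this line
                return index
        return None                            # no line holds the block: miss

    lines = [7, None, 42, None]                # tags currently held in each line
    print(associative_lookup(lines, 42))       # 2  (hit in line 2)
    print(associative_lookup(lines, 9))        # None (miss)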



Associative Mapping

  • We must determine which block to evict from the cache.

  • A simple first-in, first-out (FIFO) algorithm would work. However, there are many replacement algorithms that can be used; these are discussed later.



Set Associative Mapping

  • The problem with direct mapping is eased by having a few choices for block placement.

  • At the same time, the hardware cost is reduced by decreasing the size of the associative search.

  • Set associative mapping is a compromise that exhibits the strengths of both direct mapping and associative mapping while reducing their disadvantages.
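
A minimal sketch of a two-way set-associative lookup, assuming four sets: the block number selects a set directly, and only the lines within that set are searched associatively. The constants and list-of-lists layout are assumptions for illustration.

    NUM_SETS = 4                               # assumed number of sets (two lines per set)

    def set_associative_lookup(sets, block_number):
        set_index = block_number % NUM_SETS    # direct-mapped choice of set
        tag = block_number // NUM_SETS
        for way, line_tag in enumerate(sets[set_index]):
            if line_tag == tag:                # associative search inside the set
                return set_index, way
        return None                            # no line in the set holds the block: miss

    sets = [[None, None] for _ in range(NUM_SETS)]
    sets[3][1] = 11 // NUM_SETS                # pretend block 11 was loaded earlier
    print(set_associative_lookup(sets, 11))    # (3, 1): hit
    print(set_associative_lookup(sets, 7))     # None: miss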



Replacement Algorithms

  • For direct mapping there is only one possible line for any block, and no choice is possible.

  • For associative and set associative mapping, a replacement algorithm is needed.

  • Least recently used (LRU)

  • First in first out (FIFO)

  • Least frequently used (LFU)

  • Random



Replacement Algorithms

  • Least recently used (LRU) algorithm: keeps track of the last time each block was accessed and evicts the block that has been unused for the longest period of time.

  • First in first out (FIFO) algorithm: the block that has been in the cache the longest is selected and removed from cache memory.
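
A minimal sketch of both policies, assuming an OrderedDict-based cache with an illustrative capacity of four blocks: LRU reorders on every hit and evicts from the least recently used end, while FIFO only records insertion order and ignores hits.

    from collections import OrderedDict

    CAPACITY = 4                               # assumed cache capacity in blocks

    def lru_access(cache, block):
        if block in cache:
            cache.move_to_end(block)           # mark block as most recently used
        else:
            if len(cache) >= CAPACITY:
                cache.popitem(last=False)      # evict the least recently used block
            cache[block] = True

    def fifo_access(cache, block):
        if block not in cache:
            if len(cache) >= CAPACITY:
                cache.popitem(last=False)      # evict the block that entered earliest
            cache[block] = True                # hits do not change the order

    lru_cache = OrderedDict()
    for block in [1, 2, 3, 4, 1, 5]:           # block 2 is least recently used when 5 arrives
        lru_access(lru_cache, block)
    print(list(lru_cache))                     # [3, 4, 1, 5]

    fifo_cache = OrderedDict()
    for block in [1, 2, 3, 4, 1, 5]:           # block 1 entered the cache first
        fifo_access(fifo_cache, block)
    print(list(fifo_cache))                    # [2, 3, 4, 5]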



Replacement Algorithms

  • Least frequently used (LFU) algorithm: replace that block in the set that has experienced the fewest references.

  • The most effective is least recently used (LRU).
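
A minimal sketch of the LFU policy, assuming each resident block keeps a simple reference count; on a miss with a full cache, the block with the fewest references is evicted. The dictionary representation and capacity are assumptions for illustration.

    CAPACITY = 4                               # assumed cache capacity in blocks

    def lfu_access(counts, block):
        if block in counts:
            counts[block] += 1                 # one more reference to this block
        else:
            if len(counts) >= CAPACITY:
                victim = min(counts, key=counts.get)   # block with fewest references
                del counts[victim]             # evict it
            counts[block] = 1

    counts = {}
    for block in [1, 1, 2, 3, 4, 5]:           # when 5 arrives, blocks 2, 3, 4 tie at one reference
        lfu_access(counts, block)
    print(counts)                              # {1: 2, 3: 1, 4: 1, 5: 1}  (block 2 was evicted)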



Reference

  • Internet Source

    • Wikipedia

      (http://en.wikipedia.org/wiki/Cache_memory)

  • Book

    • Computer Organization and Embedded Systems, 6th edition

    • Computer Organization and Architecture, 8th edition



End

