
Lecture#15





  1. Lecture#15

  2. Cache Function
  • The data stored in a cache may be values that were computed earlier, or duplicates of original values stored elsewhere.
  • If the requested data is contained in the cache (cache hit), the request can be served by simply reading the cache, which is comparatively fast.
  • Otherwise (cache miss), the data has to be recomputed or fetched from its original storage location, which is comparatively slow.
  • Hence, the more requests that can be served from the cache, the faster the overall system performance becomes.
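The hit/miss logic above can be sketched in a few lines of Python. This is an illustrative memoization example, not hardware: `expensive_square` is a hypothetical stand-in for any slow computation or fetch.

```python
cache = {}

def expensive_square(n):
    """Hypothetical stand-in for a slow computation or remote fetch."""
    return n * n

def cached_square(n):
    if n in cache:                    # cache hit: serve the stored copy (fast)
        return cache[n]
    result = expensive_square(n)      # cache miss: recompute (slow)
    cache[n] = result                 # store a duplicate for next time
    return result

print(cached_square(7))  # miss: computed, then stored
print(cached_square(7))  # hit: read straight from the cache
```

The second call never touches `expensive_square`, which is exactly why a high hit rate improves overall performance.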

  3. Cache
  • A small amount of very fast memory that stores copies of the data from the most frequently used main memory locations.
  • Sits between normal main memory (RAM and ROM) and the CPU.
  • May be located on the CPU chip or module.
  • Used to reduce the average time to access memory.
  • As long as most memory accesses are to cached locations, the average memory access time will be closer to the cache access time than to the main memory access time.
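The "average access time" claim can be made concrete with a weighted average of hit and miss costs. The timings below (1 ns cache, 100 ns main memory) are assumed numbers for illustration only.

```python
# Assumed illustrative timings: cache = 1 ns, main memory = 100 ns.
CACHE_TIME = 1.0
MEMORY_TIME = 100.0

def average_access_time(hit_rate):
    # Hits are served at cache speed; misses pay the main-memory cost.
    return hit_rate * CACHE_TIME + (1 - hit_rate) * MEMORY_TIME

print(average_access_time(0.95))  # ~5.95 ns: already close to cache speed
print(average_access_time(0.99))  # ~1.99 ns
```

Even a 95% hit rate brings the average within a few nanoseconds of the cache itself, which is the whole point of the hierarchy.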

  4. Cache Operation – Overview
  • The CPU requests the contents of a memory location.
  • The cache is checked for this data.
  • If present, the data is delivered from the cache (fast).
  • If not present, the required block is read from main memory into the cache, and then delivered from the cache to the CPU.
  • The cache includes tags to identify which block of main memory is in each cache slot.
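The steps above, including the role of tags, can be sketched as a toy direct-mapped cache. The slot count, memory size, and mapping are assumptions chosen for illustration; real caches work on multi-byte blocks and in hardware.

```python
# Toy direct-mapped cache: each address maps to exactly one slot, and a
# tag records which memory block currently occupies that slot.
NUM_SLOTS = 4
main_memory = {addr: f"data@{addr}" for addr in range(64)}

slots = [None] * NUM_SLOTS        # each entry: (tag, data) or None

def read(addr):
    slot = addr % NUM_SLOTS       # which cache slot this address maps to
    tag = addr // NUM_SLOTS       # identifies which block is in the slot
    entry = slots[slot]
    if entry is not None and entry[0] == tag:
        return entry[1], "hit"    # present: deliver from the cache
    data = main_memory[addr]      # miss: read the block from main memory
    slots[slot] = (tag, data)     # place it in the slot, recording its tag
    return data, "miss"

print(read(5))   # ('data@5', 'miss')
print(read(5))   # ('data@5', 'hit')
print(read(9))   # ('data@9', 'miss') — also maps to slot 1, evicting block 5
```

The tag check is what distinguishes "this slot holds the block I want" from "this slot holds some other block that happens to map here".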

  5. Types of Cache
  • Most modern desktop and server CPUs have at least three independent caches:
  • an instruction cache to speed up executable instruction fetches,
  • a data cache to speed up data fetches and stores, and
  • a translation lookaside buffer (TLB) to speed up virtual-to-physical address translation for both executable instructions and data.

  6. Multi-Level Cache
  • A fundamental tradeoff exists between cache access time and hit rate: larger caches have better hit rates but longer access times.
  • To address this tradeoff, many computers use multiple levels of cache, with small fast caches backed by larger, slower caches.
  • Multi-level caches generally operate by checking the smallest level 1 (L1) cache first; if it hits, the processor proceeds at high speed.
  • If the L1 cache misses, the next larger cache (L2) is checked, and so on, before external memory is checked.
  • L1 holds recently used data, L2 holds upcoming data, and L3 holds possible upcoming data.
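The L1 → L2 → memory lookup order can be sketched with plain dictionaries. The contents and addresses below are hand-picked assumptions, and the promotion-into-L1 step is one simple policy among several a real design might use.

```python
# Sketch of a multi-level lookup: check the smallest cache first, fall
# back to the next level on a miss, and go to memory only as a last resort.
l1 = {0x10: "A"}                       # smallest, fastest: checked first
l2 = {0x10: "A", 0x20: "B"}            # larger, slower: checked on an L1 miss
memory = {0x10: "A", 0x20: "B", 0x30: "C"}

def lookup(addr):
    if addr in l1:
        return l1[addr], "L1 hit"
    if addr in l2:
        l1[addr] = l2[addr]            # promote into L1 for next time
        return l2[addr], "L2 hit"
    data = memory[addr]                # both caches missed: external memory
    l2[addr] = data                    # fill both levels on the way back
    l1[addr] = data
    return data, "memory"

print(lookup(0x10))  # ('A', 'L1 hit')
print(lookup(0x20))  # ('B', 'L2 hit')
print(lookup(0x30))  # ('C', 'memory')
```

After the misses above, repeating the same lookups would hit in L1, which is why checking the fastest level first pays off.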

  7. Multilevel Caches
  • High logic density enables caches on chip: faster than bus access, and it frees the bus for other transfers.
  • It is common to use both on-chip and off-chip cache: L1 on chip, L2 off chip in static RAM.
  • L2 access is much faster than DRAM or ROM, and L2 often uses a separate data path.
  • L2 may now be on chip, resulting in an L3 cache, accessed over the bus or now also on chip.

  8. Multilevel Cache

  9. L1 Cache
  • Built directly into the processor chip.
  • Usually has a very small capacity, ranging from 8 KB to 128 KB.
  • The more common sizes for PCs are 32 KB or 64 KB.

  10. L2 Cache
  • Slightly slower than L1 cache, but with a much larger capacity, ranging from 64 KB to 16 MB.
  • Current processors include Advanced Transfer Cache (ATC), a type of L2 cache built directly on the processor chip.
  • Processors that use ATC perform at much faster rates than those that do not.
  • PCs today have from 512 KB to 12 MB of ATC; servers and workstations have from 12 MB to 16 MB.

  11. L3 Cache
  • A cache on the motherboard, separate from the processor chip.
  • Exists only on computers that use L2 Advanced Transfer Cache.
  • Personal computers often have up to 8 MB of L3 cache; servers and workstations have from 8 MB to 24 MB.

  12. Multi-Level Cache
  • Speeds up the computer because it stores frequently used instructions and data.

  13. Memory Hierarchy – Design Constraints
  • How much? Capacity is open ended: if the capacity is there, applications will likely be developed to use it.
  • How fast? To achieve the greatest performance, memory must be able to keep up with the processor; as the processor executes instructions, it should not have to pause waiting for instructions or operands.
  • How expensive? The cost of memory must be reasonable in relation to the other components.

  14. Memory Hierarchy
  • Faster access time means greater cost per bit.
  • Greater capacity means smaller cost per bit.
  • Greater capacity means slower access time.

  15. Memory Hierarchy

  16. Access Time
