
Memory - Cache



Presentation Transcript


  1. Memory - Cache Instructor : Sin-Min Lee Student : Chen Ying Kuo

  2. What is Memory Cache? • A cache is a component that transparently stores data so that future requests for that data can be served faster.
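The "transparently stores data" idea on this slide can be sketched in a few lines. This is a minimal illustrative sketch, not how a hardware cache is built; `expensive_square` is a hypothetical stand-in for any slow computation or memory fetch.

```python
# Minimal sketch of a transparent cache: results are stored so repeat
# requests are served from the dictionary instead of being recomputed.
cache = {}

def expensive_square(n):
    return n * n            # stand-in for a slow computation or fetch

def cached_square(n):
    if n not in cache:      # not cached yet: compute and store
        cache[n] = expensive_square(n)
    return cache[n]         # served directly from the cache

print(cached_square(7))     # computed, then stored
print(cached_square(7))     # second request is served from the cache
```

The caller never sees whether the value came from the cache or from the computation, which is what "transparent" means here.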

  3. L1 & L2 • A level 1 (L1) cache is a memory bank built into the CPU chip. • A level 2 cache (L2) is a secondary staging area that feeds the L1 cache. • Increasing the size of the L2 cache may speed up some applications but have no effect on others.

  4. Cache in real

  5. About L2 • L2 may be built into the CPU chip, reside on a separate chip in a multichip package module or be a separate bank of chips on the motherboard. • Caches are typically static RAM (SRAM), while main memory is generally some variety of dynamic RAM (DRAM).

  6. Simple Memory Hierarchy

  7. Data in cache •  The data that is stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere.

  8. Hit & Miss • If requested data is contained in the cache (cache hit), this request can be served by simply reading the cache, which is comparatively faster. • Otherwise (cache miss), the data has to be recomputed or fetched from its original storage location, which is comparatively slower.

  9. Short conclusion • Hence, the more requests that can be served from the cache, the faster the overall system performs.
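This conclusion is usually quantified as average memory access time (AMAT): hit time plus miss rate times miss penalty. The numbers below are illustrative assumptions, not measurements.

```python
# Average memory access time: AMAT = hit_time + miss_rate * miss_penalty.
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

# Assumed values: 1 ns cache hit, 100 ns penalty on a miss.
print(amat(1.0, 0.05, 100.0))   # 6.0 ns at a 95% hit rate
print(amat(1.0, 0.02, 100.0))   # 3.0 ns at a 98% hit rate
```

A small improvement in hit rate halves the average access time here, which is why serving more requests from the cache speeds up the whole system.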

  10. Locality • In physics, the principle of locality states that an object is influenced directly only by its immediate surroundings. • References exhibit temporal locality if data is requested again that has been recently requested already. • References exhibit spatial locality if data is requested that is physically stored close to data that has been requested already.

  11. Temporal locality • If at one point in time a particular memory location is referenced, then it is likely that the same location will be referenced again in the near future. • There is a temporal proximity between the adjacent references to the same memory location.

  12. Temporal locality (Cont.) • In this case it is common to store a copy of the referenced data in special memory storage that can be accessed faster. • Temporal locality is a special case of spatial locality, namely the case in which the prospective location is identical to the present location.
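Temporal locality is what makes recency-based eviction work: recently used data is kept, the least recently used is discarded. Python's standard-library `functools.lru_cache` behaves this way; `load` below is a hypothetical stand-in for a slow memory fetch.

```python
from functools import lru_cache

@lru_cache(maxsize=2)               # keep only the 2 most recent results
def load(addr):
    return addr * 10                # stand-in for a slow memory fetch

load(1); load(2); load(1)           # 1 and 2 cached; reusing 1 is a hit
load(3)                             # evicts 2, the least recently used
print(load.cache_info().hits)       # 1
```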

  13. Spatial locality • If a particular memory location is referenced at a particular time, then it is likely that nearby memory locations will be referenced in the near future. In this case it is common to attempt to guess the size and shape of the area around the current reference for which it is worthwhile to prepare faster access.
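Hardware caches exploit spatial locality by fetching a whole cache line on a miss, so that nearby addresses then hit. A simplified sketch under assumed parameters (`BLOCK` is a hypothetical line size of 4 words):

```python
# Spatial locality sketch: a miss pulls in the whole surrounding block,
# so subsequent accesses to nearby addresses are hits.
BLOCK = 4
memory = list(range(32))            # backing storage
cache_lines = {}                    # block index -> copy of that block

def read(addr):
    line = addr // BLOCK
    if line not in cache_lines:     # miss: load the whole block
        start = line * BLOCK
        cache_lines[line] = memory[start:start + BLOCK]
    return cache_lines[line][addr % BLOCK]

read(5)                             # miss: loads addresses 4..7
print(read(6))                      # hit, thanks to spatial locality
```

This is why sequential array traversal is fast: after the first miss in each block, the remaining accesses in that block are hits.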

  14. References Internet sources: • http://en.wikipedia.org/wiki/Cache_memory • http://tuancom.wordpress.com/ Book sources: • Computer Organization and Architecture, 8th edition. • Aho, Lam, Sethi, and Ullman. Compilers: Principles, Techniques & Tools, 2nd edition.
