
CS61C Midterm #2 Review Session


Antony


Presentation Transcript


  1. CS61C Midterm #2 Review Session A little Cache goes a long way

  2. The Ideal Memory System Fast Cheap (Large)

  3. Actual Memory Systems Fast, Expensive (Small) Slow, Cheap (Large)

  4. Idea: Multilevel Memory (cache) + =

  5. The Cache • Store recently used data in fast memory • Cache Hit • The address we’re looking for is in the cache • Cache Miss • Not found… read memory and insert into cache • This works because… [Diagram: CPU, cache lines (Tag | Data), Main Memory]
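The hit/miss check on this slide can be sketched in a few lines. This is a minimal illustration, not real hardware: the cache size, the fake "main memory", and the direct-mapped placement are all made-up parameters for the example.

```python
NUM_LINES = 4  # illustrative: a tiny direct-mapped cache with 4 one-word lines

# Each line holds (valid, tag, data).
cache = [(False, None, None)] * NUM_LINES
memory = {addr: addr * 10 for addr in range(32)}  # fake main memory

def access(addr):
    index = addr % NUM_LINES          # which cache line this address maps to
    tag = addr // NUM_LINES           # identifies which block occupies that line
    valid, line_tag, data = cache[index]
    if valid and line_tag == tag:
        return data, "hit"            # cache hit: found in fast memory
    data = memory[addr]               # cache miss: read main memory...
    cache[index] = (True, tag, data)  # ...and insert into the cache
    return data, "miss"

print(access(5))   # first touch of address 5: miss
print(access(5))   # same address again: hit
```

The second access to the same address hits because the first access inserted the block, which is exactly why "store recently used data in fast memory" works.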

  6. Locality • Just referenced address x • Spatial Locality • Reference to data near x is likely • Temporal Locality • Likely to reference x again soon [Diagram: address vs. time, showing code, data, and stack regions]
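Both kinds of locality show up in the most ordinary code. A tiny illustration (the access pattern is the point, not the Python itself):

```python
data = list(range(1000))

total = 0
for x in data:   # spatial locality: consecutive array elements touched in order
    total += x   # temporal locality: `total` is re-referenced every iteration
```

A cache exploits the sequential sweep through `data` by fetching whole blocks at a time, and exploits the reuse of `total` by keeping it in fast memory across iterations.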

  7. Computing Average Access Time Q: Suppose we have a cache with a 5ns access time, main memory with a 60ns access time, and a cache hit rate of 95%. What is the average access time?
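One way to work this out, using AMAT = hit time + miss rate × miss penalty (here the 60 ns main-memory access is treated as the miss penalty, so a miss costs the 5 ns cache probe plus 60 ns):

```python
hit_time = 5        # ns, cache access time
miss_penalty = 60   # ns, main-memory access time on a miss
hit_rate = 0.95

# AMAT = hit time + miss rate * miss penalty
amat = hit_time + (1 - hit_rate) * miss_penalty
print(amat)  # 8.0 ns
```

Equivalently, 0.95 × 5 ns + 0.05 × (5 + 60) ns = 8 ns: even a modest cache cuts the average access time well below main memory's 60 ns.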

  8. Cache Design Issues • Associativity • Fully associative, direct-mapped, n-way set associative • Block Size • Replacement Strategy • LRU, etc. • Write Strategy • Write-through, write-back
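These design parameters determine how an address is carved into tag, index, and offset fields. A sketch for a direct-mapped cache, with illustrative sizes not taken from the slides (16-byte blocks, 64 sets, so 4 offset bits and 6 index bits):

```python
BLOCK_SIZE = 16   # bytes per block -> 4 offset bits
NUM_SETS = 64     # sets            -> 6 index bits

def split_address(addr):
    offset = addr % BLOCK_SIZE                # byte within the block
    index = (addr // BLOCK_SIZE) % NUM_SETS   # which set to look in
    tag = addr // (BLOCK_SIZE * NUM_SETS)     # remaining high bits, stored for comparison
    return tag, index, offset

print(split_address(0x1234))  # -> (4, 35, 4)
```

Larger blocks shift bits from the offset into... nothing free: they come out of the index or tag, which is why block size, associativity, and total capacity trade off against one another.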

  9. An Example

  10. Multiple Choice (1) • LRU is an effective cache replacement policy primarily because programs • exhibit locality of reference • usually have small working sets • read data much more frequently than they write data • can generate addresses that collide in the cache

  11. Multiple Choice (2) • Increasing the associativity of a cache improves performance primarily because programs • exhibit locality of reference • usually have small working sets • read data much more frequently than they write data • can generate addresses that collide in the cache

  12. Multiple Choice (3) • Increasing the block size of a cache improves performance primarily because programs • exhibit locality of reference • usually have small working sets • read data much more frequently than they write data • can generate addresses that collide in the cache
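The associativity question above can be made concrete with a toy simulator (sizes and trace are made up for illustration): two addresses whose blocks map to the same set evict each other in a direct-mapped cache, but coexist once the cache is 2-way set associative.

```python
NUM_SETS = 4  # illustrative: a tiny cache with 4 sets

def misses(addresses, ways):
    sets = [[] for _ in range(NUM_SETS)]   # each set holds up to `ways` tags
    count = 0
    for addr in addresses:
        s, tag = addr % NUM_SETS, addr // NUM_SETS
        if tag in sets[s]:
            sets[s].remove(tag)            # hit: move tag to most-recent position
        else:
            count += 1                     # miss
            if len(sets[s]) == ways:
                sets[s].pop(0)             # LRU: evict least recently used tag
        sets[s].append(tag)
    return count

trace = [0, 4, 0, 4, 0, 4]    # addresses 0 and 4 both map to set 0
print(misses(trace, ways=1))  # direct-mapped: every access misses -> 6
print(misses(trace, ways=2))  # 2-way: only the two cold misses -> 2
```

The trace has perfect temporal locality, yet the direct-mapped cache misses every time because the two addresses collide; extra associativity is what fixes conflict misses, not locality itself.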
