
ECE7995 Caching and Prefetching Techniques in Computer Systems


Presentation Transcript


  1. ECE7995 Caching and Prefetching Techniques in Computer Systems, Lecture 8: Buffer Cache in Main Memory (IV)

  2. Quantifying Locality with the LRU Stack • Blocks are ordered by their recencies; • Blocks enter from the stack top, and leave from its bottom. [Figure: an LRU stack for the reference stream . . . 3, 4, 4, 5, 3, with blocks labeled by their recencies.]
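To make the recency bookkeeping concrete, here is a minimal Python sketch (not from the slides; the LRUStack class and its method names are illustrative) of an LRU stack that reports a block's recency:

```python
from collections import OrderedDict

class LRUStack:
    """Minimal LRU stack: the most recently used block sits at the top.
    A block's recency is the number of other distinct blocks accessed
    since its last reference (0 = just accessed)."""

    def __init__(self):
        self._stack = OrderedDict()   # insertion order = recency order

    def access(self, block):
        # Remove the block if present, then re-insert it at the top.
        self._stack.pop(block, None)
        self._stack[block] = True

    def recency(self, block):
        # Position counted from the top of the stack; None if never accessed.
        for i, b in enumerate(reversed(self._stack)):
            if b == block:
                return i
        return None

# After accessing 3, 4, 4, 5, 3: block 5 has recency 1, block 4 has recency 2.
s = LRUStack()
for b in [3, 4, 4, 5, 3]:
    s.access(b)
print(s.recency(5), s.recency(4))   # -> 1 2
```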

  3. LRU Stack and Inter-Reference Recency (IRR) • Blocks are ordered by recency in the LRU stack; • Blocks enter from the stack top, and leave from its bottom; • Inter-Reference Recency (IRR): the number of other distinct blocks accessed between two consecutive references to a block. [Figure: for the reference stream . . . 3, 3, 4, 5, 3, the last reference to block 3 has IRR = 2 and recency = 0.]
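The IRR of each reference can be computed directly from a trace. The following Python sketch (illustrative; the inter_reference_recencies helper is not from the slides) follows the definition above:

```python
def inter_reference_recencies(trace):
    """For each reference in the trace, report the block's IRR: the number
    of other distinct blocks accessed between this reference and the block's
    previous reference (None on the first reference, i.e. infinite IRR)."""
    last_seen = {}          # block -> index of its previous reference
    irrs = []
    for i, block in enumerate(trace):
        if block in last_seen:
            between = set(trace[last_seen[block] + 1 : i]) - {block}
            irrs.append((block, len(between)))
        else:
            irrs.append((block, None))
        last_seen[block] = i
    return irrs

# The stream from the slide: the last reference to block 3 has an IRR of 2.
print(inter_reference_recencies([3, 3, 4, 5, 3]))
# -> [(3, None), (3, 0), (4, None), (5, None), (3, 2)]
```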

  4. Locality Strength [Figure: IRR (re-use distance in blocks) vs. virtual time (reference stream) for the MULTI2 workload, with the cache size marked. LRU is good for "absolutely" strong locality but bad for relatively weak locality.]

  5. LRU’s Inability with Weak Locality • Memory scanning (one-time access) • Infinite IRR, weak locality; • should not be cached at all; • not replaced timely in LRU (be cached until their recency larger than cache size);

  6. LRU’s Inability with Weak Locality • Loop-like accesses (repeated accesses with a fixed interval) • IRR is the same as the interval • The interval larger than cache size, no hits • blocks to be accessed soonest can be unfortunately replaced.

  7. LRU’s Inability with Weak Locality • Accesses with distinct frequencies: • The recencies of frequently accessed blocks become large because of references to infrequently accessed block; • Frequently accessed blocks could be unfortunately replaced.

  8. Looking for Blocks with Strong Locality [Figure: IRR (re-use distance in blocks) vs. virtual time (reference stream) for the MULTI2 workload; the goal is to cover the 1000 blocks with the strongest locality within the cache size.]

  9. Challenges • Simplicity: affordable implementation • Adaptability: responsive to access pattern changes • Address the limitations of LRU fundamentally. • Retain the low overhead and adaptability merits of LRU.

  10. Principle of the LIRS Replacement • If a block's IRR is high, its next IRR is likely to be high again; we therefore select the blocks with high IRRs for replacement. • LIRS: the Low IRR Set replacement algorithm; we keep the set of blocks with low IRRs in the cache.

  11. Requirements on the Low IRR Block Set (LIRS) • The set size should equal the cache size; • The set consists of the blocks with the strongest locality (the lowest IRRs); • The set is kept up to date dynamically.

  12. Low IRR Block Set • Blocks are classified as low-IRR (LIR) blocks and high-IRR (HIR) blocks. • The physical cache of size L is split between the LIR block set (size Llirs) and the HIR block set (size Lhirs), with L = Llirs + Lhirs.
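A minimal sketch of the bookkeeping implied by this partition (the LIRSState class and field names are illustrative, not from the slides): all LIR blocks are resident, and at most Lhirs HIR blocks are resident at a time.

```python
from dataclasses import dataclass, field

@dataclass
class LIRSState:
    """Illustrative bookkeeping for the block sets described above.
    LIR blocks are always resident; at most l_hirs HIR blocks are resident."""
    l_lirs: int                                       # target size of the LIR block set
    l_hirs: int                                       # number of resident HIR blocks
    lir: set = field(default_factory=set)             # LIR block set
    resident_hir: set = field(default_factory=set)    # resident HIR blocks

    @property
    def cache_size(self):
        return self.l_lirs + self.l_hirs              # L = Llirs + Lhirs

# The example on the next slide: Llirs = 2, Lhirs = 1, LIR = {A, B}, E resident HIR.
state = LIRSState(l_lirs=2, l_hirs=1, lir={"A", "B"}, resident_hir={"E"})
print(state.cache_size)   # -> 3
```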

  13. An Example for LIRS • Llirs = 2, Lhirs = 1 • LIR block set = {A, B}, HIR block set = {C, D, E}

  14. Mapping the Block Sets to the Cache [Figure: with Llirs = 2 and Lhirs = 1, the LIR blocks A and B and one HIR block (E) are resident; the other HIR blocks (C, D) are not resident.]

  15. Which Block Is Replaced? Replace HIR blocks: when D is referenced at time 10, the resident HIR block (E) is replaced.

  16. How Is the LIR Set Updated? The recency of the LIR blocks is used: an HIR block joins the LIR set when its new IRR is smaller than the maximum recency of the current LIR blocks.

  17. After D Is Referenced at Time 10 … E is replaced, and D enters the LIR set.

  18. If the Reference at Time 10 Is to C Instead … E is replaced, but C cannot enter the LIR set.

  19. How LIRS Handles References with Weak Locality • Memory scanning (one-time accesses): infinite IRR; • such blocks are not included in the LIR block set; • they are replaced in a timely way.

  20. How LIRS Handles References with Weak Locality • Loop-like accesses: the IRRs of all blocks are the same; • once a block becomes an LIR block, it keeps its status; • so every cached block contributes a hit in each loop of accesses.

  21. How LIRS Handles References with Weak Locality • Accesses with distinct frequencies: frequently accessed blocks have smaller IRRs than infrequently accessed blocks; • so frequently accessed blocks become LIR blocks; • they stay cached and get hits.

  22. Making LIRS O(1) Efficient • The decision compares IRR_HIR (the new IRR of the accessed HIR block) with Rmax (the maximum recency of the LIR blocks). • This efficiency is achieved by our LIRS stack: an LRU stack that always keeps an LIR block with recency Rmax at its bottom.

  23. Differences between the LRU and LIRS Stacks • The stack size of LRU is decided by the cache size and is fixed; the stack size of LIRS is decided by Rmax and varies. • The LRU stack holds only resident blocks; the LIRS stack holds any block whose recency is no more than Rmax. • The LRU stack does not distinguish "hot" and "cold" blocks; the LIRS stack distinguishes LIR and HIR blocks and dynamically maintains their statuses. [Figure: an LRU stack and a LIRS stack for a cache of L = 5 blocks, Llirs = 3, Lhirs = 2.]

  24. How Does the LIRS Stack Help? • Blocks found in the LIRS stack have IRR < Rmax; blocks not in the stack have IRR > Rmax. • So a single membership test decides whether an accessed HIR block (with new IRR IRR_HIR) should be promoted.
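The invariant that an LIR block always sits at the bottom of stack S (so that Rmax is the bottom block's recency) is maintained by stack pruning. A minimal sketch (the prune_stack helper is illustrative), assuming S is stored bottom-to-top in a Python list:

```python
def prune_stack(stack_s, lir_set):
    """Stack pruning: pop HIR entries off the bottom of stack S until an LIR
    block sits at the bottom, so Rmax is always the recency of an LIR block.
    stack_s is a list ordered bottom -> top."""
    while stack_s and stack_s[0] not in lir_set:
        stack_s.pop(0)

# Example: the HIR blocks 7 and 9 at the bottom are pruned away.
s = [7, 9, 4, 5, 3]            # bottom ... top
prune_stack(s, lir_set={4, 5, 3})
print(s)                        # -> [4, 5, 3]
```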

  25. LIRS Operations • Initialization: all referenced blocks are given LIR status until the LIR block set is full. • Resident HIR blocks are placed in stack Q. [Figure: LIRS stack S and resident-HIR stack Q for a cache of L = 5 blocks, Llirs = 3, Lhirs = 2.]

  26. Access an LIR Block (a Hit) • The accessed LIR block is moved to the top of stack S; if it was at the bottom, stack pruning follows. [Figure: snapshot of stacks S and Q; cache L = 5, Llirs = 3, Lhirs = 2.]

  27. Access an LIR Block (a Hit), continued [Figure: snapshot of stacks S and Q after the move.]

  28. Access an LIR Block (a Hit), continued [Figure: snapshot of stacks S and Q.]

  29. Access a Resident HIR Block (a Hit) • The block is moved to the top of S; if it was in S, it is promoted to LIR and the LIR block at the bottom of S is demoted into Q; otherwise it keeps HIR status and moves to the end of Q. [Figure: snapshot of stacks S and Q.]

  30. Access a Resident HIR Block (a Hit), continued [Figure: snapshot of stacks S and Q.]

  31. Access a Resident HIR Block (a Hit), continued [Figure: snapshot of stacks S and Q.]

  32. Access a Resident HIR Block (a Hit), continued [Figure: snapshot of stacks S and Q.]

  33. Access a Non-Resident HIR Block (a Miss) • The resident HIR block at the front of Q is evicted; the missed block is loaded and pushed onto the top of S; if it was already in S it becomes an LIR block (demoting the bottom LIR block of S into Q), otherwise it becomes a resident HIR block in Q. [Figure: snapshot of stacks S and Q.]

  34. Access a Non-Resident HIR Block (a Miss), continued [Figure: snapshot of stacks S and Q.]

  35. Access a Non-Resident HIR Block (a Miss), continued [Figure: snapshot of stacks S and Q.]
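Putting the three cases of slides 25-35 together, here is a compact Python sketch of the LIRS operations. It is a simulation aid under simplifying assumptions (for example, it keeps unbounded metadata for non-resident HIR blocks in stack S), and the class and method names are illustrative rather than the authors' implementation:

```python
from collections import OrderedDict, deque

class LIRS:
    """Illustrative LIRS sketch; stack S holds LIR and HIR entries, stack Q
    holds the resident HIR blocks (front of Q = next eviction victim)."""

    def __init__(self, l_lirs, l_hirs):
        self.l_lirs, self.l_hirs = l_lirs, l_hirs
        self.S = OrderedDict()    # LIRS stack S: block -> 'LIR'/'HIR'; last item = stack top
        self.Q = deque()          # resident HIR blocks
        self.lir = set()          # LIR block set (always resident)
        self.resident = set()     # all resident blocks
        self.hits = 0

    def _to_top(self, b, status):
        self.S.pop(b, None)
        self.S[b] = status

    def _prune(self):
        # Stack pruning: keep an LIR block at the bottom of S (its recency is Rmax).
        while self.S and next(iter(self.S.values())) != 'LIR':
            self.S.popitem(last=False)

    def _demote_bottom_lir(self):
        # The LIR block at the bottom of S becomes a resident HIR block at the end of Q.
        bottom, _ = self.S.popitem(last=False)
        self.lir.discard(bottom)
        self.Q.append(bottom)
        self._prune()

    def access(self, b):
        if b in self.lir:                          # slides 26-28: hit on an LIR block
            self.hits += 1
            self._to_top(b, 'LIR')
            self._prune()
        elif b in self.resident:                   # slides 29-32: hit on a resident HIR block
            self.hits += 1
            if b in self.S:                        # recency < Rmax: promote to LIR
                self._to_top(b, 'LIR')
                self.lir.add(b)
                self.Q.remove(b)
                self._demote_bottom_lir()
            else:                                  # stays HIR; refresh positions in S and Q
                self._to_top(b, 'HIR')
                self.Q.remove(b)
                self.Q.append(b)
        else:                                      # slides 33-35: miss
            if len(self.lir) < self.l_lirs:        # slide 25: fill the LIR set first
                self.lir.add(b)
                self.resident.add(b)
                self._to_top(b, 'LIR')
                return
            if len(self.Q) >= self.l_hirs:         # evict the resident HIR block at Q's front
                self.resident.discard(self.Q.popleft())
            self.resident.add(b)
            if b in self.S:                        # recency < Rmax: the block becomes LIR
                self._to_top(b, 'LIR')
                self.lir.add(b)
                self._demote_bottom_lir()
            else:                                  # cold block: resident HIR
                self._to_top(b, 'HIR')
                self.Q.append(b)

# A tiny trace in the spirit of slides 13-18 (Llirs = 2, Lhirs = 1, blocks A-E):
c = LIRS(l_lirs=2, l_hirs=1)
for b in ["A", "B", "A", "B", "D", "E"]:
    c.access(b)
c.access("D")             # miss: the resident HIR block E is evicted, D enters the LIR set
print(sorted(c.lir))      # -> ['B', 'D']  (one former LIR block was demoted to HIR)
```

Accessing a block that has never appeared in S (like C in slide 18) would instead be cached as a resident HIR block, without entering the LIR set.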

  36. Workload Traces • postgres is a trace of join queries among four relations in a relational database system; • sprite is from the Sprite network file system; • multi2 is obtained by executing three workloads, cs, cpp, and postgres, together.

  37. Cache Partition • 1% of the cache size is for HIR blocks; • 99% of the cache size is for LIR blocks; • Performance is not sensitive to the exact partition.
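For a concrete reading of this rule, a small sketch (the partition helper is hypothetical) that reserves roughly 1% of the blocks, but at least one, for resident HIR blocks:

```python
def partition(cache_size_blocks):
    """Split the cache as above: ~1% for resident HIR blocks, the rest for LIR blocks."""
    l_hirs = max(1, cache_size_blocks // 100)
    l_lirs = cache_size_blocks - l_hirs
    return l_lirs, l_hirs

print(partition(500))   # -> (495, 5)
```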

  38. Looping Pattern: postgres (Access Map) [Figure: logical block number vs. virtual time (reference stream).]

  39. Looping Pattern: postgres (IRR Map) [Figure: IRR (re-use distance in blocks) vs. virtual time (reference stream), for LIRS and LRU.]

  40. Looping Pattern: postgres (Hit Rates)

  41. Temporally-Clustered Pattern: sprite (Access Map) [Figure: logical block number vs. virtual time (reference stream).]

  42. Temporally-Clustered Pattern: sprite (IRR Map) [Figure: IRR (re-use distance in blocks) vs. virtual time (reference stream), for LIRS and LRU.]

  43. Temporally-Clustered Pattern: sprite (Hit Ratio)

  44. Mixed Pattern: multi2 (Access Map) [Figure: logical block number vs. virtual time (reference stream).]

  45. Mixed Pattern: multi2 (IRR Map) [Figure: IRR (re-use distance in blocks) vs. virtual time (reference stream), for LIRS and LRU.]

  46. Mixed Pattern: multi2 (Hit Ratio)

  47. Summary • LIRS uses both IRR (or reuse distance) and recency for its replacement decisions; 2Q uses only reuse distance. • LIRS adapts to locality changes when deciding which blocks have small IRRs; 2Q uses a fixed threshold when looking for blocks with small reuse distances. • Both LIRS and 2Q have time overheads as low as LRU's; their space overheads are acceptably larger.
