slide1

Lecture 19: Cache Basics

  • Today’s topics:
    • Out-of-order execution
    • Cache hierarchies
  • Reminder:
    • Assignment 7 due on Thursday
slide2

Multicycle Instructions

  • Multiple parallel pipelines – each pipeline can have a different number of stages
  • Instructions can now complete out of order – must make sure that writes to a register happen in the correct order
slide3

An Out-of-Order Processor Implementation

[Figure: an out-of-order pipeline. Branch prediction and instruction fetch feed an Instruction Fetch Queue; Decode & Rename maps architectural registers (register file R1–R32) to Reorder Buffer (ROB) tags (T1–T6 for Instr 1–6); renamed instructions wait in the Issue Queue (IQ) until they can execute on one of several ALUs; results are written to the ROB and tags are broadcast to the IQ.]

  Original code     After renaming
  R1 ← R1 + R2      T1 ← R1 + R2
  R2 ← R1 + R3      T2 ← T1 + R3
  BEQZ R2           BEQZ T2
  R3 ← R1 + R2      T4 ← T1 + T2
  R1 ← R3 + R2      T5 ← T4 + T2
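The rename table is simply a map from each architectural register to the tag of its most recent producer. Below is a minimal C sketch of the Decode & Rename step (hypothetical structures, not the slide's hardware); the branch is left out, so the last two tags come out as T3/T4 where the slide shows T4/T5, presumably because the branch occupies the intervening ROB entry.

  #include <stdio.h>

  /* Minimal sketch of Decode & Rename: every destination gets a fresh
   * ROB tag, and every source is replaced by the tag of its most
   * recent producer, if one exists. */

  #define NUM_REGS 32

  typedef struct { int dst, src1, src2; } Instr;  /* architectural reg numbers */

  int rename_map[NUM_REGS];  /* arch reg -> latest ROB tag, -1 = in register file */
  int next_tag = 1;          /* tags T1, T2, ... */

  void print_src(int arch, int tag) {
      if (tag < 0) printf("R%d", arch); else printf("T%d", tag);
  }

  void rename(Instr in) {
      int s1 = rename_map[in.src1];  /* read sources before updating dst */
      int s2 = rename_map[in.src2];
      int t  = next_tag++;
      rename_map[in.dst] = t;        /* later readers of dst now see tag t */
      printf("T%d <- ", t); print_src(in.src1, s1);
      printf(" + ");        print_src(in.src2, s2);
      printf("\n");
  }

  int main(void) {
      for (int i = 0; i < NUM_REGS; i++) rename_map[i] = -1;
      /* the slide's code, minus the branch:
       * R1<-R1+R2; R2<-R1+R3; R3<-R1+R2; R1<-R3+R2 */
      Instr prog[] = { {1,1,2}, {2,1,3}, {3,1,2}, {1,3,2} };
      for (int i = 0; i < 4; i++) rename(prog[i]);
      return 0;
  }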

slide4

Cache Hierarchies

  • Data and instructions are stored on DRAM chips – DRAM is a technology that has high bit density, but relatively poor latency – an access to data in memory can take as many as 300 cycles today!
  • Hence, some data is stored on the processor in a structure called the cache – caches employ SRAM technology, which is faster, but has lower bit density
  • Internet browsers also cache web pages – same concept
slide5

Memory Hierarchy

  • As you go further, capacity and latency increase

  Level                          Capacity   Latency
  Registers                      1 KB       1 cycle
  L1 data or instruction cache   32 KB      2 cycles
  L2 cache                       2 MB       15 cycles
  Memory                         1 GB       300 cycles
  Disk                           80 GB      10M cycles

slide6

Locality

  • Why do caches work?
    • Temporal locality: if you used some data recently, you will likely use it again
    • Spatial locality: if you used some data recently, you will likely access its neighbors
  • No hierarchy: average access time for data = 300 cycles
  • 32KB 1-cycle L1 cache that has a hit rate of 95%:
    average access time = 0.95 x 1 + 0.05 x (301) = 16 cycles (checked in the sketch below)
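As a quick check of that arithmetic, here is a minimal C sketch of the average-memory-access-time formula; the 1-cycle L1 and 300-cycle memory latencies are the slide's numbers.

  #include <stdio.h>

  /* AMAT for the one-level example above: hits cost the L1 latency;
   * misses pay the L1 lookup plus the 300-cycle memory access. */
  int main(void) {
      double hit_rate   = 0.95;
      double l1_cycles  = 1.0;
      double mem_cycles = 300.0;
      double amat = hit_rate * l1_cycles
                  + (1.0 - hit_rate) * (l1_cycles + mem_cycles);
      printf("AMAT = %.2f cycles\n", amat);   /* prints 16.00 */
      return 0;
  }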
slide7

Accessing the Cache

[Figure: accessing a direct-mapped data array. The byte address 101000 is split into an index and an offset: with 8-byte words, the low 3 bits are the offset; with 8 words in the cache, the next 3 bits index one of the sets in the data array. Direct-mapped cache: each address maps to a unique location.]

slide8

The Tag Array

[Figure: the same direct-mapped cache (8-byte words) with a tag array alongside the data array. The high-order bits of the byte address 101000 form the tag; on an access, the tag stored at the indexed entry of the tag array is compared against the address tag to detect a hit (sketched in code below).]
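In code, the tag/index/offset split and the tag compare look like the following minimal C sketch; the parameters (8-byte blocks, 8 sets) are taken from the figure, and valid bits are kept so a cold cache does not spuriously hit.

  #include <stdio.h>

  /* Splitting a byte address for the direct-mapped cache in the figure:
   * 8-byte blocks (3 offset bits), 8 sets (3 index bits); the remaining
   * high-order bits are the tag that is stored and compared. */

  #define OFFSET_BITS 3
  #define INDEX_BITS  3
  #define NUM_SETS    (1 << INDEX_BITS)

  int main(void) {
      unsigned addr   = 0x28;  /* 101000, the address on the slide */
      unsigned offset = addr & ((1u << OFFSET_BITS) - 1);
      unsigned index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
      unsigned tag    = addr >> (OFFSET_BITS + INDEX_BITS);

      unsigned tag_array[NUM_SETS] = {0};
      int      valid[NUM_SETS]     = {0};
      int hit = valid[index] && tag_array[index] == tag;
      printf("offset=%u index=%u tag=%u hit=%d\n", offset, index, tag, hit);
      return 0;
  }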

slide9

Example Access Pattern

Assume that addresses are 8 bits long.

How many of the following address requests are hits/misses?

  4, 7, 10, 13, 16, 68, 73, 78, 83, 88, 4, 7, 10, …

[Figure: the same direct-mapped cache as on the previous slide – 8-byte words, tag array compared against the address tag. A simulation of this pattern follows below.]
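One way to check your answer is a tiny simulator of the cache in the figure, assuming the organization shown there: 8 sets of 8-byte blocks and 8-bit addresses, so 3 offset bits, 3 index bits, and 2 tag bits.

  #include <stdio.h>

  /* Direct-mapped cache from the figure: replay the slide's access
   * pattern and report hits and misses. */

  #define NUM_SETS 8

  int main(void) {
      int      valid[NUM_SETS] = {0};
      unsigned tag_array[NUM_SETS];
      unsigned pattern[] = {4, 7, 10, 13, 16, 68, 73, 78, 83, 88, 4, 7, 10};
      int n = sizeof pattern / sizeof pattern[0];

      for (int i = 0; i < n; i++) {
          unsigned addr  = pattern[i];
          unsigned index = (addr >> 3) & 0x7;   /* address bits [5:3] */
          unsigned tag   = addr >> 6;           /* address bits [7:6] */
          int hit = valid[index] && tag_array[index] == tag;
          if (!hit) { valid[index] = 1; tag_array[index] = tag; }
          printf("addr %3u: %s\n", addr, hit ? "hit" : "miss");
      }
      return 0;
  }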

slide10

Increasing Line Size

A large cache line size → smaller tag array, fewer misses because of spatial locality.

[Figure: the byte address 10100000 split for a 32-byte cache line size (block size) – the low 5 bits form the offset, and the high-order bits form the tag stored in the tag array; the data array holds one 32-byte line per entry.]

slide11

Associativity

Set associativity → fewer conflicts; wasted power because multiple data and tags are read.

[Figure: a 2-way set-associative cache. The byte address 10100000 indexes one set; each set has two ways (Way-1, Way-2) in both the tag array and the data array, and both stored tags are compared against the address tag in parallel.]

slide12

Associativity

How many offset/index/tag bits if the cache has:

  • 64 sets
  • each set has 64 bytes
  • 4 ways

[Figure: the same 2-way set-associative organization as on the previous slide. A worked check appears after the next slide's example.]

slide13

Example

  • 32 KB 4-way set-associative data cache array with 32-byte line size
  • How many sets?
  • How many index bits, offset bits, tag bits?
  • How large is the tag array? (see the worked check below)
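A worked check for this example, assuming 32-bit byte addresses (the slide does not fix the address width); slide 12's question is answered the same way in the comment.

  #include <stdio.h>
  #include <math.h>

  /* Slide 13's cache, assuming 32-bit byte addresses.
   * Slide 12 works the same way: 64 sets x 64 bytes/set over 4 ways
   * -> 16-byte blocks, so 4 offset bits, 6 index bits, 22 tag bits. */

  int main(void) {
      int cache_bytes = 32 * 1024;   /* 32 KB */
      int ways        = 4;
      int line        = 32;          /* bytes per line */
      int addr_bits   = 32;          /* assumption */

      int sets   = cache_bytes / (ways * line);  /* 256 sets      */
      int offset = (int)log2(line);              /* 5 offset bits */
      int index  = (int)log2(sets);              /* 8 index bits  */
      int tag    = addr_bits - index - offset;   /* 19 tag bits   */
      int blocks = sets * ways;                  /* 1024 blocks   */

      printf("sets=%d offset=%d index=%d tag=%d\n", sets, offset, index, tag);
      /* tag array size, not counting valid/dirty bits */
      printf("tag array: %d x %d = %d bits (~%.1f KB)\n",
             blocks, tag, blocks * tag, blocks * tag / 8192.0);
      return 0;
  }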
slide14

Cache Misses

  • On a write miss, you may either choose to bring the block into the cache (write-allocate) or not (write-no-allocate)
  • On a read miss, you always bring the block in (spatial and temporal locality) – but which block do you replace?
    • no choice for a direct-mapped cache
    • randomly pick one of the ways to replace
    • replace the way that was least-recently used (LRU) – sketched below
    • FIFO replacement (round-robin)
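Here is a minimal sketch of LRU for a single 4-way set, using per-way age counters; real designs often use cheaper approximations such as pseudo-LRU, and the tag sequence is made up for illustration.

  #include <stdio.h>

  /* LRU for one 4-way set: every access ages all ways; a hit or a
   * fill resets the touched way to youngest; the victim on a miss is
   * an invalid way if one exists, otherwise the oldest way. */

  #define WAYS 4

  int main(void) {
      unsigned tags[WAYS];
      int valid[WAYS] = {0};
      int age[WAYS]   = {0};                 /* larger age = older */
      unsigned refs[] = {1, 2, 3, 4, 1, 5};  /* tags mapping to this set */
      int n = sizeof refs / sizeof refs[0];

      for (int i = 0; i < n; i++) {
          int hit_way = -1, victim = 0;
          for (int w = 0; w < WAYS; w++) {
              if (valid[w] && tags[w] == refs[i]) hit_way = w;
              if (!valid[victim]) continue;  /* already found an empty way */
              if (!valid[w] || age[w] > age[victim]) victim = w;
          }
          for (int w = 0; w < WAYS; w++) age[w]++;
          if (hit_way < 0) {                 /* miss: fill the victim way */
              valid[victim] = 1; tags[victim] = refs[i]; age[victim] = 0;
              printf("tag %u: miss (filled way %d)\n", refs[i], victim);
          } else {
              age[hit_way] = 0;              /* hit: now the youngest way */
              printf("tag %u: hit  (way %d)\n", refs[i], hit_way);
          }
      }
      return 0;
  }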
slide15

Writes

  • When you write into a block, do you also update the copy in L2?
    • write-through: every write to L1 → write to L2
    • write-back: mark the block as dirty; when the block gets replaced from L1, write it to L2
  • Writeback coalesces multiple writes to an L1 block into one L2 write (see the sketch below)
  • Writethrough simplifies coherency protocols in a multiprocessor system, as the L2 always has a current copy of the data
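The coalescing effect can be seen in a toy sketch of a single L1 block; the one-block "cache" and the L2-write counter are hypothetical stand-ins, not a real memory system. Ten writes to the same block cost ten L2 writes under write-through but only one coalesced write at eviction under write-back.

  #include <stdio.h>

  /* Contrast the two write policies for one L1 block: write-through
   * sends every write to L2; write-back defers by setting a dirty bit
   * and writes to L2 once, at replacement. */

  int dirty = 0;
  int l2_writes = 0;
  enum { WRITE_THROUGH, WRITE_BACK } policy = WRITE_BACK;

  void write_block(void) {
      if (policy == WRITE_THROUGH)
          l2_writes++;          /* every L1 write goes to L2 */
      else
          dirty = 1;            /* defer: just mark the block dirty */
  }

  void evict_block(void) {
      if (policy == WRITE_BACK && dirty) {
          l2_writes++;          /* one coalesced write at replacement */
          dirty = 0;
      }
  }

  int main(void) {
      for (int i = 0; i < 10; i++) write_block();
      evict_block();
      printf("L2 writes: %d\n", l2_writes);  /* 1 here; 10 if write-through */
      return 0;
  }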
slide16

Types of Cache Misses

  • Compulsory misses: happen the first time a memory word is accessed – the misses for an infinite cache
  • Capacity misses: happen because the program touched many other words before re-touching the same word – the misses for a fully-associative cache
  • Conflict misses: happen because two words map to the same location in the cache – the misses generated while moving from a fully-associative to a direct-mapped cache