Chapter 5

Large and Fast: Exploiting Memory Hierarchy


Memory technology

Memory Technology

§5.1 Introduction

  • Static RAM (SRAM)

    • 0.5ns – 2.5ns, $2000 – $5000 per GB

  • Dynamic RAM (DRAM)

    • 50ns – 70ns, $20 – $75 per GB

  • Magnetic disk

    • 5ms – 20ms, $0.20 – $2 per GB

  • Ideal memory

    • Access time of SRAM

    • Capacity and cost/GB of disk

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 2


Principle of locality

Principle of Locality

  • Programs access a small proportion of their address space at any time

  • Temporal locality

    • Items accessed recently are likely to be accessed again soon (cf. least-recently-used replacement)

    • e.g., instructions in a loop, induction variables

  • Spatial locality

    • Items near those accessed recently are likely to be accessed soon

    • E.g., sequential instruction access, array data
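
The loop cases above can be made concrete with a short C sketch (illustrative only, not part of the original slides): the scalar accumulator and loop counter exhibit temporal locality, while the sequential array walk exhibits spatial locality.

```c
/* Illustrative sketch (not from the slides): both kinds of locality
 * in a simple summation loop. */
#include <stdio.h>

#define N 1024

int main(void) {
    static int a[N];
    int sum = 0;                  /* 'sum' and 'i' are reused every
                                     iteration: temporal locality      */
    for (int i = 0; i < N; i++)   /* the loop instructions themselves
                                     are re-fetched each iteration     */
        sum += a[i];              /* a[0], a[1], ... are adjacent in
                                     memory: spatial locality          */
    printf("%d\n", sum);
    return 0;
}
```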

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 3


Taking advantage of locality

Taking Advantage of Locality

  • Memory hierarchy

  • Store everything on disk

  • Copy recently accessed (and nearby) items from disk to smaller DRAM memory

    • Main memory

  • Copy more recently accessed (and nearby) items from DRAM to smaller SRAM memory

    • Cache memory attached to CPU

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 4


Memory hierarchy levels

Memory Hierarchy Levels

  • Block (aka line): unit of copying

    • May be multiple words

  • If accessed data is present in upper level

    • Hit: access satisfied by upper level

      • Hit ratio: hits/accesses

  • If accessed data is absent

    • Miss: block copied from lower level

      • Time taken: miss penalty

      • Miss ratio: misses/accesses = 1 – hit ratio

      • Access time on a miss = hit time + miss penalty

    • Then accessed data supplied from upper level

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 5


Cache memory

Cache Memory

  • Cache memory

    • The level of the memory hierarchy closest to the CPU

  • Given accesses X1, …, Xn–1, Xn

§5.2 The Basics of Caches

  • How do we know if the data is present?

  • Where do we look?

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 6


Direct mapped cache

Direct Mapped Cache

  • Location determined by address

  • Direct mapped: only one choice

    • (Block address) modulo (#Blocks in cache)

  • #Blocks is a power of 2

  • Use low-order address bits

  • Example in the figure: 5-bit block addresses with an 8-block cache, so the index is the low-order 3 bits and the tag is the high-order 2 bits (sketched in code below)
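
A minimal C sketch of the placement rule above, assuming a power-of-two number of blocks so the modulo reduces to keeping the low-order bits (block address 22 is an arbitrary illustrative value):

```c
/* Minimal sketch of the direct-mapped placement rule.
 * NUM_BLOCKS must be a power of 2 for the bit-field view to hold. */
#include <stdio.h>
#include <stdint.h>

#define NUM_BLOCKS 8u   /* 8-block cache, as in the example that follows */

int main(void) {
    uint32_t block_addr = 22;                   /* arbitrary example block address */
    uint32_t index = block_addr % NUM_BLOCKS;   /* low-order 3 bits                */
    uint32_t tag   = block_addr / NUM_BLOCKS;   /* remaining high-order bits       */
    printf("block %u -> index %u, tag %u\n", block_addr, index, tag);
    return 0;
}
```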

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 7


Tags and valid bits

Tags and Valid Bits

  • How do we know which particular block is stored in a cache location?

    • Store block address as well as the data

    • Actually, only need the high-order bits

    • Called the tag

  • What if there is no data in a location?

    • Valid bit: 1 = present, 0 = not present

    • Initially 0

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 8


Cache example

Cache Example

  • 8-blocks, 1 word/block, direct mapped

  • Initial state

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 9


Cache example1

Cache Example

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 10


Cache example2

Cache Example

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 11


Cache example3

Cache Example

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 12


Cache example4

Cache Example

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 13


Cache example5

Cache Example

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 14


Address subdivision

Address Subdivision

Data capacity: 1024 words × 32 bits = 4 × 8 × 1024 bits = 4 KB

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 15


Example larger block size

Address fields: Tag = bits 31–10 (22 bits), Index = bits 9–4 (6 bits), Offset = bits 3–0 (4 bits)

Example: Larger Block Size

  • 64 blocks, 16 bytes/block

    • To what block number does address 1200 map?

  • Block address = 1200/16 = 75

  • Block number = 75 modulo 64 = 11
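
The same mapping can be expressed as bit-field extraction. This sketch assumes the 4/6/22-bit offset/index/tag split shown above and reproduces block number 11 for address 1200:

```c
/* Sketch of splitting a byte address into tag/index/offset for the
 * 64-block, 16-byte-block cache above (field widths from the figure). */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t addr   = 1200;
    uint32_t offset = addr & 0xF;          /* bits 3-0   (4 bits)  */
    uint32_t index  = (addr >> 4) & 0x3F;  /* bits 9-4   (6 bits)  */
    uint32_t tag    = addr >> 10;          /* bits 31-10 (22 bits) */
    printf("offset=%u index=%u tag=%u\n", offset, index, tag);
    /* block address = 1200/16 = 75, block number = 75 mod 64 = 11 */
    return 0;
}
```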

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 16


Block size considerations

Block Size Considerations

  • Larger blocks should reduce miss rate

    • Due to spatial locality

  • But in a fixed-sized cache

    • Larger blocks ⇒ fewer of them

      • More competition ⇒ increased miss rate

    • Larger blocks ⇒ pollution

  • Larger miss penalty

    • Can override benefit of reduced miss rate

    • Early restart and critical-word-first can help

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 17


Cache misses

Cache Misses

  • On cache hit, CPU proceeds normally

  • On cache miss

    • Stall the CPU pipeline

    • Fetch block from next level of hierarchy

    • Instruction cache miss

      • Restart instruction fetch

    • Data cache miss

      • Complete data access

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 18


Write through

Write-Through

  • On data-write hit, could just update the block in cache

    • But then cache and memory would be inconsistent

  • Write through: also update memory

  • But makes writes take longer

    • e.g., if base CPI = 1, 10% of instructions are stores, write to memory takes 100 cycles

      • Effective CPI = 1 + 0.1×100 = 11

  • Solution: write buffer

    • Holds data waiting to be written to memory

    • CPU continues immediately

      • Only stalls on write if write buffer is already full

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 19


Write back

Write-Back

  • Alternative: On data-write hit, just update the block in cache

    • Keep track of whether each block is dirty

  • When a dirty block is replaced

    • Write it back to memory

    • Can use a write buffer to allow replacing block to be read first
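
The following toy C sketch (structures and function names are invented for illustration, not taken from the slides) contrasts the two write-hit policies: write-through updates memory immediately, while write-back only sets a dirty bit and defers the memory update until the block is evicted.

```c
/* Illustrative sketch of the two write-hit policies (toy structure,
 * not a real cache controller). */
#include <string.h>
#include <stdint.h>

#define BLOCK_BYTES 16

typedef struct {
    int      valid, dirty;
    uint32_t tag;
    uint8_t  data[BLOCK_BYTES];
} cache_line;

/* Write-through: update the cache line and memory (or a write buffer). */
static void write_through(cache_line *line, int offset, uint8_t value,
                          uint8_t *memory_block) {
    line->data[offset]   = value;
    memory_block[offset] = value;          /* memory kept consistent       */
}

/* Write-back: update only the cache line and mark it dirty; memory is
 * updated later, when the dirty line is replaced. */
static void write_back(cache_line *line, int offset, uint8_t value) {
    line->data[offset] = value;
    line->dirty = 1;                       /* remember to write on evict   */
}

static void evict(cache_line *line, uint8_t *memory_block) {
    if (line->valid && line->dirty)        /* only dirty lines go back     */
        memcpy(memory_block, line->data, BLOCK_BYTES);
    line->valid = line->dirty = 0;
}

int main(void) {
    static uint8_t memory_block[BLOCK_BYTES];
    cache_line line = { .valid = 1 };

    write_through(&line, 1, 7, memory_block);  /* memory updated at once      */
    write_back(&line, 0, 42);                  /* memory_block[0] still stale */
    evict(&line, memory_block);                /* dirty block written back    */
    return 0;
}
```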

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 20


Write allocation

Write Allocation

  • What should happen on a write miss?

  • Alternatives for write-through

    • Allocate on miss: fetch the block

    • Write around: don’t fetch the block

      • Since programs often write a whole block before reading it (e.g., initialization)

  • For write-back

    • Usually fetch the block

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 21


Example intrinsity fastmath

Example: Intrinsity FastMATH

  • Embedded MIPS processor

    • 12-stage pipeline

    • Instruction and data access on each cycle

  • Split cache: separate I-cache and D-cache

    • Each 16KB: 256 blocks × 16 words/block

    • D-cache: write-through or write-back

  • SPEC2000 miss rates

    • I-cache: 0.4%

    • D-cache: 11.4%

    • Weighted average: 3.2%

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 22


Example intrinsity fastmath1

Example: Intrinsity FastMATH

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 23


Main memory supporting caches

Main Memory Supporting Caches

  • Use DRAMs for main memory

    • Fixed width (e.g., 1 word)

    • Connected by fixed-width clocked bus

      • Bus clock is typically slower than CPU clock

  • Example cache block read

    • 1 bus cycle for address transfer

    • 15 bus cycles per DRAM access

    • 1 bus cycle per data transfer

  • For 4-word block, 1-word-wide DRAM

    • Miss penalty = 1 + 4×15 + 4×1 = 65 bus cycles

    • Bandwidth = 16 bytes / 65 cycles = 0.25 B/cycle

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 24


Increasing memory bandwidth

Increasing Memory Bandwidth

  • 4-word wide memory

    • Miss penalty = 1 + 15 + 1 = 17 bus cycles

    • Bandwidth = 16 bytes / 17 cycles = 0.94 B/cycle

  • 4-bank interleaved memory

    • Miss penalty = 1 + 15 + 4×1 = 20 bus cycles

    • Bandwidth = 16 bytes / 20 cycles = 0.8 B/cycle
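
A small sketch that reproduces the miss-penalty and bandwidth arithmetic from this slide and the previous one, using the same assumed timings (1 address cycle, 15 cycles per DRAM access, 1 cycle per bus transfer):

```c
/* Sketch of the miss-penalty arithmetic for the three memory
 * organizations (4-word block). */
#include <stdio.h>

int main(void) {
    int words = 4, addr = 1, dram = 15, xfer = 1;

    int narrow      = addr + words * dram + words * xfer; /* 1-word-wide: 65 */
    int wide        = addr + dram + xfer;                 /* 4-word-wide: 17 */
    int interleaved = addr + dram + words * xfer;         /* 4 banks:     20 */

    printf("narrow=%d wide=%d interleaved=%d bus cycles\n",
           narrow, wide, interleaved);
    printf("bandwidth: %.2f %.2f %.2f bytes/cycle\n",
           16.0 / narrow, 16.0 / wide, 16.0 / interleaved);
    return 0;
}
```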

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 25


Advanced dram organization

Advanced DRAM Organization

  • Bits in a DRAM are organized as a rectangular array

    • DRAM accesses an entire row

    • Burst mode: supply successive words from a row with reduced latency

  • Double data rate (DDR) DRAM

    • Transfer on rising and falling clock edges

  • Quad data rate (QDR) DRAM

    • Separate DDR inputs and outputs

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 26


Dram generations

DRAM Generations

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 27


Measuring cache performance

Measuring Cache Performance

  • Components of CPU time

    • Program execution cycles

      • Includes cache hit time

    • Memory stall cycles

      • Mainly from cache misses

  • With simplifying assumptions:

§5.3 Measuring and Improving Cache Performance

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 28


Cache performance example

Cache Performance Example

  • Given

    • I-cache miss rate = 2%

    • D-cache miss rate = 4%

    • Miss penalty = 100 cycles

    • Base CPI (ideal cache) = 2

    • Loads & stores are 36% of instructions

  • Miss cycles per instruction

    • I-cache: 0.02 × 100 = 2

    • D-cache: 0.36 × 0.04 × 100 = 1.44

  • Actual CPI = 2 + 2 + 1.44 = 5.44

    • CPU with ideal cache would be 5.44/2 = 2.72 times faster
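
The same calculation, written out as a short C program with the example's assumed rates:

```c
/* Sketch of the CPI calculation above (values from the example). */
#include <stdio.h>

int main(void) {
    double base_cpi = 2.0, penalty = 100.0;
    double i_miss = 0.02, d_miss = 0.04, mem_frac = 0.36;

    double i_stall = i_miss * penalty;             /* 2.00 cycles/instr */
    double d_stall = mem_frac * d_miss * penalty;  /* 1.44 cycles/instr */
    double cpi = base_cpi + i_stall + d_stall;     /* 5.44              */

    printf("CPI = %.2f, ideal cache is %.2fx faster\n", cpi, cpi / base_cpi);
    return 0;
}
```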

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 29


Average access time

Average Access Time

  • Hit time is also important for performance

  • Average memory access time (AMAT)

    • AMAT = Hit time + Miss rate × Miss penalty

  • Example

    • CPU with 1ns clock, hit time = 1 cycle, miss penalty = 20 cycles, I-cache miss rate = 5%

    • AMAT = 1 + 0.05 × 20 = 2ns

      • 2 cycles per instruction
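
A one-line check of the AMAT formula with the example's numbers (1 GHz clock assumed, so 1 cycle = 1 ns):

```c
/* Sketch of the AMAT formula using the example's numbers. */
#include <stdio.h>

int main(void) {
    double hit_time = 1.0, miss_rate = 0.05, miss_penalty = 20.0; /* cycles */
    double amat = hit_time + miss_rate * miss_penalty;  /* 2 cycles = 2 ns */
    printf("AMAT = %.1f cycles\n", amat);
    return 0;
}
```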

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 30


Performance summary

Performance Summary

  • When CPU performance increased

    • Miss penalty becomes more significant

  • Decreasing base CPI

    • Greater proportion of time spent on memory stalls

  • Increasing clock rate

    • Memory stalls account for more CPU cycles

  • Can’t neglect cache behavior when evaluating system performance

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 31


Associative caches

Associative Caches

  • Fully associative

    • Allow a given block to go in any cache entry

    • Requires all entries to be searched at once

    • Comparator per entry (expensive)

  • n-way set associative

    • Each set contains n entries

    • Block number determines which set

      • (Block number) modulo (#Sets in cache)

    • Search all entries in a given set at once

    • n comparators (less expensive)

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 32


Associative cache example

Associative Cache Example

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 33


Spectrum of associativity

Spectrum of Associativity

  • For a cache with 8 entries

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 34


Associativity example

Associativity Example

  • Compare 4-block caches

    • Direct mapped, 2-way set associative, fully associative

    • Block access sequence: 0, 8, 0, 6, 8

  • Direct mapped

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 35


Associativity example1

Associativity Example

  • 2-way set associative

  • Fully associative
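
Since the result tables for this example are figures in the original slides, here is a small illustrative simulator (not from the slides) that replays the block sequence 0, 8, 0, 6, 8 against the three 4-block organizations with LRU replacement; it reports 5, 4, and 3 misses respectively, matching the textbook example.

```c
/* Illustrative simulator for the 4-block caches above: direct mapped,
 * 2-way set associative, and fully associative, all with LRU. */
#include <stdio.h>

#define BLOCKS 4

static int run(int ways, const int *seq, int n) {
    int tag[BLOCKS], age[BLOCKS];          /* age = time of last use      */
    int sets = BLOCKS / ways, misses = 0;
    for (int i = 0; i < BLOCKS; i++) { tag[i] = -1; age[i] = 0; }

    for (int t = 0; t < n; t++) {
        int set = seq[t] % sets;
        int base = set * ways, victim = base, hit = 0;
        for (int w = base; w < base + ways; w++) {
            if (tag[w] == seq[t]) { hit = 1; age[w] = t; break; }
            if (tag[w] == -1) victim = w;  /* prefer an empty way          */
            else if (tag[victim] != -1 && age[w] < age[victim])
                victim = w;                /* otherwise evict the LRU way  */
        }
        if (!hit) { misses++; tag[victim] = seq[t]; age[victim] = t; }
    }
    return misses;
}

int main(void) {
    int seq[] = {0, 8, 0, 6, 8};
    printf("direct mapped: %d misses\n", run(1, seq, 5)); /* 5 */
    printf("2-way:         %d misses\n", run(2, seq, 5)); /* 4 */
    printf("fully assoc.:  %d misses\n", run(4, seq, 5)); /* 3 */
    return 0;
}
```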

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 36


How much associativity

How Much Associativity

  • Increased associativity decreases miss rate

    • But with diminishing returns

  • Simulation of a system with 64KB D-cache, 16-word blocks, SPEC2000

    • 1-way: 10.3%

    • 2-way: 8.6%

    • 4-way: 8.3%

    • 8-way: 8.1%

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 37


Set associative cache organization

Set Associative Cache Organization

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 38


Replacement policy

Replacement Policy

  • Direct mapped: no choice

  • Set associative

    • Prefer non-valid entry, if there is one

    • Otherwise, choose among entries in the set

  • Least-recently used (LRU)

    • Choose the one unused for the longest time

      • Simple for 2-way, manageable for 4-way, too hard beyond that

  • Random

    • Gives approximately the same performance as LRU for high associativity

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 39


Multilevel caches

Multilevel Caches

  • Primary cache attached to CPU

    • Small, but fast

  • Level-2 cache services misses from primary cache

    • Larger, slower, but still faster than main memory

  • Main memory services L-2 cache misses

  • Some high-end systems include L-3 cache

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 40


Multilevel cache example

Multilevel Cache Example

  • Given

    • CPU base CPI = 1, clock rate = 4GHz

    • Miss rate/instruction = 2%

    • Main memory access time = 100ns

  • With just primary cache

    • Miss penalty = 100ns/0.25ns = 400 cycles

    • Effective CPI = 1 + 0.02 × 400 = 9

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 41


Example cont

Example (cont.)

  • Now add L-2 cache

    • Access time = 5ns

    • Global miss rate to main memory = 0.5%

  • Primary miss with L-2 hit

    • Penalty = 5ns/0.25ns = 20 cycles

  • Primary miss with L-2 miss

    • Extra penalty = 400 cycles

  • CPI = 1 + 0.02 × 20 + 0.005 × 400 = 3.4

  • Performance ratio = 9/3.4 = 2.6
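
A short sketch of the two CPI computations above (the 4 GHz clock gives 0.25 ns per cycle, so the 100 ns and 5 ns access times become 400 and 20 cycles):

```c
/* Sketch of the two-level cache example (4 GHz clock => 0.25 ns/cycle). */
#include <stdio.h>

int main(void) {
    double base_cpi = 1.0;
    double l1_miss = 0.02, global_miss = 0.005;
    double mem_penalty = 100.0 / 0.25;   /* 400 cycles */
    double l2_penalty  = 5.0 / 0.25;     /*  20 cycles */

    double cpi_l1_only = base_cpi + l1_miss * mem_penalty;        /* 9.0 */
    double cpi_l1_l2   = base_cpi + l1_miss * l2_penalty
                                  + global_miss * mem_penalty;    /* 3.4 */

    printf("L1 only: CPI = %.1f\n", cpi_l1_only);
    printf("L1 + L2: CPI = %.1f (speedup %.1fx)\n",
           cpi_l1_l2, cpi_l1_only / cpi_l1_l2);
    return 0;
}
```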

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 42


Multilevel cache considerations

Multilevel Cache Considerations

  • Primary cache

    • Focus on minimal hit time

  • L-2 cache

    • Focus on low miss rate to avoid main memory access

    • Hit time has less overall impact

  • Results

    • L-1 cache usually smaller than a single cache

    • L-1 block size smaller than L-2 block size

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 43


Interactions with advanced cpus

Interactions with Advanced CPUs

  • Out-of-order CPUs can execute instructions during cache miss

    • Pending store stays in load/store unit

    • Dependent instructions wait in reservation stations

      • Independent instructions continue

  • Effect of miss depends on program data flow

    • Much harder to analyse

    • Use system simulation

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 44


Interactions with software

Interactions with Software

  • Misses depend on memory access patterns

    • Algorithm behavior

    • Compiler optimization for memory access

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 45


Virtual memory

Virtual Memory

§5.4 Virtual Memory

  • Use main memory as a “cache” for secondary (disk) storage

    • Managed jointly by CPU hardware and the operating system (OS)

  • Programs share main memory

    • Each gets a private virtual address space holding its frequently used code and data

    • Protected from other programs

  • CPU and OS translate virtual addresses to physical addresses

    • VM “block” is called a page

    • VM translation “miss” is called a page fault

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 46


Address translation

Address Translation

  • Fixed-size pages (e.g., 4K)

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 47


Page fault penalty

Page Fault Penalty

  • On page fault, the page must be fetched from disk

    • Takes millions of clock cycles

    • Handled by OS code

  • Try to minimize page fault rate

    • Fully associative placement

    • Smart replacement algorithms

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 48


Page tables

Page Tables

  • Stores placement information

    • Array of page table entries, indexed by virtual page number

    • Page table register in CPU points to page table in physical memory

  • If page is present in memory

    • PTE stores the physical page number

    • Plus other status bits (referenced, dirty, …)

  • If page is not present

    • PTE can refer to location in swap space on disk
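
A simplified C sketch of a single-level page-table lookup as described above. The PTE layout, page size, and table size are illustrative assumptions, not the format of any particular machine; a failed lookup stands in for a page fault.

```c
/* Illustrative sketch of virtual-to-physical translation through a
 * single-level page table (assumed layout, not the MIPS format). */
#include <stdio.h>
#include <stdint.h>

#define PAGE_BITS 12u                     /* 4 KB pages             */
#define NUM_PAGES 1024u                   /* toy virtual space      */

typedef struct {
    unsigned valid : 1;                   /* page present in memory */
    unsigned dirty : 1;                   /* page has been written  */
    uint32_t ppn;                         /* physical page number   */
} pte_t;

static pte_t page_table[NUM_PAGES];       /* indexed by virtual page number */

/* Returns the physical address, or -1 to signal a page fault. */
static int64_t translate(uint32_t va) {
    uint32_t vpn    = va >> PAGE_BITS;
    uint32_t offset = va & ((1u << PAGE_BITS) - 1);
    if (vpn >= NUM_PAGES || !page_table[vpn].valid)
        return -1;                        /* page fault: OS fetches the page */
    return ((int64_t)page_table[vpn].ppn << PAGE_BITS) | offset;
}

int main(void) {
    page_table[3] = (pte_t){ .valid = 1, .dirty = 0, .ppn = 7 };
    printf("0x3123 -> %lld\n", (long long)translate(0x3123));  /* 28963 = 0x7123 */
    printf("0x5000 -> %lld\n", (long long)translate(0x5000));  /* -1 (fault)     */
    return 0;
}
```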

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 49


Chapter 5

Background (Supplement)

  • In a computer system, memory is central. Memory itself is a large array of words or bytes, each with its own address. The CPU fetches instructions from memory according to the value of the program counter, and these instructions may in turn cause additional loads from and stores to specific memory addresses.

  • The discussion starts from the various techniques for managing memory, covering the basic hardware, the binding of symbolic memory addresses to actual memory, and an overview of the distinction between logical and physical addresses.


Chapter 5

Basic Hardware

  • Main memory and the registers built into the processor are the only storage the CPU can access directly. Machine instructions take memory addresses as arguments, never disk addresses. Therefore, any instruction being executed, and any data used by those instructions, must reside in one of these direct-access storage devices. If the data are not in memory, they must be moved there before the CPU can operate on them.


Chapter 5

Address Binding

  • Compile time: if it is known at compile time where the program will reside in memory, absolute code can be generated.

  • Load time: if it is not known at compile time where the program will reside in memory, the compiler must generate relocatable code.

  • Execution time: if the process can be moved during execution from one memory segment to another, binding must be delayed until run time.


Chapter 5

Logical vs. Physical Address Space

  • An address generated by the CPU is commonly referred to as a logical address, whereas the address seen by the memory unit (that is, the value loaded into the memory-address register) is referred to as a physical address.


Chapter 5

  • Dynamic loading

    • Process size is otherwise limited by the size of physical memory. For better memory-space utilization, dynamic loading can be used.

    • The main program is loaded into main memory and executed. When it needs to call another routine, it first checks whether that routine is already in memory. If not, the relocatable linking loader is invoked to load the routine into main memory and update the process's address tables; control is then transferred to the newly loaded routine.

  • Dynamic linking and shared libraries

    • With dynamic linking, a stub is placed wherever the program references a library routine. The stub is a small piece of code indicating how to locate the appropriate memory-resident library routine, or how to load it if it is not in memory. When the stub is executed, it first checks whether the needed routine is already in memory; if not, the program loads the routine into memory.


Swapped

Swapping

  • A process must be in memory to be executed. A process can, however, be swapped temporarily out of memory to a backing store and later brought back into memory for continued execution.


Chapter 5

Contiguous Memory Allocation

  • Memory mapping and protection


Chapter 5

  • Memory allocation

    • First fit: allocate the first hole that is big enough. Searching can begin either at the start of the set of holes or where the previous first-fit search ended, and stops as soon as a free hole that is large enough is found.

    • Best fit: allocate the smallest hole that is big enough. Unless the list of holes is ordered by size, the entire list must be searched. This strategy produces the smallest leftover hole.

    • Worst fit: allocate the largest hole. Again, unless the list is sorted by size, the entire list must be searched. This strategy produces the largest leftover hole, which may be more useful than the small leftover hole from a best-fit approach.


Chapter 5

  • Fragmentation

    • Both the first-fit and best-fit strategies suffer from external fragmentation. As processes are loaded into and removed from memory, the free memory space is broken into small pieces. External fragmentation exists when there is enough total memory to satisfy a request but the available pieces are not contiguous: storage is fragmented into a large number of small holes.

    • Whether external fragmentation is a serious problem depends on the total amount of memory and the average process size. For example, statistical analysis of first fit shows that, even with some optimization, for every N allocated blocks another 0.5N blocks are lost to fragmentation; that is, up to one-third of memory may be unusable. This is the well-known 50-percent rule.


Paging

Paging

  • Basic method


Chapter 5

  • Hardware support


Chapter 5

  • Protection


Chapter 5

  • Shared pages


Chapter 5

Structure of the Page Table

  • Hierarchical paging


Chapter 5

  • Hashed page table

    • A common approach for handling address spaces larger than 32 bits is a hashed page table, with the hash value being the virtual page number. Each entry in the hash table contains a linked list of elements that hash to the same location.

    • Each element consists of three fields:

      • the virtual page number

      • the value of the mapped page frame

      • a pointer to the next element in the linked list


Chapter 5

  • The algorithm works as follows: the virtual page number in the virtual address is hashed into the hash table. The virtual page number is compared with field (1) of the first element in the linked list. If they match, the corresponding page frame (field (2)) is used to form the desired physical address. If they do not match, subsequent entries in the linked list are searched for a matching virtual page number.
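
A sketch of that lookup in C. The table size, hash function, and entry layout are illustrative assumptions; each entry carries the three fields listed above, and lookups walk the chain of elements that hash to the same slot.

```c
/* Illustrative sketch of a hashed page table lookup (toy structure,
 * not a real OS implementation). */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define TABLE_SIZE 256u

typedef struct hpt_entry {
    uint64_t vpn;                 /* (1) virtual page number        */
    uint64_t frame;               /* (2) mapped page frame number   */
    struct hpt_entry *next;       /* (3) next element hashing here  */
} hpt_entry;

static hpt_entry *table[TABLE_SIZE];

static uint32_t hash_vpn(uint64_t vpn) { return (uint32_t)(vpn % TABLE_SIZE); }

/* Returns 1 and sets *frame on a match, 0 if the VPN is not mapped. */
static int lookup(uint64_t vpn, uint64_t *frame) {
    for (hpt_entry *e = table[hash_vpn(vpn)]; e != NULL; e = e->next)
        if (e->vpn == vpn) { *frame = e->frame; return 1; }
    return 0;
}

static void insert(uint64_t vpn, uint64_t frame) {
    hpt_entry *e = malloc(sizeof *e);
    e->vpn = vpn; e->frame = frame;
    e->next = table[hash_vpn(vpn)];       /* push onto the chain */
    table[hash_vpn(vpn)] = e;
}

int main(void) {
    uint64_t frame;
    insert(0x12345, 42);
    if (lookup(0x12345, &frame))
        printf("vpn 0x12345 -> frame %llu\n", (unsigned long long)frame);
    return 0;
}
```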


Chapter 5

  • Inverted page table

    • There is one entry for each real page (or frame) of physical memory. Each entry holds the virtual address of the page stored at that physical location, along with information about the process that owns the page. Thus there is only one page table in the system, and it has only one entry for each page of physical memory.


Segmentation

Segmentation

  • Basic method

    • Consider how you think about a program while you are writing it. You think of it as a main program with a set of subroutines, procedures, functions, or modules. There are also various data structures: objects, arrays, stacks, variables, and so on. Each of these modules or data elements is referred to by name: you speak of "the stack", "the function library", "the main program", without caring which addresses in memory these elements occupy.


Chapter 5

  • Hardware

    • Each entry in the segment table has a segment base and a segment limit. The segment base contains the starting physical address where the segment resides in memory, and the segment limit specifies the length of the segment.


Intel pentium

Example: Intel Pentium

  • The Pentium architecture allows a segment to be as large as 4 GB, and the maximum number of segments per process is 16 K. The logical address space of a process is divided into two partitions: the first consists of up to 8 K segments private to that process, the second of up to 8 K segments shared among all processes.

  • Information about the first partition is kept in the local descriptor table (LDT); information about the second partition is kept in the global descriptor table (GDT). Each entry in the LDT and GDT is 8 bytes and holds detailed information about a particular segment, including its base address and limit.


Chapter 5

Linux on Pentium Systems


Translation using a page table

Translation Using a Page Table

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 79


Mapping pages to storage

Mapping Pages to Storage

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 80


Replacement and writes

Replacement and Writes

  • To reduce page fault rate, prefer least-recently used (LRU) replacement

    • Reference bit (aka use bit) in PTE set to 1 on access to page

    • Periodically cleared to 0 by OS

    • A page with reference bit = 0 has not been used recently

  • Disk writes take millions of cycles

    • Block at once, not individual locations

    • Write through is impractical

    • Use write-back

    • Dirty bit in PTE set when page is written

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 81


Fast translation using a tlb

Fast Translation Using a TLB

  • Address translation would appear to require extra memory references

    • One to access the PTE

    • Then the actual memory access

  • But access to page tables has good locality

    • So use a fast cache of PTEs within the CPU

    • Called a Translation Look-aside Buffer (TLB)

    • Typical: 16–512 PTEs, 0.5–1 cycle for hit, 10–100 cycles for miss, 0.01%–1% miss rate

    • Misses could be handled by hardware or software
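
A minimal sketch of the TLB-first translation path described above. The TLB size, the stand-in page-table walk, and the replacement choice are illustrative assumptions only.

```c
/* Minimal sketch: consult a small fully associative TLB before falling
 * back to the page table (sizes and names are illustrative). */
#include <stdio.h>
#include <stdint.h>

#define TLB_ENTRIES 16

typedef struct { int valid; uint32_t vpn, ppn; } tlb_entry;
static tlb_entry tlb[TLB_ENTRIES];

/* Stand-in for the full page-table walk (the extra memory reference). */
static uint32_t page_table_walk(uint32_t vpn) { return vpn + 100; }

static uint32_t translate(uint32_t vpn) {
    /* TLB hit: translation found without touching the page table */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return tlb[i].ppn;

    /* TLB miss: walk the page table, then install the PTE in the TLB */
    uint32_t ppn = page_table_walk(vpn);
    int victim = vpn % TLB_ENTRIES;          /* toy replacement choice */
    tlb[victim] = (tlb_entry){ 1, vpn, ppn };
    return ppn;
}

int main(void) {
    printf("first access:  ppn %u (TLB miss)\n", translate(7));
    printf("second access: ppn %u (TLB hit)\n", translate(7));
    return 0;
}
```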

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 82


Fast translation using a tlb1

Fast Translation Using a TLB

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 83


Tlb misses

TLB Misses

  • If page is in memory

    • Load the PTE from memory and retry

    • Could be handled in hardware

      • Can get complex for more complicated page table structures

    • Or in software

      • Raise a special exception, with optimized handler

  • If page is not in memory (page fault)

    • OS handles fetching the page and updating the page table

    • Then restart the faulting instruction

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 84


Tlb miss handler

TLB Miss Handler

  • TLB miss indicates

    • Page present, but PTE not in TLB

    • Page not present

  • Must recognize TLB miss before destination register overwritten

    • Raise exception

  • Handler copies PTE from memory to TLB

    • Then restarts instruction

    • If page not present, page fault will occur

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 85


Page fault handler

Page Fault Handler

  • Use faulting virtual address to find PTE

  • Locate page on disk

  • Choose page to replace

    • If dirty, write to disk first

  • Read page into memory and update page table

  • Make process runnable again

    • Restart from faulting instruction

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 86


Tlb and cache interaction

TLB and Cache Interaction

  • If cache tag uses physical address

    • Need to translate before cache lookup

  • Alternative: use virtual address tag

    • Complications due to aliasing

      • Different virtual addresses for shared physical address

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 87


Memory protection

Memory Protection

  • Different tasks can share parts of their virtual address spaces

    • But need to protect against errant access

    • Requires OS assistance

  • Hardware support for OS protection

    • Privileged supervisor mode (aka kernel mode)

    • Privileged instructions

    • Page tables and other state information only accessible in supervisor mode

    • System call exception (e.g., syscall in MIPS)

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 88


The memory hierarchy

The Memory Hierarchy

  • Common principles apply at all levels of the memory hierarchy

    • Based on notions of caching

  • At each level in the hierarchy

    • Block placement

    • Finding a block

    • Replacement on a miss

    • Write policy

The BIG Picture

§5.5 A Common Framework for Memory Hierarchies

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 89


Block placement

Block Placement

  • Determined by associativity

    • Direct mapped (1-way associative)

      • One choice for placement

    • n-way set associative

      • n choices within a set

    • Fully associative

      • Any location

  • Higher associativity reduces miss rate

    • Increases complexity, cost, and access time

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 90


Finding a block

Finding a Block

  • Hardware caches

    • Reduce comparisons to reduce cost

  • Virtual memory

    • Full table lookup makes full associativity feasible

    • Benefit in reduced miss rate

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 91


Replacement

Replacement

  • Choice of entry to replace on a miss

    • Least recently used (LRU)

      • Complex and costly hardware for high associativity

    • Random

      • Close to LRU, easier to implement

  • Virtual memory

    • LRU approximation with hardware support

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 92


Write policy

Write Policy

  • Write-through

    • Update both upper and lower levels

    • Simplifies replacement, but may require write buffer

  • Write-back

    • Update upper level only

    • Update lower level when block is replaced

    • Need to keep more state

  • Virtual memory

    • Only write-back is feasible, given disk write latency

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 93


Sources of misses

Sources of Misses

  • Compulsory misses (aka cold start misses)

    • First access to a block

  • Capacity misses

    • Due to finite cache size

    • A replaced block is later accessed again

  • Conflict misses (aka collision misses)

    • In a non-fully associative cache

    • Due to competition for entries in a set

    • Would not occur in a fully associative cache of the same total size

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 94


Cache design trade offs

Cache Design Trade-offs

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 95


Virtual machines

Virtual Machines

§5.6 Virtual Machines

  • Host computer emulates guest operating system and machine resources

    • Improved isolation of multiple guests

    • Avoids security and reliability problems

    • Aids sharing of resources

  • Virtualization has some performance impact

    • Feasible with modern high-performance computers

  • Examples

    • IBM VM/370 (1970s technology!)

    • VMWare

    • Microsoft Virtual PC

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 96


Virtual machine monitor

Virtual Machine Monitor

  • Maps virtual resources to physical resources

    • Memory, I/O devices, CPUs

  • Guest code runs on native machine in user mode

    • Traps to VMM on privileged instructions and access to protected resources

  • Guest OS may be different from host OS

  • VMM handles real I/O devices

    • Emulates generic virtual I/O devices for guest

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 97


Example timer virtualization

Example: Timer Virtualization

  • In native machine, on timer interrupt

    • OS suspends current process, handles interrupt, selects and resumes next process

  • With Virtual Machine Monitor

    • VMM suspends current VM, handles interrupt, selects and resumes next VM

  • If a VM requires timer interrupts

    • VMM emulates a virtual timer

    • Emulates interrupt for VM when physical timer interrupt occurs

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 98


Instruction set support

Instruction Set Support

  • User and System modes

  • Privileged instructions only available in system mode

    • Trap to system if executed in user mode

  • All physical resources only accessible using privileged instructions

    • Including page tables, interrupt controls, I/O registers

  • Renaissance of virtualization support

    • Current ISAs (e.g., x86) adapting

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 99


Cache control

Address fields: Tag = bits 31–14 (18 bits), Index = bits 13–4 (10 bits), Offset = bits 3–0 (4 bits)

Cache Control

  • Example cache characteristics

    • Direct-mapped, write-back, write allocate

    • Block size: 4 words (16 bytes)

    • Cache size: 16 KB (1024 blocks)

    • 32-bit byte addresses

    • Valid bit and dirty bit per block

    • Blocking cache

      • CPU waits until access is complete

§5.7 Using a Finite State Machine to Control A Simple Cache

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 100


Interface signals

Interface Signals

  • CPU ↔ Cache signals: Read/Write, Valid, 32-bit Address, 32-bit Write Data, 32-bit Read Data, Ready

  • Cache ↔ Memory signals: Read/Write, Valid, 32-bit Address, 128-bit Write Data, 128-bit Read Data, Ready

  • Multiple cycles per access

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 101


Finite state machines

Finite State Machines

  • Use an FSM to sequence control steps

  • Set of states, transition on each clock edge

    • State values are binary encoded

    • Current state stored in a register

    • Next state = fn(current state, current inputs)

  • Control output signals = fo(current state)
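
A C sketch of such an FSM, using the four states of the simple cache controller discussed in this chapter (Idle, Compare Tag, Write-Back, Allocate). The transition conditions are simplified assumptions for illustration; a real controller also drives the output signals from the current state.

```c
/* Skeleton sketch of an FSM in C, with the four states of the simple
 * cache controller (transition details simplified). */
#include <stdio.h>

typedef enum { IDLE, COMPARE_TAG, WRITE_BACK, ALLOCATE } state_t;

typedef struct { int cpu_valid, hit, dirty, mem_ready; } inputs_t;

/* Next state = fn(current state, current inputs) */
static state_t next_state(state_t s, inputs_t in) {
    switch (s) {
    case IDLE:        return in.cpu_valid ? COMPARE_TAG : IDLE;
    case COMPARE_TAG: return in.hit ? IDLE
                             : (in.dirty ? WRITE_BACK : ALLOCATE);
    case WRITE_BACK:  return in.mem_ready ? ALLOCATE : WRITE_BACK;
    case ALLOCATE:    return in.mem_ready ? COMPARE_TAG : ALLOCATE;
    }
    return IDLE;
}

int main(void) {
    /* A miss to a clean block, inputs held constant for illustration:
     * Idle -> Compare Tag -> Allocate -> Compare Tag ...              */
    state_t s = IDLE;
    inputs_t in = { .cpu_valid = 1, .hit = 0, .dirty = 0, .mem_ready = 1 };
    for (int cycle = 0; cycle < 4; cycle++) {
        printf("cycle %d: state %d\n", cycle, s);
        s = next_state(s, in);    /* state register updated each clock edge */
    }
    return 0;
}
```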

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 102


Cache controller fsm

Cache Controller FSM

Could partition into separate states to reduce clock cycle time

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 103


Cache coherence problem

Cache Coherence Problem

  • Suppose two CPU cores share a physical address space

    • Write-through caches

§5.8 Parallelism and Memory Hierarchies: Cache Coherence

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 104


Coherence defined

Coherence Defined

  • Informally: Reads return most recently written value

  • Formally:

    • P writes X; P reads X (no intervening writes) ⇒ read returns written value

    • P1 writes X; P2 reads X (sufficiently later) ⇒ read returns written value

      • c.f. CPU B reading X after step 3 in example

    • P1 writes X, P2 writes X ⇒ all processors see writes in the same order

      • End up with the same final value for X

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 105


Cache coherence protocols

Cache Coherence Protocols

  • Operations performed by caches in multiprocessors to ensure coherence

    • Migration of data to local caches

      • Reduces bandwidth for shared memory

    • Replication of read-shared data

      • Reduces contention for access

  • Snooping protocols

    • Each cache monitors bus reads/writes

  • Directory-based protocols

    • Caches and memory record sharing status of blocks in a directory

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 106


Invalidating snooping protocols

Invalidating Snooping Protocols

  • Cache gets exclusive access to a block when it is to be written

    • Broadcasts an invalidate message on the bus

    • Subsequent read in another cache misses

      • Owning cache supplies updated value

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 107


Memory consistency

Memory Consistency

  • When are writes seen by other processors

    • “Seen” means a read returns the written value

    • Can’t be instantaneously

  • Assumptions

    • A write completes only when all processors have seen it

    • A processor does not reorder writes with other accesses

  • Consequence

    • P writes X then writes Y ⇒ all processors that see new Y also see new X

    • Processors can reorder reads, but not writes

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 108


Multilevel on chip caches

Multilevel On-Chip Caches

Intel Nehalem 4-core processor

§5.10 Real Stuff: The AMD Opteron X4 and Intel Nehalem

Per core: 32KB L1 I-cache, 32KB L1 D-cache, 512KB L2 cache

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 109


2 level tlb organization

2-Level TLB Organization

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 110


3 level cache organization

3-Level Cache Organization

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 111


Miss penalty reduction

Miss Penalty Reduction

  • Return requested word first

    • Then back-fill rest of block

  • Non-blocking miss processing

    • Hit under miss: allow hits to proceed

    • Miss under miss: allow multiple outstanding misses

  • Hardware prefetch: instructions and data

  • Opteron X4: bank interleaved L1 D-cache

    • Two concurrent accesses per cycle

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 112


Pitfalls

Pitfalls

  • Byte vs. word addressing

    • Example: 32-byte direct-mapped cache, 4-byte blocks

      • Byte 36 maps to block 1

      • Word 36 maps to block 4

  • Ignoring memory system effects when writing or generating code

    • Example: iterating over rows vs. columns of arrays

    • Large strides result in poor locality
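
A quick check of the byte- versus word-addressing pitfall above, for the 32-byte direct-mapped cache with 4-byte blocks (8 blocks):

```c
/* Sketch of the byte- vs. word-addressing pitfall:
 * 32-byte direct-mapped cache, 4-byte (1-word) blocks, 8 blocks. */
#include <stdio.h>

int main(void) {
    int blocks = 8, block_bytes = 4, block_words = 1;

    int byte_addr = 36, word_addr = 36;
    printf("byte 36 -> block %d\n", (byte_addr / block_bytes) % blocks); /* 1 */
    printf("word 36 -> block %d\n", (word_addr / block_words) % blocks); /* 4 */
    return 0;
}
```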

§5.11 Fallacies and Pitfalls

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 113


Pitfalls1

Pitfalls

  • In multiprocessor with shared L2 or L3 cache

    • Less associativity than cores results in conflict misses

    • More cores ⇒ need to increase associativity

  • Using AMAT to evaluate performance of out-of-order processors

    • Ignores effect of non-blocked accesses

    • Instead, evaluate performance by simulation

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 114


Pitfalls2

Pitfalls

  • Extending address range using segments

    • E.g., Intel 80286

    • But a segment is not always big enough

    • Makes address arithmetic complicated

  • Implementing a VMM on an ISA not designed for virtualization

    • E.g., non-privileged instructions accessing hardware resources

    • Either extend ISA, or require guest OS not to use problematic instructions

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 115


Concluding remarks

Concluding Remarks

  • Fast memories are small, large memories are slow

    • We really want fast, large memories

    • Caching gives this illusion

  • Principle of locality

    • Programs use a small part of their memory space frequently

  • Memory hierarchy

    • L1 cache ⇒ L2 cache ⇒ … ⇒ DRAM memory ⇒ disk

  • Memory system design is critical for multiprocessors

§5.12 Concluding Remarks

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 116

