
Solving Common Problems with GPU: A Case Study with Sparse Matrices
17 Mar 2011
ARTS Talk

Sean Baxter (324)


GPU Architecture

  • Why GPU?

  • Scan

  • Idiomatic GPU Programming

  • Sparse Matrix Optimization


Why GPU?

Consider advance of CMOS process technology


CMOS Process Tech

  • 1972 – Intel 8008 – 10,000nm – 3,500 transistors

  • 1985 – Intel 386 – 1,000nm – 275k transistors

  • 1989 – Intel 486 – 800nm – 1.18M transistors

  • 1997 – Pentium II – 350nm – 7.5M transistors

  • 2000 – Pentium 4 – 180nm – 42M transistors

  • 2006 – Core 2 Duo – 65nm – 291M transistors

  • 2011 – Sandy Bridge – 32nm – 995M transistors


CMOS Process Tech

Moore’s Law is Ending because…


CMOS Process Tech

…we’re running out of atoms


CMOS Process Tech

  • The current node is 28nm

  • 28nm is half the distance between the start of repeating features

  • The radius of a Si atom is 0.11nm

  • Only about 250 Si atoms fit between the start of repeating features

  • Not many more nodes before we run out of atoms


Why GPU?

  • Feature density will plateau

  • To get continuously improving performance we need to improve efficiency

  • Focus on features that actually do work

  • CPU pipeline length enables high clockspeed

  • CPU pipeline length makes big, inefficient chip

  • GPU uses latency hiding to manage pipeline


Instruction Pipeline

  • Old school RISC pipeline:

  • Instruction fetch

  • Instruction decode

  • Execute

  • Memory access

  • Write back to register


Instruction Pipeline

  • Cycle 0: Fet In0

  • Cycle 1: Dec In0, Fet In1

  • Cycle 2: Ex In0, Dec In1, Fet In2

  • Cycle 3: Mem In0, Ex In1, Dec In2, Fet In3

  • Cycle 4: WB In0, Mem In1, Ex In2, Dec In3, Fet In4

  • Cycle 5: WB In1, Mem In2, Ex In3, Dec In4, Fet In5


Instruction Pipeline

  • Pipelining allows concurrent execution of instructions

  • Latency increases but so does throughput

  • Longer pipelines allow higher clockspeeds for increased throughput

  • Longer pipelines require more parallel circuitry to implement all stages


Pipeline Hazards

0: x3 <- x2 op x1

1: x5 <- x4 op x3

The result of In0 is a source of In1: pipeline stall

NOPs are silently inserted into the pipeline

Stalls also occur on memory contention and branches


Pipeline Performance

  • Mitigate the hazards of longer pipelines with:

  • Out-of-order execution

  • Branch prediction

  • Speculative execution

  • Register renaming

  • Microfusion

  • Micro-op cache (1500 uOp on Sandy Bridge)



  • Instruction Level Parallelism (ILP) on CPU

  • Retire multiple instructions per cycle (IPC)

  • Intel Core architecture retires 4 IPC

  • Build multiple pipelines and have each fetch, decode, execute multiple instructions


Instruction Pipeline

  • Pipeline length drove MHz in 90s

  • CMOS Process improvements enabled longer pipelines with increasing feature density

  • Circuitry to support long pipelines grew faster than circuitry to execute

  • Superscalar not actually all that effective:

    • Measured performance more like 1-1.5 IPC


End of Long Pipelines

  • Pentium 4 Prescott (90nm Netburst)

  • Longest pipeline of any consumer chip:

    • Pentium 4 Northwood 20 pipeline stages

    • Pentium 4 Prescott 31 pipeline stages

  • Pipeline irony

    • Longer pipelines increase latency, make hazards harder to resolve

  • After Prescott, all vendors retreated from long pipelines


GPU Revisionist History

  • CPU pipeline is very complex to ensure single-threaded program correctness

  • GPU drops single threaded support and implements latency hiding

  • Scalar, in-order execution

  • Long pipelines for high MHz

  • Almost all circuitry is in execution blocks for doing work, not decoding, re-ordering, or analyzing instructions


GPU Pipeline

  • NVIDIA DX11 part “Fermi” (GF100-GF118)

  • GTX 580 (GF110) 16 Streaming Multiprocessors (SM)

  • Each SM operates independently, akin to cores on CPU

  • Within each SM, 32 active threads

  • Within each SM, 1536 threads in flight

  • GTX 580 is like 16-core exotic sequential processor


GPU SM Pipeline

  • Instruction dispatch grouped into warps

  • On NV warp = 32 threads, on ATI warp = 64 threads

  • Threads within warp execute in parallel

  • In0 completes on all threads of warp 0 prior to In1 executing on any thread of warp 0

  • ATI and G200 execute quarter warp per cycle

  • Fermi executes full warp per cycle


GPU SM Pipeline

  • Fake RISC-like pipeline on GPU

  • Cycle 0:

    • Tid 0-31 fetch In0

  • Cycle 1:

    • Tid 0-31 decode In0

    • Tid 32-63 fetch In0

  • Cycle 2:

    • Tid 0-31 execute In0

    • Tid 32-63 decode In0

    • Tid 64-95 fetch In0


GPU SM Pipeline

  • On Fermi, up to 48 warps in flight

  • For 100% SM occupancy, 48 cycle instruction latency can be hidden with zero pipeline stalls

  • Actual instruction latency suspected around 22 cycles – launch at least 704/1536 threads


GPU SM Pipeline Performance

  • 32 instructions executed per cycle

  • Core speed 1544MHz

  • 16 MPs on device

  • FMA (2 flops) x 1544MHz x 32 IPC x 16 MP = 1.58 TFLOPS

  • ATI is VLIW4, for almost 3 TFLOPS


GPU SM Pipeline

  • The programmer has little control over the CPU pipeline

  • The programmer has great control over the GPU pipeline, as expressed by the shape of thread execution

  • Branching over a small number of warps stalls the pipeline


Memory Bandwidth

  • AMD Phenom II: 14.6 GB/s

  • Intel Sandy Bridge Quad: 21.3 GB/s

  • Itanium 2 (NASA Columbia): 6.4 GB/s

  • NVIDIA GeForce GTX 580: 192 GB/s

    Rule of thumb: GPU has 10x memory bandwidth and 50x arithmetic throughput compared to CPU


GPU Memory Architecture

  • Memory is segmented into aligned 128-byte intervals

  • On a global memory I/O, the number of segments addressed by the threads in a warp is computed

  • Each segment is one memory transaction

  • In a coalesced r/w, each thread in the warp addresses a different 4-byte address in the same segment

  • Coalesced r/w means only 1 transaction per 128 bytes



  • For bandwidth-bound kernels, memory coalescing is the #1 performance priority

  • For 4-byte types, transactions are serviced for full warps

  • For 8-byte types, transactions are serviced for half warps

  • For 16-byte types (like float4 vectors in D3D), transactions are serviced for quarter warps

  • For larger types, penalties apply when loading from typed pointers

  • To r/w large structures (>16 bytes), spread the transactions over multiple threads, as sketched below
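As a concrete illustration of that last rule, here is a minimal CUDA sketch (not from the talk; the 32-byte record size, buffer layout, and kernel name are hypothetical) that stages a block's records through shared memory so that every global load is a consecutive 4-byte word per thread:

#define BLOCK 256
#define WORDS_PER_REC 8                     // one 32-byte record = 8 x 4-byte words

// The block loads its records as a flat array of 4-byte words (consecutive
// threads read consecutive words, so each 128-byte segment is one
// transaction), stages them in shared memory, then each thread works on one
// whole record. Bank conflicts in the second phase are ignored in this sketch.
__global__ void SumRecords(const float* in, float* out, int numRecords) {
    __shared__ float stage[BLOCK * WORDS_PER_REC];

    int firstRec = blockIdx.x * BLOCK;
    int words = min(BLOCK, numRecords - firstRec) * WORDS_PER_REC;

    // Thread i loads words i, i + BLOCK, i + 2*BLOCK, ... of this block's data.
    for (int w = threadIdx.x; w < words; w += BLOCK)
        stage[w] = in[firstRec * WORDS_PER_REC + w];
    __syncthreads();

    // Each thread now reads its own record from fast shared memory.
    int rec = firstRec + threadIdx.x;
    if (rec < numRecords) {
        float sum = 0.0f;
        for (int j = 0; j < WORDS_PER_REC; ++j)
            sum += stage[threadIdx.x * WORDS_PER_REC + j];
        out[rec] = sum;
    }
}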


GPU Memory Architecture

  • GTX 580 memory clock 1002 MHz

  • GDDR5 controller is quad-pumped

  • Memory controller is 384 bits (6x64) (48 byte)

  • 1002MHz x 48 byte x 4 = 192.3e9 byte/s

    Thread switching enables the rest of the warps on an SM to execute while some are waiting on memory


GPU Core Speed

  • 16 MPs

  • 1544 MHz shader speed

  • 32 IPC

  • 16 MPs x 1544 MHz x 32 IPC = 790e9 IPS

  • If each thread reads 4 bytes, 3162e9 byte/s

  • 3162e9 / 192.3e9 = 16.5 cycles/read


GPU Memory Bandwidth

  • On GTX 580, you need to do a fully coalesced memory op every 16 cycles to saturate the controller

  • The high arithmetic throughput is there to enable this!

  • Latency hiding averages this out – may either stream data in or load in all data, do all ops, write all data – still get max throughput if ratio is met


GPU Memory Bandwidth

  • GPU has expansive fields of ALUs to allow fast read-execute-write turnaround and see max performance in real problems

  • CPU has 1/10th memory bandwidth because it doesn’t have the arithmetic performance to do work on data even if it had more bandwidth


Worst Idea for Sparse Matrix

  • Consider a CSR encoded MxN sparse matrix

  • Launch M threads (one per row)

  • Each thread reads an interval from the Row array

  • Each thread dynamically loops over all non-zero values in the row, reads from the Col array, and uses that to index into the dense vector texture

  • Accumulate the products and write the result (this kernel is sketched below)
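A minimal CUDA sketch of the kernel just described (illustrative names, double precision, one thread per row). This is the anti-pattern; it is shown only to make the transaction counting on the next slides concrete:

__global__ void SpMV_RowPerThread(const int* row,     // M+1 CSR row offsets
                                  const int* col,     // column indices
                                  const double* val,  // non-zero values
                                  const double* vec,  // dense vector
                                  double* out,        // result, one per row
                                  int numRows) {
    int r = blockIdx.x * blockDim.x + threadIdx.x;
    if (r >= numRows) return;

    // Neighboring threads start their loops at row offsets that are usually
    // far apart, so these loads are badly uncoalesced, and the whole warp
    // waits for its longest row.
    double sum = 0.0;
    for (int i = row[r]; i < row[r + 1]; ++i)
        sum += val[i] * vec[col[i]];
    out[r] = sum;
}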


Worst Idea for Sparse Matrix

  • Consider memory transactions on GPU hardware

  • Threads 0-31 (computing rows 0-31) run concurrently

  • If average row density is 9 values, the warp reads over an interval of 32*9 = 288 values

  • 288 values span 10 segments

  • Memory throughput is 1/10th of peak


Worst Idea for Sparse Matrix

  • Threads 0-31 (computing rows 0-31) run concurrently

  • All threads take as long as the thread with the most work

  • If all threads in warp have rows with 5 values, except one thread with 40 values, all threads wait through 40 values

  • Runs at 1/8th efficiency!


Understand This

MPs are not coarse-grained parallel processors!

(They are exotic sequential processors)


Sean’s Sparse Matrix Performance

  • Utilize special encoding of sparse matrices to fully saturate the memory controller

  • For double precision, each matrix element is 12 bytes, each vector element is 8 bytes

  • How many dot-product components can be computed at 20 bytes/component?

  • On GTX 570 (peak bandwidth 141.5 GB/s) I see up to 197 GB/s of effective throughput. Thanks, texture cache! Incredible CG method speed



The GPGPU algorithm for everything



  • GPU is fast YAY

  • Your old code won’t work BOO

  • GPU is really hard to program

  • GPU is fairly easy to optimize

  • Throw away algorithms book

  • One algorithm to rule them all: scan



  • Not a callable routine – more like Batman’s utility belt

  • At its simplest, it adds up a sequence of numbers:

    • Inclusive scan transforms sequence:

      (3, 2, 1, 4, 5) -> (3, 5, 6, 10, 15)

    • Exclusive scan transforms sequence:

      (3, 2, 1, 4, 5) -> (0, 3, 5, 6, 10)
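A tiny sequential reference (plain host C++, compilable with nvcc or any C++ compiler) that reproduces both transforms, just to pin down the definitions before the parallel version:

#include <cstdio>

int main() {
    const int n = 5;
    int data[n] = { 3, 2, 1, 4, 5 };
    int inc[n], exc[n];

    int running = 0;
    for (int i = 0; i < n; ++i) {
        exc[i] = running;      // exclusive: sum of everything before i
        running += data[i];
        inc[i] = running;      // inclusive: sum up to and including i
    }

    for (int i = 0; i < n; ++i) printf("%d ", inc[i]);   // 3 5 6 10 15
    printf("\n");
    for (int i = 0; i < n; ++i) printf("%d ", exc[i]);   // 0 3 5 6 10
    printf("\n");
    return 0;
}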


// Basic 32-element scan in cs_5_0 HLSL

#define NUM_THREADS 32

#define sync GroupMemoryBarrierWithGroupSync

Buffer<uint> readbuf : register(t0);

RWBuffer<uint> writebuf : register(u0);

groupshared uint sharedArray[NUM_THREADS];

[numthreads(NUM_THREADS, 1, 1)]
void WarpScan(uint tid : SV_GroupIndex) {
    uint x = readbuf[tid];
    sharedArray[tid] = x;

    for(uint offset = 1; offset < NUM_THREADS; offset <<= 1) {
        uint left = (NUM_THREADS - 1) & (tid - offset);
        uint y = sharedArray[left];

        if(offset <= tid) x += y;
        sharedArray[tid] = x;
    }

    writebuf[tid] = x;
}




dcl_globalFlags refactoringAllowed

dcl_resource_buffer (uint,uint,uint,uint) t0

dcl_uav_typed_buffer (uint,uint,uint,uint) u0

dcl_input vThreadIDInGroupFlattened

dcl_temps 3

dcl_tgsm_structured g0, 4, 32

dcl_thread_group 32, 1, 1

ld_indexable(buffer)(uint,uint,uint,uint) r0.x, vThreadIDInGroupFlattened.xxxx, t0.xyzw

store_structured g0.x, vThreadIDInGroupFlattened.x, l(0), r0.x


iadd r1.xyzw, vThreadIDInGroupFlattened.xxxx, l(-1, -2, -4, -8)

and r1.xyzw, r1.xyzw, l(31, 31, 31, 31)

ld_structured r0.y, r1.x, l(0), g0.xxxx

iadd r0.y, r0.x, r0.y


uge r2.xyzw, vThreadIDInGroupFlattened.xxxx, l(1, 2, 4, 8)

movc r0.x, r2.x, r0.y, r0.x

store_structured g0.x, vThreadIDInGroupFlattened.x, l(0), r0.x


ld_structured r0.y, r1.y, l(0), g0.xxxx

iadd r0.y, r0.x, r0.y

movc r0.x, r2.y, r0.y, r0.x




store_structured g0.x, vThreadIDInGroupFlattened.x, l(0), r0.x


ld_structured r0.y, r1.z, l(0), g0.xxxx

iadd r0.y, r0.x, r0.y

movc r0.x, r2.z, r0.y, r0.x


store_structured g0.x, vThreadIDInGroupFlattened.x, l(0), r0.x


ld_structured r0.y, r1.w, l(0), g0.xxxx

iadd r0.y, r0.x, r0.y

movc r0.x, r2.w, r0.y, r0.x


store_structured g0.x, vThreadIDInGroupFlattened.x, l(0), r0.x


iadd r0.y, vThreadIDInGroupFlattened.x, l(-16)

and r0.y, r0.y, l(31)

ld_structured r0.y, r0.y, l(0), g0.xxxx

iadd r0.y, r0.x, r0.y

uge r0.z, vThreadIDInGroupFlattened.x, l(16)

movc r0.x, r0.z, r0.y, r0.x

store_uav_typed u0.xyzw, vThreadIDInGroupFlattened.xxxx, r0.xxxx


// Approximately 38 instruction slots used



  • Parallel scan is inefficient for adding numbers, yet critical for idiomatic GPU programming

  • Complex predicates allow many problems to be solved

  • Essential for load balancing jagged problems across simple threads

  • A well-used scan broadcasts information-dense values to all threads in the warp or block
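For comparison with the HLSL above, a CUDA analogue of the same 32-element warp scan. This is a sketch that relies on the Fermi-era warp-synchronous idiom (a single warp, a volatile shared-memory pointer); the kernel and buffer names are illustrative:

#define SCAN_THREADS 32

__global__ void WarpScan(const unsigned int* readbuf, unsigned int* writebuf) {
    // volatile keeps every intermediate result in shared memory, which the
    // warp-synchronous idiom depends on.
    __shared__ volatile unsigned int sharedArray[SCAN_THREADS];

    unsigned int tid = threadIdx.x;
    unsigned int x = readbuf[tid];
    sharedArray[tid] = x;

    // Same up-sweep as the HLSL: each step adds the value 'offset' lanes to
    // the left, if such a lane is in range.
    for (unsigned int offset = 1; offset < SCAN_THREADS; offset <<= 1) {
        unsigned int left = (SCAN_THREADS - 1) & (tid - offset);
        unsigned int y = sharedArray[left];
        if (offset <= tid) x += y;
        sharedArray[tid] = x;
    }
    writebuf[tid] = x;
}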


Segmented Scan

  • Sum from left-to-right within segments

  • Same as above code but with a modified predicate:


    for(uint offset = 1; offset < NUM_THREADS; offset <<= 1) {
        uint left = (NUM_THREADS - 1) & (tid - offset);
        uint y = sharedArray[left];

        if(offset <= delta) x += y;
        sharedArray[tid] = x;
    }



  • delta is distance from tid to start of segment


Segmented Scan

  • Values in the same segment are bracketed together below (the original slide marked segments with colors):

    ( [2 1 2] [0 3 4 5] [1] [2 1] [0 4 5 2 1] )

  • Segmented scan performs a complete scan within each segment:

    ( [2 3 5] [0 3 7 12] [1] [2 3] [0 4 9 11 12] )
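A host-side reference of the same example, where each segment boundary (the brackets above) is represented by a head flag; the values and flags below are just the example data:

#include <cstdio>

int main() {
    const int n = 15;
    int value[n] = { 2, 1, 2, 0, 3, 4, 5, 1, 2, 1, 0, 4, 5, 2, 1 };
    int head[n]  = { 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0 };

    int running = 0;
    for (int i = 0; i < n; ++i) {
        if (head[i]) running = 0;   // new segment: restart the sum
        running += value[i];
        printf("%d ", running);     // 2 3 5 0 3 7 12 1 2 3 0 4 9 11 12
    }
    printf("\n");
    return 0;
}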


Sparse Matrix

  • Sparse matrix * vector is a very obvious segmented scan problem

  • Each segment corresponds to one row of the matrix and holds one element per non-zero value in that row

  • Each element is the product of a non-zero element and its corresponding component from the dense vector


Sparse Matrix

  • Consider data in CSR (Compressed Sparse Row) format:

    • Row: (0 3 5 6 9 14)

    • Col: (5 4 3 1 2 3 4 6 3 1 2 1 5 4)

  • Index vec through col to get the vector value for each matrix value, and multiply it into the matrix values (a host-side sketch follows):

    • Mat*vec: (x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 x12 x13 x14)
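A host-side sketch of this gather step using the Row/Col arrays above; the matrix and vector contents are placeholders, since the slide leaves them symbolic:

#include <cstdio>

int main() {
    const int numRows = 5, nz = 14;
    int row[numRows + 1] = { 0, 3, 5, 6, 9, 14 };               // CSR row offsets
    int col[nz] = { 5, 4, 3, 1, 2, 3, 4, 6, 3, 1, 2, 1, 5, 4 }; // column indices

    double val[nz], vec[7];                                      // placeholder data
    for (int i = 0; i < nz; ++i) val[i] = 1.0 + i;               // stands in for x1..x14
    for (int j = 0; j < 7; ++j)  vec[j] = 0.5 * (j + 1);

    // Gather: multiply each matrix value by the vector component named by
    // its column index.
    double prod[nz];
    for (int i = 0; i < nz; ++i) prod[i] = val[i] * vec[col[i]];

    // Summing prod[] within each row (the segmented scan on the GPU) gives
    // the matrix-vector product.
    for (int r = 0; r < numRows; ++r) {
        double dot = 0.0;
        for (int i = row[r]; i < row[r + 1]; ++i) dot += prod[i];
        printf("y[%d] = %f\n", r, dot);
    }
    return 0;
}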


  • Run a segmented scan – the last value in each segment is the dot product of a sparse matrix row and the dense vector:

    (x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 x12 x13 x14)

    scans to

    (x1 x1+x2 x1+x2+x3 x4 x4+x5 x6 x7 x7+x8 x7+x8+x9 x10 x10+x11 x10+x11+x12 x10+x11+x12+x13 x10+x11+x12+x13+x14)


Sparse Matrix Challenges

  • Sparse matrix * vector should be bandwidth limited – each component requires only a multiply-and-add.

  • To achieve peak bandwidth, we need to issue a coalesced read every 16 cycles

  • DP is nerfed on the GeForce series – it runs at only 1/4th the speed of the same die in a Quadro/Tesla part, so extra efficiency during reduction is essential

  • Parallel scan has poor work efficiency

  • Matrix data not in a convenient order for processing multiple values per thread


Idiomatic GPU Programming

GPU architecture guides software design


Fermi SM Programming Environment

  • 16 MPs per device

  • Up to 1536 threads in flight

  • 32768 32bit registers

    • For 100% SM occupancy, only 20 regs per thread

    • More regs means lower occupancy

  • Up to 8 blocks per SM

  • Max block size 1024 (DX11 requirement)

    • 256 threads/block may give 100% occupancy on all architectures


Fermi SM Programming Environment

  • 48KB shared memory

  • Shared memory is backed by 32 32-bit banks

    • For 100% SM occupancy, only 32 bytes of shared memory per thread

  • Each thread in warp must access different bank of shared memory to avoid bank conflicts

  • N-way conflict takes N cycles to resolve
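A standard illustration of working around bank conflicts, not specific to the talk: pad a shared memory tile by one word so that column accesses hit 32 different banks (a sketch for a single 32x32 tile, launched as one 32x32 block):

#define TILE 32

__global__ void TransposeTile(const float* in, float* out) {
    // Without the +1 padding, reading a column (tile[x][y] for fixed y)
    // would hit the same bank 32 times: a 32-way conflict.
    __shared__ float tile[TILE][TILE + 1];

    int x = threadIdx.x, y = threadIdx.y;
    tile[y][x] = in[y * TILE + x];     // coalesced load, conflict-free store
    __syncthreads();
    out[y * TILE + x] = tile[x][y];    // column read, conflict-free thanks to padding
}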


Fermi SM Programming Environment

  • Shared memory is primary mechanism for inter-thread communication

  • Intra-warp communication requires only volatile shared mem pointer

  • Inter-warp communication requires __syncthreads() call

    • __syncthreads() flushes pipeline

  • Inter-block communication requires global memory access and __threadfence or new kernel launch


Parallel hierarchy

  • Prioritize computation:

    • Thread sequential (90%)

      • Fast sequential algorithms

      • Runs at high occupancy

      • Compute information-dense values and store in shared memory

    • Intra-warp communication (8%)

      • Slower parallel algorithms

      • Fast shared mem communication with volatile pointer

      • Runs at high occupancy – no pipeline issues


Parallel hierarchy (2)

  • Prioritize computation (2):

    • Inter-warp communication (1.9%)

      • Slower parallel algorithms

      • Sync between shared mem access requires pipeline flush

      • May include divergent branches (such as in multiscan)

    • Inter-block communication (0.1%)

      • If all blocks are running concurrently, __threadfence can sync, otherwise new kernel launch is required

      • Must share data through global memory


A simple model

  • Thread sequential work is ‘vertical’

  • Load multiple values per thread and process from top to bottom

  • Thread parallel work is ‘horizontal’

  • Combine information-dense values from vertical stage from left to right with scan


Sparse Matrix Optimization

Think like the machine


Sparse Matrix on CUDA


  • My open source SpMxV library

  • Performance 1.6-4.3x faster than CUSPARSE

  • Full pre-conditioned CG solver in a few weeks

  • Super fast radix sort included in next update

  • Hits 197 GB/s throughput, usually over 190GB/s for DP


Sparse Matrix on CUDA

  • Dense vector stored in 1D texture

    • GPU’s 768KB texture cache pushes sparse throughput over theoretical peak

  • Texture cache critical in graphics and is also available in GPGPU

  • Only bilinear filters in CUDA

  • All sampler states available in D3D11 (it’s better in many ways)


Sparse Matrix on CUDA

Texture cache misses cause poor performance

pwtk.mtx (wind tunnel)

Height = 217,918

nz = 11,634,424

nz / h = 53.389

Max throughput = 196 GB/s

Matrix is dense and well banded


pdb1HYS.mtx (Protein)

height = 36,417

nz = 4,344,765

nz / h = 119.306

Max throughput = 197 GB/s

Row density brings high throughput

scircuit.mtx (Circuit)

height = 170,998

nz = 958,936

nz / h = 5.608

Max throughput = 105 GB/s

High matrix bandwidth and low density cause texture cache misses, impairing performance


Reformat the Matrix

  • Uses special matrix encoding to accelerate scans by baking offsets and flags into unused top bits of col indices

  • SpMxV is performed thousands of times to solve CG problem – reformatting is slow, but is only done once, and doubles multiplication throughput


Strided Order vs Thread Order

Warps making multiple coalesced reads receive data in strided order


Threads don’t hold sequential values!

a0 a1 a2 a3 a4 a5 a6 b0

b1 b2 c0 c1 d0 d1 d2 d3

d4 d5 e0 e1 e2 e3 e4 f0

f1 f2 g0 g1 g2 g3 g4 g5


Transposed Format

Transpose each group’s data so that coalesced reads put values in thread order:

a0 a4 b1 d0 d4 e2 f1 g2

a1 a5 b2 d1 d5 e3 f2 g3

a2 a6 c0 d2 e0 e4 g0 g4

a3 b0 c1 d3 e1 f0 g1 g5

With data in thread order we can perform sequential scan within threads, then parallel scan between them
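A device-side sketch of that load, using the toy sizes from the example (WARP_SIZE = 8 and VALUES_PER_THREAD = 4; on real hardware the warp is 32 wide). It assumes the block's values were already stored transposed, as shown above:

#define WARP_SIZE 8           // 8 in this example, 32 on hardware
#define VALUES_PER_THREAD 4

// Each pass reads WARP_SIZE consecutive values, i.e. one coalesced
// transaction; after VALUES_PER_THREAD passes, lane 'tid' holds its own
// sequential values (a0 a1 a2 a3 for tid 0, a4 a5 a6 b0 for tid 1, ...).
__device__ void LoadThreadOrder(const double* blockValues, unsigned int tid,
                                double x[VALUES_PER_THREAD]) {
    for (int i = 0; i < VALUES_PER_THREAD; ++i)
        x[i] = blockValues[i * WARP_SIZE + tid];
}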


Locating scan buckets

tid 0: a0 a1 a2 a3

tid 1: a4 a5 a6 b0

tid 2: b1 b2 c0 c1

tid 3: d0 d1 d2 d3

tid 4: d4 d5 e0 e1

tid 5: e2 e3 e4 f0

tid 6: f1 f2 g0 g1

tid 7: g2 g3 g4 g5

Locate the start and end of each bucket (matrix row) within each thread

The first value of each bucket within a thread is marked as a FIRST value

The last value of each bucket within a thread is marked as a LAST value

Summing within threads with sequential scan is fast, provided each thread doesn’t need to calculate matrix geometry


Encoding scan buckets

Encode first and last value bits into column indices.

scanOffset is the position in shared memory in which to store the first “last” value for each thread. These are dot product fragments.

tid 0: TF FF FF FT   scanOffset = 0

tid 1: TF FF FT TT   scanOffset = 1

tid 2: TF FT TF FT   scanOffset = 3

tid 3: TF FF FF FT   scanOffset = 5

tid 4: TF FT TF FT   scanOffset = 6

tid 5: TF FF FT TT   scanOffset = 8

tid 6: TF FT TF FT   scanOffset = 10

tid 7: TF FF FF FT   scanOffset = 12
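One possible way to pack these flags into a 32-bit column index. The exact bit layout used by the talk's library is not shown in the slides, so the constants and function names below are purely illustrative:

#include <cstdint>

// Illustrative layout: bits 0-24 hold the column index, bit 30 the FIRST
// flag, bit 31 the LAST flag (leaving spare bits for other baked-in fields).
const uint32_t FIRST_FLAG = 1u << 30;
const uint32_t LAST_FLAG  = 1u << 31;
const uint32_t COL_MASK   = (1u << 25) - 1;

uint32_t PackColumn(uint32_t col, bool first, bool last) {   // host-side encode
    uint32_t packed = col & COL_MASK;
    if (first) packed |= FIRST_FLAG;
    if (last)  packed |= LAST_FLAG;
    return packed;
}

// Device-side decode: a couple of bit operations per value.
__device__ void UnpackColumn(uint32_t packed, uint32_t& col, bool& first, bool& last) {
    col   = packed & COL_MASK;
    first = (packed & FIRST_FLAG) != 0;
    last  = (packed & LAST_FLAG)  != 0;
}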


Sequential Scan Results

tid 0: a0 a0+a1 a0+a1+a2 a0+a1+a2+a3 s[0] = a0+a1+a2+a3

tid 1: a4 a4+a5 a4+a5+a6 b0 s[1] = a4+a5+a6 s[2] = b0

tid 2: b1 b1+b2 c0 c0+c1 s[3] = b1+b2 s[4] = c0+c1

tid 3: d0 d0+d1 d0+d1+d2 d0+d1+d2+d3 s[5] = d0+d1+d2+d3

tid 4: d4 d4+d5 e0 e0+e1 s[6] = d4+d5 s[7] = e0+e1

tid 5: e2 e2+e3 e2+e3+e4 f0 s[8] = e2+e3+e4 s[9] = f0

tid 6: f1 f1+f2 g0 g0+g1 s[10] = f1+f2 s[11] = g0+g1

tid 7: g2 g2+g3 g2+g3+g4 g2+g3+g4+g5 s[12] = g2+g3+g4+g5

  • Stream in the data and compute the products

  • Use a sequential segmented scan (i.e. just add the current value to the running total)

  • Write to shared memory at scanOffset, and increment scanOffset, if the LAST flag is set

  • Clear the running total before adding if the FIRST flag is set (see the device-side sketch below)
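A device-side sketch of this sequential pass. VALUES_PER_THREAD and the unpacked flag arrays follow the earlier illustrative sketches, and sums[] stands for the shared array of partial dot products; none of these names are taken from the talk's library:

#define VALUES_PER_THREAD 4

// Each thread owns VALUES_PER_THREAD products in thread order plus their
// FIRST/LAST flags. It keeps a running total, restarting at FIRST, and
// writes a partial dot product to shared memory at every LAST.
__device__ void SequentialSegScan(const double x[VALUES_PER_THREAD],
                                  const bool first[VALUES_PER_THREAD],
                                  const bool last[VALUES_PER_THREAD],
                                  volatile double* sums,
                                  unsigned int scanOffset) {
    double total = 0.0;
    for (int i = 0; i < VALUES_PER_THREAD; ++i) {
        if (first[i]) total = 0.0;                 // a row starts in this thread
        total += x[i];
        if (last[i]) sums[scanOffset++] = total;   // partial dot product
    }
}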


Parallel Scan

  • A parallel segmented scan merges partial dot products

  • There are at most 2 * WARP_SIZE partial dot products, so each thread handles 2 elements in the parallel scan: tid and WARP_SIZE + tid

  • Bake the delta offsets into unused bits of two of the column indices (recall the bucket-encoding slides above); the merge step is sketched below
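A warp-synchronous sketch of that merge, with WARP_SIZE = 8 to match the running example. The predicate "offset <= delta" is the same one used in the segmented scan code earlier; the surrounding structure is illustrative rather than the talk's actual kernel:

#define WARP_SIZE 8                       // 8 in the example, 32 on hardware
#define NUM_PARTIALS (2 * WARP_SIZE)

// Segmented Hillis-Steele scan over the up-to-2*WARP_SIZE partial dot
// products in shared memory. Thread tid owns elements tid and
// WARP_SIZE + tid; deltaX/deltaY are their distances from the start of
// their segments, recovered from the baked column-index bits.
__device__ void ParallelSegScan(volatile double* s, unsigned int tid,
                                unsigned int deltaX, unsigned int deltaY) {
    double x = s[tid];
    double y = s[WARP_SIZE + tid];

    for (unsigned int offset = 1; offset < NUM_PARTIALS; offset <<= 1) {
        // Reads of the previous step's values happen before any lane's store
        // (lockstep warp execution); the delta predicate keeps additions from
        // crossing a segment (row) boundary.
        double leftX = (offset <= deltaX) ? s[tid - offset] : 0.0;
        double leftY = (offset <= deltaY) ? s[WARP_SIZE + tid - offset] : 0.0;
        x += leftX;
        y += leftY;
        s[tid] = x;
        s[WARP_SIZE + tid] = y;
    }
}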


Parallel Scan

For convenience, let:

A0 = a0+a1+a2+a3 A1 = a4+a5+a6 B0 = b0

B1 = b1+b2 C0 = c0+c1 D0 = d0+d1+d2+d3

D1 = d4+d5 E0 = e0+e1 E1 = e2+e3+e4

F0 = f0 F1 = f1+f2 G0 = g0+g1

G1 = g2+g3+g4+g5

2 * WARP_SIZE sharedArray:

s = [ A0 A1 B0 B1 C0 D0 D1 E0 E1 F0 F1 G0 G1 XX XX XX ]

tid 0: A0 E1   deltaX = 0 deltaY = 1

tid 1: A1 F0   deltaX = 1 deltaY = 0

tid 2: B0 F1   deltaX = 0 deltaY = 1

tid 3: B1 G0   deltaX = 1 deltaY = 0

tid 4: C0 G1   deltaX = 0 deltaY = 1

tid 5: D0 XX   deltaX = 0 deltaY = 0

tid 6: D1 XX   deltaX = 1 deltaY = 0

tid 7: E0 XX   deltaX = 0 deltaY = 0


Parallel Scan

  • After parallel scan, the shared array holds:

s[0] = A0

s[1] = A0+A1

s[2] = B0

s[3] = B0+B1

s[4] = C0

s[5] = D0

s[6] = D0+D1

s[7] = E0

s[8] = E0+E1

s[9] = F0

s[10] = F0+F1

s[11] = G0

s[12] = G0+G1

s[13] = XX

s[14] = XX

s[15] = XX

  • The completed dot products are at indices 1, 3, 4, 6, 8, 10, and 12

  • Bake these indices into the unused high bits of column indices

  • Each thread writes up to 1 value to global memory


The final pass

  • Blocks are constant size

  • The last row in each block may spill over into the next block, causing two or more partial dot products to be written to global memory

  • These partial results are summed by a simple final pass, sketched below
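A sketch of such a final pass under an assumed layout: each block leaves behind the row index and partial sum of its last, possibly spilled row, so spills for the same row sit next to each other. The names are not from the talk's library:

__global__ void FinalPass(const int* spillRow, const double* spillValue,
                          int numSpills, double* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numSpills) return;

    // Spills for the same row are adjacent; only the first entry of each run
    // does the summing, so no atomics are needed.
    if (i > 0 && spillRow[i] == spillRow[i - 1]) return;

    double sum = 0.0;
    for (int j = i; j < numSpills && spillRow[j] == spillRow[i]; ++j)
        sum += spillValue[j];
    y[spillRow[i]] += sum;
}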


Performance Analysis

  • Because each thread writes no more than 1 partial dot product, each group cannot process more than WARP_SIZE (32) unique rows

  • VALUES_PER_THREAD should be increased as high as the mean number of values per row to maximize fast sequential processing


Performance Analysis

  • All global memory loads are coalesced

  • Most of the sum is computed with efficient sequential segmented scan as opposed to inefficient parallel scan

  • There is no branching in the kernel when computing the sum

  • Segmented scan flags and offsets are baked into col indices to accelerate inner loops

  • Low register and shared mem usage delivers high SM occupancy


Performance Analysis

  • The exceptionally high memory bandwidth of GPUs makes them the obvious choice for iterative algorithms like CG

  • Switched fabric in clusters results in high latency, slowing process and reducing effectiveness of parallelism

  • With a GTX 580, expect 12.5 billion double-precision dot-product components per second

  • A 50 million element DP matrix can be multiplied 250 times per second with a single card


Optimization Wrap-up

  • GPUs are not coarse-grained parallel systems

  • Prioritize for coalesced r/w

  • Favor sequential operations

  • Use intra-warp reduction

  • Maintain high SM occupancy to avoid pipeline stalls by reducing register usage and warp-divergent branches

  • Keep optimizing until 1/16th of instructions are global memory ops


GPU is Enabling Tech

  • Low cost

  • Fast

  • Simple deployment

  • Energy efficient

  • Makes rendering simple

  • Low latency encourages interactivity, bringing researchers closer to their data and models


GPU is Enabling Tech

  • No way for clusters with conventional nodes to compete

  • Clusters with obsolete hardware (like Columbia) will get crushed by a single GPU in iterative processes

  • On-die GPU integration (Fusion) will soon support GPGPU computation in system memory for low-cost mobile systems

Thank you