Patterns for Programming Shared Memory - PowerPoint PPT Presentation

Presentation Transcript
Patterns for Programming Shared Memory

Patterns for Shared Memory Programming

Keep Abstraction Level High
  • Temptation: ad-hoc parallelization
    • Add tasks or parallel loops all over the code
    • Discover data races/deadlocks, fix them one-by-one
  • Problem:
    • Complexity quickly adds up
    • Easy to get cornered by deadlocks, atomicity violations, and data races
    • And these bugs are often hard to expose
  • Alternatives?
    • Use well-understood, simple high-level patterns

Book Recommendation: Parallel Programming with Microsoft .NET
  • Covers Common Patterns Systematically
  • Free download at http://parallelpatterns.codeplex.com

Sharing State Safely
  • We discuss two types of patterns in this unit
    • Not an exhaustive list; see the book for more

Architectural Patterns (localize shared state):
  • Producer-Consumer
  • Pipeline
  • Worklist

Replication Patterns (make copies of shared state):
  • Immutable Data
  • Double Buffering

Part I: Architectural Patterns

Architectural Patterns
  • Visualize “factory”: each worker works on a separate item
  • Items travel between workers by means of buffers

Example: Producer-Consumer

[Diagram: several Producers add items to a shared Buffer; several Consumers remove them]

  • One or more producers add items to the buffer
  • One or more consumers remove items from the buffer

Example: Producer-Consumer

[Diagram: Producers → Buffer → Consumers]

No data races!

  • Item is local to Producer before insertion into buffer
  • Item is local to Consumer after removal from buffer
  • What about the buffer?
    • Buffer is thread-safe
    • Blocks when full/empty
    • Not trivial to implement
    • More about this in Unit 3
    • For now, use BlockingCollection

Producer-Consumer Code

int buffersize = 1024;
int num_producers = 4;
int num_consumers = 4;

// create bounded buffer
var buffer = new BlockingCollection<Item>(buffersize);

// start consumers
for (int i = 0; i < num_consumers; i++)
    Task.Factory.StartNew(() => new Consumer(buffer).Run());

// start producers
for (int i = 0; i < num_producers; i++)
    Task.Factory.StartNew(() => new Producer(buffer).Run());

Producer Code

class Producer
{
    public Producer(BlockingCollection<Item> buffer)
    {
        this.buffer = buffer;
    }

    public void Run()
    {
        while (!Done())
        {
            Item i = Produce();
            buffer.Add(i);
        }
    }

    private BlockingCollection<Item> buffer;

    private Item Produce() { … }
    private bool Done() { … }
}

Consumer Code

class Consumer
{
    public Consumer(BlockingCollection<Item> buffer)
    {
        this.buffer = buffer;
    }

    private BlockingCollection<Item> buffer;

    public void Run()
    {
        foreach (Item i in buffer.GetConsumingEnumerable())
            Consume(i);
    }

    private void Consume(Item item)
    {
        ...
    }
}
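Assembled into one runnable program (a sketch: the producer and consumer bodies below are illustrative stand-ins that sum the integers 0–9, not the slides' elided Item logic):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ProducerConsumerDemo
{
    static void Main()
    {
        // bounded buffer: Add blocks when 4 items are already queued
        var buffer = new BlockingCollection<int>(boundedCapacity: 4);

        // producer: illustrative body producing 0..9, then signalling completion
        var producer = Task.Run(() => {
            for (int i = 0; i < 10; i++)
                buffer.Add(i);               // blocks while the buffer is full
            buffer.CompleteAdding();         // consumer's enumerable then finishes
        });

        int sum = 0;
        // consumer: GetConsumingEnumerable blocks until items arrive
        var consumer = Task.Run(() => {
            foreach (int i in buffer.GetConsumingEnumerable())
                sum += i;
        });

        Task.WaitAll(producer, consumer);
        Console.WriteLine(sum);              // 45
    }
}
```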

Pipeline Pattern
  • Generalization of Producer-Consumer
    • One or more workers per stage
    • First stage = Producer
    • Last stage = Consumer
    • Middle stages consume and produce
    • No Data Races: Data is local to workers

[Diagram: Stage 1 Workers → Buffer → Stage 2 Workers → Buffer → Stage 3 Workers]
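A minimal runnable sketch of the pattern, assuming three single-worker stages connected by two bounded BlockingCollections (the squaring transform in stage 2 is illustrative); CompleteAdding is how each stage tells the next that no more items are coming:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class PipelineSketch
{
    static void Main()
    {
        var stage1to2 = new BlockingCollection<int>(boundedCapacity: 8);
        var stage2to3 = new BlockingCollection<int>(boundedCapacity: 8);

        // Stage 1: pure producer
        var t1 = Task.Run(() => {
            for (int i = 0; i < 10; i++) stage1to2.Add(i);
            stage1to2.CompleteAdding();   // signal downstream: no more items
        });

        // Stage 2: consumes from one buffer, produces into the next
        var t2 = Task.Run(() => {
            foreach (var item in stage1to2.GetConsumingEnumerable())
                stage2to3.Add(item * item);
            stage2to3.CompleteAdding();
        });

        // Stage 3: pure consumer
        var t3 = Task.Run(() => {
            foreach (var item in stage2to3.GetConsumingEnumerable())
                Console.WriteLine(item);
        });

        Task.WaitAll(t1, t2, t3);
    }
}
```

Items stay local to one worker at a time, so there are no data races; the buffers carry them between stages.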

Worklist Pattern
  • Worklist contains items to process
    • Workers grab one item at a time
    • Workers may add items back to worklist
    • No data races: items are local to workers

[Diagram: a shared Worklist feeding several Workers, which may add items back]

Worklist Code (1/2)
BlockingCollection<Item> worklist;
CancellationTokenSource cts;
int itemcount;

public void Run()
{
    int num_workers = 4;

    // create worklist, filled with initial work
    worklist = new BlockingCollection<Item>(
        new ConcurrentQueue<Item>(GetInitialWork()));

    cts = new CancellationTokenSource();
    itemcount = worklist.Count;

    for (int i = 0; i < num_workers; i++)
        Task.Factory.StartNew(RunWorker);
}

IEnumerable<Item> GetInitialWork() { … }

Worklist Code (2/2)
BlockingCollection<Item> worklist;
CancellationTokenSource cts;
int itemcount;

public void RunWorker() {
    try {
        do {
            Item i = worklist.Take(cts.Token);
            // blocks until item available or cancelled
            Process(i);
            // exit loop if no more items left
        } while (Interlocked.Decrement(ref itemcount) > 0);
    } finally {
        if (!cts.IsCancellationRequested)
            cts.Cancel();
    }
}

public void Process(Item i) { . . . }
// may call AddWork() to add more work to worklist

public void AddWork(Item item) {
    Interlocked.Increment(ref itemcount);
    worklist.Add(item);
}

Application Architecture May Combine Patterns

  • “Stages” arranged in a graph and may have
    • Single worker (only inter-stage parallelism)
    • Multiple workers (also intra-stage parallelism)
  • No data races: items local to worker
  • Buffers may have various ordering characteristics
    • ConcurrentQueue = FIFO
    • ConcurrentStack = LIFO
    • ConcurrentBag = unordered

[Diagram: stage workers (Stage A, X, Y, D) connected by buffers in a graph]
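The ordering of a BlockingCollection follows the collection it wraps; a small sketch (the item values are illustrative):

```csharp
using System;
using System.Collections.Concurrent;

class OrderingSketch
{
    static void Main()
    {
        // LIFO buffer: back the BlockingCollection with a ConcurrentStack
        var lifo = new BlockingCollection<int>(new ConcurrentStack<int>());
        lifo.Add(1); lifo.Add(2); lifo.Add(3);
        Console.WriteLine(lifo.Take());   // 3: most recently added comes out first

        // FIFO buffer: the default backing collection is a ConcurrentQueue
        var fifo = new BlockingCollection<int>();
        fifo.Add(1); fifo.Add(2); fifo.Add(3);
        Console.WriteLine(fifo.Take());   // 1: oldest item comes out first
    }
}
```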

Provisioning
  • How to choose the number of workers?
    • Single worker
      • If the ordering of items matters
    • Workers are CPU-bound
      • Cannot utilize more than one per processor: num_workers = Environment.ProcessorCount
    • Workers block in synchronous IO
      • Can potentially utilize many more
  • How to choose the buffer size?
    • Too small -> forces too much context switching
    • Too big -> uses an inappropriate amount of memory

Manual Provisioning

BlockingCollection<Item> buffer;

class Item { … }

void Consume(Item i) { … }

public void FixedConsumers() {
    int num_workers = Environment.ProcessorCount;
    for (int i = 0; i < num_workers; i++)
        Task.Factory.StartNew(() => {
            foreach (Item item in buffer.GetConsumingEnumerable())
                Consume(item);
        });
}

Automatic Provisioning

BlockingCollection<Item> buffer;

class Item { … }

void Consume(Item i) { … }

public void FlexibleConsumers()
{
    Parallel.ForEach(
        new BlockingCollectionPartitioner<Item>(buffer),
        Consume);
}

private class BlockingCollectionPartitioner<T> : Partitioner<T>
{
    // see http://blogs.msdn.com/b/pfxteam/archive/2010/04/06/9990420.aspx
}

Part II: Replication Patterns

Replication Patterns
  • Idea: Many workers can work on the “same” item if we make copies

Replication Patterns (make copies of shared state):
  • Immutable Data
  • Double Buffering

Immutability
  • Remember: concurrent reads do not conflict
  • Idea: never write to shared data
    • All shared data is immutable (read only)
    • To modify data, must make a fresh copy first
  • Example: strings
    • Strings are immutable!
    • Never need to worry about concurrent access of strings.

Build Your Own Immutable Objects
  • Mutable list: changes the original

interface IMutableSimpleList<T>
{
    int Count { get; }            // read-only property
    T this[int index] { get; }    // read-only indexer
    void Add(T item);             // mutator method
}

  • Immutable list: never changes the original; copies on mutation

interface IImmutableSimpleList<T>
{
    int Count { get; }                    // read-only property
    T this[int index] { get; }            // read-only indexer
    IImmutableSimpleList<T> Add(T item);  // read-only method
}

Patterns for Shared Memory Programming

Copy on Write

class SimpleImmutableList<T> : IImmutableSimpleList<T>
{
    private T[] _items = new T[0];

    public SimpleImmutableList() { }

    // private constructor used by Add to wrap the freshly copied array
    private SimpleImmutableList(T[] items) { _items = items; }

    public int Count { get { return _items.Length; } }

    public T this[int index] { get { return _items[index]; } }

    public IImmutableSimpleList<T> Add(T item)
    {
        T[] items = new T[_items.Length + 1];
        Array.Copy(_items, items, _items.Length);
        items[items.Length - 1] = item;
        return new SimpleImmutableList<T>(items);
    }
}
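A runnable sketch of the copy-on-write idea, specialized to int so it stays self-contained (ImmutableIntList is an illustrative name, not from the slides):

```csharp
using System;

// A copy-on-write list: Add returns a new list and never touches the original,
// so concurrent readers of an existing instance can never observe a write.
class ImmutableIntList
{
    private readonly int[] _items;
    public ImmutableIntList() : this(new int[0]) { }
    private ImmutableIntList(int[] items) { _items = items; }
    public int Count { get { return _items.Length; } }
    public int this[int i] { get { return _items[i]; } }

    public ImmutableIntList Add(int item)
    {
        var items = new int[_items.Length + 1];
        Array.Copy(_items, items, _items.Length);
        items[items.Length - 1] = item;
        return new ImmutableIntList(items);   // original untouched
    }
}

class Demo
{
    static void Main()
    {
        var empty = new ImmutableIntList();
        var one = empty.Add(42);
        Console.WriteLine(empty.Count);   // 0: the original is unchanged
        Console.WriteLine(one.Count);     // 1
        Console.WriteLine(one[0]);        // 42
    }
}
```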

Example: Game

// GOAL: draw and update in parallel
void GameLoop()
{
    while (!Done()) {
        DrawAllCells();
        UpdateAllCells();
    }
}

class GameObject {
    public void Draw()
    { // ... reads state
    }

    public void Update()
    { // ... modifies state
    }
}

GameObject[] objects = new GameObject[numobjects];

void DrawAllCells()
{
    foreach (GameObject g in objects)
        g.Draw();
}

void UpdateAllCells()
{
    foreach (GameObject g in objects)
        g.Update();
}

// incorrect parallelization:
// data race between draw and update
void ParallelGameLoop()
{
    while (!Done())
        Parallel.Invoke(DrawAllCells,
                        UpdateAllCells);
}

Can we use immutability to fix this?

Example: Game

class GameObject {
    public void Draw()
    { // ... reads state
    }

    public GameObject ImmutableUpdate()
    { // ... return updated copy
    }
}

GameObject[] objects = new GameObject[numobjects];

void DrawAllCells()
{
    foreach (GameObject g in objects)
        g.Draw();
}

GameObject[] UpdateAllCells()
{
    var newobjects = new GameObject[numobjects];
    for (int i = 0; i < numobjects; i++)
        newobjects[i] = objects[i].ImmutableUpdate();
    return newobjects;
}

// correct parallelization of loop
void ParallelGameLoop()
{
    while (!Done())
    {
        GameObject[] newarray = null;
        Parallel.Invoke(() => DrawAllCells(),
                        () => newarray = UpdateAllCells());
        objects = newarray;
    }
}

Correct parallelization.

Can we do this without allocating a new array every iteration?

Trick: Double Buffering

// correct parallelization of loop
void ParallelGameLoop()
{
    int bufferIdx = 0;
    while (!Done())
    {
        Parallel.Invoke(
            () => DrawAllCells(bufferIdx),
            () => UpdateAllCells(bufferIdx)
        );
        bufferIdx = 1 - bufferIdx;
    }
}

class GameObject {
    public void Draw()
    { // ... reads state
    }

    public GameObject ImmutableUpdate()
    { // ...
    }
}

GameObject[,] objects = new GameObject[2, numobjects];

void DrawAllCells(int bufferIdx)
{
    for (int i = 0; i < numobjects; i++)
        objects[bufferIdx, i].Draw();
}

void UpdateAllCells(int bufferIdx)
{
    for (int i = 0; i < numobjects; i++)
        objects[1 - bufferIdx, i] = objects[bufferIdx, i].ImmutableUpdate();
}
