Patterns for Programming Shared Memory

Patterns for Shared Memory Programming



Keep Abstraction Level High

  • Temptation: ad-hoc parallelization

    • Add tasks or parallel loops all over the code

    • Discover data races/deadlocks, fix them one-by-one

  • Problem:

    • Complexity quickly adds up

    • Easy to get cornered by deadlocks, atomicity violations, and data races

    • And these bugs are often hard to expose

  • Alternatives?

    • Use well-understood, simple high-level patterns


Book Recommendation: Parallel Programming with Microsoft .NET

  • Covers Common Patterns Systematically

  • Free download at

    http://parallelpatterns.codeplex.com


Sharing State Safely

  • We discuss two types of patterns in this unit

    • Not an exhaustive list; see the book for more

Architectural Patterns (localize shared state):

  • Producer-Consumer
  • Pipeline
  • Worklist

Replication Patterns (make copies of shared state):

  • Immutable Data
  • Double Buffering


Part I

Architectural Patterns


Architectural Patterns

  • Visualize “factory”: each worker works on a separate item

  • Items travel between workers by means of buffers

[Diagram: Architectural Patterns (localize shared state): Producer-Consumer, Pipeline, Worklist]


Example: Producer-Consumer

  • One or more producers add items to the buffer
  • One or more consumers remove items from the buffer

[Diagram: several Producers feed a shared Buffer; several Consumers drain it]


Example: Producer-Consumer

No data races!

  • Item is local to Producer before insertion into the buffer
  • Item is local to Consumer after removal from the buffer
  • What about the buffer?

    • Buffer is thread-safe
    • Blocks when full/empty
    • Not trivial to implement (more about this in Unit 3)
    • For now, use BlockingCollection<T>

[Diagram: several Producers feed a shared Buffer; several Consumers drain it]


Producer-Consumer Code

int buffersize = 1024;
int num_producers = 4;
int num_consumers = 4;

// create bounded buffer
var buffer = new BlockingCollection<Item>(buffersize);

// start consumers
for (int i = 0; i < num_consumers; i++)
    Task.Factory.StartNew(() => new Consumer(buffer).Run());

// start producers
for (int i = 0; i < num_producers; i++)
    Task.Factory.StartNew(() => new Producer(buffer).Run());


Producer Code

class Producer
{
    public Producer(BlockingCollection<Item> buffer)
    {
        this.buffer = buffer;
    }

    public void Run()
    {
        while (!Done())
        {
            Item i = Produce();
            buffer.Add(i);
        }
    }

    private BlockingCollection<Item> buffer;

    private Item Produce() { … }
    private bool Done() { … }
}


Consumer Code

class Consumer
{
    public Consumer(BlockingCollection<Item> buffer)
    {
        this.buffer = buffer;
    }

    private BlockingCollection<Item> buffer;

    public void Run()
    {
        foreach (Item i in buffer.GetConsumingEnumerable())
            Consume(i);
    }

    private void Consume(Item item)
    {
        ...
    }
}


Pipeline Pattern

  • Generalization of Producer-Consumer

    • One or more workers per stage

    • First stage = Producer

    • Last stage = Consumer

    • Middle stages consume and produce

    • No Data Races: Data is local to workers

[Diagram: Stage 1 Workers → Buffer → Stage 2 Workers → Buffer → Stage 3 Workers]
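The slides give no code for the pipeline pattern itself, so here is a minimal sketch of a three-stage pipeline built from BlockingCollection buffers, one worker task per stage. The int items, the squaring middle stage, and the names (PipelineDemo, Run) are illustrative assumptions, not from the deck:

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

static class PipelineDemo
{
    // Three-stage pipeline: generate -> square -> collect.
    public static int[] Run(int n)
    {
        var stage1to2 = new BlockingCollection<int>(boundedCapacity: 8);
        var stage2to3 = new BlockingCollection<int>(boundedCapacity: 8);
        var results = new ConcurrentQueue<int>();

        // Stage 1 (producer): emit 1..n, then signal completion downstream.
        var t1 = Task.Run(() =>
        {
            for (int i = 1; i <= n; i++) stage1to2.Add(i);
            stage1to2.CompleteAdding();
        });

        // Stage 2 (middle stage): consumes from one buffer, produces into the next.
        var t2 = Task.Run(() =>
        {
            foreach (int i in stage1to2.GetConsumingEnumerable())
                stage2to3.Add(i * i);
            stage2to3.CompleteAdding();
        });

        // Stage 3 (consumer): drain the final buffer.
        var t3 = Task.Run(() =>
        {
            foreach (int i in stage2to3.GetConsumingEnumerable())
                results.Enqueue(i);
        });

        Task.WaitAll(t1, t2, t3);
        return results.ToArray();
    }

    static void Main()
    {
        Console.WriteLine(string.Join(",", Run(5)));  // 1,4,9,16,25
    }
}
```

With one worker per stage and the default FIFO buffers, item order is preserved end to end; adding workers to a stage trades that ordering for intra-stage parallelism.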


Worklist Pattern

  • Worklist contains items to process

    • Workers grab one item at a time

    • Workers may add items back to worklist

    • No data races: items are local to workers

[Diagram: a shared Worklist feeds several Workers; workers may add items back]


Worklist Code (1/2)

BlockingCollection<Item> worklist;
CancellationTokenSource cts;
int itemcount;

public void Run()
{
    int num_workers = 4;

    // create worklist, filled with initial work
    worklist = new BlockingCollection<Item>(
        new ConcurrentQueue<Item>(GetInitialWork()));

    cts = new CancellationTokenSource();
    itemcount = worklist.Count();

    for (int i = 0; i < num_workers; i++)
        Task.Factory.StartNew(RunWorker);
}

IEnumerable<Item> GetInitialWork() { … }


Worklist Code (2/2)

BlockingCollection<Item> worklist;
CancellationTokenSource cts;
int itemcount;

public void RunWorker()
{
    try
    {
        do
        {
            Item i = worklist.Take(cts.Token);
            // blocks until item available or cancelled
            Process(i);
            // exit loop if no more items left
        } while (Interlocked.Decrement(ref itemcount) > 0);
    }
    finally
    {
        if (!cts.IsCancellationRequested)
            cts.Cancel();
    }
}

public void Process(Item i) { . . . }
// may call AddWork() to add more work to worklist

public void AddWork(Item item)
{
    Interlocked.Increment(ref itemcount);
    worklist.Add(item);
}


Application Architecture May Combine Patterns


  • “Stages” are arranged in a graph and may have

    • A single worker (only inter-stage parallelism)
    • Multiple workers (also intra-stage parallelism)

  • No data races: items are local to a worker
  • Buffers may have various ordering characteristics

    • ConcurrentQueue = FIFO
    • ConcurrentStack = LIFO
    • ConcurrentBag = unordered
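The ordering choices above can be seen directly by swapping the backing collection passed to the BlockingCollection<T> constructor. A small sketch (the values and class name are illustrative):

```csharp
using System;
using System.Collections.Concurrent;

static class OrderingDemo
{
    static void Main()
    {
        // Default backing store is ConcurrentQueue<T>: FIFO.
        var fifo = new BlockingCollection<int>();
        fifo.Add(1); fifo.Add(2); fifo.Add(3);
        Console.WriteLine(fifo.Take());   // 1: oldest item first

        // ConcurrentStack<T> backing store: LIFO.
        var lifo = new BlockingCollection<int>(new ConcurrentStack<int>());
        lifo.Add(1); lifo.Add(2); lifo.Add(3);
        Console.WriteLine(lifo.Take());   // 3: newest item first

        // ConcurrentBag<T> backing store: no ordering guarantee.
        var bag = new BlockingCollection<int>(new ConcurrentBag<int>());
        bag.Add(1); bag.Add(2); bag.Add(3);
        Console.WriteLine(bag.Take());    // some item; order unspecified
    }
}
```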

[Diagram: stages A, X, Y, 2, and D arranged in a graph, connected by buffers; some stages have multiple workers]

Provisioning

  • How to choose the number of workers?

    • Single worker

      • If the ordering of items matters

    • Workers are CPU-bound

      • Cannot utilize more than one worker per processor:
        num_workers = Environment.ProcessorCount

    • Workers block in synchronous IO

      • Can potentially utilize many more

  • How to choose the buffer size?

    • Too small -> forces too much context switching
    • Too big -> uses an inappropriate amount of memory


Manual Provisioning

BlockingCollection<Item> buffer;
class Item { … }
void Consume(Item i) { … }

public void FixedConsumers()
{
    int num_workers = Environment.ProcessorCount;
    for (int i = 0; i < num_workers; i++)
        Task.Factory.StartNew(() =>
        {
            foreach (Item item in buffer.GetConsumingEnumerable())
                Consume(item);
        });
}


Automatic Provisioning

BlockingCollection<Item> buffer;
class Item { … }
void Consume(Item i) { … }

public void FlexibleConsumers()
{
    Parallel.ForEach(
        new BlockingCollectionPartitioner<Item>(buffer),
        Consume);
}

private class BlockingCollectionPartitioner<T> : Partitioner<T>
{
    // see http://blogs.msdn.com/b/pfxteam/archive/2010/04/06/9990420.aspx
}


Part II

Replication Patterns


Replication Patterns

  • Idea: Many workers can work on the “same” item if we make copies

[Diagram: Replication Patterns (make copies of shared state): Immutable Data, Double Buffering]


Immutability

  • Remember: concurrent reads do not conflict

  • Idea: never write to shared data

    • All shared data is immutable (read only)

    • To modify data, must make a fresh copy first

  • Example: strings

    • Strings are immutable!

    • Never need to worry about concurrent access to strings.
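Because strings are immutable, operations that appear to modify a string actually return a fresh copy, which is why concurrent readers can never observe a string changing underneath them. A small illustrative sketch (the values are arbitrary):

```csharp
using System;

static class StringDemo
{
    static void Main()
    {
        string s = "shared";
        // Replace does not modify s; it returns a brand-new string.
        string t = s.Replace('s', 'S');
        Console.WriteLine(s);   // shared  (original unchanged)
        Console.WriteLine(t);   // Shared  (fresh copy)
    }
}
```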


Build Your Own Immutable Objects

  • Mutable list: changes original

    interface IMutableSimpleList<T>
    {
        int Count { get; }          // read-only property
        T this[int index] { get; }  // read-only indexer
        void Add(T item);           // mutator method
    }

  • Immutable list: never changes original, copies on mutation

    interface IImmutableSimpleList<T>
    {
        int Count { get; }                    // read-only property
        T this[int index] { get; }            // read-only indexer
        IImmutableSimpleList<T> Add(T item);  // read-only method
    }


Copy on Write

class SimpleImmutableList<T> : IImmutableSimpleList<T>
{
    private T[] _items = new T[0];

    public SimpleImmutableList() { }
    private SimpleImmutableList(T[] items) { _items = items; }  // used by Add

    public int Count { get { return _items.Length; } }
    public T this[int index] { get { return _items[index]; } }

    public IImmutableSimpleList<T> Add(T item)
    {
        T[] items = new T[_items.Length + 1];
        Array.Copy(_items, items, _items.Length);
        items[items.Length - 1] = item;
        return new SimpleImmutableList<T>(items);
    }
}
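A usage sketch for the copy-on-write list above: Add returns a new list and leaves the original untouched, so any number of threads can keep reading the old version safely. To keep the snippet self-contained it repeats the list class without the interface, and the two constructors are an assumption needed to make Add compile:

```csharp
using System;

// Compact copy of the slide's copy-on-write list.
class SimpleImmutableList<T>
{
    private readonly T[] _items;

    public SimpleImmutableList() : this(new T[0]) { }
    private SimpleImmutableList(T[] items) { _items = items; }

    public int Count { get { return _items.Length; } }
    public T this[int index] { get { return _items[index]; } }

    // Copies the array, appends the item, and wraps the copy in a new list.
    public SimpleImmutableList<T> Add(T item)
    {
        T[] items = new T[_items.Length + 1];
        Array.Copy(_items, items, _items.Length);
        items[items.Length - 1] = item;
        return new SimpleImmutableList<T>(items);
    }
}

static class Demo
{
    static void Main()
    {
        var empty = new SimpleImmutableList<int>();
        var one = empty.Add(42);        // copy with one element
        var two = one.Add(7);           // copy with two elements

        Console.WriteLine(empty.Count); // 0: original never changes
        Console.WriteLine(one.Count);   // 1
        Console.WriteLine(two[1]);      // 7
    }
}
```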


Example: Game

// GOAL: draw and update in parallel
void GameLoop()
{
    while (!Done())
    {
        DrawAllCells();
        UpdateAllCells();
    }
}

class GameObject
{
    public void Draw()
    {
        // ... reads state
    }

    public void Update()
    {
        // ... modifies state
    }
}

GameObject[] objects = new GameObject[numobjects];

void DrawAllCells()
{
    foreach (GameObject g in objects)
        g.Draw();
}

void UpdateAllCells()
{
    foreach (GameObject g in objects)
        g.Update();
}

// incorrect parallelization:
// data race between draw and update
void ParallelGameLoop()
{
    while (!Done())
        Parallel.Invoke(DrawAllCells, UpdateAllCells);
}

Can we use immutability to fix this?


Example: Game

class GameObject
{
    public void Draw()
    {
        // ... reads state
    }

    public GameObject ImmutableUpdate()
    {
        // ... returns updated copy
    }
}

GameObject[] objects = new GameObject[numobjects];

void DrawAllCells()
{
    foreach (GameObject g in objects)
        g.Draw();
}

GameObject[] UpdateAllCells()
{
    var newobjects = new GameObject[numobjects];
    for (int i = 0; i < numobjects; i++)
        newobjects[i] = objects[i].ImmutableUpdate();
    return newobjects;
}

// correct parallelization of loop
void ParallelGameLoop()
{
    while (!Done())
    {
        GameObject[] newarray = null;
        Parallel.Invoke(() => DrawAllCells(),
                        () => newarray = UpdateAllCells());
        objects = newarray;
    }
}

Correct parallelization.

Can we do this without allocating a new array every iteration?


Trick: Double Buffering

// correct parallelization of loop
void ParallelGameLoop()
{
    int bufferIdx = 0;
    while (!Done())
    {
        Parallel.Invoke(
            () => DrawAllCells(bufferIdx),
            () => UpdateAllCells(bufferIdx));
        bufferIdx = 1 - bufferIdx;
    }
}

class GameObject
{
    public void Draw()
    {
        // ... reads state
    }

    public GameObject ImmutableUpdate()
    {
        // ...
    }
}

GameObject[,] objects = new GameObject[2, numobjects];

void DrawAllCells(int bufferIdx)
{
    for (int i = 0; i < numobjects; i++)
        objects[bufferIdx, i].Draw();
}

void UpdateAllCells(int bufferIdx)
{
    for (int i = 0; i < numobjects; i++)
        objects[1 - bufferIdx, i] = objects[bufferIdx, i].ImmutableUpdate();
}
