Parallel Programming Abstractions

Presentation Transcript


  1. Parallel Programming Abstractions

  2. Tasks vs Threads
• Similar but not the same. The task scheduler maps tasks onto threads; the operating system maps threads onto h/w processors:

      Tasks
        |   (mapped by the Task Scheduler)
      Threads
        |   (mapped by the Operating System)
      h/w processors

  3. Tasks vs Threads: Main Differences
• Tasks need not run concurrently: the task scheduler can schedule them on the same thread.

  4. Tasks vs Threads: Main Differences
• Tasks need not run concurrently: the task scheduler can schedule them on the same thread.
• Tasks do not have fairness guarantees. Consider two tasks sharing a variable t, initially 0:

      // Task 1               // Task 2
      while (t == 0);         t = 1;
      Write("hello");

  If both tasks are scheduled on the same thread and Task 1 runs first, it can spin forever: nothing guarantees Task 2 ever gets a chance to run (a runnable sketch follows).
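
A minimal runnable sketch of this fairness hazard using the real .NET TPL (the volatile flag and the class scaffolding are additions for the example, not from the slides). On the default multi-threaded scheduler this usually terminates, but the TPL makes no fairness promise:

    using System;
    using System.Threading.Tasks;

    class FairnessDemo {
        static volatile int t = 0;   // shared flag; volatile so the read
                                     // below is not optimized away

        static void Main() {
            // The spinner waits for t to become 1. If the scheduler ran it
            // on the only available worker thread, the writer would never
            // run: tasks have no fairness guarantee.
            Task spinner = Task.Run(() => {
                while (t == 0) { /* busy-wait */ }
                Console.WriteLine("hello");
            });
            Task writer = Task.Run(() => { t = 1; });
            Task.WaitAll(spinner, writer);
        }
    }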

  5. Tasks vs Threads: Main Differences
• Tasks need not run concurrently: the task scheduler can schedule them on the same thread.
• Tasks do not have fairness guarantees.
• Tasks are cheaper than threads:
  • they have smaller stacks;
  • they do not pre-allocate OS resources.
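
A small sketch of what "cheaper" buys you (the count of 100,000 is an arbitrary assumption): this many OS threads would each reserve their own stack, while the same number of tasks is multiplexed onto a handful of pool threads:

    using System;
    using System.Threading.Tasks;

    class CheapTasksDemo {
        static void Main() {
            // 100,000 tasks are fine: the scheduler multiplexes them onto a
            // few worker threads. 100,000 `new Thread(...)` objects would
            // each reserve a separate OS stack.
            var tasks = new Task[100_000];
            for (int i = 0; i < tasks.Length; i++) {
                int id = i;   // capture a fresh copy per iteration
                tasks[i] = Task.Run(() => id * 2);
            }
            Task.WaitAll(tasks);
            Console.WriteLine("done");
        }
    }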

  6. Generating Tasks from Programs
• You don't want programmers to explicitly create tasks
  • that would be like assembly-level programming.
• Instead:
  • design programming-language constructs that capture the programmer's intent;
  • the compiler converts these constructs to tasks.
• Languages with the following features are very convenient for this purpose:
  • type inference
  • generics
  • first-order anonymous functions (lambdas/delegates)

  7. First-order Functions: C# Lambda Expressions
• Syntax:

      (input parameters) => expression

• This is similar to passing a pointer to a function declared elsewhere:

      &anon_foo   // where anon_foo is declared elsewhere as:
      anon_foo(input parameters) { return expression; }

• Examples:

      x => x
      (x, y) => x == y
      (int x, string s) => s.Length > x
      () => { return SomeMethod() + 1; }

      Func<int, bool> myFunc = x => x == 5;
      bool result = myFunc(4);   // false
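
A compilable illustration of the conversion rules above (the names are invented for the example): the same lambda shape converts to different delegate types, and parameter types are inferred from the target type:

    using System;

    class LambdaDemo {
        static void Main() {
            // The type of `s` is inferred from the target delegate type.
            Func<string, int> length = s => s.Length;
            Predicate<string> nonEmpty = s => s.Length > 0;

            Console.WriteLine(length("hello"));   // 5
            Console.WriteLine(nonEmpty(""));      // False

            // A statement-bodied lambda converts to Action.
            Action greet = () => { Console.WriteLine("hi"); };
            greet();
        }
    }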

  8. Sequential Merge Sort with Lambdas

      void MergeSort(int[] a, int low, int hi) {
          if (base_case) ...
          int mid = low + (hi - low) / 2;
          Action<int, int> f = (l, h) => { MergeSort(a, l, h); };
          f(low, mid);
          f(mid + 1, hi);
          Merge(a, low, mid, hi);
      }
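
For reference, a complete runnable version of the sketch above; the base case and the Merge helper, which the slides elide, are filled in here as plausible assumptions:

    using System;

    class SeqSort {
        // Sorts a[low..hi] in place.
        public static void MergeSort(int[] a, int low, int hi) {
            if (low >= hi) return;                 // base case: <= 1 element
            int mid = low + (hi - low) / 2;
            MergeSort(a, low, mid);
            MergeSort(a, mid + 1, hi);
            Merge(a, low, mid, hi);
        }

        // Merges the sorted runs a[low..mid] and a[mid+1..hi].
        public static void Merge(int[] a, int low, int mid, int hi) {
            int[] tmp = new int[hi - low + 1];
            int i = low, j = mid + 1, k = 0;
            while (i <= mid && j <= hi)
                tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
            while (i <= mid) tmp[k++] = a[i++];
            while (j <= hi)  tmp[k++] = a[j++];
            Array.Copy(tmp, 0, a, low, tmp.Length);
        }

        static void Main() {
            int[] a = { 5, 2, 8, 1, 9, 3 };
            MergeSort(a, 0, a.Length - 1);
            Console.WriteLine(string.Join(", ", a));   // 1, 2, 3, 5, 8, 9
        }
    }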

  9. Things to Know about C# Lambdas
• A lambda is an expression (with no type of its own).
• It is converted to a delegate type.
• Parameter types are inferred.
• Free variables are captured:
  • locations referenced by free variables are moved to the heap ("boxed").
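
A small sketch of variable capture (the names are invented): the local count is hoisted to a heap-allocated closure object so it outlives the method frame and is shared with the lambda:

    using System;

    class CaptureDemo {
        static Func<int> MakeCounter() {
            int count = 0;          // captured: lives on the heap in a
                                    // compiler-generated closure object
            return () => ++count;   // the lambda shares that heap location
        }

        static void Main() {
            var next = MakeCounter();
            Console.WriteLine(next());   // 1
            Console.WriteLine(next());   // 2: state survives between calls
        }
    }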

  10. The Task Abstraction

      delegate void Action();

      class Task {
          Task( Action a );
          void Wait();

          // called by the WSQ (work-stealing queue) scheduler
          void Execute();
      }
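
The class above is a simplified model. With the real .NET System.Threading.Tasks API the same pattern looks like this (Task.Run both creates and schedules the task):

    using System;
    using System.Threading.Tasks;

    class TaskDemo {
        static void Main() {
            Task t = Task.Run(() => Console.WriteLine("running on the pool"));
            t.Wait();   // block until the task has executed
        }
    }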

  11. Merge Sort with Tasks

      void MergeSort(int[] a, int low, int hi) {
          if (base_case) ...
          int mid = low + (hi - low) / 2;
          Task left  = new Task( delegate { MergeSort(a, low, mid); } );
          Task right = new Task( delegate { MergeSort(a, mid + 1, hi); } );
          left.Wait();
          right.Wait();
          Merge(a, low, mid, hi);
      }
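
With the real TPL the same idea looks as follows; note that Task.Run both creates and schedules a task, whereas the simplified model's new Task(...) would need an explicit Start(). The 2048-element cutoff is an invented tuning assumption, and Merge is the helper from the sequential sketch above:

    using System.Threading.Tasks;

    class ParSort {
        const int Cutoff = 2048;   // below this, tasks cost more than they save

        public static void MergeSort(int[] a, int low, int hi) {
            if (low >= hi) return;                  // base case
            int mid = low + (hi - low) / 2;
            if (hi - low < Cutoff) {
                MergeSort(a, low, mid);             // small range: plain recursion
                MergeSort(a, mid + 1, hi);
            } else {
                Task left  = Task.Run(() => MergeSort(a, low, mid));
                Task right = Task.Run(() => MergeSort(a, mid + 1, hi));
                Task.WaitAll(left, right);
            }
            SeqSort.Merge(a, low, mid, hi);         // Merge from the earlier sketch
        }
    }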

  12. Parallel.Invoke

      static void Invoke(params Action[] actions);

• Invokes all input actions in parallel.
• Waits for all of them to finish.
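
A runnable usage sketch of the real System.Threading.Tasks.Parallel.Invoke:

    using System;
    using System.Threading.Tasks;

    class InvokeDemo {
        static void Main() {
            // Runs the three actions in parallel and returns only
            // when all of them have completed.
            Parallel.Invoke(
                () => Console.WriteLine("one"),
                () => Console.WriteLine("two"),
                () => Console.WriteLine("three"));
            Console.WriteLine("all done");
        }
    }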

  13. Merge Sort with Parallel.Invoke

      void MergeSort(int[] a, int low, int hi) {
          if (base_case) ...
          int mid = low + (hi - low) / 2;
          Parallel.Invoke(
              () => { MergeSort(a, low, mid); },
              () => { MergeSort(a, mid + 1, hi); }
          );
          Merge(a, low, mid, hi);
      }

  14. Compare with the Sequential Version

      void MergeSort(int[] a, int low, int hi) {
          if (base_case) ...
          int mid = low + (hi - low) / 2;
          { MergeSort(a, low, mid); }
          { MergeSort(a, mid + 1, hi); }
          Merge(a, low, mid, hi);
      }

  15. Data Parallelism
• Sometimes you want to perform the same computation on all elements of a collection.
  • For every string in an array, check if it contains "foo".

  16. Parallel.For

      static ParallelLoopResult For(int fromInclusive, int toExclusive, Action<int> body);

• Iterates a variable i from fromInclusive up to (but not including) toExclusive.
• Calls body with i as the parameter, in parallel.

  17. Parallel.For

      // sequential for
      for (int i = 0; i < n; i++) {
          if (a[i].Contains("foo")) { DoSomething(a[i]); }
      }

      // parallel for
      Parallel.For(0, n, (i) => {
          if (a[i].Contains("foo")) { DoSomething(a[i]); }
      });

  18. The DAG Created by Parallel.For

      A;
      Parallel.For(0, N, m);   // where m is the lambda  i => { B; }
      C;

• The resulting DAG: A precedes every iteration m(0), m(1), …, m(N-1), and every iteration precedes C. The iterations themselves are mutually unordered.
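
A runnable sketch of that DAG shape (the Interlocked counter is an addition for illustration): C's code may rely on every iteration having finished, but not on any order among the iterations:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class DagDemo {
        static void Main() {
            const int N = 1000;
            int done = 0;
            Console.WriteLine("A");                      // before all iterations
            Parallel.For(0, N, i => {
                Interlocked.Increment(ref done);         // B: iterations run in
            });                                          // no particular order
            Console.WriteLine($"C: {done} iterations");  // prints 1000: all of
        }                                                // them finished first
    }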

  19. Parallel.ForEach

      static ParallelLoopResult ForEach<TSource>(IEnumerable<TSource> source, Action<TSource> body);

• Same as Parallel.For, but iterates over the elements of a collection in parallel.
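
A usage sketch (the word list is invented for the example):

    using System;
    using System.Threading.Tasks;

    class ForEachDemo {
        static void Main() {
            string[] words = { "food", "bar", "foobar", "baz" };
            // The body runs once per element, potentially on many threads.
            Parallel.ForEach(words, w => {
                if (w.Contains("foo")) Console.WriteLine(w);
            });
        }
    }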

  20. Advantage of High-Level Abstractions
• Makes it easy to add parallelism:
  • explore and experiment with different parallelization strategies.
• The language compiler/runtime can perform performance optimizations:
  • efficiently create tasks;
  • efficiently distribute tasks to available cores;
  • tight integration with the scheduler.

  21. Parallel Performance: Example Partitioning on Two Cores [figure]

  22. Parallel Performance: Partitioning on Four Cores [figure]

  23. Advantage of High-Level Abstractions
• Makes it easy to add parallelism:
  • explore and experiment with different parallelization strategies.
• The language compiler/runtime can perform performance optimizations:
  • efficiently create tasks;
  • efficiently distribute tasks to available cores;
  • tight integration with the scheduler.
• Provides programmatic features:
  • exceptions;
  • cancelling running tasks.

  24. Semantics of Parallel Constructs
• What does this do?

      Parallel.Invoke(
          () => { WriteLine("Hello"); },
          () => { WriteLine("World"); }
      );

  25. Semantics of Parallel Constructs
• What does this do?

      Parallel.Invoke(
          () => { WriteLine("Hello"); },
          () => { WriteLine("World"); }
      );

• Compare with:

      { WriteLine("Hello"); }
      { WriteLine("World"); }

  26. Semantics of Parallel Constructs
• By writing this program:

      Parallel.Invoke(
          () => { WriteLine("Hello"); },
          () => { WriteLine("World"); }
      );

  you are telling the compiler that both outcomes ("Hello World" and "World Hello") are acceptable.
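
A runnable sketch that makes the nondeterminism observable (the loop count of 10 is an arbitrary choice); across runs, both orders typically appear:

    using System;
    using System.Threading.Tasks;

    class NondetDemo {
        static void Main() {
            for (int run = 0; run < 10; run++) {
                // Either interleaving is a correct execution.
                Parallel.Invoke(
                    () => Console.Write("Hello "),
                    () => Console.Write("World "));
                Console.WriteLine();
            }
        }
    }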

  27. Correctness Criteria
• Given a DAG, any linearization (topological ordering) of the DAG is an acceptable execution.

  28. Correctness Criteria
• A simple criterion:
  • every task operates on separate data;
  • there are no dependencies other than the edges in the DAG.
• We will look at more complex criteria later in the course.
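
A sketch contrasting the two cases (the array-filling workload is invented): the first loop meets the separate-data criterion because iteration i writes only a[i]; the second does not, because every iteration reads and writes sum, a dependency that is not an edge in the DAG, so its result varies with the schedule:

    using System;
    using System.Threading.Tasks;

    class RaceDemo {
        static void Main() {
            int n = 1_000_000;
            var a = new int[n];

            // OK: iteration i touches only a[i] -- disjoint data, race-free.
            Parallel.For(0, n, i => { a[i] = 1; });

            // NOT OK: all iterations contend on `sum`; updates get lost,
            // and the printed value is usually less than 1,000,000.
            long sum = 0;
            Parallel.For(0, n, i => { sum += a[i]; });
            Console.WriteLine(sum);   // schedule-dependent, likely wrong
        }
    }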
