
Parallelism COS 597C


Presentation Transcript


  1. Parallelism COS 597C • David August • David Walker

  2. Goals • To compare and contrast a variety of different programming languages and programming styles • imperative programming (threads, shared memory, vector machines) • functional programming (nested data parallelism, asynchronous and functional reactive programming) • implementation techniques (GPUs, vector flattening) • new languages (Cilk, StreamIt, Map-Reduce, Sawzall, Dryad)

  3. Course Organization • A series of relatively independent modules • Students in the class will be assigned to different modules and help develop content for them • lectures • assignments for other students • Workload • some time preparing (learning) material for other students • some time working on exercises, topic-oriented projects • lots of group work

  4. Walker’s Modules • Asynchronous and Reactive Functional Programming • Software Transactional Memory • Nested Data Parallelism • Massively Parallel Systems (cloud computing) • A unifying theme: F# • a modern functional programming language with strong support for concurrency & access to lots of libraries

  5. Asynchronous, Reactive Programming • Technology Goals: • responsiveness & concurrency • in a GUI to respond to users rapidly • in a web browser to hide network latency • in a robot controller to respond to environmental changes • in a network controller to structure code for controlling a set of routers • in a programmed animation to write computations over time • Old-fashioned way: • call-backs, explicit event-based programming with tricky control-flow • New-fangled way: • an asynchronous concurrency monad “workflow” that helps structure programs

  6. Fetching web pages asynchronously in F#:

     open System.Net
     open Microsoft.FSharp.Control.WebExtensions

     let urlList =
         [ "Microsoft.com", "http://www.microsoft.com/"
           "MSDN", "http://msdn.microsoft.com/"
           "Bing", "http://www.bing.com" ]

     let fetchAsync (name, url : string) =
         async {                                   // introduce async computation
             try
                 let uri = new System.Uri(url)
                 let webClient = new WebClient()
                 // run asynchronously, queueing the rest of the computation
                 let! html = webClient.AsyncDownloadString(uri)
                 printfn "Read %d characters for %s" html.Length name
             with
             | ex -> printfn "%s" ex.Message
         }

     let runAll () =
         urlList
         |> Seq.map fetchAsync
         |> Async.Parallel                         // run the set of asyncs in parallel
         |> Async.RunSynchronously
         |> ignore

     runAll ()

  7. Asynchronous, Reactive Programming • Stuff we’ll learn • how to structure reactive programs using asynchronous workflows and monads • what a monad is and how to build different kinds of monads in F# • non-standard applications: programming routers (Frenetic), programming robots (Yampa), and programming animations (Fran) • Possible projects/assignments • the mechanics of how to implement functional reactive programming infrastructure
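  What "building a monad in F#" looks like in miniature: a sketch (illustrative, not course code; MaybeBuilder and divide are invented names) of the option monad as a computation-expression builder, the same mechanism behind the async workflow on slide 6.

     type MaybeBuilder() =
         member _.Bind(m : 'a option, f : 'a -> 'b option) : 'b option =
             match m with
             | Some v -> f v          // step succeeded: continue with its value
             | None   -> None         // step failed: short-circuit the rest
         member _.Return(v) = Some v

     let maybe = MaybeBuilder()

     let divide x y = if y = 0 then None else Some (x / y)

     let example =
         maybe {
             let! a = divide 10 2     // a = 5
             let! b = divide a 0      // None, so the whole block yields None
             return a + b
         }

  Each let! desugars into a call to Bind, just as let! inside async { ... } queues the rest of the computation as a continuation.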

  8. (Software) Transactional Memory • Technology Goals: • to simplify parallel programming by providing programmers with the illusion that the instructions of a transaction are executed atomically • a programmer does not have to reason about the possible interleavings of the instructions of a particular block of program code with all other instructions in the program

  9. Two threads race to increment a shared counter:

     Thread 1: v := x.item; x.item := v + 1
     Thread 2: v := x.item; x.item := v + 1

     If x.item starts as 1, what are its possible final values? It ends as 3 only if one increment finishes before the other starts; if both threads read x.item before either writes it back, one update is lost and the result is 2.

  10. The STM interface and an atomic increment in F#:

      val readTVar   : TVar<'a> -> Stm<'a>
      val writeTVar  : TVar<'a> -> 'a -> Stm<unit>
      val atomically : Stm<'a> -> 'a

      // atomic increment
      let incr x =
          stm {                         // introduce atomic block
              let! v = readTVar x
              let! _ = writeTVar x (v + 1)
              return v
          }

      // composable transactions
      let incr2 x =
          stm {
              let! _ = incr x
              let! v = incr x
              return v
          }

      incr x |> atomically

  11. (Software) Transactional Memory • Stuff we’ll learn • programming paradigms, pros and cons of STMs • software transactions can be phrased as another form of monad workflow • implementation techniques • hardware support • Possible projects/assignments • structuring scientific apps as STMs
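  One way to see "transactions as another monad workflow" (a hedged sketch under the assumption that Stm<'a> is a computation over a transaction log; this is not the course's STM library, and a real atomically would validate the log and commit or retry):

     // placeholder: a real log records the transaction's reads and writes
     type TLog() = class end

     type Stm<'a> = Stm of (TLog -> 'a)

     type StmBuilder() =
         member _.Bind(m : Stm<'a>, f : 'a -> Stm<'b>) : Stm<'b> =
             Stm (fun log ->
                 let (Stm run) = m
                 let (Stm run') = f (run log)
                 run' log)              // thread the same log through both steps
         member _.Return(v) = Stm (fun _ -> v)

     let stm = StmBuilder()

     // sketch only: runs the transaction once against a fresh log,
     // where a real implementation would retry on conflict
     let atomically (Stm run) = run (TLog())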

  12. Nested Data Parallel Programming • Technology Goals • enable simple, concise, high-level expression of parallel algorithms • provide a clear, machine-independent cost model for algorithm design

  13. Nested data-parallel quicksort (NESL style) in F#:

      let r = new System.Random()

      let rec quicksort (s : Nesl.vector<int>) =
          if s.Length < 2 then s
          else
              let pivot = Nesl.choose r s
              let les = Nesl.filter ((>) pivot) s   // select lower elements in parallel
              let eqs = Nesl.filter ((=) pivot) s   // select equal elements in parallel
              let ges = Nesl.filter ((<) pivot) s   // select greater elements in parallel
              let answers = Nesl.map quicksort [| les; ges |]   // quicksort both halves in parallel
              Nesl.concat [| Nesl.get answers 0; eqs; Nesl.get answers 1 |]   // concatenate in parallel

  14. Nested Data Parallel Programming • Stuff we’ll learn • data parallel design patterns and algorithms over vectors, matrices and graphs • cost model for data parallel programs • work, depth and relation to real machines • implementation techniques • vector flattening & cost guarantees • Possible projects/assignments • parallelizing “hard-to-parallelize” algorithms • parallelizing high-value scientific applications • genomics algorithms • implementation infrastructure in F#
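  To make the work/depth cost model concrete (a standard analysis, not taken from the slides): in the quicksort on slide 13, each recursion level does $O(n)$ work in the parallel filters, there are $O(\log n)$ levels in expectation, and each filter or concat has $O(\log n)$ depth, giving expected costs

  \[ W(n) = O(n \log n), \qquad D(n) = O(\log^2 n) \]

  so by Brent's bound the running time on $p$ processors is roughly $O(W(n)/p + D(n))$.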

  15. Massively Parallel Systems • Technology goal • Make it easy to program applications that scale to Google-sized workloads • counting all the words on all the web pages in the world • filtering RSS feeds for everyone with Google Reader installed • managing all Amazon clients • DNA sequencing and analysis • Fault tolerance & performance

  16. [Diagram: web pages feed a map phase of parallel workers applying f, whose outputs flow into a reduce phase of workers applying g.]
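  The pattern in the diagram, as a sequential F# sketch (assumed, not course code; mapReduce and wordCount are invented names, and a real system runs the map and reduce phases across many machines):

      // f maps each page to key/value pairs; g reduces all values sharing a key
      let mapReduce (f : 'page -> ('key * 'v) list)
                    (g : 'key -> 'v list -> 'r)
                    (pages : 'page list) : 'r list =
          pages
          |> List.collect f                    // map phase
          |> List.groupBy fst                  // shuffle: bring equal keys together
          |> List.map (fun (k, kvs) -> g k (List.map snd kvs))   // reduce phase

      // counting words: each page maps to (word, 1) pairs; reduce sums per word
      let wordCount (pages : string list) =
          mapReduce
              (fun page -> [ for w in page.Split(' ') -> (w, 1) ])
              (fun word counts -> (word, List.sum counts))
              pages

  For example, wordCount [ "the cat"; "the dog" ] yields [("the", 2); ("cat", 1); ("dog", 1)].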

  17. Massively Parallel Systems • What we’ll learn • language design for programming massively parallel systems • Map-Reduce, Sawzall, Dryad, Azure • interesting things from guest speakers from Microsoft & Google • Possible projects/assignments • implementing high-value scientific apps
