

Presentation Transcript


  1. Patterns and Exemplars: Compelling Strategies for Teaching Parallel and Distributed Computing to CS Undergraduates Libby Shoop Joel Adams Dick Brown

  2. Today’s messages • Parallel Design Patterns provide an established, practical set of principles for teaching PDC • “Exemplar” example applications with multiple implemented solutions provide motivation for students and teaching materials for instructors • Patterns and Exemplars fit together naturally and are ready for deployment

  3. Parallel Design Patterns • Following on the original Gang of Four design patterns work, there is active work on parallel design patterns and parallel pattern languages: • Catalog the parallel patterns used in solutions and describe a methodology for using each pattern

  4. Past Work (1999–2012) • 1999: Lea, Java concurrency patterns book • 2004: Mattson, Sanders, and Massingill, Patterns for Parallel Programming book • 2010: Ralph Johnson et al., Parallel Programming Patterns online; books of Visual C++ and .NET examples • 2010: Ortega-Arjona book • 2010: Keutzer, Mattson, et al., Our Pattern Language (OPL) online • 2010 and 2011: ParaPLoP Workshop on Parallel Programming Patterns (ParaPLoP ’10) • 2012: McCool, Reinders, and Robison book

  5. Pattern Approach • Use existing design knowledge when designing new parallel programs • Leads to parallel software systems that are: • modular, adaptable, understandable, and able to evolve easily • Also provides an effective problem-solving framework and a guide for teaching about good parallel solutions

  6. Patternlets

  7. Patternlets… … are minimalist, scalable, executable programs, each illustrating a particular pattern’s behavior: • Minimalist, so that students can grasp the concept without non-essential details getting in the way • Scalable, so that students see different behaviors as the number of threads changes • Executable, so that • instructors can use them in a live-coding demo • students can use them in a hands-on exercise Patternlets let students see the pattern in action

  8. Existing Patternlets (so far) • MPI • SPMD • Master-Worker • Message Passing • Parallel For Loop (stripes) • Parallel For Loop (blocks) • Broadcast • Reduction • Scatter • Gather • Barrier • OpenMP • Fork-Join • SPMD • Master-Worker • Parallel For Loop (blocks) • Parallel For Loop (stripes) • Reduction • Private • Atomic • Critical • Critical2 • Sections • Barrier
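For instance, the Reduction patternlet listed above can be illustrated by a minimal sketch along the following lines (this sketch and the file name reduction.c are ours, for illustration; it is not one of the patternlets reproduced in these slides):

/* reduction.c (OpenMP) -- minimal sketch of the Reduction pattern:
 * each thread sums part of the array, and OpenMP's reduction clause
 * combines the per-thread partial sums into one total. */
#include <stdio.h>
#include <omp.h>
#define SIZE 1000000

int main(int argc, char** argv) {
    static double a[SIZE];
    double sum = 0.0;

    for (int i = 0; i < SIZE; i++) {            // initialize sequentially
        a[i] = 1.0;
    }

    #pragma omp parallel for reduction(+:sum)   // threads share the loop,
    for (int i = 0; i < SIZE; i++) {            // each accumulating a private sum
        sum += a[i];
    }

    printf("sum = %f\n", sum);                  // expect SIZE as the result
    return 0;
}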

  9. OpenMP Patternlets / MPI Patternlets

  10. /* masterWorker.c (MPI) … */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
    int id = -1, numProcs = -1, length = -1;
    char hostName[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &numProcs);
    MPI_Get_processor_name(hostName, &length);

    if ( id == 0 ) {   // process with ID == 0 is the master
        printf("Greetings from the master, #%d (%s) of %d processes\n",
               id, hostName, numProcs);
    } else {           // processes with IDs > 0 are workers
        printf("Greetings from a worker, #%d (%s) of %d processes\n",
               id, hostName, numProcs);
    }

    MPI_Finalize();
    return 0;
}

  11. Sample Executions
$ mpirun -np 1 ./masterWorker
Greetings from the master, #0 (node-01) of 1 processes

$ mpirun -np 8 ./masterWorker
Greetings from the master, #0 (node-01) of 8 processes
Greetings from a worker, #1 (node-02) of 8 processes
Greetings from a worker, #5 (node-06) of 8 processes
Greetings from a worker, #3 (node-04) of 8 processes
Greetings from a worker, #4 (node-05) of 8 processes
Greetings from a worker, #7 (node-08) of 8 processes
Greetings from a worker, #2 (node-03) of 8 processes
Greetings from a worker, #6 (node-07) of 8 processes
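(To reproduce runs like these, a typical build-and-run sequence, assuming an MPI installation such as Open MPI that provides mpicc and mpirun, is mpicc masterWorker.c -o masterWorker followed by mpirun -np 8 ./masterWorker. The workers’ greetings can appear in any order, since the processes execute concurrently and their output is interleaved by the runtime.)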

  12. /* masterWorker.c (OpenMP) … */
#include <stdio.h>
#include <omp.h>

int main(int argc, char** argv) {
    int id = -1, numThreads = -1;

    // #pragma omp parallel
    {
        id = omp_get_thread_num();
        numThreads = omp_get_num_threads();

        if ( id == 0 ) {   // thread with ID 0 is the master
            printf("Greetings from the master, #%d of %d threads\n\n",
                   id, numThreads);
        } else {           // threads with IDs > 0 are workers
            printf("Greetings from a worker, #%d of %d threads\n\n",
                   id, numThreads);
        }
    }

    return 0;
}

  13. Sample Executions
$ ./masterWorker            // pragma omp parallel disabled
Greetings from the master, #0 of 1 threads

$ ./masterWorker            // pragma omp parallel enabled
Greetings from a worker, #1 of 8 threads
Greetings from a worker, #2 of 8 threads
Greetings from a worker, #5 of 8 threads
Greetings from a worker, #3 of 8 threads
Greetings from a worker, #6 of 8 threads
Greetings from the master, #0 of 8 threads
Greetings from a worker, #4 of 8 threads
Greetings from a worker, #7 of 8 threads
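(A typical build for this patternlet, assuming GCC, is gcc -fopenmp masterWorker.c -o masterWorker. With the #pragma omp parallel line commented out, the block is executed by a single thread; uncommenting it lets the OpenMP runtime fork a team of threads, eight in the run shown, whose greetings again appear in nondeterministic order.)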

  14. Exemplars

  15. Motivation • Everyone in CS needs PDC • Not everyone is naturally drawn to PDC topics How shall we motivate every CS undergraduate to learn the PDC they will need for their careers?

  16. Motivation • Everyone in CS needs PDC • Not everyone is naturally drawn to PDC topics Proposal: Teach PDC concepts with compelling applications. • Some CS students draw by concepts and tech • Other CS students drawn by the applications How shall we motivate every CS undergraduate to learn the PDC they will need for their careers?

  17. Exemplars An exemplar is: • A representative applied problem plus • multiple code solutions implemented in various PDC technologies, with commentary

  18. Exemplar A (from EAPF Practicum) • Compute π via numerical integration • Implemented solutions • Serial • Shared memory (OpenMP, TBB, pthreads, Windows Threads, go language) • Distributed computing (MPI) • Accelerators (CUDA, Array Building Blocks) • Comments: • Flexible uses: demo, concepts, tech, compare • But not a compelling application
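To make those flexible uses concrete, the central computation of this exemplar, integrating 4/(1+x^2) over [0,1] with the midpoint rule, might look like the following OpenMP sketch (our sketch under standard assumptions, not the EAPF code itself; the file name piIntegration.c is hypothetical):

/* piIntegration.c -- approximates pi by numerically integrating
 * 4/(1+x^2) over [0,1] with the midpoint rule; the parallel-for and
 * reduction patterns split the subintervals among the threads. */
#include <stdio.h>
#include <omp.h>

int main(int argc, char** argv) {
    const long n = 100000000;           // number of subintervals
    const double h = 1.0 / n;           // width of each subinterval
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++) {
        double x = (i + 0.5) * h;       // midpoint of subinterval i
        sum += 4.0 / (1.0 + x * x);     // height of the rectangle
    }

    printf("pi is approximately %.15f\n", sum * h);
    return 0;
}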

  19. Exemplar B (from EAPF Practicum) • Drug design • Implemented solutions • Serial • Shared memory (OpenMP, boost threads, go lang) • Map-reduce framework (Hadoop)

  20. Exemplar B (from EAPF Practicum) • Comments • Compelling application • Molecular dynamics, docking algorithm • A simple substitute for the docking algorithm scores each ligand (the score is the maximal match count) • Relates to genetic sequence-alignment algorithms • Multiple ways to scale: # ligands, ligand length, # cores • Random strings with random lengths give a variable computational load per ligand
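As a rough illustration of such a substitute scoring kernel, the sketch below scores each ligand (a short string) against a protein (a long string) by sliding it along the protein and taking the maximal number of matching positions, then searches the ligands in parallel. The slides do not give the exact scoring function, so this sliding-window match count, the made-up protein and ligand strings, and the file name drugDesignSketch.c are illustrative assumptions on our part:

/* drugDesignSketch.c -- illustrative stand-in for the drug-design exemplar.
 * score() slides a ligand along the protein and returns the maximal
 * number of matching characters; the loop over ligands uses the
 * Parallel For Loop pattern, with a critical section to keep the best. */
#include <stdio.h>
#include <string.h>

static int score(const char* ligand, const char* protein) {
    int lLen = (int) strlen(ligand), pLen = (int) strlen(protein), best = 0;
    for (int offset = 0; offset + lLen <= pLen; offset++) {
        int matches = 0;
        for (int i = 0; i < lLen; i++) {
            if (ligand[i] == protein[offset + i]) matches++;
        }
        if (matches > best) best = matches;
    }
    return best;
}

int main(void) {
    const char* protein = "acgtacgtgacgtttacgatcgatcgtacgatcg";   // made-up data
    const char* ligands[] = { "acg", "ttac", "gatcga", "cgt" };   // made-up data
    const int numLigands = 4;
    int bestScore = -1, bestIndex = -1;

    #pragma omp parallel for
    for (int i = 0; i < numLigands; i++) {
        int s = score(ligands[i], protein);
        #pragma omp critical                  // safely update the shared best
        if (s > bestScore) { bestScore = s; bestIndex = i; }
    }

    printf("best ligand: %s (score %d)\n", ligands[bestIndex], bestScore);
    return 0;
}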

  21. Exemplars + Patterns • Exemplar implementations offer a rich opportunity for learning patterns • Examples • π as area (among 8 PDC implementations): • Data Decomposition, Geometric Decomposition; Parallel For Loop, Master-Worker, Strict Data Parallel, Distributed Array; SIMD, Thread Pool, Message Passing, Collective Communication, Mutual Exclusion • Drug design (among 4 PDC implementations): • Map-Reduce; Data Decomposition; Parallel For Loop, Fork-Join, BSP, Master-Worker, Task Queue, Shared Array, Shared Queue; Thread Pool, Message Passing, Mutual Exclusion

  22. (Images: π as area; drug design)

  23. Conclusion • Patterns – a meaning for “parallel thinking,” best practice from industry • Patternlets – minimalist, scalable, executable programs, each illustrating a particular pattern’s behavior • Exemplars – motivation, hands-on/demo, teaching resource, opportunities for PDC • These are naturally combined and ready for deployment
