A Flipped Classroom Approach to Teaching Concurrency and Parallelism

Presentation Transcript


  1. A Flipped Classroom Approach to Teaching Concurrency and Parallelism Shirley Moore University of Texas at El Paso svmoore@utep.edu Steven Dunlop Purdue University dunlop@purdue.edu EduPar-16 May 23, 2016

  2. Motivation and Goals • Concurrency and parallelism involve difficult concepts for computer science students. • Little or no experience in prior courses • Traditional lecture and homework approach can leave students frustrated and unable to complete assignments successfully. • Goals: Use flipped classroom approach to • Promote collaborative learning and engagement during and outside of class • Lead to clearer understanding of concurrency and parallelism concepts • Increase students’ ability to apply and synthesize knowledge

  3. Background – Flipped Classroom • Pedagogical model in which lecture and homework elements are reversed • Students view video lectures or do readings prior to the class session • Class time is spent on discussion and activities • Prior work • Zarestky and Bangerth, “Teaching high performance computing: lessons from a flipped classroom, project-based course on finite element methods”, EduHPC@SC 2014. • Deslauriers, Schelew, and Wieman, “Improved learning in a large-enrollment physics class”, Science 332, May 2011. • Estes, Ingram, and Liu, “A review of flipped classroom research, practice, and technologies”, International HETL Review 4, 2014.

  4. Pros and Cons • Pros • Students spend more time preparing for class • May be viewed as a con by students • More engagement during and outside class • Cons • Preparing in-class activities takes more instructor time than preparing lectures and homework. • We used videos prepared by experts on the topic and provided by PurdueNext. • Assigning videos and readings does not guarantee they will be done. • We gave short quizzes at the start of class. • Students may not understand the videos or know what to focus on. • We prepared viewing guides with questions.

  5. CS 5334 at UTEP • Parallel and Concurrent Programming course taught in the spring semester • Cross-listed as CS 4390 for undergraduates • Typically about an even mix of • CS graduate students • CS undergraduate students and CPS graduate students • Recent enrollments • Spring 2014: 16 • Spring 2015: 16 • Spring 2016: 19 • Learning-outcomes based • Format • Spring 2014: traditional lecture + homework • Spring 2015, 2016: flipped classroom using PurdueNext videos • 2 exams and a project with both formats

  6. Learning Outcomes – Level 1 (define) 1.1 Distinguish between concurrency and parallelism. 1.2 Distinguish multiple sufficient programming constructs for synchronization that may be inter-implementable but have complementary advantages. 1.3 Explain the difference between processes and threads. 1.4 Describe a design technique for avoiding liveness failures in programs using multiple locks or semaphores. 1.5 Explain the concept of interconnection networks and characterize different approaches. 1.6 Discuss performance advantages of multithreading offered in an architecture along with the factors that make maximum performance difficult to achieve. 1.7 Describe the relevance of scalability to performance. 1.8 Define “critical path”, “work”, and “span”. 1.9 Explain the differences between shared and distributed memory.
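
As a small worked illustration of outcome 1.8 (added here for clarity; not part of the original slides): consider a task graph in which task A (2 time units) must finish before tasks B (3 units) and C (4 units), both of which must finish before task D (1 unit). The work is the total computation, 2 + 3 + 4 + 1 = 10 units; the critical path is A -> C -> D, so the span is 2 + 4 + 1 = 7 units; and the available parallelism is work/span = 10/7 ≈ 1.4.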

  7. Learning Outcomes – Level 2 (apply, implement) 2.1 Use mutual exclusion to avoid a given race condition. 2.2 Give an example of an ordering of accesses among concurrent activities that is not sequentially consistent. 2.3 Write a program that correctly terminates when all of a set of concurrent tasks have completed. 2.4 Give an example of a scenario in which deadlock can occur. 2.5 Compute the work and span, and determine the critical path with respect to a parallel execution diagram. 2.6 Identify issues that arise in producer-consumer algorithms and mechanisms that may be used for addressing them. 2.7 Measure throughput and latency in a given network.
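
As an illustration of outcome 2.1 (a minimal sketch added here, not course material): a shared counter incremented by several POSIX threads races unless the update is placed in a critical section guarded by a mutex. The thread and iteration counts below are arbitrary.

/* Outcome 2.1 illustration: a mutex makes the shared increment race-free. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NITER    100000

static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < NITER; i++) {
        pthread_mutex_lock(&counter_lock);   /* enter critical section */
        counter++;                           /* shared update, now race-free */
        pthread_mutex_unlock(&counter_lock); /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);
    printf("counter = %ld (expected %d)\n", counter, NTHREADS * NITER);
    return 0;
}

Without the lock/unlock pair, the final count is typically less than NTHREADS * NITER because increments interleave.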

  8. Learning Outcomes – Level 3 (design, evaluate) • 3.1 Use formal techniques to show that a concurrent algorithm is correct with respect to a safety or liveness property. • 3.2 Decide if a specific execution is linearizable or not. • 3.3 Write a correct and scalable parallel algorithm. • 3.4 Parallelize an algorithm by applying task-based decomposition. • 3.5 Parallelize an algorithm by applying data-parallel decomposition. • 3.6 Implement a parallel matrix algorithm. • 3.7 Develop and validate an analytical parallel performance model.
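
To make outcome 3.5 concrete, here is a minimal data-parallel sketch in C with OpenMP (our illustration, not part of the course materials): the iteration space of a daxpy-style loop is partitioned into blocks, and every thread applies the same operation to its own block of elements.

/* Data-parallel decomposition sketch (outcome 3.5); array size is arbitrary. */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double x[N], y[N];
    double a = 2.0;

    for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = (double)i; }

    /* The index space is split statically among threads; each thread
     * performs the same update on its own block of the data. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f, y[N-1] = %f\n", y[0], y[N - 1]);
    return 0;
}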

  9. Example Activity – Collective Communication • Addressed learning outcomes 2.7 and 3.7 • Develop a detailed analytical model for the communication costs of the assigned collective operation and validate the model on the Stampede supercomputer • Collective operations studied • One-to-all broadcast • All-to-one broadcast • All-to-all broadcast • All-reduce • Gather and scatter • All-to-all personalized communication • Different implementations of one-to-all broadcast • Videos on this topic contain explanations of the optimal algorithms for each collective and an outline of the analysis of the communication costs. • Class activity acted out the algorithms.
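
For outcomes 2.7 and 3.7, the one-to-all broadcast of an m-word message on p processes is commonly modeled as T ≈ (t_s + t_w·m)·log2 p, where t_s is the per-message startup cost and t_w the per-word transfer cost. The following C+MPI sketch (ours, not the original assignment code) shows how such a model can be validated by timing MPI_Bcast over a range of message sizes; the sizes and repetition count are arbitrary.

/* Time MPI_Bcast for several message sizes to produce data points for
 * validating a model of the form T = (t_s + t_w * m) * log2(p). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    const int reps = 50;
    for (long m = 1024; m <= 1024 * 1024; m *= 4) {
        double *buf = malloc(m * sizeof(double));
        for (long i = 0; i < m; i++) buf[i] = 0.0;

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int r = 0; r < reps; r++)
            MPI_Bcast(buf, (int)m, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        double t = (MPI_Wtime() - t0) / reps;

        if (rank == 0)
            printf("p=%d m=%ld words avg_time=%e s\n", p, m, t);
        free(buf);
    }
    MPI_Finalize();
    return 0;
}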

  10. Collective Communication Activity -- Results • All groups had a clear understanding of their operation and were able to explain the algorithm clearly. • Spring 2015 • Only half the groups were able to fill in the detailed mathematical analysis of the communication costs. • Spring 2016 • All groups were able to do analysis. • For the experimental validation, the students were asked to use regression modeling to fit the experimental data to the analytical models. After some instruction and examples on regression modeling, all groups were able to complete this task successfully.
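
A simplified sketch of the regression step (ours, not the course's actual instruction or code): at a fixed process count, measured broadcast times can be fit to T(m) = a + b·m by ordinary least squares, with a and b estimating the startup and per-word terms of the analytical model. The data points below are made up.

/* Ordinary least-squares fit of T(m) = a + b*m to hypothetical timings. */
#include <stdio.h>

int main(void)
{
    /* hypothetical (message size in words, measured time in seconds) pairs */
    double m[] = { 1024, 4096, 16384, 65536, 262144 };
    double T[] = { 1.1e-4, 1.9e-4, 5.2e-4, 1.8e-3, 6.9e-3 };
    int n = 5;

    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx += m[i]; sy += T[i];
        sxx += m[i] * m[i]; sxy += m[i] * T[i];
    }
    double b = (n * sxy - sx * sy) / (n * sxx - sx * sx); /* per-word term */
    double a = (sy - b * sx) / n;                         /* startup term  */

    printf("fit: T(m) = %e + %e * m\n", a, b);
    return 0;
}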

  11. Example Activity – Spring 2014, 2015: Parallel Matrix Operations • Addressed learning outcomes 3.5, 3.6, and 3.7. • Implement two parallel matrix operations – matrix-vector multiplication and matrix-matrix multiplication – on a distributed-memory system using C+MPI with a 2D block distribution of the matrix data. • An implementation of matrix-vector multiplication using a 1D block distribution was worked through as a class activity (see the sketch below). • A straightforward implementation of matrix-matrix multiplication was required, with Cannon’s space-efficient algorithm as extra credit. • A further requirement for graduate students was to construct and validate analytical performance models for the two problems (extra credit for undergraduates).
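
The in-class 1D block version might look like the following C+MPI sketch (our reconstruction, not the original class code): each rank owns N/p consecutive rows of A, the vector x is broadcast, each rank computes its block of y, and the blocks are gathered on rank 0. N is assumed divisible by p, and the matrix values are placeholders.

/* y = A*x with a 1D row-block distribution of A. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024   /* matrix dimension (hypothetical); assumed divisible by p */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int nlocal = N / p;                     /* rows owned by this rank */
    double *Alocal = malloc(nlocal * N * sizeof(double));
    double *x = malloc(N * sizeof(double));
    double *ylocal = malloc(nlocal * sizeof(double));

    /* Fill the local block with simple values; a real assignment would
     * read or scatter the matrix. */
    for (int i = 0; i < nlocal; i++)
        for (int j = 0; j < N; j++)
            Alocal[i * N + j] = 1.0;
    if (rank == 0)
        for (int j = 0; j < N; j++) x[j] = 1.0;

    MPI_Bcast(x, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    for (int i = 0; i < nlocal; i++) {      /* local mat-vec on owned rows */
        double sum = 0.0;
        for (int j = 0; j < N; j++)
            sum += Alocal[i * N + j] * x[j];
        ylocal[i] = sum;
    }

    double *y = (rank == 0) ? malloc(N * sizeof(double)) : NULL;
    MPI_Gather(ylocal, nlocal, MPI_DOUBLE, y, nlocal, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("y[0] = %f (expected %d)\n", y[0], N);
        free(y);
    }
    free(Alocal); free(x); free(ylocal);
    MPI_Finalize();
    return 0;
}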

  12. Example Activity – Spring 2016: 2D Time-Dependent Heat Equation • Addressed learning outcomes 3.3, 3.5, and 3.7. • Design an explicit time-stepping solution of the discretized partial differential equation using C+MPI with a 2D block distribution of the matrix data. • Construct an analytical performance model with iso-efficiency analysis. • Implement four algorithms and compare their parallel performance and scalability (a sketch of Algorithm 1 follows below) • 1. One layer of halo cells, synchronous communication • 2. Asynchronous communication overlapped with computation • 3. Multiple layers of halo cells • 4. Combination of Algorithms 2 and 3
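
Here is a sketch of Algorithm 1 (one layer of halo cells, synchronous communication), simplified to a 1D row-strip decomposition for brevity; the actual assignment used a full 2D block distribution, and the grid size, step count, and coefficient below are hypothetical.

/* Explicit 2D heat-equation time stepping with one halo layer and
 * synchronous (blocking) halo exchange; 1D row-strip decomposition. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NX 256          /* global grid is NX x NX interior points; NX divisible by p */
#define NSTEPS 100
#define R 0.2           /* alpha*dt/dx^2; stability requires R <= 0.25 in 2D */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int nrows = NX / p;                 /* interior rows owned by this rank */
    int up   = (rank == 0)     ? MPI_PROC_NULL : rank - 1;
    int down = (rank == p - 1) ? MPI_PROC_NULL : rank + 1;

    /* nrows + 2 rows: one halo row above and one below the local block */
    int ld = NX + 2;                    /* row length incl. boundary columns */
    double *u    = calloc((nrows + 2) * ld, sizeof(double));
    double *unew = calloc((nrows + 2) * ld, sizeof(double));
    /* (initial and boundary conditions omitted; all zeros here) */

    for (int step = 0; step < NSTEPS; step++) {
        /* synchronous halo exchange: swap boundary rows with neighbors */
        MPI_Sendrecv(&u[1 * ld],       ld, MPI_DOUBLE, up,   0,
                     &u[(nrows+1)*ld], ld, MPI_DOUBLE, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[nrows * ld],   ld, MPI_DOUBLE, down, 1,
                     &u[0],            ld, MPI_DOUBLE, up,   1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* explicit update of interior points using the 5-point stencil */
        for (int i = 1; i <= nrows; i++)
            for (int j = 1; j <= NX; j++)
                unew[i*ld + j] = u[i*ld + j]
                    + R * (u[(i-1)*ld + j] + u[(i+1)*ld + j]
                         + u[i*ld + j - 1] + u[i*ld + j + 1]
                         - 4.0 * u[i*ld + j]);

        double *tmp = u; u = unew; unew = tmp;   /* swap time levels */
    }

    if (rank == 0) printf("done after %d steps\n", NSTEPS);
    free(u); free(unew);
    MPI_Finalize();
    return 0;
}

Algorithm 2 would replace the blocking exchanges with MPI_Isend/MPI_Irecv and update interior points while the halo messages are in flight.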

  13. Parallel Matrix Activity – Results • Spring 2014 • The entire class completed the matrix-vector multiplication part after a large amount of coaching from the instructor, including outlining the code on the board. • No one in the class completed the matrix-matrix multiplication part, and the students voiced their opinion that the assignment was too difficult. • None of the students completed the analytical modeling part. • Spring 2015 • Half of each of two 80-minute class periods was devoted to collaborative work on the assignment. • The instructor circulated around the room answering questions and giving feedback but did not provide detailed guidance as in the previous year. • Over half the class was able to complete the matrix-matrix multiplication portion. • One group of two graduate CS students completed the Cannon’s algorithm extra credit and successfully completed the analytical modeling part.

  14. Heat Equation Activity – Results • Spring 2016 • The entire class completed the performance model and iso-efficiency analysis correctly. • 16 out of 19 students successfully implemented Algorithm 1 after a moderate amount of coaching from the instructor, including an additional scheduled help session. • Nine students successfully implemented Algorithm 2 and compared the performance of Algorithms 1 and 2. • Five students successfully completed all four algorithms but did not finish the performance comparison of the algorithms.

  15. Exam Question – Example • Addressed learning outcome 2.5 • Spring 2014 • Spring 2015

  16. Exam Question – Example • Addressed learning outcome 2.5 • Spring 2016

  17. Comparison of Achievement of Learning Outcomes

  18. Comparison of Course Evaluations

  19. Conclusions and Future Work • Saw improvement in achievement of learning outcomes with the flipped classroom format in Spring 2015 • Higher level of achievement of lower- and mid-level outcomes • Students still struggled with outcomes related to analysis of algorithms and proofs of correctness • Revised activities and assignments to address these problems in Spring 2016 • More specific instructions and examples to help guide students with analysis and proofs • Higher achievement of the algorithm-analysis and proof-of-correctness learning outcomes • Future work: explore a completely online offering in collaboration with PurdueNext • Course materials available at http://svmoore.pbworks.com/

  20. References • Seven things you should know about flipped classrooms, Educause, 2012, https://net.educause.edu/ir/library/pdf/ELI7081.pdf • Jill Zarestky and Wolfgang Bangerth. Teaching high performance computing: lessons from a flipped classroom, project-based course on finite element methods. EduHPC@SC 2014: 34-41. • L. Deslauriers, E. Schelew, and C. Wieman. Improved learning in a large-enrollment physics class. Science 332, May 2011: 862-864. • M. D. Estes, R. Ingram, and J. C. Liu (2014). A review of flipped classroom research, practice, and technologies. International HETL Review, Volume 4, Article 7, URL: https://www.hetl.org/feature-articles/a-review-of-flipped-classroom-research-practice-and-technologies • Computer Science Curricula 2013: Curriculum guidelines for undergraduate programs in computer science, by the Joint Task Force on Computing Curricula – ACM/IEEE, December 2013, https://www.acm.org/education/CS2013-final-report.pdf • Shirley Moore and Steven Dunlop. A flipped classroom approach to teaching concurrency and parallelism. EduPar 2016, Chicago, IL, May 2016.

  21. Acknowledgments • We acknowledge the Education Allocation on the Stampede supercomputer at the Texas Advanced Computing Center that was used for the parallel programming assignments. • We would like to thank William White and Angela Opie for assisting with the administration and coordination of the PurdueNext courses.
