
Parallel and Distributed Algorithms (CS 6/76501) Spring 2007



  1. Parallel and Distributed Algorithms (CS 6/76501), Spring 2007 Johnnie W. Baker

  2. Overview and Syllabus

  3. Presentations
  • Professor Johnnie W. Baker
    • Instructor
    • Will give most presentations
  • Guest lecturers from the Parallel Processing Group
    • Occasional lecture in areas of expertise
    • Occasionally cover classes when I am away
  • Perhaps a lecture by a visiting professor

  4. Two Primary Textbooks
  • Parallel Computation: Models and Methods
    • Selim Akl, author
    • Prentice Hall, 1997
    • Access to an online copy will be provided.
    • A textbook focusing on a multiprocessor model and algorithms for this model.
  • Parallel Programming in C with MPI and OpenMP, Michael Quinn, McGraw Hill, 2004
  • There will also be some supplementary handouts.

  5. Prerequisites
  The prerequisite for this course is one of the following:
  • A course in the design and analysis of algorithms, such as CS 4/56101
  • CS 6/76105 Parallel and Distributed Computing
  • Permission

  6. Overview of Topics
  • Parallel algorithms are a key ingredient in solving a wide range of problems on various parallel systems
  • Sequential algorithms are designed for one standard model called the RAM
    • Random Access Machine
  • Due to the wide variety of parallel & distributed systems, multiple computational models are needed to describe the different types of systems
  • Important parallel models capture the essence of existing or projected future parallel systems.

  7. Overview (cont.)
  • Will include both synchronous & asynchronous models.
  • Asynchronous models & algorithms are closely related to distributed computing
  • Typical algorithms studied are for basic areas:
    • searching, sorting, graphs, matrices, computational geometry
  • While the focus will be on parallel algorithms, many of the models, algorithms, and principles are applicable to distributed systems as well.
  • Key course for those planning to work in parallel & distributed computing

  8. Major Topics Covered in PDA (Not necessarily listed in the order covered)
  • General topics
    • Analysis of parallel computation
    • Limits for parallel computation
  • PRAM model and algorithms
    • Parallel Random Access Machine (or Parallel “RAM”)
    • More published algorithms than for other models
    • Formerly the “standard parallel model”
  • Algorithms for some important interconnection networks
    • e.g., linear arrays, 2D mesh, hypercube
  • Bus-based & optical models & typical algorithms
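As a concrete taste of the PRAM style of algorithm mentioned above, the sketch below (not from the course materials; an illustrative example only) simulates the classic tree-reduction parallel sum: with n processors sharing memory, n values are summed in O(log n) synchronous steps, each inner loop iteration standing in for work one processor would do in parallel during that step.

```python
def pram_parallel_sum(values):
    """Simulate an EREW PRAM tree reduction.

    After step k, cell i (for i a multiple of 2**k) holds the sum of
    the up-to-2**k original values starting at position i.  A real
    PRAM would execute each step's additions simultaneously; here the
    "processors" are simulated one after another within each step.
    """
    a = list(values)  # the shared memory
    n = len(a)
    stride = 1
    while stride < n:
        # One synchronous PRAM step: processor i performs a[i] += a[i+stride].
        for i in range(0, n - stride, 2 * stride):
            a[i] += a[i + stride]
        stride *= 2  # half as many processors are active next step
    return a[0] if a else 0

print(pram_parallel_sum(range(1, 9)))  # sum of 1..8 = 36, in 3 parallel steps
```

With n values the while loop runs ceil(log2 n) times, which is the parallel time; the total work across all steps is still O(n), matching the sequential algorithm.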

  9. Major Topics in PDA (cont.)
  • Task/Channel model algorithms (using MPI)
    • The Quinn textbook uses this model.
    • Also used by the Grama et al. and Foster textbooks
    • Small subset of the MPI language used in pseudocode
  • BSP (Bulk Synchronous Parallel) model and algorithms
    • Essentially a combined computational & programming model
    • Has an extensive software library that can be used in programs
  • KSU’s associative model and algorithms
    • Air Traffic Control algorithms included

  10. Benefits
  • Most large software projects must be implemented on a parallel or distributed system
    • Needed for memory space
    • Needed to obtain reasonable speed
  • Parallel systems are much more efficient for computationally intensive applications
    • Distributed systems are much slower, due to greater communication bottlenecks from distributed databases and greater synchronization problems
  • Efficient algorithms and software are key to effective use of parallel & distributed systems.

  11. Benefits (cont.)
  • There is a wide choice of thesis and dissertation topics in the parallel & distributed computing area
    • Professors sponsoring parallel research are Baker, Farrell, and Walker
    • Farrell and Ruttan use parallel computation heavily as a tool in applications (bioinformatics, scientific computing, etc.)
    • Several professors work in distributed areas as well.
  • Students who are working on a thesis or dissertation in another area can also benefit from this course.
    • Parallel systems are often needed to handle computationally intensive problems (e.g., bioinformatics)

  12. Two Complementary Courses
  • Parallel & Distributed Computing (usually in Fall)
    • Parallel Architectures
    • Parallel Languages
    • Parallel Programming
    • Algorithm Examples for some architectures
  • Parallel & Distributed Algorithms (Alternate Springs)
    • Important Models of Computation
    • Designing Efficient Algorithms for Various Models
  • PDC and PDA can be taken in either order
    • It is more natural for PDC to be taken first
    • However, students often take PDA first

  13. Limited Overlap in PDC & PDA
  • Allows PDC and PDA to be taken in either order.
  • Performance Evaluation and Limits for Parallel Computation
    • Some general topics required for both courses
    • More practical coverage needed for programming in PDC
    • More theoretical considerations in PDA
  • Basic MPI Language Constructs
    • Used as a programming language in PDC
    • Only a small subset of commands used for algorithm descriptions in PDA
    • No programming in PDA

  14. Major Topics in Companion Course (PDC)
  • Fundamental concepts in parallel computation
  • Synchronous Computation
    • SIMD, Vector, Pipeline Computing
    • Associative and Multi-Associative Computing
    • ASC Language and Programming
    • MultiC Language and Programming
    • Fortran 90 and HPF Languages
  • Asynchronous (MIMD) Shared Memory Computation
    • OpenMP language
    • Symmetric Multiprocessors (SMPs)
  • Asynchronous (MIMD) Distributed Memory Computation
    • Communications
    • MPI Language and Programming
  • Architectures
    • Interconnection Networks (synchronous and asynchronous)
    • Specific Computer Examples for the above computation paradigms
  • MIMD-SIMD Comparisons in Real-Time Applications

  15. Assignments and Grading
  • Homework assignments
    • Problems assigned for most chapters
    • Probably 5-7 different assignments
    • No programming assignments
  • Course grade based on
    • Homework (& class presentations)
    • Midterm exam
    • Final exam
  • Approximate weights
    • Homework: 30%
    • Midterm Exam: 35%
    • Final Exam: 35%

  16. Disabilities Information In accordance with university policy, if you have a documented disability and require accommodations to obtain equal access to this course, please contact the instructor at the beginning of the semester or when given an assignment for which an accommodation is required. Students with disabilities must verify their eligibility through the Office of Student Disability Services (SDS) in the Michael Schwartz Student Services Center (672-3391).

  17. Plagiarism Information Plagiarism of any type will not be tolerated and will be dealt with in accordance with the University's administrative policy and procedures regarding student cheating and plagiarism. See the University Statement on Academic Dishonesty for more details. Additionally, unattributed copying from another webpage is also considered plagiarism. Also, see the Computer Science Department Academic Policy involving Programming.

  18. Attendance Information
  • It is important that students attend class regularly.
  • Material for the slides for this course is often obtained from multiple sources.
  • While my slides will contain a lot of information, I cannot include all of the information that will be covered in class.
  • Those who cannot attend regularly should discuss the situation with me in advance.
