SHARED-MEMORY PROGRAMMING (6th week)

Presentation Transcript


  1. SHARED-MEMORY PROGRAMMING (6th week)

  2. SHARED-MEMORY PROGRAMMING (6th week)
  • References
  • Introduction
  • The ANSI X3H5 Shared-Memory Model
  • The POSIX Threads Model
  • The OpenMP Standard

  3. REFERENCES
  • Kai Hwang and Zhiwei Xu, Scalable Parallel Computing: Technology, Architecture and Programming, ch. 12
  • Parallel Processing Course, Yang-Suk Kee (yskee@iris.snu.ac.kr), School of EECS, Seoul National University

  4. Introduction to Shared-Memory Programming Model
  [Figure: two threads (processes) running on processors access a shared variable X in the memory system through read(X) and write(X) operations.]
  • Shared-Memory Model / Shared Address Space (SAS) Model

  5. Introduction… (cont’d)
  • Naming
    • Any process can name any variable in shared space
  • Operations
    • Loads and stores, plus those needed for ordering
  • Simplest Ordering Model
    • Within a process/thread: sequential program order
    • Across threads: some interleaving (as in time-sharing)
    • Additional orders through synchronization
    • Again, compilers/hardware can violate orders without getting caught

  6. SYNCHRONIZATION
  • Mutual exclusion (locks) – see the sketch below
    • Ensure certain operations on certain data can be performed by only one process at a time
    • Like a room that only one person can enter at a time
    • No ordering guarantees
  • Event synchronization
    • Ordering of events to preserve dependences
    • e.g. producer -> consumer of data
    • 3 main types: point-to-point, global, group
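  As a concrete illustration of mutual exclusion (not part of the original slides), here is a minimal sketch using a Pthreads mutex; it assumes a POSIX system and a C99 compiler, and the Pthreads routines themselves are introduced later in this deck.

    #include <pthread.h>
    #include <stdio.h>

    /* Shared counter protected by a mutex: only one thread at a time
       may execute the increment, so no updates are lost. */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* enter the "room"  */
            counter++;                    /* critical section  */
            pthread_mutex_unlock(&lock);  /* leave the "room"  */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        printf("counter = %ld\n", counter);   /* always 400000 */
        return 0;
    }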

  7. NAMING AND OPERATIONS
  • Naming and operations in the programming model can be directly supported by lower levels, or translated by compiler, libraries or OS
  • Example
    • Shared virtual address space in programming model
    • Hardware interface supports shared physical address space
    • Direct support by hardware through v-to-p mappings, no software layers

  8. NAMING AND OPERATIONS (cont’d)
  • Hardware supports independent physical address spaces
  • Can provide SAS through OS, so in system/user interface
    • v-to-p mappings only for data that are local
    • remote data accesses incur page faults; brought in via page fault handlers
    • same programming model, different hardware requirements and cost model
  • Or through compilers or runtime, so above sys/user interface
    • shared objects, instrumentation of shared accesses, compiler support

  9. SHARED-MEMORY STANDARDS
  • No widely accepted standard
  • Three popular platform-independent shared-memory standards are
    • X3H5
    • OpenMP
    • POSIX Pthreads

  10. THE ANSI X3H5 MODEL
  • Established in 1993
  • Has greatly influenced many commercial shared-memory systems
  • Defines one conceptual standard programming model and 3 bindings for C, Fortran 77 and Fortran 90

  11. THE ANSI X3H5 MODEL (cont’d)
  • Main features
    • Parallelism Constructs
      • Parallel Blocks
      • Parallel Loop
    • Implicit Barrier
    • Support for thread interaction and synchronization

  12. PARALLELISM CONSTRUCTS
  • A parallelism construct is a pair of parallel and end parallel directives with the enclosed code
  • Program starts in sequential mode with one initial thread (base thread / master thread)
  • When the program encounters a parallel, it switches to parallel mode by creating a number of children threads
  • The team of the master thread and the children threads executes in parallel until an end parallel
  • After the end parallel, the program switches back to sequential mode (only the base thread continues execution)

  13. PARALLEL CONSTRUCTS IN AN X3H5 PROGRAM

    Program main
      A                    ! executed by only the base thread (sequential mode)
      parallel             ! switch to parallel mode
        B                  ! executed by every thread in the team
        psections
          section
            C              ! executed by one team member
          section
            D              ! executed by another team member
        end psections
        psingle
          E                ! executed by only one thread
        end psingle
        pdo i=1,6
          F(i)             ! all threads share the 6 iterations of the loop
        end pdo no wait    ! no implicit barrier at the end of the pdo
        G                  ! executed by every thread in the team
      end parallel
      H                    ! executed by only the base thread (sequential mode)
    End

  14. PARALLEL CONSTRUCTS IN AN X3H5 PROGRAM: ILLUSTRATION
  [Figure: execution timeline of three threads P, Q, R. A runs on the base thread alone; the parallel forks the team; B runs on all three threads; C and D run on different team members, followed by an implicit barrier; E runs on a single thread, followed by an implicit barrier; the loop iterations are shared as F(1:2), F(3:4), F(5:6) with no implicit barrier (no wait); G runs on all three threads; a final implicit barrier precedes end parallel, after which only the base thread runs H.]

  15. OTHER CONSTRUCTS
  • Inside a parallel construct, there are
    • Work-sharing constructs
      • Parallel block
      • Parallel loop (pdo … end pdo)
      • A single process (psingle … end psingle)
    • Other code, which is redundantly executed by every thread in the team
  • Parallel Block
    • Consists of many sections (psections … end psections)
    • Used to specify MPMD parallelism
    • Each section is to be executed by a team member

  16. OTHER CONSTRUCTS (cont’d)
  • Parallel Loop (pdo … end pdo)
    • Used to specify SPMD parallelism
    • The same code is to be executed by all team members

  17. OTHER FEATURES OF X3H5
  • Implicit Barrier
    • At parallel, end parallel, end psections, end pdo and end psingle (use no wait to avoid this)
    • A fence operation forces all memory accesses up to the barrier point to be consistent
  • Parallel and work-sharing constructs can be nested
  • Support for thread interaction
    • shared/private variables in a parallel construct
    • implicit and explicit barriers
    • 4 types of synchronization objects: latch, lock, event and ordinal
  • Support for thread synchronization
    • Lock/event synchronization
    • Critical region and ordinal objects

  18. THE POSIX THREADS (Pthreads) MODEL
  • Established by IEEE in 1995
  • Functionality and interface are similar to those of Solaris Threads
  • Defines a set of primitive routines to manage and synchronize threads
  • Uses mutex objects and condition variables for thread synchronization

  19. THE Pthreads MODEL (cont’d)
  • Thread management
    • pthread_create
    • pthread_exit
    • pthread_join
    • pthread_self
  • Thread synchronization primitives (see the producer/consumer sketch below)
    • pthread_mutex_init, pthread_mutex_destroy
    • pthread_mutex_lock, pthread_mutex_trylock, pthread_mutex_unlock
    • pthread_cond_init, pthread_cond_destroy
    • pthread_cond_wait, pthread_cond_timedwait
    • pthread_cond_signal, pthread_cond_broadcast
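  The following sketch (not from the original slides) shows how the mutex and condition-variable primitives listed above combine for event synchronization, here a single producer handing one value to a consumer; it assumes a POSIX system and a C99 compiler.

    #include <pthread.h>
    #include <stdio.h>

    /* Shared state protected by a mutex; the condition variable tells
       the consumer that the producer has published a value. */
    static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
    static int value = 0;
    static int has_value = 0;

    static void *producer(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock);
        value = 42;                      /* produce the data          */
        has_value = 1;
        pthread_cond_signal(&ready);     /* wake the waiting consumer */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!has_value)               /* guard against spurious wakeups */
            pthread_cond_wait(&ready, &lock);
        printf("consumed %d\n", value);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }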

  20. HELLO WORLD PROGRAM: PTHREAD VERSION

    #include <pthread.h>
    #include <stdio.h>

    void* thrfunc(void* arg){
        printf("hello from thread %d\n", *(int*)arg);
        return NULL;
    }//end thrfunc

    int main(void){
        pthread_t thread[4];
        pthread_attr_t attr;
        int arg[4] = {0,1,2,3};
        int i;

        // set up joinable threads with system scope
        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

        // create N threads
        for(i=0; i<4; i++)
            pthread_create(&thread[i], &attr, thrfunc, (void*)&arg[i]);

        // wait for the N threads to finish
        for(i=0; i<4; i++)
            pthread_join(thread[i], NULL);
    }//end main
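  Assuming GCC on a POSIX system, this example can be built with gcc -pthread hello_pthreads.c (the file name is arbitrary); each of the four threads prints its argument, though the order of the output lines is not deterministic.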

  21. THE OpenMP STANDARD
  • An Application Program Interface (API) used to explicitly direct multi-threaded, shared-memory parallelism
  • Inherits many concepts from the ANSI X3H5 model
  • Three API components
    • Compiler Directives
    • Runtime Library Routines
    • Environment Variables
  • Portable
    • APIs for C/C++ and Fortran
    • Multiple platforms: most Unix platforms and Windows NT

  22. THE OpenMP STANDARD (cont’d)
  • Standardized
    • Jointly proposed by a group of major computer hardware and software vendors
    • Expected to become an ANSI standard
  • What does OpenMP stand for?
    • Open specifications for multi-processing
    • Collaborative work with interested parties from the hardware and software industry, government and academia

  23. OpenMP IS NOT…
  • Meant for distributed-memory parallel systems by itself
  • Implemented identically by all vendors
  • Guaranteed to make the most efficient use of shared memory
    • There are no data locality constructs

  24. GOALS OF OpenMP
  • Standardization
    • Provide a standard among a variety of shared-memory architectures (platforms)
    • High-level interfaces to thread programming
  • Lean and Mean
    • A simple and limited set of directives for shared-address-space programming
    • Just 3 or 4 directives are enough to represent significant parallelism

  25. HELLO WORLD: OpenMP VERSION

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        #pragma omp parallel
        printf("hello from thread %d\n", omp_get_thread_num());
    }
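  Assuming GCC, this can be compiled with gcc -fopenmp hello_omp.c (the file name is arbitrary). The environment variable OMP_NUM_THREADS controls how many threads the parallel region forks, e.g. OMP_NUM_THREADS=4 ./a.out prints four greeting lines in some interleaved order.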

  26. GOALS OF OpenMP (cont’d)
  • Ease of use
    • Incrementally parallelize a serial program
      • Unlike the all-or-nothing approach of message passing
    • Implement both coarse-grain and fine-grain parallelism
  • Portability
    • Fortran (77, 90, and 95), C, and C++
    • Public forum for API and membership

  27. MATRIX MULTIPLICATION: SEQUENTIAL VERSION

    for (i=0; i<N; i++) {
        for (j=0; j<N; j++) {
            temp = 0;
            for (k=0; k<N; k++)
                temp += a[i][k] * b[k][j];
            c[i][j] = temp;
        }
    }

  28. MATRIX MULTIPLICATION: OpenMP VERSION

    /* Add one directive to the sequential version. Note that j and k
       must be listed as private along with temp; otherwise the threads
       would race on them (i, as the parallelized loop variable, is
       private automatically). */
    #pragma omp parallel for private(temp, j, k) schedule(static)
    for (i=0; i<N; i++) {
        for (j=0; j<N; j++) {
            temp = 0;
            for (k=0; k<N; k++)
                temp += a[i][k] * b[k][j];
            c[i][j] = temp;
        }
    }

  29. PROGRAMMING MODEL
  • Thread-Based Parallelism
    • A shared-memory process with multiple threads
    • Based upon multiple threads in the shared-memory programming paradigm
  • Explicit Parallelism
    • Explicit (not automatic) programming model
    • Offers the programmer full control over parallelization

  30. PROGRAMMING MODEL (cont’d)
  • Fork-Join Model
    • All OpenMP programs begin as a single sequential process: the master thread
    • Fork at the beginning of parallel constructs
      • The master thread creates a team of parallel threads
      • The statements enclosed by the parallel region construct are executed in parallel
    • Join at the end of parallel constructs
      • The threads synchronize and terminate after completing the statements in the parallel construct
      • Only the master thread exists

  31. FORK-JOIN MODEL
  [Figure: fork-join execution model, not reproduced in the transcript; see the sketch below.]
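  To make the fork-join behaviour of slides 30-31 concrete, here is a minimal sketch (not from the original slides, assuming a C compiler with OpenMP support) with two parallel regions separated by serial code.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        printf("serial: only the master thread (id %d)\n", omp_get_thread_num());

        #pragma omp parallel                       /* fork: a team is created */
        printf("region 1: thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
                                                   /* join: implicit barrier, */
                                                   /* only the master remains */
        printf("serial again: back to the master thread\n");

        #pragma omp parallel                       /* fork a new team         */
        printf("region 2: thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());

        return 0;                                  /* join, then program ends */
    }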

  32. PROGRAMMING MODEL (cont’d)
  • Compiler-Directive Based
    • Parallelism is specified through the use of compiler directives embedded in C/C++ or Fortran source code
  • Nested Parallelism Support
    • Parallel constructs may include other parallel constructs inside (see the sketch below)
    • Implementation-dependent
  • Dynamic Threads
    • Alter the number of threads used to execute parallel regions
    • Implementation-dependent
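  A minimal sketch (not from the original slides) of the nested-parallelism and dynamic-threads features; it assumes an OpenMP implementation that honours omp_set_nested and omp_set_dynamic, since both behaviours are implementation-dependent.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        omp_set_nested(1);    /* request support for nested parallel regions */
        omp_set_dynamic(1);   /* allow the runtime to adjust team sizes      */

        #pragma omp parallel num_threads(2)        /* outer team */
        {
            int outer = omp_get_thread_num();

            #pragma omp parallel num_threads(2)    /* inner team per outer thread */
            printf("outer thread %d, inner thread %d\n",
                   outer, omp_get_thread_num());
        }
        return 0;
    }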

  33. GENERAL CODE STRUCTURE

    #include <omp.h>

    main () {
        int var1, var2, var3;

        /* Serial code ... */

        /* Beginning of parallel section. Fork a team of threads.
           Specify variable scoping. */
        #pragma omp parallel private(var1, var2) shared(var3)
        {
            /* Parallel section executed by all threads ... */

            /* All threads join master thread and disband */
        }

        /* Resume serial code ... */
    }

  34. OPENMP COMPONENTS
  • Directives
    • Work-sharing constructs
    • Data environment clauses
    • Synchronization constructs
  • Runtime libraries
  • Environment variables
  (A small sketch exercising all three components follows below.)
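  The short example below (not from the original slides, assuming GCC with -fopenmp) touches each component: a work-sharing directive with a data-environment clause, a runtime library routine, and the OMP_NUM_THREADS environment variable controlling the team size.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        int i, sum = 0;

        /* Directive: work-sharing loop with a data-environment clause */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < 100; i++)
            sum += i;

        /* Runtime library routine */
        printf("sum = %d, max threads = %d\n", sum, omp_get_max_threads());
        return 0;
    }

  Running it as, e.g., OMP_NUM_THREADS=4 ./a.out sets the environment variable that determines how many threads execute the loop; the reduction clause guarantees sum = 4950 regardless of the thread count.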

  35. COMPARISON OF 5 SHARED-MEMORY PROGRAMMING STANDARDS
  [Table not reproduced in the transcript. Courtesy: OpenMP Standards Board, 1997]
