
OpenMP – Introduction *

Presentation Transcript


  1. OpenMP – Introduction* *Compiled from the UHEM summer workshop notes (uhem.itu.edu.tr)

  2. Outline • What is OpenMP? • Introduction (Code Structure, Directives, Threads etc.) • Limitations • Data Scope Clauses • Shared, Private • Work-sharing constructs • Synchronization

  3. What is OpenMP? • An Application Program Interface (API) that may be used to explicitly direct multithreaded, shared-memory parallelism • OpenMP is managed by the nonprofit technology consortium OpenMP Architecture Review Board (OpenMP ARB) and is jointly defined by a group of major computer hardware and software vendors, including AMD, IBM, Intel, Cray, HP, Fujitsu, Nvidia, NEC, Red Hat, Texas Instruments, Oracle Corporation, and more • Portable & standardized • APIs exist for both C/C++ and Fortran 77/90 • Multi-platform support (Unix, Linux, etc.)

  4. OpenMP Specifications • Version 3.1, Complete Specifications, July 2011 • Version 3.0, May 2008 • Version 2.5, May 2005 (C/C++ & Fortran) • Version 2.0 • C/C++, March 2002 • Fortran, November 2000 • Version 1.0 • C/C++, October 1998 • Fortran, October 1997 Detailed Info: http://www.openmp.org/wp/openmp-specifications/

  5. Intel & GNU OpenMP • Intel Compilers • OpenMP 2.5 conforming • Nested parallelism • Work-queuing extension to OpenMP • Interoperability with POSIX and Windows threads • OMP_DYNAMIC support • GNU OpenMP (OpenMP + gcc) • OpenMP 3.0 support (gcc 4.4 and later)

  6. OpenMP Programming Model • Explicit parallelism • Thread-based parallelism; the program runs with a user-specified number of threads • Uses the fork & join model, with synchronization points ("barrier", "critical region", "single processor region")

  7. Limitations of OpenMP • Shared Memory Model • Every thread must be able to reach the shared memory (SMP architecture)

  8. Terminology and Behavior • OpenMP Team = Master + Workers • A Parallel Region is a block of code executed by all threads simultaneously (it has an implicit barrier at its end) • The master thread always has thread id 0 • Parallel regions can be nested • An if clause can be used to guard the parallel region, as in the sketch below
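
A minimal sketch of an if-guarded parallel region; the variable n and the 10000 threshold are illustrative assumptions, not from the original slides:

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        int n = 100;  /* small problem size, so the guard below fails */

        /* The region forks a team only when the if() expression is true;
           here n > 10000 is false, so a single thread executes the block. */
        #pragma omp parallel if (n > 10000)
        {
            printf("thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }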

  9. Terminology and Behavior • A Work-Sharing construct divides the execution of the enclosed code region among the members of the team. (Loop, Section etc.)

  10. OpenMP Code Structure C/C++

    #include <omp.h>

    main () {
        int var1, var2, var3;
        /* Serial code */
        . . .
        /* Beginning of parallel section. Fork a team of threads.
           Specify variable scoping. */
        #pragma omp parallel private(var1, var2) shared(var3)
        {
            /* Parallel section executed by all threads */
            . . .
            /* All threads join master thread and disband */
        }
        /* Resume serial code */
        . . .
    }

  11. OpenMP Directives • Format in C/C++: #pragma omp directive-name [clause, ...] • #pragma omp: Required for all OpenMP C/C++ directives. • directive-name: A valid OpenMP directive. Must appear after the pragma and before any clauses. • [clause, ...]: Optional. Clauses can be in any order, and repeated as necessary unless otherwise restricted.

  12. OpenMP Directives • Example: #pragma omp parallel default(shared) private(beta,pi) • General Rules: • Directives follow the conventions of the C/C++ standards for compiler directives • Case sensitive • Only one directive-name may be specified per directive • Long directive lines can be "continued" on succeeding lines by escaping the newline character with a backslash ("\") at the end of a directive line

  13. OpenMP Directives • PARALLEL Region Construct: • A parallel region is a block of code that will be executed by multiple threads. • This is the fundamental OpenMP parallel construct. #pragma omp parallel [clause ...]

  14. OpenMP Directives C/C++ OpenMP structured-block definition:

    #pragma omp parallel [clause ...]
    {
        structured_block
    }

  15. When a thread reaches a PARALLEL directive • It creates a team of threads and becomes the master of the team • The master is a member of that team and has thread number 0 within it (THREAD ID) • Starting from the beginning of the parallel region, the code is duplicated and all threads will execute it • There is an implied barrier at the end of a parallel section • Only the master thread continues execution past this point

  16. Lab: Hello World
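
The transcript does not include the lab source itself; a minimal omp_hello.c consistent with the compile step on the next slide would look like this:

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        /* Fork a team of threads; each thread reports its own id. */
        #pragma omp parallel
        {
            int tid = omp_get_thread_num();
            printf("Hello World from thread = %d\n", tid);
        }   /* Implicit barrier: all threads join the master here. */
        return 0;
    }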

  17. Lab: Compiling Hello World

    $ gcc -fopenmp omp_hello.c -o omp_hello
    $ export OMP_NUM_THREADS=2
    $ ./omp_hello
    Hello World from thread = 0
    Hello World from thread = 1

  18. Lab: Hello World Optional Exercise: 1. Set OMP_NUM_THREADS to a higher value (such as 10). 2. Repeat the example. • Set the environment variable (export) • Run your OpenMP binary

    $ export OMP_NUM_THREADS=4
    $ ./omp_hello
    Hello OpenMP!
    Hello OpenMP!
    Hello OpenMP!
    Hello OpenMP!

  19. OpenMP Constructs

  20. Data Scope Attribute Clauses C/C++: shared (list) • SHARED Clause: • It declares the variables in its list to be shared among all threads in the team. • Behavior • A single instance of each listed variable exists; it is not copied per thread • All threads reference the original object

  21. Data Scope Attribute Clauses C/C++: private (list) • PRIVATE Clause: • It declares the variables in its list to be private to each thread. • Behavior • A new object of the same type is declared once for each thread in the team • All references to the original object are replaced with references to the new object • Variables declared PRIVATE are uninitialized for each thread (FIRSTPRIVATE can be used to initialize them from the original value); the sketch below contrasts the clauses
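
A short sketch contrasting shared, private, and firstprivate; the variable names are illustrative, and the critical directive (from the Synchronization part of the outline) guards the shared update:

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        int sum = 0;      /* shared: one copy visible to all threads         */
        int scratch = -1; /* private: per-thread copy, starts uninitialized  */
        int base = 10;    /* firstprivate: per-thread copy initialized to 10 */

        #pragma omp parallel shared(sum) private(scratch) firstprivate(base)
        {
            scratch = omp_get_thread_num(); /* must assign before use */
            #pragma omp critical            /* serialize the shared update */
            sum += base + scratch;
        }
        printf("sum = %d\n", sum);
        return 0;
    }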

  22. Work-Sharing Constructs • A work-sharing construct divides the execution of the enclosed code region among the members of the team that encounter it. • It must be enclosed in a parallel region; otherwise the enclosed code simply executes serially on a single thread. • Work-sharing constructs do not launch/create new threads. • There is no implied barrier upon entry to a work-sharing construct; however, there is an implicit barrier at the end of one.

  23. Work-Sharing Constructs • Types: for, sections, single

  24. Work-Sharing Constructs • for: shares iterations of a loop across the team; represents a type of "data parallelism" • sections: breaks work into separate, discrete sections, each executed by a thread; can be used to implement a type of "functional parallelism" • single: serializes a section of code so that only one thread executes it

  25. Work-Sharing Constructs • for directive (C/C++); the directive applies to the loop that immediately follows it:

    #pragma omp for [clause ...]
    for_loop
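
A minimal sketch of the for construct splitting a loop across the team; the array size N and the vector-add workload are illustrative assumptions:

    #include <omp.h>
    #include <stdio.h>
    #define N 1000

    int main(void) {
        int i;
        static double a[N], b[N], c[N];
        for (i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

        #pragma omp parallel shared(a, b, c) private(i)
        {
            /* Iterations are divided among the threads in the team;
               an implicit barrier follows the loop. */
            #pragma omp for
            for (i = 0; i < N; i++)
                c[i] = a[i] + b[i];
        }
        printf("c[%d] = %.1f\n", N - 1, c[N - 1]);
        return 0;
    }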

  26. Work-Sharing Constructs • schedule clause: schedule(kind [, chunk_size]) • static: less overhead; the default on many OpenMP compilers • dynamic & guided: useful for poorly balanced and unpredictable workloads; with guided, the chunk size decreases over time • runtime: if this schedule is selected, the decision regarding the scheduling kind is made at run time; the schedule and (optional) chunk size are set through the OMP_SCHEDULE environment variable

  27. Work-Sharing Constructs • schedule clause: describes how iterations of the loop are divided among the threads in the team • static: loop iterations are divided into pieces of size chunk and assigned to threads statically • dynamic: when a thread finishes one chunk, it is dynamically assigned another; the default chunk size is 1 • guided: the chunk size is exponentially reduced with each dispatched piece of the iteration space; the default chunk size is 1
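
A short sketch of the schedule clause on an explicitly scheduled loop; the chunk size of 4 and the 16-iteration loop are illustrative choices:

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        int i;
        /* dynamic with chunk size 4: an idle thread grabs the next
           4 iterations, which helps when iteration costs vary. */
        #pragma omp parallel for schedule(dynamic, 4)
        for (i = 0; i < 16; i++)
            printf("iteration %2d run by thread %d\n",
                   i, omp_get_thread_num());
        return 0;
    }

With schedule(runtime) in place of schedule(dynamic, 4), the same behavior can be requested at launch via export OMP_SCHEDULE="dynamic,4".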

  28. Work-Sharing Constructs • nowait clause (C/C++): • If specified, threads do not synchronize at the end of the parallel loop; they proceed directly to the statements that follow it.

  29. Work-Sharing Lab: nowait
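
The lab source is not in the transcript; a plausible sketch with two independent loops, where nowait lets threads start the second loop without waiting at the end of the first (array names are illustrative):

    #include <omp.h>
    #include <stdio.h>
    #define N 500

    int main(void) {
        int i;
        static double a[N], b[N];

        #pragma omp parallel shared(a, b) private(i)
        {
            /* nowait removes the implicit barrier: a thread that
               finishes its share of this loop moves on immediately. */
            #pragma omp for nowait
            for (i = 0; i < N; i++)
                a[i] = 0.5 * i;

            /* Safe only because this loop never reads a[]; the two
               loops are fully independent. */
            #pragma omp for
            for (i = 0; i < N; i++)
                b[i] = 2.0 * i;
        }
        printf("a[1] = %.1f, b[1] = %.1f\n", a[1], b[1]);
        return 0;
    }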
