
Introducción a la Programación Paralela (Memoria Compartida) [Introduction to Parallel Programming (Shared Memory)]


Presentation Transcript


  1. Introducción a la Programación Paralela (Memoria Compartida) Casiano Rodríguez León casiano@ull.es Departamento de Estadística, Investigación Operativa y Computación.

  2. Introduction

  3. What is parallel computing?
[Figure: a grid representing the problem to be solved, divided into four areas; CPU #1, #2, #3 and #4 each work on one area and exchange boundary data along x and y]
• Parallel computing: the use of multiple computers or processors working together on a common task.
• Each processor works on its own section of the problem.
• Processors are allowed to exchange information with other processors.

  4. Why do parallel computing?
• Limits of single-CPU computing:
  • Available memory
  • Performance
• Parallel computing allows us to:
  • Solve problems that don't fit on a single CPU
  • Solve problems that can't be solved in a reasonable time
• We can run:
  • Larger problems
  • Faster
  • More cases

  5. Performance Considerations

  6. Speedup
SPEEDUP = (time of the fastest sequential algorithm) / (time of the parallel algorithm)
SPEEDUP ≤ NUMBER OF PROCESSORS
• Consider a parallel algorithm that runs in T steps on P processors.
• It is a simple fact that the parallel algorithm can be simulated by a sequential machine in T×P steps.
• Therefore the best sequential algorithm runs in T_best_seq ≤ T×P steps, so SPEEDUP = T_best_seq / T ≤ P.
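A concrete (made-up) numeric illustration of these definitions: if the best sequential algorithm takes 100 seconds and the parallel algorithm takes 20 seconds on P = 8 processors, then SPEEDUP = 100/20 = 5, which respects the bound SPEEDUP ≤ P = 8.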

  7. Amdahl's Law
• Amdahl's Law places a strict limit on the speedup that can be realized by using multiple processors.
• Effect of multiple processors on run time:  t_P = (f_s + f_p/P) · t_1
• Effect of multiple processors on speedup:  S = 1 / (f_s + f_p/P)
• Where:
  • f_s = serial fraction of code
  • f_p = parallel fraction of code (f_s + f_p = 1)
  • P = number of processors
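To make the formula concrete, here is a minimal C sketch (not part of the slides; the function and variable names are illustrative) that tabulates Amdahl's predicted speedup for the parallel fractions plotted on the next slide:

    #include <stdio.h>

    /* Predicted speedup according to Amdahl's Law:
       S = 1 / (fs + fp / P), with fs = 1 - fp. */
    static double amdahl_speedup(double fp, int P) {
        double fs = 1.0 - fp;
        return 1.0 / (fs + fp / (double)P);
    }

    int main(void) {
        double fractions[] = { 1.000, 0.999, 0.990, 0.900 };
        int procs[] = { 1, 10, 50, 100, 250 };
        for (int f = 0; f < 4; f++)
            for (int p = 0; p < 5; p++)
                printf("fp = %.3f, P = %3d -> S = %6.1f\n",
                       fractions[f], procs[p],
                       amdahl_speedup(fractions[f], procs[p]));
        return 0;
    }

Even f_p = 0.99 saturates at a speedup of at most 100, far below the ideal S = P when P grows large.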

  8. Illustration of Amdahl's Law
[Plot: speedup versus number of processors (0 to 250) for f_p = 1.000, 0.999, 0.990 and 0.900]
It takes only a small fraction of serial content in a code to degrade the parallel performance. It is essential to determine the scaling behavior of your code before doing production runs using large numbers of processors.

  9. Amdahl's Law vs. Reality
[Plot: speedup versus number of processors (0 to 250) for f_p = 0.99, comparing the Amdahl's Law prediction with reality]
Amdahl's Law provides a theoretical upper limit on parallel speedup, assuming that there are no costs for communications. In reality, communications result in a further degradation of performance.

  10. Memory/Cache
[Diagram: CPU with its cache connected to main memory]

  11. Locality and Caches
[Diagram: processor with separate L1 and L2 instruction and data caches in front of instruction and data memory]
• a[i] = b[i] + c[i]
• On uniprocessor systems, memory is monolithic from a correctness point of view, but not from a performance point of view.
• It might take more time to bring a[i] from memory than to bring b[i].
• Bringing in a[i] at one point in time might take longer than bringing it in at a later point in time.

  12. Spatial and Temporal Locality
• Spatial locality: when an element is referenced, its neighbors will be referenced too.
• Temporal locality: when an element is referenced, it might be referenced again soon.

    for (i = 0; i < n; i++)
      for (j = 0; j < n; j++)
        a[j][i] = 0;

In this loop nest, a[j][i] and a[j+1][i] (consecutive inner iterations) have stride n, n being the dimension of a. The stride-1 access to a[j][i+1] occurs only n iterations after the reference to a[j][i]. Spatial locality is enhanced if the loops are exchanged, i.e. if the array is traversed in stride-1 order:

    for (i = 0; i < n; i++)
      for (j = 0; j < n; j++)
        a[i][j] = 0;
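To see the effect in practice, here is a small self-contained C sketch (not from the slides; the matrix size and the use of clock() are arbitrary choices) that times both traversal orders:

    #include <stdio.h>
    #include <time.h>

    #define N 2048
    static double a[N][N];   /* C is row-major: a[i][j] and a[i][j+1] are contiguous */

    static double time_ij(void) {          /* stride-1: good spatial locality */
        clock_t t0 = clock();
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = 0.0;
        return (double)(clock() - t0) / CLOCKS_PER_SEC;
    }

    static double time_ji(void) {          /* stride-N: poor spatial locality */
        clock_t t0 = clock();
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[j][i] = 0.0;
        return (double)(clock() - t0) / CLOCKS_PER_SEC;
    }

    int main(void) {
        printf("a[i][j] order: %.3f s\n", time_ij());
        printf("a[j][i] order: %.3f s\n", time_ji());
        return 0;
    }

On most cache-based machines the a[i][j] order is noticeably faster, because consecutive accesses fall in the same cache line.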

  13. Shared Memory Machines

  14. Shared and Distributed Memory
[Diagram: distributed memory, processors each with their own memory connected through a network; shared memory, processors connected to a single memory through a bus]
• Distributed memory: each processor has its own local memory. Message passing must be used to exchange data between processors.
• Shared memory: single address space. All processors have access to a pool of shared memory. Methods of memory access:
  • Bus
  • Crossbar

  15. Styles of Shared Memory: UMA and NUMA
• Uniform memory access (UMA): each processor has uniform access to memory. Also known as symmetric multiprocessors (SMPs).
• Non-uniform memory access (NUMA): the time for a memory access depends on the location of the data. Local access is faster than non-local access. Easier to scale than SMPs.

  16. UMA: Memory Access Problems
• Conventional wisdom is that these systems do not scale well:
  • Bus-based systems can become saturated.
  • Fast, large crossbars are expensive.
• Cache coherence problem:
  • Copies of a variable can be present in multiple caches.
  • A write by one processor may not become visible to others; they will keep accessing the stale value in their caches.
  • Actions must be taken to ensure visibility, i.e. cache coherence.

  17. Cache Coherence Problem
[Diagram: three processors with private caches on a shared bus to memory and I/O devices; u is initially 5 in memory, is cached by two processors (events 1 and 2), one processor then writes u = 7 (event 3), and later reads (events 4 and 5) return different values]
• Processors see different values for u after event 3.
• With write-back caches, the value written back to memory depends on the circumstance of which cache flushes or writes back its value, and when.
• Processes accessing main memory may see the old value.

  18. Snooping-Based Coherence
• Basic idea:
  • Transactions on memory are visible to all processors.
  • Processors (or their representatives) can snoop (monitor) the bus and take action on relevant events.
• Implementation:
  • When a processor writes a value, a signal is sent over the bus.
  • The signal is either:
    • Write invalidate: tell others that their cached value is invalid.
    • Write broadcast: tell others the new value.

  19. Memory Consistency Models

P0 (producer):

    While (there are more tasks) {
      Task = GetFromFreeList();
      Task->data = ....;
      insert Task in task queue;
    }
    Head = head of task queue;

P1, P2, P3, ..., Pn-1 (consumers):

    While (MyTask == NULL) {
      Begin Critical Section;
      if (Head != NULL) {
        MyTask = Head;
        Head = Head->Next;
      }
      End Critical Section;
    }
    .... = MyTask->data;      <-- What value is read here?

  20. Memory Consistency Models (cont.)
(Same producer/consumer code as the previous slide.)
In some commercial shared-memory systems it is possible to observe the old value of MyTask->data!!
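A minimal OpenMP-flavoured sketch of the consumer side, assuming a simple linked-list task type (the struct, the shared Head pointer and the function name are my own illustration, not the slides' code):

    #include <omp.h>
    #include <stddef.h>

    typedef struct task { struct task *next; int data; } task_t;

    task_t *Head = NULL;   /* shared head of the task queue, filled by the producer */

    /* Consumer loop, meant to run inside a parallel region. */
    int consume_one(void) {
        task_t *MyTask = NULL;
        while (MyTask == NULL) {
            #pragma omp critical(queue)   /* the slides' Begin/End Critical Section */
            {
                if (Head != NULL) { MyTask = Head; Head = Head->next; }
            }
        }
        /* On a machine with a relaxed consistency model, and without further
           synchronization with the producer's write to data, the value read
           here could still be the old one; this is the slides' point. */
        return MyTask->data;
    }

The named critical construct plays the role of the critical section; the slide's warning is that even then, reading MyTask->data may require additional synchronization with the producer on relaxed memory models.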

  21. Distributed Shared Memory (NUMA)
• Consists of N processors and a global address space.
• All processors can see all memory.
• Each processor has some amount of local memory.
• Access to the memory of other processors is slower.
• Hence: Non-Uniform Memory Access.

  22. SGI Origin 2000

  23. OpenMP Programming

  24. Origin2000 memory hierarchy

    Level                          Latency (cycles)
    register                       0
    primary cache                  2..3
    secondary cache                8..10
    local main memory & TLB hit    75
    remote main memory & TLB hit   250
    main memory & TLB miss         2000
    page fault                     10^6

  25. OpenMP
• OpenMP C and C++ Application Program Interface, Draft Version 2.0, November 2001 (Draft 11.05)
• OpenMP Architecture Review Board
• http://www.openmp.org/
• http://www.compunity.org/
• http://www.openmp.org/specs/
• http://www.it.kth.se/labs/cs/odinmp/
• http://phase.etl.go.jp/Omni/

  26. Hello World in OpenMP

    #include <omp.h>
    #include <stdio.h>

    int main() {
      int iam = 0, np = 1;
      #pragma omp parallel private(iam, np)   /* parallel region directive with data scoping clause */
      {
    #if defined (_OPENMP)
        np = omp_get_num_threads();
        iam = omp_get_thread_num();
    #endif
        printf("Hello from thread %d out of %d\n", iam, np);
      }
      return 0;
    }
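As a usage note (not part of the slides): with GCC or Clang this typically builds and runs as

    cc -fopenmp hello.c -o hello && OMP_NUM_THREADS=4 ./hello

and each of the four threads prints its own "Hello from thread i out of 4" line, in nondeterministic order. The flag is compiler-specific; the SGI MIPSpro compilers contemporary with these slides used -mp for this.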

  27. Defines a parallel region, to be executed by all the threads in parallel
[Diagram: array a partitioned into per-thread chunks delimited by begin and end, searched for x]

    #pragma omp parallel private(i, id, p, load, begin, end)
    {
      p = omp_get_num_threads();
      id = omp_get_thread_num();
      load = N / p;
      begin = id * load;
      end = begin + load;
      for (i = begin; ((i < end) && keepon); i++) {
        if (a[i] == x) {
          keepon = 0;
          position = i;
        }
        #pragma omp flush(keepon)
      }
    }

  28. Defines a parallel region, to be executed by all the threads in parallel
(Same parallel search code as slide 27.)

  29. A = (1000, ..., 901, 900, ..., 801, ..., 100, ..., 1)
[Diagram: the array A split into blocks of 100 elements, assigned to processors P0 (1000..901), P1 (900..801), ..., P9 (100..1)]
(Same parallel search code as slide 27.)
• Search for x = 100 with P = 10 processors.
• The sequential algorithm traverses 900 elements before finding x.
• Processor 9 finds x = 100 in its first step.
• Speedup ≈ 900/1 > 10 = P.
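A self-contained completion of the search example (my own sketch: the declarations and initialization of a, keepon and position, the handling of the remainder when N is not divisible by p, and the final printf are assumptions, not part of the slide):

    #include <omp.h>
    #include <stdio.h>
    #define N 1000

    int main(void) {
        int a[N], x = 100, keepon = 1, position = -1;
        int i, id, p, load, begin, end;

        for (i = 0; i < N; i++) a[i] = N - i;       /* A = (1000, 999, ..., 1) */

        #pragma omp parallel private(i, id, p, load, begin, end)
        {
            p = omp_get_num_threads();
            id = omp_get_thread_num();
            load = N / p;
            begin = id * load;
            end = (id == p - 1) ? N : begin + load; /* last thread takes the remainder */
            for (i = begin; ((i < end) && keepon); i++) {
                if (a[i] == x) { keepon = 0; position = i; }
                #pragma omp flush(keepon)
            }
        }
        printf("found %d at position %d\n", x, position);
        return 0;
    }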

  30. π = ∫_0^1 4/(1+x²) dx ≈ Σ_{0 ≤ i < N} 4 / (N·(1 + ((i+0.5)/N)²))
A work-sharing (WS) construct distributes the execution of the statement among the members of the team.

    main() {                        /* N assumed defined elsewhere */
      double local, pi = 0.0, w;
      long i;
      w = 1.0 / N;
      #pragma omp parallel private(i, local)
      {
        #pragma omp single
          pi = 0.0;
        #pragma omp for reduction(+: pi)
        for (i = 0; i < N; i++) {
          local = (i + 0.5) * w;
          pi = pi + w * 4.0 / (1.0 + local * local);  /* the factor w = 1/N completes the sum above */
        }
      }
    }

  31. Nested Parallelism

  32. The expression of nested parallelism in OpenMP has to conform to these two rules:
• A parallel directive dynamically inside another parallel establishes a new team, which is composed of only the current thread, unless nested parallelism is enabled (see the sketch below).
• for, sections and single directives that bind to the same parallel are not allowed to be nested inside each other.
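A minimal sketch of the first rule (my own example, not from the slides): without the omp_set_nested(1) call, each inner parallel region would be executed by a team of just one thread.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        omp_set_nested(1);                 /* enable nested parallelism */
        #pragma omp parallel num_threads(2)
        {
            int outer = omp_get_thread_num();
            #pragma omp parallel num_threads(2)
            {
                /* with nesting enabled, each outer thread gets its own inner team */
                printf("outer %d, inner %d of %d\n",
                       outer, omp_get_thread_num(), omp_get_num_threads());
            }
        }
        return 0;
    }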

  33. http://www.openmp.org/index.cgi?faq Q6: What about nested parallelism? A6: Nested parallelism is permitted by the OpenMP specification. Supporting nested parallelism effectively can be difficult, and we expect most vendors will start out by executing nested parallel constructs on a single thread. OpenMP encourages vendors to experiment with nested parallelism to help us and the users of OpenMP understand the best model and API to include in our specification. We will include the necessary functionality when we understand the issues better.

  34. A parallel directive dynamically inside another parallel establishes a new team, which is composed of only the current thread unless nested parallelism is enabled.
NANOS
Ayguade E., Martorell X., Labarta J., Gonzalez M. and Navarro N. Exploiting Multiple Levels of Parallelism in OpenMP: A Case Study. Proc. of the 1999 International Conference on Parallel Processing, Aizu (Japan), September 1999.
http://www.ac.upc.es/nanos/

  35. KAI
Shah S., Haab G., Petersen P., Throop J. Flexible control structures for parallelism in OpenMP. 1st European Workshop on OpenMP, Lund, Sweden, September 1999.
http://developer.intel.com/software/products/trans/kai/

    nodeptr list;
    ...
    #pragma omp taskq
    for (nodeptr p = list; p != NULL; p = p->next) {
      #pragma omp task
        process(p->data);
    }

  36. The Workqueuing Model

    #pragma omp taskq
    {
      #pragma omp task
      ...
    }

    void traverse(Node & node) {
      process(node.data);
      if (node.has_left) traverse(node.left);
      if (node.has_right) traverse(node.right);
    }

Robert D. Blumofe, Christopher F. Joerg, Bradley C. Kuszmaul, Charles E. Leiserson, Keith H. Randall, Yuli Zhou: Cilk: An Efficient Multithreaded Runtime System. Journal of Parallel and Distributed Computing 37(1): 55-69 (1996). http://supertech.lcs.mit.edu/cilk/
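For comparison, here is a sketch of the same tree traversal written with the task construct standardized later in OpenMP 3.0; this is my own illustration, not part of the slides or of the taskq proposal, and the Node type and process() are assumed:

    #include <omp.h>

    typedef struct node {
        struct node *left, *right;
        int data;
    } Node;

    void process(int data);    /* user-supplied work, assumed elsewhere */

    void traverse(Node *n) {
        if (n == NULL) return;
        process(n->data);
        #pragma omp task firstprivate(n)   /* spawn the left subtree as a task */
        traverse(n->left);
        #pragma omp task firstprivate(n)   /* spawn the right subtree as a task */
        traverse(n->right);
        #pragma omp taskwait               /* wait for both subtrees */
    }

    /* Typical driver: one thread creates the root task, the team executes them. */
    void traverse_parallel(Node *root) {
        #pragma omp parallel
        #pragma omp single
        traverse(root);
    }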

  37. A parallel directive dynamically inside another parallel establishes a new team, which is composed of only the current thread unless nested parallelism is enabled.
OMNI
Yoshizumi Tanaka, Kenjiro Taura, Mitsuhisa Sato, and Akinori Yonezawa. Performance Evaluation of OpenMP Applications with Nested Parallelism. Languages, Compilers, and Run-Time Systems for Scalable Computers, pp. 100-112, 2000.
http://pdplab.trc.rwcp.or.jp/Omni/

  38. What were the reasons that led the designers to the constraints implied by the second rule?
"for, sections and single directives that bind to the same parallel are not allowed to be nested inside each other."
Simplicity!!

  39. for, sections and single directives that bind to the same parallel are not allowed to be nested inside each other.
Chandra R., Menon R., Dagum L., Kohr D., Maydan D. and McDonald J. Parallel Programming in OpenMP. Morgan Kaufmann Publishers / Academic Press, 2001.

  40. for, sections and single directives that bind to the same parallel are not allowed to be nested inside each other.
Page 122: "A work-sharing construct divides a piece of work among a team of parallel threads. However, once a thread is executing within a work-sharing construct, it is the only thread executing that code; there is no team of threads executing that specific piece of code anymore, so, it is nonsensical to attempt to further divide a portion of work using a work-sharing construct."
Nesting of work-sharing constructs is therefore illegal in OpenMP.

  41. Divide and Conquer
[Figures: the recursive structure of the Fast Fourier Transform and of QuickHull, with points labelled P1, P2, P3]

    void qs(int *v, int first, int last) {
      int i, j;
      if (first < last) {
        #pragma ll MASTER
          partition(v, &i, &j, first, last);
        #pragma ll sections firstprivate(i, j)
        {
          #pragma ll section
            qs(v, first, j);
          #pragma ll section
            qs(v, i, last);
        }
      }
    }

  42. [Diagram: the full recursion tree of qs on the array 1 2 3 4 0, with the recursive calls qs(v, first, j) and qs(v, i, last) at each level]
(Same qs code as slide 41.)

  43. [Diagram: the qs recursion tree on the array 1 2 3 4 0, partially unfolded]
(Same qs code as slide 41.)

  44. [Diagram: the first level of the qs recursion on the array 1 2 3 4 0]
(Same qs code as slide 41.)

  45. OpenMP Architecture Review Board: OpenMP C and C++ Application Program Interface v. 1.0, October 1998. http://www.openmp.org/specs/mp-documents/cspec10.ps
(Same qs code as slide 41.)
Page 14: "The sections directive identifies a non-iterative work-sharing construct that specifies a set of constructs that are to be divided among threads in a team. Each section is executed once by a thread in the team."

  46. NWS: The Run Time Library

Directive version:

    void qs(int *v, int first, int last) {
      int i, j;
      if (first < last) {
        #pragma ll MASTER
          partition(v, &i, &j, first, last);
        #pragma ll sections firstprivate(i, j)
        {
          #pragma ll section
            qs(v, first, j);
          #pragma ll section
            qs(v, i, last);
        }
      }
    }

Translated to run-time library calls:

    void qs(int *v, int first, int last) {
      int i, j;
      if (first < last) {
        MASTER
          partition(v, &i, &j, first, last);
        ll_2_sections(
          ll_2_first_private(i, j),
          qs(v, first, j),
          qs(v, i, last)
        );
      }
    }

  47. NWS: The Run Time Library

    #define ll_2_sections(decl, f1, f2) \
      decl; \
      ll_barrier(); \
      if (ll_NUMPROCESSORS > 1) { \
        int ll_oldname = ll_NAME, \
            ll_oldnp = ll_NUMPROCESSORS; \
        if (ll_DIVIDE(2)) { f2; } \
        else { f1; } \
        ll_REJOIN(ll_oldname, ll_oldnp); \
      } \
      else { f1; f2; }

  48. GROUP BARRIER (ORIGIN STYLE)
[Diagram: per-processor arrival flags for processors 0..3 being set by the slaves and reset by the master over successive steps]

    void ll_barrier() {
      MASTER {
        int i, all_arrived;
        do {
          all_arrived = 1;
          for (i = 1; i < ll_NUMPROCESSORS; i++)
            if (!(ll_ARRIVED[i])) all_arrived = 0;
        } while (!all_arrived);
        for (i = 1; i < ll_NUMPROCESSORS; i++)
          ll_ARRIVED[i] = 0;
      }
      SLAVE {
        *ll_ARRIVED = 1;
        while (*ll_ARRIVED)
          ;
      }
    }
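The MASTER and SLAVE macros are not shown on the slides. A plausible (hypothetical) reading, consistent with the code above, is that MASTER guards code executed only by the thread with ll_NAME == 0, SLAVE guards the remaining threads, and ll_ARRIVED points at each slave's own arrival flag while the master indexes the whole flag array:

    /* Hypothetical definitions, for illustration only. */
    #define MASTER if (ll_NAME == 0)
    #define SLAVE  if (ll_NAME != 0)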

  49.

    int ll_DIVIDE(int ngroups) {
      int ll_group;
      double ll_first;
      double ll_last;
      ll_group = (int)floor(((double)(ngroups * ll_NAME)) / ((double)ll_NUMPROCESSORS));
      ll_first = (int)ceil(((double)(ll_NUMPROCESSORS * ll_group)) / ((double)ngroups));
      ll_last  = (int)ceil((((double)(ll_NUMPROCESSORS * (ll_group + 1))) / ((double)ngroups)) - 1);
      ll_NUMPROCESSORS = ll_last - ll_first + 1;
      ll_NAME = ll_NAME - ll_first;
      return ll_group;
    }

    void ll_REJOIN(int old_name, int old_np) {
      ll_NAME = old_name;
      ll_NUMPROCESSORS = old_np;
    }

(A bit) more overhead if weights are provided!
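A worked example of the split (numbers chosen by me for illustration): with ll_NUMPROCESSORS = 10 and ngroups = 2, threads with ll_NAME 0..4 get ll_group = 0 (ll_first = 0, ll_last = 4) and threads 5..9 get ll_group = 1 (ll_first = 5, ll_last = 9). After the call every thread sees ll_NUMPROCESSORS = 5 and has been renamed 0..4 within its group; ll_REJOIN later restores the saved name and group size.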

  50.
[Diagram: the master's ll_FIRST_PRIVATE entry points to an array holding &ll_var1 and &ll_var2; slaves 1, 2 and 3 locate it through their own ll_FIRST_PRIVATE pointers and copy the values]

    #define ll_2_first_private(ll_var1, ll_var2) { \
      void **q; \
      MASTER { \
        *ll_FIRST_PRIVATE = (void *) malloc(2 * sizeof(void *)); \
        q = *ll_FIRST_PRIVATE; \
        *q = (void *) &(ll_var1); \
        q++; \
        *q = (void *) &(ll_var2); \
      } \
      ll_barrier(); \
      SLAVE { \
        q = *(ll_FIRST_PRIVATE - ll_NAME); \
        memcpy((void *) &(ll_var1), *q, sizeof(ll_var1)); \
        q++; \
        memcpy((void *) &(ll_var2), *q, sizeof(ll_var2)); \
      } \
      ll_barrier(); \
      MASTER free(*ll_FIRST_PRIVATE); \
    }
