
  1. 6. Multiprocessing 6.1. Introduction

  2. Important Note • The presentation of this part of the course is largely based on an international course taught at DEI in Sept. 2003 on “Cluster Computing and Parallel Programming”. The original slides can be found at: http://eden.dei.uc.pt/~pmarques/courses/best2003/pmarques_best.pdf • Beyond those materials, the main sources are Chapter 6 of [CAQA] and Chapter 9 of “Computer Organization and Design”

  3. Motivation • I have a program that takes 7 days to execute, which is far too long for practical use. How do I make it run in 1 day? • Work smarter! (i.e. find better algorithms) • Work faster! (i.e. buy a faster processor/memory/machine) • Work harder! (i.e. add more processors!!!)

  4. Motivation • We are interested in the last approach: • Add more processors! (We don’t care about being too smart or spending too much $$$ on bigger, faster machines!) • Why? • It may not be feasible to find better algorithms • Normally, faster, bigger machines are very expensive • There are lots of computers available in any institution (especially at night) • There are computer centers from which you can buy parallel machine time • Adding more processors enables you not only to run things faster, but to run bigger problems

  5. Motivation • “Adding more processors enables you not only to run things faster, but to run bigger problems”?! • “9 women cannot have a baby in 1 month, but they can have 9 babies in 9 months” • This is (informally) called the Gustafson-Barsis law • What the Gustafson-Barsis law tells us is that when the size of the problem grows, there is normally more parallelism available
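Not on the original slide, but a compact way to state the law (a sketch, taking s as the serial fraction of the work measured on the parallel run with P processors): the scaled speedup is

\[
  S(P) \;=\; s + (1 - s)\,P \;=\; P - s\,(P - 1)
\]

For example, with s = 0.1 and P = 9, S(9) = 9 − 0.1 × 8 = 8.2: because the problem grows with the machine, the scaled speedup grows almost linearly with the number of processors.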

  6. 6. Multiprocessing 6.2. Machine Architecture

  7. von Neumann Architecture • Based on the fetch-decode-execute cycle • The computer executes a single sequence of instructions that act on data. Both program and data are stored in memory. • [Diagram: a single flow of instructions acting on data held in memory]

  8. Flynn's Taxonomy • Classifies computers according to… • The number of execution flows • The number of data flows

  9. Single Instruction, Single Data (SISD) • A serial (non-parallel) computer • Single instruction: only one instruction stream is being acted on by the CPU during any one clock cycle • Single data: only one data stream is being used as input during any one clock cycle • Most PCs, single CPU workstations, …

  10. Single Instruction, Multiple Data (SIMD) • A type of parallel computer • Single instruction: All processing units execute the same instruction at any given clock cycle • Multiple data: Each processing unit can operate on a different data element • Best suited for specialized problems characterized by a high degree of regularity, such as image processing. • Examples: Connection Machine CM-2, Cray J90, Pentium MMX instructions • [Diagram: a single vector instruction ADD V3, V1, V2 adds all the elements of V1 and V2 into V3 at once]
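As a rough illustration of the vector ADD above, here is a minimal C sketch using SSE2 intrinsics (a later relative of the Pentium MMX instructions mentioned on the slide); the function name and the assumption that n is a multiple of 4 are illustrative only:

#include <emmintrin.h>  /* SSE2 intrinsics */

/* One ADD instruction operates on 4 data elements at a time:
   single instruction, multiple data. Assumes n is a multiple of 4. */
void vector_add(const int *v1, const int *v2, int *v3, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128i a = _mm_loadu_si128((const __m128i *)(v1 + i));
        __m128i b = _mm_loadu_si128((const __m128i *)(v2 + i));
        _mm_storeu_si128((__m128i *)(v3 + i), _mm_add_epi32(a, b));
    }
}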

  11. The Connection Machine 2 (SIMD) • The massively parallel Connection Machine 2 was a supercomputer produced by Thinking Machines Corporation, containing 32,768 (or more) 1-bit processors working in parallel.

  12. Multiple Instruction, Single Data (MISD) • Few actual examples of this class of parallel computer have ever existed • Some conceivable examples might be: • multiple frequency filters operating on a single signal stream • multiple cryptography algorithms attempting to crack a single coded message • the Data Flow Architecture

  13. Multiple Instruction, Multiple Data (MIMD) • Currently, the most common type of parallel computer • Multiple Instruction: every processor may be executing a different instruction stream • Multiple Data: every processor may be working with a different data stream • Execution can be synchronous or asynchronous, deterministic or non-deterministic • Examples: most current supercomputers, computer clusters, multi-processor SMP machines (inc. some types of PCs)

  14. IBM BlueGene/L DD2 • Department of Energy, Lawrence Livermore National Laboratory (California, USA) • Currently the fastest machine on earth (70 TFLOPS) • Some facts: 32768 × 700 MHz PowerPC 440 CPUs (dual processors); 512 MB RAM per node, for a total of 16 TByte of RAM; 3D torus network, 300 MB/sec per node.

  15. IBM BlueGene/L DD2

  16. What about Memory? • The interface between CPUs and Memory in Parallel Machines is of crucial importance • The bottleneck on the bus between memory and CPU is known as the von Neumann bottleneck • It limits how fast a machine can operate, since it determines the achievable ratio between computation and communication

  17. Communication in Parallel Machines • Programs act on data. • Quite important: how do processors access each other’s data? • [Diagram: the Shared Memory Model, with several CPUs attached to one memory, versus the Message Passing Model, with CPU+memory nodes connected by a network]

  18. Shared Memory • Shared memory parallel computers vary widely, but generally have in common the ability for all processors to access all memory as a global address space • Multiple processors can operate independently but share the same memory resources • Changes in a memory location made by one processor are visible to all other processors • Shared memory machines can be divided into two main classes based upon memory access times: UMA and NUMA

  19. Shared Memory (2) • [Diagram: a single 4-processor machine, with all CPUs attached to one memory, and a 3-processor NUMA machine, with per-CPU memories joined by a fast memory interconnect] • UMA: Uniform Memory Access • NUMA: Non-Uniform Memory Access

  20. Uniform Memory Access (UMA) • Most commonly represented today by Symmetric Multiprocessor (SMP) machines • Identical processors • Equal access and access times to memory • Sometimes called CC-UMA - Cache Coherent UMA. • Cache coherent means if one processor updates a location in shared memory, all the other processors know about the update. Cache coherency is accomplished at the hardware level. • Very hard to scale

  21. Non-Uniform Memory Access (NUMA) • Often made by physically linking two or more SMPs. One SMP can directly access memory of another SMP. • Not all processors have equal access time to all memories • Sometimes called DSM – Distributed Shared Memory • Advantages • User-friendly programming perspective on memory • Data sharing between tasks is fast due to the proximity of memory and CPUs • More scalable than SMPs • Disadvantages • Lack of scalability between memory and CPUs • Programmer responsibility for synchronization constructs that ensure "correct" access to global memory • Expensive: it becomes increasingly difficult and expensive to design and produce shared memory machines with ever increasing numbers of processors

  22. UMA and NUMA • SGI Origin 3900: 16 R14000A processors per brick, each brick with 32 GBytes of RAM; 12.8 GB/s aggregated memory bandwidth (scales up to 512 processors and 1 TByte of memory) • The Power Mac G5 features 2 PowerPC 970/G5 processors that share a common central memory (up to 8 GByte)

  23. Distributed Memory (DM) • Processors have their own local memory. • Memory addresses in one processor do not map to another processor (no global address space) • Because each processor has its own local memory, cache coherency does not apply • Requires a communication network to connect inter-processor memory • When a processor needs access to data in another processor, it is usually the task of the programmer to explicitly define how and when data is communicated. • Synchronization between tasks is the programmer's responsibility • Very scalable • Cost effective: use of off-the-shelf processors and networking • Slower than UMA and NUMA machines

  24. Distributed Memory • [Diagram: three computers, each with its own CPU and local memory, connected by a network interconnect] • TITAN@DEI, a PC cluster interconnected by FastEthernet

  25. Hybrid Architectures • Today, most systems are a hybrid, featuring both shared and distributed memory. • Each node has several processors that share a central memory • A fast switch interconnects the several nodes • In some cases the interconnect allows memory to be mapped across nodes; in most cases it provides a message passing interface • [Diagram: several multi-processor nodes, each with its own shared memory, connected by a fast network interconnect]

  26. ASCI White at the Lawrence Livermore National Laboratory • Each node is an IBM POWER3 375 MHz NH-2 16-way SMP (i.e. 16 processors/node) • Each node has 16 GB of memory • A total of 512 nodes, interconnected by a 2 GB/sec node-to-node network • The 512 nodes feature a total of 8192 processors and a total of 8192 GB of memory • It currently operates at 13.8 TFLOPS

  27. Summary

  28. Summary (2) • Plot of top 500 supercomputer sites over a decade

  29. 6. Multiprocessing 6.3. Programming Models and Challenges

  30. Warning • We will now introduce the main ways in which you can program a parallel machine. • Don’t worry if you don’t immediately visualize all the primitives that the APIs provide. We will cover that later. For now, you just have to understand the main ideas behind each paradigm. • In summary: DON’T PANIC!

  31. The main programming models… • A programming model abstracts the programmer from the hardware implementation • The programmer sees the whole machine as a big virtual computer that runs several tasks at the same time • The main models in current use are: • Shared Memory • Message Passing • Data parallel / Parallel Programming Languages • Note that this classification is not all-inclusive. There are hybrid approaches, and some of the models overlap (e.g. data parallel with shared memory/message passing)

  32. Shared Memory Model • [Diagram: processes or threads A, B, C and D all reading and writing globally accessible (shared) memory that holds double matrix_A[N]; double matrix_B[N]; double result[N];]

  33. Shared Memory Model • Independently of the hardware, each program sees a global address space • Several tasks execute at the same time and read and write from/to the same virtual memory • Locks and semaphores may be used to control access to the shared memory • An advantage of this model is that there is no notion of data “ownership”. Thus, there is no need to explicitly specify the communication of data between tasks. • Program development can often be simplified • An important disadvantage is that it becomes more difficult to understand and manage data locality. Performance can be seriously affected.

  34. Shared Memory Models • There are two major shared memory models: • All tasks have access to all the address space (typical in UMA machines running several threads) • Each task has its own address space. Most of the address space is private. A certain zone is visible across all tasks. (typical in DSM machines running different processes) • [Diagram: on the left, tasks A, B and C share a single address space; on the right, tasks A and B each have a private memory plus a shared memory region]

  35. Shared Memory Model – Closely Coupled Implementations • On shared memory platforms, the compiler translates user program variables into global memory addresses • Typically a thread model is used for developing the applications • POSIX Threads • OpenMP • There are also some parallel programming languages that offer a global memory model, although data and tasks are distributed • For DSM machines, no standard exists, although there are some proprietary implementations

  36. Shared Memory – Thread Model • A single process can have multiple threads of execution • Each thread can be scheduled on a different processor, taking advantage of the hardware • All threads share the same address space • From a programming perspective, thread implementations commonly comprise: • A library of subroutines that are called from within parallel code • A set of compiler directives embedded in either serial or parallel source code • Unrelated standardization efforts have resulted in two very different implementations of threads: POSIX Threads and OpenMP

  37. POSIX Threads • Library based; requires parallel coding • Specified by the IEEE POSIX 1003.1c standard (1995), also known as PThreads • C Language • Most hardware vendors now offer PThreads • Very explicit parallelism; requires significant programmer attention to detail
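A minimal PThreads sketch in C (names and sizes are illustrative, error checking omitted): all threads share the same address space, so each one simply works on its own disjoint slice of a global array and no locking is needed.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define SIZE     1000            /* assumed divisible by NTHREADS */

static double data[SIZE];        /* shared: visible to every thread */

static void *worker(void *arg)
{
    long id = (long)arg;         /* thread index 0..NTHREADS-1 */
    int chunk = SIZE / NTHREADS;
    for (int i = id * chunk; i < (id + 1) * chunk; i++)
        data[i] *= 2.0;          /* disjoint slices: no lock required */
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    printf("done: data[0] = %f\n", data[0]);
    return 0;
}

Compile with something like cc -pthread file.c.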

  38. OpenMP • Compiler directive based; can use serial code • Jointly defined and endorsed by a group of major computer hardware and software vendors. The OpenMP Fortran API was released October 28, 1997. The C/C++ API was released in late 1998 • Portable / multi-platform, including Unix and Windows NT platforms • Available in C/C++ and Fortran implementations • Can be very easy and simple to use - provides for “incremental parallelism” • No free compilers available
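A minimal OpenMP sketch in C, just to show the “incremental parallelism” idea: the directive asks the compiler to split the loop iterations across threads, and removing it leaves perfectly valid serial code (the array size is illustrative).

#include <omp.h>
#include <stdio.h>

#define SIZE 1000000

static double a[SIZE];

int main(void)
{
    /* The compiler directive parallelizes this loop; without the
       pragma the program is ordinary serial C. */
    #pragma omp parallel for
    for (int i = 0; i < SIZE; i++)
        a[i] = 0.5 * i;

    printf("max threads: %d, a[10] = %f\n", omp_get_max_threads(), a[10]);
    return 0;
}

Compile with an OpenMP-capable compiler (many compilers today accept -fopenmp).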

  39. Message Passing Model • The programmer must send and receive messages explicitly

  40. Message Passing Model • A set of tasks that use their own local memory during computation. • Tasks exchange data through communications by sending and receiving messages • Multiple tasks can reside on the same physical machine as well as across an arbitrary number of machines. • Data transfer usually requires cooperative operations to be performed by each process. For example, a send operation must have a matching receive operation.

  41. Message Passing Implementations • Message Passing is generally implemented as libraries which the programmer calls • A variety of message passing libraries have been available since the 1980s • These implementations differed substantially from each other making it difficult for programmers to develop portable applications • In 1992, the MPI Forum was formed with the primary goal of establishing a standard interface for message passing implementations

  42. MPI – The Message Passing Interface • Part 1 of the Message Passing Interface (MPI), the core, was released in 1994. • Part 2 (MPI-2), the extensions, was released in 1996. • Freely available on the web: http://www.mpi-forum.org/docs/docs.html • MPI is now the “de facto” industry standard for message passing • Nevertheless, most systems do not implement the full specification, especially MPI-2 • For shared memory architectures, MPI implementations usually don’t use a network for task communications • Typically a set of devices is provided, some for network communication and some for shared memory; in most cases they can coexist.
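A minimal MPI sketch in C showing the matching send/receive pair mentioned earlier (process 0 sends one integer to process 1; run with at least two processes, e.g. mpirun -np 2 ./a.out):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* every send needs a matching receive on the other side */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}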

  43. Data Parallel Model • Typically a set of tasks performs the same operations on different parts of a big array

  44. Data Parallel Model • The data parallel model demonstrates the following characteristics: • Most of the parallel work focuses on performing operations on a data set • The data set is organized into a common structure, such as an array or cube • A set of tasks works collectively on the same data structure, however, each task works on a different partition of the same data structure • Tasks perform the same operation on their partition of work, for example, “add 4 to every array element” • On shared memory architectures, all tasks may have access to the data structure through global memory. • On distributed memory architectures the data structure is split up and resides as "chunks" in the local memory of each task

  45. Data Parallel Programming • Typically accomplished by writing a program with data parallel constructs • calls to a data parallel subroutine library • compiler directives • In most cases, parallel compilers are used: • High Performance Fortran (HPF): Extensions to Fortran 90 to support data parallel programming • Compiler Directives: Allow the programmer to specify the distribution and alignment of data. Fortran implementations are available for most common parallel platforms • DM implementations have the compiler convert the program into calls to a message passing library to distribute the data to all the processes. • All message passing is done invisibly to the programmer
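As a plain-C sketch of the partitioning idea (not HPF; the function name and the block distribution are illustrative): each task derives its own “chunk” of the array from its id and applies the same operation, “add 4 to every array element”, to that partition only.

/* Data-parallel decomposition sketch: task `id` out of `ntasks`
   applies the same operation to its own block of the array. */
void add4_chunk(double *a, int n, int id, int ntasks)
{
    int chunk = (n + ntasks - 1) / ntasks;       /* block size, rounded up */
    int lo = id * chunk;
    int hi = (lo + chunk < n) ? lo + chunk : n;  /* last block may be short */
    for (int i = lo; i < hi; i++)
        a[i] += 4.0;                             /* same op, different data */
}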

  46. Summary • Middleware for parallel programming: • Shared memory: all the tasks (threads or processes) see a global address space. They read and write directly from memory and synchronize explicitly. • Message passing: the tasks have private memory. For exchanging information, they send and receive data through a network. There is always a send() and receive() primitive. • Data parallel: the tasks work on different parts of a big array. Typically accomplished by using a parallel compiler which allows data distribution to be specified.

  47. Final Considerations… Beware of Amdahl's Law!

  48. Load Balancing • [Diagram: tasks 1, 2 and 3 doing different amounts of work over time; the tasks that finish early sit waiting for the longest one] • Load balancing is always a factor to consider when developing a parallel application. • Too coarse a granularity → poor load balancing • Too fine a granularity → too much communication • The ratio computation/communication is of crucial importance!

  49. Amdahl's Law • The speedup depends on the amount of code that cannot be parallelized: speedup = T / (s·T + (1 − s)·T/n) = 1 / (s + (1 − s)/n) • n: number of processors • s: fraction of the code that cannot be made parallel • T: time it takes to run the code serially
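A small worked example in C of the formula above (the serial fraction s = 0.05 and the processor counts are illustrative); it shows why the next slide is “bad news”: the speedup saturates near 1/s no matter how many processors are added.

#include <stdio.h>

/* Amdahl speedup: T / (s*T + (1-s)*T/n) = 1 / (s + (1-s)/n) */
static double speedup(double s, int n)
{
    return 1.0 / (s + (1.0 - s) / n);
}

int main(void)
{
    const double s = 0.05;                /* 5% of the code stays serial */
    const int procs[] = { 2, 16, 64, 1024 };

    for (int i = 0; i < 4; i++)
        printf("n = %4d -> speedup = %.1f\n", procs[i], speedup(s, procs[i]));
    /* prints roughly 1.9, 9.1, 15.4, 19.6; the limit is 1/s = 20 */
    return 0;
}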

  50. Amdahl's Law – The Bad News!
