
High Performance Platforms


Presentation Transcript


  1. High Performance Platforms Salvatore Orlando

  2. Scope of Parallelism: the roadmap towards HPC • Conventional architectures are coarsely comprised of • processor, memory system, I/O • Each of these components presents significant performance bottlenecks. • It is important to understand each of these performance bottlenecks to reduce their effects • Parallelism addresses each of these components in significant ways to improve performance and reduce the bottlenecks • Applications utilize different aspects of parallelism, e.g., • data-intensive applications utilize high aggregate memory bandwidth • server applications utilize high aggregate network bandwidth • scientific applications typically utilize high processing and memory system performance

  3. Implicit Parallelism: Trends in Microprocessor Architectures • Microprocessor clock speeds have posted impressive gains over the past two decades (two to three orders of magnitude) • there are limits to this increase, also due to power consumption • Higher levels of device integration have made available a large number of transistors • the issue is how best to transform this large number of transistors into computational power • Single-core processors use these resources in multiple functional units and execute multiple instructions in the same cycle. • The precise manner in which these instructions are selected and executed provides impressive diversity in architectures.

  4. Pipelining and Superscalar Execution • Pipelining overlaps various stages of instruction execution to achieve performance • At a high level of abstraction, an instruction can be executed while the next one is being decoded and the next one is being fetched. • This is akin to an assembly line for the manufacture of cars. • [Figure: a simple five-stage pipeline with successive instructions overlapped in time. IF: instruction fetch, ID: instruction decode, EX: execution and address calculation, MEM: memory access, WB: write back of registers]

  5. Pipelining and Superscalar Execution • Pipelining, however, has several limitations. • The speed of a pipeline is eventually limited by the slowest stage. • For this reason, conventional processors rely on very deep pipelines (20-stage pipelines in some Pentium processors). • However, in typical program traces there is a conditional jump every 5-6 instructions! This requires very accurate branch prediction. • The penalty of a misprediction grows with the depth of the pipeline, since a larger number of instructions will have to be flushed. • Data and resource dependencies may stall the pipeline

  6. Pipelining and Superscalar Execution • We can use multiple pipelines • Superscalar processors simultaneously dispatch multiple instructions to redundant functional units on the processor • Instructions are still issued from a sequential instruction stream • CPU hardware dynamically checks for data dependencies between instructions at run time (versus software checking at compile time) • The CPU issues multiple instructions per clock cycle • The question then becomes how to select from the stream the instructions to be issued on these pipelines • Compilers can help in generating code that can be easily parallelized

  7. Superscalar Execution: An Example of two-way execution • A pipeline of 4 stages • Instructions with different addressing modes: register and memory • NA means that the pipeline stalls, waiting for the availability of data (data dependency due to registers R1 and R2)

  8. Superscalar Execution: An Example • In the above example, there is some wastage of resources due to data dependencies. • The example also illustrates that different instruction mixes with identical semantics can take significantly different execution time.
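The point about instruction mixes can be made concrete with a small sketch (illustrative only, not taken from the slides): the two C functions below compute the same sum, but the first forms a single dependency chain, while the second keeps two independent accumulators that a two-way superscalar core can schedule in parallel (results may differ slightly because floating-point addition is reassociated).

  #include <stddef.h>

  /* Single accumulator: every add depends on the previous one, so at most
     one add can be in flight at a time. */
  double sum_chain(const double *a, size_t n) {
      double s = 0.0;
      for (size_t i = 0; i < n; i++)
          s += a[i];
      return s;
  }

  /* Two independent accumulators: adjacent adds have no data dependency
     and can be dispatched to two functional units in the same cycle. */
  double sum_two_acc(const double *a, size_t n) {
      double s0 = 0.0, s1 = 0.0;
      size_t i;
      for (i = 0; i + 1 < n; i += 2) {
          s0 += a[i];
          s1 += a[i + 1];
      }
      if (i < n)            /* leftover element when n is odd */
          s0 += a[i];
      return s0 + s1;
  }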

  9. Superscalar Execution • Scheduling of instructions is determined by a number of factors: • True Data Dependency: The result of one operation is an input to the next. • Resource Dependency: Two operations require the same resource. • Branch Dependency: Scheduling instructions across conditional branch statements cannot be done deterministically a priori. • The scheduler, a piece of hardware, looks at a large number of instructions in an instruction queue and selects an appropriate number of instructions to execute concurrently based on these factors. • The complexity of this hardware is an important constraint on superscalar processors.

  10. Superscalar Execution: Issue Mechanisms • In the simpler model, instructions can be issued only in the order in which they are encountered. • That is, if the second instruction cannot be issued because it has a data dependency with the first, only one instruction is issued in the cycle. • This is called in-order issue. • The compiler may statically reschedule instructions to increase the chance of parallel execution • In a more aggressive model, instructions can be issued out-of-order. • In this case, if the second instruction has data dependencies with the first, but the third instruction does not, the first and third instructions can be co-scheduled. This is also called dynamic issue. • Performance of in-order issue is generally limited

  11. Superscalar Execution: Efficiency Considerations • Not all functional units can be kept busy at all times. • If during a cycle, no functional units are utilized, this is referred to as vertical waste. • If during a cycle, only some of the functional units are utilized, this is referred to as horizontal waste. • Due to limited parallelism in typical instruction traces, dependencies, or the inability of the scheduler to extract parallelism, the performance of superscalar processors is eventually limited. • Conventional microprocessors typically support four-way superscalar execution.

  12. Very Long Instruction Word (VLIW) Processors • The hardware cost and complexity of the superscalar scheduler is a major consideration in processor design. • To address these issues, VLIW processors rely on compile-time analysis to identify and bundle together instructions that can be executed concurrently. • These instructions are packed and dispatched together, hence the name very long instruction word. • This concept was used with some commercial success in the Multiflow Trace machine (1984). • Variants of this concept are employed in the Intel Itanium IA64 processors.

  13. VLIW Processors: Considerations • Issue hardware is simpler. • The compiler has a bigger context (than superscalar CPUs) from which to select instructions to co-schedule • to pack in a VLIW instruction • Compilers, however, do not have runtime information such as cache misses or branches taken. • Scheduling decisions must be based on sophisticated static predictions to pack independent instructions in each VLIW • and may result in inherently conservative schedules. • VLIW performance is highly dependent on the compiler. A number of techniques such as loop unrolling, speculative execution and branch prediction are critical (a loop-unrolling sketch follows below). • Typical VLIW processors are limited to 4-way to 8-way parallelism.
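As an illustration of loop unrolling (a generic sketch, not code from the slides), consider a saxpy-style loop: after unrolling by four, the statements in the body are mutually independent, so a VLIW compiler can pack them into wide instruction words.

  /* Original loop: one independent multiply-add per iteration. */
  void saxpy(float *y, const float *x, float a, int n) {
      for (int i = 0; i < n; i++)
          y[i] = a * x[i] + y[i];
  }

  /* Unrolled by 4 (n is assumed to be a multiple of 4 to keep the sketch
     short): the four statements in the body have no mutual dependencies,
     so they can be bundled into the same very long instruction word. */
  void saxpy_unrolled(float *y, const float *x, float a, int n) {
      for (int i = 0; i < n; i += 4) {
          y[i]     = a * x[i]     + y[i];
          y[i + 1] = a * x[i + 1] + y[i + 1];
          y[i + 2] = a * x[i + 2] + y[i + 2];
          y[i + 3] = a * x[i + 3] + y[i + 3];
      }
  }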

  14. Limitations of Memory System Performance • Memory system, and not processor speed, is often the bottleneck for many applications. • Memory system performance is largely captured by two parameters, latency and bandwidth. • Latency is the time from the issue of a memory request to the time the data is available at the processor. • Bandwidth is the rate at which data can be pumped to the processor by the memory system.

  15. Memory System Performance: Bandwidth / Latency • It is very important to understand the difference between latency and bandwidth. • Consider the example of a fire-hose. • If the water comes out of the hose two seconds after the hydrant is turned on, the latency of the system is two seconds. • Once the water starts flowing, if the hydrant delivers water at the rate of 20 liters/second, the bandwidth of the system is 20 liters/second. • If you want immediate response from the hydrant, it is important to reduce latency. • If you want to fight big fires, you want high bandwidth.

  16. Memory Latency: An Example • Consider a processor operating at 1 GHz (1 ns clock) connected to a DRAM with a latency of 100 ns for each request of 1 word (no caches). • Assume that the processor has two multiply-add units and is capable of executing four instructions (2 multiplies and 2 adds) in each cycle of 1 ns. • The following observations follow: • The peak processor rating is 4 GFLOPS. • Since the memory latency is equal to 100 cycles and the block size is one word, every time a memory request is made, the processor must wait 100 cycles before it can process the data.

  17. Memory Latency: An Example • On the above architecture, consider the problem of computing a dot-product of two vectors. • A dot-product computation performs one multiply-add (2 FP operations, storing the add result in an accumulator) on each pair (2 operands) of vector elements • On average, each floating point operation requires one data fetch. • It follows that the peak speed of this computation is limited to two floating point ops (one multiply-add) every fetch of a pair (200 ns), or • one floating point operation every 100 ns, or • an actual rate of 1/(100 × 10⁻⁹) = 10⁷ FLOPS = 10 MFLOPS, a very small fraction of the peak processor rate (4 GFLOPS)!!
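For reference, a plain C rendering of this dot-product (a generic sketch, not code from the slides) makes the one-fetch-per-FLOP behaviour visible:

  /* Each iteration loads a[i] and b[i] and performs one multiply-add
     (2 FLOPs), i.e. on average one data fetch per FLOP. With no cache,
     every fetch pays the full 100 ns DRAM latency, which is what limits
     the rate to about 10 MFLOPS in the example above. */
  double dot_product(const double *a, const double *b, int n) {
      double acc = 0.0;
      for (int i = 0; i < n; i++)
          acc += a[i] * b[i];
      return acc;
  }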

  18. Improving Effective Memory Latency Using Caches • Caches are small and fast memory elements between the processor and DRAM. • This memory acts as a low-latency high-bandwidth storage. • If a piece of data is repeatedly used, the effective latency of this memory system can be reduced by the cache. • The fraction of data references satisfied by the cache is called the cache hit ratio of the computation on the system. • Cache hit ratio achieved by a code on a memory system often determines its performance.

  19. Impact of Caches: Example • Consider the architecture from the previous example. In this case, we introduce a cache of size 32 KB with a latency of 1 ns or one cycle. • We use this setup to multiply two matrices A and B of dimensions 32 × 32 = 2⁵ × 2⁵. • If a word (4 Bytes) is needed to store each element of a matrix, 4 KB are needed to store the whole matrix. • With these numbers, the cache is large enough to store matrices A and B, as well as the result matrix C.

  20. Impact of Caches: Example (continued) • The following observations can be made about the problem: • (Pre-)fetching the two matrices A and B into the cache corresponds to fetching 2K words, which takes approximately 200 µs (miss penalty = 100 ns) • Multiplying two n × n matrices takes 2n³ operations. For our problem, this corresponds to 2 × 2⁵ × 2⁵ × 2⁵ = 64K operations, which can be performed in 16K cycles (or 16 µs) at 4 operations per cycle. • The total time for the computation is therefore approximately the sum of the time for load/store operations and the time for the computation itself, i.e., (200 + 16) µs. • This corresponds to a peak computation rate of 64K ops / 216 µs = (64 × 2¹⁰) / (216 × 10⁻⁶) ≈ 303 MFLOPS.
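The computation being timed is the classical triple loop, sketched here in C for concreteness (the slides do not give the code); the three 4 KB matrices all fit comfortably in the 32 KB cache:

  #define N 32   /* each 32 x 32 matrix of 4-byte words occupies 4 KB */

  /* 2*N^3 = 64K operations; once A and B are resident in the cache,
     every access hits and the loop runs at close to processor speed. */
  void matmul(float C[N][N], const float A[N][N], const float B[N][N]) {
      for (int i = 0; i < N; i++)
          for (int j = 0; j < N; j++) {
              float sum = 0.0f;
              for (int k = 0; k < N; k++)
                  sum += A[i][k] * B[k][j];
              C[i][j] = sum;
          }
  }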

  21. Impact of Caches • Repeated references to the same data item correspond to temporal locality • In our example, we had O(n²) data to access and O(n³) computation. This asymptotic difference makes the above example particularly well suited to caches • Data reuse is critical for cache performance • Even so, we saw that the compulsory cache misses needed to load the cache are enough to greatly reduce the computation rate • 303 MFLOPS vs. 4 GFLOPS

  22. Impact of Memory Bandwidth • Memory bandwidth is determined by the bandwidth of the memory bus as well as the memory units. • Memory bandwidth can be improved by increasing the size of memory blocks. • The underlying system takes l time units (where l is the latency of the system) to deliver b units of data (where b is the block size).

  23. Impact of Memory Bandwidth: Example • Consider the same setup as before (memory penalty = 100 ns), except that in this case the block size is 4 words instead of 1 word. • 40 MB/s with blocks of 1 word (4 B) • 160 MB/s with blocks of 4 words (16 B) • We repeat the dot-product computation in this scenario: • Assuming that the vectors are laid out linearly in memory, eight FLOPs (four multiply-adds) can be performed in 200 cycles. • This is because a single memory access fetches four consecutive words of a vector. • Therefore, two accesses (200 cycles = 200 ns) fetch four elements of each of the two vectors • The 8 ops (performed in 2 ns) can be overlapped with the load of the next block • This corresponds to one FLOP every 25 ns, for a peak speed of 40 MFLOPS. • The improvement comes from spatial locality.

  24. Impact of Memory Bandwidth • It is important to note that increasing block size does not change latency of the system. • Physically, the scenario illustrated here can be viewed as a wide data bus (4 words or 128 bits) connected to multiple memory banks. • In practice, such wide buses are expensive to construct. • In a more practical system, consecutive words are sent on the memory bus on subsequent bus cycles after the first word is retrieved.

  25. Impact of Memory Bandwidth • The above examples clearly illustrate how increased bandwidth results in higher peak computation rates, but … • The data layouts were assumed to be such that consecutive data words in memory were used by successive instructions (spatial locality of reference). • If we take a data-layout centric view, computations must be reordered to enhance spatial locality of reference.

  26. Impact of Memory Bandwidth: Example • Consider the following code fragment, which sums the columns of the matrix b into the vector column_sum:

  for (i = 0; i < 1000; i++) {
      column_sum[i] = 0.0;
      for (j = 0; j < 1000; j++)
          column_sum[i] += b[j][i];
  }

  27. Impact of Memory Bandwidth: Example • Vector column_sum is small and easily fits into the cache • Matrix b is accessed in column order, but is stored in row order. • The strided access results in very poor performance. We can fix the above code as follows:

  for (i = 0; i < 1000; i++)
      column_sum[i] = 0.0;
  for (j = 0; j < 1000; j++)
      for (i = 0; i < 1000; i++)
          column_sum[i] += b[j][i];

• In this case, the matrix is traversed in row order and performance can be expected to be significantly better.

  28. Memory System Performance: Summary • The series of examples presented in this section illustrates the following concepts: • Exploiting spatial and temporal locality in applications is critical for amortizing memory latency and increasing effective memory bandwidth. • Memory layouts and appropriately organized computation can make a significant impact on spatial and temporal locality. • The ratio of the number of operations to the number of memory accesses is a good indicator of the anticipated tolerance to memory bandwidth.

  29. Alternative Approaches for Hiding Memory Latency • Consider the problem of browsing the web on a very slow network connection. We deal with the problem in one of three possible ways: • we anticipate which pages we are going to browse ahead of time and issue requests for them in advance; • we open multiple browsers and access different pages in each browser, thus while we are waiting for one page to load, we could be reading others; or • we access a whole bunch of pages in one go - amortizing the latency across various accesses. • The first approach is called prefetching, the second multithreading, and the third one corresponds to a technique to exploit spatial locality (increasing the memory block) in accessing memory words. • We already saw the third approach to reduce the effect of memory latency …

  30. Multithreading for Latency Hiding • A thread is a single stream of control in the flow of a program. • We illustrate threads with a simple example (matrix by vector):

  for (i = 0; i < n; i++)
      c[i] = dot_product(get_row(a, i), b);

• Each dot-product is independent of the others, and therefore represents a concurrent unit of execution. • We can safely rewrite the above code segment as:

  for (i = 0; i < n; i++)
      c[i] = create_thread(dot_product, get_row(a, i), b);
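create_thread above is pseudocode; one possible POSIX-threads rendering is sketched below (the task_t bundle, row_task and matvec_threaded names are invented for this sketch, and a real implementation would reuse a thread pool rather than create one OS thread per row):

  #include <pthread.h>
  #include <stdlib.h>

  /* Hypothetical argument bundle for one row of the matrix. */
  typedef struct { const double *row, *b; double *out; int n; } task_t;

  static void *row_task(void *arg) {
      task_t *t = (task_t *)arg;
      double acc = 0.0;
      for (int i = 0; i < t->n; i++)        /* dot product of one row with b */
          acc += t->row[i] * t->b[i];
      *t->out = acc;
      return NULL;
  }

  /* c = A * b for an n x n row-major matrix a, one thread per row. */
  void matvec_threaded(double *c, const double *a, const double *b, int n) {
      pthread_t *tid = malloc(n * sizeof *tid);
      task_t *task = malloc(n * sizeof *task);
      for (int i = 0; i < n; i++) {
          task[i] = (task_t){ a + (size_t)i * n, b, &c[i], n };
          pthread_create(&tid[i], NULL, row_task, &task[i]);
      }
      for (int i = 0; i < n; i++)
          pthread_join(tid[i], NULL);
      free(tid);
      free(task);
  }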

  31. Multithreading for Latency Hiding: Example • In the code above, the first instance of the function accesses a pair of vector elements and waits for them. • In the meantime, the second instance of this function can be scheduled, accessing two other vector elements in the next cycle, and so on. • After l units of time, where l is the latency of the memory system, the first function instance gets the requested data from memory and can perform the required computation • In the next cycle, the data items for the next function instance arrive, and so on • In this way, in every clock cycle, we can perform a useful computation.

  32. Multithreading for Latency Hiding • The execution schedule in the previous example is predicated upon two assumptions: • the memory system is capable of servicing multiple outstanding requests, and • the processor is capable of switching threads at every cycle. • It also requires the program to have an explicit specification of concurrency in the form of threads. • Machines such as the HEP and Tera rely on multithreaded processors that can switch the context of execution in every cycle. Consequently, they are able to hide latency effectively.

  33. Prefetching for Latency Hiding • Misses on loads cause programs to stall. • Why not advance the loads so that by the time the data is actually needed, it is already there! • The only drawback is that you might need more space to store the advanced loads. • However, if the advanced loads are overwritten, we are no worse off than before! • Unfortunately, if you prefetch into the cache, you can introduce cache pollution, by evicting blocks that will be used in the near future
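As a concrete illustration, software prefetching can be layered onto the earlier dot-product; this sketch uses the GCC/Clang __builtin_prefetch intrinsic, and the prefetch distance PF_DIST is a made-up tuning parameter:

  #define PF_DIST 16   /* how many iterations ahead to prefetch (illustrative) */

  double dot_product_prefetch(const double *a, const double *b, int n) {
      double acc = 0.0;
      for (int i = 0; i < n; i++) {
          if (i + PF_DIST < n) {
              /* arguments: address, 0 = read access, 1 = modest temporal locality */
              __builtin_prefetch(&a[i + PF_DIST], 0, 1);
              __builtin_prefetch(&b[i + PF_DIST], 0, 1);
          }
          acc += a[i] * b[i];
      }
      return acc;
  }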

  34. Tradeoffs of Multithreading and Prefetching • Multithreading and (single-thread) prefetching are critically impacted by the memory bandwidth when combined with a cache. Consider the following example: • Consider a computation running on a machine with a 1 GHz clock, 1-word cache line, single-cycle access to the cache, and 100 ns latency to DRAM. • The computation has a cache hit ratio of 25% at 1 KB (miss 75%) and of 90% at 32 KB (miss 10%). • Consider two cases: first, a single-threaded execution in which the entire cache is available to the serial context, and second, a multithreaded execution with 32 threads where each thread has a cache residency of 1 KB. • If the computation makes one data request in every cycle of 1 ns (4 GB/s), you may notice that the first scenario requires 400 MB/s (10% of 4 GB/s) of memory bandwidth and the second 3 GB/s (75% of 4 GB/s), as worked out below.
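The two bandwidth figures follow from (miss rate) × (request rate) × (word size); written out:

  \[
  B_{\text{1 thread}} = 0.10 \times 10^{9}\,\tfrac{\text{words}}{\text{s}} \times 4\,\tfrac{\text{B}}{\text{word}} = 400\ \text{MB/s},
  \qquad
  B_{\text{32 threads}} = 0.75 \times 10^{9}\,\tfrac{\text{words}}{\text{s}} \times 4\,\tfrac{\text{B}}{\text{word}} = 3\ \text{GB/s}.
  \]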

  35. Tradeoffs of Multithreading and Prefetching • Bandwidth requirements of a multithreaded system may increase very significantly because of the smaller cache residency of each thread. • Multithreaded systems become bandwidth bound instead of latency bound. • Multithreading and prefetching only address the latency problem and may often exacerbate the bandwidth problem. • Multithreading and prefetching also require significantly more hardware resources in the form of storage.

  36. Parallel Architecture and Parallel Programming Models

  37. Parallel platforms • We now introduce, at a high level, the elements of parallel computing platforms that are critical for performance-oriented and portable parallel programming • We first explore the logical architectural organization • The critical aspects of the logical organization are how to support the execution of concurrent tasks and the interaction among them • The former is related to the control structure model and the latter to the cooperation/communication model of the architecture • Control structure model • SIMD vs. MIMD • Cooperation/communication model • shared memory (SM) vs. message passing/distributed memory (DM). • As for concurrent tasks, parallelism can be expressed at various levels of granularity • from instruction level to processes. • Between these extremes there exists a range of models, along with corresponding architectural support.

  38. Parallel platforms • Flynn [1972] introduced the following taxonomy of computers, where the parallel computers are MIMD (Multiple Instruction stream - Multiple Data stream) and SIMD (Single Instruction stream - Multiple Data stream) • Control structure model • Processing units in parallel computers either operate under a centralized control or work independently under private control • SIMD: a single control unit dispatches the same instruction to the various processors (which work on different data) • MIMD: each processor has its own control unit and can execute different instructions on different data items.

  39. Capabilities of the four computer models

  40. SIMD • The array of processing elements (PEs) works synchronously • At each step (global clock tick), the PEs execute the same instruction on different data elements • Array processors: Connection Machine, ICL DAP (Distributed Array Processor), MasPar MP • Pipelined vector computers: CRAY 1 & 2 and CYBER 205 • SIMD co-processors (MMX, SSE) • GPUs (Graphics Proc. Units) • SIMD architectures are suitable for solving data-parallel programs, i.e. the case of the same function applied to distinct elements of a given data structure • Sum of two 2x2 matrices: A + B = C. • If A and B are distributed and stored in the memories of 4 PEs, then in parallel: A11 + B11 = C11, A12 + B12 = C12, A21 + B21 = C21, A22 + B22 = C22 (see the SSE sketch below)
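A minimal sketch of the same element-wise sum written with SSE intrinsics, assuming the 2x2 float matrices are stored row-major in 4-element arrays (illustrative code, not from the slides):

  #include <xmmintrin.h>   /* SSE intrinsics */

  /* One SSE add performs the four additions in lock-step,
     exactly the data-parallel pattern described above. */
  void mat2x2_add_sse(float c[4], const float a[4], const float b[4]) {
      __m128 va = _mm_loadu_ps(a);           /* A11 A12 A21 A22 */
      __m128 vb = _mm_loadu_ps(b);           /* B11 B12 B21 B22 */
      _mm_storeu_ps(c, _mm_add_ps(va, vb));  /* C = A + B, element-wise */
  }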

  41. MIMD (Multiprocessor / Multicomputer) • Each processor works asynchronously under the control of a distinct stream of instructions (programs) working on different data • As for SIMD, cooperation is through shared memory or an interconnection network • MIMD computers with SM are known as multiprocessors or tightly coupled machines. The current multi-cores fall within this class. • MIMD computers where processes interact by message passing are known as multicomputers or loosely coupled machines. • Some examples: IBM SP, Intel Paragon, TM CM5, Cray T3E, Meiko CS-2, all modern clusters

  42. Shared memory vs. message passing • In both SIMD and MIMD the cooperation model can be based either on • SHARED MEMORY (SM) and SHARED VARIABLES • DISTRIBUTED/PRIVATE MEMORY (DM) and MESSAGE PASSING • Examples of MIMD architectures: multiprocessor (SM) and multicomputer (DM). [Figure legend: P = processor, M = memory bank, C = cache]

  43. SM multiprocessors • Load/Store instructions can directly access all (or part of) the shared physical address space • the same instructions supported by uniprocessors • a simple and fast mechanism to communicate and share information • Specific instructions to build synchronization mechanisms (test&set, exch, load-locked, etc.) • [Figure: the virtual memory spaces of the processes are mapped onto a single shared physical memory space]

  44. UMA vs. NUMA • Different shared memory organizations • Cost of memory access: Uniform (UMA) or Non-Uniform (NUMA) • The distinction between NUMA and UMA platforms is important from the point of view of algorithm design • NUMA machine performance also depends on memory allocation • Both UMA and NUMA machines need to exploit locality, thus improving cache usage • [Figure: NUMA and UMA memory organizations]

  45. Shared memory vs. message passing • In the SM cooperation model we have a single physical address space that can be accessed by all processors • cooperation by reading/writing the same shared variables mapped in SM • however, reads and writes to shared data must be coordinated/synchronized (this will be discussed in greater detail when we talk about thread programming) • a cooperation method suitable for small numbers of processors (2- to 32-way MIMD multiprocessors), at least in UMA organizations • In the DM cooperation model, we have to cope with a private physical address space for each processor • cooperation/synchronization via message passing over the network • the common cooperation model adopted by clusters and high-end massively parallel computers over fast interconnection networks • data must be explicitly assigned to the private address spaces of the various computational nodes

  46. Programming model • The programming model refers to how the parallel platform virtually appears to programmers • thanks to software layers, it is nowadays possible to have programming models that diverge from the underlying physical architecture model • to exploit some specific features of the architecture, it is often necessary to use programming languages and environments that are close to the architecture • Control • How parallel tasks can be created and terminated, and how they can synchronize • Naming • Shared or private data • Naming of data/processes/network abstraction • Operations • What the basic operations are • What their associated costs are

  47. Comparing programming models • We consider a simple example • A sum of n values, each obtained by applying a function f() to an element of a set of n elements • [Figure: f() applied to A[0], A[1], …, A[n-1], with the results combined by a + reduction]

  48. Comparing programming models • All the p tasks compute a partial sum of n/p distinct values, each obtained by applying the function f() to distinct elements • The tasks compute the partial sums independently, using "private" variables • At the end, one (or all) of the tasks takes the partial sums and produces the global sum • Data that are logically shared • the variables storing the n original numbers and the final global sum • Data that are logically private • temporary variables and the variables used for the partial sums • In DM architectures such data must be allocated and copied in the private memory space of each processor.

  49. "Shared address" programming model • Shared Address Space • a program consists of a collection of threads of control • each with a set of private variables • e.g., local variables on the private stack frame of the various function instances executed by the threads • which collectively and asynchronously access a set of shared variables • e.g., static variables, shared common blocks, global heap • threads communicate implicitly by writing and reading shared variables • threads coordinate explicitly by synchronization operations on shared variables • writing and reading flags • locks, semaphores • Like concurrent programming on a uniprocessor (a partial-sum sketch in this style follows below)
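One possible shared-address rendering of the partial-sum example from the previous slide, as a sketch only: the names P, partial_sum and sum_all are invented, f() is assumed to be provided elsewhere, and n is assumed to be a multiple of P.

  #include <pthread.h>

  #define P 4                     /* number of threads/tasks */

  extern double f(double x);      /* per-element function, assumed given */

  static const double *A;         /* logically shared input data */
  static int n;
  static double global_sum = 0.0; /* logically shared result */
  static pthread_mutex_t sum_lock = PTHREAD_MUTEX_INITIALIZER;

  static void *partial_sum(void *arg) {
      int id = (int)(long)arg;
      double local = 0.0;                       /* private partial sum */
      for (int i = id * (n / P); i < (id + 1) * (n / P); i++)
          local += f(A[i]);
      pthread_mutex_lock(&sum_lock);            /* explicit coordination on the shared sum */
      global_sum += local;
      pthread_mutex_unlock(&sum_lock);
      return NULL;
  }

  double sum_all(const double *data, int count) {
      pthread_t tid[P];
      A = data; n = count; global_sum = 0.0;
      for (long id = 0; id < P; id++)
          pthread_create(&tid[id], NULL, partial_sum, (void *)id);
      for (int id = 0; id < P; id++)
          pthread_join(tid[id], NULL);
      return global_sum;
  }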

  50. The associated architectural model • An SM – MIMD architecture • Processors all connected to a large shared memory • Caches are a sort of low-latency local memories which maintain copies of shared data • Shared address programming models are, to some extent, the native programming models for MIMD SM architectures • From the programming viewpoint, even if shared data can be logically partitioned among threads, they are not owned • it can become complex to guarantee locality in memory accesses
