
1. Compiling for Vector-Thread Architectures
Mark Hampton and Krste Asanović
April 9, 2008
MIT Computer Science and Artificial Intelligence Laboratory
University of California at Berkeley

2. Vector-thread (VT) architectures efficiently encode parallelism in a variety of applications
• A VT architecture unifies the vector and multithreaded execution models
• The Scale VT architecture exploits DLP, TLP, and ILP (with clustering) simultaneously
• Previous work [Krashinsky04] has shown the ability of Scale to take advantage of the parallelism available in several different types of loops
• However, that evaluation relied on mapping code to Scale using handwritten assembly

3. This work presents a back end code generator for the Scale architecture
• The compiler infrastructure is relatively immature, as much of the work to this point consisted of getting all the pieces to run together
• We prioritized taking advantage of Scale's unique features to enable support for "difficult" types of loops rather than focusing on optimizations
• The compiler can parallelize loops with internal control flow, outer loops, and loops with cross-iteration dependences
• However, the compiler does not currently handle while loops
• Despite the lack of optimizations, the compiler is still able to produce some significant speedups

4. Talk outline
• Vector-thread architecture background
• Compiler overview
  • Emphasis on how code is mapped to Scale
• Performance evaluation
• Conclusions

5. Vector-thread architectures use a virtual processor (VP) abstraction
• VPs contain registers and ALUs
• VPs execute RISC-like instructions grouped into atomic instruction blocks (AIBs)
• AIBs must be explicitly fetched; a fetch can target either a group of VPs (vector-fetch) or a single VP (thread-fetch)
• Fetches can be predicated to allow conditional branching
• A VP stops after it executes an AIB that does not issue a fetch instruction
[Diagram: a VP containing registers and ALUs; a vector-fetched AIB whose instructions can issue thread-fetches for further AIBs]

6. A control processor interacts with a vector of virtual processors
[Diagram: the control processor issues vector-fetch, vector-load, and vector-store commands across VP0 through VPN; each VP has its own registers and ALUs and can issue its own thread-fetches; neighboring VPs are connected by the cross-VP queue; a vector memory unit moves data to and from memory]

7. The Scale processor prototype implements the vector-thread architectural paradigm
[Diagram: the control processor issues vector-fetch commands to a vector-thread unit of four lanes (Lane 0 through Lane 3); each lane holds four VPs (VP0 through VP15) with private and shared registers and chain registers cr0/cr1 feeding the lane's ALUs; lanes are linked by the cross-VP queue; a vector memory unit handles vector-loads and vector-stores to memory]
Scale is a high-performance, energy-efficient embedded design [Krashinsky07]

8. Scale excels at exploiting loop-level parallelism
• The typical programming model is to have the control processor launch a group of VPs, with each VP executing a single iteration
• The ability of VPs to direct their own control flow and to use the cross-VP network enables support for a wider variety of loop types than traditional vector designs
• The ability to support vector execution in data-parallel code sections enables a higher degree of performance and energy efficiency than a traditional multithreaded design

9. The compiler for Scale ties together three existing infrastructures: SUIF, Trimaran, and GCC
[Diagram: compilation flow]
• SUIF: C source code → SUIF front end → memory dependence analysis → SUIF-to-Trimaran conversion
• Trimaran: classical optimizations → scalar-to-VP code transformation → AIB formation → cluster assignment → chain register insertion → prepass instruction scheduling → register allocation → postpass instruction scheduling → assembly code generation
• GCC: cross compilation → binary executable

10. The compiler conducts a dependence analysis to select which loop to parallelize
• SUIF's dependence library is used to annotate memory operations with direction vectors
• The restrict keyword is required to indicate there is no aliasing (see the sketch below)
  • This is the extent of manual programmer intervention
• Trimaran uses the results of the SUIF analysis to detect whether a particular loop in a nest has any cross-iteration dependences
• Priority is given to parallelizing innermost DOALL loops
• If a loop nest contains no DOALL loops, the compiler tries to parallelize a DOACROSS loop
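As a minimal sketch of this annotation style (the function and array names are illustrative, not from the compiler's test suite), the restrict qualifiers below assert that the pointers never alias, which is what lets the dependence analysis prove the loop has no cross-iteration memory dependences:

    /* Hypothetical example: restrict guarantees out, in1, and in2
       do not alias, so each iteration is independent and the loop
       is a DOALL candidate. */
    void vec_madd(int len, int *restrict out,
                  const int *restrict in1, const int *restrict in2)
    {
        for (int i = 0; i < len; i++)
            out[i] = 3 * in1[i] + in2[i];   /* coefficient chosen arbitrarily */
    }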

11. Once a loop is selected, it is mapped to the VTU without any restructuring
[Diagram: loop mapping]
Loop Entry →
Header Block: vector-fetched code, VTU commands, scalar instructions →
Internal Loop Blocks: thread-fetched code →
Back Edge/Exit Block: vector-fetched code, VTU commands, scalar instructions →
Loop Exit
Any established front end loop transformation can also be used, but that doesn't change the back end code generation strategy

12. Simple DOALL loops are handled similarly to traditional vectorization

C source:
    for (i = 0; i < len; i++)
        out[i] = COEFF*in1[i] + in2[i];

Scalar code:
          li r0, COEFF
    loop: lw r1, in1
          mult r2, r0, r1
          lw r3, in2
          add r4, r2, r3
          sw r4, out
          add in1, 4
          add in2, 4
          add out, 4
          sub len, 1
          bnez len, loop

Compiler tasks:
• Add a command to configure the VTU
• Strip-mine the loop
• Map scalar code to VTU code
• Propagate loop-invariant values to shared registers

13. Simple DOALL loops are handled similarly to traditional vectorization

C source (with the scalar code from slide 12 shown alongside for comparison):
    for (i = 0; i < len; i++)
        out[i] = COEFF*in1[i] + in2[i];

Scale VTU code:
          vcfgvl r5, 128, ...
          vwrsh s0, COEFF
    loop: setvl r6, len
          vlw v0, in1
          vmult v1, v0, s0
          vlw v2, in2
          vadd sd0, v1, v2
          vsw sd0, out
          sll r7, r6, 2
          add in1, r7
          add in2, r7
          add out, r7
          sub len, r6
          bnez len, loop
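To make the strip-mining step concrete, here is a hedged C-level sketch of what the vcfgvl/setvl sequence accomplishes; VLMAX stands in for the vector length granted by vcfgvl, and the names are illustrative rather than taken from the compiler:

    #define VLMAX 128   /* assumed maximum vector length from vcfgvl */
    #define COEFF 3     /* illustrative constant */

    void doall_strip_mined(int len, int *out, const int *in1, const int *in2)
    {
        for (int i = 0; i < len; ) {
            int vl = (len - i < VLMAX) ? len - i : VLMAX;   /* setvl r6, len */
            for (int k = 0; k < vl; k++)    /* each k maps to one VP, in parallel */
                out[i + k] = COEFF * in1[i + k] + in2[i + k];
            i += vl;                        /* advance by the strip length */
        }
    }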

14. Internal control flow can be handled by allowing VPs to fetch their own code

C source:
    for (i = 0; i < len; i++) {
        if (in[i] < 4)
            temp = in[i] * 4;
        else
            temp = in[i] * 2;
        out[i] = temp;
    }

Scalar code:
    loop: lw r0, in
          slt r1, r0, 4
          bnez r1, b3
    b2:   sll r2, r0, 1
          j b4
    b3:   sll r2, r0, 2
    b4:   sw r2, out
          # bookkeeping code ...
          bnez len, loop

Additional compiler tasks beyond the simple DOALL case:
• Map branches and fall-through paths to VP fetches
• Place AIB addresses in shared registers as an optimization
• Compute induction variable values used in internal loop blocks (not required for this example)

15. Internal control flow can be handled by allowing VPs to fetch their own code

C source and scalar code as on slide 14, mapped to Scale VTU code:
          vcfgvl r3, 128, ...
          vwrsh s0, b2
          vwrsh s1, b3
    loop: setvl r4, len
          vlw v0, in
          vslt p, v0, 4
          psel.fetch s1, s0
    b2:   vsll sd0, v0, 1
    b3:   vsll sd0, v0, 2
    b4:   vsw sd0, out
          # bookkeeping code ...
          bnez len, loop

16. Internal control flow can be handled by allowing VPs to fetch their own code
(C source and VTU code as on slide 15)
• Although the example is simple, it illustrates how the compiler is able to map complex control flow to VPs
• There is no need to execute both sides of a branch and throw away one set of results
• However, it is possible to perform if-conversion, although that is not currently implemented (sketched below)
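For contrast, here is a hedged sketch of the if-converted alternative mentioned in the last bullet (not what the compiler currently generates): both arms are computed and a select keeps one result, trading wasted work for branch-free code:

    /* Hypothetical if-converted version of the loop above. */
    void if_converted(int len, int *out, const int *in)
    {
        for (int i = 0; i < len; i++) {
            int x = in[i];
            int t_then = x * 4;                   /* taken arm */
            int t_else = x * 2;                   /* fall-through arm */
            out[i] = (x < 4) ? t_then : t_else;   /* predicated select */
        }
    }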

17. The ability of VPs to direct their control flow allows outer loop parallelization

C source:
    for (i = 0; i < len; i++) {
        sum = 0;
        for (j = 0; j < len-i; j++)
            sum += in[j] * in[j+i];
        out[i] = sum;
    }

There is no need to perform loop interchange or unrolling.

Scalar code:
    loop1: li r0, 0
           sub r1, len, i
           move r2, in
           sll r3, i, 2
           add r4, r3, in
    loop2: lw r5, r2
           lw r6, r4
           mult r7, r5, r6
           add sum, r7
           # bookkeeping code...
           bnez r1, loop2
           sw sum, out
           # bookkeeping code...
           bnez len, loop1

• The compiler has the same tasks as in the previous case
• The new aspect illustrated by this example is the need to compute induction variables in internal loop blocks

18. The ability of VPs to direct their control flow allows outer loop parallelization

C source and scalar code as on slide 17, mapped to Scale VTU code:
           vcfgvl r8, 128, ...
           vwrsh s0, len
           vwrsh s1, in
           la r9, vp_numbers
           vlb v0, r9
    loop1: setvl r10, len
           vwrsh s2, i
           vadd v1, s2, v0
           ...
    loop2: vplw...
           ...
           bnez len, loop1
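Conceptually, the vlb of vp_numbers followed by the vadd lets each VP recover its own outer-loop index from the strip base plus its VP number. A hedged C rendering follows; VL and the loop structure are illustrative assumptions about the mapping, not compiler output:

    #define VL 16   /* assumed number of VPs per strip */

    void outer_loop_mapped(int len, int *out, const int *in)
    {
        for (int base = 0; base < len; base += VL) {     /* control processor strip loop */
            int vl = (len - base < VL) ? len - base : VL;
            for (int p = 0; p < vl; p++) {               /* conceptually parallel VPs */
                int i = base + p;                        /* induction variable recovery */
                int sum = 0;
                for (int j = 0; j < len - i; j++)        /* thread-fetched inner loop */
                    sum += in[j] * in[j + i];
                out[i] = sum;
            }
        }
    }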

19. Loop-carried dependences can be mapped to the cross-VP network

C source:
    for (i = 1; i < len; i++)
        out[i] = in[i] * out[i-1];

Additional compiler tasks beyond the simple DOALL case:
• Insert commands to push the initial value into the cross-VP network and to pop the final value
• Map loop-carried values to prevVP/nextVP queues in VP code
• Copy any cross-VP queue values that have more than one reader to registers

Scalar code:
          sub len, 1
          lw r0, -4(out)
    loop: lw r1, in
          mult r0, r1
          sw r0, out
          add in, 4
          add out, 4
          sub len, 1
          bnez len, loop

20. Loop-carried dependences can be mapped to the cross-VP network

C source and scalar code as on slide 19, mapped to Scale VTU code:
          vcfgvl r2, 128, ...
          xvppush r3, x0
          sub len, 1
          lw r0, -4(out)
    loop: setvl r4, len
          vlw v0, in
          vmult v1, v0, prevVP
          vmove sd0, v1
          vmove nextVP, v1
          vsw sd0, out
          # bookkeeping code ...
          bnez len, loop
          xvppop r5, x0
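As a hedged C-level view of the cross-VP queue semantics (a sequential rendering, not compiler output): the control processor seeds the queue with the value preceding the loop (xvppush), each VP pops the previous product from prevVP and pushes its own result on nextVP, and the final value is popped afterward (xvppop). The multiply chain stays serial, but the loads and stores still proceed in parallel:

    void crossvp_recurrence(int len, int *out, const int *in)
    {
        int carry = out[0];            /* value seeded by xvppush */
        for (int i = 1; i < len; i++) {
            carry = in[i] * carry;     /* vmult v1, v0, prevVP */
            out[i] = carry;            /* vmove nextVP, v1; vsw sd0, out */
        }
        /* carry is the value xvppop retrieves after the loop */
    }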

21. The compiler focuses on improving throughput rather than reducing single-thread latency
• Various phases are aimed at minimizing physical register usage
• Cluster assignment attempts to balance work even at the expense of inter-cluster moves
• Instruction scheduling tries to pack dependence chains together
• Chain register insertion is designed to avoid using the register file for short-lived values
• Additional details are in the paper

22. Evaluation methodology
• The Scale simulator uses detailed models for the VTU and cache, but a single-instruction-per-cycle latency for the control processor
  • This reduces the magnitude of the parallelized code speedups
• Performance is evaluated across a limited number of EEMBC benchmarks
  • EEMBC benchmarks are difficult to automatically parallelize
  • Continued improvements to the compiler infrastructure (e.g. if-conversion, front end loop transformations) would enable broader benchmark coverage

23. The speedups of (relatively) unoptimized code reflect Scale's advantages
[Chart: speedups of compiled code on EEMBC benchmarks; the largest speedup is more accurately ~11x]
• Speedups exceed or are comparable to those observed in a limit study [Islam07] performed for an idealized 16-core multiprocessor supporting thread-level speculation; the same is true for an infinite number of cores
• The results point to the benefits of exploiting parallelism within a single core

24. There is a variety of related work
• TRIPS also exploits multiple forms of parallelism, but its compiler's focus is on forming blocks of useful instructions and mapping instructions to ALUs
• Stream processing compilers share some similarities with our approach, but also have somewhat different priorities, such as managing the utilization of the Stream Register File
• IBM's Cell compiler has to deal with issues such as alignment and branch hints, which are not present for Scale
• GPGPU designs (Nvidia's CUDA, AMD's Stream Computing) also have similarities with Scale, but the differences in the programming models result in different focuses in the compilers

25. Concluding remarks
• Vector-thread architectures exploit multiple forms of parallelism
• This work presented a compiler for the Scale vector-thread architecture
• The compiler can parallelize a variety of loop types
• Significant performance gains were achieved over a single-issue scalar processor

26. A comparison to handwritten code shows there is still significant room for improvement
There are several optimizations that can be employed to narrow the performance gap
