
Presentation Transcript


  1. On the role of the programmer, the compiler and the runtime when facing accelerators in OpenMP 4.0. Guray Ozen, Eduard Ayguade and Jesus Labarta. IWOMP 2014, Salvador, Brazil, Sep 28-30th

  2. Motivation • MACC: Mercurium ACCelerator Model • Evaluation • Conclusion

  3. Motivation
  • GPUs have become popular: performance / Watt
  • GPGPU challenges:
    • Productivity is low, due to the different programming languages; it takes more time to learn and program
    • Many new concepts to consider:
      • Thread divergence (due to conditional statements)
      • Efficient use of multiple GPUs and concurrent kernel computation
      • Minimizing data movement (slow bandwidth over PCIe)
      • Appropriate use of the GPU memory hierarchy (private vs. shared vs. global memory) and memory access patterns (coalesced memory accesses)
    • Optimization is hard even for experts
  • Code generation for GPGPU?
  Guray Ozen - IWOMP'14

  4. Outcome
  • Approach: analyzed directive-based APIs
    • OpenACC: a proposal based on directives and a compiler that translates them to GPU code
    • OpenMP 4.0: accelerator model included in the OpenMP standard
    • OmpSs: programming model developed at BSC
  • Outcome: MACC = Mercurium ACCelerator compiler
    • CUDA code generator driven by OpenMP 4.0 accelerator directives
    • Includes a few GPU-specific compiler optimization techniques
    • Tries to influence the evolution of OpenMP 4.0:
      • Data transfers minimized automatically (host-to-GPU)
      • OpenMP 4.0 extended with experimental new clauses, in order to make better use of team memory
      • Multi-GPU task scheduling and device-to-device data transfer become available
    • Based on the OmpSs suite, which already supports separate memory address spaces such as the GPU's
      • CUDA kernels generated by MACC can be OmpSs tasks
      • All kinds of tasks (SMP | CUDA | ACC) run asynchronously
      • The runtime manages CUDA concurrency
  Guray Ozen - IWOMP'14

  5. OmpSs Programming Model
  • Extended OpenMP and task-based programming model
    • Mercurium compiler and Nanos++ runtime
    • Forerunner for OpenMP: tasking and task dependences are two examples of OmpSs influence
  • Current accelerator support in OmpSs:
    • Tasks to be executed on a GPU are programmed in CUDA or OpenCL (kernel-based programming)
    • The runtime system takes care of data movement, overlapping and scheduling
    • It does not generate GPU code
  • Directive syntax:
      #pragma omp target device({ smp | cuda | opencl }) \
          { copy_deps | [copy_in(...)] [copy_out(...)] [copy_inout(...)] } \
          [ndrange(...)]
      #pragma omp task [in(...)] [out(...)] [inout(...)]
      { <.. function or code block ..> }
      #pragma omp taskwait
  • The device clause selects which device the task implementation targets; the copy clauses ask the runtime to ensure consistent data is accessible in the address space of the device; in/out/inout are used to compute dependences; ndrange carries the kernel invocation configuration, which the compiler parses; taskwait waits for child tasks. See the sketch after this slide.
  Guray Ozen - IWOMP'14
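  A minimal sketch (not taken from the slides) of how a hand-written CUDA kernel is offloaded as an OmpSs task; the kernel name saxpy_kernel, the OmpSs-style array shaping ([n]x) and the block size 128 are illustrative assumptions:

      // Annotated kernel prototype: calling it creates a task that the Nanos++
      // runtime launches on a GPU with an n-element grid, 128 threads per block.
      #pragma omp target device(cuda) ndrange(1, n, 128) copy_deps
      #pragma omp task in([n]x) inout([n]y)
      __global__ void saxpy_kernel(int n, float a, const float *x, float *y);

      void saxpy(int n, float a, float *x, float *y)
      {
          saxpy_kernel(n, a, x, y);   // asynchronous task creation; data movement handled by the runtime
          #pragma omp taskwait        // wait for the task; y is consistent on the host afterwards
      }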

  6. VectorADD: MACC vs OmpSs vs OpenMP 4.0 Guray Ozen - IWOMP'14
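  The slide compares the three versions side by side; the listings themselves are not preserved in this transcript. A minimal sketch of how vector addition looks with MACC-style directives (array names, sizes and the OmpSs shaping syntax are illustrative), where the compiler generates the CUDA kernel from the annotated loop:

      #pragma omp target device(acc) copy_deps
      #pragma omp task in([n]a, [n]b) out([n]c)
      #pragma omp teams distribute parallel for
      for (int i = 0; i < n; ++i)
          c[i] = a[i] + b[i];        // MACC turns this loop into a CUDA kernel

      #pragma omp taskwait           // results in c become visible on the host here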

  7. MACC Code Generation
  • Offload
    • Starts after #pragma omp target device(acc)
    • The device clause is extended to specify the device type, not a physical device number (much better support for multiple GPUs)
    • Besides that, the target directive keeps task-level parallelism: the CUDA code generated from the task's code region becomes an OmpSs task and works asynchronously
  • Kernel configuration
    • #pragma omp teams [num_teams(int)] [thread_limit(int)]
    • If not specified, MACC defaults to one iteration per block/thread
  • Work-sharing directives
    • #pragma omp distribute: iterations of the loop are distributed among CUDA blocks
    • #pragma omp parallel for: iterations of the loop are distributed among CUDA threads
    • Nesting enables multiple thread dimensions (2D/3D); see the sketch after this slide
  Guray Ozen - IWOMP'14
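  A hedged sketch (loop bounds, array names and clause values are illustrative) of a loop nest annotated so that the outer loop maps to CUDA blocks and the inner loop to CUDA threads:

      #pragma omp target device(acc) copy_deps
      #pragma omp task in([n*m]a) out([n*m]b)
      #pragma omp teams num_teams(n) thread_limit(256)
      #pragma omp distribute                    // i -> CUDA blocks
      for (int i = 0; i < n; ++i) {
          #pragma omp parallel for              // j -> CUDA threads (second dimension)
          for (int j = 0; j < m; ++j)
              b[i*m + j] = 2.0 * a[i*m + j];
      }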

  8. MACC Code Generation
  • Cyclic distribution: 1 iteration -> 1 CUDA block / thread
  • If at all possible, the redundant per-thread iteration loop is removed
  • This avoids thread divergence in CUDA when assigning one iteration to one thread/block
  A sketch of the resulting kernel shape follows.
  Guray Ozen - IWOMP'14
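  A rough sketch (not actual MACC output) of the kernel shape this mapping produces for a one-dimensional loop of n iterations:

      __global__ void generated_loop(int n, const double *a, double *b)
      {
          // One iteration per CUDA thread (cyclic distribution over blocks and threads).
          // When the launch configuration covers exactly n iterations, the guard and the
          // surrounding loop can be dropped, avoiding thread divergence.
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n)
              b[i] = 2.0 * a[i];
      }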

  9. MACC Code Generation
  • Data transfers minimized automatically (GPU-host)
    • OpenMP 4.0: target data must be specified in order to keep data resident on the device; sometimes downloads/uploads are performed by hand with target update
    • MACC: target data and target update are ignored; the programmer only specifies the directionality of task data (#pragma omp task in(list) out(list) inout(list)), not the actual data movement; data is not downloaded from the GPU until a taskwait
  • Task scheduling with multiple GPUs
    • OpenMP 4.0: the device_id is given by hand with device(int) and target data device(device_id); multi-GPU scheduling is managed by the user, and device-to-device data transfer is unavailable
    • MACC: no device_id; the runtime can schedule across multiple GPUs and run kernels concurrently, and device-to-device transfers become available
  For contrast, a sketch of the explicit OpenMP 4.0 data management follows; the next slide shows the MACC version.
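  A minimal sketch (array names illustrative) of the explicit data management plain OpenMP 4.0 needs to keep an array resident on the device across two kernels, which MACC instead derives automatically from the in/out/inout clauses:

      #pragma omp target data map(to: a[0:n]) map(alloc: b[0:n])
      {
          #pragma omp target teams distribute parallel for
          for (int i = 0; i < n; ++i)
              b[i] = 2.0 * a[i];                 // first kernel; b stays on the device

          #pragma omp target teams distribute parallel for
          for (int i = 0; i < n; ++i)
              b[i] = b[i] + 1.0;                 // second kernel reuses device-resident b

          #pragma omp target update from(b[0:n]) // explicit download, written by hand
      }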

  10. MACC: Minimized Data Transfers & Multi-GPU

      int main(...)
      {
          double A[N], B[N], C[N], D[N];
          while (...) {
              #pragma omp target device(cuda) ndrange(...) copy_deps
              #pragma omp task inout(C) out(D)
              <.. optimized CUDA kernel invocation ..>

              #pragma omp target device(acc) copy_deps
              #pragma omp task in(A) out(B)
              #pragma omp teams distribute parallel for
              for (i = 0; i < N; ++i)
                  <.. sequential code to generate CUDA from ..>

              #pragma omp target device(acc) copy_deps
              #pragma omp task inout(A, B)
              #pragma omp teams distribute parallel for
              for (i = 0; i < N; ++i)
                  <.. sequential code to generate CUDA from ..>
          }

          #pragma omp target device(acc) copy_deps
          #pragma omp task inout(C, B) in(D)
          #pragma omp teams distribute parallel for
          for (i = 0; i < N; ++i)
              <.. sequential code to generate CUDA from ..>

          #pragma omp target device(smp) copy_deps
          #pragma omp task in(A, C)
          <.. sequential code / result test ..>

          #pragma omp taskwait
      }

  [Figure: execution trace of this code; kernels 1-5 run on the GPUs while the host stays idle, with memcpyH2D(C) and memcpyH2D(A) at the start, memcpyD2D(B) between devices, and memcpyD2H(A) and memcpyD2H(C) at the end.]
  Guray Ozen - IWOMP'14

  11. MACC Code Generation
  [Figure: GPU memory hierarchy; each team of GPU threads has its own team memory with fast bandwidth, while the device global memory and the host main memory are connected over slow bandwidth (PCIe).]
  • GPU global memory is slow and big
  • Use team memory instead:
    • Corresponds to shared memory in CUDA; each thread group (CUDA block) has one shared memory
    • Shared memory is faster than global memory
    • Very limited in size (e.g. 48 KB, compared to 6 GB of global memory)
  • In some hand-written CUDA codes we observed the use of shared memory for shared data, using blocking to overcome the limited size (a sketch of this pattern follows)
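  An illustrative sketch (not taken from the paper) of the hand-written pattern the slide refers to: a dense matrix-vector product y = A*x where the vector x, shared by all threads of a block, is staged into team (shared) memory tile by tile because it may exceed the 48 KB of shared memory. It assumes the kernel is launched with blockDim.x == TILE.

      #define TILE 256
      __global__ void matvec_blocked(int n, const double *A, const double *x, double *y)
      {
          __shared__ double xs[TILE];                     // per-block team memory
          int row = blockIdx.x * blockDim.x + threadIdx.x;
          double acc = 0.0;
          for (int base = 0; base < n; base += TILE) {
              if (base + threadIdx.x < n)
                  xs[threadIdx.x] = x[base + threadIdx.x]; // stage one tile of x
              __syncthreads();
              int lim = min(TILE, n - base);
              if (row < n)
                  for (int j = 0; j < lim; ++j)
                      acc += A[row * n + base + j] * xs[j]; // every thread reuses the tile
              __syncthreads();                             // before overwriting the tile
          }
          if (row < n) y[row] = acc;
      }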

  12. MACC Code Generation
  • Data-sharing clauses with teams: private | firstprivate
  • Three experimental new clauses are offered for the distribute directive:
    • dist_private([CHUNK]data1, [CHUNK]data2, ...)
    • dist_firstprivate([CHUNK]data1, [CHUNK]data2, ...)
    • dist_lastprivate([CHUNK]data1, [CHUNK]data2, ...)
  [Figure: data movement to device memory; with small data, whole arrays are replicated into each team memory, while with big data each team memory holds only its CHUNK-sized slice of the array.]
  A usage sketch follows.
  Guray Ozen - IWOMP'14
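  A hedged sketch of the proposed extension: the dist_firstprivate syntax follows the slide, but the exact semantics are the paper's proposal (not standard OpenMP), and the names and chunk size are illustrative. The intent is that each team receives a CHUNK-sized slice of x in team (shared) memory together with the block of iterations it executes:

      #define TILE 256

      #pragma omp target device(acc) copy_deps
      #pragma omp task in([n]x) out([n]y)
      #pragma omp teams
      #pragma omp distribute dist_firstprivate([TILE]x)   // per-team chunk of x placed in team memory
      for (int i = 0; i < n; ++i)
          y[i] = 2.0 * x[i];                              // reads of x hit the fast team memory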

  13. Jacobi (A*x = b)
  • Transparent management of data movement in MACC
  • No need for the data scoping directives required by OpenMP 4.0 / OpenACC
  • Hardware: 2 x Xeon E5649, 2 x NVIDIA Tesla M2090, 24 GB main memory
  • Software: OpenACC HMPP, NVCC 5.0, GCC 4.6
  A sketch of the MACC version follows.
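  A minimal sketch (loop structure and names are illustrative, not the benchmark source) of one Jacobi sweep written for MACC: only task directionality is declared, with no map or target data clauses, so the runtime keeps the arrays on the GPU across iterations:

      for (int it = 0; it < iters; ++it) {
          #pragma omp target device(acc) copy_deps
          #pragma omp task in([n*n]A, [n]b, [n]xold) out([n]xnew)
          #pragma omp teams distribute parallel for
          for (int i = 0; i < n; ++i) {
              double sigma = 0.0;
              for (int j = 0; j < n; ++j)
                  if (j != i)
                      sigma += A[i*n + j] * xold[j];
              xnew[i] = (b[i] - sigma) / A[i*n + i];
          }
          double *tmp = xold; xold = xnew; xnew = tmp;  // swap on the host; data stays on the GPU
      }
      #pragma omp taskwait                              // the final x is downloaded here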

  14. NAS Parallel Benchmark CG
  • NAS-CG solves an unstructured sparse linear system by the conjugate gradient method
  • Three problem sizes: C > B > A
  • Shows the effects of the runtime:
    • How important task scheduling is
    • Multiple GPUs and device-to-device transfers (runs with 2 GPUs)
  • Easy to develop with MACC
  • MACC is better even with one GPU: it supports CUDA concurrency through streams, and task scheduling is optimized by the Nanos++ runtime
  Guray Ozen - IWOMP'14

  15. DG Kernel
  • A climate benchmark developed by NCAR (National Center for Atmospheric Research)
  • Four versions of DG-Kernel:
    • Hand-optimized CUDA code developed at NCAR
    • OmpSs + CUDA kernel
    • OpenACC code developed at NCAR
    • MACC
  • Used to demonstrate that MACC can obtain better results than hand-optimized CUDA, and to compare MACC's optimization techniques against the hand-optimized CUDA program
  Guray Ozen - IWOMP'14

  16. DG Kernel
  • Three MACC optimization techniques are used:
    • Opt1: new team memory techniques
    • Opt2: removing redundant for-loop iterations
    • Opt3: starting the assignment with two thread dimensions
  • MACC obtains the best result
  Guray Ozen - IWOMP'14

  17. Conclusion
  • Presented the MACC research compiler, which adds the new accelerator directives to the OmpSs programming model
    • Avoids kernel programming in CUDA/OpenCL
    • Programming productivity and performance
    • New extensions proposed to OpenMP 4.0
  • The compiler plays a key role:
    • Code generation
    • Applying GPU-specific optimizations
  • The effects of the runtime and the programmer are also important:
    • Managing many kernels on many GPUs
    • Ability to use multiple GPUs
    • Using different pragma directives
  Guray Ozen - IWOMP'14

  18. Thank you! For further information please contact guray.ozen@bsc.es

  19. MACC: Mercurium ACCelerator compiler

  20. MACCMercuriumACCeleratorcompiler • Approach • StartfromOmpSs • Developed at BSC • Alreadyprovidingsupportfortaskdependencies and offloading CUDA/OpenCLkernels to accelerators • Addtheminimum set of OpenMP 4.0 acceleratormodeldirectivesintotheOmpSsprogramming in orderto avoidkernelprogramming • Add extra directivesforadditionalprogrammingproductivity and performance, ifnecessary • OmpSsprogrammingmodelimplementedwith • MercuriumCompiler • Source-2-Source compiler • Easy to prototype new codetransformations and generation • MACC requiredsomechanges in existingcompilationphases and a new phase • Nanos++ runtimesystem • Extremelygoodtaskparallelism • SupportsHeterogeneoustask (CUDA, OpenCL, SMP) • No changesrequired to support MACC codegeneration Guray Ozen - IWOMP'14

  21. Compiler phases in Mercurium
  • Parser (modified): parses the new OpenMP 4.0 directives; new IR added for OpenMP 4.0
  • Nanos++ lowering (modified): lowers the OpenMP directives; some semantics are changed
  • MACC lowering (new): CUDA code generation
  • Compilation driver: backend compiling and linking
  Guray Ozen - IWOMP'14

  22. MACC offers new clauses in order to use team memory
  • If the data is small (existing OpenMP clauses for #pragma omp teams):
    • private(list)
    • firstprivate(list)
  • If the data is big (new MACC clauses for #pragma omp distribute):
    • dist_private([CHUNK]data1, [CHUNK]data2, ...)
    • dist_firstprivate([CHUNK]data1, [CHUNK]data2, ...)
    • dist_lastprivate([CHUNK]data1, [CHUNK]data2, ...)
  Guray Ozen - IWOMP'14
