
Measuring Synchronisation and Scheduling Overheads in OpenMP



  1. Measuring Synchronisation and Scheduling Overheads in OpenMP J. Mark Bull EPCC University of Edinburgh, UK email: m.bull@epcc.ed.ac.uk

  2. Overview • Motivation • Experimental method • Results and analysis • Synchronisation • Loop scheduling • Conclusions and future work

  3. Motivation • Compare OpenMP implementations on different systems. • Highlight inefficiencies. • Investigate performance implications of semantically equivalent directives. • Allow estimation of synchronisation/scheduling overheads in applications.

  4. Experimental method • Basic idea is to compare the same code executed with and without directives. • Overhead computed as the (mean) difference in execution time. • e.g. for the DO directive, compare:

      !$OMP PARALLEL
      do j=1,innerreps
      !$OMP DO
      do i=1,numthreads
      call delay(dlength)
      end do
      end do
      !$OMP END PARALLEL

  to

      do j=1,innerreps
      call delay(dlength)
      end do
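  A minimal sketch of the surrounding measurement harness, assuming a delay(dlength) busy-wait routine as above and the standard omp_get_wtime() timer (variable names here are illustrative, not the benchmark's actual code):

      use omp_lib
      double precision :: t0, tref, tpar, overhead

      ! Reference time: the delay loop with no directives.
      t0 = omp_get_wtime()
      do j = 1, innerreps
         call delay(dlength)
      end do
      tref = omp_get_wtime() - t0

      ! Same work under the DO directive: each thread performs one
      ! delay per outer iteration, plus the construct's overhead.
      t0 = omp_get_wtime()
      !$OMP PARALLEL PRIVATE(j)
      do j = 1, innerreps
      !$OMP DO
      do i = 1, numthreads
         call delay(dlength)
      end do
      end do
      !$OMP END PARALLEL
      tpar = omp_get_wtime() - t0

      ! Overhead of one DO construct is the mean difference.
      overhead = (tpar - tref) / innerreps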

  5. Experimental method (cont.) • Similar technique can be used for PARALLEL (with and without REDUCTION clause), PARALLEL DO, BARRIER and SINGLE directives. • For mutual exclusion (CRITICAL, ATOMIC, lock/unlock) use a similar method, comparing

      !$OMP PARALLEL
      do j=1,innerreps/nthreads
      !$OMP CRITICAL
      call delay(dlength)
      !$OMP END CRITICAL
      end do
      !$OMP END PARALLEL

  to the same reference time.
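  The lock/unlock variant follows the same pattern, with the standard OpenMP lock routines in place of the directive pair; a sketch, reusing the loop counts above:

      use omp_lib
      integer(kind=omp_lock_kind) :: lck

      call omp_init_lock(lck)
      !$OMP PARALLEL PRIVATE(j)
      do j = 1, innerreps/nthreads
         call omp_set_lock(lck)     ! serialises entry, like CRITICAL
         call delay(dlength)
         call omp_unset_lock(lck)
      end do
      !$OMP END PARALLEL
      call omp_destroy_lock(lck)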

  6. Experimental method (cont.)
  • Can use the same method as for the DO directive to investigate loop scheduling overheads.
  • For the loop scheduling options, overhead depends on:
    • number of threads
    • number of iterations per thread
    • execution time of the loop body
    • chunk size
  • This is a large parameter space, so fix the first three and vary the chunk size (a benchmark sketch follows below):
    • 4 threads
    • 1024 iterations per thread
    • 100 clock cycles to execute the loop body
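  For one point in this parameter space, the measured construct is just the DO benchmark with a SCHEDULE clause; a sketch, with chunksize as the varying parameter (names illustrative):

      ! 4 threads x 1024 iterations per thread = 4096 distributed
      ! iterations; delay() executes ~100 cycles per call.
      !$OMP PARALLEL PRIVATE(j)
      do j = 1, innerreps
      !$OMP DO SCHEDULE(DYNAMIC,chunksize)
      do i = 1, nthreads*1024
         call delay(dlength)
      end do
      end do
      !$OMP END PARALLEL

  Replacing DYNAMIC with STATIC or GUIDED gives the other scheduling options at the same chunk size.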

  7. Timing • Need to take care with timing routines: • taking differences of 32-bit floating point timer values (e.g. etime) loses too much precision • need microsecond accuracy (Fortran 90 system_clock isn't good enough on some systems) • For statistical stability, repeat each measurement 50 times per run, and for 20 runs of the executable • we observe significant variation between runs which is absent within a given run • Reject runs with large standard deviations, or with large numbers of outliers.
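  A sketch of the within-run statistics, using omp_get_wtime(), which returns double precision seconds and avoids the 32-bit precision problem (benchmark_body() is a hypothetical stand-in for the measured construct):

      use omp_lib
      integer, parameter :: nreps = 50
      double precision :: t(nreps), t0, mean, sd
      integer :: k

      do k = 1, nreps
         t0 = omp_get_wtime()
         call benchmark_body()     ! hypothetical measured construct
         t(k) = omp_get_wtime() - t0
      end do
      mean = sum(t) / nreps
      sd   = sqrt(sum((t - mean)**2) / (nreps - 1))
      ! Runs with large sd, or many points far from the mean, are rejected.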

  8. Systems tested • Benchmark codes have been run on: • Sun HPC 3500, eight 400 MHz UltraSPARC II processors, KAI guidef90 preprocessor, Solaris f90 compiler • SGI Origin 2000, forty 195 MHz MIPS R10000 processors, MIPSpro f90 compiler (access to 8 processors only) • Compaq Alpha server, four 525 MHz EV5/6 processors, Digital f90 compiler

  9.-17. [Results charts, not reproduced in this transcript: three sets of overhead plots, each covering the Sun HPC 3500, SGI Origin 2000 and Compaq Alpha server.]

  18. Observations • PARALLEL directive uses 2 barriers • is this strictly necessary? • PARALLEL DO costs twice as much as DO • REDUCTION clause scales badly • should it use a fan-in method? (see the sketch below) • SINGLE should not cost more than BARRIER • Mutual exclusion scales badly on Origin 2000 • CRITICAL directive very expensive on Compaq
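  A fan-in reduction combines partial sums pairwise in log2(nthreads) steps rather than serialising every update on one variable; a sketch, assuming the thread count is a power of two and ignoring padding/false-sharing concerns (local_sum() and maxthreads are illustrative names, not part of the benchmark):

      use omp_lib
      integer, parameter :: maxthreads = 64
      double precision :: partial(0:maxthreads-1)
      double precision, external :: local_sum  ! illustrative: thread's local work
      integer :: myid, nt, step

      !$OMP PARALLEL PRIVATE(myid, nt, step)
      myid = omp_get_thread_num()
      nt   = omp_get_num_threads()
      partial(myid) = local_sum()      ! each thread's contribution
      step = 1
      do while (step < nt)
      !$OMP BARRIER
         if (mod(myid, 2*step) == 0) then
            partial(myid) = partial(myid) + partial(myid+step)
         end if
         step = 2*step
      end do
      !$OMP END PARALLEL
      ! partial(0) now holds the reduced value.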

  19. Observations (cont.) • Small chunk sizes are very expensive • the compiler should generate code statically for a block-cyclic schedule (see the sketch below) • DYNAMIC much more expensive than STATIC, especially on Origin 2000 • On Origin 2000 and Compaq, block-cyclic is more expensive than block, even with one chunk per thread.
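  Statically generated block-cyclic code needs no run-time scheduler calls: each thread strides directly through its own chunks. A sketch of code a compiler could emit for the loop body inside a parallel region (variable names illustrative; n is the total iteration count):

      ! Thread myid executes chunks myid, myid+nt, myid+2*nt, ...
      myid = omp_get_thread_num()
      nt   = omp_get_num_threads()
      do lo = 1 + myid*chunksize, n, nt*chunksize
         do i = lo, min(lo + chunksize - 1, n)
            call delay(dlength)
         end do
      end do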

  20. Conclusions and future work • Set of benchmarks to measure synchronisation and scheduling costs in OpenMP. • Show significant differences between systems. • Show some potential areas for optimisation. • Would like to run on more (and larger) systems.
