
CoMPI: Enhancing MPI based applications performance and scalability using run-time compression.



  1. CoMPI: Enhancing MPI based applications performance and scalability using run-time compression. Rosa Filgueira, David E. Singh, Alejandro Calderón and Jesús Carretero. University Carlos III of Madrid.

  2. Summary • Problem description • Main objectives • CoMPI • Study of compression algorithms. • Evaluation of CoMPI • Results • Conclusions

  3. Summary • Problem description • Main objectives • CoMPI • Study of compression algorithms. • Evaluation of CoMPI • Results • Conclusions

  4. Problem description

  5. Main objectives (1/2) • Reduce the communication transfer time for MPI.

  6. Main objectives (2/2) • CoMPI: Optimization of MPI communications by using compression. • Compression in all MPI primitives. • Fits any MPI application. • Transparent to the user. • Run-time compression. • Study of compression algorithms. • Selection of the best algorithm based on message characteristics.

  7. Summary • Problem description • Main objectives • CoMPI • How we have integrated compression into MPI • Set of compression algorithms proposed • Study of compression algorithms. • Evaluation of CoMPI • Results • Conclusions

  8. MPICH architecture (1/2)

  9. MPICH architecture (2/2)

  10. Compression of MPI Messages (1/2)

  11. Compression of MPI Messages (2/2) • A header in the exchanged message indicates: • Whether compression is used, the algorithm, and the length. • All compression algorithms are included in a single compression library: • CoMPI can be easily updated. • New compression algorithms can be included.
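The slides do not show the actual header layout. A minimal C sketch of what such a per-message header could look like, with pack/unpack helpers; the field names and widths here are assumptions for illustration, not CoMPI's real format:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical per-message header prepended by the compression layer.
 * Field names, widths, and algorithm codes are illustrative only.
 * Note: a real implementation would fix the byte order of orig_len
 * for heterogeneous clusters; this sketch copies host-order bytes. */
typedef struct {
    uint8_t  compressed;  /* 1 if payload is compressed, 0 otherwise  */
    uint8_t  algorithm;   /* e.g. 0=none, 1=LZO, 2=RLE, 3=Rice, 4=FPC */
    uint32_t orig_len;    /* uncompressed payload length in bytes     */
} msg_header;

/* Serialize the header into the first bytes of the send buffer;
 * returns the number of header bytes written. */
static size_t pack_header(const msg_header *h, unsigned char *buf) {
    buf[0] = h->compressed;
    buf[1] = h->algorithm;
    memcpy(buf + 2, &h->orig_len, sizeof h->orig_len);
    return 2 + sizeof h->orig_len;
}

/* Recover the header on the receiving side; returns bytes consumed. */
static size_t unpack_header(msg_header *h, const unsigned char *buf) {
    h->compressed = buf[0];
    h->algorithm  = buf[1];
    memcpy(&h->orig_len, buf + 2, sizeof h->orig_len);
    return 2 + sizeof h->orig_len;
}
```

The receiver reads the header first, then decompresses the payload only when `compressed` is set, which is what lets the library fall back to plain transfers transparently.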

  12. Set of compression algorithms proposed (1/2)

  13. Set of compression algorithms proposed (2/2)

  14. Summary • Problem description • Main objectives • CoMPI • Study of compression algorithms. • Conclusion of compression study. • Evaluation of CoMPI • Results • Conclusions

  15. Study of compression algorithms (1/7) • To select the most appropriate algorithm for each datatype based on: • Buffer size. • Redundancy level. • Whether compression increases the transmission speed depends on: • Number of bits sent. • Time required to compress. • Time required to decompress.
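The trade-off on this slide can be written as a simple inequality: compression pays off when compressing, sending the smaller message, and decompressing is faster than sending the raw message. A C sketch of that check, with all parameter names and the linear cost model being assumptions, not CoMPI's actual decision logic:

```c
/* Returns 1 when compressed transfer is predicted to be faster.
 * raw_bytes      - uncompressed message size in bytes
 * ratio          - compression ratio (original size / compressed size)
 * bandwidth_bps  - network bandwidth in bytes per second
 * t_compress / t_decompress - CPU time spent on each side, in seconds
 * Model and names are illustrative; real systems also have latency
 * terms and per-algorithm cost curves. */
static int compression_wins(double raw_bytes, double ratio,
                            double bandwidth_bps,
                            double t_compress, double t_decompress) {
    double t_raw  = raw_bytes / bandwidth_bps;
    double t_comp = t_compress
                  + (raw_bytes / ratio) / bandwidth_bps
                  + t_decompress;
    return t_comp < t_raw;
}
```

For example, a 1 MB message over a 100 MB/s link takes 10 ms raw; with a 2x ratio and 1 ms each to compress and decompress, the compressed path takes 7 ms and wins, but a slow compressor erases the gain.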

  16. Study of compression algorithms (2/7) • For each algorithm, datatype, buffer size and redundancy level we study the complexity and the compression ratio.

  17. Study of compression algorithms (3/7)

  18. Study of compression algorithms (4/7) • Integer dataset

  19. Study of compression algorithms (5/7) • Floating-point dataset

  20. Study of compression algorithms (6/7) • Double precision dataset WITHOUT pattern

  21. Study of compression algorithms (7/7) • Double precision WITH pattern: Data sequence: 50001.0, 50003.0, 50005.0, …
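The intuition behind why patterned double-precision data compresses well (the idea behind predictor-based compressors such as FPC) can be shown by XORing the bit patterns of consecutive values: on smooth sequences the residual has many leading zero bytes that need not be transmitted. A small C illustration of that idea, not CoMPI's FPC implementation:

```c
#include <stdint.h>
#include <string.h>

/* Reinterpret a double's bits as a 64-bit integer. */
static uint64_t bits_of(double d) {
    uint64_t u;
    memcpy(&u, &d, sizeof u);
    return u;
}

/* XOR residual between consecutive values: small for patterned data. */
static uint64_t xor_residual(double prev, double cur) {
    return bits_of(prev) ^ bits_of(cur);
}

/* Count leading zero bytes of the residual, i.e. bytes a
 * predictor-based coder could omit from the message. */
static int leading_zero_bytes(uint64_t x) {
    int n = 0;
    for (int i = 7; i >= 0 && ((x >> (8 * i)) & 0xFF) == 0; i--)
        n++;
    return n;
}
```

For the slide's sequence, 50001.0 and 50003.0 share sign, exponent, and most mantissa bits, so their residual keeps several leading zero bytes; unrelated values share almost none, which is why the same algorithm gains little on pattern-free doubles.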

  22. Conclusion of compression study

  23. Summary • Problem description • Main objectives • CoMPI • Study of compression algorithms. • Evaluation of CoMPI • Results • Conclusions

  24. Evaluation of CoMPI

  25. Summary • Problem description • Main objectives • CoMPI • Study of compression algorithms. • Evaluation of CoMPI • Results • Real Applications • Benchmarks • Conclusions

  26. Results (1/5) • BISP3D: • Floating-point data. • Improves between x1.2 and x1.4 with LZO.

  27. Results (2/5) • PSRG: • Integer data. • Improves up to x2 with LZO.

  28. Results (3/5) • STEM-II: • Floating-point data. • Improves up to x1.4 with LZO.

  29. Results (4/5) • IS: • Integer data. • Improves up to x1.2 with LZO. • Rice obtains good results with 32 processes.

  30. Results (5/5) • LU: • Double-precision data. • No performance improvement, except with 64 processes, where FPC obtains a speedup of x1.1.

  31. Summary • Problem description • Main objectives • CoMPI • Study of compression algorithms. • Evaluation of CoMPI • Results • Conclusions • Principal conclusions. • Ongoing work.

  32. Principal conclusions (1/2) • CoMPI: a new compression library integrated into MPI using the MPICH distribution. • CoMPI includes five different compression algorithms and compresses all MPI primitives. • Main characteristics: • Transparent to the users. • Fits any application without any change to it. • We have evaluated CoMPI using: • Synthetic traces. • Real applications.

  33. Principal conclusions (2/2) • The evaluation results demonstrate that, in most cases, compression: • Reduces the overall execution time. • Enhances the scalability. • When compression is not appropriate: • Only little performance degradation.

  34. Ongoing work (1/2)

  35. Ongoing work (2/2)

  36. Questions?
