

    1. Parallel I/O using MPI I/O
       Giri Chukkapalli, SDSC

    2. Common I/O needs
       A set of processes may need to:
       - Read input data from a single file
       - Write output data to a single file
       - Checkpoint during computations
       - Perform out-of-core computations

    3. Need for parallel I/O
       - Sequential I/O becomes a bottleneck
       - Without parallel I/O, tedious pre- and post-processing is necessary to split and merge global files
       - Standard Unix I/O is not portable
       - True parallel I/O requires multi-process access to a parallel file system

    4. Advantages of MPI I/O
       - Performance portability
       - Convenience
       - A public-domain implementation of the MPI I/O API is available: ROMIO from ANL

    5. Features of MPI I/O
       - Parallel read/write
       - Read/write of non-contiguous data
       - Non-blocking read/write
       - Collective read/write
       - Portable data representation across platforms
       - HPF-style distributed array syntax
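
    As a minimal sketch of the parallel and collective read/write features listed above (not from the original slides; the file name "out.dat" and the block size are illustrative choices), each rank writes one contiguous block of a shared file with a single collective call:

        #include <mpi.h>

        #define N 1024   /* ints per rank; illustrative */

        int main(int argc, char **argv)
        {
            int rank, i, buf[N];
            MPI_File fh;
            MPI_Offset offset;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            for (i = 0; i < N; i++) buf[i] = rank;

            /* every process opens the same file */
            MPI_File_open(MPI_COMM_WORLD, "out.dat",
                          MPI_MODE_CREATE | MPI_MODE_WRONLY,
                          MPI_INFO_NULL, &fh);

            /* rank r writes its block at byte offset r * blocksize,
               collectively with all other ranks */
            offset = (MPI_Offset)rank * N * sizeof(int);
            MPI_File_write_at_all(fh, offset, buf, N, MPI_INT,
                                  MPI_STATUS_IGNORE);

            MPI_File_close(&fh);
            MPI_Finalize();
            return 0;
        }

    Because the write is collective, the MPI implementation is free to merge the per-rank requests into a few large file accesses.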

    6. Derived data types
       - A new type is a combination of existing types and gaps (spaces)
       - Derived data types are used to communicate noncontiguous data
       - In point-to-point communication, the sender's data type is mapped onto the receiver's data type
       - In MPI I/O, the writer's data type is mapped onto the fileview, and vice versa

    7. Derived data types
       - MPI_Type_contiguous
       - MPI_Type_vector / MPI_Type_hvector
       - MPI_Type_indexed / MPI_Type_hindexed
       - MPI_Type_struct
       - MPI_Type_create_subarray / MPI_Type_create_darray
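
    A small illustration of one of these constructors (the 8x10 array of doubles is an arbitrary example, not from the slides): MPI_Type_vector describes one column of a row-major array as a strided, noncontiguous type, which can then be used in communication or as a filetype.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            MPI_Datatype column;
            MPI_Aint lb, extent;

            MPI_Init(&argc, &argv);

            /* one column of an 8x10 row-major array of doubles:
               8 blocks of 1 element each, stride of 10 elements */
            MPI_Type_vector(8, 1, 10, MPI_DOUBLE, &column);
            MPI_Type_commit(&column);

            MPI_Type_get_extent(column, &lb, &extent);
            printf("column type: lb=%ld, extent=%ld bytes\n",
                   (long)lb, (long)extent);

            MPI_Type_free(&column);
            MPI_Finalize();
            return 0;
        }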

    8. Fileview
       - A displacement, an etype, and a filetype together create a fileview
       - A fileview allows simultaneous reading/writing of noncontiguous, interleaved data by multiple processes
       - The fileview is set with the MPI_File_set_view call
       - Each process can have a different fileview of a single file
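
    One way to sketch this (the file name "interleaved.dat" and the block size are illustrative, not from the slides): build a filetype whose extent spans the blocks of all ranks, so that tiling the filetype interleaves the ranks' data round-robin, and give each rank a displacement that skips the lower ranks' first blocks.

        #include <mpi.h>

        #define BLK 100   /* ints per block; illustrative */

        int main(int argc, char **argv)
        {
            int rank, nprocs, i, buf[2 * BLK];
            MPI_File fh;
            MPI_Datatype contig, filetype;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
            for (i = 0; i < 2 * BLK; i++) buf[i] = rank;

            /* filetype = one block of BLK ints, resized so its extent
               covers the blocks of all ranks; tiling it interleaves them */
            MPI_Type_contiguous(BLK, MPI_INT, &contig);
            MPI_Type_create_resized(contig, 0,
                                    (MPI_Aint)nprocs * BLK * sizeof(int),
                                    &filetype);
            MPI_Type_commit(&filetype);

            MPI_File_open(MPI_COMM_WORLD, "interleaved.dat",
                          MPI_MODE_CREATE | MPI_MODE_WRONLY,
                          MPI_INFO_NULL, &fh);

            /* fileview: displacement skips the lower ranks' first blocks,
               etype = MPI_INT, filetype = the resized block type */
            MPI_File_set_view(fh, (MPI_Offset)rank * BLK * sizeof(int),
                              MPI_INT, filetype, "native", MPI_INFO_NULL);

            /* each rank writes two blocks; they land interleaved in the file */
            MPI_File_write_all(fh, buf, 2 * BLK, MPI_INT, MPI_STATUS_IGNORE);

            MPI_File_close(&fh);
            MPI_Type_free(&contig);
            MPI_Type_free(&filetype);
            MPI_Finalize();
            return 0;
        }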

    9. MPI I/O standard
       - All calls start with MPI_File_: open, read, write, seek, close
       - Asynchronous modifier "i": iread, iwrite, etc.
       - Absolute-position modifier "_at": read_at, write_at
       - Collective modifier "_all": read_all, write_all, etc.
       - Split-collective modifiers "_begin" / "_end"
       - Shared file pointer modifier "_shared"
       - MPI_Type_* calls to create derived data types
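
    For instance, combining the "i" and "_at" modifiers gives a non-blocking write at an explicit offset. A sketch (the file name and sizes are illustrative):

        #include <mpi.h>

        #define N 256   /* ints per rank; illustrative */

        int main(int argc, char **argv)
        {
            int rank, i, buf[N];
            MPI_File fh;
            MPI_Request req;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            for (i = 0; i < N; i++) buf[i] = rank;

            MPI_File_open(MPI_COMM_WORLD, "async.dat",
                          MPI_MODE_CREATE | MPI_MODE_WRONLY,
                          MPI_INFO_NULL, &fh);

            /* "i" + "_at": non-blocking write at an explicit byte offset
               (MPI-2 completes it with MPI_Wait; some early ROMIO
               versions used MPIO_Request/MPIO_Wait instead) */
            MPI_File_iwrite_at(fh, (MPI_Offset)rank * N * sizeof(int),
                               buf, N, MPI_INT, &req);

            /* ... computation can overlap the I/O here ... */

            MPI_Wait(&req, MPI_STATUS_IGNORE);
            MPI_File_close(&fh);
            MPI_Finalize();
            return 0;
        }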

    10. Examples of MPI I/O Usage
       - Simple read/write of contiguous data
       - Read/write of noncontiguous data from a file to a contiguous memory location
       - Non-blocking read/write of data
       - Collective I/O to a single file
       - Read/write of distributed arrays
       - Look in /usr/local/apps/examples/UsingMPI2 on Blue Horizon
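
    The distributed-array case can be expressed with MPI_Type_create_darray. The sketch below is illustrative only (the 128x128 global size, the file name "array.dat", and the assumption that the array divides evenly over the process grid are not from the slides): it sets a fileview from the darray type and reads each rank's piece collectively.

        #include <mpi.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            int rank, nprocs, local_count;
            int gsizes[2]   = {128, 128};   /* global array size; illustrative */
            int distribs[2] = {MPI_DISTRIBUTE_BLOCK, MPI_DISTRIBUTE_BLOCK};
            int dargs[2]    = {MPI_DISTRIBUTE_DFLT_DARG, MPI_DISTRIBUTE_DFLT_DARG};
            int psizes[2]   = {0, 0};       /* let MPI choose the process grid */
            MPI_Datatype darray;
            MPI_File fh;
            double *local;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

            MPI_Dims_create(nprocs, 2, psizes);
            MPI_Type_create_darray(nprocs, rank, 2, gsizes, distribs, dargs,
                                   psizes, MPI_ORDER_C, MPI_DOUBLE, &darray);
            MPI_Type_commit(&darray);

            /* local piece; assumes the global array divides evenly */
            local_count = (gsizes[0] / psizes[0]) * (gsizes[1] / psizes[1]);
            local = malloc(local_count * sizeof(double));

            MPI_File_open(MPI_COMM_WORLD, "array.dat", MPI_MODE_RDONLY,
                          MPI_INFO_NULL, &fh);
            MPI_File_set_view(fh, 0, MPI_DOUBLE, darray, "native",
                              MPI_INFO_NULL);
            MPI_File_read_all(fh, local, local_count, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);
            MPI_File_close(&fh);

            free(local);
            MPI_Type_free(&darray);
            MPI_Finalize();
            return 0;
        }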

    11. Blue Horizon Specific
       - Always compile with the thread-safe compilers: mpcc_r, mpxlf90_r, etc.
       - Use the GPFS file system (/gpfs/userid): 5 TB of disk and 12 I/O servers, achieving ~1 GB/s bandwidth
       - Not fully MPI I/O compliant yet; look in /usr/lpp/ppe.poe/include/mpi.h for supported function calls
       - Look in /usr/local/apps/examples/UsingMPI2 for example codes

    12. MPI I/O performance issues
       To achieve good performance:
       - Read/write in as large chunks as possible
       - Use derived data types to read/write non-contiguous data
       - Use collective I/O calls
       - Use non-blocking I/O calls
       - Provide hints through the "info" parameter
       - Provide a complete picture of the total I/O operation on the whole file by all the processes
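
    Hints are passed as key/value strings in an MPI_Info object attached at open time (or later via MPI_File_set_info), and an implementation is free to ignore any of them. A sketch using a few ROMIO-style reserved hint keys (the particular values are illustrative, not recommendations from the slides):

        #include <mpi.h>

        int main(int argc, char **argv)
        {
            MPI_File fh;
            MPI_Info info;

            MPI_Init(&argc, &argv);

            /* hints are advisory; unknown or unsupported keys are ignored */
            MPI_Info_create(&info);
            MPI_Info_set(info, "cb_buffer_size", "16777216"); /* collective buffer size */
            MPI_Info_set(info, "cb_nodes", "8");               /* number of aggregator nodes */
            MPI_Info_set(info, "striping_factor", "12");       /* number of I/O servers/disks */

            MPI_File_open(MPI_COMM_WORLD, "big.dat",
                          MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

            /* ... large, collective writes go here ... */

            MPI_File_close(&fh);
            MPI_Info_free(&info);
            MPI_Finalize();
            return 0;
        }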

    13. References
       - http://www.mcs.anl.gov/software : MPI-2 reference manual
       - http://www.classes.cs.uchicago.edu/classes/archive/2000/fall/CS103-01/Lectures/mpi-io : slides from a very good talk
       - http://hpcf.nersc.gov/software/libs/io/mpiio.html : Intro to MPI I/O at NERSC
       - http://www.llnl.gov/asci/sc99fliers/mpi_io_pg1.html : MPI I/O efforts at Livermore
       - http://www-unix.mcs.anl.gov/romio/papers.html : good papers on MPI I/O
       - http://www.cs.dartmouth.edu/pario/ : a comprehensive parallel I/O archive
