Scalable Parallel I/O Alternatives for Massively Parallel Partitioned Solver Systems

Jing Fu, Ning Liu, Onkar Sahni,

Ken Jansen, Mark Shephard, Chris Carothers

Computer Science Department

Scientific Computation Research Center (SCOREC)

Rensselaer Polytechnic Institute

chrisc@cs.rpi.edu

Acknowledgments: Partners: Simmetrix, Acusim, Kitware, IBM

NSF PetaApps, DOE INCITE, ITR, CTS; DOE: SciDAC-ITAPS, NERI; AFOSR
Industry: IBM, Northrop Grumman, Boeing, Lockheed Martin, Motorola
Computer Resources: TeraGrid, ANL, NERSC, RPI-CCNI

Outline
  • Motivating application: CFD
  • Blue Gene Platforms
  • I/O Alternatives
    • POSIX
    • PMPIO
    • syncIO
    • “reduced blocking” rbIO
  • Blue Gene Results
  • Summary


PHASTA Flow Solver Parallel Paradigm

[Figure: unstructured mesh partitioned into parts P1, P2, P3, each mapped to a core]
  • Time-accurate, stabilized FEM flow solver
  • Input partitioned on a per-processor basis
  • Unstructured mesh “parts” mapped to cores
  • Two types of work:
    • Equation formation
      • O(40) peer-to-peer non-blocking comms
      • Overlapping comms with comp
      • Scales well on many machines
    • Implicit, iterative equation solution
      • Matrix assembled on processor ONLY
      • Each Krylov vector is:
        • q=Ap (matrix-vector product)
        • Same peer-to-peer comm of q PLUS
        • Orthogonalize against prior vectors
        • REQUIRES NORMS=>MPI_Allreduce
      • This sets up a cycle of global communications separated by a modest amount of work (sketched below)
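To make that cycle concrete, here is a minimal sketch in C with MPI (not PHASTA source): the local matrix-vector product is a trivial stand-in, the peer-to-peer exchange is only indicated by a comment, and the vector size is a placeholder. The point is that the norm forces an MPI_Allreduce every iteration.

```c
/* Minimal sketch of the solve-phase communication cycle described above. */
#include <mpi.h>
#include <math.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    enum { N = 1000 };              /* local rows on this rank (placeholder) */
    double p[N], q[N];
    for (int i = 0; i < N; i++) p[i] = 1.0;

    for (int iter = 0; iter < 10; iter++) {
        /* local q = A*p; a real solver also posts its ~O(40) non-blocking
         * peer-to-peer exchanges for shared mesh entities here */
        for (int i = 0; i < N; i++) q[i] = 2.0 * p[i];

        /* orthogonalization needs norms => global MPI_Allreduce each pass */
        double local = 0.0, global = 0.0;
        for (int i = 0; i < N; i++) local += q[i] * q[i];
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        double norm = sqrt(global);
        for (int i = 0; i < N; i++) p[i] = q[i] / norm;
    }

    MPI_Finalize();
    return 0;
}
```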

Parallel Implicit Flow Solver – Incompressible Abdominal Aorta Aneurysm (AAA)

The 32K-part case shows modest degradation due to a 15% node imbalance


AAA Adapted to 10⁹ Elements: Scaling on Blue Gene/P

# of cores    Region imb    Vertex imb    Time (s)    Scaling
32k           1.72%         8.11%         112.43      0.987
128k          5.49%         17.85%        31.35       0.885

New: @ 294,912 cores → 82% scaling

But getting I/O done is a challenge…

Blue Gene/L Layout
  • CCNI “fen”
  • 32K cores/ 16 racks
  • 12 TB / 8 TB usable RAM
  • ~1 PB of disk over GPFS
  • Custom OS kernel

Blue Gene/P Layout
  • ALCF/ANL “Intrepid”
  • 163K cores/ 40 racks
  • ~80TB RAM
  • ~8 PB of disk over GPFS
  • Custom OS kernel

Blue Gene/P (vs. BG/L)

Blue Gene I/O Architectures
  • Blue Gene/L @ CCNI
    • One 2-core I/O node per 32 compute nodes
    • The 32K-core system has 512 × 1 Gbit/sec network interfaces
    • I/O nodes connected to 48 GPFS file servers
    • Servers 0, 2, 4, and 6 are metadata servers
    • Server 0 does RAS and other duties
    • 800 TB of storage from 26 IBM DS4200 storage arrays
    • Split into 240 LUNs, each server has 10 LUNs (7 @ 1MB and 3 @ 128KB)
    • Peak bandwidth is ~8GB/sec read and 4 GB/sec write
  • Blue Gene/P @ ALCF
    • Similar I/O node to compute node ratio
    • 128 dual-core file servers over Myrinet w/ 4 MB GPFS block size
    • Metadata can be handled by any server
    • 16x DDN 9900 → 7.5 PB (raw) storage w/ peak bandwidth of 60 GB/sec


Non-Parallel I/O: A Bad Approach…

[Figure: processes 0 … N-1 each send their Data 0 … Data N-1 to rank 0, which writes them as Block 0 … Block N-1 of a single file]
  • Sequential I/O:
    • All processes send data to rank 0, and 0 writes it to the file

Lacks scaling and results in excessive memory use on rank 0. Must think parallel from the start, but that implies data/file partitioning…
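A minimal sketch of this anti-pattern in C with MPI (assuming, for illustration, that every rank owns `count` doubles): the gather concentrates the entire dataset in rank 0's memory before a single serial write.

```c
/* Sketch of the non-scalable "rank 0 writes everything" pattern above. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

void serial_checkpoint(const double *local, int count, MPI_Comm comm)
{
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    double *all = NULL;
    if (rank == 0)   /* rank 0 must hold the entire dataset: memory blow-up */
        all = malloc((size_t)nprocs * count * sizeof(double));

    MPI_Gather(local, count, MPI_DOUBLE, all, count, MPI_DOUBLE, 0, comm);

    if (rank == 0) {  /* one process, one stream: no I/O parallelism at all */
        FILE *fp = fopen("checkpoint.dat", "wb");
        fwrite(all, sizeof(double), (size_t)nprocs * count, fp);
        fclose(fp);
        free(all);
    }
}
```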

1 POSIX File Per Processor (1PFPP)
  • Pros:
    • parallelism, high performance at small core counts
  • Cons:
    • lots of small files to manage
    • LOTS OF METADATA – stresses the parallel filesystem
    • difficult to read back data with a different number of processes
    • @ 300K cores yields 600K files
      • @ JSC → kernel panic!!
    • PHASTA currently uses this approach… (sketched below)
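For reference, 1PFPP boils down to something like the following C sketch (the file-name scheme is an assumption, not PHASTA's actual naming): every rank opens its own POSIX file, which is what produces one file, and one set of metadata operations, per process.

```c
/* Minimal sketch of 1-POSIX-file-per-processor output. */
#include <mpi.h>
#include <stdio.h>

void write_one_file_per_proc(const double *data, size_t count)
{
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char fname[64];
    snprintf(fname, sizeof(fname), "restart.%d.dat", rank);  /* hypothetical name */

    FILE *fp = fopen(fname, "wb");   /* N ranks => N files and N metadata ops */
    fwrite(data, sizeof(double), count, fp);
    fclose(fp);
}
```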


New Partitioned Solver Parallel I/O Format

  • Assumes data accessed in a coordinated manner
  • File: master header + series of data blocks
  • Each data block has a header and data
  • Ex: 4 parts w/ 2 fields per part
  • Allows for different processor configurations:
    • (1 core @ 4 parts)
    • (2 cores @ 2 parts)
    • (4 cores @ 1 part)
  • Allows for 1 to many files to control metadata overheads (see the layout sketch below)
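A sketch of the kind of layout this describes is below; the field names and sizes are illustrative, not the actual PHASTA header definitions. Because every block header records its part, field, offset, and size, the same file can be read back by 1, 2, or 4 cores, and the number of files per run can be chosen to bound metadata traffic.

```c
/* Illustrative (not actual) header structures for a "master header +
 * self-describing data blocks" partitioned file format. */
#include <stdint.h>

typedef struct {
    char    magic[8];        /* file identifier / endianness marker        */
    int32_t num_parts;       /* e.g. 4 mesh parts ...                      */
    int32_t fields_per_part; /* ... with 2 fields each => 8 block headers  */
} master_header_t;

typedef struct {             /* one entry per (part, field) pair           */
    int32_t part_id;         /* which mesh part this block belongs to      */
    int32_t field_id;        /* e.g. 0 = solution, 1 = time derivative     */
    int64_t offset;          /* byte offset of the data block in the file  */
    int64_t nbytes;          /* size of the data block in bytes            */
} block_header_t;

/* On disk: one master_header_t, then num_parts * fields_per_part block
 * headers, then the data blocks themselves at the recorded offsets. */
```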

MPI_File alternatives: PMPIO
  • PMPIO → “poor man’s parallel I/O”, from the “Silo” mesh and field library
  • Divides the application into groups of writers
  • Within a group, only 1 writer at a time writes to the file
  • Passing of a “token” ensures synchronization within a group
  • Support for the HDF5 file format
  • Uses MPI_File_read/write_at routines (token-passing idea sketched below)
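The token-passing idea looks roughly like the following plain MPI-IO sketch; this illustrates the mechanism, not Silo's actual PMPIO API, and the offset and tag handling are assumptions.

```c
/* Sketch of PMPIO-style baton passing: within a group, ranks take turns
 * writing to the group's shared file, one at a time. */
#include <mpi.h>

void pmpio_style_write(const char *fname, const void *buf, int nbytes,
                       MPI_Offset my_offset, MPI_Comm group_comm)
{
    int rank, size, baton = 0;
    MPI_Comm_rank(group_comm, &rank);
    MPI_Comm_size(group_comm, &size);

    /* wait for the baton from the previous rank in this group */
    if (rank > 0)
        MPI_Recv(&baton, 1, MPI_INT, rank - 1, 0, group_comm,
                 MPI_STATUS_IGNORE);

    /* only the baton holder touches the file, so open it independently */
    MPI_File fh;
    MPI_File_open(MPI_COMM_SELF, fname, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);
    MPI_File_write_at(fh, my_offset, buf, nbytes, MPI_BYTE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    /* hand the baton to the next rank in the group */
    if (rank < size - 1)
        MPI_Send(&baton, 1, MPI_INT, rank + 1, 0, group_comm);
}
```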


MPI_File alternatives: syncIO

  • Flexible design allows a variable number of files and of procs/writers per file
  • Within a file, can be configured to write on “block size boundaries”, typically 1 to 4 MB
  • Implemented using collective I/O routines, e.g., MPI_File_write_at_all_begin (sketched below)
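A collective-write sketch in the same spirit; the offset computation, the alignment constant, and the assumption that every rank sharing the file writes the same number of bytes are all simplifications, not the actual syncIO code.

```c
/* Sketch of syncIO-style collective writing: the ranks sharing one file
 * write with a split collective, each at a block-aligned offset. */
#include <mpi.h>

#define BLOCK_SIZE (4 * 1024 * 1024)   /* e.g. the 4 MB GPFS block size */

void syncio_style_write(const char *fname, const void *buf, int nbytes,
                        MPI_Comm file_comm)
{
    int rank;
    MPI_Comm_rank(file_comm, &rank);

    /* give each writer a whole number of file-system blocks; assumes all
     * ranks write the same nbytes (real code would exchange sizes first) */
    MPI_Offset nblocks = ((MPI_Offset)nbytes + BLOCK_SIZE - 1) / BLOCK_SIZE;
    MPI_Offset offset  = (MPI_Offset)rank * nblocks * BLOCK_SIZE;

    MPI_File fh;
    MPI_File_open(file_comm, fname, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* split collective: every rank on this file participates */
    MPI_File_write_at_all_begin(fh, offset, buf, nbytes, MPI_BYTE);
    MPI_File_write_at_all_end(fh, buf, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
}
```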


MPI_File alternatives: rbIO

  • rb → “reduced blocking”
  • Targets “checkpointing”
  • Divides the application into workers and writers, with 1 writer MPI task per group of workers
  • Workers send I/O data to writers via MPI_Isend and are free to continue:
    • e.g., hides the latency of blocking parallel I/O
  • Writers then perform a blocking MPI_File_write_at operation using the MPI_COMM_SELF communicator (sketched below)
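Both halves of that split might look roughly like this; the group layout, message tag, and buffer sizing are assumptions. The "perceived" bandwidth comes from the worker returning to computation as soon as its MPI_Isend is posted.

```c
/* Sketch of the rbIO worker/writer split described above. */
#include <mpi.h>
#include <stdlib.h>

#define CKPT_TAG 77   /* hypothetical message tag for checkpoint traffic */

/* Worker side: post the send and return to computation immediately.
 * The caller must keep buf alive and MPI_Wait on req before reusing it. */
void rbio_worker_send(const void *buf, int nbytes, int writer_rank,
                      MPI_Comm comm, MPI_Request *req)
{
    MPI_Isend(buf, nbytes, MPI_BYTE, writer_rank, CKPT_TAG, comm, req);
}

/* Writer side: drain one message per worker in its group and write each
 * piece with blocking MPI-IO on a communicator of one (MPI_COMM_SELF). */
void rbio_writer_drain(const char *fname, int nworkers, int max_nbytes,
                       MPI_Comm comm)
{
    char *buf = malloc(max_nbytes);
    MPI_File fh;
    MPI_File_open(MPI_COMM_SELF, fname, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    MPI_Offset offset = 0;
    for (int i = 0; i < nworkers; i++) {
        MPI_Status st;
        int nbytes;
        MPI_Recv(buf, max_nbytes, MPI_BYTE, MPI_ANY_SOURCE, CKPT_TAG,
                 comm, &st);
        MPI_Get_count(&st, MPI_BYTE, &nbytes);

        MPI_File_write_at(fh, offset, buf, nbytes, MPI_BYTE,
                          MPI_STATUS_IGNORE);
        offset += nbytes;
    }

    MPI_File_close(&fh);
    free(buf);
}
```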


BG/L: 1PFPP w/ 7.7 GB data


BG/L: PMPIO w/ 7.7 GB data

  • HDF5 peak: 600 MB/sec
  • Raw MPI-IO peak: 900 MB/sec


BG/L: syncIO w/ 7.7 GB data

  • Write performance peak: 1.3 GB/sec
  • Read performance peak: 6.6 GB/sec

BG/P: syncIO w/ ~60 GB data


BG/L: rbIO actual BW w/ 7.7 GB data


BG/L: rbIO perceived BW w/ 7.7 GB data

  • Peaks annotated on the chart: ~22 TB/sec and ~11 TB/sec


BG/P: rbIO actual BW w/ ~60 GB data

  • Peak: ~17.9 GB/sec


BG/P: rbIO perceived BW w/ ~60 GB data

  • Peak: ~21 TB/sec

Related Work
  • A. Nisar, W. Liao, and A. Choudhary, “Scaling Parallel I/O Performance through I/O Delegate and Caching System,” in Proceedings of the 2008 ACM/IEEE conference on Supercomputing, 2008.
    • Performs “rbIO”-style I/O delegation inside MPI via threads, using up to 10% of compute cores as I/O workers
  • Benchmark studies (highlighting just a few…)
    • H. Yu et al. [18] – BG/L: 2 GB/sec @ 1K
    • Saini et al. [19] – 512 NEC SX-8 cores – I/O was not scalable when all processors accessed a shared file
    • Larkin et al. [17] – large performance drop at 2K core count on Cray XT3/XT4
    • Lang et al. [30] – large I/O study across many benchmarks on Intrepid (BG/P); found 60 GB/s read and 45 GB/s write, while in practice Intrepid has a peak I/O rate of around 35 GB/sec

Summary and Future Work
  • We examined several parallel I/O approaches:
    • 1 POSIX File per Proc: < 1 GB/sec on BG/L
    • PMPIO: < 1 GB/sec on BG/L
    • syncIO – all processors write as groups to different files
      • BG/L: 6.6 GB/sec read, 1.3 GB/sec write
      • BG/P: 11.6 GB/sec read, 25 GB/sec write
    • rbIO – gives up 3 to 6% of compute nodes to hide latency of blocking parallel I/O.
      • BG/L: 2.3 GB/sec actual write, 22 TB/sec perceived write
      • BG/P: ~18 GB/sec actual write, ~22 TB/sec perceived write
      • Good trade-off on Blue Gene
  • All processes writing to 1 file does not yield good performance, even when writes are aligned
  • The performance “sweet spot” for syncIO depends significantly on the I/O architecture, so the file format must be tuned accordingly
    • BG/L @ CCNI has a metadata bottleneck, so the # of files must be adjusted accordingly – e.g., 32 to 128 writers
    • BG/P @ ALCF can sustain much higher performance, but requires more files – e.g., 1024 writers
    • This suggests collective I/O is sensitive to the underlying file system’s performance
  • For rbIO, we observed that 1024 writers gave the best performance so far on both the BG/L and BG/P platforms
  • Future work – impact of different file systems on performance
    • Leverage Darshan logs @ ALCF to better understand Intrepid performance
    • More experiments on Blue Gene/P under PVFS, CrayXT5 under Lustre
