Fault Tolerance Challenges and Solutions

Presentation Transcript


  1. Fault Tolerance Challenges and Solutions. Al Geist, Computer Science Research Group, Computer Science and Mathematics Division. Research supported by the Department of Energy’s Office of Science, Office of Advanced Scientific Computing Research.

  2. Rapid growth in scale drives fault tolerance need. Example: ORNL LCF hardware roadmap, a 20X scale change in 2.5 years:
     Jul 2006: Cray XT3, 54 TF (56 cabinets), 5,294 nodes / 10,588 processors, 21 TB memory
     Nov 2006: Cray XT3/XT4, 119 TF (124 cabinets), 11,706 nodes / 23,412 processors, 46 TB memory
     Dec 2007: Cray XT4, 250–300 TF (84 quad-core cabinets), 71 TB memory
     Nov 2008–2009: Cray “Baker”, 1 PF (128 new cabinets), 175 TB memory

  3. Today’s applications and their runtime libraries may scale, but they are not prepared for the failure rates of sustained petascale systems (projection for the ORNL 1 PF Cray “Baker” system, 2009):
     • Assuming a linear model, the failure rate will be 20X what is seen today.
     • The RAS system automatically configures around faults, staying up for days.
     • But every one of these failures kills the application that was using that node!

  4. Today’s fault-tolerance paradigm (checkpoint/restart) ceases to be viable on large systems. [Figure: mean time to interrupt (hours) versus time, 2006 through a 2009 estimate. Time to checkpoint increases with problem size, while MTTI decreases as the number of parts increases; the two curves reach a crossover point.] Good news: MTTI is better than expected for LLNL BG/L and ORNL XT4 (6–7 days, not minutes).
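The crossover this slide describes can be made concrete with a standard first-order model, Young’s approximation for the optimal checkpoint interval. The sketch below is illustrative only; the checkpoint cost and MTTI values in it are assumptions, not figures from the talk.

```c
/* Sketch: why checkpoint/restart stops paying off as MTTI shrinks.
 * Uses Young's first-order approximation for the optimal checkpoint
 * interval, T_opt = sqrt(2 * C * MTTI), where C is the time to write
 * one checkpoint. All numbers below are illustrative assumptions. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double checkpoint_cost_h = 0.5;                  /* assumed: 30 min to write a checkpoint */
    double mtti_hours[] = { 144.0, 24.0, 6.0, 1.0 }; /* assumed MTTI values, in hours */

    for (int i = 0; i < 4; i++) {
        double mtti  = mtti_hours[i];
        double t_opt = sqrt(2.0 * checkpoint_cost_h * mtti);
        /* Fraction of wall-clock time lost to checkpointing plus, on
         * average, half an interval of rework after each failure. */
        double overhead = checkpoint_cost_h / t_opt + t_opt / (2.0 * mtti);
        printf("MTTI %6.1f h: checkpoint every %4.1f h, overhead ~%5.1f%%\n",
               mtti, t_opt, 100.0 * overhead);
    }
    return 0;
}
```

With a six-day MTTI the overhead is a few percent; with a one-hour MTTI the machine spends essentially all of its time checkpointing and recomputing, which is the crossover the figure shows.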

  5. Need to develop new paradigms for applications to handle faults, and a rich methodology to “run through” faults. The options span a spectrum from full checkpointing to no checkpointing:
     Checkpoint-based:
     • Restart from checkpoint file [large apps today].
     • Restart from diskless checkpoint, i.e., store the checkpoint in memory [avoids stressing the I/O system and causing more faults] (sketched below).
     • Recalculate lost data from in-memory RAID.
     Some state saved:
     • Lossy recalculation of lost data [for iterative methods].
     • Recalculate lost data from initial and remaining data.
     No checkpoint:
     • Replicate computation across the system.
     • Reassign lost work to another resource.
     • Use natural fault-tolerant algorithms.
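As one concrete illustration of the “store checkpoint in memory” idea above, here is a minimal MPI sketch of a buddy (diskless) checkpoint, where each rank keeps a neighbor’s state in RAM instead of writing to the file system. The buffer size, buddy mapping, and function names are assumptions made for illustration; real schemes typically add parity/RAID across nodes, as the slide notes.

```c
/* Minimal sketch of a diskless (in-memory) checkpoint: each rank sends a
 * copy of its state to a "buddy" rank and holds its buddy's copy in RAM,
 * avoiding the I/O system entirely. STATE_WORDS and the ring-shaped buddy
 * mapping are illustrative assumptions only. */
#include <mpi.h>

#define STATE_WORDS 1024

static double my_state[STATE_WORDS];    /* application state to protect     */
static double buddy_copy[STATE_WORDS];  /* buddy's checkpoint, kept in RAM  */

void diskless_checkpoint(MPI_Comm comm) {
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    int buddy  = (rank + 1) % size;          /* my state goes here           */
    int source = (rank - 1 + size) % size;   /* I hold this rank's state     */

    /* Exchange checkpoints; nothing touches the file system. */
    MPI_Sendrecv(my_state,   STATE_WORDS, MPI_DOUBLE, buddy,  0,
                 buddy_copy, STATE_WORDS, MPI_DOUBLE, source, 0,
                 comm, MPI_STATUS_IGNORE);
}

void restore_for(int failed_rank, MPI_Comm comm, double *out) {
    /* The rank holding failed_rank's copy ships it to the replacement rank. */
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    int holder = (failed_rank + 1) % size;
    if (rank == holder)
        MPI_Send(buddy_copy, STATE_WORDS, MPI_DOUBLE, failed_rank, 1, comm);
    else if (rank == failed_rank)
        MPI_Recv(out, STATE_WORDS, MPI_DOUBLE, holder, 1, comm, MPI_STATUS_IGNORE);
}
```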

  6. A 24/7 system can’t ignore faults. Harness P2P control research: support simultaneous updates, fast recovery from a fault, and parallel recovery from multiple node failures.

  7. 6 options for the system to handle failures:
     • Restart, from checkpoint or from the beginning.
     • Notify the application and let it handle the problem.
     • Migrate the task to other hardware before failure.
     • Reassign work to spare processor(s).
     • Replicate tasks across the machine.
     • Ignore the fault altogether.
     Need a mechanism for each application (or component) to specify to the system what to do if a fault occurs.
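The final point, a mechanism for each application to tell the system what to do on a fault, could look roughly like the sketch below. This is a hypothetical interface invented purely for illustration; the type and function names do not come from the talk or from any existing RAS system.

```c
/* Hypothetical sketch of a per-application fault-policy declaration,
 * mirroring the six options on this slide. The names ft_policy_t and
 * ft_declare_policy are invented for illustration only. */
#include <stdio.h>

typedef enum {
    FT_RESTART_FROM_CHECKPOINT,  /* or restart from the beginning        */
    FT_NOTIFY_APPLICATION,       /* let the application handle it        */
    FT_MIGRATE_BEFORE_FAILURE,   /* move the task off failing hardware   */
    FT_REASSIGN_TO_SPARE,        /* use spare processor(s)               */
    FT_REPLICATE_TASKS,          /* run redundant copies                 */
    FT_IGNORE_FAULT              /* naturally fault-tolerant codes       */
} ft_policy_t;

static ft_policy_t current_policy;

/* In a real system this would register the policy with the RAS layer;
 * here it only records the request. */
void ft_declare_policy(ft_policy_t p) {
    current_policy = p;
    printf("fault policy registered: %d\n", (int)p);
}

int main(void) {
    /* An iterative solver that can tolerate lost ranks might choose: */
    ft_declare_policy(FT_NOTIFY_APPLICATION);
    return 0;
}
```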

  8. 5 recovery modes for MPI applications. The Harness project’s FT-MPI explored five modes of recovery:
     • ABORT: just do as vendor implementations do.
     • BLANK: leave holes (but make sure collectives do the right thing afterward).
     • SHRINK: reorder processes to make a contiguous communicator (some ranks change).
     • REBUILD: respawn lost processes and add them to MPI_COMM_WORLD.
     • REBUILD_ALL: same as REBUILD except that it rebuilds all communicators and groups and resets all key values, etc.
     These modes affect the size (extent) and ordering of the communicators. It may be time to consider an MPI-3 standard that allows applications to recover from faults.
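For flavor, the following sketch shows the application-side pattern these recovery modes assume: ask MPI to return error codes instead of aborting, test them, and branch into a recovery path. It uses only standard MPI calls and is not the FT-MPI API itself; with a stock MPI implementation a node failure would normally still abort the job (the ABORT mode above), and recover_communicator() is a placeholder.

```c
/* Sketch of the application-side pattern FT-MPI's recovery modes assume:
 * errors come back as return codes instead of aborting, and the
 * application decides how to rebuild. Standard MPI calls only; this is
 * not the FT-MPI API, and recover_communicator() is a placeholder. */
#include <mpi.h>
#include <stdio.h>

void recover_communicator(MPI_Comm *comm) {
    /* Placeholder: under FT-MPI's REBUILD/SHRINK/BLANK modes the library
     * reconstructs MPI_COMM_WORLD; the application then re-derives its own
     * communicators and redistributes the lost work here. */
    (void)comm;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    MPI_Comm comm = MPI_COMM_WORLD;

    /* Ask MPI to return error codes rather than aborting the job. */
    MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);

    double buf = 42.0;
    int rc = MPI_Bcast(&buf, 1, MPI_DOUBLE, 0, comm);
    if (rc != MPI_SUCCESS) {
        fprintf(stderr, "broadcast failed; attempting recovery\n");
        recover_communicator(&comm);
    }

    MPI_Finalize();
    return 0;
}
```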

  9. 4 ways to fail anyway. Validation of an answer on such large systems is a growing problem: simulations are more complex, and solutions are being sought in regions never before explored. We can’t afford to run every job three (or more) times; yearly allocations are like $5M–$10M grants.
     • The fault may not be detected.
     • Recovery introduces perturbations.
     • The result may depend on which nodes fail.
     • The result looks reasonable, but it is actually wrong.

  10. 3 steps to fault tolerance
     • Detection that something has gone wrong
       - System: detection in hardware
       - Framework: detection by runtime environment
       - Library: detection in math or communication library
     • Notification of the application, runtime, or system components
       - Interrupt: signal sent to job or system component (sketched below)
       - Error code returned by application routine
       - Subscription notification
     • Recovery of the application from the fault
       - By the system
       - By the application
       - Neither: natural fault tolerance
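As a small illustration of the “interrupt” style of notification listed above, the sketch below installs a signal handler so a job can react, for example by flushing a checkpoint, when the system signals a fault. The choice of SIGTERM and the checkpoint stub are assumptions made for illustration.

```c
/* Sketch: notification by interrupt. The runtime/system sends the job a
 * signal when a fault (or imminent fault) is detected; the handler sets a
 * flag and the main loop reacts at a safe point. SIGTERM and the
 * checkpoint stub are illustrative assumptions. */
#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t fault_notified = 0;

static void on_fault_signal(int sig) {
    (void)sig;
    fault_notified = 1;          /* async-signal-safe: just set a flag */
}

static void write_checkpoint(int step) {
    printf("checkpoint at step %d\n", step);   /* stand-in for real I/O */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_fault_signal;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGTERM, &sa, NULL);   /* system-to-application notification */

    for (int step = 0; step < 1000000; step++) {
        /* ... one unit of computation ... */
        if (fault_notified) {        /* recovery step: save state, exit cleanly */
            write_checkpoint(step);
            break;
        }
    }
    return 0;
}
```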

  11. 2 reasons the problem is only going to get worse: the drive for large-scale simulations in biology, nanotechnology, medicine, chemistry, materials, etc.
     • They require much larger problems (space): they easily consume the 2 GB per core in ORNL LCF systems.
     • They require much longer to run (time): science teams in climate, combustion, and fusion want to run for a dedicated couple of months.
     From a fault tolerance perspective:
     • Space means that the job “state” to be recovered is huge.
     • Time means that many faults will occur during a single run.

  12. 1 holistic solution. We need coordinated fault awareness, prediction, and recovery across the entire HPC system, from the application to the hardware. The CIFTS project is building a Fault Tolerance Backplane that connects the layers of the stack (applications, middleware, operating system, hardware) to shared services for detection, notification, and recovery: monitoring, an event manager, a logger, configuration, recovery services, prediction and prevention, and autonomic actions. “Prediction and prevention are critical because the best fault is the one that never happens.” Project under way at ANL, ORNL, LBL, UTK, IU, OSU.
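To make the backplane idea concrete, here is a hypothetical publish/subscribe sketch in which one layer announces fault events and other layers subscribe to them. All names are invented for illustration; this is not the CIFTS FTB API.

```c
/* Hypothetical sketch of a fault-tolerance backplane: components publish
 * fault events, and other layers (application, middleware, OS tools)
 * subscribe to the topics they care about. All names are invented for
 * illustration; this is not the CIFTS FTB API. */
#include <stdio.h>
#include <string.h>

#define MAX_SUBS 16

typedef void (*ftb_callback)(const char *topic, const char *detail);

static struct { const char *topic; ftb_callback cb; } subs[MAX_SUBS];
static int nsubs = 0;

void ftb_subscribe(const char *topic, ftb_callback cb) {
    if (nsubs < MAX_SUBS) {
        subs[nsubs].topic = topic;
        subs[nsubs].cb = cb;
        nsubs++;
    }
}

void ftb_publish(const char *topic, const char *detail) {
    for (int i = 0; i < nsubs; i++)
        if (strcmp(subs[i].topic, topic) == 0)
            subs[i].cb(topic, detail);          /* notify every subscriber */
}

static void on_node_fault(const char *topic, const char *detail) {
    printf("middleware reacting to %s: %s\n", topic, detail);
}

int main(void) {
    /* A hardware monitor publishes; middleware subscribes and reacts. */
    ftb_subscribe("node.fault", on_node_fault);
    ftb_publish("node.fault", "ECC error rate rising on node 1234");
    return 0;
}
```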

  13. Contact: Al Geist, Computer Science Research Group, Computer Science and Mathematics Division, (865) 574-3153, gst@ornl.gov
