Analysis and Modeling of Time-Correlated Failures in Large-Scale Distributed Systems


**The Failure Trace Archive**
Analysis and Modeling of Time-Correlated Failures in Large-Scale Distributed Systems
Nezih Yigitbasi (TU Delft), Matthieu Gallet (École Normale Supérieure de Lyon), Derrick Kondo (INRIA), Alexandru Iosup (TU Delft), Dick Epema (TU Delft)
http://guardg.st.ewi.tudelft.nl/

**Failures Do Happen**
• "... Build a computing system with 10 thousand servers with MTBF of 30 years each, watch one fail per day ..." (Jeff Dean, Google Fellow, LADIS'09 Keynote)
• "... Average worker deaths per MapReduce job is 1.2 ..." (MapReduce, OSDI'04)
• "... 20-45% failures in TeraGrid ..." (Khalili et al., GRID'06)
• "... During the month of March 2005 on one dedicated cluster with 1500 Xeon CPUs, there were 32,580 Sawzall jobs launched, using an average of 220 machines each. While running those jobs, 18,636 failures occurred (application failure, network outage, system crash, etc.) that triggered rerunning some portion of the job ..." (Rob Pike et al., Google)

**Are Failures Independent?**
• Independence is a common assumption
• Is it realistic for large-scale distributed systems?
• We already know that space correlations exist
• Time correlations may impact:
• Proactive fault-tolerance solutions
• Design decisions
• Checkpointing & scheduling decisions (e.g., migrate computation at the beginning of a predicted peak)

M. Gallet, N. Yigitbasi, B. Javadi, D. Kondo, A. Iosup, D. Epema, "A Model for Space-Correlated Failures in Large-Scale Distributed Systems," Euro-Par 2010.

**Our Goals**
• GOAL 1: Investigate whether failures have time correlations
• GOAL 2: Model the time-varying behavior of failures (peaks)

**Outline**
• Background
• Our Approach
• Analysis of Time-Correlation
• Modeling the Peaks of Failures
• Conclusions

**Why Not Root-Cause Analysis?**
• Root-cause analysis is definitely useful, but it faces several challenges:
• Systems are large and complex
• Not all subsystems provide detailed failure information
• Little monitoring/debugging support
• Environment-specific or temporary failures
• Huge volume of failure data (19 systems)

**The Failure Trace Archive**
The Failure Trace Archive (FTA), http://fta.inria.fr
• Provides:
• Availability traces of diverse distributed systems of different scales
• A standard format for failure events
• Tools for parsing & analysis
• Enables:
• Comparing models/algorithms using identical data sets
• Evaluating the generality/specificity of models/algorithms across different types of systems
• Analyzing the evolution of availability across time scales
• And many more ...

**FTA Schema**
• Hierarchical trace format
• Resource-centric
• Event-based
• Associated metadata
• Codes for different components and events
• Available in raw, tabbed, and MySQL formats

**Sample Trace**
• Identifiers for the event/component/node/platform
• Type of event: unavailability/availability
• Event start/stop time (UNIX time)
• Node name

**Our Approach (1): Outline**
• Traces: nineteen failure traces from the FTA, mostly from production systems
• Analysis: use the auto-correlation of the failure rate time series
• Modeling: fit well-known probability distributions to the failure data to model failure peaks

**Our Approach (2): Traces**
• 100K+ hosts
• ~1.2 M failure events
• 15+ years of operation in total
• http://fta.inria.fr

**Our Approach (3): Analysis**
• Auto-Correlation Function (ACF): the similarity between observations as a function of the time lag between them
• A mathematical tool for finding repeating patterns, used here for assessing time correlations
• The coefficient ranges over [-1, 1], from weak to strong correlation

**Our Approach (4): Modeling**
• We fit five probability distributions to the empirical data: Exponential, Weibull, Pareto, Log-Normal, and Gamma
• Maximum likelihood estimation + goodness-of-fit tests

**Analysis (1): Auto-correlation**
• Many systems exhibit moderate/strong auto-correlation for moderate/short time lags (GRID5K, LDNS, SKYPE, ...)
(figure: ACF for the WEBSITES trace)

**Analysis (2): Auto-correlation**
• A small number of systems exhibit low auto-correlation (TERAGRID, PNNL, NOTRE-DAME)
(figure: ACF for the TERAGRID trace)

**Analysis (3): Failure Patterns**
• Failures show daily/weekly cycles
• Systems with similar usage patterns have similar failure patterns
(figures: SKYPE and MICROSOFT traces)

**Analysis (4): Workload Intensity vs. Failure Rate**
• In some systems there is a strong correlation between the workload intensity and the failure rate
(figure: GRID5000 trace)

**Failure Peaks (1): Model**
(figure: failure-rate time series with periods 1-4, where the rate exceeds the threshold μ + kσ, marked as peaks)

**Failure Peaks (2): Identification**
• Our goal: balance between capturing the extreme system behavior and characterizing an important part of the system failures
• We use a threshold μ + kσ, with k a positive number, to isolate peaks
• Large k: few periods, explaining only a small fraction of failures
• Small k: more failures, of probably very different characteristics
• We use k = 1
• Tried k = {0.5, 0.9, 1.0, 1.1, 1.25, 1.5, 2.0}
• Over all traces, the average fraction of downtime and the average number of failures are close (see the Technical Report)

**Failure Peaks (3): Modeling Results (1)**
• On average, 50%-95% of the system downtime is caused by failures that originate during peaks, yet the fraction of time in peaks is < 10% for all platforms
• The average peak durations are on the order of 1-2 hours
• The average time between peaks is on the order of 15-80 hours
• The average failure inter-arrival time (IAT) over the entire trace is about 9x the IAT during peaks

**Failure Peaks (4): Modeling Results (2)**
• The Exponential distribution is not a good fit for the IAT during peaks, the time between peaks, or the failure duration during peaks: traditional models are not enough
• The data do not follow a heavy-tailed distribution: goodness-of-fit test results (p-values) for the Pareto distribution are very low
• The Weibull and Log-Normal distributions provide the best fit (see the paper for the parameters)

**Conclusions (1)**
Large-Scale Study
• Nineteen traces, most of which are from production systems
• 100K+ hosts, ~1.2 M failure events, 15+ years of operation
• Four new traces available in the FTA (3 CONDOR + 1 TERAGRID)
GOAL 1: Analysis
• Failures exhibit strong periodic behavior & time correlation
• Systems with similar usage patterns have similar failure patterns
• Strong correlation between workload intensity and failure rate

**Conclusions (2)**
GOAL 2: Modeling
• Modeled the peak duration, the time between peaks, the failure IAT during peaks, and the failure duration during peaks
• On average, 50%-95% of the system downtime is caused by failures that originate during peaks (fraction of time in peaks < 10%)
• The Weibull & Log-Normal distributions provide a good fit

**The Failure Trace Archive**
Thank you! Questions? Comments?
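As an aside, the μ + kσ peak-identification rule from the modeling slides can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code, and the hourly failure counts are synthetic:

```python
import statistics

def peak_periods(failure_counts, k=1.0):
    """Indices of time windows whose failure count exceeds mu + k*sigma."""
    mu = statistics.mean(failure_counts)
    sigma = statistics.stdev(failure_counts)
    threshold = mu + k * sigma
    return [i for i, count in enumerate(failure_counts) if count > threshold]

# Synthetic hourly failure counts with two bursts of failures.
counts = [2, 1, 3, 2, 25, 30, 2, 1, 2, 40, 3, 2]
print(peak_periods(counts, k=1.0))  # the bursty hours: [4, 5, 9]
```

Raising k isolates only the most extreme bursts; the slides settle on k = 1 after trying several values.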
M.N.Yigitbasi@tudelft.nl
http://www.st.ewi.tudelft.nl/~nezih/
More Information:
• Guard-g Project: http://guardg.st.ewi.tudelft.nl/
• The Failure Trace Archive: http://fta.inria.fr
• PDS publication database: http://www.pds.twi.tudelft.nl

**Autocorrelation Function**
(figure: ACF plot over lags k = 0 to 100, coefficient between -1 and +1; significant positive correlation at short lags)

**Autocorrelation Function**
(figure: ACF plot over lags k = 0 to 100; no statistically significant correlation beyond a certain lag)

**Long-range Dependence**
• For most processes (e.g., Poisson, or compound Poisson), the autocorrelation function drops to zero very quickly, usually immediately or exponentially fast
• For self-similar processes, the autocorrelation function drops very slowly, i.e., hyperbolically toward zero, and may never reach zero

**Autocorrelation Function**
(figure: ACF of a typical long-range dependent process vs. a typical short-range dependent process)
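The sample autocorrelation function shown in the backup slides can be computed directly. A minimal pure-Python sketch follows; the periodic toy series is illustrative, standing in for a failure-rate time series with a daily cycle:

```python
def acf(series, lag):
    """Sample autocorrelation coefficient at the given lag, in [-1, 1]."""
    n = len(series)
    mu = sum(series) / n
    var = sum((x - mu) ** 2 for x in series)
    cov = sum((series[t] - mu) * (series[t + lag] - mu)
              for t in range(n - lag))
    return cov / var

# A failure-rate series with a strong cycle of period 4 (think: daily peaks).
series = [1, 5, 9, 5] * 10
print(round(acf(series, 4), 2))  # 0.9: strong positive correlation at the period
print(round(acf(series, 2), 2))  # -0.95: anti-correlated half a cycle away
```

A short-range dependent process would show such coefficients decaying quickly with the lag, while for a long-range dependent process they decay only hyperbolically, as the last backup slide contrasts.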