
Measuring End-User Availability on the Web: Practical Experience


Presentation Transcript


  1. Measuring End-User Availability on the Web: Practical Experience Matthew Merzbacher (visiting research scientist) Dan Patterson (undergraduate) Recovery-Oriented Computing (ROC) University of California, Berkeley http://roc.cs.berkeley.edu

  2. E-Commerce Goal • Non-stop Availability • 24 hours/day • 365 days/year • How realistic is this goal? • How do we measure availability? • To evaluate competing systems • To see how close we are to optimum

  3. The State of the World • Uptime measured in “nines” • Four nines == 99.99% uptime (just under an hour downtime per year) • Does not include scheduled downtime • Manufacturers advertise six nines • About 30 seconds of unscheduled downtime/year • May be true in perfect world • Not true in practice on real Internet
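
The “nines” figures above are straightforward arithmetic over the roughly 31.5 million seconds in a year. A minimal Python sketch of that arithmetic, added here for reference and not part of the original deck:

```python
# Downtime budget implied by "N nines" of uptime.
# Illustrative arithmetic only; not taken from the study's data.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def yearly_downtime_seconds(nines: int) -> float:
    """Allowed downtime per year, in seconds, for the given number of nines."""
    downtime_fraction = 10 ** (-nines)
    return SECONDS_PER_YEAR * downtime_fraction

for n in (3, 4, 5, 6):
    d = yearly_downtime_seconds(n)
    print(f"{n} nines: {d / 60:7.1f} min/year ({d:8.1f} s)")

# 4 nines -> ~52.6 minutes/year; 6 nines -> ~31.5 seconds/year
```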

  4. Measuring Availability • Measuring “nines” of uptime is not sufficient • Reflects unrealistic operating conditions • Must capture end-user’s experience • Server + Network + Client • Client Machine and Client Software
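
One way to make the Server + Network + Client point concrete, under the assumption of independent failures (an assumption of this sketch, not a model stated in the deck): end-to-end availability is roughly the product of the component availabilities, so the least available component dominates what the user experiences.

```python
# Hedged sketch: end-user availability under assumed independent failures.
# The component numbers are made up for illustration only.
a_server  = 0.9999   # a "four nines" server
a_network = 0.999    # network path between client and server
a_client  = 0.99     # client machine + client software

a_end_to_end = a_server * a_network * a_client
print(f"end-to-end availability ~= {a_end_to_end:.4f}")  # ~0.9889
```

Since the product can never exceed any single factor, even a six-nines server cannot lift end-user availability above the client's own availability.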

  5. Existing Systems • Topaz, Porivo, SiteAngel • Measure response time, not availability • Monitor service-level agreements • NetCraft • Measures availability, not performance or end-user experience • We measured end-user experience and located common problems

  6. Experiment • “Hourly” small web transactions • From two relatively proximate sites • (Mills CS, Berkeley CS) • To a variety of sites, including • Internet Retailer (US and international) • Search Engine • Directory Service (US and international) • Ran for 6+ months
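
A minimal sketch of the kind of hourly probe the experiment describes; the target URLs, timeout, and CSV log format here are illustrative assumptions, not the authors' actual harness.

```python
# Hedged sketch of an hourly availability probe; URLs, timeout, and the
# CSV log format are illustrative assumptions, not the original setup.
import csv
import time
from datetime import datetime, timezone
from urllib.error import URLError
from urllib.request import urlopen

TARGETS = [
    "https://www.example-retailer.com/",  # placeholder for the retailer
    "https://www.example-search.com/",    # placeholder for the search engine
]

def probe(url: str, timeout: float = 30.0) -> tuple[bool, str]:
    """Run one small web transaction and return (success, detail)."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200, f"http {resp.status}"
    except URLError as exc:
        return False, f"error {exc.reason}"
    except OSError as exc:
        return False, f"error {exc}"

with open("availability_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        for url in TARGETS:
            ok, detail = probe(url)
            writer.writerow([datetime.now(timezone.utc).isoformat(), url, ok, detail])
        f.flush()
        time.sleep(3600)  # "hourly" small web transactions
```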

  7. Availability: Did the Transaction Succeed?

  8. Types of Errors • Network, medium (11%) • Network, severe (4%) • Server (2%) • Corporate (1%) • Local (82%)

  9. Client Hardware Problems Dominate User Experience • System-wide crashes • Administration errors • Power outages • And many, many more… • Many, if not most, caused or aggravated by human error

  10. What About Speed?

  11. Does Retry Help? • Chart legend: green > 80%, red < 50%
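
A hedged sketch of the retry policy such a chart would evaluate; the attempt count and delay are illustrative assumptions, and probe() is the hypothetical helper from the experiment sketch above.

```python
# Hedged retry sketch; the number of attempts and the delay between them
# are illustrative assumptions, not values from the study.
import time

def fetch_with_retry(url: str, attempts: int = 3, delay_s: float = 10.0) -> tuple[bool, int]:
    """Try a transaction up to `attempts` times; report whether a retry rescued it."""
    for i in range(attempts):
        ok, _detail = probe(url)   # probe() as sketched in the experiment snippet
        if ok:
            return True, i         # i > 0 means a retry succeeded
        if i < attempts - 1:
            time.sleep(delay_s)
    return False, attempts
```

Consistent with the conclusion slide, a loop like this can rescue transient server or network failures but does nothing when the failure is on the local end.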

  12. What Guides Retry? • Uniqueness of data • Importance of data to user • Loyalty of user to site • Transience of information • And more…

  13. Conclusion • Experiment modeled user experience • Vast majority (81%) of errors were on the local end • Almost all errors were in the “last mile” of service • Retry doesn’t help for local errors • User may be aware of the problem and therefore less frustrated by it
