
Measuring and Evaluating Large-Scale CDNs Huang et al.



  1. Measuring and Evaluating Large-Scale CDNs, Huang et al.
  An offensive analysis
  Daniel Burgener, John Otto
  F'09 - EECS 440 Adv Net - 7 Oct 2009

  2. Argument
  • Comparing lemons and limes
  • Fundamental limitations of evaluation with DNS
  • Ignored important metrics
  • Flawed definition of metrics: availability, delay
  • Does not provide a general solution

  3. Comparing lemons and limes
  • Large difference in scale between Akamai and Limelight
  • The paper doesn't effectively evaluate the differences in their architectures and philosophies
  • The paper was withdrawn after criticism from Akamai and Limelight
  • The paper makes false assumptions about akamaiedge

  4. Fundamental limitation: measuring with DNS
  • Doesn't see actual user performance (see the sketch below)
  • Based on a locality assumption: the client is near its LDNS
  • Does not capture traffic volume per DNS server
  • Difficult to determine average performance (acknowledged in the last paragraph of the paper's conclusion)
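To make the limitation concrete, here is a minimal Python sketch of DNS-based probing in the paper's style: ask remote resolvers (stand-ins for users' LDNSes) which edge servers the CDN maps them to. The hostname and resolver IPs are illustrative assumptions, not values from the paper, and it assumes dnspython is installed.

```python
# Minimal sketch of DNS-based CDN probing (assumes: pip install dnspython).
# The hostname and resolver IPs below are illustrative placeholders.
import dns.resolver

CDN_HOSTNAME = "a1921.g.akamai.net"       # hypothetical CDN-mapped hostname
OPEN_RESOLVERS = ["8.8.8.8", "1.1.1.1"]   # stand-ins for remote users' LDNSes

for ldns in OPEN_RESOLVERS:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ldns]
    answer = resolver.resolve(CDN_HOSTNAME, "A")
    edge_ips = [rr.address for rr in answer]
    # The CDN picked these edges for the *LDNS*, not for any real client.
    # Any latency we measure from here is our RTT, never a user's RTT.
    print(ldns, "->", edge_ips)
```

Everything this technique observes is relative to the LDNS; no real client's performance ever enters the data.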

  5. Ignored important metrics
  • Throughput (a rough measurement sketch follows below)
  • Load-balancing algorithms / server load
  • CDN interface with ISPs: peering?
  • The paper couldn't include these because it used DNS for its evaluation
  • However, these aspects may be just as important as latency and uptime
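For example, throughput is easy to state but impossible to obtain from DNS alone; a rough sketch of what a direct measurement would look like (the edge URL below is a placeholder, not a real endpoint):

```python
# Rough throughput sketch, a metric DNS-based evaluation cannot capture.
# EDGE_URL is a placeholder (TEST-NET address), not a real CDN object.
import time
import urllib.request

EDGE_URL = "http://203.0.113.10/test-object"

start = time.monotonic()
with urllib.request.urlopen(EDGE_URL, timeout=10) as resp:
    data = resp.read()
elapsed = time.monotonic() - start
# Bytes over wall-clock time: the number a user actually cares about.
print(f"throughput: {len(data) / elapsed / 1e6:.2f} MB/s")
```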

  6. Defining Metrics: Availability
  • Sec. 5 defines availability in terms of a particular server being online
  • Instead, it should be defined from the user's perspective: does the CDN return ANY live node?
  • Why this is important: we expect the load-balancing algorithm to be tightly tied to server availability
  • A DNS request should return live servers, with a short DNS mapping TTL (so we shouldn't have stale references to servers)
  • Availability should instead be defined as: at least one of the returned CDN nodes is available (see the sketch below)
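A minimal sketch of the availability definition we argue for, where probe() is a hypothetical liveness check (a plain TCP connect, standing in for whatever check a real study would use):

```python
# Sketch of user-perspective availability: the CDN counts as available
# if ANY node it returns answers, not if one particular server is up.
import socket

def probe(ip: str, port: int = 80, timeout: float = 2.0) -> bool:
    """Hypothetical liveness check: can we complete a TCP handshake?"""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def cdn_available(returned_ips: list[str]) -> bool:
    # At least one returned node is live; per-server uptime (the paper's
    # Sec. 5 definition) never enters this metric.
    return any(probe(ip) for ip in returned_ips)
```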

  7. Defining Metrics: Delay
  • DNS-perceived latency != user-perceived latency
  • How can you evaluate delay if it's not measured from the user's perspective?
  • Since Akamai has servers in POPs, those servers may be much closer to the LDNS than to the client
  • Result: the paper underestimates latency
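A sketch of what measuring delay from the user's vantage point could look like: resolve through the client's own LDNS, then time the TCP handshake to the returned edge. The hostname is a placeholder; the point is that this number is the one users experience, while a probe launched near the LDNS would typically report something smaller.

```python
# Sketch of user-perceived delay measurement; hostname is a placeholder.
import socket
import time

CDN_HOSTNAME = "www.example-cdn.net"

edge_ip = socket.gethostbyname(CDN_HOSTNAME)  # resolves via the client's own LDNS
start = time.monotonic()
with socket.create_connection((edge_ip, 80), timeout=5):
    rtt = time.monotonic() - start
# This is the latency the user actually sees; a probe sitting near the
# LDNS (as in the paper) would typically report a smaller number.
print(f"user-perceived handshake time to {edge_ip}: {rtt * 1000:.1f} ms")
```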

  8. Not a general solution
  • Multiple stages of the analysis were special cases for Akamai and Limelight
  • It is difficult to get a CNAME list for another CDN that is not as organized as Akamai
  • The authors got lucky with how easy it was to collect IPs for Akamai and Limelight
  • What about CoralCDN? (see the sketch below)
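For contrast, here is a sketch of the IP-harvesting step the paper's method depends on. It assumes you already hold a CNAME list for the target CDN; for a CDN without Akamai's regular naming scheme (CoralCDN, say), assembling that list is exactly the part that doesn't generalize. All names below are placeholders.

```python
# Sketch of harvesting edge IPs from a pre-built CNAME list.
# The CNAMEs are placeholders; building this list is the hard,
# CDN-specific part that the paper's approach does not generalize.
import socket

cnames = ["a1.example-cdn.net", "a2.example-cdn.net"]

edge_ips = set()
for name in cnames:
    try:
        # Each resolution only reveals the edges chosen for *this*
        # vantage point; a real study repeats this from many LDNSes.
        _, _, ips = socket.gethostbyname_ex(name)
        edge_ips.update(ips)
    except socket.gaierror:
        pass  # name no longer mapped; stale CNAME lists are common
print(f"discovered {len(edge_ips)} edge IPs")
```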

  9. Conclusion
  • Comparing Akamai and Limelight isn't fair: they are too different
  • It might have been justified had the paper actually discussed their philosophical and architectural differences
  • The evaluation should have been conducted from the users' perspective, not via DNS
  • As a result, the paper couldn't include important metrics
  • It used poorly defined metrics
  • It doesn't provide a general solution
  • The approach and methodology are flawed
  • It isn't interesting: it doesn't compare CDN approaches
