
Aging Through Cascaded Caches: Performance Issues in the Distribution of Web Content.


Presentation Transcript


  1. Aging Through Cascaded Caches: Performance Issues in the Distribution of Web Content. Edith Cohen, AT&T Labs-Research. Haim Kaplan, Tel-Aviv University. Stanford Networking Seminar.

  2. HTTP Freshness Control
  • Cached copies have:
  • Freshness lifetime
  • Age (elapsed time since fetched from the origin)
  • TTL (Time to Live) = freshness lifetime - age
  • Expired copies must be validated before they can be used (such a request constitutes a "cache miss").
  [Diagram: HTTP message = cache-directives header + body (content).]
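
The freshness arithmetic on this slide can be sketched in a few lines (the function names are ours, not part of HTTP or the talk):

```python
# Minimal sketch of the slide's freshness arithmetic:
# TTL = freshness lifetime - age; a copy with TTL <= 0 is expired.

def ttl(freshness_lifetime: float, age: float) -> float:
    """Remaining Time to Live of a cached copy."""
    return freshness_lifetime - age

def is_expired(freshness_lifetime: float, age: float) -> bool:
    """Expired copies trigger a validation request (a 'cache miss')."""
    return ttl(freshness_lifetime, age) <= 0

# A copy with a 10-hour lifetime, fetched 7 hours ago, is fresh for 3 more hours.
assert ttl(10, 7) == 3 and not is_expired(10, 7)
assert is_expired(10, 10)   # age has reached the lifetime: must validate
```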

  3. Aging of Copies. Freshness lifetime = 10 hours. 8:00am, at the origin server: Age = 0, TTL = 10.

  4. Aging of Copies. Freshness lifetime = 10 hours. 9:00am: Age = 1, TTL = 9. 12:00pm: Age = 4, TTL = 6. 3:00pm: Age = 7, TTL = 3.

  5. Aging of Copies. Freshness lifetime = 10 hours. 6:00pm: Age = 10, TTL = 0 (the copy has expired).

  6. Aging thru Cascaded Caches. 8:00am: the reverse-proxy cache (in front of the origin server, serving the proxy caches) holds a copy with Age = 0, TTL = 10.

  7. Aging thru Cascaded Caches. 5:00pm: the copy at the reverse-proxy cache has Age = 9, TTL = 1.

  8. Aging thru Cascaded Caches. 6:00pm: Age = 10, TTL = 0; the copy expires at the reverse-proxy cache and at the proxy caches behind it (!).

  9. Aging thru Cascaded Caches. 6:00pm: the reverse-proxy cache obtains a fresh copy from the origin server; Age = 0, TTL = 10.
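
The effect in slides 6-9 can be checked with a small calculation (ours, not the authors'): every client fetching through the reverse proxy inherits the proxy copy's age, so all downstream copies expire at the same instant, no matter when each client fetched.

```python
# Illustrative: copies fetched through a shared reverse proxy all
# expire together, at (origin fetch time + freshness lifetime).

T = 10.0               # freshness lifetime (hours), as in the slides
ORIGIN_FETCH = 8.0     # reverse proxy fetched from the origin at 8:00am

def expiry_via_proxy(t: float) -> float:
    """Expiry time of a copy fetched through the reverse proxy at time t."""
    age = t - ORIGIN_FETCH            # age inherited from the proxy's copy
    return t + (T - age)              # = ORIGIN_FETCH + T for every client

# Clients fetching at 9am, noon, and 3pm all expire together at 6pm.
assert {expiry_via_proxy(t) for t in (9.0, 12.0, 15.0)} == {18.0}
```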

  10. TTL of a Cached Copy. [Figure: TTL over time t at a client cache; requests marked M are misses. A copy fetched from the origin starts with TTL equal to the full freshness lifetime; a copy fetched from a cache starts with a lower TTL.]

  11. Age-Induced Performance Issues for Cascaded Caches
  • Caches are often cascaded (the path between web server and end user includes 2 or more caches).
  • Copies obtained through a cache are less effective than copies obtained from an origin server: reverse proxies increase validation traffic!
  • More misses at downstream caches mean:
  • Increased traffic between cascaded caches.
  • Increased user-perceived latency.

  12. Research Questions
  • How does the miss rate depend on the configuration of the upstream cache(s) and on request patterns?
  • Can upstream caches improve performance by proactively reducing content age? How?
  • Can downstream caches improve performance by better selection or use of a source?
  Our analysis:
  • Request sequences: arbitrary, Poisson, Pareto, fixed-frequency, traces.
  • Models for the cache/source/object relation: Authoritative, Independent, Exclusive.

  13. Basic Relationship Models: cache/source/object
  [Diagram: clients of www.cnn.com reached through caches A-D, with upstream caches 1-3.]
  • Authoritative: an origin server; supplies copies with age 0.
  • Exclusive: all misses are directed to the same upstream cache.
  • Independent: each miss is directed to a different, independent upstream cache.

  14. Basic Models…
  The object has a fixed freshness lifetime T. A miss at time t results in a copy with age:
  • Authoritative: age(t) = 0
  • Exclusive: age(t) = (t + a) mod T, i.e. TTL(t) = T - ((t + a) mod T), for some phase a of the source's refreshes
  • Independent: age(t) uniformly distributed on [0, T]
  Theorem: On all sequences, the number of misses obeys Authoritative ≤ Exclusive ≤ Independent.
  Theorem: Exclusive < 2 · Authoritative; Independent < e · Authoritative.
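
The three models can be compared with a small simulation. This is our own illustration, not the paper's code: PHASE is an assumed refresh phase for the exclusive source, and the request stream is Poisson with rate 1, one of the request models the talk analyzes.

```python
import random

# Replay one request sequence at a client cache under the three source
# models of slide 14; a miss at time t fetches a copy whose age depends
# on the model.

T = 10.0       # fixed freshness lifetime
PHASE = 3.0    # assumed phase of the exclusive source's refreshes

def count_misses(times, age_of_copy):
    """age_of_copy(t) gives the age of the copy obtained on a miss at t."""
    misses, expires = 0, float("-inf")
    for t in times:
        if t >= expires:                      # copy expired: cache miss
            misses += 1
            expires = t + (T - age_of_copy(t))
    return misses

rng = random.Random(42)
times, t = [], 0.0
for _ in range(20000):                        # Poisson(1) request stream
    t += rng.expovariate(1.0)
    times.append(t)

auth = count_misses(times, lambda t: 0.0)                 # origin server
excl = count_misses(times, lambda t: (t + PHASE) % T)     # one fixed upstream cache
indep = count_misses(times, lambda t: rng.uniform(0, T))  # independent cache per miss

# Consistent with the theorems: Authoritative <= Exclusive <= Independent,
# Exclusive < 2 * Authoritative, Independent < e * Authoritative.
assert auth <= excl <= indep and excl < 2 * auth and indep < 2.72 * auth
```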

  15. TTL of a "Supplied" Copy. [Figure: TTL of the copy supplied by the source over time t, for Authoritative, Exclusive, and Independent sources; requests received at the source are marked along the t axis; the vertical scale runs up to the freshness lifetime.]

  16. How Much More Traffic? [Figure: miss rate for different configurations.]

  17. Rejuvenation at Source Caches
  Rejuvenation: refresh your copy pre-term once its TTL drops below a certain fraction v of the lifetime duration.
  [Figure: TTL over time t at a source with v = 0.5 (24h lifetime, refresh every 12h) and at a client with no rejuvenation; requests at the client are marked along the t axis.]
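
A minimal sketch of this policy (ours, with the slide's 24-hour example): refreshing as soon as the TTL drops below v·T means a refresh every (1 - v)·T time units, so the source's TTL never falls below v·T.

```python
# Rejuvenating source: first fetch at time 0, pre-term refresh every
# (1 - v) * T, so the supplied TTL stays above v * T.

T = 24.0  # freshness lifetime (hours), matching the slide

def source_ttl(t: float, v: float) -> float:
    """TTL of the rejuvenating source's copy at time t."""
    period = (1.0 - v) * T          # time between pre-term refreshes
    return T - (t % period)

# With v = 0.5 the source refreshes every 12 hours, so the TTL it can
# supply never drops below 12h; without rejuvenation it would reach 0.
assert all(source_ttl(t, 0.5) >= 12.0 for t in range(0, 240))
```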

  18. Rejuvenation's Basic Trade-off
  • Increases traffic between the upstream cache and the origin (a fixed cost).
  • Decreases traffic from downstream client caches (a larger gain with more clients).
  [Diagram: origin, upstream cache, downstream client caches.]
  Is the increase/decrease monotone in v?

  19. Interesting Dependence on v…
  • Independent(v) and Exclusive(v) behave differently:
  • Independent(v) is monotone: if v1 > v2, then Independent(v1) > Independent(v2).
  • Exclusive(v) is not monotone (the miss rate can increase!).
  • Integral 1/v (synchronized rejuvenation): Exclusive(v) < Independent(v), and Exclusive(v) is monotone (for Pareto and Poisson, but not fixed-frequency).

  20. [Figure-only slide.]

  21. [Figure-only slide.]

  22. How Can Non-integral 1/v Increase Client Misses?
  The copy at the client is not synchronized with the source: when it expires, the rejuvenating source still holds an aged copy.
  [Figure: TTL over time t at the upstream cache (with pre-term refreshes) and at the downstream client cache; requests at the client cache are marked along the t axis.]

  23. Why Does Integral 1/v Work Well?
  Cached copies remain synchronized: client copies expire exactly at the source's refresh times.
  [Figure: TTL over time t at the upstream cache (v = 0.5, pre-term refreshes) and at the downstream client cache; requests at the upstream cache are marked along the t axis.]
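
The synchronization argument of slides 22-23 can be checked in a simplified model of ours: if the freshness lifetime T is an integer multiple of the source's pre-term refresh period p, a client copy expires exactly at a source refresh instant, so the client's next miss finds an age-0 copy; when p does not divide T, the expiry falls mid-period and the client receives an aged copy.

```python
# Simplified model: the source refreshes at times 0, p, 2p, ...; a
# client fetching at time t gets a copy of age (t mod p), which
# therefore expires at t + T - (t mod p).

T = 10.0  # freshness lifetime

def client_expiry(t: float, p: float) -> float:
    """Expiry time of a client copy fetched at time t."""
    return t + (T - (t % p))

def source_age_at(t: float, p: float) -> float:
    """Age of the source's copy at time t (refreshes every p)."""
    return t % p

# Synchronized: p = 5 divides T = 10, so a client expiry lands exactly
# on a source refresh instant (source age 0 at that moment).
assert source_age_at(client_expiry(2.5, 5.0), 5.0) == 0.0

# Unsynchronized: p = 4 does not divide T = 10; a client that fetched
# at t = 3 expires at t = 10, when the source's copy is already 2h old.
assert source_age_at(client_expiry(3.0, 4.0), 4.0) == 2.0
```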

  24. Some Conclusions
  • Configuration: the origin ("Authoritative") is best; otherwise, use a consistent upstream cache per object ("Exclusive").
  • "No-cache" request headers: the resulting sporadic refreshes may increase misses at other client caches (but it is possible to compensate…).
  • Rejuvenation: potentially very effective, but a good parameter setting (synchronized refreshes) is crucial.
  • Behavior patterns: similar for Poisson, Pareto, and traces (temporal locality); different for fixed-frequency.
  • For more, go to http://www.research.att.com/~edith
  Full versions: Cohen and Kaplan, SIGCOMM 2001; Cohen, Halperin, and Kaplan, ICALP 2001.
