Presentation Transcript



Aging Through Cascaded Caches: Performance Issues in the Distribution of Web Content

Edith Cohen

AT&T Labs-Research

Haim Kaplan

Tel-Aviv University

Stanford Networking Seminar



HTTP Freshness Control

  • Cached copies have:

    • Freshness lifetime

    • Age (elapsed time since fetched from origin)

  • TTL (Time to Live) = freshness lifetime – age

  • Expired copies must be validated before they can be used (the request constitutes a "cache miss").

[Figure: an HTTP response message: cache directives are carried in the header; the body carries the content.]
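
To make the TTL arithmetic above concrete, here is a minimal Python sketch (not from the talk; the function names are illustrative): a copy is fresh while its age is below the freshness lifetime, and the TTL is whatever lifetime remains.

def ttl(freshness_lifetime, fetched_at, now):
    """Remaining time-to-live of a cached copy (all values in the same time unit)."""
    age = now - fetched_at               # elapsed time since the fetch from the origin
    return freshness_lifetime - age      # negative once the copy has expired

def is_fresh(freshness_lifetime, fetched_at, now):
    """An expired copy (TTL <= 0) must be revalidated: the request is a cache miss."""
    return ttl(freshness_lifetime, fetched_at, now) > 0

# Example (hours): fetched at 8:00 with a 10-hour lifetime, checked at 15:00 -> TTL = 3.
print(ttl(10, 8, 15))       # 3
print(is_fresh(10, 8, 19))  # False: age 11 exceeds the 10-hour lifetime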


Aging of Copies

[Figure: at 8:00am the origin server supplies a copy with freshness lifetime = 10 hours, so age = 0 and TTL = 10.]


Aging of Copies

[Figure: the copy ages in the cache (freshness lifetime = 10 hours): at 9:00am age = 1, TTL = 9; at 12:00pm age = 4, TTL = 6; at 3:00pm age = 7, TTL = 3.]


Aging of Copies

[Figure: at 6:00pm the copy has age = 10 and TTL = 0 (freshness lifetime = 10 hours); it has expired and must be validated with the origin server.]


Aging thru Cascaded Caches

[Figure: at 8:00am the origin server supplies the reverse-proxy cache with a fresh copy (age = 0, TTL = 10); proxy caches sit downstream of the reverse proxy.]


Aging thru Cascaded Caches

[Figure: at 5:00pm a proxy cache fetches the copy from the reverse-proxy cache and receives it with age = 9, TTL = 1.]


Aging thru Cascaded Caches

[Figure: at 6:00pm the copy reaches age = 10, TTL = 0; it expires at the reverse-proxy cache and at the downstream proxy caches at the same moment.]


Aging thru Cascaded Caches

[Figure: at 6:00pm the reverse-proxy cache obtains a fresh copy from the origin server (age = 0, TTL = 10) and can again serve the proxy caches.]
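
The cascade behaviour in these figures can be written out in a short Python sketch (illustrative names, not code from the talk): a cache that fills its miss from another cache inherits that cache's age, so every copy derived from the 8:00am fetch expires at 6:00pm regardless of where it is held.

def copy_from_origin(now, lifetime):
    """The origin is authoritative: it always supplies a copy with age 0."""
    return {"fetched_at": now, "age_at_fetch": 0, "lifetime": lifetime}

def copy_from_cache(now, upstream):
    """A cache supplies its current copy; the receiver inherits its age."""
    upstream_age = upstream["age_at_fetch"] + (now - upstream["fetched_at"])
    return {"fetched_at": now, "age_at_fetch": upstream_age,
            "lifetime": upstream["lifetime"]}

def ttl(copy, now):
    age = copy["age_at_fetch"] + (now - copy["fetched_at"])
    return copy["lifetime"] - age

reverse_proxy = copy_from_origin(8, 10)      # 8:00: fetched from the origin, 10-hour lifetime
proxy = copy_from_cache(17, reverse_proxy)   # 17:00: a proxy cache misses and asks the reverse proxy
print(ttl(proxy, 17))   # 1 -- only one hour of freshness left on arrival
print(ttl(proxy, 18))   # 0 -- both copies expire together at 18:00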


TTL of a Cached Copy

[Figure: TTL of a cached copy over time at a client cache, with each miss (M) marked on the request timeline: a copy fetched from the origin starts with TTL equal to the full freshness lifetime, while a copy fetched from a cache arrives with part of its lifetime already spent.]


Age-Induced Performance Issues for Cascaded Caches

  • Caches are often cascaded (the path between the web server and the end user includes two or more caches).

  • Copies obtained through a cache are less effective than copies obtained from the origin server.

    Reverse proxies increase validation traffic!

  • More misses at downstream caches mean:

    • Increased traffic between cascaded caches.

    • Increased user-perceived latency.


Research Questions

  • How does the miss rate depend on the configuration of the upstream cache(s) and on request patterns?

  • Can upstream caches improve performance by proactively reducing content age? How?

  • Can downstream caches improve performance by better selection or use of a source?

Our analysis:

  • Request sequences: arbitrary, Poisson, Pareto, fixed-frequency, traces (sketched below).

  • Models for Cache/Source/Object relation: Authoritative, Independent, Exclusive.
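
As a rough illustration of the request-sequence families listed above, here is a small Python sketch (the generator names and parameters are mine, not the authors'):

import random

def poisson_requests(rate, n, rng=random):
    """Poisson process: independent exponential inter-arrival times."""
    t, times = 0.0, []
    for _ in range(n):
        t += rng.expovariate(rate)
        times.append(t)
    return times

def pareto_requests(alpha, n, rng=random):
    """Heavy-tailed (Pareto) inter-arrival times."""
    t, times = 0.0, []
    for _ in range(n):
        t += rng.paretovariate(alpha)
        times.append(t)
    return times

def fixed_frequency_requests(period, n):
    """One request every `period` time units."""
    return [period * (k + 1) for k in range(n)]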


Basic Relationship Models: Cache/Source/Object

[Figure: client caches (Cache-1, Cache-2, Cache-3) resolving misses for www.cnn.com either at the origin server or at upstream caches (Cache-A, Cache-B, Cache-C, Cache-D).]

  • Authoritative: the source is the origin server, which supplies copies with age 0.

  • Exclusive: all misses are directed to the same upstream cache.

  • Independent: each miss is directed to a different independent upstream cache.


Basic Models…

The object has a fixed freshness lifetime T. A miss at time t results in a copy with age:

  • Authoritative: age(t) = 0

  • Exclusive: age(t) = T - ((t + a) mod T)

  • Independent: age(t) ∈ U[0, T] (uniform over the lifetime)

Theorem: On all request sequences, the number of misses obeys Authoritative < Exclusive < Independent.

Theorem: Exclusive < 2 × Authoritative and Independent < e × Authoritative (e ≈ 2.718).
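
A rough simulation sketch of the three models (mine, not from the paper): requests at a client cache arrive as a Poisson process, each miss is served with an age drawn according to the model (the exclusive upstream is modeled as a single cache whose own copy is refreshed every T), and misses are counted.

import random

def count_misses(model, T=10.0, rate=1.0, horizon=10_000.0, seed=1):
    rng = random.Random(seed)
    phase = rng.uniform(0, T)   # when the exclusive upstream last refreshed its copy
    t, expires, misses = 0.0, -1.0, 0
    while True:
        t += rng.expovariate(rate)          # Poisson request arrivals
        if t >= horizon:
            return misses
        if t < expires:
            continue                        # local copy still fresh: a hit
        misses += 1
        if model == "authoritative":
            age = 0.0                       # the origin supplies a brand-new copy
        elif model == "exclusive":
            age = (t - phase) % T           # one upstream cache, refreshed every T
        else:                               # "independent"
            age = rng.uniform(0, T)         # a random upstream cache on every miss
        expires = t + (T - age)             # TTL of the received copy

for model in ("authoritative", "exclusive", "independent"):
    print(model, count_misses(model))

The resulting counts give a rough feel for the ordering in the first theorem; the factors 2 and e are worst-case bounds.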


TTL of “Supplied” Copy

[Figure: TTL of the supplied copy as a function of the request's arrival time at the source, for the Authoritative, Exclusive, and Independent models: the authoritative source always supplies the full freshness lifetime, the exclusive source supplies a TTL that decays with its own copy's age, and the independent source supplies a TTL that is uniform over the lifetime.]


How Much More Traffic?

[Figure: miss rate for different configurations.]


Rejuvenation at Source Caches

Rejuvenation: the source refreshes its copy pre-term once the copy's TTL drops below a fraction v of the freshness lifetime.

[Figure: TTL over time at the source and at a client, for a 24-hour freshness lifetime, with no rejuvenation and with v = 0.5; with v = 0.5 the source refreshes its copy at the 12-hour mark.]
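
A minimal sketch of the rule as stated above (illustrative names, not code from the talk): the source refreshes pre-term as soon as the copy's TTL falls below v times the freshness lifetime.

def should_rejuvenate(lifetime, age, v):
    """Refresh pre-term once TTL < v * lifetime (v = 0 disables early refresh)."""
    ttl = lifetime - age
    return ttl < v * lifetime

# 24-hour lifetime, v = 0.5: refresh once the copy is just over 12 hours old.
print(should_rejuvenate(24, 11, 0.5))  # False -- TTL is still 13 hours
print(should_rejuvenate(24, 13, 0.5))  # True  -- TTL has dropped to 11 hours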


Rejuvenation’s Basic Tradeoff:

  • Increases traffic between upstream cache and origin (fixed cost)

  • Decreases traffic to client caches (larger gain with more clients)

[Figure: origin → upstream cache → downstream client caches.]

Is the increase/decrease monotone in v?


Interesting Dependence on v…

  • Independent(v) and Exclusive(v) behave differently.

  • Independent(v) is monotone: if v1 > v2, then Independent(v1) > Independent(v2).

  • Exclusive(v) is not monotone (the miss rate can increase!).

  • Integral 1/v (synchronized rejuvenation): Exclusive(v) < Independent(v), and Exclusive(v) is monotone (for Poisson and Pareto requests, but not for fixed-frequency).


How Can Non-integral 1/v Increase Client Misses?

The copy at the client is not synchronized with the source: when it expires, the rejuvenating source holds an already-aged copy.

[Figure: TTL over time at the upstream cache (with pre-term refreshes) and at the downstream client cache; requests at the client cache arrive when the upstream copy is already partly aged.]


Why Does Integral 1/v Work Well?

Cached copies remain synchronized.

[Figure: TTL over time at the upstream cache (v = 0.5, pre-term refreshes) and at the downstream client cache, driven by the requests arriving at the upstream cache; the copies at the two caches stay in step and expire together.]


Some Conclusions

  • Configuration: Origin (“Authoritative”) is best. Otherwise, use a consistent upstream cache per object (“Exclusive”).

  • “No-cache” request headers: resulting sporadic refreshes may increase misses at other client caches. (But it is possible to compensate…).

  • Rejuvenation: potentially very effective, but a good parameter setting (synchronized refreshes) is crucial.

  • Behavior patterns: similar for Poisson, Pareto, and traces (temporal locality); different for fixed-frequency.

  • For more, go to http://www.research.att.com/~edith

    Full versions of: Cohen and Kaplan, SIGCOMM 2001;

    Cohen, Halperin, and Kaplan, ICALP 2001.
