
HTTP evolution - TCP/IP issues Lecture 4

CM214 1999-2000

David De Roure

[email protected]


The HTTP Lectures

  • What is HTTP? The architecture and the methods.

  • APIs - implementing and using HTTP

  • HTTP evolution - TCP/IP issues

  • HTTP for Internet Computing

    See RFC 2616.




  • HTTP is typically used for distributed information systems, where performance can be improved by the use of response caches.

  • The architecture involves ‘caching proxies’ which sit between client and server. Proxies can be chained.

  • A cache may be shared by a group of users, enabling users to benefit from the caching of documents that have been read by colleagues.

  • Caches may themselves be interconnected (e.g. as ‘neighbours’), to exchange cached documents.



Caching in HTTP

  • The goal of caching in HTTP/1.1 is to eliminate the need to send requests in many cases, and to eliminate the need to send full responses in many other cases.

    • The former reduces the number of network round-trips required for many operations.

    • The latter reduces network bandwidth requirements.

  • NB All methods that might be expected to cause modifications to the server's resources MUST be written through to the server.



Semantic transparency

  • A cache behaves in a semantically transparent manner, with respect to a particular response, when its use affects neither the requesting client nor the origin server, except to improve performance.

  • When a cache is semantically transparent, the client receives exactly the same response (except for hop-by-hop headers) that it would have received had its request been handled directly by the origin server.

  • Requirements for performance, availability, and disconnected operation require us to be able to relax the goal of semantic transparency. The HTTP/1.1 protocol allows servers, caches, and clients to explicitly reduce transparency when necessary.



Expiration model

  • HTTP caching works best when caches can entirely avoid making requests to the origin server. The primary mechanism for avoiding requests is for an origin server to provide an explicit expiration time in the future, indicating that a response MAY be used to satisfy subsequent requests. In other words, a cache can return a fresh response without first contacting the server.

  • Since servers do not always provide explicit expiration times, HTTP caches typically assign heuristic expiration times, employing algorithms that use other header values (such as the Last-Modified time) to estimate a plausible expiration time.

  • NB Heuristic expiration times might compromise semantic transparency.
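The expiration rules above can be sketched in a few lines. This is a hypothetical helper of my own (names and the 10% fraction are illustrative, not mandated by the lecture): an explicit `max-age` or `Expires` lifetime wins; otherwise fall back to a heuristic fraction of the document's age derived from `Last-Modified`.

```python
# Sketch of HTTP/1.1 freshness-lifetime computation (illustrative helper).
from email.utils import parsedate_to_datetime

def freshness_lifetime(headers):
    """Return the freshness lifetime in seconds for a cached response."""
    cc = headers.get("Cache-Control", "")
    for directive in cc.split(","):
        name, _, value = directive.strip().partition("=")
        if name == "max-age":
            return float(value)                    # explicit lifetime wins
    if "Expires" in headers and "Date" in headers:
        expires = parsedate_to_datetime(headers["Expires"])
        date = parsedate_to_datetime(headers["Date"])
        return (expires - date).total_seconds()
    if "Last-Modified" in headers and "Date" in headers:
        # Heuristic: a fraction (commonly 10%) of the document's age so far.
        last_mod = parsedate_to_datetime(headers["Last-Modified"])
        date = parsedate_to_datetime(headers["Date"])
        return 0.1 * (date - last_mod).total_seconds()
    return 0.0  # nothing to go on: treat as immediately stale
```

A response that was last modified a day before it was served would get roughly 2.4 hours of heuristic freshness under this rule, which is exactly the kind of guess that can compromise semantic transparency.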



Validation Model

  • When a cache has a stale entry that it would like to use as a response to a client's request, it first has to check with the server (or possibly an intermediate cache with a fresh response) to see if its cached entry is still usable. We call this "validating" the cache entry.

  • Since we do not want to have to pay the overhead of retransmitting the full response if the cached entry is good, and we do not want to pay the overhead of an extra round trip if the cached entry is invalid, the HTTP/1.1 protocol supports the use of conditional methods.



  • When an origin server generates a full response, it attaches some sort of validator to it, which is kept with the cache entry.

  • When a client (user agent or proxy cache) makes a conditional request for a resource for which it has a cache entry, it includes the associated validator in the request.

  • The server then checks that validator against the current validator for the entity, and, if they match, it responds with a special status code (usually, 304 (Not Modified)) and no entity-body. Otherwise, it returns a full response (including entity-body).

  • A conditional request looks exactly the same as a normal request for the same resource except that it carries a special header (including the validator) that implicitly turns the method (usually, GET) into a conditional.
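The validator exchange above can be sketched as follows. This is a minimal simulation, not a real network client: `origin` stands in for the server, and the validator is an ETag replayed in `If-None-Match`.

```python
# Sketch of the HTTP/1.1 validation model with an ETag validator.
cache = {}  # url -> (etag, entity_body) kept with the cache entry

def origin(url, if_none_match=None):
    """Stand-in origin server: one resource, current validator '"v1"'."""
    etag, body = '"v1"', b"hello"
    if if_none_match == etag:
        return 304, etag, b""          # validators match: no entity-body
    return 200, etag, body             # otherwise, a full response

def conditional_get(url):
    etag, cached_body = cache.get(url, (None, None))
    status, new_etag, new_body = origin(url, if_none_match=etag)
    if status == 304:
        return status, cached_body     # reuse the cached entity-body
    cache[url] = (new_etag, new_body)  # store body with its validator
    return status, new_body
```

The first request pays for a full response; the second costs only a round trip, because the 304 carries no entity-body.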



What is cacheable?

  • By default, a response is cacheable if the requirements of the request method, request header fields, and the response status indicate that it is cacheable.

  • Cache-Control response directives allow a server to override the default cacheability of a response:

    • public indicates that the response MAY be cached by any cache

    • private indicates that all or part of the response message is intended for a single user and MUST NOT be cached by a shared cache

    • no-cache a cache MUST NOT use the response to satisfy a subsequent request without successful revalidation with the origin server
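As a sketch, a shared cache's handling of the three directives above might look like this (the policy names are mine, chosen for readability; a real cache tracks many more directives):

```python
# Decide what a *shared* cache may do with a response, from Cache-Control.
def shared_cache_policy(cache_control):
    names = {d.strip().partition("=")[0] for d in cache_control.split(",")}
    if "private" in names:
        return "must-not-store"          # intended for a single user
    if "no-cache" in names:
        return "revalidate-before-reuse" # store, but MUST revalidate first
    return "cacheable"                   # public, or default cacheability
```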



Hierarchical caching

  • Cache Hierarchies are a logical extension of the caching concept. A group of Web caches can benefit from sharing another cache in the same way that a group of Web clients benefit from sharing a cache.

  • Web caches can be arranged hierarchically, or in a mesh. When the cache topology has a tree-like structure, we usually use the term hierarchy. If the structure is rather flat, we call it a mesh.

  • Squid is a popular hierarchical cache.



Parents and siblings

  • In a parent relationship, the child cache will forward requests to its parent cache. If the parent does not hold a requested object, it will forward the request on behalf of the child. A cache hierarchy should closely follow the underlying network topology. Parent caches should be located along the network paths towards the greater Internet.

  • With a sibling relationship, a peer may only request objects already held in the cache; a sibling cannot forward cache misses on behalf of the peer. The sibling relationship should be used for caches ‘nearby’ but not in the direction of your route to the Internet.



Internet Cache Protocol

  • The Internet Cache Protocol (ICP) allows one cache to ask another if it has a valid copy of a named object, thereby improving its chances of selecting a neighbour cache that will return a cache hit.

  • ICP also provides an indication of network conditions. Failure to receive an ICP reply likely means the path is severely congested, severed, or the peer host is down.

  • ICP messages are transmitted as UDP packets.

  • However, ICP becomes useless on highly congested links, perhaps exactly where caching is needed most. ICP also adds some delay to the processing of a request.



  • Squid sends an ICP query message to its neighbours (both parents and siblings). The ICP query includes the requested URL and is sent in a UDP packet. Squid remembers how many queries it sends for a given request.

  • Each neighbour receives its ICP query and looks up the URL in its own cache. If a valid copy of the URL exists, the cache sends ICP_HIT, otherwise an ICP_MISS message.

  • The querying cache now collects the ICP replies from its peers. If the cache receives an ICP_HIT reply from a peer, it immediately forwards the HTTP request to that peer. If the cache does not receive an ICP_HIT reply, then all replies will be ICP_MISS.

  • Squid waits until it receives all replies, up to two seconds. If one of the ICP_MISS replies comes from a parent, Squid forwards the request to the parent whose reply was the first to arrive. We call this reply the FIRST_PARENT_MISS. If there is no ICP_MISS from a parent cache, Squid forwards the request to the origin server.
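The peer-selection step described above can be sketched as a small function. This is a simplification of my own (real Squid applies further tie-breakers and the two-second timeout): pick the first peer that replied ICP_HIT; failing that, the first parent that replied ICP_MISS (the FIRST_PARENT_MISS); failing that, go to the origin server.

```python
# Sketch of Squid's next-hop choice from collected ICP replies.
def choose_next_hop(replies, parents):
    """replies: (peer, 'ICP_HIT' | 'ICP_MISS') pairs in arrival order."""
    for peer, answer in replies:
        if answer == "ICP_HIT":
            return peer                 # forward the HTTP request here
    for peer, answer in replies:
        if answer == "ICP_MISS" and peer in parents:
            return peer                 # FIRST_PARENT_MISS
    return "origin-server"              # no parent replied: go direct
```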




  • To what extent do you think content could be pushed by servers to caches instead of caches pulling the data from servers?

  • Could IP multicast be used for this?

