
Improving the WWW: Caching or Multicast?





Presentation Transcript


  1. Improving the WWW: Caching or Multicast? Pablo Rodriguez, Ernst W. Biersack, Keith W. Ross Institut EURECOM, 2229 route des Cretes, BP 193, 06904 Sophia Antipolis Cedex, FRANCE Presented by Kuei-Hui Chen, Systems Laboratory, Graduate Institute of Computer Science and Engineering, Yuan Ze University March 2, 1999

  2. Outline • Caching • Continuous Multicast Push (CMP) • Model • Total Latency Time Ttot • First-Packet Time Tf • Completion Time Tc • Pure Multicast • CMP-Cache • Pure Cache • Caching and Multicast: Push Caching • Caching-Multicast Cooperation

  3. Caching • Caching • reduces bandwidth usage • decreases the latency seen by receivers • takes place at the application layer • Open issues of caching • requires additional resources • caches must cooperate to increase the hit rate • must maintain document consistency and deliver the most recent update • Additional delays that a multicast distribution does not incur • resolution delay • TCP delay • queuing delay • server delay

  4. Caching (2) • Different kinds of misses • First-Access • capacity • update • uncacheable • Assuming that a cache keeps all previously requested documents: • the first request for a document travels up the whole caching hierarchy to the origin server, accounting for the First-Access miss • when the document expires, a new request must travel to the origin server again, accounting for an Update miss
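A minimal sketch of how a cache could tell these two miss types apart; the class name, the expiry handling via a fixed TTL, and the fetch callback are assumptions for illustration, not the authors' implementation.

```python
import time

class ToyCache:
    """Minimal cache that distinguishes First-Access misses from Update misses."""

    def __init__(self):
        self.store = {}   # url -> (document, expiry timestamp)

    def get(self, url, fetch_from_origin, ttl):
        now = time.time()
        entry = self.store.get(url)
        if entry is None:
            # Never requested before: First-Access miss, travel up the hierarchy.
            doc = fetch_from_origin(url)
            self.store[url] = (doc, now + ttl)
            return doc, "first-access miss"
        doc, expiry = entry
        if now > expiry:
            # Document has expired: Update miss, fetch the new version from the origin.
            doc = fetch_from_origin(url)
            self.store[url] = (doc, now + ttl)
            return doc, "update miss"
        return doc, "hit"
```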

  5. CMP • The CMP distribution works as follows: • A Web server monitors the number of requests for a document to decide which documents to multicast. • The server takes a popular document and sends it cyclically on a multicast address. • Receivers obtain a mapping of the document's name (URL) into a multicast address and then join the multicast group. • The server keeps monitoring the number of requests and stops multicasting the document when there are no more receivers.
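A minimal sketch of the server-side monitoring step described above; the popularity threshold, packet size, toy address allocation, and function names are illustrative assumptions, not the paper's mechanism.

```python
HOT_THRESHOLD = 10   # requests per monitoring interval (assumed value)
PACKET_SIZE = 1024   # bytes per multicast packet (assumed value)

def split_into_packets(document: bytes, size: int = PACKET_SIZE):
    """Cut a document into fixed-size packets for the cyclic transmission."""
    return [document[i:i + size] for i in range(0, len(document), size)]

def cmp_server_step(request_counts, documents, groups, send):
    """One monitoring step: start/stop the cyclic multicast of popular documents.

    request_counts: url -> requests observed in the last interval
    documents:      url -> document bytes
    groups:         url -> multicast address, kept across steps
    send:           callable(group, packet) that transmits one packet
    """
    for url, count in request_counts.items():
        if count >= HOT_THRESHOLD and url not in groups:
            groups[url] = f"232.0.0.{len(groups) + 1}"   # toy address allocation
        elif count == 0 and url in groups:
            del groups[url]                              # no receivers left: stop

    # Send one full cycle of every hot document on its multicast address.
    for url, group in groups.items():
        for packet in split_into_packets(documents[url]):
            send(group, packet)
```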

  6. CMP (2) • CMP takes place at the transport layer • with reliability and congestion control ensured by the end systems (server and clients) • requires that the network connecting a server with its clients be multicast capable • a single packet sent by the server is forwarded along the multicast tree (Fig. 1)

  7. CMP (3) • A continuous multicast distribution also requires some additional mechanisms: • Session servers, or a similar mechanism, are needed to map the document's name into a multicast address. • A Web server needs to monitor the number of document requests and their rate of change to decide which documents to multicast and when to stop multicasting them. • There is an overhead in the multicast-capable routers to maintain state information for each active multicast group. • There is also an overhead due to the join and prune messages needed for the multicast tree to grow and shrink depending on the location of the receivers. • Multicast congestion control is still an open issue.
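The first mechanism above is the mapping from a document's URL to a multicast address. A minimal sketch of such a session-server lookup; the hash-based allocation and the address prefix are assumptions chosen only for illustration.

```python
import hashlib

MULTICAST_PREFIX = "232.1"   # source-specific multicast range, used here only for illustration

def url_to_multicast_address(url: str) -> str:
    """Deterministically map a document's URL to a multicast group address."""
    digest = hashlib.sha1(url.encode("utf-8")).digest()
    return f"{MULTICAST_PREFIX}.{digest[0]}.{digest[1]}"

# A receiver resolves the URL and then joins the returned multicast group.
print(url_to_multicast_address("http://www.eurecom.fr/index.html"))
```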

  8. Model • Network hierarchy (figure): Institutional Network → Regional Network → National Network → International Path

  9. Model (2)

  10. Latency • Total Latency Time Ttot • First-Packet Time Tf • The time between when a receiver makes a request and when the first packet arrives at that receiver. • Completion Time Tc • The time between the arrival of the first packet and the time the receiver completes the reception of the most up-to-date document version. • Ttot = Tf + Tc

  11. First-Packet Time • The expected first-packet time for a multicast and a caching distribution is: • Ecmp[Tf] = 2d · Ecmp[L] • Ecache[Tf] = 2d · Ecache[L] • L: the number of links a new request has to travel on the multicast tree or on the caching hierarchy to reach the document. • d: the propagation and transmission delay on one link, assumed homogeneous for all links. • The average number of links traversed by a request on a multicast tree, Ecmp[L], or on a caching hierarchy, Ecache[L], depends on: • the total number of requests for a document • the document size • the document's rate of change
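A small worked example of the two formulas above; the values chosen for d and for the average numbers of links are illustrative assumptions, not measurements from the paper.

```python
# First-packet time: E[Tf] = 2 * d * E[L] for both distributions.
d = 0.02          # per-link propagation + transmission delay, seconds (assumed)
E_L_cmp = 2.5     # average links traversed on the multicast tree (assumed)
E_L_cache = 3.5   # average links traversed on the caching hierarchy (assumed)

Tf_cmp = 2 * d * E_L_cmp
Tf_cache = 2 * d * E_L_cache
print(f"E_cmp[Tf]   = {Tf_cmp * 1000:.0f} ms")     # 100 ms
print(f"E_cache[Tf] = {Tf_cache * 1000:.0f} ms")   # 140 ms
```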

  12. First-Packet Time (2) • T: the period of change of the document • S: the Web document size • the multicast transmission rate seen by a receiver

  13. Completion Time • Indicative values of the link capacities. • Assume that no traffic other than Web traffic goes through these networks.

  14. Completion Time - Pure Multicast • The completion time for one Hot-Changing document. • The capacity needed to answer the Hot-Stable and Cold requests from all LANs.
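A hedged back-of-the-envelope illustration of that capacity, assuming the Hot-Stable and Cold requests are served by unicast so every request carries its own copy over the shared link; the number of LANs, request rate, and document size are assumptions, not the paper's figures.

```python
# Toy capacity estimate for serving Hot-Stable and Cold requests by unicast.
N = 100        # number of institutional LANs (assumed)
r = 2.0        # requests per second per LAN for these documents (assumed)
S = 10 * 8e3   # average document size: 10 KB expressed in bits (assumed)

unicast_capacity = N * r * S   # every request transfers its own copy of the document
print(f"Unicast capacity needed on the shared link: {unicast_capacity / 1e6:.1f} Mbit/s")
```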

  15. Completion Time - CMP-Cache

  16. Completion Time - CMP-Cache (2)

  17. Completion Time - Pure Cache • The average completion time. • The percentage of requests for a Hot-Changing document that see a document update.
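A hedged back-of-the-envelope estimate of that percentage, assuming requests arrive at an average rate lam and the document changes every T time units, so roughly one request per change period must fetch the new version from the origin; this simple model and its numbers are assumptions for illustration only.

```python
# Fraction of requests for a Hot-Changing document that see an update
# (the first request after each change must go back to the origin server).
lam = 50      # requests per hour for the document at this cache (assumed)
T = 2.0       # hours between document changes (assumed)

requests_per_period = lam * T
update_miss_fraction = min(1.0, 1.0 / requests_per_period)
print(f"~{update_miss_fraction:.1%} of requests see an update miss")
```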

  18. Completion Time - Pure Cache (2)

  19. Completion Time - Pure Cache (3)

  20. Caching and Multicast: Push Caching • Solutions to improve the completion time in a caching hierarchy: • increase the bandwidth of the channels • reduce the request rate at every cache by distributing the documents over many caches • use the bandwidth more efficiently

  21. Caching-Multicast Cooperation • The origin Web server is the only one that knows for certain when a document has changed. Only popular and changing documents should be pushed. • Every time a popular document changes, the Web server can decide to multicast the document update towards all the national caches. • The national caches forward the document update to all regional caches. • The regional caches keep track of which documents are popular with their children and decide whether to keep or remove the document. • The regional caches that have receivers interested in that document update forward it towards all institutional caches. • The institutional caches follow the same process as the regional caches.
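A minimal sketch of the decision a regional cache could take when an update is pushed down to it; the popularity threshold, data structures, and callback names are assumptions for illustration, not the authors' protocol.

```python
POPULARITY_THRESHOLD = 5   # child requests per interval (assumed value)

def on_pushed_update(url, document, child_request_counts, local_store, push_to_child):
    """Handle a document update pushed down from the national cache.

    child_request_counts: child_cache_id -> recent request count for this url
    local_store:          url -> document, the regional cache's own storage
    push_to_child:        callable(child_cache_id, url, document)
    """
    interested = [child for child, count in child_request_counts.items()
                  if count >= POPULARITY_THRESHOLD]
    if interested:
        local_store[url] = document              # keep: document is popular below us
        for child in interested:
            push_to_child(child, url, document)  # forward to institutional caches
    else:
        local_store.pop(url, None)               # not popular here: drop the update
```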

  22. Conclusions • Using caching and multicast together gives better performance (latency, bandwidth) than either of them alone.
