
Cost-Aware WWW Proxy Caching Algorithms

Pei Cao (University of Wisconsin-Madison) and Sandy Irani (University of California-Irvine). Proceedings of the USENIX Symposium on Internet Technologies and Systems, Monterey, California, December 1997.

Presentation Transcript


  1. Cost-Aware WWW Proxy Caching Algorithms Pei Cao, University of Wisconsin-Madison; Sandy Irani, University of California-Irvine. Proceedings of the USENIX Symposium on Internet Technologies and Systems, Monterey, California, December 1997. Presented by Mike Tien, System Lab, Yuan Ze University, miketien@syslab.cse.yzu.edu.tw

  2. Outline • 1. Existing Results • 2. Existing Document Replacement Algorithms • 3. Comparison of Existing Proxy Replacement Algorithms • 4. GreedyDual-Size Algorithm • 5. Performance Comparison • 6. Summary • 7. Conclusion

  3. 1. Existing Results • Three differences between Web caching and conventional memory paging: 1. Web caching is variable-size caching. 2. Web objects take different amounts of time to download, even when they are the same size. 3. The access streams seen by a proxy cache come from tens to thousands of users. • Optimal result: for a sequence of requests to uniform-size blocks of memory, the simple rule of evicting the block whose next request is farthest in the future is optimal (Web cache hit ratios reported in [WASAF96] are around 50%).
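
The offline-optimal rule for uniform-size blocks mentioned above (evict the block whose next use lies farthest in the future) can be sketched as follows. This is an illustrative Python fragment, not code from the paper; the function name and trace format are invented.

    def furthest_in_future_victim(cached_blocks, future_requests):
        """Belady's offline-optimal rule for uniform-size blocks: evict the
        cached block whose next request lies farthest in the future."""
        def next_use(block):
            try:
                return future_requests.index(block)
            except ValueError:
                return float("inf")  # never requested again: ideal victim
        return max(cached_blocks, key=next_use)

    # With A, B, C cached and future trace A, C, A, B, the victim is B.
    print(furthest_in_future_victim({"A", "B", "C"}, ["A", "C", "A", "B"]))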

  4. 2. Existing Document Replacement Algorithms • LRU — advantage: simplicity; disadvantage: does not take file size or download latency into account. • LFU • Size — removes the largest document first. • LRU-Threshold — same as LRU, except that documents larger than a certain threshold size are never cached. • Pitkow/Recker — removes the least-recently-used document, except if all documents were accessed today, in which case the largest one is removed. • Lowest-Latency-First — tries to minimize average latency by removing the document with the lowest download latency first. • Hybrid — evicts the document p with the smallest value of (c_s + W_b/b_s) * n_p^W_n / s_p, where c_s is the time to connect to p's server s, b_s is the bandwidth to s, n_p is the number of references to p since it entered the cache, s_p is the size of p, and W_b, W_n are tuning constants. • Lowest Relative Value (LRV) — takes into account the locality, cost, and size of a document.
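
To make the Hybrid entry above concrete, here is a sketch of its utility function in Python. The parameter names follow the formula, but the default weight values are placeholders rather than the constants tuned in the original Hybrid work.

    def hybrid_value(conn_time_s, bandwidth_s, refs_p, size_p, W_b=8192, W_n=0.9):
        """Utility of keeping document p (fetched from server s) in the cache.
        Hybrid evicts the document with the SMALLEST value.
        conn_time_s: estimated time to connect to server s
        bandwidth_s: estimated bandwidth to server s
        refs_p:      references to p since it entered the cache
        size_p:      size of p in bytes
        W_b, W_n:    tuning constants (placeholder defaults)"""
        return (conn_time_s + W_b / bandwidth_s) * (refs_p ** W_n) / size_p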

  5. 3. Comparison of Existing Proxy Replacement Algorithms (">>" means "performs better than") • Size >> LFU, LRU-Threshold, and Pitkow/Recker [WASAF96]; Size >> LRU in most situations. • LRU >> Size in terms of byte hit rate. • In most cases, LFU << LRU. • In terms of minimizing latency, Hybrid >> LLF (Lowest-Latency-First). • LRV >> both LRU and Size in terms of hit ratio and byte hit ratio. • One disadvantage of both Hybrid and LRV is their heavy parameterization. • However, these studies offer no conclusion on which algorithm a proxy should use.

  6. 4. GreedyDual-Size Algorithm
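
In outline, GreedyDual-Size keeps a value H(p) = L + cost(p)/size(p) for each cached document p, evicts the document with the smallest H when space is needed, and raises the running inflation value L to that minimum, so documents that were accessed recently, or are expensive per byte to refetch, survive longer. The following is a minimal Python sketch of that idea (illustrative only, not the authors' implementation; the class and method names are invented):

    class GreedyDualSizeCache:
        """Minimal sketch of GreedyDual-Size: H(p) = L + cost(p) / size(p)."""

        def __init__(self, capacity_bytes, cost_fn=lambda size: 1):
            self.capacity = capacity_bytes
            self.cost_fn = cost_fn      # e.g. constant 1 for GD-Size(1)
            self.L = 0.0                # inflation value
            self.items = {}             # key -> (H value, size in bytes)
            self.used = 0

        def access(self, key, size):
            """Handle one request for `key` of `size` bytes."""
            if key in self.items:
                _, size = self.items[key]           # hit: reuse stored size
            else:                                   # miss: make room, then add
                while self.items and self.used + size > self.capacity:
                    victim = min(self.items, key=lambda k: self.items[k][0])
                    self.L = self.items[victim][0]  # inflate L to evicted H
                    self.used -= self.items[victim][1]
                    del self.items[victim]
                self.used += size
            # On both hit and miss, H(p) is (re)set to L + cost(p)/size(p).
            self.items[key] = (self.L + self.cost_fn(size) / size, size)

    # GD-Size(1): uniform cost of 1 per document, which favors hit ratio.
    cache = GreedyDualSizeCache(capacity_bytes=1 << 20)
    cache.access("/index.html", 4096)

A production cache would keep the documents in a priority queue keyed on H rather than scanning for the minimum on every eviction, but the linear scan keeps the sketch short.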

  7. 5. Performance Comparison • Hit Rate & Byte Hit Rate

  8. Reduced Latency

  9. Network Cost

  10. 6. Summary • High hit ratio -> GD-Size(1) • High byte hit ratio -> GD-Size(packets) • If network latency doesn't change over time, or changes only slowly -> GD-Size(hops) or GD-Size(weighted hops) • Small cache size -> GD-Size(1)
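
The variants in this summary differ only in the cost term plugged into GreedyDual-Size. Sticking with the sketch class from the GreedyDual-Size slide above (and noting that the packet estimate and the hop handling here are illustrative assumptions):

    # GD-Size(1): every document costs 1, which favors hit ratio.
    gd_size_1 = GreedyDualSizeCache(1 << 20, cost_fn=lambda size: 1)

    # GD-Size(packets): cost approximates the packets needed to fetch the
    # document; 2 + size/536 is one such estimate (request/reply overhead
    # plus 536-byte data packets).
    gd_size_packets = GreedyDualSizeCache(1 << 20, cost_fn=lambda size: 2 + size / 536)

    # GD-Size(hops) / GD-Size(weighted hops) would instead use a measured
    # network distance per document or server as the cost; that needs a cost
    # function keyed on the document, which the minimal sketch above omits.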

  11. 7. Conclusion • GD-Size combines locality, cost, and size. • It optimizes only one performance measure at a time; the authors are looking into how to adjust the algorithm to optimize more than one performance measure. • They plan to study the integration of hint-based prefetching with the cache replacement algorithm.
