
Improving On-demand Data Access Efficiency with Cooperative Caching in MANETs


Presentation Transcript


  1. Improving On-demand Data Access Efficiency with Cooperative Caching in MANETs. PhD Dissertation Defense, 11.21.05 @ CSE, ASU. Yu Du. Supported in part by NSF grants ANI-0123980, ANI-0196156, and ANI-0086020, and the Consortium for Embedded Systems.

  2. Roadmap 1. Introduction 2. Cooperative caching 3. Related work 4. Proposed approach – COOP 5. Performance evaluation 6. Conclusions and future work

  3. 1. Introduction. 1.1 Problems of data access in MANETs. [Figure: a client reaching remote data servers over multi-hop wireless routes.] • MANETs – Mobile Ad hoc Networks • Wireless medium • Multi-hop routes • Dynamic topologies • Resource constraints • On-demand data access – client/server model.

  4. 1. Introduction. 1.2 Reducing data access costs in MANETs • The locality principle [Denning] • Computer programs tend to repeatedly reference a subset of their data/instructions. • Used in processor caches, storage hierarchies, Web browsers, and search engines. • Zipf's law [Zipf] • P(i) ∝ 1/i^α (α close to unity), reflecting common interest in popular data. • 80-20 rule: roughly 80% of accesses go to 20% of the data. • Cooperative caching • Multiple nodes share and cooperatively manage their cached contents.
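To make the skew concrete, here is a minimal sketch (not from the dissertation; the catalog size N = 1000 and α = 1.0 are illustrative assumptions) that computes how much of the access stream the most popular 20% of items absorb under P(i) ∝ 1/i^α:

```python
# Sketch: traffic share of the top 20% of items under a Zipf popularity law.
# N (catalog size) and alpha are illustrative assumptions, not values from the thesis.

N = 1000          # number of distinct data items (assumed)
alpha = 1.0       # Zipf exponent, close to unity as on the slide

weights = [1.0 / (i ** alpha) for i in range(1, N + 1)]   # P(i) ∝ 1/i^α (unnormalized)
total = sum(weights)
top20 = sum(weights[: N // 5]) / total

print(f"Top 20% of items receive about {top20:.0%} of all accesses")
```

With these assumed parameters the top 20% of items draw roughly 78% of accesses, which is the 80-20 behavior the slide refers to.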

  5. 1. Introduction. 1.3 Cooperative caching. [Figure: Bob's request is answered from Ashley's cache instead of traveling to the remote server.] • Cooperative caching • A caching node serves not only its own data requests but also requests from others. • A caching node stores data not only for its own needs but also for others. • Shorter paths, less expensive links, fewer collisions, lower risk of route breakage. • Saves time, energy, and bandwidth, and improves data availability. • Why? • Data locality and commonality in users' interests. • Client/server communication vs. inter-cache communication. • Users around the same location tend to have similar interests. • People gathered around a food court: menus. • An exploration team: environmental information.

  6. Roadmap 1. Introduction 2. Cooperative caching 2.1. Overview 2.2. Cache resolution 2.3. Cache management 2.4. Cache consistency control 3. Related work 4. Proposed approach – COOP 5. Performance evaluation 6. Conclusions and future work

  7. 2. Cooperative caching. 2.1 Overview • Cooperative caching • Multiple nodes share and cooperatively manage their cached contents. • Cache resolution • Cache management • Cache consistency control • Used in Web caches/proxy servers on the Internet to alleviate server overloading and response delay. • Those designs did not consider the special features of MANETs.

  8. 2. Cooperative caching. 2.2 Cache resolution • How to find a cache storing the requested data? • Hierarchical: Harvest [Chank96] • Directory-based: Summary Cache [Fan00] • Hash-table based: Squirrel [Iyer02]

  9. 2. Cooperative caching. 2.3 Cache management • What to cache? • Admission control. • Cache replacement algorithm. • LRU. • Extended LRU (Squirrel): any access has the same impact, whether it comes from the local node or from other nodes.
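For reference, a minimal sketch of plain LRU replacement as named on the slide (the capacity and interface are illustrative; Extended LRU in Squirrel applies the same policy but lets remote hits refresh recency too):

```python
from collections import OrderedDict

class LRUCache:
    """Plain LRU: the least recently used item is evicted when the cache is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()          # preserves access order

    def get(self, key):
        if key not in self.items:
            return None                     # cache miss
        self.items.move_to_end(key)         # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used item
```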

  10. 2. Cooperative caching. 2.4 Cache consistency control • How to maintain consistency between the server and the caches? • Strong/weak consistency: whether consistency is always guaranteed. • Pull/push-based: who (client/server) initiates the consistency verification. • TTL is used in this research. • Each data item has a Time-To-Live field – the allowed caching time. • TTL is widely adopted in real applications, e.g., HTTP. • Lower cost than strong-consistency protocols.
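A minimal sketch of the TTL rule described above, assuming a simple dict-based local cache; the field and function names are illustrative, not COOP's actual interface:

```python
import time

class CacheEntry:
    """A cached item with a Time-To-Live, as in HTTP-style weak consistency."""
    def __init__(self, data, ttl_seconds):
        self.data = data
        self.expires_at = time.time() + ttl_seconds   # allowed caching time

    def is_fresh(self):
        return time.time() < self.expires_at

def lookup(cache, key):
    """Return cached data only while its TTL has not expired; otherwise treat as a miss."""
    entry = cache.get(key)
    if entry is not None and entry.is_fresh():
        return entry.data
    cache.pop(key, None)   # expired: drop it and let the caller re-fetch from the server
    return None
```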

  11. 3. Related work

  12. Roadmap 1. Introduction 2. Cooperative caching 3. Related work 4. Proposed approach – COOP 4.1. System architecture 4.2. Cache resolution 4.3. Cache management 5. Performance evaluation 6. Conclusions and future work

  13. 4. Proposed approach – COOP. 4.1 System architecture • Each node runs a COOP instance. • The running COOP instance • Receives data requests from the user's applications. • Resolves requests using the cocktail cache resolution scheme. • Decides what data to cache using the COOP cache management scheme. • Uses the underlying protocol stack.
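A hedged skeleton of how the per-node instance described above could be organized; the class and method names are assumptions for illustration, not the dissertation's code:

```python
class CoopNode:
    """One COOP instance per node: cache resolution + management on top of the network stack."""
    def __init__(self, cache, resolver, manager, network):
        self.cache = cache          # local cache storage
        self.resolver = resolver    # cocktail cache resolution scheme
        self.manager = manager      # COOP cache management (admission/replacement)
        self.network = network      # underlying routing/transport stack

    def request(self, data_id):
        data = self.cache.get(data_id)
        if data is None:
            # Resolve via nearby caches first, falling back to the origin server.
            data = self.resolver.resolve(data_id, self.network)
            self.manager.admit(self.cache, data_id, data)
        return data
```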

  14. 4. Proposed approach – COOP 4.2. Cache Resolution 4.2.1. Hop-by-Hop 4.2.2. Zone-based 4.2.3. Profile-based 4.2.4. COOP cache resolution – a cocktail approach

  15. 4. Proposed approach – COOP, 4.2 Cache resolution. 4.2.1 Hop-by-hop cache resolution. [Figure: a request intercepted by a forwarding node before it reaches the data server.] • The forwarding nodes try to resolve a data request before relaying it to the next hop. • Reduces the travel distance of requests/replies. • Helps to avoid expensive/unreliable network channels.
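A sketch of the forwarding-node behavior, assuming hypothetical `node.cache`, `node.network.send_reply`, and `node.network.forward` primitives:

```python
def on_forward_request(node, request):
    """Hop-by-hop resolution: an intermediate node answers from its own cache
    when it can, instead of relaying the request further toward the server."""
    data = node.cache.get(request.data_id)
    if data is not None:
        node.network.send_reply(request.origin, request.data_id, data)  # answer locally
    else:
        node.network.forward(request)   # relay to the next hop on the route
```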

  16. 4. Proposed approach – COOP, 4.2 Cache resolution. 4.2.2 Zone-based cache resolution. [Figure: a node's r-hop cooperation zone, with the data server outside the zone.] • Users around the same location tend to share common interests. • Cooperation zone – the surrounding nodes within r-hop range. • r: the radius of the cooperation zone. • To find an item within the cooperation zone: • Reactive approach – flooding within the cooperation zone. • Proactive approach – record previously heard requests.
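The reactive variant can be sketched as a flood whose hop budget equals the zone radius r; the probe fields and network primitives below are assumptions:

```python
def zone_lookup(node, data_id, r):
    """Reactive zone-based resolution: flood a lookup whose hop limit is the
    cooperation-zone radius r, so it never travels beyond r hops."""
    probe = {"data_id": data_id, "origin": node.id, "hops_left": r}
    node.network.broadcast(probe)

def on_zone_probe(node, probe):
    data = node.cache.get(probe["data_id"])
    if data is not None:
        node.network.send_reply(probe["origin"], probe["data_id"], data)
    elif probe["hops_left"] > 1:
        probe["hops_left"] -= 1
        node.network.broadcast(probe)   # keep flooding within the zone
    # else: radius exhausted, drop the probe
    # (a real implementation would also suppress duplicate probes)
```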

  17. 4. Proposed approach – COOP, 4.2 Cache resolution. 4.2.3 Profile-based cache resolution • Records received requests to assist future cache resolution. • RRT – Recent Request Table. • An entry is deleted if the recorded requester fails to supply the corresponding data item. • When the table is full, LRU decides which entry to replace.
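A minimal sketch of a Recent Request Table consistent with the rules above (the capacity and method names are assumed):

```python
from collections import OrderedDict

class RecentRequestTable:
    """Maps a data id to the node that recently requested it; LRU-bounded."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def record(self, data_id, requester):
        self.entries[data_id] = requester
        self.entries.move_to_end(data_id)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)     # LRU replacement when the table is full

    def hint(self, data_id):
        return self.entries.get(data_id)         # a node likely to hold the data

    def invalidate(self, data_id):
        # Called when the recorded requester fails to supply the data item.
        self.entries.pop(data_id, None)
```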

  18. 4. Proposed approach – COOP, 4.2 Cache resolution 4.2.4 COOP cache resolution – a cocktail approach
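The transcript does not spell out how the three schemes are combined, so the ordering below (local cache, then profile hint, then zone flooding, then the server path with hop-by-hop interception) is an assumption sketched for illustration only; `zone_lookup_blocking`, `node.rrt`, and the network calls are hypothetical helpers:

```python
def cocktail_resolve(node, data_id, r):
    """Assumed composition of the three resolution schemes; not a verbatim COOP algorithm."""
    data = node.cache.get(data_id)                 # 1. local cache
    if data is not None:
        return data
    hinted = node.rrt.hint(data_id)                # 2. profile-based hint (RRT)
    if hinted is not None:
        data = node.network.ask(hinted, data_id)
        if data is not None:
            return data
        node.rrt.invalidate(data_id)               # hinted node failed to supply the item
    data = zone_lookup_blocking(node, data_id, r)  # 3. zone-based flooding (hypothetical helper)
    if data is not None:
        return data
    # 4. send toward the server; intermediate nodes apply hop-by-hop resolution
    return node.network.request_from_server(data_id)
```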

  19. 4. Proposed approach – COOP 4.3. Cache Management 4.3.1. Primary and secondary data 4.3.2. Inter-category and intra-category rules

  20. 4. Proposed approach – COOP, 4.3 Cache management. 4.3.1 Primary and secondary data • Different cache misses may introduce different costs. • Example: Y has to be obtained from the server, while X can be obtained from a neighbor, so a miss on Y costs more than a miss on X. • Primary data and secondary data. • Primary data – not available within the cooperation zone. • Secondary data – available within the cooperation zone.

  21. 4. Proposed approach – COOP, 4.3 Cache management. 4.3.2 Inter-category and intra-category rules. [Figure: a replacement example over time steps T0–T4 with primary items A1–A5 and secondary items B1–B6.] • Inter-category rule • Applied when a replacement decision must be made between different categories. • Primary data have precedence over secondary data. • Intra-category rule • Applied when a replacement decision must be made within the same category. • LRU.
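A sketch combining both rules; the entry layout and how `is_primary` is determined (from a zone-availability check at admission time) are assumptions, while the precedence and LRU ordering follow the slide:

```python
import time

class CoopCache:
    """Replacement honoring the inter-category rule (keep primary over secondary)
    and the intra-category rule (LRU within a category)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}                     # data_id -> (data, is_primary, last_used)

    def touch(self, data_id):
        data, is_primary, _ = self.entries[data_id]
        self.entries[data_id] = (data, is_primary, time.time())

    def put(self, data_id, data, is_primary):
        if len(self.entries) >= self.capacity:
            self._evict()
        self.entries[data_id] = (data, is_primary, time.time())

    def _evict(self):
        # Inter-category rule: evict secondary data first; touch primary only if no secondary is left.
        secondary = [k for k, (_, p, _) in self.entries.items() if not p]
        pool = secondary if secondary else list(self.entries)
        # Intra-category rule: within the chosen category, evict the least recently used item.
        victim = min(pool, key=lambda k: self.entries[k][2])
        del self.entries[victim]
```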

  22. Roadmap 1. Introduction 2. Cooperative caching 3. Related work 4. Proposed approach – COOP 5. Performance evaluation 5.1. The impact of different zone radii 5.2. The impact of data access pattern 5.3. The impact of cache size 5.4. Data availability 5.5. Time cost: average travel distance 5.6. Cache miss ratio 5.7. Energy cost: message overhead 6. Conclusions and future work

  23. 5. Performance evaluation. 5.1 The impact of different zone radii. (1) Average probability of finding a requested item d within the zone. (2) Average time cost, assuming time cost is proportional to the number of covered hops. (3) Average energy cost, assuming energy cost is proportional to the number of messages.
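The slide's equations (1)–(3) are not reproduced in this transcript; the sketch below only illustrates the stated proportionality assumptions under a very simple model (uniform node density, independent per-node hit probability) and is not the dissertation's analysis:

```python
def zone_costs(r, nodes_per_hop_ring=6, p_hit_per_node=0.05):
    """Back-of-the-envelope stand-ins for the slide's three zone-radius metrics.
    All parameters are illustrative assumptions."""
    nodes_in_zone = sum(nodes_per_hop_ring * k for k in range(1, r + 1))
    p_find = 1 - (1 - p_hit_per_node) ** nodes_in_zone   # (1) probability of finding d in the zone
    time_cost = r                                        # (2) time ∝ number of covered hops
    energy_cost = nodes_in_zone                          # (3) energy ∝ number of messages (flooding)
    return p_find, time_cost, energy_cost

for r in (1, 2, 3):
    print(r, zone_costs(r))
```

Under these assumptions the hit probability grows quickly with r while the message cost grows with the zone population, which is consistent with the saturation-point observation on the next slide.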

  24. 5. Performance evaluation. 5.1 The impact of different zone radii • If an item is not found within a cooperation zone of a certain size, it is unlikely to be found in a larger zone. • The saturation point.

  25. 5. Performance evaluation. 5.2 The impact of access pattern • As α increases: • Cache miss ratio decreases (CT-3, CT-2, CT-1, HBH, SC). • Average travel distance decreases (CT-3, CT-2, CT-1, HBH, SC). • Average #messages decreases (HBH CT-1, SC CT-2, CT-3).

  26. 5. Performance evaluation. 5.3 The impact of cache size • As the cache size increases: • Cache miss ratio decreases (CT-3, CT-2, CT-1, HBH, SC). • Average travel distance decreases (CT-3 CT-2, CT-1, HBH, SC). • Average #messages decreases (HBH CT-1, SC CT-2, CT-3).

  27. 5. Performance evaluation. 5.4 Data availability • Varied factors: number of nodes, pause time, node velocity. • Data availability (CT-2, CT-1, HBH, SC).

  28. 5. Performance evaluation. 5.5 Time cost: average travel distance • Varied factors: number of nodes, pause time, node velocity. • Average travel distance (CT-2, CT-1, HBH, SC).

  29. 5. Performance evaluation. 5.6 Cache miss ratio • Varied factors: number of nodes, pause time, node velocity. • Cache miss ratio (CT-2, CT-1, HBH, SC).

  30. 5. Performance evaluation. 5.7 Energy cost: average #messages • Varied factors: number of nodes, pause time, node velocity. • Average #messages (CT-1 HBH, SC, CT-2).

  31. 6. Conclusions and future work • Cooperative caching is supported by data locality and the commonality in users' interests. • Proposed approach – COOP: • Higher data availability. • Less time cost. • Lower cache miss ratio. • The tradeoff is message overhead. • The tradeoff depends on the cooperation zone radius. • Future work: • Adapt the cooperation zone radius based on the user's requirements. • Explore different cooperation structures. • Enforce fairness in cooperative caching.

  32. References • [Cao04] L. Yin and G. Cao, "Supporting cooperative caching in ad hoc networks", INFOCOM, 2004. • [Chank96] A. Chankhunthod et al., "A hierarchical Internet object cache", USENIX Annual Technical Conference, 1996. • [Denning] P. Denning, "The locality principle", Communications of the ACM, July 2005. • [Fan00] L. Fan et al., "Summary cache: A scalable wide-area Web cache sharing protocol", SIGCOMM, 1998. • [Iyer02] S. Iyer et al., "Squirrel: A decentralized peer-to-peer Web cache", PODC, 2002. • [Zipf] G. Zipf, "Human Behavior and the Principle of Least Effort", Addison-Wesley, 1949.

  33. Q & A Thank You!
