
Benefit-based Data Caching in Ad Hoc Networks

Bin Tang, Himanshu Gupta and Samir Das

Department of Computer Science

Stony Brook University


Outline
  • Motivation
  • Problem Statement
  • Algorithm and Protocol Design
  • Performance Evaluation
  • Conclusions and Future Work


Motivation
  • Ad hoc networks are resource constrained
    • Scarce network bandwidth
    • Limited battery energy and memory
  • Caching can reduce access/communication cost, and thus save energy and bandwidth
  • Our work is the first to present a distributed caching implementation based on an approximation algorithm


Problem Statement
  • Given:
    • Ad hoc network graph G(V,E)
    • Multiple data items P, each stored at its server node
    • Access frequency of each node for each data item
    • Memory constraint of each node
  • Goal:
    • Select cache nodes to minimize the total access cost:

∑i∈V ∑j∈P (access frequency of i for j) × (distance from i to the nearest cache of j)

    • Under memory constraint
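The objective above can be computed with a short sketch (function and variable names are my own, not the paper's; hop count stands in for distance, and the nearest copy of each item is found with a multi-source BFS):

```python
from collections import deque

def total_access_cost(adj, freq, caches):
    """Total access cost: for every node i and item j,
    freq[i][j] * (hop distance from i to the nearest node holding j).
    adj: node -> list of neighbors; caches[j]: set of nodes holding j
    (the item's server included)."""
    cost = 0
    for j, holders in caches.items():
        # multi-source BFS: distance from each node to its nearest holder of j
        dist = {h: 0 for h in holders}
        queue = deque(holders)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for i in adj:
            cost += freq.get(i, {}).get(j, 0) * dist[i]
    return cost

# Toy path network 0-1-2-3; item 'D' is served at node 0 and
# node 3 accesses it 5 times per period.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
freq = {3: {'D': 5}}
print(total_access_cost(adj, freq, {'D': {0}}))     # 5 accesses * 3 hops = 15
print(total_access_cost(adj, freq, {'D': {0, 2}}))  # caching at node 2: 5 * 1 = 5
```

Selecting cache nodes that minimize this quantity, subject to each node's memory constraint, is the optimization problem above.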


Algorithm Design Outline
  • Centralized Greedy Algorithm (CGA)
    • Delivers a solution whose benefit is at least 1/2 of the optimal benefit (for uniform size data)
  • Distributed Greedy Algorithm (DGA)
    • Purely localized


Centralized Greedy Algorithm (CGA)
  • Benefit of caching a data item at a node: the reduction in total access cost
  • At each step, CGA caches the data item into the memory page of the node that maximizes the benefit
  • Theorem: CGA delivers a solution whose total benefit is at least 1/2 of the optimal benefit for uniform-size data items
    • 1/4 for non-uniform-size data items
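A minimal sketch of this greedy loop for uniform-size items (one memory unit per cached copy; the names and the brute-force benefit evaluation are my own illustration, not the paper's implementation):

```python
from collections import deque

def access_cost(adj, freq, caches):
    """Total access cost under the current cache placement (hop distances)."""
    cost = 0
    for j, holders in caches.items():
        dist = {h: 0 for h in holders}          # multi-source BFS from all holders
        queue = deque(holders)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for i in adj:
            cost += freq.get(i, {}).get(j, 0) * dist[i]
    return cost

def cga(adj, freq, servers, capacity):
    """Greedy: repeatedly cache the (node, item) pair with the largest
    benefit (reduction in total access cost) until no placement helps.
    servers[j] = server node of item j; capacity[v] = items node v can hold."""
    caches = {j: {s} for j, s in servers.items()}
    used = {v: 0 for v in adj}
    while True:
        base = access_cost(adj, freq, caches)
        best, best_gain = None, 0
        for v in adj:
            if used[v] >= capacity[v]:
                continue                        # node v's memory is full
            for j in caches:
                if v in caches[j]:
                    continue
                caches[j].add(v)                # tentatively place j at v
                gain = base - access_cost(adj, freq, caches)
                caches[j].remove(v)
                if gain > best_gain:
                    best, best_gain = (v, j), gain
        if best is None:
            return caches                       # no placement has positive benefit
        v, j = best
        caches[j].add(v)                        # commit the best placement
        used[v] += 1

# On the path 0-1-2-3 with item 'D' at node 0 and node 3 as the only
# client, greedy first caches 'D' at node 3 (benefit 15), then stops.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
result = cga(adj, {3: {'D': 5}}, {'D': 0}, {v: 1 for v in adj})
```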


Proof Sketch
  • L: greedy solution; C: total benefit of greedy
  • L’: optimal solution; O: total benefit of optimal
  • G’: modified network of G, in which each node
    • has twice the memory capacity it has in G
    • holds the data items selected by both CGA and the optimal solution
  • O’: total benefit for G’ = sum of the benefits of adding L and then L’, in that order
  • O ≤ O’ = C + ∑ (benefit of L’ w.r.t. L)
      ≤ C + ∑ (benefit of L’ w.r.t. ∅)
      ≤ C + C = 2C
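One standard way to make the last step precise (my sketch of the usual submodularity argument; the paper's exact accounting may differ) pairs each pick $a'_k$ of the optimal solution with greedy's pick $a_k$ at the same step, writing $\Delta(x \mid S)$ for the marginal benefit of adding $x$ to solution $S$ and $L_{k-1}$ for the greedy solution after $k-1$ picks:

```latex
O \le O' = C + \sum_k \Delta(a'_k \mid L)
  \le C + \sum_k \Delta(a'_k \mid L_{k-1})  % submodularity: L_{k-1} \subseteq L
  \le C + \sum_k \Delta(a_k  \mid L_{k-1})  % greedy maximizes benefit at step k
  = C + C = 2C .
```

Since the total benefit of greedy is at least half the optimal benefit, the 1/2-approximation follows.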


Distributed Greedy Algorithm (DGA)
  • Nearest-cache table
    • maintains the nearest cache node for each data item
    • if the node itself caches a data item, it also maintains the second-nearest cache
    • maintenance of the nearest-cache and second-nearest-cache entries, with correctness guarantees
    • assumes distance values are available from the underlying routing protocol
  • Localized caching policy


Maintenance of Nearest-cache Table
  • When node i caches data item Dj:
    • notify the server of Dj (the server maintains a cache list Cj for Dj)
    • broadcast (i, Dj) to neighbors
  • On receiving (i, Dj):
    • if i is nearer than the current nearest cache of Dj, update the table and re-broadcast to neighbors
    • else forward it to the nearest cache of i
  • When node i deletes data item Dj:
    • get Cj from the server of Dj
    • broadcast (i, Dj, Cj) to neighbors
  • On receiving (i, Dj, Cj):
    • if i is the current nearest cache for Dj, update the entry using Cj and re-broadcast
    • else forward it to the nearest cache of i
  • Servers periodically broadcast their cache lists
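The receive rules above can be sketched as per-node handlers (class and method names are hypothetical; `dist` is assumed to come from the underlying routing protocol, and the caller re-broadcasts to neighbors when a handler returns True):

```python
class CacheNode:
    """Minimal sketch of a node's nearest-cache table."""

    def __init__(self, node_id, dist):
        self.id = node_id
        self.dist = dist          # dist(a, b): hop distance between nodes a and b
        self.nearest = {}         # item -> nearest known cache node
        self.second_nearest = {}  # for items cached here (maintenance omitted)

    def on_cache_add(self, i, item):
        """Handle broadcast (i, item): node i started caching `item`.
        Returns True if the table changed (caller should re-broadcast)."""
        current = self.nearest.get(item)
        if current is None or self.dist(self.id, i) < self.dist(self.id, current):
            self.nearest[item] = i      # i is closer: adopt it
            return True
        return False                    # suppress: current entry is closer

    def on_cache_delete(self, i, item, cache_list):
        """Handle broadcast (i, item, Cj): node i dropped `item`;
        `cache_list` is the server's cache list Cj for the item."""
        if self.nearest.get(item) == i:
            # recompute the nearest cache from the server-provided list
            self.nearest[item] = min(cache_list,
                                     key=lambda c: self.dist(self.id, c))
            return True
        return False

# Nodes on a line, so hop distance is just |a - b|.
n = CacheNode(5, lambda a, b: abs(a - b))
n.nearest['D'] = 0                      # initially only the server (node 0)
n.on_cache_add(3, 'D')                  # node 3 is closer -> adopt, re-broadcast
n.on_cache_add(8, 'D')                  # node 8 is farther -> suppressed
n.on_cache_delete(3, 'D', [0, 8])       # node 3 left -> fall back to node 8
```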


Localized caching policy
  • Observe local traffic and compute the local benefit of caching or removing a data item
  • Cache the most “beneficial” data items
  • Use local benefit per unit of data item size for cache-replacement decisions
    • A benefit threshold suppresses caching-related traffic
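A sketch of a benefit-density replacement rule in this spirit (my own simplification, not the paper's exact policy; `cache` maps each cached item to its (size, local benefit) pair):

```python
def local_benefit(freq_j, d_old, d_new):
    """Local benefit of caching item j here: locally observed access
    traffic times the drop in distance to the nearest copy."""
    return freq_j * (d_old - d_new)

def should_cache(item_size, benefit, cache, free_space, threshold):
    """Cache an item if its benefit density (benefit per unit size) beats a
    threshold and it either fits, or beats the least-valuable cached items.
    Returns (decision, list of items to evict)."""
    density = benefit / item_size
    if density < threshold:
        return False, []                # suppress low-benefit churn
    if item_size <= free_space:
        return True, []                 # fits without eviction
    # evict lowest-density items until the new item fits
    victims, reclaimed = [], free_space
    for it, (sz, ben) in sorted(cache.items(),
                                key=lambda kv: kv[1][1] / kv[1][0]):
        if ben / sz >= density:
            break                       # remaining items are all more valuable
        victims.append(it)
        reclaimed += sz
        if reclaimed >= item_size:
            return True, victims
    return False, []

# 5 local accesses, distance drops from 3 hops to 0 -> local benefit 15.
b = local_benefit(5, 3, 0)
# Evict low-benefit 'a' to make room; keep high-benefit 'b'.
ok, victims = should_cache(2, 4, {'a': (2, 1), 'b': (2, 10)}, 0, 1)
```

The threshold keeps a node from caching (and broadcasting nearest-cache updates for) items whose benefit is marginal.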


Performance Evaluation
  • CGA vs. DGA Comparison
  • DGA vs. HybridCache Comparison


“Supporting Cooperative Caching in Ad Hoc Networks” (Yin & Cao, INFOCOM ’04):
  • CacheData – caches passing-by data items
  • CachePath – caches the path to the nearest cache
  • HybridCache – caches the data item if it is small enough, otherwise caches the path to it
  • The only prior work giving a purely distributed cache placement algorithm under memory constraints


CGA vs. DGA - Random network of 100 to 500 nodes in a 30 x 30 region
  • Parameters:
    • topology-related -- number of nodes, transmission radius
    • application-related -- number of data items, number of clients
    • problem constraint -- memory capacity
  • Summary of simulation results:
    • CGA performs slightly better by exploiting global information
    • DGA performs quite close to CGA
    • The performance difference decreases as memory capacity increases


Varying Number of Data Items and Memory Capacity – transmission radius = 5, number of nodes = 500


Varying Network Size and Transmission Radius - number of data items = 1000, each node’s memory capacity = 20 units


DGA vs. HybridCache
  • Simulation setup:
    • ns-2; routing protocol is DSDV
    • 2000 m x 500 m area
    • Random waypoint model; 100 nodes move at speeds within (0, 20 m/s)
    • Transmission radius = 250 m, bandwidth = 2 Mbps
  • Simulation metrics:
    • Average query delay
    • Query success ratio
    • Total number of messages


  • Server model:
    • Two servers, 1000 data items: even-id items on one server, odd-id items on the other
    • Data size: [100, 1500] bytes
  • Client model:
    • A single stream of read-only queries
  • Data access models:
    • Spatial access pattern: access frequency depends on geographic location
    • Random pattern: each node accesses 200 data items chosen randomly from the 1000
  • Naïve caching (baseline): caches any passing-by item if there is free space; uses LRU for cache replacement
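The Naïve baseline amounts to opportunistic caching with LRU eviction; a minimal sketch assuming uniform-size items (hypothetical names):

```python
from collections import OrderedDict

class NaiveLRUCache:
    """Naive baseline: cache any passing-by item if space allows,
    evicting the least-recently-used item otherwise."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()       # item id -> data, oldest first

    def see(self, item_id, data):
        """Called for every data item that passes through this node."""
        if item_id in self.items:
            self.items.move_to_end(item_id)   # refresh recency
            return
        if len(self.items) >= self.capacity:
            self.items.popitem(last=False)    # evict the LRU item
        self.items[item_id] = data

    def lookup(self, item_id):
        """Serve a query locally if the item is cached; refresh recency."""
        if item_id in self.items:
            self.items.move_to_end(item_id)
            return self.items[item_id]
        return None

# Capacity 2: after seeing items 1, 2, 3, item 1 has been evicted.
c = NaiveLRUCache(2)
c.see(1, 'a'); c.see(2, 'b'); c.see(3, 'c')
```

Unlike DGA, this policy never weighs how much a cached copy actually reduces network-wide access cost, which is why both DGA and HybridCache outperform it below.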

Summary of Simulation Results
  • Both HybridCache and DGA outperform Naïve approach
  • DGA outperforms HybridCache in all metrics
    • For frequent queries and small cache sizes, DGA has much better average query delay and query success ratio
    • Under high mobility, DGA has slightly worse average delay but a much better query success ratio


Conclusions and Future work
  • Data caching problem under memory constraint
  • Provable approximation algorithm
  • Feasible distributed implementation
  • Future work:
    • Reduce nearest-cache table size
    • Node failure
    • Game-theoretic analysis of caching benefit?





Correctness of the maintenance
  • Nearest-cache table is correct
    • For a node k whose nearest-cache entry must change in response to a new cache i, every intermediate node between k and i must also change its entry
  • Second-nearest cache is correct
    • For a cache node k whose second-nearest cache should change to i in response to a new cache i, there exist two distinct neighboring nodes i1, i2 such that the nearest cache of i1 is k and the nearest cache of i2 is i