
Benefit-based Data Caching in Ad Hoc Networks

Bin Tang, Himanshu Gupta and Samir Das

Department of Computer Science

Stony Brook University

ICNP'06


Outline

  • Motivation

  • Problem Statement

  • Algorithm and Protocol Design

  • Performance Evaluation

  • Conclusions and future work



Motivation

  • Ad hoc networks are resource constrained

    • Scarce network bandwidth

    • Limited battery energy and memory

  • Caching can reduce access/communication cost, and thus save energy and bandwidth

  • Our work is the first to present a distributed caching implementation based on an approximation algorithm



Problem Statement

  • Given:

    • Ad hoc network graph G(V,E)

    • Multiple data items P, each stored at its server node

    • Access frequency of each node for each data item

    • Memory constraint of each node

  • Goal:

    • Select cache nodes to minimize the total access cost:

      ∑i ∈ V ∑j ∈ P (access frequency of i for j × distance from i to the nearest cache of j)

    • Under memory constraint
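The objective above can be stated directly in code. Here is a minimal sketch, assuming hop distances and access frequencies are given as lookup tables; all names are illustrative, not from the paper:

```python
def total_access_cost(nodes, items, freq, dist, caches):
    """Sum over all nodes i and items j of
    (access frequency of i for j) x (distance from i to j's nearest cache).

    freq[i][j]  : node i's access frequency for item j
    dist[u][v]  : hop distance between nodes u and v in G
    caches[j]   : set of nodes holding item j (its server is always included)
    """
    cost = 0
    for i in nodes:
        for j in items:
            nearest = min(dist[i][c] for c in caches[j])
            cost += freq[i][j] * nearest
    return cost
```

Selecting cache nodes under the memory constraint amounts to choosing the `caches` sets that minimize this sum.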



Algorithm Design Outline

  • Centralized Greedy Algorithm (CGA)

    • Delivers a solution whose benefit is at least 1/2 of the optimal benefit (for uniform size data)

  • Distributed Greedy Algorithm (DGA)

    • Purely localized



Centralized Greedy Algorithm (CGA)

  • Benefit of caching a data item in a node: the reduction of total access cost

  • CGA iteratively caches data items into nodes’ memory pages, choosing at each step the placement that maximizes the benefit

  • Theorem: CGA delivers a solution whose total benefit is at least 1/2 of the optimal benefit for uniform-size data items

    • 1/4 for non-uniform-size data items
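The greedy loop can be sketched as follows for uniform-size items, with each node’s memory measured in items. The `benefit` callback is assumed to return the reduction in total access cost for one candidate placement; all names are our illustration, not the paper’s code:

```python
def cga(nodes, items, capacity, benefit):
    """Centralized greedy: repeatedly make the single most beneficial
    placement of an item at a node until no placement adds benefit.

    capacity[i]           : number of items node i can cache
    benefit(i, j, place)  : access-cost reduction if node i also caches
                            item j, given the current placement
    """
    placement = {i: set() for i in nodes}   # items cached at each node
    while True:
        best, best_gain = None, 0
        for i in nodes:
            if len(placement[i]) >= capacity[i]:
                continue                     # node i's memory is full
            for j in items:
                if j in placement[i]:
                    continue
                gain = benefit(i, j, placement)
                if gain > best_gain:
                    best, best_gain = (i, j), gain
        if best is None:                     # no positive-benefit placement left
            return placement
        i, j = best
        placement[i].add(j)
```

Each iteration is one greedy step of the theorem above; the loop stops as soon as no remaining placement has positive benefit.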



Proof Sketch

  • L: greedy solution; C: total benefit of the greedy solution

  • L’: optimal solution; O: total benefit of the optimal solution

  • G’: modified network of G, in which each node

    • has twice the memory capacity it has in G

    • holds the data items selected by both CGA and the optimal solution

  • O’: benefit for G’ = sum of the benefits of adding L and then L’, in that order

  • O ≤ O’ = C + ∑ benefit of L’ w.r.t. L

    ≤ C + ∑ benefit of L’ w.r.t. ∅

    ≤ C + C
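The inequality chain can be written out compactly; here benefit(X | Y) denotes the benefit of adding placement X to a network already holding Y (our notation, not the paper’s):

```latex
O \;\le\; O' \;=\; C + \sum \mathrm{benefit}(L' \mid L)
   \;\le\; C + \sum \mathrm{benefit}(L' \mid \emptyset)
   \;\le\; C + C \;=\; 2C
\qquad\Longrightarrow\qquad C \;\ge\; \tfrac{1}{2}\,O .
```

The middle step holds because benefits only shrink when more caches are already present; the final step is the slide’s greedy-choice argument, since every greedy pick had the maximum benefit available at its step.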



Distributed Greedy Algorithm (DGA)

  • Nearest-cache table

    • maintains the nearest cache node for each data item

    • if a node caches a data item, it also maintains the second-nearest cache of that item

    • maintenance of the nearest-cache and second-nearest-cache entries, with correctness guarantees

    • assumes distance values are available from the underlying routing protocol

  • Localized caching policy



Maintenance of Nearest-cache Table

  • When node i caches data item Dj:

    • notify the server of Dj (the server maintains a cache list Cj for Dj)

    • broadcast (i, Dj) to neighbors

  • On receiving (i, Dj):

    • if i is nearer than the current nearest cache of Dj, update the table and re-broadcast to neighbors

    • else send it to the node’s nearest cache

  • When node i deletes Dj:

    • get Cj from the server of Dj

    • broadcast (i, Dj, Cj) to neighbors

  • On receiving (i, Dj, Cj):

    • if i is the current nearest cache for Dj, update using Cj and re-broadcast

    • else send it to the node’s nearest cache
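The add-cache rules can be sketched as message handlers on a node object. This is a simplified in-memory model: class and field names are ours, and the server notification and second-nearest forwarding are only noted in comments:

```python
class Node:
    """Simplified node holding a nearest-cache table (illustrative model)."""

    def __init__(self, node_id, dist):
        self.id = node_id
        self.dist = dist          # dist[u] = hop distance to node u
        self.nearest = {}         # nearest[item] = current nearest cache node
        self.neighbors = []       # neighbor Node objects

    def cache_item(self, item):
        """This node starts caching `item`: record itself, flood AddCache."""
        self.nearest[item] = self.id
        # (also: notify the item's server, which maintains the cache list Cj)
        for nbr in self.neighbors:
            nbr.on_add_cache(self.id, item)

    def on_add_cache(self, i, item):
        """Handle an AddCache(i, item) message: node i now caches `item`."""
        current = self.nearest.get(item)
        if current is None or self.dist[i] < self.dist[current]:
            self.nearest[item] = i            # i is nearer: update ...
            for nbr in self.neighbors:        # ... and re-broadcast
                nbr.on_add_cache(i, item)
        # else: the slide forwards the message toward the nearest cache,
        # which lets cache nodes maintain second-nearest entries (omitted)
```

The flooding terminates: once a node’s entry already points at i, the distance test fails on any further copy of the same message.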



Mobility

  • Servers periodically broadcast their cache lists



Localized Caching Policy

  • Each node observes local traffic and calculates the local benefit of caching or removing a data item

  • Caches the most “beneficial” data items

  • Uses local benefit per unit data-item size as the cache-replacement metric

    • A benefit threshold suppresses update traffic
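One way to realize the benefit-per-size replacement rule with a threshold is sketched below; the function and its signature are our illustration, not the paper’s protocol:

```python
def should_replace(cached, candidate, free_space, threshold):
    """Decide whether to admit `candidate` into the local cache.

    cached     : list of (size, local_benefit) pairs for items in the cache
    candidate  : (size, local_benefit) pair for the newly observed item
    Returns the list of cached items to evict, [] if it fits as-is,
    or None if caching the candidate is not worthwhile.
    """
    size, gain = candidate
    # Evict lowest benefit-density items first until the candidate fits.
    victims, freed, lost = [], free_space, 0
    for item in sorted(cached, key=lambda it: it[1] / it[0]):
        if freed >= size:
            break
        victims.append(item)
        freed += item[0]
        lost += item[1]
    if freed < size:
        return None               # cannot make enough room
    if gain - lost < threshold:
        return None               # net benefit too small: suppress the change
    return victims
```

The threshold test is what keeps a marginally better item from churning the cache and generating update traffic.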



Performance Evaluation

  • CGA vs. DGA Comparison

  • DGA vs. HybridCache Comparison



“Supporting Cooperative Caching in Ad Hoc Networks” (Yin & Cao, INFOCOM’04):

  • CacheData – caches passing-by data items

  • CachePath – caches path to the nearest cache

  • HybridCache – caches data if size is small enough, otherwise caches the path to the data

  • The only prior work giving a purely distributed cache placement algorithm under memory constraint



CGA vs. DGA – random network of 100 to 500 nodes in a 30 × 30 region

  • Parameters:

    • topology-related -- number of nodes, transmission radius

    • application-related -- number of data items, number of clients

    • problem constraint -- memory capacity

  • Summary of simulation results:

    • CGA performs slightly better by exploiting global info

    • DGA performs quite close to CGA

    • The performance difference decreases with increasing memory capacity



Varying Number of Data Items and Memory Capacity – transmission radius = 5, number of nodes = 500



Varying Network Size and Transmission Radius – number of data items = 1000, each node’s memory capacity = 20 units



DGA vs. HybridCache

  • Simulation setup:

    • ns-2, with DSDV as the routing protocol

    • 2000 m × 500 m area

    • Random waypoint model; 100 nodes moving at speeds within (0, 20 m/s)

    • Transmission radius = 250 m, bandwidth = 2 Mbps

  • Simulation metrics:

    • Average query delay

    • Query success ratio

    • Total number of messages



Server Model:

  • Two servers, 1000 data items: even-id items on one server, odd-id items on the other

  • Data size: [100, 1500] bytes

Client Model:

  • A single stream of read-only queries

  • Data access models:

    • Spatial access pattern: access frequency depends on geographic location

    • Random pattern: each node accesses 200 data items chosen randomly from the 1000

  • Naïve caching (baseline): caches any passing-by item if there is free space; uses LRU for cache replacement




Summary of Simulation Results

  • Both HybridCache and DGA outperform Naïve approach

  • DGA outperforms HybridCache in all metrics

    • For frequent queries and small cache sizes, DGA has much better average query delay and query success ratio

    • Under high mobility, DGA has slightly worse average delay but a much better query success ratio



Conclusions and Future Work

  • Data caching problem under memory constraint

  • Provable approximation algorithm

  • Feasible distributed implementation

  • Future work:

    • Reduce nearest-cache table size

    • Node failure

    • Game-theoretic analysis of the caching benefit




Questions?



Correctness of the Maintenance

  • The nearest-cache table is correct

    • For any node k whose nearest-cache entry must change in response to a new cache i, every intermediate node between k and i must also change its entry

  • The second-nearest cache is correct

    • For any cache node k whose second-nearest cache should change to i in response to the new cache i, there exist two distinct neighboring nodes i1, i2 s.t. the nearest cache of i1 is k and the nearest cache of i2 is i
