
DIRECTED DIFFUSION



  1. DIRECTED DIFFUSION

  2. Directed Diffusion
  • Data-centric
  • A node requests data by sending an interest for named data
  • Data matching the interest is drawn toward that node
  • Intermediate nodes can cache or transform data
  • Based on attribute-value naming
  • Data aggregation
  • Interest propagation, data aggregation, and data propagation are determined by localized interactions
  • Trades off some energy efficiency for increased robustness

  3. Directed Diffusion
  • Consists of four elements: interests, data messages, gradients, and reinforcements
  • Interest: a query or interrogation that specifies what a user wants
  • Data: collected or processed information
  • Gradient: direction state created in each node that receives an interest
  • The gradient direction is toward the neighboring node from which the interest was received
  • Events start flowing toward the originators of interests along multiple gradient paths

  4. Naming
  • Task descriptions are named by a list of attribute-value pairs that describe a task
  • e.g.:
    type=wheeled vehicle // detect vehicle location
    interval=20ms // send events every 20 ms
    duration=10s // for the next 10 s
    rect=[-100,100,200,400] // from sensors within rectangle
  • Interests and Gradients
  • An interest is usually injected into the network at the sink
  • For each active task, the sink periodically broadcasts an interest message to each of its neighbors
  • The initial interest contains the specified rect and duration attributes but a larger interval attribute
  • This interest tries to determine whether there are any sensor nodes that detect the wheeled vehicle (it is exploratory)
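The attribute-value naming above can be sketched as follows. This is a hypothetical helper, not the SCADDS implementation; the rect layout is assumed to be (x1, y1, x2, y2), and real matching rules are richer than an exact type-plus-rect check.

```python
def make_interest(**attrs):
    """An interest/task description is just a set of attribute-value pairs."""
    return dict(attrs)

interest = make_interest(
    type="wheeled vehicle",       # detect vehicle location
    interval_ms=20,               # send events every 20 ms
    duration_s=10,                # for the next 10 s
    rect=(-100, 100, 200, 400),   # assumed layout: (x1, y1, x2, y2)
)

def matches(event, interest):
    """An event matches when its type equals the interest's type and its
    location falls inside the interest's rect."""
    if event["type"] != interest["type"]:
        return False
    x, y = event["location"]
    x1, y1, x2, y2 = interest["rect"]
    return x1 <= x <= x2 and y1 <= y <= y2
```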

  5. Interests
  • Soft state, periodically refreshed by the sink
  • The sink resends the same interest with a monotonically increasing timestamp attribute, because interests are not reliably transmitted through the network
  • A higher refresh rate increases robustness to lost interests, at the cost of overhead
  • Every node has an interest cache storing each distinct interest
  • Interest entries contain no information about the sink, only about the immediately previous hop
  • Two interests with overlapping rect attributes are aggregated into a single interest entry
  • e.g.:
    type=wheeled vehicle
    interval=1s
    rect=[-100,200,200,400]
    timestamp=01:20:40
    expires at=01:30:40
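One simple reading of "overlapping rects are aggregated into a single entry" is to merge them into their bounding box; the slides do not pin down the merge rule, so this is an illustrative sketch only.

```python
def overlaps(r1, r2):
    """True when two rects (x1, y1, x2, y2) share any area."""
    return (r1[0] <= r2[2] and r2[0] <= r1[2] and
            r1[1] <= r2[3] and r2[1] <= r1[3])

def aggregate(r1, r2):
    """Merge two overlapping rects into their bounding box so that both
    interests can share a single interest-cache entry."""
    assert overlaps(r1, r2)
    return (min(r1[0], r2[0]), min(r1[1], r2[1]),
            max(r1[2], r2[2]), max(r1[3], r2[3]))
```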

  6. Interests
  • When a node receives an interest, it checks whether the interest exists in its cache
  • If there is no matching entry, the node creates one (with a gradient and data rate)
  • If the interest exists but has no gradient for the sender, the node adds a gradient and updates the timestamp and duration fields
  • If the interest exists and already has a gradient, the node just updates the timestamp and duration
  • When a gradient expires, it is removed from the interest entry
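The three cache-update cases above can be sketched as below. The state layout is hypothetical and simplified: entries are keyed by the interest's type only, whereas real matching uses the full attribute set, and gradient expiry is omitted.

```python
def interval_to_rate(interest):
    """A gradient's data rate is the inverse of the requested interval."""
    return 1.0 / interest["interval_s"]

def on_interest(cache, interest, prev_hop):
    """Apply the interest-cache update rules for an interest received
    from neighbor `prev_hop`."""
    entry = cache.get(interest["type"])
    if entry is None:
        # No matching entry: create one with a gradient toward prev_hop.
        cache[interest["type"]] = {
            "gradients": {prev_hop: interval_to_rate(interest)},
            "timestamp": interest["timestamp"],
            "duration_s": interest["duration_s"],
        }
        return
    if prev_hop not in entry["gradients"]:
        # Entry exists but has no gradient for this neighbor: add one.
        entry["gradients"][prev_hop] = interval_to_rate(interest)
    # With or without a new gradient, refresh the soft state.
    entry["timestamp"] = interest["timestamp"]
    entry["duration_s"] = interest["duration_s"]
```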

  7. Interests (diffusion)
  • After receiving an interest, a node may decide to resend the interest to a subset of its neighbors
  • To those neighbors, the interest appears to originate from the sending node, although it came from a distant sink (local interaction)
  • Not all received interests are resent
  • If a node has recently resent a matching interest, it may suppress the received one
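The suppression rule can be sketched with a per-type record of the last resend time; the state layout and the window value are hypothetical.

```python
def maybe_resend(node, interest, now, suppress_window=0.5):
    """Decide whether to rebroadcast an interest. Suppress it if a matching
    interest was resent within `suppress_window` seconds (arbitrary value)."""
    last = node["last_resent"].get(interest["type"])
    if last is not None and now - last < suppress_window:
        return False          # recently resent a matching interest: suppress
    node["last_resent"][interest["type"]] = now
    return True               # resend to (a subset of) neighbors
```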

  8. Gradient Establishment
  • Every pair of neighboring nodes establishes a gradient toward each other
  • These two-way gradients mean a node can receive one copy of each low-rate event from every neighbor
  • Reinforcement is the solution to this redundancy
  • A gradient includes both a data rate and a direction in which to send events

  9. Data Propagation
  • When a sensor node receives a data message, it searches its interest cache for a matching interest entry
  • If one matches, it checks its data cache (which keeps track of recently seen data items)
  • Advantage of the data cache: loop prevention
  • By examining the data cache, the incoming data rate can also be determined
  • If the message already exists in the data cache, it is silently dropped
  • If not, it is added to the data cache and resent to the neighbors
  • To resend a received data message, the node examines its gradient list
  • If all gradients have a data rate greater than or equal to the rate of incoming events (meaning more data is wanted), the node resends the data to its neighbors
  • If some gradients have lower data rates, the node may downconvert to the appropriate gradient rates
  • If no interest entry matches, the data message is silently dropped
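The forwarding decision above can be sketched as follows, under a hypothetical state layout. Downconversion is only noted in a comment: lower-rate gradients would receive a subsampled stream rather than every event.

```python
def on_data(node, event):
    """Data-propagation rules: match the interest cache, use the data cache
    for loop prevention, then forward along gradients whose requested rate
    is at least the incoming event rate. Returns the forwarding targets."""
    entry = node["interest_cache"].get(event["type"])
    if entry is None:
        return []             # no matching interest: drop silently
    if event["id"] in node["data_cache"]:
        return []             # recently seen item: drop (loop prevention)
    node["data_cache"].add(event["id"])
    # Gradients wanting at least this rate get the event as-is; lower-rate
    # gradients would instead get a downconverted (subsampled) stream.
    return [nbr for nbr, rate in entry["gradients"].items()
            if rate >= event["rate"]]
```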

  10. Reinforcement for Path Establishment
  • The sink periodically diffuses an interest for a low-rate event (exploratory events)
  • Once a source detects a matching target, it sends exploratory events toward the sink (along multiple paths)
  • After the sink starts receiving these, it reinforces one particular neighbor in order to draw down real data

  11. Positive Reinforcement
  • Local rule: select an empirically low-delay path
  • Reinforce any neighbor from which the node receives a previously unseen event
  • To reinforce that neighbor, the sink resends the original interest message with a smaller interval (higher data rate):
    type=wheeled vehicle
    interval=10ms
    rect=[-100,200,200,400]
    timestamp=01:22:35
    expires at=01:30:40
  • When the neighboring node receives this interest, it notices that it already has a gradient toward the sender and that the interval is smaller
  • If this new data rate is also higher than that of any existing gradient (the outflow from this node has increased), the node must in turn reinforce at least one of its own neighbors
  • Neighbors that are already sending data at the higher rate do not need to be reinforced
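The local rule can be sketched as: the first neighbor to deliver a previously unseen event is, empirically, on a low-delay path, so it receives the original interest with a smaller interval. The state layout is hypothetical; 10 ms matches the slide's example.

```python
def on_exploratory_event(node, event, sender):
    """Reinforce the neighbor that delivers a previously unseen event first.
    Returns (neighbor, reinforcing interest), or None if the event was
    already seen via another neighbor."""
    if event["id"] in node["data_cache"]:
        return None                   # not a previously unseen event
    node["data_cache"].add(event["id"])
    boosted = dict(node["interest"])
    boosted["interval_s"] = 0.010     # 10 ms instead of the exploratory 1 s
    return (sender, boosted)          # resend the original interest here
```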

  12. Local Repair for Failed Paths
  • Intermediate nodes on a previously reinforced path can also apply the reinforcement rules (useful for failed or degraded paths)
  • A node C detects degradation:
    • by noticing that the event reporting rate from its upstream neighbor (toward the source) is now lower, or
    • by realizing that other neighbors have been transmitting previously unseen location estimates
  • It then applies the reinforcement rules
  • Problem: repair can waste resources
  • One way to avoid this is to interpolate location estimates from the events

  13. Negative Reinforcement
  • If the sink reinforces A but then receives a new event from B, it will reinforce the path through B
  • If the path through B is better, the sink negatively reinforces the path through A
  • Two mechanisms:
    • Time out all data gradients in the network unless they are explicitly reinforced: the sink periodically reinforces B and stops reinforcing A
    • Explicitly degrade the path through A by sending a negative reinforcement (an interest with a lower data rate); when A receives it, A degrades its gradient toward the sink
  • Cost: decreased resource utilization
  • Which neighbor should be negatively reinforced? One from which no new events have been received within a window of N events or time T
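The two ingredients above, choosing whom to degrade and what a negative reinforcement looks like, can be sketched as follows. Only the time-window variant of the selection rule is shown (the slide also allows a window of N events), and the data layout and interval value are illustrative.

```python
def neighbors_to_degrade(last_new_event, now, window_t):
    """Pick neighbors to negatively reinforce: those from which no
    previously unseen event has arrived within time `window_t`.
    `last_new_event` maps neighbor -> time of its last new event."""
    return sorted(nbr for nbr, t in last_new_event.items()
                  if now - t > window_t)

def negative_reinforcement(interest):
    """A negative reinforcement is the interest resent with a lower data
    rate, i.e. a larger interval (value here is illustrative)."""
    degraded = dict(interest)
    degraded["interval_s"] = 1.0      # back to the exploratory rate
    return degraded
```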

  14. Self-Organization
  • Nodes start with zero knowledge of the identity or topology of other nodes
  • Each node knows only its own identity
  • The base station is directly connected to the host PC
  • The base periodically broadcasts its identity and the fact that it is connected to the PC
  • Devices at one-hop distance receive this information and use it to update their routing information
  • They then rebroadcast a new routing update announcing that there is a path to the sink through them
  • To prevent cycles, time is divided into eras, and route updates are broadcast once per era
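The era mechanism can be sketched as: a node accepts at most one route update per era, so a stale or replayed update cannot pull routes into a loop. Message and state layouts here are hypothetical.

```python
def on_route_update(node, update):
    """Process an era-stamped route update. Accepting at most one update
    per era prevents routing cycles. Returns the update to rebroadcast,
    or None when the update is stale for this era."""
    if update["era"] <= node["era"]:
        return None                   # stale era, or already updated this era
    node["era"] = update["era"]
    node["next_hop"] = update["sender"]
    node["hops_to_base"] = update["hops"] + 1
    # Rebroadcast: advertise a path to the sink through this node.
    return {"era": update["era"], "sender": node["id"],
            "hops": node["hops_to_base"]}
```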

  15. Tiny Diffusion
  • Tiny Diffusion Application Programmer's Interface (API)
  • Tiny Diffusion is based on the concept of data-centric or subject-based routing, as is the SCADDS data diffusion implementation
  • It provides an interface to access sensor data by naming attributes

  16. Tiny Diffusion
