
Distributed Multi-Scale Data Processing for Sensor Networks


Presentation Transcript


  1. Distributed Multi-Scale Data Processing for Sensor Networks Raymond S. Wagner, Ph.D. Thesis Defense, April 9, 2007

  2. Collaborators Marco Duarte J. Ryan Stinnett Véronique Delouille T.S. Eugene Ng David B. Johnson Albert Cohen Shu Du Richard Baraniuk Shriram Sarvotham

  3. Sensor Network Overview Collections of small battery-powered devices, called sensor nodes, that can: • sense data • process data • share data. Nodes form ad-hoc networks to exchange data.

  4. Data Collection Problem (figure: network bottleneck region near the sink) PROBLEM: centralized collection is very costly (power, bandwidth), especially near the sink.

  5. Distributed Processing Solution SOLUTION: nodes locally exchange data with neighbors, finding answers to questions in-network.

  6. Distributed Data Representations Dist. Source Coding · Dist. Compressed Sensing · Dist. Regression · Dist. Multi-Scale Analysis

  7. Novel Contributions New algorithms: • development of multi-scale transform • analysis of numerical stability. Support for algorithms: • survey of application communication requirements • development of protocols • development of API • analysis of energy cost

  8. Multi-scale Wavelet Analysis The wavelet transform (WT) provides an unconditional basis for a wide range of signal classes – a good choice for sparse representation when little is known about the signal.

  9. Multi-Resolution Analysis (MRA) Nested approximation spaces Vj−1 ⊂ Vj ⊂ Vj+1. Fix V0 with scaling-function basis set {φ0,k}, with φj,k(t) = 2^(j/2) φ(2^j t − k). Project f onto φj,k to find the scaling coefficient sj,k = ⟨f, φj,k⟩, or find it from previous-scale SCs as sj,k = Σn hn sj+1,2k+n.

  10. Wavelet Space Analysis Define difference spaces Wj s.t. Vj+1 = Vj ⊕ Wj. Give W0 the wavelet-function basis set {ψ0,k}, with ψj,k(t) = 2^(j/2) ψ(2^j t − k). Project f onto ψj,k to find the wavelet coefficient dj,k = ⟨f, ψj,k⟩, or find it from previous-scale SCs as dj,k = Σn gn sj+1,2k+n.

  11. Wavelet Analysis for Sensor Networks MRA assumes regularly spaced sample points and a power-of-two sample size – neither is likely in sensor networks.

  12. Wavelet Lifting In-place formulation of the WT – distributable, tolerates irregular sampling grids [Sweldens, 1995]. Starts with all nodes holding their scalar measurements as finest-scale scaling coefficients. At each scale j = {J−1, …, j0}, transform these into wavelet coefficients and coarser scaling coefficients. Iterate on the SCs down to j = j0, so that the measurements are replaced by wavelet coefficients plus the scale-j0 SCs.

  13. Lifting Stages Each transform scale decomposes into three stages: split, predict, update… SPLIT the scale-(j+1) nodes into even and odd sets.

  14. Lifting Stages Each transform scale decomposes into three stages: split, predict, update… PREDICT wavelet coeffs. at the odd nodes.

  15. Lifting Stages Each transform scale decomposes into three stages: split, predict, update… UPDATE scaling coeffs. at the even nodes.

  17. Lifting Stages Each transform scale decomposes into three stages: split, predict, update… GOAL: design split, P, U to distribute easily, tolerate grid irregularity, and provide a sparse representation.
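
The three stages above can be sketched for a 1-D irregular grid. This is a minimal illustration, not the thesis design: the predictor here is plain linear interpolation and the update folds half of each detail back into an even neighbor, whereas the thesis uses polynomial predicts and min-norm updates; the function name and the 0.5 weight are illustrative assumptions.

```python
import numpy as np

def lifting_scale(x, s):
    """One lifting scale on an irregularly spaced 1-D grid.

    x : sorted sample locations; s : scaling coefficients at those locations.
    Returns (even locations, updated even SCs, wavelet coefficients at odds).
    """
    # SPLIT: keep even-indexed nodes as scaling nodes; odds become wavelet nodes
    xe, xo = x[0::2], x[1::2]
    se, so = s[0::2], s[1::2]

    # PREDICT: estimate each odd sample by linear interpolation from the even
    # neighbors; the residual is the wavelet coefficient
    d = so - np.interp(xo, xe, se)

    # UPDATE: fold a fraction of each wavelet coefficient back into an even
    # neighbor so coarse-scale averages are roughly preserved (simplified)
    se = se.copy()
    se[:len(d)] += 0.5 * d
    return xe, se, d
```

On a field that the predictor captures exactly (here, a linear field), all wavelet coefficients vanish, which is the sparsity the transform is after.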

  18. Split Design (figure: grids at scales j+1, j, j−1) Goal: mimic the regular-grid split, s.t. roughly half the scale-(j+1) nodes carry scale-j scaling coefficients and the rest carry wavelet coefficients.

  19. Split Design Goal: mimic the regular-grid split, s.t. roughly half the scale-(j+1) nodes survive to scale j. Approach: • Pick a node, place it in the even (scaling) set • Place all nodes within a given radius of it into the odd (wavelet) set • Repeat until all elements of the scale-(j+1) grid are visited
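
The greedy split above might look like the following sketch; the rule for picking the next even node is simplified to "lowest index", and `radius` stands in for whatever neighborhood criterion the design actually uses.

```python
import numpy as np

def greedy_split(coords, radius):
    """Greedy split of scale-(j+1) nodes into evens (kept) and odds.

    Each chosen even node claims all unvisited nodes within `radius`
    as odds, mimicking an every-other-node split on a regular grid.
    coords : (n, 2) array of node locations.
    """
    n = len(coords)
    unvisited = set(range(n))
    evens, odds = [], []
    while unvisited:
        u = min(unvisited)            # deterministic pick; any rule works
        unvisited.discard(u)
        evens.append(u)
        # claim all remaining nodes within radius of the new even node
        dist = np.linalg.norm(coords - coords[u], axis=1)
        for v in [v for v in list(unvisited) if dist[v] <= radius]:
            unvisited.discard(v)
            odds.append(v)
    return evens, odds
```

On a unit-spaced line of six nodes with radius just over 1, this alternates evens and odds exactly as the regular-grid split would.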

  20. Split Example (figure: original grid and the scale-1 through scale-5 grids produced by repeated splitting)

  21. Predict Design Goal: encode the WC at each odd node as the difference from a summary of local neighborhood behavior. Approach: fit an order-m polynomial to the scale-(j+1) SCs at neighboring even nodes and evaluate it at the odd node's location. The WC for the odd node is the difference between its scale-(j+1) SC and this estimate.

  22. Predict Design • Consider candidate even nodes near the odd node • Pick the smallest-cost subset large enough to make the order-m fit well-posed • If no such subset exists, reduce the order m and repeat Step 1. The prediction weights depend only on m, d (dim. of the sample space), and the neighbor locations – given predict order m, a node must only specify its neighbors to find the weights.
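
One way to realize the order-m polynomial predict in 1-D, assuming a least-squares fit: the weights fall out of a pseudoinverse and depend only on the neighbor geometry, never on the data, so they can be computed once per node.

```python
import numpy as np

def predict_weights(x_nbrs, x0, m):
    """Weights w such that w @ s equals the value at x0 of the order-m
    least-squares polynomial fit to (x_nbrs, s). 1-D sketch.
    """
    # basis 1, (x - x0), (x - x0)^2, ... so the fitted polynomial's value
    # at x0 is just its constant coefficient
    V = np.vander(np.asarray(x_nbrs) - x0, m + 1, increasing=True)
    # constant coefficient of the LS fit is row 0 of the pseudoinverse
    return np.linalg.pinv(V)[0]

# wavelet coefficient at the odd node:
#   d = s_odd - predict_weights(x_nbrs, x_odd, m) @ s_nbrs
```

When the neighbors' SCs actually lie on a degree-m polynomial, the prediction is exact and the wavelet coefficient is zero.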

  23. Update Design Goal: enhance transform stability by preserving the average value encoded by SCs across scales. Approach: choose update weights so that the SCs, weighted by the integrals of their scaling functions, encode the same constant across scales. Use the min-norm solution [Jansen et al., 2001].
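
For a single moment condition, the minimum-norm update weights have a closed form. This sketch assumes one constraint per wavelet coefficient, with `I_evens` and `I_odd` standing in for the scaling-function integrals; the thesis's construction (after Jansen et al., 2001) may impose more structure.

```python
import numpy as np

def update_weights(I_evens, I_odd):
    """Minimum-norm weights u (one per even neighbor) satisfying the
    single moment condition  sum_k u[k] * I_evens[k] = I_odd,
    which preserves the integral of a constant across scales.
    """
    I = np.asarray(I_evens, dtype=float)
    # min-norm solution of a single linear constraint a @ u = b
    # is u = b * a / ||a||^2
    return I_odd * I / (I @ I)
```

The choice of the min-norm solution keeps the update weights small, which is what gives the stability benefit the slide mentions.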

  24. Transform Network Traffic (figure: example of the update and predict message exchanges between neighboring nodes)

  25. Coefficient Decay A function f is Cα at point t0 if there exist a polynomial p of degree ⌊α⌋ and some constant C such that |f(t) − p(t)| ≤ C|t − t0|^α near t0. We show that, if f is Cα at a node's location, then that node's wavelet-coefficient magnitude decays with scale at a rate governed by α, with constants that depend only on the grid and the transform design.

  26. WT Application: Distributed Compression IDEA: compress measurements by allowing only sensors with large-magnitude WCs to transmit to the sink.
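
A sketch of the thresholding idea: keep only the largest-magnitude WCs and treat the rest as zero. In the network, the sensors holding the small coefficients simply never transmit; the sink reconstructs from the coefficients it receives.

```python
import numpy as np

def compress(coeffs, n_keep):
    """Keep only the n_keep largest-magnitude coefficients;
    all others are treated as zero (their sensors stay silent)."""
    out = np.zeros_like(coeffs)
    keep = np.argsort(np.abs(coeffs))[-n_keep:]  # indices of largest |WC|
    out[keep] = coeffs[keep]
    return out
```

Because smooth fields yield rapidly decaying WCs, a small `n_keep` already captures most of the energy, which is what the MSE-vs-coefficient-count plots below measure.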

  27. Compression Evaluation Sample field classes: piecewise smooth across a discontinuity; globally smooth.

  28. Compressing Smooth Fields (plot: average MSE vs. number of coefficients, P only vs. P and U; 250 nodes, 100 trials)

  29. Compressing Piecewise-Smooth Fields (plot: average MSE vs. number of coefficients, P only vs. P and U; 250 nodes, 100 trials)

  30. Energy vs. Distortion (Smooth Field) (plot: MSE vs. bottleneck energy in Joules; 1000 nodes)

  31. Energy vs. Distortion (Smooth Field) (plot, annotated: energy to compute the WT)

  32. Energy vs. Distortion (Smooth Field) (plot, annotated: energy to dump all measurements to the sink)

  33. Energy vs. Distortion (Smooth Field) (plot, annotated: beneficial operating regime)

  34. Energy vs. Distortion (Piecewise-smooth Field) (plot: MSE vs. bottleneck energy in Joules; 1000 nodes)

  35. WT Application: Distributed De-noising (plot: PSNR vs. coefficient count; past a point, noise dominates) • in-network de-noising (requires inverse dist. WT) • compression with de-noising (guides threshold choice)
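
A sketch of coefficient thresholding for de-noising, using the standard universal threshold σ√(2 ln n) as a stand-in for whatever rule actually guides the threshold choice here:

```python
import numpy as np

def hard_threshold(coeffs, sigma, n):
    """Hard-threshold wavelet coefficients at the universal threshold
    sigma * sqrt(2 ln n): coefficients below it are assumed to be
    noise and are zeroed."""
    t = sigma * np.sqrt(2.0 * np.log(n))
    out = coeffs.copy()
    out[np.abs(out) < t] = 0.0
    return out
```

The same thresholding step serves both uses on the slide: applied in-network it de-noises (after an inverse distributed WT), and applied at the sink it tells compression how many coefficients are worth transmitting.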

  36. Implementation Lessons Implemented WT in Duncan Hall sensor network. Need to support common patterns with abstraction to ease algorithm prototyping. Surveyed IPSN 2003-2006; distilled common comm. patterns into network application programming interface (API) calls.

  37. Address-Based Sending Send to single address – source node sends a message to a single destination address, drawn from the node-ID space.

  38. Address-Based Sending Send to list of addresses – source node sends a message to multiple destination addresses, drawn from the node-ID space.

  39. Address-Based Sending Send to multicast address – source node sends a message to a single group address, drawn from the multicast address space.

  40. Address-Based API Calls sendSingle(data, address, effort, hopLimit); sendList(data, addList, effort, hopLimit); sendMulti(data, address, effort, hopLimit)

  41. Address-Based API Calls All three calls provide packet fragmentation.

  42. Address-Based API Calls The address arguments of sendSingle and sendList are drawn from the node-ID address space; sendMulti's address is drawn from the multicast group address space.

  43. Address-Based API Calls effort is an energy-based transmission-effort abstraction, applied on a per-packet basis.

  44. Address-Based API Calls hopLimit limits the number of forwarding hops to the destination.

  45. Region-Based Sending Send to hop radius – source node sends message to all nodes within specified number of radio hops

  46. Region-Based Sending Send to geographic radius – source node sends message to all nodes within specified geographic distance from its location

  47. Region-Based Sending Send to circle – source node sends message to nodes (single or many) within a specified geographic distance of specified center

  48. Region-Based Sending Send to polygon – source node sends message to nodes (single or many) within convex hull of specified list of vertex locations
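
A node receiving a sendPolygon packet must decide whether it lies inside the convex hull of the vertex list. A minimal membership test, assuming the vertices already arrive in counter-clockwise hull order (real deployments would also need the hull-construction step), could look like:

```python
def in_convex_polygon(px, py, vertices):
    """True if (px, py) lies inside (or on the boundary of) the convex
    polygon given by `vertices`, a list of (x, y) pairs assumed to be
    in counter-clockwise order."""
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # negative cross product: point is to the right of this edge,
        # hence outside a counter-clockwise convex polygon
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True
```

Each node can run this locally on the vertex list carried in the packet, so the region test costs no extra communication.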

  49. Region-Based API Calls sendHopRad(data, hopRad, effort, hopLimit); sendGeoRad(data, geoRad, outHops, effort, hopLimit); sendCircle(data, centerX, centerY, radius, single, outHops, effort, hopLimit); sendPolygon(data, vertCount, vertices, single, outHops, effort, hopLimit)

  50. Region-Based API Calls The leading arguments of each call (hopRad; geoRad; circle center and radius; polygon vertex count and vertices) give the region specification.
