
Distributed & Adaptive Data Compression in Wireless Sensor Networks



Presentation Transcript


  1. Distributed & Adaptive Data Compression in Wireless Sensor Networks
  Dr. Sudharman K. Jayaweera and Amila Kariyapperuma, ECE Department, University of New Mexico
  Ankur Sharma, Department of ECE, Indian Institute of Technology, Roorkee
  5th July, 2007
  Expand Your Engineering Skills (EYES), Summer Internship Program, 2007

  2. Introduction
  • Wireless Sensor Networks (WSN) consist of nodes that sense temperature, pressure, light, magnetic field, infrared, audio/video, etc.
  • An ad hoc WSN may require inter-sensor communication.

  3. Problem
  • Nodes have small physical dimensions and are battery operated, so the major concern is energy consumption.
  • Failure of nodes due to energy depletion can lead to partition of the sensor network and loss of critical information.
  • The application/system requires that every node know the data of every other node.

  4. Related Work
  • Energy-aware routing & efficient information processing [Shah and Rabaey, 2002].
  • Local compression & probabilistic estimation schemes [Luo, 2005].
  • Distributed compression & adaptive signal processing in sensor networks with a fusion center [Chou, 2003].

  5. Our Approach
  [Figure: sensors 1–4 each broadcast an i-bit compressed reading to the other sensors.]

  6. Proposed Algorithm
  • Sensor j predicts its own reading from its past readings and the readings of other sensors.
  • From the error between the predicted value and the actual value, sensor j calculates the number of compressed bits i using either:
  • the Chebyshev's inequality method, or
  • the exact error method.

  7. Code Construction
  • A codebook encodes the data X into i bits.
  • One underlying codebook, shared by all sensors, is NOT changed.
  • It supports multiple compression rates.

  8. A Tree-based Codebook
  [Figure: binary tree with branches labeled 0/1; each level i splits the root codebook into 2^i subcodebooks.]
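A minimal sketch of the tree-based codebook described on this slide, under the assumption (suggested by the encoder/decoder slides below) that the level-i subcodebook is selected by the i least-significant bits of a value's index in the root codebook; the names `root` and `subcodebook` are illustrative, not from the slides.

```python
# Hedged sketch: the root codebook holds 2**n reproduction values; at level i,
# the i least-significant bits of a value's index name the subcodebook it
# falls in, so the 2**i subcodebooks at each level partition the root codebook.
n = 4
root = list(range(2 ** n))                  # indices stand in for reproduction values

def subcodebook(bits, i):
    """Indices of root-codebook entries whose i LSBs equal `bits`."""
    return [c for c in root if c % (2 ** i) == bits]
```

Each of the 2^i subcodebooks at level i holds 2^(n-i) widely spaced values, which is what makes decoding with noisy side information possible.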

  9. Chebyshev's Inequality Method
  • To prevent decoding errors with i bits, the Chebyshev bound on the probability of decoding error determines the required value of i.
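The slide's exact expressions were lost with the images, so the following is a reconstruction of one standard reading of the method, not the authors' formula: Chebyshev's inequality P(|X − X̂| ≥ t) ≤ σ²/t² gives a tolerable noise level t = √(σ²/Pe) for a target error probability Pe, and i is then the smallest number of bits whose subcodebook spacing covers 2t.

```python
import math

# Hedged sketch (reconstruction, not the slide's lost formula): given the
# prediction-error variance sigma2 and a target decoding-error probability pe,
# Chebyshev bounds the tolerable prediction noise, and i is chosen so the
# level-i subcodebook spacing 2**i * step covers twice that noise.
def chebyshev_bits(sigma2, pe, step, n):
    t = math.sqrt(sigma2 / pe)                 # tolerable prediction noise
    i = math.ceil(math.log2(2.0 * t / step))   # spacing must cover 2 * t
    return max(1, min(i, n))                   # clamp to the n-bit root codebook
```

A tighter Pe demands a larger t and therefore more bits, which is the compression-vs-error trade-off shown in slides 20–23.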

  10. Exact Error Method
  • To prevent decoding errors using i bits: since we know the exact error in the prediction of the sensor data X, the number of bits follows directly.
  • Extra bits specifying the number of bits in the message are also sent.
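A sketch of the exact error method under the same assumptions as above (the slide's formula was lost with the image): knowing the exact prediction error, the sensor picks the smallest i whose subcodebook spacing covers twice the error, then adds a small header announcing i.

```python
import math

# Hedged sketch: err is the known prediction error X - Xhat, step the root
# codebook spacing, n the root codebook's bit width.  header_bits is the
# extra field that tells the decoder how many payload bits follow (slide 18
# uses 4 such bits).
def exact_error_bits(err, step, n, header_bits=4):
    if abs(err) < step / 2:                    # prediction already within one cell
        i = 1
    else:
        i = math.ceil(math.log2(2.0 * abs(err) / step))
    i = min(i, n)
    return i, i + header_bits                  # payload bits, total bits sent
```

Because i is matched to the actual error, decoding is always correct; the cost is the fixed header counted against the savings on slide 18.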

  11. Encoder Sensors
  • X is stored as the closest of the 2^n values in the root codebook (A/D converter).
  • X is then mapped to the i bits that specify the subcodebook at level i.
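A sketch of the two encoder steps above, assuming (from slide 17) a 12-bit A/D converter over [-128, 128] and the LSB-based subcodebook labeling used earlier; `encode` and its parameters are illustrative names.

```python
# Hedged sketch: the A/D converter stores X as the index of the closest of
# the 2**n uniformly spaced root-codebook values on [lo, hi]; the encoder
# output is just the i least-significant bits of that index, which name the
# level-i subcodebook containing X.
def encode(x, i, n=12, lo=-128.0, hi=128.0):
    step = (hi - lo) / 2 ** n               # 0.0625 for the 12-bit setup
    idx = round((x - lo) / step)
    idx = max(0, min(idx, 2 ** n - 1))      # clamp to the A/D dynamic range
    return idx % (2 ** i)                   # i LSBs sent over the air
```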

  12. Decoder Sensors
  • Decoders receive the i-bit code sequence f(X).
  • They traverse the tree, starting from the LSB of the code sequence, to find the appropriate subcodebook S.
  • They calculate the side information Y (their own prediction of X).
  • They decode the side information Y to the closest value in S.
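A matching decoder sketch under the same assumptions as the encoder above: the received bits pick the subcodebook S, and the decoder's own prediction Y is snapped to the closest value in S. Decoding is correct whenever |X − Y| is below half the spacing of S.

```python
# Hedged sketch (names are assumptions): S contains the root-codebook values
# whose indices are congruent to `bits` mod 2**i; the side information y is
# the receiving sensor's prediction of X.
def decode(bits, y, i, n=12, lo=-128.0, hi=128.0):
    step = (hi - lo) / 2 ** n
    S = [lo + k * step for k in range(2 ** n) if k % (2 ** i) == bits]
    return min(S, key=lambda v: abs(v - y))
```

For example, with i = 3 the values in S are 0.5 apart, so a prediction within 0.25 of the true reading decodes exactly.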

  13. Correlation Tracking
  • Linear prediction method: analytically tractable, and optimal when the readings can be modeled as i.i.d. Gaussian random variables.
  • The first sensor always sends its data compressed w.r.t. its own past data.
  • The prediction of X is a linear combination of past readings, weighted by the filter coefficients.
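The linear prediction itself reduces to an inner product; a minimal sketch, with `w` the filter coefficients and `u` the regressor (the sensor's own past readings plus the other sensors' readings, per the slide):

```python
# Minimal sketch: the predicted reading is the inner product of the
# adaptively tracked filter coefficients w with the regressor u.
def predict(w, u):
    return sum(wi * ui for wi, ui in zip(w, u))
```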

  14. Least-Squares Parameter Estimation
  • The prediction error is the difference between the actual and predicted values.
  • The filter coefficients are chosen to minimize the weighted least-squares error.
  • The least-squares filter coefficient vector is recomputed at each time k.
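The slide's closed-form expression was lost with the image; the following is a hedged reconstruction of exponentially weighted least squares (the weighting and forgetting factor `lam` are assumptions consistent with the RLS slide that follows), solved batch-style via `numpy.linalg.lstsq`:

```python
import numpy as np

# Hedged sketch: A stacks one regressor vector per row, y the corresponding
# targets.  The coefficients minimize sum_k lam**(K-1-k) * (y_k - A_k @ w)**2,
# i.e. recent samples count more; solved by scaling rows and calling lstsq.
def ls_coefficients(A, y, lam=0.98):
    K = len(y)
    w_sqrt = np.sqrt(lam ** np.arange(K - 1, -1, -1))   # sqrt of sample weights
    coef, *_ = np.linalg.lstsq(A * w_sqrt[:, None], y * w_sqrt, rcond=None)
    return coef
```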

  15. Recursive Least-Squares (RLS) Algorithm
  • The filter coefficient computation is performed adaptively using RLS.
  • For initialization, each sensor sends uncoded data samples.
  • In our approach, the reference sensor updates the corresponding coefficients and sends them to all other sensors.
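The slide's update equations were lost with the image; this is the standard textbook RLS recursion (variable names are assumptions), which folds in one regressor/target pair per call instead of re-solving the batch problem:

```python
import numpy as np

# Standard RLS sketch: P is the inverse (weighted) correlation matrix, w the
# filter; each call incorporates one regressor u and target d with forgetting
# factor lam.  Initialize with w = 0 and P = delta * I, then feed the uncoded
# training samples.
def rls_step(w, P, u, d, lam=0.98):
    Pu = P @ u
    k = Pu / (lam + u @ Pu)                 # gain vector
    e = d - w @ u                           # a-priori prediction error
    w = w + k * e
    P = (P - np.outer(k, Pu)) / lam
    return w, P
```

On noiseless linear data the recursion converges to the true coefficients within a few tens of updates, which matches the 25-sample training phase on slide 17.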

  16. Decoding Errors
  • There are no decoding errors in the exact error method.
  • In Chebyshev's method, the number of encoding bits is specified within a given probability of error and updated after every 100 samples.
  • This leads to a few decoding errors, but results in higher compression.

  17. Implementation & Performance
  • Simulations were performed on humidity measurements.
  • We assumed a 12-bit A/D converter with a dynamic range of [-128, 128].
  • About 18,000 samples were simulated for each sensor (90,000 in total).
  • Sensor orderings are randomized every 500 samples.
  • For RLS training, the first 25 samples of each sensor are transmitted without any compression.
  • Coefficients are updated and shared after every 500 samples.

  18. Exact Error Implementation
  • With each code sequence, an extra 4 bits specifying the number of bits are also sent.
  • Decoding error = 0
  • Average energy saving = 43.34%

  19. Tolerable Noise vs. Prediction Noise

  20. Chebyshev's Inequality Method
  • Encoding bits are specified every 100 samples.
  • Case I: probability of error (Pe) = 0.5%
  • Average decoding error = 0.07%
  • Average energy saving = 45.74%

  21. Tolerable Noise vs. Prediction Noise

  22. Chebyshev's Inequality Method
  • Case II: probability of error (Pe) = 1.0%
  • Average decoding error = 0.13%
  • Average energy saving = 49.74%

  23. Chebyshev's Inequality Method
  • Case III: probability of error (Pe) = 1.5%
  • Average decoding error = 2.29%
  • Average energy saving = 52.27%

  24. Comparison
  Exact Error Method:
  • ZERO probability of decoding error
  • Compression is low (due to the extra bit information)
  • Strict bound
  • 'Instantaneous approach'
  Chebyshev's Method:
  • Probability of decoding error within a required bound
  • Higher compression can be achieved by varying the required probability of error
  • Loose bound
  • 'Average approach'

  25. Probability of Error vs. Energy Savings

  26. For Temperature Data
  • Exact error method: average energy saving = 56.66%, average decoding error = 0
  • Chebyshev's method (Pe = 0.01): average energy saving = 66.98%, average decoding error = 0.61%

  27. For Light Data
  • Exact error method: average energy saving = 33.52%, average decoding error = 0
  • Chebyshev's method (Pe = 0.01): average energy saving = 19.29%, average decoding error = 1.13%

  28. Conclusions
  • Energy savings achieved in our simulations are conservative estimates of what can be achieved in practice.
  • Further work can be done on better predictive models and a better probability-of-error bound.
  • The scheme can be integrated with an energy-saving routing algorithm to increase the savings.

  29. Thank You! Queries, please.
