
Data loss and reconstruction in sensory networks


Presentation Transcript


  1. Data loss and reconstruction in sensory networks 于倡和 5120309435 陈晨 5120309396

  2. Introduction • Related Work • Matrix features • Algorithm • Improvements

  3. Introduction • Wireless sensor networks (WSNs) are expected to be used in many applications such as forest fire detection and habitat monitoring. Data gathering is one of the classical problems to be tackled in WSNs.

  4. Concerning WSNs • Typically, a data-gathering sensor network consists of a sink and many sensor nodes. The sink serves as a gateway to connect the sensor network and the Internet. Over the Internet, users can query the network by sending an inquiry packet to the sink. After receiving a user query, the sink forwards it to the sensor nodes. Once the responses from the sensor nodes come back, the sink sends the query results back to the user.

  5. Meaning of our work • However, the amount of data transmitted over a WSN is usually very large. Moreover, data loss is an inevitable problem during data transmission. To address these problems, we propose a data gathering scheme that exploits both the low-rank and short-term stability features of the sensor data matrix.

  6. RELATED WORK • In our research, we consider a sensor network consisting of N nodes. Each node is assigned an integer ID, n, which is in the range of 1 to N. • We also assume that time is divided into equal-sized time slots. During each time slot, every sensor node probes the environment and forwards its reading to the sink. • As a result, N readings can be collected at the sink for each time slot. For T time slots, N × T readings can be gathered. These readings can be organized into an N × T matrix X (X ∈ ℝ^(N×T)).

  7. In order to reduce the traffic, we make each sensor node forward its readings to the sink only with a preset probability (i.e., a preset sampling ratio). As a result, only a fraction of the readings from each node are transmitted to the sink. In our research, when an entry in X is missing or not available, we use zero as a placeholder for that entry. • So we obtain a binary matrix B that marks which entries were received, and obviously we have: • S(n, t) = X(n, t) · B(n, t), where the product is a scalar (entry-wise) product.
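To make the sampling step concrete, here is a minimal NumPy sketch (illustrative names and sampling ratio, not code from the presentation) that builds the indicator matrix B and the sampled matrix S from an N × T reading matrix X:

```python
import numpy as np

def sample_readings(X, p, seed=0):
    """Simulate lossy gathering: each reading reaches the sink independently
    with probability p. Returns the 0/1 indicator matrix B and the sampled
    matrix S, in which lost readings are replaced by zero placeholders."""
    rng = np.random.default_rng(seed)
    B = (rng.random(X.shape) < p).astype(float)  # 1 = received, 0 = lost
    S = X * B                                    # entry-wise (scalar) product
    return B, S

# Example: N = 4 nodes, T = 6 time slots, 60% sampling ratio
X = np.arange(24, dtype=float).reshape(4, 6)
B, S = sample_readings(X, p=0.6)
```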

  8. Low rank • singular value decomposition (SVD) • X, an N × T matrix, can be decomposed using SVD according to X = U Σ Vᵀ, where the diagonal of Σ holds the singular values σ1 ≥ σ2 ≥ · · · in descending order. • Our low-rank metric is the fraction of the nuclear norm (the sum of all singular values) captured by the top d singular values.
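As a quick check of this metric, a short NumPy sketch (hypothetical helper name, not from the slides) that computes the fraction of the nuclear norm captured by the top d singular values:

```python
import numpy as np

def nuclear_norm_fraction(X, d=5):
    """Fraction of the nuclear norm (sum of singular values) captured by the
    top-d singular values of X; values close to 1 indicate a low-rank matrix."""
    s = np.linalg.svd(X, compute_uv=False)  # singular values in descending order
    return s[:d].sum() / s.sum()

# Example: a rank-2 matrix concentrates its nuclear norm in 2 singular values
X = np.outer(np.arange(1, 11), np.ones(8)) + np.outer(np.ones(10), np.arange(8))
print(nuclear_norm_fraction(X, d=2))  # prints approximately 1.0
```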

  9. We found that the top 5 singular values capture 82%−99% of the nuclear norm.

  10. Short-term stability • The gap between each pair of adjacent readings is: gap(n, t) = X(n, t) − X(n, t − 1), where 1 ≤ n ≤ N and 2 ≤ t ≤ T. • The difference between each pair of adjacent gaps is: dif(n, t) = (X(n, t + 1) − X(n, t)) − (X(n, t) − X(n, t − 1)) = X(n, t + 1) + X(n, t − 1) − 2 · X(n, t).
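Both quantities are plain first and second differences along the time axis; a small sketch (assuming rows are nodes and columns are time slots, as in the matrix X above):

```python
import numpy as np

def short_term_stats(X):
    """gap[n, t] = X[n, t+1] - X[n, t]                 (first differences along time)
       dif[n, t] = gap[n, t+1] - gap[n, t]
                 = X[n, t+2] + X[n, t] - 2 * X[n, t+1]  (second differences along time)"""
    gap = np.diff(X, n=1, axis=1)   # shape (N, T-1)
    dif = np.diff(X, n=2, axis=1)   # shape (N, T-2)
    return gap, dif
```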

  11. We calculated the normalized difference h(n, t) for each entry in X using the normalization formula shown on the slide.

  12. Furthermore, we define f(H) as the cumulative distribution function of {h(n, t)}, where H is in the range of 0 to 2. If f(H) is already large for a small H, then most normalized differences are small, i.e., the readings are stable over short time scales.
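A quick way to examine this empirically (a sketch; h is assumed to be the array of normalized differences {h(n, t)} from the previous step):

```python
import numpy as np

def empirical_cdf(h, H):
    """f(H): fraction of normalized differences that are at most H."""
    return np.mean(np.asarray(h) <= H)

# e.g. a value of empirical_cdf(h, 0.1) close to 1 means that most readings
# change very little between adjacent time slots (short-term stability)
```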

  13. Features • Time stability: the sensory values of one node are usually similar at adjacent time slots. • Space correlation: the sensory values of neighboring nodes are similar at a particular time instant. • Low-rank structure: most of the energy concentrates in just a few principal components of the EM (the environment matrix X), which underpins the applicability of compressive sensing.

  14. Problem formulation • Given an SM (sampled matrix) S, the problem is to find an optimal RM (recovered matrix) that approximates the original EM X as closely as possible, where ||·||_F, the Frobenius norm, is used to measure the error between the RM and the EM X.

  15. Algorithm • Low-rank • Time stability • Space correlation

  16. Algorithm

  17. Algorithm

  18. Algorithm • Factor the RM as a low-rank product L Rᵀ. • Initialize L randomly. • Solve the resulting standard linear least-squares problems (for one factor while the other is held fixed).
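The objective itself appears only as images on slides 16-18, so the sketch below shows just the general idea: an alternating-least-squares factorization X̂ ≈ L Rᵀ fitted to the observed entries of S, with a small ridge term for numerical stability. The rank r, the regularization weight lam, and the iteration count are illustrative choices, not values taken from the presentation:

```python
import numpy as np

def als_complete(S, B, r=5, lam=0.1, iters=50, seed=0):
    """Recover an N x T matrix from the sampled matrix S (zeros where missing)
    and the indicator matrix B (1 = observed) by alternating least squares
    on the factorization X_hat = L @ R.T."""
    N, T = S.shape
    rng = np.random.default_rng(seed)
    L = rng.standard_normal((N, r))          # random initialization, as on slide 18
    R = rng.standard_normal((T, r))
    reg = lam * np.eye(r)                    # small ridge term
    for _ in range(iters):
        # Fix L: solve a regularized least-squares problem for each column of S
        for t in range(T):
            obs = B[:, t] == 1
            if obs.any():
                A = L[obs]
                R[t] = np.linalg.solve(A.T @ A + reg, A.T @ S[obs, t])
        # Fix R: solve a regularized least-squares problem for each row of S
        for n in range(N):
            obs = B[n, :] == 1
            if obs.any():
                A = R[obs]
                L[n] = np.linalg.solve(A.T @ A + reg, A.T @ S[n, obs])
    return L @ R.T
```

With the S and B produced by sample_readings above, X_hat = als_complete(S, B) returns the full N × T estimate.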

  19. Improvement • Considering the empty columns: if B has completely empty columns, the recovery errors become larger. • To avoid such serious errors, when there are empty columns in B, we first ignore the empty columns and recover only the non-empty columns, and then use the recovered columns to recover the whole matrix, as sketched below.
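A minimal sketch of this two-stage idea. The second stage here fills the completely empty time slots by linear interpolation along time from the recovered columns, which is one plausible reading of "use the recovered columns to recover the whole matrix" rather than the authors' exact method:

```python
import numpy as np

def recover_with_empty_columns(S, B, recover):
    """Stage 1: run the recovery algorithm (e.g. recover=als_complete above)
       on the non-empty time slots only.
       Stage 2: fill the completely empty slots by interpolating each node's
       series over time from the recovered columns (an assumed strategy)."""
    nonempty = B.sum(axis=0) > 0                 # columns with at least one reading
    X_part = recover(S[:, nonempty], B[:, nonempty])
    N, T = S.shape
    t_obs = np.flatnonzero(nonempty)
    X_full = np.empty((N, T))
    for n in range(N):
        X_full[n] = np.interp(np.arange(T), t_obs, X_part[n])
    return X_full
```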

  20. Then, by using semidefinite programming to solve the corresponding optimization problem, we can obtain the recovered matrix.
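The optimization itself is shown only as an image on the slide; a common convex formulation in this setting, which can be solved as a semidefinite program, is nuclear-norm minimization subject to agreement with the observed entries. The sketch below, using the cvxpy library, is therefore an assumption about the objective, not a transcription of the slide:

```python
import cvxpy as cp
import numpy as np

def sdp_recover(S, B):
    """Nuclear-norm minimization (an assumed objective): find the matrix with
    the smallest nuclear norm that matches S on the observed entries of B."""
    N, T = S.shape
    X_hat = cp.Variable((N, T))
    problem = cp.Problem(cp.Minimize(cp.normNuc(X_hat)),
                         [cp.multiply(B, X_hat) == S])
    problem.solve()                 # internally handled as a semidefinite program
    return X_hat.value
```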
