
Decomposing Data-Centric Storage Query Hot-Spots in Sensor Networks


Presentation Transcript


  1. Decomposing Data-Centric Storage Query Hot-Spots in Sensor Networks Mohamed Aly In collaboration with Panos K. Chrysanthis and Kirk Pruhs Advanced Data Management Technologies Lab Dept. of Computer Science University of Pittsburgh

  2. Motivating Application: Disaster Management

  3. Disaster Management Sensor Networks • Sensors are deployed to monitor the disaster area. • First responders moving in the area issue ad-hoc queries to nearby sensors. • The sensor network is responsible for answering these queries. • First responders use the query results to improve decision making during disaster management.

  4. Data Storage Options in Sensor Networks • Base Station Storage: • Events are sent to base stations where queries are issued and evaluated. • Best suited for continuous queries. • In-Network Storage (INS): • Events are stored in the sensor nodes. • Best suited for ad-hoc queries. • All previous INS schemes were Data-Centric Storage (DCS) schemes.

  5. Data-Centric Storage (DCS) • Improves the Quality of Data (QoD) of ad-hoc queries. • Defines an event owner based on the event value. • Examples: • Distributed Hash Tables (DHT) [Shenker et al., HotNets’02] • Geographic Hash Tables (GHT) [Ratnasamy et al., WSNA’02] • Distributed Index for Multi-dimensional data (DIM) [Li et al., SenSys’03] • Underlying routing: the Greedy Perimeter Stateless Routing algorithm (GPSR) [Karp & Kung, Mobicom’00] • Among these schemes, DIM has been shown to exhibit the best performance.
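
  To make the DCS idea concrete, here is a minimal, hypothetical sketch (not from the talk) of value-based ownership: hashing an event's value to a grid cell determines which node stores it, so all insertions and queries for similar values route to one place. The grid size and hash function are assumptions for illustration; GHT actually uses geographic hashing and DIM a k-d-tree-like index.

```python
# Illustrative sketch of data-centric storage: the event's *value*
# (not its origin) picks the storage location. The 8x8 grid and
# SHA-256 hash are assumptions, not GHT's or DIM's actual scheme.
import hashlib

GRID = 8  # assumed 8x8 grid of storage zones

def owner_cell(event_value: float) -> tuple:
    """Map an event value to the (x, y) grid cell that stores it."""
    h = hashlib.sha256(repr(event_value).encode()).digest()
    return (h[0] % GRID, h[1] % GRID)

# Every insertion or query for the same value lands on the same
# owner cell, which is what makes ad-hoc value queries cheap.
assert owner_cell(23.5) == owner_cell(23.5)
```

  The flip side of this determinism is exactly the hot-spot problem the talk addresses: popular values or ranges concentrate load on a few owner cells.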

  6. The DIM DCS Scheme

  7. Problems of Current DCS Schemes • Storage Hot-Spots: • A large percentage of events is mapped to few sensor nodes. • Our Solutions • The Zone Sharing (ZS) algorithm on top of DIM [DMSN’05] • The K-D Tree based DCS scheme (KDDCS) [submitted] • Query Hot-Spots: • A large percentage of queries target events stored in few sensor nodes. • Our Solutions [MOBIQUITOUS’06] • The Zone Partitioning (ZP) algorithm • The Zone Partial Replication (ZPR) algorithm

  8. Query Hot-Spots in DIM • Definition: A high percentage of queries accessing a “hot zone” stored by a small number of nodes. • Existence of query hot-spots leads to: • Increased node deaths • Network Partitioning • Reduced network lifetime • Decreased Quality of Data (QoD)

  9. Query Hot-Spots Decomposition Algorithms • Uniform vs. skewed distribution of the number of accesses among the hot-zone events: • The Zone Partitioning (ZP) algorithm • The Zone Partial Replication (ZPR) algorithm • Basic Idea: • Each sensor keeps track of the Average Querying Frequency (AQF) of its stored events • Each sensor periodically compares its AQF to its neighbors’ AQFs • When a large difference is detected, the node (donor) selects the best neighbor (receiver) to receive part of its responsibility range • The donor locally determines the receiver based on a Partitioning Criterion (PC)
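
  The basic idea above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the tie-breaking rule for choosing the "best" neighbor (lowest AQF, then lowest load) is our assumption, and the full Partitioning Criterion is reduced here to the AQF-ratio check.

```python
# Hypothetical sketch of the periodic AQF comparison: a node that is
# much hotter than a neighbor becomes a donor and locally picks a
# receiver for part of its responsibility range.
from dataclasses import dataclass, field

Q1 = 2.0  # donor/receiver AQF ratio threshold (slide 17)

@dataclass
class Node:
    node_id: int
    aqf: float                 # average querying frequency of stored events
    load: int = 0              # number of events currently stored
    neighbors: list = field(default_factory=list)

    def pick_receiver(self):
        """Locally choose the coolest neighbor satisfying the AQF ratio."""
        candidates = [n for n in self.neighbors
                      if self.aqf / max(n.aqf, 1e-9) >= Q1]
        return min(candidates, key=lambda n: (n.aqf, n.load), default=None)

hot = Node(1, aqf=10.0)
cool = Node(2, aqf=1.0)
warm = Node(3, aqf=6.0)
hot.neighbors = [cool, warm]
# The cool neighbor passes the ratio test (10/1 >= 2); the warm
# neighbor fails it (10/6 < 2), so the donor selects the cool one.
assert hot.pick_receiver() is cool
```

  Because the decision uses only neighbor state, no global coordination is needed, which matches the local, distributed flavor of ZP/ZPR.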

  10. The Zone Partitioning (ZP) Algorithm

  11. The Zone Partial Replication (ZPR) Algorithm


  14. PC: Storage Safety Requirement • The sum of the receiver’s pre-partitioning load and the traded zone must not exceed the receiver’s storage capacity • T + l_receiver ≤ S

  15. PC: Energy Safety Requirement (1) • The energy consumed by the donor in the partitioning process should be much less than the donor’s total energy • T / e_donor ≤ E1 • E1 ≤ 0.5

  16. PC: Energy Safety Requirement (2) • The energy consumed by the receiver in the partitioning process should be much less than the receiver’s total energy • (T · r_e) / e_receiver ≤ E2 • E2 ≤ 0.5

  17. PC: Access Frequency Requirement • The average access frequency of the donor is much larger than that of the receiver • AQF(donor) / AQF(receiver) ≥ Q1 • Q1 should be greater than 2 to avoid cyclic migrations

  18. ZPR Initiation Requirements • If all previous requirements are satisfied → ZP is initiated • If a small hot sub-range exists within the hot range → ZPR is initiated instead of ZP • AQF(hot sub-range) / AQF(total range) ≥ Q2 • Q2 should be close to 1, e.g., 0.9 • size(hot sub-range) / size(total range) ≤ Q3 • Q3 should be close to 0, e.g., 0.2

  19. Partitioning Criterion (PC) • (1) T + l_receiver ≤ S • (2) T / e_donor ≤ E1 • (3) (T · r_e) / e_receiver ≤ E2 • (4) AQF(donor) / AQF(receiver) ≥ Q1 • (5) AQF(hot sub-range) / AQF(total range) ≥ Q2 • (6) size(hot sub-range) / size(total range) ≤ Q3 • Requirements 1–4 satisfied → ZP initiated • Requirements 1–6 satisfied → ZPR initiated
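
  The six requirements combine into one local predicate at the donor. The sketch below is a hedged reconstruction from the slides: the slides do not specify the cost model behind T (traded-zone cost) or r_e (per-event replication cost factor), so treat all parameters as placeholders rather than the authors' exact formulation.

```python
# Hedged sketch of the Partitioning Criterion (PC) as a pure predicate.
# Symbols follow the slides: T = traded zone cost, S = storage capacity,
# l_receiver = receiver's pre-partitioning load, e_* = node energy,
# r_e = replication cost factor (assumed), AQF = avg. querying frequency.
E1 = E2 = 0.5          # energy-safety thresholds (slides 15-16)
Q1 = 2.0               # AQF ratio; > 2 avoids cyclic migrations (slide 17)
Q2, Q3 = 0.9, 0.2      # ZPR hot-sub-range thresholds (slide 18)

def partitioning_decision(T, S, l_receiver, e_donor, e_receiver, r_e,
                          aqf_donor, aqf_receiver,
                          aqf_sub=None, aqf_total=None,
                          size_sub=None, size_total=None):
    """Return 'ZP', 'ZPR', or None per the six PC requirements."""
    zp_ok = (T + l_receiver <= S                  # 1: storage safety
             and T / e_donor <= E1                # 2: donor energy safety
             and (T * r_e) / e_receiver <= E2     # 3: receiver energy safety
             and aqf_donor / aqf_receiver >= Q1)  # 4: access-frequency gap
    if not zp_ok:
        return None
    # Requirements 5-6: a small, very hot sub-range upgrades ZP to ZPR.
    if (aqf_sub is not None and aqf_sub / aqf_total >= Q2
            and size_sub / size_total <= Q3):
        return "ZPR"
    return "ZP"

assert partitioning_decision(T=10, S=100, l_receiver=20, e_donor=50,
                             e_receiver=50, r_e=1.0,
                             aqf_donor=8, aqf_receiver=2) == "ZP"
```

  Structuring the check this way mirrors the slides' "1–4 → ZP, 1–6 → ZPR" rule: ZPR is only ever chosen once ZP would already be safe.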

  20. More about the Algorithms • Mechanism to lower messaging overhead • GPSR Modifications • Traded Zone List (TZL) • Coalescing Process • Insertion process in ZPR • Bound on the replication hops of ZPR

  21. Roadmap • Background • Problem Statement: Query Hot-spots • Algorithms: ZP, ZPR • Experimental Results • Conclusions

  22. Simulation Description • Compare: DIM vs. ZP/ZPR. • Simulator similar to that of DIM [Li et al., SenSys’03] • Two phases: insertion & query. • Insertion phase (to reach a steady state of network storage): • Each sensor initiates 5 events • Events are forwarded to their owners • Query phase: • Each sensor generates 20 single-event queries (worst-case scenario)
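
  A toy version of this two-phase setup is sketched below. It is not the authors' simulator: the hash-based owner mapping stands in for DIM's index, and the 80/20 query skew toward a 5% hot slice of events is our assumption, chosen only to make a query hot-spot visible.

```python
# Toy two-phase experiment: 5 insertions per sensor, then 20
# single-event queries per sensor, skewed toward a hot event slice.
import random

random.seed(0)
NUM_SENSORS = 200
owners = {}                    # event value -> owning sensor id
load = [0] * NUM_SENSORS       # query load per sensor

def owner_of(value):
    """Value-based owner mapping (stand-in for DIM's index)."""
    return hash(value) % NUM_SENSORS

# Insertion phase: each sensor initiates 5 events.
events = []
for s in range(NUM_SENSORS):
    for _ in range(5):
        v = (s, _, round(random.random(), 4))
        owners[v] = owner_of(v)
        events.append(v)

# Query phase: 20 single-event queries per sensor; 80% of them
# (assumed skew) target a 5% hot slice of the stored events.
hot = events[: len(events) // 20]
for s in range(NUM_SENSORS):
    for _ in range(20):
        v = random.choice(hot) if random.random() < 0.8 else random.choice(events)
        load[owners[v]] += 1

# The owners of the hot slice absorb most of the 4000 queries,
# so the maximum per-node load far exceeds the network average.
print(max(load), sum(load) / NUM_SENSORS)
```

  Under DIM this skewed load stays on the hot-zone owners; ZP/ZPR would spread it by migrating or replicating parts of the hot range to neighbors.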

  23. Experimental Setup

  24. Experimental Results: Quality of Data (QoD) 5% hot-spot

  25. Experimental Results: Balancing Energy Consumption 200 nodes, 0.33% hot-spot

  26. Experimental Results: ZP/ZPR Strengths • Increase QoD by partitioning the hot range among a large number of sensors, thus balancing the query load among sensors and keeping them alive longer to answer more queries. • Increase energy savings by balancing energy consumption among sensors. • Increase the network lifetime by reducing node deaths.

  27. Acknowledgment • This work is part of the “Secure CITI: A Secure Critical Information Technology Infrastructure for Disaster Management (S-CITI)” project funded through the ITR Medium Award ANI-0325353 from the National Science Foundation (NSF). • For more information, please visit: http://www.cs.pitt.edu/s-citi/

  28. Conclusions and Extensions • Query hot-spots: an important problem in current DCS schemes. • Contributions: • A query hot-spots decomposition scheme for DCS sensor networks, ZP/ZPR, working on top of the DIM DCS scheme. • Experimental validation of ZP/ZPR’s practicality. • Work under submission: • KDDCS: a unified DCS scheme for balancing both storage and query loads.

  29. Thank You! Questions? Advanced Data Management Technologies Lab http://db.cs.pitt.edu

  30. Experimental Results: Load Balancing 0.05% hot-spot

