
The Design of an Acquisitional Query Processor For Sensor Networks


  1. The Design of an Acquisitional Query Processor For Sensor Networks Samuel Madden, Michael J. Franklin, Joseph M. Hellerstein, and Wei Hong Presentation by John Lynn

  2. Overview • Goals • Acquisitional Query Language • Optimizations • Future Work • Conclusions • Discussion

  3. Goals • Provide a query processor-like interface to sensor networks • Use acquisitional techniques to reduce power consumption compared to traditional passive systems

  4. How? • What is meant by acquisitional techniques? • Where, when, and how often. • Four related questions • When should samples be taken? • What sensors have relevant data? • In what order should samples be taken? • Is it worth it?

  5. What’s the big deal? • Radio consumes as much power as the CPU • Transmitting one bit of data consumes as much energy as 1000 CPU instructions! • Message sizes in TinyDB are by default 48 bytes • Sensing takes significant energy
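Putting the slide's numbers together, a back-of-the-envelope sketch of what one default-size TinyDB message costs in CPU-instruction equivalents (the 1000:1 bit-to-instruction ratio is the slide's rule of thumb, not a measured constant):

```python
# Rough cost of transmitting one default TinyDB message, using the
# slide's rule of thumb: 1 bit transmitted ~ 1000 CPU instructions.
MESSAGE_BYTES = 48          # default TinyDB message size (from the slide)
INSTRUCTIONS_PER_BIT = 1000  # slide's energy-equivalence rule of thumb

bits_per_message = MESSAGE_BYTES * 8
cpu_equivalent = bits_per_message * INSTRUCTIONS_PER_BIT
print(cpu_equivalent)  # 384000 instruction-equivalents per message
```

At nearly 400,000 instruction-equivalents per message, avoiding even one transmission pays for a great deal of local computation, which is why the acquisitional approach pushes work onto the nodes.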

  6. An Acquisitional Query Language • SQL-like queries in the form of SELECT-FROM-WHERE • Support for selection, join, projection, and aggregation • Also support for sampling, windowing, and sub-queries • Not mentioned is the ability to log data and actuate physical hardware

  7. An Acquisitional Query Language • Example: SELECT nodeid, light, temp FROM sensors SAMPLE INTERVAL 1s FOR 10s • Sensors viewed as a single table • Columns are sensor data • Rows are individual sensors

  8. Queries as a Stream • Sensors table is an unbounded, continuous data stream • Operations such as sort and symmetric join are not allowed on streams • They are allowed on bounded subsets of the stream (windows)

  9. Windows • Windows in TinyDB are fixed-size materialization points • Materialization points can be used in queries • Example: CREATE STORAGE POINT recentlight SIZE 8 AS (SELECT nodeid, light FROM sensors SAMPLE INTERVAL 10s) SELECT COUNT(*) FROM sensors AS s, recentlight AS r1 WHERE r1.nodeid = s.nodeid AND s.light < r1.light SAMPLE INTERVAL 10s

  10. Temporal Aggregation • Example: SELECT WINAVG(volume, 30s, 5s) FROM sensors SAMPLE INTERVAL 1s • Receive only 6 results from each sensor instead of 30
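The windowed-average semantics can be sketched in a few lines of Python; the sample values and the "full window only" warm-up convention here are illustrative assumptions, not TinyDB's implementation:

```python
def winavg(samples, window, slide):
    """Sliding-window average: emit the mean of each full `window`-sample
    window, advancing `slide` samples between outputs."""
    return [
        sum(samples[i:i + window]) / window
        for i in range(0, len(samples) - window + 1, slide)
    ]

# 60 one-second volume readings, a 30 s window sliding by 5 s:
# after the first window fills, each 30 s of data yields 6 averages,
# so the root hears 6 values instead of 30 raw samples.
readings = [float(i % 10) for i in range(60)]
print(len(winavg(readings, window=30, slide=5)))  # 7 outputs for 60 samples
```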

  11. Event-Based Queries • An alternative to continuous polling for data • Example: ON EVENT bird-detector(loc): SELECT AVG(light), AVG(temp), event.loc FROM sensors AS s WHERE dist(s.loc, event.loc) < 10m SAMPLE INTERVAL 2s FOR 30s

  12. Lifetime-Based Queries • Example: SELECT nodeid, accel FROM sensors LIFETIME 30 days • Nodes perform cost-based analysis in order to determine data rate • Nodes must transmit at the root’s rate or at an integral divisor of it
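A minimal sketch of the cost-based rate selection, under the simplifying assumption that sampling dominates the energy budget (real nodes also model listening, transmission, and processing costs; the battery and per-sample numbers below are hypothetical):

```python
def sample_period_s(battery_joules, lifetime_days, joules_per_sample):
    """Crude lifetime-based rate selection: spread the node's energy
    budget evenly over the requested lifetime and return the resulting
    interval, in seconds, between samples."""
    lifetime_s = lifetime_days * 24 * 3600
    affordable_samples = battery_joules / joules_per_sample
    return lifetime_s / affordable_samples

# Hypothetical numbers: a 10 kJ battery, 30-day lifetime, 50 mJ/sample
period = sample_period_s(10_000, 30, 0.05)
print(round(period, 1))  # 13.0 seconds between samples
```

The root would then snap each node's period to its own rate or an integral multiple of it, as the slide notes.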

  13. Lifetime-Based Queries • Tested a mote with a 24-week query • The computed sampling period was 15.2 seconds per sample • Took 9 voltage readings over 12 days

  14. Optimization • Three phases to queries • Creation of query • Dissemination of query • Execution of query • TinyDB makes optimizations at each step

  15. Power-Based Optimization • Queries optimized by base station before dissemination • Cost-based optimization to yield lowest overall power consumption • Cost dominated by sampling and transmitting • Optimizer focuses on ordering joins, selections, and sampling on individual nodes

  16. Metadata • Each node contains metadata about its attributes • Nodes periodically send metadata to root • Metadata also contains information about aggregate functions • Information about cost, time to fetch, and range is used in query optimization

  17. Using Metadata • Consider the query: SELECT accel, mag FROM sensors WHERE accel > c1 AND mag > c2 SAMPLE INTERVAL 1s • Order of magnitude difference between sample costs • Three options • Measure accel and mag, then process select • Measure mag, filter, then measure accel • Measure accel, filter, then measure mag • The first option is always the most expensive, and on average the second is an order of magnitude more expensive than the third • The second option can still be cheaper if the mag predicate is highly selective
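The comparison among the three orderings can be made concrete with an expected-cost calculation; the cost and selectivity numbers below are hypothetical, chosen only to reflect the slide's "order of magnitude" gap between the two sensors:

```python
def expected_cost(order, cost, selectivity):
    """Expected per-tuple acquisition cost when attributes are sampled
    in `order` and each predicate is applied as soon as its attribute
    is available (selectivity = fraction of tuples that pass)."""
    total, still_alive = 0.0, 1.0
    for attr in order:
        total += still_alive * cost[attr]   # pay only if not yet filtered out
        still_alive *= selectivity[attr]
    return total

cost = {"accel": 1.0, "mag": 10.0}          # mag ~10x more costly to sample
sel = {"accel": 0.5, "mag": 0.5}
print(expected_cost(["accel", "mag"], cost, sel))  # 6.0  (accel first)
print(expected_cost(["mag", "accel"], cost, sel))  # 10.5 (mag first)
# With a highly selective mag predicate and a weak accel predicate,
# mag-first can win: 10 + 0.01*1 = 10.01 vs 1 + 0.99*10 = 10.9
```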

  18. Using Metadata • Another example: SELECT WINMAX(light, 8s, 8s) FROM sensors WHERE mag > x SAMPLE INTERVAL 1s • Unless mag > x is very selective, it is cheaper to first check whether the current light reading exceeds the running window maximum • This reordering is called exemplary aggregate pushdown

  19. Dissemination Optimization • Build semantic routing tree (SRT) • SRT nodes choose parents based on semantic properties as well as link quality • Parent nodes keep track of the ranges of values for children
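The payoff of tracking children's value ranges is query pruning: a parent need not forward a query down a subtree whose values cannot satisfy the predicate. A toy sketch of that overlap test (node names and ranges are invented for illustration):

```python
def must_forward(child_range, query_lo, query_hi):
    """SRT-style pruning sketch: forward a query over a constant
    attribute to a child only if the child's reported value range
    overlaps the query's predicate range."""
    lo, hi = child_range
    return hi >= query_lo and lo <= query_hi

# Query: WHERE temp BETWEEN 20 AND 30; children report (min, max) ranges
children = {"n1": (5, 15), "n2": (18, 25), "n3": (28, 40)}
print([n for n, r in children.items() if must_forward(r, 20, 30)])  # ['n2', 'n3']
```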

  20. Evaluation of SRT • SRTs are limited to constant attributes • Even so, maintenance is required • Possible to use for non-constant attributes, but the cost can be prohibitive

  21. Evaluation of SRT • Compared three different strategies for building the tree: random, closest, and cluster • Report results for two different sensor value distributions: random and geographic

  22. SRT Results

  23. Query Execution • Queries have been optimized and distributed, what more can we do? • Aggregate data that is sent back to the root • Prioritize data that needs to be sent • Naïve – FIFO • Winavg – average the top queue entries • Delta – send the result with the most change • Adapt data rates and power consumption
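The delta policy from the list above can be sketched as a one-line selection over the node's buffer (the buffer contents and "single value" simplification are illustrative, not TinyDB's actual queue structure):

```python
def delta_pick(buffered, last_sent):
    """Delta policy sketch: when bandwidth allows one transmission,
    send the buffered reading that differs most from the value the
    node most recently transmitted (largest information gain)."""
    return max(buffered, key=lambda r: abs(r - last_sent))

# Shaken-accelerometer style buffer: 25 carries the most new information
print(delta_pick([10, 11, 25, 12], last_sent=11))  # 25
```

Naïve FIFO would send 10 here regardless of content, and winavg would collapse the head of the queue into one average; delta instead spends the scarce transmission on the most surprising reading.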

  24. Prioritization Comparison • Sample rate was K times faster than delivery rate. • Readings generated by shaking the sensor • In this example, K = 4

  25. Adaptation • Not safe to assume that network channel is uncontested • TinyDB reduces packets sent as channel contention rises

  26. Future Work • Selectivity of operators based upon range of sensor • Exemplary aggregate pushdown • More sophisticated prioritization schemes • Better re-optimization of sample rate based upon acquired data

  27. Conclusions • TinyDB provides a simple yet powerful interface to sensor networks • TinyDB takes measures to conserve power at all phases of query processing

  28. Discussion • Is this the best way (right way?) to look at a sensor network? • Is their approximation of battery lifetime sufficient? • Was their evaluation of SRT good enough?
