
Techniques for Building Long-Lived Wireless Sensor Networks



1. Techniques for Building Long-Lived Wireless Sensor Networks
Jeremy Elson and Deborah Estrin
UCLA Computer Science Department and USC/Information Sciences Institute
Collaborative work with R. Govindan, J. Heidemann, and SCADDS of other grad students

2. What might make systems long-lived?
• Consider energy the scarce system resource
• Minimize communication (esp. over long distances)
• Computation costs much less, so:
  • In-network processing: aggregation, summarization (a sketch follows below)
  • Adaptivity at fine and coarse granularity
• Maximize the lifetime of the system, not of individual nodes
• Exploit redundancy; design for low duty-cycle operation
• Exploit non-uniformities when you have them
  • Tiered architecture
  • New metrics
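To make the in-network processing point concrete, here is a minimal Python sketch of the aggregation idea; the readings, the choice of a max summary, and the fan-out of four children are invented for illustration:

```python
# Minimal sketch of in-network aggregation: instead of relaying every child
# reading to the sink, a node forwards one summary, trading cheap local
# computation for expensive radio transmissions. All values are illustrative.

def aggregate_max(own_reading, child_readings):
    """Summarize this node's reading and its children's into one value."""
    return max([own_reading] + child_readings)

# A node with four children sends 1 value upstream instead of 5.
child_readings = [21.3, 22.1, 20.8, 21.7]   # e.g., temperatures from children
summary = aggregate_max(21.0, child_readings)
print(f"forwarding 1 summary ({summary}) instead of "
      f"{1 + len(child_readings)} readings")
```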

3. What might make systems long-lived?
• Robustness to dynamic conditions: make the system self-configuring and self-reconfiguring
  • Avoid manual configuration
  • Empirical adaptation (measure and act)
• Localized algorithms prevent single points of failure and help to isolate the scope of faults
  • Also crucial for scaling purposes!

4. The Rest of the Talk
• Some of our initial building blocks for creating long-lived systems:
  • Directed diffusion - a new data dissemination paradigm
  • Adaptive fidelity
  • Use of small, randomized identifiers
  • Tiered architecture
  • Time synchronization

5. Directed Diffusion: A Paradigm for Data Dissemination
• Key features (toy sketch below):
  • name data, not nodes
  • interactions are localized
  • data can be aggregated or processed within the network
  • network empirically adapts to the best distribution path, the correct duty cycle, etc.
[Figure: three stages of diffusion: 1. low data rate, 2. reinforcement, 3. high data rate]
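The gradient and reinforcement mechanics can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation; the attribute string, node names, and rates are made up:

```python
# Illustrative sketch of directed diffusion's core state: data is named by
# attributes, each node keeps a per-interest gradient toward each neighbor,
# and the sink reinforces the neighbor that delivered data best/first.

class DiffusionNode:
    def __init__(self, node_id):
        self.node_id = node_id
        # interest name -> {neighbor: data rate} (the "gradients")
        self.gradients = {}

    def receive_interest(self, name, from_neighbor, rate):
        """Set up a low-rate gradient toward whoever asked for this data."""
        self.gradients.setdefault(name, {})[from_neighbor] = rate

    def reinforce(self, name, neighbor, high_rate):
        """Raise the rate on the empirically best path; others stay low-rate."""
        self.gradients[name][neighbor] = high_rate

    def forward_data(self, name):
        """Send data along every gradient at its negotiated rate."""
        return [(nbr, rate) for nbr, rate in self.gradients.get(name, {}).items()]

node = DiffusionNode("n7")
node.receive_interest("type=vehicle,region=NE", from_neighbor="n3", rate=1)  # exploratory
node.receive_interest("type=vehicle,region=NE", from_neighbor="n5", rate=1)
node.reinforce("type=vehicle,region=NE", "n3", high_rate=10)  # n3 delivered first
print(node.forward_data("type=vehicle,region=NE"))
```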

6. Diffusion: Key Results
[Figure: average dissipated energy (Joules/node/received event) vs. network size (nodes), comparing flooding, omniscient multicast, diffusion with suppression, and diffusion without suppression]
• Directed diffusion can provide significantly longer network lifetimes than existing schemes
• Keys to achieving this:
  • In-network aggregation
  • Empirical adaptation to the path
• Localized algorithms and adaptive fidelity:
  • There exist simple, localized algorithms that can adapt their duty cycle
  • … and they can increase overall network lifetime

7. Adaptivity I: Robustness in Data Diffusion
• A primary goal of data diffusion is robustness through empirical adaptation: measuring and reacting to the environment.
• Because of this adaptation, mean latency for data diffusion degrades only mildly even with 10%-20% node failure.
[Figure: mean latency for data diffusion under no failures, 10% node failure, and 20% node failure]

8. Adaptivity II: Adaptive Fidelity
• Goal: extend system lifetime while maintaining accuracy
• Approach (sketched below):
  • estimate the node density needed for the desired quality
  • automatically adapt to variations in current density due to uneven deployment or node failure
• Assumes dense initial deployment or additional node deployment
[Figure: dense node field with most nodes asleep ("zzz")]
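A rough sketch of how a node might act on its density estimate. The target neighbor count and the sleep-probability rule are assumptions for illustration, not the algorithm evaluated in the talk:

```python
# Hedged sketch of the adaptive-fidelity idea: each node estimates how many
# awake neighbors the application needs, and sleeps with a probability that
# drives the awake density toward that target. Target value is illustrative.
import random

def duty_cycle_decision(awake_neighbors, target_neighbors):
    """Return True if this node should stay awake this epoch."""
    if awake_neighbors <= target_neighbors:
        return True                       # too sparse: keep sensing/routing
    # Overprovisioned: sleep with probability proportional to the excess.
    p_sleep = (awake_neighbors - target_neighbors) / awake_neighbors
    return random.random() >= p_sleep

random.seed(1)
for density in (3, 8, 20):                # neighbors heard in the last epoch
    stay = duty_cycle_decision(density, target_neighbors=5)
    print(f"{density:2d} awake neighbors -> {'stay awake' if stay else 'sleep'}")
```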

9. Adaptive Fidelity Status
• Applications:
  • maintain consistent latency or bandwidth in multihop communication
  • maintain consistent sensor vigilance
• Status:
  • probabilistic neighborhood estimation for ad hoc routing
  • 30-55% longer lifetime with 2-6 s higher initial delay
• Currently underway: location-aware neighborhood estimation

10. Small, Random Identifiers
• Sensor nets have many uses for unique identifiers (packet fragmentation, reinforcement, compression codebooks...)
• It's critical to maximize the usefulness of every bit transmitted; each one reduces net lifetime (Pottie)
• Low data rates + high dynamics = no space to amortize large (guaranteed-unique) ids or a claim/collide protocol
• So: use small, random, ephemeral transaction ids? (see the collision sketch after this list)
• Locality is key: random ids can be much smaller than guaranteed-unique ids if the total net size is large and transaction density is small
• ID collisions lead to occasional losses; persistent losses are avoided because the identifiers are constantly changing
• The marginal cost of occasional losses is small compared to losses from dynamics, wireless conditions, collisions…
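The claim that small ids are usually safe when transaction density is low can be checked with a standard birthday-bound calculation; the choice of 10 concurrent transactions below is an arbitrary example, not a figure from the talk:

```python
# Back-of-the-envelope check of the small-random-ID argument: with b-bit
# random transaction ids and k transactions active in the same locality,
# the collision probability follows the birthday bound. For small, local k,
# a few bits suffice.
def collision_probability(bits, k):
    """P(at least one id collision) among k uniform random b-bit ids."""
    n = 2 ** bits
    p_unique = 1.0
    for i in range(k):
        p_unique *= max(n - i, 0) / n
    return 1.0 - p_unique

for bits in (8, 12, 16):
    print(f"{bits}-bit ids, 10 concurrent transactions: "
          f"P(collision) = {collision_probability(bits, 10):.4f}")
```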

11. Address-Free Fragmentation
• AFF allows us to optimize the number of bits used for identifiers (modeled after this list):
  • Fewer bits = fewer wasted bits per data bit, but a higher collision rate; vs.
  • More bits = less waste due to ID collisions, but many bits wasted on headers
[Figure: efficiency vs. number of identifier bits, data size = 16 bits]
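A back-of-the-envelope model of this tradeoff, assuming (as the original figure does) a 16-bit data payload, plus an invented collision model with k = 8 concurrent fragmented transactions; the numbers illustrate the shape of the curve, not the talk's results:

```python
# Rough model of the AFF tradeoff: useful fraction of each transmitted bit
# = (data bits / total bits) * P(no id collision). Fewer id bits waste less
# header, but lose more packets to collisions; an intermediate size wins.
def collision_probability(bits, k):
    n = 2 ** bits
    p_unique = 1.0
    for i in range(k):
        p_unique *= max(n - i, 0) / n
    return 1.0 - p_unique

DATA_BITS, K = 16, 8                      # 16-bit payload; K is an assumption
for id_bits in (2, 4, 6, 8, 12):
    efficiency = (DATA_BITS / (DATA_BITS + id_bits)) \
                 * (1 - collision_probability(id_bits, K))
    print(f"{id_bits:2d} id bits -> useful fraction {efficiency:.3f}")
```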

12. Exploit Non-Uniformities I: Tiered Architecture
• Consider a memory hierarchy: registers, cache, main memory, swap space on disk
  • Due to locality, it provides the illusion of a flat memory that has the speed of registers but the size & price of disk space
• Similar goal in sensor nets: we want a spectrum of hardware within a network, with the illusion of:
  • the CPU/memory, range, and scaling properties of large nodes
  • the price, numbers, power consumption, and proximity to physical phenomena of the smallest

13. Exploit Non-Uniformities I: Tiered Architecture
• We are implementing a sensor net hierarchy: PC-104s, tags, motes, ephemeral one-shot sensors
• Save energy by:
  • running the lower-power and more numerous nodes at higher duty cycles than the larger ones
  • having low-power "pre-processors" activate higher-power nodes or components (Sensoria approach), as sketched below
• Components within a node can be tiered too:
  • our "tags" are a stack of loosely coupled boards
  • interrupts activate high-energy assets only on demand
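The pre-processor pattern might look like this in outline; the trigger threshold, sample values, and class names are hypothetical:

```python
# Sketch of the tiered wake-up pattern: a cheap, always-on pre-processor
# watches a low-power sensor and powers up the expensive node only when an
# interesting event trips a threshold. Values here are illustrative.
class HighPowerNode:
    def __init__(self):
        self.awake = False

    def wake_and_process(self, sample):
        self.awake = True                  # costly: radio, CPU, etc.
        print(f"high-power node awake, analyzing sample {sample}")

def preprocessor_loop(samples, trigger_level, big_node):
    """Always-on low-power tier: filter locally, interrupt upward rarely."""
    for s in samples:
        if s >= trigger_level:             # interesting event only
            big_node.wake_and_process(s)

preprocessor_loop([0.1, 0.2, 0.15, 0.9, 0.3], trigger_level=0.8,
                  big_node=HighPowerNode())
```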

14. Exploit Non-Uniformities II: Time Synchronization
• Time sync is critical at many layers; some affect energy use and system lifetime:
  • TDMA guard bands
  • Data aggregation & caching
  • Localization
• But time sync needs are non-uniform:
  • Precision
  • Lifetime
  • Scope & availability
  • Cost and form factor
• No single method is optimal on all axes

15. Exploit Non-Uniformities II: Time Synchronization
• Use multiple modes:
  • "Post-facto" synchronization pulse (sketched below)
  • NTP
  • GPS, WWVB
  • Relative time "chaining"
• Combinations can (?) be necessary and sufficient to minimize resource waste
• Don't spend energy to get better sync than the app needs
• Work in progress…
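A minimal sketch of the post-facto idea: nodes timestamp an event with unsynchronized local clocks, and a later reference pulse lets them cancel their offsets after the fact. The clock offsets below are invented, and clock drift is ignored:

```python
# Minimal sketch of post-facto synchronization: nodes stamp an event with
# their unsynchronized local clocks; only afterwards does a broadcast "sync
# pulse" let them cancel clock offsets, by re-expressing each timestamp
# relative to the pulse's local arrival time. Offsets are illustrative.

offsets = {"a": 5.00, "b": -2.30, "c": 0.75}   # each node's clock error (s)
TRUE_EVENT, TRUE_PULSE = 100.0, 101.0          # real times (unknown to nodes)

local_event = {n: TRUE_EVENT + off for n, off in offsets.items()}
local_pulse = {n: TRUE_PULSE + off for n, off in offsets.items()}

# Each node reports (event time - pulse time) in its own clock; the
# per-node offset cancels, so all reports agree without prior sync.
for node in offsets:
    delta = local_event[node] - local_pulse[node]
    print(f"node {node}: event at pulse {delta:+.2f} s")
```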

16. Conclusions
• Many promising building blocks exist, but:
  • "long-lived" often means highly vertically integrated and application-specific
  • traditional layering is often not possible
• The challenge is creating reusable components common across systems
• Create general-purpose tools for building networks, not general-purpose networks
