
Software Challenges in Wireless Sensor Networks



Presentation Transcript


  1. Software Challenges in Wireless Sensor Networks Jeremy Elson, Microsoft Research. IPSN/SPOTS Tutorial, Wednesday, April 27, 2005

  2. Outrageous (?) Opinion Tutorial • Outrageous opinion survey • A dozen contributors • Ideas that deserve credit are theirs; blame for dumb ideas is mine • More questions than answers • Basic question: why is writing software for sensor networks so hard? • Some examples of ongoing research

  3. Is it hard? • The Internet: the cost of a new device is really just the device • Pressure is on the hardware designers – smaller, cheaper, faster • We can network every node that exists • Sensor networks: far more motes exist than we can network • ~100,000 (?) motes exist, but the biggest networks are ~100, maybe ~1,000 nodes • Why is this?

  4. Can we just blame hardware? • Energy is “non-negotiable” • Un-tethered sensing is the reason we’re here • It will always limit radio ranges and link qualities • “Architecture challenges” sit somewhere in between hardware and software • e.g., how do we structure hierarchies? • Billions of PCs can’t be wrong! • But even with the right structure, it’s harder • Why?

  5. Worst of All Worlds • [Figure: software systems plotted on two axes – data uncertainty, from “text” inputs up to real-world sensor inputs, and system uncertainty, from single- or few-threaded up to distributed, timing-dependent systems. cat, MS Word, Squid, TCP, MPI/Linda, OS kernels/device drivers, timing-dependent data, robotics, and distributed robotics fill the middle; sensor networks sit at the extreme of both axes.]

  6. Big Challenge 1: Visibility, Visibility, Visibility! Of a dozen sensor network researchers polled, 10 listed “debugging support” in some form as a core challenge

  7. Visibility, visibility, visibility… • Ground truth is hard to capture, even with unlimited-capacity systems – damn you, real world! • We want to interpret responses to stimuli • The stimuli, not the sensor outputs, are the ground truths • The lab is not the same as the field • Things that worked in the lab don’t work during deployment • Bugs in MAC layers, positioning errors, non-Gaussian noise… • It’s also hard to model • Models are appearing (e.g., Cerpa, Whitehouse) • But notice that models are highly environment-specific

  8. Visibility, visibility, visibility… • The differences have substantial effects • If only I had $100 for every routing algorithm designed with a circular radio model • I still couldn’t afford to buy a circular radio (see the sketch below) • Observing reality concurrently with design should be mandatory • Brings us back to needing visibility
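
To make the circular-radio complaint concrete, here is a minimal sketch contrasting the unit-disc model with log-normal shadowing, a standard empirical path-loss model. Every numeric parameter below is illustrative rather than measured:

```c
/* Why "circular radio" is a fiction: the unit-disc model gives a crisp
 * circular cell; log-normal shadowing gives a fuzzy, irregular one.
 * All constants below are assumed for illustration, not measured. */
#include <math.h>
#include <stdlib.h>

/* Unit-disc model: perfect reception inside RANGE_M, nothing outside. */
int disc_link(double dist_m) {
    const double RANGE_M = 30.0;
    return dist_m <= RANGE_M;
}

/* Log-normal shadowing: path loss = PL(d0) + 10*n*log10(d/d0) + X_sigma,
 * where X_sigma is zero-mean Gaussian.  The random term is what makes
 * the reception region non-circular from trial to trial. */
int shadowing_link(double dist_m) {
    const double PL_D0  = 55.0;   /* dB loss at reference d0 = 1 m (assumed) */
    const double N_EXP  = 3.0;    /* path-loss exponent (assumed)            */
    const double SIGMA  = 4.0;    /* shadowing std dev in dB (assumed)       */
    const double BUDGET = 100.0;  /* max tolerable path loss in dB (assumed) */

    /* Crude N(0,1) sample via the central limit theorem. */
    double g = -6.0;
    for (int i = 0; i < 12; i++) g += (double)rand() / RAND_MAX;

    double pl = PL_D0 + 10.0 * N_EXP * log10(dist_m) + SIGMA * g;
    return pl <= BUDGET;
}
```

Run shadowing_link() many times at the same distance and it sometimes succeeds, sometimes fails – exactly the behavior a disc-model routing algorithm never anticipates.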

  9. Visibility, visibility, visibility… • Even traditionally easy-to-observe things (e.g. messages, states) become hard to capture, because… • Debugging information is now huge compared to data, instead of the opposite (vs. Internet) • You can’t store it on Mote-sized devices, either

  10. Simulators, Emulators, Testbeds • TOSSIM (Levis) • Real-code simulator for motes • Avrora (Titzer, Palsberg) • Simulates down to the microcode level • MoteLab (Welsh, et al.) • An always-on testbed that lowers the massive systems effort required for deployments • EmStar (Girod, Elson, et al.) • Reminders: • Sensor outputs are not ground truths • Matt’s office is not the same as a redwood forest

  11. Simulators, Emulators, Testbeds EmStar’s runtime environments allow high-visibility debugging before jumping into low-visibility deployment

  12. Visibility for Design • Even understanding the behavior of a big parallel process is hard • How do you know what’s going on? But even if you do… • Designing local rules to cause global behavior is hard (Culler, Liu, Welsh) • How do you control what’s going on? (see the sketch below)
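
A classic toy example of the local-rules/global-behavior gap, simulated here on one machine: each node repeatedly averages its value with a randomly chosen peer, and the whole network converges to the global mean – a property stated nowhere in the local rule. The all-pairs gossip topology is a simplifying assumption:

```c
/* Gossip averaging: the local rule is "average with one neighbor";
 * the emergent global behavior is consensus on the network mean. */
#include <stdio.h>
#include <stdlib.h>

#define N 16

int main(void) {
    double v[N];
    for (int i = 0; i < N; i++)
        v[i] = (double)(rand() % 100);       /* arbitrary sensor values */

    for (int round = 0; round < 1000; round++) {
        int a = rand() % N, b = rand() % N;  /* pick a random pair      */
        double avg = 0.5 * (v[a] + v[b]);
        v[a] = v[b] = avg;                   /* the entire "local rule" */
    }

    for (int i = 0; i < N; i++)
        printf("%.2f ", v[i]);               /* all near the global mean */
    printf("\n");
    return 0;
}
```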

  13. Visibility for Management • The need for visibility doesn’t end when the design is done (Madden, Polastre) • Which sensors have failed? • For repair purposes • Because it tells you about the data • Doesn’t obviate the need for statistical elimination of sensors we think are bad (a sketch follows) • “Shut up, you’re confusing everyone!”
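
One plausible shape for that statistical elimination, assuming a median/MAD outlier test; the 3.5 cutoff is a common rule of thumb, not anything from the talk:

```c
/* Flag a sensor whose reading sits far from the median of its
 * neighbors, measured in median absolute deviations (MAD). */
#include <math.h>
#include <stdlib.h>

static int cmp_dbl(const void *a, const void *b) {
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* Sorts v in place and returns its median (n >= 1). */
static double median(double *v, int n) {
    qsort(v, n, sizeof *v, cmp_dbl);
    return (n % 2) ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

/* Returns 1 if `reading` looks like an outlier relative to up to 64
 * neighbor readings (which this function reorders). */
int is_suspect(double reading, double *neighbors, int n) {
    if (n > 64) n = 64;
    double med = median(neighbors, n);

    double dev[64];
    for (int i = 0; i < n; i++)
        dev[i] = fabs(neighbors[i] - med);
    double mad = median(dev, n);

    if (mad < 1e-9) mad = 1e-9;              /* avoid divide-by-zero      */
    return fabs(reading - med) / mad > 3.5;  /* 3.5: common rule of thumb */
}
```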

  14. Another take: Predictability • Instead of observing what happened (post facto), predict what will happen (static analysis) (Srivastava)

  15. “Giving Up” • Tenet (Kohler) – motes can only do the most basic tasks (e.g., thresholding), and route back to a microserver • In the low-visibility nodes, complexity is limited to reasoning about one node’s data • The “hard part” happens at the microservers • Mechanism vs. policy separation for routing (Girod) • Motes send link states to the microserver • The microserver computes routes and installs them on the motes (see the sketch below)
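
A minimal sketch of the mote side of that division of labor – sample, threshold, forward – where read_adc(), send_to_microserver(), and sleep_ms() are hypothetical stand-ins, not real Tenet APIs:

```c
/* Tenet-style mote task: the only logic on the mote is one local rule
 * over one node's data; everything clever lives on the microserver. */
#include <stdint.h>

extern uint16_t read_adc(void);               /* hypothetical */
extern void send_to_microserver(uint16_t v);  /* hypothetical */
extern void sleep_ms(uint32_t ms);            /* hypothetical */

void mote_task(uint16_t threshold, uint32_t period_ms) {
    for (;;) {
        uint16_t sample = read_adc();
        if (sample > threshold)          /* the entire "application" */
            send_to_microserver(sample);
        sleep_ms(period_ms);
    }
}
```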

  16. Big Challenge 2: Higher-Level Abstractions How long can we keep on doing it this way?

  17. Why are abstractions important? • They let you reason about software at a higher level • Right now we manually script every packet sent and received, and most timers (see the caricature below)… • They (can) let software interoperate better • Applications can share the underlying building blocks; the system is smaller and more consistent • Services like TCP port numbers
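
A caricature of that hand-scripted style, with hypothetical radio_send/timer_start/timer_stop primitives standing in for a real mote API. Every buffer, retry, and timer is the application's problem (and there are deliberately no bounds checks – that is part of the point):

```c
/* Hand-rolled reliable send: the application scripts its own
 * buffering, acking, and retry timers.  All names are hypothetical. */
#include <stdint.h>
#include <string.h>

#define RETRY_TIMER 1
#define MAX_RETRIES 3

extern void radio_send(const void *pkt, uint8_t len);  /* hypothetical */
extern void timer_start(uint8_t id, uint32_t ms);      /* hypothetical */
extern void timer_stop(uint8_t id);                    /* hypothetical */

static uint8_t pending[32];
static uint8_t pending_len;
static uint8_t retries;

void app_send(const void *pkt, uint8_t len) {
    memcpy(pending, pkt, len);         /* buffer it ourselves (len <= 32) */
    pending_len = len;
    retries = 0;
    radio_send(pending, pending_len);
    timer_start(RETRY_TIMER, 200);     /* arm the retry timer by hand */
}

void on_ack(void) { timer_stop(RETRY_TIMER); }

void on_timer(uint8_t id) {            /* timeout: retransmit by hand */
    if (id == RETRY_TIMER && ++retries < MAX_RETRIES) {
        radio_send(pending, pending_len);
        timer_start(RETRY_TIMER, 200);
    }
}
```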

  18. Higher Level Abstractions • Consider the compiler arc: • First, “compiled code is too slow” • Second, “but computers are now fast” • Third, “the compiler does it better anyway” • We’re still at Step 1 • Not with CPU cycles (we use compilers)… • … but with bandwidth, energy, and memory • Unfortunately there may not be a Step 2

  19. What if there’s no Step 3? • The Internet has TCP, which is well-behaved, and which many apps use • Nice model: fixed data length, variable time • Some (minority?) apps don’t fit this model and adapt at the app layer (e.g., video quality) • Sensor network congestion control (e.g., Woo, Hull): good first steps, but still focused on collision avoidance • Root of the problem: rate-adaptive sensor apps must be the common case, and they aren’t! • The common case is no longer the “fixed data size, transport it when you can” model – it’s infinite data, like streaming video (see the sketch below)
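
A sketch of what a rate-adaptive sensing app could look like, borrowing TCP's AIMD discipline. congestion_signal() is a hypothetical hook into the transport layer, and the rate bounds are illustrative:

```c
/* AIMD sampling-rate control: back off multiplicatively when the
 * transport signals congestion, creep back up additively otherwise. */
#include <stdint.h>

extern int congestion_signal(void);  /* hypothetical: nonzero if congested */

/* Takes and returns the sampling rate in samples per minute. */
uint32_t adapt_rate(uint32_t rate) {
    const uint32_t MIN_RATE = 1, MAX_RATE = 600;  /* assumed bounds */
    if (congestion_signal())
        rate /= 2;               /* multiplicative decrease */
    else
        rate += 1;               /* additive increase       */
    if (rate < MIN_RATE) rate = MIN_RATE;
    if (rate > MAX_RATE) rate = MAX_RATE;
    return rate;
}
```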

  20. Some Abstractions • TinyDB (Madden) • Among the first; the programming interface is queries (a query appears below) • Abstract Regions (Welsh) • Program collections of nodes at a higher layer than sending messages • Reliable Multi-Hop State Sync (Girod) • Publish and update structs over lossy networks • This week, we’ve seen state machines (Kasten) and new intermediate languages (Newton) • But the real question: do they work across a diversity of applications? Only time will tell.
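
To make "the programming interface is queries" concrete, here is a TinyDB-flavored query. The syntax is reconstructed from memory and approximate, and inject_query() is a hypothetical stand-in for TinyDB's actual injection path:

```c
/* A TinyDB-style continuous query: the network, not the programmer,
 * decides how to route, sample, and aggregate. */
const char *query =
    "SELECT nodeid, temp "
    "FROM sensors "
    "SAMPLE PERIOD 1024";                  /* one sample per 1024 ms epoch */

extern void inject_query(const char *q);  /* hypothetical */
```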

  21. Sub-Challenge: Re-Usable Software • TinyOS, EmStar, etc. are modular, yet reuse isn’t as pervasive as it “should be” • One part software engineering, one part Big Problem (as in congestion control) • An encouraging first step: SP (Sensor Protocol) by Polastre, Culler, et al. • A standardized interface to the MAC, with some basic feedback in both directions (a guess at its shape follows)
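
A guess at the shape of such a unified MAC interface – a single send path with feedback flowing both ways. This is a sketch of the idea only, not the actual SP API; all names and fields are assumptions:

```c
/* SP-style link abstraction sketch: the MAC reports link quality and
 * outcomes upward; the network layer passes hints downward. */
#include <stdint.h>

struct sp_feedback {          /* MAC -> network layer, per packet */
    uint8_t link_quality;     /* e.g., an ETX-like estimate */
    uint8_t congested;        /* channel-busy indication    */
    uint8_t acked;            /* did the send succeed?      */
};

struct sp_hints {             /* network layer -> MAC */
    uint8_t reliability;      /* how hard to try      */
    uint8_t urgency;          /* latency sensitivity  */
};

typedef void (*sp_done_fn)(const struct sp_feedback *fb);

/* One send entry point shared by every protocol above the MAC. */
int sp_send(uint16_t dest, const void *payload, uint8_t len,
            const struct sp_hints *hints, sp_done_fn done);
```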

  22. Meta-Challenge: Applications That Do More Than Web Cameras • The 1999 “Grand Challenges” paper: “Data processing must be in-network” • Where are we now? • Many (most?) applications are “bring all the data back” • Some notable exceptions, including sniper tracking (Vanderbilt), magnetometer car tracking (Berkeley), and self-healing networks (Sensoria)

  23. Self-Healing Networks • Sensoria Corp, under contract from DARPA • Goal: nodes localize themselves to within 1 m, then MOVE to fill in gaps (a generic sketch of one such rule follows) • The network is completely self-organizing and autonomous at many layers
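
One generic way to "move to fill in gaps" is a virtual-force rule: neighbors that are too close push a node away, neighbors that are too far pull it in, so nodes drift toward even coverage when a neighbor dies. This is a textbook coverage heuristic offered purely as illustration, not Sensoria's actual controller; the spacing and gain constants are invented:

```c
/* Virtual-force self-healing step: the summed force points toward
 * better coverage; gaps left by dead neighbors attract survivors. */
#include <math.h>

struct pos { double x, y; };

struct pos heal_step(struct pos self, const struct pos *nbr, int n) {
    const double IDEAL = 20.0;   /* target spacing in meters (assumed) */
    const double GAIN  = 0.05;   /* step size (assumed)                */
    double fx = 0.0, fy = 0.0;

    for (int i = 0; i < n; i++) {
        double dx = nbr[i].x - self.x, dy = nbr[i].y - self.y;
        double d  = sqrt(dx * dx + dy * dy);
        if (d < 1e-6) continue;           /* skip co-located nodes     */
        double f = (d - IDEAL) / d;       /* attract if far, repel if close */
        fx += f * dx;
        fy += f * dy;
    }
    self.x += GAIN * fx;
    self.y += GAIN * fy;
    return self;
}
```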

  24. Closing the Loop • [Demo: 20 nodes, 10 of them MOBILE. Then, the network partitions…]

  25. Summary • Ultimately we want to get to systems that do amazing things • We can’t each keep rebuilding the same ideas; we have to build on each other’s systems • Abstractions are needed, so we can build additive systems instead of just more systems • None of this will happen without visibility • And above all…

  26. What the heck is the killer application?!

  27. Acknowledgements • David Culler • Henri Dubois-Ferrier • Lew Girod • Richard Guy • Bill Kaiser • Eddie Kohler • Jie Liu • Sam Madden • Andrew Parker • Joe Polastre • Matt Welsh • Alec Woo • Yan Yu • Feng Zhao

  28. Thank you! Questions? Comments?
