Programming and Tasking Sensor Networks



  1. Programming and Tasking Sensor Networks
  Thanos Stathopoulos
  CS 213, Winter 03

  2. Introduction
  • Sensor networks
    • Several thousand nodes
  • Constraints
    • Energy, bandwidth, reliability
  • Remote locations
    • Human intervention undesirable or impossible
  • Applications
    • Requirements/specs can vary over time
    • Fine-tuning
    • Debugging

  3. Papers Covered
  • Focus on mote programming/tasking
  • TAG
  • Mate

  4. TAG: a Tiny Aggregation Service for Ad-Hoc Sensor Networks
  Samuel Madden, Michael Franklin, Joseph Hellerstein and Wei Hong

  5. TAG: Overview
  • TAG: an aggregation service for networks of motes
  • Has a simple declarative interface
  • Distributes and executes queries in the network
  • In-network aggregation
    • Power efficiency
  • Uses an underlying routing protocol

  6. Routing Protocol
  • TAG's requirements from the routing protocol:
    • Deliver query requests to all nodes
    • Provide at least one route from every node to the root of the network
    • Ensure that no duplicates arrive at the root

  7. Routing Protocol (cont'd)
  • Tree-based routing scheme
  • Root node: user/network interface
  • Root broadcasts a packet with (ID, level) information
    • level is the distance from the root, in hops
  • Recipients with unassigned levels choose the packet's sender as their parent and assign level+1 as their own level
  • The process continues until the leaves are reached
  • Periodic broadcasts are used for maintenance
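
  A minimal Python sketch of this level-assignment flood (the neighbors map, the node IDs, and the synchronous breadth-first loop are illustrative assumptions; real motes do this with asynchronous radio broadcasts):

    from collections import deque

    def build_tree(root, neighbors):
        # Flood (ID, level) outward from the root; each node adopts the
        # first sender it hears as its parent and takes level+1.
        level = {root: 0}
        parent = {root: None}
        frontier = deque([root])
        while frontier:
            sender = frontier.popleft()
            for node in neighbors[sender]:           # nodes in radio range
                if node not in level:                # level still unassigned
                    level[node] = level[sender] + 1
                    parent[node] = sender
                    frontier.append(node)
        return level, parent

    # Example: a 5-node network; node 0 is the root.
    neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1], 4: [2]}
    levels, parents = build_tree(0, neighbors)
    print(levels)   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2}
    print(parents)  # {0: None, 1: 0, 2: 0, 3: 1, 4: 2}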

  8. Queries
  • SQL-like syntax
  • SELECT: specifies an arbitrary arithmetic expression over one or more aggregation values
    • expr: the name of a single attribute
    • agg: the aggregation function
    • attrs: partition readings by specific attributes
  • WHERE, HAVING: filter out irrelevant readings
  • GROUP BY: specifies an attribute-based partitioning of readings
  • EPOCH DURATION: the time interval of aggregate record computation
    • Each record is a <groupID, aggregate_value> pair
  • Different from standard SQL queries (see the example below)
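
  A representative query in this syntax, modeled on the running example in the TAG paper (sensors, volume, room, floor, and threshold are illustrative names):

    SELECT AVG(volume), room FROM sensors
    WHERE floor = 6
    GROUP BY room
    HAVING AVG(volume) > threshold
    EPOCH DURATION 30s

  Every 30 seconds this yields one <room, average volume> record for each room on floor 6 whose average volume exceeds the threshold.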

  9. Aggregate Structure
  • A combination of three functions:
    • Merging function f: merges two partial state records
    • Initializer i: instantiates a state record for a single sensor value
    • Evaluator e: computes the actual value of the aggregate from a partial state record

  10. Example: Average
  • Partial state record: <S,C>
    • S: SUM
    • C: COUNT
  • Merging function: f(<S1,C1>, <S2,C2>) = <S1+S2, C1+C2>
  • Initializer: i(x) = <x,1>
  • Evaluator: e(<S,C>) = S/C
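
  The same three functions as a small runnable Python sketch (the pairwise driver at the bottom is illustrative; in TAG the merges happen at the tree nodes):

    def i(x):                          # initializer: one reading -> <SUM, COUNT>
        return (x, 1)

    def f(a, b):                       # merging function: combine partial states
        return (a[0] + b[0], a[1] + b[1])

    def e(state):                      # evaluator: partial state -> final value
        s, c = state
        return s / c

    # Average of readings 10, 20, 30 computed by merging partial states:
    state = f(f(i(10), i(20)), i(30))
    print(e(state))                    # 20.0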

  11. TAG Algorithm
  • Two phases:
    • Distribution: queries are pushed down the network; parents broadcast queries to their children
    • Collection: aggregate values are continuously sent from children to parents
  • A reply from all children is required before forwarding an aggregate value
  • TDMA-like epoch partitioning
    • Children must deliver records during a parent-specified time interval
  • The parent collects all values (including its own) and sends the aggregate up the tree
  • Result: a single aggregate value per epoch

  12. Query Propagation
  • When a query is received:
    • Sync the clock according to the time info in the message
    • Choose the sender as parent
    • Determine when the sender expects a reply
    • Set the children's delivery interval
      • Must be before the parent expects a reply
    • Forward the query

  13. Data Propagation
  • Listen for children's data at the expected time
  • Compute the partial state record from:
    • local readings
    • children's readings
  • Transmit data during the parent-requested interval

  14. Partial State Record Flow
  • The parent's reception interval must be chosen carefully:
    • All children must be able to report
    • It cannot exceed the end of the epoch (one way to slot it is sketched below)
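
  One simple way to carve an epoch into per-level delivery slots so that every level can report before the epoch ends (the equal-slot division is an illustrative assumption; in TAG each parent picks its own children's interval):

    def delivery_slot(level, max_depth, epoch):
        # Deeper levels transmit earlier, so each parent can listen during
        # its children's slot and forward the aggregate during its own.
        slot = epoch / (max_depth + 1)        # one equal slot per level (assumption)
        start = (max_depth - level) * slot    # leaves (deepest level) go first
        return (start, start + slot)

    epoch = 30.0                              # e.g. EPOCH DURATION 30s
    for level in range(4, 0, -1):             # a depth-4 tree, levels 4..1
        print(level, delivery_slot(level, 4, epoch))
    # level 4 reports in [0, 6), level 3 in [6, 12), ..., the root computes last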

  15. Grouping
  • Each sensor reading is placed into exactly one group
  • Groups are partitioned according to an expression over one or more attributes
    • The expression is pushed down along with the query
  • Nodes choose the appropriate group
  • Aggregate values are updated in the appropriate groups
  • Example: grouping of AVG(light) based on temp/10 bins (sketched below)
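
  A Python sketch of that example; the (light, temp) readings are invented for illustration:

    groups = {}                               # groupID -> partial state <SUM, COUNT>
    for light, temp in [(420, 23), (500, 27), (180, 14), (210, 18)]:
        gid = temp // 10                      # grouping expression: temp/10 bin
        s, c = groups.get(gid, (0, 0))
        groups[gid] = (s + light, c + 1)      # update the group's AVG partial state

    # Each record flowing up the tree is a <groupID, aggregate_value> pair:
    for gid, (s, c) in sorted(groups.items()):
        print(gid, s / c)                     # bin 1 -> 195.0, bin 2 -> 460.0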

  16. Grouping: Problems
  • The HAVING clause may cause unnecessary group propagation
    • Solution: when a node can determine that a group will fail the HAVING predicate, notify other nodes so the group is no longer propagated
    • Only possible for monotonic aggregates, and even then not always
  • Insufficient storage
    • Solution: evict groups from local storage
    • Forward the evicted group to the node's parent

  17. Dealing with Losses
  • Neighbor monitoring
    • A node chooses a new parent when link quality to the existing parent drops, or when the existing parent hasn't transmitted for some time
    • Records can get lost due to parent switching, but no duplicates arise (one transmission per epoch)
  • Caching (sketched below)
    • Parents cache their children's partial state records for some number of rounds
    • The cache duration must be shorter than the parent-switching interval
  • Redundancy
    • If a node has multiple parents, it can split the aggregate and send a share to each
    • Doesn't work for holistic aggregates (e.g. MEDIAN)
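
  A sketch of the caching idea in Python (the ChildCache class and the rounds parameter are illustrative; per the slide, rounds must stay below the parent-switching interval):

    from collections import deque

    class ChildCache:
        # Keep each child's last few partial state records so a lost
        # report can be substituted with a recent cached one.
        def __init__(self, rounds=2):
            self.rounds = rounds
            self.cache = {}                   # child -> deque of records

        def report(self, child, record):
            self.cache.setdefault(child, deque(maxlen=self.rounds)).append(record)

        def latest(self, child):
            # Most recent cached record, used when a child's report is lost.
            records = self.cache.get(child)
            return records[-1] if records else None

    cache = ChildCache(rounds=2)
    cache.report("node7", (35, 2))            # <SUM, COUNT> from a child
    print(cache.latest("node7"))              # fall back to (35, 2) on loss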

  18. Aggregate Taxonomy

  19. Simulation Results
  • 2500 nodes, d=50
  • TAG outperforms the centralized approach by an order of magnitude in most cases
    • and does equally well in the worst case
  • The actual benefit depends on the topology

  20. Experimental Results
  • 16 nodes, depth-4 tree, COUNT aggregate
  • Number of messages:
    • Centralized: 4685
    • TAG: 2330
  • The quality of the aggregate is better with TAG
    • The centralized case suffers high loss due to contention

  21. Summary
  • TAG is based on a declarative interface
  • Uses aggregation extensively
    • Makes network tasking easier for the end user
  • TAG outperforms centralized approaches in most cases
    • Aggregation again
  • Relies heavily on the underlying routing layer
    • Placement of the query tree is constrained by the characteristics of the routing tree

  22. Comments
  • Very good for energy-efficient data collection on motes
  • Queries are only as efficient as the routing protocol
  • No need to reach everyone all the time

  23. Mate: A Tiny Virtual Machine for Sensor Networks
  Philip Levis and David Culler

  24. (Re-)Programming a Mote Network: Standard Approach
  • Small changes: parameter tweaking
    • RPC-like handling
    • Requires few packets
    • Application-dependent: low flexibility
  • Large (fundamental) changes
    • Re-burning required
    • Possible through over-the-air programming
    • Energy expensive: number of packets, re-burning

  25. Programming System Requirements
  • Small: 1K RAM, 16K ROM (rene)
  • Expressive: support a wide range of applications
  • Concise: bandwidth limitations
  • Resilient
  • Efficient: energy again
  • Tailorable
  • Simple

  26. TinyOS Refresher Course
  • Component-based system
  • Event-based programming model
    • Commands (downcalls)
    • Events (upcalls)
    • Tasks (processes)
  • Non-preemptive FIFO scheduler
  • Non-blocking execution
  • Not very programmer-friendly

  27. Mate
  • A (tiny) Virtual Machine
  • Required components:
    • Network stack
    • EEPROM logger
    • Hardware (sensors etc.)
    • Scheduler
  • Code is broken into capsules
  • Stack-based: 2 stacks
  • 3 execution contexts
  • Footprint:
    • ROM: 7286 bytes
    • RAM: 603 bytes

  28. Architecture
  • 3 execution contexts: clock, send, receive
  • 4 user-defined subroutines
  • 2 stacks per context:
    • Operand stack (depth 16)
    • Return address stack (depth 8)
  • One-word heap
    • Shares state among contexts

  29. ISA
  • Classes of instructions:
    • Basic (e.g. arithmetic, halt)
    • s-class: access to in-memory structures; only used by the send and receive contexts
    • x-class (pushc, blez)
    • 8 available (user-defined) instructions
  • s-class and x-class instructions have embedded operands
  • Values:
    • Sensor readings
    • Messages

  30. Instructions and Operands
  • Some instructions operate on specific types
    • Example: putled expects a <value> operand
  • Others are polymorphic, e.g. add (see the sketch below):
    • message + message = append
    • message + value = append the value to the message payload
    • sensor reading + value
    • sensor A reading + sensor B reading = sensor A reading
  • Stack machine
    • Instructions take their operands from the stack and push results back onto it
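
  A Python sketch of how a polymorphic add could dispatch on operand types (the Message and Reading classes are illustrative; the slide specifies the behavior, not the implementation, and only the operand orders listed above are handled):

    class Message:
        def __init__(self, payload=None):
            self.payload = list(payload or [])

    class Reading:
        def __init__(self, sensor, value):
            self.sensor, self.value = sensor, value

    def vm_add(a, b):
        if isinstance(a, Message) and isinstance(b, Message):
            return Message(a.payload + b.payload)        # message + message: append
        if isinstance(a, Message):
            return Message(a.payload + [b])              # message + value: append to payload
        if isinstance(a, Reading) and isinstance(b, Reading):
            return Reading(a.sensor, a.value + b.value)  # result keeps sensor A's type
        if isinstance(a, Reading):
            return Reading(a.sensor, a.value + b)        # reading + value
        return a + b                                     # plain values

    print(vm_add(Reading("light", 400), 20).value)       # 420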

  31. Capsules
  • A program consists of several capsules
  • Up to 24 instructions per capsule
    • A capsule fits into a single TOS packet => atomicity
  • Capsule types:
    • Message send
    • Message receive
    • Timer
    • Subroutine (only 4 allowed: RAM constraints)

  32. Code Execution
  • Event-based
    • Execution starts on an event (timers, messages)
    • Each event has a corresponding capsule
    • Execute the first instruction and continue until halt
  • Each instruction is a TOS task
    • Instruction-granularity interleaving
    • Can operate 'concurrently' with non-Mate tasks
  • Instructions are synchronous
    • The context is suspended until the instruction completes
    • Hides the asynchronous TOS message model from applications

  33. Addressing
  • No sharing among the operand stacks of different contexts
    • Contexts share state only through the heap
  • Bound checks prevent stack overrun/underrun
  • Heap addressing is trivial: there is only one shared variable
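
  A sketch of a bound-checked operand stack in Python (depth 16 per the Architecture slide; raising an exception on overrun is an assumption, as the slides don't say how Mate reacts):

    class OperandStack:
        # Fixed-depth stack with overrun/underrun checks.
        def __init__(self, depth=16):
            self.depth = depth
            self.items = []

        def push(self, value):
            if len(self.items) >= self.depth:
                raise OverflowError("operand stack overrun")
            self.items.append(value)

        def pop(self):
            if not self.items:
                raise IndexError("operand stack underrun")
            return self.items.pop()

    stack = OperandStack(depth=16)
    stack.push(1); stack.push(2)
    print(stack.pop(), stack.pop())   # 2 1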

  34. Code Example: Clock Capsule

  pushc 1   # Push 1 on the operand stack
  sense     # Read sensor 1
  copy      # Copy the sensor reading
  gets      # Get the previously sent reading
  inv       # Invert the previous reading
  add       # current - previous
  pushc 32
  add       # current - previous + 32
  blez 17   # If current <= previous - 32, jump to PC 17
  copy      # Repeat the test in the other direction
  inv
  gets
  add       # previous - current
  pushc 32
  add       # previous - current + 32
  blez 17   # If current >= previous + 32, jump to PC 17
  halt
  copy      # PC 17
  sets      # Set the shared variable to the current reading
  pushm     # Push a message onto the operand stack
  clear     # Clear the message payload
  add       # Add the reading to the message payload
  send      # Send the message
  halt

  35. Code Dissemination
  • Capsules have version numbers
    • The most recent version is installed and executed (a sketch follows)
  • Code updates:
    • The forw instruction broadcasts the current capsule
    • The forwo instruction broadcasts other installed capsules
      • Useful when part of the program needs to be updated
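
  A Python sketch of version-based capsule installation (the installed table and on_capsule function are illustrative names, not Mate's API):

    installed = {}   # capsule type -> (version, code)

    def on_capsule(ctype, version, code):
        # Install a received capsule only if it is newer than what we run;
        # the caller would then also forward it to neighbors.
        current = installed.get(ctype, (-1, None))[0]
        if version > current:
            installed[ctype] = (version, code)
            return True
        return False                          # stale or duplicate: ignore

    on_capsule("timer", 3, ["pushc 1", "sense", "halt"])
    print(on_capsule("timer", 2, ["halt"]))   # False: older version ignored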

  36. Instruction Issue Rate
  • Different instructions have different costs
    • The VM/native cost ratio is worse for simple instructions
  • Instruction issue rate: ~10,000 instructions per second
    • MCU clock rate: 4 MHz
  • Every instruction is a task, which accounts for ~35% of the overhead

  37. Energy Cost
  • Concise representation of programs
    • Far fewer packets than a full binary
  • Mate is energy efficient if the number of executions is small
    • In the long run, native code is better (see the break-even sketch below)
  • If code is frequently updated, Mate wins for average-complexity applications
    • Interpretation overhead is too high for complex apps
  • Hybrid solution:
    • Run the complex app in native mode
    • Use Mate for control
    • Common RPC mechanism for all applications
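
  The tradeoff can be framed as simple break-even arithmetic; every number below is an illustrative assumption, not a measurement from the paper:

    # Per-update cost: Mate ships a few capsules, native re-burning ships a binary.
    capsule_packets, binary_packets = 2, 400     # assumed packet counts
    energy_per_packet = 1.0                      # arbitrary energy unit

    # Per-execution cost: interpretation overhead vs. native execution.
    e_interp, e_native = 0.033, 0.001            # assumed per-run energy

    update_saving = (binary_packets - capsule_packets) * energy_per_packet
    runs_to_break_even = update_saving / (e_interp - e_native)
    print(int(runs_to_break_even))               # ~12437 runs: native only wins if
                                                 # the code runs this often per update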

  38. Summary
  • Mate achieves most of its goals:
    • Small
    • Expressive
    • Resilient: provides a pseudo user/kernel boundary
    • Concise
    • Flexible
  • Well-suited for small-to-average tasks
  • Provides a 'self-programming' ability to the network

  39. Conclusions
  • Sensor network programming/tasking is not a trivial task:
    • Several parameters to optimize for
    • Conflicting goals
    • Constraints
  • Current approaches:
    • Aggregate queries (TAG, Diffusion)
    • VM (Mate)
  • Other potential approaches:
    • DSM
    • Tuple spaces
