Network Reprogramming & Programming Abstractions



  1. Network Reprogramming & Programming Abstractions

  2. Network reprogramming • XNP: wireless reprogramming tool • Mate: Virtual machine for WSN

  3. Over-Network Programming of Wireless Sensors • In-System Programming • A sensor node is plugged into the serial/parallel port • But it can program only one sensor node at a time • Network Programming • Delivers the program code to multiple nodes over the air with a single transmission • Saves the effort of programming each individual node

  4. Network Programming for TinyOS (XNP) • Has been available since release 1.1 • Originally made by Crossbow and modified by UCB • Provides basic network programming capability • Has some limitations • No support for multi-hop delivery • No support for incremental updates

  5. Background – Mechanisms of XNP • Host: sends the program code as download messages • Sensor node: stores the messages in the external flash • Sensor node: calls the boot loader; the boot loader copies the program code into program memory [Figure: (1) the host-side programming module transmits the SREC file as radio packets and the node's network programming module stores it in external flash; (2) the boot loader is invoked; (3) it copies the image into the user application section of program memory]
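
  The three steps can be sketched in C (a minimal sketch with hypothetical names; XNP itself is written in nesC and its real interfaces differ):

      /* Hedged sketch of the XNP download flow; all function and field
         names are illustrative, not XNP's actual interfaces. */
      #include <stdint.h>

      #define FRAG_LEN 16                        /* code bytes per radio packet */

      extern void ext_flash_write(uint32_t off, const uint8_t *buf, uint16_t len);
      extern void jump_to_bootloader(void);      /* copies image, then reboots  */

      /* (1) The host splits the SREC file into fixed-size download messages. */
      struct xnp_download_msg {
          uint16_t program_id;                   /* identifies the image        */
          uint16_t capsule_id;                   /* sequence number of fragment */
          uint8_t  data[FRAG_LEN];
      };

      /* (2) The node writes each fragment to external flash at an offset
         derived from its sequence number. */
      void on_download_msg(const struct xnp_download_msg *m) {
          ext_flash_write((uint32_t)m->capsule_id * FRAG_LEN, m->data, FRAG_LEN);
      }

      /* (3) On the reprogram command, control jumps to the boot loader,
         which copies external flash into program memory and restarts. */
      void on_reprogram_cmd(void) {
          jump_to_bootloader();
      }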

  6. Network reprogramming • XNP: wireless reprogramming tool • Mate: Virtual machine for WSN

  7. Mate: A Virtual Machine for WSNs • Why a VM? • Large number (100s to 1000s) of nodes in a coverage area • Some nodes will fail during operation • Function changes during the mission • Related work • PicoJava: assumes Java bytecode execution hardware • K Virtual Machine: requires 160–512 KB of memory • XML: too complex and not enough RAM • Scylla: VM for mobile embedded systems

  8. Mate features • Small (16KB instruction memory, 1KB RAM) • Concise (limited memory & bandwidth) • Resilient (memory protection) • Efficient (bandwidth) • Tailorable (user-defined instructions)

  9. Mate in a nutshell (capsule?) • Stack architecture • Three concurrent execution contexts (clock, send, receive) • Execution triggered by predefined events • Tiny code capsules that self-propagate through the network • Built-in communication and sensing instructions

  10. When is Mate Preferable? • For a small number of executions • The bytecode version is preferable for a program running < 5 days • The energy saved in communicating the new program via Mate compensates for the energy spent running the virtual machine's bytecode interpreter • In energy-constrained domains • Use a Mate capsule as a general RPC engine, with memory protection and virtualization
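
  Making the break-even reasoning explicit (a rough model; the symbols below are assumptions, not from the slides): let $E_i$ be the extra energy per interpreted instruction, $n$ the number of instructions executed per day, $d$ the deployment length in days, and $\Delta E_{tx}$ the radio energy saved by shipping a small capsule instead of a full binary image. The bytecode version wins while

  $$ d \cdot n \cdot E_i < \Delta E_{tx} $$

  that is, while the cumulative interpreter tax stays below the one-time installation savings; with the measurements reported for Mate this crossover lands at roughly five days.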

  11. Mate Architecture • Stack based architecture • Single shared variable • gets/sets • Three events: • Clock timer • Message reception • Message send • Hides asynchrony • Simplifies programming • Less prone to bugs

  12. Instruction Set • One byte per instruction • Three classes: basic, s-type, x-type • basic: arithmetic, halting, LED operations • s-type: messaging system • x-type: pushc, blez • 8 instructions reserved for users to define • Instruction polymorphism • e.g., add operates on data, message, or sensing operands (sketch below)
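
  A minimal C sketch of how a polymorphic add could dispatch on operand type tags (illustrative only; the tag names and layout are assumptions, and Mate's interpreter is actually written in nesC):

      #include <stdint.h>

      typedef enum { TAG_VALUE, TAG_MSG, TAG_SENSE } tag_t;

      typedef struct {
          tag_t   tag;               /* what kind of operand this is       */
          int16_t val;               /* scalar or sensor reading; message  */
                                     /* payloads omitted for brevity       */
      } operand_t;

      /* One opcode, several behaviors: the interpreter inspects the tags
         of the two popped operands and picks the matching semantics. */
      operand_t op_add(operand_t a, operand_t b) {
          operand_t r;
          if (a.tag == TAG_MSG || b.tag == TAG_MSG) {
              r.tag = TAG_MSG;   r.val = 0;              /* append to message */
          } else if (a.tag == TAG_SENSE || b.tag == TAG_SENSE) {
              r.tag = TAG_SENSE; r.val = a.val + b.val;  /* combine readings  */
          } else {
              r.tag = TAG_VALUE; r.val = a.val + b.val;  /* plain arithmetic  */
          }
          return r;
      }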

  13. Code Example • Display a counter on the LEDs (listing below)
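
  The listing, following the counter example in the Mate paper (each clock event increments a counter held in the shared variable and shows its low three bits on the LEDs):

      gets      # push the shared counter variable onto the operand stack
      pushc 1   # push the constant 1
      add       # counter + 1
      copy      # duplicate the new value
      sets      # pop one copy back into the shared variable
      pushc 7   # push the mask 0x0007
      and       # keep the low three bits
      putled    # pop and display the bits on the LEDs
      halt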

  14. Code Capsules • One capsule = 24 instructions • Fits into a single TOS packet • Atomic reception • Each capsule carries type and version information • Type: send, receive, timer, or subroutine

  15. Viral Code • Capsule transmission: forw • Forwarding another installed capsule: forwo (used within the clock capsule) • On reception of a capsule, Mate checks its version number; if it is newer, Mate installs it (sketched below) • Versioning: 32-bit counter • Disseminates new code over the network
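
  A C sketch of the capsule layout and the install-if-newer check (field names are assumptions; Mate itself is nesC):

      #include <stdint.h>
      #include <string.h>

      #define CAPSULE_LEN 24     /* instructions per capsule; one TOS packet */
      #define NUM_TYPES    4     /* send, receive, timer, subroutine         */

      typedef struct {
          uint8_t  type;               /* which context this code serves */
          uint32_t version;            /* 32-bit counter; higher = newer */
          uint8_t  code[CAPSULE_LEN];  /* one bytecode per byte          */
      } capsule_t;

      static capsule_t installed[NUM_TYPES];

      /* On reception: install only if strictly newer, then keep forwarding
         (forw) so the new code spreads virally through the network. */
      void on_capsule(const capsule_t *c) {
          if (c->type >= NUM_TYPES) return;
          if (c->version > installed[c->type].version) {
              memcpy(&installed[c->type], c, sizeof *c);
              /* schedule retransmission so neighbors pick up the update */
          }
      }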

  16. Component Breakdown • Mate runs on the Mica platform in 7,286 bytes of code and 603 bytes of RAM

  17. Network Infection Rate • 42-node network in a 3 × 14 grid • Radio transmission: 3-hop network • Cell size: 15 to 30 motes • Every mote runs its clock capsule every 20 seconds • Self-forwarding clock capsule

  18. Bytecodes vs. Native Code • Mate executes roughly 10,000 instructions per second • Overhead: every instruction is executed as a separate TOS task (sketch below)
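
  The overhead comes from the dispatch path; a minimal C sketch of one interpreter step posted as a TinyOS-style task per instruction (illustrative; fetch_opcode, execute, and post_task are assumed stand-ins, not real TinyOS calls):

      #include <stdint.h>

      extern uint8_t fetch_opcode(void);             /* next bytecode       */
      extern void    execute(uint8_t op);            /* opcode handlers     */
      extern void    post_task(void (*task)(void));  /* TOS-style task post */

      /* Every bytecode pays for a full task post/dispatch cycle on top of
         its handler, which is why throughput sits near 10,000 instr/s. */
      void interp_step(void) {
          execute(fetch_opcode());
          post_task(interp_step);                    /* one task per instr  */
      }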

  19. Customizing Mate • Mate is a general architecture; users can build customized VMs • Bombilla in TinyOS for querying • Agilla (over Bombilla) for mobile agents in WSNs • Users can select bytecodes and execution events • Issue: flexibility vs. efficiency; customizing increases efficiency at the cost of flexibility when requirements change • Java's solution: a general computational VM + class libraries • Mate's approach: a more customizable solution; let the user decide

  20. Programming abstractions Macro-programming approaches • Hood abstraction • Region streams • Kairos

  21. Macroprogramming • Program the sensornet as a whole • Easier than programming at the level of individual nodes • e.g., matrix multiplication: matrix notation vs. a parallel program in MPI • Compile into node-level programs • Non-CS researchers should be able to program without worrying about distributed-execution details • Abstract away the details of concurrency and communication

  22. Taxonomy of Macroprogramming [Figure: taxonomy of macroprogramming abstractions and support] • Abstractions for global behavior • Node-independent: TAG, Cougar, DFuse • Node-dependent: Kairos, Regiment, Split-C • Abstractions for local behavior • Data-centric: EIP, State-space • Geometric: Regions, Hood • Support (composition, distribution & safe execution, automatic optimization): Sensorware, SNACK, Impala, Mate, Tofu, Trickle, Deluge

  23. Hood (UC Berkeley) • Neighborhood • A neighborhood in Hood is defined by a set of criteria for choosing neighbors and a set of variables to be shared • A node can define multiple neighborhoods, with different variables shared over each of them • Captures the essence of the neighborhood concepts needed by many existing applications • Defines the relationship between several concepts fundamental to neighborhoods: membership, data sharing, data caching, and messaging • Decouples data sharing and caching • Integrates neighbor lists and caching with messaging • Mirror & filter (sketch below) • Explicitly proposes neighborhood-oriented programming
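
  A C sketch of the neighborhood idea: membership is a filter over overheard broadcasts, and each shared attribute is mirrored in a local cache (all names here are hypothetical; Hood's real interface is nesC code generated from neighborhood declarations):

      #include <stdint.h>
      #include <stdbool.h>

      #define MAX_NEIGHBORS 8

      typedef struct {
          uint16_t id;
          int16_t  light;   /* a shared, mirrored attribute     */
          int16_t  rssi;    /* used by the membership criterion */
      } mirror_t;

      static mirror_t hood[MAX_NEIGHBORS];
      static uint8_t  hood_size;

      /* Membership criterion (the "filter"): keep only nodes heard above
         an assumed RSSI floor. */
      static bool admit(int16_t rssi) { return rssi > -80; }

      /* Broadcasts carry attribute updates; the filter decides membership
         and the mirror caches the latest value per admitted neighbor. */
      void on_broadcast(uint16_t id, int16_t light, int16_t rssi) {
          if (!admit(rssi)) return;
          for (uint8_t i = 0; i < hood_size; i++)
              if (hood[i].id == id) { hood[i].light = light; return; }
          if (hood_size < MAX_NEIGHBORS)
              hood[hood_size++] = (mirror_t){ id, light, rssi };
      }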

  24. Region streams (Harvard) • Purely functional macroprogramming language for sensornets • Basic data abstraction: region streams • A time-varying collection of node state • e.g., "all sensor nodes within area R" form a region • The set of their periodic data samples forms a region stream • Example: tracking a moving vehicle • A region stream is created that represents the value of the proximity sensor on every node in the network • Each value is annotated with the location of the corresponding sensor • Data items that fall below the threshold are filtered out • The spatial centroid of the remaining sensor values is computed to determine the approximate location of the object that generated the readings (see the sketch below)
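
  Written out imperatively in C, one epoch of that pipeline looks roughly like this (Regiment expresses it declaratively; the threshold and types here are assumptions):

      #include <stdint.h>

      typedef struct { int16_t x, y, proximity; } sample_t;  /* reading + location */
      typedef struct { float x, y; } point_t;

      #define THRESHOLD 50   /* assumed detection threshold */

      /* Filter out readings below the threshold, then return the spatial
         centroid of the rest: the approximate location of the vehicle. */
      point_t track_epoch(const sample_t *region, int n) {
          float sx = 0, sy = 0;
          int   kept = 0;
          for (int i = 0; i < n; i++) {
              if (region[i].proximity < THRESHOLD) continue;  /* filter */
              sx += region[i].x; sy += region[i].y; kept++;   /* fold   */
          }
          point_t c = { kept ? sx / kept : 0, kept ? sy / kept : 0 };
          return c;
      }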

  25. Region streams (Harvard) • Regiment: a functional macroprogramming language • Based on functional reactive programming concepts • Functional languages are "pure": programs cannot directly manipulate program state • This purity allows the compiler to decide how and where program state is kept in the volatile mesh of sensor nodes

  26. Market Based Macroprogramming (Harvard) • Basic model: • Nodes act as agents that sell goods (such as sensor readings or routed messages) • Each good is produced by an associated action • Nodes attempt to maximize their profit, subject to energy constraints • Each good has an associated price • The network is "programmed" by setting prices for each good • Each action has an associated energy cost • e.g., the cost to sample a sensor << the cost to transmit a radio message • (material from Matt Welsh)

  27. How to program in MBM? • First step: set the price(s) • Use one of many efficient dissemination protocols • Update prices as needed by the overall application goal • Nodes select actions based on a utility function (sketch below) • Utility depends on: • Price • Advertised by the base station • Energy availability • Taking an action must stay within the energy budget • Other dependencies • Cannot aggregate data until multiple samples have been received • Cannot transmit if nothing is in the local buffer • (material from Matt Welsh)
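
  A C sketch of the node-side decision loop (the utility form, fields, and numbers are assumptions for illustration):

      typedef struct {
          float price;    /* advertised by the base station        */
          float cost;     /* energy required to perform the action */
          int   ready;    /* other dependencies satisfied?         */
      } action_t;

      /* Greedy utility maximization under an energy budget: take the most
         profitable ready action the budget still allows; -1 means idle.
         Starting best_u at 0 means unprofitable actions are never taken. */
      int choose_action(const action_t *a, int n, float energy_budget) {
          int   best   = -1;
          float best_u = 0;
          for (int i = 0; i < n; i++) {
              if (!a[i].ready || a[i].cost > energy_budget) continue;
              float u = a[i].price - a[i].cost;   /* profit as utility */
              if (u > best_u) { best_u = u; best = i; }
          }
          return best;
      }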

  28. Kairos (USC) • In Kairos, a programmer writes a single sequential program using a simple centralized memory model [Figure: a single thread of control in a sequential program reads and writes centralized sensor state mapped from the physical sensors]

  29. Advantage • Centralized sequential programs are easier to specify, code, understand, and debug than hand-coded distributed versions • Reuse "textbook" algorithms for sophisticated tasks • Ignoring latency and energy considerations, a naive but obviously correct "distributed" implementation is always possible, by shipping sensor nodes' state to and from a central location

  30. Kairos Features • Three constructs with which to write programs: • node (a first-class datatype) and node_list (an iterator over nodes) that facilitate topology-independent programming • get_neighbors() to obtain the current one-hop neighbors of a node • var@node to synchronously access the data and program state of a node • These constructs are language-agnostic • They can be implemented in the preprocessor stage of compilation (see the fragment below)
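
  A condensed Kairos-style fragment in the spirit of the routing-tree example from the Kairos paper (paraphrased pseudocode, not verbatim; INF and SLEEP_INTERVAL are assumed constants). Note how var@node makes a remote read look like a local one:

      void buildtree(node root) {
          node self = get_local_node_id();
          unsigned short dist = (self == root) ? 0 : INF;
          node parent = self;
          node_list nbrs = get_neighbors(self);

          for (;;) {                        /* event loop, runs on every node */
              sleep(SLEEP_INTERVAL);
              for (node n = get_first(nbrs); n != NULL; n = get_next(nbrs)) {
                  if (dist@n + 1 < dist) {  /* remote read via var@node */
                      dist = dist@n + 1;
                      parent = n;
                  }
              }
          }
      }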

  31. Eventual Consistency • Synchronization model called Loose Synchrony • Useful when node state is relatively static • Did not work well for a dynamic vehicle-tracking scenario • Implemented a tighter semantics called Loop-level Synchrony • Long term, we are exploring temporal abstractions as a fourth construct that can capture this requirement completely
