
Lecture 2 Addendum: Software Platforms

Anish Arora

CIS788.11J

Introduction to Wireless Sensor Networks

This lecture uses slides from tutorials prepared by the authors of these platforms

Outline
  • Platforms (contd.)
    • SOS slides from UCLA
    • Virtual machines (Maté) slides from UCB
    • Contiki slides from Uppsala
References
  • SOS MobiSys paper
  • SOS webpage
  • Maté: A Tiny Virtual Machine for Sensor Networks (ASPLOS 2002)
  • Maté webpage
  • Contiki EmNetS paper
  • Contiki webpage
SOS: Motivation and Key Feature
  • Post-deployment software updates are necessary to
    • customize the system to the environment
    • upgrade features
    • remove bugs
    • re-task system
  • Remote reprogramming is desirable
  • Approach: Remotely insert binary modules into running kernel
    • software reconfiguration without interrupting system operation
    • no stop and reboot, unlike differential patching
  • Performance should be superior to virtual machines
Architecture Overview

[Figure: SOS architecture. Dynamically loadable modules (light sensors, tree routing, application) sit above the static kernel, whose function pointer control blocks, kernel services (dynamic memory, scheduler), and device drivers (serial framer, comm. stack, sensor manager, timer) rest on a hardware abstraction layer (clock, UART, ADC, SPI, I2C).]

Static Kernel

  • Provides hardware abstraction & common services
  • Maintains data structures to enable module loading
  • Costly to modify after deployment

Dynamic Modules

  • Drivers, protocols, and applications
  • Inexpensive to modify after deployment
  • Position independent
SOS Kernel
  • Hardware Abstraction Layer (HAL)
    • Clock, UART, ADC, SPI, etc.
  • Low-layer device drivers interface with the HAL
    • Timer, serial framer, communications stack, etc.
  • Kernel services
    • Dynamic memory management
    • Scheduling
    • Function control blocks
Kernel Services: Memory Management
  • Fixed-partition dynamic memory allocation
    • Constant allocation time
    • Low overhead
  • Memory management features
    • Guard bytes for run-time memory overflow checks
    • Ownership tracking
    • Garbage collection on completion
  • pkt = (uint8_t*)ker_malloc(hdr_size + sizeof(SurgeMsg), SURGE_MOD_PID);
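
A minimal sketch of the allocation pattern, inside a message handler (ker_change_own and ker_free are assumed counterparts following SOS naming; only ker_malloc appears on this slide):

uint8_t *pkt = (uint8_t*)ker_malloc(hdr_size + sizeof(SurgeMsg), SURGE_MOD_PID);
if (pkt == NULL) return -ENOMEM;          // constant-time allocator may be out of free blocks
ker_change_own(pkt, TREE_ROUTING_PID);    // ownership tracking: hand the block to another module
// ... the new owner eventually releases the block with ker_free(pkt)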
Kernel Services: Scheduling
  • SOS implements non-preemptive priority scheduling via priority queues
  • Event served when there is no higher priority event
    • Low priority queue for scheduling most events
    • High priority queue for time critical events, e.g., h/w interrupts & sensitive timers
  • Prevents execution in interrupt contexts
  • post_long(TREE_ROUTING_PID, SURGE_MOD_PID, MSG_SEND_PACKET, hdr_size + sizeof(SurgeMsg), (void*)packet, SOS_MSG_DYM_MANAGED);
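
Argument roles, inferred from this call rather than from a documented signature:

// post_long(dst_pid,      -- receiving module
//           src_pid,      -- sending module
//           msg_type,     -- message type tag
//           payload_len,  -- payload size in bytes
//           payload,      -- pointer to the payload
//           flags);       -- SOS_MSG_DYM_MANAGED: kernel manages the dynamically allocated payload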

Modules
  • Each module is uniquely identified by its ID or pid
  • Has private state
  • Represented by a message handler & has prototype:

int8_t handler(void *private_state, Message *msg)

  • Return values follow the errno convention
    • SOS_OK for success; -EINVAL, -ENOMEM, etc. for failure
Kernel Services: Module Linking
  • Orthogonal to module distribution protocol
  • Kernel stores the new module in a free block of program memory and records critical information about the module in the module table

  • Kernel calls initialization routine for module
    • Publish functions for other parts of the system to use

char tmp_string[] = {'C', 'v', 'v', 0};   /* type signature string, checked against the FCB */

ker_register_fn(TREE_ROUTING_PID, MOD_GET_HDR_SIZE, tmp_string, (fn_ptr_t)tr_get_header_size);

    • Subscribe to functions supplied by other modules

char tmp_string[] = {'C', 'v', 'v', 0};

s->get_hdr_size = (func_u8_t*)ker_get_handle(TREE_ROUTING_PID, MOD_GET_HDR_SIZE, tmp_string);

    • Set initial timers and schedule events
Module-to-Kernel Communication

[Figure: Module A calls into the SOS kernel through a system jump table; the kernel reaches hardware through a HW-specific API, and interrupts and system messages return to modules via a high priority message buffer.]
  • Kernel provides system services and access to hardware

ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);

ker_led(LED_YELLOW_OFF);

  • Kernel jump table redirects system calls to handlers
    • the kernel can be upgraded independently of the modules
  • Interrupts & messages from kernel dispatched by a high priority message buffer
    • low latency
    • concurrency safe operation


Inter-Module Communication

[Figure: two paths between Module A and Module B: an indirect function call through the module function pointer table, and a post to the message buffer.]

Inter-Module Message Passing

  • Asynchronous communication
  • Messages dispatched by a two-level priority scheduler
  • Suited for services with long latency
  • Type safe binding through publish / subscribe interface

Inter-Module Function Calls

  • Synchronous communication
  • Kernel stores pointers to functions registered by modules
  • Blocking calls with low latency
  • Type-safe runtime function binding
Synchronous Communication

[Figure: a module registers a function in the module function pointer table (1); another module subscribes (2) and later calls through the table (3).]
  • A module can register a function for low latency blocking calls (1)
  • Modules that need such a function subscribe to it by obtaining a function pointer pointer (i.e., **func) (2)
  • When the service is needed, the subscriber dereferences the function pointer pointer and calls through it (3)
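
Step (3) as a minimal sketch, reusing the tree-routing subscription shown earlier (the call form is an assumption; the double indirection lets the kernel retarget the pointer if the provider changes):

if (s->get_hdr_size != NULL) {
    uint8_t hdr_size = (*(s->get_hdr_size))();   // dereference the function pointer pointer, then call
}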
Asynchronous Communication

[Figure: a message arrives from the network (1) into Module B's message queue; Module B handles it (2) and posts a message to Module A (3); Module A handles it (4) and may transmit via the network send queue (5).]
  • A module is active only while it is handling a message (2)(4)
  • Message handling runs to completion and can only be interrupted by hardware interrupts
  • A module can send a message to another module (3) or to the network (5)
  • Messages can come from both the network (1) and the local host (3)
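
A sketch of Module B's role in the figure; the module IDs, the message type, and the Message fields len/data are illustrative assumptions:

int8_t module_b_handler(void *state, Message *msg)
{
    switch (msg->type) {
    case MSG_DATA_READY:                          // (2) message arrived, e.g. from the network (1)
        post_long(MODULE_A_PID, MODULE_B_PID,     // (3) forward it to Module A
                  MSG_DATA_READY, msg->len, msg->data, 0);
        return SOS_OK;                            // handler runs to completion
    default:
        return -EINVAL;
    }
}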
Module Safety
  • Problem: Modules can be remotely added, removed, & modified on deployed nodes
  • Accessing a module
    • If module doesn't exist, kernel catches messages sent to it & handles dynamically allocated memory
    • If module exists but can't handle the message, then module's default handler gets message & kernel handles dynamically allocated memory
  • Subscribing to a module’s function
    • Publishing a function includes a type description that is stored in a function control block (FCB) table
    • Subscription attempts include type checks against corresponding FCB
    • Type changes or removal of a published function redirect subscribers to a system stub handler specific to that type
    • Updates to functions with the same type are assumed to have the same semantics
Module Library

[Figure: the Surge application with debugging, assembled from Surge, Photo Sensor, Tree Routing, Memory, and Debug modules.]
  • Some applications created by combining already written and tested modules
  • SOS kernel facilitates loosely coupled modules
    • Passing of memory ownership
    • Efficient function and messaging interfaces


Module Design

#include <module.h>

typedef struct {
  uint8_t pid;       /* module id */
  uint8_t led_on;    /* blink state */
} app_state;

DECL_MOD_STATE(app_state);
DECL_MOD_ID(BLINK_ID);

int8_t module(void *state, Message *msg)
{
  app_state *s = (app_state*)state;
  switch (msg->type) {
    case MSG_INIT: {               /* module loaded: set up state and a 500 ms repeating timer */
      s->pid = msg->did;
      s->led_on = 0;
      ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);
      break;
    }
    case MSG_FINAL: {              /* module unloaded: release the timer */
      ker_timer_stop(s->pid, 0);
      break;
    }
    case MSG_TIMER_TIMEOUT: {      /* toggle the yellow LED on every timer tick */
      if (s->led_on == 1) {
        ker_led(LED_YELLOW_ON);
      } else {
        ker_led(LED_YELLOW_OFF);
      }
      s->led_on++;
      if (s->led_on > 1) s->led_on = 0;
      break;
    }
    default:
      return -EINVAL;
  }
  return SOS_OK;
}

  • Uses standard C
  • Programs created by “wiring” modules together
Sensor Manager

[Figure: Module A uses polled access (getData) while Module B receives periodic access; the sensor manager signals data ready (dataReady) and sits above a device-specific MagSensor driver that reaches the hardware over ADC and I2C.]
  • Enables sharing of sensor data between multiple modules
  • Presents uniform data access API to diverse sensors
  • Underlying device specific drivers register with the sensor manager
  • Device specific sensor drivers control
    • Calibration
    • Data interpolation
  • Sensor drivers are loadable: enables
    • post-deployment configuration of sensors
    • hot-swapping of sensors on a running node
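
A hedged sketch of the polled access path; ker_sensor_get_data, PHOTO_SID, and MSG_DATA_READY are assumed names following the getData/dataReady pattern in the figure, not verified API:

int8_t sensing_module(void *state, Message *msg)
{
    switch (msg->type) {
    case MSG_TIMER_TIMEOUT:                        /* polled access: request a reading */
        ker_sensor_get_data(PHOTO_SID);            /* assumed call; completes asynchronously */
        return SOS_OK;
    case MSG_DATA_READY: {                         /* sensor manager signals data ready */
        uint16_t reading = *(uint16_t*)msg->data;  /* driver has already calibrated the value */
        (void)reading;                             /* ... use the reading ... */
        return SOS_OK;
    }
    default:
        return -EINVAL;
    }
}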
Application Level Performance

Comparison of application performance in SOS, TinyOS, and the Maté VM:

[Figures: Surge forwarding delay; Surge tree formation latency; Surge packet delivery ratio; memory footprint of the base operating system with the ability to distribute and update node programs; CPU active time for the Surge application.]
Reconfiguration Performance
  • Energy trade offs
    • SOS has slightly higher base operating cost
    • TinyOS has significantly higher update cost
    • SOS is more energy efficient when the system is updated one or more times a week

[Figures: energy cost of a light sensor driver update; module size and energy profile for installing Surge under SOS; energy cost of a Surge application update.]

Platform Support

Supported microcontrollers

  • Atmel ATmega128
    • 4 KB RAM
    • 128 KB flash
  • Oki ARM
    • 32 KB RAM
    • 256 KB flash

Supported radio stacks

  • Chipcon CC1000
    • BMAC
  • Chipcon CC2420
    • IEEE 802.15.4 MAC (NDA required)
Simulation Support
  • Source code level network simulation
    • Pthreads simulate hardware concurrency
    • UDP simulates a perfect radio channel
    • Supports user defined topologies & heterogeneous software configurations
    • Useful for verifying functional correctness
  • Instruction level simulation with Avrora
    • Instruction cycle accurate simulation
    • Simple perfect radio channel
    • Useful for verifying timing information
    • See http://compilers.cs.ucla.edu/avrora/
  • EmStar integration under development
Maté: A Virtual Machine for Sensor Networks

Why VM?

  • Large numbers (100s to 1000s) of nodes in a coverage area
  • Some nodes will fail during operation
  • Change of function during the mission

Related Work

  • PicoJava: assumes Java bytecode execution hardware
  • K Virtual Machine: requires 160–512 KB of memory
  • XML: too complex, and motes do not have enough RAM
  • Scylla: VM for mobile embedded systems

Maté Features
  • Small (16 KB instruction memory, 1 KB RAM)
  • Concise (limited memory & bandwidth)
  • Resilient (memory protection)
  • Efficient (bandwidth)
  • Tailorable (user defined instructions)
Maté in a Nutshell
  • Stack architecture
  • Three concurrent execution contexts
  • Execution triggered by predefined events
  • Tiny code capsules that self-propagate through the network
  • Built-in communication and sensing instructions
When is Maté Preferable?
  • For a small number of executions
  • GDI example: the bytecode version is preferable for a program running less than 5 days
  • In energy constrained domains
  • When using a Maté capsule as a general RPC engine
Maté Architecture
  • Stack based architecture
  • Single shared variable
    • gets/sets
  • Three events:
    • Clock timer
    • Message reception
    • Message send
  • Hides asynchrony
    • Simplifies programming
    • Less prone to bugs
Instruction Set
  • One byte per instruction
  • Three classes: basic, s-type, x-type
    • basic: arithmetic, halting, LED operation
    • s-type: messaging system
    • x-type: pushc, blez
  • 8 instructions reserved for users to define
  • Instruction polymorphism
    • e.g., add operates on data, messages, or sensor readings
Code Example (1)
  • Display a counter on the LEDs (sketched below)
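
The slide's code did not survive the transcript; the following counter-to-LEDs program is reconstructed from the example in the Maté paper and should be read as a sketch:

gets      # push the shared counter variable
pushc 1
add       # increment it
copy      # duplicate the new value
sets      # store one copy back to the shared variable
pushc 7
and       # keep the low three bits
putled    # display them on the LEDs
halt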
Code Capsules
  • One capsule = 24 instructions
  • Fits into a single TinyOS packet
  • Atomic reception
  • Each capsule carries type and version information
    • Type: send, receive, timer, or subroutine
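
A hedged C sketch of the capsule layout these bullets imply (field names and widths are assumptions, apart from the 24-instruction limit and the 32-bit version counter from the next slide):

typedef struct {
    uint8_t  type;       /* send, receive, timer, or subroutine */
    uint32_t version;    /* 32-bit version counter (see Viral Code) */
    uint8_t  code[24];   /* up to 24 one-byte instructions */
} MateCapsule;           /* small enough to fit a single TinyOS packet */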
Viral Code
  • Capsule transmission: forw
  • Forwarding another installed capsule: forwo (used within the clock capsule)
  • Maté checks the version number when a capsule is received; if the capsule is newer, it is installed
  • Versioning: 32-bit counter
  • Disseminates new code over the network
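
The install rule as a two-line sketch, reusing the hypothetical MateCapsule layout above:

/* on reception: install only if the incoming capsule is newer */
if (incoming->version > installed[incoming->type].version) {
    installed[incoming->type] = *incoming;   /* newer version: install (and later self-forward) */
}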
Component Breakdown
  • Maté runs on the Mica mote in 7286 bytes of code and 603 bytes of RAM
Network Infection Rate
  • 42-node network in a 3 × 14 grid
  • Radio transmission: 3-hop network
  • Cell size: 15 to 30 motes
  • Every mote runs its clock capsule every 20 seconds
  • Self-forwarding clock capsule
Bytecodes vs. Native Code
  • Maté executes ~10,000 instructions per second
  • Overhead: every instruction is executed as a separate TinyOS task
Installation Costs
  • Bytecodes have computational overhead
  • But small upload packets compensate for this to some extent
Customizing Maté
  • Maté is a general architecture; users can build customized VMs
  • Users select the bytecodes and the execution events
  • Issues: flexibility vs. efficiency
    • Customizing increases efficiency at the cost of accommodating changing requirements
    • Java's solution: a general computational VM plus class libraries
    • Maté's approach: a more customizable solution; let the user decide

How to …
  • Select a language -> defines the VM bytecodes
  • Select execution events -> define the execution contexts and code image
  • Select primitives -> extend beyond the base language functionality
Constructing a Maté VM
  • The VM builder generates a set of files, which are used to build the TinyOS application and to configure the scripter
Compiling and Running a Program
  • Write programs in the scripter
  • Compile to VM-specific binary code
  • Send it over the network to a VM
Bombilla Architecture
  • once context: performs operations that need only a single execution
  • 16-word heap shared among the contexts; accessed with setvar, getvar
  • Buffer holds up to ten values; manipulated with bhead, byank, bsorta

Bombilla Instruction Set
  • basic: arithmetic, halt, sensing
  • m-class: access message header
  • v-class: 16 word heap access
  • j-class: two jump instructions
  • x-class: pushc
Enhanced Features of Bombilla
  • Capsule Injector: programming environment
  • Synchronization: 16-word shared heap; locking scheme
  • Provides a synchronization model: handlers, invocations, resources, scheduling points, sequences
  • Resource management: prevent deadlock
  • Random and selective capsule forwarding
  • Error State
Discussion
  • Compared to the traditional VM concept, is Maté platform independent? Can it run on heterogeneous hardware?
  • Security issues: how can we trust a received capsule? Is there a way to prevent a version number race with an adversary?
  • In viral programming, is there a way to forward capsules other than flooding? After a certain number of nodes are infected by a new capsule version, can forwarding be done based on need?
  • Bombilla has some sophisticated OS features. What is the size of the program? Does a sensor node need all those features?
Contiki
  • Dynamic loading of programs (vs. static)
  • Multi-threaded concurrency managed execution (in addition to event-driven)
  • Available on MSP430, AVR, HC12, Z80, 6502, x86, ...
  • Simulation environment available for BSD/Linux/Windows

Key ideas
  • Dynamic loading of programs
    • Selective reprogramming
    • Static vs dynamic linking
  • Concurrency management mechanisms
    • Events and threads
    • Trade-offs: preemption, size
contiki size bytes
Code AVR

1044

-

678

90

226

1934

5218

Contiki size (bytes)

Module

Kernel

Program loader

Multi-threading library

Timer library

Memory manager

Event log replicator

µIP TCP/IP stack

Code MSP430

810

658

582

60

170

1656

4146

RAM

10 + e + p

8

8 + s

0

0

200

18 + b

Loadable programs
  • One-way dependencies
    • Core resident in memory
      • Language run-time, communication
    • Programs “know” the core
      • Statically linked against core
  • Individual programs can be loaded/unloaded


Loadable programs (contd.)
  • Programs can be loaded from anywhere
    • Radio (multi-hop, single-hop), EEPROM, etc.
  • During software development, usually only one module changes


How well does it work?
  • Works well
    • Program typically much smaller than entire system image (1-10%)
      • Much quicker to transfer over the radio
    • Reprogramming takes seconds
  • Static linking can be a problem
    • Small differences in the core mean a module cannot run
    • We are implementing a dynamic linker
Revisiting Multi-threaded Computation
  • Threads blocked, waiting for events
  • Kernel unblocks threads when event occurs
  • Thread runs until next blocking statement
  • Each thread requires its own stack
    • Larger memory usage

[Figure: three threads, each with its own stack, running on top of the kernel.]

Event-driven vs multi-threaded

Event-driven:
  - No wait() statements
  - No preemption
  - State machines
  + Compact code
  + Locking less of a problem
  + Memory efficient

Multi-threaded:
  + wait() statements
  + Preemption possible
  + Sequential code flow
  - Larger code overhead
  - Locking problematic
  - Larger memory requirements

How to combine them?

Contiki: event-based kernel with threads
  • Kernel is event-based
    • Most programs run directly on top of the kernel
  • Multi-threading implemented as a library
  • Threads only used if explicitly needed
    • Long running computations, ...
  • Preemption possible
    • Responsive system even while long computations run
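
A hedged sketch of the library-based threading described above, using the mt_* names from Contiki's multi-threading library (exact signatures treated as assumptions; work_remains and do_some_work are hypothetical helpers):

#include "mt.h"   /* Contiki multi-threading library */

static struct mt_thread worker;   /* thread control block with its own stack */

static void long_computation(void *data)
{
  while (work_remains()) {        /* hypothetical helper */
    do_some_work();               /* hypothetical helper */
    mt_yield();                   /* hand control back to the event kernel */
  }
}

/* called from an ordinary event handler */
static void start_worker(void)
{
  mt_start(&worker, long_computation, NULL);  /* set up stack and entry point */
  mt_exec(&worker);                           /* run the thread until it yields or exits */
}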
Threads implemented atop an event-based kernel

[Figure: the kernel dispatches events and threads side by side.]

Implementing preemptive threads 1

[Figure: the event handler sets up a timer IRQ handler and switches to the thread's stack; the timer IRQ switches the stack back, preempting the thread.]

Implementing preemptive threads 2

[Figure: as above, but the thread gives up control voluntarily by calling yield(), which switches back to the event handler's stack.]

Memory management
  • Memory allocated when module is loaded
    • Both ROM and RAM
    • Fixed block memory allocator
  • Code relocation made by module loader
    • Exercises flash ROM evenly
Protothreads: light-weight stackless threads
  • Protothreads: a mixture between event-driven and threaded
    • A third concurrency mechanism
  • Allows blocked waiting
  • Requires no per-thread stack
  • Each protothread runs inside a single C function
  • 2 bytes of per-protothread state
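
A minimal protothread sketch using the pt.h macros (PT_BEGIN / PT_WAIT_UNTIL / PT_END); the event source driving it is omitted and the condition is illustrative:

#include "pt.h"                  /* protothreads header */

static struct pt blink_pt;       /* the 2 bytes of per-protothread state */
static int button_presses;

/* The whole protothread lives inside this single C function. */
static PT_THREAD(blink_after_presses(struct pt *pt))
{
  PT_BEGIN(pt);
  while(1) {
    /* Blocked waiting without a stack: the macro records a resume point
       in struct pt and returns; the next invocation continues here. */
    PT_WAIT_UNTIL(pt, button_presses >= 5);
    button_presses = 0;
    /* ... react, e.g. toggle an LED ... */
  }
  PT_END(pt);
}

/* Driver: PT_INIT(&blink_pt); then call blink_after_presses(&blink_pt) on each event. */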