
Minitask Architecture and MagTracking Demo Returns NEST PI Meeting Jan 29, 2003


Presentation Transcript


  1. Minitask Architecture and MagTracking Demo Returns
  NEST PI Meeting, Jan 29, 2003
  Presented by Cory Sharp, Kamin Whitehouse, Rob Szewczyk, David Culler, Shankar Sastry

  2. Minitask Groups
  • Estimation: Ohio State (spanning tree), UVA
  • Localization: Lockheed ATC, Berkeley
  • Power Management: CMU
  • Routing: Ohio State, UVA, PARC, UMich
  • Service Coordination: UC Irvine, Lockheed ATC, Berkeley (oversight)
  • Team Formation: Ohio State, Lockheed ATC, UVA
  • TimeSync: Ohio State (fault tolerance), UCLA, Notre Dame

  3. Minitask Goals
  • Drive NEST technology and program integration
    • produce viable middleware components for the challenge application
    • make integration happen early
  • Drive the NEST program concept
    • application-independent, composable middleware
    • concrete framework for exploring composition, coordination, and time-bounded synthesis
    • enable meaningful discussion on each
  • Guide the formulation of metrics
    • services metric: usefulness (refactorization)
    • components metric: composability, modularity
  • Assist collaboration between groups in the short term
    • provide an initial, working testbed
    • groups will enhance and replace services
  • Toward code generation

  4. Composability Gives:
  • Maintainability
  • Collaboration
  • Extensibility
  • Consistency
  • Predictability of interface
  • Verifiability

  5. Architecture Overview

  6. MagTracking Demo Logic
  • Ad hoc field of sensor nodes
    • each node knows only its own location (coordinates)
  • Neighborhood discovery: learn of "neighbors" and their locations
  • Local processing: magnetometer readings are acquired and filtered
  • Group formation and data sharing: readings are placed on the "hood"
  • In-network aggregation and leader election:
    • the node with the highest reading calculates and sends an estimate...
    • ...via geographic location-based routing to a special node at (0,0)
    • neighborhood membership is restricted to force reliable multi-hop
  • The mote at (0,0) sends the estimate to a camera node
  • Direct map to the transition multi-tier sensor scenario

  7. MagTracking Services
  • MagSensor: accumulates and filters magnetometer readings
  • Routing: supports a number of routing methodologies
  • Neighborhood ("hood"): facilitates sharing among components on a node, and discovery and data sharing among nodes in a localized region
  • Localization: discovers the geographic location of the mote
  • TimeSync: synchronizes time between motes
  • Service Coordination: controls the behavior of an aggregation of services

  8. Service Wiring (diagram of component wiring): Estimation, Scheduler, MagSensor, TimeSync, Routing, Localization, the "Hood", and reflected tuples.

  9. MagTracking Services
  • MagSensor: accumulates and filters magnetometer readings
  • Routing: supports a number of routing methodologies
  • Neighborhood: facilitates local discovery and data sharing
  • Localization: discovers the geographic location of the mote
  • TimeSync: synchronizes time between motes
  • Service Coordination: controls the behavior of an aggregation of services

  10. Magnetometer

  11. Magnetometer Philosophy
  • Seeking composability
    • break functionality into Sensor, Actuator, and Control
    • Sensors: get() and getDone(value)
    • Actuators: set(value) and setDone()
    • Control: domain-specific manipulations that gain nothing from generalization
  • Separate module behavior from composite behavior
    • filter modules live in a vacuum
  • Maximize opportunity for composability
    • clean stacking

  12. Magnetometer Interfaces
  • MagSensor interface: abstract sensor interface
    • command result_t read();
    • event result_t readDone( Mag_t* mag );
  • MagBiasActuator interface: abstract actuator interface
    • command result_t set( MagBias_t* bias );
    • event result_t setDone( result_t success );
  • MagAxesSpecific interface: "control" functionality that doesn't fit the model of an actuator or sensor
    • command void enableAxes( MagAxes_t* axes );
    • command MagAxes_t* isAxesEnabled();
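Written out as nesC, the three interfaces above look roughly like the sketch below. This is a minimal sketch: each interface would normally live in its own .nc file, and Mag_t, MagBias_t, and MagAxes_t are assumed to come from a shared header.

    // Split-phase sensor: read() starts an acquisition, readDone() returns it.
    interface MagSensor {
      command result_t read();
      event result_t readDone( Mag_t* mag );
    }

    // Split-phase actuator: set() applies a new bias, setDone() reports completion.
    interface MagBiasActuator {
      command result_t set( MagBias_t* bias );
      event result_t setDone( result_t success );
    }

    // "Control" functionality that is neither a sensor nor an actuator.
    interface MagAxesSpecific {
      command void enableAxes( MagAxes_t* axes );
      command MagAxes_t* isAxesEnabled();
    }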

  13. Magnetometer Modules
  • MagMovingAvgM module
    • filter from MagSensor to MagSensor
    • performs a moving average across magnetometer readings
  • MagFuseBiasM module
    • filter from MagSensor to MagSensor
    • "fuses" the bias with the reading to give one "absolute" reading value
  • MagAutoBiasM module
    • filter from MagSensor to MagSensor
    • magnetometer readings vary between 200 and 800 but have an absolute range of about 9000
    • readings often rail at 200 or 800, so the bias is continually adjusted to drive readings toward 500
  • MagSensorM module
    • translation layer between TinyOS and MagSensor
  (Diagram: filter stack MagMovingAvgM -> MagFuseBiasM -> MagAutoBiasM -> MagSensorM; a wiring sketch follows below.)
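A hypothetical configuration showing how such a filter stack could be wired. The component names come from the slide; the configuration name and the interface instance names (MagSensor provided at the top, BottomMagSensor used toward the layer below) are assumptions for illustration.

    configuration MagSensorC {
      provides interface MagSensor;
    }
    implementation {
      components MagMovingAvgM, MagFuseBiasM, MagAutoBiasM, MagSensorM;

      // export the top of the filter stack to the application
      MagSensor = MagMovingAvgM.MagSensor;

      // each filter uses the MagSensor provided by the layer below it
      MagMovingAvgM.BottomMagSensor -> MagFuseBiasM.MagSensor;
      MagFuseBiasM.BottomMagSensor  -> MagAutoBiasM.MagSensor;
      MagAutoBiasM.BottomMagSensor  -> MagSensorM.MagSensor;   // TinyOS translation layer
    }

Because every filter provides and uses the same MagSensor interface, layers can be added, removed, or reordered purely by rewiring, which is the "clean stacking" the philosophy slide calls for.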

  14. Magnetometer Conclusions • Filtering • Transparency in composition

  15. Routing

  16. Routing Overview
  • Application-level features
  • Broadcast example
  • Developer features
  • General routing structure
  • Anatomy of a SendByLocation
  • Routing configuration file
  • Routing extensions

  17. Application-Level Routing Features
  • Send and receive interfaces are nearly identical to their AM TinyOS counterparts
    • the "destination" differs per routing semantics
    • broadcast: max hops
    • location: position and radius in R3, etc.
  • Unification of routing methods
    • receive is independent of the routing module and method
    • interact with the routing stack as a single, multi-method component
    • message body packing is independent of the routing method
  • Depends on nesC features
    • interpositioning on parameterized interfaces

  18. Application Interface – Broadcast Example
  • Choose a protocol number, say 99, and wire to it:
      AppC -> RoutingC.SendByBroadcast[99];
      AppC -> RoutingC.Receive[99];
  • Initialize and send your message:
      mydata_t* msgbody = (mydata_t*)initRoutingMsg( &msg, sizeof(mydata_t) );
      // ...
      call SendByBroadcast.send( 2, &msg );
  • Send done event:
      SendByBroadcast.sendDone( TOS_MsgPtr msg, result_t success );
  • Receive messages:
      TOS_MsgPtr RoutingReceive.receive( TOS_MsgPtr msg );
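Put together, a minimal application module using this broadcast interface might look like the sketch below. The send/receive signatures follow the slide; the module name, the payload field, and the uses-interface names are assumptions.

    module MagReportM {
      uses interface RoutingSendByBroadcast as SendByBroadcast;
      uses interface RoutingReceive as Receive;
    }
    implementation {
      TOS_Msg msg;

      // posted, e.g., when a new magnetometer estimate is ready
      task void report() {
        // initRoutingMsg() returns a pointer to the message body
        mydata_t* body = (mydata_t*)initRoutingMsg( &msg, sizeof(mydata_t) );
        if( body != NULL ) {
          body->reading = 500;                   // hypothetical payload field
          call SendByBroadcast.send( 2, &msg );  // broadcast with max 2 hops
        }
      }

      event result_t SendByBroadcast.sendDone( TOS_MsgPtr m, result_t success ) {
        return SUCCESS;
      }

      event TOS_MsgPtr Receive.receive( TOS_MsgPtr m ) {
        // process the incoming message, then hand a buffer back to the stack
        return m;
      }
    }

The corresponding configuration would wire these interfaces to RoutingC.SendByBroadcast[99] and RoutingC.Receive[99], exactly as on the slide.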

  19. Developer Routing Features
  • Internal routing components are modular and composable
  • Routing modules are only responsible for decisions of destination
  • Routing decorations augment the stack independent of routing modules
  • Modules and decorations always provide and use RoutingSend and RoutingReceive
  • Routing extensions are enabled by per-message key-value pairs
  • Header encapsulation is determined by composition
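To illustrate the "provide and use RoutingSend/RoutingReceive" pattern, here is a hypothetical pass-through decoration. The internal interface signatures shown, including the RoutingDestination_t placeholder type, are assumptions; the real stack interfaces may differ.

    module ExampleDecorationM {
      provides interface RoutingSend;
      provides interface RoutingReceive;
      uses interface RoutingSend as BottomSend;
      uses interface RoutingReceive as BottomReceive;
    }
    implementation {
      command result_t RoutingSend.send( RoutingDestination_t* dest, TOS_MsgPtr msg ) {
        // a real decoration would inspect or augment the message here
        return call BottomSend.send( dest, msg );
      }

      event result_t BottomSend.sendDone( TOS_MsgPtr msg, result_t success ) {
        return signal RoutingSend.sendDone( msg, success );
      }

      event TOS_MsgPtr BottomReceive.receive( TOS_MsgPtr msg ) {
        // pass received messages up unchanged
        return signal RoutingReceive.receive( msg );
      }
    }

Because a decoration has the same shape above and below, it can be spliced anywhere in the stack without the routing modules knowing it is there.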

  20. General Routing Structure (diagram): a stack with one column per routing method. From top to bottom: application interfaces (SendByBroadcast, SendByAddress, SendByLocation, Receive), decorations, routing modules (UCB MCast, UCB MHop, UCB Locat.), more decorations, interface translation (TinyOSRouting), and TinyOS comm ((Secure) GenericComm).

  21. Anatomy of a SendByLocation (diagram; example packet bytes 80 03 40 02 00 00 00 00 22 02 33 02 33 02 17 00 56)
  • Mote 0x233 sends an estimate (3.5, 2.25) to location (0,0) using "protocol" 86.
  • The message descends the stack: the SendByLocation application interface, the Berkeley Location routing module, destination and source address tagging (0x222, 0x233), local loopback, retries and priority queueing, duplicate-message filtering, TinyOSRouting, and finally (Secure) GenericComm.

  22. Routing Configuration File
  (The diagram repeats the routing stack from the previous slides; the stack is described by a markup block in the routing configuration file:)

      /*<routing>
      Top:
        TOSAM 100: provides interface RoutingSendByAddress;
          BerkeleyAddressRoutingM;
        TOSAM 101: provides interface RoutingSendByBroadcast;
          BerkeleyBroadcastRoutingM;
        TOSAM 102: provides interface RoutingSendByLocation;
          BerkeleyLocationRouting2M;
        TagDestinationAddressRoutingM;
        TagSourceAddressRoutingM;
      Bottom:
        LocalLoopbackRoutingM;
        ReliablePriorityRoutingSendM;
        IgnoreDuplicateRoutingM;
        IgnoreNonlocalRoutingM;
      </routing>*/

      includes Routing;
      configuration RoutingC { }
      implementation {
        // ...
      }

  23. Routing Extensions
  • Extensions use key-value pairs per message
  • Modules/decorations that provide an extension have extra markup:
      //!! RoutingMsgExt { uint8_t priority = 0; }
      //!! RoutingMsgExt { uint8_t retries = 2; }
  • Preprocessed into a substructure of TOS_Msg
  • Applications assume extensions exist:
      mymsg.ext.priority = 1;
      mymsg.ext.retries = 4;
  • Unsupported extensions produce compile-time errors
  • Currently goes beyond nesC IDL capabilities
    • as the approach gets resolved, look for a language solution
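The generated substructure presumably looks something like the sketch below; this is an assumption about the preprocessor output, and the real generated layout and default handling may differ.

    // Assumed result of preprocessing the two //!! RoutingMsgExt lines above.
    typedef struct RoutingMsgExt {
      uint8_t priority;   // default 0
      uint8_t retries;    // default 2
    } RoutingMsgExt;

    // TOS_Msg then carries this as its 'ext' field, so application code can
    // write, as on the slide:
    //   mymsg.ext.priority = 1;
    //   mymsg.ext.retries  = 4;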

  24. Routing Conclusions • Application level is nearly identical to TOS • Unification of routing methods • Internally modular and composable • Extensible

  25. Neighborhood

  26. Data Sharing

  27. Data Sharing Today (diagram): today, each component implements its own message protocol, comm functionality, neighbor selection, and data management; with the Neighborhood Service, components instead use the Neighborhood interface and a "get" interface to remote data.

  28. Standardized API • Declare data to be shared • Get/set data • Choose neighbors • Choose sync method

  29. Benefits
  • Refactorization
    • simplifies the interface to data exchange
    • clear sharing semantics
    • sharing of data between components within a node as well as between nodes
  • Optimization
    • trade-offs between ease of sharing and control

  30. Our Implementation (diagram): Mag, Light, and Temp components share data through a TupleStore; a TupleManager and a TuplePublisher sit between the TupleStore and the comm stack.

  31. Our API • Declare data to be shared • Get/set data • Choose neighbors • Choose sync method

  32. Our API • Declare data to be shared //!! Neighbor {mag = 0;} • Get/set data • Choose neighbors • Choose sync method

  33. Our API • Declare data to be shared • Get/set data • Choose neighbors • Choose sync method

  34. Our API • Declare data to be shared • Get/set data • Neighborhood “interface” • Choose neighbors • Choose sync method

  35. Neighborhood Interface
  • command struct* get( nodeID )
  • command struct* getFirst()
  • command struct* getNext( struct* )
  • event updated( nodeID )
  • command set( nodeID, struct* )
  • command requestRemoteTuples()
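Filled out with concrete types, this interface might look roughly like the following nesC sketch. The tuple type (MagTuple_t here), the node ID type, and the return types are assumptions; the slide leaves them unspecified.

    interface Neighborhood {
      // look up and iterate over the locally cached tuples, one per neighbor
      command MagTuple_t* get( uint16_t nodeID );
      command MagTuple_t* getFirst();
      command MagTuple_t* getNext( MagTuple_t* tuple );

      // store a tuple for the given node
      command void set( uint16_t nodeID, MagTuple_t* tuple );

      // ask the publisher to pull fresh tuples from the neighbors
      command result_t requestRemoteTuples();

      // signaled when the tuple for the given node has been updated
      event void updated( uint16_t nodeID );
    }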

  36. Our API • Declare data to be shared • Get/set data • Choose neighbors • Choose sync method

  37. Our API • Declare data to be shared • Get/set data • Choose neighbors • Choose tupleManager component • Choose sync method

  38. Our API • Declare data to be shared • Get/set data • Choose neighbors • Choose sync method

  39. Our API • Declare data to be shared • Get/set data • Choose neighbors • Choose sync method • Choose tuplePublisher component

  40. Limitations and Alternatives
  • Each component might want to have different
    • neighbors (k-hop nodes, ...)
    • sharing semantics, publish events, synchronization
  • Might want group addressing
  • Might have logically independent "hoods"
  • Space efficiency
  • More powerful query interface
  • Many alternatives can be layered on top
    • build from a concrete enabler

  41. Service Coordinator Design

  42. Outline • Motivation • Use Scenarios • High-level Overview • Interfaces

  43. Motivation
  • Services need not run all the time
    • resource management: power, available bandwidth, or buffer space
    • service conditioning: minimize the interference between services
  • Run time of different services needs to be coordinated across the network
  • Generic coordination of services
    • separation of service functionality and scheduling (application-driven requirement)
    • scheduling information accessible through a single consistent interface across many applications and services
    • only a few coordination possibilities
  • Types of coordination
    • synchronized
    • colored
    • independent

  44. Use Scenarios
  • Midterm demo
    • localization, time sync
    • scheduling of active nodes to preserve sensor coverage
  • Other apps
    • control the duty cycle based on the relevance of data
    • sensing perimeter maintenance
  • Generic Sensor Kit

  45. High-level Overview (diagram): the Service Coordinator, driven by a Command Interpreter over the Radio, coordinates the services: TimeSync, Neighborhood, Localization, Scheduler, MagTracking, ...

  46. Interfaces
  • Service Control
  • Service Scheduler Interface

      interface ServiceCtl {
        command result_t ServiceInit();
        command result_t ServiceStart();
        command result_t ServiceStop();
        event void ServiceInitialized(result_t status);
      }

      typedef struct {
        StartTime;
        Duration;
        Repeat;
        CoordinationType;
      } ServiceScheduleType;

      interface ServiceSchedule {
        command result_t ScheduleSvc(SvcID, ServiceScheduleType *);
        command ServiceScheduleType *getSchedule(SvcID);
        event ScheduleChanged(SvcID);
      }
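As a usage illustration, a service driven by the coordinator would provide ServiceCtl roughly as in the sketch below. The module name and the trivial bodies are assumptions; a real service would start and stop its own timers and state in these commands.

    module LocalizationServiceM {
      provides interface ServiceCtl;
    }
    implementation {
      command result_t ServiceCtl.ServiceInit() {
        // set up internal state, then report back to the coordinator
        signal ServiceCtl.ServiceInitialized( SUCCESS );
        return SUCCESS;
      }

      command result_t ServiceCtl.ServiceStart() {
        // begin periodic work (timers, message exchange, ...)
        return SUCCESS;
      }

      command result_t ServiceCtl.ServiceStop() {
        // quiesce so the coordinator can run other services or save power
        return SUCCESS;
      }
    }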

  47. Discussion
