Robust Messaging Minitask Report



Presentation Transcript


  1. Robust Messaging Minitask Report Notre Dame Ohio State PARC UC Berkeley UC Irvine Hongwei Zhang & Vinod Kulathumani, OSU Dec 2003

  2. Scope • Comparative study of existing messaging protocols for well-understood scenarios (e.g., A Line In The Sand, Pursuer Evader, Red Force Tagging, Shooter Location), with respect to • reliability • delay • throughput/goodput • scalability • Comparison at the scale of 100 nodes by testbed-based experiments • Comparison at the scale of 1000 nodes by simulations

  3. Issues • Importance of testing at scale • a repeatable result: what works for n nodes does not work for 10n nodes! • several observed routing results for 10-20 nodes do not port to 50-100 nodes • Importance/hardness of validating simulation completeness & precision • especially the fidelity of the simulation model (e.g., radio transmission, collision) • several observed discrepancies between simulations & experiments • Complexity of building adequate mathematical models due to • the large space of dimensions • the hardness of extracting parameters from experimental traces in a protocol-independent way • Benefits of a standardized API • for porting code between simulation & experimentation • for composability (plug & play) • for easy comparison of different protocols

  4. Contributions Notre Dame • Robust routing strategy for Red Force Tagging • Partial list of robustness techniques PARC • Modeling & Simulation Environment for Ad-hoc Routing Applications in Wireless Sensor Networks • Baseline Routing Strategies • Spanning tree, Flooding • Meta Adaptive Routing Strategies based on Reinforcement Learning • Adaptive tree, constraint-based search, constrained flooding • Test Case Studies

  5. Contributions (contd.) OSU • Experiments • compared GridRouting/ReliableComm & MintRoute wrt A Line In The Sand scenario • generated experimental traffic traces for different types of events in the A Line In The Sand scenario • Simulation • compared GridRouting/ReliableComm with GridRouting/TDMA in Prowler wrt A Line In The Sand scenario • defined a uniform interface between modules of Prowler • Compiled a list of existing protocols, papers, and studies related to robust-messaging

  6. Contributions (contd.) UC Berkeley • Midterm demo 7/2003 report: Landmark Routing tree • evader information reaches the landmark • the landmark forwards information via a crumb trail to the pursuer • Alec Woo et al., SenSys 11/2003 report: MintRoute tree routing • distance vector with a minimum transmission cost metric • link quality estimates used to calculate the expected total number of transmissions • Jason Hill's Surge report on robust routing • 19-node experiment-based fine-grained analysis of a multihop data collection application using Alec's routing protocol UC Irvine • TDMA-based routing experiments & simulations in various traffic pattern scenarios
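The minimum-transmission-cost idea above can be made concrete with a small sketch. This is a minimal Python illustration, not MintRoute's actual implementation: it assumes per-link forward/reverse delivery probabilities have already been estimated, converts them into an expected number of transmissions per link, and picks the neighbor minimizing link cost plus the neighbor's advertised path cost, in the spirit of distance-vector routing on a transmission-cost metric.

```python
# Sketch of distance-vector parent selection with an expected-transmission-count
# link metric (in the spirit of MintRoute); names and numbers are illustrative.

def link_cost(p_forward, p_reverse):
    """Expected transmissions to get a packet and its acknowledgment across one
    link, assuming independent losses in each direction."""
    p = p_forward * p_reverse
    return float("inf") if p == 0 else 1.0 / p

def choose_parent(neighbors):
    """neighbors: dict of node_id -> (p_forward, p_reverse, advertised_path_cost).
    Returns (parent_id, expected total transmissions to the sink via that parent)."""
    best = None
    for node_id, (pf, pr, path_cost) in neighbors.items():
        total = link_cost(pf, pr) + path_cost
        if best is None or total < best[1]:
            best = (node_id, total)
    return best

# Example: neighbor B has a worse link but a much cheaper route to the sink.
neighbors = {"A": (0.9, 0.9, 4.0), "B": (0.7, 0.8, 2.0)}
print(choose_parent(neighbors))   # -> ('B', ~3.79)
```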

  7. Outline • OSU • Experimental study of GridRouting/ReliableComm & MintRoute • PARC • Network & application modeling • Strategy learning for wireless ad hoc routing • UCI • Experiment • Simulations

  8. Experimental study: GridRouting/ReliableComm vs. MintRoute OSU

  9. Overview • Objective • Comparative study of the performance of GridRouting/ReliableComm & MintRoute/QueuedSend in the A Line In The Sand scenario • For GridRouting/ReliableComm, study the impact of node location, power level, and maximum number of retransmissions on the end-to-end delay as well as reliability • Metrics: mean and variance of • Packet delivery ratio (per-event basis) • End-to-end delay • Goodput for a given event
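To make these metrics concrete, here is a minimal hedged sketch, not the actual OSU analysis scripts, of how per-event delivery ratio, mean end-to-end delay, and goodput could be computed from logs of sent and received messages; the log format is an assumption.

```python
# Hedged sketch: per-event delivery ratio, mean end-to-end delay, and goodput
# from message logs.  The log format is assumed, not the one used in the
# OSU experiments; assumes at least one message was sent and delivered.

def event_metrics(sent, received, payload_bytes):
    """sent:     list of (msg_id, t_sent) tuples generated during one event
       received: list of (msg_id, t_recv) tuples seen at the base station"""
    recv_time = dict(received)
    delays = [recv_time[m] - t for m, t in sent if m in recv_time]
    delivery_ratio = len(delays) / len(sent)
    mean_delay = sum(delays) / len(delays)
    duration = max(recv_time.values()) - min(t for _, t in sent)
    goodput = len(delays) * payload_bytes / duration   # delivered bytes per second
    return delivery_ratio, mean_delay, goodput

sent = [(1, 0.0), (2, 0.5), (3, 1.0), (4, 1.5)]
received = [(1, 0.12), (2, 0.71), (4, 1.78)]
print(event_metrics(sent, received, payload_bytes=36))  # (0.75, ~0.20 s, ~61 B/s)
```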

  10. Software components • Not using beta/CC1000RadioAck due to availability as well as weather constraints • [Component-stack diagram: the LITeS application running over the OSU stack (GridRouting, ReliableComm) and over the UCB stack (MintRoute, QueuedSend, GenericComm-Promiscuous), with RadioCRCPacket at the radio layer]

  11. Network testbed • 7 × 7 grid of MICA2 motes with a base station [testbed layout figure]

  12. Application traffic • Car moving across the network from left to right at a speed of 5~15 MPH • A mote generates a “start” message at the beginning of an event; the mote generates an “end” message at the end of the event • All the messages are sent to the base station, which performs higher-level detection and classification

  13. GridRouting/ReliableComm vs. MintRoute (power level = 9) [results charts]

  14. Per-node packet delivery ratio: GridRouting/ReliableComm [per-node grid map with the base station marked]

  15. Per-node packet delivery ratio: MintRoute [per-node grid map with the base station marked]

  16. Summary: GridRouting vs. MintRoute • GridRouting provides better packet delivery ratio & goodput • The packet delivery ratio for each individual mote is distributed more evenly in GridRouting • End-to-end delay is shorter in MintRoute • To do: • Compare GridRouting/ReliableComm with MintRoute/RadioACK

  17. Outline • OSU • Experimental study of GridRouting/ReliableComm and MintRoute • PARC • Network & application modeling • Strategy learning for wireless ad hoc routing • UCI • Experiment • Simulation

  18. Network and Application Modeling and Strategy Learning for Wireless Ad-hoc Routing PARC

  19. Outline • One: Modeling and Simulation Environment for Ad-hoc Routing Applications in Wireless Sensor Networks • Two: Baseline Routing Strategies • Three: Meta Adaptive Routing Strategies based on Reinforcement Learning • Four: Test Case Studies

  20. RMASE: Routing Modeling & Application Simulation Environment • Motivation: • Comparing Routing Algorithms in a Systematic Way • Functions: • Network Models: • Generate Network Topologies • Radio and Fault Models: • Set Transmission Parameters and Fault/Alive Probabilities • Application Models: • Generate Application Scenarios • Performance Metrics: • Calculate Performance Metrics for Simulated Runs • Layered Routing Architecture • Developed on Prowler with Application Name ‘generator’
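As a rough illustration of the kinds of knobs an RMASE-style environment exposes (network topology, radio and fault models, application scenario, metrics, routing layer), here is a hypothetical configuration sketch; the field names are invented for illustration and do not reflect RMASE's actual Prowler/Matlab interface.

```python
# Hypothetical configuration sketch of an RMASE-style simulation run; the
# field names are illustrative, not RMASE's actual Matlab/Prowler interface.
simulation_config = {
    "network": {"topology": "grid", "size": (10, 10),
                "random_offset": 0.1, "holes": [(6.5, 4.5, 2)]},
    "radio":   {"model": "prowler_default", "asymmetric_links": True,
                "random_error": 0.05, "collisions": True},
    "faults":  {"p_recover": 0.02, "q_fail": 0.01},   # faulty<->alive probabilities
    "application": {"source_rate_pps": 4, "source_amount": 120,
                    "source_trace": "event_trace.txt", "destination": "fixed"},
    "metrics": ["latency", "throughput", "loss_rate", "energy"],
    "routing_layer": "adaptive_tree",
}
```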

  21. Network Topology Models (I) • Default Regular Grid • Parameter Settings

  22. Network Topology Models (II) • Small and Large Random Offsets

  23. Network Topology Models (III) • Grid Shifts

  24. Network Topology Models (IV) • Distance and Density

  25. Network Topology Models (V) • Fixed and Random Holes

  26. Radio and Fault Models • Prowler's Radio Model • Signal Fading Formula • Asymmetric Link • Dynamic Link • Random Error • Collision • Energy Use Model • One unit for every transmission • Faulty/Alive Model • If faulty, a node becomes alive with probability p • If alive, a node becomes faulty with probability q
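The faulty/alive model is a two-state Markov chain per node. A minimal sketch, assuming the transition is evaluated once per simulation step and using illustrative parameter values:

```python
# Minimal sketch of the two-state faulty/alive model: a faulty node becomes
# alive with probability p_recover, an alive node becomes faulty with
# probability q_fail, evaluated once per simulation step.
import random

def step_fault_model(alive, p_recover, q_fail):
    if alive:
        return random.random() >= q_fail     # stays alive unless it fails
    return random.random() < p_recover       # recovers with probability p_recover

# Long-run fraction of time a node is alive is p_recover / (p_recover + q_fail).
alive, alive_steps = True, 0
for _ in range(100_000):
    alive = step_fault_model(alive, p_recover=0.02, q_fail=0.01)
    alive_steps += alive
print(alive_steps / 100_000)   # close to 0.02 / 0.03 = 0.67
```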

  27. Application Models • Source and Destination Specifications • Source Rate: r • packets per second • Initialization Time • Source Amount: n • total packets per source • Source/Destination Distance • Source Trace • given by a trace file

  28. Performance Metrics • Latency (s), minimize: Tarrived – Tsent • Throughput (p/s), maximize: N/T • N: the total number of packets received • T: the duration of the simulation • Loss Rate, minimize: n/N • n: the number of packets missing • N: the total number of packets received • Energy Use, minimize: Σi pi • the total number of packets sent in the network
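A minimal sketch of how these four metrics might be tallied from simulation records; the record format is an assumption, not RMASE's actual data structures.

```python
# Hedged sketch: tallying the four metrics above from simulation records.
# 'deliveries' holds (t_sent, t_arrived) pairs for packets that reached the
# destination; the other counters are assumed inputs.

def summarize(deliveries, packets_missing, packets_transmitted, sim_duration):
    n_received = len(deliveries)
    latency = sum(t_arr - t_snt for t_snt, t_arr in deliveries) / n_received
    throughput = n_received / sim_duration          # packets per second
    loss_rate = packets_missing / n_received        # n/N as defined on this slide
    energy = packets_transmitted                    # one unit per transmission
    return {"latency_s": latency, "throughput_pps": throughput,
            "loss_rate": loss_rate, "energy_units": energy}

print(summarize(deliveries=[(0.0, 0.4), (1.0, 1.3), (2.0, 2.5)],
                packets_missing=1, packets_transmitted=14, sim_duration=3.0))
```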

  29. Layered Routing Architecture • [Layer diagram of the generator application: Stats / App / Router / MAC] • Interface events: Init_Application, Packet_Sent, Packet_Received, Clock_Tick, Send_Packet

  30. Baseline Routing Strategies • [Two layer stacks: Unconstrained Flood Routing: Stats / App / Flood (with Ignore_Duplicate) / MAC; Spanning Tree Routing: Stats / App / SpanTree / MAC]

  31. Meta Routing Strategies based on Reinforcement Learning • Meta-Routing Strategies: • destination specification: constraints on attributes • cost function: function on attributes • meta-strategies: independent of the destination and cost specification • [Strategy diagram: Structured (Source-Destination Path, Spanning Tree) and Connectionless (Real-time Search, Flooding) base strategies, adapted via Reinforcement Learning into Adaptive Spanning Tree, Constraint-based Search, and Constrained Flooding]
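The reinforcement-learning idea behind the adaptive strategies can be sketched as a per-neighbor cost estimate that each node updates from feedback after forwarding, in the style of Q-routing. This is an illustrative sketch, not PARC's actual meta-strategy code; the learning rate, exploration probability, and class names are assumptions.

```python
# Illustrative Q-routing-style update for adaptive next-hop selection; not
# PARC's actual code.  Each node keeps an estimated cost-to-destination per
# neighbor and nudges it toward (observed hop cost + neighbor's own estimate).
import random

class AdaptiveForwarder:
    def __init__(self, neighbors, alpha=0.2, epsilon=0.1):
        self.q = {n: 1.0 for n in neighbors}   # estimated cost via each neighbor
        self.alpha, self.epsilon = alpha, epsilon

    def choose_next_hop(self):
        if random.random() < self.epsilon:      # explore occasionally
            return random.choice(list(self.q))
        return min(self.q, key=self.q.get)      # otherwise exploit the best estimate

    def update(self, neighbor, hop_cost, neighbor_estimate):
        target = hop_cost + neighbor_estimate
        self.q[neighbor] += self.alpha * (target - self.q[neighbor])

fwd = AdaptiveForwarder(["A", "B", "C"])
fwd.update("A", hop_cost=1.0, neighbor_estimate=2.0)   # A now looks more expensive
print(fwd.choose_next_hop(), fwd.q)
```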

  32. Application Studies • Case I (OSU): A Line in the Sand (LIS) • Network: 10x10, offset 0.1, hole <6.5, 4.5, 2, 9, 1> • Source: dynamic, given by trace • Destination: static, fixed • 300 sec, 3 runs • Case II (ND): Red Force Tagging (RFT) • Network: 5x10, offset 0.1 • Source: mobile, fixed, unique • Destination: static, fixed, unique • 30 sec, 4p/s, 5 runs • Case III (UCB): Pursuer Evader Game (PEG) • Network: 7x7, offset 0.1 • Source: dynamic, fixed, unique • Destination: mobile, fixed, unique • 15 sec, 4p/s, 5 runs • Case IV (Vanderbilt): Shooter Locator (SL) • Source: dynamic, random, not unique • Destination: static, fixed, unique • Future work

  33. Routing Strategies Comparisons • Five Strategies • Flood • Spanning tree • Adaptive tree • Constraint-based search • Constrained flooding • Four metrics • Latency • Throughput • Loss rate • Energy

  34. A Line in the Sand • [Comparison charts for Flood, Span Tree, Adaptive Tree, Constraint-based Search, and Constrained Broadcast]

  35. Red Force Tagging • [Comparison charts for Flood, Span Tree, Adaptive Tree, Constraint-based Search, and Constrained Broadcast]

  36. Pursuer Evader Game • [Comparison charts for Flood, Span Tree, Adaptive Tree, Constraint-based Search, and Constrained Broadcast]

  37. Take Away Points • RMASE • Provides a virtual experimental platform for studying routing strategies • No single routing strategy is superior to the others; performance depends on • the network and application types • the metrics the application cares about • The relationship between simulation and hardware • Simulation makes assumptions • Hardware verifies assumptions

  38. TDMA-based Routing Experiments & Simulations UCI

  39. Routing Tree • [Routing-tree figure: the PowerNode (PN) master at the upper-left corner, with gate and worker motes arranged in Groups 1–6] • Motes: 24 Mica2 motes • Topology: 6x4 grid with 4 ft. spacing, outdoors • PowerNode at the upper-left corner to test the longest routing paths • Communication settings • Size of TDMA slot = 48 msec • 12 TDMA slots per cycle • Packet transmission frequency: 1.736 Hz (one per TDMA cycle) • Radio transmission power: 3 • Total number of packets: 36,840 • Data contents of msgs: 3 ~ 24 bytes (variable sizes) • Metric: Response Time = Sensing-to-Tracking Time
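The stated packet frequency follows directly from the slot parameters; a small sketch of that arithmetic, assuming exactly one transmission opportunity per node per cycle as described:

```python
# TDMA timing arithmetic from the settings above: 12 slots of 48 ms per cycle
# gives one transmission opportunity per node every 0.576 s, i.e. ~1.736 Hz.
slot_ms, slots_per_cycle = 48, 12
cycle_s = slot_ms * slots_per_cycle / 1000.0
packets_per_second = 1.0 / cycle_s
print(cycle_s, round(packets_per_second, 3))   # 0.576 s cycle, ~1.736 packets/s
```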

  40. Experimental Results & Observations • Every worker node was programmed to generate sensor data reports once every TDMA round ==> multiple simultaneous reports were handled without unnecessary collisions. • Over 95% one-hop link reliability was achieved ==> reflects the high performance of the global clock synchronization mechanism that was built. • 18 out of 24 motes reported their environment sensing data; 17 of those 18 motes experienced negligible variance in PowerNode response times ==> demonstrates the highly deterministic nature of the protocol. • [Chart: end-to-end delivery success rate]

  41. Simulation of TDMA & Routing with Prowler • Application layer: describes the motes' handling of the events Packet_Sent, Packet_Received, and Clock_Tick; it also implements mote initialization and data-file storage. • Radio channel layer: on top of the CSMA layer, a layer that executes TDMA and routing was built. • Worker nodes, gate nodes, and master nodes were all simulated. • [Toolchain diagram: neighborhood information of each node → TDMA Scheduler → TDMA schedule & routing tree → Prowler with TDMA → response time & queue length]
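The application-layer event interface just described (initialization plus Packet_Sent, Packet_Received, and Clock_Tick handlers) can be sketched as an event-handler skeleton. Prowler itself is Matlab-based, so this Python-flavored skeleton and its method and field names are only an illustration of the structure, not Prowler's actual API.

```python
# Illustrative skeleton of the per-mote event interface described above.
# Prowler is Matlab-based; this Python rendering and its names are a sketch
# of the structure, not Prowler's actual API.

class TDMARoutingMote:
    def __init__(self, node_id, tdma_slot, parent):
        self.node_id, self.tdma_slot, self.parent = node_id, tdma_slot, parent
        self.queue = []                      # packets waiting for our TDMA slot

    def init_application(self):
        self.queue.clear()                   # mote initialization

    def clock_tick(self, current_slot):
        # Transmit only in our own TDMA slot, at most one packet per cycle.
        if current_slot == self.tdma_slot and self.queue:
            return ("send", self.parent, self.queue.pop(0))
        return None

    def packet_received(self, packet):
        self.queue.append(packet)            # forward up the routing tree

    def packet_sent(self, packet, ok):
        if not ok:
            self.queue.insert(0, packet)     # keep the packet for the next cycle
```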

  42. Simulation Setup • Network topology • 4x6 grid (UCI), 5x10 grid (UND), and 10x10 grid (OSU) • Simulation scenarios • Heavy load: message demand of 1 packet per TDMA round • Average load: message demand of 1 packet per 2 TDMA rounds • Tracking of one moving target • Trajectory: one linear movement along one axis (OSU’s application model) • As long as a mote detects the target, it transmits one packet per TDMA round. • Packet losses due to buffer overflow • Evaluation metric: Worst-case response time
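As a rough illustration of why the heavy-load scenario stresses buffers, here is a hedged sketch that simulates a single line of worker motes forwarding one packet per TDMA round toward the base station, drops packets on buffer overflow, and reports the worst-case response time in rounds. The line topology, buffer size, and function name are illustrative assumptions, not the UCI simulation itself.

```python
# Hedged sketch of the load scenarios above: each worker mote in a line
# generates a report every gen_period TDMA rounds, relays one packet per round
# toward the base station, and drops packets on buffer overflow.

def simulate_chain(n_motes, rounds, gen_period, buf_size):
    queues = [[] for _ in range(n_motes)]       # queues[-1] is next to the base station
    delivered, dropped = [], 0
    for r in range(rounds):
        if r % gen_period == 0:                 # every mote generates a report
            for q in queues:
                if len(q) < buf_size:
                    q.append(r)                 # remember the generation round
                else:
                    dropped += 1
        for i in range(n_motes - 1, -1, -1):    # one forward per mote per round
            if queues[i]:
                pkt = queues[i].pop(0)
                if i == n_motes - 1:
                    delivered.append(r - pkt)   # response time in TDMA rounds
                elif len(queues[i + 1]) < buf_size:
                    queues[i + 1].append(pkt)
                else:
                    dropped += 1
    return (max(delivered) if delivered else None), dropped

print(simulate_chain(n_motes=5, rounds=200, gen_period=1, buf_size=4))  # heavy load
print(simulate_chain(n_motes=5, rounds=200, gen_period=2, buf_size=4))  # average load
```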

  43. Simulation Results

  44. Robust Messaging: Fundamental issues and strategies in “Red Force Tagging” What causes difficulties? (A) Node reliability (B) Node locations (incl. uncertainty) (C) Channel characteristics (incl. interference) Note: Power trivially solves all robustness (and latency) problems, so for a meaningful problem, maximum power and average power must be bounded. Notre Dame

  45. Difficulties I (A) Node reliability If failures are independent with failure rate p and nodes are uniformly randomly distributed with density λ, the effective density of working nodes is (1-p)λ (B) Node locations Perfectly known locations: the variance in internode distances results in varying link quality or stringent requirements for power control (in particular for nearest-neighbor routing) Uncertainty in positions: can be viewed as uncertainty in the channel. Lifetime is an issue in irregular networks.

  46. Difficulties II (C) Channel characteristics Channel is unknown due to fading and interference (and localization errors) - Slow fading: obstacles, multipath geometry (lognormal) - Fast fading: mobility (Rayleigh, Rice) - Interference: makes channel estimation difficult (need to distinguish between noise and interference) Remark: Low path-loss exponents are desirable in terms of power consumption, but the average per-node throughput goes to zero if the path-loss exponent α = m in an m-dimensional network. To achieve good scaling, we need high α.
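The slow- and fast-fading models named above can be illustrated by sampling per-link gains: a deterministic distance-dependent path loss, lognormal shadowing for slow fading, and Rayleigh fading for fast fading. A minimal sketch with illustrative parameter values (the path-loss exponent, shadowing spread, and reference gain are assumptions):

```python
# Minimal sketch of sampling the channel models named above: distance-dependent
# path loss, lognormal shadowing (slow fading), and Rayleigh fading (fast
# fading).  All numeric parameters are illustrative.
import math, random

def mean_path_gain_db(distance_m, alpha=3.0, ref_gain_db=-40.0):
    """Deterministic path loss with path-loss exponent alpha (gain at 1 m = ref_gain_db)."""
    return ref_gain_db - 10.0 * alpha * math.log10(distance_m)

def sample_link_gain_db(distance_m, shadow_sigma_db=6.0):
    shadowing_db = random.gauss(0.0, shadow_sigma_db)   # lognormal on a linear scale
    rayleigh_power = random.expovariate(1.0)            # |h|^2 ~ Exp(1) for Rayleigh fading
    return mean_path_gain_db(distance_m) + shadowing_db + 10.0 * math.log10(rayleigh_power)

print([round(sample_link_gain_db(10.0), 1) for _ in range(3)])   # three fading realizations
```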

  47. Robust Messaging Strategies Techniques to achieve robustness: • Avoid random node placement. Deploy nodes regularly • No nearest-neighbor routing in random networks • Estimate link quality. Choose good links • Exploit time, frequency, and path diversity: * retransmissions (with implicit/explicit ACK); coding * frequency hopping or spread-spectrum * multipath routing; find backup routes • Reduce interference (good MAC, spread-spectrum, light traffic [high data rates], power control, directional transmission)

  48. Robust Messaging in Red Force Tagging Characteristics: Mobile Tagmote. Large amount of data. Only one connection active. Throughput is crucial. Approach: - Regular network topology - Always use maximum power - Use ARQ-N ACKnowledgments (increases throughput) - Keep track of the number of “retries” per link as a link-quality estimate - Maintain a list of multiple next-hop neighbors (multi-tree structure) Achievable reliability: 90-100% with a goodput of 200 bytes/s.
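Two of these points, estimating link quality from observed retry counts and keeping several candidate next hops, can be sketched as follows. This is a hedged illustration, not Notre Dame's actual implementation; the smoothing constant and data structures are assumptions.

```python
# Hedged sketch: per-link quality estimated from observed ARQ retry counts,
# plus a ranked list of candidate next hops (multi-tree idea).  The smoothing
# constant and structures are illustrative, not Notre Dame's implementation.

class NextHopTable:
    def __init__(self, candidates, smoothing=0.3):
        self.est = {n: 1.0 for n in candidates}   # estimated transmissions per packet
        self.smoothing = smoothing

    def record_send(self, neighbor, retries):
        """Fold the number of retries this packet needed into the link estimate."""
        observed = retries + 1                    # total transmissions for this packet
        old = self.est[neighbor]
        self.est[neighbor] = (1 - self.smoothing) * old + self.smoothing * observed

    def next_hops(self):
        """Candidates ordered best-first; the tail entries serve as backups."""
        return sorted(self.est, key=self.est.get)

table = NextHopTable(["N1", "N2", "N3"])
table.record_send("N2", retries=3)    # N2 needed 3 retries, so its estimate worsens
table.record_send("N1", retries=0)
print(table.next_hops())              # -> ['N1', 'N3', 'N2']
```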
