
Flush: A Reliable Bulk Transport Protocol for Multihop Wireless Networks



Presentation Transcript


  1. Flush: A Reliable Bulk Transport Protocol for Multihop Wireless Networks Sukun Kim†#, Rodrigo Fonseca†, Prabal Dutta†, Arsalan Tavakoli†, David Culler†, Philip Levis*, Scott Shenker†‡, and Ion Stoica†. †University of California at Berkeley, ‡International Computer Science Institute, *Stanford University, #Samsung Electronics. SenSys 2007

  2. Motivating Example: Structural Health Monitoring of the Golden Gate Bridge • All data from all nodes is needed • As quickly as possible • Collecting data from one node at a time is completely acceptable • Must work over a 46-hop network! [Figure: deployment map from SF (south) to Sausalito (north), with spans labeled 246 ft, 1125 ft, 4200 ft, and 500 ft; 56 nodes on the main span and 8 nodes on the side span]

  3. Introduction • Target applications: structural health monitoring, volcanic activity monitoring, bulk data collection • One flow at a time • Removes inter-path interference • Makes it easier to optimize for intra-path interference • Built on top of the MAC layer • Not merged with the MAC layer, which makes porting easier

  4. Table of Contents • Introduction • Algorithm • Implementation • Evaluation • Discussion • Related Work • Conclusion

  5. Flush Algorithm Overview • Receiver-initiated • Selective NACK • Hop-by-hop rate control • Empirically measured interference range

  6. Rate Control [Figure: a chain of nodes 8, 7, 6, 5, 4, 3 forwarding toward the sink; nodes 6, 5, and 4 are marked as interferers] 1 / Rate = Packet Interval = δ8 + δ7 + δ6 + δ5, where δX is the packet transmission time at node X
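
As a worked example (illustrative numbers, not from the slides): if every node's packet transmission time were δ = 16 ms, node 8's packet interval would be 16 + 16 + 16 + 16 = 64 ms, i.e., a sending rate of roughly 15.6 packets per second.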

  7. Interference Range > Reception Range [Figure: signal-strength bands at a receiver: below (noise floor + SNR threshold), a packet cannot be decoded; between (noise floor + SNR threshold) and (noise floor + 2 × SNR threshold), a packet is vulnerable to a jammer; above (noise floor + 2 × SNR threshold), a jammer poses no problem] SNR Threshold – the minimum SNR needed to decode a packet. Jammer – a node that can conflict with the transmission, but cannot be heard.
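
The band logic above is straightforward to express in code. Below is a minimal sketch (hypothetical helper, not part of the Flush implementation), assuming RSSI and noise floor are given in dBm and the SNR threshold in dB:

    /* Classify a received signal against the jammer-vulnerability bands
     * from the slide. Values in dBm except snr_threshold_db (dB).
     * Hypothetical helper, not from the Flush sources. */
    typedef enum {
        UNDECODABLE,            /* below noise floor + SNR threshold */
        VULNERABLE_TO_JAMMER,   /* between the two thresholds */
        SAFE_FROM_JAMMER        /* above noise floor + 2 x SNR threshold */
    } link_class_t;

    link_class_t classify_link(float rssi_dbm, float noise_floor_dbm,
                               float snr_threshold_db) {
        float snr_db = rssi_dbm - noise_floor_dbm;
        if (snr_db < snr_threshold_db)
            return UNDECODABLE;
        if (snr_db < 2.0f * snr_threshold_db)
            return VULNERABLE_TO_JAMMER; /* an unheard node can still corrupt it */
        return SAFE_FROM_JAMMER;
    }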

  8. Identifying the Interference Set [Figure: CDF of the difference between the received signal strength from a predecessor and the local noise floor; y-axis: fraction of links] A large fraction of interferers are detectable and avoidable.

  9. Implementation – Control Information • Control information is snooped from overheard packets • δ: packet transmission time, 1 byte • f: sum of the δ's of interfering nodes, 1 byte • D: packet interval = 1 / rate, 1 byte • δ, f, and D are carried in the packet header and exchanged through snooping
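
A hedged sketch of how these fields might be laid out in C (field names are assumptions; the byte budget follows slide 33: 2 B sequence number + 3 B control + 17 B payload out of the 22 B provided by the routing layer):

    #include <stdint.h>

    /* Illustrative layout of Flush's per-packet fields; names assumed. */
    typedef struct __attribute__((packed)) {
        uint16_t seq;          /* 2 B sequence number                      */
        uint8_t  delta;        /* δ: this node's packet transmission time  */
        uint8_t  f;            /* f: sum of δ's of this node's interferers */
        uint8_t  interval;     /* D: packet interval = 1 / rate            */
        uint8_t  payload[17];  /* application data                         */
    } flush_packet_t;          /* 22 B total, as provided by routing       */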

  10. Implementation – Rate-limited Queue • 16-deep rate-limited queue • Enforces the departure delay D(i) • When a node becomes congested (queue depth reaches 5), it doubles the delay advertised to its descendants • But it continues to drain its own queue at the smaller delay until it is no longer congested
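
A minimal sketch of that congestion rule (constants mirror the slide; the function name and types are assumptions, not the actual TinyOS code):

    #include <stdint.h>
    #include <stdbool.h>

    #define QUEUE_CAPACITY    16  /* 16-deep rate-limited queue */
    #define CONGESTION_DEPTH   5  /* doubling threshold from the slide */

    /* Delay a node advertises to its descendants. The node itself keeps
     * draining its queue at own_delay; only the advertised value doubles
     * while the node is congested. */
    uint8_t advertised_delay(uint8_t queue_depth, uint8_t own_delay) {
        bool congested = queue_depth >= CONGESTION_DEPTH;
        return congested ? (uint8_t)(own_delay * 2) : own_delay;
    }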

  11. Table of Contents • Introduction • Algorithm • Implementation • Evaluation • Discussion • Related Work • Conclusion [Figure: map of the Mirage testbed at Intel Research Berkeley; 100 MicaZ nodes shown in purple; network diameter of 6–7 hops; sink marked]

  12. Packet Throughput of Different Fixed Rates [Figure: packet throughput of fixed-rate streams over different hop counts; y-axis: effective throughput (pkt/s)] No fixed rate is always optimal.

  13. Packet Throughput of Flush [Figure: effective packet throughput of Flush compared to the best fixed rate at each hop count; y-axis: overall throughput (pkt/s)] Flush tracks the best fixed packet rate.

  14. Bandwidth of Flush [Figure: effective bandwidth of Flush compared to the best fixed rate at each hop count; y-axis: overall bandwidth (B/s)] Flush's protocol overhead reduces the effective data rate.

  15. Fraction of Data Transferred in Different Phases • Fraction of data transferred from the 6th hop during the transfer phase and the acknowledgment phase • Greedy best-effort routing is unreliable and exhibits a loss rate of 43.5%; sending at a higher-than-sustainable rate leads to a high loss rate

  16. Amount of Time Spent in Different Phases • Fraction of time spent in different stages • Retransmissions during the acknowledgment phase are expensive and lead to poor throughput

  17. Packet Throughput at Transfer Phase [Figure: effective goodput during the transfer phase; y-axis: transfer-phase throughput (pkt/s)] Flush provides comparable goodput at a lower loss rate; the lower loss rate reduces the time spent in the expensive acknowledgment phase, which in turn increases the effective bandwidth.

  18. Packet Rate over Time for a Source • Source is 7 hops away; data is smoothed by averaging 16 values • Flush-e2e has no in-network rate control • Flush approximates the best fixed rate with the least variance

  19. Maximum Queue Occupancy across All Nodes for Each Packet • Flush exhibits more stable queue occupancies than Flush-e2e

  20. Sending Rate at Lossy Link [Figure: a chain of nodes 6, 5, 4, 3, 2, 1, 0] Packets were dropped from hop 3 to hop 2 with 50% probability between 7 and 17 seconds. Both Flush and Flush-e2e adapt, while the fixed rate overflows its queue.

  21. Queue Length at Lossy Link Flush and Flush-e2e adapt, while the fixed rate overflows its queue

  22. Route Change Experiment • We started a transfer over a 5-hop path • Approximately 21 seconds into the experiment, we forced the node 4 hops from the sink to switch its next hop • Node 4's next-hop change switches every node on the subpath to the root • No packets were lost, and Flush adapted quickly to the change [Figure: topology in which node 4 switches from the path 3a, 2a, 1a to the path 3b, 2b, 1b toward sink 0; node 5 is the source]

  23. Scalability Test [Figure: effective bandwidth from a real-world outdoor scalability test in which 79 nodes formed a 48-hop network with a 35 B payload; y-axis: overall throughput (B/s)] Flush closely tracks or exceeds the best possible fixed rate at all hop distances that we tested.

  24. Table of Contents • Introduction • Algorithm • Implementation • Evaluation • Discussion • Related Work • Conclusion

  25. Discussion • High-power node • Reduces hop count and interference • Not an option in many structural health monitoring deployments due to power and space constraints • Interactions with routing • The routing layer's link estimator breaks down under heavy traffic

  26. Related Work • Li et al. – the capacity of a chain of nodes is limited by interference, measured with 802.11 • ATP, W-TCP – rate-based transmission in the Internet • Wisden – concurrent transmissions without a congestion-control mechanism • Fetch – single flow with selective NACK, but no mention of rate control

  27. Conclusion • Rate-based flow control • Directly measures intra-path interference at each hop • Controls the rate based on interference information • Lightweight solution reduces complexity • Overhearing is used both to measure interference and to exchange control information • Two rules determine the rate • At each node, Flush attempts to send as fast as possible without causing interference at the next hop along the flow • A node's sending rate cannot exceed the sending rate of its successor • In combination, Flush provides bandwidth as good as the best fixed rate, with better adaptability

  28. Questions

  29. Reliability [Figure: receiver-initiated selective NACK between source and sink: the sink requests missing packets 2, 4, 5; the source resends them; the sink then requests the still-missing packets 4, 9, and the source resends those]
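
A minimal sketch of the receiver side of this selective-NACK scheme: the sink scans its received-packet bitmap and builds the list of missing sequence numbers to request, as in the slide's example where packets 2, 4, and 5 are requested. Names and sizes are assumptions, not the Flush sources.

    #include <stdint.h>
    #include <stddef.h>

    /* Collect the sequence numbers of packets not yet received. */
    size_t build_nack_list(const uint8_t *received_bitmap, uint16_t total_pkts,
                           uint16_t *missing, size_t max_missing) {
        size_t n = 0;
        for (uint16_t seq = 0; seq < total_pkts && n < max_missing; seq++) {
            if (!(received_bitmap[seq / 8] & (1u << (seq % 8))))
                missing[n++] = seq;  /* packet seq is still missing */
        }
        return n;                    /* number of packets to request */
    }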

  30. Rate Control: Conceptual Model Assuming the disk model, with N the number of nodes and I the interference range (in hops): Rate = 1 / min(N, I + 2), in packets per packet transmission time
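
As an illustration of this formula (not a number from the slides): with interference range I = 2, a new packet can enter the pipeline every min(N, 4) packet times, so a chain of 4 or more nodes sustains a rate of 1/4.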

  31. Rate Control 1. At each node, Flush attempts to send as fast as possible without causing interference at the next hop along the flow 2. A node's sending rate cannot exceed the sending rate of its successor [Figure: a chain of nodes 8, 7, 6, 5, 4, 3; node 8's delay is computed as d8 = δ8 + H7 = δ8 + δ7 + f7 = δ8 + δ7 + δ6 + δ5]
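
A hedged sketch of how the two rules combine in code (variable names follow the slides: δ is a transmission time, f the sum of the interferers' δ's, H = δ + f; the successor of a node is its next hop toward the sink; this is a sketch, not the actual implementation):

    #include <stdint.h>

    static inline uint8_t max_u8(uint8_t a, uint8_t b) { return a > b ? a : b; }

    /* Rule 1: send no faster than the next hop can forward without
     * interference: d_i = δ_i + H_succ = δ_i + δ_succ + f_succ.
     * Rule 2: never send faster than the successor: D_i = max(d_i, D_succ). */
    uint8_t packet_interval(uint8_t delta_self,
                            uint8_t delta_succ, uint8_t f_succ,
                            uint8_t D_succ) {
        uint8_t d_i = delta_self + delta_succ + f_succ;  /* rule 1 */
        return max_u8(d_i, D_succ);                      /* rule 2 */
    }

With the chain from the figure, node 8 would compute d8 = δ8 + δ7 + f7 = δ8 + δ7 + δ6 + δ5, matching the packet interval on slide 6.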

  32. Implementation • RSSI is measured by snooping • Control information is also snooped • δ, f, and D are put into the packet header and exchanged through snooping • δ, f, and D take 1 byte each, 3 bytes total • Cutoff • A node i considers a successor node (i−j) an interferer of node i+1 if, for any j > 1, rssi(i+1) − rssi(i−j) < 10 dB • The threshold of 10 dB was chosen after empirically evaluating a range of values
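
The cutoff itself is a one-line comparison; a sketch with the threshold from the slide (names assumed):

    #include <stdbool.h>

    #define CUTOFF_DB 10  /* empirically chosen threshold from the slide */

    /* Node i's cutoff test: a successor node i-j (j > 1) is treated as an
     * interferer of node i+1 when its snooped RSSI is within CUTOFF_DB of
     * the RSSI from node i+1. Inputs in dBm; their difference is in dB. */
    bool is_interferer(float rssi_pred_dbm,   /* snooped from node i+1 */
                       float rssi_cand_dbm) { /* snooped from node i-j */
        return (rssi_pred_dbm - rssi_cand_dbm) < CUTOFF_DB;
    }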

  33. Implementation • 16-deep rate-limited queue • Enforces the departure delay D(i) • When a node becomes congested (queue depth reaches 5), it doubles the delay advertised to its descendants • But it continues to drain its own queue at the smaller delay until it is no longer congested • Protocol overhead • Of the 22 B provided by the routing layer: 2 B sequence number + 3 B control field + 17 B payload

  34. Test Methodology • Mirage testbed at Intel Research Berkeley, consisting of 100 MicaZ nodes • Transmit power of -11 dBm • Diameter of 6–7 hops • Results averaged over 4 runs

  35. Bottom Line Performance • High-power node • Reduces hop count and interference • Not an option on the Golden Gate Bridge due to power and space constraints • Interactions with routing • The routing layer's link estimator breaks down under heavy traffic

  36. Average Number of Transmissions per node for sending 1,000 packets

  37. Bandwidth at Transfer Phase [Figure: effective goodput during the transfer phase; y-axis: transfer-phase throughput (B/s)] Effective goodput is computed as the number of unique packets received over the duration of the transfer phase.

  38. Details of Queue Length for Flush-e2e

  39. Memory and Code Footprint

  40. [Figure: rate-control chain diagram (nodes 8 through 3), repeated from slide 6]

  41. [Figure: lossy-link chain topology (nodes 6 through 0), repeated from slide 20]

  42. [Figure: route-change topology (nodes 5, 4, 3a/3b, 2a/2b, 1a/1b, 0), repeated from slide 22]

  43. Motivating Example [Figure: Golden Gate Bridge deployment map, repeated from slide 2] • Every datum from every node is needed • Partial data has little or no value • Must work over 46 hops
