
A Scalable Location Service for Geographic Ad Hoc Routing


Presentation Transcript


  1. A Scalable Location Service for Geographic Ad Hoc Routing Jinyang Li, John Jannotti, Douglas S. J. De Couto, David R. Karger, Robert Morris MIT Laboratory for Computer Science

  2. Overview • Motivation for Grid: scalable routing for large ad hoc networks • metropolitan area, 1000s of nodes • Protocol Scalability: The number of packets each node has to forward and the amount of state kept at each node grow slowly with the size of the network.

  3. Current Routing Strategies • Traditional scalable Internet routing • address aggregation hampers mobility • Pro-active topology distribution (e.g. DSDV) • reacts slowly to mobility in large networks • On-demand flooded queries (e.g. DSR) • too much protocol overhead in large networks

  4. Flooding Causes Too Much Packet Overhead in Big Networks • Flooding-based on-demand routing works best in small nets. • Can we route without global topology knowledge? • [Figure: avg. packets transmitted per node per second vs. number of nodes]

  5. Geographic Forwarding Scales Well • Assume each node knows its geographic location. • A addresses a packet to G's latitude and longitude. • C only needs to know its immediate neighbors to forward packets towards G. • Geographic forwarding needs a location service! • [Figure: nodes A–G; C's radio range determines its immediate neighbors]
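
To make the forwarding rule concrete, here is a minimal sketch of greedy geographic forwarding, assuming only that each node knows its own position and its immediate neighbors' positions. The names (next_hop, neighbor_positions) are illustrative, not part of Grid.

    import math

    def dist(a, b):
        """Euclidean distance between two (x, y) positions."""
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def next_hop(self_pos, neighbor_positions, dest_pos):
        """Forward to the neighbor geographically closest to the destination.
        Returns None if no neighbor is closer than we are (a dead end that a
        complete protocol would have to recover from)."""
        best, best_d = None, dist(self_pos, dest_pos)
        for nbr, pos in neighbor_positions.items():
            d = dist(pos, dest_pos)
            if d < best_d:
                best, best_d = nbr, d
        return best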

  6. Possible Designs for a Location Service • Flood to get a node’s location (LAR, DREAM). • excessive flooding messages • Central static location server. • not fault tolerant • too much load on the central server and nearby nodes • the server might be far away, even for nearby nodes, or inaccessible due to network partition. • Every node acts as server for a few others. • good for spreading load and tolerating failures.

  7. Desirable Properties of a Distributed Location Service • Spread load evenly over all nodes. • Degrade gracefully as nodes fail. • Queries for nearby nodes stay local. • Per-node storage and communication costs grow slowly as the network size grows.

  8. GLS’s Spatial Hierarchy • All nodes agree on the global origin of the grid hierarchy. • [Figure: nested level-0 through level-3 squares]
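
Because all nodes share the origin and the level-0 square size (250m x 250m in the simulations), any node can name the level-k square containing a point on its own. A rough sketch, with illustrative names:

    ORIGIN = (0.0, 0.0)   # agreed-upon global origin of the grid hierarchy
    LEVEL0_SIZE = 250.0   # level-0 square side in meters (value from the simulations)

    def square_id(pos, level):
        """Index of the level-`level` square containing pos. A level-k square
        is 2^k level-0 squares on a side, so all nodes compute the same index."""
        side = LEVEL0_SIZE * (2 ** level)
        ix = int((pos[0] - ORIGIN[0]) // side)
        iy = int((pos[1] - ORIGIN[1]) // side)
        return (ix, iy, level)

    def same_square(p1, p2, level):
        """True if two positions fall in the same level-`level` square."""
        return square_id(p1, level) == square_id(p2, level)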

  9. 3 Servers Per Node Per Level • s is n’s successor in that square. • (The successor is the node with the “least ID greater than” n.) • [Figure: node n with a successor server s in each of its sibling level-0, level-1, and level-2 squares]
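
A minimal sketch of the successor rule; the wrap-around to the smallest ID when nothing greater exists reflects the circular ID space, and the names are illustrative:

    def successor(n_id, candidate_ids):
        """The node with the least ID greater than n_id among the candidates,
        wrapping around to the smallest candidate if none is greater."""
        greater = [i for i in candidate_ids if i > n_id]
        if greater:
            return min(greater)
        return min(candidate_ids) if candidate_ids else None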

  10. Queries Search for the Destination’s Successors • Each query step: visit n’s successor at each level. • [Figure: location query path from source x through successors s1, s2, s3 toward destination n]
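
One reading of a single query step, sketched below: the current node forwards the query to the node, among those it has location information for, whose ID is the closest successor of the destination's ID, so each hop gets closer to the destination in ID space. This is an illustrative sketch, not the paper's exact pseudocode.

    def query_next_hop(dest_id, known, id_space=2**32):
        """known: dict {node_id: last_known_position} drawn from the location
        table and level-0 neighbors. Pick the known node whose ID is nearest
        to dest_id going upward around the circular ID space, so dest_id
        itself is the best possible match."""
        if not known:
            return None
        best = min(known, key=lambda nid: (nid - dest_id) % id_space)
        return best, known[best]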

  11. GLS Update (level 0) • Invariant (for all levels): for node n in a square, n’s successor in each sibling square “knows” about n. • Base case: each node in a level-0 square “knows” about all other nodes in the same square. • [Figure: example grid of nodes (IDs 1–29) with their level-0 location table contents]

  12. GLS Update (level 1) • Invariant (for all levels): for node n in a square, n’s successor in each sibling square “knows” about n. • [Figure: node 2 sends level-1 location updates to its successors in the sibling level-0 squares]

  13. GLS Update (level 1), continued • Invariant (for all levels): for node n in a square, n’s successor in each sibling square “knows” about n. • [Figure: location table contents after the level-1 updates; node 2 now appears in its level-1 servers’ tables]

  14. GLS Update (level 2) • Invariant (for all levels): for node n in a square, n’s successor in each sibling square “knows” about n. • [Figure: node 2’s level-2 location updates and the resulting location table contents]
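
A sketch of where a node sends its updates, reusing square_id from the sketch after slide 8: for each level i >= 1, the node's level-(i-1) square has three sibling squares inside its level-i square, and one update goes into each of them, to be stored by the node's successor there. Names and structure are illustrative.

    def sibling_squares(pos, level):
        """The 3 sibling level-(level-1) squares of pos's own level-(level-1)
        square, within the enclosing level-`level` square."""
        ix, iy, _ = square_id(pos, level - 1)
        bx, by = (ix // 2) * 2, (iy // 2) * 2   # 2x2 block forming the level-`level` square
        return [(x, y, level - 1)
                for x in (bx, bx + 1) for y in (by, by + 1)
                if (x, y) != (ix, iy)]

    def update_destinations(pos, top_level):
        """For each level i = 1..top_level, the 3 squares a node sends a
        location update into; within each square the update is forwarded to
        the node's successor, which becomes one of its location servers."""
        return {i: sibling_squares(pos, i) for i in range(1, top_level + 1)}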

  15. GLS Query • [Figure: a query from node 23 for node 1 is forwarded through location servers, using their location table contents, until it reaches node 1]

  16. Challenges for GLS in a Mobile Network • Out-of-date location information in servers. • Tradeoff between maintaining accurate location data and minimizing periodic location update messages. • Adapt location update rate to node speed • Update distant servers less frequently than nearby servers. • Leave forwarding pointers until updates catch up.
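
A sketch of one way the update rate can track node speed, as the slide suggests: trigger updates by distance moved rather than elapsed time, with the movement threshold doubling per level so distant (high-level) servers are updated less often. The threshold value is an assumption for illustration, not the paper's constant.

    UPDATE_THRESHOLD = 100.0  # meters moved before level-1 servers are updated (assumed value)

    def levels_to_update(moved_since_update):
        """moved_since_update: dict {level: meters moved since the last update
        sent to that level's servers}. Returns the levels now due for an
        update; the threshold doubles with each level."""
        return [level for level, moved in moved_since_update.items()
                if moved >= UPDATE_THRESHOLD * (2 ** (level - 1))]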

  17. Performance Analysis • How well does GLS cope with mobility? • How scalable is GLS? • How well does GLS handle node failures? • How local are the queries for nearby nodes?

  18. Simulation Environment • Simulations use ns with CMU’s wireless extensions (IEEE 802.11). • Mobility model: random waypoint with speeds of 0-10 m/s (up to 22 mph). • The area of the square universe grows with the number of nodes, to achieve spatial reuse of the spectrum. • GLS level-0 square is 250m x 250m. • 300 seconds per simulation.

  19. GLS Finds Nodes in Big Mobile Networks • Biggest network simulated: 600 nodes, 2900m x 2900m (4-level grid hierarchy). • Failed queries are not retransmitted in this simulation. • Queries fail because of out-of-date information for destination nodes or intermediate servers. • [Figure: query success rate vs. number of nodes]

  20. GLS Protocol Overhead Grows Slowly • Protocol packets include GLS updates and GLS queries/replies. • [Figure: avg. packets transmitted per node per second vs. number of nodes]

  21. Average Location Table Size is Small • Average location table size grows extremely slowly with the size of the network. • [Figure: avg. location table size vs. number of nodes]

  22. Non-uniform Location Table Size • Example: node 3 ends up as the location server for all other nodes because the simulated universe covers only part of the complete level-3 Grid hierarchy. • Possible solution: dynamically adjust square boundaries. • [Figure: simulated universe within the complete level-3 grid hierarchy; node 3’s location table lists all other nodes]

  23. GLS is Fault Tolerant • Measured query performance immediately after a number of nodes crash simultaneously (200-node networks). • [Figure: query success rate vs. fraction of failed nodes]

  24. Query Path Length is Proportional to the Distance Between Source and Destination • [Figure: ratio of query path length to source/destination distance vs. hop count between source and destination]

  25. Performance Comparison between Grid and DSR • DSR (Dynamic Source Routing): • The source floods a route request to find the destination. • The query reply includes a source route to the destination. • The source uses that source route to send data packets. • Simulation scenario: • 2 Mbps radio bandwidth • CBR sources send four 128-byte packets per second for 20 seconds. • 50% of nodes initiate flows over the 300-second life of the simulation.

  26. Fraction of Data Packets Delivered • Geographic forwarding is less fragile than source routing. • Why does DSR have trouble with > 300 nodes? • [Figure: fraction of data successfully delivered vs. number of nodes, Grid vs. DSR]

  27. Protocol Packet Overhead • DSR is prone to congestion in big networks: • Sources must re-flood queries to fix broken source routes. • These queries cause congestion. • Grid’s queries cause less network load: • Queries are unicast, not flooded. • Un-routable packets are discarded at the source when the query fails. • [Figure: avg. packets transmitted per node per second vs. number of nodes, Grid vs. DSR]

  28. Conclusion • GLS enables routing using geographic forwarding. • GLS preserves the scalability of geographic forwarding. • Current work: • Implementation of Grid in Linux http://pdos.lcs.mit.edu/grid
