
Topology-Aware Overlay Construction and Server Selection


Presentation Transcript


  1. Topology-Aware Overlay Construction and Server Selection. Sylvia Ratnasamy, Mark Handley, Richard Karp, Scott Shenker. Infocom 2002

  2. Connections of a node

  3. Introduction • Problem: inefficient routing in large-scale overlay networks • In a large-scale overlay network, each node is logically connected to only a small subset of the other participants • Routing is inefficient because little effort is made to ensure that application-level connectivity is congruent with the underlying IP-level network topology • Basic idea: optimize routing paths by exploiting network proximity • Define a binning scheme whereby nodes partition themselves into bins • Nodes that fall within a given bin are relatively close to one another in terms of network latency

  4. Outline • Introduction • Distributed Binning • Topologically-aware construction of overlay networks • Topologically-aware server selection • Conclusion

  5. Extracting proximity information • Measurements that can be used to derive topological information: • traceroute: intended for network diagnostic purposes; too heavy-weight; places excessive load on the network; ICMP is disabled at some sites for security reasons • BGP routing tables: not directly available to end users; require privileged access or a third-party service • Network latency: often a direct indicator of network performance; a light-weight, end-to-end, non-intrusive measurement [Figure: example path from s to t through nodes a, b, c with per-hop latencies]

  6. Distributed Binning • Goal: • Have a set of nodes independently partition themselves into disjoint “bins” • Nodes within a single bin are relatively closer to one another than to nodes outside their bin • Scheme: • A well-known set of machines act as landmarks on the Internet • Nodes form a distributed binning based on their relative distances to these landmarks • A node measures the round-trip time (RTT) to each landmark and orders the landmarks by increasing RTT • Every node thus has an associated ordering of landmarks, which defines its bin

  7. Distributed Binning • Scheme (cont.): • After finding the ordering, the absolute RTT values are folded in as follows • Divide the range of possible latency values into a number of levels • Convert each RTT value into its level number to obtain a level vector, which is appended to the ordering • Example: Level 0 = 0-100 ms, Level 1 = 100-200 ms, Level 2 = > 200 ms; node A measures 57 ms to l2, 117 ms to l3, and 232 ms to l1, so its bin becomes “l2 l3 l1 : 0 1 2” • Topologically close nodes are likely to have the same ordering and hence belong to the same bin [Figure: node A and its RTTs to landmarks l1, l2, l3]
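The binning rule on slides 6-7 is simple enough to sketch in a few lines of Python. The snippet below is illustrative only: the rtt_level() and compute_bin() helpers and the hard-coded level boundaries are assumptions for this example, not part of the paper, and RTT measurements are taken as given inputs.

```python
# Minimal sketch of the distributed binning rule (helper names are illustrative).

LEVEL_BOUNDARIES_MS = [100, 200]  # level 0: 0-100 ms, level 1: 100-200 ms, level 2: > 200 ms

def rtt_level(rtt_ms, boundaries=LEVEL_BOUNDARIES_MS):
    """Map an absolute RTT to its coarse level number."""
    for level, upper in enumerate(boundaries):
        if rtt_ms < upper:
            return level
    return len(boundaries)

def compute_bin(rtts_ms):
    """rtts_ms maps landmark name -> measured RTT in ms.
    Returns the bin: landmarks ordered by increasing RTT plus the level vector."""
    ordering = sorted(rtts_ms, key=rtts_ms.get)
    levels = tuple(rtt_level(rtts_ms[lm]) for lm in ordering)
    return tuple(ordering), levels

# The slide's example: node A measures 57 ms to l2, 117 ms to l3, 232 ms to l1.
print(compute_bin({"l1": 232, "l2": 57, "l3": 117}))
# (('l2', 'l3', 'l1'), (0, 1, 2))  ->  bin "l2 l3 l1 : 0 1 2"
```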

  8. Distributed Binning [Figure: the distributed binning scheme]

  9. Performance of Distributed Binning • Even though it is clearly scalable, does it do a reasonable job? • Metrics used: average inter-bin latency = average latency from a given node to all nodes not in its bin; average intra-bin latency = average latency from a given node to all nodes in its bin; gain ratio = average inter-bin latency / average intra-bin latency • A higher gain ratio indicates a larger reduction in latency from staying within a bin, hence is more desirable
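A small sketch of the gain-ratio metric, assuming a full pairwise latency matrix and a bin label per node are available (the function and variable names here are made up for illustration):

```python
# Per-node gain ratio: average inter-bin latency divided by average intra-bin latency.

def gain_ratio(latency, bins, node):
    """latency[i][j]: latency between nodes i and j; bins[i]: bin label of node i."""
    intra = [latency[node][j] for j in range(len(bins)) if j != node and bins[j] == bins[node]]
    inter = [latency[node][j] for j in range(len(bins)) if bins[j] != bins[node]]
    if not intra or not inter:
        return None  # undefined unless the node has both intra-bin and inter-bin peers
    avg_intra = sum(intra) / len(intra)
    avg_inter = sum(inter) / len(inter)
    return avg_inter / avg_intra  # higher means the bin captures genuinely nearby nodes
```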

  10. Performance of Distributed Binning • Datasets and test topologies: • TS-10K and TS-1K: • Transit-Stub topologies with 10000 and 1000 nodes respectively • 2-level hierarchy • PLRG1 and PLRG2: • Power-Law Random Graphs with 1166 and 1779 nodes • Edge latencies assigned randomly • NLANR: • A distributed network of over 100 active monitors • The monitors systematically perform scheduled measurements between each other

  11. Performance of Distributed Binning • Other binning algorithms used in the experiments: • Random binning: • Each node selects a bin at random • Acts as a lower bound for the gain ratio • Nearest-neighbor clustering: • Each node is initially assigned to a cluster by itself • At each iteration, the two closest clusters are merged into a single cluster • The algorithm terminates when the required number of clusters is obtained
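For reference, a rough sketch of the nearest-neighbor clustering baseline follows. The slide does not say how the distance between two clusters is defined, so single-linkage (minimum pairwise latency) is assumed here; the O(n^3) loop is fine for the small test sets involved.

```python
# Agglomerative nearest-neighbor clustering down to k clusters (single-linkage assumed).

def nearest_neighbor_clusters(latency, k):
    """latency[i][j]: pairwise latency matrix; k: required number of clusters."""
    clusters = [{i} for i in range(len(latency))]  # one cluster per node initially
    while len(clusters) > k:
        # Find the pair of clusters with the smallest inter-cluster latency.
        pairs = [(a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))]
        a, b = min(pairs, key=lambda ab: min(latency[i][j]
                                             for i in clusters[ab[0]] for j in clusters[ab[1]]))
        clusters[a] |= clusters[b]  # merge the two closest clusters
        del clusters[b]
    return clusters
```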

  12. Performance of Distributed Binning • Experiments: [Figures: effect of the number of levels (#landmarks=12); effect of the number of landmarks (#levels=1)]

  13. Performance of Distributed Binning • Experiments: [Figure: comparison of the different binning techniques (#levels=1)]

  14. Topologically-aware construction of overlay networks • Two types of overlay networks • Structured: • Nodes are interconnected in some well-defined manner at the application level • Unstructured: • Much less structured, e.g. Gnutella, Freenet • Metric for evaluation: latency stretch, the ratio of the average latency over the overlay paths to the average latency over the direct IP-level paths

  15. Topologically-sensitive CAN construction • Content-Addressable Network (CAN) • A scalable indexing system for large-scale decentralized storage applications on the Internet • Built around a virtual multi-dimensional Cartesian coordinate space • The entire coordinate space is dynamically partitioned among all the peers, i.e. every peer owns its own distinct zone within the overall space • A CAN peer maintains a routing table that holds the IP address and virtual coordinate zone of each of its neighbors

  16. 2D CAN Example [Figure: state of the system at time t, showing peers, resources, and their zones] • In this 2-dimensional space, a key is mapped to a point (x, y)

  17. Routing in CAN • d-dimensional space partitioned into n zones • Average routing path length: O(d·n^(1/d)) hops • Algorithm: greedily forward the query toward the destination (x, y), at each hop choosing the neighbor whose zone is nearest to the destination [Figure: a query Q(x,y) routed hop by hop to the peer whose zone contains the point (x,y)]
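A toy sketch of that greedy forwarding rule, assuming a simplified in-memory view of the CAN (the nodes dictionary and zone representation here are hypothetical, not the paper's data structures):

```python
# Greedy CAN routing: at each hop, hand the query to the neighbor whose zone
# center is nearest (Euclidean) to the destination point.
import math

def zone_center(zone):
    lo, hi = zone
    return tuple((a + b) / 2 for a, b in zip(lo, hi))

def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def route(start, dest, nodes):
    """nodes[id] = {'zone': (lo_corner, hi_corner), 'neighbors': [ids]}.
    Returns the list of node ids visited on the way to the owner of dest."""
    path = [start]
    current = start
    while True:
        lo, hi = nodes[current]['zone']
        if all(l <= x < h for x, l, h in zip(dest, lo, hi)):
            return path  # the destination point lies inside this node's zone
        current = min(nodes[current]['neighbors'],
                      key=lambda nb: distance(zone_center(nodes[nb]['zone']), dest))
        path.append(current)
```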

  18. Contribution to CAN • Construct CAN topologies that are congruent with the underlying IP topology • Scheme: • With m landmarks, m! orderings are possible • For example, if m=2, the possible orderings are “ab” and “ba” • Partition the coordinate space into m! equal-sized portions, each corresponding to a single ordering • Divide the space along the first dimension into m portions • Each portion is then sub-divided along the second dimension into m-1 portions • Each of these is divided into m-2 portions, and so on • When a node joins the CAN, it determines its associated bin from its delay measurements to the landmarks • According to its landmark ordering, it places itself at a random point in the corresponding portion of the CAN
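One way to turn a landmark ordering into its portion of the coordinate space is sketched below. It assumes a unit coordinate space with at least m-1 dimensions (one per split), which is an illustrative simplification; the helper names are made up and the paper's handling of fewer dimensions is not reproduced here.

```python
# Map a landmark ordering onto its equal-sized portion of a unit CAN space:
# dimension 0 is split m ways, dimension 1 is split (m-1) ways, and so on.
import random

def portion_for_ordering(ordering, landmarks):
    """Return per-dimension (lo, hi) bounds of the portion for this ordering."""
    remaining = list(landmarks)
    bounds = []
    for lm in ordering[:-1]:  # the last landmark is implied by the previous choices
        idx = remaining.index(lm)
        width = 1.0 / len(remaining)
        bounds.append((idx * width, (idx + 1) * width))
        remaining.remove(lm)
    return bounds

def join_point(ordering, landmarks, dims):
    """Pick a random join point inside the portion matching the node's ordering."""
    bounds = portion_for_ordering(ordering, landmarks)
    point = [random.uniform(lo, hi) for lo, hi in bounds]
    point += [random.random() for _ in range(dims - len(point))]  # unconstrained dims
    return point

# Example: a node whose ordering is ("l2", "l3", "l1") out of landmarks l1..l3
# lands somewhere in the slice x0 in [1/3, 2/3), x1 in [1/2, 1).
print(join_point(("l2", "l3", "l1"), ["l1", "l2", "l3"], dims=2))
```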

  19. Gain in CAN using Distributed Binning [Figures: stretch for a 2D CAN on topology TS-1K (#levels=1) and on topology PLRG2 (#levels=1)]

  20. Topologically-aware construction of unstructured overlays • Targets much less structured overlays such as Gnutella and Freenet • Focuses on the following general problem in unstructured overlays: “Given a set of n nodes on the Internet, have each node pick k neighbor nodes from this set so that the average routing latency on the resultant overlay is low” • Constructing the optimal overlay is NP-hard, so a heuristic called Short-Long is used

  21. Topologically-aware construction of unstructured overlays • Short-Long heuristic: • A node picks its k neighbors by taking the k/2 nodes closest to itself and then picking another k/2 nodes at random • This yields well-connected pockets of nearby nodes, with the random picks providing inter-connections to distant pockets • BinShort-Long (contribution): • A node picks k/2 neighbors at random from its own bin and the remaining k/2 at random from all nodes [Figure legend: current node, nearby nodes, distant nodes, other nodes]
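The two heuristics compared on this slide can be sketched as follows; the snippet assumes Short-Long has a full latency matrix while BinShort-Long only needs each node's bin label (function and variable names are illustrative).

```python
# Neighbor selection for the Short-Long and BinShort-Long heuristics.
import random

def short_long(node, latency, k):
    """Pick the k/2 closest nodes plus k/2 nodes chosen at random."""
    others = [j for j in range(len(latency)) if j != node]
    closest = sorted(others, key=lambda j: latency[node][j])[: k // 2]
    rest = [j for j in others if j not in closest]
    return closest + random.sample(rest, k - len(closest))

def bin_short_long(node, bins, k):
    """Pick k/2 neighbors at random from the node's own bin, plus k/2 at random overall."""
    same_bin = [j for j in range(len(bins)) if j != node and bins[j] == bins[node]]
    local = random.sample(same_bin, min(k // 2, len(same_bin)))
    others = [j for j in range(len(bins)) if j != node and j not in local]
    return local + random.sample(others, k - len(local))
```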

  22. Gain in Unstructured Overlay using Distributed Binning [Figure: unstructured overlays on TS-10K; #levels=1; #landmarks=12]

  23. Topology-aware server selection • Replication of content across the Internet gives rise to the problem of server selection • Parameters: server load and distance (in terms of network latency) [Figure: clients being directed to replicated servers]

  24. Topology-aware server selection • The server selection process with distributed binning works as follows: • If one or more servers fall within the same bin as the client, the client is redirected to a random server from its own bin • If no server exists within the same bin as the client, a server whose bin is most similar to the client’s bin is selected at random • Performance is compared to 3 schemes: • Random: the client selects a server at random • Hotz metric: uses RTT measurements from a node to well-known landmarks to estimate inter-node distance (via the triangle inequality) • Cartesian distance: computes the Euclidean distance between the level vectors of client and server and selects the server with minimum distance • Measure for evaluation: latency stretch, the ratio of the latency from the client to the selected server to the latency from the client to the closest available server
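A minimal sketch of the bin-based selection rule is given below. The slide does not define “most similar bin”, so the length of the shared landmark-ordering prefix is assumed as the similarity measure; the helper names are made up for illustration.

```python
# Bin-based server selection: prefer a server from the client's own bin,
# otherwise fall back to a server whose bin shares the longest ordering prefix.
import random

def common_prefix_len(a, b):
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def select_server(client_bin, server_bins):
    """client_bin: landmark ordering tuple; server_bins: dict server -> ordering."""
    same = [s for s, b in server_bins.items() if b == client_bin]
    if same:
        return random.choice(same)  # random server from the client's own bin
    best = max(common_prefix_len(b, client_bin) for b in server_bins.values())
    candidates = [s for s, b in server_bins.items()
                  if common_prefix_len(b, client_bin) == best]
    return random.choice(candidates)  # random server among the most similar bins
```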

  25. Topology-aware server selection • Comparison of the different schemes under the following conditions: • 12 landmarks and 3 levels • 1000 servers for TS-10K; 100 servers for TS-1K, PLRG1, and PLRG2; 10 servers for NLANR

  26. Topology-aware server selection: node perspective [Figures: CDF of latency stretch for the TS-10K data and for the NLANR data]

  27. Conclusion • Described a simple, scalable binning scheme that can be used to infer network proximity information • The nature of the underlying network topology affects the behavior of the scheme • The scheme is applied to the problems of topologically-aware overlay construction and server selection • Three applications of distributed binning are given: • Structured overlays • Unstructured overlays • Server selection • A small number of landmarks already yields significant improvements • The scheme can be regarded as a network-level GPS-like system

  28. Happy end! Thank you for your patience!
