
Reliable Distributed Systems


Presentation Transcript


  1. Reliable Distributed Systems Overlay Networks

  2. Resilient Overlay Networks • A hot new idea from MIT • Shorthand name: RON • Today: • What’s a RON? • Are these a good idea, or just antisocial? • What next: Underlay networks

  3. What is an overlay network? • A network whose links are based on (end-to-end) IP hops • And that has multi-hop forwarding • I.e. a path through the network will traverse multiple IP “links” • As a “network” service (i.e. not part of an application per se) • I.e. we are not including Netnews, IRC, or caching CDNs in our definition

  4. Why do we have overlay networks? • Historically (late 80’s, early 90’s): • to get functionality that IP doesn’t provide • as a way of transitioning a new technology into the router infrastructure • mbone: IP multicast overlay • 6bone: IPv6 overlay

  5. Why do we have overlay networks? • More recently (mid-90’s): • to overcome scaling and other limitations of “infrastructure-based” networks (overlay or otherwise) • Yoid and End-system Multicast • Two “end-system” or (nowadays) “peer-to-peer” multicast overlays • Customer-based VPNs are a kind of overlay network • If they do multi-hop routing, which most probably don’t

  6. Why do we have overlay networks? • Still more recently (late-90’s, early 00’s): • to improve the performance and reliability of native IP!!! • RON (Resilient Overlay Network) • Work from MIT (Andersen, Balakrishnan) • Based on results of the Detour measurements by Savage et al., Univ. of Washington, Seattle

  7. End-to-end effects of Internet Path Selection • Savage et al., Univ. of Washington, Seattle • Compared the path found by Internet routing with alternates • Alternates composed by gluing together two Internet-routed paths (metric composition sketched below) • Roundtrip time, loss rate, bandwidth • Data sets: Paxson’s, plus new ones from UW • Found improvements in all metrics with these “Detour” routes
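
The Detour comparison rests on how a two-segment alternate path’s metrics combine. The sketch below is mine, not the Detour authors’ code; the function name and the sample numbers are illustrative only. It shows the standard composition: round-trip times add, loss probabilities compound, and bandwidth is the bottleneck of the two segments.

```python
# Illustrative sketch (not the Detour authors' code): how metrics of an
# alternate path compose when two Internet-routed segments are glued
# together at an intermediate host.

def compose_detour(seg1, seg2):
    """Each segment is a dict with rtt (s), loss (probability), bw (bits/s)."""
    return {
        "rtt": seg1["rtt"] + seg2["rtt"],                      # round-trip times add
        "loss": 1 - (1 - seg1["loss"]) * (1 - seg2["loss"]),   # independent losses compound
        "bw": min(seg1["bw"], seg2["bw"]),                     # bottleneck bandwidth
    }

# Example: a direct path vs. a detour via some intermediate host
direct = {"rtt": 0.120, "loss": 0.05, "bw": 2e6}
via_a  = {"rtt": 0.040, "loss": 0.01, "bw": 10e6}   # source -> intermediate
via_b  = {"rtt": 0.050, "loss": 0.01, "bw": 8e6}    # intermediate -> destination

detour = compose_detour(via_a, via_b)
print(detour)   # lower RTT and loss, higher bottleneck bandwidth than the direct path
```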

  8. BGP and other studies • Paxson (ACIRI), Labovitz (U Mich), Chandra (U Texas) • Show that outages are frequent (>1%) • BGP can take minutes to recover

  9. RON Rationale • BGP cannot respond to congestion • Because of information hiding and policy, BGP cannot always find the best path • Private peering links often cannot be discovered • BGP cannot respond quickly to link failures • However, a small dynamic overlay network can overcome these limitations

  10. BGP’s lack of response to congestion • Very hard for routing algorithms to respond to congestion (route around it) • The problem is oscillation • Traffic is moved from the congested link to a lightly-loaded link, then the lightly-loaded link becomes congested, etc. • The ARPANET (a ~70-node network) struggled with this for years • Khanna and Zinky finally solved it (SIGCOMM ’89) • Heavy damping of responsiveness

  11. BGP information hiding [Figure: Site1 (30.1.3/24) reaches the Internet via ISP1 (30.1/16), Site2 (20.1.5/24) via ISP2 (20.1/16); a private peering link joins Site1 and Site2.] Because of the private peering link, Site1 and Site2 can exchange traffic, but Site2 cannot receive Internet traffic via ISP1 (even if policy allows it).

  12. Acceptable Use Policies • Why might Cornell hide a link? • Perhaps Cornell has a great link to the Arecibo telescope in Puerto Rico but doesn’t want all traffic to that island being routed via Cornell • E.g. we pay for it, and need it for scientific research • But any Cornell traffic to Puerto Rico routes over our dedicated link • This is an example of an “AUP” • “Cornell traffic to 123.45.00.00/16 can go via link x, but non-Cornell traffic is prohibited”

  13. BGP information hiding [Figure: same topology as slide 11, but with one link failed (marked X).]

  14. RON can bypass BGP information hiding [Figure: same scenario, with RON nodes RON1 (30.1.3.5) in Site1, RON2 (20.1.5.7) in Site2, and RON3 in the Internet; RON forwards around the failure over the private peering link.] …but in doing so it may violate the AUP

  15. RON test network had private peering links

  16. BGP link failure response • BGP cannot respond quickly to changes in AS path • Hold down to prevent flapping • Policy limitations • But BGP can respond locally to link failures • And, local topology can be engineered for redundancy

  17. Local router/link redundancy • eBGP and/or iBGP can respond to a peering failure without requiring an AS path change • AS path responsiveness is not strictly necessary to build robust internets with BGP • Note: the telephone signalling network (SS7, a data network) is built this way • [Figure: two ISPs (AS1 and AS2) connected by multiple redundant routers and peering links] • Intra-domain routing (i.e. OSPF) can respond to internal ISP failures

  18. Goals of RON • Small group of hosts cooperate to find better-than-native-IP paths • ~50 hosts max, though working to improve • Multiple criteria, application selectable per packet • Latency, loss rate, throughput • Better reliability too • Fast response to outages or performance changes • 10-20 seconds • Policy routing • Avoid paths that violate the AUP (Acceptable Usage Policy) of the underlying IP network • General-purpose library that many applications may use • C++

  19. Some envisioned RON applications • Multi-media conference • Customer-provided VPN • High-performance ISP

  20. Basic approach • Small group of hosts • All hosts ping each other frequently • On the order of once every 10 seconds • 50 nodes produce about 33 kbps of traffic per node! • Run a simplified link-state algorithm over the N² mesh to find best paths • Metric- and policy-based • Route over the best path with a specialized metric- and policy-tagged header • Use hysteresis to prevent route flapping (route selection sketched below)
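
A minimal sketch, assuming the probe mesh yields a full latency matrix, of how a node might pick either the direct path or the best single-intermediate detour, with a simple hysteresis rule to avoid flapping. This is not RON’s actual implementation; the function name and hysteresis threshold are assumptions for the illustration.

```python
# Minimal sketch (not RON's actual implementation): choose, per destination,
# either the direct Internet path or the best single-intermediate detour,
# based on latencies measured over the full N x N probe mesh.
import math

def best_route(lat, src, dst, current=None, hysteresis=0.05):
    """lat[i][j] is the measured latency i -> j (math.inf if unreachable).
    Returns a path [src, dst] or [src, mid, dst]."""
    n = len(lat)
    best_path, best_cost = [src, dst], lat[src][dst]
    for mid in range(n):
        if mid in (src, dst):
            continue
        cost = lat[src][mid] + lat[mid][dst]
        if cost < best_cost:
            best_path, best_cost = [src, mid, dst], cost
    # Hysteresis: keep the current route unless the new one is clearly better.
    if current is not None:
        cur_cost = sum(lat[a][b] for a, b in zip(current, current[1:]))
        if best_cost > (1 - hysteresis) * cur_cost:
            return current
    return best_path

# Example: the direct 0 -> 2 path is down (infinite latency); route via node 1.
lat = [[0, 0.03, math.inf],
       [0.03, 0, 0.04],
       [math.inf, 0.04, 0]]
print(best_route(lat, 0, 2))   # [0, 1, 2]
```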

  21. Major results (tested with 12- and 16-node RONs) • Recovers from most complete outages and all periods of sustained high loss rates of >30% • 18 sec average to route around failures • Routes around throughput failures; doubles throughput 5% of the time • 5% of the time, reduces loss probability by 0.05 or more • A single-hop detour provides almost all the benefit

  22. RON Architecture • Simple send(), recv(callback) API • The router is itself a RON client • Measurement database: local or shared among nodes; a relational DB allows a rich set of query types (API sketched below)
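
To make the slide’s annotations concrete, here is a hedged sketch of what a conduit-style send()/recv(callback) interface might look like. The real RON library is C++; this Python sketch uses invented class and method names purely for illustration.

```python
# Hedged sketch of a conduit-style API with send() and a receive callback,
# in the spirit of slide 22; names and signatures are illustrative, not the
# actual RON C++ interface.

class Conduit:
    def __init__(self, forwarder):
        self.forwarder = forwarder
        self.recv_callback = None

    def send(self, dst, payload, policy_tag=0):
        # Hand the packet to the forwarder, which consults the router
        # for the current best next hop under the given policy.
        self.forwarder.forward(dst, payload, policy_tag)

    def recv(self, callback):
        # Register a callback invoked for packets delivered to this conduit.
        self.recv_callback = callback

    def deliver(self, src, payload):
        if self.recv_callback:
            self.recv_callback(src, payload)

# The router is itself a RON client: it exchanges routing and probe messages
# over its own conduit, so it benefits from the same overlay forwarding.
```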

  23. Conduit, Forwarder, and Router

  24. RON Header (inspired by IPv6!…but not IPv6) • Runs under IP and over UDP (carried in IPv4 packets) • Carries performance metrics • Flow ID: unique per flow, cached to speed up the (3-phase) forwarding decision • One header field selects the conduit (an illustrative layout is sketched below)
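
A hedged illustration of packing an overlay header that travels over UDP/IPv4. The field names, widths, and ordering below are assumptions made for this sketch; they are not the actual RON wire format.

```python
# Illustrative packing of an overlay header carried over UDP/IPv4, in the
# spirit of slide 24. Field names, widths, and order are assumptions for the
# sketch only -- they are NOT the actual RON wire format.
import struct

HDR_FMT = "!BBHIIII"   # assumed fields: version, hop limit, packet type,
                       # flow ID, policy tag, source RON addr, dest RON addr

def pack_header(version, hop_limit, pkt_type, flow_id, policy, src, dst):
    return struct.pack(HDR_FMT, version, hop_limit, pkt_type, flow_id, policy, src, dst)

def unpack_header(data):
    return struct.unpack(HDR_FMT, data[:struct.calcsize(HDR_FMT)])

# The flow ID would be looked up in a per-node cache so that forwarding for
# an established flow can skip the full (3-phase) route/policy decision.
hdr = pack_header(1, 16, 0, 0xABCD1234, 0, 0x1E010305, 0x14010507)
print(unpack_header(hdr))
```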

  25. Link evaluation • Defaults: • Latency (path latency is the sum over links): exponential weighted moving average, lat ← α·lat + (1 − α)·new_sample, with α = 0.9 • Loss rate (combined as a product over links): per-link average of the last 100 samples • Throughput (minimum over links): noisy, so look for at least a 50% improvement; use the simple TCP throughput formula √1.5 / (rtt·√p), where p = loss probability • Plus, the application can run its own evaluator (sketch below)
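
The default evaluators translate fairly directly into code. The sketch below is an illustration, not RON’s source: it applies the α = 0.9 EWMA to latency samples, keeps the last 100 probe outcomes for the loss rate, uses the √1.5/(rtt·√p) throughput estimate, and combines per-link values into path metrics as described above.

```python
# Sketch of the default link/path evaluators from slide 25 (parameter values
# come from the slide; the code structure is illustrative).
from collections import deque
from math import sqrt

ALPHA = 0.9  # EWMA weight for latency

class LinkEstimator:
    def __init__(self):
        self.latency = None                  # EWMA of RTT samples (seconds)
        self.losses = deque(maxlen=100)      # last 100 probe outcomes (1 = lost)

    def add_sample(self, rtt, lost):
        self.losses.append(1 if lost else 0)
        if not lost:
            self.latency = rtt if self.latency is None else \
                ALPHA * self.latency + (1 - ALPHA) * rtt

    @property
    def loss_rate(self):
        return sum(self.losses) / len(self.losses) if self.losses else 0.0

    def throughput(self):
        # Simple TCP throughput formula: sqrt(1.5) / (rtt * sqrt(p))
        p = max(self.loss_rate, 1e-4)        # avoid divide-by-zero on lossless links
        return sqrt(1.5) / (self.latency * sqrt(p))

def path_metrics(links):
    """Combine per-link estimates along a path, as on the slide."""
    latency = sum(l.latency for l in links)              # latencies add
    success = 1.0
    for l in links:
        success *= (1 - l.loss_rate)                     # losses combine multiplicatively
    throughput = min(l.throughput() for l in links)      # bottleneck link
    return latency, 1 - success, throughput
```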

  26. Responding to failure • Probe interval: 12 seconds • Probe timeout: 3 seconds • Routing update interval: 14 seconds

  27. RON overhead • Probe packets: 69 bytes • RON routing updates: 60 + 20(N−1) bytes • With N = 50, this allows recovery times between 12 and 25 s (the arithmetic is sketched below)
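
A back-of-the-envelope check of these numbers, assuming (as this sketch does) that each node probes, and sends a routing update to, every one of its N−1 peers at the intervals from slide 26. For N = 50 this lands near the ~33 kbps per-node figure quoted earlier.

```python
# Back-of-the-envelope overhead estimate using the numbers from slides 26-27.
# Assumption of this sketch: each node probes, and sends a routing update to,
# all N-1 peers at the stated intervals.

def ron_overhead_bps(n, probe_bytes=69, probe_interval=12.0,
                     update_interval=14.0):
    update_bytes = 60 + 20 * (n - 1)                          # routing update size
    probe_bps = (n - 1) * probe_bytes * 8 / probe_interval
    update_bps = (n - 1) * update_bytes * 8 / update_interval
    return probe_bps + update_bps

print(ron_overhead_bps(50) / 1000)   # roughly 31 kbps per node, near the ~33 kbps figure
```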

  28. Two policy mechanisms • Exclusive cliques • Only members of the clique can use the link • Good for the “Internet2” policy • No commercial endpoints went over Internet2 links • General policies • BPF-like (Berkeley Packet Filter) packet matcher and a list of denied links • Note: in spite of this, AUPs may easily, even intentionally, be violated (sketch below)
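
A hedged sketch of both mechanisms: an exclusive-clique check and a general deny rule. Matching on address prefixes is an illustrative stand-in for RON’s BPF-like packet matcher; all identifiers and example values here are invented.

```python
# Sketch of the two policy mechanisms from slide 28. The prefix-based match
# is an illustrative stand-in for RON's BPF-like packet matcher.
from ipaddress import ip_address, ip_network

def clique_allows(link_clique, src_node, dst_node, clique_members):
    """Exclusive clique: a clique-restricted link may only carry traffic
    between members of that clique (e.g. the Internet2 AUP)."""
    if link_clique is None:
        return True
    members = clique_members[link_clique]
    return src_node in members and dst_node in members

def general_policy_allows(pkt_src, pkt_dst, denied_links, link):
    """General policy: a packet-matching rule paired with a list of denied
    links. Here the rule matches on source/destination prefixes."""
    for (src_net, dst_net), links in denied_links.items():
        if (ip_address(pkt_src) in ip_network(src_net) and
                ip_address(pkt_dst) in ip_network(dst_net) and
                link in links):
            return False
    return True

# Examples (invented node names and rules):
members = {"internet2": {"mit", "cornell", "cmu"}}
print(clique_allows("internet2", "mit", "cornell", members))        # True
print(clique_allows("internet2", "mit", "aros", members))           # False
denied = {("0.0.0.0/0", "0.0.0.0/0"): {"internet2"}}                # deny everything on this link
print(general_policy_allows("30.1.3.5", "20.1.5.7", denied, "internet2"))  # False
```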

  29. RON deployment (19 sites): cmu (pa), dsl (nc), nyu, cornell, cable (ma), cisco (ma), mit, .com (ca), .com (ca), dsl (or), cci (ut), aros (ut), utah.edu, .com (tx), vu.nl, lulea.se, ucl.uk, kaist.kr, univ-in-venezuela

  30. AS view

  31. Latency CDF ~20% improvement?

  32. Same latency data, but as a scatterplot; banding is due to different host pairs

  33. RON greatly improves loss rate [Scatterplot of 13,000 samples: 30-min average loss rate on the Internet vs. 30-min average loss rate with RON; the RON loss rate is never more than 30%.]

  34. An order of magnitude fewer failures (30-minute average loss rates)

  35. Resilience Against DoS Attacks

  36. Some unanswered questions • How much benefit comes from smaller or larger RONs? • Would a 4-node RON buy me much? • Do the results apply to other user communities? • The testbed consisted mainly of high-bandwidth users (only 3 nodes were on home broadband) • Research networks may have more private peering than residential ISPs

  37. Some concerns • Modulo unanswered questions, clearly RON provides an astonishing benefit • However . . .

  38. Some concerns • Is RON TCP-unfriendly? • A RON path change looks like a TCP connection that skips slow start • On the other hand, RON endpoints (TCP) would back off after a failure

  39. Some concerns • Would large-scale RON usage result in route instabilities? • At small scale it probably doesn’t, because a few RONs are not enough to saturate a link • Note: Internet stability is built on congestion avoidance within a stable path, not on rerouting

  40. Some concerns • RON’s ambient overhead is significant • Lots of RONs would increase overall Internet traffic and lower performance • This is not TCP-friendly overhead • ~33 kbps (for a 50-node RON) is equivalent to high-quality audio or low-quality video! • Clearly the Internet can’t support much of this • The RON folks are working to reduce the overhead

  41. RON creators’ opinion on overhead • “Not necessarily excessive” • “Our opinion is that this overhead is not necessarily excessive. Many of the packets on today’s Internet are TCP acknowledgments, typically sent for every other TCP data segment. These “overhead” packets are necessary for reliability and congestion control; similarly, RON’s active probes may be viewed as “overhead” that help achieve rapid recovery from failures.”

  42. Some concerns • RONs break AUPs (Acceptable Usage Policies) • RON has its own policies, but requires user cooperation and diligence

  43. Underlay Networks • The idea here is that the Internet has a lot of capacity • So suppose we set some resources aside and constructed a “dark network” • It would lack the entire IP infrastructure but could carry packets, like Ethernet • Could we build a new Internet on it, with better properties?

  44. The vision: side-by-side Internets [Figure: the Internet, MediaNet, SecureNet, and ReliableNet running side by side, sharing hardware but not part of “the Internet”.]

  45. Doing it on the edges • We might squeak by doing it only at the edges • After all, the core of the Internet is “infinitely fast” and loses no packets (usually) • So if the issues are all at the edge, we could offer these new services just at the edge • Moreover, if a data center owns a fat pipe to the core, we might be able to do this just on the consumer side, and just on the last few hops…

  46. Pros and cons • Pros: • With a free hand, we could build new and much “stronger” solutions, at least in the desired dimensions • New revenue opportunities for ISPs • Cons: • These “SuperNets” might need non-standard addressing or non-IP interfaces, which users might then reject • And ISPs would need to agree on a fee model and how to split the revenue

  47. Summary • The Internet itself has become a serious problem for at least some applications • To respond, people have started to hack the network with home-brew routing • But a serious response might need to start from the ground up, and would face political and social challenges!
