
The Design and Implementation of a Next Generation Name Service for the Internet


Presentation Transcript


  1. The Design and Implementation of a Next Generation Name Service for the Internet Venugopalan Ramasubramanian Emin Gün Sirer Computer Science, Cornell University

  2. introduction SIGCOMM announcement in the lobby: “There is a problem with DNS.”

  3. DNS problems • failure resilience • performance • consistency • large scale survey • 593,160 domain names from Yahoo! and DMOZ • 164,089 nameservers

  4. failure resilience (1/2) • 75% of names at large have a delegation bottleneck of only two nameservers

  5. failure resilience (2/2) • even the top-500 web sites have small delegation bottlenecks

  6. physical bottlenecks • majority of names bottlenecked on a single network link

  7. DoS attacks • delegation and network bottlenecks make DoS attacks feasible • January 2001 attack on Microsoft nameservers • DoS attacks high up in the hierarchy can affect the whole system • October 2002 attack on root servers • roots are already disproportionately loaded [Brownlee et al. 01a, 01b] • root anycast helps but does not solve the fundamental problem

  8. performance • lookups can be expensive • ~20-40% of web object retrieval time spent on DNS • ~20-30% of DNS lookups take more than 1s • [Jung et al. 01, Huitema et al. 00, Wills & Shang 00, Bent & Voelker 01] • updates conflict with timeout-driven caching • an emergency remapping/redirection cannot be performed unless anticipated • fundamental tradeoff between lookup and update performance • 86% of records have TTLs longer than 0.5 hours • 95% of records have TTLs shorter than 1 day, only 0.7% of records modified every day

  9. consistency • manual configuration can lead to inconsistencies • 0.8% of the delegations for the name system at large are lame • 2 lame delegations among the top-500 hosts • legacy DNS records do not closely track nameserver failures • 0.6% of nameservers unreachable at any one time • less diversity and robustness than intended • possibly masked until failure

  10. problems • failure resilience • DoS attacks • performance • lookup latency • update propagation • consistency • lame delegations • monopoly power • lookup redirection at TLDs

  11. Cooperative Domain Name System (CoDoNS) approach • supplement and/or replacement for legacy DNS • based on distributed hash tables (DHTs) • self-organizing • failure resilient • scalable • worst-case performance bounds • naïve application of DHTs fails to achieve performance comparable to legacy DNS

  12. prefix-matching DHTs • map all nodes into an identifier space • map all objects into the same space based on their key, e.g. object 0121 = hash(“www.cnn.com”) • lookups take log_b(N) hops • several RTTs on the Internet
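
To make the log_b(N) bound concrete, here is a minimal sketch of prefix-matching routing in the Pastry/Tapestry style, assuming fixed-length base-4 digit-string IDs as in the slide's example; the routing-table layout and function names are illustrative, not the authors' implementation.

```python
# Minimal sketch of prefix-matching (Pastry/Tapestry-style) DHT routing.
# IDs are fixed-length strings of base-4 digits, as in the slide's example.

def shared_prefix_len(a: str, b: str) -> int:
    """Number of leading digits the two IDs have in common."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(key: str, node_id: str, routing_table) -> str:
    """One routing step: forward to a node sharing one more digit with key.

    routing_table[i][d] names a node that shares i digits with node_id and
    has digit d at position i, or None if no such node is known.
    """
    i = shared_prefix_len(key, node_id)
    if i == len(key):
        return node_id                    # this node is the key's home node
    candidate = routing_table[i][int(key[i])]
    return candidate if candidate else node_id

# e.g. node 2012 routing toward key 0121 = hash("www.cnn.com") first jumps
# to a node whose ID starts with 0, then 01, then 012: at most log_b(N) hops.
```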

  13. key intuition • tunable latency • adjust extent of replication for each object • fundamental space-time tradeoff

  14. proactive caching • proactive, model-driven caching can provide low latency with low overhead • optimization problem: minimize the total number of replicas such that average lookup performance ≤ C • for Zipf-like query distributions • number of queries to the rth most popular object ∝ 1/r^α • commonly encountered in practice • DNS is Zipf with α ≈ 0.9 [Jung et al. 01] • high (O(1)) lookup performance • configurable target • continuous range, better than one-hop
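
As a rough illustration of why a Zipf workload rewards replicating only the head of the popularity distribution, this sketch computes the exact share of queries absorbed by the top fraction of objects under q(r) ∝ 1/r^α with the measured α ≈ 0.9; the object count M and the fractions tried are made up for the example.

```python
# Sketch: under a Zipf(alpha) popularity law, q(r) ~ 1/r^alpha, a small
# head of the object population absorbs a disproportionate share of queries.

def zipf_query_share(M: int, x: float, alpha: float = 0.9) -> float:
    """Share of all queries that hit the most popular x*M of M objects."""
    weights = [1.0 / r**alpha for r in range(1, M + 1)]
    return sum(weights[: int(x * M)]) / sum(weights)

M = 100_000  # hypothetical object count
for x in (0.001, 0.01, 0.1):
    print(f"top {x:7.1%} of objects -> {zipf_query_share(M, x):.1%} of queries")
# e.g. with alpha = 0.9 the top 1% of objects already attracts close to half
# of all lookups, so replicating just that head sharply cuts average hops.
```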

  15. optimization problem • minimize (storage/bandwidth): $x_0 + \frac{x_1}{b} + \frac{x_2}{b^2} + \cdots + \frac{x_{K-1}}{b^{K-1}}$ • such that (average lookup time is at most C hops): $K - (x_0^{1-\alpha} + x_1^{1-\alpha} + x_2^{1-\alpha} + \cdots + x_{K-1}^{1-\alpha}) \le C$ and $x_0 \le x_1 \le x_2 \le \cdots \le x_{K-1} \le 1$ • i: an object replicated at level i shares i digits with the nodes that serve it • b: base • K: $\log_b N$ • $x_i$: fraction of objects replicated at level i
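
A small sketch for sanity-checking this model numerically: it evaluates the storage objective and the average-hops constraint for a candidate replication vector. The base, level count, and the vector itself are hypothetical values, not figures from the talk.

```python
# Sketch: evaluating the optimization model from the slide. x[i] is the
# fraction of objects replicated at level i (x[0] <= x[1] <= ... <= 1);
# an object at level i is stored on every node sharing i prefix digits.

def storage_cost(x, b):
    """Objective: per-node storage, x0 + x1/b + ... + x_{K-1}/b^{K-1}."""
    return sum(xi / b**i for i, xi in enumerate(x))

def avg_hops(x, alpha):
    """Constraint LHS: expected lookup hops, K - sum_i x_i^(1-alpha)."""
    return len(x) - sum(xi ** (1 - alpha) for xi in x)

# Hypothetical example with base b = 32, K = 3 levels, alpha = 0.9:
x = [0.001, 0.01, 1.0]       # top 0.1% everywhere, top 1% at level 1
print(storage_cost(x, 32))   # fraction of all objects stored per node
print(avg_hops(x, 0.9))      # must come out <= the target C
```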

  16. optimal closed-form solution • $x^*_i = \left[ \frac{d^i (K' - C)}{1 + d + \cdots + d^{K'-1}} \right]^{\frac{1}{1-\alpha}}$ for $0 \le i \le K' - 1$, and $x^*_i = 1$ for $K' \le i \le K$ • where $d = b^{(1-\alpha)/\alpha}$ • $K'$ is determined by setting $x^*_{K'-1} \le 1$, i.e. $d^{K'-1}(K' - C)/(1 + d + \cdots + d^{K'-1}) \le 1$
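
A sketch of computing this closed form, assuming 0 ≤ C < K: it searches for the largest feasible K' and fills the remaining levels with x = 1. The parameter values in the usage example are hypothetical.

```python
# Sketch: per-level replication fractions from the closed-form solution.
# d = b^((1-alpha)/alpha); K' is the largest level count with x*_{K'-1} <= 1.

def optimal_replication(b: int, K: int, C: float, alpha: float) -> list:
    d = b ** ((1 - alpha) / alpha)
    for Kp in range(K, 0, -1):                # try the most levels first
        if Kp <= C:
            continue                          # (K' - C) must stay positive
        denom = sum(d**j for j in range(Kp))
        x = [(d**i * (Kp - C) / denom) ** (1 / (1 - alpha)) for i in range(Kp)]
        if x[-1] <= 1:                        # feasibility test from the slide
            return x + [1.0] * (K - Kp)
    raise ValueError("no feasible solution; expected 0 <= C < K")

x_star = optimal_replication(b=32, K=3, C=0.5, alpha=0.9)
print([round(v, 4) for v in x_star])      # approx [0.0068, 0.3212, 1.0]
print(3 - sum(v ** 0.1 for v in x_star))  # average hops: exactly C = 0.5
```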

  17. latency vs. overhead tradeoff [plot; axis scale ×10^6]

  18. CoDoNS vision • a cooperative cache for DNS data • composed of local resolvers and DNS nameservers • serves the same namespace as legacy DNS • supports the same interface as legacy DNS

  19. CoDoNS operation • home node initially populates CoDoNS with the binding from legacy DNS • replication level modified in response to the distributed solution of the optimal formula • every node periodically checks relative object popularity, estimates α, and discards replicas or pushes records to neighbors • with hysteresis
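
The periodic maintenance step on this slide might look roughly like the following; the hysteresis margin and the callback names into the DHT layer are hypothetical placeholders, not the authors' code.

```python
# Sketch of a node's periodic replica-adjustment pass (illustrative only).

HYSTERESIS = 0.2   # hypothetical margin to avoid oscillating near the cutoff

def adjust_replicas(records, cutoff_rank, push, discard):
    """records: list of (record, popularity_rank), lower rank = more popular.
    cutoff_rank: derived from the optimal x*_i solution for this level.
    push/discard: callbacks that replicate to neighbors or drop a replica."""
    for record, rank in records:
        if rank < cutoff_rank * (1 - HYSTERESIS):
            push(record)       # clearly above the bar: replicate more widely
        elif rank > cutoff_rank * (1 + HYSTERESIS):
            discard(record)    # clearly below the bar: drop this replica
        # records inside the hysteresis band are deliberately left alone
```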

  20. CoDoNS name management • explicit cache management • records stored until invalidated by updates • TTLs used only for clients, not necessary for consistency in the ring • upon TTL expiration, the home node checks binding for change • local names treated specially • a copy of the record retained at the local nameserver in addition to the home node • queries can be resolved locally without introducing load into the ring • server-side computation supported • low-TTL records not cached, replaced with forwarding pointers • supports Akamai and other CDN trickery • updates can be disseminated quickly at any time • the home node initiates a multicast using entries in DHT routing tables
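
A minimal sketch of the low-TTL rule above, assuming a hypothetical TTL cutoff below which a record is treated as CDN-style and only a forwarding pointer is kept; the record type and cache layout are illustrative.

```python
# Sketch of the record-handling rule on this slide (thresholds hypothetical).
from dataclasses import dataclass

@dataclass
class Record:
    name: str
    data: str
    ttl: int  # seconds

LOW_TTL_CUTOFF = 30  # hypothetical cutoff, in seconds

def handle_record(rec: Record, origin_ns: str, cache: dict) -> None:
    if rec.ttl < LOW_TTL_CUTOFF:
        # CDN-style record: keep only a forwarding pointer, so each query is
        # re-resolved at the origin and per-client answers still work
        cache[rec.name] = ("forward", origin_ns)
    else:
        # cache the record itself; it stays until invalidated by an update,
        # with the home node re-checking the binding when the TTL expires
        cache[rec.name] = ("record", rec)
```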

  21. CoDoNS name security • all records carry cryptographic signatures • if the nameowner has a DNSSEC nameserver, CoDoNS will preserve the original signature • if not, CoDoNS will sign the DNS record with its own master key • malicious peers cannot introduce fake bindings • delegations are cryptographic • names not bound to servers
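
For illustration, a sketch of the fallback-signing rule using Ed25519 from the `cryptography` package; the actual CoDoNS key scheme and record encoding are not specified here, so the key type and function names are placeholders.

```python
# Sketch: sign with the CoDoNS master key only when no DNSSEC signature
# exists; peers reject any binding whose signature fails to verify.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

master_key = Ed25519PrivateKey.generate()   # stand-in for the master key

def sign_record(record_bytes: bytes) -> bytes:
    """Fallback path: used only if the nameowner has no DNSSEC signature."""
    return master_key.sign(record_bytes)

def verify_record(record_bytes: bytes, signature: bytes) -> bool:
    """Malicious peers cannot forge bindings without the private key."""
    try:
        master_key.public_key().verify(signature, record_bytes)
        return True
    except InvalidSignature:
        return False
```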

  22. CoDoNS implications • name delegations can be purchased and propagated independently of server setup • naming hierarchy independent of physical server hierarchy • domains may be served by multiple namespace operators • competitive market for delegation services

  23. CoDoNS deployment • incremental deployment path • uses legacy DNS to populate resource records on demand • completely transparent to clients • can operate without legacy DNS • deployed on PlanetLab • 50 to 100 nodes at any given time • planned expansion to ISPs (e.g. CNNIC)

  24. evaluation • MIT trace • 12-hour trace, 4th December 2000 • 281,943 queries • 47,230 domain names • PlanetLab deployment • 75 nodes • lookup performance • adaptation to flash crowds • load balance • update propagation

  25. CoDoNS lookup performance (1/2)

  26. CoDoNS lookup performance (2/2)

  27. CoDoNS flash crowds • CoDoNS adapts to sudden surges

  28. advantages of CoDoNS • high performance • low lookup latency • updates can be propagated at any time • secure • resilient against denial-of-service attacks • load balances around hotspots • self-configures around host and network failures • consistent • no manual configuration, no lame delegations

  29. future directions • system interface • currently, populated through legacy DNS • ultimately, name bindings manipulated directly • admission control • limit the number of objects any given entity can insert • wider deployment

  30. conclusions • proactive, model-driven caching enables DHTs to support latency-sensitive applications • CoDoNS can serve as a self-configuring, failure- and DoS-resilient, automatic system for disseminating DNS records • can act as a safety net for legacy DNS • prototype deployed on PlanetLab http://www.cs.cornell.edu/people/egs/beehive/

  31. CoDoNS servers
  planetlab3.cs.duke.edu 152.3.136.3
  planetlab01.ethz.ch 129.132.57.2
  planetlab03.ethz.ch 129.132.57.4
  planetlab1.netmedia.gist.ac.kr 203.237.53.170
  planet1.cc.gt.atl.ga.us 199.77.128.193
  pli2-br-1.hpl.hp.com 192.170.103.20
  planet1.ics.forth.gr 139.91.70.61
  planet1.cavite.nodes.planet-lab.org 203.177.76.242
  planet1.leixlip.nodes.planet-lab.org 192.198.151.98
  planetlab1.postel.org 206.117.37.4
  planetlab1.netlab.uky.edu 206.240.24.20
  planetlab1.eecs.umich.edu 141.213.4.201
  planetlab3.csail.mit.edu 128.31.1.13
  planetlab5.csail.mit.edu 128.31.1.15
  phys0bha-5a.chem.msu.ru 212.192.241.155
  soccf-planet-001.comp.nus.edu.sg 137.132.80.104
  planet1.att.nodes.planet-lab.org 192.20.225.130
  planetlab1.ias.csusb.edu 139.182.137.141
  planlab1.cs.caltech.edu 131.215.45.71
  planetlab-1.cmcl.cs.cmu.edu 128.2.198.188
  planetlab-3.cmcl.cs.cmu.edu 128.2.198.199
  planetlab1.cs.cornell.edu 128.84.154.49
  planetlab2.cs.cornell.edu 128.84.154.71
  planetlab1.ewi.tudelft.nl 130.161.40.153
