
Today’s Lecture

Explore the relationship between peer-to-peer systems and the Domain Name System (DNS), and how incentives can build robustness into BitTorrent. Learn about the problems with centralizing DNS and alternative approaches such as decentralized lookups and caching. Understand the goals of DNS and the design principles behind it, including hierarchy, zones, and servers. Discover how DNS resolution works and the impact of workload and caching on DNS performance.


Presentation Transcript


  1. Today’s Lecture • P2P and DNS • Required readings • Peer-to-Peer Systems • Do incentives build robustness in BitTorrent? • Optional readings • DNS Caching, Coral CDN, Semantic-Free Referencing

  2. Overview • DNS • P2P Lookup Overview • Centralized/Flooded Lookups • BitTorrent

  3. Obvious Solutions (1) Why not centralize DNS? • Single point of failure • Traffic volume • Distant centralized database • Single point of update • Doesn’t scale!

  4. Obvious Solutions (2) Why not use /etc/hosts? • Original name-to-address mapping • Flat namespace • /etc/hosts • SRI kept the main copy • Downloaded regularly • Count of hosts was increasing: machine per domain → machine per user • Many more downloads • Many more updates

  5. Domain Name System Goals • Basically building a wide-area distributed database • Scalability • Decentralized maintenance • Robustness • Global scope • Names mean the same thing everywhere • Don’t need • Atomicity • Strong consistency • CAP theorem – hard to get all of consistency, availability, and partition tolerance at once

  6. DNS Records • DB contains tuples called resource records (RRs) • RR format: (class, name, value, type, ttl) • Classes = Internet (IN), Chaosnet (CH), etc. • Each class defines the values associated with each type • For class IN: • Type=A: name is a hostname, value is its IP address • Type=NS: name is a domain (e.g. foo.com), value is the name of the authoritative name server for this domain • Type=CNAME: name is an alias for some “canonical” (the real) name, value is the canonical name • Type=MX: value is the hostname of the mail server associated with name
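
For concreteness, here is roughly how a few RRs of each type look in standard zone-file notation (all names and addresses below are made-up placeholders):

    ; name          ttl   class type   value
    foo.com.        3600  IN    NS     ns1.foo.com.      ; authoritative server for foo.com
    www.foo.com.     300  IN    A      192.0.2.10        ; hostname -> IP address
    ftp.foo.com.     300  IN    CNAME  www.foo.com.      ; alias -> canonical name
    foo.com.        3600  IN    MX     10 mail.foo.com.  ; mail server for foo.com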

  7. DNS Design: Hierarchy Definitions • Each node in the hierarchy stores a list of names that end with the same suffix • Suffix = path up the tree • E.g., given the tree below, where would the following be stored: Fred.com; Fred.edu; Fred.cmu.edu; Fred.cmcl.cs.cmu.edu; Fred.cs.mit.edu • [Figure: name hierarchy with root at the top; children org, com, uk, net, edu; under edu: mit, gwu, ucb, cmu, bu; under cmu: cs, ece; under cs: cmcl]

  8. DNS Design: Zone Definitions • Zone = contiguous section of the name space • E.g., a complete tree, a single node, or a subtree • A zone has an associated set of name servers • [Figure: the hierarchy (root; org, ca, com, uk, net, edu; under edu: mit, gwu, ucb, cmu, bu; under cmu: cs, ece; under cs: cmcl) with three example zones marked: a subtree, a single node, and the complete tree]

  9. DNS Design: Cont. • Zones are created by convincing the owner node to create/delegate a subzone • Records within a zone are stored on multiple redundant name servers • Primary/master name server updated manually • Secondary/redundant servers updated by zone transfer of the name space • Zone transfer is a bulk transfer of the “configuration” of a DNS server – uses TCP to ensure reliability • Example: • CS.CMU.EDU created by CMU.EDU administrators
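
A zone transfer can be requested explicitly with dig’s AXFR query type; most servers refuse transfers from unauthorized clients, so this only illustrates the mechanism (server and zone names are placeholders):

    dig @ns1.foo.com foo.com AXFR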

  10. Servers/Resolvers • Each host has a resolver • Typically a library that applications can link to • Local name servers are hand-configured (e.g. /etc/resolv.conf) • Name servers • Either responsible for some zone, or… • Local servers • Do lookups of distant host names for local hosts • Typically answer queries about the local zone
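
The hand-configured part is tiny; a minimal /etc/resolv.conf pointing the resolver library at a local name server might look like this (the address is a documentation-range placeholder):

    nameserver 192.0.2.53
    search cs.cmu.edu cmu.edu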

  11. DNS: Root Name Servers • Responsible for the “root” zone • Approx. a dozen root name servers worldwide • Currently {a-m}.root-servers.net • Local name servers contact root servers when they cannot resolve a name • Configured with well-known root servers

  12. Physical Root Name Servers • Several root servers have multiple physical servers • Packets routed to “nearest” server by “Anycast” protocol • 346 servers total

  13. Typical Resolution • [Figure: the client asks its local DNS server for www.cs.cmu.edu; the local server queries the root & edu DNS server (referral: NS ns1.cmu.edu), then the ns1.cmu.edu DNS server (referral: NS ns1.cs.cmu.edu), then the ns1.cs.cmu.edu DNS server, which answers A www=IPaddr]

  14. Lookup Methods • Recursive query: the queried server goes out and searches for more info, and only returns the final answer or “not found” • Iterative query: the queried server responds with as much as it knows: “I don’t know this name, but ask this server” • Workload impact on choice? • The local server typically does recursive lookups; root/distant servers answer iteratively • [Figure: requesting host surf.eurecom.fr asks its local name server dns.eurecom.fr, which iteratively queries a root name server, the intermediate name server dns.umass.edu, and the authoritative name server dns.cs.umass.edu to resolve gaia.cs.umass.edu (steps 1-8)]
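
The local server’s iterative loop can be sketched in a few lines; this assumes the third-party dnspython library and skips the CNAME chasing, glue-less referrals, and error handling a real resolver needs:

    import dns.message
    import dns.query
    import dns.rdatatype

    def iterative_resolve(name, server="198.41.0.4"):  # a.root-servers.net
        """Follow referrals downward from the root, like a local server
        issuing iterative queries on behalf of its clients."""
        for _ in range(10):                            # referral-depth limit
            query = dns.message.make_query(name, dns.rdatatype.A)
            resp = dns.query.udp(query, server, timeout=3)
            if resp.answer:
                return resp.answer                     # final answer
            # Referral ("ask this server"): use a glue A record if present
            glue = [rr for rrset in resp.additional
                    if rrset.rdtype == dns.rdatatype.A for rr in rrset]
            if not glue:
                raise RuntimeError("no glue; must resolve the NS name first")
            server = glue[0].address
        raise RuntimeError("too many referrals")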

  15. Workload and Caching • What workload do you expect for different servers/names? • Why might this be a problem? How can we solve this problem? • DNS responses are cached • Quick response for repeated translations • Other queries may reuse some parts of lookup • NS records for domains • DNS negative queries are cached • Don’t have to repeat past mistakes • E.g. misspellings, search strings in resolv.conf • Cached data periodically times out • Lifetime (TTL) of data controlled by owner of data • TTL passed with every record
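
The caching rules above boil down to a map whose entries expire after the owner-supplied TTL; a minimal sketch (a hypothetical helper class, not any real resolver’s code). The same structure handles negative caching if a “no such name” marker is stored as the value:

    import time

    class DnsCache:
        def __init__(self):
            self.store = {}                       # name -> (value, expiry time)

        def put(self, name, value, ttl):
            self.store[name] = (value, time.time() + ttl)

        def get(self, name):
            entry = self.store.get(name)
            if entry is None:
                return None                       # never cached (or evicted)
            value, expires_at = entry
            if time.time() >= expires_at:         # TTL elapsed: data timed out
                del self.store[name]
                return None
            return value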

  16. Typical Resolution • [Figure: repeat of slide 13 for comparison: client asks local DNS server for www.cs.cmu.edu; local server queries the root & edu DNS server (NS ns1.cmu.edu), the ns1.cmu.edu DNS server (NS ns1.cs.cmu.edu), and the ns1.cs.cmu.edu DNS server (A www=IPaddr)]

  17. Subsequent Lookup Example • [Figure: for a later lookup of ftp.cs.cmu.edu, the client asks the local DNS server, which uses its cached NS records to skip the root & edu and cmu.edu servers and queries the cs.cmu.edu DNS server directly, getting back ftp=IPaddr]

  18. Reliability • DNS servers are replicated • Name service available if ≥ one replica is up • Queries can be load-balanced between replicas • UDP used for queries • Need reliability → must implement it on top of UDP! • Why not just use TCP? • Try alternate servers on timeout • Exponential backoff when retrying the same server • Same identifier for all queries • Don’t care which server responds
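
The reliability recipe on this slide (timeouts, alternate servers, exponential backoff, one identifier so any replica’s answer is accepted) fits in a short sketch; building the actual DNS query packet is out of scope here, so payload is assumed to be a pre-built query datagram:

    import socket

    def query_with_retry(payload, servers, base_timeout=1.0, max_tries=4):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        timeout = base_timeout
        for attempt in range(max_tries):
            server = servers[attempt % len(servers)]  # try alternate replicas
            sock.settimeout(timeout)
            try:
                sock.sendto(payload, (server, 53))
                data, _ = sock.recvfrom(4096)
                return data     # same query ID either way: any replica will do
            except socket.timeout:
                timeout *= 2    # exponential backoff before retrying
        raise TimeoutError("no DNS server answered")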

  19. Reverse Name Lookup • 128.2.206.138? • Lookup 138.206.2.128.in-addr.arpa • Why is the address reversed? • Happens to be www.intel-iris.net and mammoth.cmcl.cs.cmu.edu → what will reverse lookup return? Both? • Should only return the name that reflects the address allocation mechanism
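
Constructing the reverse-lookup name is pure string manipulation, and the standard library wraps the actual PTR query; a small illustration:

    import socket

    ip = "128.2.206.138"
    # Octets are reversed because DNS delegates from the rightmost label
    # downward, mirroring how address blocks are allocated (128/8, 128.2/16, ...)
    query_name = ".".join(reversed(ip.split("."))) + ".in-addr.arpa"
    print(query_name)                  # 138.206.2.128.in-addr.arpa

    # socket.gethostbyaddr performs the PTR lookup; what it returns depends
    # on what the address block's owner has published.
    # print(socket.gethostbyaddr(ip))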

  20. Prefetching • Name servers can add additional data to any response • Typically used for prefetching • CNAME/MX/NS typically point to another host name • Responses include address of host referred to in “additional section”

  21. Root Zone • Generic Top Level Domains (gTLD) = .com, .net, .org, etc… • Country Code Top Level Domains (ccTLD) = .us, .ca, .fi, .uk, etc… • Root servers ({a-m}.root-servers.net) also used to cover the gTLD domains • Load on root servers was growing quickly! • Moving .com, .net, .org off the root servers was clearly necessary to reduce load → done Aug 2000

  22. New gTLDs • .info → general info • .biz → businesses • .aero → air-transport industry • .coop → business cooperatives • .name → individuals • .pro → accountants, lawyers, and physicians • .museum → museums • Only new ones active so far = .info, .biz, .name

  23. New Registrars • Network Solutions (NSI) used to handle all registrations, root servers, etc… • Clearly not the democratic (Internet) way • Now a large number of registrars can create new domains → however, NSI still handles the root servers

  24. Do you trust the TLD operators? • Wildcard DNS record for all .com and .net domain names not yet registered by others • September 15 – October 4, 2003 • Redirected these domain names to the Verisign web portal (SiteFinder) • February 2004: Verisign sues ICANN • What services might this break?

  25. Protecting the Root Nameservers: Defense Mechanisms • Redundancy: 13 root nameservers • IP Anycast for root DNS servers {c,f,i,j,k}.root-servers.net (RFC 3258) • Most physical nameservers lie outside of the US • Sophisticated? Why did nobody notice? • Example NS record: seshan.org. 13759 NS www.seshan.org. • [News clipping: “Attack On Internet Called Largest Ever”, David McGuire and Brian Krebs, Washington Post, 2002-10-23: The heart of the Internet sustained its largest and most sophisticated attack ever, starting late Monday, according to officials at key online backbone organizations. Around 5:00 p.m. EDT on Monday, a “distributed denial of service” (DDOS) attack struck the 13 “root servers” that provide the primary roadmap for almost all Internet communications. Despite the scale of the attack, which lasted about an hour, Internet users worldwide were largely unaffected, experts said.]

  26. DNS Hacks: Blackhole Lists • First: Mail Abuse Prevention System (MAPS) • Paul Vixie, 1997 • Today: Spamhaus, spamcop, dnsrbl.org, etc. • Different addresses refer to different reasons for blocking:

    % dig 91.53.195.211.bl.spamcop.net
    ;; ANSWER SECTION:
    91.53.195.211.bl.spamcop.net. 2100 IN A 127.0.0.2
    ;; ANSWER SECTION:
    91.53.195.211.bl.spamcop.net. 1799 IN TXT "Blocked - see http://www.spamcop.net/bl.shtml?211.195.53.91"
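
Mechanically, a blackhole-list check is just an A-record lookup of the reversed address under the list’s zone; a hedged sketch using only the standard library (the return-code semantics differ from list to list):

    import socket

    def is_blacklisted(ip, zone="bl.spamcop.net"):
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            code = socket.gethostbyname(query)  # e.g. 127.0.0.2 encodes a reason
            return True, code
        except socket.gaierror:                 # NXDOMAIN -> not listed
            return False, None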

  27. DNS Experience • 23% of lookups get no answer • Retransmit aggressively → most packets in the trace belong to unanswered lookups! • Correct answers tend to come back quickly/with few retries • 10–42% negative answers → most = no such name exists • Inverse lookups and bogus NS records • Worst 10% of lookup latency got much worse • Median 85 → 97 ms, 90th percentile 447 → 1176 ms • Increasing share of low-TTL records → what is happening to caching?

  28. DNS Experience • Hit rate for DNS = 80% → 1 − (#DNS lookups/#connections) • Most Internet traffic is Web • What does a typical page look like? → average of 4–5 embedded objects → needs 4–5 transfers → accounts for the 80% hit rate! • 70% hit rate for NS records → i.e. don’t go to the root/gTLD servers • NS TTLs are much longer than A TTLs → NS-record caching is much more important to scalability • Name distribution = Zipf-like = 1/x^a • A records with TTL = 10 minutes give hit rates similar to TTL = infinite • 10-client hit rate ≈ 1000+-client hit rate
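
The 80% figure is just arithmetic on the lookups-per-connection ratio: a page plus its 4-5 embedded objects means roughly five connections per DNS lookup, so

    hit rate = 1 - (#DNS lookups / #connections) = 1 - 1/5 = 80%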

  29. Some Interesting Alternatives • CoDNS • Lookup failures • Packet loss • LDNS overloading • Cron jobs • Maintenance problems • Cooperative name lookup scheme • If local server OK, use local server • When failing, ask peers to do lookup • Push DNS • Top of DNS hierarchy is relatively stable • Why not replicate much more widely?

  30. Overview • DNS • P2P Lookup Overview • Centralized/Flooded Lookups • BitTorrent

  31. Peer-to-Peer Networks • Typically each member stores/provides access to content • Basically a replication system for files • Always a tradeoff between the possible locations of files and searching difficulty • Peer-to-peer allows files to be anywhere → searching is the challenge • A dynamic member list makes it more difficult • What other systems have similar goals? • Routing, DNS

  32. The Lookup Problem • [Figure: a publisher stores (Key=“title”, Value=MP3 data…) at one node; a client somewhere else on the Internet issues Lookup(“title”); nodes N1-N6 lie in between. How does the query find the data?]

  33. Centralized Lookup (Napster) • [Figure: the publisher at N4 registers its content with a central DB via SetLoc(“title”, N4); the client sends Lookup(“title”) to the DB, which replies with N4; peers N1-N9 hold the data themselves] • Simple, but O(N) state and a single point of failure

  34. Flooded Queries (Gnutella) • [Figure: the client’s Lookup(“title”) is flooded from neighbor to neighbor across N1-N9 until it reaches the publisher at N4, which holds Key=“title”, Value=MP3 data…] • Robust, but worst case O(N) messages per lookup

  35. Routed Queries (Chord, etc.) • [Figure: the client’s Lookup(“title”) is forwarded along a routing structure through a few intermediate nodes (N1-N9) to the publisher at N4, rather than being flooded everywhere]

  36. Overview • DNS • P2P Lookup Overview • Centralized/Flooded Lookups • BitTorrent

  37. Centralized: Napster • Simple centralized scheme → motivated by the ability to sell/control • How to find a file: • On startup, client contacts the central server and reports its list of files • Query the index system → returns a machine that stores the required file • Ideally this is the closest/least-loaded machine • Fetch the file directly from that peer
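
The central-index scheme is little more than a map from file names to peers; a hedged sketch (hypothetical class, not Napster’s actual protocol):

    class CentralIndex:
        """Central server: file name -> set of peers reporting that file."""
        def __init__(self):
            self.index = {}

        def register(self, peer, files):
            # On startup, a client reports the list of files it stores.
            for f in files:
                self.index.setdefault(f, set()).add(peer)

        def lookup(self, filename):
            # Return some peer holding the file (ideally closest/least loaded;
            # here we pick arbitrarily). The fetch itself happens peer-to-peer.
            peers = self.index.get(filename)
            return next(iter(peers)) if peers else None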

  38. Centralized: Napster • Advantages: • Simple • Easy to implement sophisticated search engines on top of the index system • Disadvantages: • Robustness, scalability • Easy to sue!

  39. Flooding: Old Gnutella • On startup, client contacts any servent (server + client) in network • Servent interconnection used to forward control (queries, hits, etc) • Idea: broadcast the request • How to find a file: • Send request to all neighbors • Neighbors recursively forward the request • Eventually a machine that has the file receives the request, and it sends back the answer • Transfers are done with HTTP between peers

  40. Flooding: Old Gnutella • Advantages: • Totally decentralized, highly robust • Disadvantages: • Not scalable; the entire network can be swamped with requests (to alleviate this problem, each request has a TTL) • Especially hard on slow clients • At some point broadcast traffic on Gnutella exceeded 56 kbps – what happened? • Modem users were effectively cut off!

  41. Flooding: Old Gnutella Details • Basic message header • Unique ID, TTL, Hops • Message types • Ping – probes network for other servents • Pong – response to ping, contains IP addr, # of files, # of Kbytes shared • Query – search criteria + speed requirement of servent • QueryHit – successful response to Query, contains addr + port to transfer from, speed of servent, number of hits, hit results, servent ID • Push – request to servent ID to initiate connection, used to traverse firewalls • Ping, Queries are flooded • QueryHit, Pong, Push reverse path of previous message
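
The forwarding rules (unique ID for duplicate suppression, TTL decrement, responses along the reverse path) can be sketched as follows; this illustrates the mechanism only, not the real Gnutella wire format, and it delivers the QueryHit one hop back rather than relaying it hop-by-hop to the originator:

    def make_node(addr, files=()):
        return {"addr": addr, "files": set(files),
                "neighbors": [], "inbox": [], "seen": set()}

    def handle_query(node, msg_id, ttl, criteria, from_peer):
        if msg_id in node["seen"] or ttl <= 0:
            return                       # per-servent dedup / expired message
        node["seen"].add(msg_id)
        if criteria in node["files"]:    # a hit: answer toward the requester
            from_peer["inbox"].append(("QueryHit", msg_id, node["addr"]))
        for peer in node["neighbors"]:
            if peer is not from_peer:    # flood to everyone but the sender
                handle_query(peer, msg_id, ttl - 1, criteria, node)

    # Tiny two-node example: m1 queries its neighbor m2, which holds "E".
    m1, m2 = make_node("m1"), make_node("m2", files={"E"})
    m1["neighbors"].append(m2); m2["neighbors"].append(m1)
    handle_query(m2, msg_id=1, ttl=7, criteria="E", from_peer=m1)
    print(m1["inbox"])                   # [('QueryHit', 1, 'm2')]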

  42. Flooding: Old Gnutella Example • Assume m1’s neighbors are m2 and m3; m3’s neighbors are m4 and m5; … • [Figure: the query “E?” floods from m1 through m2 and m3 out to m4, m5, and m6; the node holding E answers, and E travels back along the reverse path. Letters A-F label the files at each node]

  43. Flooding: Gnutella, Kazaa • Modifies the Gnutella protocol into a two-level hierarchy • Hybrid of Gnutella and Napster • Supernodes • Nodes that have better connections to the Internet • Act as temporary indexing servers for other nodes • Help improve the stability of the network • Standard nodes • Connect to supernodes and report their list of files • Allows slower nodes to participate • Search • Broadcast (Gnutella-style) search across supernodes • Disadvantages • Kept centralized registration → allowed for lawsuits

  44. Overview • DNS • P2P Lookup Overview • Centralized/Flooded Lookups • BitTorrent

  45. Peer-to-Peer Networks: BitTorrent • BitTorrent history and motivation • 2002: B. Cohen debuted BitTorrent • Key motivation: popular content • Popularity exhibits temporal locality (Flash Crowds) • E.g., Slashdot/Digg effect, CNN Web site on 9/11, release of a new movie or game • Focused on efficient fetching, not searching • Distribute same file to many peers • Single publisher, many downloaders • Preventing free-loading

  46. BitTorrent: Simultaneous Downloading • Divide large file into many pieces • Replicate different pieces on different peers • A peer with a complete piece can trade with other peers • Peer can (hopefully) assemble the entire file • Allows simultaneous downloading • Retrieving different parts of the file from different peers at the same time • And uploading parts of the file to peers • Important for very large files

  47. BitTorrent: Tracker • Infrastructure node • Keeps track of peers participating in the torrent • Peers register with the tracker • Peer registers when it arrives • Peer periodically informs tracker it is still there • Tracker selects peers for downloading • Returns a random set of peers • Including their IP addresses • So the new peer knows who to contact for data • Can have “trackerless” system using DHT
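
A hedged sketch of the tracker’s bookkeeping (hypothetical class; real trackers speak an HTTP “announce” protocol and keep per-torrent state):

    import random
    import time

    class Tracker:
        def __init__(self, peer_timeout=1800):
            self.peers = {}                  # peer address -> last announce time
            self.peer_timeout = peer_timeout

        def announce(self, addr):
            """Called when a peer arrives, then periodically while it stays."""
            self.peers[addr] = time.time()
            cutoff = time.time() - self.peer_timeout
            self.peers = {a: t for a, t in self.peers.items() if t >= cutoff}
            # Hand back a random subset of other peers to contact for data
            others = [a for a in self.peers if a != addr]
            return random.sample(others, min(50, len(others)))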

  48. BitTorrent: Chunks • Large file divided into smaller pieces • Fixed-sized chunks • Typical chunk size of 256 Kbytes • Allows simultaneous transfers • Downloading chunks from different neighbors • Uploading chunks to other neighbors • Learning what chunks your neighbors have • Periodically asking them for a list • File done when all chunks are downloaded
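
The per-peer bookkeeping reduces to set arithmetic over chunk indices; a minimal sketch (real clients layer policies such as rarest-first on top):

    CHUNK_SIZE = 256 * 1024                  # typical chunk size from the slide

    def num_chunks(file_size):
        return (file_size + CHUNK_SIZE - 1) // CHUNK_SIZE   # ceiling division

    def pick_chunk(have, neighbor_has):
        """Pick some chunk we still need that this neighbor can supply."""
        candidates = neighbor_has - have     # set difference over chunk indices
        return min(candidates) if candidates else None

    def file_done(have, file_size):
        return len(have) == num_chunks(file_size)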

  49. BitTorrent: Overall Architecture • [Figure: a web server hosts a web page with a link to the .torrent file; the downloader (“US”) fetches the .torrent; a tracker coordinates a swarm containing a seed peer and leech peers (labeled A, B, C)]

  50. BitTorrent: Overall Architecture • [Figure: same setup as the previous slide; the downloader (“US”) sends a get-announce request to the tracker to learn which peers to contact]
