Presentation Transcript


  1. Peering Peer-to-Peer Providers. Hari Balakrishnan, Scott Shenker, Michael Walfish. MIT CSAIL, UC Berkeley / ICSI. IRIS Project. 24 February 2005

  2. Academic P2P: An Abridged History • Early days: B.Y.O.I. (Bring Your Own Infrastructure) (figure: clients putting <k1,v1> and <k2,v2> into a DHT of volunteer nodes, keyed by flat identifiers such as 0x2da7) • Recently: proposals to use P2P technology (DHTs resolve flat names) for core network services • Examples: CoDoNS, HIP, P6P, DOA (figure: DNS clients querying CoDoNS nodes, which use the DHT to resolve the flat key 0x2da7 to 8.2.9.2)
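
The point that DHTs resolve flat names can be made concrete with a minimal sketch in Python. This is not CoDoNS code: the in-memory dictionary standing in for the DHT, the SHA-1 key derivation, and the example name are all illustrative, with only the 8.2.9.2 address borrowed from the slide's figure.

    # Minimal sketch: resolve a human-readable name through a put/get DHT.
    # The dict below stands in for a real DHT; only the interface matters.
    import hashlib

    dht = {}  # flat key (hex string) -> value

    def flat_key(name):
        # Hash the name into a flat 160-bit identifier (no hierarchy).
        return hashlib.sha1(name.encode()).hexdigest()

    def put(name, value):
        dht[flat_key(name)] = value

    def get(name):
        return dht.get(flat_key(name))

    # A CoDoNS-like service would store DNS records this way and answer
    # DNS clients from the DHT instead of the DNS hierarchy.
    put("www.example.com", "8.2.9.2")   # address taken from the slide's figure
    assert get("www.example.com") == "8.2.9.2"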

  3. Academic P2P: An Abridged History, Cont. (figure: CoDoNS nodes in front of a DHT) • The DHT still has to run somewhere • But core network services cannot (or should not) depend on teenagers with cable modems … • Solution?

  4. A New School of DHT Research Open DHT [IPTPS04]: • DHT nodes should be managed • Run DHT as a shared service • Running one is complex • Reuse minimal put-get interface • From B.Y.O.I. to Frat Party • Open DHT is a communal keg (photo: Sean Rhea)

  5. So What’s the Problem? Sean has made Open DHT a stable, available, high-performance infrastructure … … but can’t afford to run it by himself, forever. Shared infrastructure should be supported by a market, not by a benevolent donor

  6. Shared, Commercial DHT Service? • Must present users with a uniform “DHT dialtone” … • … in a competitive market for DHT service • Can multiple, competing providers coordinate? • Analogy: competing ISPs peer to give IP “dialtone” • Imagine: DSPs (DHT Service Providers) do likewise • For now, assume market demand exists • Investigate: federated P4 Infrastructure (Peering Peer-to-Peer Providers) of DSPs

  7. Requirements for a Global DHT Dialtone (figure: customers issue put(k,v) and get(k) to their own DSPs across the P4 Infrastructure) Customer pays its DSP for this service: • Puts of <k,v> accessible to all other P4 customers • Gets on keys will be fulfilled, no matter which provider serviced the put of <k,v> • Best-effort service model

  8. Outline • P4 Design Spectrum • Challenges • Conclusion

  9. Scenario (figure: a home user and a company issue put(k1,v1) and v1 = get(k1) through their P4 Proxies; three DSPs each store a subset of the <k,v> pairs) • Each DSP owns hosts, stores subset of {<k,v>} • Customer/provider interface: P4 Proxy (like DNS) • Assume for now DSPs all talk to each other • We now discuss possible relationships …
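
A rough sketch of the split this scenario describes: each DSP stores only a subset of the pairs, and a customer talks to a P4 Proxy rather than to any DHT node directly. The class and method names below are invented for illustration, not taken from the paper.

    # Sketch of the customer/provider interface. A DSP owns hosts and holds
    # a subset of {<k,v>}; the P4 Proxy is the only thing a customer sees.

    class DSP:
        def __init__(self, name):
            self.name = name
            self.store = {}          # this DSP's subset of the key space

        def put(self, k, v):
            self.store[k] = v

        def get(self, k):
            return self.store.get(k)

    class P4Proxy:
        """Customer-facing interface, analogous to a DNS resolver."""
        def __init__(self, dsp):
            self.dsp = dsp

        def put(self, k, v):
            self.dsp.put(k, v)

        def get(self, k):
            # In the full design the DSP would consult its peers on a miss
            # (see the exchange regimes on slide 12).
            return self.dsp.get(k)

    home = P4Proxy(DSP("dsp-1"))
    company = P4Proxy(DSP("dsp-2"))
    home.put("k1", "v1")
    assert home.get("k1") == "v1"     # served locally
    assert company.get("k1") is None  # needs peering between the DSPs to succeed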

  10. Possible DSP Relationships (First Two) • All one DHT • Existing DHT mechanisms work • No incentive for DSP to contribute resources • Administrative separation (separate DHTs) • DSP coded into key → right incentives • DSPs store <k,v> only for their customers • Switching DSPs means switching keys (figure: key format = DSP ID | rest of the key)
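
Under the administrative-separation option, the provider is coded into the key itself, which is what ties a customer's keys to its DSP. A toy encoding (the "dsp-id:rest" prefix format is invented here) makes the trade-off visible:

    # Toy illustration of coding the DSP into the key. The "dsp-id:rest"
    # format is invented for this sketch.

    def make_key(dsp_id, rest):
        return f"{dsp_id}:{rest}"

    def responsible_dsp(key):
        # Routing is trivial: the key itself names the responsible provider.
        return key.split(":", 1)[0]

    k = make_key("dsp-1", "2da7")
    assert responsible_dsp(k) == "dsp-1"
    # Downside from the slide: switching providers means switching keys, so
    # every stored reference to "dsp-1:2da7" goes stale after the move.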

  11. Possible DSP Relationships (Third) • Now assume: • Every DSP runs own lookup infrastructure • Keys don’t encode DSP • Therefore: • DSPs must exchange customers’ <k,v> pairs • We believe this 3rd relationship is the tenable one • But how will it work? (For now, assume small set of top-level DSPs)

  12. Different Exchange Regimes • Get-broadcasting; local puts: put(k,v) is stored only at the putter's DSP; get(k) is broadcast to all DSPs, and the returned <k,v> can be cached • Put-broadcasting of <k,v>; local gets: every DSP receives a copy of <k,v>, so gets are answered locally • Put-broadcasting of keys only; forwarded gets: DSPs exchange <k, provider's id> pointers and forward get(k) to the provider holding <k,v>
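
The three regimes can be sketched compactly, assuming a small set of top-level DSPs that can all reach one another; the classes and method names below are illustrative rather than a protocol from the paper.

    # Sketch of the three exchange regimes among peered DSPs. "store" holds
    # locally kept <k,v> pairs; "index" holds <key, provider-id> pointers
    # learned through key-only put-broadcasting.

    class DSP:
        def __init__(self, name, peers):
            self.name = name
            self.peers = peers      # shared list of all top-level DSPs
            self.store = {}
            self.index = {}

        # Regime 1: local puts, get-broadcasting.
        def put_local(self, k, v):
            self.store[k] = v

        def get_broadcast(self, k):
            if k in self.store:
                return self.store[k]
            for peer in self.peers:
                if k in peer.store:
                    return peer.store[k]   # the result could be cached here
            return None

        # Regime 2: put-broadcasting of <k,v>, local gets.
        def put_broadcast_value(self, k, v):
            for peer in self.peers:
                peer.store[k] = v

        def get_local(self, k):
            return self.store.get(k)

        # Regime 3: put-broadcasting of keys only, forwarded gets.
        def put_broadcast_key(self, k, v):
            self.store[k] = v
            for peer in self.peers:
                peer.index[k] = self.name

        def get_forwarded(self, k):
            if k in self.store:
                return self.store[k]
            owner = self.index.get(k)
            if owner is None:
                return None
            holder = next(p for p in self.peers if p.name == owner)
            return holder.store.get(k)

    peers = []
    a, b = DSP("A", peers), DSP("B", peers)
    peers.extend([a, b])
    a.put_local("k1", "v1")
    assert b.get_broadcast("k1") == "v1"
    a.put_broadcast_value("k2", "v2")
    assert b.get_local("k2") == "v2"
    a.put_broadcast_key("k3", "v3")
    assert b.get_forwarded("k3") == "v3"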

  13. More on the Regimes • They split put and get bandwidth differently • Can and should coexist; putter chooses regime • Different pricing schemes?

  14. Outline • P4 Design Spectrum • Challenges • Conclusion

  15. DSPs’ Incentives • Incentive to be honest? • Commercial relationships; market discipline • No different from DNS or IP service today • Incentive to peer? Settlements (i.e., payments between two peers): • Needed if two DSPs gain unequally from peering • Preclude caching and put-broadcast • Introduce complexity • Paper argues DSPs gain equally from peering → peering w/out settlements?

  16. Coherence and Correctness • <k,v> inserted by a customer must be visible to customers of other providers • Discussed earlier • Customers must not be able to own the same key or overwrite each other’s <k,v> pairs • Inherit from existing DHTs, especially Open DHT • e.g., k = hash(v), k = (salt, pubkey) • Cryptography unaffected by # of providers
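
The two key disciplines mentioned here can be sketched as below. The immutable case checks k = hash(v); the (salt, pubkey) case only shows how the key is derived, with the signature check on the value elided (that part is what would be inherited from Open DHT). The hash choices are illustrative.

    # Sketch of the two key-ownership disciplines. Only key derivation and
    # the immutability check are shown; signature verification for the
    # (salt, pubkey) case is omitted.
    import hashlib

    def immutable_key(value):
        # k = hash(v): the key commits to the value, so nobody can replace
        # the value under the same key without detection.
        return hashlib.sha1(value).hexdigest()

    def authenticated_key(salt, pubkey):
        # k derived from (salt, pubkey): only the holder of the matching
        # private key can produce valid signed updates for this key.
        return hashlib.sha1(salt + pubkey).hexdigest()

    def verify_immutable(k, value):
        return k == immutable_key(value)

    v = b"some record"
    k = immutable_key(v)
    assert verify_immutable(k, v)
    assert not verify_immutable(k, b"tampered record")
    # Both checks depend only on the key and value themselves, which is why
    # they are unaffected by how many providers store the pair.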

  17. Market Structure and Scale (figure: forest structure with top-level DSPs and child DSPs beneath them; ISP analogy again) • Top-level DSPs do put- and get-broadcasting • Children of top-level DSPs either: • Redirect customer put/get requests to the top level • Maintain a local lookup service
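
A short sketch of the two child-DSP options, under the added assumption (ours, for illustration) that a child running its own lookup service still publishes puts upward so other customers can see them; all class names are hypothetical.

    # Sketch of the forest structure: a top-level DSP plus the two kinds of
    # child DSP described on the slide.

    class TopLevelDSP:
        def __init__(self):
            self.store = {}   # in the full design, backed by peering/broadcast

        def put(self, k, v):
            self.store[k] = v

        def get(self, k):
            return self.store.get(k)

    class RedirectingChild:
        """Option (a): redirect every customer put/get to the top level."""
        def __init__(self, parent):
            self.parent = parent

        def put(self, k, v):
            self.parent.put(k, v)

        def get(self, k):
            return self.parent.get(k)

    class LocalLookupChild:
        """Option (b): maintain a local lookup service, parent as fallback."""
        def __init__(self, parent):
            self.parent = parent
            self.local = {}

        def put(self, k, v):
            self.local[k] = v
            self.parent.put(k, v)   # publish upward so others can get it

        def get(self, k):
            if k in self.local:
                return self.local[k]
            return self.parent.get(k)

    top = TopLevelDSP()
    child = LocalLookupChild(top)
    child.put("k1", "v1")
    assert RedirectingChild(top).get("k1") == "v1"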

  18. Outline • P4 Design Spectrum • Challenges • Conclusion

  19. Conclusion • P2P: technical revolution, yes. Economic novelty? • A social theory of DHTs (compare with Marx): Anarchism (B.Y.O.I.) → Communism (benevolent entity) → Capitalism (P4 a form of privatization) • Our goals: DHT dialtone for customers, proper incentives for providers • Peering arrangements necessary but not sufficient • Market requires demand, too

  20. Appendix Slides

  21. DSPs Gain Roughly Equally From Peering • Assume DSP’s benefit proportional to: • Its customers’ benefit from reads in other DSPs • Its customers’ benefit from having their data read • Case I: avg. benefit to a customer from a “get” is equal to avg. benefit from having its “put” read • Case II: avg. benefits not equal. Under certain assumptions, # of “gets” in each direction equal. (figure: # of “gets” from DSP A vs. from DSP B; cross-provider gets from B to A are a larger fraction of a smaller total, and from A to B a smaller fraction of a larger total)
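
A small worked example of Case II, filling in the slide's "certain assumptions" with the simplest ones consistent with the figure: stored data and issued gets are both proportional to a DSP's customer count, and gets target stored keys uniformly. The numbers are made up.

    # Worked example: cross-provider "gets" balance even when one DSP is
    # much larger, because the smaller DSP sends a larger fraction of a
    # smaller total and the larger DSP a smaller fraction of a larger total.

    customers_a, customers_b = 1000, 250   # DSP A is the larger provider
    total = customers_a + customers_b
    gets_per_customer = 50                 # same average demand everywhere

    # Fraction of stored data held by the other DSP = its share of customers.
    gets_a_to_b = customers_a * gets_per_customer * customers_b / total
    gets_b_to_a = customers_b * gets_per_customer * customers_a / total

    assert gets_a_to_b == gets_b_to_a      # both equal 1000 * 50 * 250 / 1250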

  22. Latency • Puts: customer talks to P4 Proxy. Low latency. • For gets, separate by exchange regime: • Get-broadcast: • Latency can be high • But opportunistic caching can mitigate • Put-broadcast of key; forwarded get: (same: latency can be high, caching mitigates) • Put-broadcast of <k,v>; local get: • All DSPs have copies of <k,v>; low latency • Adaptive algorithm to decide which propagation regime is optimal for a key?
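
One way to approach the adaptive-algorithm question at the end of this slide is a per-key bandwidth comparison. The cost model below, and its inputs (per-key put rate, get rate, value size, number of peered DSPs), are entirely invented for illustration; the sketch only shows the shape such a decision could take.

    # Back-of-the-envelope regime chooser. Costs count bytes moved between
    # DSPs for one key; the model is invented for illustration.

    KEY_SIZE = 20   # bytes, e.g. a 160-bit key

    def regime_costs(puts, gets, value_size, num_dsps):
        others = num_dsps - 1
        return {
            # every get queries all other DSPs; one of them returns the value
            "get-broadcast": gets * (others * KEY_SIZE + value_size),
            # every put ships the value to all other DSPs; gets are local
            "put-broadcast-value": puts * others * (KEY_SIZE + value_size),
            # puts ship only the key; each get is forwarded to the holder
            "key-broadcast-forwarded-get": (puts * others * KEY_SIZE
                                            + gets * (KEY_SIZE + value_size)),
        }

    def choose_regime(puts, gets, value_size, num_dsps):
        costs = regime_costs(puts, gets, value_size, num_dsps)
        return min(costs, key=costs.get)

    # Rarely written, frequently read: ship the value once per put.
    print(choose_regime(puts=1, gets=10_000, value_size=512, num_dsps=5))
    # Frequently written, rarely read: keep puts local, broadcast the rare get.
    print(choose_regime(puts=10_000, gets=1, value_size=512, num_dsps=5))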

  23. Can’t Google Do This? • Sure. • Will they charge for the service? • If not, great! • If so … • This talk: whether P4 infrastructure could emerge • Not whether P4 infrastructure will emerge • (We assumed market demand exists.)
