P2P Networking - PowerPoint PPT Presentation


Presentation Transcript



P2P Networking



What is peer-to-peer (P2P)?

  • “Peer-to-peer is a way of structuring distributed applications such that the individual nodes have symmetric roles. Rather than being divided into clients and servers each with quite distinct roles, in P2P applications a node may act as both a client and a server.” -- Charter of the Peer-to-Peer Research Group, IETF/IRTF, June 24, 2004 (http://www.irtf.org/charters/p2prg.html)


Client/Server Architecture

[Diagram: the Client sends "GET /index.html HTTP/1.0"; the Server replies "HTTP/1.1 200 OK ..."]
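The request/response exchange in the figure can be sketched with Python's standard library. This is only an illustration of the C/S roles; the page body and the loopback address are placeholders:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server's role: answer every client request
        body = b"<html>hello</html>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client's role: issue requests such as "GET /index.html"
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/index.html")
resp = conn.getresponse()
print(resp.status, resp.reason)
server.shutdown()
```

Note how the roles are fixed: the server only serves, the client only asks. P2P removes exactly this asymmetry.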



Disadvantages of C/S Architecture

  • Single point of failure

  • A powerful, expensive server is required

  • Dedicated maintenance (a sysadmin)

  • Not scalable: more users require more servers



The Client Side

  • Today’s clients can perform more roles than just forwarding users’ requests

  • Today’s clients have:

    • more computing power

    • more storage space

  • Thin client → Fat client


Evolution at the Client Side

  • ’70: DEC’s VT100, no storage

  • ’80: IBM PC @ 4.77MHz, 360KB diskettes

  • 2007: a PC @ 4GHz, 100GB HD



What Else Has Changed?

  • The number of home PCs is increasing rapidly

  • Most of the PCs are “fat clients”

  • As Internet usage grows, more and more PCs are connecting to the global net

  • Most of the time PCs are idle

  • How can we use all this?



Resource Sharing

  • What can we share?

    • Computer resources

  • Shareable computer resources:

    • CPU cycles: SETI@home, GIMPS

    • Data: Napster, Gnutella

    • Bandwidth: PPLive, PPStream

    • Storage space: OceanStore, CFS, PAST



SETI@Home

  • SETI – Search for Extra-Terrestrial Intelligence

  • @Home – On your own computer

  • A radio telescope in Puerto Rico scans the sky for radio signals

  • Fills a DAT tape of 35GB in 15 hours

  • That data has to be analyzed



SETI@Home (cont.)

  • The problem – analyzing the data requires a huge amount of computation

  • Even a supercomputer cannot finish the task on its own

  • Accessing a supercomputer is expensive

  • What can be done?



SETI@Home (cont.)

  • Can we use distributed computing?

    • YEAH

  • Fortunately, the problem can be solved in parallel - examples:

    • Analyzing different parts of the sky

    • Analyzing different frequencies

    • Analyzing different time slices



SETI@Home (cont.)

  • The data can be divided into small segments

  • A PC is capable of analyzing a segment in a reasonable amount of time

  • An enthusiastic UFO searcher will lend their spare CPU cycles for the computation

    • When? Screensavers
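The segment-and-distribute idea can be sketched in a few lines. This is not SETI@Home's actual pipeline; `analyze_segment` is a hypothetical stand-in for the real signal analysis, and the "tape" is just a list of numbers:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_segment(segment):
    # Hypothetical stand-in for real signal analysis:
    # here we simply report the peak value in the segment.
    return max(segment)

# The full "tape" of samples, divided into small independent segments
tape = list(range(1000))
segment_size = 100
segments = [tape[i:i + segment_size] for i in range(0, len(tape), segment_size)]

# Each segment could be analyzed by a different volunteer PC in parallel;
# a thread pool plays that role in this sketch.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(analyze_segment, segments))

print(results)  # one result per segment, order preserved
```

Because segments are independent, no coordination is needed beyond handing out work units and collecting results, which is what makes the problem embarrassingly parallel.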



SETI@Home - Example



SETI@Home - Summary

  • SETI reverses the C/S model

    • Clients can also provide services

    • Servers can be weaker, used mainly for storage

  • Distributed peers serving the center

    • Not yet P2P but we’re close

  • Outcome - great results:

    • Thousands of unused CPU hours tamed for the mission

    • 3+ million users



Google -- Larry Page and Sergey Brin



Cloud computing



MUREX: A Mutable Replica Control Scheme for Peer-to-Peer Storage Systems


MUREX: Basic Concept

[Diagram, with labels: HotOS, Attendee]


Peer-to-Peer Video Streaming

[Diagram: peers relay the video stream to one another]



Peer-to-Peer Video Streaming



Napster -- Shawn Fanning



History of Napster (1/2)

  • 5/99: Shawn Fanning (freshman, Northeastern University) founds Napster Online (supported by Groove)

  • 12/99: First lawsuit

  • 3/00: 25% Univ. of Wisconsin traffic on Napster



History of Napster (2/2)

  • 2000: estimated 23M users

  • 7/01: simultaneous online users 160K

  • 6/02: files for bankruptcy

  • 10/03: Napster 2 (supported by Roxio; users pay $9.99/month)

1984~2000: 23M domain names counted

vs.

16 months: 23M Napster-style names registered at Napster


Napster Sharing Style: hybrid center+edge

Shared libraries at the edge:

  • “beastieboy”: song1.mp3, song2.mp3, song3.mp3

  • “kingrook”: song4.mp3, song5.mp3, song6.mp3

  • “slashdot”: song5.mp3, song6.mp3, song7.mp3

Central directory on the Napster server:

Title       User        Speed
song1.mp3   beastieboy  DSL
song2.mp3   beastieboy  DSL
song3.mp3   beastieboy  DSL
song4.mp3   kingrook    T1
song5.mp3   kingrook    T1
song5.mp3   slashdot    28.8
song6.mp3   kingrook    T1
song6.mp3   slashdot    28.8
song7.mp3   slashdot    28.8

1. Users launch Napster and connect to the Napster server

2. Napster creates a dynamic directory from users’ personal .mp3 libraries

3. beastieboy enters search criteria (“song5”)

4. Napster displays matches to beastieboy

5. beastieboy makes a direct connection to kingrook to transfer song5.mp3



Gnutella History

  • Gnutella was written by Justin Frankel, the 21-year-old founder of Nullsoft.

  • (Nullsoft acquired by AOL, June 1999)

  • Nullsoft (maker of WinAmp) posted Gnutella on the Web, March 14, 2000.

  • A day later AOL yanked Gnutella, at the behest of Time Warner.

  • Too late: 23k users on Gnutella

  • People had already downloaded and shared the program.

  • Gnutella continues today, run by independent programmers.



Gnutella -- Justin Frankel and Tom Pepper


GNU + Nutella = Gnutella

  • The ‘animal’ GNU: either of two large African antelopes (Connochaetes gnou or C. taurinus) having a drooping mane and beard, a long tufted tail, and curved horns in both sexes. Also called wildebeest.

  • GNU: recursive acronym for “GNU’s Not Unix”

  • Nutella: a hazelnut chocolate spread produced by the Italian confectioner Ferrero



GNU

  • GNU's Not Unix

  • 1983: Richard Stallman (MIT) announced the GNU Project; he founded the Free Software Foundation in 1985

  • Free software is not freeware

  • Free software is open source software

  • GPL: GNU General Public License



About Gnutella

  • No centralized directory servers

  • Pings the net to locate Gnutella friends

  • File requests are broadcast to friends

    • Flooding, breadth-first search

  • When provider located, file transferred via HTTP

  • History:

    • 3/14/00: release by AOL, almost immediately withdrawn
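The flooding search described above is a breadth-first traversal of the overlay with a TTL. A minimal sketch, with a made-up four-node overlay (node names, friend lists, and file names are all illustrative):

```python
from collections import deque

def flood_query(graph, start, filename, ttl=7):
    """Breadth-first flood: each node forwards the query to its
    friends until the TTL runs out or a provider is found."""
    visited = {start}
    queue = deque([(start, ttl)])
    while queue:
        node, ttl_left = queue.popleft()
        if filename in graph[node]["files"]:
            return node  # provider located; the transfer then goes via HTTP
        if ttl_left == 0:
            continue
        for friend in graph[node]["friends"]:
            if friend not in visited:
                visited.add(friend)
                queue.append((friend, ttl_left - 1))
    return None

# A toy overlay: node -> its friends and the files it shares
overlay = {
    "A": {"friends": ["B", "C"], "files": set()},
    "B": {"friends": ["A", "D"], "files": set()},
    "C": {"friends": ["A", "D"], "files": set()},
    "D": {"friends": ["B", "C"], "files": {"xyz.mp3"}},
}
print(flood_query(overlay, "A", "xyz.mp3"))  # D
```

The TTL bounds how far a query spreads; without it, every query would reach every node, which is exactly the scaling problem discussed later.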



Peer-to-Peer Overlay Network

The focus is at the application layer


Peer-to-Peer Overlay Network

[Diagram: end systems linked over the Internet; one overlay hop (end-to-end comm.) is a TCP connection through the Internet]



Topology of a Gnutella Network



Gnutella: Issue a Request

xyz.mp3 ?



Gnutella: Flood the Request



Gnutella: Reply with the File

Fully distributed storage and directory!

xyz.mp3



So Far

n: number of participating nodes

  • Centralized:

    - Directory size: O(n)

    - Number of hops: O(1)

  • Flooded queries:

    - Directory size: O(1)

    - Number of hops: O(n)



We Want

  • Efficiency : O(log(n)) messages per lookup

  • Scalability : O(log(n)) state per node

  • Robustness : surviving massive failures



How Can It Be Done?

  • How do you search in O(log(n)) time?

  • Binary search

  • You need an ordered array

  • How can you order nodes in a network and data objects?

  • Hash function!
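The steps above name binary search as the model for O(log n) lookup. A standard implementation over an ordered array:

```python
def binary_search(sorted_array, target):
    """O(log n) lookup: halve the search range each step."""
    lo, hi = 0, len(sorted_array) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_array[mid] == target:
            return mid
        elif sorted_array[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

keys = [3, 9, 14, 27, 31, 58, 77]
print(binary_search(keys, 31))  # 4
```

DHTs apply the same idea: hashing imposes an order on nodes and objects, so a lookup can discard large fractions of the key space at each hop.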


Example of Hashing

Shark → SHA-1 → Object ID (key): AABBCC

194.90.1.5:8080 → SHA-1 → Object ID (key): DE11AC
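The slide's key values are illustrative, not real SHA-1 outputs. A sketch of the idea using Python's `hashlib`, truncating digests to 6 hex digits to mimic the slide's short IDs (the truncation width is an assumption for readability):

```python
import hashlib

def object_key(name, hex_digits=6):
    """Hash any name (a file name or a node address) into a short
    key in a common hash space, as the slide illustrates."""
    digest = hashlib.sha1(name.encode()).hexdigest()
    return digest[:hex_digits].upper()

# Object names and node addresses map into the same key space
print(object_key("Shark"))            # a 6-hex-digit object key
print(object_key("194.90.1.5:8080"))  # a 6-hex-digit node key
```

The essential property is determinism: every peer that hashes the same name gets the same key, with no coordination.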


Basic Idea

[Diagram: peer “x” joins the P2P network with Join(H(x)); object “y” is published with Publish(H(y))]

  • Objects have hash keys

  • Peer nodes also have hash keys in the same hash space

  • Place an object at the peer with the closest hash key


Mapping Keys to Nodes

[Diagram: the hash space from 0 to M, with nodes and data objects placed along it]


Viewed as a Distributed Hash Table

[Diagram: a hash table over keys 0 to 2^128-1, with each range of slots managed by a peer node somewhere on the Internet]



DHT

  • Distributed Hash Table

  • Input: key (file name); Output: value (file location)

  • Each node is responsible for a range of the hash table, according to the node’s hash key. Objects are placed in (managed by) the node with the closest key

  • It must be adaptive to dynamic node joining and leaving
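The key-to-node assignment can be sketched as a toy ring. This sketch uses successor placement (each key goes to the first node clockwise), one common reading of "closest"; the node addresses and key-space size are made up, and dynamic join/leave is omitted:

```python
import bisect
import hashlib

M = 2 ** 32  # size of the key space (the slides use 2^128)

def h(s):
    """Hash a string into the key space."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % M

class ToyDHT:
    """Each node manages a range of the hash table; an object is
    placed at the first node clockwise from its key."""
    def __init__(self, node_names):
        self.ring = sorted(h(n) for n in node_names)
        self.name = {h(n): n for n in node_names}

    def responsible_node(self, key):
        idx = bisect.bisect_left(self.ring, h(key)) % len(self.ring)
        return self.name[self.ring[idx]]

dht = ToyDHT(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(dht.responsible_node("index.html"))  # whichever node owns that key range
```

Every peer that runs the same computation agrees on where "index.html" lives, so lookups need no central directory.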


How to Find an Object?

[Diagram: the same hash table over keys 0 to 2^128-1, distributed over peer nodes]


Simple Idea

  • Track peers which allow us to move quickly across the hash space

    • a peer p tracks the peers responsible for hash keys (p + 2^(i-1)), i = 1, ..., m

[Diagram: from a node at key i, pointers reach the nodes at i+2^2, i+2^4, i+2^8, ... across the hash table from 0 to 2^128-1]
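The finger rule above can be computed directly. A sketch in a 6-bit ID space (matching the Chord ring figure that follows); the node IDs are illustrative:

```python
import bisect

M = 2 ** 6  # 6-bit ID space, 64 keys

def successor(ring, key):
    """First node clockwise from key (ring is a sorted list of node IDs)."""
    idx = bisect.bisect_left(ring, key % M) % len(ring)
    return ring[idx]

def finger_table(ring, p):
    """Node p tracks the nodes responsible for p + 2^(i-1), i = 1..m."""
    m = M.bit_length() - 1  # 6 fingers in a 6-bit space
    return [successor(ring, p + 2 ** (i - 1)) for i in range(1, m + 1)]

ring = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])
print(finger_table(ring, 8))  # [14, 14, 14, 21, 32, 42]
```

Each node keeps only m = O(log n) fingers, yet the farthest finger spans half the ring, which is what makes O(log n)-hop lookups possible.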


DHT example: Chord -- Ring Structure

  • Circular 6-bit ID space

  • N8 knows of only six other nodes

  • O(log n) state per node



DHT example: Chord -- Ring Structure

O(log n)-hop query cost



Classification of P2P systems

  • Hybrid P2P: preserves some of the traditional C/S architecture; a central server links clients, stores index tables, etc.

    • Napster

  • Unstructured P2P: no control over topology and file placement

    • Gnutella, Morpheus, Kazaa, etc.

  • Structured P2P: topology is tightly controlled and placement of files is not random

    • Chord, CAN, Pastry, Tornado, etc.



What’s next in the future?

  • P2P NEVs (Networked Virtual Environments)

    • P2P MMOGs (Massively Multiplayer Online Games)

    • P2P 3D Scene Streaming



P2P-NVE: Peer-to-Peer Networked Virtual Environment

Part of the following slides are adapted from www.movesinstitute.org/~mcgredo/NVE.ppt



NVE Examples

  • Commercial:

    • FPS (first-person shooter): America’s Army, Quake, Unreal

    • MMOG: EverQuest, World of Warcraft (WoW), Lineage, Second Life

  • Research: NPSNET, Dive, MASSIVE

  • Military: Close Combat Tactical Trainer, SIMNET


America’s Army



Massively Multiplayer Online Games

MMOGs are growing quickly

8 million registered users for World of Warcraft

Over 100,000 concurrent players

Billion-dollar business

Adaptive Computing and Networking Lab, CSIE, NCU




Close Combat Tactical Trainer



NVE (1)

  • Networked Virtual Environments (NVEs) are computer-generated virtual worlds where multiple, geographically distributed users assume virtual representatives (avatars) to interact with each other concurrently

  • A.K.A. Distributed Virtual Environments (DVEs)



NVE (3)

  • 3D virtual world with

    • People (avatar)

    • Objects

    • Terrain

    • Agents

  • Each avatar can do a lot of operations

    • Move

    • Chat

    • Other actions…



NVE Components

  • Graphics display

  • User input and communication

  • Processing/CPU

  • Data network



NVE Components: Graphics

  • The display and CPU have become astonishingly cheap in the last few years

  • We typically need to draw in 3D on the display. The great thing about standards is that there are so many to choose from.

  • OpenGL, X3D, Java3D

  • Varying degrees of realism, with FPS emphasizing photorealism and others like EverQuest or Sims Online sacrificing graphics for better gameplay in other aspects



NVE User Interfaces

  • You can have all sorts of input/output devices at the user’s location.

  • With commercial games it is often a display, keyboard, and mouse.

  • Military simulations may have more elaborate user environments, such as a mockup of an M1 tank interior

  • Some fancy UIs, such as head-mounted displays, CAVEs, haptic feedback, data gloves, etc.



Processing/CPU

  • Used for physics, AI, agent behavior, some graphics, networking

  • Still on Moore’s law curve. 4+ GHz machines are cheap and plentiful.



Data Network

  • Networks are a real bottleneck in NVE design. Why?

  • Players can send out position updates at 30/sec × (50 bytes/update + 42 bytes/packet overhead) ≈ 22,000 bits/sec/player

  • Even a 1 Mbit/sec connection can run out of bandwidth after 40-50 players
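The slide's back-of-envelope numbers check out; reproducing the arithmetic:

```python
# Per-player upstream traffic for position updates
updates_per_sec = 30
payload_bytes = 50    # per update
overhead_bytes = 42   # per packet

bits_per_player = updates_per_sec * (payload_bytes + overhead_bytes) * 8
print(bits_per_player)  # 22080 bits/sec/player, ~22 kbit/s

link_bits = 1_000_000   # a 1 Mbit/sec connection
print(link_bits // bits_per_player)  # 45 players before the link saturates
```

This is why naive "send every update to everyone" designs hit a wall at a few dozen players, motivating interest management (AOI) later in the deck.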



Data Network

  • Latency is another big implementation problem. The position updates that arrive at hosts are always out of date, so we only know where the object was in the past

  • This may be a big problem (wide area network across a satellite link) or a small problem (everyone on a LAN)


Issues for NVEs

  • Scalability

    • To accommodate as many participants as possible

  • Consistency

    • All participants have the same view of object states

  • Persistency

    • All contents (object states) in the NVE need to exist persistently

  • Reliability

    • Need to tolerate hardware and software failures

  • Security

    • To prevent cheating and to keep user information and game state confidential



Architectures

  • NVE architectures fall between two extreme poles: peer-to-peer and client-server

  • In a Client-Server architecture, a server is responsible for sending out updates to the other hosts in the NVE

  • In a P2P architecture, all hosts communicate directly with one another



Architectures (Client-Server)

Popular with commercial game engines



Architectures (P2P)

More popular in research and military



The Scalability Problem (1)

Client-server: has inherent resource limit

Resource limit




The Scalability Problem (2)

Peer-to-Peer: Use the clients’ resources

Resource limit




You only need to know some participants

★: self

▲: neighbors

Area of Interest (AOI)




Voronoi-based Overlay Network : VON

  • Observation:

    • for virtual environment applications, the contents we want are messages from AOI neighbors

    • Content discovery is a neighbor discovery problem

  • Solve the Neighbor Discovery Problem in a fully-distributed, message-efficient manner.

  • Specific goals:

    • Scalable Limit & minimize message traffics

    • Responsive Direct connection with AOI neighbors



Voronoi Diagram

  • 2D plane partitioned into regions by sites; each region contains all the points closest to its site

[Diagram: a Voronoi diagram showing a site, its region, and its neighbors]
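The defining rule of a Voronoi region ("all the points closest to its site") can be sketched directly; real VON implementations compute the full diagram incrementally, but a nearest-site test captures the partition. The site coordinates below are made up:

```python
def closest_site(sites, point):
    """A point belongs to the Voronoi region of its nearest site."""
    def dist2(a, b):
        # squared Euclidean distance (no sqrt needed for comparison)
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(sites, key=lambda s: dist2(s, point))

sites = [(0, 0), (10, 0), (5, 8)]
print(closest_site(sites, (2, 1)))  # (0, 0)
print(closest_site(sites, (6, 6)))  # (5, 8)
```

In VON, the sites are the avatars' positions, so each node's Voronoi region tells it exactly which neighbors are relevant to its AOI.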



Design Concepts

Use Voronoi to solve the neighbor discovery problem

  • Each node constructs a Voronoi of its neighbors

  • Identify enclosing and boundary neighbors

  • Mutual collaboration in neighbor discovery

● node i and the big circle is its AOI

■ enclosing neighbors

▲ boundary neighbors

★ both enclosing and boundary neighbors

▼ normal AOI neighbors

◆ irrelevant nodes


Procedure (JOIN)

1) The joining node sends its coordinates to any existing node

   • the join request is forwarded to the acceptor (the node whose region contains the joining node)

2) The acceptor sends back its own neighbor list

   • the joining node connects with the other nodes on the list

[Diagram: the joining node lands inside the acceptor’s region]


Procedure (MOVE)

1) Positions are sent to all neighbors; messages to boundary neighbors (B.N.) are marked

   • each B.N. checks for overlaps between the mover’s AOI and its enclosing neighbors (E.N.)

2) The mover connects to new nodes upon notification by a B.N.

[Diagram: boundary neighbors discover the mover’s new neighbors]


Procedure (LEAVE)

1) Simply disconnect

2) The others then update their Voronoi diagrams

   • a new B.N. is discovered via existing B.N.s

[Diagram: a leaving node that is also a B.N.; a new boundary neighbor takes its place]



Q&A

