
P2P Live Streaming

Yang Gao, Nazanin Magharei, Reza Rejaie, "Mesh or Multiple-Tree: A Comparative Study of Live P2P Streaming Approaches", INFOCOM 2007.

Y. Liu, Y. Guo, "A survey on peer-to-peer video streaming systems", Peer-to-Peer Networking and Applications, 2008.

S. Ali, A. Mathur, "Measurement of commercial peer-to-peer live video streaming", Recent Advances in Peer-to-Peer Streaming, 2006.

Deepak Kumar Agarwal ( 71404423 )

Ajay Narayan ( 60006864 )

Nishchint Raina ( 67569992 )


Paper 1. Mesh or Multiple-Tree: A Comparative Study of Live P2P Streaming Approaches

- Analyzes tree-based and mesh-based overlays as content delivery overlays

- Evaluates the performance of their content delivery mechanisms over a properly connected overlay

- Identifies their similarities and differences

- Compares their ability to tolerate churn

- Finding: the mesh-based approach outperforms the tree-based approach by all measures


P2P streaming

Using a P2P overlay to stream live media over the network.

Participating end-systems (or peers) actively contribute their resources by forwarding their available content to their connected peers.

Tree-based approach: push-based content delivery over multiple tree-shaped overlays.

The tree-based approach expands on the idea of end-system multicast by organizing participating peers into multiple diverse trees.

Mesh-based approach: swarming content delivery over a randomly connected mesh.


Terms

Churn:

A peer can leave or join the P2P system at an arbitrary time.

Deadlock:

In the presence of churn, a tree could become saturated and thus unable to accept any new leaf node.

Content Bottleneck:

When a parent does not have a sufficient number of useful packets for a child peer, the bandwidth of its congestion-controlled connection to that child cannot be fully utilized.

Bandwidth Utilization:

The ratio of the number of delivered data packets to the total number of delivered packets (see the sketch after this list).

Average Quality:

The average number of descriptions (of Multiple Description Coded (MDC) content) a peer receives during a session.

Multiple Description Coding (MDC):

Encoding a stream into multiple sub-streams called descriptions. Each description can be decoded independently, and receiving more unique descriptions yields higher quality.
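A minimal sketch of how the bandwidth-utilization and average-quality metrics above can be computed from per-session counters; the function and variable names are ours, not the paper's.

```python
# Illustrative computation of the two metrics defined above; names are
# assumptions for illustration, not taken from the paper.

def bandwidth_utilization(data_packets: int, total_delivered: int) -> float:
    """Ratio of delivered data packets to all delivered packets."""
    return data_packets / total_delivered if total_delivered else 0.0

def average_quality(descriptions_per_interval: list) -> float:
    """Average number of MDC descriptions received per interval of a session."""
    if not descriptions_per_interval:
        return 0.0
    return sum(descriptions_per_interval) / len(descriptions_per_interval)

# Example: 950 of 1000 delivered packets were data packets, and the peer
# received [4, 4, 3, 4] descriptions over four consecutive intervals.
print(bandwidth_utilization(950, 1000))   # 0.95
print(average_quality([4, 4, 3, 4]))      # 3.75
```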


Organized view of Random Mesh


Delivery Trees

Mesh-based approach

Tree-based approach


Tree Overlay Construction

A peer decides how many trees to join based on its access-link bandwidth.

Each peer is placed as an internal node in only one tree and as a leaf node in the other trees.

Join:

the peer contacts the bootstrapping node to identify a parent in each of the desired trees

Leave:

nodes in the departing peer's subtree rejoin the tree

Balanced trees:

a peer is added as an internal node to the tree that has the minimum number of internal nodes

Short trees:

a new internal node is placed as a child of the internal node with the lowest depth (sketched below)
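A hedged sketch of the placement rules above ("balanced" and "short" trees); the classes and helper names are illustrative assumptions, not the paper's implementation.

```python
# Sketch: a new internal node joins the tree with the fewest internal nodes
# (balanced) and is attached under the shallowest node with a free slot (short).

class TreeNode:
    def __init__(self, peer_id, depth, max_children):
        self.peer_id = peer_id
        self.depth = depth
        self.max_children = max_children
        self.children = []

class Tree:
    def __init__(self, root):
        self.root = root
        self.internal_nodes = [root]

    def add_internal(self, peer_id, max_children):
        # "Short": attach under the shallowest internal node with a free child slot.
        candidates = [n for n in self.internal_nodes
                      if len(n.children) < n.max_children]
        if not candidates:
            raise RuntimeError("tree saturated: cannot accept a new node (deadlock)")
        parent = min(candidates, key=lambda n: n.depth)
        node = TreeNode(peer_id, parent.depth + 1, max_children)
        parent.children.append(node)
        self.internal_nodes.append(node)
        return node

def join_as_internal(trees, peer_id, max_children):
    # "Balanced": pick the tree with the fewest internal nodes.
    target = min(trees, key=lambda t: len(t.internal_nodes))
    return target.add_internal(peer_id, max_children)
```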


Mesh Overlay Construction

Participating peers form a randomly connected overlay

Each peer tries to maintain a certain number of parents (i.e., incoming degree)

Each peer serves a specific number of child peers (i.e., outgoing degree).

Upon arrival, a peer contacts a bootstrapping node to receive a set of peers that can potentially serve as parents.


...Mesh Overlay Construction

  • The bootstrapping node maintains the outgoing degree of all participating peers. Then, it selects a random subset of peers that can accommodate new child peers in response to an incoming request for parents.

  • Individual peers periodically report their newly available packets to their child peers and request specific packets from individual parent peers

  • A parent peer periodically receives an ordered list of requested packets from each child peer, and delivers the packets in the requested order. The requested packets from individual parents are determined by a packet scheduling algorithm at each child peer.
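A rough sketch of the bootstrapping node's parent-selection step described above; the class and field names are illustrative assumptions, not from the paper.

```python
# The bootstrapping node tracks each peer's used and maximum outgoing degree
# and answers a join request with a random subset of peers with spare slots.
import random

class BootstrappingNode:
    def __init__(self):
        self.out_degree = {}                  # peer_id -> [used_slots, max_slots]

    def register(self, peer_id, max_slots):
        self.out_degree[peer_id] = [0, max_slots]

    def child_added(self, peer_id):
        self.out_degree[peer_id][0] += 1

    def candidate_parents(self, num_parents):
        available = [p for p, (used, cap) in self.out_degree.items() if used < cap]
        return random.sample(available, min(num_parents, len(available)))
```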


Packet Scheduling Algorithm (PRIME)

Each peer maintains two pieces of information for each of its parents:

the available packets, and

the weighted average bandwidth (its bandwidth budget).

Each peer monitors the aggregate incoming bandwidth from all parents and slowly adapts the number of requested descriptions (its target quality) to that aggregate bandwidth.

Each peer invokes the algorithm every ∆ seconds to request packets from its parents (with a target quality of n descriptions) as follows:

The scheduler identifies the packets with the highest timestamps that have become available at the parents since the last request (during the last ∆ seconds).

The missing packets for each timestamp (up to n descriptions per timestamp) are identified, and a random subset of these packets is requested from the parents so as to fully utilize their bandwidth.

To balance the load among parents, when a packet is available at more than one parent it is requested from the parent that has the lowest fraction of its bandwidth budget utilized (a sketch follows).
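A simplified sketch of the scheduling loop above (our paraphrase of the slide, not the PRIME implementation); the data structures and field names are illustrative assumptions.

```python
# Every DELTA seconds: collect missing packets (at most n descriptions per
# timestamp), then assign each request to the announcing parent whose
# bandwidth budget is the least utilized.
import random
from collections import defaultdict

def schedule_requests(parents, already_have, n_descriptions):
    """parents: dict parent_id -> {
           'available': set of (timestamp, description) pairs newly announced,
           'budget':    packets this parent can deliver in the next interval,
           'requested': packets already assigned to it for this interval}
       already_have: set of (timestamp, description) pairs the peer holds."""
    # 1. Collect missing packets, keeping at most n descriptions per timestamp.
    wanted = defaultdict(set)                   # timestamp -> descriptions to fetch
    for p in parents.values():
        for ts, desc in p['available']:
            if (ts, desc) not in already_have and len(wanted[ts]) < n_descriptions:
                wanted[ts].add(desc)

    # 2. Assign each missing packet to a parent that announced it, preferring
    #    the parent with the lowest fraction of its budget already used.
    requests = defaultdict(list)                # parent_id -> [(ts, desc), ...]
    packets = [(ts, d) for ts, descs in wanted.items() for d in descs]
    random.shuffle(packets)                     # "random subset", as on the slide
    for pkt in packets:
        holders = [pid for pid, p in parents.items()
                   if pkt in p['available']
                   and p['requested'] + len(requests[pid]) < p['budget']]
        if holders:
            best = min(holders, key=lambda pid:
                       (parents[pid]['requested'] + len(requests[pid])) / parents[pid]['budget'])
            requests[best].append(pkt)
    return dict(requests)
```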


Similarities

Both approaches leverage MDC to accommodate the bandwidth heterogeneity among participating peers.

The superimposed view of multiple diverse trees is the same as a directed random mesh overlay.

Content delivery in both enables individual peers to receive different pieces of content.

All peers receive data from multiple parents and send it down to different child peers.

Both require peers to maintain a loosely synchronized playout time that is sufficiently (τ seconds) behind the source's playout time.


Differences


Delivery Tree in Mesh

Goal: maximize the utilization of peers' outgoing bandwidth.

Diffusion Phase: once a new packet becomes available at the source, a peer at level i of a diffusion subtree pulls it during the i-th ∆ interval after its arrival.

Swarming Phase: peers on different diffusion subtrees exchange their new packets in order to contribute their outgoing bandwidth.

The delivery tree of a packet consists of two parts:

the top portion is a diffusion subtree, and

the bottom portion is a collection of swarming connections hanging from the diffusion subtree.


Effect of Per Connection Bandwidth

The tree-based approach has a sweet spot in the ratio of per-connection bandwidth to description bandwidth where high resource utilization, and thus high delivered quality, is achieved.


Effect of Peer Degree (Number of Trees)


Effect of Bandwidth Heterogeneity

Mesh: as the percentage of high-bandwidth peers increases, the aggregate performance improves.

Tree: increasing the percentage of high-bandwidth peers rapidly reduces the depth of all trees, which in turn improves both utilization and delivered quality.


Performance Evaluation: Properly Connected Static Overlays


Performance Evaluation: Responsiveness to Churn


Summary

Identifies the key differences between mesh-based and tree-based approaches to P2P streaming.

This in turn sheds light on the inherent limitations and potential of the two approaches.

Identifies the underlying causes for the observed differences between tree- and mesh-based approaches.


Paper 2

A survey on peer-to-peer video streaming systems

Yong Liu; Yang Guo; Chao Liang


Introduction

  • Classification of Video Streaming :

    • Live Streaming

    • Video on Demand

  • Different models for video streaming over the Internet:

    • Client-Server Model

    • Content Delivery Network

    • Peer-to-Peer Networking


P2P Live Streaming

  • Live video content is disseminated to all users in real time; video playback is synchronized across all users.

  • Overlay Structures for P2P live streaming :

    • Tree Based Systems

      • Single-tree streaming

      • Multi-tree streaming

    • Mesh-based Systems


Tree Based System [P2P Live Streaming]

  • Tree Based Systems

    • A peer has only one parent in a single streaming tree and downloads all content of the video stream from that parent.

  • Single Tree Streaming

    • Users form a tree at the application layer, rooted at the video server.

    • Considerations while constructing a streaming tree:

      • Depth of the tree.

      • Fan-out of the tree.

      • Tree maintenance


Tree Based Streaming – Single Tree


Tree Maintenance – Single Tree


Single Tree Construction & Maintenance

  • Achieved in 2 ways:

    • Centralized

      • A central server controls tree construction and recovery.

      • Disadvantage: performance bottleneck and a single point of failure.

    • Distributed

      • Disadvantage: cannot recover fast enough to handle frequent peer churn.


Multi-tree Streaming

  • The server divides the stream into multiple sub-streams.

  • One sub-tree is built for each sub-stream.

  • Each peer joins all sub-trees to retrieve every sub-stream.

  • Each peer occupies a different position in different sub-trees (see the split sketch below).
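A minimal sketch (ours, for illustration) of the sub-stream split: with k sub-trees, packet i is assigned to sub-stream i mod k, so every peer must join all k sub-trees to reconstruct the full stream.

```python
def split_into_substreams(packets, k):
    """Round-robin assignment of packets to k sub-streams (illustrative)."""
    substreams = [[] for _ in range(k)]
    for i, pkt in enumerate(packets):
        substreams[i % k].append(pkt)
    return substreams

# Example: 10 packets split across 3 sub-trees.
print(split_into_substreams(list(range(10)), 3))
# [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```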


Multi-tree Streaming


Mesh-based Systems

  • Peers establish and terminate peering relationships dynamically

  • A peer maintains peering relationship with multiple neighboring peers

  • Extremely robust against peer churn


Mesh formation and Maintenance

  • A mesh streaming system maintains a tracker.

    • Keeps track of the active peers in the video session.

  • Each peer, when joining the network, contacts the tracker:

    • The peer reports its IP address, port number, etc.

    • The tracker returns a subset of the active peers in the session (see the sketch below).
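A rough sketch of the join exchange above; the Tracker class and its fields are illustrative assumptions, not the interface of any real system.

```python
# A joining peer reports its address; the tracker replies with a random
# subset of the currently active peers in the session.
import random

class Tracker:
    def __init__(self):
        self.active_peers = {}                # peer_id -> (ip, port)

    def join(self, peer_id, ip, port, want=40):
        others = [addr for pid, addr in self.active_peers.items() if pid != peer_id]
        self.active_peers[peer_id] = (ip, port)
        return random.sample(others, min(want, len(others)))
```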


Mesh Maintenance

  • Peers discover new peers by exchanging peer lists with their neighbors.

  • They also periodically request a fresh list of active peers from the tracker.

  • A gracefully departing peer notifies the tracker.

  • Unexpected peer departure:

    • Detected because peers regularly exchange keep-alive messages (a timeout sketch follows).
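A small sketch of detecting an unexpected departure through keep-alive timeouts; the 10-second timeout and class name are assumed, illustrative values.

```python
import time

KEEPALIVE_TIMEOUT = 10.0   # seconds of silence before a neighbor is presumed gone

class NeighborTable:
    def __init__(self):
        self.last_seen = {}                   # peer_id -> time of last keep-alive

    def on_keepalive(self, peer_id):
        self.last_seen[peer_id] = time.time()

    def expire(self):
        now = time.time()
        dead = [p for p, t in self.last_seen.items() if now - t > KEEPALIVE_TIMEOUT]
        for p in dead:
            del self.last_seen[p]             # treated as an unexpected departure
        return dead
```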


P2P Video on Demand

  • Video on Demand [VoD]

    • allows users to watch any point of a video at any time

    • offers more flexibility and convenience to users

    • a key feature for attracting consumers to IPTV services

  • Overlays to support VoD:

    • Tree-based P2P systems

    • Mesh-based P2P systems


Tree Based P2P Systems

  • Users are grouped into sessions based on their arrival time.

  • The server and the users in the same session form an application-level multicast tree (the base tree).

  • The server streams the entire video over the base tree.

  • Users who join the session later must obtain a 'patch' (the content they missed), as sketched below.
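An illustrative sketch (not from the survey) of the patch computation: a user joining t seconds after the session started must fetch the first t seconds of video (the patch) from earlier peers or the server, while receiving the live base stream from its parent.

```python
def patch_range(session_start: float, join_time: float):
    """Return the (start, end) offsets of the video the late joiner missed."""
    missed = max(0.0, join_time - session_start)
    return (0.0, missed)

# Example: the session started at t = 100 s and a user joins at t = 130 s,
# so it needs the patch covering the first 30 seconds of the video.
print(patch_range(100.0, 130.0))   # (0.0, 30.0)
```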


Tree Based P2P Systems

  • Users act as peers in a P2P network; each provides the following two functions:

    • Base Stream Forwarding

      • Users forward the received base stream to their child nodes.

    • Patch Serving

      • Users cache the initial part of the stream and forward it to newly joining peers.


Tree Based P2P Systems


Cache-and-relay P2P VoD

  • Based on the concept of interval caching.

  • Server caches a moving window of video content.

    • Efficiently utilizes memory at the server

    • Serves clients whose viewing point falls within the caching window.

  • Serves all clients asynchronously.


Cache-and-relay P2P VoD

  • Each peer buffers a moving window of video content around its current viewing point.

  • It serves other users whose viewing points fall within that window by forwarding the stream (see the sketch below).
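A minimal sketch of the cache-and-relay test, assuming each peer buffers a window of roughly `window` seconds centered on its play point; the half-window rule is our simplification.

```python
def can_serve(server_play_point: float, client_play_point: float, window: float) -> bool:
    """A peer can relay to a client whose viewing point lies inside its buffer."""
    return abs(server_play_point - client_play_point) <= window / 2

# Example: a peer at 600 s with a 120 s window can serve a user at 560 s.
print(can_serve(600.0, 560.0, 120.0))   # True
print(can_serve(600.0, 400.0, 120.0))   # False
```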


Cache-and-relay P2P VoD


Mesh-based P2P VoD

  • Achieves fast file downloading by swarming

  • Server disperses data blocks to different users.

  • Diversity Requirement:

    • The data blocks held by different users should differ from one another so that there is always something to exchange.

    • Fully utilizes users' upload bandwidth.

    • Achieves the highest downloading throughput.


Mesh-based P2P VoD

  • Challenges faced in building a mesh-based P2P VoD system:

    • The effective video playback rate is poor because data blocks are retrieved in a fairly random order.

    • The availability of different content blocks is also skewed by user behavior.

  • Requires the right balance between overall system efficiency and conformance to sequential playback.

  • Example of Mesh-based P2P VoD: BiToS


BiToS: Mesh-based P2P VoD


BiToS: Mesh-based P2P VoD

  • BiToS has three components:

    • Received Buffer: stores all data blocks that have arrived.

    • High Priority Set: contains the data blocks that are close to their playback deadline but have not been downloaded yet.

    • Remaining Pieces: all other blocks that are yet to be downloaded (a selection sketch follows).
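A hedged sketch of BiToS-style block selection. The slide only names the three sets; the selection probability `p` and the choice rule below are assumptions for illustration (mostly serve the High Priority Set to keep playback smooth, otherwise pick from the Remaining Pieces to preserve block diversity).

```python
import random

def pick_next_block(high_priority: set, remaining: set, p: float = 0.8):
    """Return the next block index to request, or None if nothing is missing."""
    if high_priority and (not remaining or random.random() < p):
        return random.choice(sorted(high_priority))   # close to its playback deadline
    if remaining:
        return random.choice(sorted(remaining))       # rarest-first could be used here
    return None

# Example: blocks 10-12 are near the playback point, blocks 30-60 are far away.
print(pick_next_block({10, 11, 12}, set(range(30, 61))))
```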


Mesh-based P2P VoD

  • Availability of Content in Mesh-based P2P:

    • If video is downloaded in the order of playback

      • a newly arrived user can make little contribution

      • few peers hold the content that earlier users are looking for

    • Earlier-arrived peers serve content to newly arrived users.

    • The number of peers that can serve content to earlier-arrived peers keeps shrinking, since users may leave the network.

    • One solution is to introduce a source server.


Conclusion

  • Existing Limitations in P2P systems:

    • Quality of Experience is not comparable to traditional TV.

    • Long channel start-up times and switching delays.

    • Considerable lag among peers.

    • Low-resolution video due to limited uploading capacity.


Conclusion

  • High traffic volumes pose a challenge to ISPs' network capabilities.

  • The video distribution load is shifted onto ISPs without any profit to them.

  • Further investigation is needed to find effective ways to regulate and manage P2P video streaming traffic and to maintain the stability of ISPs' network infrastructure.


Paper 3. Measurement of Commercial Peer-to-Peer Live Video Streaming


Agenda

  • Challenges with analyzing P2P apps

  • How is measurement done?

  • Analysis of Control Protocols

  • Defining Metrics

  • Analysis of Data Plane

  • Summary and Conclusion


P2P Systems

  • Bright side

    • Ubiquity, resilience, scalability

    • Distributed applications

    • Academic interest generated in video applications

    • Popular

  • Not-so-bright side

    • Little understanding of the protocols

    • Their proprietary nature makes analysis difficult


Challenges with proprietary apps

  • No specification of protocols

    • Forced to conduct black-box tests

  • No documentation or API

    • Can’t write test scripts

  • Manual interaction is required


How is it done?

  • Collecting packet traces with Ethereal

  • Separating control traffic from data traffic

  • Reverse engineering the protocols by analyzing control traffic

  • Data plane analysis on some metrics

  • Applications studied:

    • PPLive

    • SOPCast


Test Machines

  • Intel Pentium 4 computers

  • Windows XP OS

  • Ethereal Software


Control Protocols

  • Software Update

    • Version checking and downloading updates

  • Channel Lists

    • Downloading channel lists from webserver

  • Bootstrap

    • Getting initialization information from webserver

  • Maintaining Peers

    • Getting initial list of hosts and updating them regularly

  • Requesting data


Separating control and data traffic

  • Observing packet size

    • Packet size < 40 bytes – ACKs (40%)

    • Packet size > 1 KB – data packets (40–50%)

    • In between – control packets (10–20%)

  • Measuring flow rate

    • If > 4 KB/s, it is a data flow (see the sketch below)
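A sketch of the heuristic above; the thresholds come from the slide, while the code itself is our illustration.

```python
def classify_packet(size_bytes: int) -> str:
    """Classify a packet as ACK, data, or control based on its size."""
    if size_bytes < 40:
        return "ack"
    if size_bytes > 1024:
        return "data"
    return "control"

def is_data_flow(bytes_transferred: int, duration_s: float) -> bool:
    """A flow averaging more than 4 KB/s is treated as a data (video) flow."""
    return duration_s > 0 and bytes_transferred / duration_s > 4 * 1024

print(classify_packet(1400))           # 'data'
print(is_data_flow(600_000, 60.0))     # True (10 KB/s)
```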


PPLive

  • Protocol analysis done on PPLive

  • SOPCast works very similarly

  • Both are based in Asia with a strong American following

  • Both attract a large number of users


PPLive Protocol Analysis

  • Software Update

    • GET message sent to update.pplive.com

    • Checks for update.inf

    • Scalability concerns

  • Channel List

    • Contact centralized server at http://list.pplive.com

    • Get the all.xml file, which lists the channels

    • Channel lists specify trackers

    • Flash crowd point in the system



Definitions

  • Flow

    • F(A1, X1) = {IP_A, P_A1, IP_X, P_X1} – the flow between port P_A1 on host A and port P_X1 on host X

  • Rate of Flow

    • Bytes transferred in the flow divided by its duration

  • Duration of flow

  • Parent and Child

    • Relationship between peers

  • Distance

  • Cost

    • Miles per byte

  • Stability
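An illustrative sketch of the flow metrics above. The rate formula (bytes over duration) and the byte-weighted miles-per-byte cost are plausible readings of the definitions; the exact formulas appeared as figures on the original slide, so treat these as assumptions.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    bytes_transferred: int
    duration_s: float
    distance_miles: float            # geographic distance between the endpoints

    def rate(self) -> float:
        """Average flow rate in bytes per second."""
        return self.bytes_transferred / self.duration_s if self.duration_s else 0.0

def average_cost(flows) -> float:
    """Byte-weighted miles per byte across a host's flows (assumed definition)."""
    total_bytes = sum(f.bytes_transferred for f in flows)
    if total_bytes == 0:
        return 0.0
    return sum(f.distance_miles * f.bytes_transferred for f in flows) / total_bytes

# Example: a 5 MB flow over 100 s between hosts 7,000 miles apart.
f = Flow("10.0.0.1", 4004, "10.0.0.2", 5005, 5_000_000, 100.0, 7000.0)
print(f.rate())            # 50000.0 bytes/s
print(average_cost([f]))   # 7000.0 miles per byte
```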


Data Plane Analysis

  • Network Resource Usage

    • Bandwidth

    • Number of children supported

    • Connectivity

  • Locality of peers

    • Cost of downloading/uploading

  • Stability


Network Resource Usage

Bandwidth

Number of children supported

Connectivity


Bandwidth

  • Expected

    • Fairness

    • Limits on upload, download, and the ratio between them

  • Reality

    • No policy control over upload

    • Upload increases 3× when 3 instances are used (bottom figure)


Children Supported

  • Number of parents

    • Between 3 and 5

    • Same for high-capacity (top) and low-capacity (bottom) nodes

    • A single parent is not possible due to group dynamics

  • Unfair children distribution – 15-20 children for HC nodes, 0 for LC nodes



Connectivity

  • Data plane structure

    • Only a very small fraction of the connected hosts qualify as a parent or a child

    • Unstructured data plane – connectivity maintained through randomness


Locality of Peers

Cost of Downloading

Cost of Uploading


Visibility

  • Three levels of visibility

  • Measurements are taken at the host level


Cost of Download

  • High-capacity nodes

    • High cost of download

    • Parents in Asia

  • Low-capacity nodes

    • Lower cost of download

    • Parents in America

  • Reason?


Cost of Upload

  • Cost vs. time

    • Measured on HC nodes

    • Average suggests low cost

  • CDF

    • Above 60% of children are in Asia

    • Parents are in America

  • Inefficiency of the system

    • In the majority of cases, data is sent back to Asia


Stability

  • Stability vs. time

    • 30% of parents change between measurement intervals (quantified in the sketch below)

  • Cause?

    • Group dynamics and the random nature of the data plane
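One way (ours, for illustration) to quantify the observation above: the fraction of the previous interval's parent set that is no longer a parent in the current interval.

```python
def parent_churn(prev_parents: set, cur_parents: set) -> float:
    """Fraction of last interval's parents that changed in the current interval."""
    if not prev_parents:
        return 0.0
    return len(prev_parents - cur_parents) / len(prev_parents)

# Example: 3 of 10 parents from the last interval were replaced -> 0.3
prev = set(range(10))
cur = (prev - {0, 1, 2}) | {20, 21, 22}
print(parent_churn(prev, cur))   # 0.3
```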


Summary

  • Unfairness

  • Improper NAT handling

  • Inefficient Distribution of Data

  • Transport Protocol

    • Not ideal for real-time delivery; adds overhead

    • Associated delay

  • Security

    • Control protocols are not encrypted


Contributions

  • Gained valuable insight into the working of these applications

  • High resource usage

  • Fairness is unsatisfactory

  • The metrics defined can be used to study other applications

  • Brings up issues to be addressed


Questions?

