
PALS: Peer-to-Peer Adaptive Layers Streaming

PALS: Peer-to-Peer Adaptive Layers Streaming. Reza Rejaie, Computer & Information Science Department, University of Oregon. http://www.cs.uoregon.edu/~reza. In collaboration with Antonio Ortega (IMSC, USC).




Presentation Transcript


  1. PALS: Peer-to-Peer Adaptive Layers Streaming Reza Rejaie Computer & Information Science Department University of Oregon http://www.cs.uoregon.edu/~reza In collaboration with Antonio Ortega (IMSC, USC)

  2. Motivation • Peer-to-Peer (P2P) networks have emerged as a new communication paradigm that improves: • Scalability, robustness, load balancing, etc. • Research on P2P networks has mostly focused on locating a desired content, not on delivery • Audio/video streaming applications are becoming increasingly popular over P2P networks • “Streaming” a desired content to a peer rather than swapping files, e.g. sharing family videos • How can we support streaming applications in P2P networks?

  3. Streaming in P2P Networks • Each peer might have limited resources • Limited uplink bandwidth or processing power • Several peers should collectively stream a requested content; this can achieve: • Higher available bandwidth, thus higher delivered quality • Better load balancing among peers • Less congestion across the network • More robustness to peer dynamics • But streaming from multiple peers presents several challenges …

  4. Challenges • How to coordinate delivery among multiple senders? • How to cope with unpredictable variations in available bandwidth? • How to cope with dynamics of peer participation? • Which subset of peers can provide max. throughput, thus max. quality?

  5. P2P Adaptive Layer Streaming (PALS) • PALS is a receiver-driven framework for adaptive streaming from multiple senders • All the machinery is implemented at receiver • Using layered representation for streams • Applicable to any multi-sender streaming scenario.

  6. Justifying Main Design Choices • Why receiver-driven? • Receiver is a permanent member of the session • Receiver has knowledge about delivered packets • Receiver knows current subset of active senders • Receiver observes each sender’s bandwidth • Minimum load on senders => more incentive! • Minimum coordination overhead! • Why layered representation for stream? • Both MD and layered encoding should work • Efficiency vs robustness • PALS currently uses layered encoded stream

  7. Assumptions & Goals • Assumptions: • All peers/flows are congestion controlled • Content is layered encoded • All layers are CBR with the same bandwidth* • All senders have all layers* • The list of senders is given (* not requirements) • Goals: • Maximize delivered quality • Deliver the maximum number of layers • Minimize variations in quality

  8. Basic Framework • Receiver: periodically requests an ordered list of packets from each sender • Sender: simply delivers the requested packets in the given order at the congestion-controlled rate • Benefits of ordering the requested list: • Receiver can better control delivered packets • Graceful degradation in quality when bandwidth suddenly drops • Periodic requests => less network overhead
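The ordered-request idea above can be sketched as follows. The `Packet` type and the ordering key are assumptions for illustration (the slides do not prescribe these names), but the principle matches slide 8: lower layers and earlier playout times come first, so the most important packets still arrive if a sender's bandwidth suddenly drops.

```python
# Sketch of the receiver's periodic ordered request (hypothetical types).
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    layer: int       # 0 = base layer, higher values = enhancement layers
    timestamp: int   # playout time of the packet

def order_request(packets):
    """Order a request list: lower layers and earlier playout first,
    so a sudden bandwidth drop degrades quality gracefully."""
    return sorted(packets, key=lambda p: (p.layer, p.timestamp))

# Example: an unordered pool of required packets for one period
pool = [Packet(2, 10), Packet(0, 20), Packet(1, 10), Packet(0, 10)]
first = order_request(pool)[0]   # base-layer packet with earliest playout
```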

  9. Basic Framework [Figure: receiver streams from Peer 0, Peer 1, and Peer 2 across the Internet; a demux feeds per-layer buffers (buf0–buf3) into the decoder] • Receiver keeps track of the EWMA bandwidth from each sender, and the EWMA overall throughput • Estimates the total number of packets to be delivered during a period (K) • Allocates the K packets among active layers (Quality Adaptation), i.e. controls bw0(t), bw1(t), … • Assigns a subset of packets to each sender (Packet Assignment), allocating each sender's bandwidth among active layers • Keeps senders in sync with the receiver
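A minimal sketch of the bookkeeping described above. The TCP-style smoothing factor and the fixed 1000-byte packet size are illustrative assumptions, not values given in the talk:

```python
# Receiver-side bandwidth bookkeeping (names and constants assumed).
def ewma(prev, sample, alpha=0.125):
    """Exponentially weighted moving average of a bandwidth sample."""
    return (1 - alpha) * prev + alpha * sample

def packets_per_period(ewma_bw_bps, period_s, pkt_size_bytes=1000):
    """Estimate K, the number of packets deliverable in one period,
    from the EWMA of the aggregate throughput."""
    return int(ewma_bw_bps * period_s / (pkt_size_bytes * 8))

# e.g. three senders measured at 200, 300, 500 kbps
per_sender = [200e3, 300e3, 500e3]
total = sum(per_sender)                    # 1 Mbps aggregate
K = packets_per_period(total, period_s=2)  # packets to allocate this period
```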

  10. Key Components of PALS • Quality Adaptation (QA): to determine the quality of the delivered stream, i.e. the required packets for all layers during each interval • Sliding Window (SW): to keep all senders synchronized with the receiver & busy • Packet Assignment (PA): to properly distribute required packets among senders • Peer Selection (PS): to identify a subset of peers that provide maximum throughput • We present sample mechanisms for QA, SW and PA.

  11. Quality Adaptation [Figure: buffer evolution over time — a filling phase while the average bandwidth exceeds the stream bandwidth, and a draining phase while it falls below] • Same design philosophy as unicast QA but a different approach: • Receiver-based • Delay in the control loop • Multiple senders • Periodic adaptation, rather than on a per-packet basis • Two degrees of control: • Add/drop top layer(s) • Adjust inter-layer bandwidth distribution

  12. Quality Adaptation (cont’d) • Goal is to control the buffer state: • Total buffered data • Distribution of buffered data among active layers • Control the evolution of the buffer state by adjusting inter-layer bandwidth allocation (bw0, bw1, …) • Efficient buffer state: • Minimum number of buffering layers • Most skewed distribution, i.e. allocate the maximum share to buf0

  13. Quality Adaptation (cont’d) • The efficient buffer state depends on the pattern of variations in bandwidth, which is unknown at the receiver • Alternative solutions: • Use measurement to derive the pattern • Use a fixed target buffer distribution, e.g. buf(i) = n * buf(i+1)
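The fixed target rule buf(i) = n * buf(i+1) amounts to a geometric split of the total buffered data, most skewed toward the base layer. A sketch, with the function name and signature invented for illustration:

```python
# Geometric target buffer distribution under buf(i) = n * buf(i+1).
def target_distribution(total, layers, n=2):
    """Split `total` buffered packets across `layers` so that each
    layer's target is n times the next layer's (buf0 gets the most)."""
    weights = [n ** (layers - 1 - i) for i in range(layers)]
    unit = total / sum(weights)
    return [w * unit for w in weights]

# e.g. 70 buffered packets over 3 layers with n = 2
targets = target_distribution(70, 3)   # base layer gets the largest share
```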

  14. Quality Adaptation (cont’d) • Run the QA mechanism periodically, to plan for one period: • Assume the EWMA bandwidth remains fixed and estimate the number of incoming packets • Determine fill/drain phase, and the number of layers to keep • Filling phase: 1) Sequentially fill up layers to the next target buffer state with n layers 2) Sequentially fill up layers to the target buffer state with n+1 layers 3) Add a new layer, go to Step 1 • Draining phase: 1) Sequentially drain layers down to the previous target buffer state 2) Drop the top layer, go to Step 1
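The filling phase might be sketched roughly as follows. This collapses steps 1–3 into a single pass and invents the `target` callback and buffer representation, so it illustrates the idea rather than reproducing the PALS algorithm itself:

```python
# Simplified filling phase (steps 1-3 collapsed; names assumed).
def fill_phase(bufs, K, target):
    """Spend a budget of K expected packets: top up each active layer
    sequentially toward its target buffer state; once the targets are
    met, any leftover budget starts a new top layer."""
    budget = K
    want = target(len(bufs))          # target state for current layer count
    for i in range(len(bufs)):
        take = min(max(0, want[i] - bufs[i]), budget)
        bufs[i] += take
        budget -= take
    if budget > 0:                    # targets met: add a new layer
        bufs.append(budget)
    return bufs

# Example: buffers [30, 10, 5], 30 incoming packets, targets [40, 20, 10]
state = fill_phase([30, 10, 5], 30, lambda k: [40, 20, 10, 0][:k])
```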

  15. Sliding Window [Figure: per-layer timelines (L0, L1, L2) with a request window of size D sliding ahead of the playout time] • Keep senders loosely synchronized with the receiver’s playout time: • Sliding window: send a new request to each sender per window • Overwriting: a new request overwrites the old one

  16. Sliding Window (cont’d) • Window size controls the tradeoff between responsiveness and control overhead: • Should be a function of RTT • Difficult to manage senders with a major difference in RTT • Keep senders busy when bandwidth is underestimated: • Reverse Flow Control (RFC): send a new request to a sender before it runs out of packets • How should the QA mechanism be coupled with the receiver’s requests to different senders?
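Reverse Flow Control can be illustrated with a simple threshold test: refill a sender once its outstanding request will drain in less than one RTT, so the new request arrives before the sender idles. The one-RTT threshold and packet size here are assumptions, not values from the talk:

```python
# Hypothetical RFC trigger: request more before the sender runs dry.
def needs_refill(outstanding_pkts, sender_bw_bps, rtt_s, pkt_size_bytes=1000):
    """True once the remaining requested packets cover less than one RTT
    of sending time at this sender's measured rate."""
    drain_time = outstanding_pkts * pkt_size_bytes * 8 / sender_bw_bps
    return drain_time < rtt_s

# A 1 Mbps sender with 10 outstanding packets drains in 80 ms:
# with a 100 ms RTT, a new request must be issued now.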

  17. Coupling QA & SW • Synchronized Requesting: send new requests to all senders simultaneously when the window slides forward + QA uses overall BW, rather than per-sender BW - A fast sender becomes idle, or overwrites slow senders - Hard to manage senders with a major difference in RTT • Asynchronous Requesting: send a new request to a sender when RFC signals + Effectively manages different senders separately - Requires a different approach to QA, i.e. QA should factor in individual BW

  18. Packet Assignment • Given a pool of required packets in one interval, how do we assign packets to senders? • The number of packets assigned to a sender depends on its bandwidth • Need to cope with sudden decreases in bandwidth • Order each requested list based on the importance of packets 1) How to order a given set of required packets? 2) How to divide them among senders?

  19. Packet Assignment (cont’d) 1) Need a criterion for ordering packets, e.g. lower layers or earlier playout times first 2) Weighted round-robin packet assignment [Figure: ideal vs. efficient assignment patterns]
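Weighted round-robin assignment over an already-ordered packet list might look like the following sketch, a classic credit-based WRR; PALS's exact scheme may differ:

```python
# Credit-based weighted round robin (illustrative, names assumed):
# deal the ordered packet list to senders in proportion to bandwidth,
# so faster senders carry proportionally more of the load.
def assign_packets(ordered_pkts, sender_bw):
    total = sum(sender_bw)
    lists = [[] for _ in sender_bw]
    credit = [0.0] * len(sender_bw)
    for pkt in ordered_pkts:
        # each sender earns credit equal to its bandwidth share,
        # and the sender with the most credit takes the next packet
        for i, bw in enumerate(sender_bw):
            credit[i] += bw / total
        s = max(range(len(credit)), key=credit.__getitem__)
        credit[s] -= 1.0
        lists[s].append(pkt)
    return lists

# e.g. 4 packets over senders with bandwidth ratio 2:1:1
lists = assign_packets(list(range(4)), [2, 1, 1])
```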

  20. Preliminary Evaluation • Synchronized requesting • Configuration parameters: • 3 PALS senders over RAP connections with different RTTs • 10 TCP background flows • Window size = 4 * SRTT [Figure: simulation topology — senders spal0, spal1, spal2 and receiver rpal0 over links of 5–25 ms delay and 5–10 Mb capacity, sharing the path with 10 TCP flows]

  21. Scenario I

  22. Scenario II

  23. Related Work • Streaming from multiple senders: • MD [Apostolopoulos et al.] • Congestion-controlled streaming from mirror servers [Nguyen et al.] • Streaming in P2P networks: forming distribution trees from peers • CoopNet [Padmanabhan et al.] • Existing systems, e.g. Abacast, ChainCast, … • Structuring a distribution tree, e.g. ZIGZAG [Tran et al.] • Receiver-driven adaptation: RLM [McCanne et al.]

  24. Conclusion & Future Work • PALS, a receiver-driven framework for streaming from multiple senders • Receiver-based QA from multiple senders • Keeping senders loosely synchronized • Distributing load/packets among senders • Future work: • Extensive evaluation through simulation • Measurement-based derivation of BW variations • Peer identification and selection mechanisms • Effect of peer dynamics on performance • Partially available content at senders • Supporting VBR streams and MD encoding
