
Lecture 7. P2P streaming: design aspects and performance issues



  1. Lecture 7. P2P streaming: design aspects and performance issues Dmitri Moltchanov, TUT, Fall 2015

  2. Outline • Quality: compression basics • How to assess video quality? • Recent observations of live streaming systems • Improving streaming performance in P2P systems • Overlay architectures • Layered coding • Multiple description coding • Scheduling • Incentive mechanisms • Traffic localization • Active quality control?

  3. The main difference • Scheduling of chunks • Why? Different quality metrics! • Recall BitTorrent • Rarest chunk first • Tit-for-tat • Improve network-wide dissemination first • P2P streaming • Scheduling window • Next chunk first • Improve my own performance first

  4. Compression basics

  5. Performance degradation • Network transmission • Losses • Delays • Well, what about compression? • Both desirable and undesirable in a network environment • Desirable: reduces the required rate up to 20 times • Undesirable • Degrades the quality (lossy compression) • Magnifies the effect of losses

  6. Video compression standards • Video: two or more multiplexed media streams • Video stream • Audio stream (voice as a special case) • Additional streams, e.g. subtitles, different languages, etc. • Standards • ITU-T Video Coding Experts Group (VCEG): H.26x • ISO/IEC Moving Picture Experts Group (MPEG): MPEG-1/2/4/7 • What these standards provide • Analog-to-digital / digital-to-analog conversion • Lossy compression of media • Stream description • Packetization

  7. How compression is achieved • Compression removes the following redundancies • Spatial • Temporal • Psycho-visual • We exploit them by • Predicting pixel values in space • Predicting them in time • Losing some unnecessary info • Compression: up to 30 times

  8. Types of frames • Three types of frames • I-frame (intracoded) • P-frame (predicted) • B-frame (bi-directionally predicted) • Frames can be organized into a group of pictures (GoP) • A repeated sequence of frames • Why? It is the access unit for players • Example: (3,12) means the distance between anchor (I/P) frames is 3 and the GoP length is 12, as in the sketch below
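
A minimal Python sketch (the function name and layout are illustrative) that generates the display-order frame pattern of an (m, n) GoP:

```python
def gop_pattern(m, n):
    """Frame pattern of an (m, n) GoP: m is the distance between
    anchor frames (I/P), n is the GoP length."""
    frames = []
    for i in range(n):
        if i == 0:
            frames.append("I")   # intracoded anchor, decodable on its own
        elif i % m == 0:
            frames.append("P")   # predicted from the previous anchor
        else:
            frames.append("B")   # bi-directionally predicted
    return "".join(frames)

print(gop_pattern(3, 12))  # -> IBBPBBPBBPBB
```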

  9. Temporal prediction • Removing temporal redundancy • Works with digitized macroblocks

  10. A problem with prediction • Recall the transmission medium for a moment • Loss of a packet causes • Error propagation in time • Error propagation in space

  11. Assessing video quality

  12. Metrics of interest • Metrics we care about here • Service integrity • Audiovisual quality • Service integrity • Continuity of playback • Affected by the system topology and network performance • Prefetching delay (response time) • Affected mainly by the topology • … • What quality are we talking about? A joint one • Video quality • Audio quality • Overall quality (audio + video)

  13. Joint assessment of video and audio • We are talking about multimedia here • One of the media may have a more profound effect • Audio and video degradations do correlate • They may affect the joint quality in some complex way… • How to assess the quality? • Develop a new test for multimedia? • Develop audio and video tests separately and then combine them? • A single test for joint quality • Could be complicated • Separate tests • Easier to develop • Reuse of existing concepts • But how to join them together?

  14. Subjective tests for video/audio • MOS: mean opinion score • Survey humans watching your service under similar conditions • Ask them to grade it on a certain scale, e.g. 1–5 • Compute the mean… • What are the problems with MOS? • Requires an audience of people • Requires very specific conditions • There are a number of standards (e.g. ITU-R BT.500-11 for video) • What about the network environment?

  15. Video: objective tests • What do they do? • Try to automate the process of quality assessment • How: try to correlate well with subjective tests • Simple tests: based on comparing pixel values • Complex tests: based on properties of the human visual system • Types of objective tests • Based on the availability of the reference video at the evaluation point • FR: full-reference • RR: reduced-reference • NR: no-reference • Suitability for networking • NR is fully appropriate • RR is appropriate • FR is not suitable at all

  16. Video: objective tests • Simple pixel-based metrics (see the formulas below) • Mean squared error (MSE) • X and Y are arrays of pixels, N is the number of pixels • Peak signal-to-noise ratio (PSNR) • L is the maximum possible pixel value • Some observations • They sometimes perform poorly • But: only a few advanced FR VQA techniques do better • Inherently of FR type… • Can be modified to work in a network environment
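
Written out, these are the standard definitions (using X, Y, N and L as introduced above):

```latex
\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(X_i - Y_i\right)^2,
\qquad
\mathrm{PSNR} = 10\,\log_{10}\frac{L^2}{\mathrm{MSE}}\ \text{dB}
```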

  17. Audio: subjective tests • First of all, what media are we talking about? • Speech? 0.3–3.4 kHz • Audio? 0.02–20 kHz • They are very different! • Subjective tests are MOS-based • Objective tests • Similar NR/RR/FR classification • ITU-T P.862, PESQ for speech, FR type • ITU-R BS.1387, PEAQ for audio, FR type • No RR/NR tests have been proposed yet for generic audio • There is one for speech… let us consider it as an example

  18. Speech: E-model • The ITU-T G.107 E-model needs • The type of the codec • Network-intrinsic performance metrics • IP packet loss ratio (IPLR) • IP packet transfer delay (IPTD) • The measure of quality is the R-factor, combining (see below) • R0 – starting value • Is – degradation as a result of compression/coding • Id – degradation caused by end-to-end delay • Ie – degradation caused by losses • A – advantage factor
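
Putting the listed components together gives the standard G.107 rating (newer revisions of the model replace Ie with an effective, loss-dependent Ie,eff):

```latex
R = R_0 - I_s - I_d - I_e + A
```

A higher R means better quality; G.107 also defines a non-linear mapping from R to MOS.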

  19. Speech: E-model and MOS • It does not work as expected… Why? • Packet losses tend to clip the speech • Users perceive quality transitions asymmetrically • Switching from bad to good quality seems to take longer than it actually does • Switching from good to bad is perceived almost instantly

  20. Joint assessment: combining together • Non-linear model: rav = c0 + c1·ra + c2·rv + c3·ra·rv • ra, rv, rav – MOS values of the audio/video/combined stream • c0, …, c3 – coefficients we need to find • The ra·rv term is responsible for the cross-dependence factor • Usage of this model • You need to fit it to empirical data (see the sketch below) • A linear variant simply drops the cross-dependence term ra·rv • Concluding notes • Most models were developed for low-bit-rate, low-motion video • Everything greatly depends on the type of the codec • A loss may lead to the loss of a whole frame… • Audio is often more important here (e.g. in a video call)…
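
Fitting the model to empirical MOS data is an ordinary least-squares problem in the coefficients; a minimal Python sketch (the sample scores are made-up placeholders):

```python
import numpy as np

# Hypothetical per-clip MOS scores: audio-only, video-only, combined.
ra  = np.array([4.1, 3.2, 2.5, 4.5, 1.9])
rv  = np.array([3.8, 2.9, 3.1, 4.2, 2.2])
rav = np.array([4.0, 3.0, 2.7, 4.4, 2.0])

# Design matrix for rav = c0 + c1*ra + c2*rv + c3*ra*rv.
A = np.column_stack([np.ones_like(ra), ra, rv, ra * rv])
coeffs, *_ = np.linalg.lstsq(A, rav, rcond=None)
print(coeffs)  # fitted c0, c1, c2, c3
```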

  21. Streaming quality observations

  22. Biggest concerns: asymmetry + best effort • Asymmetric bandwidth • It is the nature of access subscriptions (ADSL, cable modems) • It is what the operator wants (it pays for outgoing traffic) • It is what clients sometimes impose (limiting the outgoing rate) • What can we do? Server farms! • Just like in CDNs… • Best-effort Internet • Quality degradation • Reasons? Losses and delays as a result of congestion • P2P may somewhat decrease these effects…

  23. Measurements: 2007–now • Some systems handle high peer churn effectively • Some handle 100–200 thousand viewers • A lot of clients get good quality • Some clients get extremely bad quality • Increasing the prefetching period improves quality • Systems often require 10–100 seconds of prefetching • There is a threshold beyond which quality no longer improves • Quality degradation • Picture freezing as a result of losses (a decoder problem) • Results in discontinuity, which is very annoying • Mesh topologies are generally better

  24. Quality monitoring system1 • Concept: crawling the buffer maps of clients • Continuous monitoring (they used it for PPLive) • No additional software • Passive sniffing • Problems? • Only one… protocols are proprietary • It does not really represent quality… Better call it starvation of nodes. 1Hei et al., “Inferring network-wide quality in P2P live streaming systems”, IEEE JSAC, 25(9), 1640–1654, Dec. 2007
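
The starvation idea can be illustrated with a short Python sketch; the buffer-map format and the threshold below are assumptions for illustration, not PPLive's actual proprietary protocol:

```python
def is_starving(buffer_map, threshold=0.2):
    """buffer_map: 0/1 flags, one per chunk in the scheduling window.
    A peer holding too few chunks is likely to stall soon."""
    fill = sum(buffer_map) / len(buffer_map)
    return fill < threshold

# Crawled maps (hypothetical): peer -> chunk availability in its window.
maps = {"peerA": [1] * 28 + [0] * 4,   # nearly full buffer
        "peerB": [1] * 5 + [0] * 27}   # nearly empty buffer
print([p for p, m in maps.items() if is_starving(m)])  # -> ['peerB']
```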

  25. Improving streaming in P2P

  26. What are the means? • Plenty of them! • We do not depend on the network operator! Well… • We may at least ask it not to interfere with our efforts! • Proactive ways • Overlay architecture • Layered coding • Multiple-description coding • Traffic localization • Scheduling • Incentive mechanisms • Server farms • Active way: monitoring + supporting starving peers • Well, we are not ready for that yet…

  27. Overlay architecture • Mesh vs. tree • Comparing the two • In trees, information is disseminated faster • In meshes, reliability is much better • Reliability in tree topologies • Multiple trees! • Well, multiple meshes will still be more reliable… • Sometimes referred to as PULL (mesh) and PUSH (tree)

  28. Layered coding • Principles • n layers • one base layer • n−1 enhancement layers • Usage in P2P • The tree/mesh carrying the base layer is the one everyone subscribes to • Other trees/meshes are just for better quality • Requirements • Trees/meshes need to be enumerated • You need to follow the order of layers • The k-th layer is useful only if you subscribed to the previous k−1 layers (see the sketch below) • [Figure: two layers]
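
A minimal Python sketch of the layer-ordering constraint (names are illustrative): enhancement layers received above a gap cannot be decoded.

```python
def usable_layers(received):
    """received: set of layer indices held by a peer (0 = base layer).
    Returns how many consecutive layers, starting from the base,
    can actually be decoded."""
    k = 0
    while k in received:
        k += 1
    return k

print(usable_layers({0, 1, 3}))  # -> 2: layer 3 is wasted without layer 2
print(usable_layers({1, 2}))     # -> 0: nothing decodes without the base layer
```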

  29. Multiple description coding • Similar principle • n layers (descriptions) • but here the layers are independent! • any single one is enough for basic quality; each extra one improves it • Similar usage in P2P • At least one tree/mesh needs to be joined • Other trees/meshes enhance quality • [Figure: three layers]

  30. Why are MDC and LC so good? • The tool behind them is path diversity • Excellent for P2P systems: multiple trees/meshes!

  31. Scheduling • Applicable to mesh systems only • File-sharing P2P • The file is divided into chunks • It does not matter which chunk comes next • File sharing is delay-tolerant • P2P streaming: scheduling window • The sequence of packets just ahead of the current playback point • Needs to be received to ensure continuous playback • One of the most critical parts

  32. Scheduling: trade-offs • Careful tuning of the window size against two metrics • Level of dissemination of data (LOD) • Continuity of playback (COP) • Large scheduling window • LOD increases • 1st POV: COP increases, as more sources have the data • 2nd POV: COP decreases, as the window is larger • Small scheduling window • LOD decreases • 1st POV: COP increases, as the window is small • 2nd POV: COP decreases, as fewer sources have the data • Which effect dominates in each case? Nobody knows…

  33. Scheduling: strategies • Scheduling strategies • Urgency-based • Rarity-based • Hybrid approaches • Urgency-based • Ask for the chunks closest to the playback point • An egocentric, disruptive approach • LOD decreases, COP increases in the short run • Rarity-based • Ask for the rarest chunk in the window • LOD increases, quality may increase in the long run • Hybrid approach: assign priorities to the packets (see the sketch below)
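
A minimal Python sketch of a hybrid scheduler (the scoring function and weights are illustrative, not taken from any deployed system): each missing chunk in the window is scored by both urgency (closeness to the playback point) and rarity (how few neighbours hold it), and the best-scored chunk is requested first.

```python
def next_chunk(window, have, availability, playhead, w_urgency=0.6):
    """window: chunk ids in the scheduling window (ascending).
    have: set of chunk ids already buffered.
    availability: chunk id -> number of neighbours holding it."""
    def score(c):
        urgency = 1.0 / (c - playhead + 1)              # closer to playback = more urgent
        rarity = 1.0 / max(availability.get(c, 1), 1)   # fewer holders = rarer
        return w_urgency * urgency + (1 - w_urgency) * rarity

    missing = [c for c in window if c not in have]
    return max(missing, default=None, key=score)

avail = {c: 5 for c in range(100, 110)}  # most chunks are well replicated...
avail[104] = 1                           # ...but chunk 104 is rare
print(next_chunk(range(100, 110), {100}, avail, playhead=100))  # -> 104
```

Shifting w_urgency towards 1 recovers the pure urgency-based strategy; towards 0, the pure rarity-based one.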

  34. Incentive mechanisms • Why are they important? • P2P relies on clients contributing bandwidth • Free-riding problem: it decreases the available bandwidth • Philosophy of incentive mechanisms • Quality needs to be proportional to the contribution of a peer • If you do not contribute, you get nothing • Groups of incentive mechanisms • Monetary-based • Reciprocity-based • Effect on quality • Balance of upload and download

  35. Incentive mechanisms: monetary • Monetary • Virtual currency • Spent when receiving, earned when providing • Centralized mechanisms using accounting modules • Shortcomings and advantages • Additional network load • Additional timing overhead • Single point of failure • Good for commercial systems • Could be the only way for future commercial P2P systems • Real money instead of virtual currency

  36. Incentive mechanisms: reciprocity • Reciprocity-based • Peers maintain info about other peers' contributions • A distributed algorithm • Direct and indirect approaches • Direct approach • A peer uses its own history of communication (see the sketch below) • What if no such history is available (e.g. a new peer)? • In a BitTorrent-like system free-riding is easy • Indirect approach • A peer gathers info from other peers too • Vulnerable to malicious nodes • Something needs to be done about free-riding!
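
A minimal Python sketch of the direct approach (class and field names are illustrative): each peer keeps its own per-neighbour upload/download ledger and serves the best contributors first.

```python
from collections import defaultdict

class ContributionLedger:
    """Direct reciprocity: only this peer's own observations are used,
    so a brand-new neighbour has no history (the weakness noted above)."""
    def __init__(self):
        self.sent = defaultdict(int)  # bytes we uploaded to each peer
        self.got = defaultdict(int)   # bytes each peer uploaded to us

    def record(self, peer, sent=0, got=0):
        self.sent[peer] += sent
        self.got[peer] += got

    def rank(self, peers):
        # Serve first the peers that gave us the most relative to what they took.
        return sorted(peers, key=lambda p: self.got[p] / (self.sent[p] + 1),
                      reverse=True)

ledger = ContributionLedger()
ledger.record("p1", sent=100, got=500)  # generous peer
ledger.record("p2", sent=400, got=50)   # likely free-rider
print(ledger.rank(["p1", "p2"]))        # -> ['p1', 'p2']
```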

  37. Traffic localization • What is that? • An attempt to localize traffic within domains • Around 90% of P2P traffic crosses inter-ISP links • Operators throttle P2P traffic: http://www.p2pon.com/guides/ • Trying to please network operators… • Is it possible? • Comcast localized 34% in its field trials • General consensus: up to 80% can be localized • What is the effect? • Direct effect: localization itself • Side effect: better quality, as only a few long-distance connections remain • Side effect: a high degree of localization decreases quality…

  38. Traffic localization: approaches • Caching techniques • Create a cache of files nearby • Where? Operators don't want to deal with that… • Use of gateways • These need to be very powerful nodes • Nodes could be too loosely connected • Biased choice of peers (see the sketch below) • Preference is given to local peers • How to identify local peers? • Measurements of RTT • IP addresses (prefix matching: /8, /13, /24) • Special services provided by a third party • e.g. Internet Coordinate Systems (ICS)
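
A minimal Python sketch of the biased peer choice via prefix matching (the addresses and the /24 prefix length are illustrative):

```python
import ipaddress

def local_peers(my_addr, candidates, prefix_len=24):
    """Return the candidates inside the same /prefix_len network as my_addr."""
    net = ipaddress.ip_network(f"{my_addr}/{prefix_len}", strict=False)
    return [c for c in candidates if ipaddress.ip_address(c) in net]

peers = ["192.0.2.17", "192.0.2.200", "198.51.100.7"]
# Prefer same-prefix peers; fall back to the rest only if needed.
print(local_peers("192.0.2.5", peers))  # -> ['192.0.2.17', '192.0.2.200']
```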

  39. Server farms • What is that? • Servers injecting bandwidth into the overlay • Can be done for both VoD and live streaming • Implemented in most P2P streaming systems, e.g. SopCast • Hybrid CDN/P2P systems!

  40. Active quality improvement • Basic principle • Monitor quality locally • Identify the moments when it is about to degrade • Do something about it! • What to do: ask for/contact new peers, contact the server farm, etc. • Abstract example (sketched below)
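
A hedged sketch of such a control loop (the player interface, threshold and reactions are all hypothetical, just to make the principle concrete):

```python
import time

def quality_control_loop(player, low_watermark=5.0, poll=1.0):
    """Monitor the buffered playback time locally and react
    before the picture freezes."""
    while player.is_playing():
        buffered = player.buffered_seconds()  # local quality monitoring
        if buffered < low_watermark:          # degradation is imminent
            player.request_new_peers()        # widen the neighbourhood
            player.contact_server_farm()      # fall back to injected bandwidth
        time.sleep(poll)
```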
