
User Modeling



Presentation Transcript


  1. INF SERV – Media Storage and Distribution Systems: User Modeling 8/9 – 2003

  2. Why user modeling? • Multimedia approach • If you can’t make it, fake it • Translation • Present real-life quality • If not possible, save resources where it is not recognizable • Requirement • Know content and environment • Understand limitations to user perception • If these limitations must be violated, know least disturbing saving options

  3. User Modelling • What? • Formalized understanding of • users’ awareness • user behaviour • Why? • Achieve the best price/performance ratio • Understand actual resource needs • achieve higher compression using lossy compression • potential of trading resources against each other • potential of resource sharing • relax relation between media

  4. Applications of User Modelling • Encoding Formats • Exploit limited awareness of users • JPEG/MPEG video and image compression • MP3 audio compression • Based on medical and psychological models • Quality Adaptation • Adapt to changing resource availability • no models - need experiments • Synchrony • Exploit limited awareness of users • no models - need experiments • Access Patterns • When will users access content? • Which content will users access? • How will they interact with the content? • no models, insufficient experiments - need information from related sources

  5. User Perception of Quality Changes

  6. Quality Changes • Quality of a single stream • Issue in Video-on-Demand, Music-on-Demand, ... • Not quality of an entire multimedia application • Quality Changes • Usually due to changes in resource availability • overloaded server • congested network • overloaded client

  7. Kinds of Quality Changes • Long-term change in resource availability • Random • Planned • Short-term change in resource availability • Random • Planned

  8. Kinds of Quality Changes • Long-term change in resource availability • Random • no back channel • no content adaptivity • continuous severe disruption • Planned • Short-term change in resource availability • Random • Planned

  9. Kinds of Quality Changes • Long-term change in resource availability • Random • no back channel • no content adaptivity • continuous severe disruption • Planned • change to another encoding format • change to another quality level • requires mainly codec work • Short-term change in resource availability • Random • Planned

  10. Kinds of Quality Changes • Long-term change in resource availability • Random • Planned • Short-term change in resource availability • Random • Planned

  11. Kinds of Quality Changes • Long-term change in resource availability • Random • Planned • Short-term change in resource availability • Random • packet loss • frame drop • alleviated by protocols and codecs • Planned

  12. Kinds of Quality Changes • Long-term change in resource availability • Random • Planned • Short-term change in resource availability • Random • packet loss • frame drop • alleviated by protocols and codecs • Planned • scaling of data streams • appropriate choices require user model

  13. Kinds of Quality Changes • Long-term change in resource availability • Random • no back channel • no content adaptivity • continuous severe disruption • Planned • change to another encoding format • change to another quality level • requires mainly codec work • Short-term change in resource availability • Random • packet loss • frame drop • alleviated by protocols and codecs • Planned • scaling of data streams • appropriate choices require user model

  14. Planned quality changes • Audio • Lots of research in scalable audio • No specific results for distribution systems • Rule-of-thumb • Always degrade video before audio • Video • Long-term changes • Short-term changes

  15. Planned quality changes • Audio • Video • Long-term changes • Use separately encoded streams • Switch between formats • Non-scalable formats compress better than scalable ones (Source: Yuriy Reznik, RealNetworks) • Short-term changes • Switching between formats • Needs no user modeling • Is an architecture issue
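As a rough illustration of the long-term case, the sketch below (not from the slides; the version names and bitrates are invented) picks the best-fitting separately encoded version for a measured sustained bandwidth. As the slide notes, this decision needs no user model.

```python
# Minimal sketch: picking among separately encoded (non-scalable) versions
# of the same video for a long-term, planned quality change. The version
# list and bitrates are assumptions for illustration.

VERSIONS = [("low", 400), ("medium", 900), ("high", 2000)]   # (name, kbit/s)

def pick_version(sustained_kbps: float) -> str:
    """Choose the highest-bitrate version that fits the sustained bandwidth."""
    best = VERSIONS[0][0]
    for name, rate in VERSIONS:
        if rate <= sustained_kbps:
            best = name
    return best

print(pick_version(1200))   # -> "medium"
```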

  16. Planned quality changes • Audio • Video • Long-term changes • Short-term changes • Use scalable encoding • Reduce short-term fluctuation by prefetching and buffering • Two kinds of scalable encoding schemes • Non-hierarchical • encodings are more error-resilient • fractal single image encoding • Hierarchical • encodings have better compression ratios • Scalable encoding • Support for prefetching and buffering is an architecture issue • Choice of prefetched and buffered data is not
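For the short-term case, here is a minimal sketch of the kind of layer-selection decision a sender of a hierarchically encoded stream might make. The layer bitrates, buffer target, and headroom factor are invented for this example; deciding which layers are worth keeping when not all fit is exactly where a user model comes in.

```python
# Minimal sketch (assumed values, not from the slides): choose how many
# layers of a hierarchically encoded stream to send, given the measured
# bandwidth and a client-side prefetch buffer that absorbs fluctuations.

LAYER_KBPS = [256, 128, 128, 256]   # base layer + three enhancement layers

def layers_to_send(available_kbps: float, buffered_seconds: float,
                   target_buffer_s: float = 5.0) -> int:
    """Return how many layers (>= 1) fit into the available bandwidth.

    If the client buffer is below target, be conservative and keep some
    headroom so the buffer can refill.
    """
    budget = available_kbps
    if buffered_seconds < target_buffer_s:
        budget *= 0.8                       # leave headroom to refill buffer
    total, count = 0.0, 0
    for rate in LAYER_KBPS:
        if total + rate > budget:
            break
        total += rate
        count += 1
    return max(count, 1)                    # the base layer is always sent

print(layers_to_send(available_kbps=600, buffered_seconds=2.0))  # -> 2
print(layers_to_send(available_kbps=600, buffered_seconds=8.0))  # -> 3
```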

  17. Planned quality changes • Audio • Video • Long-term changes • Short-term changes • Use scalable encoding • Reduce short-term fluctuation by prefetching and buffering • Short-term fluctuations • Characterized by • frequent quality changes • small prefetching and buffering overhead • Supposed to be very disruptive • See for yourself

  18. Planned quality changes

  19. Subjective Assessment • A test performed by the Multimedia Communications Group at TU Darmstadt • Goal • Predict the most appropriate way to change quality • Approach • Create artificial drops in layered video sequences • Show pairs of video sequences to testers • Ask which sequence is more acceptable • Compare two means of prediction • Peak signal-to-noise ratio (higher is better) • compares degraded and original sequences per frame • ignores order • Spectrum of layer changes (lower is better) • takes number of layer changes into account • ignores content and order
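The two measures compared in the test can be sketched as follows. psnr is the standard per-frame definition; layer_change_count is only a simplified stand-in for the spectrum measure that counts layer changes, as the slide emphasizes, while the published measure also weights the size of each change.

```python
import numpy as np

def psnr(original: np.ndarray, degraded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio for one frame (higher is better)."""
    mse = np.mean((original.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def layer_change_count(layers_per_frame: list) -> int:
    """Simplified "spectrum" stand-in: number of layer changes in the
    sequence (lower is better); the real measure also weights their size."""
    return sum(1 for a, b in zip(layers_per_frame, layers_per_frame[1:]) if a != b)

# Two artificial layer traces with the same average number of layers:
smooth   = [3, 3, 3, 2, 2, 2, 3, 3, 3]
flickery = [3, 2, 3, 2, 3, 2, 3, 2, 3]
print(layer_change_count(smooth), layer_change_count(flickery))  # 2 vs 8

# Per-frame PSNR on a tiny synthetic frame:
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(8, 8))
noisy = np.clip(frame + rng.integers(-5, 6, size=(8, 8)), 0, 255)
print(round(psnr(frame, noisy), 1))
```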

  20. Subjective Assessment • Used SPEG (OGI) as layer encoded video format

  21. Subjective Assessment • What is better?

  22. Subjective Assessment • How does the spectrum correspond to the results of the subjective assessment? • Comparison with the peak signal-to-noise ratio • According to the results of the subjective assessment, the spectrum is a more suitable measure than the PSNR

  23. Subjective Assessment • Conclusions • Subjective assessment of variations in layer encoded videos • Comparison of spectrum measure vs. PSNR measure • Observing spectrum changes is easier to implement • Spectrum changes indicate user perception better than PSNR • Spectrum changes do not capture all situations • Missing • Subjective assessment of longer sequences • Better heuristics • "thickness" of layers • order of quality changes • target layer of changes

  24. User Model for Synchrony

  25. Synchronization • Content Relation • e.g., several views of the same data • Spatial Relations • Layout • Temporal Relations • Intra-object Synchronization • Intra-object synchronization defines the time relation between the various presentation units of one time-dependent media object • Inter-object Synchronization • Inter-object synchronization defines the synchronization between media objects • Relevance • Hardly relevant in current NVoD systems • Somewhat relevant in conferencing systems • Relevant in upcoming multi-object formats: MPEG-4, QuickTime

  26. Inter-object Synchronization • Lip synchronization • demands a tight coupling of audio and video streams • with a limited skew between the two media streams • Slide show with audio comment • Main problem of the user model • permissible skew

  27. Inter-object Synchronization • A lip-synchronized audio/video sequence (Audio1 and Video) is followed by a replay of a recorded user interaction (RI), a slide sequence (P1 - P3) and an animation (Animation) which is partially commented by an audio sequence (Audio2). When the animation starts, a multiple-choice question is presented to the user (Interaction). Once the user has made a selection, a final picture (P4) is shown. • Main problem of the user model • permissible latency • analysing the object sequence allows prefetching • user interaction complicates prefetching
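A sketch of the "analyse the object sequence to allow prefetching" idea, with invented object sizes, start times and bandwidth. User interaction breaks this analysis, because start times after the interaction point are not known in advance.

```python
# Illustrative sketch (assumed values, not from the slides): for each media
# object in a presentation timeline, compute the latest moment at which its
# download can start so that it is complete before its scheduled start time.

def prefetch_schedule(objects, bandwidth_bytes_per_s):
    """objects: list of (name, start_time_s, size_bytes).
    Returns (name, latest_fetch_start_s) for each object."""
    schedule = []
    for name, start, size in objects:
        download_time = size / bandwidth_bytes_per_s
        schedule.append((name, start - download_time))
    return schedule

timeline = [("Audio1", 0.0, 2_000_000), ("P1", 30.0, 300_000),
            ("Animation", 60.0, 5_000_000)]
for name, t in prefetch_schedule(timeline, bandwidth_bytes_per_s=500_000):
    print(f"{name}: start fetching at t = {t:.1f}s")
```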

  28. Synchronization Requirements – Fundamentals • 100% accuracy is not required, i.e., some skew is allowed • Skew depends on • Media • Applications • Difference between • Detection of skew • Annoyance of skew • Explicit knowledge of skew • Simplifies implementation • Allows for portability

  29. Experimental Set-Up • Experiments at the IBM European Networking Center (ENC) Heidelberg to quantify synchronization requirements for • Audio/video synchronization • Audio/pointer synchronization • Selection of material • Duration • 30s in experiments • 5s would have been sufficient • Reuse of same material for all tests • Introduction of artificial skew • By media composition with professional video equipment • With frame-based granularity • Experiments • Large set of test candidates • Professional: cutters at TV studios • Casual: everyday "users" • Awareness of the synchronization issues • Set of tests with different skews lasted 45 min

  30. Lip Synchronization: Major Influencing Factors • Video • Content • Continuous (talking head) vs. discrete events (hammer and nails) • Background (no distraction) • Resolution and quality • View mode (head view, shoulder view, body view) • Audio • Content • Background noise or music • Language and articulation

  31. Lip Synchronization: Level of Detection • Areas • In sync QoS: +/- 80 ms • Transient • Out of sync

  32. Lip Synch.: Level of Accuracy/Annoyance • Some observations • Asymmetry • Additional tests with long movie • +/- 80 ms: no distraction • -240 ms, +160 ms: disturbing
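The detection and annoyance levels above can be folded into a simple classifier. The sign convention (positive skew = audio ahead of video) is an assumption of this sketch, not stated on the slides; the thresholds are the ones reported here (±80 ms "in sync", roughly -240 ms / +160 ms already disturbing).

```python
# Sketch of a lip-sync skew classifier using the thresholds from these slides.
# Sign convention (assumption): skew_ms > 0 means audio is ahead of video.

def classify_lip_sync(skew_ms: float) -> str:
    if -80 <= skew_ms <= 80:
        return "in sync"          # not detected / not distracting
    if -240 < skew_ms < 160:
        return "transient"        # detectable, not yet clearly disturbing
    return "out of sync"          # disturbing

for s in (-300, -100, 0, 90, 200):
    print(s, classify_lip_sync(s))
```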

  33. Pointer Synchronization • Fundamental CSCW shared workspace issue • Analysis of CSCW scenarios • Discrete pointer movement (e.g. "technical sketch") • Continuous pointer movements (e.g. "route on map") • Most challenging probes • Short audio • Fast pointer movement

  34. Pointer Synchronization: Level of Detection • Observations • Difficult to detect "out of sync" • i.e., a different order of magnitude than for lip sync • Asymmetry • Consistent with everyday experience

  35. Pointer Synchronization: Level of Annoyance • Areas • In sync: QoS -500 ms, +750 ms • Transient • Out of sync

  36. Quality of Service of Two Related Media Objects • Expressed by a quality of service value for the skew • Acceptable skew within the involved data streams • Affordable synchronization boundaries • Production level synchronization • Data should be captured and recorded with no skew at all • To be used if synchronized data will be further processed • Presentation level synchronization • Reasonable synchronization at the user interface • To be used if synchronized data will not be further processed
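A sketch of what such a presentation-level skew QoS table might look like in code, filled only with the two media pairs quantified earlier in this lecture (lip sync: ±80 ms; audio/pointer: -500 ms / +750 ms). Slides 37-38 show the full table with more media combinations, which is not reproduced here.

```python
# Illustrative skew QoS table for related media objects; only the pairs
# quantified on the preceding slides are filled in (assumed sign convention:
# positive = second medium ahead of the first).

SKEW_QOS_MS = {
    ("audio", "video"):   (-80, 80),
    ("audio", "pointer"): (-500, 750),
}

def within_qos(medium_a: str, medium_b: str, skew_ms: float) -> bool:
    low, high = SKEW_QOS_MS[(medium_a, medium_b)]
    return low <= skew_ms <= high

print(within_qos("audio", "video", 60))     # True
print(within_qos("audio", "pointer", 900))  # False
```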

  37. Quality of Service of Two Related Media Objects

  38. Quality of Service of Two Related Media Objects

  39. User Model for Access Patterns

  40. Modelling • User behaviour • The basis for simulation and emulation • In turn allows performance tests • Separation into • Frequency of using the VoD system • Selection of a movie • User Interaction • Models exist • But are not verified so far • Selection of a movie • Dominated by the access probability • Should be simulated by realistic access patterns

  41. Focus on Video-on-Demand • Video-on-demand systems • Objects are generally consumed from start to end • Repeated consumption is rare • Objects are read-only • Hierarchical distribution system is the rule • Caching approach • Simple approach first • Various existing algorithms • Simulation approach • No real-world systems exist • Similar real-world situations can be adopted

  42. Using Existing Models • Use of existing access models? • Some access models exist • Most are used to investigate single server or cluster behaviour • Real-world data is necessary to verify existing models • Optimistic model • Cache hit probabilities are over-estimated • Caches are under-dimensioned • Network traffic is higher than expected • Pessimistic model • Cache hit probabilities are under-estimated • Cache servers are too large or not used at all • Networks are overly large
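A small illustration of why the choice of model matters for dimensioning: the expected hit rate of a cache holding the k most popular of N titles is simply the summed popularity of those titles, so a strongly skewed (optimistic) and a much flatter (pessimistic) assumed distribution lead to very different cache sizes for the same target hit rate. Both distributions below are invented for illustration, not measured data.

```python
# Sketch: expected cache hit rate under two assumed popularity models.

def normalized(weights):
    total = sum(weights)
    return [w / total for w in weights]

def hit_rate(popularity, k):
    """popularity: per-title probabilities sorted by rank; k: cache size in titles."""
    return sum(popularity[:k])

N = 500
optimistic  = normalized([1 / (i + 1) for i in range(N)])         # strongly skewed
pessimistic = normalized([1 / (i + 1) ** 0.6 for i in range(N)])  # much flatter

for k in (25, 50, 100):
    print(k, round(hit_rate(optimistic, k), 2), round(hit_rate(pessimistic, k), 2))
```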

  43. Existing Data Sources for Video-on-Demand • Movie magazines • Data about average user behaviour • Represents large user populations • Small number of observation points (weekly) • Movie rental shops • Actual rental operations • Serves only a small user population • Initial peaks may be clipped • Cinemas • Actual viewing operations • Serves only a small user population • Small number of titles • Short observation periods

  44. Model for Large User Populations • Zipf Distribution • Verified for VoD by A. Chervenak • z(i) = C / i, with C = 1 / (1/1 + 1/2 + ... + 1/N) so that the probabilities sum to 1 • N - overall number of movies • i - rank of a movie in a list ordered by decreasing popularity • z(i) - hit probability of the movie at rank i • Many application contexts • all kinds of product popularity investigations • http://linkage.rockefeller.edu/wli/zipf/ collects applications of Zipf's law • natural languages, monkey-typing texts, web access statistics, Internet traffic, bibliometrics, informetrics, scientometrics, library science, finance, business, ecological systems, ...
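A minimal sketch of the model: compute the Zipf hit probabilities for N titles and draw synthetic requests from them. The catalogue size, request count, and seed are arbitrary.

```python
import random

def zipf_probabilities(N: int) -> list:
    """z(i) = C / i for i = 1..N, with C chosen so the probabilities sum to 1."""
    C = 1.0 / sum(1.0 / i for i in range(1, N + 1))
    return [C / i for i in range(1, N + 1)]

def sample_requests(N: int, n_requests: int, seed: int = 0) -> list:
    """Draw movie ranks (1 = most popular) according to the Zipf model."""
    rng = random.Random(seed)
    probs = zipf_probabilities(N)
    return rng.choices(range(1, N + 1), weights=probs, k=n_requests)

probs = zipf_probabilities(100)
print(round(probs[0], 3), round(probs[9], 4))   # hit probabilities of ranks 1 and 10
print(sample_requests(100, 5))
```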

  45. Verification: Movie Magazine • Movie magazine • Characteristics of observations on large user populations • Smoothness • Predictability of trends • Sharp increase and slower decrease in popularities

  46. Comparison with the Zipf Distribution • Well-known and accepted model • Easily computable • Compatible with the 90:10 rule-of-thumb

  47. Verification: Small and Large User Populations

  48. Verification: Small and Large User Populations • Similarities • Small populations follow the general trends • Computing averages makes the trends better visible • Time-scale of popularity changes is identical • No decrease to a zero average popularity • Differences • Large differences in total numbers • Large day-to-day fluctuations in the small populations • Typical assumptions • 90:10 rule • Zipf distribution models real hit probability

  49. Problems of Zipf • Does not work in distribution hierarchies • Accesses to independent caches beyond the first level are not described • Not easily extended to model day-to-day changes • Is timeless • Describes a snapshot situation • Optimistic for the popularity of the most popular titles • Chris Hillman, bionet.info-theory, 1995 • "Any power law distribution for the frequency with which various combinations of 'letters' appear in a sequence is due simply to a very general statistical phenomenon, and certainly does not indicate some deep underlying process or language. Rather, it says you probably aren't looking at your problem the right way!"

  50. Approaches to Long-term Development • Model variations for long-term studies • Static approach • No long-term changes • Movies are assumed to be distributed in off-peak hours • CD sales model • Smooth curve with a single peak • Models the increase and decrease in popularity • Shifted Zipf distribution • Zipf distribution models the daily distribution • Shift simulates the daily shift of popularities • Permuted Zipf distribution • Zipf distribution models the daily distribution • Permutation simulates the daily shift of popularities
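The shifted and permuted variants can be sketched as follows: the daily distribution stays Zipf over the ranks, and only the title-to-rank mapping changes from day to day. Catalogue size, number of new releases per day, and number of swaps are invented parameters.

```python
# Illustrative sketches of the shifted and permuted Zipf variants: each day's
# ranking is interpreted with the Zipf probabilities from the previous sketch.

import random

def shifted_zipf_ranks(titles, days, new_per_day=1):
    """Each day, `new_per_day` new titles enter at rank 1 and push the rest down."""
    ranking = list(titles)
    for day in range(days):
        ranking = ([f"new-{day}-{j}" for j in range(new_per_day)] + ranking)[:len(titles)]
        yield list(ranking)

def permuted_zipf_ranks(titles, days, swaps=3, seed=0):
    """Each day, swap a few randomly chosen ranks to model popularity churn."""
    rng = random.Random(seed)
    ranking = list(titles)
    for _ in range(days):
        for _ in range(swaps):
            i, j = rng.randrange(len(ranking)), rng.randrange(len(ranking))
            ranking[i], ranking[j] = ranking[j], ranking[i]
        yield list(ranking)

titles = [f"movie-{i}" for i in range(10)]
for day, ranks in enumerate(shifted_zipf_ranks(titles, days=3)):
    print(day, ranks[:4])
```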
