Leveraging Data About Users in General in the Learning of Individual User Models


Presentation Transcript


  1. Leveraging Data About Users in General in the Learning of Individual User Models* • Anthony Jameson, PhD (Psychology) • Adjunct Professor of HCI • Frank Wittig • CS Researcher • Saarland University, Saarbrücken, Germany • *i.e. pooling knowledge to improve learning accuracy

  2. Their Contributions • Answer the question: • How can systems that employ Bayesian networks to model users most effectively exploit data about users in general and data about the individual user? • Most previous approaches looked at only one of the following: • Learning general user models • Applying the model to users in general • Learning individual user models • Applying each model to its particular user

  3. Collaborative Filtering and Bayesian Networks • Collaborative filtering systems can make individualised predictions based on a subset of users determined to be similar to the current user U • But sometimes we want a more interpretable model, in which: • Causal relationships are represented explicitly • The behaviour of U can be predicted on the basis of contextual factors • Inferences can be made about unobserved contextual factors • Bayesian networks are more straightforwardly applied to this type of task (see the sketch below)
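
To make this contrast concrete, here is a minimal two-node Bayesian network in Python. The structure (a time-pressure context variable influencing speech rate) is loosely modelled on the experiment described later in the talk, but all probabilities are invented for illustration; this is a sketch of the idea, not the paper's network.

    # A two-node network: TimePressure (context) -> FastSpeech (observed behaviour).
    # All numbers are illustrative assumptions, not values from the paper.
    p_time_pressure = 0.5                   # prior P(TimePressure = true)
    p_fast_given = {True: 0.8, False: 0.3}  # CPT: P(FastSpeech = true | TimePressure)

    # Prediction: probability of fast speech from the contextual factor alone
    p_fast = sum(p_fast_given[tp] * (p_time_pressure if tp else 1 - p_time_pressure)
                 for tp in (True, False))

    # Diagnosis: infer the unobserved context after observing fast speech (Bayes' rule)
    p_tp_given_fast = p_fast_given[True] * p_time_pressure / p_fast

    print(f"P(FastSpeech) = {p_fast:.2f}")                          # 0.55
    print(f"P(TimePressure | FastSpeech) = {p_tp_given_fast:.2f}")  # 0.73

The same explicit CPT supports both directions of inference, which is the interpretability advantage the slide refers to.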

  4. Collaborative Filtering Example – Recommending Products • Each user rates a subset of products • This reflects the user's tastes as well as product quality • To recommend a CD for user U: • First look for users especially similar to U • i.e. those who have rated similar items in a similar way • Compute the average rating within this subset of users • Recommend products with high average ratings (sketched below) • Used by Amazon.com, CDNow.com and MovieFinder.com [Herlocker et al. 1999]
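
Below is a minimal user-based collaborative-filtering sketch of this recipe. The toy ratings, the cosine similarity measure and the neighbourhood size k are illustrative assumptions, not details taken from Herlocker et al. or from the commercial systems mentioned.

    from math import sqrt

    # Toy ratings: user -> {product: rating on a 1-5 scale} (invented data)
    ratings = {
        "U":     {"cd_a": 5, "cd_b": 3, "cd_c": 4},
        "anna":  {"cd_a": 5, "cd_b": 3, "cd_c": 4, "cd_d": 5},
        "ben":   {"cd_a": 1, "cd_b": 5, "cd_d": 2},
        "carol": {"cd_a": 4, "cd_c": 5, "cd_d": 4},
    }

    def similarity(u, v):
        """Cosine similarity over the products both users have rated."""
        shared = set(ratings[u]) & set(ratings[v])
        if not shared:
            return 0.0
        num = sum(ratings[u][p] * ratings[v][p] for p in shared)
        den = (sqrt(sum(ratings[u][p] ** 2 for p in shared))
               * sqrt(sum(ratings[v][p] ** 2 for p in shared)))
        return num / den

    def recommend(target, k=2):
        """Average the k most similar users' ratings of products the target has not rated."""
        neighbours = sorted((v for v in ratings if v != target),
                            key=lambda v: similarity(target, v), reverse=True)[:k]
        candidates = {}
        for v in neighbours:
            for product, rating in ratings[v].items():
                if product not in ratings[target]:
                    candidates.setdefault(product, []).append(rating)
        # Recommend the unrated product with the highest average neighbour rating
        return max(candidates, key=lambda p: sum(candidates[p]) / len(candidates[p]))

    print(recommend("U"))  # "cd_d": both nearest neighbours rate it highly

In this toy run, anna and carol come out as U's nearest neighbours, so the product U has not yet rated but they both rate highly (cd_d) is recommended.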

  5. Their Experiment – Inferring Psychological States of the User • Simulated on a computer workstation • Navigating through a crowded airport while asking a mobile assistant questions via speech • Pictures appeared to prompt questions • In some conditions, participants were instructed to work under time pressure • i.e. to finish each utterance as quickly as possible • In some conditions, participants were instructed to perform a secondary task • “Navigating” through the terminal (using the arrow keys) • Speech input was later coded semi-automatically to extract features

  6. Learning Models Used • Model #1 - General Model • Learned from the experimental data via the maximum-likelihood method; not adapted to individual users (sketched below) • Model #2 - Parametrised Model • Like the general model, but with baselines included for each user and for each speech metric • Model #3 - Adaptive (Differential) Model • Uses the AHUGIN method (next slide) • Model #4 - Individual Model • Learned entirely from the individual user's data
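
As a sketch of what Model #1 amounts to, the snippet below estimates a single CPT entry by maximum likelihood, i.e. as a relative frequency over the pooled data of all users; the variable names and observations are invented for illustration.

    from collections import Counter

    # Pooled observations from all users: (time_pressure, fast_speech). Invented data.
    observations = [(True, True), (True, True), (True, False),
                    (False, False), (False, True), (False, False)]

    # Maximum-likelihood estimate of P(FastSpeech | TimePressure): the relative
    # frequency of fast speech within each time-pressure condition, ignoring
    # which user each observation came from.
    counts = Counter(observations)
    cpt = {}
    for tp in (True, False):
        total = counts[(tp, True)] + counts[(tp, False)]
        cpt[tp] = counts[(tp, True)] / total

    print({tp: round(p, 2) for tp, p in cpt.items()})  # {True: 0.67, False: 0.33}

Because every user contributes to the same table, the result is a single general model that ignores individual differences.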

  7. A Tangent – AHUGIN [Olesen et al. 1992] • Adaptive HUGIN • No explicit dimensional representation of how users differ • The conditional probability tables (CPTs) of the Bayesian network are adapted with each observation (a simplified update sketch follows below) • Thus a variety of individual differences can be adapted to, without the designer of the BN having to anticipate their nature
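
The slides do not spell out the AHUGIN update rule, so the sketch below uses plain sequential (fractional) updating of a single CPT row as a stand-in for the general idea; the starting row, the equivalent sample size and the observation sequence are invented, and details such as fading are omitted.

    def adapt_row(row, observed_value, ess):
        """Adapt one CPT row P(X | parents = fixed configuration) after observing X.

        `row` maps each value of X to its current probability; `ess` is the
        equivalent sample size, i.e. how many virtual past cases the row is
        treated as resting on. One new observation shifts the row towards the
        observed value with weight 1 / (ess + 1).
        """
        updated = {}
        for value, prob in row.items():
            indicator = 1.0 if value == observed_value else 0.0
            updated[value] = (ess * prob + indicator) / (ess + 1)
        return updated, ess + 1  # the row now rests on one more case

    # Start from a general-population row and adapt it to one user's observations
    row, ess = {"fast": 0.5, "slow": 0.5}, 10.0
    for obs in ["fast", "fast", "fast", "slow", "fast"]:
        row, ess = adapt_row(row, obs, ess)
    print(row)  # the row has drifted towards this user's tendency to speak fast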

  8. Equivalent Sample Size (ESS) • However, you also need to address the speed at which the CPTs adapt • The ESS represents the extent of the system's reliance on the initial general model, relative to each user's new data • This paper contributes a principled method of estimating the optimal ESS, which is generally neither obvious a priori nor consistent across the parts of the BN • Hence “differential” adaptation: different parts of the network can be adapted at different rates (illustrated below)
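
To make the role of the ESS concrete: under the same fractional-updating assumption as in the previous sketch, a CPT entry that starts at p0 with equivalent sample size s and then sees n new observations, of which n_e support the event, ends up at (s * p0 + n_e) / (s + n). The self-contained sketch below (invented numbers) shows how the ESS sets the balance between the general model and the user's own data.

    def adapted_probability(p0, ess, n_supporting, n_total):
        """Closed form of fractional updating (an assumption, not the paper's formula):
        start at p0 with equivalent sample size `ess`, then observe `n_total`
        new cases of which `n_supporting` support the event."""
        return (ess * p0 + n_supporting) / (ess + n_total)

    # A user who speaks fast in 5 out of 5 observed utterances,
    # starting from a general-model probability of 0.5:
    for ess in (2.0, 10.0, 50.0):
        print(ess, round(adapted_probability(0.5, ess, 5, 5), 2))
    # ESS  2 -> 0.86  (the network quickly trusts the individual data)
    # ESS 10 -> 0.67
    # ESS 50 -> 0.55  (it stays close to the general model)

Choosing the ESS well, and allowing it to differ across parts of the network, is exactly the differential-adaptation issue the slide raises.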

  9. Speech Metrics; Results • Articulation Rate • Syllables articulated per second of speaking • General performs worst; the other three are on par • Individual takes a while to catch up, as with all metrics • Number of Syllables • The number of syllables in the utterance • Again, General is poor, Parametrised OK, Individual and Adaptive best • Disfluencies and Silent Pauses • Any of four types of disfluency, e.g. failing to complete a sentence • Duration of silent pauses relative to the number of words • All four models perform about equally (perhaps because these events are infrequent)
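
The slides describe the metrics only verbally. As a purely illustrative aid, the snippet below computes the two simplest ones (articulation rate and silent-pause duration per word) from a hypothetical coded utterance; the field names and values are assumptions, not the study's actual coding scheme.

    from dataclasses import dataclass

    @dataclass
    class Utterance:
        """Hypothetical coded utterance; the real coding scheme is richer than this."""
        syllables: int          # number of syllables articulated
        speaking_time_s: float  # seconds spent actually speaking (pauses excluded)
        silent_pause_s: float   # total duration of silent pauses
        words: int

    def articulation_rate(u: Utterance) -> float:
        """Syllables articulated per second of speaking time."""
        return u.syllables / u.speaking_time_s

    def pause_per_word(u: Utterance) -> float:
        """Silent-pause duration relative to the number of words."""
        return u.silent_pause_s / u.words

    u = Utterance(syllables=14, speaking_time_s=3.5, silent_pause_s=0.8, words=9)
    print(articulation_rate(u), pause_per_word(u))  # 4.0 syllables/s, ~0.09 s per word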

  10. The plots

  11. Experimental Conditions; Results

  12. Findings

  13. Differential Adaptation Revisited

  14. Summary • Now Dave can rip into it
