
Modeling The User





Presentation Transcript


  1. Modeling The User
  Klaus Mueller (and Joachim Giesen, MPI)
  Computer Science, Center for Visual Computing, Stony Brook University

  2.–5. Dagstuhl 2007 Moments
  • Traumatizing beginnings:
    • Edi Gröller: “Kill (Eliminate) the user!”
  • Regaining hope:
    • Tom Ertl: “User studies are needed.”
    • Chuck Hansen: dedicates 1 of his 4 talks to user studies
  • Inspiring thoughts (in an unrelated context):
    • Penny Rheingans: “Improve visualization accuracy by creating a model of the data.”

  6. Overall Tone
  • User studies are sorely needed, but hard to do!

  7.–10. Now, Why Are User Studies So Hard?
  • Visualization algorithms typically have many parameters, each with many value settings
    • the sheer permutation complexity can be overwhelming
  • Testing them all on one user may actually lead to his death
    • and we need to perform these tests with many users
  • So… Mission Accomplished?

  11. Well…
  • Let’s have a closer look…

  12.–13. For Example: Volume Rendering
  • Some rather trivial parameters:
    • rendering algorithm (X-ray, MIP, US-DVR, S-DVR, GW-DVR)
    • ray step size (continuous scale)
    • resolution
    • background
    • colormap (color transfer function)
    • viewpoint
  • Some more complex ones:
    • rendering style (various illustrative rendering schemes)
    • (magic) lenses
    • (magic) shadows
    • advanced BRDFs and ray modeling
    • etc.

  14. The Engine Dataset: A Few Possible Renderings

  15. Sample Testing Scenario (1)
  • Which colormap shows more detail?

  16. Sample Testing Scenario (2)
  • Which colormap shows more detail?

  17. Parameter Test Complexity
  • Notice:
    • all renderings show all features
    • all renderings use the same window size
  • Variables:
    • 3 colormaps
    • 5 rendering modes
    • 6 viewpoints
    • 2 image resolutions
    • 3 ray step sizes
    • 5 backgrounds
  • → 2700 permutations → ≈7M pair-wise comparisons
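The arithmetic behind these numbers can be checked directly; a quick sketch (variable names are illustrative, not from the deck):

```python
from math import comb, prod

# Levels per variable, as listed on slide 17
levels = {
    "colormap": 3,
    "rendering mode": 5,
    "viewpoint": 6,
    "image resolution": 2,
    "ray step size": 3,
    "background": 5,
}

n = prod(levels.values())   # 3 * 5 * 6 * 2 * 3 * 5 = 2700 permutations
ordered = n * (n - 1)       # 7,287,300 ordered pairs -- the ~7M on the slide
unordered = comb(n, 2)      # 3,643,650 distinct unordered pairs
```

Note that the ≈7M figure corresponds to ordered pairs; the number of distinct unordered comparisons is about half that, roughly 3.6M.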

  18.–20. Daunting, But Not Unusual…
  • Market research deals with these problems on a regular basis
    • has attributes (parameters) and levels (values)
  • For example, consider the design of a new car model, optimizing the following parameters:
    • comfort and convenience
    • quality
    • styling
    • performance
  • Sound familiar?

  21.–23. How Can This Help Us?
  • A common technique in market research is conjoint analysis
  • Conjoint analysis allows one to:
    • interview a modest number of people
    • with a modest number of pair-wise comparison tests
  • The tests simulate real buying situations, and statistical significance can be determined
  • We have actually done this:
    • 786 respondents
    • 20 pair-wise tests each
  • And the results make sense
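One way to keep the number of pair-wise tests modest is to generate choice tasks that isolate a single attribute, so the two stimuli differ in exactly one level. A minimal sketch, assuming hypothetical attribute names and levels (the deck does not give its task-generation procedure):

```python
import random

# Hypothetical attributes and levels, echoing the volume-rendering example
attributes = {
    "colormap": ["rainbow", "heat", "grayscale"],
    "rendering mode": ["X-ray", "MIP", "US-DVR", "S-DVR", "GW-DVR"],
    "background": ["black", "white", "gray", "blue", "checker"],
}

def choice_tasks(attributes, n_tasks, seed=0):
    """Draw pair-wise tasks in which the two stimuli share all
    attribute levels except the one attribute under test."""
    rng = random.Random(seed)
    tasks = []
    names = list(attributes)
    for _ in range(n_tasks):
        attr = rng.choice(names)                       # attribute to isolate
        base = {a: rng.choice(ls) for a, ls in attributes.items()}
        other = rng.choice([l for l in attributes[attr] if l != base[attr]])
        alt = dict(base, **{attr: other})              # differs only in `attr`
        tasks.append((base, alt, attr))
    return tasks

tasks = choice_tasks(attributes, 20)  # e.g. 20 pair-wise tests per respondent
```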

  24.–25. Results
  • Top 10 (detail / aesthetics):
  • Flop 10 (detail / aesthetics):

  26. Method
  • We apply Thurstone’s Method of Comparative Judgment to each attribute separately
    • isolate attributes in the choice tasks
    • determine relative rankings of attribute levels from the frequency with which one level was chosen over another
    • assume normally distributed rankings (and their differences)
  • The conjoint structure requires a modification of Thurstone’s method
    • the rankings of the various attributes must be transformed onto a comparable scale
    • the transformation factor marks the relative influence of each attribute on the overall visualization experience
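A minimal sketch of the per-attribute Thurstone step (Case V), with hypothetical win counts: choice proportions are pushed through the inverse normal CDF, and each level's scale value is the average of its z-scores against the other levels.

```python
from statistics import NormalDist

# Hypothetical data: wins[i][j] = how often level i was chosen over level j
wins = [
    [0, 60, 75],
    [40, 0, 55],
    [25, 45, 0],
]

def thurstone_case_v(wins):
    n = len(wins)
    inv_cdf = NormalDist().inv_cdf
    scores = []
    for i in range(n):
        zs = []
        for j in range(n):
            if i == j:
                continue
            # proportion of comparisons in which level i beat level j
            p = wins[i][j] / (wins[i][j] + wins[j][i])
            p = min(max(p, 0.01), 0.99)  # clamp to keep inv_cdf finite
            zs.append(inv_cdf(p))
        scores.append(sum(zs) / len(zs))
    return scores

scores = thurstone_case_v(wins)  # higher score = stronger preference
```

The conjoint modification mentioned above would then rescale these per-attribute scores by a common factor before combining them across attributes; that rescaling is not sketched here.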

  27.–30. What Does This Enable?
  • Efficient testing of multi-parameter scenarios in visualization
  • → Personalization of visualization experiences for specific users (or user groups)
  • → Learning of user preferences given specific task and rendering scenario descriptions
  • → Constructing a model of the user to optimize his/her visualization experience and efficiency

  31. Acknowledgments
  • Lujin Wang (Stony Brook University)
    • for rendering 5000+ images
  • Eva Schuberth (ETH Zürich)
    • for contributing to the statistics
  • Peter Zolliker (EMPA Dübendorf)
    • for contributing to the perceptual issues and statistics

  32. Questions?
  • Which image do you like best?
  • Which image shows more detail?
