
Measuring & Predicting Visual Fidelity


Presentation Transcript


  1. Measuring & Predicting Visual Fidelity
  Benjamin Watson, Dept. of Computer Science, Northwestern University, watson@northwestern.edu
  Alinda Friedman and Aaron McGaffey, Dept. of Psychology, University of Alberta, alinda@ualberta.ca
  SIGGRAPH 2001 (additions by rjm)

  2. The case for int LooksLike()
  • Models are often approximated
    • Model simplification
    • Dynamic LOD
  • Imagery is too
    • Image compression
    • Image synthesis
    • Video compression
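
To make the slide's premise concrete, here is a hypothetical sketch of what an int LooksLike() API might look like. The talk never specifies a signature; the Image type, the grayscale assumption, and the higher-is-more-alike convention are illustrative assumptions, not the authors' design.

```cpp
#include <cstdint>

// Hypothetical sketch only: the talk argues that such a predictor should
// exist, but does not define it. All names and types here are assumptions.
struct Image {
    int width = 0, height = 0;
    const uint8_t* pixels = nullptr;  // 8-bit grayscale, row-major
};

// How much does the approximation look like the original?
// Convention assumed here: a higher score means "more alike".
int LooksLike(const Image& original, const Image& approximation);
```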

  3. Experimental Measures
  • Search Performance
    • Two components: time and accuracy
    • False negative (missed target) vs. false positive
  • Naming Times
  • Subjective Ratings
    • Extremely easy for experimenters to collect and for viewers to produce (a good thing)
  • Threshold Testing
    • Just noticeable differences (JNDs)
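
Threshold testing is procedural enough to sketch. Below is a minimal 1-up/2-down adaptive staircase, a standard psychophysical procedure for estimating JNDs; it is not taken from the talk, and the field names and step rule are illustrative.

```cpp
// Standard 1-up/2-down staircase (not from the talk): two correct answers
// in a row make the trial harder, one error makes it easier, so the
// tracked level converges near the 70.7%-correct threshold.
struct Staircase {
    double level;          // current stimulus difference shown to the viewer
    double step;           // amount to move the level when a rule fires
    int correctInARow = 0;

    void Update(bool correct) {
        if (correct) {
            if (++correctInARow == 2) {  // 2 correct: harder (smaller difference)
                level -= step;
                correctInARow = 0;
            }
        } else {                         // 1 error: easier (larger difference)
            level += step;
            correctInARow = 0;
        }
    }
};
```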

  4. Comparing Experimental Measures
  • Psychophysical research and its measures typically center on external validity (the extent to which the research has meaning outside of the lab)
  • Cognitive perceptual psychologists
    • Search performance, naming time, subjective ratings
    • Internal validity (the extent to which experimental inferences are justified, given the many factors that can influence high-level cognitive task performance)

  5. Existing stabs at LooksLike()
  • Among models
    • Distance
    • Coplanarity
  • In imagery
    • Mean squared error
    • Models of the human visual system
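
The image-space measure named here is simple enough to state exactly. A minimal sketch, assuming two same-sized 8-bit grayscale images given as flat pixel arrays:

```cpp
#include <cstdint>

// Mean squared error between two images of n pixels each.
double MeanSquaredError(const uint8_t* a, const uint8_t* b, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        const double d = double(a[i]) - double(b[i]);
        sum += d * d;
    }
    return n > 0 ? sum / n : 0.0;
}
```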

  6. Is LooksLike() working?
  To begin, what do people think? Some ways of finding this out:
  • Ratings (conscious)
  • Forced choice (conscious)
  • Naming times (subconscious)

  7. Automatic Measures for Static Imagery
  • Ahumada; Andrew (Bo) Watson; Daly
  • Ideal automatic measure of visual fidelity: accurate and simple
  • Digital Measures
    • RMSE
    • Minkowski sum
  • Single-Channel Measures
    • Conversion to contrast -> CSF -> differencing
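
The Minkowski sum generalizes RMSE by pooling absolute pixel differences with an exponent p: p = 2 recovers RMSE, and larger p weights the worst errors more heavily. A sketch under the same grayscale-image assumptions as above:

```cpp
#include <cmath>
#include <cstdint>

// Minkowski-pooled difference: ((1/n) * sum_i |a_i - b_i|^p)^(1/p).
// p = 2 is RMSE; as p grows, the result approaches the maximum error.
double MinkowskiError(const uint8_t* a, const uint8_t* b, int n, double p) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += std::pow(std::abs(double(a[i]) - double(b[i])), p);
    return n > 0 ? std::pow(sum / n, 1.0 / p) : 0.0;
}
```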

  8. Experiment: what people think
  • 36 stimuli, subjects
  • Independent variables
    • 2 simplification methods: QSlim, clustering
    • 3 simplification levels: 0%, 50%, 80%
    • 2 stimulus groups: animals, objects
  • Dependent variables
    • Ratings, preferences, naming

  9. One stimulus close-up: unsimplified model

  10. Simplified close-ups: QSlim and clustering, each at 50% and 80% simplification

  11. Animal stimuli

  12. Artifact stimuli

  13. Naming time results

  14. Rating & choice results

  15. Overall: what people think
  Lessons:
  • Measures agree with intuition
  • Measures disagree on object type

  16. Is LooksLike() working?
  Now, which LooksLike() to examine?
  • MSE: image
  • BM: image, perceptual [Bolin & Meyer]
  • Metro: 3D [Cignoni, Rocchini & Scopigno]
  • Volume Distance: mean, MSE, max
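
The mean, MSE, and max variants are three poolings of the same per-sample distances. A sketch that assumes the distances (for example, Metro-style point-to-surface samples, or per-pixel image differences) have already been computed; how they are sampled is outside this sketch:

```cpp
#include <algorithm>
#include <vector>

// Pool per-sample distances three ways, matching the mean / MSE / max
// variants named on the slide.
struct Pooled { double mean = 0, mse = 0, max = 0; };

Pooled Pool(const std::vector<double>& d) {
    Pooled out;
    for (double x : d) {
        out.mean += x;
        out.mse  += x * x;          // mean of squared distances
        out.max   = std::max(out.max, x);
    }
    if (!d.empty()) { out.mean /= d.size(); out.mse /= d.size(); }
    return out;
}
```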

  17. People vs. LooksLike()
  [Results table: legend distinguishes statistically significant correlations, statistically insignificant correlations > .2, and correlations < .2]
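
The table's correlations can be computed with the ordinary Pearson statistic between the human scores and each metric's scores, one value per stimulus. A minimal sketch; pairing mean ratings with metric outputs per stimulus is an assumption about the analysis, not a statement of the authors' exact procedure:

```cpp
#include <cmath>
#include <vector>

// Pearson correlation between human scores x and metric scores y,
// one entry per stimulus. Assumes x.size() == y.size() >= 2 and
// that neither series is constant.
double Pearson(const std::vector<double>& x, const std::vector<double>& y) {
    const size_t n = x.size();
    double mx = 0, my = 0;
    for (size_t i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
    mx /= n; my /= n;
    double sxy = 0, sxx = 0, syy = 0;
    for (size_t i = 0; i < n; ++i) {
        sxy += (x[i] - mx) * (y[i] - my);
        sxx += (x[i] - mx) * (x[i] - mx);
        syy += (y[i] - my) * (y[i] - my);
    }
    return sxy / std::sqrt(sxx * syy);
}
```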

  18. Limitations
  • One viewpoint
  • One fidelity manipulation
  • No background
  • No motion
  • No color
  • …

  19. Confirmations
  • Results echo previous CHI study
  • Animal/artifact effect echoes the psychology literature
  • More simplification is worse
  • QSlim is better
  • Simplification is harder at low poly counts

  20. Surprises
  • Simplification success varies by object type
    • QSlim best with animals, clustering with artifacts
  • Differences in experimental measures
    • Object type differences
  • Naming/LooksLike() disagreement
    • Due to object type differences? A distillation effect?

  21. Implications
  • For simplification:
    • Specializations for model type?
    • Small output is the real challenge
  • For use of experimental measures:
    • Ratings, choices: highly comparative
    • Naming: more conceptual, subconscious
    • How comparative is your app?

  22. Implications
  • For automatic measures:
    • MSE, BM, MetroMn all good
    • Except for a big naming problem!
  • For future work:
    • Removing limitations
    • Degree of comparison
    • Naming & the distillation effect

  23. Questions?
