
Lecture 14: Dissemination and Provenance - Analytic Provenance



  1. Lecture 14: Dissemination and Provenance, Analytic Provenance. November 16, 2010. COMP 150-12: Topics in Visual Analytics

  2. Lecture Outline • Dissemination and Provenance • Reporting and Storytelling • Al Gore, An Inconvenient Truth • Hans Rosling, Gapminder • Author-driven vs. reader-driven • Example Reporting Systems • GeoTime Stories • ActiveReports • Provenance • Perceive • Capture • Encode • Recover • Reuse

  3. Provenance • Definition: • “origin, source” • “the history of ownership of a valued object or work of art or literature” • Since then, the term has been adapted to: • Data provenance • Information provenance • Insight provenance • Analytic provenance

  4. Analytic Provenance • Goal: • To understand a user’s analytic reasoning process when using a (visual) analytical system for task-solving. • Benefits: • Training • Validation • Verification • Recall • Repeated procedures • Etc.

  5. What is in a User’s Interactions? [Diagram: the human provides input (keyboard, mouse, etc.) to the visualization; the visualization returns output (images on the monitor) to the human] • Types of Human-Visualization Interactions • Word editing (input heavy, little output) • Browsing, watching a movie (output heavy, little input) • Visual analysis (closer to 50-50)
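The rough input/output taxonomy on slide 5 can be made concrete with a small illustration. The sketch below classifies a logged session by the balance of input and output events; the event names and thresholds are invented for illustration and do not come from the lecture.

```python
# Hypothetical sketch: classify a session by its input/output balance,
# in the spirit of slide 5. Event names and thresholds are assumptions.

def classify_session(events):
    """events: list of ("input", ...) or ("output", ...) tuples."""
    n_in = sum(1 for kind, *_ in events if kind == "input")
    n_out = sum(1 for kind, *_ in events if kind == "output")
    total = n_in + n_out
    if total == 0:
        return "empty session"
    ratio = n_in / total
    if ratio > 0.8:
        return "input heavy (e.g., word editing)"
    if ratio < 0.2:
        return "output heavy (e.g., browsing, watching a movie)"
    return "balanced (e.g., visual analysis)"

session = [("input", "mouse_move"), ("output", "render"),
           ("input", "click"), ("output", "render")]
print(classify_session(session))  # -> balanced (e.g., visual analysis)
```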

  6. Van Wijk’s model of visualization. Image source: The Value of Visualization. Jarke van Wijk, InfoVis, 2005

  7. What is in a User’s Interactions? Goal: determine whether a user’s reasoning and intent are reflected in the user’s interactions. [Diagram: analysts solve a task in WireVis; their logged (semantic) interactions are replayed in an interaction-log visualization by grad students (coders), whose guesses of the analysts’ thinking are manually compared against the analysts’ actual strategies, methods, and findings]

  8. The WireVis Interface • Search by Example (Find Similar Accounts) • Heatmap View (Accounts to Keywords Relationship) • Keyword Network (Keyword Relationships) • Strings and Beads (Relationships over Time)

  9. Interaction Visualizer

  10. Interaction Visualizer

  11. What’s in a User’s Interactions • From this experiment, we find that interactions contain at least: • 60% of the (high level) strategies • 60% of the (mid level) methods • 79% of the (low level) findings R. Chang et al., Recovering Reasoning Process From User Interactions. IEEE Computer Graphics and Applications, 2009. R. Chang et al., Evaluating the Relationship Between User Interaction and Financial Visual Analysis. IEEE Symposium on VAST, 2009.

  12. What’s in a User’s Interactions • Why are some of these recovery rates so much lower than the others? • (e.g., some “methods” were recovered at only about 15%) • Capturing only a user’s interactions is, in this case, insufficient.

  13. Questions?

  14. Five Stages of Provenance • Five stages of provenance • Proposed by yours truly • Has yet to be published • Currently in submission to a CHI workshop • Critiques and suggestions welcome

  15. Five Stages • Perceive • Record what the user sees • Capture • What interactions to capture (manual capture – user annotations; automatic capture – low-level interactions, visualization states, high-level semantics, etc.) • Encode • The language used to store the interactions • Recover • Translate the interaction logs into something meaningful • Reuse • Reapply the interaction log to a different problem or dataset

  16. Five Stages • Perceive • Record what the user sees • Capture • What interactions to capture (manual capture – user annotations; automatic capture – low-level interactions, visualization states, high-level semantics, etc.) • Encode • The language used to store the interactions • Recover • Translate the interaction logs into something meaningful • Reuse • Reapply the interaction log to a different problem or dataset
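Read together, slides 15-16 describe a pipeline. The minimal sketch below expresses the five stages as stubs; every class and method name is a hypothetical placeholder, not an existing system.

```python
# Minimal sketch of the five-stage pipeline from slides 15-16.
# All names are hypothetical; each stage is reduced to a stub.

from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    perceived: list = field(default_factory=list)   # what the user saw
    captured: list = field(default_factory=list)    # raw interactions / states
    encoded: list = field(default_factory=list)     # stored representation

class ProvenancePipeline:
    def perceive(self, record, screen_state):
        record.perceived.append(screen_state)        # record what the user sees

    def capture(self, record, interaction):
        record.captured.append(interaction)          # manual or automatic capture

    def encode(self, record):
        record.encoded = [str(i) for i in record.captured]  # pick a storage "language"

    def recover(self, record):
        # translate the log into something meaningful (here: trivially)
        return {"steps": record.encoded}

    def reuse(self, record, new_dataset):
        # reapply the recovered process to a different problem or dataset
        return [(step, new_dataset) for step in record.encoded]
```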

  17. Perceive • What did the user see that prompted the subsequent actions? Johansson et al. Perceiving patterns in parallel coordinates: determining thresholds for identification of relationships. InfoVis 2008.

  18. Perceive – Uncertainty Correa et al. A Framework for Uncertainty-Aware Visual Analytics. VAST 2009.

  19. Perceive – Visual Quality Sips et al. Selecting good views of high-dimensional data using class consistency. EuroVis 2009.

  20. Perceive – Visual Quality

  21. Perceive – Visual Quality Dasgupta and Kosara. Pargnostics: Screen-Space Metrics for Parallel Coordinates. InfoVis 2010.

  22. Discussions • What other types of visual perceptual characteristics should we (as designers and developers) be aware of? • As a developer, if you know these characteristics, how can you control them in an open exploratory visualization system?

  23. Questions?

  24. Capture • The “bread and butter” of analytic provenance • Need to choose carefully what to capture • Capturing at too low a level -> cannot decipher the intent • Capturing at too high a level -> not usable for other applications
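One way to see the trade-off on slide 24 is to log the same user action at two granularities. The record formats below are assumptions made up for illustration: the low-level entries reveal little about intent, while the semantic entry is readable but tied to one particular application (here, a WireVis-style account selection).

```python
# Sketch of the capture trade-off on slide 24: one click, two granularities.
# Field names and values are invented for illustration.

low_level_log = [
    {"t": 12.031, "event": "mousedown", "x": 412, "y": 287},
    {"t": 12.112, "event": "mouseup",   "x": 412, "y": 287},
]

semantic_log = [
    {"t": 12.112, "event": "select_account",
     "account": "ACME-0042", "view": "heatmap"},
]
```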

  25. Capturing • Manual Capturing – when in doubt, ask the user! • Annotations • directly edited text • Structured diagrams • illustrating analytical steps • Reasoning graphs • reasoning artifacts and relationships
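For the reasoning-graph flavor of manual capture, a minimal data structure might look like the sketch below. The node and edge vocabulary ("evidence", "hypothesis", "supports") and the example content are assumptions for illustration, not taken from the lecture or any cited system.

```python
# Sketch of a manually captured reasoning graph (slide 25): analysts record
# reasoning artifacts as nodes and their relationships as typed edges.

reasoning_graph = {
    "nodes": {
        "e1": {"type": "evidence",   "text": "Account A wires $9,900 weekly"},
        "e2": {"type": "evidence",   "text": "Account A is linked to account B"},
        "h1": {"type": "hypothesis", "text": "A is structuring transactions"},
    },
    "edges": [
        {"from": "e1", "to": "h1", "relation": "supports"},
        {"from": "e2", "to": "h1", "relation": "supports"},
    ],
}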

  26. (Manual) Annotations

  27. (Manual) Structured Diagrams Shrinivasan and van Wijk. Supporting the Analytical Reasoning Process in Information Visualization. CHI 2008.

  28. (Manual) Reasoning Graphs

  29. Capturing • Automatic Capturing • Interactions • Capture the mouse and keystroke events • Visualization States • Capture the variable state of the visualization
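A minimal sketch of automatic capture along the lines of slide 29: every interaction is logged together with a snapshot of the visualization's variable state. The class and field names are hypothetical.

```python
# Sketch of automatic capture (slide 29): log each interaction together
# with a snapshot of the visualization's variable state.

import json, time

class AutoCapture:
    def __init__(self):
        self.log = []

    def record(self, interaction, vis_state):
        self.log.append({
            "time": time.time(),
            "interaction": interaction,   # e.g., "zoom", "filter", ...
            "state": dict(vis_state),     # snapshot of visualization variables
        })

    def dump(self, path):
        with open(path, "w") as f:
            json.dump(self.log, f, indent=2)

# usage: cap = AutoCapture(); cap.record("filter", {"keyword": "wire", "time_range": [0, 30]})
```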

  30. (Automatic) Interaction Capturing

  31. (Automatic) Interaction Capturing Groth and Streefkerk. Provenance and Annotation for Visual Exploration Systems. TVCG 2006.

  32. (Automatic) State Capturing Heer et al. Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation. InfoVis 2008.

  33. (Automatic) State Capturing Marks et al. Design Galleries. SIGGRAPH 1997.

  34. Discussions • How many different levels are there between low-level interactions (e.g., mouse x, y) and high-level interactions? • What are the pros and cons of manual capturing vs. automatic capturing?

  35. Questions?

  36. Encode • How do we store the captured interactions or visualization states? • Encoding manually captured interactions • Different types of “languages” • Encoding automatically captured interactions • More robust descriptions of event sequences and patterns
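As one illustration of an encoding "language" for automatically captured interactions, the entry below records an action, its parameters, the state it produced, and a parent pointer so branching exploration histories can be reconstructed. This schema is an assumption for the sake of example, not the format used by any of the cited systems.

```python
# Sketch of one possible encoding for a captured interaction (slide 36).
# The schema is hypothetical.

import json

log_entry = {
    "id": 17,
    "timestamp": "2010-11-16T14:02:31",
    "action": "filter",
    "parameters": {"keyword": "wire", "threshold": 0.8},
    "resulting_state": "state_17",   # pointer to a stored visualization state
    "parent": 16,                    # supports branching exploration histories
}

print(json.dumps(log_entry, indent=2))
```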

  37. Encoding Manual Captures Xiao et al. Enhancing Visual Analysis of Network Traffic Using a Knowledge Representation. VAST 2007.

  38. Encoding Manual Captures

  39. Encoding Manual Captures Garg et al. Model-Driven Visual Analytics. VAST 2008.

  40. Encoding Automatic Captures Kadivar et al. Capturing and Supporting the Analysis Process. VAST 2009.

  41. Encoding Automatic Captures Jankun-Kelly et al. A Model and Framework for Visualization Exploration. TVCG 2006.

  42. Encoding Automatic Captures Shrinivasan et al. Connecting the Dots in Visual Analysis. VAST 2009.

  43. Discussions • Does the use of predicates or inductive logic programming scale, and does it generalize? • Given that we posit that “perceive” is an important aspect of analytic provenance, how could we integrate interaction logging and perceptual logging?

  44. Questions?

  45. Recover • Given all the stored interactions, derive meaning, reasoning processes, and intent • Manually • Ask other humans to interpret a user’s interactions • Automatically • Ask a computer to interpret a human’s interactions
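A toy version of automatic recovery: scan the encoded action sequence for patterns that suggest a higher-level analysis method. The patterns and labels below are invented for illustration; real systems use far richer models, such as the cognitive models on the following slides.

```python
# Sketch of automatic recovery (slide 45): match interaction sequences
# against hand-written patterns to infer higher-level methods.
# Patterns and labels are assumptions for illustration.

PATTERNS = {
    ("search", "select", "compare"): "search-by-example method",
    ("filter", "zoom", "annotate"):  "drill-down-and-record method",
}

def recover_methods(actions):
    """actions: list of action names in temporal order."""
    found = []
    for pattern, label in PATTERNS.items():
        n = len(pattern)
        for i in range(len(actions) - n + 1):
            if tuple(actions[i:i + n]) == pattern:
                found.append((i, label))
    return found

log = ["filter", "search", "select", "compare", "filter", "zoom", "annotate"]
print(recover_methods(log))
# -> [(1, 'search-by-example method'), (4, 'drill-down-and-record method')]
```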

  46. Manual Recovery • From this experiment, we find that interactions contain at least: • 60% of the (high level) strategies • 60% of the (mid level) methods • 79% of the (low level) findings

  47. Automatic Recovery Perry et al. Supporting Cognitive Models of Sensemaking in Analytics Systems. DIMACS Technical Report, 2009.

  48. Automatic Recovery Perry et al. Supporting Cognitive Models of Sensemaking in Analytics Systems. DIMACS Technical Report, 2009.

  49. Automatic Recovery Shrinivasan et al. Connecting the Dots in Visual Analysis. VAST 2009.

  50. Discussions • What other models are there? • Can we integrate a manually constructed model with automated learning? • What would that entail?
