
Code 342 Cognitive and Neural Sciences Office of Naval Research January 11-13, 2005

Passing the Bubble: Cognitive Efficiency of Augmented Video for Collaborative Transfer of Situational Understanding. Review of Human Factors Discovery and Invention Projects.


Presentation Transcript


  1. Passing the Bubble: Cognitive Efficiency of Augmented Video for Collaborative Transfer of Situational Understanding. Review of Human Factors Discovery and Invention Projects. Code 342, Cognitive and Neural Sciences, Office of Naval Research, January 11-13, 2005

  2. Brief Summary of Overall Objectives and This Year's Objectives • Experiments conducted and empirical findings • Expected final products • Completed or planned demonstrations/validations of technology developed • Software or other supporting tools used/developed • Recent or planned publications • Lessons learned Kirsh - Augmented Video - ONR

  3. Research Team • David Kirsh – PI • Bryan Clemons – ran experiments • Mike Lee – ran experiments • Thomas Rebotier – statistical analysis • Undergraduate assistants: Frank Kuok, Charles Koo, Mike Butler, Kelly Miyashiro, Brian Iligan, Howard Yang, Rigie Chang, Brian Balderson, Bryan Clemons, Mike Dawson

  4. Project Summary Overview • Long-term goals • Expected final product, potential impact, applications • Project objective, approach

  5. Long-term Goals • Determine how to annotate videos or images so distributed team members can better share situational awareness: • strategic conditions • tactical conditions • environmental conditions • commander's future plans • Guidelines for annotating video and well chosen stills

  6. Long-term Goals • Develop a methodology for quantitatively measuring the value of asynchronous briefing • Deepen the theoretical framework • Dynamic representations • What is shared understanding? • Distributed cognition and annotation • Annotation and attention management

  7. This Year's Goals • Slice away at the value of co-presence, gesture, and real-time interactivity • How good can remote 'over the shoulder' observation of a face-to-face presentation be? • How important is interactivity, even if not face to face (i.e., asking questions)?

  8. Expected Final Products • Guidelines: when and how to use annotation to improve shared understanding. Major factors: • Annotating stills vs. annotating videos: costs and benefits • Using annotation types (circles, arrows, moving ellipses) for specific information needs • Annotation and expertise level – who needs it most, and when • How to tell good from bad (pointless) annotation • Metrics: cognitive efficiency of different annotation techniques • Relativized to expertise • Relativized to knowledge types • Articles & theoretical models • Dynamic representations • Annotation and sharing understanding • Empirical findings and their relation to illustration
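One way to make a "cognitive efficiency" metric concrete is gain in post-briefing performance per minute of briefing, relative to a no-annotation baseline. The function below is a hypothetical sketch of such a metric; the formula, names, and numbers are illustrative assumptions, not the project's actual measure.

```python
# Hypothetical cognitive-efficiency metric: performance gained over a
# no-annotation baseline, per minute of presentation. An assumed formula
# for illustration only, not the metric used in this project.

def cognitive_efficiency(score_gain, baseline_gain, viewing_time_min):
    """Relative improvement over baseline, per minute of briefing."""
    if viewing_time_min <= 0:
        raise ValueError("viewing time must be positive")
    return (score_gain - baseline_gain) / viewing_time_min

# Example: annotated stills improve score by 40 points versus a 30-point
# no-annotation baseline, in a 5-minute briefing.
print(cognitive_efficiency(40, 30, 5))  # 2.0 points per minute
```

Relativizing the metric to expertise or knowledge type would then amount to computing it separately for each subgroup of viewers or each category of conveyed knowledge.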

  9. II. Technical Plan • Experimental plan, data collection

  10. Experimental Plan • Add new conditions – see factorial table • Increased focus on differences between live and taped live presentations • Analyze whether certain graphic objects are more effective at conveying certain knowledge objects • Analyze the relation between gesture and annotation in live presentations

  11. New Conditions + 36 controls

  12. Examples • Experts talking from different venues • Intermediates face to face

  13. New Experiments

  14. Experimental Design

  15. Opening Context [diagram: the person leaving duty (A) holds a rich subjective understanding of the situation; the person coming on duty (B) begins with an initial, thin subjective understanding. Medium of communication – different ways of communicating: voice, video, annotations, maps, images, animations]

  16. Closing Context [diagram: after the briefing, both the person leaving duty and the person coming on duty hold a rich subjective understanding of the situation. Medium of communication – different ways of communicating: voice, video, annotations, maps, images, animations]

  17. Original Factorial Design: annotating video snippets and stills • Expertise pairs: Intermediate-Intermediate, Expert-Expert • Media: control (random stills), stills, video snippets • Annotations: none, static, dynamic • Creating these presentations takes a long time

  18. Added Live Presentation • Same factors as the original design, plus a live condition (gesture, map-based) • Live was a new condition added to support face-to-face presentation and near real-time production
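The design described on these slides can be enumerated programmatically. The sketch below builds the condition cells from the factor labels given on the slides; the exact cell count and the treatment of the live condition as lying outside the original factorial are assumptions for illustration.

```python
from itertools import product

# Factor levels as labeled on the slides (an assumed reading of the table).
expertise = ["Intermediate-Intermediate", "Expert-Expert"]
media = ["Control (random stills)", "Selected stills", "Video snippets"]
annotations = ["No annotations", "Static annotations", "Dynamic annotations"]

# Full crossing of the original 2 x 3 x 3 factorial design.
conditions = [
    {"pair": p, "medium": m, "annotation": a}
    for p, m, a in product(expertise, media, annotations)
]

# The live, face-to-face presentation was an added condition outside the
# original factorial (gesture rather than graphic annotation).
conditions += [{"pair": p, "medium": "Live", "annotation": "Gesture"}
               for p in expertise]

print(len(conditions))  # 2*3*3 + 2 = 20 cells
```

Listing the cells this way also makes it easy to tally how many participant pairs each cell needs when scheduling sessions.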

  19. Separate Live Factors • Same factorial design plus the live condition • Face to face = synchronous, co-located, gesture, questioning

  20. Lab set-up [diagram: machines named Lapsang, Oolong, Hyson, and Boba across venues; Venue A provides context and gesture; B plays A's presentation, then passes to C over wireless audio; timeline: presentation, then game, 0-15 min]

  21. Research Hypotheses • Graphical annotation adds to performance in presentations made of well chosen stills and well chosen video snippets • Video is better than stills at conveying situational understanding in strategic contexts • Annotation is always helpful because it adds information • Dynamic annotations are less helpful on video snippets than static graphical annotations are

  22. Research Hypotheses • Live presentations to the person are no better than well-designed canned presentations • Good presentations have significantly more knowledge objects than bad presentations • Being didactic with other experts is a bad thing • Information about the enemy is more valuable than information about our own side

  23. Project Status Main Results

  24. 1. Graphical annotation is helpful • As predicted: Graphical annotation adds to performance in presentations made of well chosen stills and well chosen video snippets.

  25. Graphical annotation is helpful [chart: main effect of annotation type (none, static, dynamic) on % score for random stills, selected stills, and video]

  26. 2. Random stills don't benefit from annotations • Surprise 1: Graphical annotation may add nothing or even be detrimental for presentations made from machine-chosen, i.e. randomly chosen, stills.

  27. Random stills don't benefit from annotations • On the same games, subjects actually seem to do worse after viewing presentations made of randomly picked stills with static annotation; so far, significance is marginal (F(1,42)=2.418, p=.127)

  28. Random stills don't benefit from annotations .. details • A good audio narrative makes sense of randomly chosen stills. • Conjecture: graphical annotation lowers performance by distracting the listener from making their own sense of randomly chosen stills. • Stills are in temporal sequence, but their story will be fragmented and possibly incomplete. The audio narrative helps but requires substantial inference on the part of listeners. Experts seem able to deal with this fragmentary information and are bothered by the annotations the presenter adds to the scene. This suggests that attending to annotation may carry a bigger cognitive cost than previously assumed.

  29. Well chosen stills best [chart: well chosen stills better than video at each annotation level, % score] • Surprise 2: well chosen stills are best. • Well chosen stills, when annotated (whether with static or dynamic annotations), seem to be better than annotated video snippets.

  30. Well chosen stills = well chosen video • Video snippets without annotations are not significantly better or worse than selected stills without annotations; the without-annotation condition means presentations with voice but no graphic annotation. [chart: % scores for selected stills vs. video, no annotations]

  31. Well chosen stills best [chart: % scores for selected stills vs. video under no, static, and dynamic annotations] (F(1,70)=4.528, p=.037)
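A comparison like F(1,70)=4.528, p=.037 comes from a one-way ANOVA; with only two groups it reduces to the ratio of between-group to within-group mean squares. The stdlib sketch below shows that computation on invented toy scores, not the study's data.

```python
# Stdlib sketch of the two-group one-way ANOVA behind a result such as
# F(1,70)=4.528. The scores below are made-up toy data for illustration.
from statistics import mean

def one_way_F(group_a, group_b):
    """F statistic for two groups (df1 = 1, df2 = n_a + n_b - 2)."""
    grand = mean(group_a + group_b)
    n_a, n_b = len(group_a), len(group_b)
    # Between-group sum of squares (df = 1 for two groups).
    ss_between = (n_a * (mean(group_a) - grand) ** 2
                  + n_b * (mean(group_b) - grand) ** 2)
    # Within-group sum of squares (df = n_a + n_b - 2).
    ss_within = (sum((x - mean(group_a)) ** 2 for x in group_a)
                 + sum((x - mean(group_b)) ** 2 for x in group_b))
    return ss_between / (ss_within / (n_a + n_b - 2))

stills = [120, 95, 140, 110, 130]   # hypothetical % scores, selected stills
video  = [80, 100, 70, 90, 85]      # hypothetical % scores, video
print(round(one_way_F(stills, video), 2))  # 13.44
```

In practice one would convert F and its degrees of freedom to a p-value with a statistics package rather than by hand.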

  32. Video may not be worth it • Well chosen stills are like well chosen illustrations. • The literature has shown that illustrations are often better than videos or animations at communicating structural, strategic, and resource information. • If the process being described is not too complex, all the important transitions and states of the process can be identified in static images. • Static images allow viewers greater control over attention management, letting them move at their own pace. • This may not hold if video or dynamic annotations are used to carry extra information about speed, rate of progress, and other time-sensitive elements.

  33. Next step • To explore these unexpected results, we have begun looking at how frequently presenters actually used dynamic annotations to convey dynamic information, and we have been analyzing whether there is much to be gained by presenting dynamic information.

  34. 4. Dynamic annotations • As predicted: dynamic annotations are less helpful on video snippets than static graphical annotations are. • Reasons: • cognitive load • distraction of video on video • forces interpretation at the presenter's pace

  35. Annotations on Video [chart: static vs. dynamic annotations on video] • Video on video: trending toward worse than static annotations on video (F(1,19)=1.28, p=.27)

  36. 5. Dynamic annotations surprise: not better on selected stills than static annotations • Surprise 3: dynamic annotations are not better on selected stills than static annotations and may at times be worse.

  37. Annotations on Well Chosen Stills [chart: % scores for static vs. dynamic annotations on well chosen stills] • Video on well chosen stills: no advantage

  38. Dynamic annotations surprise ... Cont • Video annotation overlays, even on static images, do not add anything extra, and may sometimes add less than static annotations. Although we were not surprised that video on top of video snippets was not as helpful as static on top of video it is surprising that video on top of stills is not the best way to communicate. • Evidently they too must be distracting for viewers who are trying to listen to the audio commentary. This is another area in which more research is needed. Kirsh - Augmented Video - ONR

  39. 6. Live presentations • Surprise 4: Live presentations to the person who will take over are better than all forms of canned presentations. • They are very much faster and easier to make. • Live presentations contain gestures that function like graphic annotation, and • they contain mouse pointing, which also serves as an attention management mechanism, much like many of the graphical annotations found in our canned presentations.

  40. Live versus Canned [chart: % scores for live presentations vs. canned presentations made of well chosen stills, video snippets, and random stills]

  41. Live versus Canned • Live versus all canned: p=.001 • Live versus random: p=.003 • Live versus chosen stills: p=.006 • Live versus video: p=.0004

  42. Live presentations • Is the performance boost coming from: • giving more relevant information? • interaction with presenters – asking questions or showing interest in certain areas? • the pace that presenters adopt as a result of subtle interpersonal cues apparent in the face-to-face condition? • gesture?

  43. New Results

  44. 7. The more knowledge objects of the right type, the better • Best types of knowledge: • strategy • resource • KO's that are a waste of time: • didactic • past events • Slightly better to give strategy and resource KO's about the enemy than about our own side

  45. More Knowledge objects the better • As predicted: Good presentations have significantly more knowledge objects than bad presentations.

  46. More KO's the better – Resource and Strategy • Limited to KO's of resource and strategy • The regression shows how effectively resource and strategy KO counts predict score
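The regression mentioned here, score predicted from KO counts, can be sketched as a simple least-squares fit in stdlib Python. The KO counts and scores below are invented for illustration; the study's actual regression and data are not reproduced here.

```python
# Minimal ordinary-least-squares sketch: predicting performance score from
# the count of resource/strategy knowledge objects (KOs) in a presentation.
# All data points are invented for illustration.
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    x_bar, y_bar = mean(xs), mean(ys)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    return slope, y_bar - slope * x_bar

ko_counts = [2, 4, 5, 7, 9]          # resource+strategy KOs per presentation
scores    = [30, 55, 70, 95, 120]    # hypothetical performance scores

slope, intercept = fit_line(ko_counts, scores)
print(round(slope, 2), round(intercept, 2))
```

A positive fitted slope is what "more KO's of the right type, the better" would look like in this framing; the study's reported effect would additionally come with a significance test on the slope.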

  47. Didactic KO's are harmful • As predicted: Being didactic with other experts is actually a bad thing. We found practically no didactic content in live presentations; in all other forms of presentation, there was more didactic content in presentations that resulted in bad performance than in ones that resulted in good performance.

  48. Didactic and past events are bad KO's • Resource and strategy: positive correlation with score, p=.002 [significant] • Didactic and past events: negative correlation, p=.24 [only a trend] • Do didactic KO's cause listeners to tune out?
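The positive and negative correlations reported on this slide are Pearson correlations between KO counts and score. The stdlib sketch below shows the computation on invented numbers chosen only to mimic the direction of the reported effects.

```python
# Stdlib Pearson correlation between KO counts (by type) and score.
# All numbers are illustrative, not the study's data.
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    x_bar, y_bar = mean(xs), mean(ys)
    cov = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - x_bar) ** 2 for x in xs)
                      * sum((y - y_bar) ** 2 for y in ys))

strategy_kos = [1, 3, 4, 6, 8]       # hypothetical strategy/resource counts
didactic_kos = [5, 4, 4, 2, 1]       # hypothetical didactic counts
scores       = [20, 45, 60, 85, 110]

print(round(pearson_r(strategy_kos, scores), 3))  # strongly positive
print(round(pearson_r(didactic_kos, scores), 3))  # negative
```

As on the slide, the sign of r is the interesting part; the p-values would come from testing r against the null of zero correlation.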

  49. KO list • Here we show the KO's that we used to analyze the stimuli. This list has been revised several times. • Without consulting the stimuli: • use one expert to list possible KO's • review the list with several other experts and have them revise it • After reviewing the stimuli: • modify the list in light of our experience using it to classify KO's found in the stimuli • iteratively improve the list as we refine our analysis

  50. Didactic KOs • What are my units' capabilities? • What are the capabilities of my buildings? • What does my upgrade do? • What are the enemy units' capabilities? • What are the capabilities of the enemy buildings? • What does the enemy upgrade do?
