
MODELING 3D FRAMES OF REFERENCE FOR THE COMMON OPERATIONAL PICTURE. Justin G. Hollands, Matthew Lamb, and Jocelyn Keillor Human-Computer Interaction Group Defence Research and Development Canada – Toronto Toronto, Ontario, Canada. NATO IST-043/RWS-006


Presentation Transcript


  1. MODELING 3D FRAMES OF REFERENCE FOR THE COMMON OPERATIONAL PICTURE
Justin G. Hollands, Matthew Lamb, and Jocelyn Keillor
Human-Computer Interaction Group, Defence Research and Development Canada – Toronto, Toronto, Ontario, Canada
NATO IST-043/RWS-006, Visualisation and the Common Operational Picture [VizCOP]
Canadian Forces College, Toronto, 14-17 September 2004

  2. Common Operational Picture: Shared Understanding for Collaboration
• The intent of the Common Operational Picture (COP) is to provide shared understanding of the battlespace, improving responsiveness and providing decision dominance
• Sharing and co-ordinating information across different echelons, commands, environments, government departments, and nations: a collaborative working situation
• The fundamental problem is co-ordination of views on the information/knowledge space
• For instance, when shared geospatial awareness is required: the platoon commander with troops on the ground looks at a group of buildings, while the company commander views aerial photographs or maps of the same urban terrain
• It can be difficult for one commander to communicate with the other: task-relevant information may be visible in one view and invisible in the other
• Horizontal and vertical collaboration
• What is left and right in the forward field of view (FFOV) may be reversed on the map, depending on orientation
• Need shared understanding (Concept of Operations for Collaborative Working, v. 2.2, 2004 DRDC Valcartier contract report)

  3. Egocentric Navigation and Egocentricity
• The immersed, egocentric viewpoint is the view from one's own eyes when walking or controlling a vehicle: the “natural” view
• Sometimes called the FFOV (forward field of view)
• Provides the traveller with a better view of objects that lie in the forward path
• Order of buildings in the path; obstacles to be avoided
• With maps, a rotating/rotatable map requires no mental rotation to align with the FFOV
Source: Centre for Landscape Research, U. of Toronto

  4. Exocentric Space Structure and Exocentricity
• Exocentric displays are 2D and map-like
• Show a large area of space, avoiding the keyhole-view problem of egocentric displays
• Consistency of representation (no right/left reversal due to rotation)
• Exocentric displays portray distances between objects unambiguously (3D representations tend to degrade judgments of depth and distance)
Source: Centre for Landscape Research, U. of Toronto

  5. Task-Display Dependence
• Tasks involving navigation or the general structure of space are best supported by greater egocentricity (3D immersive)
• Tasks involving precise relative position judgments are best supported by exocentricity (2D map-like)
2D Exocentric | 3D Egocentric
Source: Centre for Landscape Research, U. of Toronto

  6. “You are Here” Map
• Ego-referenced use (people occupy a particular location when viewing it)
• Want it oriented to the FFOV (principle of pictorial realism)

  7. Fixed vs. Rotating Maps
• Fixed: north-up orientation cannot be changed
• Exocentric view: good for planning and for multiple users
• Requires mental rotation
• Rotating: “up” direction on the map corresponds to the direction faced
• More ego-centred view
• Principle of pictorial realism
• In general, rotating maps provide better navigation performance
Source: www.mapquest.com
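The alignment between map and FFOV amounts to a coordinate transform, which a minimal sketch can make concrete (function and parameter names are hypothetical; assumes x = east, y = north, and compass heading in degrees). A rotating map applies this rotation every frame; a fixed north-up map omits it and leaves the rotation to the user's head.

```python
import math

def to_rotating_map(x_world, y_world, x_ego, y_ego, heading_deg):
    """Transform a world (north-up) point into rotating-map ("track-up")
    coordinates, so the traveller's heading points up on the display.

    heading_deg: compass heading of travel (0 = north, 90 = east).
    Returns (right, up) offsets relative to the map centre (the traveller).
    """
    # Offset of the point from the traveller, in world coordinates
    dx = x_world - x_ego   # east component
    dy = y_world - y_ego   # north component
    # Rotate the world by the heading so the direction faced maps to "up"
    h = math.radians(heading_deg)
    right = dx * math.cos(h) - dy * math.sin(h)
    up = dx * math.sin(h) + dy * math.cos(h)
    return right, up
```

For example, with a heading of 90 degrees (east), a point due east of the traveller lands straight "up" on the rotating map, exactly where it appears in the FFOV.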

  8. Verbal and Spatial Navigation Aids
• Implications of route and survey knowledge for maps and training
• Maps vs. route lists for mass transit decisions (plan the best bus route through London) (Bartram, 1980)
• World-centered (exocentric) judgement
• Maps resulted in better performance
• Maps vs. route lists for navigation of a maze (Wetherell, 1979)
• Two types of training: learning a list of turns (ego-referenced) or studying a spatial map (exocentric)
• The list of turns led to superior performance
• Subjects had difficulty relating the map to what they saw (linking frames of reference)

  9. Exocentric Better for Multiple Users
• When multiple users must co-ordinate position and navigational orientation, ego-referenced terms are no good
• e.g., air traffic control
• “Look at the aircraft to the left” (“your left? my left?”)
• Exocentric terms or a north-up map are better

  10. Frame of Reference
• The egocentric-exocentric dichotomy is really a continuum
• Start of the movie: the Forrest Gump feather
• Boom crane in the final movie scene: the viewpoint pulls back and up at the same time
• Viewpoint affects frame of reference
Exocentric | Egocentric

  11. Frame of Reference
• It is not just viewpoint location that determines egocentricity
• Consider:
• Taking the position of an object in the scene
• Observing the motion of an object
• Controlling the position of an object/avatar
• Now need to consider the viewpoint with respect to the avatar

  12. Frame of Reference: Level I
• Level I: viewpoint defined with respect to the scene (aerial or long shots)
• Distance from the scene (zoom)
• Angle with respect to the scene (3D displays typically 20-45 degrees)
Exocentric | Egocentric

  13. Frame of Reference: Level II
• Level II: viewpoint defined with respect to static objects or a point of interest (medium shots and close-ups)
• Distance from the object
• As the distance between viewpoint and object decreases, it becomes more as if the observer is “there”
• “Closer” in a metaphorical sense?
Exocentric | Egocentric

  14. Frame of Reference: Level III
• Level III: viewpoint defined with respect to moving characters, actors, objects, or a point of interest (moving vs. point-of-interest distinction)
• Viewpoint follows characters as they move (think Law & Order) (tracking and dolly shots)
• Discrete changes in camera position on a scene (film cutting) as different actors speak
• Autonomous camera controller: viewpoint controller
• Mapping viewpoint onto the behaviour of an object
• Distance and rotation (azimuth, vertical [pitch, roll]) are important
Exocentric
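A minimal sketch of such an autonomous camera controller (all names and the coordinate convention are assumptions, not from the talk): the viewpoint is held at a fixed distance and viewing angles relative to the tracked object, so it follows the object as it moves.

```python
import math

def follow_camera(target, heading_deg, distance, azimuth_deg, elevation_deg):
    """Position a tracking camera relative to a moving target.

    target: (x, y, z) of the tracked object (x = east, y = north, z = up).
    heading_deg: target's compass heading (0 = north).
    azimuth_deg: camera offset from directly behind the target.
    elevation_deg: viewing angle above the horizontal.
    Returns the camera position; the camera looks at the target.
    """
    x, y, z = target
    # Compass direction from the target toward the camera
    view_dir = math.radians(heading_deg + 180.0 + azimuth_deg)
    elev = math.radians(elevation_deg)
    horiz = distance * math.cos(elev)      # ground-plane component
    cam_x = x + horiz * math.sin(view_dir)
    cam_y = y + horiz * math.cos(view_dir)
    cam_z = z + distance * math.sin(elev)  # height above the target
    return (cam_x, cam_y, cam_z)
```

With azimuth and elevation of zero, the camera sits directly behind the target at eye level (egocentric end); raising the elevation toward 90 degrees pulls the viewpoint overhead (exocentric end).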

  15. Frame of Reference: Level III
Egocentric ↔ Exocentric
Distance Stays Constant ↔ Distance Increases
No Rotation ↔ Viewpoint Rotation

  16. Frame of Reference: Level IV
• Level IV: viewpoint defined with respect to dynamic control of an avatar or object
• Real-world navigation
• First-person gaming, virtual environments
• Mapping viewpoint onto the behaviour of the controlled object: coupling
• Since we are now controlling the object/avatar, issues of compatibility become more important
• Consider a situation where our “camera” (viewpoint) is separated from our “effector” (avatar), as occurs in tele-operation
• Direction of the viewpoint with respect to the controlled object matters (need to be behind the vehicle, not in front)
Exocentric | Egocentric

  17. Frame of Reference: Level IV
• Level IV: viewpoint defined with respect to dynamic control of an avatar or object
• Distance, rotation, and control order (position, velocity, acceleration, etc.) are important
• Consider the properties of a tether connecting the viewpoint position with the avatar (Wang & Milgram): rigid vs. non-rigid, and the nature of the damping
Exocentric | Egocentric
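A non-rigid tether of this kind can be sketched as a spring-damper link between viewpoint and avatar (a hypothetical illustration, not Wang & Milgram's actual model): high stiffness approximates a rigid tether, and the damping term governs how quickly viewpoint oscillation dies out.

```python
def update_tether(cam_pos, cam_vel, avatar_pos, stiffness, damping, dt):
    """One semi-implicit Euler step of a spring-damper tether.

    The camera is pulled toward the avatar by a spring (stiffness) and
    slowed by a damper (damping).  Positions/velocities are 3-element lists.
    """
    # Spring force toward the avatar minus velocity-proportional damping
    acc = [stiffness * (a - c) - damping * v
           for c, v, a in zip(cam_pos, cam_vel, avatar_pos)]
    new_vel = [v + a_ * dt for v, a_ in zip(cam_vel, acc)]
    new_pos = [c + v * dt for c, v in zip(cam_pos, new_vel)]
    return new_pos, new_vel
```

Run each display frame, this makes the viewpoint lag and then settle behind a moving avatar; stiffness and damping together determine whether the tether feels loose, springy, or effectively rigid.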

  18. Frame of Reference: Summary
• Level I: Scene (Distance, Angle of Elevation)
• Level II: Object (Distance, Angle of Elevation)
• Level III: Motion (Distance, Viewing Angles)
• Level IV: Control (Distance, Angles, Compatibility, Order, Rigidity)
• Within levels and across levels, a shift from exo to ego
• Use the taxonomy to predict performance: as the number of shifts in reference frame increases, we should see an increase in the time and error to communicate, i.e., to produce shared understanding

  19. [Figure: Ego-to-Exo continuum of map displays: Rotating, Fixed (North up), 2D Plan View (Map). This frame-of-reference view ignores Distance and Control (Tether Dynamics)]

  20. Task-Display Switching Problem
• The commander has multiple tasks
• Therefore, the commander needs multiple views of the battlespace
• The commander is constantly switching displays (and tasks), i.e., switching reference frames
• This should cause disorientation and decision error
3D Egocentric | 2D Exocentric
Source: Centre for Landscape Research, U. Toronto

  21. Visual Momentum
• The user's ability to extract and integrate data from multiple consecutive display windows (Woods, 1984)
• How do we link views?
• Family of methods:
• Placing perceptual landmarks or anchors across displays
• Overlapping consecutive representations
• Spatially representing the relationship among the displays
• Gradual, “graceful” transformation from one display to another
• Displaying continuous world maps

  22. Smooth Transition → Visual Momentum
• Smooth transition between 2D and 3D perspectives, incorporating animation of the viewpoint during task switching
• May provide visual momentum, reduce disorientation, and improve decision accuracy
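One way to sketch such a smooth transition (a hypothetical parameterization: each view reduced to a camera distance and elevation angle) is to interpolate the viewpoint parameters with an ease-in/ease-out profile rather than cutting discretely:

```python
import math

def transition_viewpoint(t, ego_view, exo_view):
    """Interpolate camera parameters between a 3D egocentric view and a
    2D exocentric (overhead) view.

    t: animation progress from 0.0 (egocentric) to 1.0 (exocentric).
    Each view is (distance, elevation_deg).  The cosine ease curve avoids
    abrupt viewpoint acceleration at either end of the animation.
    """
    s = 0.5 - 0.5 * math.cos(math.pi * t)   # ease-in/ease-out in [0, 1]
    d0, e0 = ego_view
    d1, e1 = exo_view
    return (d0 + s * (d1 - d0), e0 + s * (e1 - e0))
```

Played over a second or so of frames, the viewpoint pulls back and pitches down continuously, letting the user track how landmarks in one view map onto the other.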

  23. Visual Momentum Experiments
• We used two tasks developed by St. John et al. (2001)
• A-See-B task: is one ground location visible from another?
• A-Hi-B task: which of two points is at higher altitude?
• St. John et al. (2001) found that:
• The A-See-B task was performed better with a 3D display
• The A-Hi-B task was performed better with a 2D topographic map

  24. Results
• The smooth transition produced shorter response times than the discrete transition for the second trial in a pair
• No speed-accuracy tradeoff

  25. Wang & Milgram (2003) Display Conditions: Ego-centric, Tethered, Exo-centric

  26. Tethered Displays for Terrain Navigation and Spatial Awareness
• Wang & Milgram: aviation context
• With Matt Lamb: terrain navigation
• Compare frames of reference for navigation of a remotely operated vehicle (ROV) on terrain
• And for recognition of terrain layout (global spatial awareness)
• Include tethered displays
3D Egocentric | 3D Tethered | 2D Plan View (Exocentric) | 3D Perspective

  27. Information Visualization: The Spatial Metaphor
• Information visualization design ideas often use a spatial metaphor to show non-spatial data
• Even when information is not spatial, the spatial metaphor is well entrenched
• e.g., the distance between two spreadsheet cells
• The distance between menu items
• Window layers
• Spatial ability helps even when data are not spatial
• Low-spatial-ability individuals are more likely to get lost in a database system (Vicente et al., 1987)

  28. Need for Global View
• Information integration (big picture) vs. focused attention
• Need for a global, world-centered view of the dataspace
• Often absent in modern software
• In contrast, a local view is typically available (current document, directory, command prompt, text of the current email)
• Several recent examples: Fisheye View, Data Wall, Cone Tree
• Linked views: better transition between global and local (drill down, roll up)

  29. Fisheye View
• Expands and displays information concerning a specific item of interest
• Provides gradually less information about other items as their distance from the item of interest increases
• Adds egocentricity to an exocentric display
• It works: Hollands et al. (1989) found the fisheye worked well for planning routes through a complex subway network
• DOI(a|b) = API(a) - D(a,b): the degree of interest in item a, given current focus b, is its a priori importance minus its distance from the focus
Source: www.idelix.com
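The DOI formula above can be turned into a small fisheye filter (a sketch with assumed names; the a priori importance table and the distance function would be supplied by the application):

```python
def degree_of_interest(api, dist, item, focus):
    """DOI(a|b) = API(a) - D(a, b): a priori importance of the item
    minus its structural distance from the current focus."""
    return api[item] - dist(item, focus)

def fisheye_filter(items, api, dist, focus, threshold):
    """Keep only items whose DOI relative to the current focus meets a
    threshold: full detail near the focus, progressively less far away."""
    return [a for a in items
            if degree_of_interest(api, dist, a, focus) >= threshold]
```

For example, with uniform importance and distance measured as steps along a subway line, raising the threshold trims stations far from the currently selected one while keeping its neighbourhood in view.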

  30. Data Wall Source: Stuart Card

  31. Cone Tree Source: Stuart Card

  32. Linked Views
• Eases transition between local and global views
• Good mapping between objects of interest across displays
Source: Visage (Sage Visualization Group)

  33. Frame of Reference Applied to Information Visualization
• Traditionally two-dimensional (although not always)
• Use of depth is primarily categorical (occlusion), e.g., windowing
• Global-Local: Scene Distance (Level I), Object Distance (Level II)
• Fisheye: selection of an area of interest reduces object distance (Level II)
• Cone Tree, Data Wall: using depth as an implicit fisheye (Object Distance, Level II), making the display more egocentric
• Level III (motion/action): alert windowing
• Mapping: Compatibility (Level IV) is a control issue: we want to see the effects of our action upon…
• Windowing methods: information in the window of interest is closer to you; occlusion produces greater distance for information currently of less interest
• Within levels and across levels, a shift from exo to ego

  34. Helping the Commander Transition
• Two types of solution to display switching:
• Compromise displays
• Making the transition more continuous

  35. Visual Momentum Taxonomy
|                 | Spatial Nav, Virtual Env | HC Interface |
| Compromise      | Tethering                | Fisheye      |
| Ease Transition | Smooth Rotation          | Linked Views |

  36. Conclusions
• Frame-of-reference advantages in visual navigation in real and virtual environments are similar to the global/local distinction in human-computer interfaces
• The need for both global context and local content leads to a need for multiple displays
• Show them at different times or simultaneously?
• If simultaneously, how to show context and detail in limited display space?
• If successively, how to ease the display switches?
