
Adaptive Control of Gaze and Attention



Presentation Transcript


  1. Adaptive Control of Gaze and Attention Mary Hayhoe University of Texas at Austin Jelena Jovancevic University of Rochester Brian Sullivan University of Texas at Austin

  2. Selecting information from visual scenes What controls the selection process?

  3. Fundamental Constraints Acuity is limited. High acuity only in central retina. Attention is limited. Not all information in the image can be processed. Visual Working Memory is limited. Only a limited amount of information can be retained across gaze positions.

  4. Neural Circuitry for Saccades [Diagram of the saccadic eye movement circuitry: planning movements → target selection → saccade decision → saccade command; inhibition of the SC; signals to the muscles]

  5. Saliency and Attentional Capture Image properties, e.g. contrast, edges, and chromatic saliency, can account for some fixations when viewing images of scenes (e.g. Itti & Koch, 2001; Parkhurst & Niebur, 2003; Mannan et al., 1997).

  6. Saliency is computed from the image using feature maps (color, intensity, orientation) at different spatial scales, filtered with a center-surround mechanism, and then summed. Gaze goes to the peak. From Itti & Koch (2000).
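As a concrete sketch of this pipeline, the toy code below (an illustration, not the Itti & Koch implementation) computes a single-feature center-surround saliency map and picks the peak as the gaze target; the real model combines several feature channels and spatial scales with a normalization step before summing.

```python
import numpy as np

def blur(img, radius):
    """Box blur: average over a (2r+1)^2 neighborhood, built from shifted copies."""
    acc = np.zeros_like(img, dtype=float)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            n += 1
    return acc / n

def saliency(img):
    """Toy single-feature (intensity) saliency map:
    center-surround response = |fine blur - coarse blur|."""
    center = blur(img, 1)
    surround = blur(img, 4)
    return np.abs(center - surround)

def gaze_target(sal):
    """Winner-take-all: gaze goes to the saliency peak."""
    return np.unravel_index(np.argmax(sal), sal.shape)

# A dark scene with one bright patch: the peak lands on the patch.
scene = np.zeros((32, 32))
scene[10:13, 20:23] = 1.0
print(gaze_target(saliency(scene)))
```

The winner-take-all step is the part the later slides call into question: in natural behavior the peak of such a map is often not where gaze actually goes.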

  7. Attentional Capture Certain stimuli are thought to capture attention or gaze in a bottom-up manner, by interrupting ongoing visual tasks (e.g. sudden onsets, moving stimuli; Theeuwes et al., 2001). This is conceptually similar to the idea of salience.

  8. Limitations of Saliency Models Will this work in natural vision? Important information may not be salient, e.g. an irregularity in the sidewalk. Salient information may not be important, e.g. retinal image transients from eye/body movements. Saliency doesn't account for many observed fixations, especially in natural behavior (previous lecture; direct comparisons: Rothkopf et al., 2007; Stirk & Underwood, 2007).

  9. Need to Study Natural Behavior Viewing pictures of scenes is different from acting within scenes: heading, obstacle avoidance, foot placement.

  10. Dynamic Environments

  11. The Problem Any selective perceptual system must choose what to select, and when to select it. How is this done given that the natural world is unpredictable? (The “initial access” problem, Ullman, 1984) Answer - it’s not all that unpredictable and we’re really good at learning it.

  12. Is bottom-up capture effective in natural environments? Looming stimuli seem like good candidates for bottom-up attentional capture (Regan & Gray, 2000; Franconeri & Simons, 2003).

  13. Human Gaze Distribution when Walking • Experimental Question: How sensitive are subjects to unexpected salient events? • General Design: Subjects walked along a footpath in a virtual environment while avoiding pedestrians. Do subjects detect unexpected potential collisions?

  14. Virtual Walking Environment Virtual Research V8 Head Mounted Display with 3rd Tech HiBall Wide Area motion tracker. V8 optics with ASL501 Video Based Eye Tracker (left) and ASL 210 Limbus Tracker (right).

  15. Virtual Environment [Figure: bird's-eye view of the virtual walking environment, with the monument marked]

  16. Experimental Protocol • 1 - Normal Walking: "Avoid the pedestrians while walking at a normal pace and staying on the sidewalk." • 2 - Added Task: Identical to condition 1, with the additional instruction: "Follow the yellow pedestrian."

  17. Distribution of Fixations on Pedestrians Over Time [Figure: probability of fixation (0-1) vs. time since appearance onscreen (0-5 sec, 1-sec bins), for the Normal Walking and Follow Leader conditions] Pedestrians are fixated most when they first appear; there are fewer fixations on pedestrians in the leader trials.
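The binning behind a plot like this is straightforward; the sketch below recomputes per-second fixation probabilities from invented data (the numbers are illustrative, not from the experiment).

```python
# For each 1-s bin after a pedestrian appears onscreen, compute the fraction
# of pedestrians first fixated in that bin (the slide-17 measure).
# All values below are invented for the sketch.
appearances = 10  # pedestrians observed
# seconds-since-appearance at which each fixation on a pedestrian began
fixation_times = [0.2, 0.5, 0.9, 1.1, 1.4, 2.3, 3.8]

bins = [(t, t + 1) for t in range(5)]            # 0-1, 1-2, ..., 4-5 s
prob = [sum(lo <= ft < hi for ft in fixation_times) / appearances
        for lo, hi in bins]
print(prob)   # probability of fixation per time bin, highest in the first bin
```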

  18. What Happens to Gaze in Response to an Unexpected Salient Event? [Figure: pedestrians' paths, with the colliding pedestrian's path highlighted] • The Unexpected Event: pedestrians veered onto a collision course for 1 second (10% frequency). The change occurs during a saccade. Does a potential collision evoke a fixation?

  19. Fixation on Collider

  20. No Fixation During Collider Period

  21. Probability of Fixation During Collision Period [Figure: fixation probability for Controls vs. Colliders in the Normal Walking condition] More fixations on colliders than on controls in normal walking.

  22. Why are colliders fixated? The small increase in the probability of fixating the collider could be caused either by a weak effect of attentional capture or by active, top-down search of the peripheral visual field.

  23. Probability of Fixation During Collision Period [Figure: fixation probability for Controls vs. Colliders in the Normal Walking and Follow Leader conditions] More fixations on colliders in normal walking; no effect in the Leader condition.

  24. Why are colliders fixated? The small increase in the probability of fixating the collider could be caused either by a weak effect of attentional capture or by active, top-down search of the peripheral visual field. The failure of colliders to attract attention with an added task (following) suggests that detections result from active search.

  25. Prior Fixation of Pedestrians Affects Probability of Collider Fixation (conditional probabilities) • Fixated pedestrians may be monitored in the periphery following the first fixation • This may increase the probability of fixating colliders
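A toy version of the conditional-probability computation this slide refers to (the trial records below are invented): compare the probability of fixating the collider given that the pedestrian had, or had not, been fixated earlier.

```python
# Hypothetical trial records: (fixated before collision onset, fixated during
# the collision period). The data are invented for illustration.
trials = [
    (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False),
]

def cond_prob(trials, prior):
    """P(collider fixated during collision | prior fixation status == prior)."""
    subset = [during for before, during in trials if before == prior]
    return sum(subset) / len(subset)

p_given_prior = cond_prob(trials, True)       # 2/3 in this toy data
p_given_no_prior = cond_prob(trials, False)   # 1/3 in this toy data
print(p_given_prior, p_given_no_prior)
```

In the slide's interpretation, a higher value in the "previously fixated" condition is consistent with peripheral monitoring after the first fixation.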

  26. Other evidence for detection of colliders? Do subjects slow down during the collider period? Subjects slow down, but only when they fixate the collider, implying that fixation measures "detection". Slowing is greater if the collider was not previously fixated, consistent with peripheral monitoring of previously fixated pedestrians.

  27. Detecting a Collider Changes Fixation Strategy [Figure: time fixating normal pedestrians after a collider "Hit" vs. "Miss", Normal Walking and Follow Leader conditions] Fixations on normal pedestrians are longer following detection of a collider.

  28. Effect of collider speed (No-Leader condition) Colliders are fixated with equal probability whether or not they increase speed (25%) when they initiate the collision path.

  29. No systematic effects of stimulus properties on fixation.

  30. Summary • Subjects fixate pedestrians more when they first appear in the field of view, perhaps to predict future path. • A potential collision can evoke a fixation but the increase is modest. • Potential collisions do not evoke fixations in the leader condition. • Collider detection increases fixations on normal pedestrians.

  31. Subjects rely on active search to detect potentially hazardous events like collisions, rather than reacting to bottom-up, looming signals (attentional capture). To make a top-down system work, subjects need to learn the statistics of environmental events and distribute gaze/attention based on these expectations.

  32. Possible reservation… Perhaps looming robots not similar enough to real pedestrians to evoke a bottom-up response.

  33. Walking -Real World • Experimental question: Do subjects learn to deploy gaze in response to the statistics of environmental events?

  34. Experimental Setup A subject wearing the ASL Mobile Eye. System components: head-mounted optics (76 g), color scene camera, modified DVCR recorder, Eye Vision Software, PC with a 2.8 GHz Pentium 4 processor.

  35. Experimental Design (ctd) • Occasionally some pedestrians veered onto a collision course with the subject (for approx. 1 sec) • 3 types of pedestrians in Trial 1: Rogue pedestrian - always collides; Safe pedestrian - never collides; Unpredictable pedestrian - collides 50% of the time • In Trial 2, the Rogue and Safe pedestrians swap roles; the Unpredictable pedestrian remains the same

  36. Fixation on Collider

  37. Effect of Collision Probability The probability of fixating a pedestrian increased with higher collision probability. (Probability is computed over the period the pedestrian is in the field of view, not just the collision interval.)

  38. Detecting Collisions: proactive or reactive? • Probability of fixating risky pedestrian similar, whether or not he/she actually collides on that trial.

  39. Almost all of the fixations on the Rogue were made before the collision path onset (92%). Thus gaze and attention are anticipatory.

  40. Effect of Experience Safe and Rogue pedestrians interchange roles.

  41. Learning to Adjust Gaze (N=5) • Changes in fixation behavior are fairly fast, occurring over 4-5 encounters (fixations on the Rogue get longer, on the Safe shorter)

  42. Shorter Latencies for Rogue Fixations • Rogues are fixated earlier after they appear in the field of view. This change is also rapid.

  43. Effect of Behavioral Relevance Fixations on all pedestrians go down when pedestrians STOP instead of COLLIDING, even though STOPPING and COLLIDING should have comparable salience. Note that the Safe pedestrians behave identically in both conditions - only the Rogue changes behavior.

  44. Summary • Fixation probability increases with the probability of a collision path. • Fixation probability is similar whether or not the pedestrian collides on that encounter. • Fixations are anticipatory. • Changes in fixation behavior are fairly rapid (fixations on the Rogue get longer and earlier; on the Safe, shorter and later)

  45. Neural Substrate for Learning Gaze Patterns Neurons at all levels of the saccadic eye movement circuitry are sensitive to reward (e.g. Hikosaka et al., 2000, 2007; Platt & Glimcher, 1999; Sugrue et al., 2004; Stuphorn et al., 2000). Dopaminergic neurons in the basal ganglia signal expected reward. This provides the neural substrate for learning gaze patterns in natural behavior, and for modelling these processes using Reinforcement Learning (e.g. Sprague, Ballard & Robinson, 2007).
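One minimal way to model such reward-driven learning (an illustrative delta-rule sketch under assumed parameters, not the Sprague-Ballard-Robinson model) is to update an expected collision risk for each pedestrian type from encounter outcomes and let gaze priority track those learned values.

```python
import random

# Delta-rule (prediction-error) update of expected collision risk per
# pedestrian type, mirroring the Rogue/Unpredictable/Safe design above.
# Collision probabilities and learning rate are assumptions for the sketch.
random.seed(0)
p_collide = {"rogue": 1.0, "unpredictable": 0.5, "safe": 0.0}
value = {k: 0.5 for k in p_collide}   # start uncommitted
alpha = 0.3                           # learning rate

for encounter in range(20):
    for ped, p in p_collide.items():
        reward = 1.0 if random.random() < p else 0.0   # 1 = collision observed
        value[ped] += alpha * (reward - value[ped])    # prediction-error update

# After ~20 encounters the Rogue's value should far exceed the Safe's,
# consistent with the rapid behavioral changes reported on slides 41-42.
print(value)
```

The same prediction-error signal is what dopaminergic neurons are thought to carry, which is why reinforcement learning is a natural modelling framework here.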
