
Cognitive Effects on Visual Target Acquisition




Presentation Transcript


  1. Cognitive Effects on Visual Target Acquisition • Charles River Analytics, Cambridge, MA • http://www.cra.com

  2. Presentation Outline • Overview • Data Analysis & Model Design • Evaluation & Experiments

  3. Objectives • Develop a model of human visual search • Use image to be searched as “only” input • Predict probability of detection for hard-to-see targets • Clarify the relationship between stimulus-driven and cognitive effects • Validate the model • Compare model predictions with observed data from perception experiments

  4. Presentation Outline • Overview • Data Analysis & Model Design • Evaluation & Experiments

  5. Basic Assumption • Probability of detection is conditional on probability of fixation • The law of conditional probability gives: P(detection ∩ fixation) = P(fixation) × P(detection | fixation) • The assumption holds when high visual acuity is needed to detect target objects, because detection without fixation is then negligible: P(detection) ≈ P(detection ∩ fixation) = P(fixation) × P(detection | fixation)
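
A minimal numeric sketch of this decomposition (the probability values are illustrative, not measurements from the study):

```python
# Illustrative values only; not data from the study.
p_fixation = 0.4              # P(fixation)
p_detect_given_fix = 0.7      # P(detection | fixation)

# Law of conditional probability: P(detection AND fixation)
p_joint = p_fixation * p_detect_given_fix

# For hard-to-see targets, detection without fixation is negligible,
# so P(detection) is approximately the joint probability.
p_detection = p_joint
print(p_detection)            # 0.28
```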

  6. Partitioning The Problem • Predict P(fixation) • Generate a 2-D Fixation Probability Map • Select the highest peaks as fixation points • Predict P(detection | fixation) • Extract local features at each fixation point • Train a classifier to emulate human “target”/“non-target” designations

  7. Four Model Components • Peripheral Processing • Fixation probability map • Fixation Selection • Coordinates of most likely fixation points • Foveal Processing • Feature vector for each fixation point • Classification • “Target” or “Non-target” designation for each fixation point
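
One way to picture how the four components chain together is the toy skeleton below. All function names are hypothetical stand-ins, not the CRAsearch API, and each stage is deliberately trivial; the real processing for each stage is described on the following slides.

```python
import numpy as np

def peripheral_processing(image):
    # Stage 1 stand-in: normalized intensity as a fixation probability map.
    return image / (image.max() + 1e-9)

def fixation_selection(fpm, n_fixations):
    # Stage 2 stand-in: coordinates of the n highest peaks in the map.
    idx = np.argsort(fpm.ravel())[::-1][:n_fixations]
    return [np.unravel_index(i, fpm.shape) for i in idx]

def foveal_processing(image, point, half=2):
    # Stage 3 stand-in: flattened local patch as the feature vector.
    r, c = point
    return image[max(r - half, 0):r + half + 1,
                 max(c - half, 0):c + half + 1].ravel()

def classify(features):
    # Stage 4 stand-in: threshold on mean local intensity.
    return "Target" if features.mean() > 0.5 else "Non-target"

image = np.random.rand(64, 64)          # toy input image
fpm = peripheral_processing(image)
for point in fixation_selection(fpm, n_fixations=5):
    print(point, classify(foveal_processing(image, point)))
```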

  8. CRAsearch Model Block Diagram

  9. Peripheral Feature Maps • Different features sensed in parallel across the whole retinal image • Sub-sample the input image (peripheral resolution) • Bandpass filter (on-center/off-surround ganglion cell processing) • Compute different local features (different modalities, scales, orientations, …) [Figure: example maps: subsampled original, bandpass filtered, absolute value, standard deviation, difference of std. dev.]
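
A sketch of this stage using SciPy, with a difference-of-Gaussians bandpass standing in for on-center/off-surround processing; the subsampling factor, filter scales, and window size are illustrative choices, not parameters from the presentation:

```python
import numpy as np
from scipy import ndimage

def peripheral_feature_maps(image, subsample=4, sigma=1.0, win=5):
    # Sub-sample to peripheral resolution.
    periph = image[::subsample, ::subsample].astype(float)

    # Difference-of-Gaussians bandpass, a common stand-in for
    # on-center/off-surround ganglion cell processing.
    band = (ndimage.gaussian_filter(periph, sigma)
            - ndimage.gaussian_filter(periph, 2 * sigma))

    # Local features computed in parallel over the filtered image.
    abs_map = np.abs(band)
    mean = ndimage.uniform_filter(band, win)
    sq_mean = ndimage.uniform_filter(band ** 2, win)
    std_map = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))  # local std. dev.
    return {"absolute value": abs_map, "standard deviation": std_map}

maps = peripheral_feature_maps(np.random.rand(128, 128))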

  10. Saliency Map • “Feature Integration” approach to forming a Saliency Map • Threshold each feature map (prevents contributions from sub-threshold stimuli) • Point-wise sum across maps (integration across feature types) [Figure: individual feature maps combined into a saliency map]
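
A minimal sketch of the threshold-then-sum integration; the threshold rule (a fixed fraction of each map's peak) is an assumption for illustration:

```python
import numpy as np

def saliency_map(feature_maps, frac=0.5):
    total = None
    for fmap in feature_maps.values():
        # Zero out sub-threshold stimuli so they contribute nothing.
        kept = np.where(fmap >= frac * fmap.max(), fmap, 0.0)
        # Point-wise sum integrates across feature types.
        total = kept if total is None else total + kept
    return total

demo = {"a": np.random.rand(32, 32), "b": np.random.rand(32, 32)}
print(saliency_map(demo).shape)   # (32, 32)
```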

  11. Horizon Bias Map [Figure: input image, horizon gating map, and horizon bias map × summed feature maps]
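
The figure labels indicate the bias is applied by multiplying the summed feature maps by a horizon gating map. A sketch under that reading, with a Gaussian band of rows as a hypothetical gating profile (the horizon row and band width are assumptions):

```python
import numpy as np

def apply_horizon_bias(summed_maps, horizon_row, band_sigma=10.0):
    # Gaussian weight over image rows, peaked at the assumed horizon.
    rows = np.arange(summed_maps.shape[0])
    gate = np.exp(-0.5 * ((rows - horizon_row) / band_sigma) ** 2)
    # Point-wise multiply: gate broadcasts across columns.
    return summed_maps * gate[:, None]

biased = apply_horizon_bias(np.random.rand(64, 64), horizon_row=30)
```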

  12. Fixation Selection • Turn fixation probability map (FPM) into sequence of fixation points • Select highest peak in FPM as next fixation • Place Gaussian “hole” in FPM at current fixation point • Model exponential memory decay by making previous holes shallower • Range from perfect memory (never refixate) to no memory (refixate often)
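
A sketch of this selection loop; the hole width and decay constant are illustrative parameters:

```python
import numpy as np

def select_fixations(fpm0, n_fixations, hole_sigma=5.0, decay=0.8):
    rows, cols = np.indices(fpm0.shape)
    holes, fixations = [], []
    for _ in range(n_fixations):
        # Exponential memory decay: each older hole is shallower by
        # a factor of `decay` per step. decay=1.0 -> perfect memory
        # (never refixate); decay near 0 -> no memory (refixate often).
        inhibition = sum(decay ** (len(holes) - 1 - i) * h
                         for i, h in enumerate(holes))
        fpm = fpm0 * (1.0 - np.clip(inhibition, 0.0, 1.0))

        # Highest remaining peak becomes the next fixation point.
        r, c = np.unravel_index(np.argmax(fpm), fpm.shape)
        fixations.append((r, c))

        # Gaussian "hole" at the current fixation point.
        holes.append(np.exp(-((rows - r) ** 2 + (cols - c) ** 2)
                            / (2 * hole_sigma ** 2)))
    return fixations

print(select_fixations(np.random.rand(64, 64), n_fixations=5))
```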

  13. Foveal Processing • Window of overt attention • Foveal region centered on the fixation point (observable with an eye-tracker) • Window of covert attention • Only a small subset of the foveal region gets full attention at any given time • This covert attention window can be deployed anywhere within the overt window [Figure: overt attention window at the fixation point (peak in FPM), with a covert attention window on a target-like object]

  14. Covert Attention • Attracted to target-shaped objects • Convolve overt attention window with a difference-of-elliptical-Gaussians • Inner (positive) Gaussian is the best-fitting ellipse for a typical target • Outer (negative) Gaussian is an elongated, shallower version
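
A sketch of this kernel and its use; the axis lengths, elongation factor, and the outer lobe's 0.5 depth are assumptions, not values from the presentation:

```python
import numpy as np
from scipy import signal

def elliptical_dog(sigma_r=1.5, sigma_c=3.0, elongate=2.0, depth=0.5, half=8):
    r, c = np.mgrid[-half:half + 1, -half:half + 1]
    # Inner (positive) Gaussian: best-fitting ellipse for a typical target.
    inner = np.exp(-0.5 * ((r / sigma_r) ** 2 + (c / sigma_c) ** 2))
    # Outer (negative) Gaussian: elongated, shallower version.
    outer = np.exp(-0.5 * ((r / (elongate * sigma_r)) ** 2
                           + (c / (elongate * sigma_c)) ** 2))
    return inner - depth * outer

def covert_attention_point(overt_window):
    # Convolve the overt window with the kernel; the response peak is
    # where covert attention is deployed.
    response = signal.convolve2d(overt_window, elliptical_dog(), mode="same")
    return np.unravel_index(np.argmax(response), response.shape)

print(covert_attention_point(np.random.rand(21, 21)))
```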

  15. Foveal Feature Extraction • Extract a feature vector from each covert attention window • Want features that distinguish details of shape and relative luminance • Textural features (such as Gabor) do not work well for very small targets • One possibility is to use coarse coding, such as: • Average 4x4 pixel squares • Overlap squares by 2 pixels • Tile the covert attention window (6x12 pixels) • Concatenate the averages to form a feature vector (10 elements)
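
The coarse-coding recipe above pins down the geometry exactly: 4x4 squares at a stride of 2 (a 2-pixel overlap) tile a 6x12 window in 2 x 5 = 10 positions. A direct sketch:

```python
import numpy as np

def coarse_code(window, square=4, stride=2):
    # Average overlapping square patches that tile the window.
    h, w = window.shape                      # expected 6 x 12
    averages = [window[r:r + square, c:c + square].mean()
                for r in range(0, h - square + 1, stride)
                for c in range(0, w - square + 1, stride)]
    # Concatenate the averages into one feature vector.
    return np.array(averages)

features = coarse_code(np.random.rand(6, 12))
print(features.shape)   # (10,) -- 10 elements for a 6x12 window
```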

  16. Classifying Feature Vectors • Collect all feature vectors (one per fixation point) for all images • Train classifier on both “Target” and “Non-target” vectors • Run trained classifier on all other feature vectors • Classifier generates a “Target” or “Non-target” label for each feature vector
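
The presentation does not name the classifier, so the sketch below uses a small scikit-learn MLP purely as a stand-in; the data here are random placeholders for the real coarse-coded feature vectors and human designations:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.random((200, 10))                      # 10-element coarse codes
y_train = rng.choice(["Target", "Non-target"], 200)  # human designations

# Train on both "Target" and "Non-target" vectors.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
clf.fit(X_train, y_train)

# Run the trained classifier on other feature vectors:
# one label per fixation point.
X_new = rng.random((5, 10))
print(clf.predict(X_new))
```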

  17. Presentation Outline • Overview • Data Analysis & Model Design • Evaluation & Experiments

  18. Observed vs. Predicted FPM • Fixation Probability Maps look qualitatively similar to observed data [Figure: fixation probability maps: mean of 15 observers vs. model generated]

  19. Observed vs. Predicted Fixations

  20. Initial FPM Results Over 20 Images • Adding model as another observer does not change group statistics • Group of 15 observers, viewing 20 images • Model is closer to mean than the typical observer

  21. Initial P(detection | fixation) Results • Sensitive to training sample selection • Sensitive to conflicting designations • Sensitive to random designations • True Positives 50–80% • Missed Detections 20–50% • False Alarms 5–20% • Correct Rejections 80–95%

  22. Experiments • Evaluate P(fixation) by comparing predictions with eye-tracker data • Evaluate P(detection | fixation) by comparing predictions with observed detection data • Scheduled Experiments • Search conditions, with eye-tracker • To be conducted by James Hoffman, University of Delaware
