Modeling the Visual Search of Displays: A Revised ACT-R Model of Icon Search Based on Eye-Tracking Data
Michael D. Fleetwood and Michael D. Byrne, HUMAN-COMPUTER INTERACTION, 2006

Presentation Transcript


  1. Modeling the Visual Search of Displays: A Revised ACT-R Model of Icon Search Based on Eye-Tracking Data. Michael D. Fleetwood and Michael D. Byrne, HUMAN-COMPUTER INTERACTION, 2006

  2. Contents
  • Introduction
  • Relevant Visual Search Literature
  • ACT-R 5.0
  • General Procedures
  • Computational Modeling of the Experiment
    • Model, Results
  • Eye Tracking the Icon Search Task
    • Model Predictions, Methods, Results, Discussion
  • Revising the Model
    • Number of Gazes per Trial
    • EMMA
    • Improving the Model Search Strategy
    • Modeling Results
    • Discussion of Modeling Revisions
  • Discussion

  3. 1. Introduction
  • In GUIs, icons are becoming increasingly prevalent.
  • How does a person search for an icon in a typically crowded screen of other icons?
  • Goal: build a cognitively plausible model of icon search.

  4. 1.1 Relevant Visual Search Literature
  • Paradigm
    • The efficiency of visual search can be assessed by looking at changes in response time (RT) or accuracy as a function of changes in the set size.
  • Preattentive search effects
    • Preattentive searches
    • Conjunction searches
  • Distractor ratio effect
  • Distinctiveness, complexity

  5. 1.2 ACT-R 5.0
  • The ACT-R architecture has been used to successfully model a variety of behavioral phenomena.
  • This research extends the methodology to a more complex visual environment.
  • ACT-R system configuration
    • Procedural memory, declarative memory
    • Buffers
    • Vision module, motor module

  6. 1.2 ACT-R 5.0
  • Vision module
    • "Where" and "what" subsystems
    • Shift of visual attention: 135 ms total
      • Production to fire: 50 ms
      • Shift attention and process the item: 85 ms
  • Motor module
    • Selecting the icon
    • Preparation: 50 ms
    • Complex movements: Fitts' law (see the sketch below)
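A minimal sketch of how these timing parameters combine (not from the paper; the Fitts coefficient b = 100 ms and the +0.5 term follow the common ACT-R formulation and are assumed here):

    import math

    PRODUCTION_FIRE_MS = 50    # one production rule firing
    SHIFT_AND_ENCODE_MS = 85   # shift visual attention and process the attended item

    def attention_shift_ms():
        """One full shift of visual attention: 50 + 85 = 135 ms."""
        return PRODUCTION_FIRE_MS + SHIFT_AND_ENCODE_MS

    def fitts_movement_ms(distance, width, prep_ms=50.0, b=100.0):
        """Motor time to click an icon: preparation plus Fitts' law pointing time."""
        return prep_ms + b * math.log2(distance / width + 0.5)

    print(attention_shift_ms())        # 135 ms per shift of attention
    print(fitts_movement_ms(300, 30))  # pointing at a 30-px icon 300 px away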

  7. 1.2 ACT-R 5.0
  [Figure: the ACT-R 5.0 architecture — the goal module keeps track of current goals and intentions; the declarative module retrieves information; the vision module identifies objects; the motor module controls the hands. The central production system can only respond to the limited amount of information deposited in the buffers of these modules.]

  8. 2. General Procedures
  2.1 Materials
  • Set size: 6, 12, 18, or 24 icons
  • Icon quality: good, fair, poor
  • Icon border: none, circle, box
  ☞ 4 * 3 * 3 = 36 trials (see the sketch below)
  • TM (target-matching) icons ☞ per six icons: one is the target, one is a TM distractor, and four are non-TM distractors
  2.2 Procedures
  • Show the target → click the "Ready" button → find and click the target
  • Response time (RT): from clicking "Ready" to clicking the target
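A quick sketch of the 4 x 3 x 3 factorial design (condition labels taken from the slide):

    from itertools import product

    set_sizes = [6, 12, 18, 24]
    qualities = ["good", "fair", "poor"]
    borders = ["none", "circle", "box"]

    # One trial per cell of the full factorial design.
    conditions = list(product(set_sizes, qualities, borders))
    assert len(conditions) == 36  # 4 * 3 * 3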

  9. 3. Computational Modeling
  3.1 Model
  • Each icon is "seen" by ACT-R's vision module as a list of attribute pairs (see the sketch below)
    • Good icons: a single attribute pair, e.g., "circle red"
    • Complex icons: a number of attribute pairs, e.g., "circle top white; rectangle top dark-gray; …"
  • The model stores only one attribute pair of the target icon.
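A minimal sketch (illustrative Python, not the actual ACT-R representation) of icons as attribute-pair lists, with the model remembering only a single pair from the target:

    import random

    # Each icon is "seen" as a list of (shape, color) attribute pairs.
    good_icon = [("circle", "red")]
    complex_icon = [("circle-top", "white"), ("rectangle-top", "dark-gray")]

    def stored_target_feature(target_icon):
        """The model keeps only ONE attribute pair of the target in memory."""
        return random.choice(target_icon)

    def is_target_matching(icon, stored_pair):
        """A distractor is target-matching (TM) if it shares the stored pair."""
        return stored_pair in icon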

  10. 3. Computational Modeling
  3.1 Model: the search process
  1. A random target-matching icon is found.
  2-3. Visual attention is shifted to its filename.
  4. If the filename matches the stored target filename, visual attention is shifted up to the icon (to click it): 85 ms. If it does not match, the search keeps progressing.
  ☞ 50 ms * 4 productions + 85 ms = 285 ms per examined icon (simulated below)
  • Simulated time = time for the productions that fired + time for shifts of visual attention + motor-movement time
  ☞ Some of these actions occur in parallel
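A toy simulation of this loop (structure and names are illustrative; timings from the slide), accruing 285 ms per examined icon until the filename matches:

    import random

    MS_PER_EXAMINED_ICON = 50 * 4 + 85  # 4 productions + one attention shift = 285 ms

    def simulate_search(icons, stored_pair, target_name):
        """icons: list of (attribute_pairs, filename) tuples.

        Revisits are possible here, as in this original (pre-revision) model.
        """
        elapsed_ms = 0
        candidates = [(p, n) for p, n in icons if stored_pair in p]
        while True:
            _, name = random.choice(candidates)  # attend a random TM icon
            elapsed_ms += MS_PER_EXAMINED_ICON   # shift attention + compare filename
            if name == target_name:
                return elapsed_ms  # motor (click) time would be added on top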

  11. 3. Computational Modeling
  3.2 Results
  • R², RMSE, and PAAE: 0.98, 126 ms, and 4.27%, respectively (see the sketch of these statistics below)
  ☞ The model fits the response-time data well — but only the response-time data.
  ☞ Did the model accomplish the task in a humanlike manner?
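For reference, a small sketch of the three fit statistics (formulas assumed: R² from residuals, PAAE read as percent average absolute error):

    import numpy as np

    def fit_stats(human_rt_ms, model_rt_ms):
        """R^2, RMSE (ms), and percent average absolute error of model vs. data."""
        human = np.asarray(human_rt_ms, dtype=float)
        model = np.asarray(model_rt_ms, dtype=float)
        resid = human - model
        r2 = 1 - np.sum(resid ** 2) / np.sum((human - human.mean()) ** 2)
        rmse = np.sqrt(np.mean(resid ** 2))
        paae = 100 * np.mean(np.abs(resid) / human)
        return r2, rmse, paae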

  12. 4. Eye Tracking the Icon Search Task
  4.1 Model Predictions
  • Number of shifts of attention per trial
  • Number of shifts of visual attention to TM icons
  • Search strategy
  • Reexamination of icons

  13. 4.2 Methods
  • Gazes, rather than fixations, were analyzed
    • An uninterrupted series of consecutive fixations on a region of interest was considered a gaze (see the sketch below)
  • ACT-R 5.0 describes patterns of visual attention but does not explicitly predict eye movements or fixations
    • A shift of visual attention followed by a saccade and a single fixation → a single gaze
    • A shift of visual attention followed by several fixations → the fixations collapse into a single gaze
    • Multiple shifts of visual attention before any eye movement → difficult to analyze
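A minimal sketch (hypothetical data format) of collapsing consecutive fixations on one region of interest into gazes:

    from itertools import groupby

    # Each fixation: (region_of_interest, duration_ms).
    fixations = [("icon_3", 120), ("icon_3", 90), ("icon_7", 150),
                 ("icon_7", 80), ("icon_2", 110)]

    # An uninterrupted run of fixations on one region counts as a single gaze.
    gazes = [(roi, sum(d for _, d in run))
             for roi, run in ((k, list(g)) for k, g in
                              groupby(fixations, key=lambda f: f[0]))]
    print(gazes)  # [('icon_3', 210), ('icon_7', 230), ('icon_2', 110)]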

  14. 4.3 Results ☞ Patterns in the gaze data were similar to the model's predictions.

  15. 4.3 Results
  Figure 7. Ratio of TM gazes to total gazes
  ☞ Participants used different strategies depending on icon quality:
  • Good icon quality: directed search, gazes aimed at TM icons
  • Poor icon quality: undirected search, gazes covered the whole set of icons

  16. 4.3 Results
  [Figures: directed search with good-quality icons; undirected search with poor-quality icons]

  17. 4.3 Results
  • Analysis of fixation contingency
    • A participant's next fixation is contingent on the location of the current fixation.
    • Nearly all of the participants' contingent fixations were directed to the nearest TM icon.
  • Reexamination of icons
    • People reexamine icons infrequently.

  18. 4.4 Discussion of Eye-Tracking Results
  • Despite predicting the response-time data well, the model overestimated the number of gazes per trial
    • A shift of visual attention plus encoding of an item (50 ms production firing + 85 ms shift-and-encode = 135 ms) is too fast to always produce a measurable fixation
    • Participants shift visual attention and encode information without making a measurable fixation on it
    • Participants can examine multiple icons within a single gaze
  • The model also reexamines icons, which participants do only infrequently

  19. 5. Revising the Model
  5.1 Number of Gazes per Trial
  5.2 Eye Movements and Movements of Attention: EMMA
  • EMMA is a computational model that serves as a bridge between observable eye movements and the unobservable cognitive processes and shifts of attention
  • The time T needed to encode object i is computed as shown below
  • EMMA describes how cognition, visual encoding, and eye movements interact as interdependent processes
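The encoding-time equation, restored here from Salvucci's EMMA model, which the slide references (it appeared as an image in the original deck):

    T_{enc}(i) = K \cdot \left[ -\log f_i \right] \cdot e^{k \, \epsilon_i}

where K and k are free parameters, f_i is the frequency of object i, and ε_i is its eccentricity (angular distance from the current point of regard): frequent objects are encoded faster, and encoding slows exponentially with distance from fixation.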

  20. 5. Revising the Model
  Incorporating EMMA
  • The number of shifts of visual attention will not decrease,
  • but the number of eye movements, or shifts of POR (point of regard), can be expected to decrease
  5.3 Improving the Model Search Strategy
  • The revised model simply selects the TM icon nearest to the current focus of visual attention
  • The revised model does not shift attention to locations it has previously attended (a sketch of this strategy follows)
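A minimal sketch (illustrative names) of the revised strategy: attend the nearest TM icon, skipping already-attended locations ("marking"):

    import math

    def next_icon(current_pos, tm_icons, visited):
        """Pick the nearest TM icon not yet attended.

        tm_icons maps icon id -> (x, y); visited is a set of attended ids.
        """
        unvisited = {i: p for i, p in tm_icons.items() if i not in visited}
        if not unvisited:
            return None  # all TM icons have been examined
        return min(unvisited, key=lambda i: math.dist(current_pos, unvisited[i]))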

  21. 5.4 Modeling Results
  [Figures: previous model vs. revised model fits]
  ☞ The revised model fares much better

  22. 5.4 Modeling Results
  • General search patterns of participants
    • Directed strategy (examining only TM icons)
    • Grouping strategy
  [Figures: modeling data vs. eye-tracking data]

  23. 5.5 Discussion of Modeling Revisions
  • The EMMA model disassociates eye movements from movements of attention
    • Overall increase in response time: shifting visual attention to and encoding each new icon now takes longer than 85 ms
  • Two major improvements
    • Nearest strategy: shorter average shift times
    • Marking strategy: the model no longer revisits icons
  ☞ A substantial improvement in fitting human performance

  24. 6. Discussion
  • Effect of icon quality
  • Searching the nearest icon
  • Beyond the realm of icon search
    • Grouping of information
    • The realm of screen-design issues
  • Contribution
    • A more complex visual environment and task in a modern GUI
    • A more humanlike strategy

  25. Q & A
