Learning Where to Look: An ACT-R/PM Model

  1. Learning Where to Look: An ACT-R/PM Model Brian D. Ehret ARCH Laboratory Human Factors and Applied Cognition George Mason University ACT-R Workshop - August 7, 1999

  2. Overview • HCI research has demonstrated that users learn the locations of interface objects • However, little is known about the mechanisms underlying location learning • Systematically vary the conditions under which location learning may occur in order to • Infer characteristics of the location-learning mechanisms • Embed those characteristics in a computational cognitive model built in ACT-R/PM (Byrne & Anderson, 1998)

  3. Color-Match Condition

  4. Meaningful Text Condition

  5. Arbitrary Icon Condition

  6. Performance Time Results [Figure: trial time (sec) by block]

  7. Overview of Model • 35 rules cover all four conditions • 20% of the rules are common to all four conditions • Overlap between conditions ranges from • 23% between Color-Match and Arbitrary to • 86% between Arbitrary and No-Label • Interacts directly with the software via ACT-R/PM • Relies on a search vs. retrieve paradigm • Akin to compute vs. retrieve (arithmetic facts) • Preference for the less costly strategy
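The search-vs.-retrieve preference can be sketched as cost-ordered fallback: attempt the cheapest strategy first and move to the next only when it fails. This is an illustrative sketch, not the model's actual production rules; the function names, costs, and labels are made up.

```python
# Hypothetical sketch of the "preference for less costly strategy"
# bullet above: try strategies in ascending cost order, falling back
# only on failure. Costs and names are illustrative assumptions.

def choose_strategy(strategies):
    """Return (name, result) of the cheapest strategy that succeeds."""
    for name, cost, attempt in sorted(strategies, key=lambda s: s[1]):
        result = attempt()
        if result is not None:          # None models a strategy failure
            return name, result
    return "exhausted", None

memory = {}  # button label -> location, filled in by past successes

def retrieve(label):
    return memory.get(label)            # fails (None) before any practice

def search(label):
    return "some-button-location"       # visual search always yields a candidate

strategy, loc = choose_strategy([
    ("retrieve", 1.0, lambda: retrieve("Save")),  # cheap, may fail
    ("search",   3.0, lambda: search("Save")),    # costly, always works
])
print(strategy)  # "search" on the first trial, before retrieval can succeed
```

Once a use of the button has been stored, the cheaper retrieval strategy succeeds and wins, which is the mechanism behind the speed-up across blocks.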

  8. Overview of Model (cont’d) • Declarative structures • Buttons • Locations • Labels • Colors • Learning Parameters • Base level learning (d=0.3) • Merging and retrieval • Associative strength learning (:al=1) • Source spread
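The two learning parameters on this slide plug into the standard ACT-R activation equations; a sketch of where d = 0.3 and the associative-strength learning enter (symbols follow the usual ACT-R notation, not anything specific to this model):

```latex
% Base-level learning: activation of chunk i grows with use and
% decays with the time t_j since each of its n past uses (d = 0.3):
B_i = \ln \sum_{j=1}^{n} t_j^{-d}

% Total activation adds associative spread from the source chunks j
% of the current goal, with source weights W_j and learned
% associative strengths S_{ji} (governed here by :al = 1):
A_i = B_i + \sum_{j} W_j \, S_{ji}
```

Retrieval of a button's location chunk succeeds only when A_i clears the retrieval threshold, so both base-level decay and source spread shape when the model can stop searching and simply retrieve.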

  9. Search Cost and Strategies • Search Cost - number of buttons evaluated per trial

  10. Search Cost - Buttons Evaluated [Figure: number of buttons evaluated by block]

  11. Search Phase Summary • Pre-attentive Search • Search for the location of the button with the correct color • ACT-R/PM find-location command • Controlled Search • Attempt to retrieve a past use of the correct button and its associated location chunk • If retrieval fails, attend to a random button • Retrieval is attempted first (search vs. retrieve)
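The control structure of the search phase above can be sketched as a three-step fallback. This is a hedged paraphrase of the slide, not the model's rules: `color_is_distinctive`, `past_uses`, and the returned strategy names are illustrative assumptions.

```python
# Sketch of the search phase: pre-attentive find-location when color
# is diagnostic, otherwise retrieval of a past use, otherwise a
# random button. Names here are hypothetical stand-ins.

def search_phase(target, color_is_distinctive, past_uses):
    # Pre-attentive search: a find-location request keyed on the
    # correct color returns the matching button directly.
    if color_is_distinctive:
        return "pre-attentive"
    # Controlled search, retrieval first: try to retrieve a past use
    # of the correct button and its associated location chunk.
    if target in past_uses:
        return "retrieved-location"
    # Retrieval failed: attend to a randomly chosen button.
    return "random-button"

print(search_phase("Save", color_is_distinctive=False, past_uses={"Save"}))
```

The retrieval-first ordering is what makes search cost fall across blocks: as location chunks accumulate, the random-button branch is reached less and less often.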

  12. Evaluation Cost and Strategies • Evaluation Cost - time required to determine whether the currently attended button is the one currently needed

  13. Evaluation Cost - Time Per Button [Figure: average time per button (sec) by block]

  14. Evaluation Phase Summary • Color-Match and Meaningful conditions always match by label • Arbitrary matches by label only if the label was retrieved in the search phase • No-Label, and Arbitrary when no label was retrieved, try location assessment • Attempt to retrieve a past use of the currently attended button and its associated location chunk • If retrieval fails, wait for the ToolTip • Retrieval is attempted first (wait vs. retrieve)
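The evaluation phase follows the same cost-ordered fallback shape as the search phase: label match, then location assessment by retrieval, then waiting for the ToolTip. A minimal sketch, with hypothetical condition strings and argument names:

```python
# Illustrative sketch of the evaluation-phase decision on the slide.
# The condition names echo the slides; the function and its arguments
# are assumptions, not the model's actual productions.

def evaluate_button(condition, label_retrieved, location_retrievable):
    # Color-Match and Meaningful: the label is always informative.
    if condition in ("color-match", "meaningful"):
        return "label-match"
    # Arbitrary: label match works only if the label chunk was
    # retrieved during the search phase.
    if condition == "arbitrary" and label_retrieved:
        return "label-match"
    # No-Label, or Arbitrary without a retrieved label: try to
    # retrieve a past use of the attended button (wait vs. retrieve).
    if location_retrievable:
        return "location-assessment"
    # Retrieval failed: fall back to waiting for the ToolTip.
    return "tooltip"

print(evaluate_button("no-label", False, False))  # "tooltip"
```

Because the ToolTip wait is the costliest branch, conditions that force it (No-Label early in practice) show the highest per-button evaluation times, matching the pattern in the evaluation-cost data.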

  15. Implications • Locations are encoded as a by-product of attention • Default ACT-R/PM behavior • Location knowledge, once encoded into memory, is like any other knowledge • Subject to the same learning mechanisms • Task performance is characterized as rational • Less costly strategies are attempted first • Pre-attentive < retrieve location < search [search phase] • Label-match < location assessment < ToolTip [evaluation phase] • Results in differential location learning
