
Automated Detection of Deception and Intent

Automated Detection of Deception and Intent. Judee Burgoon, Ed.D. Center for the Management of Information University of Arizona. 19MAR04. Collaborative Partners. DETECTING DECEPTION IN THE MILITARY INFOSPHERE. Funded by Department of Defense.


Presentation Transcript


  1. Automated Detection of Deception and Intent Judee Burgoon, Ed.D. Center for the Management of Information University of Arizona 19MAR04

  2. Collaborative Partners DETECTING DECEPTION IN THE MILITARY INFOSPHERE Funded by Department of Defense • Center for the Management of Information, University of Arizona • Center for Computational Bioengineering, Imaging and Modeling, Rutgers University • Funded by Department of Homeland Security AUTOMATED INTENT DETECTION

  3. Deception and Intent Defined • Deception is a message knowingly transmitted with the intent to foster false beliefs or conclusions. • Hostile intent refers to plans to conduct criminal or terrorist activity. • Intent is inferred from: • suspicious behavior • overt hostility • deception

  4. Relationship of Deception to Intent

  5. Lies Fabrications Concealments Omissions Misdirection Bluffs Fakery Mimicry Tall tales White lies Deflections Evasions Equivocation Exaggerations Camouflage Strategic ambiguity Hoaxes Charades Imposters Many Ways To Deceive

  6. America under Attack!

  7. Statement of the Problem • Humans have very poor ability to detect deceit and hostile intent. • True of experts as well as untrained individuals • Accuracy rates of 40-60%, about the same as flipping a coin • Reliance on new communication technologies (text, audio, video) may make us more vulnerable to deceit.

  8. In the Headlines

  9. Questions Are there reliable indicators of: deceit? intent to engage in hostile actions? Can detection be automated to augment human abilities? Does mode of communication make a difference?

  10. The Objective: Reducing False Alarms & Misses

  11. Sample Deception Indicators • Arousal • Higher pitch, faster tempo • Emotion • Absence of emotional language, false smiles • Cognitive effort • Delays in responding, nonfluent speech • Memory • Fewer details, briefer messages • Strategic communication • Controlled movement, increasing involvement

  12. Our Experiments 16 experiments, 2,136 subjects, in 2.5 years

  13. Typical Experiment: Mock Theft • Task • half of the participants steal a wallet from a classroom; the other half are innocent • all are interviewed by trained and/or untrained interviewers • Mode of interaction • face-to-face, text, audio, video • Outcomes • accuracy in detecting truth and deception • judged credibility • coding of verbal and nonverbal behavior

  14. Sample Results • Deceivers create longer messages in text than in face-to-face interaction.

  15. Implications Text-based deception allows for planning, rehearsal, editing. Deceivers can use text messages to their advantage.

  16. Questions Are there reliable text-based indicators of deceit or hostile intent? Can these be automated to overcome deceivers’ advantages?

  17. Sample Text-based Cues We Analyzed

  18. Sample Results from Automated Analysis • Deceivers use different language than truth tellers. • Deceivers show more: quantity, uncertainty, references to others, informality • Truth tellers show more: diversity, complexity, positive affect, references to self
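
The cue categories above can be illustrated with a minimal extractor. This is a sketch, not the Agent99 cue set: the word lists and the cue definitions (type-token ratio for diversity, pronoun counts for self/other references, a short hedge list for uncertainty) are assumptions chosen for illustration.

```python
import re

# Hypothetical word lists, illustrative only; not the project's lexicons.
SELF_PRONOUNS = {"i", "me", "my", "mine", "myself"}
OTHER_PRONOUNS = {"he", "she", "they", "him", "her", "them", "their"}
HEDGES = {"maybe", "perhaps", "possibly", "might", "could"}

def text_cues(message: str) -> dict:
    """Compute a few slide-style text cues from one message."""
    words = re.findall(r"[a-z']+", message.lower())
    n = len(words) or 1  # avoid division by zero on empty input
    return {
        "quantity": len(words),                       # message length in words
        "diversity": len(set(words)) / n,             # lexical diversity (type-token ratio)
        "uncertainty": sum(w in HEDGES for w in words) / n,
        "self_refs": sum(w in SELF_PRONOUNS for w in words) / n,
        "other_refs": sum(w in OTHER_PRONOUNS for w in words) / n,
    }
```

A hedged, other-directed message such as `text_cues("Maybe they took it; I was not there.")` scores nonzero on uncertainty and other-references, the direction the slide associates with deceivers.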

  19. Automating Analysis: Agent99 Parser • Find cues in text • Submit to data mining tool

  20. Decision Tree Analysis • Example cue values fed to the tree: Modifier Quantity: 51, Temporal Immediacy: 0.0, Sensory Ratio: 0.0325, Verb Quantity: 63, Modifier Quantity: 0.0325, Modal Verb Ratio: 0.2698 • Tree output: True (it is deceptive)
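
A decision tree of this kind can be mimicked with a hand-rolled classifier. The split structure and thresholds below are invented for illustration; the project's actual tree was learned by a data mining tool from labeled transcripts.

```python
def classify(cues: dict) -> str:
    """Toy decision tree over the slide's cue names.
    Thresholds are hypothetical, not the learned tree."""
    if cues["modal_verb_ratio"] > 0.25:          # heavy hedging / modality
        return "deceptive"
    if cues["verb_quantity"] > 60 and cues["sensory_ratio"] < 0.05:
        return "deceptive"                       # long but detail-poor
    return "truthful"

# The example values shown on the slide land on the deceptive side:
slide_example = {"modal_verb_ratio": 0.2698, "verb_quantity": 63, "sensory_ratio": 0.0325}
```

Each internal node tests one cue against a threshold, so the learned model is fast to evaluate and its decisions are easy to explain to a human analyst.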

  21. Accuracy in Detecting Deceit Note: Preliminary findings from Mock Theft, from transcribed face-to-face sessions

  22. Implications Linguistic and content features together can reliably identify deceptive or suspicious messages. Text analysis can be successfully automated.

  23. Questions Can hostile intent be mapped to behavior? Are there reliable video-based indicators of deceit and intent? Are the indicators open to automation?

  24. The Mapping Problem

  25. Approach to Analysis • Four data sets: • Pre-polygraph interviews from actual investigations • Mock theft experiment • Two states: innocent (truthful), deceptive (guilty) • Actors in airport/screening location scenarios • Three states: relaxed, agitated (nervous), overcontrolled • Actors showing normal behavior to train neural networks

  26. Intent Recognition from Video • Track and estimate human movement including: • Head • Facial & Head Features • Hands • Body • Legs • Tracking techniques: • Physics-based tracking of face and hands • Statistical model-based motion estimation
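
Once positions are tracked, the velocity traces used in the pattern plots later in the deck can be derived by differencing. A minimal sketch, assuming a fixed frame rate and a simple moving-average smoother to tame tracker jitter (neither taken from the actual system):

```python
import numpy as np

def smooth(track: np.ndarray, k: int = 5) -> np.ndarray:
    """Moving-average smoothing of a (frames, dims) position track,
    applied per coordinate to suppress frame-to-frame tracker jitter."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, track)

def velocity_track(positions: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """Per-frame velocity (units/sec) via central finite differences
    on the smoothed positions; `fps` is an assumed frame rate."""
    return np.gradient(smooth(positions), 1.0 / fps, axis=0)
```

For a head or hand moving at a steady 1 unit per frame at 30 fps, the interior velocity estimates come out at 30 units/sec; only the first and last few frames are biased by the smoother's edge handling.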

  27. Skin Color Tracker: Face & Hands
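
One common way to implement the first stage of such a tracker is chroma thresholding. The sketch below segments candidate skin pixels in YCbCr space; the Cb/Cr bounds are widely cited rough ranges, not the system's calibrated values, and a real tracker would tune them per lighting condition and then group the mask into face and hand blobs.

```python
import numpy as np

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Crude skin-color segmentation: convert RGB to BT.601 Cb/Cr
    chroma and threshold against commonly used rough skin ranges."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Hypothetical acceptance window; real systems calibrate these bounds.
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

Working in chroma rather than raw RGB makes the mask less sensitive to overall brightness, which is why skin trackers typically discard the luma channel.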

  28. Sample Results from Human Coders • “Thieves” use fewer head movements and gestures, more self-touching than “innocents.”

  29. Sample Patterns: Actors • Three panels (controlled, relaxed, nervous), each plotting head, left-hand, and right-hand position and velocity traces

  30. Sample Patterns: Mock Thieves • Two panels, nervous (lying) and relaxed (not lying), each plotting head, left-hand, and right-hand position and velocity traces

  31. Sample Results: Scores differ among relaxed, agitated, and overcontrolled suspects

  32. Summary • Humans are fallible in detecting deception and hostile intent • Automated detection tools that augment human judgment can greatly increase detection accuracy • Verbal and nonverbal behaviors have been identified that: • Can be automated • Together significantly improve detection accuracy • More research in a variety of contexts will determine which indicators and systems are the most reliable
