Understanding Human Behavior from Sensor Data

Henry Kautz

University of Washington

A Dream of AI
  • Systems that can understand ordinary human experience
  • Work in KR, NLP, vision, IUI, planning…
    • Plan recognition
    • Behavior recognition
    • Activity tracking
  • Goals
    • Intelligent user interfaces
    • Step toward true AI
Punch Line
  • Resurgence of work in behavior understanding, fueled by
    • Advances in probabilistic inference
      • Graphical models
      • Scalable inference
      • KR ∪ Bayes
    • Ubiquitous sensing devices
      • RFID, GPS, motes, …
      • Ground recognition in sensor data
    • Healthcare applications
      • Aging boomers – fastest growing demographic
This Talk
  • Activity tracking from RFID tag data
    • ADL Monitoring
  • Learning patterns of transportation use from GPS data
    • Activity Compass
  • Learning to label activities and places
    • Life Capture
  • Kodak Intelligent Systems Research Center
    • Projects & Plans
This Talk
  • Activity tracking from RFID tag data
    • ADL Monitoring
  • Learning patterns of transportation use from GPS data
    • Activity Compass
  • Learning to label activities and places
    • Life Capture
  • Kodak Intelligent Systems Research Center
    • Projects & Plans
Object-Based Activity Recognition
  • Activities of daily living involve the manipulation of many physical objects
    • Kitchen: stove, pans, dishes, …
    • Bathroom: toothbrush, shampoo, towel, …
    • Bedroom: linen, dresser, clock, clothing, …
  • We can recognize activities from a time-sequence of object touches
Application
  • ADL (Activity of Daily Living) monitoring for the disabled and/or elderly
    • Changes in routine often precursor to illness, accidents
    • Human monitoring intrusive & inaccurate

Image Courtesy Intel Research

Sensing Object Manipulation
  • RFID: Radio-frequency identification tags
    • Small
    • No batteries
    • Durable
    • Cheap
  • Easy to tag objects
    • Near future: use products’ own tags
Wearable RFID Readers
  • 13.56MHz reader, radio, power supply, antenna
    • 12 inch range, 12-150 hour lifetime
  • Objects tagged on grasping surfaces
Experiment: ADL Form Filling
  • Tagged real home with 108 tags
  • 14 subjects each performed 12 of 65 activities (14 ADLs) in arbitrary order
  • Used glove-based reader
  • Given trace, recreate activities
Results: Detecting ADLs

[Chart: per-ADL detection results; the legend distinguishes the general solution from point solutions.]

Inferring ADLs from Interactions with Objects
Philipose, Fishkin, Perkowitz, Patterson, Hähnel, Fox, and Kautz
IEEE Pervasive Computing, 4(3), 2004
Experiment: Morning Activities
  • 10 days of data from the morning routine in an experimenter’s home
    • 61 tagged objects
  • 11 activities
    • Often interleaved and interrupted
    • Many shared objects
Baseline: Individual Hidden Markov Models

68% accuracy, 11.8 errors per episode

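The individual-HMM baseline can be sketched in a few lines: one small hidden Markov model per activity scores the observation sequence of touched object types, and the highest-likelihood activity wins. This is an illustrative sketch, not the authors' code; the two toy activity models and all probabilities are invented for the example.

```python
# Minimal sketch of the "individual HMMs" baseline: each activity
# has its own HMM, and the activity whose model best explains the
# object-touch trace is selected. Models here are illustrative.

def forward_likelihood(obs, start, trans, emit):
    """Forward algorithm: P(obs | HMM), summed over hidden paths."""
    alpha = {s: start[s] * emit[s][obs[0]] for s in start}
    for o in obs[1:]:
        alpha = {
            s2: sum(alpha[s1] * trans[s1][s2] for s1 in alpha) * emit[s2][o]
            for s2 in start
        }
    return sum(alpha.values())

# Two toy single-state activity models over object types.
make_tea = dict(
    start={"s": 1.0},
    trans={"s": {"s": 1.0}},
    emit={"s": {"cup": 0.9, "plate": 0.1}},
)
set_table = dict(
    start={"s": 1.0},
    trans={"s": {"s": 1.0}},
    emit={"s": {"cup": 0.2, "plate": 0.8}},
)

obs = ["cup", "cup", "plate"]
scores = {
    name: forward_likelihood(obs, **m)
    for name, m in [("make_tea", make_tea), ("set_table", set_table)]
}
best = max(scores, key=scores.get)
print(best)  # the activity whose HMM best explains the trace
```

Because each activity is scored in isolation, nothing in this baseline models transitions between activities or shared objects, which is one source of the error rate reported above.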
Baseline: Single Hidden Markov Model

83% accuracy, 9.4 errors per episode

Cause of Errors
  • Observations were types of objects
    • Spoon, plate, fork …
  • Typical errors: confusion between activities
    • Using one object repeatedly
    • Using different objects of same type
  • Critical distinction in many ADLs
    • Eating versus setting table
    • Dressing versus putting away laundry
Aggregate Features
  • HMM with individual object observations fails
    • No generalization!
  • Solution: add aggregation features
    • Number of objects of each type used
    • Requires history of current activity performance
    • DBN encoding avoids explosion of HMM
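The aggregate-feature idea can be illustrated concretely: alongside the raw object-type observation, track how many distinct objects of each type the current activity has touched so far. "Eating" reuses one fork; "setting the table" touches several different forks. A toy sketch (not the paper's DBN; object names and traces are invented):

```python
# Sketch of aggregation features: per step, the running count of
# distinct objects of each type used by the current activity.
# RFID tags are unique, so distinct object IDs are observable.

from collections import defaultdict

def aggregate_features(touches):
    """touches: list of (object_type, object_id) within one activity.
    Returns, per step, the count of distinct objects per type so far."""
    seen = defaultdict(set)
    feats = []
    for obj_type, obj_id in touches:
        seen[obj_type].add(obj_id)
        feats.append({t: len(ids) for t, ids in seen.items()})
    return feats

eating = [("fork", 1), ("fork", 1), ("fork", 1)]     # one fork, reused
setting = [("fork", 1), ("fork", 2), ("fork", 3)]    # three different forks

print(aggregate_features(eating)[-1])   # {'fork': 1}
print(aggregate_features(setting)[-1])  # {'fork': 3}
```

The count distinguishes the two activities even though the raw observation sequence ("fork, fork, fork") is identical, which is exactly the distinction the plain HMM misses.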
Dynamic Bayes Net with Aggregate Features

88% accuracy, 6.5 errors per episode

Improving Robustness
  • DBN fails if novel objects are used
  • Solution: smooth parameters over abstraction hierarchy of object types
Abstraction Smoothing
  • Methodology:
    • Train on 10 days data
    • Test where one activity substitutes one object
  • Change in error rate:
    • Without smoothing: 26% increase
    • With smoothing: 1% increase
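The smoothing idea can be sketched as a back-off over the object-type hierarchy: the probability for a specific object is blended with the estimate for its parent class, so a novel object inherits probability mass from objects of the same class. The hierarchy, the trained table, and the mixing weight below are all invented for illustration.

```python
# Illustrative sketch of abstraction smoothing: blend a specific
# object's emission probability with its parent class's estimate,
# so an unseen object (a mug) borrows mass from "vessel".

PARENT = {"mug": "vessel", "teacup": "vessel", "vessel": None}

def smoothed_prob(obj, table, alpha=0.5):
    """Blend P(obj | activity) with the parent-class estimate."""
    p = table.get(obj, 0.0)
    parent = PARENT.get(obj)
    if parent is None:
        return p
    return alpha * p + (1 - alpha) * table.get(parent, 0.0)

# Trained emissions for "make tea": the model saw teacups, never
# mugs, but the class "vessel" accumulated mass from its children.
emit = {"teacup": 0.6, "vessel": 0.6}

print(smoothed_prob("teacup", emit))  # seen object keeps high probability
print(smoothed_prob("mug", emit))     # novel object inherits from its class
```

Without the back-off, the novel object would get zero probability and derail tracking, which matches the 26% vs. 1% error-rate difference reported above.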
Summary
  • Activities of daily living can be robustly tracked using RFID data
    • Simple, direct sensors can often replace (or augment) general machine vision
    • Accurate probabilistic inference requires sequencing, aggregation, and abstraction
    • Works for essentially all ADLs defined in healthcare literature

D. Patterson, H. Kautz, & D. Fox, ISWC 2005 (Best Paper Award)

This Talk
  • Activity tracking from RFID tag data
    • ADL Monitoring
  • Learning patterns of transportation use from GPS data
    • Activity Compass
  • Learning to label activities and places
    • Life Capture
  • Kodak Intelligent Systems Research Center
    • Projects & Plans
The Problem
  • Using public transit cognitively challenging
    • Learning bus routes and numbers
    • Transfers
    • Recovering from mistakes
  • Point to point shuttle service impractical
    • Slow
    • Expensive
  • Current GPS units hard to use
    • Require extensive user input
    • Error-prone near buildings, inside buses
    • No help with transfers, timing
Solution: Activity Compass
  • User carries smart cell phone
  • System infers transportation mode
    • GPS – position, velocity
    • Mass transit information
  • Over time system learns user model
    • Important places
    • Common transportation plans
  • Mismatches = possible mistakes
    • System provides proactive help
Technical Challenge
  • Given a data stream from a GPS unit...
    • Infer the user’s mode of transportation, and places where the mode changes
      • Foot, car, bus, bike, …
      • Bus stops, parking lots, enter buildings, …
    • Learn the user’s daily pattern of movement
    • Predict the user’s future actions
    • Detect user errors
Technical Approach
  • Map: Directed graph
  • Location: (Edge, Distance)
    • Geographic features, e.g. bus stops
  • Movement: (Mode, Velocity)
    • Probability of turning at each intersection
    • Probability of changing mode at each location
  • Tracking (filtering): Given some prior estimate,
    • Update position & mode according to motion model
    • Correct according to next GPS reading
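The predict/correct loop above is standard Bayesian filtering and is easy to sketch as a particle filter. The real system tracks (edge, distance, mode) on a street-map graph; this toy version collapses the map to one dimension, and the speeds, noise levels, and GPS trace are invented for illustration.

```python
# Minimal particle-filter sketch of the predict/correct tracking
# loop. Each particle is (mode, position); prediction moves it by
# a mode-dependent speed, correction reweights by GPS likelihood.

import math, random

random.seed(0)

def predict(particles, dt=1.0):
    """Move each particle by its mode's typical speed plus noise."""
    speed = {"foot": 1.4, "car": 10.0}   # m/s, rough assumptions
    return [
        (mode, pos + speed[mode] * dt + random.gauss(0, 0.5))
        for mode, pos in particles
    ]

def correct(particles, gps_pos, sigma=5.0):
    """Reweight by a Gaussian GPS likelihood, then resample."""
    weights = [
        math.exp(-((pos - gps_pos) ** 2) / (2 * sigma ** 2))
        for _, pos in particles
    ]
    return random.choices(particles, weights=weights, k=len(particles))

# Start uncertain between foot and car at position 0.
particles = [("foot", 0.0)] * 50 + [("car", 0.0)] * 50
for gps in [10.0, 20.0, 30.0]:          # readings consistent with driving
    particles = correct(predict(particles), gps)

modes = [m for m, _ in particles]
print(modes.count("car"), modes.count("foot"))  # fast motion favors "car"
```

After a few updates the particle population concentrates on the mode whose motion model explains the GPS readings, which is how the mode variable gets inferred without ever being observed directly.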
Dynamic Bayesian Network I

[Two-slice diagram, time k-1 and time k. Nodes per slice:]
  • m_k : transportation mode
  • x_k : edge, velocity, position
  • q_k : data (edge) association
  • z_k : GPS reading

Mode & Location Tracking

[Map figure: GPS measurements and projected positions, with the inferred mode shown in color: bus (green), car (red), foot (blue).]

Learning
  • Prior knowledge – general constraints on transportation use
    • Vehicle speed range
    • Bus stops
  • Learning – specialize model to particular user
    • 30 days GPS readings
    • Unlabeled data
    • Learn edge transition parameters using expectation-maximization (EM)
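The unsupervised flavor of this learning step can be shown on a toy problem: with no labels, EM still recovers mode-specific parameters from raw readings. Here a two-component 1-D Gaussian mixture separates "foot" from "vehicle" speeds in an unlabeled trace. The data, the fixed variance, and the starting means are invented; the real system learns edge-transition parameters in the DBN the same way, via expected counts.

```python
# Toy EM: fit two mean speeds to unlabeled GPS speed readings.
# E-step computes soft assignments; M-step re-estimates the means.

import math

def em_two_speeds(speeds, mu=(1.0, 15.0), sigma=2.0, iters=20):
    mu = list(mu)
    for _ in range(iters):
        # E-step: responsibility of component 0 for each reading.
        r = []
        for v in speeds:
            w = [math.exp(-((v - m) ** 2) / (2 * sigma ** 2)) for m in mu]
            r.append(w[0] / (w[0] + w[1]))
        # M-step: re-estimate both mean speeds from the soft counts.
        mu[0] = sum(ri * v for ri, v in zip(r, speeds)) / sum(r)
        mu[1] = sum((1 - ri) * v for ri, v in zip(r, speeds)) / sum(
            1 - ri for ri in r
        )
    return sorted(mu)

speeds = [1.2, 1.5, 0.9, 1.4, 9.8, 10.5, 11.0, 10.2]  # m/s, unlabeled
slow, fast = em_two_speeds(speeds)
print(slow, fast)  # roughly walking speed vs. driving speed
```

The same expectation-maximization loop, run over the hidden mode and edge-transition variables of the full DBN, specializes the generic model to one user's habits.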
Predictive Accuracy

[Chart: probability of correctly predicting the user's future location vs. look-ahead distance in city blocks.]

How to improve predictive power?

Transportation Routines

[Map figure: route from Home via bus stops A and B to the Workplace.]

  • Goal: intended destination
    • Workplace, home, friends, restaurants, …
  • Trip segments: <start, end, mode>
    • Home to Bus stop A on Foot
    • Bus stop A to Bus stop B on Bus
    • Bus stop B to Workplace on Foot
Dynamic Bayesian Net II

[Two-slice diagram, time k-1 and time k. Nodes per slice:]
  • g_k : goal
  • t_k : trip segment
  • m_k : transportation mode
  • x_k : edge, velocity, position
  • q_k : data (edge) association
  • z_k : GPS reading

Predict Goal and Path

[Map figure: the predicted goal and the predicted path overlaid on the street map.]

Detecting User Errors
  • Learned model represents typical correct behavior
    • Model is a poor fit to user errors
  • We can use this fact to detect errors!
  • Cognitive Mode
    • Normal: model functions as before
    • Error: switch in prior (untrained) parameters for mode and edge transition
Dynamic Bayesian Net III

[Two-slice diagram, time k-1 and time k. Nodes per slice:]
  • c_k : cognitive mode { normal, error }
  • g_k : goal
  • t_k : trip segment
  • m_k : transportation mode
  • x_k : edge, velocity, position
  • q_k : data (edge) association
  • z_k : GPS reading

Status
  • Major funding by NIDRR
    • National Institute of Disability & Rehabilitation Research
    • Partnership with UW Dept of Rehabilitation Medicine & Center for Technology and Disability Studies
  • Extension to indoor navigation
    • Hospitals, nursing homes, assisted care communities
    • Wi-Fi localization
  • Multi-modal interface studies
    • Speech, graphics, text
    • Adaptive guidance strategies
Papers

  • "Indoor Wayfinding: Developing a Functional Interface for Individuals with Cognitive Impairments," Liu et al., ASSETS 2006
  • "Learning and Inferring Transportation Routines," Liao, Fox, & Kautz, AAAI 2004 (Best Paper)
  • "Opportunity Knocks: A System to Provide Cognitive Assistance with Transportation Services," Patterson et al., UBICOMP 2004
This Talk
  • Activity tracking from RFID tag data
    • ADL Monitoring
  • Learning patterns of transportation use from GPS data
    • Activity Compass
  • Learning to label activities and places
    • Life Capture
  • Kodak Intelligent Systems Research Center
    • Projects & Plans
Task
  • Learn to label a person’s
    • Daily activities
      • working, visiting friends, traveling, …
    • Significant places
      • work place, friend’s house, usual bus stop, …
  • Given
    • Training set of labeled examples
    • Wearable sensor data stream
      • GPS, acceleration, ambient noise level, …
Learning to Label Places & Activities

[Map figure with labeled significant places: friend's home, restaurant, office, home, parking lot, store.]
  • Learned general rules for labeling “meaningful places” based on time of day, type of neighborhood, speed of movement, places visited before and after, etc.
Relational Markov Network
  • First-order version of conditional random field
  • Features defined by feature templates
    • All instances of a template have same weight
  • Examples:
    • Time of day an activity occurs
    • Place an activity occurs
    • Number of places labeled “Home”
    • Distance between adjacent user locations
    • Distance between GPS reading & nearest street
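The template idea is that a feature like "activity label L at hour H" has one shared weight, instantiated for every activity in the data. A tiny log-linear sketch (the templates, weights, and labels are invented; the real RMN jointly scores all activities and places with many such templates):

```python
# Sketch of shared feature-template weights in a log-linear model:
# every instantiation of ("label_at_hour", value) uses one weight.

import math

weights = {
    ("label_at_hour", ("Work", 9)): 2.0,
    ("label_at_hour", ("Home", 9)): -1.0,
    ("label_at_hour", ("Work", 22)): -1.5,
    ("label_at_hour", ("Home", 22)): 2.0,
}

def score(label, hour):
    """Sum of weights of the template instances that fire."""
    return weights.get(("label_at_hour", (label, hour)), 0.0)

def predict(hour, labels=("Work", "Home")):
    """Pick the label with the highest conditional probability."""
    z = sum(math.exp(score(l, hour)) for l in labels)
    return max(labels, key=lambda l: math.exp(score(l, hour)) / z)

print(predict(9))   # morning hours favor "Work"
print(predict(22))  # late hours favor "Home"
```

Because the weight is shared across all instances of a template, evidence from every labeled activity in the training set updates the same parameter, which is what lets the model generalize across days and users.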
RMN Model

[Model diagram over a time sequence: {GPS, location} readings g_1 … g_9 at the bottom, activities a_1 … a_5 above them, significant places p_1, p_2 above the activities, and global soft constraints s_1 at the top.]

Significant Places
  • Previous work decoupled identifying significant places from rest of inference
    • Simple temporal threshold
    • Misses places with brief activities
  • RMN model integrates
    • Identifying significant place
    • Labeling significant places
    • Labeling activities
Summary
  • We can learn to label a user’s activities and meaningful locations using sensor data & state of the art relational statistical models

(Liao, Fox, & Kautz, IJCAI 2005, NIPS 2006)

  • Many avenues to explore:
    • Transfer learning
    • Finer grained activities
    • Structured activities
    • Social groups
This Talk
  • Activity tracking from RFID tag data
    • ADL Monitoring
  • Learning patterns of transportation use from GPS data
    • Activity Compass
  • Learning to label activities and places
    • Life Capture
  • Kodak Intelligent Systems Research Center
    • Projects & Plans

Intelligent Systems Research

Henry Kautz

Kodak Intelligent Systems Research Center

Intelligent Systems Research Center
  • New AI / CS center at Kodak Research Labs
    • Directed by Henry Kautz, 2006-2007
    • Initial group of 4 researchers, expertise in machine vision
    • Hiring 6 new PhD level researchers in 2007
  • Mission: providing KRL with access to expertise in
    • Machine learning
    • Automated reasoning & planning
    • Computer vision & computational photography
    • Pervasive computing

14 March 2007

University Partnerships
  • Goal: access to external expertise & technology
    • > $3M / year across KRL
    • Unrestricted grants: $20 - $50K
    • Sponsored research grants: $75 - $150K
    • CEIS (NYSTAR) eligible
    • KEA Fellowships (3 year)
    • Summer internships
    • Individual consulting agreements
  • Master IP Agreement Partners
    • University of Rochester
    • University of Massachusetts at Amherst
    • Cornell
  • Many other partners
    • UIUC, Columbia, U Washington, MIT, …


New ISRC Project
  • Extract meaning from combination of image content and sensor data
  • First step:
    • GPS location data
    • 10 hand-tuned visual concept detectors
      • People and faces
      • Environments, e.g. forest, water, …
    • 100-way classifier built by machine learning
      • High-level activities, e.g. swimming
      • Specific objects, e.g. boats
    • Learned probabilistic model integrates location and image features
      • E.g.: Locations near water raise probability of “image contains boats”


Multimedia Gallery/Blog (with automatic annotation)

[Screenshot: photo gallery with automatically generated activity labels such as Flying, Sailing, Driving, Walking; example entry "Swimming in the Caymans".]

Enhancing Video
  • Synthesizing 3-D video from ordinary video
    • Surprisingly simple and fast techniques use motion information to create a compelling 3-D experience
  • Practical resolution enhancement
    • New project for 2007
    • Goal: convert sub-VGA quality video (cellphone video) to high-quality 640x480 resolution video
    • Idea: sequence of adjacent frames contain implicit information about “missing” pixels


Workflow Planning Research Opportunities

  • Problem: Assigning workflow(s) to particular jobs requires extensive expertise
    • Solution: System learns general job-planning preferences from expert users
  • Problem: Sets of jobs often not scheduled optimally, wasting time & raising costs
    • Solution: Improved scheduling algorithms that consider interactions between jobs (e.g. ganging)
  • Problem: Users find it hard to define new workflows correctly (even with good tools)
    • Solution: Create and verify new workflows by automated planning

Conclusion: Why Now?
  • An early goal of AI was to create programs that could understand ordinary human experience
  • This goal proved elusive
    • Missing probabilistic tools
    • Systems not grounded in real world
    • Lacked compelling purpose
  • Today we have the mathematical tools, the sensors, and the motivation
Credits
  • Graduate students:
    • Don Patterson, Lin Liao, Ashish Sabharwal, Yongshao Ruan, Tian Sang, Harlan Hile, Alan Liu, Bill Pentney, Brian Ferris
  • Colleagues:
    • Dieter Fox, Gaetano Borriello, Dan Weld, Matthai Philipose
  • Funders:
    • NIDRR, Intel, NSF, DARPA