Activity Recognition: Linking Low-level Sensors to High-level Intelligence



  1. Activity Recognition: Linking Low-level Sensors to High-level Intelligence
  Qiang Yang, Hong Kong University of Science and Technology
  http://www.cse.ust.hk/~qyang/

  2. What's Happening Outside AI?
  • Pervasive Computing
  • Sensor Networks
  • Health Informatics
  • Logistics
  • Military/security
  • WWW
  • Computer-Human Interaction (CHI)
  • GIS …

  3. What's Happening Outside AI?
  • Examples: Apple iPhone, Wii, Ekahau WiFi location estimation

  4. Theme of the Talk
  • Activity Recognition:
    • What it is
    • Linking low-level sensors to high-level intelligence
  • Activity recognition research: embedded AI
    • Empirical in nature
    • Research on a very limited budget

  5. A Closed Loop
  • Example action model: Cooking. Preconditions: (…), Postconditions: (…), Duration: (…)
  • Example activities: Eating, Resting, Cooking, Doing Laundry, Meeting, Using the telephone, Shopping, Playing Games, Watching TV, Driving … (from Bao and Intille, Pervasive 04)

  6. Activity Recognition: A Knowledge Food Chain
  • Action Model Learning: how do we model the user's actions?
  • Activity Recognition: what is the user doing / about to do next?
  • Localization & Context: where is the user? What's around her?
  • Knowledge food chain: the output of each level acts as input to the level above, in a closed feedback loop

  7. Basic: Knowing Your Context
  • Locations and context:
    • Where are you?
    • What's around you?
    • Who's around you?
    • How long are you there?
    • Where were you before?
    • Status of objects (door open?)
    • What is the temperature like?
    • …

  9. Focusing on Locations (photo: Dr. Jie Yin at work, HKUST)
  • Input: sensor readings
    • WiFi, RFID, audio, visual, temperature
    • Infrared, ultrasound, magnetic fields
    • Power lines [Stuntebeck, Patel, Abowd et al., Ubicomp2008]
    • …
  • Localization models
  • Output: predicted locations

  10. Location-based Applications: Indoor
  • Healthcare at home and in hospitals
  • Logistics: cargo control
  • Shopping, security
  • Digital Wall (collaboration with NEC China Lab)

  11. How to Obtain a Localization Model?
  • Propagation-model based: model the signal attenuation (sketch below)
    • Advantages: less data-collection effort
    • Disadvantages: needs known emitter locations; handles uncertainty poorly
  • Machine-learning based, e.g. RADAR [Bahl and Padmanabhan, INFOCOM2000]
    • Advantages: models uncertainty better; benefits from sequential information
    • Disadvantages: may require a lot of labeled data
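
To make the propagation-based option concrete, here is a minimal Python sketch (not from the talk) of the standard log-distance path-loss model plus a nearest-fit localizer over a grid of candidate positions. The constants `p0`, `n`, and `d0` are illustrative placeholders, and known access-point positions are assumed, which is exactly the disadvantage noted above.

```python
import numpy as np

def rss_log_distance(d, p0=-40.0, n=3.0, d0=1.0):
    """Expected RSS (dBm) at distance d metres from an access point,
    under the log-distance path-loss model: P(d) = p0 - 10*n*log10(d/d0)."""
    return p0 - 10.0 * n * np.log10(np.maximum(d, 1e-6) / d0)

def localize(rss, ap_positions, grid):
    """Return the candidate position whose predicted RSS vector
    (one entry per AP) best matches the observation, in least squares."""
    best, best_err = None, np.inf
    for cell in grid:
        dists = np.linalg.norm(ap_positions - cell, axis=1)  # metres to each AP
        err = np.sum((rss_log_distance(dists) - rss) ** 2)
        if err < best_err:
            best, best_err = cell, err
    return best
```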

  12. Using Both Labeled and Unlabeled Data in Subspace Learning
  • LeMan: Location estimation with Manifolds [J. J. Pan and Yang et al., AAAI2006]
  • Manifold assumption: similar signals have similar labels
  • Objective: minimize the loss over labeled data, while propagating labels to unlabeled data

  13. LeMan [J. J. Pan and Yang et al., AAAI2006]
  • Supervised vs. semi-supervised, compared in a 4m x 5m testbed
  • Comparison criterion: achieving the same accuracy under an 80cm error distance
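
A loosely analogous, minimal sketch of the semi-supervised idea (not LeMan itself) using scikit-learn's graph-based LabelPropagation: a handful of labeled RSS vectors spread their grid-cell labels to unlabeled neighbours along the signal manifold. All data here are synthetic.

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(0)
X = rng.normal(-60, 10, size=(200, 5))      # synthetic 5-AP signal vectors
y = np.full(200, -1)                        # -1 marks unlabeled samples
y[:20] = rng.integers(0, 4, size=20)        # a few labeled grid cells

model = LabelPropagation(kernel='knn', n_neighbors=7)
model.fit(X, y)                             # labels flow along the k-NN graph
print(model.transduction_[:10])             # inferred cells for every sample
```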

  14. Adding Sequences: Graphical Models
  • CRF-based localization [R. Pan, Zheng, Yang et al., KDD2007]
  • Conditional Random Fields [Lafferty, McCallum, Pereira, ICML2001]: an undirected graphical model generalizing the HMM
  • States = locations; observations = signals
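
Full CRF training is beyond a slide, but the sequential decoding these models share is easy to show. Below is a minimal Viterbi decoder for the HMM special case (states = locations, observations = signals); the inputs are assumed to be precomputed log-probabilities, not the KDD2007 model itself.

```python
import numpy as np

def viterbi(obs_loglik, log_trans, log_prior):
    """Most likely location sequence.
    obs_loglik: (T, S) log p(signal_t | state); log_trans: (S, S); log_prior: (S,)."""
    T, S = obs_loglik.shape
    delta = log_prior + obs_loglik[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans        # (from-state, to-state)
        back[t] = scores.argmax(axis=0)            # best predecessor per state
        delta = scores.max(axis=0) + obs_loglik[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                  # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```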

  15. What If the Signal Data Distribution Changes?
  • Signals may vary over devices, time, spaces …
  • A → B: the localization error may increase
  • Transfer Learning!

  16. Our Work on the Signal-Variation Problems: Transfer Learning
  • Problem 1: transfer across devices [Zheng and Yang et al., AAAI2008a]
  • Problem 2: transfer across time [Zheng and Yang et al., AAAI2008b]
  • Problem 3: transfer across spaces [S. J. Pan and Yang et al., AAAI2008]

  17. Transferring Localization Models Across Devices [Zheng and Yang et al., AAAI2008a]
  • Input: signal-location pairs, e.g. S = (-30dbm, .., -86dbm), L = (1, 3), collected on several devices (CISCO, Buffalo, D-Link)
    • Source devices have plentiful labeled data
    • The target device has only a few labeled data
  • Output: the localization model on the target device

  18. Transferring Localization Models Across Devices [Zheng and Yang et al., AAAI2008a]
  • Localization on each wireless adapter is treated as a learning task
  • Model: latent multi-task learning [Caruana, MLJ1997]; each device is one task
    • Minimize each device's localization error, and
    • Let devices share common constraints in a shared latent space
  • Regression from signals x to locations y
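
As a toy stand-in for the latent multi-task formulation (not the paper's actual optimization), one can learn a single shared subspace from all devices' signals and fit a per-device regression head in it; the shared PCA plays the role of the common latent constraints.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def fit_devices(signal_sets, location_sets, k=10):
    """signal_sets / location_sets: one (X, Y) pair per device.
    Returns the shared subspace and one regressor per device."""
    shared = PCA(n_components=k).fit(np.vstack(signal_sets))  # shared latent space
    heads = [Ridge(alpha=1.0).fit(shared.transform(X), Y)     # device-specific task
             for X, Y in zip(signal_sets, location_sets)]
    return shared, heads
```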

  19. Transferring Localization Models Over Time [Zheng and Yang et al., AAAI2008b] (photo: PhD student Vincent Zheng at work)
  • Input:
    • Old time period: plentiful labeled sequences, e.g. S = (-30dbm, .., -86dbm), L = (1, 3)
    • New time period: some (non-sequential) labeled data plus some unlabeled sequences, e.g. S = (-42dbm, .., -77dbm)
  • Output: a localization model for the new time period

  20. Transferring Localization Models Over Time [Zheng and Yang et al., AAAI2008b]
  • Model: Transferred Hidden Markov Model
    • Radio map over reference points (RPs)
    • Transition matrix of user moves
    • Prior knowledge on the likelihood of where the user is
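
One plausible reading of the transfer step, sketched below (not the paper's exact algorithm): keep the old radio map as the emission model, decode the new period's unlabeled sequences with the old model (e.g. the `viterbi()` sketch above), and recount state-to-state moves to refresh the transition matrix, hard-EM style.

```python
import numpy as np

def reestimate_transitions(sequences, decode, n_states, smoothing=1.0):
    """decode: maps a signal sequence to a state path under the old model."""
    counts = np.full((n_states, n_states), smoothing)   # Laplace smoothing
    for seq in sequences:
        path = decode(seq)
        for a, b in zip(path[:-1], path[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)   # row-normalise
```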

  21. Transferring Localization Models Across Space [S. J. Pan and Yang et al., AAAI2008]
  • Input (figure: areas A and B around an access point):
    • Area A: plentiful labeled data (red dots in the picture)
    • Area B: few labeled data and some unlabeled data
  • Output: a localization model for Area B

  22. Summary: Localization Using Sensors
  • Research issues:
    • Optimal sensor placement [Krause, Guestrin, Gupta, Kleinberg, IPSN2006]
    • Integrated propagation and learning models
    • Sensor fusion
    • Transfer learning
    • Location-based social networks
  • Locations: 2D/3D physical positions; locations are a type of context
  • Other contextual information; object context: nearby objects + usage status
  • Locations and context: where you are, who's around you, how long you are there, status of objects (door open?), what the temperature is like

  23. Activity Recognition
  • Action Model Learning: how do we explicitly model the user's possible actions?
  • Activity Recognition: what is the user doing / trying to do?
  • Localization and context: where is the user? What's around her? How long (duration)? What time/day?

  24. Steps in Activity Recognition
  • Pipeline, each layer fed by sensors: location/context recognition → action recognition → goal recognition
  • Also: plan, behavior, intent, project …

  25. Activity Recognition: Input & Output
  • Input:
    • Context and locations: time, history, current/previous locations, duration, speed
    • Object usage information
    • A trained AR model; training data come from calibration (calibration tool: VTrack, http://www.cse.ust.hk/~vincentz/Vtrack.html)
  • Output: predicted activity labels (a minimal classifier sketch follows)
    • Running? Walking? Tooth brushing? Having lunch?
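
A minimal sketch of such a trained AR model; the feature layout and labels here are invented for illustration and are not VTrack's format.

```python
from sklearn.ensemble import RandomForestClassifier

# Each row: hour of day, location id, speed (m/s), object-in-use id.
X_train = [[8, 3, 0.0, 12], [12, 7, 1.4, 0], [8, 3, 0.1, 12], [18, 1, 2.9, 0]]
y_train = ['tooth_brushing', 'walking', 'tooth_brushing', 'running']

ar_model = RandomForestClassifier(n_estimators=100, random_state=0)
ar_model.fit(X_train, y_train)
print(ar_model.predict([[8, 3, 0.05, 12]]))  # likely 'tooth_brushing'
```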

  26. Activity Recognition: Applications
  • GPS-based location services: inferring transportation modes/routines [Liao, Fox, Kautz, AAAI2004]
    • Unsupervised; bridges the gap between raw GPS readings and the user's mode of transportation
    • Can detect when the user misses a bus stop → offer help
  • Healthcare for elders, e.g. the Autominder system [Pollack et al., Robotics and Autonomous Systems, 2003]
    • Provides users with reminders when they need them
  • Recognizing activities with cell phones (video): Chinese Academy of Sciences (Prof. Yiqiang Chen and Dr. Junfa Liu)

  27. Microsoft Research Asia: GeoLife Project [Zheng, Xie, WWW2008]
  • Inferring transportation modes, and
  • Computing similarity based on itineraries to link people in a social network: GeoLife (video)
  • Segment-level mode update, e.g. when the previous segment was inferred as Car:
    Segment[i].P(Bike) = Segment[i].P(Bike) * P(Bike | Car)
    Segment[i].P(Walk) = Segment[i].P(Walk) * P(Walk | Car)
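
The update rule above, applied and renormalized in a small sketch; the transition probabilities are made-up numbers for illustration, not GeoLife's learned values.

```python
MODES = ['Walk', 'Bike', 'Car']
TRANS = {('Car', 'Walk'): 0.6, ('Car', 'Bike'): 0.1, ('Car', 'Car'): 0.3,
         ('Walk', 'Walk'): 0.5, ('Walk', 'Bike'): 0.3, ('Walk', 'Car'): 0.2,
         ('Bike', 'Walk'): 0.3, ('Bike', 'Bike'): 0.6, ('Bike', 'Car'): 0.1}

def smooth(prev_mode, seg_probs):
    """Reweight a segment's mode probabilities by P(mode | previous mode)."""
    out = {m: seg_probs[m] * TRANS[(prev_mode, m)] for m in MODES}
    z = sum(out.values())
    return {m: p / z for m, p in out.items()}

print(smooth('Car', {'Walk': 0.4, 'Bike': 0.5, 'Car': 0.1}))
```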

  28. Activity Recognition (AR): ADL
  • ADL = Activities of Daily Living
  • From sound to events in everyday life [Lu and Choudhury et al., MobiSys2009]
  • iCare (NTU): digital home support, early diagnosis of behavior changes
    • iCare Project at NTU (Hao-hua Chu, Jane Hsu, et al.): http://mll.csie.ntu.edu.tw/icare/index.php
  • Duration patterns and inherent hierarchical structures [Duong, Bui et al., AI Journal 2008]

  29. Early Work: Plan Recognition
  • Objective [Kautz 1987]: inferring the plans of an agent from (partial) observations of his actions
  • Input: observed actions (K, L); a plan library
  • Output: recognized goals/plans

  30. Review: Event Hierarchy in Plan Recognition
  • The Cooking event hierarchy [Kautz 1987]: abstraction relationships over actions, e.g. "Step 2 of Make Pasta Dish"
  • Some works:
    • [Kautz 1987]: graph inference
    • [Pynadath and Wellman, UAI2000]: probabilistic CFG
    • [Geib and Steedman, IJCAI2007]: NLP and plan recognition
    • [Geib, ICAPS2008]: string-rewriting techniques

  31. A Gap?

  32. AR: Sequential Methods
  • Dynamic Bayesian Networks [Liao, Fox, Kautz, AAAI2004] [Yin, Chai, Yang, AAAI2004]
  • Conditional Random Fields [Vail and Veloso, AAAI2008]
  • Relational Markov Networks [Liao, Fox, Kautz, NIPS2005]

  33. Intel: Incorporating Commonsense [Wyatt, Philipose, Choudhury, AAAI2005]
  • Model = commonsense knowledge; work at Intel Seattle Lab / UW
  • Calculate object-usage information from Web data: P(Obj | Action) (toy sketch below)
  • Train a customized model
    • HMM parameter learning [Wyatt et al. AAAI2005]
    • Mine the model from the Web [Perkowitz, Philipose et al. WWW2004]
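
A toy version of the mining step (the snippets below stand in for real web pages retrieved per activity name): estimate P(object | activity) by counting object mentions in the retrieved text, with Laplace smoothing.

```python
from collections import Counter

OBJECTS = ['kettle', 'cup', 'toothbrush', 'pan']

def object_given_activity(snippets_by_activity, smoothing=1.0):
    model = {}
    for act, snippets in snippets_by_activity.items():
        counts = Counter(w for s in snippets for w in s.split() if w in OBJECTS)
        total = sum(counts.values()) + smoothing * len(OBJECTS)
        model[act] = {o: (counts[o] + smoothing) / total for o in OBJECTS}
    return model

model = object_given_activity({
    'make tea':    ['boil the kettle', 'pour into a cup', 'kettle whistle'],
    'brush teeth': ['wet the toothbrush', 'toothbrush and paste'],
})
print(model['make tea']['kettle'])   # higher than e.g. 'toothbrush'
```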

  34. Datasets: MIT PlaceLab (http://architecture.mit.edu/house_n/placelab.html)
  • MIT PlaceLab Dataset (PLIA2) [Intille et al. Pervasive 2005]
  • Activities: common household activities

  35. Datasets: Intel Research Lab [Patterson, Fox, Kautz, Philipose, ISWC2005]
  • Activities performed: 11 activities
  • Sensors: RFID readers & tags
  • Length: 10 mornings
  • Note: Intel now has better RFID wristbands; picture excerpted from [Patterson, Fox, Kautz, Philipose, ISWC2005]

  36. Complex Actions? Reduce Labels?
  • Complex actions: for multiple activities with complex relationships [Hu and Yang, AAAI2008], i.e. concurrent and interleaving activities
  • Label reduction: what if we are short of labeled data in a new domain? [Zheng, Hu, Yang, et al. Ubicomp 2009]
    • Use transfer learning to borrow knowledge from a source domain (where labeled data are abundant)
    • For recognizing activities where labeled data are scarce

  37. Concurrent and Interleaving Goals [Hu, Yang, AAAI2008]
  • Figure panels: interleaving activities; concurrent activities

  38. Concurrent and Interleaving Goal and Activity Recognition [Hu, Yang, AAAI2008]
  • Use the long-distance dependencies in Skip-Chain Conditional Random Fields to capture the relatedness between interleaving activities
  • Two factor types (see figure): factors for linear-chain edges and factors for skip edges

  39. Concurrent and Interleaving Goal and Activity Recognition [Hu, Yang, AAAI2008]
  • Concurrent goals: a correlation matrix between different goals, learned from training data
  • Example: "attending invited talk" and "browsing WWW"

  40. Cross-Domain Activity Recognition [Zheng, Hu, Yang, Ubicomp 2009]
  • Challenge: a new domain of activities without labeled data (figure examples: indoor cleaning, laundry, dishwashing)
  • Cross-domain activity recognition: transfer some available labeled data from source activities to help train the recognizer for the target activities

  41. Calculating Activity Similarities
  • How similar are two activities? Use Web search results
  • TF-IDF: traditional IR similarity metrics (cosine similarity); see the sketch below
  • Example: mined similarity between the activity "sweeping" and "vacuuming", "making the bed", "gardening"
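
The similarity computation itself, in a small sketch with stand-in text; real inputs would be the top search-result snippets for each activity name.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {
    'sweeping':       'broom dust floor sweep kitchen dirt',
    'vacuuming':      'vacuum carpet floor dust cleaner dirt',
    'making the bed': 'sheets pillow blanket bedroom tidy',
}
names = list(docs)
tfidf = TfidfVectorizer().fit_transform(docs[n] for n in names)
print(names)
print(cosine_similarity(tfidf)[0])   # row 0: 'sweeping' vs. each activity
```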

  42. How to Use the Similarities?
  • Pipeline (see figure): source-domain labeled data <Sensor Reading, Activity Name>, e.g. <SS, "Make Coffee">, plus a similarity measure mined from the Web, yields pseudo-labeled data for the target domain, which trains a weighted SVM classifier (sketch below)
  • Example: sim("Make Coffee", "Make Tea") = 0.6 gives the pseudo training example <SS, "Make Tea", 0.6>
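
A minimal sketch of the final step: source readings pseudo-labeled with target activity names, each weighted by the mined similarity, feed a weighted SVM (scikit-learn's `SVC` accepts per-sample weights at fit time). Features and numbers are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])   # sensor features
y = np.array(['Make Tea', 'Make Tea', 'Dishwashing', 'Dishwashing'])
w = np.array([0.6, 0.6, 0.3, 0.3])   # e.g. sim('Make Coffee', 'Make Tea') = 0.6

clf = SVC(kernel='rbf', gamma='scale')
clf.fit(X, y, sample_weight=w)       # pseudo-labels count per their confidence
print(clf.predict([[0.85, 0.15]]))
```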

  43. Cross-Domain AR: Performance
  • Source-domain and target-domain activity splits are generated over ten random trials; mean accuracies are reported

  44. How Does AR Impact AI?
  • Action Model Learning: how do we explicitly model the user's possible actions?
  • Activity Recognition: what is the user doing / trying to do?
  • Localization: where is the user?

  45. Relationship to Localization and AR
  • From context → state descriptions from sensors
  • From activity recognition → activity sequences
  • Learning action models for planning
    • Motivation: solve new planning problems; reduce the knowledge-engineering effort
  • One can even recognize goals using planning [Ramirez and Geffner, IJCAI2009]

  46. What is Action Model Learning?
  • Input: activity sequences
    • Sequences of labels/objects, e.g. pick-up(b1), stack(b1,b2), etc.
    • Initial state, goal, and partial intermediate states, e.g. ontable(b1), clear(b1), etc.
  • Output: action models (see the sketch after this list)
    • Preconditions of actions, e.g. preconditions of "pick-up": ontable(?x), handempty, etc.
    • Effects of actions, e.g. effects of "pick-up": holding(?x), etc.
  • Related work:
    • TRAIL [Benson, ICML1994]: learns Teleo-operator models (TOPs) with domain experts' help
    • EXPO [Gil, ICML1994]: learns action models incrementally by assuming partial action models known
    • Probabilistic STRIPS-like models [Pasula et al. ICAPS2004]: learns probabilistic STRIPS-like operators from examples
    • SLAF [Amir, AAAI2005]: learns exact action models in partially observable domains
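
To fix notation, here is the "pick-up" model above written out as a STRIPS-style structure with an applicability check; this is a plain-Python rendering for illustration, not any particular learner's output format.

```python
pick_up = {
    'params': ['?x'],
    'pre':    ['ontable(?x)', 'clear(?x)', 'handempty'],
    'add':    ['holding(?x)'],
    'del':    ['ontable(?x)', 'clear(?x)', 'handempty'],
}

def ground(literals, binding):
    """Substitute a concrete object for the ?x parameter."""
    return [lit.replace('?x', binding) for lit in literals]

state = {'ontable(b1)', 'clear(b1)', 'handempty'}
print(all(p in state for p in ground(pick_up['pre'], 'b1')))   # True: applicable
```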

  47. ARMS [Yang et al. AIJ2007]: An Overview
  • Inputs: activity sequences, plus sensor states and object usage (candidates for what can be in the preconditions/postconditions)
  • Build constraints: information constraints and plan constraints
  • Solve with weighted MAX-SAT / MLN; each relation has a weight that can be learned
  • Output: action models (toy solving sketch below)
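
The solving step in miniature (a toy instance, not ARMS's actual encoding): boolean variables say whether a relation belongs in an action's preconditions or effects, weighted clauses encode the constraints, and brute force stands in for a real weighted MAX-SAT solver.

```python
from itertools import product

VARS = ['pre_pickup_handempty', 'eff_pickup_holding', 'pre_stack_holding']
CLAUSES = [  # (weight, disjunction of (variable, required truth value))
    (5.0, [('pre_pickup_handempty', True)]),
    (4.0, [('eff_pickup_holding', True)]),
    (3.0, [('eff_pickup_holding', False), ('pre_stack_holding', True)]),
]

best, best_w = None, -1.0
for bits in product([False, True], repeat=len(VARS)):
    assign = dict(zip(VARS, bits))
    w = sum(wt for wt, lits in CLAUSES if any(assign[v] == t for v, t in lits))
    if w > best_w:
        best, best_w = assign, w
print(best, best_w)   # highest-weight assignment -> learned model choices
```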

  48. Evaluation by Students @ HKUST: Executing the Learned Actions
  • Lego-Learning-Planning (LLP) system design
  • Diagram components: notebook (activity recognition & planning), Bluetooth robot, PDA, and a Web server reached over the Internet, exchanging robot status/data and control commands

  49. A Lego Planning Domain
  • Relations given by sensors / physical map: (motor_speed ), (empty ), …, (across x-loc y-loc z-loc)
  • Actions known to the robot: (Move_forw x-loc y-loc z-loc), …, (Turn_left x-loc y-loc z-loc)
  • Initial state: (empty ) (face grid0) …
  • Goal: … (holding Ball)
  • Collection of activity sequences (Video 1: robot; Video 2: human)

  50. Activity Sequences
  • A human manually achieves the goal; the recorded trace goes through the Activity Recognizer and then into ARMS for action-model learning:
    0: (MOVE_FORW A B C)
    …
    4: (MOVE_FORW D E F)
    5: (MOVE_FORW E F W)
    6: (STOP F)
    7: (PICK_UP F BALL)
    …
    10: (STOP D)
    11: (TURN_LEFT D W E)
    12: (PUT_DOWN BALL D)
    13: (PICK_UP D BALL)
