

Dynamic Time Warping and Neural Network. J.-Y. Yang, J.-S. Wang and Y.-P. Chen, "Using acceleration measurements for activity recognition: An effective learning algorithm for constructing neural classifiers," Pattern Recognition Letters, vol. 29, no. 16, pp. 2213-2220, 2008. Spring Semester, 2010.



Presentation Transcript


  1. Dynamic Time Warping and Neural Network J.-Y. Yang, J.-S. Wang and Y.-P. Chen, "Using acceleration measurements for activity recognition: An effective learning algorithm for constructing neural classifiers," Pattern Recognition Letters, vol. 29, no. 16, pp. 2213-2220, 2008. Spring Semester, 2010

  2. Outline • Background • Activity Recognition Strategy • Experiments • Summary

  3. Background • Accelerometers can be used as human motion detection and monitoring devices • Biomedical engineering, medical nursing, interactive entertainment, … • Exercise intensity/distance, sleep cycle, and calorie consumption

  4. Background Proposed Method Overview • One 3-D accelerometer on the dominant wrist • NNs • Pre-classifier → static classifier or dynamic classifier • Eight domestic activities • Standing, sitting, walking, running, vacuuming, scrubbing, brushing teeth, and working at a computer

  5. Background Neural Classifier • Neurons in the Brain • A neuron receives input from other neurons (generally thousands) from its synapses • Inputs are approximately summed • When the input exceeds a threshold the neuron sends an electrical spike that travels from the body, down the axon, to the next neuron(s)

  6. Background Neurons in the Brain (cont.) • Amount of signal passing through a neuron depends on: • Intensity of signal from feeding neurons • Their synaptic strengths • Threshold of the receiving neuron • Hebb rule (plays key part in learning) • A synapse which repeatedly triggers the activation of a postsynaptic neuron will grow in strength, others will gradually weaken • Learn by adjusting magnitudes of synapses’ strengths
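
The Hebb rule described above can be sketched as a one-line weight update (an illustrative sketch only; the paper's classifiers are trained with back-propagation, not Hebbian learning):

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.01):
    """One Hebbian step: a synapse whose input coincides with the
    postsynaptic activation grows in strength (w, x, y, lr are
    illustrative names, not from the paper)."""
    return w + lr * y * x

w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])   # presynaptic inputs
y = 1.0                          # postsynaptic activation
w = hebbian_update(w, x, y)
print(w)  # weights grow only where input and output fire together
```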

  7. Background Artificial Neurons • Output: y = g(∑ wi·xi) • [Figure: inputs x1, x2, x3 weighted by w1, w2, w3, summed and passed through activation function g]
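
A minimal sketch of the artificial neuron in the diagram, assuming a logistic sigmoid for the activation g (the slide does not specify which activation is used):

```python
import numpy as np

def neuron(x, w, b=0.0):
    """Artificial neuron: weighted sum of inputs passed through an
    activation g (logistic sigmoid assumed here for illustration)."""
    g = lambda s: 1.0 / (1.0 + np.exp(-s))
    return g(np.dot(w, x) + b)

x = np.array([0.5, -1.0, 2.0])   # inputs x1..x3
w = np.array([0.4, 0.3, -0.1])   # synaptic weights w1..w3
print(neuron(x, w))              # weighted sum is -0.3, then squashed by g
```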

  8. Background Neural Classifier (Perceptron) • Structure • Learning • Weights are changed in proportion to the difference (error) between the target output and the perceptron's output for each example • Back-propagation algorithm • Gradient descent: slow convergence and local minima • Resilient back-propagation (RPROP) • Ignores the magnitude of the gradient, using only its sign
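
The sign-only update that distinguishes RPROP from plain gradient descent can be sketched as follows (a simplified RPROP variant for illustration; the eta_plus/eta_minus values are commonly used defaults, not parameters taken from the paper):

```python
import numpy as np

def rprop_step(w, grad, step, prev_grad, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One RPROP update: each weight has its own step size, adapted
    from the *sign* of the gradient only; the magnitude is ignored."""
    sign_change = grad * prev_grad
    # same sign as last time -> accelerate; sign flipped -> back off
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    w = w - np.sign(grad) * step
    return w, step

w = np.array([0.5, -0.5])
step = np.array([0.1, 0.1])
prev_grad = np.array([1.0, -1.0])
grad = np.array([0.2, 1.0])       # first weight: same sign; second: sign flip
w, step = rprop_step(w, grad, step, prev_grad)
print(w, step)
```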

  9. Activity Recognition Strategy • Pre-Classifier • Static/Dynamic Classifier

  10. Activity Recognition Strategy Pre-Classifier (1/2) • Two components of the acceleration data • Gravitational acceleration (GA) • Body acceleration (BA): High-pass filtering to remove GA • Segmentation with overlapping windows • 512 samples per window
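
The overlapping segmentation can be sketched as follows (window and overlap sizes from the slides; the signal length assumes the 100 Hz sampling rate given in the experiment setup):

```python
import numpy as np

def segment(signal, win=512, overlap=256):
    """Split a 1-D signal into overlapping windows: 512 samples per
    window with a 256-sample (half-window) overlap, as in the paper."""
    step = win - overlap
    n = (len(signal) - win) // step + 1
    return np.stack([signal[i * step: i * step + win] for i in range(n)])

# At 100 Hz, 1 minute = 6000 samples -> 22 half-overlapping windows
sig = np.random.randn(6000)
wins = segment(sig)
print(wins.shape)  # (22, 512)
```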

  11. Activity Recognition Strategy Pre-Classifier (2/2) • SMA (Signal Magnitude Area) • The sum of acceleration magnitude over three axes • AE (Average Energy) • Average of the energy over three axes • Energy: The sum of the squared discrete FFT component magnitudes of the signal in a window
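
The two pre-classifier features might be computed per window as follows (one common formulation; the paper's exact normalisation may differ):

```python
import numpy as np

def sma(ax, ay, az):
    """Signal magnitude area: sum of absolute acceleration over the
    three axes, normalised by window length."""
    return (np.abs(ax) + np.abs(ay) + np.abs(az)).sum() / len(ax)

def average_energy(ax, ay, az):
    """Average energy: mean over the three axes of the sum of squared
    discrete FFT component magnitudes within the window."""
    def energy(sig):
        return np.sum(np.abs(np.fft.fft(sig)) ** 2) / len(sig)
    return (energy(ax) + energy(ay) + energy(az)) / 3.0

ax = ay = az = np.ones(4)         # toy constant-acceleration window
print(sma(ax, ay, az))            # 3.0
print(average_energy(ax, ay, az)) # 4.0
```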

  12. Activity Recognition Strategy Feature Extraction • 8 attributes × 3axis = 24 features • Mean, correlation between axes, energy, interquartile range (IQR), mean absolute deviation, root mean square, standard deviation, variance
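
The per-axis statistics could be computed like this (a sketch; the paper does not give exact formulas, and the axis-correlation attribute is pairwise between axes rather than per-axis):

```python
import numpy as np

def axis_features(sig):
    """Seven of the eight listed attributes for one axis of a window."""
    q75, q25 = np.percentile(sig, [75, 25])
    return {
        "mean": sig.mean(),
        "energy": np.sum(np.abs(np.fft.fft(sig)) ** 2) / len(sig),
        "iqr": q75 - q25,                          # interquartile range
        "mad": np.mean(np.abs(sig - sig.mean())),  # mean absolute deviation
        "rms": np.sqrt(np.mean(sig ** 2)),         # root mean square
        "std": sig.std(),
        "var": sig.var(),
    }

def axis_correlation(a, b):
    """Eighth attribute: Pearson correlation between two axes."""
    return np.corrcoef(a, b)[0, 1]

f = axis_features(np.array([1.0, 2.0, 3.0, 4.0]))
print(f["mean"], f["var"])  # 2.5 1.25
```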

  13. Activity Recognition Strategy Feature Selection (1/2) • Common principal component analysis (CPCA) • If features are highly correlated, the corresponding loading vectors are similar • Clustering to group similar loadings

  14. Activity Recognition Strategy Feature Selection (2/2) • Apply PCA • Select the first p PCs (cumulative variance sum > 90%) • Estimate the CPCs • Support vector clustering
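
The "first p PCs above 90% cumulative variance" step can be sketched as follows (a plain-PCA sketch; the CPC estimation and clustering steps are not shown):

```python
import numpy as np

def num_components(X, threshold=0.90):
    """Smallest p such that the first p principal components explain
    more than `threshold` of the total variance (90% in the paper)."""
    Xc = X - X.mean(axis=0)
    # eigvalsh returns ascending eigenvalues; reverse to descending
    eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
    ratios = eigvals / eigvals.sum()
    return int(np.searchsorted(np.cumsum(ratios), threshold) + 1)

t = np.arange(10.0)
X = np.column_stack([t, 2 * t, -t])  # rank-1 toy data: one PC suffices
print(num_components(X))             # 1
```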

  15. Activity Recognition Strategy Verification

  16. Experiments: Environment (1/2) • MMA7260Q tri-axial accelerometer • Range: -4.0 g ~ +4.0 g, sampled at 100 Hz • Mounted on the dominant wrist • Eight activities from seven subjects • Standing, sitting, walking, running, vacuuming, scrubbing, brushing teeth, and working at a computer • 2 min per activity

  17. Experiments Environment (2/2) • Window size = 512 (with 256-sample overlap) • 22 windows in one minute, 45 windows in two minutes • Leave-one-subject-out cross-validation • Training: 1 min per activity = 22 windows × 8 activities × 6 subjects • Test: 2 min per activity = 45 windows × 8 activities
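
The leave-one-subject-out protocol can be sketched as follows (subject labels and window counts here are toy values, not the paper's data):

```python
import numpy as np

def leave_one_subject_out(subjects):
    """Yield (train_idx, test_idx) pairs, one fold per subject:
    each fold trains on all other subjects and tests on the held-out one."""
    subjects = np.asarray(subjects)
    for s in np.unique(subjects):
        yield np.where(subjects != s)[0], np.where(subjects == s)[0]

# Toy setup: 7 subjects with 10 windows each
subjects = np.repeat(np.arange(7), 10)
folds = list(leave_one_subject_out(subjects))
print(len(folds))  # 7
```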

  18. Experiments FSS Evaluation • Uses the six selected static features

  19. Experiments Recognition Result • NN • Hidden nodes • Pre-classifier: 3 • Static classifier: 5 • Dynamic classifier: 7 • Epochs: 500 • Computational load of FSS • Training without FSS = 7.457 s, training with FSS = 8.46 s

  20. Summary • The proposed method yielded 95% accuracy • Pre-classifier → static / dynamic classifiers • Authors' related publication • Yen-Ping Chen, Jhun-Ying Yang, Shun-Nan Liou, Gwo-Yun Lee, Jeen-Shing Wang, "Online classifier construction algorithm for human activity detection using a tri-axial accelerometer," Applied Mathematics and Computation, vol. 205, no. 2, pp. 849-860, 2008.
