
Playing with features for learning and prediction


Presentation Transcript


  1. Playing with features for learning and prediction Jongmin Kim, Seoul National University

  2. Problem statement • Predicting outcome of surgery

  3. Predicting outcome of surgery • Ideal approach [diagram: training data of past surgeries → learned model → predicted outcome (?) for a new case]

  4. Predicting outcome of surgery • Initial approach • Predicting partial features • Which features should be predicted?

  5. Predicting outcome of surgery • 4-surgery combination: DHL+RFT+TAL+FDO • Outcome features: flexion of the knee (min/max), rotation of the foot (min/max), dorsiflexion of the ankle (min)

  6. Predicting outcome of surgery • Are these good features? • Number of training data • DHL+RFT+TAL: 35 cases • FDO+DHL+TAL+RFT: 33 cases

  7. Machine learning and features [diagram: data → feature representation → learning algorithm]
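As a concrete illustration of this pipeline (not part of the original slides), here is a minimal scikit-learn sketch on placeholder data; the feature-representation step and the learning-algorithm step are chained explicitly.

```python
# Minimal sketch of the data -> feature representation -> learning algorithm
# pipeline; the feature matrix X and outcome labels y are placeholders.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X = np.random.randn(68, 12)           # e.g. 68 cases, 12 hand-designed gait features
y = np.random.randint(0, 2, size=68)  # e.g. good / poor surgical outcome

model = Pipeline([
    ("representation", StandardScaler()),   # feature representation step
    ("learner", LogisticRegression()),      # learning algorithm step
])
model.fit(X, y)
print(model.predict(X[:5]))
```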

  8. Features in motion • Joint position / angle • Velocity / acceleration • Distance between body parts • Contact status • …
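A rough sketch (not from the slides) of how such motion features could be computed from joint-angle and joint-position trajectories; the joint indices, frame rate, and contact threshold below are made-up placeholders.

```python
# Illustrative computation of the motion features listed above from a
# joint-angle / joint-position time series (all indices are placeholders).
import numpy as np

def motion_features(angles, positions, dt=1.0 / 120.0, contact_height=0.02):
    """angles: (T, J) joint angles in radians; positions: (T, J, 3) joint positions."""
    velocity = np.gradient(angles, dt, axis=0)        # joint angular velocity
    acceleration = np.gradient(velocity, dt, axis=0)  # joint angular acceleration
    # distance between two body parts, e.g. the feet (joint indices are placeholders)
    left_foot, right_foot = positions[:, 3], positions[:, 7]
    foot_distance = np.linalg.norm(left_foot - right_foot, axis=1)
    # binary contact status: foot height (y) below a threshold
    contact = (left_foot[:, 1] < contact_height).astype(float)
    # summary statistics like the min/max used for the surgery-outcome features
    return np.concatenate([
        angles.min(axis=0), angles.max(axis=0),
        np.abs(velocity).max(axis=0), np.abs(acceleration).max(axis=0),
        [foot_distance.mean(), contact.mean()],
    ])
```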

  9. Features in computer vision: SIFT, spin images, HoG, RIFT, GLOH, textons

  10. Machine learning and features

  11. Outline • Feature selection • - Feature ranking • - Subset selection: wrapper, filter, embedded • - Recursive Feature Elimination • - Combination of weak learners (boosting) • - AdaBoost (classification) / joint boosting (classification) / gradient boosting (regression) • Prediction result with feature selection • Feature learning?

  12. Feature selection • Alleviates the effect of the curse of dimensionality • Improves prediction performance • Makes prediction faster and more cost-effective • Provides a better understanding of the data
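As an illustration of the feature-ranking idea from the outline (not the authors' code), a filter-style sketch that scores each feature independently by its absolute correlation with the outcome; the data and the number of kept features are placeholders.

```python
# Filter-style feature ranking: score each feature on its own by its absolute
# correlation with the outcome and keep the top-k (illustrative sketch only).
import numpy as np

def rank_features(X, y, k=5):
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    top = np.argsort(scores)[::-1][:k]   # indices of the k highest-scoring features
    return top, scores

X = np.random.randn(35, 20)   # e.g. 35 surgery cases, 20 candidate gait features
y = np.random.randn(35)       # post-surgery outcome measure
selected, scores = rank_features(X, y, k=5)
print(selected)
```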

  13. Subset selection • Wrapper • Filter • Embedded
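A minimal wrapper-style example using Recursive Feature Elimination (mentioned in the outline); this sketch uses scikit-learn's RFE with a ridge regressor on placeholder data, not the actual surgery dataset.

```python
# Wrapper-style subset selection via Recursive Feature Elimination (RFE):
# repeatedly fit a model and drop the weakest feature (illustrative sketch).
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import Ridge

X = np.random.randn(33, 20)   # e.g. 33 surgery cases, 20 candidate features
y = np.random.randn(33)       # post-surgery outcome measure

selector = RFE(estimator=Ridge(), n_features_to_select=5)
selector.fit(X, y)
print(selector.support_)   # boolean mask of the selected feature subset
print(selector.ranking_)   # 1 = selected; larger ranks were eliminated earlier
```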

  14. Feature learning? • Can we automatically learn a good feature representation? • Known as: unsupervised feature learning, feature learning, deep learning, representation learning, etc. • Hand-designed features (by humans): • 1. require expert knowledge • 2. require time-consuming hand-tuning • When it is unclear how to hand-design features: automatically learned features (by machine)

  15. Learning Feature Representations • Key idea: • - Learn statistical structure or correlation of the data from unlabeled data • - The learned representations can be used as features in supervised and semi-supervised settings

  16. Learning Feature Representations [diagram: input (image/features) → encoder (feed-forward/bottom-up path) → output features; a decoder (feed-back/generative/top-down path) maps the features back to the input]

  17. Learning Feature Representations • Predictive Sparse Decomposition [Kavukcuoglu et al., '09] [diagram: input patch x → encoder filters W with sigmoid σ(Wx) → sparse features z under an L1 sparsity penalty → decoder filters D reconstruct the patch as Dz]
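To make the diagram concrete, here is a sketch of the PSD objective as drawn on the slide: reconstruction with the decoder Dz, L1 sparsity on the code z, and an encoder σ(Wx) trained to predict z. The weighting constants lam and alpha are assumptions, not values from the paper.

```python
# Sketch of the Predictive Sparse Decomposition objective (illustrative only).
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def psd_loss(x, z, D, W, lam=0.1, alpha=1.0):
    reconstruction = np.sum((x - D @ z) ** 2)               # decoder term ||x - Dz||^2
    sparsity = lam * np.sum(np.abs(z))                      # L1 sparsity on the code z
    prediction = alpha * np.sum((z - sigmoid(W @ x)) ** 2)  # encoder predicts the code
    return reconstruction + sparsity + prediction
```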

  18. Stacked Auto-Encoders [Hinton & Salakhutdinov, Science '06] [diagram: input image → stack of encoder/decoder pairs, each producing a level of features → class label at the top]
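A minimal sketch of greedy layer-wise pretraining for a stacked auto-encoder (not the authors' implementation): each layer is trained to reconstruct the features produced by the layer below, and its encoder output then becomes the input to the next layer. Layer sizes, learning rate, and the tied-weight choice are assumptions.

```python
# Greedy layer-wise pretraining of a stacked auto-encoder (illustrative sketch).
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_autoencoder_layer(H, n_hidden, lr=0.1, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_hidden, H.shape[1]))  # tied encoder/decoder weights
    for _ in range(epochs):
        Z = sigmoid(H @ W.T)            # encode
        R = Z @ W                       # decode (linear, tied weights)
        err = R - H                     # reconstruction error
        dZ = err @ W.T * Z * (1 - Z)    # backprop through the encoder nonlinearity
        grad = dZ.T @ H + Z.T @ err     # gradient w.r.t. tied W (up to a constant factor)
        W -= lr * grad / len(H)
    return W

X = np.random.randn(100, 32)            # unlabeled input data (placeholder)
weights, H = [], X
for n_hidden in (16, 8):                 # two stacked layers
    W = train_autoencoder_layer(H, n_hidden)
    weights.append(W)
    H = sigmoid(H @ W.T)                 # learned features feed the next layer
```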

  19. At Test Time [Hinton & Salakhutdinov, Science '06] • Remove decoders • Use the feed-forward path only • Gives a standard (convolutional) neural network • Can fine-tune with backprop [diagram: input image → stacked encoders → features → class label]
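A sketch of the test-time network after the decoders are discarded: only the feed-forward encoder path plus a class-label layer remains. The weights below are random placeholders standing in for pretrained ones, and the backprop fine-tuning step is omitted.

```python
# Feed-forward-only network obtained by dropping the decoders (illustrative sketch).
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
encoder_weights = [rng.normal(size=(16, 32)), rng.normal(size=(8, 16))]  # stand-ins for pretrained encoders
classifier = rng.normal(size=(3, 8))                                     # class-label layer

def predict(x):
    h = x
    for W in encoder_weights:         # feed-forward / bottom-up path only
        h = sigmoid(W @ h)
    return np.argmax(classifier @ h)  # predicted class label

print(predict(rng.normal(size=32)))
```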

  20. Status & plan • Understanding the data / surveying learning techniques… • Plan: finish experiments by the end of November • December: paper writing • January: SIGGRAPH submission • August: presentation in the US • But before all of that….

  21. Deep neural net vs. boosting • Deep nets: • - a single highly non-linear system • - a “deep” stack of simpler modules • - all parameters are subject to learning • Boosting & forests: • - a sequence of “weak” (simple) classifiers that are linearly combined to produce a powerful classifier • - subsequent classifiers do not exploit the representations of earlier classifiers; it is a “shallow” linear mixture • - typically, features are not learned
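To make the boosting side of the comparison concrete, here is a sketch of gradient boosting for regression (as listed in the outline) with depth-1 decision stumps as the weak learners, linearly combined with a small learning rate; the data and hyper-parameters are placeholders.

```python
# Gradient boosting for regression with decision stumps (illustrative sketch).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.random.randn(200, 5)                       # placeholder features
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(200)  # placeholder regression target

prediction = np.full(len(y), y.mean())   # start from a constant model
stumps, lr = [], 0.1
for _ in range(100):
    residual = y - prediction            # negative gradient of the squared loss
    stump = DecisionTreeRegressor(max_depth=1).fit(X, residual)  # weak learner
    prediction += lr * stump.predict(X)  # linear combination of weak learners
    stumps.append(stump)

print(np.mean((y - prediction) ** 2))    # training error after boosting
```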

  22. Deep neural net vs. boosting
