
  1. Evaluation of a Pointing Interface for a Large Screen Based on Regression Model with Image Features Koichi Shintani† Tomohiro Mashita†‡ Kiyoshi Kiyokawa†‡ Haruo Takemura†‡ †Graduate School of Information Science and Technology, Osaka University, Japan ‡Cybermedia Center, Osaka University, Japan

  2. Background • Gesture input interfaces have become widespread • However, commonly used gesture interfaces are limited to small screens (e.g., touch screens and the Nintendo DS)

  3. Examples of Pointing Interfaces • Wii Remote [1] • For mid- to large-size screens • Pointing coordinates are based on the device's orientation and position • Vogel et al.'s work [2] • For large-size screens • Ways of pointing: touch screen, relative displacement, ray casting [1] Wii Remote. Nintendo Co., Ltd., http://www.nintendo.co.jp/ [2] D. Vogel and R. Balakrishnan. Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays. Proceedings of UIST 2005, the ACM Symposium on User Interface Software and Technology, pp. 33-42.

  4. Drawbacks of Existing Pointing Interfaces • Existing interfaces suffer from problems of spatial cognition: the position a user believes they are indicating can differ from the actual target

  5. Effect of Spatial Cognition • Example of the effect: errors between real space and cognitive space (Soechting et al. [3]) [3] J. F. Soechting and M. Flanders. Sensorimotor Representations for Pointing to Targets in Three-Dimensional Space.

  6. Classification of Pointing Interfaces (Chart: interfaces arranged by screen size, from small screens at reaching distance to large screens at walking distance, against cognitive load; touch screens and the Nintendo DS occupy the small-screen end, while Vogel et al., Microsoft Kinect, PlayStation Move, the Wii Remote, Hands-In, and gyroscopic mice cover the large-screen end; the low-cognitive-load, large-screen region is marked open with a "?")

  7. Easy Pointing Interface • Goals: hands-free operation, reduced negative effects on spatial cognition, real-time interaction • Approach: a vision-based system with appearance learning and direct mapping by linear regression

  8. Prototype System (Diagram: projector at 1024×768 resolution projecting onto a 2.8 m × 2.1 m screen; camera at 640×480 resolution; the user stands about 2.5 m from the screen)

  9. Flow of Proposed Method (Diagram: image features extracted from camera images of the pointing user are mapped to screen coordinates through learned regression coefficients)

  10. Training Dataset

  11. Image Features • Eigenimage: calculate eigenvectors of images • Moment features: directions of principal axes of inertia and centroids of images (Figure: principal axis of inertia and centroid)
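A minimal sketch (not the authors' implementation) of how these two feature types can be computed with NumPy; the function names, the grayscale input, and the choice of k eigenvectors are our assumptions:

```python
import numpy as np

def moment_features(img):
    """Centroid and principal-axis angle of a grayscale image (assumed form)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m00 = img.sum()
    cx, cy = (img * xs).sum() / m00, (img * ys).sum() / m00
    # Central second-order moments give the inertia (covariance) matrix.
    mu20 = (img * (xs - cx) ** 2).sum() / m00
    mu02 = (img * (ys - cy) ** 2).sum() / m00
    mu11 = (img * (xs - cx) * (ys - cy)).sum() / m00
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # principal axis of inertia
    return np.array([cx, cy, theta])

def eigenimage_features(images, k=10):
    """Project flattened images onto their top-k eigenvectors (PCA)."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    # Rows of vt are the eigenvectors (eigenimages) of the centered image set.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return (X - mean) @ vt[:k].T, mean, vt[:k]
```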

  12. Learning Phase • Y is a matrix consisting of the (x, y) coordinates of the target points • X is a matrix consisting of intercept terms and image features • The image features are either eigenimage coefficients or moment features • B is a matrix of regression coefficients • The linear model Y = XB is fit by least squares: B = (XᵀX)⁻¹XᵀY
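A minimal sketch of this learning phase under the notation above; the function name and shapes are our assumptions, and `np.linalg.lstsq` stands in for the normal-equation solution as a numerically stable equivalent:

```python
import numpy as np

def learn_regression(features, targets):
    """features: (n, d) image features; targets: (n, 2) target (x, y) coords."""
    n = len(features)
    X = np.hstack([np.ones((n, 1)), features])  # prepend the intercept term
    # Least-squares solution of Y = X B, i.e. B = (X^T X)^{-1} X^T Y.
    B, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return B  # shape (d + 1, 2): one column for x, one for y
```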

  13. Estimation Phase • B is the matrix of regression coefficients learned above • f is a vector consisting of the image features (either eigenimage coefficients or moment features) of a new input image • The estimated coordinates (x, y) are obtained as (x̂, ŷ) = [1, fᵀ] B
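A matching sketch of the estimation phase; `estimate_position` is a hypothetical helper that applies the coefficients learned in the previous sketch:

```python
import numpy as np

def estimate_position(B, f):
    """B: (d + 1, 2) learned coefficients; f: (d,) features of one image."""
    x = np.concatenate([[1.0], f])  # intercept term followed by the features
    return x @ B                    # estimated (x_hat, y_hat) on the screen
```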

  14. Experiments • Experiment 1 • Evaluation of estimation accuracy • With 6 test subjects using their own datasets • Experiment 2 • Increasing the amount of training data • With 1 test subject, 6 datasets

  15. Datasets for Evaluation • Test Dataset: a dataset with target positions that were the same as the training data • Midpoint Dataset: a dataset with target positions at the midpoints of the training data (Figure: example positions for the training/test dataset and the midpoint dataset)
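A hedged sketch of how the two target layouts could be generated; the 10×10 grid size is our assumption, chosen only because it is consistent with the 28 × 21 cm block size reported with the results below:

```python
import numpy as np

def target_grids(width=2.8, height=2.1, nx=10, ny=10):
    """Centers of an nx-by-ny target grid plus the midpoints between them."""
    xs = (np.arange(nx) + 0.5) * width / nx
    ys = (np.arange(ny) + 0.5) * height / ny
    train = [(x, y) for y in ys for x in xs]          # training/test targets
    mid = [((x1 + x2) / 2, (y1 + y2) / 2)             # midpoint targets
           for y1, y2 in zip(ys[:-1], ys[1:])
           for x1, x2 in zip(xs[:-1], xs[1:])]
    return train, mid
```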

  16. Experiments • Experiment 1 • Evaluation of estimation accuracy • With 6 test subjects using their own datasets • Experiment 2 • Increasing the amount of training data • With 1 test subject, 6 datasets

  17. Experiment 1: Evaluation of Estimation Accuracy • Procedure • Take a training dataset • Take a dataset for evaluation • Estimate indicated positions using the evaluation dataset in two ways (moment features or eigenimage) • Subjects: 6 students (22–26 years old)

  18. Estimation Results for the Test Datasets • Mean error over all subjects • Eigenimage: ~23 cm • Moment features: ~20 cm • Block size: 28 × 21 cm (Figures: example estimation results for moment features and eigenimage)

  19. Estimation Results for the Midpoint Datasets • Mean error over all subjects • Eigenimage: ~25 cm • Moment features: ~22 cm (Figures: example estimation results for eigenimage and moment features)

  20. Experiment 1: Discussion • Errors down a column point in the same direction • This is likely due to the order in which the targets were pointed at (Figures: error vectors on the test datasets for eigenimage and moment features)

  21. Video (Eigenimage)

  22. Video (Moment Features)

  23. Experiments • Experiment 1 • Evaluation of estimation accuracy • With 6 test subjects using their own datasets • Experiment 2 • Increasing the amount of training data • With 1 test subject, 6 datasets

  24. Experiment 2: Increasing the Amount of Training Data • Evaluate the relation between the size of the training dataset and accuracy (moment features) • Using 1 dataset for estimation • Using 6 datasets for estimation • 1 subject (Figure: 1 dataset vs. 6 datasets)

  25. Estimation Results for the Test Datasets • Screen size: 2.8 m wide × 2.1 m tall • A red line marks trials whose estimation error exceeds 102 pixels along the x axis or 76 pixels along the y axis • (Figure: per-run result panels for "using 1 set" and "using 6 sets"; mean errors across the panels: 24 cm, 17 cm, 28 cm, 16 cm, 17 cm, 15 cm, 37 cm)
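For scale, the red-line thresholds can be converted to screen distance using the 1024×768 projector resolution and the 2.8 m × 2.1 m screen from the prototype description (the equivalence to one grid block is our inference):

```latex
\frac{102}{1024}\times 2.8\,\mathrm{m} \approx 0.28\,\mathrm{m},
\qquad
\frac{76}{768}\times 2.1\,\mathrm{m} \approx 0.21\,\mathrm{m}
```

That is, the threshold corresponds to roughly one 28 × 21 cm block of the target grid.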

  26. Estimation Results for the Midpoint Datasets • Screen size: 2.8 m wide × 2.1 m tall • A red line marks trials whose estimation error exceeds 102 pixels along the x axis or 76 pixels along the y axis (the same thresholds as above) • (Figure: per-run result panels for "using 1 set" and "using 6 sets"; mean errors across the panels: 17 cm, 31 cm, 15 cm, 29 cm, 23 cm, 27 cm, 55 cm)

  27. Experiment 2: Discussion • The estimation errors with 6 datasets are lower than those with 1 dataset • Increasing the amount of training data lessens the influence of small variations in the pointing motion

  28. Conclusion • A method to estimate the positions on a large screen indicated by a user's pointing gestures • With the prototype system, moment features proved more stable and more suitable than eigenimages • Across all subjects, the estimation accuracy is about 5 degrees
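As a rough consistency check (our arithmetic, assuming the ~2.5 m user-to-screen distance from the prototype setup), the ~20 cm mean errors correspond to an angular error near the stated 5 degrees:

```latex
\theta = \arctan\!\left(\frac{0.20\,\mathrm{m}}{2.5\,\mathrm{m}}\right) \approx 4.6^\circ
```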

  29. Future Work • Enhanced interaction • Using motion gestures, hand postures, etc. • Improved estimation accuracy • Recognizing the relation between the user's posture and pointing motion • Developing a practical application
