

  1. Action Surveillance Using Sparse Wearable Inertial Sensors Student: Shih-Yu Lin Advisor: I-Chen Lin

  2. Outline • Introduction • Related Work • Pre-processing • Clip-based OLNG • Interpolated Clip-based OLNG • Experiments and Results • Conclusion

  3. Introduction • Motion reconstruction using pre-recorded motion capture data is an important topic in computer animation. • However, most existing techniques reconstruct full-body motion with high-cost vision-based or magnetic motion capture devices.

  4. Introduction • We propose an approach to remotely monitor a user's motion from wearable sensing devices, for elder care and other surveillance applications. • Our goal is to take advantage of inexpensive, portable devices while still providing accuracy as reliable as that of high-cost devices.

  5. Introduction • In our thesis, we use Wii Remotes with MotionPlus as our sensing devices. • The advantages of Wii Remotes: • Low cost • Portable • Fewer environmental constraints

  6. Introduction • These inertial sensors can easily be worn by users as input devices and do not interfere with their daily lives. • The Wii Remote provides accelerations and angular velocities. • We focus on daily behaviors and reconstruct motion from the sensing data.

  7. Related Work • Our system is mainly built on two concepts: • Online Lazy Neighborhood Graph • Motion Field • PhaseSpace motion capture [PhaseSpace10] • Requires an array of calibrated high-resolution cameras • Manual data cleaning is needed for occlusions

  8. Related Work • Accelerometer-based user interfaces for the control of a physically simulated character [Shiratori et al. 2008] • Uses Wii Remotes to obtain accelerations for controlling physically based animation. • Performance animation from low-dimensional control signals [Chai et al. 2005] • Estimates the character's pose from two synchronized cameras and a small number of markers.

  9. Related Work • Motion synthesis from annotations [Arikan et al. 2003] • Synthesizes motion by letting the user specify when each action should occur.

  10. Related Work • Motion fields for interactive character locomotion [Lee et al. 2010] • Models the set of possible motions at each character state and selects the next state with a value function.

  11. Related Work Online Lazy Neighborhood Graph • Motion reconstruction using sparse accelerometer data [Tautges et al. 2011] • They propose a structure called the Online Lazy Neighborhood Graph (OLNG) to efficiently identify poses in a pre-recorded database. • Based on the K-Nearest Neighbor algorithm. [Figure: pipeline — sensing data → KNN → energy minimization → form continuous paths → result]

  12. Pre-processing • Our training data is from the CMU Mocap Lab. • To focus on users' everyday lives, we choose actions such as climbing, jumping, lying, lifting, sitting, boxing, and walking.

  13. Pre-processing • Rename the files from the different knowledge bases with a common naming pattern, {Motion}{Index}.bvh, so the database can be grouped easily. • Calculate the distance between frames in each motion to simplify the database, as in the sketch below.
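As a rough illustration of that frame-distance step, the following Python sketch prunes near-duplicate frames from a parsed BVH motion; the function name, the joint-angle representation, and the threshold value are assumptions for illustration, not details taken from the thesis.

```python
import numpy as np

def prune_similar_frames(frames, threshold=0.05):
    """Drop frames that are nearly identical to the previously kept frame.

    `frames` is an (n_frames, n_dims) array of joint-angle vectors parsed
    from a {Motion}{Index}.bvh file; `threshold` is a hypothetical distance
    cutoff, not a value from the thesis.
    """
    kept = [frames[0]]
    for frame in frames[1:]:
        # Euclidean distance between the candidate frame and the last kept one.
        if np.linalg.norm(frame - kept[-1]) >= threshold:
            kept.append(frame)
    return np.stack(kept)
```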

  14. Pre-processing • Clip the training data into several sequences to reduce the search time. • We select the first frame of each sequence as the index for constructing the kd-tree structure (see the sketch below). [Figure: the training motion data is split into motion sequences 1 … x, each covering a range of frames, before the clip-based OLNG and motion synthesis stages]
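A minimal sketch of this clipping and indexing step, assuming fixed-length clips and SciPy's cKDTree; the clip length and helper names are illustrative only.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_clip_index(motion, clip_len=30):
    """Split one motion (n_frames, n_dims) into fixed-length clips and index
    the clips by their first frame with a kd-tree.

    The clip length of 30 frames is a placeholder; the thesis does not state it here.
    """
    n_clips = len(motion) // clip_len
    clips = [motion[i * clip_len:(i + 1) * clip_len] for i in range(n_clips)]
    # The first frame of every clip serves as the search key.
    first_frames = np.stack([clip[0] for clip in clips])
    return cKDTree(first_frames), clips
```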

  15. Pre-processing • We ask the user to wear Wii Remotes on both wrists, both legs, and the chest. • We require the user to stand in a T-pose for calibration before motion capture.
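One way such a T-pose calibration could be realized is sketched below: the average reading per sensor during the T-pose is stored and subtracted from later readings. The array shapes and the bias-subtraction strategy are assumptions for illustration, not the exact procedure of the thesis.

```python
import numpy as np

def calibrate_t_pose(samples):
    """Estimate a per-sensor offset from readings captured during the T-pose.

    `samples` is a (n_samples, 4, 3) array: a short burst of 3D accelerations
    from the four limb-mounted Wii Remotes.  Averaging gives the rest-pose
    reading (gravity plus sensor bias), later subtracted from live data.
    """
    offset = samples.mean(axis=0)            # (4, 3) rest-pose reading
    return lambda reading: reading - offset  # apply to each incoming (4, 3) frame
```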

  16. Pre-processing • In our system, the result is biased by which initial state is chosen. • The Wii Remotes worn on the user's limbs send 3D accelerations and angular velocities to the OLNG system. • The Wii Remote worn on the user's chest sends its orientation to determine the direction of the character.

  17. Pre-processing

  18. [Figure: system overview — the sensor data and the motion capture database are pre-processed into a kd-tree; the clip-based OLNG produces candidates that are interpolated and post-processed for online motion reconstruction]

  19. Clip-based OLNG • Before constructing the OLNG, we use the training database to build a kd-tree structure. • The structure is composed of the 3D accelerations of the four limbs. • The total size is 4 × 3 × t: • 4: number of Wii Remotes • 3: x, y, z dimensions • t: total number of frames
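The kd-tree construction might look like the following sketch, where each database frame is flattened into a 12-dimensional feature vector (4 sensors × 3 axes); the variable names are illustrative.

```python
from scipy.spatial import cKDTree

def build_accel_kdtree(accel_db):
    """Build a kd-tree over the database accelerations.

    `accel_db` is a (t, 4, 3) array of 3D accelerations for the four limb
    sensors over all t database frames (an assumed layout).
    """
    # Flatten each frame's 4 x 3 accelerations into one 12-D feature vector.
    features = accel_db.reshape(len(accel_db), -1)   # (t, 12)
    return cKDTree(features)
```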

  20. Clip-based OLNG • Assume the current frame of 3D accelerations arrives at time t. • Fix the number K of nearest neighbors. • Let the candidate set of size K consist of the frame indices of the nearest neighbors of the current reading. • Consider the last several sensor readings. [Figure: the sensing device feeds KNN, and the candidates are stored alongside the last storages]
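A hedged sketch of the per-frame KNN query against that kd-tree, with K = 8 as on the next slide; the function name and return convention are assumptions.

```python
def query_candidates(tree, reading, k=8):
    """Return the frame indices of the k nearest database frames for the
    current sensor reading.

    `reading` is a (4, 3) array of limb accelerations; `tree` is the
    cKDTree built above.
    """
    distances, indices = tree.query(reading.reshape(-1), k=k)
    return distances, indices   # the candidate set stored for time t
```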

  21. Clip-based OLNG • The nodes of the OLNG can be represented by an array, where the number of stored readings is 4 and K is 8. • When a new reading arrives, we acquire its nearest neighbors with the KNN approach and store them. • We find continuity between the new candidates and the previous storage. • Update the clip-based OLNG. [Figure: new reading → KNN → storage → find continuity with the last storage's candidates → update the clip-based OLNG]
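The update step could be sketched as follows: the OLNG keeps the last 4 columns of K = 8 candidate frame indices and links a new candidate to the previous candidates whose frame index it continues. The continuity test used here (a small frame gap) is an assumption; the thesis may define continuity differently.

```python
WINDOW, K = 4, 8   # values taken from the slide

def update_olng(columns, new_indices, max_gap=2):
    """Append the new column of K candidate frame indices and record which
    candidates continue a path from the previous column."""
    links = []
    if columns:
        prev = columns[-1]
        for idx in new_indices:
            # parents: previous-column candidates this frame can extend
            parents = [p for p in prev if 0 < idx - p <= max_gap]
            links.append(parents)
    columns.append(list(new_indices))
    if len(columns) > WINDOW:          # keep only the last WINDOW columns
        columns.pop(0)
    return links

columns = []   # the OLNG array: up to WINDOW columns of K frame indices each
```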

  22. [Figure: clip-based OLNG example — columns S_{t-3}, S_{t-2}, S_{t-1}, S_t each hold eight candidate frame indices, and edges connect frames that are continuous across columns]

  23. [Figure: the same clip-based OLNG example after a new column S_{t+1} arrives — the new candidate frames are linked to the existing paths]

  24. Clip-based OLNG • When new sensing data arrives, we connect new paths with the existing paths. • We denote the cost of these paths at the current time. • The positions, velocities, and accelerations of the joints are expressed with respect to the root coordinate system.

  25. Clip-based OLNG • We introduce normalized weights for the candidates. • We aim to find a pose that optimally satisfies the constraints imposed by the observations.

  26. Clip-based OLNG • The prior term is composed of three components: • Pose prior • Motion prior • Smooth prior [Figure: the energy minimization problem combines the pose, motion, and smooth priors with the control term]

  27. Clip-based OLNG • The pose prior uses joint angles to characterize the probability of a pose. • The motion prior uses the joint positions of a pose to account for the temporal evolution of a motion. • The smooth prior uses accelerations to measure the continuity between two poses and reduce jerkiness. • The control term is computed from the joint positions acquired via the Wii Remotes.

  28. Clip-based OLNG • We incorporate the prior term and the control term into an energy minimization problem. • The result of the energy minimization serves as the highest-priority motion clip candidate, as in the sketch below.
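A simplified sketch of this ranking step: each candidate clip is assumed to carry scalar values for the pose, motion, and smooth priors and the control term, and the candidate with the lowest weighted sum wins. The term weights and the dictionary layout are placeholders, since the slides only name the terms.

```python
def pick_best_candidate(candidates, weights=(1.0, 1.0, 1.0, 1.0)):
    """Return the candidate motion clip with the lowest combined energy."""
    w_pose, w_motion, w_smooth, w_control = weights

    def energy(c):
        # Weighted sum of the four terms named on the slides.
        return (w_pose * c["pose_prior"]
                + w_motion * c["motion_prior"]
                + w_smooth * c["smooth_prior"]
                + w_control * c["control"])

    return min(candidates, key=energy)
```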

  29. Clip-based OLNG • Let one candidate motion clip be one state. • The state consists of: • 3D root positions • Joint orientations • A pose represents this state; it is composed of the 3D root position vector, the root orientation, and the joint orientations.

  30. [Figure: the candidate states at S_{t-1} connect to the K last-frame states at S_t; these connected states are called motion states]

  31. Interpolated Clip-based OLNG • After the OLNG step, we connect the closest paths and obtain candidate motion clips. • Define a motion state. • Calculate the similarity between the highest-priority candidate and each of the other candidate motion states.

  32. Interpolated Clip-based OLNG • We use the similarities to calculate the similarity weights.

  33. Interpolated Clip-based OLNG • Synthesize the final motion from the set of all candidate motion clips via the similarity weights (see the sketch below). • The weighted combination is applied to the j-th joint of each pose.
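A minimal sketch of the similarity-weighted synthesis, blending candidate joint positions with normalized weights; blending positions instead of full root-plus-orientation poses is a simplification of what the thesis synthesizes.

```python
import numpy as np

def blend_candidates(candidate_poses, similarities):
    """Blend the poses of the candidate motion clips with normalized
    similarity weights.

    `candidate_poses` is a (k, n_joints, 3) array of joint positions and
    `similarities` a length-k array of similarities to the highest-priority
    candidate.
    """
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()                                    # normalized similarity weights
    # Weighted combination applied independently to every joint j.
    return np.tensordot(w, candidate_poses, axes=1)    # (n_joints, 3)
```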

  34. [Figure: motion synthesis — (1) compute the distance d() between the eight OLNG candidates (priority 1–8), (2) normalize the similarities into similarity weights, (3) weighted combination of the candidate motions via those weights]

  35. Interpolated Clip-based OLNG • Poses of different heights may be synthesized. • Foot skating can occur.

  36. Interpolated Clip-based OLNG • Because a variety of motions are blended, the synthesized result may be visually discontinuous. • Low-pass filter • This temporal technique is a kind of Gaussian filter and alleviates the discontinuities in real time (see the sketch below).
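The temporal filter could be sketched as a causal Gaussian-weighted average over the last few synthesized poses, which keeps the smoothing usable in real time; the window size and sigma here are illustrative choices.

```python
import numpy as np

def temporal_gaussian_filter(recent_poses, sigma=1.0):
    """Smooth the newest pose with a Gaussian-weighted average of the last
    few synthesized poses.

    `recent_poses` is a (w, n_joints, 3) array with the newest pose last.
    """
    w = len(recent_poses)
    offsets = np.arange(w - 1, -1, -1)                 # 0 for the newest pose
    kernel = np.exp(-0.5 * (offsets / sigma) ** 2)     # Gaussian weights
    kernel /= kernel.sum()
    return np.tensordot(kernel, recent_poses, axes=1)  # smoothed newest pose
```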

  37. [Figure: post-processing flow — the synthesized motion from the interpolated clip-based OLNG passes a low-pass filter; if the user stands on the ground in the real world and a foot position falls below the ground plane, the character pose is adjusted, otherwise it is left unchanged]
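A sketch of the ground-contact adjustment in that flow, assuming a pose stored as joint positions with y pointing up; lifting the whole pose by the penetration depth is a simplification of the "adjust character pose" step.

```python
def enforce_ground_contact(pose, foot_joint_ids, standing=True):
    """If the user stands on the ground in the real world and any foot joint
    of the synthesized pose falls below the ground plane (y < 0), translate
    the pose upward; otherwise leave it unchanged.

    `pose` is an (n_joints, 3) NumPy array of joint positions.
    """
    if not standing:
        return pose
    lowest = min(pose[j][1] for j in foot_joint_ids)   # lowest foot height
    if lowest < 0.0:
        pose = pose.copy()
        pose[:, 1] -= lowest                           # lift pose onto the plane
    return pose
```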

  38. Experiments and Results

  39. Experiments and Results

  40. Experiments and Results

  41. Experiments and Results [Figure: comparison over 5500 frames between the ground truth and a BVH with random noise (-0.01 to 0.01)]

  42. Experiments and Results [Figure: comparison over 5000 frames between the ground truth and a BVH with random noise (-0.01 to 0.01)]

  43. Experiments and Results [Figure: ground truth compared with a BVH with random noise (-0.01 to 0.01)]

  44. Experiments and Results • We can achieve online surveillance. • Demo

  45. Conclusion • The Wii Remote serves as our sensing device; it is portable, does not hinder daily behavior, and is adaptable to constrained environments. • We use the concept of motion fields to extend the OLNG and overcome its limitations. • Our approach is able to monitor a user's motion and provide reliable accuracy and natural synthesized motion, like other techniques that use high-cost devices.

  46. The End Q & A

  47. Appendix • The projection maps a vector x to the subspace formed by the components related to those joints that are next to the sensors.
