
EEC-693/793 Applied Computer Vision with Depth Cameras






Presentation Transcript


  1. EEC-693/793 Applied Computer Vision with Depth Cameras
  Lecture 15
  Wenbing Zhao
  wenbing@ieee.org

  2. Outline
  Approaches to gesture recognition:
  - Rules based
  - Single pose based (this lecture)
  - Multiple poses based
  - Machine learning based

  3. Gesture Recognition Engine
  The recognition engine typically performs the following tasks:
  - It accepts user actions in the form of skeleton data
  - It matches the data points against predefined logic for a specific gesture
  - It executes actions if the gesture is recognized
  - It responds to the user
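The task loop above can be sketched as a minimal, Kinect-independent engine. All names here are illustrative (not from the SDK); the real engine on slides 7-10 works on Skeleton data rather than a plain float array.

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of the recognize-and-respond loop: accept frame data,
// match it against each gesture's predefined logic, and respond by
// raising an event when a gesture is recognized.
public class MiniRecognitionEngine
{
    public event EventHandler<string> GestureRecognized;

    // Each gesture name maps to a predicate over the current frame's data.
    private readonly Dictionary<string, Func<float[], bool>> matchers =
        new Dictionary<string, Func<float[], bool>>();

    public void AddGesture(string name, Func<float[], bool> matcher)
    {
        matchers[name] = matcher;
    }

    // Run every registered matcher against one frame of data.
    public void Recognize(float[] frameData)
    {
        foreach (var pair in matchers)
        {
            if (pair.Value(frameData))
                GestureRecognized?.Invoke(this, pair.Key);
        }
    }
}
```

For example, registering a "hands close" matcher over two hand x-coordinates and feeding it frames raises the event on any frame where the predicate holds.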

  4. Gesture Recognition Engine

  5. Gesture Recognition Engine

  6. Recognizing Two Gestures

  7. Gesture Recognition Engine
  The GestureType and RecognitionResult enums, and the GestureEventArgs class passed to event handlers:

    public enum GestureType
    {
        HandsClapping,
        TwoHandsRaised
    }

    public enum RecognitionResult
    {
        Unknown,
        Failed,
        Success
    }

    public class GestureEventArgs : EventArgs
    {
        public GestureType gsType { get; internal set; }
        public RecognitionResult Result { get; internal set; }

        public GestureEventArgs(GestureType t, RecognitionResult result)
        {
            this.Result = result;
            this.gsType = t;
        }
    }

  8. Gesture Recognition Engine

    public class GestureRecognitionEngine
    {
        public GestureRecognitionEngine()
        {
        }

        public event EventHandler<GestureEventArgs> GestureRecognized;
        public Skeleton Skeleton { get; set; }
        public GestureType GestureType { get; set; }

        public void StartRecognize(GestureType t)
        {
            this.GestureType = t;
            switch (t)
            {
                case GestureType.HandsClapping:
                    this.MatchHandClappingGesture(this.Skeleton);
                    break;
                case GestureType.TwoHandsRaised:
                    this.MatchTwoHandsRaisedGesture(this.Skeleton);
                    break;
                default:
                    break;
            }
        }

  9. Gesture Recognition Engine

    float previousDistance = 0.0f;

    private void MatchHandClappingGesture(Skeleton skeleton)
    {
        if (skeleton == null)
        {
            return;
        }
        if (skeleton.Joints[JointType.HandRight].TrackingState == JointTrackingState.Tracked &&
            skeleton.Joints[JointType.HandLeft].TrackingState == JointTrackingState.Tracked)
        {
            float currentDistance = GetJointDistance(skeleton.Joints[JointType.HandRight],
                                                     skeleton.Joints[JointType.HandLeft]);
            if (currentDistance < 0.1f && previousDistance > 0.1f)
            {
                if (this.GestureRecognized != null)
                {
                    this.GestureRecognized(this,
                        new GestureEventArgs(GestureType.HandsClapping, RecognitionResult.Success));
                }
            }
            previousDistance = currentDistance;
        }
    }
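Note that the matcher above is edge-triggered: it fires only on the frame where the hand distance crosses below the threshold, not on every frame the hands stay together. The core predicate, extracted here for illustration (the class and method names are ours, not part of the engine):

```csharp
using System;

// Edge-trigger logic from MatchHandClappingGesture: a clap is detected
// only when the distance crosses from above the threshold to below it.
public static class ClapLogic
{
    public const float Threshold = 0.1f;

    public static bool IsClap(float previousDistance, float currentDistance)
    {
        return currentDistance < Threshold && previousDistance > Threshold;
    }
}
```

Because of this, holding the hands together does not repeatedly re-trigger the gesture.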

  10. Gesture Recognition Engine

    private void MatchTwoHandsRaisedGesture(Skeleton skeleton)
    {
        if (skeleton == null)
        {
            return;
        }
        float threshold = 0.3f;
        if (skeleton.Joints[JointType.HandRight].Position.Y >
                skeleton.Joints[JointType.Head].Position.Y + threshold &&
            skeleton.Joints[JointType.HandLeft].Position.Y >
                skeleton.Joints[JointType.Head].Position.Y + threshold)
        {
            if (this.GestureRecognized != null)
            {
                this.GestureRecognized(this,
                    new GestureEventArgs(GestureType.TwoHandsRaised, RecognitionResult.Success));
            }
        }
    }

  11. Gesture Recognition Engine

    private float GetJointDistance(Joint firstJoint, Joint secondJoint)
    {
        float distanceX = firstJoint.Position.X - secondJoint.Position.X;
        float distanceY = firstJoint.Position.Y - secondJoint.Position.Y;
        float distanceZ = firstJoint.Position.Z - secondJoint.Position.Z;
        return (float)Math.Sqrt(Math.Pow(distanceX, 2) + Math.Pow(distanceY, 2) +
                                Math.Pow(distanceZ, 2));
    }
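As a sanity check, the same Euclidean formula can be exercised on plain coordinates, without the SDK's Joint type (the class and method names below are ours, for illustration only):

```csharp
using System;

// Same Euclidean distance as GetJointDistance, written over raw
// coordinates so it runs without the Kinect SDK.
public static class GestureMath
{
    public static float Distance(float x1, float y1, float z1,
                                 float x2, float y2, float z2)
    {
        float dx = x1 - x2, dy = y1 - y2, dz = z1 - z2;
        return (float)Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }
}
```

For example, the distance from (0, 0, 0) to (0.3, 0.4, 0) is 0.5, a 3-4-5 triangle scaled to meters, which is well above the 0.1 m clap threshold used on slide 9.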

  12. Vectors, Dot Product, Angles
  Segments of the body can be represented as vectors. You can create your own class/struct for Vector3, or use the Unity Vector3 type.

  Building a segment vector from two joints:

    Vector3 seg;
    seg.X = Joint1.Position.X - Joint2.Position.X;
    seg.Y = Joint1.Position.Y - Joint2.Position.Y;
    seg.Z = Joint1.Position.Z - Joint2.Position.Z;

  Dot product of two vectors:

    Vector3 seg1, seg2;
    float dotproduct = seg1.X * seg2.X + seg1.Y * seg2.Y + seg1.Z * seg2.Z;

  Angle formed by two vectors, in degrees (Math.Sqrt and Math.Acos return double, so cast back to float):

    float seg1magnitude = (float)Math.Sqrt(seg1.X * seg1.X + seg1.Y * seg1.Y + seg1.Z * seg1.Z);
    float seg2magnitude = (float)Math.Sqrt(seg2.X * seg2.X + seg2.Y * seg2.Y + seg2.Z * seg2.Z);
    float angle = (float)(Math.Acos(dotproduct / (seg1magnitude * seg2magnitude)) * 180 / Math.PI);
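The formulas above can be put together in a self-contained sketch with its own minimal vector struct (named Vec3 here to avoid clashing with Unity's Vector3, which is not available outside Unity):

```csharp
using System;

// Minimal 3D vector plus the dot-product/angle math from this slide,
// self-contained so it runs without Unity or the Kinect SDK.
public struct Vec3
{
    public float X, Y, Z;
    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }

    public static float Dot(Vec3 a, Vec3 b)
    {
        return a.X * b.X + a.Y * b.Y + a.Z * b.Z;
    }

    public float Magnitude()
    {
        return (float)Math.Sqrt(X * X + Y * Y + Z * Z);
    }

    // Angle between two vectors, in degrees.
    public static float AngleDeg(Vec3 a, Vec3 b)
    {
        return (float)(Math.Acos(Dot(a, b) / (a.Magnitude() * b.Magnitude()))
                       * 180.0 / Math.PI);
    }
}
```

For instance, the angle between (1, 0, 0) and (0, 1, 0) comes out as 90 degrees; applied to body segments, this lets a rule test whether, say, an elbow is bent past some angle.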

  13. Build a Gesture Recognition App
  The app can recognize two gestures:
  - Clapping hands
  - Two hands raised

  Setting up the project:
  - Create a new C# WPF project named GestureRecognitionBasic
  - Add the Microsoft.Kinect reference, import the namespace, etc.
  - Add the GUI components
  - Create a new C# file named GestureRecognitionEngine.cs and copy the engine code into this class (in Solution Explorer, right-click, then Add => New Item)

  14. Build a Gesture Recognition App
  User interface components: TextBox, Canvas, Image

  15. Build a Gesture Recognition App
  Add member variables and modify the constructor:

    KinectSensor sensor;
    private WriteableBitmap colorBitmap;
    private byte[] colorPixels;
    Skeleton[] totalSkeleton = new Skeleton[6];
    Skeleton skeleton;
    GestureRecognitionEngine recognitionEngine;

    public MainWindow()
    {
        InitializeComponent();
        Loaded += new RoutedEventHandler(WindowLoaded);
    }

  16. Build a Gesture Recognition App

    private void WindowLoaded(object sender, RoutedEventArgs e)
    {
        if (KinectSensor.KinectSensors.Count > 0)
        {
            this.sensor = KinectSensor.KinectSensors[0];
            if (this.sensor != null && !this.sensor.IsRunning)
            {
                this.sensor.Start();
                this.sensor.ColorStream.Enable();
                this.colorPixels = new byte[this.sensor.ColorStream.FramePixelDataLength];
                this.colorBitmap = new WriteableBitmap(this.sensor.ColorStream.FrameWidth,
                    this.sensor.ColorStream.FrameHeight, 96.0, 96.0, PixelFormats.Bgr32, null);
                this.image1.Source = this.colorBitmap;
                this.sensor.ColorFrameReady += this.colorFrameReady;
                this.sensor.SkeletonStream.Enable();
                this.sensor.SkeletonFrameReady += skeletonFrameReady;
                recognitionEngine = new GestureRecognitionEngine();
                recognitionEngine.GestureRecognized += gestureRecognized;
            }
        }
    }

  17. Build a Gesture Recognition App
  Gesture recognized event handler (colorFrameReady(), DrawSkeleton(), drawBone(), and ScalePosition() are the same as before):

    void gestureRecognized(object sender, GestureEventArgs e)
    {
        textBox1.Text = e.gsType.ToString();
    }

  18. Build a Gesture Recognition App
  Handle the skeleton frame ready event:

    void skeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        canvas1.Children.Clear();
        using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
        {
            if (skeletonFrame == null)
            {
                return;
            }
            skeletonFrame.CopySkeletonDataTo(totalSkeleton);
            skeleton = (from trackskeleton in totalSkeleton
                        where trackskeleton.TrackingState == SkeletonTrackingState.Tracked
                        select trackskeleton).FirstOrDefault();
            if (skeleton == null)
                return;
            DrawSkeleton(skeleton);
            recognitionEngine.Skeleton = skeleton;
            recognitionEngine.StartRecognize(GestureType.HandsClapping);
            recognitionEngine.StartRecognize(GestureType.TwoHandsRaised);
        }
    }

  19. Challenge Tasks
  - Add recognition of two more gestures: right hand is raised; left hand is raised
  - Add GestureRecognitionEngine.cs to a Unity+Kinect app, and add visual feedback for the recognized gestures
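For the first challenge task, the core test for "hand is raised" can be reduced to a pure predicate on Y coordinates, mirroring MatchTwoHandsRaisedGesture on slide 10 (the class below and its default threshold are illustrative; in the real matcher the two Y values would come from skeleton.Joints[JointType.HandRight] and skeleton.Joints[JointType.Head]):

```csharp
using System;

// Core predicate for a one-hand-raised gesture: the hand's Y coordinate
// must exceed the head's Y by more than a threshold, as in
// MatchTwoHandsRaisedGesture but for a single hand.
public static class RaisedHandLogic
{
    public static bool IsHandRaised(float handY, float headY, float threshold = 0.3f)
    {
        return handY > headY + threshold;
    }
}
```

A MatchRightHandRaisedGesture method would then apply this predicate to the right hand only, and a new enum member (e.g. RightHandRaised) would be added to GestureType and dispatched from StartRecognize.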
