
VIRTUAL MOUSE



  1. VIRTUAL MOUSE

  2. INTRODUCTION • Over recent years, computer vision has come to play a significant role in Human-Computer Interaction (HCI). • With efficient object-tracking algorithms, it is possible to track the motion of a human hand in real time using a simple web camera. • This presentation discusses the design of a system that tracks the tip of the index finger in order to control the mouse pointer on the screen.

  3. PHYSICAL LAYOUT • A single camera (a web camera) is used to track the motion of the fingertip in real time. • The camera is mounted on top of the computer monitor or hooked onto the laptop screen.

  4. BASIC OVERVIEW • To move the mouse pointer, the user moves his index finger on a 2D plane (for example, on a table), with all other fingers folded to form a fist. • The underlying tracking algorithm works by segmenting skin colour from the background; for efficient tracking of the fingertip, the user's other hand must not be in the scene.

  5. BASIC OVERVIEW • To perform a left click, the user unfolds his thumb and then folds it back. • Note that if the hand pose is not as described above (all fingers folded into a fist except the index finger), the system simply tracks the hand but does not move the mouse pointer. Similarly, clicks are recognized only if the index finger has been spotted in the scene.

  6. SYSTEM OVERVIEW • The system consists of three parts, grouped into One Time processing (the on-line skin-colour training) and Continuous processing (hand tracking and gesture recognition).

  7. ONLINE TRAINING OF SKIN COLOURS • This allows the system to learn variations in skin colour due to changes in illumination and hand pose. • A 2D look-up table of Hue and Saturation values (probability-density values) from the HSV colour space is computed.
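The look-up table described above can be sketched as follows. This is a minimal numpy version assuming the skin-pixel H and S samples have already been collected from the user's hand; OpenCV's cv2.calcHist computes the same histogram natively, and all names here are illustrative, not from the slides:

```python
import numpy as np

def train_skin_lut(h, s, h_bins=30, s_bins=32):
    """Build a 2D Hue/Saturation look-up table of probability values
    from arrays of skin-pixel samples (H in [0, 180), S in [0, 256))."""
    hist, _, _ = np.histogram2d(h, s, bins=[h_bins, s_bins],
                                range=[[0, 180], [0, 256]])
    return hist / hist.sum()            # normalise to a probability density

def skin_probability(lut, h_px, s_px, h_bins=30, s_bins=32):
    """Look up the skin probability of a single pixel's (H, S) pair."""
    hi = min(h_px * h_bins // 180, h_bins - 1)
    si = min(s_px * s_bins // 256, s_bins - 1)
    return lut[hi, si]
```

Re-running the training periodically lets the table follow illumination changes, which is what makes the training "on-line".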

  8. DETECTING HAND REGION • A CAMSHIFT algorithm is used to detect the hand region; that is, detection is confined to a Region of Interest (ROI). • Initially, the ROI is set to the entire captured frame. • The hand region is detected using the H and S values from the 2D look-up table computed during the training phase. • A threshold operation gives the binary image B(x, y) of the hand region, where B(x, y) = 1 for pixels in the skin region.
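In the tracking phase the look-up table acts as a back-projection: each pixel's H and S values index into the table, and thresholding the resulting probability image yields B(x, y). A numpy sketch (OpenCV's cv2.calcBackProject performs this step natively; bin counts and names are illustrative):

```python
import numpy as np

def backproject_and_threshold(h_img, s_img, lut, thresh,
                              h_bins=30, s_bins=32):
    """Map every pixel's (H, S) pair through the trained look-up table
    and threshold the result to get the binary hand image B(x, y)."""
    hi = np.clip((h_img.astype(int) * h_bins) // 180, 0, h_bins - 1)
    si = np.clip((s_img.astype(int) * s_bins) // 256, 0, s_bins - 1)
    prob = lut[hi, si]                        # per-pixel skin probability
    return (prob >= thresh).astype(np.uint8)  # B(x, y) = 1 on skin pixels
```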

  9. DETECTING HAND REGION • The centre of the hand region (xc, yc) and its orientation θ are calculated using the 0th-, 1st-, and 2nd-order image moments. • The ROI of the next frame is then computed from this binary image. • The centre of the ROI for the next frame is set to the centre of the hand region in the current frame.

  10. DETECTING HAND REGION • The horizontal and vertical lengths (Rx, Ry) of the ROI of the next frame are computed as follows: • Rx = sx √M00, Ry = sy √M00 • where sx = cos|θ| + 1, sy = sin|θ| + 1 • and M00 is the 0th-order image moment.
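The moment computations above can be sketched directly from the binary image. This is a numpy sketch (cv2.moments returns the same quantities); taking the square root of M00 follows the standard CAMSHIFT convention that a window side length scales with the square root of the region's area, and the function name is illustrative:

```python
import numpy as np

def hand_centre_orientation_roi(B):
    """From the binary hand image B(x, y), compute the centroid (xc, yc),
    the orientation θ, and the next frame's ROI size (Rx, Ry)."""
    ys, xs = np.nonzero(B)
    m00 = len(xs)                       # 0th-order moment: region area
    xc, yc = xs.mean(), ys.mean()       # centroid from 1st-order moments
    mu20 = ((xs - xc) ** 2).mean()      # central 2nd-order moments
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    sx, sy = np.cos(abs(theta)) + 1, np.sin(abs(theta)) + 1
    rx, ry = sx * np.sqrt(m00), sy * np.sqrt(m00)
    return (xc, yc), theta, (rx, ry)
```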

  11. COMPUTATION OF FINGERTIP LOCATION • To detect the fingertip, the system first checks whether the hand is in the mouse-pointer pose (only the index finger unfolded, with the rest folded into a fist). • This is done by cropping the image around the hand region, smoothing it with a Gaussian kernel to reduce noise, and then analyzing the shape of the hand pose. • A simple method is implemented to analyze the shape of the hand.

  12. COMPUTATION OF FINGERTIP LOCATION • The image is converted into a binary image and the system scans it from top to bottom, row by row. • The system counts the skin pixels it encounters in each row and compares the count with the width of the finger. • If enough rows are found whose pixel count is greater than or equal to the finger width, the system proceeds to check whether a fist follows the finger.

  13. FINITE STATE MACHINE

  14. COMPUTATION OF FINGERTIP LOCATION • If enough rows with a pixel count greater than the fist width are found, the system reports that the pointing pose has been detected. • Essentially, a finite state machine detects the pointing pose in the image by analyzing the number of pixels in each row. • The dimensions of the finger and fist are determined during the on-line training process.

  15. COMPUTATION OF FINGERTIP LOCATION • The y coordinate of the fingertip is set to the first row whose pixel count exceeds the finger width. • The x coordinate is set to the centre of the finger in that row.
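The row-scanning state machine of slides 11-15 might look like this in outline. This is a hedged Python sketch: the width thresholds and minimum row counts would come from the on-line training step, and all names are illustrative:

```python
def find_fingertip(B, finger_w, fist_w, min_finger_rows, min_fist_rows):
    """Scan the binary image B row by row (top to bottom).  A run of
    rows at least finger_w wide, followed by a run at least fist_w
    wide, is the pointing pose; return the fingertip (x, y) or None."""
    state = "search"                   # search -> finger -> fist
    finger_rows = fist_rows = 0
    tip = None
    for y, row in enumerate(B):
        count = sum(row)               # skin pixels in this row
        if state == "search":
            if count >= finger_w:
                state, finger_rows = "finger", 1
                xs = [x for x, v in enumerate(row) if v]
                tip = (sum(xs) // len(xs), y)   # first wide row, finger centre
        elif state == "finger":
            if count >= fist_w:
                state, fist_rows = "fist", 1
            elif count >= finger_w:
                finger_rows += 1
            else:
                state, tip = "search", None     # pose broken: restart
        else:                          # state == "fist"
            if count >= fist_w:
                fist_rows += 1
            if finger_rows >= min_finger_rows and fist_rows >= min_fist_rows:
                return tip
    return None
```

Returning None when no fist follows the finger is what prevents a stray skin-coloured blob from being mistaken for the pointing pose.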

  16. COMPUTATION OF MOUSE CLICK • The system first checks whether the thumb is present in the image. • A similar finite state machine is implemented for this. • This time, however, the system scans the image column by column to determine whether the thumb is present. • Depending on whether the user is left- or right-handed, the scan begins from the left or the right side of the image.

  17. COMPUTATION OF MOUSE CLICK • The system checks whether enough columns with a pixel count greater than the thumb width are present in the image. • If a sufficient number of such columns is detected, the system checks whether a fist is also present in the image. • Once the fist is detected as well, the system declares that a thumb has been found, which is equivalent to a left-button-down event of the mouse.

  18. COMPUTATION OF MOUSE CLICK • When the user folds his thumb back, the system generates a mouse button-up event. • Note also that if the pointing pose was not detected in the image, mouse clicks are not detected at all. • The system maintains the status of the mouse button (left button only).
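Slides 16-18 suggest a column-wise analogue of the row scan plus a little button-state bookkeeping. A sketch under the same assumptions (illustrative names, width thresholds from training):

```python
def detect_thumb(B, thumb_w, fist_w, min_thumb_cols, min_fist_cols,
                 left_handed=False):
    """Column-wise analogue of the pointing-pose scan: enough columns
    at least thumb_w tall, then enough at least fist_w tall."""
    cols = range(len(B[0]))
    if left_handed:
        cols = reversed(cols)          # scan from the other edge
    state, thumb_cols, fist_cols = "search", 0, 0
    for x in cols:
        count = sum(row[x] for row in B)   # skin pixels in this column
        if state == "search":
            if count >= thumb_w:
                state, thumb_cols = "thumb", 1
        elif state == "thumb":
            if count >= fist_w:
                state, fist_cols = "fist", 1
            elif count >= thumb_w:
                thumb_cols += 1
            else:
                state = "search"       # run broken: restart
        else:                          # state == "fist"
            if count >= fist_w:
                fist_cols += 1
        if thumb_cols >= min_thumb_cols and fist_cols >= min_fist_cols:
            return True
    return False


class MouseButton:
    """Maintain the left-button state: thumb appears -> button-down,
    thumb folds back -> button-up; clicks are ignored unless the
    pointing pose is present."""
    def __init__(self):
        self.down = False

    def update(self, pointing_pose, thumb_present):
        if not pointing_pose:
            return None
        if thumb_present and not self.down:
            self.down = True
            return "down"              # left-button-down event
        if not thumb_present and self.down:
            self.down = False
            return "up"                # left-button-up event
        return None
```

Keeping the button state in one place makes the unfold/fold gesture generate exactly one down event and one up event per click.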

  19. FINITE STATE MACHINE

  20. DISPLAY OF MOUSE POINTER • Once the fingertip is detected, its coordinates need to be mapped to the coordinates of the mouse pointer on the monitor. • However, the fingertip locations cannot be used directly, due to the following problems:

  21. PROBLEMS IN MAPPING COORDINATES • Noise from sources such as segmentation errors makes it difficult to position the mouse pointer accurately. • Due to the limited tracking rate, the fingertip coordinates may be discontinuous from frame to frame. • The difference in resolution between the camera's input image and the monitor makes accurate positioning of the mouse pointer difficult.

  22. SOLUTION • The displacement of the detected fingertip is averaged over a few frames, and this average displacement is used to move the mouse cursor on the screen. • If this displacement is below a threshold value, the mouse cursor is not moved.
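The averaging-plus-dead-zone rule can be captured in a few lines. An illustrative sketch: the window size, threshold, and the `gain` factor (standing in for the camera-to-monitor resolution scaling of slide 21) are assumptions, not values from the slides:

```python
from collections import deque

class PointerSmoother:
    """Average fingertip displacement over the last few frames and
    suppress movements below a dead-zone threshold."""
    def __init__(self, window=5, dead_zone=2.0, gain=1.0):
        self.dx = deque(maxlen=window)
        self.dy = deque(maxlen=window)
        self.dead_zone = dead_zone
        self.gain = gain               # camera-to-monitor resolution scaling
        self.last = None

    def update(self, tip):
        """Feed one fingertip position; return the cursor displacement."""
        if self.last is None:
            self.last = tip
            return (0.0, 0.0)
        self.dx.append(tip[0] - self.last[0])
        self.dy.append(tip[1] - self.last[1])
        self.last = tip
        ax = sum(self.dx) / len(self.dx)
        ay = sum(self.dy) / len(self.dy)
        if (ax * ax + ay * ay) ** 0.5 < self.dead_zone:
            return (0.0, 0.0)          # jitter: do not move the cursor
        return (ax * self.gain, ay * self.gain)
```

The dead zone keeps the cursor still while the hand hovers, at the cost of ignoring very slow deliberate movements.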

  23. APPLICATIONS • This application gives the user an easy way to move the mouse cursor on the screen. • Since the user's hand is already free, the homing time (the time needed to place the hand on the mouse) is greatly reduced. • The click is implemented with a very simple gesture. • With more robust fingertip detection, this application could replace the mouse entirely.

  24. LIMITATIONS • If the background contains colours similar to skin, the algorithm will lose track of the hand or falsely report its location. • When the camera's height is changed, the system has reported false pose detections. A better way to detect the pointing pose would be a machine-learning algorithm (for example, a neural network). • The mouse cursor movement on the screen requires more smoothing, and the user cannot yet cover the entire screen.

  25. THANK YOU
