The Renci Multitouch Framework offers a comprehensive solution for touch detection and gesture tracking. It uses camera capture and filtering to detect multiple touch points simultaneously. The framework includes camera stitching, gesture detection, and transformation from camera coordinates to screen coordinates. Developers can acquire and manage new touches, track their movement, and perform blob detection. Calibration tools ensure accurate mapping and distortion correction, making it a powerful tool for interactive applications.
The Renci Multitouch Framework: Because touching things is just good fun.
Overview
• Camera Capture
• Filtering
• Touch Detection
• Camera to Stitching Transformation
• Tracking
• Stitching to Screen Transformation
• Gesture Detection
General Structure
Touch Detection & Tracking:
• Camera Stack
• Filter Graph: Camera, Background, High Pass, Threshold
• Blob Detection
• Camera to Stitching Transform
• Tracker: Tracking, Stitch to Screen Transform
Gesture Engine:
• Acquire New Touches
• Test against In-Flight Gestures & Cull
• Cluster Remaining Touches
• Compute Contraction / Dilation & Rotational Offset (see the sketch after this slide)
• Clip to Registered Touch Regions
• Push to Clients or Emit Mouse Events
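A minimal sketch of the contraction/dilation and rotational-offset step for a two-touch cluster, assuming the cluster has already been formed. The function name and input format are illustrative, not the framework's actual gesture-engine API.

```python
# Sketch only: compute pinch (contraction/dilation) and rotation offset
# for a cluster of two touches tracked across consecutive frames.
import math

def pinch_and_rotation(prev_pair, curr_pair):
    """prev_pair, curr_pair: ((x, y), (x, y)) touch positions from the
    previous and current frame. Returns (scale, angle_offset_radians)."""
    (p0, p1), (c0, c1) = prev_pair, curr_pair

    # Vector between the two touches in each frame.
    pv = (p1[0] - p0[0], p1[1] - p0[1])
    cv = (c1[0] - c0[0], c1[1] - c0[1])

    prev_len = math.hypot(pv[0], pv[1])
    curr_len = math.hypot(cv[0], cv[1])

    # Contraction (< 1) or dilation (> 1) of the cluster.
    scale = curr_len / prev_len if prev_len > 0 else 1.0

    # Rotational offset between the two inter-touch vectors.
    angle = math.atan2(cv[1], cv[0]) - math.atan2(pv[1], pv[0])
    return scale, angle
```

The resulting scale and angle would then be tested against any in-flight gestures before being pushed to clients or turned into mouse events.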
Camera Capture
• Video for Windows
• Prosilica
• Point Grey
• Sony PS3 Eye
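Whichever capture backend is used, each grabbed frame runs through the per-camera filter graph named in the structure above (background, high pass, threshold) before blob detection. A minimal sketch assuming 8-bit grayscale frames as 2-D numpy arrays; the function, parameter names, and default values are illustrative, not the framework's actual filters.

```python
# Sketch only: background subtract -> high pass -> threshold.
import numpy as np
from scipy import ndimage

def filter_frame(frame, background, blur_sigma=5.0, threshold=30):
    """frame, background: 8-bit grayscale frames as 2-D numpy arrays.
    Returns a binary image whose non-zero pixels are touch candidates."""
    # Background subtraction: remove the static scene.
    diff = np.clip(frame.astype(float) - background.astype(float), 0, 255)

    # High pass: subtract a blurred copy so only small bright features
    # (fingertip-sized spots) survive.
    low = ndimage.gaussian_filter(diff, sigma=blur_sigma)
    high = diff - low

    # Threshold to a binary touch image for blob detection.
    return (high > threshold).astype(np.uint8)
```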
Blob Detection
• Sample screen space given a minimum blob size
• Use a flood-fill style approach to blob region detection
• Back-fill with black to avoid multiple detections
• Compute centroid & bounding box
• Projective transform from camera to stitched coordinate system
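A minimal sketch of the sampling plus flood-fill approach, assuming the binary image from the filter graph is a 2-D numpy array. The function name, sampling stride, and size cutoff are illustrative choices, not the framework's exact values.

```python
# Sketch only: grid-sample the binary image, flood fill each hit,
# back-fill visited pixels so a blob is reported once, and record
# centroid plus bounding box.
import numpy as np
from collections import deque

def detect_blobs(binary, min_blob_size=8):
    h, w = binary.shape
    img = binary.copy()  # back-filling happens in this copy
    blobs = []
    # Sample on a grid no finer than the minimum blob size.
    for sy in range(0, h, min_blob_size):
        for sx in range(0, w, min_blob_size):
            if not img[sy, sx]:
                continue
            # Flood fill from the seed, collecting the blob's pixels.
            pixels = []
            queue = deque([(sy, sx)])
            img[sy, sx] = 0
            while queue:
                y, x = queue.popleft()
                pixels.append((x, y))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and img[ny, nx]:
                        img[ny, nx] = 0  # back-fill with black
                        queue.append((ny, nx))
            if len(pixels) < min_blob_size:
                continue  # too small to be a touch (cutoff is illustrative)
            xs = [p[0] for p in pixels]
            ys = [p[1] for p in pixels]
            centroid = (sum(xs) / len(xs), sum(ys) / len(ys))
            bbox = (min(xs), min(ys), max(xs), max(ys))
            blobs.append((centroid, bbox))
    return blobs
```

Only the resulting centroids (not whole frames) are then pushed through the projective transform into the stitched coordinate system.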
Tracking
• KD-Tree nearest neighbor to track a blob frame to frame
• Find the set of nearest neighbors from the KD-Tree
• Search along the line of the previous blob's gradient to find the nearest blob from that set
• Transform from stitched to screen coordinate system
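A minimal sketch of the nearest-neighbor lookup using scipy's cKDTree. For brevity it matches each previous blob to its single nearest new centroid within a distance cap; the gradient-directed search among the candidate set is not reproduced here, and all names and thresholds are illustrative.

```python
# Sketch only: frame-to-frame blob association via a KD-tree.
import numpy as np
from scipy.spatial import cKDTree

def track(prev_touches, new_centroids, max_dist=40.0):
    """prev_touches: {touch_id: (x, y)}; new_centroids: list of (x, y).
    Returns {touch_id: (x, y)} for touches matched in the new frame."""
    if not prev_touches or not new_centroids:
        return {}
    tree = cKDTree(np.asarray(new_centroids, dtype=float))
    matched = {}
    for touch_id, pos in prev_touches.items():
        dist, idx = tree.query(pos, k=1)  # nearest new blob
        if dist <= max_dist:
            matched[touch_id] = tuple(new_centroids[idx])
    return matched
```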
What Are All These Transforms?
• Math on all the pixels is expensive
• So save time by only doing the expensive mathy bits on the touches
• Optimally we want one thread per camera, each running its filter graph, blob detection, and transform
• Basically we're mapping quadrilaterals to other quadrilaterals:
• Camera system to stitched system
• Stitched system to screen system
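Mapping one quadrilateral onto another is a projective transform (homography), which four corresponding corner points pin down. A minimal sketch using numpy; the function names are illustrative, not the framework's API.

```python
# Sketch only: build a quad-to-quad homography from four point pairs and
# apply it to individual touch positions (never to whole frames).
import numpy as np

def quad_to_quad(src, dst):
    """src, dst: four (x, y) corners each. Returns the 3x3 homography H."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # Standard direct-linear-transform rows for u and v.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.asarray(A, dtype=float), np.asarray(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_transform(H, point):
    """Map a single (x, y) touch through the homography."""
    u, v, w = H @ np.array([point[0], point[1], 1.0])
    return (u / w, v / w)
```

Composing the camera-to-stitched and stitched-to-screen matrices (one matrix product per camera) means each touch costs a single 3x3 multiply and a divide, which is why the expensive math stays off the pixels.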
So Where Do We Get the Transformation Matrices?
• Calibration and more:
• Camera calibration to remove radial distortion
• Stitching calibration
• Screen calibration
• Touch calibration (determine filter graph values)
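Camera calibration supplies the lens parameters used to undo radial distortion before the quad-to-quad transforms apply. A minimal sketch of the common two-coefficient radial model applied to a single touch point; the coefficients k1, k2 and the optical centre are assumed to come from calibration, and the divide-based inversion is only an approximation of the full model.

```python
# Sketch only: approximate radial undistortion of one touch point.
def undistort(point, center, k1, k2):
    """point: distorted (x, y); center: optical centre (cx, cy);
    k1, k2: radial coefficients from camera calibration."""
    cx, cy = center
    x, y = point[0] - cx, point[1] - cy
    r2 = x * x + y * y
    # Forward model scales by this factor; dividing approximately undoes it.
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return (cx + x / factor, cy + y / factor)
```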