
Off-the-Shelf Vision-Based Mobile Robot Sensing


Presentation Transcript


  1. Off-the-Shelf Vision-Based Mobile Robot Sensing Zhichao Chen Advisor: Dr. Stan Birchfield Clemson University

  2. Vision in Robotics • A robot has to perceive its surroundings in order to interact with them. • Vision is promising for several reasons: • Non-contact (passive) measurement • Low cost • Low power • Rich capturing ability

  3. Project Objectives Path following: Traverse a desired trajectory in both indoor and outdoor environments. 1. "Qualitative vision-based mobile robot navigation", Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2006. 2. "Qualitative vision-based path following", IEEE Transactions on Robotics, 25(3):749-754, June 2009. Person following: Follow a person in a cluttered indoor environment. "Person following with a mobile robot using binocular feature-based tracking", Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), 2007. Door detection: Build a semantic map of the locations of doors as the robot drives down a corridor. "Visual detection of lintel-occluded doors from a single camera", IEEE Computer Society Workshop on Visual Localization for Mobile Platforms (in association with CVPR), 2008.

  4. Motivation for Path Following • Goal: Enable a mobile robot to follow a desired trajectory in both indoor and outdoor environments • Applications: courier, delivery, tour guide, scout robots • Previous approaches: • Image Jacobian [Burschka and Hager 2001] • Homography [Sagues and Guerrero 2005] • Homography (flat ground plane) [Liang and Pears 2002] • Man-made environment [Guerrero and Sagues 2001] • Calibrated camera [Atiya and Hager 1993] • Stereo cameras [Shimizu and Sato 2000] • Omni-directional cameras [Adorni et al. 2003]

  5. Our Approach to Path Following • Key intuition: Vastly overdetermined system (dozens of feature points, one control decision) • Key result: Simple control algorithm • Teach/replay approach using sparse feature points • Single, off-the-shelf camera • No calibration for camera or lens • Easy to implement (no homographies or Jacobians)

  6. Preview of Results [Figure: milestone image, current image, top-down view, and overview.]

  7. Tracking Feature Points Kanade-Lucas-Tomasi (KLT) feature tracker • Automatically selects features using the eigenvalues of the 2x2 gradient covariance matrix • Automatically tracks features by minimizing the sum of squared differences (SSD) between consecutive gray-level images I and J over a window W: ε = Σ_W [ I(x + d) − J(x) ]², where d is the unknown displacement • Augmented with gain and bias to handle lighting changes • Open-source implementation [http://www.ces.clemson.edu/~stb/klt]
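
A minimal sketch of the select-and-track loop, using OpenCV's pyramidal Lucas-Kanade tracker as a stand-in for the KLT library linked above; the function name and all parameter values are illustrative assumptions, not values from the paper.

```python
import cv2

def select_and_track(prev_gray, curr_gray):
    # Select features where the 2x2 gradient covariance matrix has two
    # large eigenvalues (Shi-Tomasi criterion, as in KLT selection).
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7)
    # Track each feature by minimizing the SSD between windows in
    # consecutive frames (pyramidal Lucas-Kanade).
    p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    ok = status.ravel() == 1
    return p0[ok].reshape(-1, 2), p1[ok].reshape(-1, 2)
```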

  8. Teach-Replay [Figure: Teaching phase: from the start, track features along the path and detect features at the destination. Replay phase: track features and compare the current features against the stored goal features; labels include initial feature, current feature, and goal feature.]

  9. Qualitative Decision Rule [Figure: landmark feature projected onto the image plane; funnel lane between the robot and the goal.] For each feature, compare its horizontal image coordinate in the current image (u_current) with its coordinate in the goal image (u_goal): • No evidence → "Go straight" • Feature is to the right, |u_current| > |u_goal| → "Turn right" • Feature has changed sides, sign(u_current) ≠ sign(u_goal) → "Turn left"
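
The rule is simple enough to state directly in code. Below is a hedged sketch of the per-feature test, assuming u coordinates are signed offsets from the image center; the function name and the handling of the mirrored (feature-on-left) case are my own.

```python
def funnel_decision(u_current, u_goal):
    # Per-feature funnel-lane test; u values are signed horizontal image
    # coordinates (origin at the image center) of the same feature in the
    # current and goal (milestone) images.
    if (u_current >= 0) != (u_goal >= 0):
        # Feature has changed sides: sign(u_current) != sign(u_goal).
        return "turn left" if u_goal >= 0 else "turn right"
    if abs(u_current) > abs(u_goal):
        # Feature has drifted outside the funnel lane.
        return "turn right" if u_current >= 0 else "turn left"
    return "go straight"  # no evidence of deviation
```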

  10. The Funnel Lane at an Angle [Figure: the same funnel-lane construction when the path segment is at an angle α; landmark feature, image plane, robot at goal.] • No evidence → "Go straight" • Feature is to the right → "Turn right" • Side change → "Turn left"

  11. A Simplified Example [Figure: a single landmark feature with the funnel lane drawn at successive robot positions; the decisions read "Go straight", "Turn right", "Turn left", and "Go straight" as the robot drifts out of and back into the funnel lane on its way to the goal.]

  12. The Funnel Lane Created by Multiple Feature Points [Figure: funnel lanes from landmarks #1, #2, and #3 intersect at angle α to form a combined funnel lane.] • No evidence → "Do not turn" • Feature is to the right → "Turn right" • Side change → "Turn left"

  13. Qualitative Control Algorithm Funnel constraints: each feature should stay on the same side as in the goal image and within the funnel (|u_current| ≤ |u_goal|). The desired heading is computed from φ, the signed distance between the current feature coordinate u_C and the desired coordinate u_D.

  14. Incorporating Odometry Desired heading: θ_d = ( Σ_{i=1..N} θ_i + θ_odom ) / (N + 1), where θ_i is the desired heading from the i-th feature point, θ_odom is the desired heading from odometry, and N is the number of features.
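
As a sketch, the combination above can be read as an average over the N per-feature headings plus the odometry heading; the uniform (N + 1)-way weighting here is an assumption consistent with the slide's labels, not necessarily the paper's exact form.

```python
import numpy as np

def combined_heading(theta_features, theta_odom):
    # theta_features: desired headings from each tracked feature point.
    # theta_odom: desired heading from odometry.
    n = len(theta_features)
    return (np.sum(theta_features) + theta_odom) / (n + 1)
```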

  15. Overcoming Practical Difficulties To deal with rough terrain: prior to comparison, feature coordinates are warped to compensate for a non-zero roll angle about the optical axis, estimated using the RANSAC algorithm (a sketch follows). To avoid obstacles: the robot detects and avoids an obstacle using sonar, and odometry enables it to return roughly to the path; the robot then converges back onto the path using both odometry and vision.
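
A sketch of the roll-compensation idea, using OpenCV's RANSAC-based partial-affine estimator as a stand-in for the paper's exact procedure; only the rotation component of the fitted transform is used, and the function name is mine.

```python
import cv2
import numpy as np

def compensate_roll(curr_pts, goal_pts):
    # Fit a similarity transform (rotation + scale + translation) from the
    # current features to the goal features with RANSAC, which rejects
    # outliers from mistracked points.
    M, _ = cv2.estimateAffinePartial2D(np.float32(curr_pts),
                                       np.float32(goal_pts),
                                       method=cv2.RANSAC)
    roll = np.arctan2(M[1, 0], M[0, 0])  # rotation about the optical axis
    c, s = np.cos(roll), np.sin(roll)
    R = np.array([[c, -s], [s, c]])
    # Warp the current coordinates so the comparison assumes zero roll.
    return np.float32(curr_pts) @ R.T
```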

  16. Experimental Results [Figure: milestone image, current image, top-down view, and overview.] Videos available at http://www.ces.clemson.edu/~stb/research/mobile_robot

  17. Experimental Results [Figure: milestone image, current image, top-down view, and overview.] Videos available at http://www.ces.clemson.edu/~stb/research/mobile_robot

  18. Experimental Results: Rough Terrain

  19. Experimental Results: Avoiding an Obstacle

  20. Experimental Results Imaging source: indoor, Firewire camera; outdoor, Logitech Pro 4000 webcam.

  21. Project Objectives Path following: Enable a mobile robot to follow a desired trajectory in both indoor and outdoor environments. 1. "Qualitative vision-based mobile robot navigation", Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2006. 2. "Qualitative vision-based path following", IEEE Transactions on Robotics, 2009. Person following: Enable a mobile robot to follow a person in a cluttered indoor environment by vision. "Person following with a mobile robot using binocular feature-based tracking", Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), 2007. Door detection: Detect doors as the robot drives down a corridor. "Visual detection of lintel-occluded doors from a single camera", IEEE Computer Society Workshop on Visual Localization for Mobile Platforms (in association with CVPR), 2008.

  22. Motivation • Goal: Enable a mobile robot to follow a person in a cluttered indoor environment by vision. • Previous approaches: • Appearance properties (color, edges) [Sidenbladh et al. 1999, Tarokh and Ferrari 2003, Kwon et al. 2005]: assume the person has a different color from the background or faces the camera; sensitive to lighting changes. • Optical flow [Piaggio et al. 1998, Chivilò et al. 2004]: drifts as the person moves with out-of-plane rotation. • Dense stereo and odometry [Beymer and Konolige 2001]: difficult to predict the movement of the robot (uneven surfaces, slippage in the wheels).

  23. Our Approach • Algorithm: Sparse stereo based on Lucas-Kanade feature tracking. • Handles: • Dynamic backgrounds. • Out-of-plane rotation. • Similar disparity between the person and background. • Similar color between the person and background.

  24. System overview

  25. Detecting 3D Features of the Scene (cont.) • Features are selected in the left image I_L and matched in the right image I_R. [Figure: left and right images; the size of each square indicates the horizontal disparity of the feature.]
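
A minimal sketch of sparse stereo matching under the assumption of a rectified camera pair, using Lucas-Kanade to match left-image features into the right image; names and parameter values are illustrative.

```python
import cv2

def sparse_stereo(left_gray, right_gray):
    # Select features in the left image and match them in the right image.
    pL = cv2.goodFeaturesToTrack(left_gray, maxCorners=300,
                                 qualityLevel=0.01, minDistance=5)
    pR, status, _ = cv2.calcOpticalFlowPyrLK(left_gray, right_gray, pL, None)
    ok = status.ravel() == 1
    pL, pR = pL[ok].reshape(-1, 2), pR[ok].reshape(-1, 2)
    # For a rectified pair, the horizontal offset is the disparity
    # (drawn as the square size in the slide's figure).
    disparity = pL[:, 0] - pR[:, 0]
    return pL, disparity
```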

  26. System overview

  27. Detecting Faces • The Viola-Jones frontal face detector is applied. • This detector is used both to initialize the system and to enhance robustness when the person is facing the camera. Note: The face detector is not necessary in our system.
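
For reference, OpenCV ships the Viola-Jones frontal face cascade used for exactly this purpose; the sketch below assumes the opencv-python distribution and its bundled cascade file, with typical default parameters rather than values from the paper.

```python
import cv2

# Standard Viola-Jones frontal face cascade bundled with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray):
    # Returns a list of (x, y, w, h) face boxes.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```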

  28. Overview of Removing Background Background features are removed in three steps: 1) using the known disparity of the person in the previous image frame; 2) using the estimated motion of the background; 3) using the estimated motion of the person.

  29. Remove Background, Step 1: Using the Known Disparity Discard features for which |d_t − d*_{t−1}| > τ, where d*_{t−1} is the known disparity of the person in the previous frame and d_t is the disparity of a feature at time t. [Figure: original features; background features discarded; foreground features kept in step 1.]
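
A sketch of the disparity gate; the tolerance tau is an assumed threshold, since the slide does not give its value.

```python
def gate_by_disparity(points, disparities, person_disparity, tau=2.0):
    # Keep only features whose disparity is within tau pixels of the
    # person's disparity from the previous frame.
    return [p for p, d in zip(points, disparities)
            if abs(d - person_disparity) <= tau]
```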

  30. Remove Background, Step 2: Using Background Motion • Estimate the motion of the background by computing a 4 × 4 affine transformation matrix H between the image frames at times t and t + 1. • The random sample consensus (RANSAC) algorithm is used to find the dominant motion. [Figure: foreground features with similar disparity from step 1; foreground features remaining after step 2.]
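
A hedged sketch of the RANSAC step, using OpenCV's 2D affine estimator in the image plane as a stand-in for the slide's 4 × 4 transform; the reprojection threshold is an assumption.

```python
import cv2
import numpy as np

def dominant_motion(pts_t, pts_t1):
    # Fit the dominant (background) motion between frames t and t+1 with
    # RANSAC; inliers follow the background, outliers are person candidates.
    H, inliers = cv2.estimateAffine2D(np.float32(pts_t), np.float32(pts_t1),
                                      method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)
    return H, inliers.ravel().astype(bool)
```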

  31. Remove Background, Step 3: Using Person Motion • As in step 2, a motion model is computed, this time for the person. • The person group should be the largest group of consistently moving features. • The centroid of the person group should be close to the person's previous location. [Figure: foreground features after step 2; foreground features after step 3.]

  32. System overview

  33. System overview

  34. Experimental Results

  35. Video

  36. Project Objectives Path following: Enable a mobile robot to follow a desired trajectory in both indoor and outdoor environments. 1. "Qualitative vision-based mobile robot navigation", Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2006. 2. "Qualitative vision-based path following", IEEE Transactions on Robotics, 2009. Person following: Enable a mobile robot to follow a person in a cluttered indoor environment by vision. "Person following with a mobile robot using binocular feature-based tracking", Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), 2007. Door detection: Detect doors as the robot drives down a corridor. "Visual detection of lintel-occluded doors from a single camera", IEEE Computer Society Workshop on Visual Localization for Mobile Platforms (in association with CVPR), 2008.

  37. Motivation for Door Detection [Figure: metric map vs. topological map.] Either way, doors are semantically meaningful landmarks.

  38. Previous Approaches to Detecting Doors • Range-based approaches: sonar [Stoeter et al. 1995], stereo [Kim et al. 1994], laser [Anguelov et al. 2004] • Vision-based approaches: fuzzy logic [Munoz-Salinas et al. 2004], neural networks [Cicirelli et al. 2003], color segmentation [Rous et al. 2005] • Limitations: • require different colors for doors and walls • simplified environment (untextured floor, no reflections) • limited viewing angle • high computational load • assume lintel (top part) visible

  39. What is Lintel-Occluded? Lintel-occluded: • post-and-lintel architecture • the camera is low to the ground • it cannot point upward because of obstacles [Figure: lintel and posts of a door frame.]

  40. Our Approach Assumptions: • Both door posts are visible • Posts appear nearly vertical • The door is at least a certain width Key idea: Multiple cues are necessary for robustness (pose, lighting, …)

  41. Video

  42. Pairs of Vertical Lines [Figure: Canny edges; detected lines; vertical vs. non-vertical lines.] • Edges detected by the Canny detector • Line segments detected by a modified Douglas-Peucker algorithm • Clean-up (merge lines across small gaps, discard short lines) • Separate vertical and non-vertical lines • Door candidates given by all pairs of vertical lines whose spacing is within a given range (see the sketch below)
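
A minimal sketch of this candidate-generation step; probabilistic Hough is used here as a stand-in for the modified Douglas-Peucker line fitting, and all thresholds and width bounds are assumptions.

```python
import cv2
import numpy as np

def door_candidates(gray, min_width=40, max_width=300):
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=60, maxLineGap=5)
    # Keep nearly vertical segments, summarized by their mean x position.
    xs = sorted((x1 + x2) / 2.0
                for x1, y1, x2, y2 in lines.reshape(-1, 4)
                if abs(x2 - x1) < 0.1 * abs(y2 - y1))
    # Door candidates: all pairs of vertical lines with plausible spacing.
    return [(a, b) for i, a in enumerate(xs)
            for b in xs[i + 1:] if min_width <= b - a <= max_width]
```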

  43. Homography [Figure: a point (x, y) in the image corresponds to (x′, y′) in the world plane.]

  44. Prior Model Features: Width and Height [Figure: door corners at (0, 0), (x, 0), (0, y), and (x, y) in image and world coordinates, with the principal point marked; the unknown width and height are the prior-model features.]

  45. An Example As the door turns, its bottom corner traces an ellipse (the projective transformation of a circle is an ellipse), but the ellipse is not horizontal in the image.

  46. Data Model (Posterior) Features • Image gradient along edges (g1) • Placement of top and bottom edges (g2, g3) • Color (g4) • Texture (g5) • Kick plate (g6) • Vanishing point (g7) • and two more…

  47. Data Model Features (cont.): Bottom Gap (g8) [Figure: intensity profile along a line below the door; darker than the floor (light off) gives a positive response, brighter (light on) gives a negative response, and no gap gives neither.]

  48. Data Model Features (cont.): Concavity (g9) [Figure: a slim "U" formed by the two vertical door lines (L_left, L_right) and the bottom door edge, which lies a distance ε below the extension of the wall-floor intersection line.]

  49. Two Methods to Detect Doors [Figure: training images yield weights of features, which feed both methods.] • AdaBoost: combine the weighted features into a strong classifier. • Bayesian formulation (yields better results).

  50. Bayesian Formulation p(door | image) ∝ p(image | door) p(door). Taking the log likelihood: log p(door | image) = log p(image | door) + log p(door) + const, where p(image | door) is the data model and p(door) is the prior model.
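
A sketch of how a door candidate might be scored under this formulation; the linear weighting of per-cue log-likelihood terms g1..g9 is an assumption for illustration, not the paper's exact form.

```python
import numpy as np

def log_posterior(cue_scores, cue_weights, log_prior):
    # cue_scores: per-cue log-likelihood terms (g1..g9) for one candidate.
    # cue_weights: learned weights (e.g., from the AdaBoost stage).
    # log_prior: log of the prior-model term for this candidate.
    return float(np.dot(cue_weights, cue_scores)) + log_prior

# The best door candidate maximizes the log posterior, e.g.:
# best = max(candidates, key=lambda c: log_posterior(c.scores, w, c.log_prior))
```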
