Visual Servo Control Tutorial Part 1: Basic Approaches


Presentation Transcript


  1. Visual Servo Control Tutorial, Part 1: Basic Approaches. Chayatat Ratanasawanya, December 2, 2009. Ref: Article by François Chaumette & Seth Hutchinson

  2. Overview • Introduction • Basic components of visual servoing • Image-based visual servo (IBVS) • Position-based visual servo (PBVS) • Stability analysis • Conclusion • Questions/comments

  3. Introduction • Visual servo (VS) control – the use of computer vision data to control the motion of a robot. • Relies on techniques from image processing, computer vision, and control theory. • Two camera configurations: • Eye-in-hand: the camera is mounted on a robot manipulator or on a mobile robot. • Eye-to-hand: the camera is fixed in the workspace and observes the robot.

  4. Basic components of VS • Error function e(t): the goal of all VS schemes is to drive this error to zero (its definition is given below). • Design of s: • either a set of features that are readily available in the image data (IBVS), or • a set of 3D parameters, which must be estimated from image measurements (PBVS)
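
In the notation of the referenced article, the error minimized by every visual servo scheme is the difference between the current feature vector and its desired value:

    e(t) = \mathbf{s}(\mathbf{m}(t), a) - \mathbf{s}^{*}

where m(t) is a vector of image measurements and a is a set of known parameters (e.g. the camera intrinsics or an object model).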

  5. Basic components of VS (Cont’d) • Interaction matrix (feature Jacobian) Le: relates the time variation of s to the camera velocity. • Design of the controller: • Can be done quite simply once s is selected. • The most straightforward approach is a velocity controller of the form vc = -λ Le+ e. • In practice, it is impossible to know Le or Le+ perfectly, so an estimate of Le+ is used in the control law.
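
A minimal sketch of such a velocity controller, assuming NumPy; L_e_hat (the approximation of Le) and the error e are placeholders assumed to be computed elsewhere:

    import numpy as np

    def velocity_controller(L_e_hat, e, lam=0.5):
        # v_c = -lambda * pinv(L_e_hat) @ e drives the error toward zero
        # exponentially when the approximation of Le is good enough.
        return -lam * np.linalg.pinv(L_e_hat) @ e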

  6. Image-based visual servo (IBVS) • The classical IBVS schemes use the image-plane coordinates of a set of points to define s. • m - the pixel coordinates of a set of image points. • a - the camera intrinsic parameters. • For a 3D point X = (X, Y, Z) in the camera frame that projects onto the image point x = (x, y) under the perspective projection model (x = X/Z, y = Y/Z), the interaction matrix of x is given below.
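
For a point with normalized coordinates x = X/Z and y = Y/Z at depth Z, the referenced article gives the following 2x6 interaction matrix, which relates the motion of the image point to the camera velocity vc = (v, ω):

    L_x = \begin{bmatrix} -1/Z & 0 & x/Z & x y & -(1+x^2) & y \\ 0 & -1/Z & y/Z & 1+y^2 & -x y & -x \end{bmatrix}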

  7. Image-based visual servo (IBVS) • To control the 6 DOF of the camera, at least three points are necessary. • If the feature vector is chosen as x = (x1, x2, x3), the Jacobian is obtained by stacking the three 2x6 interaction matrices into a 6x6 matrix (see the sketch below). • However, more than three points are usually considered, because there exist configurations for which Lx is singular, and because several distinct camera poses can yield e = 0 (global minima) that cannot be differentiated.
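
A sketch of how the stacked interaction matrix might be assembled with NumPy; the helper names and the assumption that the depth Z of every point is known are illustrative only:

    import numpy as np

    def interaction_matrix_point(x, y, Z):
        # 2x6 interaction matrix of one normalized image point (x, y) at depth Z
        return np.array([
            [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x),  y],
            [0.0,      -1.0 / Z, y / Z, 1.0 + y * y, -x * y,         -x],
        ])

    def stacked_interaction_matrix(points, depths):
        # Stack the 2x6 blocks of N points into a (2N)x6 matrix; with N = 3 the
        # matrix is square and can be singular for some configurations, which is
        # why more than three points are normally used with the pseudoinverse.
        return np.vstack([interaction_matrix_point(x, y, Z)
                          for (x, y), Z in zip(points, depths)])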

  8. IBVS: Estimating the interaction matrix • Choice 1: use the current Le; this requires the current depth z of each point to be available. • Choice 2: use the constant matrix Le* computed at the desired configuration, for which only the desired z is needed. • Choice 3: use the mean (Le + Le*)/2, which requires the same information as choice 1. • A comparison is sketched below.
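
A sketch comparing the three choices, reusing the stacked_interaction_matrix helper sketched earlier; the point coordinates and depths below are illustrative numbers only:

    import numpy as np

    lam = 0.5

    # Example data: normalized coordinates (x, y) and depths Z of four points,
    # at the current and at the desired camera pose.
    points_current = [(0.10, 0.05), (-0.12, 0.07), (0.08, -0.11), (-0.09, -0.06)]
    depths_current = [1.2, 1.1, 1.3, 1.25]
    points_desired = [(0.05, 0.05), (-0.05, 0.05), (0.05, -0.05), (-0.05, -0.05)]
    depths_desired = [1.0, 1.0, 1.0, 1.0]

    e = np.array(points_current).ravel() - np.array(points_desired).ravel()

    # 1. Current interaction matrix: needs the current depth of every point.
    L_cur = stacked_interaction_matrix(points_current, depths_current)
    # 2. Desired interaction matrix: constant, only the desired depths are needed.
    L_des = stacked_interaction_matrix(points_desired, depths_desired)

    v_1 = -lam * np.linalg.pinv(L_cur) @ e
    v_2 = -lam * np.linalg.pinv(L_des) @ e
    # 3. Mean of the two: requires the same knowledge as choice 1.
    v_3 = -lam * np.linalg.pinv(0.5 * (L_cur + L_des)) @ e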

  9. Example of IBVS positioning

  10. IBVS result: case 1

  11. IBVS result: case 2

  12. IBVS result: case 3

  13. IBVS with a stereo vision system • A straightforward extension of the IBVS approach. • If a 3D point is visible in both the left and right images, its image coordinates in the two views can be used together as visual features. • The 3D coordinates of any point observed in both images can be estimated easily by triangulation, so it is also possible, and quite natural, to use these 3D coordinates in the feature set s. A triangulation sketch follows.
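
A minimal triangulation sketch for a rectified stereo pair with focal length f (pixels), baseline b (metres) and principal point (cu, cv); the function name and the rectification assumption are not from the slides:

    import numpy as np

    def triangulate_rectified(u_l, u_r, v, f, b, cu, cv):
        # For a rectified pair, the disparity d = u_l - u_r gives the depth Z = f*b/d.
        d = u_l - u_r
        Z = f * b / d
        X = (u_l - cu) * Z / f
        Y = (v - cv) * Z / f
        return np.array([X, Y, Z])   # 3-D point in the left camera frame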

  14. Position-based visual servo (PBVS) • PBVS schemes use the pose of the camera w.r.t. some reference coordinate frame to define s. • Computing that pose from a set of measurements in an image requires the camera intrinsic parameters and the 3D model of the object observed. • m - the pixel coordinates of a set of image points. • a - the camera intrinsic parameters and the 3D model of the object.
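
In practice this pose is often recovered with a PnP solver; a sketch using OpenCV, where the object model, the matched pixel measurements and the intrinsic matrix K are illustrative placeholders:

    import cv2
    import numpy as np

    # Illustrative data: a planar square of side 0.2 m, hypothetical pixel
    # measurements of its corners, and a simple pinhole intrinsic matrix K.
    object_points = np.array([[-0.1, -0.1, 0.0], [0.1, -0.1, 0.0],
                              [0.1, 0.1, 0.0], [-0.1, 0.1, 0.0]])
    image_points = np.array([[300.0, 260.0], [340.0, 262.0],
                             [338.0, 300.0], [302.0, 298.0]])
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    dist = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
    R, _ = cv2.Rodrigues(rvec)   # pose of the object frame in the camera frame
    # Comparing (R, tvec) with the pose seen from the desired camera placement
    # yields the (t, theta*u) features used by the PBVS schemes that follow.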

  15. PBVS: definition of s, with s = (t, θu) • If t is defined relative to the object frame (t is the position of the object expressed in the current camera frame), the error becomes e = (t − t*, θu), where t* is the same position seen from the desired camera pose. • Following the developments presented earlier (determining Le and the estimate of its inverse), the control law sketched below is obtained.
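
A sketch of the resulting control law for this choice of s, following the referenced article; t_o and t_o_star denote the object position expressed in the current and desired camera frames, theta_u the axis-angle rotation between the two frames, and all names are placeholders:

    import numpy as np

    def skew(v):
        # Skew-symmetric matrix such that skew(a) @ b = a x b
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    def pbvs_object_frame(t_o, t_o_star, theta_u, lam=0.5):
        # e = (t_o - t_o_star, theta*u); solving ė = -λ e for the camera velocity
        # gives decoupled translational and rotational commands.
        v = -lam * ((t_o_star - t_o) + skew(t_o) @ theta_u)
        omega = -lam * theta_u
        return v, omega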

  16. PBVS result: case 1

  17. PBVS: definition of s, with s = (t, θu) • If t is instead chosen as the translation of the current camera frame with respect to the desired camera frame, then s* = 0 and e = s, and the corresponding control law is sketched below.
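
A sketch of the corresponding control law, where t is the translation and R the rotation of the current camera frame expressed in the desired camera frame; names are again illustrative:

    import numpy as np

    def pbvs_camera_frame(t, R, theta_u, lam=0.5):
        # v_c = -lambda * R^T t,  omega_c = -lambda * theta*u
        v = -lam * R.T @ t
        omega = -lam * theta_u
        return v, omega

With this choice the camera translation follows a straight line in Cartesian space, a property noted in the referenced article.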

  18. PBVS result: case 2

  19. Stability analysis - IBVS • When the number of visual features in s is greater than 6, only local asymptotic stability can be ensured, and local minima (configurations where vc = 0 while e ≠ e*) may exist. • Global asymptotic stability cannot be guaranteed.

  20. Stability analysis - PBVS • Global stability is achievable when the pose parameters are estimated perfectly. • Robustness: small errors in computing the image point positions can lead to pose errors that significantly affect the accuracy and the stability of the system.

  21. Conclusion • Is IBVS or PBVS better? There are performance trade-offs. • Stability: neither strategy provides ideal properties in all cases. • Correct estimation of 3D parameters is important for IBVS, but crucial for PBVS. • In PBVS, the vision sensor is treated as a 3D sensor, so calibration and measurement errors propagate into the pose estimate. • In IBVS, the vision sensor is treated as a 2D sensor, which makes the scheme comparatively robust to calibration errors and image noise.

  22. Questions/comments
