COMP 417 – Jan 12th, 2006

Presentation Transcript


  1. COMP 417 – Jan 12th, 2006 Guest Lecturer: David Meger Topic: Camera Networks for Robot Localization

  2. Introduction • Who am I? • Overview, Camera Networks for Robot Localization • What • Where • Why • How (technical stuff)

  3. Introduction - Hardware

  4. Intro - What • Previously: Localization is a key task for a robot. It’s typically achieved using the robot’s sensors and a map. • Can “the environment” help with this?

  5. Typical Robot Localization

  6. Sensor Networks

  7. Sensor Networks

  8. Intro - Where • In cases where there is sensing already in the environment, we can invert the direction of sensing. • Where is this true? • Buildings with security systems • Public transportation areas (metro) • More and more large cities (scary but true)

  9. Intro – Why • Advantages: • In many cases sensors already exist • Many robots operating in the same place can all share the same sensors • Computation can be done at a powerful central computer, which saves robot computation • Interesting research problem

  10. Intro – How • As the robot appears in images, we can use 3-D vision techniques to determine its position relative to the cameras • What do we need to know about the cameras to make this work? • Can we assume we know where the cameras are? • Can we assume we know the camera properties?

  11. Problem Can we use images from arbitrary cameras placed in unknown positions in the environment to help a robot navigate?

  12. Proposed Method • Detect the robot • Measure the relative positions • Place the camera in the map • Move robot to the next camera • Repeat

  13. Detection – An algorithm to detect these robots?

  14. Detection (cont’d) • Computer Vision techniques attempt detection of (moving) objects • Background subtraction or image differencing • Image templates • Color matching • Feature matching • A robust algorithm for arbitrary robots is likely beyond current methods
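
As a rough illustration of the first bullet, here is a minimal image-differencing sketch in Python with OpenCV; the threshold and minimum blob area are arbitrary illustrative values, not part of the lecture.

```python
# A sketch of background subtraction / image differencing with OpenCV.
# Threshold and minimum blob area are illustrative values.
import cv2

def detect_by_differencing(background, frame, threshold=30, min_area=500):
    """Return bounding boxes of regions that changed vs. the background."""
    gray_bg = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    gray_fr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_bg, gray_fr)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    # Group the changed pixels into blobs; large blobs may be the robot.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > min_area]
```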

  15. Detection – Our Method

  16. ARTag Markers
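
The ARTag system itself is a C library, and its API is not shown in the lecture. As an analogous sketch, OpenCV's ArUco fiducials (the API below is OpenCV 4.7 or newer) are detected in the same spirit: binary square markers whose interior pattern encodes an ID.

```python
# ArUco detection as a stand-in illustration for ARTag-style fiducials.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary,
                                   cv2.aruco.DetectorParameters())

frame = cv2.imread("camera_view.png")          # hypothetical input image
corners, ids, rejected = detector.detectMarkers(frame)
# 'corners' holds the four image points of each detected marker and 'ids'
# its identity; known marker geometry on the robot then supplies the
# 3-D/2-D correspondences used for calibration in the slides below.
```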

  17. Proposed Method • Detect the robot • Measure the relative positions • Place the camera in the map • Move robot to the next camera • Repeat

  18. Position Measurement • Question: Can we determine the 3-D position of an object relative to the camera from examining 2-D images? • Hint: start from the introduction to Computer Vision from last time

  19. Pinhole Camera Model
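
The figure for this slide is not reproduced in the transcript. The model it depicts is standard perspective projection: a 3-D point (X, Y, Z) in camera coordinates, seen through a pinhole with focal length f, lands at image coordinates

```latex
x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z}
```

Depth Z and focal length f enter only through their ratio, which is why the next slide notes that zooming and moving the object have essentially the same effect.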

  20. Camera Calibration • An image depends on BOTH scene geometry and camera properties • For example, zooming in and out and moving the object closer and farther have essentially the same effect • Calibration means determining relevant camera properties (e.g. focal length f)

  21. Projective Calibration Equations
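
The equations on this slide were shown as an image and are missing from the transcript. In the notation of slide 23 (intrinsic matrix A, extrinsic transformation T), the standard projective calibration equation is most likely what was shown:

```latex
s\,\tilde{m} \;=\; A\,T\,\tilde{M},
\qquad
A \;=\; \begin{pmatrix} f_x & \gamma & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix},
\qquad
T \;=\; [\,R \mid t\,]
```

Here m̃ = (u, v, 1)ᵀ is a homogeneous image point, M̃ = (X, Y, Z, 1)ᵀ a homogeneous world point, and s an arbitrary scale factor.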

  22. Coordinate Transformation
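
This slide's content was also graphical. The coordinate transformation in question is the rigid world-to-camera transform, whose rotation R and translation t make up the matrix T above:

```latex
\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix}
\;=\;
R \begin{pmatrix} X_w \\ Y_w \\ Z_w \end{pmatrix} + t
```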

  23. Calibration Equations • The matrix AT is 3x4 and fully describes the geometry of image formation • Given known object points M and image points m, it is possible to solve for both A and T • How many points are needed? (see the sketch below)
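
To answer the counting question: each known point gives two linear equations in the 12 entries of the 3x4 matrix P = AT, and P is defined only up to scale, leaving 11 unknowns, so at least 6 points are required (more points help against noise). A minimal direct-linear-transform (DLT) sketch in Python, not the lecture's actual code:

```python
# DLT estimation of the 3x4 projection matrix P = AT from known
# 3-D object points M and measured 2-D image points m.
import numpy as np

def estimate_projection_matrix(M, m):
    """M: (n,3) known 3-D object points; m: (n,2) measured image points."""
    rows = []
    for (X, Y, Z), (u, v) in zip(M, m):
        point = [X, Y, Z, 1.0]
        # Two equations per correspondence: u*(p3.X) = p1.X, v*(p3.X) = p2.X
        rows.append([*point, 0, 0, 0, 0, *[-u * p for p in point]])
        rows.append([0, 0, 0, 0, *point, *[-v * p for p in point]])
    # The least-squares solution (up to scale) is the last right singular
    # vector of the stacked system; reshape it into the 3x4 matrix.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)
```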

  24. Calibration Targets

  25. 3-Plane ARTag Target

  26. Position Measurement Conclusion • With enough image points whose 3-D locations are known, measurement of the coordinate transformation T is possible • The process is more complicated than traditional sensing, but luckily, we only need to do it once per camera
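
Once the intrinsics A are known from calibration, recovering T for a new view is the standard perspective-n-point (PnP) problem. A hedged sketch using OpenCV's solver; the target geometry, pixel coordinates, and intrinsics below are made-up illustrative values:

```python
# Pose recovery from known 3-D/2-D correspondences via OpenCV's solvePnP.
import cv2
import numpy as np

# Four known 3-D points on the robot's marker target, in metres
# (illustrative values, not the lecture's actual target geometry).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.2, 0.0, 0.0],
                          [0.2, 0.2, 0.0],
                          [0.0, 0.2, 0.0]])
# Their detected 2-D image locations, in pixels (also illustrative).
image_points = np.array([[320.0, 240.0],
                         [400.0, 238.0],
                         [402.0, 160.0],
                         [318.0, 162.0]])
A = np.array([[800.0,   0.0, 320.0],   # intrinsic matrix from calibration
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, A, None)
# rvec/tvec encode T: the target's pose relative to the camera.
```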

  27. Proposed Method • Detect the robot • Measure the relative positions • Place the camera in the map • Move robot to the next camera • Repeat

  28. Mapping Camera Locations • Given the robot’s position, a measurement of the relative position of the camera allows us to place it in our map • Question: What affects the accuracy of this type of relative measurement?
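
Concretely, placing a camera in the map is a composition of transforms: the world-from-robot pose chained with the robot-relative camera measurement. A minimal 2-D sketch with illustrative names and numbers:

```python
# Composing the robot's world pose with a relative camera measurement.
# 2-D (x, y, theta) poses for brevity; names and values are illustrative.
import numpy as np

def pose_to_matrix(x, y, theta):
    """Homogeneous 2-D transform for a pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

T_world_robot = pose_to_matrix(1.0, 2.0, np.pi / 4)   # from odometry/SLAM
T_robot_camera = pose_to_matrix(0.5, 0.0, np.pi)      # measured via slide 26
T_world_camera = T_world_robot @ T_robot_camera       # camera goes in the map
```

The result is only as accurate as both factors, which is one answer to the question above.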

  29. Proposed Method • Detect the robot • Measure the relative positions • Place the camera in the map • Move robot to the next camera • Repeat

  30. Robot Motion • A robot moves by using electric motors to turn its wheels. There are numerous design choices in each of the important aspects: • Physical design • Control algorithms • Programming interface • High-level software architecture

  31. Nomad Scout

  32. Differential Drive Kinematics
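
The kinematics figure itself is not in the transcript. The standard differential-drive relations it illustrates, with left and right wheel speeds v_l and v_r and wheel separation L, are

```latex
v = \frac{v_r + v_l}{2}, \qquad \omega = \frac{v_r - v_l}{L}
```

giving the forward speed v and turning rate ω of the robot body.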

  33. Odometry Position Readings
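
Odometry position readings come from integrating those body velocities over time. A minimal dead-reckoning sketch; names and values are illustrative:

```python
# Euler integration of the differential-drive (unicycle) model.
import numpy as np

def integrate_odometry(x, y, theta, v, omega, dt):
    """One short time step of dead reckoning from wheel-derived v, omega."""
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Example: drive straight at 0.1 m/s for one second in 10 ms steps.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = integrate_odometry(*pose, v=0.1, omega=0.0, dt=0.01)
```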

  34. Robot Motion - Specifics • Robot control accomplished by using an in-house application – Robodaemon • Allows “point and shoot” motion, not continuous control • Graphical and programmatic interface to query robot odometry, send motion commands, collect sensor data

  35. Proposed Method • Detect the robot • Measure the relative positions • Place the camera in the map • Move robot to the next camera • Repeat Are we done?

  36. Challenges • In general, it’s impossible to know the robot or camera positions exactly. All measurements have error • What should the robot do if the cameras can’t see the whole environment? • I didn’t say anything about how the robot should decide where to go next • More?

  37. Mapping with Uncertainty • Given exact knowledge of the robot’s position, mapping is possible • Given a pre-built map, localization is possible • What if neither is present? Is it realistic to assume they will be? If so, when?

  38. Uncertainty in Robot Position • In general, kinematics equations do not exactly predict robot locations • Sources of error: • Wheel slippage • Encoder quantization • Manufacturing artifacts • Uneven, rough, slippery, or wet terrain

  39. Typical Odometry Error

  40. Simultaneous Localization and Mapping (SLAM) • When both the robot and map features are uncertain, both must be estimated • Progress can be made by viewing measurements as probability densities instead of precise quantities
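
A one-dimensional illustration of treating measurements as probability densities: fusing an uncertain odometry prediction with an uncertain camera sighting of the same quantity, Kalman-filter style. Values are illustrative:

```python
# Fusing two Gaussian beliefs about the same scalar quantity.
def fuse(mean_a, var_a, mean_b, var_b):
    """Return the mean and variance of the combined belief."""
    k = var_a / (var_a + var_b)           # Kalman gain
    mean = mean_a + k * (mean_b - mean_a)
    var = (1 - k) * var_a
    return mean, var

# Odometry says x ~ N(5.0, 0.5); a camera sighting says x ~ N(5.4, 0.1).
# The fused estimate is pulled toward the more certain measurement.
print(fuse(5.0, 0.5, 5.4, 0.1))           # -> (5.333..., 0.0833...)
```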

  41. SLAM Progress

  42. SLAM (cont’d) • A large fraction of the work in robotics in the last 5-10 years has involved localization and SLAM; indoor results with good sensing are now very good • These methods apply to our system • More on this later in the course, or after class today if you’re interested

  43. Motion Planning • The mapping framework described is dependent on the robot’s motion: • The robot must pass in front of a camera in order to collect any images • Numerous points are needed for each camera to perform calibration • SLAM accuracy is affected by the order of camera visitation

  44. Local and Global Planning • Local: how should the robot move while in front of one camera, to collect the set of calibration images? • Global: in which order should the cameras be visited?

  45. Local Planning • Modern calibration algorithms are quite good at estimating from noisy data, but there are some geometric considerations • Field of view • Detection accuracy • Singularities in calibration equations

  46. Local Planning • We must avoid configurations where all points collected lie in a linear subspace of R³ (see the sketch below) • For example, a set of images of a single plane moved only through translation gives all co-planar points
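
A quick way to screen for this degeneracy is to check whether the collected 3-D points actually span all of R³. A sketch with an illustrative function name and tolerance:

```python
# Detecting (near-)coplanar or collinear point sets via singular values.
import numpy as np

def points_are_degenerate(points, tol=1e-6):
    """True if the (n,3) points lie (nearly) in a plane or line."""
    centered = points - points.mean(axis=0)
    # The singular values measure spread along each principal direction;
    # a near-zero smallest value means the points sit near one plane.
    return np.linalg.svd(centered, compute_uv=False)[-1] < tol
```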

  47. Projective Calibration Equations

  48. Global Planning • Camera positions estimated by relative measurements from the robot • This information is only as accurate as our knowledge about the robot • “Re-localizing” is our only way to reduce error

  49. Distance / Accuracy Tradeoff • Returning to well-known cameras helps our position estimates but causes the robot to travel farther than necessary • An intelligent strategy is needed to manage this tradeoff • There are some partial results so far; this is work in progress

  50. Review • Using sensors in the environment, we can localize a robot • In order to use previously un-calibrated and unmapped cameras, a robot can carry out exploration and SLAM • This needs to be done only once; afterwards, accurate localization is possible
