Augmented reality for robot development and experimentation


Mike Stilman, Philipp Michel, Joel Chestnutt, Koichi Nishiwaki, Satoshi Kagami, James J. Kuffner

Jorge Dávila Chacón

  • Introduction & Related Work

  • Overview

  • Ground Truth Modelling

  • Evaluation of Sensing

  • Evaluation of Planning

  • Evaluation of Control

  • Discussion


  • Virtual simulation: Find critical system flaws or software errors.

  • Testing of various interconnected components for perception, planning, and control becomes increasingly difficult.

    • Vision system: Model of the environment.

    • Navigation planner: Erroneous path.

    • Controller: Properly following the desired trajectory.

  • Objective: To present a ground truth model of the world and to introduce virtual objects into real world experiments.

  • Establish a correspondence between virtual components (Environment, models, plans, intended robot actions) and the real world.

  • Visualize and identify system errors prior to their occurrence.

Related work
Related Work

  • For humanoid robots: Simulation engines model dynamics and test the controllers, kinematics, geometry, higher level planning and vision components.

  • Khatib: Haptic interaction with the virtual environment.

    • Purely virtual simulations are limited to approximating the real world (Rigid body dynamics and perfect vision).

  • Hardware in-the-loop simulation (Aeronautics and space robotics).

  • Virtual overlays for robot teleoperation: Design and evaluate robot plans.

  • Speed, robustness and accuracy enhanced by binocular cameras

  • Hybrid tracking by the use of markers (Retroreflective, LEDs and/or Magnetic trackers).

Overview

  • Lab space setting:

  • “Eagle-4” Motion analysis system, cameras and furniture objects.

  • Experiments focus: High level autonomous tasks for humanoid robot “HRP-2”.

    • Choose foot locations to avoid obstacles, or manipulate the obstacles to clear its path.

  • Technical details

  • “Eagle-4” system:

    • Eight cameras covering a 5 × 5 × 2 m volume.

    • Distances calculated to 0.3% accuracy.

    • Dual Xeon 3.6 GHz processor.

    • “EVa Real-Time” (EVaRT) software: locates 3D markers at up to 480 Hz at 1280 × 1024 resolution (at least 60 markers at 60 Hz).

Virtual chair is overlaid in real-time.

Both the chair and the camera are in motion.

Ground truth modeling
Ground Truth Modeling

  • Reconstructing Position and Orientation

  • Individually identified markers attached to an object can be expressed as a set of points {a1, ..., an} in the object’s coordinate frame “F” (the object template).

  • Each displaced marker location “bi” satisfies bi = R ai + t; the rotation “R” and translation “t” are recovered from the centroids of the template and measured markers.

  • Markers occluded from motion capture:

    • The algorithm is performed only on the visible markers: their corresponding rows of the template matrix are removed.

  • The new centroids are those of the visible markers and their associated template markers.
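The pose-reconstruction step above, a least-squares rigid fit over only the visible markers, can be sketched as follows. This is an illustrative SVD-based (Kabsch) fit, not the authors' implementation; function and variable names are assumptions:

```python
import numpy as np

def fit_pose(template, observed, visible):
    """Least-squares rigid fit: find R, t such that
    observed[i] ~= R @ template[i] + t, using visible markers only."""
    A = template[visible]                       # template markers a_i
    B = observed[visible]                       # measured markers b_i
    ca, cb = A.mean(axis=0), B.mean(axis=0)     # centroids of visible sets
    H = (A - ca).T @ (B - cb)                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # best-fit rotation
    t = cb - R @ ca                             # translation from centroids
    return R, t
```

Dropping occluded markers simply means masking rows of both the template and the measurement before computing the centroids and cross-covariance, exactly as the bullet above describes.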

  • Reconstructing Geometry and Virtual Cameras

  • 3D triangular surface meshes form environment objects (Manually edited for holes and automatically simplified to reduce the number of vertices).

  • The robot camera’s position is found from ground-truth positioning information; computing its optical axis yields the “virtual view”.
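As a sketch of how such a virtual view can be formed: given the camera's world pose (R, t) from motion capture and known pinhole intrinsics K, world points project into the image as below. The pinhole model and all names are assumptions for illustration:

```python
import numpy as np

def virtual_view(K, R_wc, t_wc, points_w):
    """Project world points into the tracked camera's image.
    R_wc, t_wc: camera-to-world rotation/translation (from motion capture);
    K: 3x3 pinhole intrinsics (assumed calibrated)."""
    p_cam = (points_w - t_wc) @ R_wc      # world -> camera frame: R^T (p - t)
    uvw = p_cam @ K.T                     # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]       # perspective divide -> pixel coords
```

Rendering the environment meshes through this projection gives the overlay seen from the robot camera's viewpoint.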

Evaluation of sensing
Evaluation of Sensing

  • Ground-truth positioning information localizes sensors: cameras, range finders.

  • Build reliable global environment representations (occupancy grids or height maps) for robot navigation planning.

  • Overlaying them onto projections of the real world lets us evaluate the sensing algorithms that construct world models.

  • Reconstruction by Image Warping

  • Tracking the camera’s position with motion capture to recover its projection matrix enables a 2D homography between the floor and the image plane.

  • To build a 2D occupancy grid of the environment for biped navigation, we assume that all scene points of interest lie in the z = 0 plane.
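Under the stated z = 0 assumption, the homography can be written down directly from the recovered projection parameters. A minimal NumPy sketch (extrinsics R, t map world to camera; names are illustrative):

```python
import numpy as np

def floor_homography(K, R, t):
    """Homography mapping floor points (x, y, z=0) to image pixels.
    For p = [x, y, 0]: K (R p + t) = K [r1 r2 t] [x y 1]^T."""
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def image_to_floor(H, u, v):
    """Warp an image pixel back onto the floor plane (inverse homography)."""
    x, y, w = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return x / w, y / w
```

Warping every pixel of an obstacle segmentation through `image_to_floor` fills in a floor-plane occupancy grid of the kind the bullet above describes.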

  • Reconstruction from Range Data

  • The “CSEM Swiss Ranger SR-2” time-of-flight (TOF) range sensor is used to build 2.5D height maps of the environment objects.

  • Motion-capture based localization lets us convert range measurements into clouds of 3D points in world coordinates in real-time.

  • Environment height maps can be cumulatively constructed.
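The range-to-height-map pipeline described above might look like the following sketch. The grid resolution, origin, and max-per-cell fusion rule are assumptions, not the paper's parameters:

```python
import numpy as np

def update_height_map(hmap, pts_sensor, R, t, res=0.05, origin=(0.0, 0.0)):
    """Transform TOF range points into world coordinates using the
    motion-capture pose (R, t), then fold them into a 2.5D height map
    by keeping the maximum z seen in each grid cell."""
    pts_w = pts_sensor @ R.T + t                     # sensor -> world frame
    ix = ((pts_w[:, 0] - origin[0]) / res).astype(int)
    iy = ((pts_w[:, 1] - origin[1]) / res).astype(int)
    ok = (ix >= 0) & (ix < hmap.shape[0]) & (iy >= 0) & (iy < hmap.shape[1])
    for i, j, z in zip(ix[ok], iy[ok], pts_w[ok, 2]):
        hmap[i, j] = max(hmap[i, j], z)              # cumulative per-cell max
    return hmap
```

Calling this once per sensor frame accumulates the map cumulatively, as the bullet above notes.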

Example “box” scene. Raw sensor measurement.

Point cloud views of reconstructed box.

  • Registration with Ground Truth

  • Reconstructing the environment by image warping or from range data allows us to visually evaluate the accuracy of our perception algorithms.

  • Parameter adjustments can be made on the fly by overlaying the generated environment maps back onto a camera view of the scene.

Evaluation of planning
Evaluation of Planning

  • Video overlay displays diagnostic information about the planning and control process in physically relevant locations.

  • The robot plans a safe sequence of actions to convey itself from its current configuration to a goal location.

    • The goal location and obstacles were moved while the robot was walking, requiring constant updates of the plan.

  • Planning algorithm evaluates candidate footstep locations through a cluttered environment.

  • Motion capture obstacle recognition.

  • Localized sensors.

  • Self-contained vision:

    • Motion capture data is removed completely and the robot uses its own odometry to build maps of the environment.

  • Visual Projection: Footstep Plans

  • For each step, the planner computes the 3D position and orientation of the foot.

  • In augmented reality, the planned footsteps are overlaid in real-time onto the environment (continuously updated while walking).

  • This display exposes the planning process to identify errors and gain insight into the performance of the algorithm.
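As a toy illustration of evaluating candidate footstep locations against an occupancy grid: the snippet below rejects colliding foot placements and greedily picks the free candidate nearest the goal. This is a hedged stand-in; the actual planner searches whole step sequences:

```python
import numpy as np

def foot_collides(grid, x, y, half=1):
    """Check the occupancy-grid cells under a (2*half+1)^2 foot patch."""
    patch = grid[max(x - half, 0):x + half + 1, max(y - half, 0):y + half + 1]
    return bool(patch.any())

def best_footstep(grid, candidates, goal):
    """Among candidate footstep cells, pick the collision-free one
    closest to the goal (greedy; illustrative only)."""
    free = [c for c in candidates if not foot_collides(grid, *c)]
    if not free:
        return None
    return min(free, key=lambda c: (c[0] - goal[0]) ** 2 + (c[1] - goal[1]) ** 2)
```

Overlaying the chosen (and rejected) candidates back onto the camera view is exactly the kind of diagnostic display the slide describes.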

Occupancy grid generated from the robot’s camera.

Planar projection of an obstacle recovered from range data.

  • Temporal Projection: Virtual Robot

  • The real world is preferred over a completely simulated environment for experimentation: the “avatar” proposal.

  • Instead of replacing all sensing with perfect ground truth data, we can simulate degrees of realistic sensors.

  • Objects And The Robot’s Perception

  • Slowly increase the realism of the data which the system must handle.

  • By knowing the locations and positions of all objects as well as the robot’s sensors, we can determine which objects are detectable by the robot at any given point in time.
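A minimal sketch of that detectability test, modeling the sensor's field of view as a cone; the FOV angle, range limit, and names are illustrative assumptions:

```python
import numpy as np

def detectable(obj_w, cam_pos, cam_dir, fov_deg=60.0, max_range=5.0):
    """Is a world-frame object inside a conical sensor field of view?
    cam_dir is assumed to be a unit vector."""
    v = obj_w - cam_pos
    d = np.linalg.norm(v)
    if d == 0 or d > max_range:
        return False                                  # too far (or degenerate)
    cos_angle = float(v @ cam_dir) / d                # angle off the boresight
    return cos_angle >= np.cos(np.radians(fov_deg / 2))
```

Running this per object per frame, using the ground-truth poses of both object and sensor, yields the set of objects the robot should currently be able to detect.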

Footstep plan displayed onto the world.

Augmented reality with a simulated robot amongst real obstacles.

Evaluation of control
Evaluation of Control

  • Objective: To maximize the safety of the robot and the environment.

  • To accomplish this, we perform hardware in-the-loop simulations while gradually introducing real components, increasing the “complexity of the plant”.

  • Virtual Objects

  • Simulation: analyze the interaction of the robot with a virtual object using a geometric and dynamic model of the object.

  • In case of a failure we observe and detect virtual collisions without affecting the robot hardware.

  • Similarly, these concepts can be applied towards grasping and manipulation.
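Virtual-collision checks of this kind can be as simple as bounding-volume tests. A sketch with a sphere (bounding a robot link) against an axis-aligned box (the virtual object); the shapes are an assumption for illustration:

```python
import numpy as np

def sphere_box_collision(center, radius, box_min, box_max):
    """Sphere vs. axis-aligned box: clamp the sphere center onto the box
    to find the nearest box point, then compare squared distances."""
    closest = np.clip(center, box_min, box_max)       # nearest point on the box
    return float(np.sum((center - closest) ** 2)) <= radius ** 2
```

Flagging such a collision in the overlay lets the failure be observed without the robot hardware ever touching anything.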

  • Precise Localization

  • To perform force control on an object during physical interaction.

  • Two alternatives: fixing the initial conditions of robot and environment, or asking the robot to sense and acquire a world model prior to every experiment.

  • The hybrid experimental model avoids the rigidity of the former approach and the overhead time required by the latter.

  • Virtual optical sensor: Efforts can be focused on algorithms for making contact with the object and evaluating the higher frequency feedback required for force control.

  • Gantry Control

  • The physical presence of the gantry and its operator prevents testing of fine manipulation and navigation in cluttered environments that require close proximity to objects.

  • To bypass this problem, a ceiling-suspended gantry that can follow the robot throughout the experimental space was implemented.

Discussion

  • The paradigm leverages advances in optical motion capture speed and accuracy to enable simultaneous online testing of complex robotic system components.

  • It promotes rapid development and validation testing of each of the perception, planning and control components.

  • Future Work

  • Automated methods for environment modeling (an object with markers could be inserted into the environment and immediately modeled for the application).

  • Automatic sensor calibration in the context of a ground truth world model.

  • Enhanced visualizations by fusing local sensing (gyroscope and force sensors) into the virtual environment.

?