Autonomy for General Assembly

Reid Simmons

Research Professor
Robotics Institute
Carnegie Mellon University

The Challenge

  • Autonomous manipulation of flexible objects for general assembly of vehicles

    • Dexterity

    • Precise perception

    • Speed

    • Reliability

  • The Specific Task

    • Insert a clip attached to a cable into a hole with millimeter tolerance

    • Year 2: Moving taskboard


Overall Approach

  • Utilize our previous work in robot autonomy

    • Multi-layered software architecture

    • Hierarchical, task-level description of assembly

    • Robust, low-level behaviors

    • Distributed visual servoing

    • Force sensing

    • Exception detection and recovery


Architectural Framework

  • Three-Tiered Architecture: reactive & deliberative, modular, with control loops at multiple levels of abstraction

    • Planning layer: deals with goals and resource interactions

    • Executive layer: task decomposition; task synchronization; monitoring; exception handling

    • Behavioral layer: deals with sensors and actuators
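The three tiers above can be sketched as nested control loops. A minimal illustration in Python, with every class and method name invented for this sketch (the real system is far richer):

```python
class BehavioralLayer:
    """Deals with sensors and actuators: a tight, fast control loop."""
    def __init__(self):
        self.position = 0.0
    def step_toward(self, target):
        # Simple proportional move toward the commanded setpoint.
        self.position += 0.5 * (target - self.position)
        return self.position

class ExecutiveLayer:
    """Decomposes tasks, synchronizes them, and monitors for failure."""
    def __init__(self, behavior):
        self.behavior = behavior
    def achieve(self, goal, tolerance=0.01, max_steps=100):
        for _ in range(max_steps):
            if abs(self.behavior.step_toward(goal) - goal) < tolerance:
                return True       # goal reached
        return False              # exception: report failure upward

class PlanningLayer:
    """Deals with goals and resource interactions."""
    def __init__(self, executive):
        self.executive = executive
    def run(self, goals):
        # Execute goals in order; a real planner would also reason
        # about resources and reorder or interleave goals.
        return [self.executive.achieve(g) for g in goals]

planner = PlanningLayer(ExecutiveLayer(BehavioralLayer()))
results = planner.run([1.0, -2.0])
print(results)  # [True, True]
```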


Syndicate: Multi-Robot Architecture

(Diagram: the robots' layered architectures, linked by synchronization / coordination connections.)


Syndicate Layers: Behavioral

  • Made up of “blocks”

    • Each block is a small thread/function/process

    • Represent hardware capabilities or repeatable behaviors

    • “Stateless”: relies on current data; no knowledge of past or future

  • Communicate with sensors

  • Send commands to robots and get feedback

  • Communicate data to other blocks
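A toy sketch of the block idea, assuming invented names: each block is a stateless function of its current inputs, with enable and connect operations wiring data flow (in the real system the executive layer does the enabling and connecting):

```python
class Block:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn          # stateless: depends only on current input
        self.enabled = False
        self.sinks = []       # downstream blocks this block sends data to

    def connect(self, other):
        self.sinks.append(other)

    def receive(self, data):
        if not self.enabled:
            return []
        out = self.fn(data)   # no stored history: current data only
        results = [(self.name, out)]
        for sink in self.sinks:
            results.extend(sink.receive(out))
        return results

# A "sensor" block scales a raw reading; a "command" block clamps it.
sensor = Block("camera", lambda raw: raw * 0.1)
command = Block("arm_cmd", lambda v: max(-1.0, min(1.0, v)))
sensor.connect(command)
for b in (sensor, command):
    b.enabled = True          # normally done by the executive layer

print(sensor.receive(25))     # [('camera', 2.5), ('arm_cmd', 1.0)]
```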


Ace Control Behaviors

(Diagram of the Ace control behaviors in the original presentation.)

Distributed Visual Servoing

(Diagram: the Mast Eye sends end-effector deltas to the Mobile Manipulator; the manipulator acts on the world via its arm; cameras return images of the world to the Mast Eye.)

  • Mast Eye tracks fiducials

    • Uses ARTag software package to detect fiducials

    • Provides 6-DOF transform between fiducials

  • Mobile Manipulator uses this information to plan how to achieve its goal

    • Uses a database describing the positions of fiducials on objects

  • Behavioral layer enables dynamic, transparent inter-agent connections
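The relative-transform bookkeeping behind this can be illustrated in 2-D, standing in for the real 6-DOF transforms; `compose` and `invert` are invented helper names. Given camera-to-fiducial poses, the fiducial-to-fiducial transform is camera-independent:

```python
import math

def compose(p, q):
    """Apply pose q = (x, y, theta) in the frame of pose p."""
    x, y, t = p
    qx, qy, qt = q
    return (x + qx * math.cos(t) - qy * math.sin(t),
            y + qx * math.sin(t) + qy * math.cos(t),
            t + qt)

def invert(p):
    """Inverse of a 2-D pose, so compose(invert(p), p) is the identity."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) - y * math.cos(t),
            -t)

# Camera sees two fiducials; their relative pose does not depend on
# where the camera is, so the camera needs no world-frame calibration.
cam_to_a = (1.0, 0.0, 0.0)
cam_to_b = (2.0, 1.0, math.pi / 2)
a_to_b = compose(invert(cam_to_a), cam_to_b)
print(a_to_b)  # (1.0, 1.0, pi/2): B is 1 m ahead of A, 1 m left, turned 90 deg
```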



Distributed Visual Servoing

  • Fairly precise

    • millimeter resolution at one meter

  • Relatively fast

    • 3-4 Hz

  • Basically unchanged from Trestle code

  • Operates in relative frame

    • Poses of one object relative to another

    • Controller continually tries to reduce pose difference

    • Cameras do not need to be precisely calibrated with respect to base or arm


Distributed Visual Servoing

  • Associating Fiducials with Objects

    • Programmer provides file listing the pose of each fiducial with respect to an object

    • Multiple fiducials can be associated with each object

    • Can measure poses directly, or use the system itself to estimate them

  • Reducing Pose Differences

    • “Waypoint” is the pose of one object with respect to another

      • Everything is relative!

    • Visual servo block multiplies pose difference by gain

    • Update moves when new information arrives
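The "pose difference times gain" update can be sketched as a proportional servo loop; all numbers here are illustrative:

```python
def servo_step(current, waypoint, gain=0.5):
    """One update: command a fraction of the remaining pose difference."""
    return tuple(c + gain * (w - c) for c, w in zip(current, waypoint))

pose = (0.0, 0.0, 0.0)            # current pose of one object relative to another
waypoint = (0.4, -0.2, 0.1)       # desired relative pose
for _ in range(20):               # one step per new pose estimate
    pose = servo_step(pose, waypoint)

error = max(abs(w - p) for p, w in zip(pose, waypoint))
print(f"residual error: {error:.6f}")
```

With gain 0.5 the residual shrinks by half on every update, so convergence is geometric and the loop never needs an absolute camera calibration, only the relative pose difference.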


Syndicate Layers: Executive

  • Made up of “tasks”

    • Each task is concerned with achieving a single goal

    • Tasks can be arranged temporally

  • Tasks can:

    • Spawn subtasks

    • Enable and connect blocks in the behavioral layer to achieve the task

      • Enable → tell a block to start running

      • Connect → tell blocks to send data to other blocks

    • Monitor blocks for failure

    • Provide failure recovery
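The monitor-and-recover pattern can be illustrated with a hypothetical retry-on-failure task runner; the names and the retry policy are invented, and real recovery strategies are richer than retrying:

```python
def run_task(name, action, retries=1):
    """Run an action; on monitored failure, recover by retrying a few times."""
    for attempt in range(retries + 1):
        if action(attempt):
            return f"{name}: succeeded on attempt {attempt + 1}"
    return f"{name}: failed, escalating to parent task"

# A subtask that fails once (e.g., a block reports an error), then succeeds.
flaky_insert = lambda attempt: attempt >= 1
print(run_task("InsertClip", flaky_insert, retries=1))
# InsertClip: succeeded on attempt 2
```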


Ace Task Decomposition

(Task-tree diagram; "Child link" labels the edges between parent and child tasks.)


Example TDL Code (somewhat simplified)

// "Goal" is a keyword that says this is not supposed to be a "leaf" in the task tree
Goal ClipInsertion ( ) {

  // In the tree, loadPlugArmPose is the task name; ArmMove is the actual
  // function being executed (and is reused below with different parameters)
  loadPlugArmPose: spawn ArmMove (loadPose);

  // WITH SERIAL tells the system to execute this task after loadPlugArmPose completes
  stowArmPose: spawn ArmMove (stowPose) WITH SERIAL loadPlugArmPose;

  roughBaseMove: spawn RoughBaseMove (roughBaseWaypoint) WITH SERIAL loadPlugArmPose;

  // Wait until both tasks have completed before starting RoughArmMove
  spawn RoughArmMove (roughArmWaypoint) WITH SERIAL roughBaseMove, SERIAL stowArmPose;
}
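The WITH SERIAL ordering can be mimicked in Python with futures: each task blocks on the tasks it is serialized after. This is only an analogue of TDL's semantics, not TDL itself:

```python
from concurrent.futures import ThreadPoolExecutor

log = []  # completion order of the tasks

def task(name, *wait_for):
    for f in wait_for:
        f.result()            # SERIAL: block until the predecessor completes
    log.append(name)

with ThreadPoolExecutor(max_workers=4) as pool:
    load = pool.submit(task, "loadPlugArmPose")
    stow = pool.submit(task, "stowArmPose", load)
    base = pool.submit(task, "roughBaseMove", load)
    pool.submit(task, "roughArmMove", base, stow).result()

print(log[0], log[-1])  # loadPlugArmPose roughArmMove
```

The two middle tasks may finish in either order, exactly as the TDL constraints permit; only the first and last tasks are fully ordered.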



Initial Results (December 2007)

  • Used Previous Hardware

    • RWI base

    • Metrica 5 DOF arm

    • Metal & plastic gripper

  • Successfully Inserted Clip

    • 60% success rate (15 trials)

      • Failures mainly attributable to hardware problems

    • Fairly slow (~1 minute)

    • Scripted base move


Insertion Video

(Video shown in the original presentation.)

Current Status

  • Moved to New Hardware

    • Powerbot base

    • WAM arm (Barrett)

    • All-metal gripper

  • Still Successfully Inserting Clip

    • Much faster

      • Better hardware

      • “Rough” moves

    • Base motion is planned, not scripted

    • Uses force sensing to detect completion / problem

    • Have not yet characterized success rate
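One sketch of force-based completion detection: a force spike followed by a sharp drop suggests the clip snapped home, while sustained high force suggests a jam. The thresholds, readings, and classification rule here are invented for illustration:

```python
SNAP_THRESHOLD = 5.0   # force (N) the clip resists with before snapping in
JAM_THRESHOLD = 8.0    # sustained force above this suggests a problem

def classify(forces):
    """Classify a short window of force readings from the insertion."""
    peak = max(forces)
    if peak >= JAM_THRESHOLD and forces[-1] >= JAM_THRESHOLD:
        return "problem"            # pushing hard with no give: likely jammed
    if peak >= SNAP_THRESHOLD and forces[-1] < SNAP_THRESHOLD / 2:
        return "complete"           # force spiked, then released: clip seated
    return "in progress"

print(classify([0.5, 2.0, 6.0, 1.0]))   # complete
print(classify([1.0, 4.0, 9.0, 9.5]))   # problem
```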


Upcoming Work

  • Near Term (1 month)

    • Complete hardware integration

      • Laser, PTU, VizTracker

    • Characterize success rate of system

  • Mid Term (2-6 months)

    • Convert to velocity control of WAM

    • Use force control for actual insertion

    • Increase reliability through execution monitoring and exception handling

  • Far Term (2nd year of contract)

    • Insert clip into moving taskboard
