
Adaptive Autonomous Robot TEAMS for Situational Awareness

  • PI: Vijay Kumar
    Senior Personnel: Camillo Jose Taylor, Jim Ostrowski
    Research Associates: James Keller, John Spletzer, Aveek Das, Guilherme Pereira, Luiz Chaimowicz, Jong-Woo Kim, Anthony Cowley
    GRASP Laboratory, University of Pennsylvania

  • Co-PI: Ron Arkin
    Senior Personnel: Tucker Balch, Robert Burridge
    Research Associates: Keith O’Hara, Patrick Ulam, Alan Wagner
    Mobile Robotics Laboratory, Georgia Institute of Technology

  • Co-PI: Gaurav Sukhatme
    Senior Personnel: Maja Mataric, Andrew Howard, Ashley Tews
    Research Associates: Srikanth Saripalli, Boyoon Jung, Brian Gerkey, Helen Yan
    Robotics Research Laboratory, University of Southern California

  • Co-PI: Jason Redi
    Senior Personnel: Josh Bers, Keith Manning
    BBN Technologies


Future Combat Systems

The Future Combat Systems (FCS) concept revolves around the creation of a network-centric force of heterogeneous platforms that is strategically responsive, lethal, survivable, and sustainable. Key challenges include:

  • communication among active mobile nodes during network-centric warfare

  • integration of multiple, heterogeneous views of the target area



Key FCS Considerations

  • Adapt to variations in communication performance and strive to maximize suitably defined network-centric measures for perception, control, and communication

  • Provide situational awareness for remotely located war fighters in a wide range of conditions

  • Integrate heterogeneous air-ground assets in support of continuous operations over varying terrain



Context

  • Communication Network

    • 400 MHz (100 Kbps), 2.4 GHz (10 Mbps), 38 GHz (100 Mbps)

    • Affected by foliage, buildings, terrain features, indoor/outdoor

    • Directionality

  • Small Team of Heterogeneous Robots

    • UGVs with vision, range finders

    • UAVs (blimp, helicopter)
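As a rough, hypothetical illustration of how a node might trade off these bands, the sketch below picks the fastest link whose estimated foliage loss still fits a fixed margin; the data rates follow the slide, but the attenuation figures and the margin are placeholders, not measured values.

```python
# Illustrative band selection; the per-metre foliage losses are made-up placeholders.
BANDS = [
    # (name, data rate in bit/s, extra loss in dB per metre of foliage -- placeholder)
    ("38 GHz",  100e6, 3.0),   # high rate, most sensitive to obstructions
    ("2.4 GHz", 10e6,  0.5),
    ("400 MHz", 100e3, 0.1),   # low rate, penetrates foliage best
]

def pick_band(foliage_m: float, margin_db: float = 20.0) -> str:
    """Return the fastest band whose estimated foliage loss fits the link margin."""
    for name, rate, loss_per_m in BANDS:            # ordered fastest first
        if loss_per_m * foliage_m <= margin_db:
            return name
    return BANDS[-1][0]                             # fall back to 400 MHz

print(pick_band(foliage_m=10.0))   # -> "2.4 GHz" with these placeholder numbers
```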



GOALS

  • A comprehensive model and framework integrating communications, perception, and execution

  • Automated acquisition of perceptual information for situational awareness

  • Communications-sensitive reactive group behaviors for a team of air- and ground-based robots

  • A new framework for mobile networking in which robots use sensory information and relative position information to adapt network topology to the constraints of the task.


MARS TEAMS Project Organization

  • GRASP Laboratory, University of Pennsylvania: control, vision

  • Mobile Robotics Laboratory, Georgia Institute of Technology: behaviors, architecture

  • Robotics Research Laboratory, University of Southern California: sensing, mapping

  • BBN Technologies: communications, networking



Thrusts

  • Ad Hoc Networks for Control, Perception and Communication

  • Software framework for distributed computation, sensing, control, and human-robot interface

  • Communications-sensitive operations

  • Network-centric approach to situational awareness

  • Mission-specific planning and control for a team of heterogeneous robots

  • Adaptation of behaviors and networks to changing conditions



Thrusts and Tasks


1. Ad Hoc Networks for Control, Perception and Communication

  • Physical Network (R, E_S)

  • Communication Network (R, E_C)

  • Computational Network (R, H)

Edge and node attributes:

  e_ij = {i, j, b_m, b_v, d_m, d_v}

  q_i = {s_m, s_v}
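A minimal data-structure sketch of these three overlaid graphs follows. The mean/variance reading of the subscripts (b_m/b_v = bandwidth, d_m/d_v = delay, s_m/s_v = state estimate) is our interpretation of the notation, not something stated on the slide.

```python
# Sketch of the three overlaid graphs on the robot set R; attribute meanings
# are assumptions (see lead-in above).
from dataclasses import dataclass, field

@dataclass
class CommEdge:                  # e_ij = {i, j, b_m, b_v, d_m, d_v}
    i: str
    j: str
    bandwidth_mean: float        # b_m (bit/s)
    bandwidth_var: float         # b_v
    delay_mean: float            # d_m (s)
    delay_var: float             # d_v

@dataclass
class RobotNode:                 # q_i = {s_m, s_v}
    name: str
    state_mean: list = field(default_factory=list)   # s_m
    state_var: list = field(default_factory=list)    # s_v

@dataclass
class TeamNetworks:
    robots: dict                 # R: robot name -> RobotNode
    physical_edges: set          # E_S: pairs within sensing range
    comm_edges: dict             # E_C: (i, j) -> CommEdge
    host_assignment: dict        # H: process name -> robot name

    def comm_neighbors(self, name: str) -> set:
        """Robots that share a live communication link with `name`."""
        return ({j for (i, j) in self.comm_edges if i == name} |
                {i for (i, j) in self.comm_edges if j == name})
```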



Models of Communication

  • Modeling

    • Effect of foliage

    • Buildings

    • Dependence on frequency, directionality

    • Statistical models of delays and “hot spots” from experimental data

      • Neighbors, path costs (delays, power)

      • Time of last communication

  • QoS metrics

    • Control/perception tasks

    • Individual robots vs. end-to-end

    • Move to improve reliability and network performance

  • Interface between network and robot software
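One simple way to turn such per-link statistics into an end-to-end QoS figure is a shortest-path computation over expected delay. The sketch below uses made-up link delays and is an illustration, not the project's metric.

```python
import heapq

def expected_path_delay(links, src, dst):
    """Dijkstra over expected per-link delays (seconds).
    `links` maps a node to a dict of {neighbor: mean_delay}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, delay in links.get(u, {}).items():
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")          # dst unreachable: the network is partitioned

# Example with made-up delays: UGV1 reaches the operator via a UAV relay.
links = {
    "UGV1": {"UAV": 0.02, "UGV2": 0.05},
    "UGV2": {"operator": 0.08},
    "UAV":  {"operator": 0.01},
}
print(expected_path_delay(links, "UGV1", "operator"))   # 0.03
```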



Self-Awareness and Cooperative Localization (Penn)

  • Discovery – robots can organize themselves into a team

  • Localization – establish relative pose information

[Figure: five robots, R1 through R5, discover one another and establish relative poses within the team]
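Once relative poses are established, they can be chained so that a robot places teammates it has never sensed directly in its own frame. A minimal planar sketch with illustrative numbers:

```python
import math

def compose(ab, bc):
    """Compose two relative planar poses (x, y, theta): frame a->b then b->c."""
    (x1, y1, t1), (x2, y2, t2) = ab, bc
    return (x1 + math.cos(t1) * x2 - math.sin(t1) * y2,
            y1 + math.sin(t1) * x2 + math.cos(t1) * y2,
            t1 + t2)

# Hypothetical example: R1 observes R2, R2 observes R3; R1 can then place R3
# in its own frame without ever sensing it directly.
r1_r2 = (2.0, 0.0, math.pi / 2)     # R2 is 2 m ahead of R1, turned 90 degrees
r2_r3 = (1.0, 0.0, 0.0)             # R3 is 1 m ahead of R2
print(compose(r1_r2, r2_r3))        # ~(2.0, 1.0, 1.5708)
```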



Self-Awareness and Cooperative Localization

  • Network of UGVs and Surrogate UAV

  • Reactive controllers that maintain and exploit the network



Cooperative Control (Penn)

  • Reactive controllers that maintain and exploit network

  • Controllers and estimators are represented by graphs

  • Fundamental connection between graph structure and performance (stability, convergence)
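One standard way to make the graph-structure/performance connection concrete (our illustration, not necessarily the formulation used in this work) is the linear consensus protocol x_dot = -L x, whose convergence rate is governed by the algebraic connectivity lambda_2 of the graph Laplacian:

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian L = D - A for an undirected graph on n nodes."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

def algebraic_connectivity(L):
    """lambda_2 of L: zero iff the graph is disconnected; larger means
    faster convergence of the consensus dynamics x_dot = -L x."""
    return float(np.sort(np.linalg.eigvalsh(L))[1])

chain = [(0, 1), (1, 2), (2, 3), (3, 4)]            # sparse chain of 5 robots
ring  = chain + [(4, 0)]                             # one extra link closes a ring
print(algebraic_connectivity(laplacian(5, chain)))   # ~0.38
print(algebraic_connectivity(laplacian(5, ring)))    # ~1.38 (better connected)
```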



2. Software framework for distributed computation, sensing, control, and human-robot interface

  • Player/Stage (USC)

    • Robots

    • Sensors

      • Sonar

      • IR

      • Scanning LRF, cameras (color blob detection)

    • Integration



2. Software framework for distributed computation, sensing, control, and human-robot interface (continued)

  • ROCI (Penn)

    • Discover other processes

    • Communicate with other processes

    • Monitor other processes

    • Control other processes
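ROCI's actual API is not reproduced here; as a purely hypothetical sketch of the "discover other processes" capability, a process could broadcast and collect UDP beacons. The port number and message format below are invented for illustration.

```python
# Hypothetical discovery sketch (NOT the ROCI API): broadcast a small UDP
# beacon describing this process and listen for beacons from peers.
import json, socket, time

PORT = 50000          # made-up port

def announce(name, services):
    """Broadcast one beacon describing this process and its services."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    payload = json.dumps({"name": name, "services": services, "t": time.time()})
    sock.sendto(payload.encode(), ("<broadcast>", PORT))
    sock.close()

def listen(timeout=3.0):
    """Collect beacons from peers for `timeout` seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    sock.settimeout(timeout)
    peers = {}
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            msg = json.loads(data.decode())
            peers[msg["name"]] = (addr[0], msg["services"], msg["t"])
    except socket.timeout:
        pass
    finally:
        sock.close()
    return peers
```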



3. Communications-sensitive behaviors and operations

  • Networking

    • Models (BBN)

    • Diagnostics (BBN)

  • Control of Mobility

    • Behaviors (GT)

    • Verification and Analysis (Penn)

  • Operations (Thrust 5)

    • Mission specification (GT)

    • Mission Planning (GT)



4. Network-centric approach to situational awareness

  • Cooperative Localization

    • Vision (Penn)

    • Range sensors, GPS, and IMU (USC)

    • Unreliable communication

  • Acquisition of 3-D information (Penn)

  • Cooperative behaviors (USC, Penn)

  • Cooperative mapping (USC)

  • Semantic markup of maps (USC)



5. Mission-specific planning and control for a team of heterogeneous robots

  • FCS scenarios (BBN, GT)

  • MissionLab integration (GT)



6. Adaptation of behaviors and networks to changing conditions

  • Adaptation of control modes (Penn)

  • Reinforcement learning to adapt mode switching (sequential composition of behaviors) (USC, Penn)

  • Path-referenced perception and selection of behaviors (USC)

  • Variable autonomy (USC)

  • Operation under stealth (USC)
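A toy sketch of reinforcement-learning-driven mode switching follows (tabular Q-learning over abstract behavior modes); the modes, states, and rewards are placeholders, not the project's.

```python
import random
from collections import defaultdict

MODES = ["patrol", "relay", "regroup"]      # placeholder behavior modes
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2           # learning rate, discount, exploration

Q = defaultdict(float)                       # Q[(state, mode)]

def choose_mode(state):
    """Epsilon-greedy selection over behavior modes."""
    if random.random() < EPS:
        return random.choice(MODES)
    return max(MODES, key=lambda m: Q[(state, m)])

def update(state, mode, reward, next_state):
    """One tabular Q-learning backup after executing `mode` in `state`."""
    best_next = max(Q[(next_state, m)] for m in MODES)
    Q[(state, mode)] += ALPHA * (reward + GAMMA * best_next - Q[(state, mode)])
```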


Technology Integration

  • Air-ground coordination

  • Command and control vehicle

  • Software: mission planning, control for communications, active perception, infrastructure for distributed computing



GT Personnel

Faculty

Prof. Ron Arkin

Prof. Tucker Balch

Dr. Robert Burridge

GRAs

Keith O’Hara

Patrick Ulam

Alan Wagner

Mobile Intelligence Inc.

Dr. Doug MacKenzie

Georgia Institute of Technology



Impact - GT

  • Provide communication-sensitive planning and behavioral control algorithms in support of network-centric warfare that employ valid communications models provided by BBN

  • Provide an integrated mission specification system (MissionLab) spanning heterogeneous teams of UAVs and UGVs

  • Demonstrate warfighter-oriented tools in three contexts: simulation, laboratory robots, and in the field



Task 1: Communication-sensitive Mission Specification

  • MissionLab is usability-tested mission-specification software developed under extensive DARPA funding (RTPC / UGV Demo II / TMR / UGCV / MARS / FCS-C programs)

    • Adapt to incorporate air-ground communication-sensitive command and control mechanisms

    • Extend to support physical and simulated experiments for objective air and ground platforms

    • Incorporate new communication tasks and triggers



Task 2: Communication-Sensitive Planning

  • Add support for terrain models and other communications-relevant topographic features to MissionLab

  • Use plans-as-resources as a basis for multiagent robotic communication control (spatial, behavioral, formations, etc.) and integrate within MissionLab



Task 3: Communication-Sensitive Team Behaviors

  • Generation and testing of a new set of reactive communication-preserving and recovery behaviors

  • Creation of behaviors sensitive to QoS

  • Expansion of behaviors in support of line-of-sight and subterranean operations
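One simple shape such a communications-recovery behavior can take (an illustration, not the MissionLab implementation): remember the last position where link quality was acceptable and fall back toward it when the link degrades.

```python
# Illustrative recovery behavior: retreat toward the last known good-comm position.
GOOD_LINK = 0.6       # placeholder threshold on normalized link quality

class CommRecovery:
    def __init__(self):
        self.last_good_pos = None

    def step(self, pos, link_quality, goal):
        """Return the waypoint to head for this control cycle."""
        if link_quality >= GOOD_LINK:
            self.last_good_pos = pos       # remember where comms still worked
            return goal                    # keep executing the mission
        if self.last_good_pos is not None:
            return self.last_good_pos      # fall back until the link recovers
        return pos                         # no known good spot: hold position
```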



Task 4: Communication Models and Fidelity

  • Work with BBN to incorporate suitable communication models into MissionLab in support of both simulation and field tests



Task 5: Technology Integration

  • Conduct early demonstrations on ground robots at Georgia Tech

  • Provide our Hummer command and control vehicle for team support at the objective demonstration

    • Currently being used for FCS-C Program

    • Fully actuated – capable of teleautonomous control



University of Southern California

  • Faculty:

    • Prof. Gaurav Sukhatme

    • Prof. Maja Mataric

  • Research Associates:

    • Dr. Andrew Howard

    • Dr. Ashley Tews

  • Graduate Students:

    • Srikanth Saripalli, Boyoon Jung, Brian Gerkey, Helen Yan


USC Task Summary

  • Outdoor simulation

  • Cooperative outdoor localization

  • Semantic representations

  • Stealthy behaviors

  • Path-referenced perception

  • HRI integration



Task 1: Stage Simulation

  • Current

    • Multi-robot 2D simulation, models differential and omni-drive robots, sonar, IR, scanning LRF, cameras (color blob detection), pan-tilt-zoom heads, and simple 2 DOF grippers

    • Language independent, architecture neutral

  • Extensions

    • 3D simulation for outdoor terrain.

    • Incorporate USC helicopter and UPenn blimp



Task 2: Cooperative Outdoor Localization

  • Extend existing localization algorithms to outdoor environments.

  • Implement outdoor localization in the presence of partial GPS.

  • Validate through outdoor experiments with small teams (4 ground robots).
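A minimal sketch of one step of localization under partial GPS: a GPS-denied robot anchors itself to a teammate's fix through a relative range/bearing observation (planar, no noise model; illustrative only, not the project's estimator).

```python
import math

def locate_from_teammate(teammate_xy, teammate_heading, rng, bearing):
    """Place a GPS-denied robot using a teammate's GPS fix plus a relative
    range/bearing measurement taken by that teammate (planar, noise-free)."""
    gx, gy = teammate_xy
    a = teammate_heading + bearing
    return (gx + rng * math.cos(a), gy + rng * math.sin(a))

# Made-up numbers: teammate at (10, 5) heading east sees me 4 m away, 90 deg left.
print(locate_from_teammate((10.0, 5.0), 0.0, 4.0, math.pi / 2))   # (10.0, 9.0)
```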



Task 3: Semantic Representation and Activity Recognition

  • Semantic mark-up of maps with the following attributes:

    • elevation, terrain type and traversability, foliage and coverage type, and impact on communications.

  • Integrate activity/motion detection algorithms to locate people in the environment.

  • Demonstrate semantic markup using ground robots at USC.



Task 4: Variable Autonomy and Stealth

  • Develop and implement behaviors for variable autonomy incorporating operator feedback using gestures

  • Develop and implement a new “stealthy patrolling” behavior by integrating visibility constraints into current patrolling algorithms

  • Adapt and tune above behaviors using reinforcement learning to improve performance
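A sketch of one way to fold visibility constraints into patrolling (an illustration, not the algorithm developed here): score candidate waypoints by how many known observer positions have line of sight to them on an occupancy grid, and prefer the least-visible ones.

```python
def visible(grid, a, b):
    """Coarse line-of-sight test on an occupancy grid (cell value 1 = blocking cover)."""
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for k in range(steps + 1):
        x = round(x0 + (x1 - x0) * k / steps)
        y = round(y0 + (y1 - y0) * k / steps)
        if (x, y) not in (a, b) and grid[y][x] == 1:
            return False                    # a blocking cell hides the target
    return True

def pick_stealthy_waypoint(grid, candidates, observers):
    """Prefer candidate waypoints seen by the fewest known observers."""
    return min(candidates,
               key=lambda c: sum(visible(grid, o, c) for o in observers))
```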



Task 5: Path-referenced Perception and Behaviors

  • Develop path-referenced perception and behaviors, which allow recall of behavioral strategies relative to prior paths taken during the mission

  • Integrate path-referencing, which allows robots to query each other for the relative locations of semantic mark-ups



Task 6: Human Robot Interface

  • Extend Stage to serve as a simple visual display for the war fighter, overlaying visual information with laser information in Stage.

  • Provide simple auditory feedback to the operator about the current behavioral state of the robots.



Technology Integration

  • Demonstrations at USC of cooperative localization (laser-based, with IMU and GPS) using ground robots and the USC helicopter.

  • Demonstration at USC of activity detection, semantic markup of terrain and stealthy traverses.

  • Support joint demonstration with ground robots.



Faculty

Vijay Kumar

Camillo Jose Taylor

Jim Ostrowski

Research Associates

James Keller

Luiz Chaimowicz

Students

John Spletzer

Aveek Das

Guilherme Pereira

Jong-Woo Kim

Vito Sabella

University of Pennsylvania



Task 1: Model of Ad Hoc Network

  • Develop a comprehensive model for control, perception and communication for situational awareness

  • Integrate models of interference, bandwidth, latency and QoS of the communication network with models of control, sensing and communication.

  • Performance measure F(R, H)

  • Implications for mobility: gradient ∇F(R, H)
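The gradient ∇F(R, H) suggests a simple mobility rule: estimate how F changes with each robot's position and step uphill. The sketch below does this numerically with a placeholder F that merely rewards short inter-robot links; the real performance measure is the subject of this task.

```python
# Sketch of "mobility follows the gradient": robots take small steps along a
# numerical estimate of dF/dposition. F below is a placeholder, not the project's.
import numpy as np

def numeric_gradient(F, positions, h=1e-3):
    """Finite-difference gradient of F with respect to every robot coordinate."""
    grad = np.zeros_like(positions)
    for idx in np.ndindex(positions.shape):
        bumped = positions.copy()
        bumped[idx] += h
        grad[idx] = (F(bumped) - F(positions)) / h
    return grad

def gradient_step(F, positions, step=0.1):
    """Move each robot a small distance uphill on F."""
    return positions + step * numeric_gradient(F, positions)

# Placeholder F: penalize long inter-robot links (favors staying connected).
def F(positions):
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return -float(d.sum())

robots = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
robots = gradient_step(F, robots)     # robots drift toward one another
```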



Task 2: Control Of Mobility

  • 1. Design controllers and behaviors in support of communications, establishing or sustaining links

  • 2. Design controllers and behaviors in support of situational awareness

  • 3. Formal analysis of controllers and behaviors to predict team performance



Task 3: Adaptation

  • Performance functions for the ad hoc network and adaptation using reinforcement learning

  • Reconfiguration of network to enable integration and fusion of sensory data in support of human interaction and situational awareness



Task 4: Human Robot Team Interface

  • Synthesis and integration for perception enabling multiple views at different spatio-temporal resolutions

  • Interface for human-robot interaction

    • ROCI

    • Macroscope



Task 5: Performance Metrics, Verification and Validation

  • 1. Metrics for control, communication, and perception technologies, and measures of overall system performance

    • Existing measures do not capture the interdependence of control, communication, and perception

  • 2. Designing and conducting experiments to measure performance



Task 6: Technology Integration

  • Coordinated motion of four UGVs and one blimp optimizing end-to-end network performance

  • Team control, realization of situational awareness using ROCI.


Summary of Tasks

Penn GRASP

  • Integrated model for control, perception and communication for situational awareness

  • Synthesis and integration for perception enabling multiple views at different spatio-temporal resolutions

Georgia Tech MRL

  • Communication-sensitive planning and behavioral control algorithms in support of network-centric warfare

  • Integrated mission specification system (MissionLab) spanning heterogeneous teams of UAVs and UGVs

USC RRL

  • Cooperative outdoor localization for small teams of robots

  • Semantic mark-up of maps with environmental attributes and recognition of activity

  • Behaviors for path-referenced perception and for clandestine operations

BBN

  • Models of QoS and metrics of performance for network-centric warfare

  • Interface design between network and robot modules

  • Formulation of FCS needs, capabilities, and design of demonstrations


MARS TEAMS Impact

  • New paradigm and novel algorithms for network-centric operations

  • Mobile nodes that reconfigure to maintain and enhance connectivity

  • Air-Ground coordination will directly impact FCS capabilities

