Intelligent Behaviors for Simulated Entities

I/ITSEC 2006 Tutorial

Presented by:Ryan Houlette

Stottler Henke Associates, Inc.

[email protected]

617-616-1293

Jeremy Ludwig

Stottler Henke Associates, Inc.

[email protected]

541-302-0929


Outline

  • Defining “intelligent behavior”

  • Authoring methodology

  • Technologies:

    • Cognitive architectures

    • Behavioral approaches

    • Hybrid approaches

  • Conclusion

  • Questions


The Goal

  • Intelligent behavior

    • a.k.a. entities acting autonomously

    • generally replacements for humans

      • when humans are not available

        • scheduling issues

        • location

        • shortage of necessary expertise

        • simply not enough people

      • when humans are too costly



“Intelligent Behavior”

  • Pretty vague!

  • General human-level AI not yet possible

    • computationally expensive

    • knowledge authoring bottleneck

  • Must pick your battles

    • what is most important for your application

    • what resources are available


Decision Factors

  • Entity “skill set”

  • Fidelity

  • Autonomy

  • Scalability

  • Authoring


Factor: Entity Skill Set

  • What does the entity need to be able to do?

    • follow a path

    • work with a team

    • perceive its environment

    • communicate with humans

    • exhibit emotion/social skills

    • etc.

  • Depends on purpose of simulation, type of scenario, echelon of entity


    Factor: Fidelity

    • How accurate does the entity’s behavior need to be?

      • correct execution of a task

      • correct selection of tasks

      • correct timing

      • variability/predictability

  • Again, depends on purpose of simulation and echelon

    • training => believability

    • analysis => correctness


    Factor: Autonomy

    • How much direction does the entity need?

      • explicitly scripted

      • tactical objectives

      • strategic objectives

  • Behavior reusable across scenarios

  • Dynamic behavior => less brittle


    Factor: Scalability

    • How many entities are needed?

      • computational overhead

      • knowledge/behavior authoring costs

  • Can be mitigated

    • aggregating entities

    • distributing entities


    Factor: Authoring

    • Who is authoring the behaviors?

      • programmers

      • knowledge engineers

      • subject matter experts

      • end users / soldiers

  • Training/skills required for authoring

  • Quality of authoring tools

  • Ease of modifying/extending behaviors


    Choosing an Approach

    • Also ease of integration with simulation...

    [Diagram: the five decision factors – Skill Set, Fidelity, Autonomy, Scalability, Ease of Authoring – as competing considerations when choosing an approach]


    Agent Technologies

    • Wide range of possible approaches

    • Will discuss the two extremes

    [Spectrum diagram: deliberative ↔ reactive, with Cognitive Architectures (EPIC, ACT-R, Soar) at the deliberative end and Behavioral Approaches (scripting, FSMs) at the reactive end]


    Authoring Methodologies

    [Diagram: the behavior model runs on an agent architecture, which in turn interfaces with the simulation]



    Basic Authoring Procedure

    [Cycle diagram: Determine desired behavior → Build behavior model → Run simulation → Evaluate entity behavior → either DONE! or Refine behavior model and repeat]


    Iterative Authoring

    • Often useful to start with limited set of behaviors

      • particularly when learning new architecture

      • depth-first vs. breadth-first

    • Test early and often

    • Build initial model with revision in mind

      • good software design principles apply: modularity, encapsulation, loose coupling

    • Determining why a model behaved incorrectly can be difficult

      • some tools can help provide insight


    The Knowledge Bottleneck

    • Model builder is not subject matter expert

    • Transferring knowledge is labor-intensive

      • For TacAir-Soar, 70-90% of model dev. time

    • To reduce the bottleneck:

      • Repurpose existing models

      • Use SME-friendly modeling tools

      • Train SMEs in modeling skills

    • => Still an unsolved problem


    The Simulation Interface

    • Simulation sets bounds of behavior

      • the primitive actions entities can perform

      • the information about the world that is available to entities

    • Can be useful to “move interface up”

      • if simulation interface is too low-level

      • abstract away simulation details

        • in wrapper around agent architecture

        • in “library” within the behavior model itself

      • enables behavior model to be in terms of meaningful units of behavior
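As a sketch of “moving the interface up,” the hypothetical wrapper below assumes a low-level simulation API (the method names `plan_path`, `position`, `step_toward`, and `sensor_contacts` are invented stand-ins, not any real simulator's calls) and exposes model-sized actions on top of it:

```python
class EntityInterface:
    """Hypothetical wrapper that raises a low-level simulation API to
    the level the behavior model reasons at. The sim methods used here
    (plan_path, position, step_toward, sensor_contacts) are invented
    stand-ins for whatever the real simulation exposes."""

    def __init__(self, sim):
        self.sim = sim

    def move_to(self, waypoint):
        # one model-level action hides many per-step movement commands
        for step in self.sim.plan_path(self.sim.position(), waypoint):
            self.sim.step_toward(step)

    def visible_threats(self):
        # aggregate raw sensor contacts into a model-level concept
        return [c for c in self.sim.sensor_contacts() if c.hostile]
```

The behavior model then calls `move_to` and `visible_threats` and never touches simulator primitives, which is what lets the same model survive a change of simulation back end.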


    Cognitive Architectures

    • Overview

    • EPIC, ACT-R, & Soar

    • Examples of Cognitive Models

    • Strengths / Weakness of Cognitive Architectures



    Introduction

    • What is a cognitive architecture?

      • “a broad theory of human cognition based on a wide selection of human experimental data and implemented as a running computer simulation” (Byrne, 2003)

    • Why cognitive architectures?

      • Advance psychological theories of cognition

      • Create accurate simulations of human behavior


    Introduction

    • What is cognition?

    • Where does psychology fit in?


    Cognitive Architecture Components


    A Theory – The Model Human Processor

    • Some principles of operation

      • Recognize-act cycle

      • Fitts’ law

      • Power law of practice

      • Rationality principle

      • Problem space principle

    (from Card, Moran, & Newell, 1983)
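Two of these principles reduce to simple formulas. The Python sketch below states Fitts’ law and the power law of practice in their standard forms; the coefficient values in the usage lines are illustrative only, not fitted data from the tutorial:

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Fitts' law: time to point at a target grows with the
    index of difficulty ID = log2(2D / W)."""
    return a + b * math.log2(2 * distance / width)

def practice_time(t1, n, alpha):
    """Power law of practice: the n-th trial takes t1 * n^(-alpha),
    so performance speeds up steeply at first, then flattens."""
    return t1 * n ** (-alpha)

# Illustrative coefficients only (not fitted data):
mt = fitts_movement_time(a=0.1, b=0.15, distance=200, width=20)
t50 = practice_time(t1=10.0, n=50, alpha=0.4)
```

Cognitive architectures bake regularities like these into the runtime, which is why their timing predictions can be meaningful at the millisecond scale.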


    Architecture

    • Definition

      • “a broad theory of human cognition based on a wide selection of human experimental data and implemented as a running computer simulation” (Byrne, 2003)

    • Two main components in modeling

      • Cognitive model programming language

      • Runtime Interpreter


    EPIC Architecture

    • Processors

      • Cognitive

      • Perceptual

      • Motor

    • Operators

      • Cognitive

      • Perceptual

      • Motor

      • Knowledge Representation

    (from Kieras, http://www.eecs.umich.edu/~kieras/epic.html)


    Model

    [Diagram: a model pairs a task description and task environment with a task strategy; the strategy is written in the architecture's language and executed by the architecture's runtime]


    Task Description

    • There are two points on the screen: A and B.

    • The task is to point to A with the right hand, and press the “Z” key with the left hand when it is reached.

    • Then point from A to B with the right hand and press the “Z” key with the left hand.

    • Finally point back to A again, and press the “Z” key again.


    Task Environment

    [Screen layout: two points, A and B]


    Task Strategy – EPIC Production Rules


    EPIC Production Rule

    (Top_point_A
     IF
      ((Step Point AtA)
       (Motor Manual Modality Free)
       (Motor Ocular Modality Free)
       (Visual ?object Text My_Point_A))
     THEN
      ((Send_to_motor Manual Perform Ply Cursor ?object Right)
       (Delete (Step Point AtA))
       (Add (Step Click AtA))))


    ACT-R and Soar

    • Motivations

    • Features

    • Models


    Initial Motivations

    • ACT-R

      • Memory

      • Problem solving

    • Soar

      • Learning

      • Problem solving


    ACT-R Architecture

    (from Budiu, R., http://actr.psy.cmu.edu/about/)


    Some ACT-R Features

    • Declarative memory stored in chunks

      • Memory activation

      • Buffer size between modules is one chunk

    • One rule per cycle

    • Learning

      • Memory retrieval, production utilities

      • New productions, new chunks


    ACT-R 6.0 IDE


    Task Description

    • Simple Addition

      • 1 + 3 = 4

      • 2 + 2 = 4

    • Goal: mimic the performance of four year olds on simple addition tasks

      • This is a memory retrieval task, where each number is retrieved (e.g., 1 and 3) and then an addition fact is retrieved (1 + 3 = 4)

      • The task demonstrates partial matching of declarative memory items, and requires tweaking a number of parameters.

    • From the ACT-R tutorial, Unit 6
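The retrieval mechanism behind this task can be sketched in miniature. The Python below is a drastically simplified stand-in for ACT-R's activation and partial-matching machinery (real ACT-R adds activation noise, base-level learning, and graded slot similarities); the addition facts and parameter values are hypothetical:

```python
def retrieve(chunks, request, mismatch_penalty=1.0, threshold=-1.0):
    """Return the chunk with the highest score, where the score is
    base-level activation minus a penalty per mismatched slot.
    Chunks below the retrieval threshold are never returned."""
    best, best_score = None, threshold
    for chunk in chunks:
        score = chunk["base_activation"]
        for slot, value in request.items():
            if chunk["slots"].get(slot) != value:
                score -= mismatch_penalty  # partial matching
        if score > best_score:
            best, best_score = chunk, score
    return best

# Hypothetical addition facts for the "1 + 3 = 4" task:
facts = [
    {"base_activation": 0.6, "slots": {"arg1": 1, "arg2": 3, "sum": 4}},
    {"base_activation": 0.5, "slots": {"arg1": 2, "arg2": 2, "sum": 4}},
]
answer = retrieve(facts, {"arg1": 1, "arg2": 3})
# answer is the first fact, so answer["slots"]["sum"] == 4
```

Because mismatches only penalize rather than disqualify, a similar-but-wrong fact can win when penalties are small, which is exactly how the model reproduces children's addition errors.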


    ACT-R 6.0 Production Rules

    (p retrieve-first-number
       =goal>
          isa      problem
          arg1     =one
          state    nil
    ==>
       =goal>
          state    encoding-one
       +retrieval>
          isa      number
          name     =one
    )

    (p encode-first-number
       =goal>
          isa      problem
          state    encoding-one
       =retrieval>
          isa      number
    ==>
       =goal>
          state    retrieve-two
          arg1     =retrieval
    )


    Some Relevant ACT-R Models

    • Best, B., Lebiere, C., & Scarpinatto, C. (2002). A model of synthetic opponents in MOUT training simulations using the ACT-R cognitive architecture. In Proceedings of the Eleventh Conference on Computer Generated Forces and Behavior Representation. Orlando, FL.

    • Craig, K., Doyal, J., Brett, B., Lebiere, C., Biefeld, E., & Martin, E. (2002). Development of a hybrid model of tactical fighter pilot behavior using IMPRINT task network model and ACT-R. In Proceedings of the Eleventh Conference on Computer Generated Forces and Behavior Representation. Orlando, FL


    Soar Architecture

    • Problem Space Based


    Some Soar Features

    • Problem space based

      • Attribute/value hierarchy (WM) forms the current state

      • Productions (LTM) transform the current state to achieve goals by applying operators

    • Cycle

      • Input

      • Elaborations fired

      • All possible operators proposed

      • One selected

      • Operator applied

      • Output

    • Impasses & Learning
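This cycle can be rendered as ordinary code. The Python below is a toy version of the propose/select/apply loop, not the Soar kernel; the dictionary-based production and operator representations are invented for illustration:

```python
def soar_decision_cycle(state, productions, select):
    """Toy rendering of one Soar decision cycle:
    elaborate -> propose -> select -> apply."""
    # 1. Elaboration: rules add derived facts to the state
    for elaborate in productions["elaborate"]:
        elaborate(state)
    # 2. Proposal: every matching rule proposes candidate operators
    proposed = [op for propose in productions["propose"]
                for op in propose(state)]
    if not proposed:
        return None  # real Soar would raise an impasse here
    # 3. Selection: preferences reduce the candidates to one operator
    operator = select(state, proposed)
    # 4. Application: the chosen operator modifies state / output
    productions["apply"][operator["name"]](state, operator)
    return operator
```

In the tank example below, `propose*move` and `propose*turn` play the proposal role, `select*radar-off*move` supplies the preference used in selection, and `apply*move` does the application.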


    Soar 8.6.2 IDE


    Task Description

    • Control the behavior of a Tank on the game board.

      • Each tank has a number of sensors (e.g. radar) to find enemies, missiles to launch at enemies, and limited resources

    • From the Soar Tutorial


    Propose Moves

    sp {propose*move
       (state <s> ^name wander
                  ^io.input-link.blocked.forward no)
    -->
       (<s> ^operator <o> +)
       (<o> ^name move
            ^actions.move.direction forward)}

    sp {propose*turn
       (state <s> ^name wander
                  ^io.input-link.blocked <b>)
       (<b> ^forward yes
            ^{ << left right >> <direction> } no)
    -->
       (<s> ^operator <o> + =)
       (<o> ^name turn
            ^actions <a>)
       (<a> ^rotate.direction <direction>
            ^radar.switch on
            ^radar-power.setting 13)}

    sp {propose*turn*backward
       (state <s> ^name wander
                  ^io.input-link.blocked <b>)
       (<b> ^forward yes ^left yes ^right yes)
    -->
       (<s> ^operator <o> +)
       (<o> ^name turn
            ^actions.rotate.direction left)}


    Prefer Moves

    sp {select*radar-off*move
       (state <s> ^name wander
                  ^operator <o1> +
                  ^operator <o2> +)
       (<o1> ^name radar-off)
       (<o2> ^name << turn move >>)
    -->
       (<s> ^operator <o1> > <o2>)}


    Apply Move

    sp {apply*move
       (state <s> ^operator <o>
                  ^io.output-link <out>)
       (<o> ^direction <direction>
            ^name move)
    -->
       (<out> ^move.direction <direction>)}


    Elaborations

    sp {elaborate*state*missiles*low
       (state <s> ^name tanksoar
                  ^io.input-link.missiles 0)
    -->
       (<s> ^missiles-energy low)}

    sp {elaborate*state*energy*low
       (state <s> ^name tanksoar
                  ^io.input-link.energy <= 200)
    -->
       (<s> ^missiles-energy low)}


    Some Relevant Soar Models

    • Wray, R.E., Laird, J.E., Nuxoll, A., Stokes, D., Kerfoot, A. (2005). Synthetic adversaries for urban combat training. AI Magazine, 26(3):82-92.

    • Jones, R. M., Laird, J. E., Nielsen, P. E., Coulter, K. J., Kenny, P., & Koss, F. V. (1999). Automated intelligent pilots for combat flight simulation. AI Magazine, 20(1), 27-41.


    Strengths / Weaknesses of Cognitive Architectures

    • Strengths

      • Supports aspects of intelligent behavior, such as learning, memory, and problem solving, not supported by other types of architectures

      • Can be used to accurately model human behavior, especially human-computer interaction, at small grain sizes (measured in ms)

    • Weaknesses

      • Can be difficult to author, modify, and debug complicated sets of production rules; possible mitigations include:

        • High-level modeling languages (e.g., CogTool, Herbal, the High Level Symbolic Representation language)

        • Automated model generation (e.g., Konik & Laird, 2006)

      • Computational issues when scaling to large numbers of entities


    Behavioral Approaches

    • Focus is on externally-observable behavior

      • no explicit modeling of knowledge/cognition

      • instead, behavior is explicitly specified:

        “Go to destination X, then attack enemy.”

    • Often a natural mapping from doctrine to behavior specifications



    Hard-coding Behaviors

    • Simplest approach is to write behaviors directly in C++/Java:

      MoveTo(location_X);

      AcquireTarget(target);

      FireAt(target);

    • Don’t do this!

      • Can only be modified by programmers

      • Hard to update and extend

      • Behavior models not easily portable


    Scripting Behaviors

    • Write behaviors in scripting language

      • UnrealScript

  • Avoids many problems of hard-coding

    • not tightly coupled to simulation code

    • more portable

    • often simplified to be easier to learn & use

  • Fine for linear sequences of actions, but scripts do not scale well to complex behavior


    Finite State Machine (FSM)

    • Specifies a sequence of decisions and actions

    • Basic form is essentially a flowchart

    [Flowchart: decision nodes ("X?", "Z?") with yes/no branches leading to actions]


    An FSM Example

    • A basic Patrol behavior

      • implemented for bots in Counter-Strike

      • built in SimBionic visual editor

    • Simulation interface

      • Primitive actions:

        • FollowPath, TurnTo, Shoot, Reload

      • Sensory inputs:

        • AtDestination, Hear, SeeEnemy, OutOfAmmo, IsDead
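Given that interface, the Patrol behavior can be sketched as a transition table. The state names and sensory conditions below mirror the example's primitives, but the ordering and transition logic are an illustrative guess, not the actual SimBionic model:

```python
# Each state maps to an ordered list of (condition, next_state) pairs;
# None is an unconditional default transition.
PATROL_FSM = {
    "FollowPath": [("SeeEnemy", "Shoot"), ("AtDestination", "TurnTo")],
    "TurnTo":     [("SeeEnemy", "Shoot"), (None, "FollowPath")],
    "Shoot":      [("OutOfAmmo", "Reload"), (None, "FollowPath")],
    "Reload":     [(None, "FollowPath")],
}

def step(state, sensors):
    """Advance one tick: take the first transition whose sensory
    condition is in the set of currently-true predicates."""
    for condition, next_state in PATROL_FSM[state]:
        if condition is None or condition in sensors:
            return next_state
    return state  # no transition fired: stay in the current state
```

The table form makes the FSM data rather than code, which is what lets visual editors like SimBionic draw, edit, and reload it without touching the engine.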


    An FSM Example (2)


    An FSM Example (3)


    An FSM Example (4)


    FSMs: Advantages

    • Very commonly-used technique

    • Easy to implement

    • Efficient

    • Intuitive visual representation

      • Accessible to SMEs

      • Maintainable

    • Variety of tools available


    FSMs: Disadvantages

    • Have difficulty accurately modeling:

      • behavior at small grain sizes

      • human-entity interaction

    • Lack of planning and learning capabilities

    • => brittleness (can’t cope with situations unforeseen by the modeler)

    • Tend to scale ungracefully


    Hierarchical FSMs

    • An FSM can delegate to another FSM

      • SearchBuilding → ClearRoom

    • Allows modularization of behavior

    • Reduces complexity

    • Encourages reuse of model components
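Delegation can be sketched by letting a state hold either a primitive action or a whole child machine. To isolate the delegation idea, the sketch below strips each machine down to a simple sequence of steps; the action names follow the SearchBuilding/ClearRoom example but are otherwise invented:

```python
class FSM:
    """A machine is a named sequence of steps; each step is either a
    primitive action name or another FSM (delegation). Sequencing
    stands in here for full condition-driven transition logic."""

    def __init__(self, name, steps):
        self.name, self.steps = name, steps

    def run(self, log):
        for s in self.steps:
            if isinstance(s, FSM):
                s.run(log)        # delegate to the child machine
            else:
                log.append(s)     # execute a primitive action

clear_room = FSM("ClearRoom", ["EnterRoom", "SweepCorners", "ReportClear"])
search_building = FSM("SearchBuilding",
                      ["EnterBuilding", clear_room, "ExitBuilding"])

log = []
search_building.run(log)
# log now holds the primitive actions in execution order
```

Because `clear_room` is an ordinary value, the same child machine can be dropped into any number of parent behaviors, which is the reuse benefit the slide describes.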


    Hierarchical FSMs (2)


    Hierarchical FSMs (3)


    Hybrid Architectures

    • Combine cognitive approaches

      • EASE: Elements of ACT-R, EPIC, & Soar (Chong & Wray, 2005)

  • Combine behavioral and cognitive approaches

    • Imprint / ACT-R (Craig, et al., 2002)

    • SimBionic / Soar



    Hybrid Architectures

    • Combine cognitive & behavioral approaches

    • Pros:

      • More scalable

      • Easier to author

      • More flexible

    • Cons:

      • Architecture is more complex

    [Layered diagram: goals → Cognitive Layer → behaviors → Behavioral Layer → actions → Simulation]


    Hybrid Example: HTN Planner + FSMs

    • Hierarchical Task Network (HTN) Planner

    • Inputs:

      • goals

      • library of plan fragments (HTNs)

    • Outputs:

      • High-level plan achieving those goals

        • Each plan step is an FSM in the Behavior Layer

    • Not really a cognitive architecture, but adds goal-driven capability to system

      • Plan fragments represent codified sequences of behavior
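A toy decomposition makes the input/output relationship concrete. In the Python sketch below all task and method names are invented; each leaf task in the resulting plan is where an FSM from the behavioral layer would plug in:

```python
# Each compound task maps to a list of methods; a method is a sequence
# of subtasks. Real HTN planners choose among methods by precondition;
# this sketch just takes the first one.
METHODS = {
    "SecureArea": [["MoveTo", "SearchBuilding", "Report"]],
    "SearchBuilding": [["EnterBuilding", "ClearRoom", "ExitBuilding"]],
}

def plan(task):
    """Recursively expand compound tasks until only primitives remain."""
    if task not in METHODS:          # primitive: becomes an FSM step
        return [task]
    steps = []
    for subtask in METHODS[task][0]:  # take the first method
        steps.extend(plan(subtask))
    return steps

# plan("SecureArea") ->
# ['MoveTo', 'EnterBuilding', 'ClearRoom', 'ExitBuilding', 'Report']
```

The division of labor is clean: the planner decides *which* codified behaviors to run and in what order, while each FSM decides *how* its step unfolds against the live simulation.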


    Conclusion

    • Factors affecting choice of architecture:

      • Entity capabilities

      • Behavior fidelity

      • Level of autonomy

      • Number of entities

      • Authoring resources

    • Two main paradigms:

      • cognitive architectures

      • behavioral approaches


    Conclusion (2)

    • Recommend iterative model development

      • Build

      • Test

      • Refine

    • Be aware of the knowledge bottleneck


    Resources

    • EPIC: http://www.eecs.umich.edu/~kieras/epic.html

    • ACT-R: http://act-r.psy.cmu.edu/

    • SOAR: http://sitemaker.umich.edu/soar

    • SimBionic: http://www.simbionic.com/


    Questions?


    References

    • Anderson, J. R., Bothell, D., Byrne, M. D., Douglass,S., Lebiere, C., & Qin, Y. (2004). An integrated theory of mind. Psychological Review, 111(4), 1036-1060.

    • Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, N.J.: L. Erlbaum Associates.

    • Chong, R. S., & Wray, R. E. (2005). Inheriting constraint in hybrid cognitive architectures: Applying the EASE architecture to performance and learning in a simplified air traffic control task. In K. A. Gluck & R. W. Pew (Eds.), Modeling Human Behavior with Integrated Cognitive Architectures: Comparison, Evaluation, and Validation (pp. 237-304). Lawrence Erlbaum Associates.

    • Craig, K., Doyal, J., Brett, B., Lebiere, C., Biefeld, E., & Martin, E. A. (2002). Development of a hybrid model of tactical fighter pilot behavior using IMPRINT task network modeling and the adaptive control of thought - rational (ACT-R). Paper presented at the 11th Conference on Computer Generated Forces and Behavior Representation.

    • Douglass. (2003). Modeling of Cognitive Agents. Retrieved May 22, 2005, from http://actr.psy.cmu.edu/~douglass/Douglass/Agents/15-396.html

    • Fu, D., & Houlette, R. (2003). The ultimate guide to FSMs in games. In S. Rabin (Ed.), AI Game Programming Wisdom 2.

    • Fu, D., Houlette, R., Jensen, R., & Bascara, O. (2003). A Visual, Object-Oriented Approach to Simulation Behavior Authoring. Paper presented at the Industry/Interservice, Training, Simulation & Education Conference.

    • Gray, W. D., & Altmann, E. M. (1999). Cognitive modeling and human-computer interaction. In W. Karwowski (Ed.), International Encyclopedia of Ergonomics and Human Factors (pp. 387-391). New York: Taylor & Francis, Ltd.

    • Jones, R. M., Laird, J. E., Nielsen, P. E., Coulter, K. J., Kenny, P., & Koss, F. V. (1999). Automated intelligent pilots for combat flight simulation. AI Magazine, 20(1), 27-41.

    • Kieras, D. E. (2003). Model-based evaluation. In J. A. Jacko & A. Sears (Eds.), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications (pp. 1139-1151). Mahway, NJ: Lawrence Erlbaum Associates.

    • Könik, T., & Laird, J. (2006). Learning goal hierarchies from structured observations and expert annotations. Machine Learning. In press.


    References (2)

    • Laird, J. E. (2004). The Soar 8 Tutorial. Retrieved August 16, 2004, from http://sitemaker.umich.edu/soar/soar_software_downloads

    • Laird, J. E., & Congden, C. B. (2004). The Soar User's Manual Version 8.5 Edition 1. Retrieved August 16, 2004, from http://sitemaker.umich.edu/soar/soar_software_downloads

    • Lehman, J. F., Laird, J. E., & Rosenbloom, P. S. (1998). A gentle introduction to SOAR: An architecture for human cognition. In D. Scarborough & S. Sternberg (Eds.), (2 ed., Vol. 4, pp. 212-249). Cambridge, MA: MIT Press.

    • Pearson, D., & Laird, J. E. (2004). Redux: Example-driven diagrammatic tools for rapid knowledge acquisition. Paper presented at the Behavior Representation in Modeling and Simulation, Washington, D.C.

    • Pew, R. W., & Mavor, A. S. (1998). Modeling human and organizational behavior: Application to military simulations. Washington, D.C.: National Academy Press.

    • Unit 7: Production Rule Learning. Retrieved October 31, 2004, from http://actr.psy.cmu.edu/tutorials/unit7.htm

    • Ritter, F. E., Haynes, S. R., Cohen, M., Howes, A., John, B., Best, B., Lebiere, C., Lewis, R. L., St. Amant, R., McBride, S. P., Urbas, L., Leuchter, S., & Vera, A. (2006). High-level behavior representation languages revisited. In Proceedings of ICCM 2006, Seventh International Conference on Cognitive Modeling (Trieste, Italy, April 5-8, 2006).

    • Wallace, S. A., & Laird, J. E. (2003). Comparing agents and humans using behavioral bounding. Paper presented at the International Joint Conference on Artificial Intelligence.

    • Wray, R. E., van Lent, M., Beard, J., & Brobst, P. (2005). The design space of control options for AIs in computer games. In Proceedings of the International Joint Conference on Artificial Intelligence 2005.

