
8.3. Agent and Decision Making AI

Agent-driven AI and associated decision making techniques

Question Clinic: FAQ

In-lecture exploration of answers to frequently asked student questions

AI Agents

Using an agent-driven approach to control game character AI


Game agents

“An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future.”

Agents may act as an

Opponent

Ally

Neutral character

Loops through the following cycle:

Sense ► Think ► Act

Optional learning or remembering step
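As a minimal sketch, the cycle above might look like the following (the `Agent` class, its percept format and the action names are invented for illustration, not from any particular engine):

```python
# Minimal Sense -> Think -> Act sketch; percepts and actions are illustrative.
class Agent:
    def __init__(self):
        self.memory = []  # optional remembering step

    def sense(self, world):
        # Gather whatever the sensing model exposes to this agent.
        return [obj for obj in world if obj.get("visible", False)]

    def think(self, percepts):
        # Decide on an action from the current percepts (and possibly memory).
        return "chase" if percepts else "wander"

    def act(self, action):
        # In a real game this would move the character, play animations, etc.
        return action

    def update(self, world):
        percepts = self.sense(world)
        self.memory.append(percepts)  # optional learning/remembering step
        return self.act(self.think(percepts))

agent = Agent()
print(agent.update([{"visible": True}]))  # chase
print(agent.update([]))                   # wander
```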


Game agents:

Sense►Think►Act: Sensing - Sight

Within a game, agents can have access to perfect information about the game world (e.g. complete terrain layout, location and state of the player, etc.).

Often a sensing model is used to avoid agent ‘cheating’ and ensure agents cannot ‘see’ through walls, know about unexplored areas, etc.

  • Example sensing model:
  • For each game object:
  • Is it within the viewing distance of the agent?
  • Is it within the viewing angle of the agent?
  • Is it unobscured by the environment?
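The three checks above can be sketched as a single test. This is illustrative only: `can_see`, its parameters and the `occluded` callback (standing in for a line-of-sight query against the environment) are assumptions, not a real engine API.

```python
import math

def can_see(agent_pos, agent_dir, target_pos, view_dist, view_angle, occluded):
    """Example sensing model: distance check, view-cone check, occlusion check."""
    dx, dy = target_pos[0] - agent_pos[0], target_pos[1] - agent_pos[1]
    dist = math.hypot(dx, dy)
    if dist > view_dist:                        # within viewing distance?
        return False
    angle = abs(math.atan2(dy, dx) - math.atan2(agent_dir[1], agent_dir[0]))
    angle = min(angle, 2 * math.pi - angle)     # wrap to [0, pi]
    if angle > view_angle / 2:                  # within viewing angle?
        return False
    return not occluded(agent_pos, target_pos)  # unobscured by environment?
```

For instance, `can_see((0, 0), (1, 0), (5, 0), 10, math.pi / 2, lambda a, b: False)` would return True under these assumptions.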

Game agents:

Sense►Think►Act: Sensing - Sound

Do agents respond to sound? If so, how is sound propagation modelled?

An event based model is typically used:

When sound is emitted, it alerts interested agents

Use distance and zones to determine how far sound can travel

Travel distance may also depend upon type of incident surface and movement speed of the player, etc.
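One way the event-based model might look (a sketch: `emit_sound`, the agent dictionaries and the surface factor are invented names, and real games would use spatial partitioning rather than a linear scan):

```python
import math

# Event-based sound sketch: an emitted sound alerts agents within its travel
# distance; the distance is scaled by an assumed surface factor.
def emit_sound(position, base_range, agents, surface_factor=1.0):
    travel = base_range * surface_factor  # e.g. carpet < 1.0 < tiled floor
    for agent in agents:
        if math.dist(position, agent["pos"]) <= travel:
            agent["alerted"] = True       # notify the interested agent

agents = [{"pos": (2, 0), "alerted": False},
          {"pos": (40, 0), "alerted": False}]
emit_sound((0, 0), 10, agents)
# only the nearby agent is alerted
```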


Game agents:

Sense►Think►Act: Sensing - Reacting

Agents should not normally see, hear and communicate instantaneously (i.e. immediately commence actions following the sensing stage)

Normally sufficient to introduce artificial reaction times, e.g.:

Vision: ¼ to ½ second

Hearing: ¼ to ½ second

Communication: > 2 seconds


Game agents:

Sense►Think►Act: Thinking

Approaches towards agent AI ‘decision making’ (not mutually exclusive) include:

Use pre-coded expert knowledge

Algorithmically search for a solution

Many different techniques exist (we will explore some later)

Aside: Encoding expert knowledge is appealing as it is relatively easy to obtain and use, but may not be scalable or adaptable.

Whilst often scalable and adaptable, algorithmic approaches may not match a human expert in the quality of decision making and may be computationally expensive.


Game agents:

Sense►Think►Act: Think

‘Dumbing’ down agents

Sometimes it may be necessary to ‘dumb down’ agents, for example:

Make shooting less accurate

Introduce longer reaction times

Move to locations that leave the agent more vulnerable, etc.

Letting agents cheat

This is sometimes necessary for:

Highest difficulty levels

CPU computation reasons

Development time reasons


Game agents:

Sense►Think►Act: Acting

Sensing and thinking steps are invisible to the player, i.e. acting is how the player witnesses intelligence. Examples of actions include:

Move location

Pick up object

Play animation

Play sound effect

Fire weapon

Agents might also use event-driven communication when within the vicinity of each other to:

Alert other agents to some situation (e.g. agent hurt)

Share agent knowledge (e.g. player last seen at location x)

Decision Making AI

Introduction to decision making techniques within game AI


Decision making techniques

The input to a decision making process is the knowledge possessed by a game object and the output is a requested action.

The input knowledge can consist of internal knowledge (i.e. internal state) and external knowledge (i.e. game world).

Likewise, actions can be directed towards changing internal state or the external (world) state.

[Diagram: internal and external knowledge feed into the decision making process, whose requested action(s) produce internal and/or external change(s).]

Aside: Most games need only simple decision making techniques such as decision trees and state machines. Rule-based approaches may be needed for more complex requirements.

Decision Trees

Simple decision making using decision trees


Decision trees

Decision trees offer a simple, but fast form of decision making.

A decision tree consists of a starting decision point, which is connected to more refined decision points. Each leaf contains an action that is executed once reached.

The tree can be grown to encapsulate complex behaviour, but large trees often become hard to manage.
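A minimal tree of this form might be coded as follows (the state keys, tests and action names are invented for illustration):

```python
# Tiny decision tree sketch: internal nodes test a predicate over the game
# state; leaves are plain action strings returned once reached.
class Decision:
    def __init__(self, test, if_true, if_false):
        self.test, self.if_true, self.if_false = test, if_true, if_false

    def decide(self, state):
        branch = self.if_true if self.test(state) else self.if_false
        # Recurse through decision points until a leaf (action) is reached.
        return branch.decide(state) if isinstance(branch, Decision) else branch

tree = Decision(lambda s: s["enemy_visible"],
                Decision(lambda s: s["enemy_close"], "attack", "chase"),
                "patrol")

print(tree.decide({"enemy_visible": True, "enemy_close": False}))  # chase
```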

Finite State Machines

The use of finite state machines to encapsulate a decision process


Finite State Machines (FSMs)


A finite state machine occupies one of a finite number of states at any point in time. Actions may be undertaken based on the current state. Inputs to the system can cause a transition from one state to another.

In general:

Each FSM has a number of possible states {S1 ... SN}

Transition functions {T1 ... TM} define the conditions under which a state transition will occur.

Every time a state transition occurs and a new state is entered, one or more state actions may be fired {AS1 ... ASN}

[Diagram: example FSM whose states fire entry actions AS1–AS3 and are linked by transitions T1–T4.]
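An FSM of this general shape can be sketched with a transition table (a simple illustration; the state names, events and entry actions are invented):

```python
# Table-driven FSM sketch: states {S1..SN} as strings, transition functions
# as a {(state, event): next_state} table, and entry actions fired on entry.
class StateMachine:
    def __init__(self, initial, transitions, entry_actions):
        self.state = initial
        self.transitions = transitions        # {(state, event): next_state}
        self.entry_actions = entry_actions    # {state: action name}
        self.fired = []                       # record of fired entry actions

    def handle(self, event):
        nxt = self.transitions.get((self.state, event))
        if nxt is not None and nxt != self.state:
            self.state = nxt
            self.fired.append(self.entry_actions.get(nxt))
        return self.state

fsm = StateMachine("wander",
                   {("wander", "player_seen"): "chase",
                    ("chase", "player_lost"): "wander",
                    ("wander", "powerpill"): "evade",
                    ("chase", "powerpill"): "evade",
                    ("evade", "pill_expired"): "wander"},
                   {"wander": "roam", "chase": "run_at_player", "evade": "flee"})
print(fsm.handle("player_seen"))  # chase
```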


Finite State Machines (FSMs)

A finite state machine works by decomposing an object’s behaviour into defined chunks (states). So long as a character remains in a state it will use the same behaviour.

State machines are very widely used, including:

Controlling ghosts in Pac-man

Controlling bots in Quake

Sports simulations such as FIFA 2002

NPCs in RTSs such as Warcraft


Finite State Machines (Pacman example)

Each ghost can be in a wander, chase or evade state (each ghost can have a different chase/wander behaviour).

Once a powerpill is eaten, all ghosts transition to the evade state, which is exited once the timer expires.

[Diagram: Wander ↔ Chase via 'Pacman in range' / 'Pacman out of range'; 'Powerpill eaten' moves either state to Evade; 'Powerpill expired' exits Evade.]

Aside: The wander state could be entirely removed in this FSM, i.e. Chase and Evade form the minimum behavioural set.


Finite State Machines (Examples)

State machines along with scripting represent the most common forms of decision making in games, as:

They are relatively quick and simple to code and debug

They have little computational overhead (depending on the complexity of transition tests).

They are flexible and can often be easily extended or modified.

State based behaviour is good for modelling many game-world objects.


Hierarchical FSMs

Simple FSMs cannot easily model all forms of behaviour. One example is ‘alarm behaviour’, an action that can be triggered from any state.

Consider a robot whose ‘alarm’ behaviour is to recharge when power levels become low.

Using a hierarchical FSM, the top level transitions between 'cleaning up' and 'getting power'. While in the 'cleaning up' state, a lower-level FSM controls behaviour.
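One way to sketch this robot (the state names, power thresholds and update rules are assumptions made for illustration): the key point is that the nested cleaning sub-state survives the recharge interruption.

```python
# Hierarchical FSM sketch: the top level owns the 'alarm' behaviour (low
# power), while a nested FSM runs inside the 'cleaning' state.
class CleaningRobot:
    def __init__(self):
        self.top = "cleaning"        # top level: cleaning / getting_power
        self.sub = "search"          # nested: search / pick_up / drop_off
        self.power = 10

    def update(self):
        if self.top == "cleaning" and self.power <= 2:
            self.top = "getting_power"   # alarm fires from ANY sub-state
        elif self.top == "getting_power" and self.power >= 10:
            self.top = "cleaning"        # resume where we left off
        if self.top == "cleaning":
            self.power -= 1
            # The nested FSM only advances while the top level is 'cleaning'.
            self.sub = {"search": "pick_up", "pick_up": "drop_off",
                        "drop_off": "search"}[self.sub]
        else:
            self.power += 3              # recharging
        return self.top, self.sub
```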

Goal Oriented Behaviour

Using goals to drive behaviour


Goal Oriented Behaviour (GOB)

GOB is used widely in games such as The Sims.

Characters have a range of ‘emotional’ or ‘physical’ goals (or motives). Depending on the actions executed by the character (possibly player controlled) the goals (i.e. needs, desires, fears, etc.) will either increase or decrease.

GOB algorithms try to fulfil the character’s goals by selecting between available actions that influence the goal parameters.


Goal Oriented Behaviour (Goals and Actions)

Characters can have a number of currently active goals. Goals might include: eat, seek health, defeat opponent.

Each goal has an associated numeric insistence value representing the current importance of that goal.

Some goals may be fully achievable (e.g. seek health); others may only be reducible and will always remain (e.g. satiate hunger).

A set of (possibly situational) actions is presented to the character. The character will select the action that best satisfies their current goal insistence values.


Goal Oriented Behaviour (Selecting actions)

Goals (Insistence low = 0, high = 5):

Eat (4), Sleep(1), Bathroom (3)

Actions:

Eat-Food (Eat − 3, Bathroom +1)

Eat-Snack (Eat − 2)

Sleep-Bed (Sleep − 4)

Sleep-Sofa (Sleep − 2)

Drink-Cola (Eat − 1; Bathroom + 3)

Visit-Bathroom (Bathroom − 4)

Discontentment (insistence clamped to the 0–5 range):

Eat-Food: (4−3)² + 1² + (3+1)² = 18

Eat-Snack: (4−2)² + 1² + 3² = 14

Sleep-Bed: 4² + 0² + 3² = 25

Sleep-Sofa: 4² + 0² + 3² = 25

Drink-Cola: (4−1)² + 1² + 5² = 35

Visit-Bathroom: 4² + 1² + 0² = 17

Consider the shown goals and actions. Which action should be selected?

The notion of overall discontentment offers a useful means of selecting the best action.

A good discontentment metric is to sum the squares of insistence values and select the action that results in lowest discontentment.

More advanced approaches can consider the time to start/complete each activity, or more complex insistence contributions.
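The worked example above can be reproduced with a short sketch (the function names are illustrative; clamping insistence to the stated 0–5 range before squaring is an assumption that matches the slide's totals):

```python
# Discontentment-based action selection for Goal Oriented Behaviour.
def discontentment(goals, action):
    """Sum of squared insistence values after applying an action's effects."""
    total = 0
    for goal, value in goals.items():
        value = min(5, max(0, value + action.get(goal, 0)))  # clamp to [0, 5]
        total += value ** 2
    return total

def choose_action(goals, actions):
    # Select the action whose outcome has the lowest overall discontentment.
    return min(actions, key=lambda name: discontentment(goals, actions[name]))

goals = {"Eat": 4, "Sleep": 1, "Bathroom": 3}
actions = {
    "Eat-Food":       {"Eat": -3, "Bathroom": +1},
    "Eat-Snack":      {"Eat": -2},
    "Sleep-Bed":      {"Sleep": -4},
    "Sleep-Sofa":     {"Sleep": -2},
    "Drink-Cola":     {"Eat": -1, "Bathroom": +3},
    "Visit-Bathroom": {"Bathroom": -4},
}
print(choose_action(goals, actions))  # Eat-Snack (discontentment 14)
```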


Summary

Today we explored:

  • The notion of a game AI agent
  • Decision making processes including finite state machines and goal driven behaviour

To do:

  • If applicable to your game, explore finite state machines, agents and goal-driven behaviour.
  • Work towards your alpha hand-in goals.