Character Artificial IntelligenceCE00875-3

Introduction to Agent-based AI

Lecture 2

Agents, Animats and Game Platforms
  • What are agents and artificial life?
  • Considerations for game AI
  • Goal orientation, planning and control
  • Reactive techniques for games
  • Reflective vs Reactive behaviour
  • Arguably the most important development in AI of the last decade has been the notion of agents
  • It comes from the philosophical notion of action, and from the economists’ notion of Rational Economic Man
  • An agent is a self-contained bundle of software with the characteristics of autonomy, action and rationality
  • Related to, but more sophisticated than, the notion of an object in object-oriented languages (OOLs): a higher level of abstraction, in which what counts is the choice of action in response to goals and situation
  • Newell’s concept of behavioural laws governing action, eg an agent will choose the action most likely to lead to an outcome that matches one of its goals
  • In a multi-agent system complex behaviour emerges as the result of the interactions between many, relatively simple agents according to social principles
Artificial Life – Cellular automata
  • A-life shares some elements with agent research, especially the multi-agent approach
  • Earliest forms were a class of game called cellular automata, eg Conway’s Life
  • A few rules control the birth, survival and death of counters on a rectangular grid, to produce growing, complex patterns over cycles

Cells can be either live (contain counter) or dead (no counter). Each cell has 8 neighbours. To decide what will happen to a cell in each cycle, count its live neighbours

1) A dead cell with exactly 3 live neighbors becomes a live cell (birth).

2) A live cell with 2 or 3 live neighbors stays alive (survival).

3) In all other cases, a cell dies or remains dead (overcrowding or loneliness).
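The three rules above can be sketched in a few lines of Python. Representing the grid as a set of live cell coordinates is one common choice (the grid itself and the starting pattern below are illustrative).

```python
from collections import Counter

def step(live):
    """Apply one cycle of Conway's Life to a set of live (x, y) cells."""
    # Count how many live neighbours each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)  # birth or survival
    }

# A 'blinker': three live cells in a row oscillate between a
# horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))  # {(1, 0), (1, 1), (1, 2)}
```

Running `step` twice returns the original pattern, which is the simplest demonstration of the growing, cycling patterns mentioned above.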

Artificial Life - Norns
  • However, A-life today is about making biologically plausible software models
  • Some of this work has resulted in remarkably lifelike creatures which may be used in game-playing scenarios, eg Norns in Cyberlife’s Creatures
  • Norns have a simulated body, with a ‘digestive system’ and ‘biochemistry’

Image: Cyberlife/New Scientist

Considerations for Game AI
  • Are the AI approaches applicable to the software design of NPCs?
  • ie able to provide:
    • control of primitive behaviours
      • eg picking things up, pressing switches, using objects, etc.
    • movement
      • move around the game world, dealing with obstacles and navigating to target destinations
    • decision-making
      • working out what tasks to perform, and in what order
  • Traditionally, these things were done by laborious, detailed, procedural programming - now becoming more difficult as games evolve
  • From a games point of view, the use of AI is not important in itself; what matters is the behaviour the player experiences
Considerations for Game AI
  • In nouvelle game AI, we consider how the needs of game designers work for (and against) AI methods
  • In games, some NPC characters need to behave intelligently to be believable and entertaining – maybe easier to accomplish with AI programs...
  • ...but only if the program can really perform (very fast response to new situation) - challenging for AI
  • Software control of actions within a game has until recently needed to be explicit – ie scripting
  • For each of a number of situations arising in the game, a standard behaviour response is rigidly encoded
  • More recently, agent and other AI technologies have enabled implicit control – NPC assesses situation, chooses its action with respect to its own goals
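The contrast between explicit and implicit control might be sketched as follows. All situation names, actions and goal weights here are invented for illustration.

```python
# Explicit control: each situation is rigidly mapped to one behaviour.
SCRIPT = {
    "player_visible": "attack",
    "low_health": "flee",
    "idle": "patrol",
}

def scripted_npc(situation):
    return SCRIPT.get(situation, "patrol")

# Implicit control: the NPC scores every action against its own goals
# for the current situation and chooses the best one.
ACTION_EFFECTS = {
    "attack": {"defeat_player": 0.9, "survive": 0.2},
    "flee":   {"defeat_player": 0.0, "survive": 0.9},
    "patrol": {"defeat_player": 0.1, "survive": 0.6},
}

def goal_driven_npc(situation, goals):
    if situation == "low_health":
        # Surviving becomes much more important when health is low.
        goals = {**goals, "survive": goals.get("survive", 0.0) + 2.0}
    def score(action):
        return sum(ACTION_EFFECTS[action].get(g, 0.0) * w
                   for g, w in goals.items())
    return max(ACTION_EFFECTS, key=score)

print(scripted_npc("low_health"))                             # flee
print(goal_driven_npc("low_health", {"defeat_player": 1.0}))  # flee
```

Both controllers flee here, but the scripted one does so because a designer hard-coded the response, while the goal-driven one does so because fleeing scored highest against its own goals.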
Considerations for Game AI
  • Basic conflict for game design: Building in intelligence allows NPC characters to behave autonomously, yet human designers need to control them, to make the game work as expected
  • Eg many games follow a script like a movie, so that a certain sequence of events happens to unfold a story line. If NPCs (especially learning ones) can run their own show, they might not stick to the script!
  • What role would intelligent decisions have in a scripted storyline?
  • One idea would be implicit design – no explicit script, just the elements of an interesting world and both player and NPC improvise
  • Another is to alternate sequences of scripted action (in which the NPCs’ intelligence is switched off) with free form action (in which the intelligence is on). Might be difficult to keep on track though!
Goal-orientation, planning
  • A goal may be defined as a representation of a state of affairs which is marked as somehow desirable, to be achieved
  • Maintenance goals involve making sure some state of affairs is not lost (eg keep breathing)
  • Achieving a goal may involve a number of actions to move the actual state of the world closer to the desired state (eg move toward a source of food or target)
  • It might not be clear what to do next to get closer to a goal – planning is needed to select and order individual actions
  • All this requires collecting and maintaining models of the world-state, comparing them to goal states and then making decisions
  • This can take a lot of design and a lot of computation
  • Planning programs exist, but they tend to be slow and complex
The problem of planning
  • A conventional AI planning program requires:

i) a list of goals, represented using a knowledge representation

ii) a problem, in the form of some data representing opportunities or problems

iii) a set of possible actions from which the machine can choose

iv) a planning algorithm which organises actions into an optimal plan

The program would then do standard information processing:

1) accept these as data files

2) process the data

3) print out an optimal plan, ie an ordered list of actions
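The four ingredients might be sketched as below. The facts and actions are invented, and the "planning algorithm" here is simply breadth-first search over action sequences, which returns a shortest plan.

```python
from collections import deque

def plan(state, goal, actions):
    """Return a shortest ordered list of action names reaching the goal."""
    frontier = deque([(state, [])])
    seen = {state}
    while frontier:
        current, steps = frontier.popleft()
        if goal <= current:                  # every goal fact holds
            return steps
        for name, (pre, add, delete) in actions.items():
            if pre <= current:               # action is applicable
                nxt = frozenset((current - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

# Each action: (preconditions, facts added, facts deleted)
actions = {
    "walk_to_door": ({"at_start"}, {"at_door"}, {"at_start"}),
    "open_door": ({"at_door"}, {"door_open"}, set()),
    "enter_room": ({"at_door", "door_open"}, {"in_room"}, {"at_door"}),
}
print(plan(frozenset({"at_start"}), {"in_room"}, actions))
# ['walk_to_door', 'open_door', 'enter_room']
```

Even in this toy, the planner enumerates world states, which illustrates why full planning quickly becomes slow and complex as the world grows.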

Reflective vs Reactive Behaviour
  • This distinction is important to game development, mostly because of the need for fast reactions
  • Early AI tended to be reflective – meaning that a problem-solving algorithm would exhaustively consider all possible courses of action, then choose the optimal (in terms of goal satisfaction) one
  • Humans don’t really do that, though. They satisfice, which means coming up with a “good enough” solution more quickly, especially when time is an issue. This saves mental effort
  • Reflective programs build representational models from input data, evaluate or reason about them, plan a course of action then output it – expensive in computing power
  • Reactive programs use the world as their model. They interact directly with the world, exploiting simple signals and regularities there, to get behaviour which is not perfect, but fast
  • Now game AI tends to be mostly reactive

Image: Mike Williams

You do not want to be stuck in high-level planning when there’s an emergency bearing down on you

<train bearing down>

Image: Benjamin Hobson, The Canal Gallery

Maybe it’s quicker and easier to sense the world and act directly on it, instead of building and maintaining internal models of the world

This is how insects are able to do a lot of clever things without a big brain and lots of memory

  • To simplify writing the AI code, building the environment and integrating the two, we can use a standard development platform, the Unreal Development Kit (UDK)
  • It creates embodied bots, ie bots that have a (simulated) body, are subject to physical constraints in the (simulated) world and do not have full information about the game
  • Just as a human game player has an avatar in the game world, so disembodied AI software controls its own avatar
  • Unreal is designed to support development of first-person shooter games.
  • The game developer specifies the interfaces, modules and architectures of a bot in a given world, in a domain-specific language called UScript, which also has a myriad of other features supporting various game-specific AI concepts.
Reactive Techniques for Games
  • Another advantage of reactive designs is that they tend to be deterministic – fully specified by inputs
  • That means reactive code can be very simple, easy to test and highly optimised for speed, eg

- lookup tables

- scripts

- rule-based systems

- Finite State Machines

  • Benefits of bots:
  • - fits in with the idea of embodiment very well
  • - environment can be enriched, which provides more information to the bot
  • - most learning techniques are based on reactive mappings
  • - easy to create, test and maintain
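As a sketch of the last technique in the list above, a minimal finite state machine might look like this. The states and events are invented for illustration.

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("patrol", "see_player"): "chase",
    ("chase", "lost_player"): "patrol",
    ("chase", "in_range"): "attack",
    ("attack", "out_of_range"): "chase",
    ("attack", "player_dead"): "patrol",
}

def fsm_step(state, event):
    # Deterministic: the next state is fully specified by (state, event);
    # unknown events leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "patrol"
for event in ["see_player", "in_range", "player_dead"]:
    state = fsm_step(state, event)
print(state)  # patrol
```

Because the whole behaviour is a lookup in a small table, it is deterministic, trivially fast and easy to test exhaustively, which is exactly the advantage claimed above.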
Environments, simulations, platforms
  • According to the notion of embodiment, the best place for an AI is immersed in the world, connected to it by many sensors and actuators. Eg a robot roams about a house, sensing with cameras and microphones, and using motorised limbs to move
  • Game AI modifies this slightly, and says the ‘world’ doesn’t have to be physical – it can be a computer simulation of the real world
    • Advantages
      • Cheaper and smaller than robots in the real world
      • Can develop and make changes faster in simulation
      • Can’t do any harm if it goes wrong
    • Disadvantages
      • world may be too simple to properly challenge an AI program
      • world may not be realistic, or even logically consistent
      • may have to build - or at least configure - an artificial world; extra work
  • In game AI, we are more or less forced to use a simulated world. But this is still better than prepared, cut-and-dried data sets
Reactive animat, search and obstacle avoidance
  • Game’s “Physical” Environment
  • Machine Vision
  • Representing Space in the Game World
  • Movement in the Game World
  • Navigation in the Game World
  • Obstacle Avoidance
  • A Reactive Control System
Game’s “Physical” Environment
  • We may distinguish between two aspects of a game environment:

Structure - topography of the environment as it constrains movement (physics and layout of walls, paths, obstacles etc.)

Detail – graphical appearance of the game world and placement of objects which don’t impede movement

  • What about players, NPCs and monsters? Really need to consider moving things as a third category, especially when interactions go beyond simply destroying everything you see
  • Humans perceive the world mostly visually through detail, while AI sees the world as simplified data structures and must interpret these as well as it can
  • We can try to make an AI interpret the graphical world directly, as if it was seeing through eyes, but such machine vision has proven to be very difficult to program and expensive to compute (at least at a human level of skill)
  • It is an important concept of nouvelle game AI that an animat should only have local, not global, knowledge of the game (like a human)
  • Having complete, perfect knowledge of the world is not good for AI or games
[Figure: a raw image of a handwritten ‘2’ is converted by interpretive software into a data representation (the ASCII code for ‘2’)]
Machine Vision
  • Getting a machine to see is a traditional sub-discipline of AI
  • A typical system might involve a camera returning a digitised image, interpretive software and some kind of output arrangement
  • Eg handwritten letter recogniser – easy
  • To get more sophisticated output information requires more complex processing. Eg. scene analysis to aid robot navigation – hard
  • The output for that would be information enabling the robot to identify particular objects, or find their range and bearing, to help navigate around them
Representing Space in the Game World
  • How space is represented is important
  • 2D vs 3D – how the location of objects is encoded in the structure, not how the detail makes the world appear
  • Discrete vs continuous – meaning whether objects are placed in a grid or matrix with a finite number of locations or not (eg chess vs marbles)
  • Representation of time - also discrete (turn-taking) or continuous (stream of consciousness)
  • Conversions – discrete vs continuous is a relative matter, since a fine enough unit size (grid or time-steps) may be considered continuous in practice
  • In fact, all representations in computers must ultimately be discrete, approximating continuous to a greater or lesser degree!
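The conversion point can be made concrete: a continuous position maps to a discrete cell by choosing a unit size, and a small enough unit is continuous for practical purposes. This sketch assumes simple axis-aligned square cells.

```python
def to_cell(x, y, cell_size):
    """Map a continuous (x, y) position to a discrete grid cell."""
    return (int(x // cell_size), int(y // cell_size))

# The same position looks coarse or nearly continuous depending on
# the unit size chosen for the grid.
print(to_cell(7.3, 2.9, 1.0))  # (7, 2)
print(to_cell(7.3, 2.9, 0.5))  # (14, 5)
```

The same trade-off applies to time: a fine enough time-step makes turn-taking indistinguishable from a continuous stream.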
Movement in the Game World
  • At present, game engines provide a locomotion layer which abstracts basic movement actions away from the strategic control of direction
  • Physics simulation is required to handle gravity, throwing, fire, water etc.
  • Low level motion might now be handled by the AI


[Figure: the locomotion layer. Signals from the user, or parameters via an API from the decision-making AI, drive basic motion (forward/backward, turns). Signals from the environment alter behaviour as appropriate (eg falls). Collision detection: physics in the game signals a collision, halting forward motion. Simulation loop (walking, running): physics handles displacement, animation handles limb cycling]

Image: Chris Bayliss

Representing Space in the Game World
  • In a classical AI navigation experiment, travel paths in the world model might be represented at design-time as a graph, with nodes representing rooms and arcs representing passages or doors between them
  • Finding an optimal path from a current location to a target was then a matter of search on the graph
  • There are well-studied and good search algorithms available
Search - Basics
  • Uninformed search algorithms simply follow a pattern to examine all nodes until one containing the goal is found (aka “brute force”)
  • Depth-first search - start at a root and explore as far as possible along each branch before backtracking until goal is found
  • Breadth-first search - start at a root and explore all neighbouring nodes, then for each neighbour, explore their unexplored neighbours, and so on until goal is found
  • On this graph, starting at A, choosing left nodes before right and remembering previously visited nodes:
  • Depth First Search visits nodes in this order: A,B,D,F,E,C,G
  • Breadth First Search visits nodes in this order: A,B,C,E,D,F,G
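The graph image from the slide is not reproduced here, so the adjacency below is a hypothetical reconstruction, chosen so that both visit orders match the ones stated above (neighbours listed left to right).

```python
from collections import deque

GRAPH = {
    "A": ["B", "C", "E"],
    "B": ["D", "F"],
    "C": ["G"],
    "D": [],
    "E": ["F"],
    "F": ["E"],
    "G": [],
}

def dfs(start, order=None):
    # Explore as far as possible along each branch before backtracking.
    if order is None:
        order = []
    order.append(start)
    for n in GRAPH[start]:
        if n not in order:
            dfs(n, order)
    return order

def bfs(start):
    # Explore all neighbours of a node before moving one level deeper.
    order, queue, seen = [], deque([start]), {start}
    while queue:
        node = queue.popleft()
        order.append(node)
        for n in GRAPH[node]:
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return order

print(dfs("A"))  # ['A', 'B', 'D', 'F', 'E', 'C', 'G']
print(bfs("A"))  # ['A', 'B', 'C', 'E', 'D', 'F', 'G']
```

Note how DFS commits to B's subtree before ever reaching C, while BFS sweeps across A's neighbours first: the same graph, two very different visit orders.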
Search – Using Domain Information
  • Informed search algorithms use some method to choose intelligently which node to search next
  • Best-first search – modifies breadth-first method to order all current paths by a heuristic which estimates how close the end of the path is to a goal. Paths that are closest to a goal are extended first.
  • A* search - is a best-first method that calculates a cost and an estimate for the path leading to each node:
  • Cost is zero for the first node; for all others, cost is the cumulative sum associated with all its ancestor nodes plus the cost of the operation which reached the node (e.g. linear distance)
  • An estimate measures how far a given node is thought to be from a goal state (e.g. intensity on a sensory gradient)
  • Cost and estimate are summed to score each path for eligibility for exploration
  • For more detail, view the ‘Graph Search Methods’ link on the website
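As a sketch of the cost + estimate idea, here is a compact A* on a unit-cost grid with a Manhattan-distance estimate. The grid, the obstacle and the start/goal cells are invented for illustration.

```python
import heapq

def a_star(blocked, start, goal):
    """Shortest path on a unit-cost grid, avoiding blocked (x, y) cells."""
    def estimate(cell):  # estimate: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Each entry: (cost + estimate, cost so far, node, path to node);
    # the heap always extends the most eligible path first.
    frontier = [(estimate(start), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in blocked:
                continue
            new_cost = cost + 1              # every step costs 1
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier,
                               (new_cost + estimate(nxt), new_cost,
                                nxt, path + [nxt]))
    return None

# Blocking (1, 0) forces a detour around the direct two-step route.
path = a_star({(1, 0)}, (0, 0), (2, 0))
print(len(path) - 1)  # 4
```

The estimate never overstates the remaining distance on this grid, so the first path popped at the goal is guaranteed to be optimal.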
Navigation in the Game World
  • There are problems with this kind of classical search however:
    • Depends on global information at design-time, so
    • Question of realism arises – not comparable with the limited viewpoint of real biological creatures
    • A lot of information may overwhelm decision-making processes
    • Information does not update dynamically via sensors, so cannot track changes (eg moving creatures in the world)
  • Nouvelle game AI animats are (virtually) embodied, which implies that
  • - They have a (simulated) limited perceptual system which updates the AI continuously
  • - They need more plausible navigation algorithms which can work on limited information and in real time
  • For now, we are interested in reactive solutions
  • Fortunately, such solutions have been studied for the design of robots
[Figure: two ways to model a bot’s state. Absolute coordinate system (world origin): 90-degree angles, unit distances, discrete moves and turns. Relative coordinate system (egocentric origin): any angle, any distance, continuous moves and turns]
Modelling a Bot’s State in Space
  • For many (but not all) AI models, need a description of the animat’s position and orientation in space, as well as how it will move
Obstacle Avoidance – Basic functionality
  • Finding one’s way around obstacles is fundamental to navigation
  • Well suited to implementation by a reactive control system
  • Begins from general principles:

1. When no obstacle is sensed, move ahead normally (straight or wandering randomly)

2. If wall detected on one side, veer away


3. If obstacle detected ahead, turn to one side to avoid it

4. When stuck in a corner, a more radical turn is needed

A Reactive Control System
  • A reactive system requires three elements:
  • 1) a set of perceptual and action functions, which apply in a particular situation
  • 2) a mapping showing which percepts release which behaviours. That amounts to a theory of how the animat behaves (note the relation to the idea of behavioural laws for agents). See previous slide.
  • An if-else-if-else structure could be used to order calls to perceptual and motor functions – procedural
  • A rule-based system (later topic!) is another possibility – declarative
  • UDK is based on the client-server model: a client makes a request, which does not return until the server has computed a result. This is usually based on simple function calls and is commonly used for the delegation of tasks
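The procedural (if-else-if) option can be sketched directly from the obstacle-avoidance principles on the previous slide. The sensor and action names here are invented; the ordering puts the most specific situation first.

```python
def control(percepts):
    """Map boolean sensor readings directly to a motor action."""
    if percepts.get("stuck_in_corner"):
        return "turn_180"        # 4. a more radical turn is needed
    elif percepts.get("obstacle_ahead"):
        return "turn_to_side"    # 3. turn to one side to avoid it
    elif percepts.get("wall_left") or percepts.get("wall_right"):
        return "veer_away"       # 2. veer away from the wall
    else:
        return "move_ahead"      # 1. no obstacle sensed

print(control({"obstacle_ahead": True}))  # turn_to_side
print(control({}))                        # move_ahead
```

Because the chain is evaluated top to bottom, the ordering of the branches itself acts as a crude arbitration scheme: the corner case always wins over the milder ones.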
A Reactive Control System
  • Could also use asynchronous events. Based on something called the observer design pattern: have a set of behaviours ready to go and a set of events that will trigger them. A kind of event-based processing
  • These can interrupt another routine and transfer control when something comes up unexpectedly
  • These should operate in ‘parallel’, competing for control of the animat’s body, hence the need for 3) arbitration in case of a tie
  • How are competing motor outputs combined to control the limbs?
  • - Independent sum – different outputs connected to different effectors so that they cannot clash
  • - Combination – some formula (eg weighted sum) combines two or more outputs to blend them into output commands
  • - Suppression – certain components get priority over others, and can replace their output with their own (eg. subsumption architecture)
  • - Sequential – output of different behaviours getting control at alternate times
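Two of the arbitration schemes above might be sketched as follows. The behaviours and steering values are invented (negative = steer left, positive = steer right).

```python
def combine(outputs, weights):
    # Combination: blend all proposed commands by a weighted sum.
    return sum(outputs[b] * weights[b] for b in outputs)

def suppress(outputs, priority):
    # Suppression: the highest-priority active behaviour replaces the rest.
    for behaviour in priority:
        if behaviour in outputs:
            return outputs[behaviour]
    return 0.0

# Steering proposals from two competing behaviours.
proposals = {"avoid_obstacle": 0.8, "seek_goal": -0.4}
print(round(combine(proposals, {"avoid_obstacle": 0.7, "seek_goal": 0.3}), 2))  # 0.44
print(suppress(proposals, ["avoid_obstacle", "seek_goal", "wander"]))           # 0.8
```

Combination produces a compromise steer; suppression, as in the subsumption architecture, lets the avoidance behaviour take over outright.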
[Figure: a subsumption architecture. Sensory triggers (IR proximity, bump switches, motor load) activate layered behaviours, including a random-act layer, which compete to produce the motor command]
Subsumption architecture works on real robots!
Heroes of AI #3 – The Radical
  • The radical idea that sophisticated robot behaviour could be accomplished without high-powered computing was advanced in the 1980s by the expatriate Australian engineer Rodney Brooks. At the time, robots were slow and clumsy, using classical planning to compute their motions
  • Brooks argued that robots could interact directly with the world via a properly designed reactive network of sensors and actuators, without the need for complex representations, and created a number of behaviour-based control architectures for robots. In the 1990s he and his students at the MIT robotics lab demonstrated ever more ingenious robots that used his subsumption architecture.
  • Brooks was featured in a 1997 documentary called “Fast, Cheap and Out of Control”, the name of his paper to the British Interplanetary Society arguing that insect-like robots could be used for exploration of the solar system.
  • Formed a company called iRobot, which now provides PackBot robots to the US military as well as mass-producing Roomba floor-cleaning robots
  • His latest and most demanding project is Cog, a behaviour-based humanoid robot


  • Agents are an AI development which enables software to choose actions autonomously to achieve one or more goals which it has
  • Artificial life is an attempt to model the more biological aspects of life, such as reproduction (Life) or the digestion of food (eg Norns)
  • Both these technologies can be used to make lifelike, believable and entertaining NPCs for computer games
  • The software control of NPCs can be explicit (eg scripts) or implicit (eg rational agent)
  • Good control of gameplay in interactive games is part of the art
  • In conventional AI, accomplishment of goals required planning, which could be complex and processor-intensive
  • Conventional AI planners were reflective, which means they did not excel in time critical situations (like games)
  • Behavioural control in games tends to be reactive, which means less logical modelling of the world and more reacting directly to stimuli
  • Game virtual worlds generally distinguish structure and detail. Moving objects could be considered a third category
  • Humans see detail, but game characters generally interact via structure
  • Machine vision is an important but difficult sub-field of AI
  • The representation of space and time may be 2D or 3D, discrete or continuous
  • Present game engines provide a locomotion layer which abstracts basic movement actions away from the strategic control of direction. In future AI may automate basic interactions with world
  • Travel paths through space may be represented as graphs. These are traditionally searched by methods such as breadth-first, depth-first, or A* search. Such search is a general problem-solving method
  • Reactive control systems require 1) perceptual & action functions 2) a mapping from percepts to actions representing a theory of behaviour and 3) a method to arbitrate conflicts – to resolve which action will be taken in case of a tie
  • Could be implemented procedurally as if-else-if statement or declaratively as a set of rules in a RBS
  • Brooks, R. & Flynn, A. (1989). Fast, Cheap and Out of Control: The Robotic Invasion of the Solar System. Journal of the British Interplanetary Society, 42, 478-485.
  • Newell, A. (1982). The Knowledge Level. Artificial Intelligence, 18, 87-127.
  • Gardner, M. (1970). The Fantastic Combination of John Conway’s New Solitaire Game of “Life”. Scientific American, 223, 120-123.
  • Cliff, D. & Grand, S. (1999). The Creatures Global Digital Ecosystem. Artificial Life, 5(1), 77-93.