Dictionary definition: agent (aygent) n. something that produces or is capable of producing an effect: an active or efficient cause; one who acts for or in the place of another by authority from him ... a means or instrument by which a guiding intelligence achieves a result.
An agent is a computer system, situated in some environment, that is capable of flexible autonomous action in order to meet its design objectives. (Jennings, Sycara, Wooldridge 1998)
This definition embraces three key concepts:
The agent receives sensory input from its environment and it can perform actions which change the environment in some way.
Autonomy: self-determined freedom, especially moral independence.
Autonomous: self-governing, independent.
The system should be able to act without the direct intervention of humans (or other agents). The system should have control over its own actions and internal state.
Example: Autonomous navigation
Sometimes used in a stronger sense to mean systems that are capable of learning from experience.
Autonomy
very complex nuclear reactor control systems.
software daemons: monitor the software environment and perform actions to modify it as conditions change,
e.g. the UNIX xbiff program, which monitors a user's incoming mail and displays an icon when new mail is detected.
However, these systems are not capable of flexible action in order to meet their design objectives.
Examples of existing situated, autonomous computer systems
agents should not simply act in response to their environment; they should be able to exhibit opportunistic, goal-directed behaviour and take the initiative where appropriate;
agents should be able to interact, when appropriate, with other artificial agents and humans in order to complete their own problem solving and to help others with their activities.
agents should perceive their environment and respond in a timely fashion to changes that occur in it;
Agents may have other characteristics, e.g. mobility, adaptability, but those given here are the distinguishing features of an agent.
Flexibility
Also builds on contributions from other long established fields:
concurrent object-based systems
human-computer interface design
Historically AI researchers tended to focus on different components of intelligent behaviour, e.g. learning, reasoning, problem solving, vision understanding.
The assumption seemed to be that progress was more likely to be made if these aspects of intelligent behaviour were studied in isolation.
The road to intelligent agents
Combining them to create integrated AI systems was assumed to be straightforward.
Phase 2: expert systems; building on domain-specific knowledge for specialist problems
Phase 3: specialised areas such as vision, speech, natural language processing, robot control, data mining
Mainly sensory data
Intelligent agents seen currently as the main integrating force
AI development stages
Need measures of success
E.g. score the most points, make the fewest moves, minimise power consumption, etc.
Rationality depends on performance measures, prior knowledge, actions, event history
For each possible event sequence, the rational agent should select an action that is expected to maximise its performance measure, given the evidence provided by the event sequence and the built-in knowledge the agent has.
Important: rationality maximises expected performance, not actual (we cannot tell the future)
Rational agents
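The "maximise expected performance" idea can be made concrete with a small sketch. This is an illustrative toy (the action names, outcomes, and numbers are invented, not from the lecture): given a probability distribution over outcomes for each action, the rational agent picks the action with the highest probability-weighted performance.

```python
# Toy sketch of rational action selection: all names/numbers are invented.
def expected_performance(action, outcome_probs, performance):
    """Performance over possible outcomes, weighted by their probability."""
    return sum(p * performance[outcome]
               for outcome, p in outcome_probs[action].items())

def rational_choice(actions, outcome_probs, performance):
    """Select the action that maximises *expected* (not actual) performance."""
    return max(actions,
               key=lambda a: expected_performance(a, outcome_probs, performance))

# 'wait' surely scores 1; 'move' scores 5 with probability 0.3, else 0.
probs = {"wait": {"ok": 1.0}, "move": {"win": 0.3, "lose": 0.7}}
perf = {"ok": 1, "win": 5, "lose": 0}
best = rational_choice(["wait", "move"], probs, perf)  # expected: 1.0 vs 1.5
```

Note that `move` is chosen even though it may actually score 0: rationality is judged on expectation, not on the outcome that happens to occur.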
Hence, an agent gets percepts one at a time, and maps this percept sequence to actions (one action at a time)
Interacts with other agents plus the environment
Reactive to the environment
Pro-active (goal-directed)
How to design an intelligent agent?
Medical diagnosis system
  Percepts: symptoms, findings, patient's answers
  Actions: questions, tests, treatments
  Goals: healthy patients, minimize costs

Satellite image analysis system
  Percepts: pixels of varying intensity, color
  Actions: print a categorization of scene
  Environment: images from orbiting satellite

Part-picking robot
  Percepts: pixels of varying intensity
  Actions: pick up parts and sort into bins
  Goals: place parts in correct bins
  Environment: conveyor belts with parts

Refinery controller
  Percepts: temperature, pressure readings
  Actions: open, close valves; adjust temperature
  Goals: maximize purity, yield, safety

Interactive English tutor
  Actions: print exercises, suggestions, corrections
  Goals: maximize student's score on test
  Environment: set of students

Examples of agents in different types of applications
How to encode an agent’s strategy?
Long list of what should be done for each possible percept sequence
vs. shorter specification (e.g. algorithm)
Agent's strategy
function SKELETON-AGENT (percept) returns action
  static: memory, the agent's memory of the world
  memory ← UPDATE-MEMORY(memory, percept)
  action ← CHOOSE-BEST-ACTION(memory)
  memory ← UPDATE-MEMORY(memory, action)
  return action
On each invocation, the agent’s memory is updated to reflect the new percept, the best action is chosen, and the fact that the action was taken is also stored in the memory. The memory persists from one invocation to the next.
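As a concrete (hypothetical) Python rendering of this skeleton: UPDATE-MEMORY and CHOOSE-BEST-ACTION are placeholders here, so memory is a plain list of events and the "best" action simply echoes the latest percept; a real agent would replace `choose_best_action` with something smarter.

```python
# Hypothetical rendering of SKELETON-AGENT; the action-choice rule is a stub.
class SkeletonAgent:
    def __init__(self):
        self.memory = []  # persists from one invocation to the next

    def __call__(self, percept):
        self.memory.append(("percept", percept))  # memory <- UPDATE-MEMORY(memory, percept)
        action = self.choose_best_action()        # action <- CHOOSE-BEST-ACTION(memory)
        self.memory.append(("action", action))    # memory <- UPDATE-MEMORY(memory, action)
        return action

    def choose_best_action(self):
        _, latest = self.memory[-1]               # stub: react to the newest event
        return f"respond-to-{latest}"

agent = SkeletonAgent()
a = agent("new-mail")
```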
Simple reflex agent
Reflex agent with internal state
Agent with explicit goals
Utility-based agent
Examples of how the agent function can be implemented
function TABLE-DRIVEN-AGENT (percept) returns action
  static: percepts, a sequence, initially empty
          table, a table, indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
An agent based on a prespecified lookup table: it keeps track of the percept sequence and simply looks up the best action.
Percepts could be, e.g., the pixels on the camera of the automated taxi
Simple reflex agent
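A minimal sketch of the table-driven idea: the table maps whole percept sequences (as tuples) to actions. The vacuum-world-style entries are invented for illustration; as the slides note, a real table would be far too large to enumerate.

```python
# Sketch of TABLE-DRIVEN-AGENT; table entries are invented examples.
class TableDrivenAgent:
    def __init__(self, table):
        self.percepts = []  # a sequence, initially empty
        self.table = table  # indexed by percept sequences

    def __call__(self, percept):
        self.percepts.append(percept)                        # append percept
        return self.table.get(tuple(self.percepts), "noop")  # LOOKUP(percepts, table)

table = {
    ("dirty",): "suck",
    ("dirty", "clean"): "move-right",
}
agent = TableDrivenAgent(table)
a1 = agent("dirty")  # looks up the one-element sequence ("dirty",)
```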
What the world is like now
What action I should do now
Condition - action rules
function SIMPLE-REFLEX-AGENT (percept) returns action
  static: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
No further matches sought.
Only one level of deduction.
A simple reflex agent works by finding a rule whose condition matches the current situation (as defined by the percept) and then doing the action associated with that rule.
e.g. if car-in-front-is-braking then initiate braking
Table is still too big to generate and to store (e.g. taxi)
Takes long time to build the table
No knowledge of non-perceptual parts of the current state
Not adaptive to changes in the environment; requires entire table to be updated if changes occur
Looping: can't make actions conditional on previous percepts
Simple reflex agent…
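The simple reflex scheme can be sketched directly: rules are (condition, action) pairs, the first rule whose condition matches the interpreted percept fires, and no further matches are sought. The rule contents below are invented examples in the spirit of the braking rule above.

```python
# Sketch of SIMPLE-REFLEX-AGENT; rules are invented examples.
def simple_reflex_agent(percept, rules):
    state = percept.lower()          # state <- INTERPRET-INPUT(percept)
    for condition, action in rules:  # rule <- RULE-MATCH(state, rules)
        if condition(state):
            return action            # action <- RULE-ACTION[rule]
    return "noop"                    # no rule matched

rules = [
    (lambda s: "braking" in s, "initiate-braking"),
    (lambda s: "clear" in s, "accelerate"),
]
a = simple_reflex_agent("car-in-front-is-BRAKING", rules)
```

Note the limitation the slide lists: the decision depends only on the current percept, so the agent cannot react to anything it is not perceiving right now.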
function REFLEX-AGENT-WITH-STATE (percept) returns action
  static: state, a description of the current world state
          rules, a set of condition-action rules
  state ← UPDATE-STATE(state, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  state ← UPDATE-STATE(state, action)
  return action
A reflex agent with internal state works by finding a rule whose condition matches the current situation (as defined by the percept and the stored internal state) and then doing the action associated with that rule.
Needed because sensors do not usually give the entire state of the world at each input, so perception of the environment is captured over time.
Requires ability to represent change in the world with/without the agent
one possibility is to represent just the latest state, but then the agent cannot reason about hypothetical courses of action
Reflex Agent with Internal State …
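A sketch of the stateful variant: the internal state is updated from both percepts and the agent's own actions, so the agent can act on facts it is not currently perceiving. The world model here is just a dictionary of believed facts, and the traffic-light rules are invented examples.

```python
# Sketch of REFLEX-AGENT-WITH-STATE; world model and rules are invented.
class ReflexAgentWithState:
    def __init__(self, rules):
        self.state = {}     # a description of the current world state
        self.rules = rules  # a set of condition-action rules

    def __call__(self, percept):
        self.state.update(percept)            # state <- UPDATE-STATE(state, percept)
        for condition, action in self.rules:  # rule <- RULE-MATCH(state, rules)
            if condition(self.state):
                self.state["last_action"] = action  # state <- UPDATE-STATE(state, action)
                return action
        return "noop"

rules = [(lambda s: s.get("light") == "red", "stop"),
         (lambda s: s.get("light") == "green", "go")]
agent = ReflexAgentWithState(rules)
a1 = agent({"light": "red"})
a2 = agent({})  # no new light percept: the stored state still says red
```

The second call shows the point of internal state: with an empty percept, a simple reflex agent would have nothing to react to, but this agent still acts on its remembered belief.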
Keeping track of the current state is often not enough – need to add goals to decide which situations are good
Deliberative instead of reactive
May have to consider long sequences of possible actions before deciding if goal is achieved – involves considerations of the future, “what will happen if I do…?” (search and planning)
More flexible than reflex agent. (e.g. rain / new destination)
In the reflex agent, the entire database of rules would have to be rewritten
Agent with Explicit Goals
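The "what will happen if I do…?" style of deliberation can be sketched as search over action sequences. This toy uses breadth-first search over an invented transition model; a real goal-based agent would use the search and planning techniques the slide alludes to.

```python
# Toy sketch of goal-based deliberation via search; the map is invented.
from collections import deque

def plan(start, goal, transitions):
    """Breadth-first search for a shortest action sequence reaching the goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # goal unreachable

# Toy map: home --walk--> shop --bus--> work
transitions = {"home": [("walk", "shop")], "shop": [("bus", "work")]}
route = plan("home", "work", transitions)
```

This also shows the flexibility claim: changing the goal (a new destination) just means calling `plan` with a different `goal` argument, rather than rewriting a rule database.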
A goal specifies only a crude distinction between "unhappy" and "happy" states, but we often need a more general performance measure that describes the "degree of happiness"
Utility function U: States → Reals, giving a measure of success or happiness at a given state
Allows decisions that compare:
choice between conflicting goals
choice between likelihood of success and importance of goal (if achievement is uncertain)
Utility-Based Agent
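Trading off conflicting goals via a utility function U: states → reals can be sketched as follows; the safety-versus-speed weights and the predicted outcome states are invented for illustration.

```python
# Toy sketch of a utility-based choice; weights and outcomes are invented.
def utility(state):
    """U: states -> reals; weights encode how goals trade off."""
    return 20 * state["safety"] + state["speed"]

def choose(actions, result):
    """Pick the action whose predicted resulting state has highest utility."""
    return max(actions, key=lambda a: utility(result(a)))

outcomes = {
    "drive-fast": {"safety": 0.2, "speed": 9},       # U = 13
    "drive-carefully": {"safety": 0.9, "speed": 5},  # U = 23
}
best = choose(list(outcomes), lambda a: outcomes[a])
```

A pure goal-based agent could only say whether each outcome "reaches the destination"; the utility function lets the agent rank outcomes that all reach it.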
Most moderately complex environments (including, for example, the everyday physical world and the Internet) are inaccessible
The more accessible an environment is, the simpler it is to build agents to operate in it
Environments – Accessible vs. Inaccessible
Limited memory (poker)
Too complex environment to model directly (weather, dice)
The physical world can to all intents and purposes be regarded as non-deterministic
Non-deterministic environments present greater problems for the agent designer
Environments – Deterministic vs. Non-deterministic
Episodic environments are simpler from the agent developer’s perspective because the agent can decide what action to perform based only on the current episode — it need not reason about the interactions between this and future episodes
Environments – Episodic vs. Non-episodic
A dynamic environment is one that has other processes operating on it, and which therefore changes in ways beyond the agent’s control
Other processes can interfere with the agent’s actions (as in concurrent systems theory)
The physical world is a highly dynamic environment
Environments – Static vs. Dynamic
A chess game is an example of a discrete environment, and taxi driving is an example of a continuous one
Continuous environments have a certain level of mismatch with computer systems
Discrete environments could in principle be handled by a kind of “lookup table”
Environments – Discrete vs. Continuous
Chess with a clock
Chess without a clock
Medical diagnosis system
Interactive English tutor
agents are autonomous:
agents embody stronger notion of autonomy than objects, and in particular, they decide for themselves whether or not to perform an action on request from another agent
agents are smart:
capable of flexible (reactive, pro-active, social) behavior, and the standard object model has nothing to say about such types of behavior
agents are active:
a multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control
Agents and Objects
Janine took her umbrella because she believed it was going to rain.
Michael worked hard because he wanted to possess a PhD.
These statements make use of a folk psychology, by which human behavior is predicted and explained through the attribution of attitudes, such as believing and wanting (as in the above examples), hoping, fearing, and so on
The attitudes employed in such folk psychological descriptions are called the intentional notions
Agents as Intentional Systems
Dennett identifies different ‘grades’ of intentional system:
‘A first-order intentional system has beliefs and desires (etc.) but no beliefs and desires about beliefs and desires. … A second-order intentional system is more sophisticated; it has beliefs and desires (and no doubt other intentional states) about beliefs and desires (and other intentional states) — both those of others and its own’
Agents as Intentional Systems
‘To ascribe beliefs, free will, intentions, consciousness, abilities, or wants to a machine is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behavior, or how to repair or improve it. It is perhaps never logically required even for humans, but expressing reasonably briefly what is actually known about the state of the machine in a particular situation may require mental qualities or qualities isomorphic to them. Theories of belief, knowledge and wanting can be constructed for machines in a simpler setting than for humans, and later applied to humans. Ascription of mental qualities is most straightforward for machines of known structure such as thermostats and computer operating systems, but is most useful when applied to entities whose structure is incompletely known’.
As it turns out, more or less anything can. . . consider a light switch:
But most adults would find such a description absurd! Why is this?
Agents as Intentional Systems
‘It is perfectly coherent to treat a light switch as a (very cooperative) agent with the capability of transmitting current at will, who invariably transmits current when it believes that we want it transmitted and not otherwise; flicking the switch is simply our way of communicating our desires’. (Yoav Shoham)
Put crudely, the more we know about a system, the less we need to rely on animistic, intentional explanations of its behavior
But with very complex systems, a mechanistic explanation of their behavior may not be practicable
As computer systems become ever more complex, we need more powerful abstractions and metaphors to explain their operation — low level explanations become impractical. The intentional stance is such an abstraction
Agents as Intentional Systems
Remember: the most important developments in computing are based on new abstractions:
abstract data types
Agents, and agents as intentional systems, represent a further, and increasingly powerful abstraction
So agent theorists start from the (strong) view of agents as intentional systems: one whose simplest consistent description requires the intentional stance
Agents as Intentional Systems
Now, much of computer science is concerned with looking for abstraction mechanisms (witness procedural abstraction, ADTs, objects, …). So why not use the intentional stance as an abstraction tool in computing — to explain, understand, and, crucially, program computer systems?
This is an important argument in favor of agents
Agents as Intentional Systems
It provides us with a familiar, non-technical way of understanding & explaining agents
It gives us the potential to specify systems that include representations of other systems
It is widely accepted that such nested representations are essential for agents that must cooperate with other agents
Agents as Intentional Systems
This view of agents leads to a kind of post-declarative programming:
In procedural programming, we say exactly what a system should do
In declarative programming, we state something that we want to achieve, give the system general info about the relationships between objects, and let a built-in control mechanism (e.g., goal-directed theorem proving) figure out what to do
With agents, we give a very abstract specification of the system, and let the control mechanism figure out what to do, knowing that it will act in accordance with some built-in theory of agency (e.g., the well-known Cohen-Levesque model of intention)
Agents as Intentional Systems
In the most general case, agents will be acting on behalf of users with different goals and motivations
To successfully interact, they will require the ability to cooperate, coordinate, and negotiate with each other, much as people do
Multiagent Systems
How do we build agents capable of independent, autonomous action, so that they can successfully carry out tasks we delegate to them?
How do we build agents that are capable of interacting (cooperating, coordinating, negotiating) with other agents in order to successfully carry out those delegated tasks, especially when the other agents cannot be assumed to share the same interests/goals?
The first problem is agent design, the second is society design (micro/macro)
Agent Design vs Society Design
How can cooperation emerge in societies of self-interested agents?
What kinds of languages can agents use to communicate?
How can self-interested agents recognize conflict, and how can they (nevertheless) reach agreement?
How can autonomous agents coordinate their activities so as to cooperatively achieve goals?
Multiagent Systems
How can your agent compare different deals from different vendors? What if there are many different parameters?
What algorithms can your agent use to negotiate with other agents (to make sure you get a good deal)?
These issues aren’t frivolous – automated procurement could be used massively by (for example) government agencies
Multiagent Research Issues
This can be both a strength (infusing well-founded methodologies into the field) and a weakness (there are many different views as to what the field is about)
This has analogies with artificial intelligence itself
Multiagents are Interdisciplinary
“IRC (Internet Relay Chat) is a virtual meeting place where people from all over the world can meet and talk; you'll find the whole diversity of human interests, ideas, and issues here, and you'll be able to participate in group discussions on one of the many thousands of IRC channels, on hundreds of IRC networks, or just talk in private to family or friends, wherever they are in the world.”
mIRC Homepage (http://www.mirc.co.uk/)
How the game is played
Rules and etiquette to maintain the channel
Techniques for self-preservation within the IRC environment
Human operators are capable of ”correcting” the game
Keeps records of individual player statistics
Number of games won
All knowledge must be input by a human operator.
RobBot
<RobBot> Current category: Footwear. Question Value: 800.
<RobBot> Question 5 of 30: Low cut woman's shoe or a device to pass gasoline
<BrandEx> rob pump
<Texmex> rob pump
<RobBot> brandex: That is CORRECT! You win 800. Your total is -300.
<RobBot> Please wait while preparing the next Gullivers Travels question...
<jennew> brand rocks!
<RobBot> Current category: Gullivers Travels. Question Value: 400.
<RobBot> Category Comment: Trivia about Gullivers Travels
<RobBot> Question 6 of 30: The only thing the Laputian king wanted to learn about the outside world
<Texmex> oh this one sux
<Mach> what food do you like rob
<RobBot> Pass the ho-ho's!
<Mach> rob mathematics
<MastrLion> rob flug
<RobBot> mastrlion: Bzzt! That is incorrect. You lose 400. Your total is -500.
<RobBot> mach: That is CORRECT! You win 400. Your total is 400.
Most player input is either commands or an answer to a trivia question
RobBot can also respond to text which is not an answer
In response to the question about food, RobBot makes a comment about Ho-Ho’s.
This type of reply helps establish RobBot’s personality
Maybe he is a junk-food addict
Players socialize with each other during the game
Texmex comments on how he dislikes the current category
Jennew praises BrandEx for answering a question correctly
Risky Features
RobBot hosts game without human intervention
Despite serious neglect at times by the creators, the game has continued to run and flourish on its own
Must be able to run independently if it is to have any value in terms of entertainment or social interaction
If a human operator had to constantly provide direction, RobBot would become a tool of the human rather than a separate entity
RobBot as Agent
RobBot maintains his own personality through the responses programmed into his lexicon
Life-like qualities help to provide an atmosphere that is conducive to socialization
He is capable of recording information about other users
Scores that players have obtained
Players’ scores and records are important measures of social status on the gaming channels
RobBot as Agent
RobBot does make errors while conducting the game
Players may phrase an answer differently than the answer stored in the answer database
Spelling errors may be present in the answer database
Sometimes human operators are present to correct errors but usually not
Sometimes errors are found humorous by players
Sometimes players band together to curse RobBot for his errors
Both cases encourage socialization among the players
RobBot as Agent
RobBot relies heavily on anthropomorphism to accomplish his tasks as a game show host
Main task is to provide entertainment as a game show host
Not to pass the Turing Test!
Technology based on keyword mappings and canned phrases gets RobBot remarkably far in fulfilling his main task
RobBot as Agent
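The keyword-mapping technique described above can be sketched in a few lines: scan the input for known keywords and reply with a canned phrase. The mappings below are invented illustrations (echoing phrases from the transcript), not RobBot's actual lexicon.

```python
# Minimal sketch of keyword-mapped canned responses; mappings are invented.
import re

CANNED = {
    "food": "Pass the ho-ho's!",
    "chocolate": "*choco*",
}

def respond(message):
    """Return the canned phrase for the first keyword found, else None."""
    for keyword, phrase in CANNED.items():
        if re.search(rf"\b{keyword}\b", message.lower()):
            return phrase
    return None  # no keyword matched: stay silent

reply = respond("what food do you like rob")
```

There is no understanding here at all, which is exactly the point of the slide: consistent canned replies are enough to sustain a persona.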
As a core group of players engage in Risky Business, the network becomes meaningfully ritualized in its context
A subculture is formed that can be transmitted to new players
Components of a subculture include
Shared language, history, and purpose
These lead to mutually shared
norms (behavior patterns)
Values (common goals)
Agent’s main task is to support and encourage development of the subculture.
RobBot as Social Engineer
Rules basically stay the same from day to day and month to month
The game environment’s responses can be learned and become predictable
Stable structure helps facilitate the development of a social history of the game-playing environment
RobBot as Social Engineer
Lead to the development of a shared language
Bot typically utters certain phrases
Used by participants as symbols of events and concepts
Bot’s consistency of phrasing leads to players’ acceptance and standardization of language
When RobBot consistently expresses delight about chocolate via *choco*, all participants can use *choco* as a keyword for joy
RobBot as Social Engineer
RobBot has a set of canned responses for some players when he sees their names
E.g., in response to “Do you know Cass?”
“Hey! Cass is a real cutie! Woohoo!”
Players react strongly to these personalized messages
Often input questions that cause RobBot to frequently cycle through a small set of responses
Flattery from a computer agent appears to have a similar effect to flattery coming from a real person
RobBot as Social Engineer
Impact of gender
Change name from RobBot to ReneeBot + make minor changes in the vocabulary
Result is significant attitude changes towards the bot
RobBot is treated like a man
Players joke with him about stereotypical male things
Women flirt with him
Players can be brusque and treat him rudely
ReneeBot is treated differently
Men flirt with her
Players treat her more politely
RobBot as Social Engineer
Helps shape behavior patterns of players
Reinforces a definition of social order
E.g., swearing is frowned upon
Any player using certain utterances will get
“<Player 1> This is a family channel! Be warned or I’ll have to call the bouncers!”
Persistence will get the player kicked off the channel
E.g., in another game, Acro, bot provides no guidance on coarse language
Result is that it is a common feature of the players
There was a morality play between certain players
Eventually those who could deal with occasional vulgarity stopped playing
RobBot as Social Engineer
As the number of such people increases, they acquire the power to demand familiar institutions from the real world that cater to the social animal
Etc.
Bots – The Bigger Picture
RobBot’s AI (minimal as it is) creates a persona to which people can relate
He is something understood and nonthreatening
He plays an essential part in acclimatizing people to the world of the internet
With more and better AI, the line between user and software artifact would become more blurred
Bots – The Bigger Picture