Agents. What is an agent?
Agenthood = 4 dimensions: autonomy, proactiveness, embeddedness, distributedness.
Autonomy
Programs are controlled by user interaction
Agents take action without user control, e.g. monitor: does anyone offer a cheap phone?
Procedures are replaced by behaviors:
map situation → action
Programming agents = defining behaviors (see the sketch below)
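A minimal Python sketch of this idea, with all names (Behavior, cheap_phone_offered, notify_user) invented for the example: each behavior maps a class of situations to an action, and the agent fires the first behavior whose condition holds.

    # Sketch only: all names are hypothetical, not from the original slides.
    class Behavior:
        def __init__(self, condition, action):
            self.condition = condition   # situation -> bool
            self.action = action         # what to do when the condition holds

    def cheap_phone_offered(situation):
        return situation.get("phone_price", float("inf")) < 100

    def notify_user(situation):
        print("Cheap phone found:", situation["phone_price"])

    behaviors = [Behavior(cheap_phone_offered, notify_user)]

    def agent_step(situation):
        # Fire the first behavior whose condition matches the situation.
        for b in behaviors:
            if b.condition(situation):
                b.action(situation)
                return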
Agents can act on users' behalf, for example in looking for products or bidding in auctions (eBay!)
User might not always be available (mobile phones)
Agents can represent users' interest, for example by choosing the best offers.
Programs are activated by commands: run ...
Agents take action by themselves:
Agents must have explicit goals
Goals are linked to plans for achieving them.
Plans are continuously reevaluated; new opportunities lead to replanning
React to ranges of conditions rather than a set of foreseen situations
Gain flexibility in information systems
Proactive agents really act on user's behalf
Programs take as long as they take
Agents act under deadlines imposed by the real world:
and with limited resources (time, memory, communication)
Asymptotic complexity analysis insufficient:
does not give bounds for particular cases!
1) "Anytime" algorithms (see the sketch after this list):
quick and suboptimal solution
2) reasoning about resource usage:
estimate computation time
choose suitable computation parameters
3) learning, compilation
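A rough Python sketch of the anytime idea under an external deadline; quick_solution and improve stand in for problem-specific routines, and the deadline is supplied by the caller:

    import time

    def anytime_solve(problem, deadline_s, quick_solution, improve):
        # quick_solution and improve are placeholders for problem-specific routines.
        best = quick_solution(problem)          # fast, possibly suboptimal answer
        start = time.monotonic()
        while time.monotonic() - start < deadline_s:
            candidate = improve(problem, best)  # spend remaining time refining it
            if candidate is None:               # no further improvement possible
                break
            best = candidate
        return best                             # an answer is always available at the deadline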
Agents can integrate into the real world:
Programs have common data structures and algorithms
Multiagent systems model distributed systems; agents are independent entities and may:
Agents run on platforms:
Agent system reflects structure of the real system:
Agents = situated software:
People understand agents to have intentions:
John studied because he wanted to get a diploma.
The system is asking for a filename because it wants to save the data.
Modeling intentions: reasoning + intelligence!
Agent interacts with its environment:
particular software architectures
Robot following a wall
Backup every new file
Behaviors should adapt themselves
Agents need to be instructed
Multiple agents need to cooperate:
Behaviors operate at level of sensors/effectors:
Goto position (35.73,76.14)
Communication is symbolic:
Go to the corner of the room!
a reasoning layer translates between them (see the sketch below)!
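A toy illustration of that translation in Python; the landmark table and the goto_position effector are hypothetical stand-ins for the real sensor/effector level:

    # Hypothetical symbolic-to-effector translation.
    LANDMARKS = {"corner of the room": (35.73, 76.14)}    # symbolic place -> coordinates

    def goto_position(x, y):
        print(f"low-level behavior: goto ({x}, {y})")      # stands in for the real effector

    def handle_symbolic_command(command):
        # e.g. "Go to the corner of the room!"
        for place, (x, y) in LANDMARKS.items():
            if place in command.lower():
                goto_position(x, y)
                return True
        return False   # command not understood at the symbolic level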
Intelligence has (at least) 4 dimensions:
Programs/Algorithms = always do the same thing
rm -r * wipes out the operating system
Rational agents = do the right thing
rm -r * will keep essential files
action serves to satisfy the goals!
Complex behavior, intelligence:
adapt behavior to changing conditions
Negotiation/self-interest requires explicit goals!
Learning and using new knowledge requires explicit structures
Programs/objects → procedure call
Agents → communication language:
Communication is about:
Examples of languages: KQML, ACL
Coordination, cooperation and negotiation among agents
Communicate about intentions, self-interest
ACL provides a higher abstraction layer that allows heterogeneous agents to communicate
Add/remove agents in a running multiagent system
Adapt to user:
Learn from the environment:
Knowledge systems: explicit representation of goals, operators, plans easy to modify
Automatic adaptation by machine learning or case-based reasoning
Information gathering/machine learning techniques for learning about the environment
Reinforcement learning, genetic algorithms for learning behaviors
Every user is different → requires different agent behavior
Impractical to program a different agent for everyone
Programmers cannot foresee all aspects of environment
Agent knows its environment better than a programmer
Agents are a useful metaphor for computer science:
[Figure: agent typology: Smart Agents, Collaborative Agents, Interface Agents]
Computers always execute algorithms
Agents are a metaphor, implementation is limited:
Methods for simple behaviors:
Methods for controlling behaviors:
Formalisms for cooperation:
Theories of agent systems:
Structure: performatives + content language
Criteria for content languages
Communication among heterogeneous agents:
but common communication language:
Vocabulary (words): e.g. reference to objects
Messages (sentences): e.g. request for an action
Distributed Algorithms (conversations): e.g. negotiating task sharing
Object sharing (Corba, RPC, RMI, Splice): shared objects, procedures, data structures
Knowledge sharing (KQML, FIPA ACL): shared facts, rules, constraints, procedures and knowledge
Intentional sharing: shared beliefs, plans, goals and intentions
Cultural sharing: shared experiences and strategies
Ideal example of a heterogeneous agent system: human society
See agents as intentional systems:
all actions and communication are motivated by beliefs and intentions
Allows modeling agent behavior in a human-understandable way
BDI model requires modal logics
Many modal logics pose unrealistic computational requirements:
BDI model too general as a basis for agent cooperation
ACL = 2 components:
Allows formulating distributed algorithms in a heterogeneous agent society
Basis: human communication/speech acts
:content "price(ISBN348291, 24.95)"
Represents a single speech act (tell)
which defines the relevant attributes
:sender        sender of the message
:receiver      receiver of the message
:from          actual origin if forwarded
:to            actual destination if to be forwarded
:in-reply-to   reference if this is a reply
:reply-with    reference if a reply is expected
:language      language used for content
:ontology      ontology used for content
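Putting the attributes together, a complete KQML tell message might look like this sketch (agent names, reply identifier, language and ontology values are invented placeholders; only the :content value comes from the original):

    (tell
      :sender     seller-agent
      :receiver   buyer-agent
      :reply-with q1
      :language   prolog
      :ontology   books
      :content    "price(ISBN348291, 24.95)")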
tell(P1, P2, …)
Achieve: make a proposition true
Unachieve: undo the previous achieve
Most commonly: Knowledge Interchange Format (KIF)
= predicate calculus in Lisp form
agents are logical reasoners (EPILOG system available from Stanford University)
Several agent systems use KIF as a basis, e.g. IBM's ABE (Agent Building Environment)
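For illustration only (the book/price predicates are made up), KIF content looks like predicate calculus written as Lisp s-expressions:

    (price ISBN348291 24.95)
    (forall (?b) (=> (book ?b) (exists (?p) (price ?b ?p))))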
Responding to queries = deciding whether a KIF expression holds (can be proven)
Contracting tasks (achieve) = defining subgoals in a plan
Querying information = asking for information for completing a proof
in general first-order logic, all these tasks are semi-decidable, but not decidable
Content languages must make formulating and responding to performatives decidable!
Description logic =
reduced form of predicate calculus for which subsumption (whether an expression falls under a certain class) can be efficiently decided
Usefulness for planning and negotiation less clear
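As a hedged illustration (concept names invented for the example): in description-logic notation one might define CheapPhone ≡ Phone ⊓ ∃hasPrice.Low, and a reasoner can then efficiently decide the subsumption that every CheapPhone is a Phone.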
CCL allows expressing constraint satisfaction problems:
variables, domains, relations and constraints
answering an information query = deciding whether an assignment satisfying the constraints exists
Other advantages: more efficient to formulate complex protocols, especially negotiation and coordinated plans
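To make the idea concrete without claiming CCL's actual syntax, here is a tiny Python sketch of such a constraint satisfaction problem (variables, domains and constraints are invented for the example):

    from itertools import product

    # Toy CSP: two meeting slots that must differ and fit before a deadline.
    variables = ["slot_a", "slot_b"]
    domains = {"slot_a": [9, 10, 11], "slot_b": [10, 11, 12]}
    constraints = [
        lambda a: a["slot_a"] != a["slot_b"],   # relation between variables
        lambda a: a["slot_b"] <= 11,            # unary constraint
    ]

    def satisfiable():
        # Answering the query = deciding whether some assignment satisfies all constraints.
        for values in product(*(domains[v] for v in variables)):
            assignment = dict(zip(variables, values))
            if all(c(assignment) for c in constraints):
                return assignment
        return None

    print(satisfiable())   # e.g. {'slot_a': 9, 'slot_b': 10}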
Agent Communication languages are important for building heterogeneous agent systems.
2 levels: performative + content
Issue with content: is there an efficient mechanism for programming agent conversations around them?
Behavior: conventional real-time programming
Planning/Reasoning: with limited resources
Options for reasoning under time constraints:
Estimate computation time → set parameters of the method
not very promising in practice
Idea: run several methods in parallel
First one to find a solution wins
Can be very successful!
Idea: algorithm first finds rough solution, then improves with more computation time.
Example: iterative deepening search
Depth-first search: not necessarily shortest path, limited memory requirement
Breadth-first search: finds shortest path, large memory requirement
Usually, breadth-first search requires too much memory to be practical.
Main problem with depth-first search:
impose a depth limit l:
never explore nodes at depth > l
What is the right depth limit?
Idea: depth-limited search with increasing limit
Some repeated work, but:
tree with n leaves always has < n intermediate nodes
complexity no more than double of straight DFS
As time allows, increase the search depth
Solution improves with more computation time
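A compact Python sketch of iterative deepening (the adjacency-dict graph is just an invented example; cycle checking is omitted for brevity):

    def depth_limited_search(graph, node, goal, limit, path):
        # Depth-first search that never explores nodes below the depth limit.
        if node == goal:
            return path
        if limit == 0:
            return None
        for child in graph.get(node, []):
            result = depth_limited_search(graph, child, goal, limit - 1, path + [child])
            if result is not None:
                return result
        return None

    def iterative_deepening(graph, start, goal, max_depth=20):
        # Repeat depth-limited search with an increasing limit; the first hit is a
        # shallowest goal, while memory stays proportional to the current depth.
        for limit in range(max_depth + 1):
            result = depth_limited_search(graph, start, goal, limit, [start])
            if result is not None:
                return result
        return None

    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(iterative_deepening(graph, "A", "D"))   # ['A', 'B', 'D']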
Idea: precompute solutions for the most common problems
customized agent which works well only in a certain environment
Learn a strategy π, mapping S → A(S), such that the average payoff is maximized
Game theory: optimize only current step
But actions also determine the future state → future payoffs!
optimal strategy can be computed in two steps:
Often, the model is not known a priori
First approach tends to be unstable!
Biggest limitation of reinforcement learning:
space of states, actions must be finite, small
Main difficulty: modeling real-world problems to fit the framework!
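A minimal tabular Q-learning sketch in Python, under the assumption of a small finite state/action space; the environment step function and all names are placeholders, not from the original:

    import random
    from collections import defaultdict

    def q_learning(states, actions, step, episodes=500,
                   alpha=0.1, gamma=0.9, epsilon=0.1):
        # step(state, action) -> (next_state, payoff, done) is environment-specific.
        Q = defaultdict(float)                     # Q[(state, action)] value estimates
        for _ in range(episodes):
            state, done = random.choice(states), False
            while not done:
                if random.random() < epsilon:      # explore occasionally
                    action = random.choice(actions)
                else:                              # otherwise exploit current estimates
                    action = max(actions, key=lambda a: Q[(state, a)])
                next_state, payoff, done = step(state, action)
                best_next = max(Q[(next_state, a)] for a in actions)
                Q[(state, action)] += alpha * (payoff + gamma * best_next - Q[(state, action)])
                state = next_state
        # Learned strategy: map each state to the best-valued action.
        return {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}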
Challenge for embedded behavior:
Two successful methods:
reinforcement learning try it at: