
  1. Preferring and Updating in Abductive Multi-Agent Systems
Pierangelo Dell’Acqua, Dept. of Science and Technology, Linköping University, pier@itn.liu.se
Luís Moniz Pereira, CENTRIA, Departamento de Informática, Universidade Nova de Lisboa, lmp@di.fct.unl.pt

  2. Our agents
We propose a LP approach to agents that can:
• Reason and react to other agents
• Abduce hypotheses to solve goals and to explain observations
• Prefer among possible choices
• Intend to reason and to act
• Update their own knowledge, reactions and goals
• Interact by updating the theory of another agent
• Decide whether to accept an update depending on the requesting agent

  3. Framework
This framework builds on the following work:
• Updating Agents (P. Dell’Acqua & L. M. Pereira, MAS'99)
• Updates plus Preferences (J. J. Alferes & L. M. Pereira, JELIA'00)

  4. Updating agents
Updating agent: a rational, reactive agent that can dynamically change its own knowledge and goals:
• makes observations
• reciprocally updates other agents with goals and rules
• thinks a bit (rational)
• selects and executes an action (reactive)

  5. Abductive agents
Abductive agent: an agent that can abduce hypotheses to solve goals and to explain observations.
• Hypotheses must satisfy the integrity constraints.
• Hypotheses abduced in proving a goal G are not permanent: they only hold during the proof of G.
• Hypotheses can be committed to by self-updating.

  6. Updates plus preferences
• A logic programming framework that combines two distinct forms of reasoning: preferring and updating.
• A language capable of considering sequences of logic programs that result from the consecutive updates of an initial program, where it is possible to define a priority relation among the rules of all successive programs. Updates create new models, while preferences allow us to select among pre-existing models.
• The priority relation can itself be updated.

  7. Preferring agents
Preferring agent: an agent that is able to prefer beliefs, reactions and abducibles when several alternatives are possible.
• Agents can express preferences about their own rules and abducibles.
• Preferences are expressed via priority rules.
• Preferences can be updated, possibly on advice from others.

  8. Claim
We argue that our present theory of this type of agent is a rich, integrative and evolvable basis, suitable for engineering configurable, dynamic, self-organizing and self-evolving agent societies. Thus the overall emerging structure will be flexible and dynamic: each agent has its own explicit, updatable representation of its organization.

  9. Agent’s language
• Atomic formulae:
  A          objective atom
  not A      default atom
  i:C        project
  i÷C        update
• Formulae:
  generalized rule:      A ← L1 ∧ ... ∧ Ln    and    not A ← L1 ∧ ... ∧ Ln
                         (each Li is an update or an atom)
  integrity constraint:  false ← L1 ∧ ... ∧ Ln ∧ Z1 ∧ ... ∧ Zm
                         (each Zj is a project)
  active rule:           L1 ∧ ... ∧ Ln ⇒ Z

  10. Agent’s language
A project i:C can take one of the forms:
  i:( A ← L1 ∧ ... ∧ Ln )
  i:( not A ← L1 ∧ ... ∧ Ln )
  i:( false ← L1 ∧ ... ∧ Ln ∧ Z1 ∧ ... ∧ Zm )
  i:( L1 ∧ ... ∧ Ln ⇒ Z )
  i:( ?- L1 ∧ ... ∧ Ln )          goal
Note that a program can be updated with another program, i.e., any rule can be updated.
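As an aside, the syntactic categories of slides 9 and 10 can be pictured as ordinary data structures. The sketch below is purely illustrative: the class and field names are invented, and the update operator ÷ is rendered as an Update constructor; it is not the authors' implementation.

# Purely illustrative encoding of the agent language (slides 9-10).
from dataclasses import dataclass, field
from typing import List, Union

@dataclass(frozen=True)
class Literal:
    atom: str
    default_negated: bool = False        # True encodes a default atom "not A"

@dataclass(frozen=True)
class Rule:                              # generalized rule: (not) A ← L1 ∧ ... ∧ Ln
    head: Literal
    body: List[Literal] = field(default_factory=list)

@dataclass(frozen=True)
class Goal:                              # ?- L1 ∧ ... ∧ Ln
    body: List[Literal]

@dataclass(frozen=True)
class Project:                           # i:C  (intention to update agent i with C)
    target: str
    content: Union[Rule, Goal, "ActiveRule"]

@dataclass(frozen=True)
class Update:                            # i÷C  (update proposed by agent i)
    source: str
    content: Rule

@dataclass(frozen=True)
class ActiveRule:                        # L1 ∧ ... ∧ Ln ⇒ Z, with Z a project
    condition: List[Literal]
    project: Project

# e.g. the active rule of the later distributed-database example, NrC ⇒ t:NrC:
forward = ActiveRule(condition=[Literal("NrC")],
                     project=Project("t", Rule(Literal("NrC"))))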

  11. Agents’ knowledge states
• Knowledge states represent dynamically evolving states of agents’ knowledge. They undergo change due to updates.
• Given the current knowledge state Ps, its successor knowledge state Ps+1 is produced as a result of the occurrence of a set of parallel updates.
• Update actions do not modify the current or any of the previous knowledge states. They only affect the successor state: the precondition of the action is evaluated in the current state and the postcondition updates the successor state.
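The state-transition discipline of slide 11 (updates never touch past states, only the successor) can be illustrated by a deliberately simplified sketch. The "latest update wins" policy below is a crude propositional stand-in for the paper's stable-models-based update semantics, and the atoms are invented.

# Deliberately simplified sketch of slide 11: a knowledge state is the sequence
# P0 ... Ps of programs; a set of parallel updates only ever creates Ps+1.
from typing import Dict, List, Set, Tuple

Fact = Tuple[str, bool]                  # (atom, truth value): propositional only

def successor_state(states: List[Set[Fact]], updates: Set[Fact]) -> List[Set[Fact]]:
    """Earlier states are never modified; the updates form the successor state."""
    return states + [set(updates)]

def current_view(states: List[Set[Fact]]) -> Set[Fact]:
    """Crude inertia: a later update overrides a conflicting earlier fact."""
    view: Dict[str, bool] = {}
    for program in states:               # oldest first
        for atom, value in program:
            view[atom] = value           # newer value wins
    return {(atom, value) for atom, value in view.items()}

# Invented example: a fact asserted in P0 is retracted by the next update.
history: List[Set[Fact]] = [{("residence_lisbon", True)}]
history = successor_state(history, {("residence_lisbon", False)})
assert ("residence_lisbon", False) in current_view(history)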

  12. Projects and updates
• A project j:C denotes the intention of some agent i of proposing to update the theory of agent j with C.
• An update i÷C denotes an update proposed by agent i of the current theory of some agent j with C.

  13. Priority rules
Let < be a binary predicate symbol whose set of constants includes all the generalized rules: r1 < r2 means that rule r1 is preferred to rule r2. A priority rule is a generalized rule defining <.

  14. Prioritized abductive LP
A prioritized abductive LP is a pair (P,A):
• P is a set of generalized rules (possibly including priority rules) and integrity constraints.
• A is a set of objective and default atoms (the abducibles).

  15. Agent theory
• The initial theory of an agent α is a tuple (P,A,R):
  - (P,A) is a prioritized abductive LP.
  - R is a set of active rules.
• An updating program is a finite set of updates.
• Let S be a set of natural numbers. We call the elements s ∈ S states.
• An agent α at state s, written (α,s), is a pair (T,U):
  - T is the initial theory of α.
  - U = {U1, …, Us} is a sequence of updating programs.

  16. Multi-agent system
• A multi-agent system M = {(α1,s), …, (αn,s)} at state s is a set of agents α1, …, αn at state s.
• M characterizes a fixed society of evolving agents.
• The declarative semantics of M characterizes the relationship among the agents in M and how the system evolves.
• The declarative semantics is stable-models based.

  17. Distributed databases and cooperative agents
Communication and updates make it possible to integrate distinct agents. Suppose we want to minimize the administrative procedure required for changing residence: we notify the new residence only once, at a public office (p), and it is then the responsibility of that office to inform all the relevant offices. The office p can be characterized by (P,A,R), where A = {} and

  P = { rC,
        reject(rC) ← NrC }          rC = residence of Carlo, NrC = new residence of Carlo

  R = { NrC ⇒ t:NrC }
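Operationally, the active rule NrC ⇒ t:NrC says that as soon as p comes to know the new residence, it projects that fact onto office t. The sketch below is a rough, hypothetical rendering of that forwarding behaviour; the office names and helper functions are invented and do not come from the paper.

# Hypothetical sketch of slide 17: office p forwards a residence change to the
# offices it must notify.
from typing import Dict, Set

def receive_update(beliefs: Set[str], fact: str) -> Set[str]:
    """A receiving office simply adds the communicated fact to its next state."""
    return beliefs | {fact}

def public_office_p(fact: str, offices: Dict[str, Set[str]]) -> None:
    """Active rule NrC ⇒ t:NrC, generalized here to every registered office t."""
    for name, beliefs in offices.items():
        offices[name] = receive_update(beliefs, fact)

offices: Dict[str, Set[str]] = {"tax_office": set(), "health_office": set()}
public_office_p("NrC", offices)          # Carlo notifies his new residence once
assert "NrC" in offices["tax_office"] and "NrC" in offices["health_office"]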

  18. Representation of conflicting information and preferences
Preferences may resolve conflicting information. This example models a situation where an agent, Fabio, receives conflicting advice from two reliable authorities. Let (P,A,R) be the initial theory of Fabio, where A = R = {} and

  P = { dont(A) ← fa(noA) ∧ not do(A)        (r1)
        do(A) ← ma(A) ∧ not dont(A)          (r2)
        false ← do(A) ∧ fa(noA)
        false ← dont(A) ∧ ma(A)
        r1 < r2 ← fr
        r2 < r1 ← mr }

where fa = father advises, ma = mother advises, fr = father responsibility, mr = mother responsibility.

  19. Representation of conflicting information and preferences
Suppose that Fabio wants to live alone, represented as lA. His mother advises him to do so, but his father advises him not to:

  U1 = { mother÷ma(lA),  father÷fa(nolA) }

Assuming that there are no rejection clauses, Fabio accepts both updates, and he is therefore still unable to choose either do(lA) or dont(lA); as a result, he does not perform any action whatsoever.

  20. Representation of conflicting information and preferences
Afterwards, Fabio's parents separate and the judge assigns responsibility over Fabio to the mother:

  U2 = { judge÷mr }

Now the situation changes, since the second priority rule gives preference to the mother's wishes, and therefore Fabio can happily conclude "do live alone".
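The way the priorities decide between r1 and r2 in this example can be mimicked by a small, purely illustrative enumeration. The function below is a hand-coded stand-in for the preferred-stable-models semantics, not part of the framework.

# Illustrative stand-in for slides 18-20: r1 and r2 conflict, and the priority
# rules (r1 < r2 ← fr, r2 < r1 ← mr) decide which one gets applied.
def fabio_decision(fa_no: bool, ma_yes: bool, mr: bool, fr: bool) -> str:
    r1 = fa_no                            # dont(lA) ← fa(nolA) ∧ not do(lA)
    r2 = ma_yes                           # do(lA)   ← ma(lA)   ∧ not dont(lA)
    if r1 and r2:                         # conflicting advice
        if mr and not fr:
            return "do(lA)"               # r2 < r1: the mother's wish prevails
        if fr and not mr:
            return "dont(lA)"             # r1 < r2: the father's wish prevails
        return "undecided"                # no applicable priority rule
    if r2:
        return "do(lA)"
    if r1:
        return "dont(lA)"
    return "undecided"

# After U1 (both parents advise) Fabio is stuck; after U2 (judge÷mr) he is not.
assert fabio_decision(fa_no=True, ma_yes=True, mr=False, fr=False) == "undecided"
assert fabio_decision(fa_no=True, ma_yes=True, mr=True, fr=False) == "do(lA)"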

  21. Updating preferences Within the theory of an agent both rules and preferences can be updated. The updating process is triggered by means of external or internal projects. Here internal projects of an agent are used to update its own priority rules.

  22. Updating preferences
Let the theory of George be characterized by A = {} and

  P = { workLate ← not party          (r1)
        party ← not workLate          (r2)
        money ← workLate              (r3)
        r2 < r1 }                     partying is preferred to working until late

  R = { beautifulWoman ⇒ george:wishGoOut
        wishGoOut ∧ not money ⇒ george:getMoney
        wishGoOut ∧ money ⇒ beautifulWoman:inviteOut
        getMoney ⇒ george:(r1 < r2)
        getMoney ⇒ george:(not r2 < r1) }      to get money, George must update his priority rules
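As a rough illustration of how active rules can fire internal projects that change the agent's own priority relation, consider the following hypothetical sketch of one of George's reasoning steps; the step function and the priority-flipping shortcut are invented for exposition, not the framework's semantics.

# Invented sketch of slide 22: when getMoney holds, two internal projects
# rewrite George's priority relation so that workLate becomes preferred.
from typing import List, Set, Tuple

Priority = Tuple[str, str]                # (r, r') stands for r < r'

def george_step(beliefs: Set[str], priorities: Set[Priority]):
    projects: List[Tuple[str, str]] = []
    if "beautifulWoman" in beliefs:
        projects.append(("george", "wishGoOut"))
    if "wishGoOut" in beliefs and "money" not in beliefs:
        projects.append(("george", "getMoney"))
    if "getMoney" in beliefs:
        # internal projects: assert r1 < r2 and retract r2 < r1
        priorities = (priorities | {("r1", "r2")}) - {("r2", "r1")}
    return projects, priorities

# Once getMoney has been self-asserted, the priority flip makes workLate (and
# hence money) the preferred alternative in the successor state.
_, flipped = george_step({"wishGoOut", "getMoney"}, {("r2", "r1")})
assert ("r1", "r2") in flipped and ("r2", "r1") not in flipped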

  23. Applications
Internet applications are an area where our agent technology has significant potential to contribute, e.g.:
• information integration
• web-site management

  24. Engineering agent societies
• We believe that the theory of our agents is rich enough, and suitable, for engineering configurable, dynamic, self-organizing and self-evolving agent societies.
• Jennings argues that:
  - open, networked systems are characterized by the fact that there is no single controlling organization;
  - the computational model of these systems imposes several requirements.

  25. Engineering agent societies
Computational model's requirements:
• the individual entities must be active and autonomous;
• the individual entities need to be reactive and proactive;
• the computational entities need to be capable of interacting with entities that were not foreseen at design time;
• any organizational relationships that do exist must be reflected in the behaviour and actions of the agents (i.e., the organizational relationships must be explicitly represented).

  26. Engineering agent societies
• Castelfranchi claims that:
  - the most effective solution to the problem of social order in multi-agent systems is social modelling;
  - it should leave some flexibility and try to deal with emergent and spontaneous forms of organization (that is, decentralized and autonomous social control).
• Problem: modelling the feedback from the global results to the local/individual layer.

  27. Introspection and metareasoning for social modelling
To solve this problem we need two ingredients: introspection and metareasoning.
• Introspection: to dynamically change the organization and structure of the multi-agent system, agents must be aware (even if only partially) of that structure and must be able to introspect about it.
• Metareasoning: by metareasoning an agent can evaluate that structure, obtain feedback from it, and eventually try to modify it, via preferences and updates, in a rational way.

  28. Future work
The approach can be extended in several ways:
• dynamically reconfigurable multi-agent systems;
• introspective and metareasoning abilities;
• other rational abilities can be incorporated, e.g. learning;
• a proof procedure for preference reasoning, to be incorporated into the current implementation of updates plus abduction.

  29. Conclusion
To have dynamic, flexible agent societies we need suitable agent theories; otherwise the structure modelling the agent society will be rigid, in the sense that it will not be modifiable by the agents themselves. We believe that our theory of agents is a suitable basis for achieving this aim.
