
Preference Reasoning in Logic Programming




  1. Preference Reasoning in Logic Programming • Pierangelo Dell’Acqua • Aida Vitória • Dept. of Science and Technology - ITN • Linköping University, Sweden • {pier, aidvi}@itn.liu.se • José Júlio Alferes • Luís Moniz Pereira • Centro de Inteligência Artificial - CENTRIA • Universidade Nova de Lisboa, Portugal • {jja, lmp}@di.fct.unl.pt

  2. Outline • Combining updates and preferences • User preference information in query answering • Preferring alternative explanations • Preferring and updating in multi-agent systems

  3. References
[JELIA00] J. J. Alferes and L. M. Pereira. Updates plus Preferences. Proc. 7th European Conf. on Logics in Artificial Intelligence (JELIA00), LNAI 1919, 2000.
[INAP01] P. Dell'Acqua and L. M. Pereira. Preferring and Updating in Logic-Based Agents. Selected Papers from the 14th Int. Conf. on Applications of Prolog (INAP01), LNAI 2543, 2003.
[JELIA02] J. J. Alferes, P. Dell'Acqua and L. M. Pereira. A Compilation of Updates plus Preferences. Proc. 8th European Conf. on Logics in Artificial Intelligence (JELIA02), LNAI 2424, 2002.
[FQAS02] P. Dell'Acqua, L. M. Pereira and A. Vitória. User Preference Information in Query Answering. 5th Int. Conf. on Flexible Query Answering Systems (FQAS02), LNAI 2522, 2002.

  4. 1. Update reasoning • Updates model dynamically evolving worlds: • knowledge, whether complete or incomplete, can be updated to reflect changes in the world. • new knowledge may contradict and override older knowledge. • updates differ from revisions, which are about an incomplete but static world model.

  5. Preference reasoning Preferences are employed with incomplete knowledge, when several models are possible: • preferences act by choosing some of the possible models. • this is achieved via a partial order among rules: rules only fire if they are not defeated by more preferred rules. • our preference approach is based on that of [KR98]. [KR98] G. Brewka and T. Eiter. Preferred answer sets for extended logic programs. KR’98, 1998.

  6. Preference and updates combined Despite their differences, preferences and updates display similarities. Both can be seen as wiping out rules: • in preferences, the less preferred rules, so as to remove models which are undesired. • in updates, the older rules, even to obtain models for otherwise inconsistent theories. This view helps put them together into a single uniform framework. In this framework, preferences can themselves be updated.

  7. LP framework
Atomic formulae:
  A (objective atom)
  not A (default atom)
Formulae (generalized rules):
  L0 ← L1, ..., Ln
where every Li is an objective or default atom.

  8. Let N be a set of constants containing a unique name for each generalized rule.
Priority rule:
  Z ← L1, ..., Ln
where Z is a literal r1 < r2 or not r1 < r2; r1 < r2 means that rule r1 is preferred to rule r2.
Def. Prioritized logic program
Let P be a set of generalized rules and R a set of priority rules. Then Π = (P, R) is a prioritized logic program.

  9. Dynamic prioritized programs
Let S = {1, …, s, …} be a set of states (natural numbers).
Def. Dynamic prioritized program
Let (Pi, Ri) be a prioritized logic program for every i ∈ S. Then 𝒫 = {(Pi, Ri) : i ∈ S} is a dynamic prioritized program.
The meaning of such a sequence results from updating (P1, R1) with the rules from (P2, R2), and then updating the result with … the rules from (Pn, Rn).
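Concretely, a rule-level data representation can make these definitions executable. The following Python sketch fixes one possible encoding, used by the examples below; the tuple layout and helper name are our own choices, not the paper's.

```python
# Hypothetical encoding (ours, not the paper's): an atom is a string,
# a literal is ('pos', A) or ('not', A), and a generalized rule is a
# triple (name, head, body) whose body is a tuple of literals.
# Priority literals r1 < r2 are encoded as atoms such as 'r1<r2'.

def rule(name, head, *body):
    """Build a named generalized or priority rule."""
    return (name, head, tuple(body))

# A prioritized logic program is then a pair (P, R) of sets of such
# rules, and a dynamic prioritized program is simply a list of pairs
# [(P1, R1), (P2, R2), ...], indexed by state.
```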

  10. Example
Suppose a scenario where Stefano watches programs on football, tennis, or the news.
(1) In the initial situation, being a typical Italian, Stefano prefers both football and tennis to the news and, in case of international competitions, he prefers tennis over football.
P1:
  f ← not t, not n   (r1)
  t ← not f, not n   (r2)
  n ← not f, not t   (r3)
R1:
  r1 < r3
  r2 < r3
  r2 < r1 ← us
  x < y ← x < z, z < y
In this situation, Stefano has two equally preferable TV programmes: football and tennis.

  11. (2) Next, suppose that a US Open tennis competition takes place:
P2:
  us ←   (r4)
R2: {}
Now, Stefano's favourite programme is tennis.
(3) Finally, suppose that Stefano's preferences change and he becomes interested in international news. Then, in case of breaking news, he will prefer news over both football and tennis:
P3:
  bn ←   (r5)
R3:
  not (r1 < r3) ← bn
  not (r2 < r3) ← bn
  r3 < r1 ← bn
  r3 < r2 ← bn
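Under the toy encoding sketched earlier, the three states of this example can be written down as data. The rule names q1–q7 for the priority rules are ours; the schematic transitivity rule x < y ← x < z, z < y is omitted, since the sketch handles only ground rules.

```python
def rule(name, head, *body):
    return (name, head, tuple(body))

P1 = {rule('r1', ('pos', 'f'), ('not', 't'), ('not', 'n')),
      rule('r2', ('pos', 't'), ('not', 'f'), ('not', 'n')),
      rule('r3', ('pos', 'n'), ('not', 'f'), ('not', 't'))}
R1 = {rule('q1', ('pos', 'r1<r3')),
      rule('q2', ('pos', 'r2<r3')),
      rule('q3', ('pos', 'r2<r1'), ('pos', 'us'))}

P2, R2 = {rule('r4', ('pos', 'us'))}, set()

P3 = {rule('r5', ('pos', 'bn'))}
R3 = {rule('q4', ('not', 'r1<r3'), ('pos', 'bn')),
      rule('q5', ('not', 'r2<r3'), ('pos', 'bn')),
      rule('q6', ('pos', 'r3<r1'), ('pos', 'bn')),
      rule('q7', ('pos', 'r3<r2'), ('pos', 'bn'))}

dyn = [(P1, R1), (P2, R2), (P3, R3)]
```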

  12. Preferred stable models
Let 𝒫 = {(Pi, Ri) : i ∈ S} be a dynamic prioritized program, Q = {Pi ∪ Ri : i ∈ S}, PR = ∪i (Pi ∪ Ri), and M an interpretation of 𝒫.
Def. Default and Rejected rules
  Default(PR, M) = {not A : ∄ (A ← L1, …, Ln) in PR with M ⊨ L1, …, Ln}
  Reject(s, M, Q) = {r ∈ Pi ∪ Ri : ∃ r′ ∈ Pj ∪ Rj, head(r) = not head(r′), i < j ≤ s and M ⊨ body(r′)}

  13. Def. Unsupported and Unpreferred rules
  Unsup(PR, M) = {r ∈ PR : M ⊨ head(r) and M ⊭ body⁻(r)}
  Unpref(PR, M) is the least set including Unsup(PR, M) and every rule r such that:
  ∃ r′ ∈ (PR − Unpref(PR, M)) : M ⊨ r′ < r, M ⊨ body⁺(r′) and [not head(r′) ∈ body⁻(r) or (not head(r) ∈ body⁻(r′) and M ⊨ body(r))]

  14. Def. Preferred stable models
Let s be a state, 𝒫 = {(Pi, Ri) : i ∈ S} a dynamic prioritized program, and M a stable model of 𝒫. M is a preferred stable model of 𝒫 at state s iff
  M = least([X − Unpref(X, M)] ∪ Default(PR, M))
where:
  PR = ∪i≤s (Pi ∪ Ri)
  Q = {Pi ∪ Ri : i ∈ S}
  X = PR − Reject(s, M, Q)
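For ground programs these definitions can be run directly. Below is a minimal Python sketch of the preferred-stable-model test in the encoding introduced earlier; all function names are ours, M is the set of true atoms of a stable model, Q is the list [P1 ∪ R1, P2 ∪ R2, …], and the Unpref computation is a simple inflationary approximation of the least set in the definition.

```python
from itertools import chain

def holds(lit, M):
    """M |= lit, where M is the set of true atoms."""
    sign, atom = lit
    return (atom in M) if sign == 'pos' else (atom not in M)

def body_holds(body, M):
    return all(holds(l, M) for l in body)

def atoms_of(PR):
    return ({a for (_, (_, a), _) in PR} |
            {a for (_, _, b) in PR for (_, a) in b})

def default(PR, M):
    """Default(PR, M): atoms A with no rule A <- Body in PR whose
       body holds in M; returned as plain atoms, read as 'not A'."""
    return {a for a in atoms_of(PR)
            if not any(h == ('pos', a) and body_holds(b, M)
                       for (_, h, b) in PR)}

def reject(s, M, Q):
    """Reject(s, M, Q): rules overridden at a strictly later state
       (up to s) by a rule with complementary head whose body holds."""
    rej = set()
    for i in range(s):
        for r in Q[i]:
            sg, a = r[1]
            comp = ('not' if sg == 'pos' else 'pos', a)
            if any(h2 == comp and body_holds(b2, M)
                   for j in range(i + 1, s) for (_, h2, b2) in Q[j]):
                rej.add(r)
    return rej

def unsup(PR, M):
    """Unsup(PR, M): head true in M but negative body false in M."""
    return {r for r in PR
            if holds(r[1], M)
            and not all(holds(l, M) for l in r[2] if l[0] == 'not')}

def unpref(PR, M):
    """Inflationary approximation of Unpref(PR, M)."""
    U = set(unsup(PR, M))
    while True:
        new = set()
        for r in PR - U:
            for rp in PR - U:
                if rp == r or f'{rp[0]}<{r[0]}' not in M:
                    continue              # need M |= name(rp) < name(r)
                if not all(holds(l, M) for l in rp[2] if l[0] == 'pos'):
                    continue              # need M |= body+(rp)
                c1 = rp[1][0] == 'pos' and ('not', rp[1][1]) in r[2]
                c2 = (r[1][0] == 'pos' and ('not', r[1][1]) in rp[2]
                      and body_holds(r[2], M))
                if c1 or c2:
                    new.add(r)
        if not new:
            return U
        U |= new

def least(rules, defaults):
    """Least model, treating default literals 'not A' as fresh atoms
       and the atoms in `defaults` as 'not A' facts."""
    T = {('not', a) for a in defaults}
    changed = True
    while changed:
        changed = False
        for (_, h, b) in rules:
            if h not in T and all(l in T for l in b):
                T.add(h)
                changed = True
    return T

def is_preferred(M, s, Q):
    """Does M = least([X - Unpref(X, M)] U Default(PR, M)) at state s?"""
    PR = set(chain.from_iterable(Q[:s]))
    X = PR - reject(s, M, Q)
    T = least(X - unpref(X, M), default(PR, M))
    M_lits = ({('pos', a) for a in M} |
              {('not', a) for a in atoms_of(PR) - M})
    return T == M_lits
```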

  15. Compiling dynamic prioritized programs • Let s be a state and 𝒫 = {(Pi, Ri) : i ∈ S} a dynamic prioritized program. • In [JELIA02] we gave a transformation that compiles a dynamic prioritized program 𝒫 at state s into a normal logic program. • The preference part of the transformation is modular, i.e. incremental, wrt. its update part. • In the worst case, the size of the transformed program is quadratic in the size of the original dynamic prioritized program 𝒫. • An implementation of the transformation is available at: http://centria.di.fct.unl.pt/~jja/updates

  16. Thm. Correctness of the transformation An interpretation M is a stable model of the transformed program iff M, restricted to the language of 𝒫, is a preferred stable model of 𝒫 at state s.

  17. 2. User preference information in query answering Query answering systems are often difficult to use because they do not attempt to cooperate with their users. The use of additional information about the user can enhance the cooperative behaviour of query answering systems [FQAS02].

  18. Consider a system whose knowledge is formalized by a prioritized logic program Π = (P, R). An extra level of flexibility is gained if the user can provide preference information at query time: ?- (G, Pref). Given Π = (P, R), the system has to derive G from P by taking into account the preferences in R, updated by the preferences in Pref. Finally, it is desirable to make the background knowledge (P, R) of the system updatable, so that it can be modified to reflect changes in the world (including changes in the preferences).

  19. The ability to take user information into account enables the system to target its answers to the user's goals and interests. Def. Queries with preferences Let G be a goal, Π a prioritized logic program, and 𝒫 = {(Pi, Ri) : i ∈ S} a dynamic prioritized program. Then ?- (G, Π) is a query wrt. 𝒫.

  20. Joinability function
S+ = S ∪ {max(S) + 1}
Def. Joinability at state s
Let s ∈ S+ be a state, 𝒫 = {(Pi, Ri) : i ∈ S} a dynamic prioritized program, and Π = (PX, RX) a prioritized logic program. The joinability function ⊕s at state s is defined by 𝒫 ⊕s Π = {(P′i, R′i) : i ∈ S+}, where:
  (P′i, R′i) = (Pi, Ri) if 1 ≤ i < s
  (P′i, R′i) = (PX, RX) if i = s
  (P′i, R′i) = (Pi−1, Ri−1) if s < i ≤ max(S+)
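Operationally, ⊕s just splices the user's prioritized program into the state sequence at position s. A minimal sketch in the list-of-pairs representation used earlier (the function name join is ours):

```python
def join(dyn, s, user):
    """P (+)s U: insert the user's prioritized program `user` at state
    s (1-indexed), shifting the pairs from state s onwards one state
    later.  dyn is a list of (P_i, R_i) pairs."""
    return dyn[:s - 1] + [user] + dyn[s - 1:]

# join(dyn, 1, user)            -- user preferences come first, so the
#                                  program's later states can override them
# join(dyn, len(dyn) + 1, user) -- user preferences come last and
#                                  override the program's own (slide 27)
```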

  21. Preferred conclusions
Def. Preferred conclusions
Let s ∈ S+ be a state and 𝒫 = {(Pi, Ri) : i ∈ S} a dynamic prioritized program. The preferred conclusions of 𝒫 with joinability function ⊕s are:
  {(G, Π) : G is included in every preferred stable model of 𝒫 ⊕s Π at state max(S+)}

  22. Example: car dealer
Consider the following program that exemplifies the process of quoting prices for second-hand cars.
  price(Car,200) ← stock(Car,Col,T), not price(Car,250), not offer   (r1)
  price(Car,250) ← stock(Car,Col,T), not price(Car,200), not offer   (r2)
  prefer(orange) ← not prefer(black)   (r3)
  prefer(black) ← not prefer(orange)   (r4)
  stock(Car,Col,T) ← bought(Car,Col,Date), T = today − Date   (r5)

  23. When the company buys a car, the information about the car must be added to the stock via an update:
  bought(fiat,orange,d1)
When the company sells a car, it must remove the car from the stock:
  not bought(volvo,black,d2)

  24. The selling strategy of the company can be formalized by adding the priority rules:
  r2 < r1 ← stock(Car,Col,T), T < 10
  r1 < r2 ← stock(Car,Col,T), T ≥ 10, not prefer(Col)
  r2 < r1 ← stock(Car,Col,T), T ≥ 10, prefer(Col)
  r4 < r3
to the rules r1–r5 of the previous slide.

  25. Suppose that the company adopts the policy of offering a special price for cars at certain times of the year:
  price(Car,100) ← stock(Car,Col,T), offer   (r6)
  not offer
Suppose an orange fiat bought on date d1 is in stock and offer does not hold. Independently of the joinability function used:
  ?- (price(fiat,P), ({},{}))
  P = 250 if today − d1 < 10
  P = 200 if today − d1 ≥ 10

  26. ?- (price(fiat,P), ({},{not (r4 < r3), r3 < r4}))
  P = 250
For this query it is relevant which joinability function is used: • if we use ⊕1, then we do not get the intended answer, since the user preferences are overridden by the default preferences of the company; • on the other hand, it is not so appropriate to use ⊕max(S+) either, since a customer could then ask: ?- (price(fiat,P), ({offer},{})) and obtain the special offer price.

  27. Selecting a joinability function In some applications the user preferences in Π must have priority over the preferences in 𝒫. In this case, the joinability function ⊕max(S+) must be used. Example: a web-site application of a travel agency whose database 𝒫 maintains information about holiday resorts and preferences among tourist locations. When a user asks a query ?- (G, Π), the system must give priority to Π. Other applications need the joinability function ⊕1 instead, to give priority to the preferences in 𝒫.

  28. Open issues • Detecting inconsistent preference specifications. • How to incorporate abduction in our framework: abductive preferences leading to conditional answers that depend on accepting a preference. • How to tackle the problems arising when several users query the system simultaneously.

  29. 3. Preferring abducibles The evaluation of alternative explanations is one of the central problems of abduction. • An abductive problem of reasonable size may suffer a combinatorial explosion of possible explanations. • It is important to generate only the explanations that are relevant. Some proposals involve a global criterion against which each explanation as a whole can be evaluated. • A general drawback of those approaches is that global criteria are typically domain-independent and computationally expensive. • An alternative to global criteria is to allow the theory to contain rules encoding domain-specific information about the likelihood that a particular assumption is true.

  30. In our approach we can express preferences among abducibles to discard the unwanted assumptions. Preferences over alternative abducibles can be coded into cycles over negation, and preferring a rule will break the cycle in favour of one abducible or another.

  31. Example
Consider a situation where an agent Peter drinks either tea or coffee (but not both). Suppose that Peter prefers coffee to tea when sleepy. This situation can be represented by a set Q of generalized rules with set of abducibles AQ = {tea, coffee}:
Q = { drink ← tea
      drink ← coffee
      coffee ◁ tea ← sleepy }
where a ◁ b means that abducible a is preferred to abducible b.

  32. In our framework, Q can be coded into the following set P of generalized rules with set of abducibles AP = {abduce}:
P = { drink ← tea
      drink ← coffee
      coffee ← abduce, not tea, confirm(coffee)   (r1)
      tea ← abduce, not coffee, confirm(tea)   (r2)
      confirm(tea) ← expect(tea), not expect_not(tea)
      confirm(coffee) ← expect(coffee), not expect_not(coffee)
      expect(tea)
      expect(coffee)
      r1 < r2 ← sleepy, confirm(coffee) }
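The coding generalizes to any number of mutually exclusive abducibles: each abducible gets a rule guarded by the default negation of its competitors, which creates the cycle over negation, plus the confirm/expect scaffolding. A hypothetical Python generator for this scheme (the function name and the textual rule syntax are ours):

```python
def code_abducibles(abducibles):
    """Generate, as strings, the rules coding a set of mutually
    exclusive abducibles in the style of the slide above: a cycle
    over negation among competitors plus confirm/expect scaffolding.
    Priority rules among the r_<name> rules are left to the caller."""
    rules = []
    for a in abducibles:
        competitors = ', '.join(f'not {b}' for b in abducibles if b != a)
        rules.append(f'{a} <- abduce, {competitors}, confirm({a})   (r_{a})')
        rules.append(f'confirm({a}) <- expect({a}), not expect_not({a})')
    return rules

# code_abducibles(['tea', 'coffee']) reproduces rules r1 and r2 of the
# program P above (modulo rule names and the expect facts).
```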

  33. Having the notion of expectation allows one to express the preconditions for an expectation:
  expect(tea) ← have_tea
  expect(coffee) ← have_coffee
By means of expect_not one can express situations where something is not expected:
  expect_not(coffee) ← blood_pressure_high

  34. 4. Preferring and updating in multi-agent systems In [INAP01] we proposed a logic-based approach to agents that can: • Reason and react to other agents • Prefer among possible choices • Intend to reason and to act • Update their own knowledge, reactions and goals • Interact by updating the theory of another agent • Decide whether to accept an update depending on the requesting agent

  35. Updating agents Updating agent: a rational, reactive agent that can dynamically change its own knowledge and goals: • makes observations • reciprocally updates other agents with goals and rules • thinks a bit (rational) • selects and executes an action (reactive)

  36. Preferring agents Preferring agent: an agent that is able to prefer beliefs and reactions when several alternatives are possible. • Agents can express preferences about their own rules. • Preferences are expressed via priority rules. • Preferences can be updated, possibly on advice from others.

  37. Agent’s language
Atomic formulae:
  A (objective atom)
  not A (default atom)
  j : C (project)
  i ÷ C (update)
Formulae:
  A ← L1, ..., Ln
  not A ← L1, ..., Ln   (generalized/priority rules; every Li is an atom, an update, or a negated update)
  false ← L1, ..., Ln, Z1, ..., Zm   (integrity constraint; every Zj is a project)
  L1, ..., Ln ⇒ Z   (active rule; Z is a project)

  38. Agent’s knowledge states • Knowledge states represent dynamically evolving states of agents’ knowledge. They undergo change due to updates. • Given the current knowledge state Ps , its successor knowledge state Ps+1 is produced as a result of the occurrence of a set of parallel updates. • Update actions do not modify the current or any of the previous knowledge states. They only affect the successor state: the precondition of the action is evaluated in the current state and the postcondition updates the successor state.
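A minimal sketch of this state-transition discipline, assuming knowledge states are plain sets of facts and each update carries a precondition and a postcondition (all names here are ours, not the paper's):

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, List

State = FrozenSet[str]  # a knowledge state, here just a set of facts

@dataclass
class Update:
    precondition: Callable[[State], bool]  # evaluated in the current state
    post: FrozenSet[str]                   # facts added to the successor

def step(history: List[State], updates: List[Update]) -> List[State]:
    """Produce the successor state P_{s+1} from the current state P_s:
    preconditions are checked against the current state only, and the
    effects touch only the new state; earlier states are never modified."""
    current = history[-1]
    successor = set(current)
    for u in updates:
        if u.precondition(current):
            successor |= u.post
    return history + [frozenset(successor)]
```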

  39. Projects and updates • A project j : C denotes the intention of some agent i of proposing to update the theory of agent j with C. • An update i ÷ C denotes an update, proposed by agent i, of the current theory of some agent j with C. For example, if agent Fred executes the project wilma : C, then agent Wilma receives the update fred ÷ C.

  40. Representation of conflicting information and preferences
Preferences may resolve conflicting information. This example models a situation where an agent, Fabio, receives conflicting advice from two reliable authorities. Let (P, R) be the initial theory of Fabio, where R = {} and
P = { dont(A) ← fa(noA), not do(A)   (r1)
      do(A) ← ma(A), not dont(A)   (r2)
      false ← do(A), fa(noA)
      false ← dont(A), ma(A)
      r1 < r2 ← fr
      r2 < r1 ← mr }
where fa = father advises, ma = mother advises, fr = father responsibility, mr = mother responsibility.

  41. Suppose that Fabio wants to live alone, represented as LA. His mother advises him to do so, but his father advises him not to:
  U1 = { mother ÷ ma(LA), father ÷ fa(noLA) }
Fabio accepts both updates, and is therefore still unable to choose either do(LA) or dont(LA); as a result, he does not perform any action whatsoever.

  42. Afterwards, Fabio's parents separate and the judge assigns responsibility over Fabio to his mother:
  U2 = { judge ÷ mr }
Now the situation changes, since the second priority rule gives preference to the mother's wishes, and therefore Fabio can happily conclude "do live alone".

  43. Updating preferences Within the theory of an agent both rules and preferences can be updated. • The updating process is triggered by means of external or internal projects. • Here internal projects of an agent are used to update its own priority rules.

  44. Let the theory of Stan be characterized by:
P = { workLate ← not party   (r1)
      party ← not workLate   (r2)
      money ← workLate   (r3)
      beautifulWoman ⇒ stan : wishGoOut
      wishGoOut, not money ⇒ stan : getMoney
      wishGoOut, money ⇒ beautifulWoman : inviteOut
      getMoney ⇒ stan : (r1 < r2)
      getMoney ⇒ stan : (not r2 < r1) }   % to get money, Stan must update his priority rules
R = { r2 < r1 }   % partying is preferred to working until late
