

  1. User preference information in query answering. Pierangelo Dell’Acqua, Aida Vitória (Dept. of Science and Technology - ITN, Linköping University, Sweden); Luís Moniz Pereira (Centro de Inteligência Artificial - CENTRIA, Universidade Nova de Lisboa, Portugal)

  2. Motivation • Query answering systems are often difficult to use because they do not attempt to cooperate with their users. • We discuss the use of additional information about the user to enhance cooperative behaviour from query answering systems.

  3. Idea • Consider a system whose knowledge is defined as a pair (P, R), where P is a set of rules and R expresses preference information over the rules in P. When the rules in P conflict, some rules are preferred over others according to R. P is used to derive conclusions, and the preferences in R to derive the preferred conclusions.

  4. Idea • An extra level of flexibility is gained if the user can provide preference information at query time: ?- (G, Pref). Given (P, R), the system has to derive G from P, taking into account the preferences in R updated by the preferences in Pref.

  5. Idea • Finally, it is desirable to make the background knowledge (P, R) of the system updatable, so that it can be modified to reflect changes in the world (including changes in preferences).

  6. Update reasoning • Updates model dynamically evolving worlds. • Knowledge, whether complete or incomplete, can be updated to reflect changes in the world. • New knowledge may contradict and override older knowledge. • Updates differ from revisions, which concern an incomplete model of a static world.

  7. Preference reasoning • Preferences are employed with incomplete knowledge when several models are possible. • Preferences act by choosing some of the possible models. • They do this via a partial order among rules. Rules will only fire if they are not defeated by more preferred rules.

  8. Preference and updates combined • Despite their differences, preferences and updates display similarities. • Both can be seen as wiping out rules: • in preferences, the less preferred rules, so as to remove models which are undesired; • in updates, the older rules, also so as to obtain models for otherwise inconsistent theories. • This view helps put them together into a single uniform framework. • In this framework, preferences can be updated.

  9. LP framework Atomic formulae: A (atom), not A (default atom). Formulae: L0 ← L1, ..., Ln (generalized rule), where every Li is an atom or a default atom.

  10. LP framework Let N = {n1, ..., nk} be a set of constants containing a unique name for each generalized rule. Priority rule: Z ← L1, ..., Ln, where Z is a literal nr < nu or not nr < nu; nr < nu means that rule r is preferred to rule u. Def. Prioritized logic program: let P be a set of generalized rules and R a set of priority rules; then (P, R) is a prioritized logic program.
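
As a concrete (purely illustrative) reading of this syntax, the sketch below shows one possible Python representation of generalized rules, rule names, and priority rules of a prioritized logic program (P, R); all class and field names are assumptions, not part of the framework.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Lit:
    """An atom A (default=False) or a default atom 'not A' (default=True)."""
    atom: str
    default: bool = False

@dataclass(frozen=True)
class Rule:
    """A named generalized rule  name: L0 <- L1, ..., Ln."""
    name: str                 # unique name from N = {n1, ..., nk}
    head: Lit
    body: Tuple[Lit, ...]

@dataclass(frozen=True)
class PriorityRule:
    """A priority rule  Z <- L1, ..., Ln, where Z is nr < nu or not nr < nu."""
    preferred: str            # nr: name of the preferred rule
    over: str                 # nu: name of the less preferred rule
    negated: bool             # True encodes 'not nr < nu'
    body: Tuple[Lit, ...]

# A prioritized logic program is a pair (P, R).
# Example (from the magazine scenario below): fm <- not sm, not tm, named n3,
# and the priority rule n3 < n1 <- office.
r3 = Rule("n3", Lit("fm"), (Lit("sm", True), Lit("tm", True)))
pr = PriorityRule("n3", "n1", False, (Lit("office"),))
P, R = {r3}, {pr}
```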

  11. Dynamic prioritized programs Let S = {1, ..., s, ...} be a set of states (natural numbers). Def. Dynamic prioritized program: let (Pi, Ri) be a prioritized logic program for every i ∈ S; then D = {(Pi, Ri) : i ∈ S} is a dynamic prioritized program. Intuitively, the meaning of such a sequence results from updating (P1, R1) with the rules from (P2, R2), and then updating the result with ... the rules from (Pn, Rn).
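
Concretely, the dynamic prioritized program of the example on the next two slides can be kept as a state-indexed list of (Pi, Ri) pairs; a minimal sketch, with rules shown as plain strings for brevity and list index i-1 standing for state i.

```python
# Dynamic prioritized program D = {(P_i, R_i) : i in S} as a Python list;
# later pairs update earlier ones.
P1 = {"sm <- not fm, not tm",          # r1
      "tm <- not fm, not sm",          # r2
      "fm <- not sm, not tm",          # r3
      "office"}                        # r4
R1 = {"n1 < n3 <- holiday", "n2 < n3 <- holiday",
      "n3 < n1 <- office", "n3 < n2 <- office"}
P2 = {"not office",                    # r5
      "holiday"}                       # r6
R2 = set()

D = [(P1, R1), (P2, R2)]               # states S = {1, 2}
```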

  12. Example: dynamic prioritized program This example illustrates the use of contextual preferences to select preferred models. (1) Suppose a scenario where John wants to buy a magazine. He can buy either a sport magazine (sm), a travel magazine (tm) or a financial magazine (fm). P1: sm ← not fm, not tm (r1); tm ← not fm, not sm (r2); fm ← not sm, not tm (r3); office (r4). R1: n1 < n3 ← holiday; n2 < n3 ← holiday; n3 < n1 ← office; n3 < n2 ← office. When John is at the office his preferred magazine is a financial magazine.
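
To see why the claim on this slide holds, one can brute-force the stable models of P1 (ignoring the priority rules) with the standard reduct construction; a small self-contained sketch, with hypothetical helper names.

```python
from itertools import chain, combinations

# P1 as (head, positive_body, negative_body) triples.
P1 = [
    ("sm", [], ["fm", "tm"]),   # r1: sm <- not fm, not tm
    ("tm", [], ["fm", "sm"]),   # r2: tm <- not fm, not sm
    ("fm", [], ["sm", "tm"]),   # r3: fm <- not sm, not tm
    ("office", [], []),         # r4: office
]
atoms = {"sm", "tm", "fm", "office"}

def least_model(definite_rules):
    """Least model of a definite program (pairs of head and positive body)."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if set(pos) <= m and head not in m:
                m.add(head)
                changed = True
    return m

def is_stable(program, m):
    """M is a stable model iff M equals the least model of the reduct P^M."""
    reduct = [(h, pos) for h, pos, neg in program if not (set(neg) & m)]
    return least_model(reduct) == m

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

stable = [set(m) for m in powerset(atoms) if is_stable(P1, set(m))]
print(stable)  # three stable models: {sm, office}, {tm, office}, {fm, office}
# With office true, R1 gives n3 < n1 and n3 < n2, so the priority rules
# single out {fm, office} as the preferred stable model.
```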

  13. Example: dynamic prioritized program (2) Next, suppose that John goes on vacation. P2: not office (r5); holiday (r6). R2 is empty. Now John has two equally preferable alternatives: the sport magazine and the travel magazine.

  14. Queries with preferences • The ability to take into account user information makes the system able to target its answers to the user’s goals and interests. Def. Queries with preferences: let G be a goal, U a prioritized logic program, and D = {(Pi, Ri) : i ∈ S} a dynamic prioritized program. Then ?- (G, U) is a query w.r.t. D.

  15. Joinability function S+ = S ∪ {max(S) + 1}. Def. Joinability at state s: let s ∈ S+ be a state, D = {(Pi, Ri) : i ∈ S} a dynamic prioritized program and U = (PX, RX) a prioritized logic program. The joinability function ⊕s at state s is: D ⊕s U = {(P'i, R'i) : i ∈ S+}, where (P'i, R'i) = (Pi, Ri) if 1 ≤ i < s; (PX, RX) if i = s; (Pi-1, Ri-1) if s < i ≤ max(S+).
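
Operationally, the joinability function just splices the user's prioritized program into the sequence at state s; a minimal sketch, assuming the dynamic program is kept as a 1-indexed list of (P, R) pairs and using a hypothetical function name.

```python
def join_at(D, user_program, s):
    """Return D ⊕s U: insert user_program = (PX, RX) at state s, shifting the
    pairs originally at states s, s+1, ... one state up.
    D is a list [(P1, R1), ..., (Pn, Rn)]; valid states are 1..len(D)+1."""
    assert 1 <= s <= len(D) + 1          # S+ = S ∪ {max(S) + 1}
    return D[:s - 1] + [user_program] + D[s - 1:]

# Example: placing the user's program after all background updates
# (s = max(S+)) lets the user's preferences win conflicts.
D = [("P1", "R1"), ("P2", "R2")]
print(join_at(D, ("PX", "RX"), 3))   # [('P1','R1'), ('P2','R2'), ('PX','RX')]
print(join_at(D, ("PX", "RX"), 1))   # [('PX','RX'), ('P1','R1'), ('P2','R2')]
```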

  16. Example: car dealer Consider the following program that exemplifies the process of quoting prices for second-hand cars. price(Car,200) ← stock(Car,Col,T), not price(Car,250), not offer (r1); price(Car,250) ← stock(Car,Col,T), not price(Car,200), not offer (r2); prefer(orange) ← not prefer(black) (r3); prefer(black) ← not prefer(orange) (r4); stock(Car,Col,T) ← bought(Car,Col,Date), T = today - Date (r5).

  17. Example: car dealer When the company buys a car, the information about the car must be added to the stock via an update: bought(fiat,orange,d1) When the company sells a car, the company must remove the car from the stock: not bought(volvo,black,d2)

  18. Example: car dealer The selling strategy of the company can be formalized by the priority rules: n2 < n1 ← stock(Car,Col,T), T < 10; n1 < n2 ← stock(Car,Col,T), T ≥ 10, not prefer(Col); n2 < n1 ← stock(Car,Col,T), T ≥ 10, prefer(Col); n4 < n3. These are used together with the program rules: price(Car,200) ← stock(Car,Col,T), not price(Car,250), not offer (r1); price(Car,250) ← stock(Car,Col,T), not price(Car,200), not offer (r2); prefer(orange) ← not prefer(black) (r3); prefer(black) ← not prefer(orange) (r4); stock(Car,Col,T) ← bought(Car,Col,Date), T = today - Date (r5).
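
Read procedurally, and leaving the special offer aside, the strategy amounts to the conditional below; a hedged sketch with hypothetical names, showing only which price each priority rule selects.

```python
def quoted_price(days_in_stock, colour, preferred_colours=("black",)):
    """Plain-conditional reading of the selling strategy, assuming no
    'offer' is active. prefer(Col) holds for the company's preferred
    colours; by default that is black, because of n4 < n3."""
    if days_in_stock < 10:
        return 250        # n2 < n1: r2 (price 250) defeats r1
    if colour in preferred_colours:
        return 250        # n2 < n1 when prefer(Col) holds
    return 200            # n1 < n2: r1 (price 200) defeats r2

print(quoted_price(5, "orange"))    # 250: in stock less than 10 days
print(quoted_price(12, "orange"))   # 200: older stock, colour not preferred
print(quoted_price(12, "black"))    # 250: older stock, preferred colour
```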

  19. Example: car dealer Suppose that the company adopts the policy of offering a special price for cars at certain times of the year: price(Car,100) ← stock(Car,Col,T), offer (r6); not offer. Suppose an orange fiat bought on date d1 is in stock and offer does not hold. Independently of the joinability function used: ?- ( price(fiat,P), ({},{}) ) gives P = 250 if today - d1 < 10, and P = 200 if today - d1 ≥ 10.

  20. Example: car dealer ?- ( price(fiat,P), ({}, {not (n4 < n3), n3 < n4}) ) gives P = 250. • For this query it is relevant which joinability function is used: • if we use ⊕1, then we do not get the intended answer, since the user preferences are overwritten by the default preferences of the company; • on the other hand, it is not so appropriate to use ⊕max(S+), since a customer could then ask: ?- ( price(fiat,P), ({offer},{}) )

  21. Joinability function • In some applications the user preferences in U must have priority over the preferences in D. In this case, the joinability function ⊕max(S+) must be used. Example: a web-site application of a travel agency whose database D maintains information about holiday resorts and preferences among tourist locations. When a user asks a query ?- (G, U), the system must give priority to U. • Other applications need the joinability function ⊕1, which gives priority to the preferences in D.

  22. Conclusions • Novel logical framework: • update and preference information can be specified and used in query answering systems; • the declarative semantics is stable-model based; • the procedural semantics is based on a syntactic transformation (correct and complete).

  23. Future work • A preference metalanguage that compiles the pairwise preference specification. • Detect inconsistent preference specifications. • How to incorporate abduction in our framework: abductive preferences leading to conditional answers depending on accepting a preference. • How to tackle the problem arising when several users query the system together.

  24. Preferred stable models Let D = {(Pi, Ri) : i ∈ S} be a dynamic prioritized program, Q = {Pi ∪ Ri : i ∈ S}, PR = ∪i (Pi ∪ Ri), and M an interpretation. Def. Default and Rejected rules Default(PR, M) = {not A : there is no rule (A ← Body) in PR such that M ⊨ Body} Reject(s, M, Q) = {r ∈ Pi ∪ Ri : ∃ r' ∈ Pj ∪ Rj, head(r) = not head(r'), i < j ≤ s and M ⊨ body(r')}
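
The definitions of Default and Reject translate almost literally into code. A sketch under an assumed encoding (a rule is a (head, body) pair with body a tuple of literals, a literal is an atom string or ('not', atom), M is a set of atoms, and Q is a list with one set of rules per state); all function names are illustrative.

```python
def compl(lit):
    """not A for an atom A, and A for a default literal not A."""
    return lit[1] if isinstance(lit, tuple) else ("not", lit)

def holds(lit, M):
    """M |= lit for a single literal."""
    return (lit[1] not in M) if isinstance(lit, tuple) else (lit in M)

def satisfies(body, M):
    """M |= body (conjunction of literals)."""
    return all(holds(l, M) for l in body)

def default(PR, M, atoms):
    """Default(PR, M): default literals not A such that no rule in PR
    with head A has a body satisfied by M."""
    return {("not", a) for a in atoms
            if not any(h == a and satisfies(b, M) for h, b in PR)}

def reject(s, M, Q):
    """Reject(s, M, Q): rules of some state i rejected by a rule of a later
    state j <= s with complementary head and body satisfied by M."""
    rejected = set()
    for i, rules_i in enumerate(Q, start=1):
        for r in rules_i:
            head_r, _ = r
            for j in range(i + 1, s + 1):
                for head_r2, body_r2 in Q[j - 1]:
                    if head_r == compl(head_r2) and satisfies(body_r2, M):
                        rejected.add(r)
    return rejected
```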

  25. Preferred stable models Def. Unsupported and Unpreferred rules Unsup(PR, M) = {r ∈ PR : M ⊨ head(r) and M ⊭ body-(r)} Unpref(PR, M) is the least set including Unsup(PR, M) and every rule r such that: ∃ r' ∈ (PR - Unpref(PR, M)) : M ⊨ r' < r, M ⊨ body+(r') and [not head(r') ∈ body-(r) or (not head(r) ∈ body-(r') and M ⊨ body(r))]
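
Unsup reads the same way under the encoding assumed above; a short self-contained sketch (the Unpref least-set construction is not sketched here).

```python
def holds(lit, M):
    """M |= lit, where lit is an atom or ('not', atom) and M a set of atoms."""
    return (lit[1] not in M) if isinstance(lit, tuple) else (lit in M)

def unsup(PR, M):
    """Unsup(PR, M): rules whose head holds in M but whose negative body
    part body-(r) does not. A rule is (head, body) with body a tuple of
    literals; body-(r) consists of its default literals ('not', atom)."""
    return {r for r in PR
            if holds(r[0], M)
            and any(not holds(l, M) for l in r[1] if isinstance(l, tuple))}
```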

  26. Preferred stable models Def. Preferred stable models Let s be a state, D = {(Pi, Ri) : i ∈ S} a dynamic prioritized program, and M a stable model of D. M is a preferred stable model of D at state s iff M = least( [X - Unpref(X, M)] ∪ Default(PR, M) ) where: PR = ∪i≤s (Pi ∪ Ri), Q = {Pi ∪ Ri : i ∈ S}, X = PR - Reject(s, M, Q)

  27. Preferred conclusions Def. Preferred conclusions Let s ∈ S+ be a state and D = {(Pi, Ri) : i ∈ S} a dynamic prioritized program. The preferred conclusions of D with joinability function ⊕s are: (G, U) : G is included in every preferred stable model of D ⊕s U at state max(S+)
