
Learning by Answer Sets

Learning by Answer Sets. Chiaki Sakama, Wakayama University, Japan. Presented at the AAAI Spring Symposium on Answer Set Programming, March 2001. The talk contrasts nonmonotonic LP (NMLP) with inductive LP (ILP): the two fields have different goals and complement each other.


Presentation Transcript


  1. Learning by Answer Sets. Chiaki Sakama, Wakayama University, Japan. Presented at the AAAI Spring Symposium on Answer Set Programming, March 2001.

  2. Nonmonotonic LP (NMLP) vs. Inductive LP (ILP)

  3. Purpose of Research NMLP and ILP have different goals and complement each other. Combine the techniques of the two fields in the context of nonmonotonic ILP.

  4. Problem Setting • A background KB is a function-free and categorical extended logic program (ELP) P, i.e., P has exactly one answer set. • Given a positive example L as a ground literal s.t. P ⊭ L, compute a rule R satisfying P ∪ {R} ⊨ L. • Given a negative example L as a ground literal s.t. P ⊨ L, compute a rule R satisfying P ∪ {R} ⊭ L.
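
These two entailment checks can be tried directly with an answer set solver. The sketch below is only an illustration: it assumes the clingo Python package (any ASP system would do) and uses the bird/penguin program that appears in the example slides; since P is categorical, P ⊨ L reduces to membership of L in the single answer set.

    import clingo  # assumption: the clingo Python package is available

    def answer_set(program):
        """Return the unique answer set of a categorical program as a set of strings."""
        ctl = clingo.Control(["0"])                 # "0" = enumerate all answer sets
        ctl.add("base", [], program)
        ctl.ground([("base", [])])
        models = []
        ctl.solve(on_model=lambda m: models.append({str(a) for a in m.symbols(atoms=True)}))
        assert len(models) == 1, "the background program is assumed to be categorical"
        return models[0]

    P = "bird(X) :- penguin(X).  bird(tweety).  penguin(polly)."
    R = "flies(X) :- bird(X), not penguin(X)."      # the rule learned on slide 10

    print("flies(tweety)" in answer_set(P))             # False: P does not entail L
    print("flies(tweety)" in answer_set(P + " " + R))   # True: P u {R} entails L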

  5. Definitions • A literal L or an NAF formula not L is called an LP-literal. • Let Lit be the set of all ground literals and S ⊆ Lit; then define S+ = S ∪ { not L | L ∈ Lit \ S }. • For an LP-literal K, |K| = K if K is a literal; |K| = L if K = not L.
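
The set S+ and the mapping |K| translate almost directly into code. The sketch below is a hedged illustration; representing a literal as a plain string and an NAF formula not L as the pair ("not", L) is an assumption of mine, not the paper's notation.

    def s_plus(S, Lit):
        """Compute S+ = S with { not L : L in Lit minus S } added."""
        return set(S) | {("not", L) for L in set(Lit) - set(S)}

    def strip_naf(K):
        """|K|: return K itself for a literal, and L for an NAF formula not L."""
        return K[1] if isinstance(K, tuple) and K[0] == "not" else K

    S = {"bird(tweety)", "bird(polly)", "penguin(polly)"}
    Lit = S | {"penguin(tweety)", "flies(tweety)", "flies(polly)"}
    print(s_plus(S, Lit))                        # adds not penguin(tweety), not flies(tweety), ...
    print(strip_naf(("not", "flies(tweety)")))   # flies(tweety)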

  6. Definitions Given two ground LP-literals L1 and L2, • L1~L2 if L1 and L2 have the same predicate and the same constants. • L1 in a ground rule R is relevant to L2 if (i) L1~L2 or (ii) L1 shares a constant with L3 (in R) which is relevant to L2. • L1 is involved in a program P if |L1| appears in the ground instance of P.
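
The relevance relation is a closure over shared constants. Below is a hedged sketch under the same string representation as before; seeding the closure with the constants of the example plays the role of the example being relevant to itself via L1 ~ L2, and NAF formulas not L would be handled through |not L| = L.

    import re

    def parse(lit):
        """Split a ground literal 'pred(c1,...,cn)' or '-pred(...)' into (pred, constants)."""
        m = re.match(r"-?(\w+)\((.*)\)", lit)
        return m.group(1), {c.strip() for c in m.group(2).split(",")}

    def relevant_to(example, candidates):
        """Collect the candidate literals relevant to the example: start from the
        example's constants and keep adding literals that share a constant with
        a literal already in the closure."""
        _, consts = parse(example)
        relevant, changed = set(), True
        while changed:
            changed = False
            for L in candidates - relevant:
                _, cs = parse(L)
                if cs & consts:
                    relevant.add(L)
                    consts |= cs
                    changed = True
        return relevant

    print(relevant_to("flies(tweety)", {"bird(tweety)", "bird(polly)", "penguin(polly)"}))
    # {'bird(tweety)'}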

  7. Computing Hypotheses using Answer Sets Let S be the answer set of P. Then, P ∪ {R} ⊨ L and P ⊭ L imply S ⊭ R. Consider the integrity constraint ← Γ, where Γ ⊆ S+ and every element in Γ is relevant to the example L and is involved in P ∪ {L}.

  8. Computing Hypotheses using Answer Sets (cont.) As S does not satisfy the constraint, S ⊭ ← Γ. By S ⊭ L, not L is in S+ and also in Γ. Shifting L to the head, we get L ← Γ' where Γ' = Γ \ { not L }. Finally, construct a rule R* s.t. R*θ = (L ← Γ') for some substitution θ.
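
At the ground level, these two slides amount to checking that S falsifies the constraint and then moving not L into the head. A minimal sketch, again under the assumed pair representation for NAF formulas:

    def violates(s_plus_set, gamma):
        """S falsifies the constraint '<- Gamma' iff S+ contains every element of Gamma."""
        return set(gamma) <= set(s_plus_set)

    def shift_to_head(gamma, example):
        """From '<- Gamma' with not L in Gamma, build the ground rule L <- Gamma'."""
        assert ("not", example) in gamma, "not L must be in Gamma, since P does not entail L"
        return example, set(gamma) - {("not", example)}

    # e.g. shift_to_head({"bird(t)", ("not", "peng(t)"), ("not", "flies(t)")}, "flies(t)")
    #      returns ("flies(t)", {"bird(t)", ("not", "peng(t)")})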

  9. Example P: bird(x) ← penguin(x). bird(tweety). penguin(polly). L: flies(tweety). Initially, P ⊭ flies(tweety). S+ = { bird(t), bird(p), peng(p), not peng(t), not flies(t), not flies(p), not ¬bird(_), not ¬peng(_), not ¬flies(_) } (t, p, peng abbreviate tweety, polly, penguin; _ means t or p).

  10. Example (cont.) Picking up the LP-literals which are relevant to L and are involved in P ∪ {L} gives the constraint: ← bird(t), not peng(t), not flies(t). Shifting L = flies(t) to the head: flies(t) ← bird(t), not peng(t). Replacing tweety by x: R* = flies(x) ← bird(x), not peng(x).
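
The three steps of this slide can be replayed in a few lines. This is only a sketch: the textual rule syntax and the replace-based inverse substitution are simplifications that work because the example contains a single relevant constant.

    gamma = ["bird(tweety)", ("not", "penguin(tweety)"), ("not", "flies(tweety)")]
    example = "flies(tweety)"

    # Shift L to the head: the body is Gamma' = Gamma minus { not L }.
    body = [K for K in gamma if K != ("not", example)]

    render = lambda K: "not " + K[1] if isinstance(K, tuple) else K
    ground_rule = example + " <- " + ", ".join(render(K) for K in body) + "."

    # Inverse of the substitution theta = {x/tweety}: replace the constant by a variable.
    r_star = ground_rule.replace("tweety", "x")
    print(ground_rule)   # flies(tweety) <- bird(tweety), not penguin(tweety).
    print(r_star)        # flies(x) <- bird(x), not penguin(x).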

  11. Remarks • With an additional condition on L, R* is shown to be correct. • When P is a function-free stratified program, R* is efficiently constructed from S+. • Existing procedures for ASP are used for computing R*.

  12. Further Issues In the paper, • a similar procedure for learning from negative examples is provided; • applications to learning from multiple examples and learning from non-categorical programs are presented.
