
Learning As Search



  1. Learning As Search

  2. Overview • Inductive learning as search • Current best hypothesis search • Least commitment search

  3. Learning as search • An interesting view of learning • Helps us understand the learning process, including generalization and specialization • Concepts such as the hypothesis space come into better focus • We will see two more learning algorithms • An elegant instructional tool, with some practical applications as well

  4. The Setting • The target function is a logical sentence • In our examples: propositional logic • Task: find a hypothesis (or hypotheses) for a goal predicate consistent with all examples seen so far • Example: the goal predicate is WillWait(instance) in the restaurant example

  5. Hypotheses • Each hypothesis proposes an expression for the goal predicate • Hypothesis space: the set of all hypotheses an algorithm entertains, e.g. decision trees • Having seen no examples, any hypothesis could be correct: H1 ∨ H2 ∨ … ∨ Hn

  6. Examples • The goal predicate g() may or may not hold for each example • False positive example i for a hypothesis h(): • If g(i) is false, but h(i) is true • False negative example i for a hypothesis h(): • If g(i) is true, but h(i) is false
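The definitions on this slide can be written directly as predicates. A minimal Python sketch, assuming we model both the goal predicate g and a hypothesis h as boolean functions over instances (the helper names and the tiny restaurant-style instances are illustrative, not from the slides):

```python
def is_false_positive(g, h, instance):
    """The hypothesis h predicts true, but the goal predicate g is false."""
    return h(instance) and not g(instance)

def is_false_negative(g, h, instance):
    """The goal predicate g is true, but the hypothesis h predicts false."""
    return g(instance) and not h(instance)

# Illustrative goal and hypothesis: "WillWait iff hungry" vs. "WillWait iff raining".
g = lambda x: x["hungry"]
h = lambda x: x["raining"]

i1 = {"hungry": False, "raining": True}   # h says wait, g says don't: false positive
i2 = {"hungry": True,  "raining": False}  # h says don't, g says wait: false negative
```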

  7. Current-Best-Hypothesis Search • Local search (iterative improvement) • Start with a hypothesis and keep it consistent with examples seen: • If a false positive is seen, then specialize • If a false negative is seen, then generalize • (try to keep the hypothesis consistent with the previous examples)
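The loop above can be sketched in a toy hypothesis space. Assume hypotheses are pure conjunctions of boolean attributes (a frozenset of attribute names that must all hold): generalizing drops a conjunct on a false negative, specializing adds one on a false positive, and each revision is checked against all previous examples. This is only an illustrative sketch of the idea, without the backtracking a full implementation would need:

```python
def predicts(hyp, ex):
    """A conjunction predicts true iff every required attribute holds."""
    return all(ex[a] for a in hyp)

def consistent(hyp, seen):
    return all(predicts(hyp, e) == label for e, label in seen)

def current_best_learning(examples, attributes):
    hyp = frozenset(attributes)          # start with a maximally specific hypothesis
    seen = []
    for ex, label in examples:
        seen.append((ex, label))
        if predicts(hyp, ex) == label:
            continue                     # already consistent with this example
        if label:                        # false negative: generalize (drop a conjunct)
            candidates = [hyp - {a} for a in hyp]
        else:                            # false positive: specialize (add a conjunct)
            candidates = [hyp | {a} for a in attributes if a not in hyp]
        # keep only a revision consistent with everything seen so far
        for h2 in candidates:
            if consistent(h2, seen):
                hyp = h2
                break
        else:
            return None                  # a full implementation would backtrack here
    return hyp

result = current_best_learning(
    [({"hungry": True,  "raining": True},  True),
     ({"hungry": True,  "raining": False}, True),
     ({"hungry": False, "raining": True},  False)],
    ["hungry", "raining"])
```

On these examples the search converges to the conjunction containing only "hungry".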

  8. Specialization and Generalization • Specialization (Restriction): Add a conjunction or drop a disjunction • Generalization (Relaxation): Add a disjunction or drop a conjunction • There could be other changes depending on the representation language
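The add/drop operations are easiest to see on hypotheses in disjunctive normal form. A small illustrative sketch, assuming a DNF hypothesis is a list of conjunctions (each a frozenset of required boolean attributes); the function names are invented for the example:

```python
def predicts(dnf, ex):
    """A DNF hypothesis predicts true iff some disjunct is satisfied."""
    return any(all(ex[a] for a in conj) for conj in dnf)

def generalize(dnf, conj):
    """Relaxation: adding a disjunct makes the hypothesis true more often."""
    return dnf + [conj]

def specialize(dnf, i, attr):
    """Restriction: adding a conjunct to one disjunct makes it true less often."""
    return dnf[:i] + [dnf[i] | {attr}] + dnf[i + 1:]

h = [frozenset({"hungry"})]                       # WillWait iff hungry
h_gen = generalize(h, frozenset({"raining"}))     # ... or raining
h_spec = specialize(h, 0, "raining")              # ... and raining
```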

  9. Discussion • The choices of initial hypothesis and of specialization/generalization steps are nondeterministic (use heuristics) • Problems: • Which heuristics to use • Can be inefficient (must check consistency with all examples seen so far), and may need to backtrack or restart • Handling noise

  10. Least-Commitment Search • Also called the version space (v.s.) algorithm • Keep the entire version space = the set of all hypotheses consistent with the examples seen so far • As each example is seen, drop the inconsistent hypotheses (shrink the v.s.) • Return all the consistent ones • But there can be an enormous number of hypotheses…
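The naive version of this shrinking step is a one-line filter. A sketch over an illustrative toy space (one hypothesis per attribute, "WillWait iff that attribute holds"); the names are assumptions for the example, and the point of the next slide is precisely that real spaces are too large for this explicit representation:

```python
def shrink(version_space, ex, label):
    """Drop every hypothesis that disagrees with the labelled example."""
    return [h for h in version_space if h(ex) == label]

# Toy explicit version space: one hypothesis per attribute.
attrs = ("hungry", "raining", "weekend")
vs = [lambda x, a=a: x[a] for a in attrs]   # default arg binds each attribute

vs = shrink(vs, {"hungry": True, "raining": False, "weekend": True}, True)
# the "raining" hypothesis predicted False on a positive example, so it is dropped
```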

  11. Represent the Frontier • Solution: don't maintain the version space explicitly; keep only its boundary • There is a partial ordering on the hypothesis space under the specializes/generalizes relationship • Maintain only the most general and the most specific hypotheses of the version space

  12. Version Space Maintenance • S: set of most specific hypotheses in v.s. (initially False) • G: set of most general hypotheses in v.s. (initially True) • For each new example and each h in S or G: • if it is a false negative for h in S, generalize h • if it is a false positive for h in S, throw h out of S! • Symmetrically for h in G • Generalization or specialization can create multiple hypotheses (children)
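The S/G updates above can be sketched for one simple hypothesis space. This is an illustrative candidate-elimination sketch, assuming conjunctive hypotheses over boolean attributes written Mitchell-style as a mapping from each attribute to True, False, or '?' (don't care); in this restricted space S can be kept as a single hypothesis, and a full version would also prune non-maximal members of G:

```python
def matches(h, ex):
    return all(v == '?' or ex[a] == v for a, v in h.items())

def more_general(g, s):
    """Every attribute g pins is pinned to the same value in s."""
    return all(v == '?' or s[a] == v for a, v in g.items())

def candidate_elimination(examples, attributes):
    S = None                               # most specific; None plays the role of "False"
    G = [{a: '?' for a in attributes}]     # most general ("True")
    for ex, label in examples:
        if label:                          # positive example
            G = [g for g in G if matches(g, ex)]   # false negatives leave G
            if S is None:
                S = {a: ex[a] for a in attributes}
            else:                          # generalize S minimally (drop conjuncts)
                S = {a: v if v == ex[a] else '?' for a, v in S.items()}
        else:                              # negative example
            if S is not None and matches(S, ex):
                return None, []            # false positive for S: S is thrown out
            new_G = []
            for g in G:
                if not matches(g, ex):
                    new_G.append(g)
                    continue
                for a in attributes:       # minimally specialize g (pin one attribute)
                    if g[a] == '?':
                        g2 = dict(g)
                        g2[a] = not ex[a]
                        if S is None or more_general(g2, S):
                            new_G.append(g2)
            G = new_G
    return S, G
```

On the three restaurant-style examples used earlier, S and G converge to the single hypothesis "hungry = True".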

  13. Stopping Condition • What if no hypotheses are left, but examples remain? • If the examples run out and exactly one hypothesis is left, great! • What if multiple hypotheses remain?

  14. Discussion • Advantages over Current-Best-Hypothesis Search? • Disadvantages (in general)? • The Current-Best-Hypothesis and Version-Space algorithms are examples of incremental, or online, learning algorithms
