
Markov Logic


Presentation Transcript


  1. Markov Logic Pedro Domingos Dept. of Computer Science & Eng. University of Washington

  2. Desiderata A language for cognitive modeling should: • Handle uncertainty (noise, incomplete information, ambiguity) • Handle complexity (many objects, relations among them, IsA and IsPart hierarchies, etc.)

  3. Solution • Probability handles uncertainty • Logic handles complexity What is the simplest way to combine the two?

  4. Markov Logic • Assign weights to logical formulas • Treat formulas as templates for features of Markov networks

  5. Overview • Representation • Inference • Learning • Applications

  6. Propositional Logic • Atoms: Symbols representing propositions • Logical connectives: ¬, ∧, ∨, etc. • Knowledge base: Set of formulas • World: Truth assignment to all atoms • Every KB can be converted to CNF • CNF: Conjunction of clauses • Clause: Disjunction of literals • Literal: Atom or its negation • Entailment: Does KB entail query?
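
To make these definitions concrete, here is a minimal Python sketch (not from the slides) that checks entailment by enumerating worlds; encoding clauses as lists of signed integers is an assumption of this sketch.

    from itertools import product

    # A CNF is a list of clauses; a clause is a list of literals.
    # A literal is a signed integer: +i means atom i is true, -i means it is false.
    kb = [[-1, 2], [1]]          # (not A or B) and A, with A = 1, B = 2
    query = [[2]]                # B

    def satisfies(world, cnf):
        # A world maps each atom to True/False; a CNF holds if every clause has a true literal.
        return all(any(world[abs(l)] == (l > 0) for l in clause) for clause in cnf)

    def entails(kb, query, atoms):
        # KB entails query iff every world that satisfies KB also satisfies query.
        for values in product([False, True], repeat=len(atoms)):
            world = dict(zip(atoms, values))
            if satisfies(world, kb) and not satisfies(world, query):
                return False
        return True

    print(entails(kb, query, atoms=[1, 2]))   # True: {A => B, A} entails B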

  7. First-Order Logic • Atom: Predicate(Variables, Constants), e.g., Friends(Anna, x) • Ground atom: All arguments are constants • Quantifiers: ∀ (universal), ∃ (existential) • This tutorial: Finite, Herbrand interpretations

  8. Markov Networks • Undirected graphical models, e.g., over the variables Smoking, Cancer, Asthma, Cough • Potential functions defined over cliques: P(x) = (1/Z) ∏_c Φ_c(x_c), with Z = Σ_x ∏_c Φ_c(x_c)

  9. Markov Networks • Undirected graphical models (same Smoking, Cancer, Asthma, Cough example) • Log-linear model: P(x) = (1/Z) exp(Σ_i w_i f_i(x)), where w_i is the weight of feature i and f_i(x) is feature i
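
The log-linear form above can be spelled out in a few lines of Python; the two variables (Smoking, Cancer), the single feature, and its weight below are illustrative assumptions, not values from the presentation.

    import math
    from itertools import product

    # Illustrative log-linear model over two binary variables (Smoking, Cancer)
    # with a single assumed feature: f1 fires when "Smoking => Cancer" holds.
    w1 = 1.5                                   # assumed weight of feature 1

    def f1(smoking, cancer):
        return 1.0 if (not smoking) or cancer else 0.0

    def unnormalized(smoking, cancer):
        # exp(sum_i w_i f_i(x)) with a single feature
        return math.exp(w1 * f1(smoking, cancer))

    # Partition function Z sums the unnormalized weights of all worlds.
    Z = sum(unnormalized(s, c) for s, c in product([False, True], repeat=2))

    def prob(smoking, cancer):
        # P(x) = (1/Z) exp(sum_i w_i f_i(x))
        return unnormalized(smoking, cancer) / Z

    print(prob(True, True), prob(True, False))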

  10. Probabilistic Knowledge Bases PKB = Set of formulas and their probabilities + Consistency + Maximum entropy = Set of formulas and their weights = Set of formulas and their potentials Φ_i (potential is 1 if the formula is true, Φ_i if the formula is false)

  11. Markov Logic A Markov Logic Network (MLN) is a set of pairs (F, w) where F is a formula in first-order logic and w is a real number. An MLN defines a Markov network with: • One node for each grounding of each predicate in the MLN • One feature for each grounding of each formula F in the MLN, with the corresponding weight w
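
A rough sketch of how an MLN formula acts as a template for features: grounding an assumed formula Smokes(x) ⇒ Cancer(x) with weight 1.5 over two constants, and weighting each world by exp(w × number of true groundings). The formula, weight, and constants are assumptions of this sketch.

    import math
    from itertools import product

    # Assumed MLN: one formula "Smokes(x) => Cancer(x)" with weight 1.5,
    # grounded over the constants Anna and Bob.
    constants = ["Anna", "Bob"]
    w = 1.5

    def n_true_groundings(world):
        # world maps ground atoms like ("Smokes", "Anna") to True/False.
        return sum(1 for x in constants
                   if (not world[("Smokes", x)]) or world[("Cancer", x)])

    def weight(world):
        # Unnormalized weight of a world: exp(w * number of true groundings).
        return math.exp(w * n_true_groundings(world))

    atoms = [(p, x) for p in ("Smokes", "Cancer") for x in constants]
    Z = sum(weight(dict(zip(atoms, vals)))
            for vals in product([False, True], repeat=len(atoms)))

    some_world = {("Smokes", "Anna"): True, ("Cancer", "Anna"): False,
                  ("Smokes", "Bob"): False, ("Cancer", "Bob"): False}
    print(weight(some_world) / Z)   # probability of this particular world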

  12. Relation to Statistical Models Special cases: Markov networks, Markov random fields, Bayesian networks, log-linear models, exponential models, max. entropy models, Gibbs distributions, Boltzmann machines, logistic regression, hidden Markov models, conditional random fields (all obtained by making all predicates zero-arity). Markov logic allows objects to be interdependent (non-i.i.d.) and facilitates composition.

  13. Relation to First-Order Logic • Infinite weights → first-order logic • Satisfiable KB with positive weights → satisfying assignments = modes of the distribution • Markov logic allows contradictions between formulas

  14. Example

  15. Example

  16. Example

  17. Example

  18. Example

  19. Overview • Representation • Inference • Learning • Applications

  20. Theorem Proving
      TP(KB, Query)
        KB_Q ← KB ∪ {¬Query}
        return ¬SAT(CNF(KB_Q))

  21. Satisfiability (DPLL)
      SAT(CNF)
        if CNF is empty return True
        if CNF contains an empty clause return False
        choose an atom A
        return SAT(CNF(A)) ∨ SAT(CNF(¬A))
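
The DPLL recursion on this slide, made runnable in Python under the same signed-integer clause encoding as the earlier sketch; CNF(A) is implemented here by conditioning (simplifying) the clause set on the chosen literal.

    def condition(cnf, literal):
        # Simplify the CNF under the assumption that `literal` is true:
        # drop satisfied clauses and remove the complementary literal elsewhere.
        out = []
        for clause in cnf:
            if literal in clause:
                continue
            out.append([l for l in clause if l != -literal])
        return out

    def sat(cnf):
        # DPLL without unit propagation or pure-literal elimination, mirroring the slide.
        if not cnf:
            return True                          # no clauses left: satisfiable
        if any(len(clause) == 0 for clause in cnf):
            return False                         # an empty clause cannot be satisfied
        atom = abs(cnf[0][0])                    # choose an atom
        return sat(condition(cnf, atom)) or sat(condition(cnf, -atom))

    print(sat([[-1, 2], [1], [-2]]))             # False: {A => B, A, not B} is unsatisfiable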

  22. First-Order Theorem Proving • Propositionalization: 1. Form all possible ground atoms 2. Apply propositional theorem prover • Lifted Inference: Resolution • Resolve pairs of clauses until the empty clause is derived • Unify literals by substitution
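
A small unification routine in the spirit of the resolution step above; the representation (atoms as tuples, lowercase strings as variables, capitalized strings as constants, no function symbols) is an assumption of this sketch.

    def is_var(t):
        # Convention for this sketch: lowercase names are variables, capitalized are constants.
        return isinstance(t, str) and t[0].islower()

    def unify(a, b, subst=None):
        # Return a most general unifier of two atoms (pred, arg1, ..., argn), or None.
        # Handles variables and constants only (no function symbols).
        if subst is None:
            subst = {}
        if a[0] != b[0] or len(a) != len(b):
            return None                                 # different predicate or arity
        for s, t in zip(a[1:], b[1:]):
            s, t = subst.get(s, s), subst.get(t, t)     # apply current bindings
            if s == t:
                continue
            if is_var(s):
                subst[s] = t
            elif is_var(t):
                subst[t] = s
            else:
                return None                             # two distinct constants clash
            subst = {k: subst.get(v, v) for k, v in subst.items()}  # propagate binding
        return subst

    print(unify(("Friends", "x", "Anna"), ("Friends", "Bob", "y")))  # {'x': 'Bob', 'y': 'Anna'}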

  23. Probabilistic Theorem Proving Given: Probabilistic knowledge base K, Query formula Q. Output: P(Q | K)

  24. Weighted Model Counting • ModelCount(CNF) = # worlds that satisfy CNF • Assign a weight to each literal • Weight(world) = ∏ weights(true literals) • Weighted model counting: Given CNF C and literal weights W, output Σ weights(worlds that satisfy C) • PTP is reducible to lifted WMC
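
The WMC definition can be checked by brute force; the sketch below enumerates worlds and sums the products of true-literal weights. The example CNF and literal weights are illustrative.

    from itertools import product

    def brute_force_wmc(cnf, weights, atoms):
        # Sum, over all worlds that satisfy the CNF, of the product of the weights
        # of the literals that are true in that world.
        total = 0.0
        for values in product([False, True], repeat=len(atoms)):
            world = dict(zip(atoms, values))
            if all(any(world[abs(l)] == (l > 0) for l in clause) for clause in cnf):
                w = 1.0
                for a in atoms:
                    w *= weights[a] if world[a] else weights[-a]
                total += w
        return total

    cnf = [[-1, 2]]                                   # a single clause: A => B
    weights = {1: 1.0, -1: 1.0, 2: 1.0, -2: 0.5}      # illustrative literal weights
    print(brute_force_wmc(cnf, weights, atoms=[1, 2]))   # 2.5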

  25. Example

  26. Example

  27. Example

  28. Example

  29. Example

  30. Inference Problems

  31. Propositional Case • All conditional probabilities are ratios of partition functions: P(Query | PKB) = Z(PKB ∪ {(Query, 0)}) / Z(PKB) • All partition functions can be computed by weighted model counting

  32. Conversion to CNF + Weights
      WCNF(PKB)
        for all (F_i, Φ_i) ∈ PKB s.t. Φ_i > 0 do
          PKB ← PKB ∪ {(F_i ⇔ A_i, 0)} \ {(F_i, Φ_i)}
        CNF ← CNF(PKB)
        for all ¬A_i literals do w_¬Ai ← Φ_i
        for all other literals L do w_L ← 1
        return (CNF, weights)

  33. Probabilistic Theorem Proving
      PTP(PKB, Query)
        PKB_Q ← PKB ∪ {(Query, 0)}
        return WMC(WCNF(PKB_Q)) / WMC(WCNF(PKB))

  34. Probabilistic Theorem Proving
      PTP(PKB, Query)
        PKB_Q ← PKB ∪ {(Query, 0)}
        return WMC(WCNF(PKB_Q)) / WMC(WCNF(PKB))
      Compare:
      TP(KB, Query)
        KB_Q ← KB ∪ {¬Query}
        return ¬SAT(CNF(KB_Q))
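
A toy propositional illustration of the PTP ratio, computing P(Query | PKB) = Z(PKB ∪ {(Query, 0)}) / Z(PKB) by enumerating worlds; the formulas and potential values are made up for this sketch.

    import math
    from itertools import product

    # A tiny propositional PKB: each entry is (formula as a Python predicate over a
    # world, potential used when the formula is false; potential is 1 when true).
    pkb = [(lambda w: (not w["A"]) or w["B"], 0.3),   # soft "A => B"
           (lambda w: w["A"], 0.5)]                   # soft "A"

    def world_weight(w):
        return math.prod(1.0 if f(w) else phi for f, phi in pkb)

    def prob(query):
        # P(Query | PKB) = Z(PKB with Query as a hard constraint) / Z(PKB)
        worlds = [dict(zip("AB", vals)) for vals in product([False, True], repeat=2)]
        z_all = sum(world_weight(w) for w in worlds)
        z_query = sum(world_weight(w) for w in worlds if query(w))
        return z_query / z_all

    print(prob(lambda w: w["B"]))   # probability that B holds given the PKB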

  35. Weighted Model Counting (Base Case)
      WMC(CNF, weights)
        if all clauses in CNF are satisfied
          return ∏_{A unassigned} (w_A + w_¬A)
        if CNF has an empty unsatisfied clause return 0

  36. Weighted Model Counting
      WMC(CNF, weights)
        if all clauses in CNF are satisfied
          return ∏_{A unassigned} (w_A + w_¬A)
        if CNF has an empty unsatisfied clause return 0
        if CNF can be partitioned into CNFs C_1, …, C_k sharing no atoms
          return ∏_k WMC(C_k, weights)                    (Decomp. Step)

  37. Weighted Model Counting
      WMC(CNF, weights)
        if all clauses in CNF are satisfied
          return ∏_{A unassigned} (w_A + w_¬A)
        if CNF has an empty unsatisfied clause return 0
        if CNF can be partitioned into CNFs C_1, …, C_k sharing no atoms
          return ∏_k WMC(C_k, weights)                    (Decomp. Step)
        choose an atom A
        return w_A · WMC(CNF | A, weights) + w_¬A · WMC(CNF | ¬A, weights)   (Splitting Step)
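
The recursive WMC procedure from slides 35-37 as runnable Python: base cases, a decomposition step over components that share no atoms, and a splitting step on a chosen atom. The helper functions and the small test CNF are assumptions of this sketch.

    def condition(cnf, literal):
        # Simplify the CNF under the assumption that `literal` is true.
        return [[l for l in c if l != -literal] for c in cnf if literal not in c]

    def connected_components(cnf):
        # Group clauses into maximal sets that share no atoms.
        comps = []
        for clause in cnf:
            clause_atoms = {abs(l) for l in clause}
            merged, rest = [clause], []
            for comp_atoms, comp_clauses in comps:
                if comp_atoms & clause_atoms:
                    clause_atoms |= comp_atoms
                    merged += comp_clauses
                else:
                    rest.append((comp_atoms, comp_clauses))
            comps = rest + [(clause_atoms, merged)]
        return [clauses for _, clauses in comps]

    def wmc(cnf, weights, atoms):
        # Base cases: 0 for an empty unsatisfied clause; if every clause is satisfied,
        # each still-unassigned atom A contributes a factor (w_A + w_notA).
        if any(len(c) == 0 for c in cnf):
            return 0.0
        if not cnf:
            out = 1.0
            for a in atoms:
                out *= weights[a] + weights[-a]
            return out
        # Decomposition step: WMC factorizes over components sharing no atoms.
        comps = connected_components(cnf)
        if len(comps) > 1:
            out, covered = 1.0, set()
            for comp in comps:
                comp_atoms = sorted({abs(l) for c in comp for l in c})
                covered.update(comp_atoms)
                out *= wmc(comp, weights, comp_atoms)
            for a in atoms:                      # atoms appearing in no clause are free
                if a not in covered:
                    out *= weights[a] + weights[-a]
            return out
        # Splitting step: condition on an atom and weight the two branches.
        a = abs(cnf[0][0])
        rest = [x for x in atoms if x != a]
        return (weights[a] * wmc(condition(cnf, a), weights, rest) +
                weights[-a] * wmc(condition(cnf, -a), weights, rest))

    weights = {1: 1.0, -1: 1.0, 2: 1.0, -2: 0.5}
    print(wmc([[-1, 2]], weights, atoms=[1, 2]))   # 2.5, matching the brute-force count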

  38. First-Order Case • PTP schema remains the same • Conversion of PKB to hard CNF and weights: The new atom in F_i ⇔ A_i is now Predicate_i(variables in F_i, constants in F_i) • New argument in WMC: Set of substitution constraints of the form x = A, x ≠ A, x = y, x ≠ y • Lift each step of WMC

  39. Lifted Weighted Model Counting (Base Case)
      LWMC(CNF, substs, weights)
        if all clauses in CNF are satisfied
          return ∏_A (w_A + w_¬A)^{# groundings of A consistent with substs}
        if CNF has an empty unsatisfied clause return 0

  40. Lifted Weighted Model Counting
      LWMC(CNF, substs, weights)
        if all clauses in CNF are satisfied
          return ∏_A (w_A + w_¬A)^{# groundings of A consistent with substs}
        if CNF has an empty unsatisfied clause return 0
        if there exists a lifted decomposition of CNF into m identical, independent CNFs C_1, …, C_m
          return [LWMC(C_1, substs, weights)]^m            (Decomp. Step)

  41. Lifted Weighted Model Counting
      LWMC(CNF, substs, weights)
        if all clauses in CNF are satisfied
          return ∏_A (w_A + w_¬A)^{# groundings of A consistent with substs}
        if CNF has an empty unsatisfied clause return 0
        if there exists a lifted decomposition of CNF into m identical, independent CNFs C_1, …, C_m
          return [LWMC(C_1, substs, weights)]^m            (Decomp. Step)
        choose an atom A with n interchangeable groundings
        return Σ_{i=0..n} C(n, i) · w_A^i · w_¬A^(n−i) · LWMC(CNF | i groundings of A true, substs, weights)   (Splitting Step)

  42. Extensions • Unit propagation, etc. • Caching / Memoization • Knowledge-based model construction

  43. Approximate Inference
      WMC(CNF, weights)
        if all clauses in CNF are satisfied
          return ∏_{A unassigned} (w_A + w_¬A)
        if CNF has an empty unsatisfied clause return 0
        if CNF can be partitioned into CNFs C_1, …, C_k sharing no atoms
          return ∏_k WMC(C_k, weights)
        choose an atom A
        return (w_A / p) · WMC(CNF | A, weights) with probability p, etc.   (Sampled Splitting Step)
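
One simple way to approximate WMC by sampling, shown below as a flat Monte Carlo estimator rather than the recursive sampler sketched on the slide: sample each atom in proportion to its literal weights and rescale the fraction of satisfying samples. The CNF and weights are the same illustrative values as before.

    import random

    def sampled_wmc(cnf, weights, atoms, n_samples=100000, seed=0):
        # Sample each atom independently in proportion to its literal weights; then
        # WMC = (product over atoms of (w_A + w_notA)) * P(sampled world satisfies CNF).
        rng = random.Random(seed)
        total = 1.0
        for a in atoms:
            total *= weights[a] + weights[-a]
        hits = 0
        for _ in range(n_samples):
            world = {a: rng.random() < weights[a] / (weights[a] + weights[-a]) for a in atoms}
            if all(any(world[abs(l)] == (l > 0) for l in c) for c in cnf):
                hits += 1
        return total * hits / n_samples

    weights = {1: 1.0, -1: 1.0, 2: 1.0, -2: 0.5}
    print(sampled_wmc([[-1, 2]], weights, atoms=[1, 2]))   # close to the exact value 2.5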

  44. MPE Inference • Replace sums by maxes • Use branch-and-bound for efficiency • Do traceback
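
A brute-force MPE sketch for the toy PKB representation used earlier: replace the sum over worlds by a max and return the single most probable world. The formulas and potentials remain illustrative assumptions.

    import math
    from itertools import product

    pkb = [(lambda w: (not w["A"]) or w["B"], 0.3),
           (lambda w: w["A"], 0.5)]

    def world_weight(w):
        return math.prod(1.0 if f(w) else phi for f, phi in pkb)

    # MPE: replace the sum over worlds by a max over worlds.
    worlds = [dict(zip("AB", vals)) for vals in product([False, True], repeat=2)]
    print(max(worlds, key=world_weight))   # most probable world, here {'A': True, 'B': True}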

  45. Overview • Representation • Inference • Learning • Applications

  46. Learning • Data is a relational database • Closed world assumption (if not: EM) • Learning parameters (weights): generatively or discriminatively • Learning structure (formulas)

  47. Generative Weight Learning • Maximize likelihood • Use gradient ascent or L-BFGS • No local maxima • Requires inference at each step (slow!) • Gradient: ∂/∂w_i log P_w(x) = n_i(x) − E_w[n_i(x)], where n_i(x) is the no. of true groundings of clause i in the data and E_w[n_i(x)] is the expected no. of true groundings according to the model
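
A minimal sketch of generative weight learning by gradient ascent on an exact, fully enumerable toy model; the two features, the tiny dataset, and the learning rate are assumptions of this sketch, and real MLNs would need approximate inference for the expected counts.

    import math
    from itertools import product

    # Illustrative features over two binary variables and a tiny assumed dataset.
    features = [lambda a, b: 1.0 if (not a) or b else 0.0,    # "A => B"
                lambda a, b: 1.0 if a else 0.0]               # "A"
    data = [(True, True), (True, False), (False, False)]       # observed worlds
    w = [0.0, 0.0]

    def counts(x):
        return [f(*x) for f in features]

    data_counts = [sum(counts(x)[i] for x in data) / len(data) for i in range(len(features))]

    def expected_counts(w):
        # Exact expectations by enumerating all worlds (feasible only for toy models).
        worlds = list(product([False, True], repeat=2))
        wt = [math.exp(sum(wi * ci for wi, ci in zip(w, counts(x)))) for x in worlds]
        Z = sum(wt)
        return [sum(wt[j] * counts(x)[i] for j, x in enumerate(worlds)) / Z
                for i in range(len(features))]

    # Gradient of the average log-likelihood w.r.t. w_i: data_counts[i] - expected_counts[i].
    eta = 0.5
    for _ in range(500):
        e = expected_counts(w)
        w = [wi + eta * (di - ei) for wi, di, ei in zip(w, data_counts, e)]
    print(w)   # learned weights: expected feature counts now match the data counts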

  48. Pseudo-Likelihood PL_w(x) = ∏_i P_w(x_i | neighbors of x_i in the data) [Besag, 1975] • Does not require inference at each step • Consistent estimator • Widely used in vision, spatial statistics, etc. • But PL parameters may not work well for long inference chains
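
A sketch of the pseudo-likelihood objective on the same toy model: each variable's conditional probability needs only a comparison between the observed world and the world with that variable flipped, so no partition function is required. The weights and the observed world are illustrative.

    import math

    # Same two illustrative features as above; weights and the observed world are assumed.
    features = [lambda a, b: 1.0 if (not a) or b else 0.0,    # "A => B"
                lambda a, b: 1.0 if a else 0.0]               # "A"
    w = [1.0, 0.5]
    x = [True, False]                                         # observed world (A, B)

    def score(world):
        return sum(wi * f(*world) for wi, f in zip(w, features))

    def log_pseudo_likelihood(x):
        total = 0.0
        for i in range(len(x)):
            flipped = list(x)
            flipped[i] = not flipped[i]
            num = math.exp(score(x))
            # P(x_i | rest) compares the observed world only with the world where x_i is flipped.
            total += math.log(num / (num + math.exp(score(flipped))))
        return total

    print(log_pseudo_likelihood(x))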

  49. Discriminative Weight Learning • Maximize conditional likelihood of query (y) given evidence (x) • Gradient: ∂/∂w_i log P_w(y | x) = n_i(x, y) − E_w[n_i(x, y)], where n_i(x, y) is the no. of true groundings of clause i in the data and E_w[n_i(x, y)] is the expected no. of true groundings according to the model • Expected counts can be approximated by counts in the MAP state of y given x

  50. Voted Perceptron
      Originally proposed for training HMMs discriminatively [Collins, 2002]
      Assumes the network is a linear chain
        w_i ← 0
        for t ← 1 to T do
          y_MAP ← Viterbi(x)
          w_i ← w_i + η [count_i(y_Data) − count_i(y_MAP)]
        return Σ_t w_i,t / T
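
A toy voted (averaged) perceptron for a linear-chain labeling task, with MAP inference by Viterbi, in the spirit of the pseudocode above; the feature set, training sequence, learning rate, and number of iterations are all assumptions of this sketch.

    from collections import defaultdict

    # Toy linear-chain model with labels 0/1. Weights live on assumed emission
    # features ("emit", label, observation) and transition features
    # ("trans", previous label, label).
    LABELS = (0, 1)

    def feat_counts(xs, ys):
        c = defaultdict(float)
        for t, (x, y) in enumerate(zip(xs, ys)):
            c[("emit", y, x)] += 1
            if t > 0:
                c[("trans", ys[t - 1], y)] += 1
        return c

    def viterbi(xs, w):
        # MAP label sequence under the current weights.
        delta = [{y: w[("emit", y, xs[0])] for y in LABELS}]
        back = []
        for x in xs[1:]:
            scores, ptrs = {}, {}
            for y in LABELS:
                best = max(LABELS, key=lambda p: delta[-1][p] + w[("trans", p, y)])
                ptrs[y] = best
                scores[y] = delta[-1][best] + w[("trans", best, y)] + w[("emit", y, x)]
            delta.append(scores)
            back.append(ptrs)
        y = max(LABELS, key=lambda l: delta[-1][l])
        path = [y]
        for ptrs in reversed(back):
            y = ptrs[y]
            path.append(y)
        return list(reversed(path))

    xs, ys = [0, 0, 1, 1], [0, 0, 1, 1]                 # one assumed training sequence
    w, w_sum, eta, T = defaultdict(float), defaultdict(float), 1.0, 10
    for _ in range(T):
        y_map = viterbi(xs, w)                          # current MAP labeling
        gold, pred = feat_counts(xs, ys), feat_counts(xs, y_map)
        for k in set(gold) | set(pred):
            w[k] += eta * (gold[k] - pred[k])           # perceptron update
        for k in w:
            w_sum[k] += w[k]
    w_avg = defaultdict(float, {k: v / T for k, v in w_sum.items()})   # averaged weights
    print(viterbi(xs, w_avg))                           # should recover the training labels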
