
Markov Logic



  1. Markov Logic

  2. Overview • Introduction • Statistical Relational Learning • Applications • First-Order Logic • Markov Networks • What is it? • Potential Functions • Log-Linear Model • Markov Networks vs. Bayes Networks • Computing Probabilities

  3. Overview • Markov Logic • Intuition • Definition • Example • Markov Logic Networks • MAP Inference • Computing Probabilities • Optimization

  4. Introduction

  5. Statistical Relational Learning L. Getoor & B. Taskar (eds.), Introduction to Statistical Relational Learning, MIT Press, 2007. Goals: • Combine (subsets of) logic and probability into a single language • Develop efficient inference algorithms • Develop efficient learning algorithms • Apply to real-world problems

  6. Applications • Professor Kautz’s GPS tracking project • Determine people’s activities and thoughts about activities based on their own actions as well as their interactions with the world around them

  7. Applications • Collective classification • Determine labels for a set of objects (such as Web pages) based on their attributes as well as their relations to one another • Social network analysis and link prediction • Predict relations between people based on their attributes, predict attributes based on relations, cluster entities based on relations, etc. (the smoker example) • Entity resolution • Determine which observations refer to the same real-world object (e.g., deduplicating a database) • etc.

  8. First-Order Logic • Constants, variables, functions, predicates. E.g.: Anna, x, MotherOf(x), Friends(x, y) • Literal: a predicate or its negation • Clause: a disjunction of literals • Grounding: replace all variables by constants. E.g.: Friends(Anna, Bob) • World (model, interpretation): an assignment of truth values to all ground predicates
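To make the grounding step concrete, here is a small illustrative sketch in Python (the helper `groundings` and the predicate list are hypothetical, not from the slides) that enumerates every ground atom over the constants Anna and Bob and counts the resulting possible worlds:

```python
from itertools import product

# Illustrative sketch (helper names are hypothetical): enumerate every
# grounding of Smokes(x) and Friends(x, y) over the constants Anna and Bob.
constants = ["Anna", "Bob"]
predicates = [("Smokes", 1), ("Friends", 2)]   # (name, arity)

def groundings(predicates, constants):
    """Replace variables by constants in every possible way."""
    atoms = []
    for name, arity in predicates:
        for args in product(constants, repeat=arity):
            atoms.append(f"{name}({','.join(args)})")
    return atoms

atoms = groundings(predicates, constants)
# A world assigns a truth value to every ground predicate.
num_worlds = 2 ** len(atoms)
```

With 2 groundings of Smokes and 4 of Friends there are 6 ground atoms, hence 2^6 = 64 possible worlds.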

  9. Markov Networks

  10. What is a Markov Network? • Represents a joint distribution over variables X • Undirected graph • Nodes = variables • Each clique has an associated potential function (weight)

  11. Markov Networks • Undirected graphical models (nodes here: Smoking, Cancer, Asthma, Cough) • Potential functions defined over cliques, e.g. for the clique {Smoking, Cancer}:

  Smoking  Cancer  φ(S,C)
  False    False   4.5
  False    True    4.5
  True     False   2.7
  True     True    4.5
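The potential table above fully determines the joint distribution over its clique. A minimal sketch, assuming for simplicity a network whose only clique is {Smoking, Cancer}, normalizes the slide's potential values into probabilities:

```python
from itertools import product

# Minimal sketch, assuming for simplicity a network whose only clique is
# {Smoking, Cancer}; the potential values come from the slide's table.
phi = {
    (False, False): 4.5,
    (False, True):  4.5,
    (True,  False): 2.7,
    (True,  True):  4.5,
}

# The joint distribution is the normalized product of clique potentials;
# with a single clique, P(s, c) = phi(s, c) / Z.
Z = sum(phi[state] for state in product([False, True], repeat=2))
P = {state: phi[state] / Z for state in phi}
```

Smoking without cancer has the smallest potential (2.7), so it comes out as the least probable joint state.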

  12. Markov Networks • Undirected graphical models (same nodes: Smoking, Cancer, Asthma, Cough) • Log-linear model: P(x) = (1/Z) exp(Σ_i w_i f_i(x)), where w_i is the weight of feature i and f_i(x) is feature i

  13. Markov Nets vs. Bayes Nets

  Property         Markov Nets        Bayes Nets
  Form             Prod. potentials   Prod. potentials
  Potentials       Arbitrary          Cond. probabilities
  Cycles           Allowed            Forbidden
  Partition func.  Z = ?              Z = 1
  Indep. check     Graph separation   D-separation
  Inference        MCMC, BP, etc.     Convert to Markov

  14. Computing Probabilities • Goal: compute marginals & conditionals of the joint distribution P(X) • Exact inference is #P-complete • Approximate inference: Monte Carlo methods, belief propagation, variational approximations

  15. Markov Logic

  16. Markov Logic: Intuition • A logical KB is a set of hard constraints on the set of possible worlds • Let's make them soft constraints: when a world violates a formula, it becomes less probable, not impossible • Give each formula a weight (higher weight → stronger constraint)

  17. Markov Logic: Definition • A Markov Logic Network (MLN) is a set of pairs (F, w), where F is a formula in first-order logic and w is a real number • Together with a set of constants, it defines a Markov network with: • One node for each grounding of each predicate in the MLN • One feature for each grounding of each formula F in the MLN, with the corresponding weight w
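A brief sketch of how this definition plays out for the two-constant example on the slides that follow (the predicate list is assumed from those slides): one node per ground predicate, and one ground feature per substitution of constants into a formula's free variables:

```python
from itertools import product

# Sketch of the ground-network size for the two-constant example
# (predicate list assumed from the example slides).
constants = ["A", "B"]                      # Anna and Bob
predicates = {"Smokes": 1, "Cancer": 1, "Friends": 2}

# One node per grounding of each predicate.
nodes = [f"{name}({','.join(args)})"
         for name, arity in predicates.items()
         for args in product(constants, repeat=arity)]

# A formula with two free variables x, y yields one ground feature per
# substitution of constants for (x, y).
features_per_formula = len(constants) ** 2
```

The eight resulting nodes are exactly the ground atoms listed on the example slides below.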

  18. Example: Friends & Smokers

  21. Example: Friends & Smokers Two constants: Anna (A) and Bob (B)

  22. Example: Friends & Smokers Two constants: Anna (A) and Bob (B) Smokes(A) Smokes(B) Cancer(A) Cancer(B)

  23. Example: Friends & Smokers Two constants: Anna (A) and Bob (B) Friends(A,B) Friends(A,A) Smokes(A) Smokes(B) Friends(B,B) Cancer(A) Cancer(B) Friends(B,A)

  26. Markov Logic Networks • An MLN is a template for ground Markov networks • Typed variables and constants greatly reduce the size of the ground Markov net • Probability of a world x: P(x) = (1/Z) exp(Σ_i w_i n_i(x)), where w_i is the weight of formula i and n_i(x) is the number of true groundings of formula i in x
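As a worked check of this formula, here is a minimal sketch with a single constant A and a single assumed formula Smokes(A) => Cancer(A), given an illustrative weight of 1.5 (the transcript does not state the weights):

```python
import math
from itertools import product

# Minimal sketch of P(x) = (1/Z) exp(sum_i w_i n_i(x)) with one constant A
# and one assumed formula, Smokes(A) => Cancer(A), at an illustrative
# weight of 1.5.
w = 1.5

def n(world):
    """Number of true groundings of the formula in this world."""
    smokes, cancer = world
    return 1 if (not smokes) or cancer else 0

worlds = list(product([False, True], repeat=2))      # (Smokes(A), Cancer(A))
Z = sum(math.exp(w * n(x)) for x in worlds)
P = {x: math.exp(w * n(x)) / Z for x in worlds}
```

The only world that violates the formula, Smokes(A) true with Cancer(A) false, gets the lowest probability, but a nonzero one: the hard constraint has become soft.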

  27. Markov Networks

  28. MAP Inference • Problem: find the most likely state of the world given evidence: arg max_y P(y | x), where y is the query and x is the evidence

  31. MAP Inference • Problem: find the most likely state of the world given evidence, i.e. maximize Σ_i w_i n_i(x, y) over y • This is just the weighted MaxSAT problem • Use a weighted SAT solver (e.g., MaxWalkSAT [Kautz et al., 1997])

  32. The MaxWalkSAT Algorithm

  for i := 1 to max-tries do
      solution := random truth assignment
      for j := 1 to max-flips do
          if weights(sat. clauses) > threshold then
              return solution
          c := random unsatisfied clause
          with probability p:
              flip a random variable in c
          else:
              flip the variable in c that maximizes weights(sat. clauses)
  return failure, best solution found
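A runnable sketch of the algorithm above (clause encoding, parameter names, and defaults are illustrative choices, not from the slides):

```python
import random

# Sketch of MaxWalkSAT. A clause is (weight, [(var, positive?), ...]);
# an assignment maps each variable name to a bool.
def clause_sat(clause, assign):
    _, lits = clause
    return any(assign[v] == pos for v, pos in lits)

def sat_weight(clauses, assign):
    return sum(c[0] for c in clauses if clause_sat(c, assign))

def maxwalksat(variables, clauses, max_tries=10, max_flips=1000,
               p=0.5, threshold=None, rng=random):
    if threshold is None:                 # by default, try to satisfy everything
        threshold = sum(w for w, _ in clauses)
    best, best_w = None, float("-inf")
    for _ in range(max_tries):
        assign = {v: rng.random() < 0.5 for v in variables}
        for _ in range(max_flips):
            if sat_weight(clauses, assign) >= threshold:
                return assign
            unsat = [c for c in clauses if not clause_sat(c, assign)]
            if not unsat:
                return assign
            _, lits = rng.choice(unsat)   # pick a random unsatisfied clause
            if rng.random() < p:          # random-walk move
                var = rng.choice(lits)[0]
            else:                         # greedy move: best flip in the clause
                def gain(v):
                    assign[v] = not assign[v]
                    w = sat_weight(clauses, assign)
                    assign[v] = not assign[v]
                    return w
                var = max((v for v, _ in lits), key=gain)
            assign[var] = not assign[var]
            w_now = sat_weight(clauses, assign)
            if w_now > best_w:
                best_w, best = w_now, dict(assign)
    return best                           # failure: best assignment found
```

For example, with the weighted clauses ¬S ∨ C (weight 1.5) and S (weight 2.0), the search should settle on the assignment with S and C both true, which satisfies both clauses.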

  33. Computing Probabilities • P(Formula | MLN, C) = ? • Brute force: sum the probabilities of the worlds where the formula holds • MCMC: sample worlds, check whether the formula holds • P(Formula1 | Formula2, MLN, C) = ? • Discard worlds where Formula2 does not hold (slow!) • Can use Gibbs sampling instead
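The discard-worlds idea can be sketched on the tiny one-constant model again (with the same assumed formula Smokes(A) => Cancer(A) and illustrative weight 1.5): sample worlds from the MLN distribution and reject those where the evidence fails:

```python
import math
import random
from itertools import product

# Sketch: estimate P(Cancer(A) | Smokes(A)) in a tiny one-constant MLN
# with the single assumed formula Smokes(A) => Cancer(A) at an
# illustrative weight of 1.5, by sampling worlds and discarding those
# where the evidence Smokes(A) does not hold.
w = 1.5
worlds = list(product([False, True], repeat=2))      # (smokes, cancer)

def score(x):
    smokes, cancer = x
    return math.exp(w * ((not smokes) or cancer))    # exp(w * n(x))

Z = sum(score(x) for x in worlds)

def sample_world(rng):
    """Draw one world from the MLN distribution by inverse CDF."""
    r = rng.random() * Z
    for x in worlds:
        r -= score(x)
        if r <= 0:
            return x
    return worlds[-1]

rng = random.Random(0)
samples = [sample_world(rng) for _ in range(20000)]
kept = [x for x in samples if x[0]]                  # keep worlds with Smokes(A)
estimate = sum(1 for x in kept if x[1]) / len(kept)
exact = math.exp(w) / (1 + math.exp(w))              # closed form for this model
```

Most samples are discarded, which is exactly why the slide recommends Gibbs sampling for conditionals over real models.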

  34. Weight Learning • Given formulas without weights, we can learn the weights • Given a set of labeled instances, find the w_i's that maximize the likelihood of the data

  35. References • P. Domingos & D. Lowd, Markov Logic: An Interface Layer for Artificial Intelligence, Synthesis Lectures on Artificial Intelligence and Machine Learning, Morgan & Claypool, 2009. • Most of the slides were taken from P. Domingos’ course website: http://www.cs.washington.edu/homes/pedrod/803/ Thank You!
