
# Markov Logic






### Markov Logic

• Introduction

  • Statistical Relational Learning

  • Applications

• First-Order Logic

• Markov Networks

  • What is it?

  • Potential Functions

  • Log-Linear Model

  • Markov Networks vs. Bayes Networks

  • Computing Probabilities

• Markov Logic

  • Intuition

  • Definition

  • Example

  • Markov Logic Networks

  • MAP Inference

  • Computing Probabilities

  • Optimization

### Introduction

L. Getoor & B. Taskar (eds.), Introduction to Statistical Relational Learning, MIT Press, 2007.

Goals:

• Combine (subsets of) logic and probability into a single language

• Develop efficient inference algorithms

• Develop efficient learning algorithms

• Apply to real-world problems

• Professor Kautz’s GPS tracking project

• Determine people’s activities and intentions based on their own actions as well as their interactions with the world around them

• Collective classification

• Determine labels for a set of objects (such as Web pages) based on their attributes as well as their relations to one another

• Social network analysis and link prediction

• Predict relations between people based on attributes, attributes based on relations, cluster entities based on relations, etc. (smoker example)

• Entity resolution

• Determine which observations refer to the same real-world object (e.g., deduplicating a database)

• etc.

### First-Order Logic

Constants, variables, functions, predicates. E.g.: Anna, x, MotherOf(x), Friends(x, y)

Literal: Predicate or its negation

Clause: Disjunction of literals

Grounding: Replace all variables by constants. E.g.: Friends(Anna, Bob)

World (model, interpretation): Assignment of truth values to all ground predicates
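As a toy sketch of the grounding step (the predicate and constant names are placeholders, not taken from the slides):

```python
from itertools import product

def ground_predicate(name, arity, constants):
    """Enumerate all groundings of a predicate: every way of
    substituting constants for its variables."""
    return [f"{name}({', '.join(args)})"
            for args in product(constants, repeat=arity)]

groundings = ground_predicate("Friends", 2, ["Anna", "Bob"])
# 2 constants, arity 2 -> 4 ground atoms: Friends(Anna, Anna), ...
```

A world then assigns True or False to each of these ground atoms.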

### Markov Networks

• Represents a joint distribution of variables X

• Undirected graph

• Nodes = variables

• Clique = potential function (weight)

Undirected graphical models

[Figure: network over the nodes Smoking, Cancer, Asthma, Cough]

Potential for the clique {Smoking, Cancer}:

| Smoking | Cancer | Φ(S,C) |
| ------- | ------ | ------ |
| False   | False  | 4.5    |
| False   | True   | 4.5    |
| True    | False  | 2.7    |
| True    | True   | 4.5    |

• Potential functions defined over cliques


• Log-linear model:

P(x) = (1/Z) exp( Σᵢ wᵢ fᵢ(x) )

where wᵢ is the weight of feature i and fᵢ(x) indicates whether feature i holds in x.
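A minimal numeric sketch of the log-linear model, with made-up weights for two binary features over (Smoking, Cancer):

```python
import math
from itertools import product

# Binary features f_i(x) with illustrative weights w_i (not from the slides):
# P(x) = exp(sum_i w_i * f_i(x)) / Z
features = [
    (1.5, lambda s, c: s and c),       # Smoking and Cancer both true
    (-0.5, lambda s, c: s and not c),  # Smoking without Cancer
]

def unnormalized(s, c):
    """exp(sum of weights of features that fire in world (s, c))"""
    return math.exp(sum(w for w, f in features if f(s, c)))

# Partition function Z sums over all four worlds
Z = sum(unnormalized(s, c) for s, c in product([False, True], repeat=2))

def P(s, c):
    return unnormalized(s, c) / Z
```

With these weights, worlds where smoking co-occurs with cancer get more probability mass than those where it does not.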

| Property        | Markov Nets       | Bayes Nets          |
| --------------- | ----------------- | ------------------- |
| Form            | Prod. potentials  | Prod. potentials    |
| Potentials      | Arbitrary         | Cond. probabilities |
| Cycles          | Allowed           | Forbidden           |
| Partition func. | Z = ?             | Z = 1               |
| Indep. check    | Graph separation  | D-separation        |
| Inference       | MCMC, BP, etc.    | Convert to Markov   |

### Computing Probabilities

Goal: Compute marginals & conditionals of P(X = x)

Exact inference is #P-complete

Approximate inference

Monte Carlo methods

Belief propagation

Variational approximations
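As a toy illustration of the Monte Carlo approach: estimate a marginal by sampling worlds and counting. The two-variable model and its single feature are made up for the example, and exact sampling stands in for a real MCMC chain:

```python
import math, random
from itertools import product

# Toy model: two binary variables, one feature (weight 1.0) rewarding (a and b)
w = 1.0
worlds = list(product([False, True], repeat=2))
probs = [math.exp(w if (a and b) else 0.0) for a, b in worlds]
Z = sum(probs)
probs = [p / Z for p in probs]

# Monte Carlo estimate of the marginal P(a = True):
rng = random.Random(0)
samples = rng.choices(worlds, weights=probs, k=20000)
estimate = sum(1 for a, _ in samples if a) / len(samples)

# Exact marginal for comparison
exact = sum(p for (a, _), p in zip(worlds, probs) if a)
```

The estimate converges to the exact marginal as the number of samples grows; MCMC methods do the same thing when exact sampling is infeasible.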

### Markov Logic

A logical KB is a set of hard constraints on the set of possible worlds

Let’s make them soft constraints: when a world violates a formula, it becomes less probable, not impossible

Give each formula a weight (higher weight ⇒ stronger constraint)

A Markov Logic Network (MLN) is a set of pairs (F, w) where

F is a formula in first-order logic

w is a real number

Together with a set of constants, it defines a Markov network with

One node for each grounding of each predicate in the MLN

One feature for each grounding of each formula F in the MLN, with the corresponding weight w
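The templating idea can be sketched as follows; the variable and constant names are illustrative:

```python
from itertools import product

def ground_formula(variables, constants):
    """All substitutions of constants for a formula's free variables;
    each substitution yields one ground feature sharing the formula's
    weight w."""
    return [dict(zip(variables, combo))
            for combo in product(constants, repeat=len(variables))]

# A formula over variables x, y grounded with constants Anna and Bob:
groundings = ground_formula(["x", "y"], ["Anna", "Bob"])
# 4 ground features: {x: Anna, y: Anna}, {x: Anna, y: Bob}, ...
```

Each grounding becomes one feature of the ground Markov network, so the network size grows with the number of constants raised to the formula's arity.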

Two constants: Anna (A) and Bob (B)

[Figure: ground network over the nodes Friends(A,A), Friends(A,B), Friends(B,A), Friends(B,B), Smokes(A), Smokes(B), Cancer(A), Cancer(B)]


MLN is template for ground Markov nets

Typed variables and constants greatly reduce size of ground Markov net

Probability of a world x:

P(x) = (1/Z) exp( Σᵢ wᵢ nᵢ(x) )

where wᵢ is the weight of formula i and nᵢ(x) is the number of true groundings of formula i in x.

Markov Logic Networks
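A sketch of computing Σᵢ wᵢ nᵢ(x) for a given world, using an assumed smokers-style formula (the formula and weight are illustrative, not taken from the slides):

```python
from itertools import product

def n_true_groundings(formula, variables, constants, world):
    """n_i(x): number of true groundings of formula i in world x."""
    return sum(1 for combo in product(constants, repeat=len(variables))
               if formula(world, dict(zip(variables, combo))))

# Illustrative formula with weight 1.5:  Smokes(x) => Cancer(x)
def implies_cancer(world, sub):
    return (not world[f"Smokes({sub['x']})"]) or world[f"Cancer({sub['x']})"]

mln = [(1.5, ["x"], implies_cancer)]
world = {"Smokes(Anna)": True, "Cancer(Anna)": True,
         "Smokes(Bob)": True, "Cancer(Bob)": False}

# sum_i w_i * n_i(x); exponentiating and dividing by Z gives P(x)
score = sum(w * n_true_groundings(f, vs, ["Anna", "Bob"], world)
            for w, vs, f in mln)
# Anna satisfies the implication, Bob violates it: n_i(x) = 1, score = 1.5
```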

### MAP Inference

Problem: Find the most likely state of the world y (the query) given evidence x:

argmax_y P(y | x) = argmax_y Σᵢ wᵢ nᵢ(x, y)

This is just the weighted MaxSAT problem

Use a weighted SAT solver (e.g., MaxWalkSAT [Kautz et al., 1997])

```
for i := 1 to max-tries do
    solution = random truth assignment
    for j := 1 to max-flips do
        if weights(sat. clauses) > threshold then
            return solution
        c := random unsatisfied clause
        with probability p
            flip a random variable in c
        else
            flip variable in c that maximizes weights(sat. clauses)
return failure, best solution found
```
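A runnable Python version of the MaxWalkSAT pseudocode above; the clause encoding (signed integers, DIMACS-style) and the default parameters are my own assumptions:

```python
import random

def maxwalksat(clauses, weights, n_vars, max_tries=10, max_flips=1000,
               p=0.5, target=None, seed=0):
    """MaxWalkSAT sketch. A clause is a list of signed ints: literal v
    means variable |v| must be True (v > 0) or False (v < 0)."""
    rng = random.Random(seed)

    def sat(clause, assign):
        return any((assign[abs(l)] if l > 0 else not assign[abs(l)])
                   for l in clause)

    def score(assign):
        return sum(w for cl, w in zip(clauses, weights) if sat(cl, assign))

    if target is None:
        target = sum(weights)  # stop only when everything is satisfied
    best, best_score = None, float("-inf")
    for _ in range(max_tries):
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            s = score(assign)
            if s > best_score:
                best, best_score = dict(assign), s
            if s >= target:
                return assign, s
            c = rng.choice([cl for cl in clauses if not sat(cl, assign)])
            if rng.random() < p:
                v = abs(rng.choice(c))  # random-walk step
            else:                        # greedy step: best flip within c
                def gain(var):
                    assign[var] = not assign[var]
                    g = score(assign)
                    assign[var] = not assign[var]
                    return g
                v = max((abs(l) for l in c), key=gain)
            assign[v] = not assign[v]
    return best, best_score
```

Usage: `maxwalksat([[1], [-1, 2]], [2.0, 1.0], 2)` should satisfy both clauses (x1 true, x2 true) for a total weight of 3.0.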

P(Formula|MLN,C) = ?

Brute force: Sum probs. of worlds where formula holds

MCMC: Sample worlds, check formula holds

P(Formula1|Formula2,MLN,C) = ?

Discard worlds where Formula 2 does not hold

Slow! Can use Gibbs sampling instead
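The brute-force option above can be sketched for a tiny MLN (the formula, weight, and constants are illustrative):

```python
import itertools, math

def enumerate_worlds(atoms, formulas):
    """All 2^n worlds with P(x) = exp(sum of weights of satisfied
    formulas) / Z.  `formulas` is a list of (weight, fn(world)->bool)."""
    worlds, scores = [], []
    for bits in itertools.product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        worlds.append(world)
        scores.append(math.exp(sum(w for w, f in formulas if f(world))))
    Z = sum(scores)
    return worlds, [s / Z for s in scores]

def prob(query, atoms, formulas):
    """P(Formula | MLN, C): sum probs. of worlds where the formula holds."""
    worlds, probs = enumerate_worlds(atoms, formulas)
    return sum(p for wld, p in zip(worlds, probs) if query(wld))

# Illustrative: one formula Smokes(A) => Cancer(A) with weight 1.5
atoms = ["Smokes(A)", "Cancer(A)"]
formulas = [(1.5, lambda x: (not x["Smokes(A)"]) or x["Cancer(A)"])]
p_cancer = prob(lambda x: x["Cancer(A)"], atoms, formulas)
```

This is exponential in the number of ground atoms, which is exactly why sampling methods such as Gibbs sampling are used in practice.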

• Given formulas without weights, we can learn the weights

• Given a training set of labeled instances, we want to find the wᵢ that maximize the likelihood of the data
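A sketch of likelihood-based weight learning by gradient ascent, using brute-force expectations (only feasible for toy models; the example data is made up):

```python
import itertools, math

def learn_weights(atoms, feature_fns, data, lr=0.1, steps=500):
    """Gradient ascent on the average log-likelihood of a tiny MLN
    (brute-force partition function).  The gradient w.r.t. w_i is
    E_data[f_i] - E_model[f_i]."""
    w = [0.0] * len(feature_fns)
    worlds = [dict(zip(atoms, bits))
              for bits in itertools.product([False, True], repeat=len(atoms))]
    for _ in range(steps):
        scores = [math.exp(sum(wi for wi, f in zip(w, feature_fns) if f(x)))
                  for x in worlds]
        Z = sum(scores)
        for i, f in enumerate(feature_fns):
            observed = sum(1 for x in data if f(x)) / len(data)
            expected = sum(s for x, s in zip(worlds, scores) if f(x)) / Z
            w[i] += lr * (observed - expected)
    return w

# Illustrative: one atom, one feature; 3 of 4 training examples have S true,
# so the learned weight should make P(S) converge to 0.75.
data = [{"S": True}] * 3 + [{"S": False}]
w = learn_weights(["S"], [lambda x: x["S"]], data)
```

At the optimum the model's expected feature counts match the empirical counts, which is the standard maximum-likelihood condition for log-linear models.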

• P. Domingos & D. Lowd, Markov Logic: An Interface Layer for Artificial Intelligence, Synthesis Lectures on Artificial Intelligence and Machine Learning, Morgan & Claypool, 2009.

• Most of the slides were taken from P. Domingos’ course website: http://www.cs.washington.edu/homes/pedrod/803/

Thank You!