
Markov Logic and Deep Networks


Presentation Transcript


  1. Markov Logic and Deep Networks
  Pedro Domingos
  Dept. of Computer Science & Eng.
  University of Washington

  2. Markov Logic Networks
  • Basic idea: Use first-order logic to compactly specify large non-i.i.d. models
  • MLN = Set of formulas with weights: P(x) = (1/Z) exp(Σᵢ wᵢ nᵢ(x)), where wᵢ is the weight of formula i and nᵢ(x) is the no. of true instances of formula i in x (computed in the sketch below)
  • Formula = Feature template (Vars → Objects)
  • E.g., HMM: State(+s,t) ^ State(+s',t+1) and Obs(+o,t) ^ State(+s,t)
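To make the weighted-formula scoring concrete, here is a minimal Python sketch that evaluates P(x) = (1/Z) exp(Σᵢ wᵢ nᵢ(x)) by brute-force enumeration of worlds. This is not Alchemy's implementation, and the toy world and weight below are invented; real MLN inference never enumerates worlds.

```python
import math
from itertools import product

def unnormalized_score(weights, counts):
    """exp(sum_i w_i * n_i(x)) for one world x."""
    return math.exp(sum(w * n for w, n in zip(weights, counts)))

def mln_probability(weights, count_fn, worlds, x):
    """Exact P(x): normalize by Z, the sum over all possible worlds.
    Feasible only for tiny domains."""
    z = sum(unnormalized_score(weights, count_fn(w)) for w in worlds)
    return unnormalized_score(weights, count_fn(x)) / z

# Toy example (invented): two ground atoms; one formula with weight 1.5
# whose single grounding is true iff both atoms are true.
worlds = list(product([0, 1], repeat=2))
count = lambda x: [x[0] * x[1]]   # n_1(x) = 1 iff both atoms hold
print(mln_probability([1.5], count, worlds, (1, 1)))  # ~0.60, the likeliest world
```

The world satisfying the formula gets weight e^1.5 while the other three get e^0 = 1, so raising a formula's weight shifts probability mass toward the worlds where it holds.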

  3. State of the Art in MLNs
  • Many algorithms for learning and inference
  • Inference: Millions of variables, billions of features
  • Learning: Generative, discriminative, max-margin, etc.
  • Best-performing solutions in many application areas
  • Natural language, robot mapping, social networks, computational biology, activity recognition, etc.
  • Open-source software/Web site: Alchemy (alchemy.cs.washington.edu)
  • Book: Domingos & Lowd, Markov Logic, Morgan & Claypool, 2009.

  4. Deep Uses of MLNs
  • Very large scale inference
  • Defining architecture of deep networks
  • Adding knowledge to deep networks
  • Transition to natural language input
  • Learning with many levels of hidden variables

  5. MLNs for Deep Learning
  • Basic idea: Use small amounts of knowledge and large amounts of joint inference to make up for lack of supervision
  • Relational clustering (see the sketch below):
    • Cluster objects with similar relations to similar objects
    • Cluster relations that hold between similar sets of objects
  • Coreference resolution: Outperforms supervised approaches on MUC and ACE benchmarks
  • Semantic network extraction: Learns thousands of concepts and relations from millions of Web triples
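The relational-clustering intuition can be illustrated with a small sketch. The greedy pass, the Jaccard threshold, and the triples below are all invented for illustration; the actual systems cluster objects and relations jointly via probabilistic inference in an MLN, not this heuristic.

```python
def profile(obj, triples):
    """Set of (relation, other-object) pairs the object participates in."""
    return {(r, o) for s, r, o in triples if s == obj} | \
           {(r, s) for s, r, o in triples if o == obj}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_objects(objects, triples, threshold=0.5):
    """Greedy single-pass clustering: merge an object into the first
    cluster containing a member with a sufficiently similar profile."""
    clusters = []
    for obj in objects:
        p = profile(obj, triples)
        for c in clusters:
            if any(jaccard(p, profile(m, triples)) >= threshold for m in c):
                c.append(obj)
                break
        else:
            clusters.append([obj])
    return clusters

# Invented triples: two drugs share a relation to the same object.
triples = [("aspirin", "treats", "pain"), ("ibuprofen", "treats", "pain"),
           ("seattle", "located_in", "washington")]
print(cluster_objects(["aspirin", "ibuprofen", "seattle"], triples))
# [['aspirin', 'ibuprofen'], ['seattle']]
```

aspirin and ibuprofen land in one cluster because they stand in the same relation to the same object; seattle, with a disjoint profile, stays apart.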

  6. Example: Unsupervised Semantic Parsing
  • Goal: Read text and answer questions
  • No supervision or annotation
  • Input: Dependency parses of sentences (Nodes → Unary predicates / Edges → Binary predicates)
  • Outputs: Semantic parser and knowledge base
  • Basic idea: Cluster expressions with similar subexpressions (see the sketch below)
  • Maps syntactic variants to common meaning
  • Discovers its own meaning representation
  • “Part of” lattice of clusters
  • Applied to corpus of biomedical abstracts
  • Three times more correct answers than next best
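As a toy illustration of why clustering expressions with similar subexpressions maps syntactic variants to a common meaning: two verbs that take the same arguments in the same dependency roles get a high similarity and can be merged. USP itself learns these clusters by joint MLN inference, and the dependency triples below are fabricated examples.

```python
from collections import Counter

def arg_signature(verb, parses):
    """Multiset of (dependency-role, argument) pairs seen with this expression."""
    return Counter((role, arg) for v, role, arg in parses if v == verb)

def cosine(c1, c2):
    keys = set(c1) | set(c2)
    dot = sum(c1[k] * c2[k] for k in keys)
    n1 = sum(v * v for v in c1.values()) ** 0.5
    n2 = sum(v * v for v in c2.values()) ** 0.5
    return dot / (n1 * n2) if n1 and n2 else 0.0

# Invented dependency-parse fragments from biomedical-style text.
parses = [("induces", "nsubj", "IL-2"), ("induces", "dobj", "apoptosis"),
          ("enhances", "nsubj", "IL-2"), ("enhances", "dobj", "apoptosis"),
          ("inhibits", "nsubj", "p53"), ("inhibits", "dobj", "growth")]

# Syntactic variants with identical argument profiles: similarity 1.0.
print(cosine(arg_signature("induces", parses), arg_signature("enhances", parses)))
# Unrelated expression: similarity 0.0.
print(cosine(arg_signature("induces", parses), arg_signature("inhibits", parses)))
```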
