
Note about Resolution Refutation



Presentation Transcript


1. Note about Resolution Refutation
• You have a set of hypotheses h1, h2, …, hn and a conclusion c.
• Your argument is that whenever all of the h1, h2, …, hn are true, then c is true as well.
• In other words, whenever all of the h1, h2, …, hn are true, then ¬c is false.
• The argument is valid if and only if the conjunction h1 ∧ h2 ∧ … ∧ hn ∧ ¬c is always false (unsatisfiable), because either (at least) one of the h1, h2, …, hn is false, or, if they are all true, then ¬c is false.
• Therefore, if resolution reduces this conjunction to false (the empty clause), we have shown that the argument is valid.
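As an illustration (not part of the slides), here is a minimal Python sketch of propositional resolution refutation; the clause encoding, with '~' marking negation, is my own convention:

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two clauses (sets of literals).
    Literals are strings; '~p' is the negation of 'p'."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith('~') else '~' + lit
        if comp in c2:
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents

def valid(hypotheses, negated_conclusion):
    """The argument is valid iff hypotheses plus the negated conclusion
    are unsatisfiable, i.e. resolution derives the empty clause."""
    clauses = set(map(frozenset, hypotheses + negated_conclusion))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:           # empty clause: contradiction found
                    return True
                new.add(frozenset(r))
        if new <= clauses:          # fixed point reached, no contradiction
            return False
        clauses |= new

# Example: h1 = p -> q (clause {~p, q}), h2 = p, conclusion c = q.
# We add the negated conclusion ~q and expect the empty clause.
print(valid([{'~p', 'q'}, {'p'}], [{'~q'}]))   # True
```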

2. Propositional Calculus
• You have seen that resolution, including resolution refutation, is a suitable tool for automated reasoning in the propositional calculus.
• If we build a machine that represents its knowledge as propositions, we can use these mechanisms to enable the machine to deduce new knowledge from existing knowledge and verify hypotheses about the world.
• However, propositional calculus has some serious restrictions in its capability to represent knowledge.

3. Propositional Calculus
• In propositional calculus, atoms have no internal structure; we cannot reuse the same proposition for different objects, because each proposition always refers to the same object.
• For example, in the toy block world, the propositions ON_A_B and ON_A_C are completely different from each other.
• We might just as well call them PETER and BOB instead.
• So if we want to express rules that apply to a whole class of objects, in propositional calculus we would have to define separate rules for every single object of that class.
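To make that blow-up concrete, here is a small illustrative sketch (the block names and rule syntax are invented): since the atoms have no internal structure, one separate atom and one separate rule are needed for every pair of blocks, where a single quantified rule would suffice in predicate calculus:

```python
from itertools import permutations

blocks = ['A', 'B', 'C', 'D']

# One distinct, structureless atom per ordered pair of blocks:
atoms = [f'ON_{x}_{y}' for x, y in permutations(blocks, 2)]
print(len(atoms), atoms[:3])   # 12 atoms for just 4 blocks

# A rule like "y is not clear if some block is on it" must be
# restated once per pair, because the atoms share no structure:
rules = [f'ON_{x}_{y} -> NOT CLEAR_{y}' for x, y in permutations(blocks, 2)]
print(len(rules))              # 12 rules instead of one quantified rule
```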

4. Predicate Calculus
• So it is a better idea to use predicates instead of propositions.
• This leads us to predicate calculus.
• Predicate calculus has three kinds of symbols:
• object constants,
• relation constants, and
• function constants.
• These symbols are used to refer to objects in the world and to propositions about the world.
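One illustrative way to represent these three kinds of symbols in a program (the class and variable names are mine, not from the lecture):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    """An object constant like 'A', or a function constant like
    'motherOf' applied to argument terms."""
    name: str
    args: tuple = ()        # empty tuple for plain object constants

@dataclass(frozen=True)
class Atom:
    """A relation constant applied to terms, e.g. On(A, B)."""
    relation: str
    args: tuple

A, B = Term('A'), Term('B')
on_a_b = Atom('On', (A, B))            # reusable relation constant
on_a_c = Atom('On', (A, Term('C')))    # same relation, different object
print(on_a_b, on_a_c)
```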

5. Quantification
• Introducing the universal quantifier ∀ and the existential quantifier ∃ facilitates the translation of world knowledge into predicate calculus.
• Examples:
• Paul beats up all professors who fail him.
• ∀x (Professor(x) ∧ Fails(x, Paul) → BeatsUp(Paul, x))
• There is at least one intelligent UMB professor.
• ∃x (UMBProf(x) ∧ Intelligent(x))
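Over a finite domain, ∀ corresponds to all() and ∃ to any(); the sketch below checks the two example sentences against a small invented model (the domain and predicate assignments are assumptions for illustration):

```python
# Hypothetical finite model for the two example sentences.
people = ['Paul', 'Smith', 'Jones', 'Lee']
professor = {'Smith', 'Jones', 'Lee'}
umb_prof = {'Smith', 'Jones'}
intelligent = {'Jones', 'Lee'}
fails = {('Smith', 'Paul')}          # Smith fails Paul
beats_up = {('Paul', 'Smith')}       # Paul beats up Smith

# ∀x (Professor(x) ∧ Fails(x, Paul) → BeatsUp(Paul, x))
# A → B is encoded as (not A) or B.
s1 = all((x not in professor or (x, 'Paul') not in fails)
         or ('Paul', x) in beats_up
         for x in people)

# ∃x (UMBProf(x) ∧ Intelligent(x))
s2 = any(x in umb_prof and x in intelligent for x in people)

print(s1, s2)   # True True in this model
```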

6. Knowledge Representation
• a) There are no crazy UMB students.
• ¬∃x (UMBStudent(x) ∧ Crazy(x))
• b) All computer scientists are either rich or crazy, but not both.
• ∀x (CS(x) → [Rich(x) ∧ ¬Crazy(x)] ∨ [¬Rich(x) ∧ Crazy(x)])
• c) All UMB students except one are intelligent.
• ∃x (UMBStudent(x) ∧ ¬Intelligent(x)) ∧ ∀x,y (UMBStudent(x) ∧ UMBStudent(y) ∧ ¬Identical(x, y) → Intelligent(x) ∨ Intelligent(y))
• d) Jerry and Betty have the same friends.
• ∀x ([Friends(Betty, x) → Friends(Jerry, x)] ∧ [Friends(Jerry, x) → Friends(Betty, x)])
• e) No mouse is bigger than an elephant.
• ¬∃x,y (Mouse(x) ∧ Elephant(y) ∧ BiggerThan(x, y))
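Such translations can be sanity-checked against a small finite model. The sketch below (students and predicate assignments invented) verifies statement (c), the trickiest of the five, in a model where exactly one student is not intelligent:

```python
from itertools import product

students = ['Ann', 'Bob', 'Cid']
intelligent = {'Ann', 'Bob'}          # everyone but Cid

# ∃x (UMBStudent(x) ∧ ¬Intelligent(x)): at least one exception exists.
has_exception = any(x not in intelligent for x in students)

# ∀x,y (UMBStudent(x) ∧ UMBStudent(y) ∧ ¬Identical(x, y)
#        → Intelligent(x) ∨ Intelligent(y)):
# of any two distinct students, at least one is intelligent,
# so there cannot be two exceptions.
at_most_one_exception = all(
    x == y or x in intelligent or y in intelligent
    for x, y in product(students, students)
)

print(has_exception and at_most_one_exception)   # True
```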

7. But now, finally…
• … let us move on to…
• Artificial Neural Networks

8. Computers vs. Neural Networks
• "Standard" Computers | Neural Networks
• one CPU | highly parallel processing
• fast processing units | slow processing units
• reliable units | unreliable units
• static infrastructure | dynamic infrastructure

9. Why Artificial Neural Networks?
• There are two basic reasons why we are interested in building artificial neural networks (ANNs):
• Technical viewpoint: Some problems, such as character recognition or the prediction of future states of a system, require massively parallel and adaptive processing.
• Biological viewpoint: ANNs can be used to replicate and simulate components of the human (or animal) brain, thereby giving us insight into natural information processing.

10. Why Artificial Neural Networks?
• Why do we need a paradigm other than symbolic AI for building "intelligent" machines?
• Symbolic AI is well suited to representing explicit knowledge that can be appropriately formalized.
• However, learning in biological systems is mostly implicit; it is an adaptation process based on uncertain information and reasoning.
• ANNs are inherently parallel and work extremely efficiently if implemented in parallel hardware.

11. How do NNs and ANNs work?
• The "building blocks" of neural networks are the neurons.
• In technical systems, we also refer to them as units or nodes.
• Basically, each neuron
• receives input from many other neurons,
• changes its internal state (activation) based on the current input, and
• sends one output signal to many other neurons, possibly including its input neurons (recurrent network).
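Here is a minimal sketch of such a unit in Python; the logistic (sigmoid) activation is my own choice for illustration, not something the slide prescribes:

```python
import math

def unit(inputs, weights, bias=0.0):
    """One artificial neuron: a weighted sum of the inputs determines
    the activation, which is emitted as the single output signal."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))   # logistic activation in (0, 1)

# The one output value would then be sent to many other units,
# possibly including the ones that provided the input (recurrence).
print(unit([0.5, -1.0, 2.0], [0.8, 0.2, 0.4]))
```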

12. How do NNs and ANNs work?
• Information is transmitted as a series of electric impulses, so-called spikes.
• The frequency and phase of these spikes encode the information.
• In biological systems, one neuron can be connected to as many as 10,000 other neurons.
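As a toy sketch of one part of this (a deliberate simplification that models only the rate, not the phase, of spiking; all parameters are invented), a stimulus intensity can be encoded as spike frequency:

```python
import random

def spike_train(intensity, steps=20, max_rate=0.9):
    """Rate coding: a stronger stimulus gives a higher spike
    probability per time step, so information is carried by
    how frequently the neuron fires."""
    p = max(0.0, min(1.0, intensity)) * max_rate
    return [1 if random.random() < p else 0 for _ in range(steps)]

random.seed(0)
weak, strong = spike_train(0.2), spike_train(0.9)
print(sum(weak), sum(strong))   # the strong stimulus fires more often
```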

13. "Data Flow Diagram" of Visual Areas in the Macaque Brain
• [Figure: data flow diagram of the visual areas in the macaque brain]
• Blue: motion perception pathway
• Green: object recognition pathway
