
Artificial Intelligence on the Web


Presentation Transcript


  1. Artificial Intelligence on the Web Wednesday, Week 9

  2. Intelligence Exercise • What is Intelligence? • What activities require intelligence?

  3. AI Definition #1 • “The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning…”

  4. AI Definition #2 • “The study of how to make computers do things at which, at the moment, people are better.”

  5. AI Definition #3 • “The study of computations that make it possible to perceive, reason, and act.”

  6. AI Definition #4 • “The branch of computer science that is concerned with the automation of intelligent behavior.”

  7. Main AI Definitions • Systems that think like humans • Systems that act like humans • Systems that act rationally

  8. Thinking humanly: • Problem is figuring out how humans think. • This is an interesting question apart from AI. • We are concerned with solving the problem as a human would.

  9. Acting humanly: • This idea is pretty much summed up in the Turing Test. • Can we get a program to seem human enough to fool a human interrogator?

  10. Acting Rationally: • “Acting to achieve one’s goals, given one’s beliefs.” • So, based on what we know about the world, we should always do the right thing. • It is generally too difficult to find the right thing to do.

  11. AI Topics • Let’s focus on Definition #2: Getting computers to do things that humans are currently better at. • This sweeps away troubling philosophical questions, and allows us to take an engineering perspective.

  12. Search Problems • Many AI problems involve finding a sequence of actions to reach a goal. • Chess - find a series of moves to win a game. • Robot control - find a series of movements that leads to a particular room.

  13. Search Problems • We formalize search by dividing the world into a set of states and actions. • States: • Chess - legal board positions. • Robot navigation - the robot’s current room. • Actions: • Chess - Legal chess moves. • Robot - move East, West, North or South.

  14. Search Problems • We need to know two more things: • The successor function tells us how actions change the state. • The goal state tells us where we are trying to get.

  15. Robot Example • Our robot is trying to get from A to P. • [Slide figure: a 4 x 4 grid of rooms labeled row by row A B C D / E F G H / I J K L / M N O P; A is the top-left corner and P the bottom-right.]
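To make the formalization on slides 13–15 concrete, here is a minimal Python sketch of the robot grid as a search problem. It assumes rooms A–P laid out row by row in a 4 x 4 grid, as on the slide; the names ROOMS, MOVES, successor, and is_goal are illustrative choices, not from the slides.

    # A minimal sketch of the robot grid as a search problem (assumed layout: 4 x 4, row-major A..P).
    ROOMS = "ABCDEFGHIJKLMNOP"   # A B C D / E F G H / I J K L / M N O P
    GRID_SIZE = 4
    GOAL_STATE = "P"

    # Actions and how they change the (row, column) position.
    MOVES = {"North": (-1, 0), "South": (1, 0), "East": (0, 1), "West": (0, -1)}

    def successor(state, action):
        """Successor function: the room reached by taking `action` in `state`.
        A move that would leave the grid keeps the robot where it is."""
        row, col = divmod(ROOMS.index(state), GRID_SIZE)
        d_row, d_col = MOVES[action]
        new_row, new_col = row + d_row, col + d_col
        if 0 <= new_row < GRID_SIZE and 0 <= new_col < GRID_SIZE:
            return ROOMS[new_row * GRID_SIZE + new_col]
        return state

    def is_goal(state):
        """Goal test: have we reached room P?"""
        return state == GOAL_STATE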

  16. Search Tree • Each of our four actions will result in a new state. • From each of those new states, we again have four possible actions to choose from. • The process can be viewed as a tree…

  17. Search Tree • [Slide figure: a search tree rooted at state A. The actions North, West, East, and South lead from A to the states A, A, B, and E; expanding E with the same four actions gives A, E, F, and I; and so on.]

  18. Navigating a Search Tree • We can move through a search tree in different ways. • One possibility: Breadth First Search • First consider every possible action sequence of length N. • Then move on to every possible action sequence of length N+1. • We’ll consider other options in lab.
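A short, hypothetical sketch of breadth-first search over the grid problem sketched earlier (it reuses the assumed MOVES, successor, and is_goal helpers). Note that it remembers visited rooms, so it expands far fewer nodes than the full tree counted on the next slide.

    from collections import deque

    def breadth_first_search(start="A"):
        """Try all action sequences of length 1, then length 2, and so on, until a goal state is found."""
        frontier = deque([(start, [])])          # (state, actions taken so far)
        visited = {start}
        while frontier:
            state, path = frontier.popleft()
            if is_goal(state):
                return path                      # a shortest sequence of moves
            for action in MOVES:
                next_state = successor(state, action)
                if next_state not in visited:
                    visited.add(next_state)
                    frontier.append((next_state, path + [action]))
        return None

    print(breadth_first_search())   # a 6-move route from A to P (3 East and 3 South, in some order)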

  19. Search Efficiency • With breadth first search, how large will our tree get before we reach P? • 4^6 = 4096. • In general? • B^D • B is the branching factor - The number of actions. • D is the depth - The number of steps to the goal.
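The slide’s figures are easy to recompute; the snippet below just restates them.

    B, D = 4, 6      # branching factor and depth for the robot example
    print(B ** D)    # 4096 nodes in the naive (no repeat-checking) search tree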

  20. Making Search More Efficient • We can do better if we have an evaluation function - something that tells us if one state is better than another. • Chess is a good example: • Branching factor is around 35. • Number of moves until goal is about 100. • Search tree size: 35^100 • We can do much better by using board evaluation - some configurations are clearly better than others.
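One common way to put an evaluation function to work is greedy best-first search: always expand the state the function rates best. The sketch below is hypothetical; it uses Manhattan (grid) distance to room P as the evaluation for the robot grid, and for chess one would substitute a board-evaluation function instead. It reuses the assumed helpers from the earlier sketches.

    import heapq

    def evaluation(state):
        """Lower is better: grid (Manhattan) distance from `state` to the goal room P."""
        row, col = divmod(ROOMS.index(state), GRID_SIZE)
        goal_row, goal_col = divmod(ROOMS.index(GOAL_STATE), GRID_SIZE)
        return abs(row - goal_row) + abs(col - goal_col)

    def best_first_search(start="A"):
        """Always expand the most promising frontier state, as ranked by the evaluation function."""
        frontier = [(evaluation(start), start, [])]
        visited = {start}
        while frontier:
            _, state, path = heapq.heappop(frontier)
            if is_goal(state):
                return path
            for action in MOVES:
                next_state = successor(state, action)
                if next_state not in visited:
                    visited.add(next_state)
                    heapq.heappush(frontier, (evaluation(next_state), next_state, path + [action]))
        return None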

  21. Speaking of Chess… • This is an example of an adversarial game. • In this sort of search we need to consider: • The results of our own actions AND • The possible responses of our opponent. • What would the tree look like? • …
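The standard way to search an adversarial tree like this is minimax: on our turn take the best-scoring option, and on the opponent’s turn assume they take the option worst for us. The sketch below is generic and hypothetical: legal_moves, apply_move, and evaluate stand in for game-specific details (such as chess move generation and board evaluation) that the slides do not spell out.

    def minimax(state, depth, our_turn, legal_moves, apply_move, evaluate):
        """Score `state` by looking `depth` plies ahead, alternating our moves
        with the opponent's replies. We maximize; the opponent minimizes."""
        moves = legal_moves(state)
        if depth == 0 or not moves:
            return evaluate(state)               # cut off with the evaluation function
        child_scores = [
            minimax(apply_move(state, move), depth - 1, not our_turn,
                    legal_moves, apply_move, evaluate)
            for move in moves
        ]
        return max(child_scores) if our_turn else min(child_scores)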

  22. General Reasoning • Our two examples so far don’t really feel like intelligence. • What if our states are sets of logical claims? • Germany is a country. • If something is a country, it has a flag. • Our goals are to answer logical questions: • Does Germany have a flag? • Actions are logical operators: • (A AND A->B) -> B
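A tiny, hypothetical sketch of this style of reasoning: the state is a set of facts plus if-then rules, and the only action is modus ponens (from A and A -> B, conclude B). For simplicity the rule is written already instantiated for Germany rather than with a variable.

    facts = {"country(Germany)"}
    rules = [("country(Germany)", "has_flag(Germany)")]   # if country(Germany) then has_flag(Germany)

    def forward_chain(facts, rules):
        """Repeatedly apply modus ponens until no new facts can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                if premise in derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print("has_flag(Germany)" in forward_chain(facts, rules))   # True: Germany has a flag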

  23. General Reasoning • Intelligence through theorem proving. • This was a popular idea early in the history of AI. • Can you guess what problems arise? • The state space is huge. • The action space is big. • It relies on statements being either true or false, when we usually don’t know for sure.

  24. A Big Stumbling Block • Our discussion so far has pre-supposed that the world is deterministic and knowledge is certain: • If the robot tries to move North, he always succeeds. • Every country ALWAYS has a flag. • In fact, we almost never have determinism or certainty.

  25. Probability as a Tool in AI • Probability theory gives us a formal framework for reasoning under uncertainty. • Some notation: • P(A) = the probability that statement A is true. • P(SNOW_TOMORROW) = .4 • 40% chance it will snow. 60% chance it will not. • P(A | B) = the probability that A is true if we know B to be true. • P(SNOW_TOMORROW | SUMMER) = .001
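The notation maps directly onto code. The sketch below estimates P(A) and P(A | B) as frequencies over a handful of toy observations; the five records are invented purely so the functions can run and are not data from the slides.

    # Each record: did it snow the next day, and was it summer?
    observations = [
        {"snow_tomorrow": True,  "summer": False},
        {"snow_tomorrow": False, "summer": True},
        {"snow_tomorrow": False, "summer": False},
        {"snow_tomorrow": True,  "summer": False},
        {"snow_tomorrow": False, "summer": True},
    ]

    def probability(event):
        """P(event): the fraction of all observations where `event` holds."""
        return sum(event(o) for o in observations) / len(observations)

    def conditional_probability(event, given):
        """P(event | given): the fraction of observations satisfying `given` where `event` also holds."""
        conditioned = [o for o in observations if given(o)]
        return sum(event(o) for o in conditioned) / len(conditioned)

    print(probability(lambda o: o["snow_tomorrow"]))            # P(SNOW_TOMORROW) = 0.4 on this toy data
    print(conditional_probability(lambda o: o["snow_tomorrow"],
                                  lambda o: o["summer"]))       # P(SNOW_TOMORROW | SUMMER) = 0.0 here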

  26. Bayes’ Rule • Let’s say I have a fever. I want to know the following: • P(PNEUMONIA | FEVER) • I do know this: • P(FEVER | PNEUMONIA) = .9 • P(PNEUMONIA) = .001 • P(FEVER) = .1

  27. Bayes’ Rule • P(H | E) = P(E | H) * P(H) / P(E) • Where H is a hypothesis and E is evidence. • P(PNEU. | FEV.) = P(FEV. | PNEU.) * P(PNEU.) / P(FEV.) • P(PNEU. | FEV.) = .9 * .001 / .1 = .009 • Why is Bayes’ rule helpful? • We want one probability, we need three others to get it.
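The arithmetic on this slide is a one-line computation; the sketch below recomputes the .009 figure from the three given numbers (the function and variable names are illustrative, not from the slides).

    def bayes_rule(p_e_given_h, p_h, p_e):
        """P(H | E) = P(E | H) * P(H) / P(E)."""
        return p_e_given_h * p_h / p_e

    p_fever_given_pneumonia = 0.9    # P(FEVER | PNEUMONIA)
    p_pneumonia = 0.001              # P(PNEUMONIA)
    p_fever = 0.1                    # P(FEVER)

    print(bayes_rule(p_fever_given_pneumonia, p_pneumonia, p_fever))   # 0.009 (up to float rounding)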

  28. Bayes’ Rule • Let’s ask a doctor: • How likely is it that a patient with pneumonia has a fever? • “Very likely. I’d say 90%” EASY • What is the probability that a patient with fever has pneumonia? • “I dunno. People get fevers for all sorts of reasons. Flu, infections, etc…” HARD • This happens all the time. It is often easy to estimate a conditional probability in one direction but not the other.

  29. Bayes’ Nets • A nice approach to handling general reasoning while taking probabilities into account. • Here is an example…
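The slide’s own example was a figure that did not survive in this transcript. As a stand-in only (not the slide’s actual example), here is a hypothetical two-node net, PNEUMONIA -> FEVER, built from the numbers on the previous slides; the 0.0992 entry is an assumed value chosen so that the marginal P(FEVER) comes out near the slide’s .1.

    # Hypothetical two-node Bayes net: PNEUMONIA -> FEVER.
    p_pneumonia = 0.001
    p_fever_given = {True: 0.9,      # P(FEVER | PNEUMONIA), from the slides
                     False: 0.0992}  # P(FEVER | no PNEUMONIA), assumed

    # Marginal P(FEVER): sum over both values of the parent node.
    p_fever = (p_fever_given[True] * p_pneumonia
               + p_fever_given[False] * (1 - p_pneumonia))

    # Posterior by Bayes' rule, as on slide 27.
    p_pneumonia_given_fever = p_fever_given[True] * p_pneumonia / p_fever
    print(round(p_fever, 3), round(p_pneumonia_given_fever, 4))   # roughly 0.1 and 0.009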
