
Introduction to Artificial Intelligence


Presentation Transcript


  1. Introduction to Artificial Intelligence AI definitions and History

  2. What is AI? • 1950, Alan Turing (There was no term “AI” yet…) • Turing Test of Intelligence

  3. What is AI? • 1955: “The goal of AI is to develop machines that behave as though they were intelligent.” • What is intelligence? What about Braitenberg vehicles?

  4. What is AI? • 1991, Encyclopedia Britannica: “AI is the ability of digital computers or computer controlled robots to solve problems that are normally associated with the higher intellectual processing capabilities of humans.” • You mean like a calculator?

  5. What is AI? • Elaine Rich: “Artificial Intelligence is the study of how to make computers do things at which, at the moment, people are better.” • Our author likes this definition. • Does it cover Neural Networks? Does it cover doing tasks that computers are already better at, like finding patterns?

  6. What is AI? • Russell & Norvig: definitions of AI fall into four groups along two dimensions – thinking vs. acting, and humanly vs. rationally – i.e., systems that think like humans, act like humans, think rationally, or act rationally.

  7. Current Events in AI: MOBUSERV

  8. Recognize people by their walk

  9. Emotion Detection

  10. Watson

  11. Jokes • School of Informatics, University of Edinburgh • I like my relationships like I like my source, open • I like my coffee like I like my war, cold • I like my boys like I like my sectors, bad

  12. Back to the Turing Test • Chatbots are fun • http://www.elbot.com/

  13. Agent Lingo • Agent: something that: • Perceives things about the world through sensors (inputs) • Acts in its environment through effectors or actuators (outputs) • Rational Agent: • An agent that does the right thing (vague enough for ya?) • Practically: an agent that has a proven basis for its actions
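
A minimal sketch of the sense-decide-act loop described above; the names (Agent, environment.percept, environment.apply) are illustrative assumptions, not part of any particular library.

```python
# Sketch of the percept/action loop: sensors produce percepts,
# the agent maps percepts to actions, actuators apply the actions.
# All names here are illustrative.

class Agent:
    def act(self, percept):
        """Map a percept (sensor input) to an action (actuator output)."""
        raise NotImplementedError

def run(agent, environment, steps=10):
    """Drive the perceive-act cycle for a fixed number of steps."""
    for _ in range(steps):
        percept = environment.percept()   # sensors (inputs)
        action = agent.act(percept)       # decision
        environment.apply(action)         # effectors/actuators (outputs)
```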

  14. Recurring Theme in AI • We have a problem of definitions: what does it mean to be rational, successful, thinking, intelligent? • Performance Measure: a function created to determine how successful an agent is in a particular environment. • Just like people, each agent has its own notion of success.

  15. Performance Measure Examples • Be careful with defining success. • What is a performance measure for teaching? • Success = % of A’s in class • Success = % of happy students in class • Success = Level of difficulty of material • Success = % of students that go to grad school in AI • Success = ?
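
To make this concrete, a performance measure is just a function from what happened in the environment to a number. The sketch below scores the teaching example with a weighted mix of two of the candidate measures above; the record fields and weights are made-up assumptions.

```python
# Hypothetical performance measure for the teaching example above.
# The record fields ("grade", "happy") and the weights are illustrative.

def teaching_performance(records, w_grades=0.5, w_happy=0.5):
    """Weighted mix of two candidate measures: % of A's and % of happy students."""
    n = len(records)
    pct_a = sum(r["grade"] == "A" for r in records) / n
    pct_happy = sum(r["happy"] for r in records) / n
    return w_grades * pct_a + w_happy * pct_happy

print(teaching_performance([{"grade": "A", "happy": True},
                            {"grade": "C", "happy": False}]))  # 0.5
```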

  16. PEAS Descriptors • Way to define an AI problem. Define its: • Performance Measure • Environment • Actuators • Sensors
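
A PEAS description can be written down as plain data. The taxi-driver entries below follow the usual textbook example and are assumptions about the details, not taken from the slide.

```python
from dataclasses import dataclass

# PEAS descriptor as a simple record; the taxi-driver entries are illustrative.
@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi_driver = PEAS(
    performance=["safe trip", "fast", "legal", "comfortable", "maximize profit"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "GPS", "speedometer", "odometer"],
)
```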

  17. Properties of Environments: Accessibility • Accessibility: How much knowledge of the world can be sensed. • Fully observable: Able to sense the complete state of the environment at each point in time. • Partially observable: Only part of the world is accessible.

  18. Properties of Environments: Agent Cardinality • Other entities are “agents” only if the performance measure depends on their behavior. • Single-agent: No other competing or helping agents. • Multi-agent: Can be competitive or cooperative.

  19. Properties of Environments: Determinism • Deterministic environments: the next world state is completely determined by the current world state and the agent’s actions. • Stochastic environments: Some probability, which the agent cannot control, is involved in determining the next state.
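
The distinction can be phrased in terms of the transition function: a deterministic environment maps (state, action) to exactly one next state, while a stochastic one may land in different states for the same input. A toy sketch with a made-up "slip" probability:

```python
import random

# Deterministic: (state, action) fully determines the next state.
def deterministic_step(state, action):
    return state + action            # toy world: the state is a number

# Stochastic: the same (state, action) can lead to different next states;
# the slip probability is an illustrative assumption.
def stochastic_step(state, action, slip_prob=0.2):
    if random.random() < slip_prob:
        return state                 # the action occasionally has no effect
    return state + action
```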

  20. Properties of Environments: Episodism • An episode is a perception-action pair. • An episodic environment means that the quality of an agent’s action depends only on its current state and action. • Sequential environments mean that the agent must be able to “think ahead” to determine the best action.

  21. Properties of Environments: Static vs Dynamic • A static environment stays the same while the agent is deliberating its next action. • A dynamic environment may change while the agent is deciding its next move; the world may need to be re-perceived during deliberation. • A semidynamic environment is one in which the world does not change while the agent is deliberating, but the agent’s performance score decreases as time passes (e.g., chess with a clock).

  22. Properties of Environments: Discreteness • Discrete environments are made of a finite number of clearly defined percepts and actions. • Continuous environments have actions or percepts that are given in a continuous range. • Discrete and continuous can also refer to the agent’s view of time and space or its internal state.

  23. Properties of Environments: Knowledge • Known environments: We understand the way the environment works, the “laws of physics” for the world. • Unknown: Must first explore – what are the effects of my actions in this environment?

  24. Environments • Fully vs Partially Observable • Deterministic vs Stochastic • Episodic vs Sequential • Static vs Dynamic • Discrete vs Continuous • Single vs Multiagent • Known vs Unknown
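
The seven dimensions above can be gathered into a single record so a task's environment type is explicit. The chess classification below is an illustration (with a clock, chess is really semidynamic; it is simplified to static here).

```python
from dataclasses import dataclass

# The seven environment dimensions from the summary above, as one record.
@dataclass
class EnvironmentType:
    observable: str        # "fully" or "partially"
    deterministic: bool    # False means stochastic
    episodic: bool         # False means sequential
    static: bool           # False means dynamic
    discrete: bool         # False means continuous
    multiagent: bool       # False means single-agent
    known: bool            # False means unknown

chess = EnvironmentType(observable="fully", deterministic=True,
                        episodic=False, static=True, discrete=True,
                        multiagent=True, known=True)
```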

  25. PEAS Descriptions • Part-picking robot • Chess player • Taxi driver • Crossword puzzle solver • Soccer-playing robot • Poker player • Backgammon player • Medical diagnosis expert system • English tutor • Facial recognition program

  26. Types of Agents: Simple Reflex Agents • Agent senses the environment and chooses an action based on condition-action rules • Condition-action rules (if-thens): characterize the current input only, and choose the action from the rule matching that condition. • Not the same as a lookup table – rules may map to multiple actions and need not delineate every possible input.
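
A minimal sketch of condition-action rules, using the classic vacuum-world percepts as a stand-in; the percept format and action names are assumptions for illustration.

```python
# Simple reflex agent: the choice depends only on the current percept,
# via condition-action (if-then) rules. Names are illustrative.
def simple_reflex_vacuum(percept):
    location, status = percept        # e.g. ("A", "dirty")
    if status == "dirty":
        return "suck"
    if location == "A":
        return "move_right"
    return "move_left"

print(simple_reflex_vacuum(("A", "dirty")))  # suck
```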

  27. Types of Agents: Model-based Reflex Agent • Reflex agent + memory • Keep a history as you act in the environment • Use the internal state along with the current state of the environment to make decisions, still based on if-then rules.
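
The same vacuum-world sketch with memory added: the agent records which squares it has seen clean and consults that internal state alongside the current percept. Details are illustrative assumptions.

```python
# Model-based reflex agent: if-then rules consult remembered state
# as well as the current percept. The two-square world is illustrative.
class ModelBasedVacuum:
    def __init__(self):
        self.cleaned = set()              # internal state: squares known clean

    def act(self, percept):
        location, status = percept
        if status == "dirty":
            return "suck"
        self.cleaned.add(location)        # update the model of the world
        if {"A", "B"} <= self.cleaned:    # remembered: everything is clean
            return "no_op"
        return "move_right" if location == "A" else "move_left"
```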

  28. Types of Agents: Goal-based Agents • Instead of having hard rules to follow, the agent has an idea of what the goal is and how its actions change its environment. • The agent chooses an action by searching the possible outcomes of each action. • This type of agent can act correctly when its goal changes.
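
One way to realize "searching the possible outcomes": simulate the predicted result of each candidate action and keep the one whose result is closest to the goal. The one-step lookahead below is a hedged sketch; real goal-based agents search many steps ahead.

```python
# Goal-based choice by one-step lookahead; names are illustrative.
def choose_by_goal(state, actions, result, goal_distance):
    """result(state, action) -> predicted next state;
    goal_distance(state) -> how far that state is from the goal."""
    return min(actions, key=lambda a: goal_distance(result(state, a)))

# Toy usage: the state is a position on a line, the goal is position 10.
best = choose_by_goal(3, ["left", "right"],
                      result=lambda s, a: s + (1 if a == "right" else -1),
                      goal_distance=lambda s: abs(10 - s))
print(best)  # right
```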

  29. Types of Agents: Utility-based Agents • Goals only give a binary distinction: success or failure • Utility Function: A mapping from the state of the world (and internal state, if applicable) to a real number that represents a measure of “happiness”. • Allows the agent to take into account subgoals on the way to a main goal. • Examples: • Taxi driver • Chess program
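
With a utility function the agent maximizes a real-valued score instead of checking a yes/no goal, which is what lets it trade off subgoals (e.g., speed against risk for the taxi). The weights and features below are illustrative assumptions.

```python
# Utility-based choice: rank predicted outcomes by a real number
# instead of a binary goal test. Weights and features are illustrative.
def utility(state):
    return 2.0 * state["progress"] - 5.0 * state["risk"]

def choose_by_utility(state, actions, result):
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy usage: "fast" makes more progress but is riskier, so "slow" wins here.
outcomes = {"fast": {"progress": 1.0, "risk": 0.3},
            "slow": {"progress": 0.5, "risk": 0.0}}
print(choose_by_utility({}, ["fast", "slow"], lambda s, a: outcomes[a]))  # slow
```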

  30. Types of Agents: Learning Agents • An agent gets feedback on how it is doing • It uses the feedback to decide how to change its own if-then rules and control mechanism • Finally, it must have a mechanism to promote exploration of new control strategies, even though they might not seem to be “optimal”
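
An epsilon-greedy value learner is one simple way to get both ingredients: feedback nudges the agent's action values, and a small exploration rate occasionally tries actions that do not currently look optimal. The structure below is a sketch under those assumptions, not the only way to build a learning agent.

```python
import random

# Learning agent sketch: rewards (feedback) update per-action value
# estimates; epsilon controls exploration of seemingly non-optimal actions.
class LearningAgent:
    def __init__(self, actions, epsilon=0.1, lr=0.5):
        self.values = {a: 0.0 for a in actions}   # learned preferences
        self.epsilon = epsilon                     # exploration rate
        self.lr = lr                               # learning rate

    def act(self):
        if random.random() < self.epsilon:              # explore a new strategy
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)    # exploit the best so far

    def learn(self, action, reward):
        # Move this action's estimate toward the observed feedback.
        self.values[action] += self.lr * (reward - self.values[action])
```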

  31. Next… • Now we can describe agents and their problem environments • How do we create the “smarts” of the agent’s control mechanism? • Logic…
