Com1005 Machines and Intelligence

Presentation Transcript


  1. Com1005 Machines and Intelligence Amanda Sharkey

  2. Last week: Turing test – A conversation stopper? (Dennett) OR Flawed and anthropocentric? Ways of improving it?

  3. Early AI programs • great optimism! • 1952 Arthur Samuel: draughts program which learned to beat its inventor • Logic Theorist – Newell and Simon • 1956 Dartmouth Summer Research Project on AI • General Problem Solver – Newell, Simon and Shaw

  4. Early AI Programs • Focus on the ability to reason logically ... • 1956 The Logic Theorist – Allen Newell, Cliff Shaw and Herbert Simon • e.g. given that either X or Y is true, and given further that Y is in fact false, it follows that X is true • Presented at the Dartmouth Conference

  5. Start with the axioms of logic • Derive a theorem (also a sentence) from the axioms • Rules of inference • (sentences describing what is known) • Could generate all possible sentences from the axioms, and stop when the theorem is proved (the British Museum algorithm) • Or use heuristics to guide the search
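The "either X or Y, and Y is false, so X" inference from the previous slide can be sketched in a few lines of Python. This is a toy illustration of disjunctive syllogism, not the Logic Theorist's actual representation or search:

```python
# Toy illustration of one rule of inference (disjunctive syllogism):
# from the premises (X or Y) and (not Y), conclude X.

def disjunctive_syllogism(disjunction, negated):
    """Given a disjunction (A, B) and knowledge that one disjunct
    is false, return the disjunct that must be true."""
    a, b = disjunction
    if negated == b:
        return a
    if negated == a:
        return b
    raise ValueError("negated literal is not part of the disjunction")

print(disjunctive_syllogism(("X", "Y"), "Y"))  # concludes X
```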

  6. The Logic Theorist • Proved 38 of the 52 theorems presented in Bertrand Russell and Alfred North Whitehead’s book Principia Mathematica • The Logic Theorist found a more elegant proof for one theorem than Russell and Whitehead’s • Newell, Simon and Shaw wrote a journal paper on the proof, listing the Logic Theorist as co-author.

  7. Logic Theorist: a reasoning program. • But too prompt at generating proofs. • Newell – interest in designing a computer simulation of human problem solvers • A printed trace of the program’s steps was compared to records of students ‘thinking out loud’ as they grappled with problems.

  8. General Problem Solver (GPS) • Designed to emulate human problem solving protocols • Means-ends analyser • Developed over a 10 year period. • Reference: • Newell, A., and Simon, H.A. (1961) GPS, a Program that Simulates Human Thought. Reprinted in E.A. Feigenbaum and J. Feldman (Eds) (1963) Computers and Thought, New York: McGraw-Hill, pp. 279–293.

  9. GPS • Could solve problems like • Missionaries and cannibals problem. • Three missionaries are travelling through an inhospitable landscape with their three native bearers. The bearers are cannibals, but it is the custom of their people never to attack unless the victims are outnumbered. Each missionary is aware of what might happen if the party is accidentally divided. The group reaches the bank of a wide, deep flowing river. The party has to cross it. One of the bearers chances upon a two-man dugout upturned in the mud. A terrible grin spreads across his face as he savours the implications of his find.

  10. 3 missionaries, 3 cannibals • Cannibals only attack if victims are outnumbered. • Have to cross the river in a two-man dugout. • How to get all across without letting cannibals outnumber missionaries on either bank?

  11. Try solving the problem

  12. You probably used methods similar to GPS’s – building up a sequence of river crossings one at a time, back-tracking occasionally when things go wrong.
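The crossing-by-crossing search described above can be sketched as a breadth-first search over states. This is a minimal sketch in that spirit (GPS itself used means-ends analysis, not plain BFS); a state records how many missionaries and cannibals are on the left bank and where the boat is:

```python
from collections import deque

# Breadth-first search for the missionaries-and-cannibals problem.
def solve(m=3, c=3):
    start = (m, c, 1)            # (missionaries left, cannibals left, boat on left?)
    goal = (0, 0, 0)

    def safe(ml, cl):
        # On each bank: no missionaries, or missionaries not outnumbered.
        mr, cr = m - ml, c - cl
        return (ml == 0 or ml >= cl) and (mr == 0 or mr >= cr)

    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (ml, cl, boat), path = frontier.popleft()
        if (ml, cl, boat) == goal:
            return path
        for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:  # boat holds 1 or 2
            step = -1 if boat else 1         # boat carries people away from its side
            nml, ncl = ml + step * dm, cl + step * dc
            if 0 <= nml <= m and 0 <= ncl <= c and safe(nml, ncl):
                nxt = (nml, ncl, 1 - boat)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
    return None

path = solve()
print(len(path) - 1, "crossings")   # the shortest solution needs 11 crossings
```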

  13. GPS works by working out how to arrive at goal state from current state. • Similar problems – Tower of Hanoi • Trying to get from home to Sheffield University • Procedures selected according to their ability to reduce the observed difference between the current state and the goal state = means ends analysis

  14. GPS designed to imitate human problem solving • tries to identify a series of lesser problems which, if solved, would lead to a solution of the main problem. • The order in which goals and subgoals were considered was similar to the way humans approached problems

  15. GPS • Search • Start position • Transitions • Goal or solution position • Heuristics • Human resemblance?

  16. GPS used heuristics (not algorithms) • Trial and error guided by tables telling it which moves to try first • ‘Heuristic’ from the Greek word heuriskein (to discover) • Archimedes shouted ‘Heureka!’ (eureka) as water was displaced from the bath. • Like a rule of thumb • ‘a process that may solve a given problem, but offers no guarantee of doing so, is called a “heuristic” for that problem’ • Instead of working through all possible solutions, use short cuts, i.e. rules that eliminate less likely candidates and allow concentration on those that seem more likely.

  17. GPS – “general” because general problem-solving methods were separated from knowledge specific to the task at hand. • Problem-solving part (means-ends analysis) • Task-dependent knowledge, collected in data structures forming the task environment

  18. GPS can be considered from 3 perspectives • Relationship to human thought? • AI hype and exaggeration? • AI techniques

  19. 1. Relationship to human thought • GPS – behaves similarly to humans • A program that solves problems • Is this evidence that human mind also solves problems like a computer? • Is mind a computer? • Symbol processing hypothesis • Strong symbol processing hypothesis .... To be returned to in later lectures.

  20. 2. AI hype? • Herbert Simon (1957) “It is not my aim to surprise or shock you – but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until – in a visible future – the range of problems they can handle will be coextensive with the range to which the human mind has been applied” • GPS – described as a program which simulates human thought

  21. By 1976, Drew McDermott writing • ‘by now GPS is a colourless term denoting a particularly stupid program to solve puzzles. But it originally meant ‘General Problem Solver’ which caused everybody a lot of needless excitement and distraction’ • Problem of lack of knowledge. • Also only suitable for particular kinds of problem – reaching a goal from a starting position

  22. Different kinds of problem • A man leaves a hut on the top of a mountain at noon and walks down the track to a hut at the bottom of the mountain. Next day a woman leaves the hut at the bottom of the mountain, again at noon, and walks up the track to the hut at the top. Is there a time, x o’clock, such that at x o’clock on the second afternoon the woman is at exactly the same place on the track as the man was at x o’clock on the first afternoon? • No obvious goal or starting position. • Also GPS relies on preset rankings and heuristics. • Too complicated for complex problems. • Work on GPS ceased around 1966.

  23. 3. AI techniques • Search – fundamental to traditional AI • Changing real world problem into a search problem • Start position • A set of transitions from one position to another • Goal or solution position.

  24. Search – originally developed in AI, but now fundamental to computing. Combinatorial explosion and heuristics

  25. Means-ends analysis • Involves detection of difference between current state and goal state • Once difference identified, an operator to reduce the difference must be found • But perhaps operator cannot be applied to current state • Subproblem of getting to state where operator can be applied • Operator may not result in goal state • Second subproblem of getting from new state to goal state

  26. MEA • The MEA process is applied recursively • Each rule (operator) has a LHS of preconditions and a RHS describing the aspects of the problem state it changes. • A difference table lists the rules and the differences they can reduce.

  27. Problem for household robot: moving desk with 2 things on it from one room to another. • Main difference between start and goal state is location. • Choose PUSH and CARRY

  28. Move desk with 2 things on it to new room

  29. CARRY: its preconditions cannot be met. PUSH: 4 preconditions. WALK to the object, clear the desk using PICKUP and PLACE. After PUSH, the objects are not on the desk. Must WALK back to collect them and put them on the desk using PICKUP and CARRY.

  30. Means-Ends Analysis • 1. Compare CURRENT to GOAL. If there are no differences, return. • 2. Otherwise select the most important difference and reduce it by doing the following until success or failure is indicated: • Select an as yet untried operator O that is applicable to the current difference. If there are no such operators, then signal failure. • Attempt to apply O to the current state. Generate descriptions of two states: O-START, a state in which O’s preconditions are satisfied, and O-RESULT, the state that would result if O were applied in O-START. • If FIRST-PART = MEA(CURRENT, O-START) and LAST-PART = MEA(O-RESULT, GOAL) are successful, then signal success.
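The recursive procedure above can be sketched in Python. This is a deliberately simplified version: states are sets of facts, operators only add facts (no deletions), and the operator names and facts below are invented for illustration, not GPS's actual data structures:

```python
# Simplified recursive means-ends analysis. An operator is a triple
# (name, preconditions, additions); states are frozensets of facts.

def mea(current, goal, operators, depth=6):
    """Return (plan, resulting state), or None on failure."""
    if goal <= current:                 # no differences between CURRENT and GOAL
        return [], current
    if depth == 0:
        return None                     # give up on over-deep recursion
    for name, pre, add in operators:
        if not add & (goal - current):  # operator must reduce a current difference
            continue
        first = mea(current, pre, operators, depth - 1)   # sub-problem: reach O-START
        if first is None:
            continue
        plan1, state1 = first
        state2 = state1 | add           # apply O, giving O-RESULT
        rest = mea(state2, goal, operators, depth - 1)    # sub-problem: O-RESULT to GOAL
        if rest is None:
            continue
        plan2, state3 = rest
        return plan1 + [name] + plan2, state3
    return None

# Toy version of the household-robot desk problem from the slides.
ops = [
    ("WALK",   frozenset(), frozenset({"at desk"})),
    ("PICKUP", frozenset({"at desk"}), frozenset({"desk clear"})),
    ("PUSH",   frozenset({"at desk", "desk clear"}), frozenset({"desk moved"})),
]
plan, _ = mea(frozenset(), frozenset({"desk moved"}), ops)
print(plan)   # a plan that clears the desk before pushing it
```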

  31. Other search strategies • Exhaustive search • Depth first • Breadth first • Generate and test • Hill climbing • Best first search • Problem reduction • Constraint satisfaction • Means-ends Analysis.

  32. Generate and test • Simplest search strategy (1) Generate a possible solution (2) Test to see if it is a solution (compare the end point of the path to the goal state) (3) If a solution is found, quit. Else return to (1) • A depth-first search procedure. Should find a solution eventually, but could take a long time. • AKA the British Museum algorithm – like finding an object in the British Museum by wandering randomly.
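The three steps above can be sketched directly. The "problem" here is an invented toy (find three distinct digits with a given sum and product), chosen only so that generate and test are both trivial to write:

```python
import itertools

# Generate-and-test in its simplest form.

def generate():                   # step (1): propose candidate solutions
    return itertools.combinations(range(10), 3)

def is_solution(c):               # step (2): test a candidate against the goal
    a, b, d = c
    return a + b + d == 20 and a * b * d > 100

for candidate in generate():
    if is_solution(candidate):    # step (3): quit on success, else keep generating
        print(candidate)
        break
```

Note that the generator enumerates every candidate blindly; nothing steers it toward promising ones, which is exactly why exhaustive generate-and-test becomes hopeless as the space grows.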

  33. Chess

  34. "Within ten years a digital computer will be the world's chess champion unless the rules bar it from competition.“ Allen Newell (1957) 1997 defeat of chess world Grand Master Garry Kasparov by Deep Blue and the IBM team.

  35. Chess • Combinatorial explosion – in the middle part of the game, about 36 moves are possible. • Your opponent can respond to each of your moves in 36 different ways. • So to consider the effect of your moves, you need to consider 36 × 36 = 1296 possibilities • After a further move each, 1,679,616 possibilities

  36. The computer can’t consider all possible moves. • Heuristic: a static evaluation function (how good does the board look?) applied to as many board positions as possible in the time available. • Alpha-beta pruning to reduce the size of the search tree. • Don’t consider moves that lead to bad board positions • Or moves that lead to good board positions the opponent won’t let you reach • Still requires powerful computers
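The pruning idea above can be shown on a toy game tree. This is the standard textbook alpha-beta algorithm, not Deep Blue's actual search; leaves hold static-evaluation scores, and internal nodes are lists of children:

```python
import math

# Minimax with alpha-beta pruning. A branch is abandoned ("pruned") as
# soon as it is clear the opponent would never allow play to reach it.

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):   # leaf: return its static evaluation
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # opponent won't allow this line
                break                    # prune remaining children
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

tree = [[3, 5], [2, [9, 1]], [0, -1]]
print(alphabeta(tree, True))   # the [9, 1] subtree and the -1 leaf are never examined
```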

  37. Arthur Samuel, early draughts (checkers) playing program • One of the first AI programs • Credit assignment problem – which of the many moves was responsible for winning? • Samuel introduced static evaluation • One version of program played against another – • One used randomly modified static evaluation function, the other didn’t change. • If randomly modified version did better, then that version was adopted for next round.
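The self-play scheme on this slide can be sketched as follows. Everything here is an invented toy analogue, not Samuel's actual draughts code: "winning a round" is approximating a hidden target evaluation on sample positions, which stands in for winning real games of draughts:

```python
import random

# Toy analogue of Samuel's scheme: a challenger copy of the static
# evaluation function uses randomly perturbed weights; whenever the
# challenger does better, its weights are adopted for the next round.

random.seed(0)
TARGET = [1.0, -0.5, 2.0]      # hidden "true" feature weights (invented)

def score(weights, position):
    # Static evaluation: a weighted sum of board features.
    return sum(w * f for w, f in zip(weights, position))

def loss(weights, positions):
    # How far this evaluation is from the target, over sample positions.
    return sum((score(weights, p) - score(TARGET, p)) ** 2 for p in positions)

positions = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(50)]
weights = [0.0, 0.0, 0.0]

for _ in range(200):
    challenger = [w + random.gauss(0, 0.1) for w in weights]
    if loss(challenger, positions) < loss(weights, positions):
        weights = challenger   # the randomly modified version "won", so keep it
```

Note the connection to the credit assignment problem: comparing whole evaluation functions by their overall results sidesteps the question of which individual move deserved the credit.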

  38. Knowledge representation • GPS – no knowledge of problem domain • ELIZA – no knowledge • How can knowledge be represented? • Next week: Knowledge .....

  39. Summary • Early AI programs • The Logic Theorist • GPS General Problem Solver • Relationship to human thought? • AI hype? • AI techniques • Search • Means-Ends-Analysis • Chess • Illusion and AI • Comparison to humans • search

  40. Winograd’s method: based on logic and the idea that words point to things in the world. • E.g. pick up the ball to the right of the small box • Known instruction – pick up • Find the object that satisfies the constraints – balls c and d • Ambiguous – can ask. • If the answer is ‘the large one’ -> ball c
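The constraint-matching step above can be sketched as a toy. The object data and the `resolve` helper are invented for illustration; SHRDLU's actual parsing and world model were far richer:

```python
# Toy sketch of reference resolution: find objects satisfying a
# description's constraints; ask for clarification if several qualify.

objects = [
    {"name": "ball c", "kind": "ball", "size": "large", "x": 5},
    {"name": "ball d", "kind": "ball", "size": "small", "x": 4},
    {"name": "small box", "kind": "box", "size": "small", "x": 2},
]

def resolve(kind, right_of_x, size=None):
    matches = [o for o in objects
               if o["kind"] == kind and o["x"] > right_of_x
               and (size is None or o["size"] == size)]
    if len(matches) == 1:
        return matches[0]["name"]
    return "ambiguous - which one?"   # SHRDLU would ask the user

box_x = next(o["x"] for o in objects if o["name"] == "small box")
print(resolve("ball", box_x))            # both balls qualify, so it must ask
print(resolve("ball", box_x, "large"))   # answering 'the large one' picks ball c
```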

  41. But Shrdlu’s knowledge of the world limited. • Example from Haugeland. • Build a steeple • SORRY I DON’T KNOW THE WORD ‘STEEPLE’ • A ‘steeple’ is a stack that contains two green cubes and a pyramid. • I UNDERSTAND • Trade you the steeple for three red cubes • SORRY I DON’T KNOW THE WORD ‘TRADE’ • A ‘trade’ is a free exchange of ownership • SORRY I DON’T KNOW THE WORD ‘FREE’ • Sorry, I thought you were smarter than you are • SORRY I DON’T KNOW THE WORD ‘SORRY’.

  42. Shrdlu: domain-specific knowledge (as opposed to domain-general) about microworld. • But does it really understand even its microworld?

  43. Expert systems • Depth of knowledge about constrained domain. • Commercially exploitable, real applications • Knowledge stored as production rules • If the problem is P then the answer is A
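"If the problem is P then the answer is A" rules can be run by a very small forward-chaining interpreter. The rules and facts below are invented examples, not drawn from any real expert system:

```python
# Minimal forward-chaining production-rule sketch: each rule is a set of
# conditions and a conclusion; rules fire until nothing new is derived.

RULES = [
    ({"engine won't start", "lights dim"}, "battery is flat"),
    ({"engine won't start", "lights bright"}, "starter motor fault"),
    ({"battery is flat"}, "recharge or replace battery"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: add its conclusion
                changed = True
    return facts

print(forward_chain({"engine won't start", "lights dim"}))
```

Note how the third rule chains off the first: depth of knowledge in a constrained domain comes from many such rules feeding one another.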
