Artificial intelligence



Artificial Intelligence

Search Problem (2)


Artificial intelligence

Oracle path

Uninformed and informed searches

Since we know what the goal state is like, is it possible to get there faster?

Breadth-first search



Heuristic Search

Heuristics means choosing the branches in a state space that are most likely to lead to an acceptable problem solution. It is used when no exact solution is available (as in medical diagnosis) or when the computational cost of an exhaustive search is too high (as in chess).



Informed search

  • So far, we have assumed that no non-goal state looks better than another

  • Unrealistic

    • Even without knowing the road structure, some locations seem closer to the goal than others

    • Some states of the 8-puzzle seem closer to the goal than others

  • Makes sense to expand closer-seeming nodes first



Heuristic

Merriam-Webster's Online Dictionary

Heuristic (pron. \hyu-’ris-tik\): adj. [from Greek heuriskein to discover.] involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods

The Free On-line Dictionary of Computing (15Feb98)

heuristic 1. <programming> A rule of thumb, simplification or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood. Unlike algorithms, heuristics do not guarantee feasible solutions and are often used with no theoretical guarantee. 2. <algorithm> approximation algorithm.



A heuristic function

  • Let the evaluation function h(n) (the heuristic) be defined as follows:

    • h(n) = estimated cost of the cheapest path from node n to a goal node.

    • If n is a goal node, then h(n) = 0.
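
As a concrete illustration, here is a minimal Python sketch of such a function, assuming straight-line distance as the estimate and hypothetical 2-D node coordinates (the coordinates are not part of the slides):

    import math

    # Hypothetical node coordinates; in a route-finding problem these
    # would come from the map data.
    coords = {"A": (0, 0), "B": (4, 3), "G": (10, 0)}

    def h(n, goal="G"):
        """Estimated cost of the cheapest path from node n to the goal node,
        here taken as the straight-line (Euclidean) distance; h(goal) == 0."""
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x1 - x2, y1 - y2)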


Examples (1): Route finding

  • Imagine the problem of finding a route on a road map and that the NET below is the road map:

[Figure: a road map with nodes S, A, B, C, D, E, F, G; step costs on the links (2–5) and straight-line distances to G next to the nodes (e.g. 10.4, 6.7, 8.9, 6.9).]

  • Define f(T) = the straight-line distance from T to G

  • The estimate can be wrong!



A Quick Review

  • g(n) = cost from the initial state to the current state n

  • h(n) = estimated cost of the cheapest path from node n to a goal node

  • f(n) = evaluation function to select a node for expansion (usually the lowest cost node)


Artificial intelligence

[Figure sequence: seven slides showing a small road map with nodes A–E and step costs (50, 60, 75, 80, 100, 125, 150) on the links; successive slides show accumulated path costs of 50, 125, 200, 300, 450 and 380.]


Heuristic Functions

  • Estimate of path cost h

    • From state to nearest solution

    • h(state) >= 0

    • h(solution) = 0

  • Example: straight line distance

    • As the crow flies in route finding

  • Where does h come from?

    • maths, introspection, inspection or programs (e.g. ABSOLVER)

[Figure: a route-finding example over the cities London, Liverpool, Leeds, Nottingham and Peterborough, with straight-line distances (135, 155, 75, 120).]



Romania with straight-line dist.


Examples (2): 8-puzzle

[Figure: two 8-puzzle boards (tiles 1–8), each evaluating to 4 under the corresponding heuristic (f1 = 4, f2 = 4).]

  • f1(T) = the number of correctly placed tiles on the board

  • f2(T) = the number of incorrectly placed tiles on the board:

    • gives a (rough!) estimate of how far we are from the goal

Most often, ‘distance to goal’ heuristics are more useful!


Examples (3): Manhattan distance

[Figure: an 8-puzzle board (tiles 1–8) evaluated with f3.]

  • f3(T) = the sum of (the horizontal + vertical distance that each tile is away from its final destination):

    • gives a better estimate of distance from the goal node

    • for the board shown: f3 = 1 + 1 + 2 + 2 = 6
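
For concreteness, both heuristics can be sketched in Python, assuming a board is a tuple of nine entries read row by row with 0 for the blank, and assuming the goal layout below (hypothetical, for illustration only):

    # Hypothetical goal layout, read row by row; 0 marks the blank.
    GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)

    def misplaced_tiles(board):
        """f2: number of tiles (ignoring the blank) not in their goal position."""
        return sum(1 for i, tile in enumerate(board)
                   if tile != 0 and tile != GOAL[i])

    def manhattan(board):
        """f3: sum of the horizontal + vertical distances of each tile from
        its position in the goal layout."""
        total = 0
        for i, tile in enumerate(board):
            if tile == 0:
                continue
            g = GOAL.index(tile)
            total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
        return total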


Examples (4): Chess

  • f(T) = (value count of black pieces) − (value count of white pieces)

[Figure: f written out as a sum of piece values, v(·) + v(·) + … for one side minus v(·) + v(·) for the other.]


Heuristic Evaluation Function

  • It evaluates the performance of the different heuristics for solving the problem:

    f(n) = g(n) + h(n)

    • where f is the evaluation function,

    • g(n) is the actual length of the path from the start state to state n, and

    • h(n) is the estimated distance from state n to the goal.



Search Methods

  • Best-first search

  • Greedy best-first search

  • A* search

  • Hill-climbing search

  • Genetic algorithms



Best-First Search

  • Evaluation function f gives cost for each state

    • Choose state with smallest f(state) (‘the best’)

    • Agenda: f decides where new states are put

    • Graph: f decides which node to expand next

  • Many different strategies depending on f

    • For uniform-cost search f = path cost

    • greedy best-first search

    • A* search



Greedy best-first search

  • Evaluation function f(n) = h(n) (heuristic)

    = estimate of cost from n to goal

  • Ignores the path cost

  • Greedy best-first search expands the node that appears to be closest to goal


Greedy search

[Figure: a search tree with nodes a–i, each labelled with its heuristic value h; one branch leads to goal g, a shorter branch leads to goal i.]

  • Use as an evaluation function f(n) = h(n)

  • Select for expansion the node believed to be closest to a goal node (hence “greedy”), i.e., the node with the smallest f value,

  • as in the example above.

    • Assuming all arc costs are 1, greedy search will find goal g, which has a solution cost of 5.

    • However, the optimal solution is the path to goal i, with cost 3.



Romania with step costs in km


Greedy best-first search example

[Figure sequence: four slides stepping through a greedy best-first search example.]


Optimal Path

[Figure: the optimal path for the preceding example.]



Greedy Best-First Search Algorithm

Input: State Space

Output: failure or a path from a start state to a goal state.

Assumptions:

  • L is a list of nodes that have not yet been examined ordered by their h value.

  • The state space is a tree where each node has a single parent.

  • Set L to be a list of the initial nodes in the problem.

  • While L is not empty

    • Pick a node n from the front of L.

    • If n is a goal node

      • stop and return it and the path from the initial node to n.

        Else

      • remove n from L.

      • For each child c of n

        • insert c into L while preserving the ordering of nodes in L and labelling c with its path from the initial node as well as its h value.

          End for

          End if

          End while

          Return failure
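
A minimal Python sketch of the procedure above, using a priority queue ordered by h; the is_goal, successors and h callables are an assumed interface, not part of the slides:

    import heapq
    from itertools import count

    def greedy_best_first(start, is_goal, successors, h):
        """Greedy best-first search: always expand the frontier node with the
        smallest heuristic value h(n). Returns a path or None on failure."""
        tie = count()                                        # tie-breaker for equal h values
        frontier = [(h(start), next(tie), start, [start])]   # the list L, ordered by h
        while frontier:
            _, _, n, path = heapq.heappop(frontier)          # pick node n from the front of L
            if is_goal(n):
                return path                                  # path from the initial node to n
            for c in successors(n):
                heapq.heappush(frontier, (h(c), next(tie), c, path + [c]))
        return None                                          # failure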



Properties of greedy best-first search

  • Complete?

    • Not unless it keeps track of all states visited

      • Otherwise can get stuck in loops (just like DFS)

  • Optimal?

    • No – we just saw a counter-example

  • Time?

    • O(b^m): can generate all nodes at depth m before finding solution

    • m = maximum depth of search space

  • Space?

    • O(b^m) – again, worst case, can generate all nodes at depth m before finding solution



Uniform Cost Search

  • Let g(n) be the sum of the edge costs from the root to node n. If g(n) is our overall cost function, then best-first search becomes Uniform Cost Search, also known as Dijkstra’s single-source shortest-path algorithm.

  • Initially the root node is placed in Open with a cost of zero. At each step, the next node n to be expanded is an Open node whose cost g(n) is lowest among all Open nodes.
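
A sketch of uniform-cost search under a similar assumed interface, where successors(n) yields (child, edge_cost) pairs and the Open list is ordered by g(n):

    import heapq
    from itertools import count

    def uniform_cost_search(start, is_goal, successors):
        """Uniform-cost (Dijkstra-style) search: always expand the Open node
        whose path cost g(n) is lowest. successors(n) yields (child, cost)."""
        tie = count()                                 # tie-breaker so the heap never compares states
        frontier = [(0, next(tie), start, [start])]   # root enters Open with cost zero
        best_g = {start: 0}
        while frontier:
            g, _, n, path = heapq.heappop(frontier)
            if is_goal(n):
                return g, path                        # cheapest path found
            for c, cost in successors(n):
                g_c = g + cost
                if g_c < best_g.get(c, float("inf")):
                    best_g[c] = g_c
                    heapq.heappush(frontier, (g_c, next(tie), c, path + [c]))
        return None                                   # failure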


Example of Uniform Cost Search

  • Assume an example tree with different edge costs, represented by numbers next to the edges.

[Figure: an example tree with nodes a–g and edge costs of 1 and 2; the legend distinguishes generated nodes from expanded nodes.]


Uniform-cost search Sample

[Figure sequence: four slides stepping through uniform-cost search on a route map; frontier costs shown include 0, 75, 118, 140, 146 and 229, with expanded nodes crossed out.]



Uniform-cost search

  • Complete? Yes

  • Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution and ε is the minimum edge cost

  • Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)

  • Optimal? Yes



Hill Climbing & Gradient Descent

  • For artefact-only problems (don’t care about the path)

  • Depends on some e(state)

    • Hill climbing tries to maximise score e

    • Gradient descent tries to minimise cost e (the same strategy!)

  • Randomly choose a state

    • Only choose actions which improve e

    • If cannot improve e, then perform a random restart

      • Choose another random state to restart the search from

  • Only ever have to store one state (the present one)

    • Can’t have cycles as e always improves



Hill-climbing search

  • Problem: depending on initial state, can get stuck in local maxima



Hill Climbing - Algorithm

1. Pick a random point in the search space

2. Consider all the neighbors of the current state

3. Choose the neighbor with the best quality and move to that state

4. Repeat steps 2–3 until all the neighboring states are of lower quality

5. Return the current state as the solution state
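
A direct transcription of these steps into Python; random_state, neighbors and quality are assumed problem-specific callables, not part of the slides:

    def hill_climbing(random_state, neighbors, quality):
        """Steps 1-5 above: start from a random point and keep moving to the
        best neighbor until no neighbor improves on the current state."""
        current = random_state()                     # 1. pick a random point
        while True:
            candidates = neighbors(current)          # 2. consider all neighbors
            if not candidates:
                return current
            best = max(candidates, key=quality)      # 3. best-quality neighbor
            if quality(best) <= quality(current):    # 4. stop when nothing improves
                return current                       # 5. current state is the solution
            current = best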



Example: 8 Queens

  • Place 8 queens on board

    • So no one can “take” another

  • Gradient descent search

    • Throw queens on randomly

    • e = number of pairs which can attack each other

    • Move a queen out of other’s way

      • Decrease the evaluation function

    • If this can’t be done

      • Throw queens on randomly again
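
A small sketch of the evaluation function e for this formulation, assuming a state is a tuple whose i-th entry is the row of the queen in column i (a common encoding, not specified on the slide):

    def attacking_pairs(state):
        """e = number of pairs of queens that can attack each other
        (same row or same diagonal); columns are distinct by construction."""
        e = 0
        for i in range(len(state)):
            for j in range(i + 1, len(state)):
                same_row = state[i] == state[j]
                same_diagonal = abs(state[i] - state[j]) == j - i
                if same_row or same_diagonal:
                    e += 1
        return e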



Hill-climbing search

  • Looks one step ahead to determine if any successor is better than the current state; if there is, move to the best successor.

  • Rule:If there exists a successor s for the current state n such that

    • h(s) < h(n) and

    • h(s) ≤ h(t) for all the successors t of n,

      then move from n to s. Otherwise, halt at n.



Hill-climbing search

  • Similar to Greedy search in that it uses h(), but does not allow backtracking or jumping to an alternative path since it doesn’t “remember” where it has been.



A* Search Algorithm

Evaluation function: f(n) = g(n) + h(n)

  • h(n) = estimated cost to the goal from n

  • g(n) = cost so far to reach n

A* uses admissible heuristics, i.e., h(n) ≤ h*(n), where h*(n) is the true cost from n.

A* search finds the optimal path.



A* search

  • Best-known form of best-first search.

  • Idea: avoid expanding paths that are already expensive.

  • Combines uniform-cost and greedy search

  • Evaluation function f(n)=g(n) + h(n)

    • g(n) the cost (so far) to reach the node

    • h(n) estimated cost to get from the node to the goal

    • f(n) estimated total cost of path through n to goal

  • Implementation: Expand the node n with minimum f(n)
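
A minimal A* sketch combining the two, under the same assumed successors(n) -> (child, cost) interface used in the earlier sketches:

    import heapq
    from itertools import count

    def a_star(start, is_goal, successors, h):
        """A* search: expand the node with minimum f(n) = g(n) + h(n).
        With an admissible h, the first goal expanded is optimal."""
        tie = count()
        frontier = [(h(start), next(tie), 0, start, [start])]   # (f, tie, g, node, path)
        best_g = {start: 0}
        while frontier:
            _, _, g, n, path = heapq.heappop(frontier)
            if is_goal(n):
                return g, path
            for c, cost in successors(n):
                g_c = g + cost
                if g_c < best_g.get(c, float("inf")):
                    best_g[c] = g_c
                    heapq.heappush(frontier, (g_c + h(c), next(tie), g_c, c, path + [c]))
        return None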


A* search example

[Figure sequence: six slides stepping through an A* search example.]


Artificial intelligence

[Figure: a graph of step costs from the start (S) to the goal (G) over nodes A, B, C, D, E, F (edge costs 1, 2, 3, 4, 6, 8, 20), with straight-line distances: h(S-G)=10, h(A-G)=7, h(B-G)=10, h(C-G)=20, h(D-G)=1, h(E-G)=8, h(F-G)=1.]

Try yourself

  • The graph above shows the step costs for different paths going from the start (S) to the goal (G), together with the straight-line distances.

  • Draw the search tree for this problem. Avoid repeated states.

  • Give the order in which the tree is searched (e.g. S-C-B...-G) for A* search.

  • Use the straight-line distance as the heuristic function, i.e. h = SLD, and indicate for each node visited what the value of the evaluation function, f, is.


Properties of A*

  • Complete? Yes (unless there are infinitely many nodes with f ≤ f(G) )

  • Time? Exponential

  • Space? Keeps all nodes in memory

  • Optimal? Yes


Artificial intelligence

City Map

[Figure: a city map from the start (B) to the goal (O) with nodes B, N, P, Q, W, I, C, A, M, K, J, O; step costs on the links and heuristic values next to the nodes (B: 2714, N: 2427, P: 1841, Q: 2848, W: 1974, I: 1190, C: 1170, A: 1318, M: 725, K: 480, J: 1666, O: 0).]


Artificial intelligence

[Figure sequence: a step-by-step A* trace on the City Map, maintaining an Open list ordered by f = h + g and a Close list of expanded nodes. After expanding B, for example, the Open list holds N (2427 + 50 = 2477), P (1841 + 630 = 2471) and Q (2848 + 200 = 3048), so P is expanded next. Subsequent slides move further nodes (P, I, N, M, K) to the Close list, and the goal O eventually comes to the front of the Open list with f = 0 + 2940 = 2940.]



IDA* Search

  • Problem with A* search

    • You have to record all the nodes

    • In case you have to back up from a dead-end

  • A* searches often run out of memory, not time

  • Use the same iterative deepening trick as IDS

    • But iterate over f(state) rather than depth

    • Define contours: f < 100, f < 200, f < 300 etc.

  • Complete and optimal like A*, but uses less memory
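
A sketch of IDA*, repeating a depth-first search with an increasing bound on f = g + h rather than on depth (same assumed interface as the A* sketch):

    def ida_star(start, is_goal, successors, h):
        """IDA*: repeated depth-first searches bounded by f = g + h, raising the
        bound each iteration to the smallest f value that exceeded it."""
        def dfs(node, g, bound, path):
            f = g + h(node)
            if f > bound:
                return f, None                       # report the overshoot
            if is_goal(node):
                return f, path
            smallest_overshoot = float("inf")
            for child, cost in successors(node):
                if child in path:                    # avoid trivial cycles
                    continue
                t, found = dfs(child, g + cost, bound, path + [child])
                if found is not None:
                    return t, found
                smallest_overshoot = min(smallest_overshoot, t)
            return smallest_overshoot, None

        bound = h(start)
        while True:
            bound, found = dfs(start, 0, bound, [start])
            if found is not None:
                return found
            if bound == float("inf"):
                return None                          # failure: no goal reachable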



IDA* Search: Contours

  • Find all nodes

    • Where f(n) < 100

    • Ignore f(n) >= 100

  • Find all nodes

    • Where f(n) < 200

    • Ignore f(n) >= 200

  • And so on…



Genetic algorithms

  • A successor state is generated by combining two parent states

  • Start with k randomly generated states (population)

  • A state is represented as a string over a finite alphabet (often a string of 0s and 1s)

  • Evaluation function (fitness function). Higher values for better states.

  • Produce the next generation of states by selection, crossover, and mutation
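
One generation of such an algorithm can be sketched as follows for bit-string states; the fitness-proportionate selection, single-point crossover and per-bit mutation rate are assumed choices, not prescribed by the slides:

    import random

    def next_generation(population, fitness, mutation_rate=0.05):
        """Produce the next generation by fitness-proportionate selection,
        single-point crossover and per-bit mutation. States are 0/1 strings."""
        weights = [fitness(s) for s in population]

        def crossover(a, b):
            point = random.randint(1, len(a) - 1)    # single crossover point
            return a[:point] + b[point:]

        def mutate(s):
            return "".join(bit if random.random() > mutation_rate
                           else ("1" if bit == "0" else "0")
                           for bit in s)

        children = []
        for _ in range(len(population)):
            parent1, parent2 = random.choices(population, weights=weights, k=2)
            children.append(mutate(crossover(parent1, parent2)))
        return children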


Genetic algorithms

  • Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)

  • For four example states with fitness values 24, 23, 20 and 11, the selection probabilities are 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.



Mutation

  • Mutation randomly changes genes in the new offspring.

  • For binary encoding we can switch a few randomly chosen bits from 1 to 0 or from 0 to 1.

    Original offspring: 1101111000011110

    Mutated offspring:  1100111000001110
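
A sketch of this bit-flip mutation for a binary-encoded offspring; the number of flipped bits is an assumed parameter:

    import random

    def mutate(offspring, n_flips=2):
        """Flip a few randomly chosen bits of a binary-encoded offspring."""
        bits = list(offspring)
        for i in random.sample(range(len(bits)), n_flips):
            bits[i] = "1" if bits[i] == "0" else "0"
        return "".join(bits)

    # e.g. mutate("1101111000011110") could return "1100111000001110"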


Class Exercise: Local Search for Map/Graph Coloring


Artificial intelligence

[Figure: a map of towns A–M, with the START and the END (town M) marked; distances between towns are shown on the links (e.g. 20, 31, 32, 36, 40, 43, 45, 52, 61, 80, 102, 112, 122).]

G(n) = the cost of each move, i.e. the distance between each town (shown on the map).
H(n) = the distance between any town and town M.


Search Strategies

  • Uninformed

    • Breadth-first search

    • Depth-first search

    • Iterative deepening

    • Bidirectional search

    • Uniform-cost search

  • Informed

    • Greedy search

    • A* search

    • IDA* search

    • Hill climbing


Artificial intelligence

[Figure: a second map of towns A–M with distances on the links (e.g. 5, 10, 12, 15, 20, 23, 40).]

G(n) = the cost of each move, i.e. the distance between each town.
H(n) = the straight-line distance between any town and town M.


Artificial intelligence

  • Consider the following search problem. Assume a state is represented as an integer, that the initial state is the number 1, and that the two successors of a state n are the states 2n and 2n+1. For example, the successors of 1 are 2 and 3, the successors of 2 are 4 and 5, the successors of 3 are 6 and 7, etc. Assume the goal state is the number 12. Consider the following heuristics for evaluating the state n, where the goal state is g:

  • h1(n) = |n − g|, and h2(n) = (g − n) if n ≤ g, h2(n) = ∞ if n > g

  • Show the search trees generated for each of the following strategies for the initial state 1 and the goal state 12, numbering the nodes in the order expanded:

    a) Depth-first search    b) Breadth-first search

    c) Best-first search with heuristic h1    d) A* with heuristic (h1 + h2)

  • If any of these strategies gets lost and never finds the goal, then show the first few steps and say "FAILS".
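
For experimenting with this exercise, the state space and the two heuristics translate directly into Python (a sketch; the four search strategies themselves are left as the exercise):

    GOAL = 12

    def successors(n):
        """The two successors of a state n are 2n and 2n+1."""
        return [2 * n, 2 * n + 1]

    def h1(n):
        """h1(n) = |n - g|."""
        return abs(n - GOAL)

    def h2(n):
        """h2(n) = g - n if n <= g, infinity otherwise."""
        return GOAL - n if n <= GOAL else float("inf")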


Artificial intelligence

The end!

