
# Artificial Intelligence for Games: Informed Search (2)






### Artificial Intelligence for Games: Informed Search (2)

Patrick Olivier

### Heuristic functions

• sample heuristics for the 8-puzzle:

• h1(n) = number of misplaced tiles

• h2(n) = total Manhattan distance

• h1(S) = ?

• h2(S) = ?

• sample heuristics for the 8-puzzle:

• h1(n) = number of misplaced tiles

• h2(n) = total Manhattan distance

• h1(S) = 8

• h2(S) = 3+1+2+2+2+3+3+2 = 18
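The two heuristics are easy to state in code. Below is a minimal sketch; the scraped slides do not reproduce the state S itself, so the state used here is the standard textbook 8-puzzle example, chosen because it yields exactly the values above (h1(S) = 8, h2(S) = 18). Function names and the tuple encoding are illustrative choices.

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, encoded as 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of every tile from its goal position."""
    dist = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        dist += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return dist

# Assumed state S (standard textbook example, read row by row):
S    = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(h1(S, goal), h2(S, goal))  # 8 18
```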

• dominance:

• h2(n) ≥ h1(n) for all n (both admissible)

• h2 is better for search (closer to the perfect heuristic)

• fewer nodes need to be expanded

• randomly generate 8-puzzle problems

• 100 examples for each solution depth

• contrast behaviour of heuristics & strategies
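A small experiment in this spirit can be sketched with plain A*. This is not the lecture's 100-instances-per-depth study, just a single run on the 8-puzzle instance that appears in the IDA* exercise later in the deck (goal 1 2 3 / 8 X 4 / 7 6 5); dominance predicts that h2 expands no more nodes than h1.

```python
import heapq

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # goal configuration from the exercise

def h1(state):
    """Misplaced tiles (blank excluded)."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def h2(state):
    """Total Manhattan distance (blank excluded)."""
    d = 0
    for i, t in enumerate(state):
        if t:
            j = GOAL.index(t)
            d += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return d

def neighbours(state):
    """States reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def astar(start, h):
    """Plain A*; returns (solution cost, number of node expansions)."""
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}
    expansions = 0
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if g > best_g[state]:
            continue                  # stale queue entry
        if state == GOAL:
            return g, expansions
        expansions += 1
        for nxt in neighbours(state):
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None, expansions

start = (1, 2, 3, 6, 0, 4, 8, 7, 5)   # start state from the exercise
for h in (h1, h2):
    cost, n = astar(start, h)
    print(h.__name__, "cost:", cost, "expansions:", n)
```

Both heuristics find the optimal 4-move solution; the dominant heuristic h2 does so with fewer expansions.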

• Memory enhancements

• IDA*: Iterative-Deepening A*

• SMA*: Simplified Memory-Bounded A*

• Other enhancements (next lecture)

• Dynamic weighting

• LRTA*: Learning Real-time A*

• MTS: Moving target search

• Local search (next lecture)

• Hill climbing & beam search

• Simulated annealing & genetic algorithms

• Improving the heuristic function

• not always easy for path planning tasks

• Implementation of A*

• key aspect for large search spaces

• reduces the memory requirements of A* without sacrificing optimality

• cost-bound iterative depth-first search with linear memory requirements

• expands all nodes within a cost contour

• store f-cost (cost-limit) for next iteration

• repeat for next highest f-cost

Start and goal states for the exercise (X marks the blank):

```
Start state:   Goal state:
1 2 3          1 2 3
6 X 4          8 X 4
8 7 5          7 6 5
```

### IDA*: exercise

• Order of expansion:

• Move space up

• Move space down

• Move space left

• Move space right

• Evaluation function:

• g(n) = number of moves

• h(n) = number of misplaced tiles

• Expand the state space to a depth of 3 and calculate the evaluation function
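The exercise can also be checked in code. The sketch below is a minimal IDA* under the stated assumptions (g = number of moves, h = misplaced tiles); the state encoding and function names are my own. The first cost bound is h(start) = 3, the next is 4, and the goal is found at depth 4, matching the slides that follow.

```python
GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # goal state, read row by row; 0 = blank

def h(state):
    """Misplaced tiles, as specified in the exercise."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def successors(state):
    """Blank moves in the exercise's order: up, down, left, right."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def search(path, g, bound):
    """Depth-first search cut off at f = g + h > bound."""
    state = path[-1]
    f = g + h(state)
    if f > bound:
        return f, None               # cut off; report f for the next bound
    if state == GOAL:
        return "FOUND", path
    minimum = float("inf")
    for nxt in successors(state):
        if nxt in path:              # avoid cycling back along the current path
            continue
        t, p = search(path + [nxt], g + 1, bound)
        if t == "FOUND":
            return "FOUND", p
        minimum = min(minimum, t)
    return minimum, None

def ida_star(start):
    """Returns (solution path, list of f-cost bounds used)."""
    bound = h(start)
    bounds = [bound]
    while True:
        t, path = search([start], 0, bound)
        if t == "FOUND":
            return path, bounds
        bound = t                    # smallest f-cost that exceeded the old bound
        bounds.append(bound)

start = (1, 2, 3, 6, 0, 4, 8, 7, 5)
path, bounds = ida_star(start)
print("bounds:", bounds, "solution length:", len(path) - 1)
```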

### IDA*: f-cost = 3

[Slide diagram: first IDA* iteration with cost limit f = 3. Only the start state (1 2 3 / 6 X 4 / 8 7 5, f = 0+3 = 3) falls within the limit; each of its successors is cut off with f = g+h of 4 or 5 (e.g. 1+3 = 4, 1+4 = 5). Next f-cost = 4.]

### IDA*: f-cost = 4

[Slide diagram: second IDA* iteration with cost limit f = 4. Successors with f = g+h ≤ 4 (e.g. 1+3 = 4, 2+2 = 4, 3+1 = 4) are expanded, and the goal state (1 2 3 / 8 X 4 / 7 6 5) is reached with f = 4+0 = 4. Next f-cost = 5.]

• SMA*

• When memory runs out, drop the costliest (highest f-cost) leaf node

• Back its cost up to its parent (the subtree may be needed, and regenerated, later)

• Properties

• Utilises whatever memory is available

• Avoids repeated states (as memory allows)

• Complete (if enough memory to store path)

• Optimal (or optimal in memory limit)

• Optimally efficient (with memory caveats)

• Use the state space given in the example

• Execute the SMA* algorithm over this state space

• Be sure that you understand the algorithm!

• The admissibility condition guarantees that an optimal path is found

• In path planning a near-optimal path can be satisfactory

• Try to minimise search instead of minimising cost:

• i.e. find a near-optimal path (quickly)

fw(n) = (1 − w)·g(n) + w·h(n)
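A one-line sketch of this weighted evaluation makes the effect of w concrete (the function name f_w and the sample values g = 4, h = 3 are illustrative):

```python
def f_w(g, h, w):
    """Weighted evaluation fw(n) = (1 - w)*g(n) + w*h(n)."""
    return (1 - w) * g + w * h

# For a node with g(n) = 4 and h(n) = 3:
print(f_w(4, 3, 0.5))  # 3.5 -- w = 0.5 orders nodes exactly like A* (f/2)
print(f_w(4, 3, 1.0))  # 3.0 -- pure greedy best-first search (h only)
print(f_w(4, 3, 0.0))  # 4.0 -- uniform-cost search (g only)
```

Raising w above 0.5 biases the search toward the heuristic, which tends to find a near-optimal path quickly, at the price of the optimality guarantee.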