
Artificial Intelligence for Games: Informed Search (2)

Patrick Olivier

[email protected]



Heuristic functions

  • sample heuristics for 8-puzzle:

    • h1(n) = number of misplaced tiles

    • h2(n) = total Manhattan distance

  • h1(S) = ?

  • h2(S) = ?



Heuristic functions

  • sample heuristics for 8-puzzle:

    • h1(n) = number of misplaced tiles

    • h2(n) = total Manhattan distance

  • h1(S) = 8

  • h2(S) = 3+1+2+2+2+3+3+2 = 18

  • dominance:

    • h2(n) ≥ h1(n) for all n (both are admissible)

    • h2 is better for search (its estimates are closer to the true cost)

    • fewer nodes need to be expanded
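Both heuristics are easy to compute directly. A minimal sketch (Python, not from the slides): the state S itself is not reproduced in this transcript, so the instance below is the standard textbook 8-puzzle example, which gives the same values h1(S) = 8 and h2(S) = 18.

```python
GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)            # 0 denotes the blank

S = (7, 2, 4,
     5, 0, 6,
     8, 3, 1)               # assumed instance; matches the slide's values

def h1(state):
    """h1: number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def h2(state):
    """h2: total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = GOAL.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

print(h1(S), h2(S))         # 8 18
```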



Example of dominance

  • randomly generate 8-puzzle problems

  • 100 examples for each solution depth

  • contrast behaviour of heuristics & strategies
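A small version of this experiment can be scripted. The sketch below is illustrative (the walk length, instance count, and resulting totals are assumptions, not the slide's figures): it generates solvable instances by random walks back from the goal, then counts A* expansions under each heuristic.

```python
import heapq, random

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # 0 is the blank

def neighbours(state):
    """States reachable by sliding the blank one square."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            s = list(state)
            j = nr * 3 + nc
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def h1(s):                           # misplaced tiles
    return sum(1 for a, b in zip(s, GOAL) if a != 0 and a != b)

def h2(s):                           # total Manhattan distance
    return sum(abs(i // 3 - GOAL.index(t) // 3) + abs(i % 3 - GOAL.index(t) % 3)
               for i, t in enumerate(s) if t != 0)

def astar_expansions(start, h):
    """A* with unit step costs; returns the number of nodes expanded."""
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}
    expanded = 0
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                 # stale entry superseded by a cheaper path
        expanded += 1
        if state == GOAL:
            return expanded
        for nxt in neighbours(state):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))

def random_instance(walk_len, rng):
    s = GOAL                         # walking back from the goal keeps it solvable
    for _ in range(walk_len):
        s = rng.choice(list(neighbours(s)))
    return s

rng = random.Random(0)
instances = [random_instance(20, rng) for _ in range(25)]
n1 = sum(astar_expansions(s, h1) for s in instances)
n2 = sum(astar_expansions(s, h2) for s in instances)
print(n1, n2)                        # h2 should expand fewer nodes in total
```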



A* enhancements & local search

  • Memory enhancements

    • IDA*: Iterative-Deepening A*

    • SMA*: Simplified Memory-Bounded A*

  • Other enhancements (next lecture)

    • Dynamic weighting

    • LRTA*: Learning Real-time A*

    • MTS: Moving target search

  • Local search (next lecture)

    • Hill climbing & beam search

    • Simulated annealing & genetic algorithms



Improving A* performance

  • Improving the heuristic function

    • not always easy for path planning tasks

  • Implementation of A*

    • key aspect for large search spaces

  • Relaxing the admissibility condition

    • trading optimality for speed



IDA*: iterative deepening A*

  • reduces the memory constraints of A* without sacrificing optimality

  • cost-bound iterative depth-first search with linear memory requirements

  • expands all nodes within a cost contour

  • store the smallest f-cost that exceeded the limit as the cost limit for the next iteration

  • repeat the search with this next-highest f-cost
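The loop above can be sketched as follows (a minimal, unoptimised version for toy instances; the goal ordering and example start state are assumptions, not from the slides):

```python
GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)            # 0 is the blank

def h(state):               # misplaced tiles (admissible)
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def neighbours(state):
    """States reachable by sliding the blank one square."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            s = list(state)
            j = nr * 3 + nc
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def ida_star(start):
    """Return the optimal solution length (number of moves)."""
    def dfs(state, g, limit, path):
        f = g + h(state)
        if f > limit:
            return f                    # report the f-cost that broke the limit
        if state == GOAL:
            return True
        smallest = float("inf")
        for nxt in neighbours(state):
            if nxt in path:             # avoid cycles on the current path only
                continue
            t = dfs(nxt, g + 1, limit, path | {nxt})
            if t is True:
                return True
            smallest = min(smallest, t)
        return smallest

    limit = h(start)
    while True:                         # one cost-bounded DFS per contour
        t = dfs(start, 0, limit, {start})
        if t is True:
            return limit
        limit = t                       # smallest f that exceeded the old limit

print(ida_star((1, 2, 5, 3, 4, 0, 6, 7, 8)))  # 3
```

Only the current path is stored, so memory use is linear in the solution depth, which is the point of the algorithm.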


IDA*: exercise

Start state:

1 2 3
6 X 4
8 7 5

Goal state:

1 2 3
8 X 4
7 6 5

  • Order of expansion:

    • Move space up

    • Move space down

    • Move space left

    • Move space right

  • Evaluation function:

    • g(n) = number of moves

    • h(n) = misplaced tiles

  • Expand the state space to a depth of 3 and calculate the evaluation function


IDA*: f-cost limit = 3

Root: f = 0+3 = 3

1 2 3
6 X 4
8 7 5

All four successors exceed the limit:

  • up: 1 X 3 / 6 2 4 / 8 7 5, f = 1+4 = 5

  • down: 1 2 3 / 6 7 4 / 8 X 5, f = 1+3 = 4

  • left: 1 2 3 / X 6 4 / 8 7 5, f = 1+3 = 4

  • right: 1 2 3 / 6 4 X / 8 7 5, f = 1+4 = 5

Next f-cost limit = 4 (the smallest f that exceeded 3)


IDA*: f-cost limit = 4

Root: f = 0+3 = 3

1 2 3
6 X 4
8 7 5

Successors up and right (f = 1+4 = 5) exceed the limit and are pruned.

  • down: 1 2 3 / 6 7 4 / 8 X 5, f = 1+3 = 4, is expanded, but its successors all exceed the limit (e.g. 1 2 3 / 6 7 4 / X 8 5, f = 2+3 = 5)

  • left: 1 2 3 / X 6 4 / 8 7 5, f = 1+3 = 4, leads to the goal:

    • down: 1 2 3 / 8 6 4 / X 7 5, f = 2+2 = 4

    • right: 1 2 3 / 8 6 4 / 7 X 5, f = 3+1 = 4

    • up: 1 2 3 / 8 X 4 / 7 6 5, f = 4+0 = 4 (goal)

The goal is found at f = 4, so the search terminates with a 4-move solution.
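The trace can also be checked mechanically. This sketch (an illustrative helper, not from the slides) runs cost-bounded DFS with increasing limits on the exercise's start and goal states and records each limit used; for this instance the limits should be 3 and then 4.

```python
GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # the exercise's goal; 0 is the blank
START = (1, 2, 3, 6, 0, 4, 8, 7, 5)  # the exercise's start state

def h(s):                            # misplaced tiles
    return sum(1 for a, b in zip(s, GOAL) if a != 0 and a != b)

def neighbours(state):
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # up, down, left, right
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            s = list(state)
            j = nr * 3 + nc
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def bounded_dfs(state, g, limit, path):
    f = g + h(state)
    if f > limit:
        return f                     # the f-cost that broke the limit
    if state == GOAL:
        return True
    smallest = float("inf")
    for nxt in neighbours(state):
        if nxt in path:              # no cycles on the current path
            continue
        t = bounded_dfs(nxt, g + 1, limit, path | {nxt})
        if t is True:
            return True
        smallest = min(smallest, t)
    return smallest

limits = []
limit = h(START)
while True:
    limits.append(limit)
    result = bounded_dfs(START, 0, limit, {START})
    if result is True:
        break
    limit = result
print(limits)                        # [3, 4]
```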



Simplified memory-bounded A*

  • SMA*

    • When memory runs out, drop the most costly leaf nodes

    • Back their cost up to the parent (they may need to be regenerated later)

  • Properties

    • Utilises whatever memory is available

    • Avoids repeated states (as memory allows)

    • Complete (if there is enough memory to store the shallowest solution path)

    • Optimal (if the optimal solution path fits in memory; otherwise returns the best solution reachable within memory)

    • Optimally efficient (with the same memory caveats)


Simple memory-bounded A*

(diagram: SMA* stepped through on an example state space)


Class exercise

Class exercise

  • Use the state space given in the example

  • Execute the SMA* algorithm over this state space

  • Be sure that you understand the algorithm!




Trading optimality for speed…

  • The admissibility condition guarantees that an optimal path is found

  • In path planning a near-optimal path can be satisfactory

  • Try to minimise search instead of minimising cost:

    • i.e. find a near-optimal path (quickly)



Weighting…

fw(n) = (1 − w)·g(n) + w·h(n)

  • w = 0.0: f = g, i.e. uniform-cost search (breadth-first when all step costs are equal)

  • w = 0.5: equivalent to A* (same ordering as f = g + h)

  • w = 1.0: greedy best-first search (f = h)

  • trading optimality for speed

  • weight towards h when confident in the estimate of h
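The weighted evaluation can be sketched on a small grid world (the grid, wall, and weights below are illustrative assumptions): with w = 0.5 the ordering matches A* and the path found is optimal; as w approaches 1.0 the search follows h more greedily, typically expanding fewer nodes at the risk of a longer path.

```python
import heapq

def weighted_search(grid, start, goal, w):
    """Best-first search ordered by fw(n) = (1-w)*g(n) + w*h(n).
    Returns (path_cost, nodes_expanded)."""
    rows, cols = len(grid), len(grid[0])

    def h(p):                                  # Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(w * h(start), 0, start)]
    best_g = {start: 0}
    expanded = 0
    while frontier:
        fw, g, pos = heapq.heappop(frontier)
        if g > best_g.get(pos, float("inf")):
            continue                           # stale queue entry
        expanded += 1
        if pos == goal:
            return g, expanded
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nxt = (pos[0] + dr, pos[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(frontier,
                               ((1 - w) * (g + 1) + w * h(nxt), g + 1, nxt))

grid = [[0] * 10 for _ in range(10)]           # 0 = free, 1 = wall
for r in range(9):                             # wall down column 5, gap in the last row
    grid[r][5] = 1
start, goal = (0, 0), (0, 9)                   # optimal path costs 27 moves
for w in (0.5, 0.8, 1.0):
    print(w, weighted_search(grid, start, goal, w))
```

The wall makes h misleading near the start, so the gap between the optimal w = 0.5 cost and the greedier settings illustrates the optimality-for-speed trade.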

