Problem Solving by Search
Jin Hyung Kim, Computer Science Department, KAIST
Presentation Transcript
Graph Theory
  • A graph consists of
    • a set of nodes (may be infinite)
    • a set of arcs (links)
      • directed graph, underlying graph, tree
  • Notations
    • node, start node (root), leaf (tip node), path, ancestor, descendant, child (son), parent (father), cycle, DAG, connected, locally finite graph, node expansion
State Space Representation
  • Basic components
    • a set of states {s}
    • a set of operators { o : s -> s }
    • a control strategy { c : s^n -> o }
  • State space graph
    • state -> node
    • operator -> arc
  • Four-tuple representation
    • [N, A, S, GD], solution path
Examples of SSR
  • Tic-tac-toe
  • n^2 - 1 puzzle
  • Traveling salesperson problem (TSP)
Search Strategies
  • A strategy is defined by picking the order of node expansion
  • Search directions
    • forward search (from start to goal)
    • backward search (from goal to start)
    • bidirectional search
  • Irrevocable vs. revocable
    • Irrevocable strategy: hill-climbing
      • most popular in human problem solving
      • no shift of attention to suspended alternatives
      • may end up at a local maximum
      • commutative assumption: applying an inappropriate operator may delay, but never prevents, the eventual discovery of solutions
    • Revocable strategy: tentative control
      • one alternative is chosen; the others are kept in reserve
Evaluation of Strategies
  • Completeness
    • Does it always find a solution if one exists?
  • Time complexity
    • number of nodes generated/expanded
  • Space complexity
    • maximum number of nodes in memory
  • Optimality
    • Does it always find a least-cost solution?
  • Time and space complexity are measured by
    • b - maximum branching factor of the search tree
    • d - depth of the least-cost solution
    • m - maximum depth of the state space (may be infinite)
Implementing Search Strategies
  • Uninformed search
    • the search does not depend on the nature of the solution
    • systematic search methods
      • Breadth-First Search
      • Depth-First Search (backtracking)
        • Depth-limited Search
      • Uniform Cost Search
      • Iterative Deepening Search
  • Informed or Heuristic Search
    • Best-first Search
      • Greedy search (h only)
      • A* search (g + h)
      • Iterative deepening A* search
X-First Search Algorithm

1. Put s in OPEN.
2. If OPEN is empty, exit with Fail.
3. Select and remove a node from OPEN and put it in CLOSE (call it n).
4. Expand n. If any successor is the goal, exit with Success.
5. Put the successors at the end of OPEN, with pointers back to n, and go to step 2.
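The steps above can be sketched in Python; only the queue discipline of OPEN distinguishes breadth-first from depth-first search. This is a minimal sketch, not from the slides: the `successors` interface and the example graph are illustrative.

```python
from collections import deque

def x_first_search(start, goal, successors, breadth_first=True):
    """Generic OPEN/CLOSE list search.

    OPEN as a FIFO queue -> breadth-first; as a LIFO stack -> depth-first.
    `successors(n)` returns the child states of n; parent pointers are
    kept so the solution path can be reconstructed.
    """
    open_list = deque([start])
    parent = {start: None}
    while open_list:                      # "OPEN empty?" -> Fail
        n = open_list.popleft() if breadth_first else open_list.pop()
        for suc in successors(n):         # expand n
            if suc in parent:             # already generated; skip
                continue
            parent[suc] = n               # pointer back to n
            if suc == goal:               # "any successor = goal?" -> Success
                path = [suc]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return list(reversed(path))
            open_list.append(suc)         # put successors at the end of OPEN
    return None                           # Fail
```

With `breadth_first=True` the first path found is a shortest one (in number of arcs), matching the BFS property on the next slide.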
Comparison of BFS and DFS
  • BFS always terminates if a goal exists
    • cf. DFS may not, on a locally finite infinite tree
  • The shortest path to the goal is guaranteed by BFS
  • Space requirement
    • BFS: exponential
    • DFS: linear; only the children of the nodes on the current path are kept
  • Which is better, BFS or DFS?
Uniform Cost Search
  • A generalized version of Breadth-First Search
    • C(ni, nj) = cost of going from ni to nj
    • g(n) = (tentative minimal) cost of a path from s to n
  • Guaranteed to find the minimum-cost path
  • Dijkstra's algorithm
Uniform Cost Search Algorithm

1. Put s in OPEN; set g(s) = 0.
2. If OPEN is empty, exit with Fail.
3. Remove from OPEN the node whose g(.) value is smallest and put it in CLOSE (call it n).
4. If n = goal, exit with Success.
5. Expand n, calculate g(.) of each successor, and put the successors in OPEN with pointers back to n; go to step 2.
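A minimal Python sketch of the loop above, keeping OPEN as a priority queue ordered by g(.). The `successors` interface yielding (child, step cost) pairs and the test graph are assumptions for illustration.

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """Uniform cost (Dijkstra) search.

    OPEN is a heap ordered by g(.), the tentative minimal cost from
    start; the goal test is applied when a node is *removed* from OPEN,
    which is what guarantees the minimum-cost path.
    """
    open_heap = [(0, start)]               # (g, node)
    g = {start: 0}
    parent = {start: None}
    closed = set()
    while open_heap:                       # "OPEN empty?" -> Fail
        g_n, n = heapq.heappop(open_heap)  # node with smallest g(.)
        if n in closed:                    # stale heap entry; skip
            continue
        if n == goal:                      # Success: rebuild the path
            path = [n]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return g_n, list(reversed(path))
        closed.add(n)
        for suc, cost in successors(n):    # expand n, calculate g of successors
            new_g = g_n + cost
            if suc not in g or new_g < g[suc]:
                g[suc] = new_g
                parent[suc] = n            # pointer back to n
                heapq.heappush(open_heap, (new_g, suc))
    return None                            # Fail
```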
Iterative Deepening Search
  • A compromise between BFS and DFS
  • Saves storage, guarantees a shortest path
  • The additional node expansion is negligible

proc Iterative_Deepening_Search(Root)
begin
  Success := 0;
  for (depth_bound := 1; Success != 1; depth_bound++)
  {
    depth_first_search(Root, depth_bound);
    if goal found, Success := 1;
  }
end
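The procedure above can be made runnable as a sketch. It assumes an acyclic successor function (as on a tree) and an illustrative maximum depth bound; both are assumptions, not from the slides.

```python
def depth_limited_search(node, goal, successors, bound):
    """Depth-first search cut off at depth `bound` (assumes no cycles)."""
    if node == goal:
        return [node]
    if bound == 0:
        return None
    for suc in successors(node):
        path = depth_limited_search(suc, goal, successors, bound - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening_search(root, goal, successors, max_depth=50):
    """Repeat depth-limited search with an increasing depth bound.

    The first bound at which the goal is reached yields a shortest path,
    while memory stays linear in the depth, as in plain DFS.
    """
    for bound in range(max_depth + 1):
        path = depth_limited_search(root, goal, successors, bound)
        if path is not None:
            return path
    return None
```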
Properties of IDS
  • Complete??
  • Time??

(d+1)·b^0 + d·b^1 + (d-1)·b^2 + … + b^d = O(b^d)

  • Space?? O(bd)
  • Optimal?? Yes, if step cost = 1
    • Can it be modified to explore a uniform cost tree?
  • Numerical comparison for b = 10 and d = 5, solution at the far right:

N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450

N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100

Use of Heuristics to select the Best
  • Tic-tac-toe
  [Board diagrams omitted.]
Tic-tac-toe
  • Most-Wins Heuristic
  [Board diagrams omitted: an opening move on a side lies on 2 possible winning lines, a corner on 3, the center on 4.]
8-Puzzle Heuristics
  • # of misplaced tiles
  • sum of Manhattan distances
  [Puzzle-state diagrams omitted.]
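The two heuristics can be sketched for board states stored as flat tuples read row by row, with 0 standing for the blank; this representation and the goal layout in the test (1-2-3 / 8-blank-4 / 7-6-5) are assumptions for illustration.

```python
def misplaced_tiles(state, goal):
    """h1: number of tiles (excluding the blank, 0) not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal, width=3):
    """h2: sum over tiles of |row - goal_row| + |col - goal_col|."""
    pos = {tile: divmod(i, width) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:          # the blank does not count
            continue
        r, c = divmod(i, width)
        gr, gc = pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total
```

Since every misplaced tile is at least one move from home, h1(n) <= h2(n) for every state, which is the dominance relation discussed later.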
Best First Search Algorithm (for tree search)

1. Put s in OPEN; compute f(s).
2. If OPEN is empty, exit with Fail.
3. Remove from OPEN the node whose f(.) value is smallest and put it in CLOSE (call it n).
4. If n = goal, exit with Success.
5. Expand n, calculate f(.) of each successor, and put the successors in OPEN with pointers back to n; go to step 2.
Algorithm A
  • Best-First Algorithm with f(n) = g(n) + h(n)

where g(n) : cost of the path from start to node n

h(n) : heuristic estimate of the cost from n to a goal

  • The algorithm is admissible if it terminates with an optimal solution
  • What if f(n) = f*(n), where f*(n) = g*(n) + h*(n),

g*(n) = cost of the shortest path from start to n, and

h*(n) = actual cost of the shortest path from n to a goal?
Algorithm A* (Branch and Bound method)
  • Algorithm A becomes A* if h(n) ≤ h*(n)
  • Algorithm A* is admissible
    • can you prove it?
  • If h(n) = 0, the A* algorithm becomes the uniform cost algorithm
    • the uniform cost algorithm is admissible
  • If n* is on an optimal path, f*(n*) = C*
  • f*(n) > C* implies that n is not on an optimal path
  • A* terminates on finite graphs
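A compact sketch of A* with f(n) = g(n) + h(n); with h identically 0 it behaves like uniform cost search, as noted above. The interface (a `successors` function of (child, cost) pairs and a heuristic table) is an assumption for illustration.

```python
import heapq

def a_star(start, goal, successors, h):
    """A* search: expand the OPEN node with smallest f(n) = g(n) + h(n).

    Admissible (returns an optimal solution) when h(n) <= h*(n).
    """
    open_heap = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, n, path = heapq.heappop(open_heap)
        if n == goal:                  # goal test on expansion
            return g, path
        for suc, cost in successors(n):
            new_g = g + cost
            if suc not in best_g or new_g < best_g[suc]:
                best_g[suc] = new_g
                heapq.heappush(open_heap,
                               (new_g + h(suc), new_g, suc, path + [suc]))
    return None
```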
Examples of Admissible Heuristics
  • 8-puzzle heuristics
  • N-queens problem, Tic-tac-toe
  • Air distance heuristic
  • Traveling Salesperson Problem
    • Minimum spanning tree heuristics
    • ….
Iterative Deepening A*
  • A modification of A*
    • uses a threshold as the depth bound
      • to find a solution whose f(.) is under the threshold
    • increases the threshold to the minimum f(.) value that exceeded it in the previous cycle
  • Still admissible
  • Same order of node expansion
  • Storage efficient - practical
    • but suffers with real-valued f(.)
    • large number of iterations
Iterative Deepening A* Search Algorithm (for tree search)

1. Put s in OPEN; compute f(s); set threshold = h(s).
2. If OPEN is empty, set the threshold to the minimum f(.) value that exceeded the old threshold and restart from step 1.
3. Remove from OPEN the node whose f(.) value is smallest and put it in CLOSE (call it n).
4. If n = goal, exit with Success.
5. Expand n and calculate f(.) of each successor; put a successor in OPEN only if f(suc) < threshold, with a pointer back to n; go to step 2.
Memory-bounded Heuristic Search
  • Recursive best-first search
    • a variation of depth-first search
    • keeps track of the f-value of the best alternative path
    • unwinds if the f-values of all children exceed the best alternative
    • when unwinding, stores the best child's f-value as its own f-value
    • when needed, the parent regenerates its children
  • Memory-bounded A*
    • when OPEN is full, delete the worst node from OPEN, storing its f-value in its parent
    • the deleted node is regenerated when all other candidates look worse than it
Monotonicity (Consistency)
  • A heuristic function is monotone if

for all states ni and nj = suc(ni),

h(ni) - h(nj) ≤ cost(ni, nj)

and h(goal) = 0

  • A monotone heuristic is admissible
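The condition can be checked mechanically over a finite graph. A sketch; representing edges as (ni, nj, cost) triples is an assumption for illustration.

```python
def is_monotone(h, edges, goals):
    """Check h(ni) - h(nj) <= cost(ni, nj) on every edge, and h(goal) = 0.

    `h` maps node -> heuristic value; `edges` is a list of
    (ni, nj, cost) triples with nj a successor of ni.
    """
    return (all(h[ni] - h[nj] <= cost for ni, nj, cost in edges)
            and all(h[g] == 0 for g in goals))
```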
More Informedness (Dominance)
  • For two admissible heuristics h1 and h2, h2 is more informed than h1 if

h1(n) ≤ h2(n) for all n

  • For the 8-tile problem
    • h1 : # of misplaced tiles
    • h2 : sum of Manhattan distances

[Diagram omitted: 0 ≤ h1(n) ≤ h2(n) ≤ h*(n).]
Generation of Heuristics
  • The solution cost of a relaxed problem is an admissible heuristic
    • e.g. the Manhattan distance heuristic
  • Solutions of subproblems
  • Combining several admissible heuristics:

h(n) = max{ h1(n), …, hk(n) }

  • Use of pattern databases
    • max over heuristics of sub-problem pattern databases
      • about 1/1,000 of the nodes in the 15-puzzle, compared with Manhattan distance
    • addition of heuristics of disjoint sub-problem pattern databases
      • about 1/10,000 of the nodes in the 24-puzzle, compared with Manhattan distance
      • a disjoint subdivision is not possible for Rubik's cube
Semi-admissible Heuristics (Dynamic Weighting, Risky Heuristics)
  • If h(n) ≤ h*(n) + e, then C(n) ≤ (1+e)·C*(n)
  • f(n) = g(n) + h(n) + e·[1 - d(n)/N]·h(n)
    • at shallow levels: depth-first excursion
    • at deep levels: assumes admissibility
  • Use of non-admissible heuristics, with risk
    • utilize heuristic functions that are admissible in most cases
    • statistically obtained heuristics

Pearl, J., and Kim, J. H. Studies in semi-admissible heuristics. IEEE Trans. PAMI-4, 4 (1982), 392-399.
Performance Measure
  • Penetrance
    • how much the search focuses on the goal rather than wandering off in irrelevant directions
    • P = L / T, where L is the length of the solution path and T is the total number of nodes generated
  • Effective Branching Factor (B)
    • B + B^2 + B^3 + … + B^L = T
    • less dependent on L
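B has no closed form, but the left-hand side of the defining equation is increasing in B, so B can be found numerically, e.g. by bisection (a sketch):

```python
def effective_branching_factor(T, L, tol=1e-9):
    """Solve B + B^2 + ... + B^L = T for B by bisection."""
    def total(b):
        return sum(b ** i for i in range(1, L + 1))
    lo, hi = 1e-9, float(T)        # total(b) is increasing, total(T) >= T
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < T:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```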
Planning: Monkey and Banana
  • A monkey is on the floor at (x1, y1), a banana hangs at (x2, y2), and a box is at (x3, y3). The monkey can grab the banana if it pushes the box under the banana and climbs on it. Develop a state-space search representation for this situation and show how the monkey can grab the banana.
Local Search and Optimization
  • Local search
    • less memory required
    • reasonable solutions in large (continuous) problem spaces
  • Can be formulated as searching for an extreme value of an objective function:

find i = ARGMAX { Obj(p_i) }, where p_i is a parameter
Search for Optimal Parameters
  • Deterministic methods
    • step-by-step procedures
    • hill-climbing search, gradient search
    • ex: the error back-propagation algorithm
      • finding the optimal weight matrix in neural-network training
  • Stochastic methods
    • iteratively improve the parameters
      • make a pseudo-random change and retain it if it improves
    • Metropolis algorithm
    • Simulated Annealing algorithm
    • Genetic Algorithm
Hill Climbing Search

1. Set n to be the initial node.
2. If obj(n) > max { obj(child_i(n)) }, then exit.
3. Set n to be the highest-valued child of n.
4. Return to step 2.

    • no previous-state information
    • no backtracking
    • no jumping
  • Gradient search
    • hill climbing with continuous, differentiable functions
    • how large a step width?
    • slow near the optimum
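Steps 1-4 translate directly into a sketch; the integer objective and neighbor function in the test are illustrative, not from the slides.

```python
def hill_climbing(start, obj, neighbors):
    """Greedy ascent: move to the best-valued child until none improves.

    Keeps no previous-state information and never backtracks, so it can
    stop at a local maximum.
    """
    n = start
    while True:
        best = max(neighbors(n), key=obj)      # highest-valued child
        if obj(best) <= obj(n):                # step 2: exit condition
            return n
        n = best                               # step 3: move up
```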
Hill-Climbing: Drawbacks
  • Local maxima
    • at ridges
  • Straying on plateaus
  • Slow on plateaus
  • Determination of a proper step size
  • Cure
    • random restart
      • good only when there are few local maxima

[Diagram omitted: objective-function landscape marking local maxima and the global maximum.]
Local Beam Search
  • Keeps track of the best k states, instead of 1 as in hill-climbing
  • Full utilization of the given memory
  • Variation: stochastic beam search
    • select the k successors randomly
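A sketch of the deterministic variant, keeping the best k states per generation; the stagnation test, step limit, and the test's objective are assumptions for illustration.

```python
import heapq

def local_beam_search(starts, obj, neighbors, k, steps=100):
    """Keep the best k states each generation instead of 1."""
    beam = list(starts)
    for _ in range(steps):
        # pool = current beam plus all successors of its members
        pool = {s for n in beam for s in neighbors(n)} | set(beam)
        new_beam = heapq.nlargest(k, pool, key=obj)
        if set(new_beam) == set(beam):     # no state in the beam improved
            break
        beam = new_beam
    return max(beam, key=obj)
```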
Iterative Improvement Algorithms
  • Basic idea
    • start with an initial setting
      • generate a random solution
    • iteratively improve its quality
  • Good for hard, practical problems
    • because they keep only the current state, with no look-ahead beyond neighbors
  • Implementations
    • Metropolis algorithm
    • Simulated Annealing algorithm
    • Genetic algorithm
Metropolis Algorithm
  • A modified Monte Carlo method
  • Suppose the objective is to reach the state minimizing an energy function:

1. Randomly generate a new state Y from the current state X.
2. If ΔE (the energy difference between Y and X) < 0, move to Y (set X := Y) and go to 1.
3. Else:
   3.1 Select a random number ξ in (0, 1).
   3.2 If ξ < exp(-ΔE / T), move to Y (set X := Y) and go to 1.
   3.3 Else go to 1.
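Steps 1-3.3 at a fixed temperature T, sketched in Python; the energy function and perturbation used in the test are illustrative.

```python
import math
import random

def metropolis(initial, energy, perturb, T, steps=1000, rng=random):
    """Metropolis sampling at fixed temperature T (minimizing `energy`).

    Downhill moves (dE < 0) are always taken; uphill moves are taken
    only with probability exp(-dE / T).
    """
    x = initial
    for _ in range(steps):
        y = perturb(x, rng)                       # step 1: random new state Y
        dE = energy(y) - energy(x)
        if dE < 0 or rng.random() < math.exp(-dE / T):
            x = y                                 # steps 2 / 3.2: move to Y
    return x
```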
From Statistical Mechanics
  • In thermal equilibrium, the probability of state i follows the Boltzmann distribution:

P(i) ∝ exp(-E_i / (k_B · T))

    • E_i : energy of state i
    • T : absolute temperature
    • k_B : Boltzmann constant
  • In a neural network, an analogous energy function is defined
Simulated Annealing Algorithm
  • What is annealing?
    • the process of slowly cooling down a compound or a substance
    • slow cooling lets the substance settle into thermodynamic equilibrium
    • molecules reach an optimal conformation
    • (by contrast, rapid cooling causes contraction and stress)
Simulated Annealing
  • Simulates the slow cooling of the annealing process
  • Solves combinatorial optimization problems
  • A variant of the Metropolis algorithm
  • by S. Kirkpatrick ('83)
  • finding a minimum-energy solution of a neural network = finding a low-temperature state of a physical system
  • Overcomes the local-minimum problem
  • Instead of always going downhill, try to go downhill 'most of the time'
Iterative Algorithm Comparison
  • Simple iterative algorithm

1. Find a solution s.
2. Make s', a variation of s.
3. If s' is better than s, keep s' as s.
4. Go to 2.

  • Metropolis algorithm
    • 3' : if (s' is better than s) or (with some probability), keep s' as s
    • with fixed T
  • Simulated annealing
    • T is reduced to 0 by a schedule as time passes
Simulated Annealing Algorithm

function Simulated-Annealing(problem, schedule) returns a solution state
  inputs: problem, a problem
  local variables: current, a node
                   next, a node
                   T, a "temperature" controlling the probability of downward steps

  current ← Make-Node(Initial-State[problem])
  for t ← 1 to infinity do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← Value[next] - Value[current]
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)
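The pseudocode above translates almost line for line into a sketch; the successor function, value function, and cooling schedule used in the test are illustrative assumptions.

```python
import math
import random

def simulated_annealing(initial, value, successor, schedule, rng=random):
    """Simulated annealing, maximizing `value` as in the pseudocode.

    Uphill moves (dE > 0) are always accepted; downhill moves are
    accepted with probability exp(dE / T), which shrinks as T falls.
    """
    current = initial
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = successor(current, rng)          # random successor of current
        dE = value(nxt) - value(current)
        if dE > 0 or rng.random() < math.exp(dE / T):
            current = nxt
        t += 1
```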
Simulated Annealing Schedule Example

[Diagram omitted: temperature T(n) falling from T0 to Tf as the number of moves n grows.]

  • If T_t is reduced too fast, the solution quality is poor
  • If T_t ≥ T(0) / log(1+t), the system will converge to the minimum configuration (Geman)
  • T_t = k / (1+t) (Szu)
  • T_t = a · T(t-1), where a is between 0.8 and 0.99
Simulated Annealing Parameters
  • Temperature T
    • used to determine the acceptance probability
    • high T : large changes
    • low T : small changes
  • Schedule
    • determines the rate at which the temperature T is lowered
    • if T is lowered slowly enough, the algorithm will find a global optimum
  • In the beginning, search alternatives aggressively; become conservative as time goes by
Simulated Annealing
  • To avoid entrainment in local minima
    • annealing schedule: chosen by trial and error
      • choice of the initial temperature
      • how many iterations are performed at each temperature
      • how much the temperature is decremented at each step as cooling proceeds
  • Difficulties
    • determination of the parameters
    • if cooling is too slow -> too much time to get a solution
    • if cooling is too rapid -> the solution may not be the global optimum
Simulated Annealing: Local Maxima

[Diagram omitted: at high temperature there is a higher probability of escaping local maxima; at low temperature there is little chance of escaping local maxima, but a local maximum may be good enough in practical problems.]