Presentation Transcript


  1. Planning Lirong Xia Tuesday, April 22, 2014

  2. Last time • The kernel trick • Neural networks • Clustering (k-means)

  3. Kernelized Perceptron • If we had a black box (kernel) which told us the dot product of two examples x and y in the mapped feature space, K(x,y) = Φ(x)·Φ(y): • Could work entirely with the dual representation • No need to ever take dot products (“kernel trick”) • Downside: slow if many examples get nonzero alpha
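
As a concrete illustration, here is a minimal NumPy sketch of the dual perceptron just described; the function names and the epoch loop are illustrative choices, not from the slides. Note how both training and prediction touch examples only through K, never through explicit feature vectors:

```python
import numpy as np

def train_kernel_perceptron(X, y, K, epochs=10):
    """X: list/array of n examples, y: labels in {-1, +1}, K: kernel function."""
    y = np.asarray(y)
    n = len(X)
    alpha = np.zeros(n)                  # one dual weight per training example
    gram = np.array([[K(a, b) for b in X] for a in X])   # gram[i, j] = K(x_i, x_j)
    for _ in range(epochs):
        for i in range(n):
            # score uses only kernel values: sum_j alpha_j y_j K(x_j, x_i)
            score = np.sum(alpha * y * gram[:, i])
            if y[i] * score <= 0:        # mistake on example i
                alpha[i] += 1            # strengthen its dual weight
    return alpha

def predict(x, X, y, alpha, K):
    # slow when many alphas are nonzero: the downside noted on the slide
    return np.sign(sum(a * yi * K(xi, x) for a, yi, xi in zip(alpha, y, X)))
```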

  4. Non-Linear Separators • General idea: the original feature space can always be mapped to some higher-dimensional feature space where the training set is separable

  5. Some Kernels • Kernels implicitly map original vectors to higher dimensional spaces, take the dot product there, and hand the result back • Linear kernel: K(x,y) = x·y • Quadratic kernel: K(x,y) = (x·y + 1)² • RBF kernel: K(x,y) = exp(-‖x-y‖²/(2σ²)), an infinite dimensional representation • Mercer’s theorem: if for every finite set of points the matrix A with Ai,j = K(xi,xj) is positive semidefinite, then K can be represented as the inner product of a higher dimensional space
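
In code, the three kernels named above take their standard textbook forms (the RBF bandwidth σ is a free parameter, and the "+1" in the quadratic kernel is one common convention):

```python
import numpy as np

def linear_kernel(x, y):
    return float(np.dot(x, y))                 # K(x,y) = x . y

def quadratic_kernel(x, y):
    return (np.dot(x, y) + 1.0) ** 2           # K(x,y) = (x . y + 1)^2

def rbf_kernel(x, y, sigma=1.0):
    # K(x,y) = exp(-||x - y||^2 / (2 sigma^2)): infinite-dimensional feature map
    diff = np.asarray(x) - np.asarray(y)
    return float(np.exp(-np.sum(diff ** 2) / (2 * sigma ** 2)))
```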

  6. Neuron • Neuron = Perceptron with a non-linear activation function: ai = g(Σj Wj,i aj)

  7. Neural networks • Feed-forward networks • single-layer (perceptrons) • multi-layer networks, which can represent any boolean function • Learning • Minimize squared error of prediction • Gradient descent: backpropagation algorithm • Recurrent networks • Hopfield networks have symmetric weights Wi,j = Wj,i • Boltzmann machines use stochastic activation functions

  8. Feed-forward example • a5 = g5(W3,5 a3 + W4,5 a4) = g5(W3,5 g3(W1,3 a1 + W2,3 a2) + W4,5 g4(W1,4 a1 + W2,4 a2))
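
The same computation as a short Python sketch, with tanh standing in for the unspecified activation functions g3, g4, g5 (the layout of two inputs, two hidden units, and one output follows the slide; the weight dictionary is an illustrative encoding):

```python
import numpy as np

g = np.tanh   # placeholder nonlinearity for g3, g4, g5

def forward(a1, a2, W):
    """W maps (from_unit, to_unit) -> weight, matching the slide's W_{i,j}."""
    a3 = g(W[(1, 3)] * a1 + W[(2, 3)] * a2)   # hidden unit 3
    a4 = g(W[(1, 4)] * a1 + W[(2, 4)] * a2)   # hidden unit 4
    a5 = g(W[(3, 5)] * a3 + W[(4, 5)] * a4)   # output unit 5
    return a5

# example with arbitrary weights
W = {(1, 3): 0.2, (2, 3): -0.4, (1, 4): 0.7, (2, 4): 0.1,
     (3, 5): 0.5, (4, 5): -0.3}
print(forward(1.0, 0.5, W))
```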

  9. K-Means • An iterative clustering algorithm • Pick K random points as cluster centers (means) • Alternate: • Assign data instances to closest mean • Assign each mean to the average of its assigned points • Stop when no points’ assignments change

  10. K-Means as Optimization • Consider the total distance of the points to their means: φ(a, c) = Σi dist(xi, ca(i)), where the xi are the points, a gives the assignments, and c gives the means • Each iteration reduces φ • Two stages each iteration: • Update assignments: fix means c, change assignments a • Update means: fix assignments a, change means c
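
A compact NumPy sketch of slides 9-10 (the function name and convergence test are mine). Each of the two stages can only decrease, or leave unchanged, the objective φ, which is why the alternation terminates:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """X: (n, d) array of points. Returns assignments and means."""
    rng = np.random.default_rng(seed)
    c = X[rng.choice(len(X), size=k, replace=False)]   # k random points as means
    for _ in range(iters):
        # stage 1: fix means c, update assignments a (closest mean per point)
        a = np.argmin(((X[:, None, :] - c[None, :, :]) ** 2).sum(-1), axis=1)
        # stage 2: fix assignments a, update means c (average of assigned points)
        new_c = np.array([X[a == j].mean(axis=0) if np.any(a == j) else c[j]
                          for j in range(k)])
        if np.allclose(new_c, c):                      # nothing changed: converged
            break
        c = new_c
    return a, c
```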

  11. Today: Planning • A brief review of logic • Planning problems • Search algorithms

  12. Language of propositional logic • Atomic variables (atoms) • can be true or false • IsRaining • Literal: an atomic variable or its negation • Operators • Not(x), And(x,y), Or(x,y) • applied to atomic variables or formulas • Semantics • “meaning” w.r.t. true/false valuation • Syntax • purely symbolic inference rules

  13. Language of first-order logic • Terms • objects • can be combined by functions • e.g. NextInteger(x) • Relation symbols • functions that take terms as inputs • can be true or false • e.g. IsRainyToday (0-ary), IsHuman(x) (1-ary), AtPosition(x,y) (2-ary) • Literal: a relation symbol (applied to terms) or its negation • Operators • Not(x), And(x,y), Or(x,y) • Quantifiers: ∀ and ∃ • applied to relation symbols or formulas, but not to terms • Semantics • “meaning” w.r.t. a true/false valuation of relation symbols • Note: a term cannot be true or false • Syntax • purely symbolic inference rules

  14. Example of semantics • Caution: deliberately confusing semantics and syntax for illustration • Syllogism • All men are mortal • Socrates is a man • Therefore, Socrates is mortal • Propositional logic • SocratesIsMan→SocratesIsMortal • SocratesIsMan = true • Therefore, SocratesIsMortal = true • First-order logic • ∀x (IsMan(x)→IsMortal(x)) • IsMan(Socrates) = true • Therefore, IsMortal(Socrates) = true

  15. Today: Planning • A brief review of logic • Planning problems • Search algorithms

  16. Search Problems • A search problem consists of: • A state space • A successor function (with actions, costs) • A start state and a goal test • A solution is a sequence of actions (a plan) which transforms the start state into a goal state • Algorithms • BFS, DFS, UCS • Heuristics: best first, A*

  17. Planning as a search problem • State space • compactly represented by logic • Actions • compactly represented by • preconditions: activation rule • effects: consequences • A solution is a sequence of actions

  18. State of the world (STRIPS language) • Stanford Research Institute Problem Solver • State of the world = conjunction of positive, ground, function-free literals • At(Home) AND IsAt(Umbrella, Home) AND CanBeCarried(Umbrella) AND IsUmbrella(Umbrella) AND HandEmpty AND Dry • Not OK as part of the state: • NOT(At(Home)) (negative) • At(x) (not ground) • At(Bedroom(Home)) (uses the function Bedroom) • Any literal not mentioned is assumed false • Other languages make different assumptions, e.g., negative literals part of state, unmentioned literals unknown

  19. An action: TakeObject • TakeObject(location, x) • Preconditions: • HandEmpty • CanBeCarried(x) • At(location) • IsAt(x, location) • Effects (“NOT something” means that that something should be removed from state): • Holding(x) • NOT(HandEmpty) • NOT(IsAt(x, location))
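
A sketch of how slides 18-19 might be encoded in Python: a state is a frozenset of ground literals (written as plain strings), and an action bundles preconditions, an add list, and a delete list. The representation and names are choices of this writeup, not part of STRIPS itself. TakeObject(Home, Umbrella) is shown as a ground instance:

```python
from typing import FrozenSet, NamedTuple

class Action(NamedTuple):
    name: str
    pre: FrozenSet[str]      # preconditions: must all hold in the current state
    add: FrozenSet[str]      # positive effects: literals added to the state
    delete: FrozenSet[str]   # "NOT(...)" effects: literals removed from the state

def applicable(state: FrozenSet[str], act: Action) -> bool:
    return act.pre <= state

def apply_action(state: FrozenSet[str], act: Action) -> FrozenSet[str]:
    return (state - act.delete) | act.add

# TakeObject(Home, Umbrella), a ground instance of the schema on this slide
take_umbrella = Action(
    name="TakeObject(Home, Umbrella)",
    pre=frozenset({"HandEmpty", "CanBeCarried(Umbrella)",
                   "At(Home)", "IsAt(Umbrella, Home)"}),
    add=frozenset({"Holding(Umbrella)"}),
    delete=frozenset({"HandEmpty", "IsAt(Umbrella, Home)"}),
)
```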

  20. Another action • WalkWithUmbrella(location1, location2, umbr) • Preconditions: • At(location1) • Holding(umbr) • IsUmbrella(umbr) • Effects: • At(location2) • NOT(At(location1))

  21. Yet another action • WalkWithoutUmbrella(location1, location2) • Preconditions: • At(location1) • Effects: • At(location2) • NOT(At(location1)) • NOT(Dry)

  22. A goal and a plan • Goal: At(Work) AND Dry • Recall initial state: • At(Home) AND IsAt(Umbrella, Home) AND CanBeCarried(Umbrella) AND IsUmbrella(Umbrella) AND HandEmpty AND Dry • TakeObject(Home, Umbrella) • At(Home) AND CanBeCarried(Umbrella) AND IsUmbrella(Umbrella) AND Dry AND Holding(Umbrella) • WalkWithUmbrella(Home, Work, Umbrella) • At(Work) AND CanBeCarried(Umbrella) AND IsUmbrella(Umbrella) AND Dry AND Holding(Umbrella)

  23. Planning to write a paper • Suppose your goal is to be a co-author on an AI paper with both theorems and experiments, within a year • Goal: Paper AND Contributed(You) • LearnAbout(x,y) • Preconditions: HasTimeForStudy(x) • Effects: Knows(x,y), NOT(HasTimeForStudy(x)) • HaveNewIdea(x) • Preconditions: Knows(x,AI), Creative(x) • Effects: Idea, Contributed(x) • FindExistingOpenProblem(x) • Preconditions: Knows(x,AI) • Effects: Idea • ProveTheorems(x) • Preconditions: Knows(x,AI), Knows(x,Math), Idea • Effects: Theorems, Contributed(x) • PerformExperiments(x) • Preconditions: Knows(x,AI), Knows(x,Coding), Idea • Effects: Experiments, Contributed(x) • WritePaper(x) • Preconditions: Knows(x,AI), Knows(x,Writing), Idea, Theorems, Experiments • Effects: Paper, Contributed(x) • Name a few things that are missing/unrealistic…

  24. Some start states • Start1: HasTimeForStudy(You) AND Knows(You,Math) AND Knows(You,Coding) AND Knows(You,Writing) • Start2: HasTimeForStudy(You) AND Creative(You) AND Knows(Advisor,AI) AND Knows(Advisor,Math) AND Knows(Advisor,Coding) AND Knows(Advisor,Writing) (Good luck with that plan…) • Start3: Knows(You,AI) AND Knows(You,Coding) AND Knows(OfficeMate,Math) AND HasTimeForStudy(OfficeMate) AND Knows(Advisor,AI) AND Knows(Advisor,Writing) • Start4: HasTimeForStudy(You) AND Knows(Advisor,AI) AND Knows(Advisor,Math) AND Knows(Advisor,Coding) AND Knows(Advisor,Writing)

  25. Today: Planning • A brief review of logic • Planning problems • Search algorithms

  26. Forward state-space search (progression planning) • Successors: all states that can be reached with an action whose preconditions are satisfied in the current state • The slide's search-tree diagram expands the umbrella example: from the start state {At(Home), IsAt(Umbrella, Home), CanBeCarried(Umbrella), IsUmbrella(Umbrella), HandEmpty, Dry}, the applicable actions include TakeObject(Home, Umbrella), WalkWithoutUmbrella(Home, Work), and even WalkWithoutUmbrella(Home, Umbrella) (!), since nothing forces location2 to be an actual location • TakeObject(Home, Umbrella) leads to {At(Home), Holding(Umbrella), CanBeCarried(Umbrella), IsUmbrella(Umbrella), Dry}, from which WalkWithUmbrella(Home, Work, Umbrella) reaches {At(Work), Holding(Umbrella), CanBeCarried(Umbrella), IsUmbrella(Umbrella), Dry}: GOAL! • WalkWithoutUmbrella(Home, Work) instead leads to {At(Work), IsAt(Umbrella, Home), CanBeCarried(Umbrella), IsUmbrella(Umbrella), HandEmpty} (no longer Dry), and WalkWithoutUmbrella(Work, Home) just walks back, still not Dry
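
A breadth-first progression planner over the Action sketch from slide 19's snippet (BFS is one of the uninformed algorithms recalled on slide 16; because it explores plans shortest-first, the first plan it returns uses the fewest actions):

```python
from collections import deque

def forward_search(start, goal, actions):
    """start: frozenset of literals; goal: set of literals; actions: list of Action."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                       # goal test: all goal literals hold
            return plan
        for act in actions:
            if applicable(state, act):          # preconditions satisfied here
                succ = apply_action(state, act)
                if succ not in visited:         # avoid revisiting states
                    visited.add(succ)
                    frontier.append((succ, plan + [act.name]))
    return None                                 # no plan exists
```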

  27. Backward state-space search (regression planning) • Predecessors: for every action that accomplishes one of the goal literals (and does not undo another literal), remove that literal and add all of the action's preconditions • The slide's diagram regresses the goal {At(Work), Dry}: through WalkWithUmbrella(location1, Work, umbr) it becomes {At(location1), Holding(umbr), IsUmbrella(umbr), Dry}; regressing Holding(umbr) through TakeObject(location2, umbr) gives {At(location1), At(location2), IsAt(umbr, location2), CanBeCarried(umbr), IsUmbrella(umbr), HandEmpty, Dry} (the diagram also shows WalkWithUmbrella(location2, location1) as a way to connect the two At literals) • This is accomplished in the start state, by substituting location1 = location2 = Home, umbr = Umbrella • WalkWithoutUmbrella can never be used, because it undoes Dry (this is good)

  28. Heuristics for state-space search • Cost of a plan: (usually) the number of actions • Heuristic 1: plan for each subgoal (literal) separately, sum the costs of the plans • Does this ever underestimate? Overestimate? • Heuristic 2: solve a relaxed planning problem in which actions never delete literals (empty-delete-list heuristic) • Does this ever underestimate? Overestimate? • Very effective, even though it requires solving an (easy) planning problem • Progression planners with the empty-delete-list heuristic perform well
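
A rough sketch of the delete-relaxation idea: ignore delete lists and count how many rounds of parallel action application it takes to make the goal true. (A real empty-delete-list heuristic extracts a relaxed plan and counts its actions; this layered fixpoint is a simpler stand-in that keeps the key idea of never deleting literals.)

```python
def relaxed_layers(state, goal, actions, limit=100):
    """Delete-relaxation sketch: layers of (delete-free) action application
    needed until all goal literals hold. Reuses the Action sketch above."""
    reached = set(state)
    for layer in range(limit):
        if goal <= reached:
            return layer                 # 0 if the goal already holds
        new = set(reached)
        for act in actions:
            if act.pre <= reached:
                new |= act.add           # apply add effects; ignore delete lists
        if new == reached:               # fixpoint without reaching the goal
            return float("inf")          # unreachable even in the relaxation
        reached = new
    return float("inf")
```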

  29. Blocks world • On(B,A), On(A,Table), On(D,C), On(C,Table), Clear(B), Clear(D) • (figure: block B stacked on A, block D stacked on C, both stacks on the table)

  30. Blocks world: Move action • Move(x,y,z) • Preconditions: • On(x,y), Clear(x), Clear(z) • Effects: • On(x,z), Clear(y), NOT(On(x,y)), NOT(Clear(z))

  31. Blocks world: MoveToTable action • MoveToTable(x,y) • Preconditions: • On(x,y), Clear(x) • Effects: • On(x,Table), Clear(y), NOT(On(x,y))

  32. Blocks world example • Goal: On(A,B) AND Clear(A) AND On(C,D) AND Clear(C) • A plan: MoveToTable(B,A), MoveToTable(D,C), Move(C,Table,D), Move(A,Table,B) • Really two separate problem instances
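
Putting the pieces together: grounding the Move and MoveToTable schemas over the four blocks and running the BFS planner sketch from slide 26 should recover a four-action plan like the one above (this reuses the hypothetical Action, applicable/apply_action, and forward_search helpers defined in the earlier snippets):

```python
# Ground the blocks-world schemas from slides 30-31 over blocks A-D.
blocks = ["A", "B", "C", "D"]
actions = []
for x in blocks:
    for y in blocks + ["Table"]:
        if y == x:
            continue
        if y != "Table":
            # MoveToTable(x, y)
            actions.append(Action(f"MoveToTable({x},{y})",
                pre=frozenset({f"On({x},{y})", f"Clear({x})"}),
                add=frozenset({f"On({x},Table)", f"Clear({y})"}),
                delete=frozenset({f"On({x},{y})"})))
        for z in blocks:
            if z in (x, y):
                continue
            # Move(x, y, z); the vacuous Clear(Table) effect is harmless
            actions.append(Action(f"Move({x},{y},{z})",
                pre=frozenset({f"On({x},{y})", f"Clear({x})", f"Clear({z})"}),
                add=frozenset({f"On({x},{z})", f"Clear({y})"}),
                delete=frozenset({f"On({x},{y})", f"Clear({z})"})))

start = frozenset({"On(B,A)", "On(A,Table)", "On(D,C)", "On(C,Table)",
                   "Clear(B)", "Clear(D)"})
goal = {"On(A,B)", "Clear(A)", "On(C,D)", "Clear(C)"}
print(forward_search(start, goal, actions))   # a shortest (4-action) plan
```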

  33. A partial-order plan • Goal: On(A,B) AND Clear(A) AND On(C,D) AND Clear(C) • The slide's diagram: Start precedes MoveToTable(B,A) and MoveToTable(D,C); MoveToTable(B,A) precedes Move(A,Table,B); MoveToTable(D,C) precedes Move(C,Table,D); both Move actions precede Finish • Any total order on the actions consistent with this partial order will work

  34. A partial-order plan (with more detail) • Start supplies On(B,A), Clear(B), On(A,Table), On(C,Table), Clear(D), On(D,C) • MoveToTable(B,A) consumes On(B,A) and Clear(B); MoveToTable(D,C) consumes Clear(D) and On(D,C) • Move(A,Table,B) consumes Clear(A) (from MoveToTable(B,A)), Clear(B), and On(A,Table); Move(C,Table,D) consumes On(C,Table), Clear(D), and Clear(C) (from MoveToTable(D,C)) • Finish requires On(A,B), Clear(A), Clear(C), On(C,D)

  35. Not everything decomposes into multiple problems: Sussman Anomaly • (figure: C on A; A and B on the table) • Goal: On(A,B) AND On(B,C) • Sequentially pursuing the goals does not work • Optimal plan: MoveToTable(C,A), Move(B,Table,C), Move(A,Table,B)

  36. An incorrect partial-order plan for the Sussman Anomaly • Start supplies Clear(C), On(C,A), On(A,Table), On(B,Table), Clear(B) • MoveToTable(C,A) consumes Clear(C) and On(C,A); Move(B,Table,C) consumes Clear(C), On(B,Table), Clear(B); Move(A,Table,B) consumes Clear(A), On(A,Table), Clear(B); Finish requires On(A,B), On(B,C) • Move(B,Table,C) must be after MoveToTable(C,A), otherwise it will ruin Clear(C) • Move(A,Table,B) must be after Move(B,Table,C), otherwise it will ruin Clear(B)

  37. Corrected partial-order plan for the Sussman Anomaly • Start supplies Clear(C), On(C,A), On(A,Table), On(B,Table), Clear(B) • MoveToTable(C,A) consumes Clear(C) and On(C,A); Move(B,Table,C) consumes Clear(C), On(B,Table), Clear(B); Move(A,Table,B) consumes Clear(A) (now supplied by MoveToTable(C,A)), On(A,Table), Clear(B); Finish requires On(A,B), On(B,C) • No more flexibility in the order, due to protection of the causal links
