
Dynamic Systems And Control


Presentation Transcript


  1. Dynamic Systems And Control • Course info • Introduction (What this course is about)

  2. Course home page • Home page: http://www.cs.huji.ac.il/~control

  3. Course Info • Home page: http://www.cs.huji.ac.il/~control • Staff: Prof. Naftali Tishby (Ross, room 207), Lavi Shpigelman (Ross, room 61) • Class: Sunday, 12-3pm, ICNC • Grading: 40% exercises, 60% project • Textbooks: • Chi-Tsong Chen, Linear System Theory and Design, Oxford University Press, 1999 • Robert F. Stengel, Optimal Control and Estimation, Dover Publications, 1994 • J.J.E. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, Englewood Cliffs, New Jersey, 1991 • H. K. Khalil, Nonlinear Systems, Prentice Hall, 2001

  4. Intro – Dynamical Systems • What are dynamic systems? Physical things with states that evolve in time

  5. (Optimal) Control • Objective: Interact with a dynamical system to achieve desired goals, judged by measures of optimality: • Stabilize a nuclear reactor within safety limits • Fly an aircraft while minimizing fuel consumption • Pick up a glass without spilling any milk
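To make "measures of optimality" concrete, here is one standard way such goals are written down; the notation below is illustrative and not taken from the slides:

```latex
% Illustrative finite-horizon cost functional: the controller chooses u(t)
% to minimize a running cost (fuel burned, deviation from safety limits,
% spilled milk, ...) plus a terminal cost, subject to the system dynamics.
u^{*} = \arg\min_{u}\; \Phi\bigl(x(T)\bigr) + \int_{0}^{T} L\bigl(x(t), u(t)\bigr)\,\mathrm{d}t
\qquad \text{subject to } \dot{x}(t) = f\bigl(x(t), u(t)\bigr),\; x(0) = x_{0}.
```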

  6. Example: Prosthetics → bionics • Problem: Make a leg that knows when to bend. • Inputs: • Knee angle • Ankle angle • Ground pressure • Stump pressures • Outputs: • Variable joint stiffness and damping

  7. Example: Robotics, Reinforcement Learning • How do you stand up? • How do you teach someone to stand up? • Reinforcement learning: Let the controller learn by trial and error and give it general feedback (reinforce ‘good’ moves). • Training a 3-piece robot to stand up: • Start of training: • End of training:

  8. Modeling (making assumptions) • Graphical representation (information flow): a block diagram in which the Task Goal feeds the Controller, the Controller sends Control Signals to the Plant, and the Plant returns Observations to the Controller • Mathematical relationships

  9. Control Example: Motor control • Plant (controlled system): hand • Controller: nervous system • Control objective: task dependent (e.g. hit ball) • Plant inputs: neural muscle activation signals • Plant outputs: visual, proprioceptive, ... • Plant state: positions, velocities, muscle activations, available energy, ... • Controller input: noisy sensory information • Controller output: noisy neural patterns

  10. Modeling Motor Control • Block diagram: the Task Goal feeds the Brain (controller), whose Control Signals (a neural pattern) drive the Hand (plant); Observations (sensory feedback) return to the controller • Details follow.

  11. Optimal Movements • Control objective: Reach from a to b. • Fact: more than one way to skin a cat... • How to choose: Add an optimality principle • E.g. optimality principle: Minimum variance at b. • Modeling assumption(s): Control is noisy, with noise proportional to ||control signal|| • Control problem: find the “optimal” control signal. • Note: No feedback (open-loop control)
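One way this could be written down (the notation is mine, not from the slide): with signal-dependent noise and no feedback, the open-loop minimum-variance problem reads

```latex
% Illustrative open-loop, minimum-variance formulation with signal-dependent noise:
% the size of the control noise scales with the size of the command itself.
\dot{x}(t) = f\bigl(x(t), u(t)\bigr) + c\,\lVert u(t)\rVert\,\xi(t),
\qquad
u^{*} = \arg\min_{u}\ \operatorname{Var}\!\bigl[x(T)\bigr]
\quad \text{subject to } \mathbb{E}\bigl[x(T)\bigr] = b,\; x(0) = a .
```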

  12. Modeling Motor Control - Details • The sensory-motor control loop • Wolpert DM & Ghahramani Z (2000) Computational principles of movement neuroscience. Nature Neuroscience 3:1212-1217

  13. State Estimation – step 1 • Open-loop estimate (w/o feedback)

  14. State Estimation – step 2 • Step 1: The control signal and a forward dynamics model (dynamics predictor) update the state estimate. • Step 2: Sensory information and a forward sensory model (sensory predictor) are used to refine the estimate (a minimal predictor-corrector sketch follows).
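A minimal sketch of these two steps, assuming a scalar linear-Gaussian model so the "forward dynamics model" and "forward sensory model" are just numbers; all parameter values here are made up for illustration:

```python
# Two-step state estimation: predict from the control signal, then correct
# with sensory information (a scalar Kalman-filter-style update).
a, b = 0.95, 0.1        # assumed dynamics model: x' = a*x + b*u + process noise
h = 1.0                 # assumed sensory model:  z = h*x + sensory noise
q, r = 0.01, 0.25       # process / sensory noise variances (illustrative)

def predict(x_est, p_est, u):
    """Step 1: open-loop update from the control signal and the dynamics model."""
    x_pred = a * x_est + b * u
    p_pred = a * p_est * a + q
    return x_pred, p_pred

def correct(x_pred, p_pred, z):
    """Step 2: refine the prediction with the sensory observation z."""
    k = p_pred * h / (h * p_pred * h + r)     # how much to trust the senses
    x_new = x_pred + k * (z - h * x_pred)     # correct by the sensory prediction error
    p_new = (1.0 - k * h) * p_pred
    return x_new, p_new

# one estimation cycle with made-up numbers
x_est, p_est = 0.0, 1.0
x_pred, p_pred = predict(x_est, p_est, u=1.0)
x_est, p_est = correct(x_pred, p_pred, z=0.3)
print(x_est, p_est)
```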

  15. Context Estimation (Adaptive Control)

  16. Adaptive Control – control signal generation • An inverse model learns to translate a desired state (sequence) into a control signal. • A non-adapting, low-gain feedback controller does the same for the state error; its output is used as an error signal for learning the inverse model (a minimal sketch of this scheme follows).
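A minimal sketch of this feedback-error-learning scheme, assuming a linear inverse model and a toy scalar plant (all names and numbers below are illustrative, not from the slides):

```python
# Feedback-error learning: a linear inverse model u = w * x_desired is trained
# online, using the output of a fixed low-gain feedback controller as its error.
def feedback_error_learning_step(w, x_desired, x_actual, k_fb=0.2, lr=0.5):
    u_ff = w * x_desired                  # feedforward command from the inverse model
    u_fb = k_fb * (x_desired - x_actual)  # low-gain feedback on the state error
    u = u_ff + u_fb                       # total command sent to the plant
    w = w + lr * u_fb * x_desired         # feedback output acts as the learning error
    return w, u

# toy plant: x_actual = 0.5 * u, so the true inverse gain is 2.0
w, x_actual = 0.0, 0.0
for _ in range(500):
    w, u = feedback_error_learning_step(w, x_desired=1.0, x_actual=x_actual)
    x_actual = 0.5 * u
print(round(w, 2))   # approaches the true inverse gain of 2.0 as the feedback term shrinks
```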

  17. Simple(st) Dynamical System Example: Shock Absorber • Consider a shock absorber: a mass m suspended by the absorber, with external force u applied and contraction y as output. • We wish to formulate a dynamical system model of the mass that is suspended by the absorber. • We choose a linear Ordinary Differential Equation (ODE) of 2nd order, equating the net force to the damping force, the spring force, and the external force: m\ddot{y}(t) = -b\,\dot{y}(t) - k\,y(t) + u(t), with damping coefficient b and spring constant k (a simulation sketch follows).
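A minimal simulation sketch of this ODE; the parameter values and the forcing profile are made up for illustration:

```python
# Simulate  m*y'' + b*y' + k*y = u(t)  by forward-Euler integration of the
# equivalent first-order state (y, v), where v is the contraction velocity.
m, b, k = 1.0, 0.8, 4.0            # mass, damping coefficient, spring constant
dt, steps = 0.001, 5000            # integration step and horizon (5 seconds)

def u(t):
    return 1.0 if t < 1.0 else 0.0 # push down for 1 s, then release

y, v = 0.0, 0.0                    # contraction and its velocity
for i in range(steps):
    t = i * dt
    acc = (u(t) - b * v - k * y) / m   # acceleration from the force balance
    y, v = y + dt * v, v + dt * acc
print(round(y, 3))                 # settles back toward 0 once u is removed
```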

  18. Elements of the Dynamic System • The plant comprises: • A dynamic process: state x evolving with time (differential equations), driven by controllable inputs u and perturbed by process noise w • Observable process outputs y • An observation process: observations z, corrupted by observation noise n (a state-space sketch follows).
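One compact way to tie these elements together (illustrative notation, shown here for the linear time-invariant case):

```latex
% Dynamic process: state x driven by controllable inputs u and process noise w.
\dot{x}(t) = A\,x(t) + B\,u(t) + w(t),
% Observable process outputs y, and observations z corrupted by observation noise n.
\qquad y(t) = C\,x(t), \qquad z(t) = y(t) + n(t).
```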

  19. Controllability & Observability of the Dynamic Process States • Main issues: stability, stabilizability • The state x divides into controlled vs. uncontrolled parts (driven by the controllable inputs u or only by the disturbance (noise) w) and into observed vs. unobserved parts (reflected in the observable outputs y or not) (a rank-test sketch follows).
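A small sketch of the standard rank tests for these properties; the example matrices are assumptions (they match the shock-absorber parameters used in the sketch after slide 17), not values from the slides:

```python
# Rank tests for controllability and observability of an LTI system
#   x' = A x + B u,   y = C x.
import numpy as np

A = np.array([[0.0, 1.0],
              [-4.0, -0.8]])   # e.g. shock-absorber dynamics with m=1, k=4, b=0.8
B = np.array([[0.0],
              [1.0]])          # force enters through the velocity equation
C = np.array([[1.0, 0.0]])     # only the contraction y is measured

n = A.shape[0]
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

print(np.linalg.matrix_rank(ctrb) == n)   # True -> every state is controllable
print(np.linalg.matrix_rank(obsv) == n)   # True -> every state is observable
```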

  20. Other Modeling Issues* • Time-varying / Time-invariant • Continuous time / Discrete time • Continuous states / Discrete states • Linear / Nonlinear • Lumped / Non-lumped (having a state vector of finite/infinite dimension) • Stochastic / Deterministic • More: • Types of disturbances (noise) • Control models • * All combinations are possible

  21. Rough course outline • Review of continuous (state and time), linear, time-invariant, state-space models: linear algebra, state-space models, solutions, realizations, stability, observability, controllability • Noiseless optimal control (nonlinear): loss functions, calculus of variations, optimization methods • Stochastic LTI Gaussian models: state estimation, stochastic optimal control • Model learning • Nonlinear system analysis: phase plane analysis, Lyapunov theory • Nonlinear control methods: feedback linearization, sliding control, adaptive control, reinforcement learning, ML
