Context-Dependent Network Agents

Presentation Transcript


  1. Context-Dependent Network Agents EPRI/ARO CINS Initiative CDNA Consortium CMU, RPI, TAMU, Wisconsin, UIUC

  2. The CDNA Consortium
  • Carnegie Mellon University: Prof. Pradeep Khosla, Prof. Bruce Krogh, Dr. Eswaran Subrahmanian, Prof. Sarosh Talukdar
  • Rensselaer Polytechnic Institute: Prof. Joe Chow
  • Texas A&M University: Prof. Garng Huang, Prof. Mladen Kezunovic
  • University of Illinois at Urbana-Champaign: Prof. Lui Sha
  • University of Minnesota: Prof. Bruce Wollenberg

  3. CDNA Objective
  • Improve the agility and robustness (survivability) of large-scale dynamic networks that face new and unanticipated operating conditions.
  • Target networks: the U.S. power grid and local networks.

  4. CDNA Approach
  • Improve the decision-making competence of components distributed throughout the network, particularly existing and future control devices such as relays, voltage regulators, and FACTS devices.

  5. Why CDNA? Centralized real-time control is:
  • infeasible in many situations, because of the distribution of information and the growing number of independent decision makers on the grid;
  • intractable, because robust control algorithms simply do not scale and the problems are NP-hard;
  • undesirable, because we contend that centralized solutions are less robust against major network upsets and less adaptive to new situations.

  6. Why CDNA? (cont'd.)
  • Control devices are already pre-programmed for anticipated situations, BUT "one-size-fits-all" strategies are conservative in most cases and wrong in some (the most critical!) situations.
  • The communication and computation technology needed for CDNA exists today.

  7. Key Research Issues • modeling • operating modes • contingencies • impact of restructured power systems • device capabilities/influence

  8. Key Research Issues - 2 • state estimation • using local information • network state estimation • real-time constraints • hybrid control • adaptive mode switching • coverage

  9. Key Research Issues - 3 • learning • distributed learning • state-space decomposition • coordination • collaboration strategies • moving off-line techniques for asynchronous algorithms on-line

  10. Decentralized Large Area Power System Control (Bruce Wollenberg, University of Minnesota)

  11. Objectives
  • The research goal is to show how all standard functions built on a power flow calculation can be accomplished without a large-area (centralized) model and computer system.
  • Each region of the power system retains its own control system, models its own power network, and communicates with its immediate neighbors.
  • Functions that now require central computing: Security Analysis, Optimal Power Flow, Available Transfer Capability.

  12. Typical Power Pool or ISO
  Trends: getting larger; standard data formats; less functionality in regional systems.
  Examples: California ISO, Midwest ISO.

  13. Networked Control Systems - Region can be any size - Can extend to any number of regions - Aggregate has same functionality as large area control system - Can new functionality be added that would not be available in a central system?

  14. Collaborative Nets Eduardo Camponogara and Sarosh Talukdar Institute for Complex Engineered Systems Carnegie Mellon University

  15. Controlling Large Networks
  Operating goals fall into categories: costs & profits, safety, regulations, equipment limits.
  Limitations: no single organization can cope with all the operating goals; diverse skills and multitudes of agents are needed.
  Control solution: delegate goals to separate organizations.
  Organization: a network of agents and communication links.
  Agent: any entity that makes and implements decisions, such as relays, control devices, and humans.

  16. Multiple Organizations in the Power Grid
  • Protection Systems: agents are relays; goal: prevent cascading failures; reaction time: 0.01 to 0.1 secs; agent skills: low; number of agents: large; agent speed: fast.
  • Generator Control: agents are governors and exciters; goal: keep equipment under limits; reaction time: seconds.
  • Security Systems: agents are simulation & learning tools, humans, and optimization software; goal: reduce cost subject to constraints; reaction time: hours to days; agent skills: high; number of agents: small; agent speed: slow.

  17. Organizations Do Not Collaborate
  Current scenario (Security Systems, Protection Systems, Generator Control):
  • Agents in separate organizations do not "talk."
  • Agents might work at cross-purposes.
  • Organizations might interfere with one another.
  How do we make individual agents more effective? How do we prevent interference between organizations?

  18. Improving Overall Performance of Nets
  The suggested answer is based on:
  1.) The use of a common framework to specify agent tasks.
  2.) The implementation of a sparse, collaborative net that can cut across the hierarchic organizations.
  3.) The design of collaboration protocols to promote effective exchange of information.
  [Figure: a C-Net cutting across the Security Systems, Protection Systems, and Generator Control organizations.]

  19. What Is a Collaborative Net?
  A flat organization of dissimilar agents that can integrate with hierarchic organizations. Properties:
  • Agents are autonomous within the C-Net: they have initiative, and they make and implement decisions.
  • Agents collaborate with their neighbors. The collaboration protocol determines what information is exchanged, in which way, and how agents make use of it.
  Advantages: quick, fault tolerant, open.
  Disadvantages: no structural coordination (if necessary, it can emerge from the collaboration protocol); unfamiliar.

  20. The Rolling Horizon Formulation
  A framework to solve dynamic control problems as a series of static optimization problems.
  The dynamic control problem:
    Minimize f(x, dx/dt, u, t)
    Subject to h(x, dx/dt, u, t) = 0
               g(x, dx/dt, u, t) <= 0
  The steps of the rolling horizon formulation:
  1.) Choose a horizon [t0, ..., tN], i.e., a set of time points where t0 is the current time.
  2.) Let x(tn) be the state predicted at time tn; x(t0) is the current state.
  3.) Let u(tn) be the planned actions at time tn.
  4.) Let X = [x(t0), ..., x(tN)] and U = [u(t0), ..., u(tN)].
  5.) Choose a model to predict x(tn+1) from x(tn) and u(tn), possibly a discrete approximation of the dynamic equations (e.g., Euler's step).
  The static optimization problem (P):
    Minimize f(X, U)
    Subject to H(X, U) = 0
               G(X, U) <= 0
  The prediction model is embedded in H(X, U); G(X, U) approximates the operating constraints in g(x, dx/dt, u, t). (A minimal numerical sketch of this construction follows below.)
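
A minimal numerical sketch of the construction above, for a toy one-dimensional system with an assumed dynamics dx/dt = a*x + b*u, an assumed quadratic cost, and an assumed actuator limit (none of these values come from the slides): the Euler prediction model becomes the equality constraints H(X, U) = 0, and the actuator limit plays the role of G(X, U) <= 0.

    # Hedged sketch: toy instantiation of the static problem (P) via an Euler step.
    import numpy as np
    from scipy.optimize import minimize

    a, b = -0.5, 1.0    # assumed toy dynamics dx/dt = a*x + b*u
    dt, N = 0.1, 10     # Euler step size and horizon length (assumed)
    u_max = 2.0         # assumed actuator limit, i.e., G(X, U) <= 0

    def unpack(z):
        # z stacks the predicted states x(t1)..x(tN) and planned controls u(t0)..u(tN-1)
        return z[:N], z[N:]

    def f_obj(z):
        # drive the state to zero with a small penalty on control effort
        X, U = unpack(z)
        return np.sum(X**2) + 0.1 * np.sum(U**2)

    def H_eq(z, x0):
        # Euler prediction model embedded as equality constraints H(X, U) = 0
        X, U = unpack(z)
        X_prev = np.concatenate(([x0], X[:-1]))
        return X - (X_prev + dt * (a * X_prev + b * U))

    def solve_P(x0):
        # instantiate and solve the static optimization problem (P)
        cons = [{"type": "eq", "fun": H_eq, "args": (x0,)}]
        bnds = [(None, None)] * N + [(-u_max, u_max)] * N
        res = minimize(f_obj, np.zeros(2 * N), constraints=cons,
                       bounds=bnds, method="SLSQP")
        return unpack(res.x)

    X_plan, U_plan = solve_P(x0=1.0)
    print("first planned control u(t0):", U_plan[0])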

  21. The Rolling Horizon Algorithm
  • A model is used to predict the future state of the physical network over a set of discrete points in time (the horizon).
  • An optimization procedure computes the control actions, over the horizon, that minimize "error."
  Steps of the algorithm (see the loop sketched below):
  1.) The current time is t0.
  2.) Sense the current state x(t0).
  3.) Instantiate the static optimization problem (P).
  4.) Solve (P) to obtain the control actions U = [u(t0), ..., u(tN)].
  5.) Implement the control action u(t0).
  6.) Pause and let the physical network progress in time; the horizon "rolls" forward.
  7.) Repeat from step 1.
  Design issues:
  • The horizon has to be long enough to avoid present actions with poor long-term effects.
  • The accuracy of the prediction model.
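
The outer loop of the algorithm, reusing the toy solve_P, dynamics, and step size from the sketch above; plant_step stands in for the physical network and is again an assumption for illustration only.

    # Hedged sketch: the rolling horizon loop around the toy solve_P above.
    def plant_step(x, u):
        # stand-in for the physical network; in practice this is the real plant
        return x + dt * (a * x + b * u)

    x = 1.0                          # sensed current state x(t0)
    for k in range(50):              # 50 sampling intervals
        _, U_plan = solve_P(x)       # steps 2-4: sense, instantiate and solve (P)
        x = plant_step(x, U_plan[0]) # step 5: implement only u(t0)
        # step 6: the horizon rolls forward and (P) is re-solved at the next sample
    print("final state:", x)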

  22. The Rolling Horizon: Plan Ahead
  [Figure: control vs. time over the horizon t0 ... t4; the control implemented "now" (at t0) versus the controls predicted by the model for future times.]

  23. The Rolling Horizon: Update Plans Frequently
  [Figure: control vs. time over the horizon t0 ... t4; the plans made at t0 are revised at t1 as the horizon rolls forward.]

  24. A Framework for Specifying Agent Tasks
  • Break up the static optimization problem, (P), into a set of M small, localized subproblems, {(Pm)}.
  • Assemble M agents into a C-Net, so that each agent matches one subproblem.
  Agent m and its subproblem (Pm) see three kinds of variables (a sketch of this partition follows below):
  • Proximate variables (xm, um): agent m senses the values of a subset xm of x and sets the values of a subset um of u; it has partial perception of, and limited authority over, the physical network.
  • Neighborhood variables (ym): variables sensed or set by neighbors.
  • Remote variables (zm): all the other variables.
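
As an illustration of this partition of variables, here is a minimal sketch; the class name AgentTask and its fields are invented for this example and are not part of the CDNA framework.

    # Hedged sketch: one way to represent agent m's view of (P).
    from dataclasses import dataclass, field

    @dataclass
    class AgentTask:
        agent_id: int
        proximate_x: set = field(default_factory=set)   # x_m: states the agent senses
        proximate_u: set = field(default_factory=set)   # u_m: controls the agent sets
        neighborhood: set = field(default_factory=set)  # y_m: sensed or set by neighbors

        def remote(self, all_vars: set) -> set:
            # z_m: all the other variables
            return all_vars - self.proximate_x - self.proximate_u - self.neighborhood

    # Example: agent 1 in a toy four-variable network
    all_vars = {"x1", "x2", "u1", "u2"}
    ag1 = AgentTask(1, proximate_x={"x1"}, proximate_u={"u1"}, neighborhood={"x2"})
    print("remote variables of agent 1:", ag1.remote(all_vars))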

  25. Matching Agents to Subproblems
  The rolling horizon formulation of (Pm):
    Minimize fm(Xm, Um, Ym, Zm)
    Subject to Hm(Xm, Um, Ym, Zm) = 0
               Gm(Xm, Um, Ym, Zm) <= 0
  The matching between agent m and its subproblem (Pm) is:
  • Exact: if (Pm) is not sensitive to the remote variables, i.e., fm = fm(Xm, Um, Ym), Hm = Hm(Xm, Um, Ym), Gm = Gm(Xm, Um, Ym).
  • Near: if (Pm) is only weakly sensitive to the remote variables.

  26. Collaboration Protocols
  A protocol prescribes: a) the data exchanged by agents, b) in which way, and c) how agents use the data to solve their problems.
  Two protocols:
  • Voting: in setting the values of its controls, each agent takes the votes of its neighbors into account.
  • Proximate exchange: each agent broadcasts its plans to nearby agents, which, in turn, take these plans into account (see the sketch below).
  Versions:
  • Semi-synchronous, semi-parallel (mutual help): synchronization between neighbors; parallel work between non-neighbors.
  • Asynchronous, parallel.
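
A minimal sketch of what a proximate-exchange round could look like; the Agent class, the mailbox mechanism, and the local update rule are stand-ins invented for this illustration, not the consortium's protocol implementation.

    # Hedged sketch: agents broadcast plans to neighbors and re-solve locally.
    class Agent:
        def __init__(self, agent_id, neighbors):
            self.id = agent_id
            self.neighbors = neighbors   # ids of neighboring agents
            self.plan = 0.0              # current planned control u_m
            self.mailbox = {}            # latest plans received from neighbors

        def solve_local(self):
            # stand-in for solving (Pm): move the plan toward the neighborhood
            # average, a crude proxy for taking neighbors' plans into account
            if self.mailbox:
                target = sum(self.mailbox.values()) / len(self.mailbox)
                self.plan += 0.5 * (target - self.plan)

        def broadcast(self, agents):
            for n in self.neighbors:
                agents[n].mailbox[self.id] = self.plan

    # three agents in a line: 0 -- 1 -- 2
    agents = {0: Agent(0, [1]), 1: Agent(1, [0, 2]), 2: Agent(2, [1])}
    agents[0].plan, agents[2].plan = 1.0, -1.0
    for _ in range(20):                  # collaboration rounds, here executed serially
        for ag in agents.values():
            ag.solve_local()
            ag.broadcast(agents)
    print({i: round(a.plan, 3) for i, a in agents.items()})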

  27. Equivalence and Convergence
  Two questions:
  • Equivalence: when are the solutions to the network of subproblems, {(Pm)}, solutions to (P)?
  • Convergence: when does the effort of the collaborative agents converge to a solution of {(Pm)}?
  Sufficient conditions for equivalence and convergence:
  1.) Coverage: the C-Net must provide complete coverage of the network.
  2.) Density: the matching of agents to subproblems must be exact.
  3.) Convexity: (P) must be convex.
  4.) Feasibility: (P) must be strictly feasible.
  5.) Interior-point method: the agents must use an interior-point method.
  6.) Serial work: the agents must run the semi-synchronous, semi-parallel protocol.

  28. Relaxing Sufficient Conditions in Practice
  We believe that the following conditions can be relaxed in practice:
  1.) Density: near matchings of agents to subproblems are likely to be adequate.
  2.) Convexity: it is impractical to require in real-world networks.
  3.) Serial work: serial work within a neighborhood is too slow.
  A prototypical test network: a forest of pendulums.
  • One agent at each pendulum.
  • Each agent controls two forces: horizontal and orthogonal.
  • Agents collaborate with their nearest neighbors.

  29. The Dynamic Control Problem
  Drive the pendulums back to the pre-disturbance mode; that is, minimize the cumulative error (from the desired trajectory) plus the total control-input cost, subject to the pendulum dynamics and operating constraints.
  Three control solutions:
  • C1: a centralized, nonlinear optimization package that solves the static optimization problem (P).
  • C2: a centralized, feedback linearization controller.
  • C-Net: a collaborative net, with one agent at each pendulum, that solves {(Pm)}.
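
The objective described on this slide, cumulative error from the desired trajectory plus total control-input cost, might be written over a discretized horizon roughly as follows; the quadratic form and the weight rho are assumptions, since the slide does not show the exact expression.

    # Hedged sketch: cumulative tracking error plus control-input cost.
    import numpy as np

    def pendulum_objective(X, X_des, U, rho=0.1):
        # X, X_des: (N+1, n_states) predicted and desired trajectories
        # U: (N, n_controls) planned control inputs (horizontal and orthogonal forces)
        error_cost = np.sum((X - X_des) ** 2)
        input_cost = rho * np.sum(U ** 2)
        return error_cost + input_cost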

  30. C-Net and C1: Experimental Set-up
  Goal: evaluate the loss in quality of the collaborative net solution.
  Set-up: the C-Net and C1 restore the synchronous mode of the pendulums. At each sample time t,
  1.) solve the network of subproblems, {(Pm)}, with the C-Net,
  2.) record the objective-function evaluation of the C-Net, F(C-Net),
  3.) solve the static optimization problem, (P), with C1, and
  4.) record the objective-function evaluation of C1, F(C1).
  Output data: a list of objective-function-evaluation pairs [F(C-Net), F(C1)].
  Scenarios: place pendulums in a line to form forests of 2 to 9 pendulums, starting from a 2-pendulum forest and adding one pendulum at a time.

  31. C-Net and C1: Results
  The difference in quality between the C-Net and C1 solutions, where F(C-Net) and F(C1) are the objective-function evaluations attained by the C-Net and by controller C1:
  • C-Net excess = [F(C-Net) - F(C1)] / F(C1)
  • C-Net penalty: the mean value of the C-Net excess.
  [Figure: C-Net penalty (%) vs. number of pendulums; the C-Net penalty is low.]
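
For concreteness, the excess and penalty defined above reduce to the following arithmetic; the objective-function values used here are hypothetical placeholders, not results from the experiments.

    # Hedged sketch: C-Net excess and penalty as defined on the slide.
    f_cnet = [10.4, 12.1, 9.8]   # F(C-Net) at three sample times (hypothetical)
    f_c1 = [10.0, 11.5, 9.6]     # F(C1) at the same sample times (hypothetical)

    excess = [(fn - f1) / f1 for fn, f1 in zip(f_cnet, f_c1)]
    penalty = 100.0 * sum(excess) / len(excess)   # mean excess, in percent
    print(f"C-Net penalty: {penalty:.1f}%")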

  32. C-Net and C2: Experimental Set-up
  Goal: evaluate the performance of the C-Net against the feedback linearization controller, C2, a traditional control technique.
  Set-up: the C-Net and C2 restore the synchronous mode of the pendulums.
  Output data: the cumulative error and input cost, f(x, u), for the C-Net and C2.
  Scenario: a forest with 9 pendulums placed in a grid.

  33. C-Net and C2: Results
  Objective: minimize the cumulative error plus the control-input cost, weighted by b. The lower the objective-function evaluation f(x, u), the better the solution.
    b        C-Net    C2 (feedback lin.)
    10e-4    11.89     9.56
    10e-3    12.32    10.49
    10e-2    16.00    17.05
    10e-1    32.07    82.64
  C-Net performance, relative to C2, improves as b grows.

  34. C-Net and C2: Trajectory of Pendulums
  [Figure: pendulums under control of C2 (feedback linearization); C2 immediately drives the pendulums to the desired trajectory.]
  [Figure: pendulums under control of the C-Net; the C-Net waits until it becomes cheaper to drive the pendulums.]

  35. Conclusion
  The experiments show that C-Nets are promising.
  Current research effort:
  • Development of collaboration protocols that allow agents to work asynchronously and in parallel, at their own speed, using safety margins to guarantee feasibility and to foster effective work between slow and fast agents.
  • A taxonomy of collaboration protocols.
  What else have we done?
  • Employed C-Nets to recover synchronous operation of generators in the IEEE 14-, 30-, and 57-bus power networks.
  • Preliminary work on the decomposition of (P) into {(Pm)}: models and algorithms to specify "neighborhood" perception.

  36. Hybrid Control Strategies
