
Computational intelligence: an F-matrix view



Presentation Transcript


  1. Computational intelligence: an F-matrix view Qianchuan Zhao Center for Intelligent and Networked Systems Tsinghua University Beijing 100084, China Presented to: SFI summer school at Qingdao July 8, 2004

  2. Joint work with • Prof. Yu-Chi Ho, Dr. David Pepyne, Prof. Da-Zhong Zheng, Prof. Bruce Krogh, Prof. Qiang Lu, Mr. Kai Sun, Dr. Ke Yang, Mr. Qingshan Jia

  3. Acknowledgement • National Science Foundation of China grants 60074012 and 60274011, funding from the Chinese Ministry of Education, and a Tsinghua University (China) Fundamental Research Funding grant.

  4. Computational Intelligence • Methods inspired by natural intelligence (Genetic Algorithms, Swarm Intelligence, Simulated Annealing, Quantum Computing) • Methods inspired by human brain structure (Artificial Neural Networks) • Methods inspired by how humans reason (Fuzzy Logic)

  5. Outline • Optimization • Modeling strategies • General search strategies • General design strategies • Complexity in behavior of dynamic systems

  6. Outline • Optimization • Modeling strategies • General search strategies • General design strategies • Complexity in behavior of dynamic systems

  7. Optimization An optimization problem is to maximize (minimize) a performance index over a search space, subject to some constraints.
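A minimal sketch of this definition in Python (the search space, performance index, and constraint below are made-up examples, not from the talk):

```python
# Generic optimization over a finite search space: maximize f(x)
# subject to a feasibility constraint.
def optimize(search_space, f, feasible):
    """Return the x maximizing f(x) among feasible solutions."""
    best = None
    for x in search_space:
        if feasible(x) and (best is None or f(x) > f(best)):
            best = x
    return best

# Example: maximize f(x) = -(x - 3)**2 over integers 0..9 with x even.
# Both x = 2 and x = 4 give f = -1; the loop keeps the first one found.
x_star = optimize(range(10), lambda x: -(x - 3) ** 2, lambda x: x % 2 == 0)
print(x_star)  # 2
```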

  8. Complexity in evaluating objective function An objective function f is complex to evaluate if it can only be evaluated by simulation.

  9. Outline • Optimization • Modeling strategies • General search strategy • General design strategy • Complexity in behavior of dynamic systems

  10. Modeling of optimization problems • Encoding • Filtering • Surrogate • Goal softening

  11. Representing solutions • Encoding: using strings or numbers to represent a solution to the optimization problem as input, so that optimization algorithms can proceed. Solutions should be obtainable by decoding the outputs of the optimization algorithms.

  12. Example [Figure: a tour over 4 nodes labeled 1, 2, 3, 4.] • TSP (traveling salesman problem): find a minimum-cost tour of n cities with each city visited once and only once. The sequence of nodes x = 1234 is a solution.
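A brute-force sketch of the 4-city TSP example (the distance matrix below is invented for illustration; it is not from the slides):

```python
from itertools import permutations

# Hypothetical symmetric distance matrix for 4 cities (indices 0..3).
dist = [
    [0, 2, 9, 5],
    [2, 0, 4, 7],
    [9, 4, 0, 3],
    [5, 7, 3, 0],
]

def tour_cost(tour):
    # Cost of visiting the cities in order and returning to the start.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

# Fix city 0 as the start and enumerate the remaining orderings.
best = min((tour_cost((0,) + p), (0,) + p) for p in permutations(range(1, 4)))
print(best)  # (14, (0, 1, 2, 3))
```

Brute force works only for tiny n; the point of the slide is that a tour is naturally encoded as a sequence of node labels.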

  13. Example • Buffer allocation example: a solution is a vector of ten buffers. Alternatively, by observing the constraints, a solution can be defined as a vector of four variables (B0, B4, B5, B8).

  14. Filtering • Solve the original problem in stages. At the first stage, easy constraints are used to narrow down the solution space Θ to a smaller space Θ'. At the second stage, hard constraints are handled only within Θ'.


  16. Example Traditional function optimization: Max f(x) subject to x ∈ [0, 1], where f is a continuously differentiable function. Method: at the first stage, obtain the set Θ' by solving df(x)/dx = 0 over x ∈ R; then solve Max f(x) over x ∈ Θ' ∪ {0, 1}.
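A concrete sketch of this two-stage method on a smooth function (the function f(x) = 4x(1 - x) and its hand-computed derivative are made-up examples, not from the slides):

```python
# Stage 1 handles the easy part (stationary points on the real line);
# stage 2 handles the hard part (the interval constraint) on a small set.
f = lambda x: 4 * x * (1 - x)
df = lambda x: 4 - 8 * x          # derivative of f, computed by hand

# Stage 1: stationary points of f, i.e. roots of df(x) = 0.
stationary = [0.5]                # unique root of 4 - 8x = 0

# Stage 2: restrict to [0, 1] by adding the endpoints, then pick the best.
candidates = stationary + [0.0, 1.0]
x_star = max(candidates, key=f)
print(x_star, f(x_star))  # 0.5 1.0
```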

  17. Example Islanding operation for power systems: under local failures, to avoid collapse of the entire power system, the system is separated into several small islands, each of which can operate in safe conditions.

  18. Example Islanding operation for power systems [Zhao03a][Sun03]: the balance of static power supply and load in each island is a necessary condition for each island to operate safely. First stage: obtain the solution set Θ' by searching over all separation operations that keep static power balance. Second stage: search within Θ' for a truly proper separation operation by simulation.

  19. A power system

  20. Surrogate • Exploration / learning by example: predict complex constraints or the objective function with an ANN. • Averaging: smooth noisy observations by Monte Carlo simulation.
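A sketch of the averaging idea: when f(x) can only be observed through noisy simulation, repeated observations are averaged to estimate it (the simulator and its true underlying function below are hypothetical):

```python
import random

def noisy_sim(x, rng):
    # Hypothetical simulator: true value f(x) = -(x - 3)**2 plus unit noise.
    return -(x - 3) ** 2 + rng.gauss(0, 1)

def estimate(x, n, rng):
    # Monte Carlo surrogate: average n noisy replications.
    return sum(noisy_sim(x, rng) for _ in range(n)) / n

rng = random.Random(0)
# With many replications the estimate approaches the true value f(2) = -1.
est = estimate(2, 10000, rng)
print(round(est, 1))  # -1.0
```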


  22. Example • Q-learning • Neuro-dynamic programming

  23. Goal softening • Instead of asking for the best for sure, we ask for good enough with high probability.

  24. Example • Ordinal Optimization
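A small sketch of the goal-softening idea behind Ordinal Optimization: the chance that N blind picks out of n designs contain at least one of the top-g ("good enough") designs. The formula and the illustrative numbers are standard combinatorics, not taken from the slides:

```python
from math import comb

def p_good_enough(n, g, N):
    # P(at least one top-g design among N distinct uniform picks from n)
    # = 1 - C(n - g, N) / C(n, N).
    return 1 - comb(n - g, N) / comb(n, N)

# Picking just 50 of 1000 designs already catches a top-5% design
# with high probability.
print(round(p_good_enough(1000, 50, 50), 3))
```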

  25. F-matrix [Ho02] [Table: rows are solutions x1, …, x|X|; columns are problem instances f1, f2, …, f|F|; the entry in row xi, column fj is the outcome fj(xi) ∈ {y1, …, y|Y|}.] The number of all different problem instances is |F| = |Y|^|X|. Note the sum for each row is the same.
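The F-matrix for a tiny finite world can be enumerated directly; this sketch (with made-up sizes |X| = 3 and |Y| = 2) checks both claims on the slide, the count |Y|^|X| and the equal row sums:

```python
from itertools import product

X = range(3)      # |X| = 3 solutions
Y = (0, 1)        # |Y| = 2 outcome values

# Each column f of the F-matrix assigns one outcome to every solution,
# so the columns are exactly the tuples in Y^|X|.
columns = list(product(Y, repeat=len(X)))
print(len(columns))  # |Y|**|X| = 8

# Row sums: for a fixed solution x, sum f(x) over all problem instances f.
# Each outcome value appears equally often in each row, so all rows agree.
row_sums = [sum(f[x] for f in columns) for x in X]
print(row_sums)  # [4, 4, 4]
```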

  26. F-matrix (binary outcomes) [Table: with Y = {0, 1}, each column fj is a binary string of length |X|, and the columns enumerate all 2^|X| binary problem instances.]

  27. F-matrix • Assumptions: a) Finite-world assumption: finite search space and finite set of performance values. b) There are no constraints. c) Only polynomially many (P) solutions can be searched.

  28. Outline • Optimization • Modeling strategies • General search strategies • General design strategies • Complexity in behavior of dynamic systems

  29. General search strategies • Neighborhood search • Random guess • Parallel search • Hybrid search • Hill climbing • Backtracking

  30. Neighborhood search (one dimension) [Binary F-matrix, as on slide 26.] All designs in a neighborhood can be listed as the nearby designs of the current design.

  31. Total computation effort consumed: 2+2+2+1+2+1+1+1 Number of successes: 6

  32. Random guess [Binary F-matrix, as on slide 26.] Unlike neighborhood search, random guess jumps around the entire search space stochastically.

  33. Total computation effort consumed: 2+2+2+1+2+1+1+1 Number of successes : 6
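The equal totals on slides 31 and 33 reflect a no-free-lunch-style fact that can be checked by enumeration: averaged over all binary problem instances, any two search orders that probe the same number of distinct points succeed equally often. The sketch below (with n = 3 solutions and "success" meaning finding an outcome 1) uses fixed stand-in orders for the two strategies:

```python
from itertools import product

n = 3
instances = list(product((0, 1), repeat=n))   # all 2**n binary columns

def successes(order, budget):
    # Count instances in which the first `budget` probes hit a 1.
    return sum(any(f[x] for x in order[:budget]) for f in instances)

ns_order = [0, 1, 2]   # a fixed sweep, standing in for neighborhood search
rg_order = [2, 0, 1]   # a shuffled order, standing in for random guess
print(successes(ns_order, 2), successes(rg_order, 2))  # 6 6
```

With 2 distinct probes, an instance is missed only when both probed entries are 0, which happens in 2^(n-2) = 2 of the 8 instances, regardless of which points are probed; both strategies therefore succeed on 6 instances.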

  34. [Diagram: sets S0, S1, S2, S3.]

  35. [Diagram: neighborhood search (NS) and random guess (RG) over the sets S0, S1, S2, S3.]

  36. n is the number of solutions. Sk is the set of binary strings with exactly k ones. The indicator term in the formula is 1 if the problem instance f has outcome 1 for at least one solution among the randomly picked P solutions {x'1, x'2, …, x'P}.
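This slide's quantity can be checked by exhaustive enumeration: for instances in Sk (exactly k ones out of n solutions), the fraction of P-subsets containing at least one 1 equals 1 - C(n-k, P)/C(n, P). The parameter values below are illustrative, not from the slides:

```python
from itertools import product, combinations
from math import comb

def frac_hit(n, k, P):
    # Fraction of (instance, P-subset) pairs, over instances with exactly
    # k ones, in which the subset contains at least one solution with
    # outcome 1.
    S_k = [f for f in product((0, 1), repeat=n) if sum(f) == k]
    hits = total = 0
    for f in S_k:
        for picks in combinations(range(n), P):
            total += 1
            hits += any(f[x] for x in picks)
    return hits / total

n, k, P = 5, 2, 2
print(frac_hit(n, k, P), 1 - comb(n - k, P) / comb(n, P))  # 0.7 0.7
```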

  37. Parallel search [Binary F-matrix, as on slide 26.] Parallel search allows several search procedures to work simultaneously.

  38. [Diagram: controller c coordinating procedures P1 and P2.] P1 is one search procedure; P2 is another search procedure; P12 is the iterative combined search process. At each search step of both procedures, results are reported to the controller.

  39. [Results for P1, P2, and the combined P12.] Total computation effort consumed: 2+2+2+2+2+2+2+2 Number of successes: 6

  40. n is the number of solutions. Sk is the set of binary strings with exactly k ones. One indicator term is 1 if the problem instance f has outcome 1 for at least one design among the P1 designs visited by neighborhood search; the other indicator is defined similarly. The two resulting quantities are equal.

  41. Hybrid [Binary F-matrix, as on slide 26.] Simple search strategies can also be combined.

  42. n is the number of solutions. Sk is the set of binary strings with exactly k ones. One indicator term is 1 if the problem instance f has outcome 1 for at least one design among the randomly picked P1 designs; the other indicator is defined similarly. The two resulting quantities are equal.

  43. Hill climbing [F-matrix with outcomes y1, …, y|Y|, as on slide 25.] The purpose of hill climbing is to find the maximum outcome of the given instance by searching in an increasing direction. If it finds a maximum, we say it makes a hit.
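A minimal sketch of hill climbing on a one-dimensional finite landscape (the landscape values are invented for illustration): keep moving to a better neighbor until none exists.

```python
# Outcomes y indexed by solution x; a made-up landscape with a local
# maximum at index 2 and the global maximum at index 6.
landscape = [1, 3, 5, 4, 2, 6, 7, 3]

def hill_climb(start):
    x = start
    while True:
        neighbors = [i for i in (x - 1, x + 1) if 0 <= i < len(landscape)]
        best = max(neighbors, key=lambda i: landscape[i])
        if landscape[best] <= landscape[x]:
            return x   # no better neighbor: a maximum (a "hit" only if global)
        x = best

print(hill_climb(0), hill_climb(4))  # 2 6
```

Starting from index 0 the climb stalls at the local maximum (index 2); starting from index 4 it reaches the global maximum (index 6), illustrating why a hit is not guaranteed.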

  44. [Diagram: hill climbing (HC) and neighborhood search (NS).]
