
### Derivative-Based Fuzzy System Optimization

Dan Simon

Cleveland State University

Suppose we have a fuzzy controller that operates for N time steps. The controller error can be measured as the sum of squared tracking errors

E = ½ Σq (ŷq − yq)², q = 1, …, N

where ŷq is the controller output and yq is the desired output at time step q.
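As a minimal sketch, assuming the error is the usual sum of squared tracking errors over the run (the function name and the sample numbers below are illustrative):

```python
import numpy as np

def controller_error(y_hat, y_ref):
    """Sum-of-squares tracking error over N time steps:
    E = 0.5 * sum_q (y_hat_q - y_ref_q)^2."""
    y_hat = np.asarray(y_hat, dtype=float)
    y_ref = np.asarray(y_ref, dtype=float)
    return 0.5 * np.sum((y_hat - y_ref) ** 2)

# Constant reference (e.g., a cruise-control set speed) and a simulated output:
y_ref = np.full(4, 60.0)
y_out = np.array([58.0, 59.0, 60.0, 60.0])
print(controller_error(y_out, y_ref))  # 0.5*(4+1+0+0) = 2.5
```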

Input Modal Points cij

Note that yq is constant. Therefore,

∂E/∂cij = Σq (ŷq − yq) ∂ŷq/∂cij

where at each time step q the defuzzified output is

ŷq = Σj wj ȳj Jj / Σj wj Jj

wj is the firing strength, ȳj is the centroid, and Jj is the area of the j-th output fuzzy membership function.
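The centroid defuzzification described here (firing strengths wj weighting the centroids ȳj of output MFs with areas Jj) can be sketched as follows; the function name and the two-rule numbers are illustrative:

```python
import numpy as np

def defuzzify(w, ybar, J):
    """Centroid defuzzification:
    y_hat = sum_j w_j * ybar_j * J_j / sum_j w_j * J_j
    where w_j = firing strength, ybar_j = centroid, and
    J_j = area of the j-th output membership function."""
    w, ybar, J = map(np.asarray, (w, ybar, J))
    return np.sum(w * ybar * J) / np.sum(w * J)

# Two active rules with equal-area output MFs centered at -1 and 0:
print(defuzzify(w=[0.7, 0.3], ybar=[-1.0, 0.0], J=[1.0, 1.0]))  # -0.7
```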

Definition:

ri1k = 1 if x1 fuzzy set i is a premise of the k-th rule and wk = fi1(x1), and ri1k = 0 otherwise. In other words, ri1k = 1 if x1 determines the activation level of the k-th rule because of its membership in the i-th fuzzy set.

Similarly, ri2k = 1 if x2 fuzzy set i is a premise of the k-th rule and wk = fi2(x2), and ri2k = 0 otherwise.

Example:

The inputs are the error x1 and the change in error x2, and the output is the throttle position change y. Suppose the current inputs have the memberships

x1: NS (0.8) and Z (0.2)
x2: Z (0.3) and PS (0.7)

and the 12-th rule is: IF x1 is NS AND x2 is PS THEN y is NS.

The firing strength of rule 12 is w12 = min(0.8, 0.7) = 0.7, which is x2's membership in PS (fuzzy set 4 of x2), not x1's membership in NS. Therefore r4,2,12 = 1, ri,2,12 = 0 for i ∈ {1, 2, 3, 5}, and ri,1,12 = 0 for all i ∈ {1, …, 5}.
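A quick numeric check of the rule 12 example, assuming min inference and the set ordering NL, NS, Z, PS, PL (so NS is set 2 of x1 and PS is set 4 of x2):

```python
# Rule 12 premises: x1 is NS (fuzzy set 2 of x1), x2 is PS (fuzzy set 4 of x2).
f_ns_x1 = 0.8  # membership of x1 in NS
f_ps_x2 = 0.7  # membership of x2 in PS

w12 = min(f_ns_x1, f_ps_x2)            # firing strength of rule 12
r_2_1_12 = 1 if w12 == f_ns_x1 else 0  # does x1 set the activation level?
r_4_2_12 = 1 if w12 == f_ps_x2 else 0  # does x2 set the activation level?
print(w12, r_2_1_12, r_4_2_12)  # 0.7 0 1
```

The smaller of the two premise memberships determines the firing strength, so only the corresponding r indicator is 1.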

Recall that wk = the firing strength of the k-th rule, which is equal to the minimum of the two input membership functions. Therefore:

∂wk/∂ci1 = ri1k ∂fi1(x1)/∂ci1, and similarly ∂wk/∂ci2 = ri2k ∂fi2(x2)/∂ci2.

Recall that the membership functions fi1(x1) are given by the following triangular functions:

fi1(x1) = 1 − (ci1 − x1)/bi1− for ci1 − bi1− ≤ x1 ≤ ci1
fi1(x1) = 1 − (x1 − ci1)/bi1+ for ci1 < x1 ≤ ci1 + bi1+
fi1(x1) = 0 otherwise

where ci1 is the modal point and bi1−, bi1+ are the left and right half-widths.
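A sketch of a triangular MF with modal point c and half-widths b−, b+, together with its derivative with respect to the modal point (function names are illustrative; the MF is not differentiable exactly at the corners, which are returned as zero here by convention):

```python
def tri_mf(x, c, b_minus, b_plus):
    """Triangular membership function with modal point c
    and left/right half-widths b_minus, b_plus."""
    if c - b_minus <= x <= c:
        return 1.0 - (c - x) / b_minus
    if c < x <= c + b_plus:
        return 1.0 - (x - c) / b_plus
    return 0.0

def dtri_dc(x, c, b_minus, b_plus):
    """Partial derivative of tri_mf with respect to the modal point c,
    valid away from the corner points."""
    if c - b_minus < x < c:
        return -1.0 / b_minus
    if c < x < c + b_plus:
        return 1.0 / b_plus
    return 0.0

print(tri_mf(0.5, c=1.0, b_minus=1.0, b_plus=1.0))  # 0.5
print(dtri_dc(0.5, c=1.0, b_minus=1.0, b_plus=1.0))  # -1.0
```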

Summary:

• The expressions on pages 3, 5, 8, and 9 give us the partial derivatives of the error with respect to the modal points of the input MFs.
• Similar methods are used to find the derivatives of the error with respect to input MF half-widths, output MF modal points, and output MF half-widths.
• Now we can use gradient descent (or another gradient-based method) to optimize the MFs.
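A minimal gradient-descent loop of the kind the summary describes. Here the controller error is replaced by a toy quadratic stand-in, and the gradient is taken numerically; the step size and iteration count are arbitrary choices for illustration:

```python
import numpy as np

def numeric_grad(E, theta, eps=1e-6):
    """Central-difference gradient of the scalar cost E(theta)."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (E(tp) - E(tm)) / (2 * eps)
    return g

def gradient_descent(E, theta0, lr=0.1, steps=200):
    """Plain gradient descent on the MF parameter vector theta."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        theta -= lr * numeric_grad(E, theta)
    return theta

# Toy stand-in for the controller error as a function of MF parameters:
E = lambda th: np.sum((th - np.array([1.0, -2.0])) ** 2)
print(gradient_descent(E, [0.0, 0.0]))  # ≈ [1.0, -2.0]
```

In practice the analytic derivatives derived above replace the numeric ones, since they are exact and far cheaper to evaluate.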

Reference: D. Simon, "Sum normal optimization of fuzzy membership functions," International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, Aug. 2002.

Example: Fuzzy Cruise Control – VehicleGrad.m

74% error decrease

Default Output MFs

Optimized Output MFs

PlotMem('paramgu.txt', 2, [5 5], 1, 5)

The input MFs do not change as much as the output MFs

• Why use 5 membership functions for the output and for each input?
• How can we make the response less oscillatory? How about something like

E = Σq [(ŷq − yq)² + λ (ŷq − ŷq−1)²]

where λ is a weighting parameter.

• Try different gradient descent options
• Try different MF shapes
• How can we optimize while constraining the MFs to be sum normal?
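The smoothing idea in the bullets above can be sketched as a cost that adds a weighted penalty on step-to-step output changes; the weight symbol (lam) and the exact form of the penalty term are assumptions:

```python
import numpy as np

def weighted_error(y_hat, y_ref, lam):
    """Tracking error plus a penalty on output changes, to discourage
    oscillation: sum_q (y_hat_q - y_ref_q)^2 + lam * sum_q (y_hat_q - y_hat_{q-1})^2."""
    y_hat = np.asarray(y_hat, dtype=float)
    y_ref = np.asarray(y_ref, dtype=float)
    track = np.sum((y_hat - y_ref) ** 2)
    smooth = np.sum(np.diff(y_hat) ** 2)
    return track + lam * smooth

ref = [60.0] * 4
smooth_run = [59.0, 60.0, 60.0, 60.0]
wobbly_run = [59.0, 61.0, 59.0, 61.0]
# Both runs have small tracking error, but the oscillatory one is penalized:
print(weighted_error(smooth_run, ref, lam=1.0))  # 2.0
print(weighted_error(wobbly_run, ref, lam=1.0))  # 16.0
```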

For example, let f11(x1) be the first input 1 MF, with modal point c11 and half-widths b11− and b11+, and let f21(x1) be the adjacent MF, with modal point c21 and left half-width b21−. Sum normality requires the two MFs to sum to one for all x1 between c11 and c21, which gives the equality constraints

b11+ = c21 − c11
b21− = c21 − c11

Similar equality constraints can be written for the input 2 MFs, and for the output MFs.
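A sketch of imposing these equality constraints on a set of triangular MFs, assuming each MF's inner half-widths are set equal to the spacing between neighboring modal points (the outer half-widths of the first and last MFs are unconstrained; here they simply reuse the nearest gap):

```python
import numpy as np

def sum_normal_params(modal_points):
    """Given ordered modal points c_1 < ... < c_n, return half-widths
    (b_minus, b_plus) satisfying the sum-normal equality constraints
    b_i^+ = b_{i+1}^- = c_{i+1} - c_i for each adjacent pair."""
    c = np.asarray(modal_points, dtype=float)
    gaps = np.diff(c)
    b_plus = np.append(gaps, gaps[-1])     # outer half-width of last MF is free
    b_minus = np.insert(gaps, 0, gaps[0])  # outer half-width of first MF is free
    return b_minus, b_plus

def tri(x, c, bm, bp):
    """Triangular MF with modal point c and half-widths bm, bp."""
    if c - bm <= x <= c:
        return 1.0 - (c - x) / bm
    if c < x <= c + bp:
        return 1.0 - (x - c) / bp
    return 0.0

c = [-2.0, -1.0, 0.0, 1.0, 2.0]
bm, bp = sum_normal_params(c)
# Between the first and last modal points, the MFs now sum to 1 everywhere:
for x in np.linspace(-2.0, 2.0, 41):
    total = sum(tri(x, ci, bmi, bpi) for ci, bmi, bpi in zip(c, bm, bp))
    assert abs(total - 1.0) < 1e-9
print("sum-normal check passed")
```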

Let n1 = number of input 1 fuzzy sets, n2 = number of input 2 fuzzy sets, and n3 = number of output fuzzy sets. Each adjacent pair of MFs contributes two equality constraints, so sum normality imposes 2(n1 − 1), 2(n2 − 1), and 2(n3 − 1) constraints on the 3n1, 3n2, and 3n3 MF parameters of input 1, input 2, and the output, respectively.

Example: Fuzzy Cruise Control – VehicleGrad(1);

70% error decrease (recall that it was 74% for unconstrained optimization)

Performance of cruise control after unconstrained and constrained gradient descent optimization of MFs

Default Output MFs

Optimized Constrained Output MFs

PlotMem('paramgc.txt', 2, [5 5], 1, 5)

The input MFs do not change as much as the output MFs