
Automatic Recognition of Power Quality Disturbances



  1. APT Center Automatic Recognition of Power Quality Disturbances MSEE Thesis Presentation Min Wang Advisor: Prof. Alexander Mamishev August 9, 2001

  2. Outline • Overview of the project and my thesis • Two new PQ event recognition algorithms • Signal resources • Other techniques under exploration • Conclusions

  3. What is Power Quality (PQ)? • Any deviation from a perfect sinusoidal waveform that can result in failure or misoperation of customer equipment • The quality of the current and voltage provided to customers • Providing customers with a clean sinusoidal waveform at 60 Hz, without sags or spikes • Providing power that allows sensitive electronic equipment to operate reliably

  4. Several typical PQ disturbances
• Voltage sags. Major causes: faults, starting of large loads, and brown-out recovery. Major consequences: shorts, accelerated aging, loss of data or stability, process interruptions, etc.
• Capacitor switching transients. Major causes: a power factor correction method. Major consequences: insulation breakdown or sparkover, semiconductor device damage, shorts, accelerated aging, loss of data or stability.
• Harmonics. Major causes: power electronic equipment, arcing, transformer saturation. Major consequences: equipment overheating, high voltage/current, protective device operations.
• Lightning transients. Major causes: lightning strikes. Major consequences: insulation breakdown or sparkover, semiconductor device damage, shorts, accelerated aging, loss of data or stability.
• High impedance faults (one of the most difficult power system protection problems). Major causes: fallen conductors, trees (which fail to establish a permanent return path). Major consequences: fire, threats to personal safety.
[Figure: example waveforms of a voltage sag, capacitor switching, harmonics, a lightning strike, and a high impedance fault (RMS)]

  5. Why has PQ become important? • The proliferation of highly sensitive computerized equipment places more stringent demands on PQ • Semiconductor industry • Computers and computer-related businesses • Variable-speed drives and robots • Programmable logic controllers • Electronic equipment also causes more PQ problems • Deregulation of the power industry creates a more competitive market

  6. Why has PQ become important? • Impact on Silicon Valley • A one-cycle interruption makes a silicon device worthless • A five-minute shutdown of a chip fabrication plant causes a delay of a day to a week • One second of power outage costs e-commerce sites millions of dollars worth of business • US PQ losses: $20 billion/year (Frost & Sullivan)

  7. ITIC curve (1996)

  8. State of the art • PQ monitoring software and hardware are needed by both utilities and customers • Detect, identify, and localize different PQ disturbances • Real-time decision making • The topic of general event classification (as opposed to individual fault detection) has rarely been addressed • Existing automatic recognition methods need much improvement in versatility, reliability, and accuracy • Accumulating a comprehensive PQ database will significantly expedite the development of solutions

  9. Goals and status of this project • Goals of this research project • Enhancement of real-time power system protection • Statistical accumulation of power quality problems • Incipient fault detection • Current work -- applying advanced signal processing techniques to the identification of power quality events • Capture the key information from the waveforms (feature extraction) • Discriminate based on the captured features (classifier) • Error correction / comprehensive decision making (post-processing)

  10. Overview of the project: Signal Sources → Power Quality Events → Feature Extractors → Classifiers → Applications
• Signal sources: Matlab simulations; PSCAD/EMTDC simulations; single-instrument measurement (power platform); data from industrial partners
• Power quality events: harmonics; capacitor high-frequency switching; voltage sudden sag; voltage sag decay; voltage swell; high impedance faults; motor starting; lightning strike; transformer inrush
• Feature extractors: ambiguity-plane-based class-dependent TFR; clustered wavelet MSD matrix
• Classifiers: artificial neural networks; hidden Markov models
• Applications: statistical analysis and troubleshooting; relaying and protection; incipient fault detection
(The original diagram also marks the scope of this thesis versus the scope of the whole project.)

  11. Overview of Algorithm I
Voltage waveform → low-pass filtering and resampling → instantaneous autocorrelation function → ambiguity plane → feature extraction (modified Fisher's discriminant ratio kernel) → classification by artificial neural network → types of PQ disturbances. (PQ disturbance classification Algorithm II is presented later.)

  12. Time-Frequency Representations (TFR)
[Figure: a chirp signal and four of its representations: the Fourier spectrum, a spectrogram, the Wigner TFR, and an optimal-kernel TFR]

  13. Overall Strategy • We want a TFR designed specifically for classification • Why? • The spectrogram is not designed for the classification goal • We do not need "accurate" information about the signal, only information useful for classification • We need a class-dependent, rather than signal-dependent, TFR • We want a TFR tailored to this particular classification task • Different classification tasks have different optimal TFRs • No existing TFR can be used directly • We need to design a TFR ourselves

  14. Overall Strategy
[Diagram: training signals A, B, C, and D are each mapped through candidate representations TFR1, TFR2, …, TFRn; the resulting (TFRi)A, (TFRi)B, (TFRi)C, (TFRi)D feed a classifier, which assigns Class 1, Class 2, or Class 3]

  15. Ambiguity Plane • Any TFR can be generated from the ambiguity plane: TFR = Fourier Transform {AF .* Kernel} • We design our TFR by smoothing the ambiguity function (AF) with some form of kernel • For a sampled waveform V = [v1 v2 v3 … vn-2 vn-1 vn], the ambiguity function is $A(\theta,\tau)=\sum_{n} v[n]\,v^{*}[n-\tau]\,e^{-j\theta n}$, where θ and τ are the discrete Doppler and lag, respectively
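A minimal numpy sketch of this construction (not the thesis code): it builds the ambiguity plane from the instantaneous autocorrelation and smooths it with a kernel. The FFT sign and axis conventions vary across TFR definitions, and the circular boundary handling here is an assumption.

```python
import numpy as np

def ambiguity_plane(v):
    """Discrete ambiguity function A(theta, tau) of a sampled waveform v.

    For each lag tau, form the instantaneous autocorrelation
    r_tau[n] = v[n] * conj(v[n - tau]) (circular boundary assumed),
    then FFT over n to obtain the discrete Doppler axis theta.
    """
    n = len(v)
    af = np.zeros((n, n), dtype=complex)
    for k, tau in enumerate(range(-n // 2, n - n // 2)):
        r = v * np.conj(np.roll(v, tau))  # instantaneous autocorrelation
        af[:, k] = np.fft.fft(r)          # Doppler axis via FFT over time
    return af

def tfr_from_ambiguity(af, kernel):
    """TFR = Fourier Transform {AF .* Kernel}: smooth the ambiguity
    plane with a kernel, then 2-D Fourier transform back to the
    time-frequency plane."""
    return np.fft.fft2(af * kernel)
```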

  16. To extract features, class-dependent kernels need to be designed for smoothing the ambiguity plane. [Figure: ambiguity planes (a)-(f)]

  17. Intuitive Feature Extraction From the AP • How do we choose feature points from the ambiguity plane?

  18. Design a class-dependent TFR from the AP • Features for a pattern recognition task should • Maximize the separability of signals from different classes • Maximize the similarity of signals from the same class • Select those points whose • Between-class variances are largest • Within-class variances are smallest

  19. Design the Kernel and the Feature Vector • Select the N points with the N largest FDK (Fisher's discriminant kernel) values as the feature point locations • The signal's smoothed AF values at those points form the feature vector (a sketch follows below)
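Slides 18-19 amount to a point-wise Fisher discriminant ratio over the ambiguity plane. A sketch of that selection rule, assuming the features are |AF| magnitudes of a set of training signals (array shapes and names are illustrative):

```python
import numpy as np

def select_feature_points(aps, labels, n_features):
    """Pick the n_features ambiguity-plane points with the largest
    Fisher discriminant ratio: between-class variance divided by
    within-class variance, computed independently at every point.

    aps    : (n_signals, H, W) array of |ambiguity function| values
    labels : (n_signals,) array of class ids
    """
    overall_mean = aps.mean(axis=0)
    between = np.zeros(aps.shape[1:])
    within = np.zeros(aps.shape[1:])
    for c in np.unique(labels):
        group = aps[labels == c]
        mu = group.mean(axis=0)
        between += len(group) * (mu - overall_mean) ** 2
        within += ((group - mu) ** 2).sum(axis=0)
    fdr = between / (within + 1e-12)           # guard against zero variance
    flat = np.argsort(fdr, axis=None)[-n_features:]
    return np.unravel_index(flat, fdr.shape)   # feature point locations

# the feature vector of one signal is its |AF| sampled at these points:
# rows, cols = select_feature_points(aps, labels, 24)
# features = np.abs(af)[rows, cols]
```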

  20. Overview of Algorithm II
V/I waveforms → 2nd-level wavelet decomposition and reconstruction → moving-average filter → are any WTCs > T? (no: not a PQ disturbance; yes: continue) → 9-level multiresolution signal decomposition (MSD) (for the first 3 levels' WTCs) → feature extraction from the MSD matrix → classification by neural networks → types of PQ disturbances

  21. Wavelet Analysis • A mathematical tool for signal analysis • A wavelet is a short-duration wave, which grows and decays essentially within a limited time period [Figure: real part of the Morlet mother wavelet; Daubechies 4 mother wavelet] • Tells us how weighted averages of certain other functions vary from one averaging period to the next

  22. Why Use Wavelets?
Fourier analysis: • Assumes periodic time functions • Wide bandwidth for short-term transients • Does not consider frequencies that evolve in time • Suffers from certain annoying anomalies • Gibbs's phenomenon • Aliasing (with the FFT) • FFT computational complexity: O(N*log2 N)
Wavelet analysis: • Choose desirable frequency and time characteristics • Short windows at high frequencies and long windows at low frequencies • Basis functions employ time compression or dilation • Freedom in the choice of mother wavelet • WT computational complexity: O(N)

  23. Wavelet Transform • A family of scaling functions and wavelet functions is generated by dilating and shifting the mother wavelet and scaling function: $\varphi_{j,k}(t)=2^{j/2}\varphi(2^{j}t-k)$, $\psi_{j,k}(t)=2^{j/2}\psi(2^{j}t-k)$ • Calculate the scaling and wavelet coefficients from the inner products $c_{j,k}=\langle f,\varphi_{j,k}\rangle$ and $d_{j,k}=\langle f,\psi_{j,k}\rangle$

  24. Wavelet MSD • The WT decomposes a signal into different scales with multiple levels of resolution by dilating a single mother wavelet • It decomposes the signal into its detailed and smoothed versions

  25. Wavelet MSD

  26. Wavelet MSD (decomposition sequence: c0 → d1, d2, …, dM, cM) • Conventional methods • Use WTCs in a certain row --- this loses information about the other scales • Use the energy values ||di||^2 of all rows --- their weights are not equal
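Using PyWavelets as one possible implementation (db4 matches the Daubechies 4 mother wavelet shown on slide 21), the 9-level MSD of slide 20 can be sketched as follows; this is an illustration, not the thesis code:

```python
import numpy as np
import pywt

def msd_matrix(signal, wavelet="db4", levels=9):
    """Multiresolution signal decomposition: returns the coarsest
    approximation c_M and the detail rows d_1 ... d_M of the MSD matrix."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    c_m, details = coeffs[0], coeffs[1:][::-1]   # reorder as d_1 ... d_M
    return c_m, details

# the conventional energy features ||d_i||^2, one per row:
# c_m, details = msd_matrix(waveform)
# energies = [np.sum(d ** 2) for d in details]
```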

  27. Wavelet MSD • Basic ideas • Divide the WTCs in the MSD matrix into disjoint clusters • Each cluster contributes one feature • More important frequency/scale ranges get a larger number of clusters • WTCs producing more features are more important in classification • One possible result of grouping clusters (24 features)

  28. Wavelet MSD • Cluster determination • Clusters are determined by a set of training signals • Peaks of wavelet coefficients indicate the occurrences of PQ events • Coefficients at the same position in different MSD matrices form independent random variables • Clusters are constructed around the high peaks of the wavelet coefficients • Feature extraction (see the sketch below) • Divide the MSD matrix according to the cluster pattern • Determine all d features, u1, …, ud
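A sketch of the clustered feature extraction, under the assumption that each cluster contributes its coefficient energy as the feature u_k (the slides do not spell out the per-cluster statistic, and the (row, start, stop) cluster description is a hypothetical encoding):

```python
import numpy as np

def cluster_features(details, clusters):
    """One feature per cluster of wavelet coefficients.

    details  : list of detail rows d_1 ... d_M from the MSD matrix
    clusters : list of (row, start, stop) tuples learned from training
               signals around the high peaks of the coefficients
    """
    feats = []
    for row, start, stop in clusters:
        segment = details[row][start:stop]
        feats.append(np.sum(segment ** 2))   # cluster energy as feature u_k
    return np.array(feats)
```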

  29. Wavelet MSD [Figure: clustering a row vector in the MSD matrix into Clusters 1-4]

  30. Classifier – Neural Networks [Diagram: input patterns enter the input layer, pass through weighted connections to the hidden layer and output layer, and produce the output decision] • Input: signal signatures from the feature extractors • Output: the identified class type

  31. Neuronal Model • Activation function: the bipolar sigmoid, y = T(s) = (1 − e^(−s)) / (1 + e^(−s)) = tanh(s/2)

  32. Basic Idea of Error Backpropagation • Minimum sum-squared error methodology (a sketch follows below)
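Slides 30-32 describe a feedforward network with a bipolar sigmoid activation trained by error backpropagation on the sum-squared error. A minimal single-sample sketch (no biases, one hidden layer; dimensions and the learning rate are illustrative):

```python
import numpy as np

def bipolar_sigmoid(s):
    # y = T(s) = (1 - exp(-s)) / (1 + exp(-s)) = tanh(s / 2)
    return np.tanh(s / 2.0)

class TwoLayerNet:
    """One-hidden-layer network trained by gradient descent on the
    sum-squared error E = 0.5 * sum((y - target)^2)."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = bipolar_sigmoid(x @ self.w1)
        self.y = bipolar_sigmoid(self.h @ self.w2)
        return self.y

    def backward(self, x, target):
        # derivative of tanh(s/2) with respect to s is (1 - y^2) / 2
        err = self.y - target
        d_out = err * (1 - self.y ** 2) / 2
        d_hid = (d_out @ self.w2.T) * (1 - self.h ** 2) / 2
        self.w2 -= self.lr * np.outer(self.h, d_out)   # backpropagate
        self.w1 -= self.lr * np.outer(x, d_hid)        # the error
```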

  33. Testing of Algorithm I
Results of testing the classification method (6-class case):

Class tested                           | Correctly identified | Mistaken to C1 | C2 | C3 | C4 | C5 | C6
1. Harmonics                           | 100% | -  | 0% | 0% | 0% | 0% | 0%
2. Capacitor fast switching transients | 100% | 0% | -  | 0% | 0% | 0% | 0%
3. Capacitor slow switching transients | 94%  | 0% | 6% | -  | 0% | 0% | 0%
4. Voltage sudden sag                  | 92%  | 0% | 0% | 1% | -  | 7% | 0%
5. Voltage gradual decay sag           | 93%  | 0% | 0% | 7% | 0% | -  | 0%
6. Voltage swell                       | 100% | 0% | 0% | 0% | 0% | 0% | -

  34. Demo of PQ Event Recognition System

  35. PSCAD/EMTDC • A visual power system simulator • Developed by the Manitoba HVDC Research Centre • Simulates electromagnetic transients for DC and AC systems • PSCAD is the user interface • EMTDC is the simulation engine • Similar to EMTP and ATP, but faster

  36. Complete Circuit- AC system

  37. AC system Continued…

  38. Dranetz BMI Power Platform 4300

  39. Specifications • Sampling frequency: 7 kHz • Update rate: once per second (harmonic-based parameters updated every 5 seconds) • Voltage: 10 – 600 Vrms • Frequency: fundamental range 16 – 450 Hz • Current: depends on the current probes (TR-2520: 300 A – 3000 A RMS)

  40. Partnerships • Signal databases are being built with possible help from • R.W. Beck • Bonneville Power Administration • SRP (Salt River Project) • University of Washington Physical Plant • American Public Power Association

  41. Classifier – HMM (Hidden Markov Model) • Initially introduced in the late 1960s and early 1970s • An extension of the Markov process • Utilized extensively in a wide range of applications • Pattern recognition, especially speech recognition • Biological signal processing, e.g., gene prediction in DNA • Artificial intelligence, image understanding • Possible advantages as a classifier • Very good classification performance • Very competitive learning speed • Requires a small number of training examples

  42. Classifier – HMM • Discrete Markov process (first order): P[qt = Sj | qt-1 = Si, qt-2 = Sk, …] = P[qt = Sj | qt-1 = Si] = aij • Example (see the sketch below): • State 1: rain; State 2: cloudy; State 3: sunny (this is B) • State transition matrix A = {aij} • Given that day 1 (t = 1) is sunny, i.e., state 3 (this is π) • What is the probability that the next 7 days will be "sun-sun-rain-rain-sun-cloudy-sun"? (This is O.) • If the current state is known, past states are useless for predicting future states
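A sketch of the weather example: with a known start state, the probability of a state sequence is just a product of transition entries a_ij. The transition matrix below is illustrative; the slide's actual matrix was a graphic and is not preserved in this transcript.

```python
import numpy as np

# states: 0 = rain, 1 = cloudy, 2 = sunny (illustrative values)
A = np.array([[0.4, 0.3, 0.3],
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])

def sequence_probability(states, A, start):
    """P(state sequence | model) for a first-order Markov chain,
    given a known starting state."""
    p, prev = 1.0, start
    for s in states:
        p *= A[prev, s]    # one factor a_ij per transition
        prev = s
    return p

# day 1 is sunny (state 3 in the slide, index 2 here);
# next 7 days: sun-sun-rain-rain-sun-cloudy-sun
print(sequence_probability([2, 2, 0, 0, 2, 1, 2], A, start=2))
```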

  43. Classifier – HMM • Coin toss model (an HMM) • We can only see the results, but we don't know what's going on underneath • Observation sequence: O = H H T T T H T T H … H • Underlying state transition model: matrix A = {aij} • Probability model linking observations and states: B = {bj(k)} • Elements of an HMM • N, the number of states • A and B • π, the initial state distribution • Complete HMM model: λ = (A, B, π)

  44. Classifier – HMM • An HMM poses three questions (a forward-procedure sketch follows below) • Evaluation: P(O | λ) • Given the model, what is the probability of the observation sequence? • Forward-backward procedure • Decoding: the hidden state transition sequence • Given O and λ, what is the best explanation of the observations? • Viterbi algorithm • Learning: λ = argmax P(O | λ) • Given O, how do we adjust λ = (A, B, π) to approach the real model? • Expectation Maximization (EM) algorithm
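For the evaluation problem, the forward procedure computes P(O | λ) in O(N²T) time. A minimal sketch for a discrete-observation HMM:

```python
import numpy as np

def forward(O, A, B, pi):
    """Forward procedure: P(O | lambda) for a discrete HMM.

    O  : sequence of observation symbol indices, length T
    A  : (N, N) state transition matrix a_ij
    B  : (N, M) observation probability matrix b_j(k)
    pi : (N,) initial state distribution
    """
    alpha = pi * B[:, O[0]]            # alpha_1(i) = pi_i * b_i(O_1)
    for o in O[1:]:
        alpha = (alpha @ A) * B[:, o]  # induction over t
    return alpha.sum()                 # termination: P(O | lambda)
```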

  45. Classifier – HMM • A tree structure is constructed (a hidden Markov tree, HMT) [Diagram: black circles are hidden states S1 and S2; white circles are observations] • State transition matrix A • Probability model B for the wavelet coefficients: a Gaussian mixture model

  46. Classifier – HMM • Pdf of a wavelet coefficient: a mixture of zero-mean Gaussians, f(w) = Σm pm g(w; 0, σm²) • A wavelet-based HMM is thus constructed • How do we calculate (train) the HMM? The EM algorithm (a sketch follows below): • Select an initial model λ0, set m = 0 • E step: compute p(S | w, λm) • M step: set λm+1 = argmaxλ ES[ln f(w, S | λ) | w, λm] • Set m = m + 1; if converged, stop; else go to the E step
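A toy version of the E/M loop for a zero-mean two-state Gaussian mixture on a set of wavelet coefficients (the full hidden Markov tree also ties states across scales, which this sketch omits; the initialization values are assumptions):

```python
import numpy as np

def em_gaussian_mixture(w, iters=50):
    """EM for a two-state, zero-mean Gaussian mixture model of wavelet
    coefficients w: f(w) = p1*N(0, s1^2) + p2*N(0, s2^2)."""
    p = np.array([0.5, 0.5])
    var = np.array([0.1 * w.var(), 2.0 * w.var()])  # small/large states
    for _ in range(iters):
        # E step: posterior state probabilities p(S | w, lambda_m)
        lik = p / np.sqrt(2 * np.pi * var) * \
              np.exp(-np.outer(w ** 2, 1.0 / (2 * var)))
        post = lik / lik.sum(axis=1, keepdims=True)
        # M step: re-estimate mixture weights and variances
        p = post.mean(axis=0)
        var = (post * w[:, None] ** 2).sum(axis=0) / post.sum(axis=0)
    return p, var
```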

  47. Classifier – HMM • Maximum-likelihood classification • Compare the likelihoods between 2 classes • Many-class classification (finding the shortest distance in signal likelihood)

  48. Postprocessing – Voting Scheme • The most straightforward method for combining the outputs of multiple classifiers • The decision is made based on votes • With multiple NNs, if • Class A receives 70% of the votes • Class B receives 30% of the votes • Then class A wins (see the sketch below) • Threshold Kt • If none of the classes receives more than Kt, reject all
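A sketch of the voting combiner with a rejection threshold Kt, here taken as a fraction of the total votes (an assumption about how the threshold is expressed):

```python
from collections import Counter

def majority_vote(predictions, k_t=0.5):
    """Combine class labels from multiple classifiers; reject when no
    class receives more than the fraction k_t of the votes."""
    label, votes = Counter(predictions).most_common(1)[0]
    if votes / len(predictions) > k_t:
        return label
    return None  # reject all: no class cleared the threshold

# 7 of 10 networks vote A, 3 vote B -> A wins
print(majority_vote(["A"] * 7 + ["B"] * 3))
```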

  49. Postprocessing – Dempster-Shafer Theory of Evidence • Mathematical Theory of Evidence (MTE) • A theory of probable reasoning and combining evidence • Provides a means of decision making • Gives the degree of belief in the decision made • An application example (belief intervals): A: [6.1%, 11.7%]; B: [6.1%, 11.7%]; C: [82.2%, 87.8%]
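A compact sketch of Dempster's rule of combination and the belief (lower) bound of the interval; focal elements are frozensets of class labels, and any example masses would be illustrative rather than the slide's numbers:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic probability assignments,
    renormalizing away the conflicting (empty-intersection) mass."""
    combined, conflict = {}, 0.0
    for b, p in m1.items():
        for c, q in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

def belief(m, a):
    """Bel(A): total mass committed to subsets of A. The plausibility
    Pl(A) = 1 - Bel(complement of A) gives the interval's upper end."""
    return sum(v for b, v in m.items() if b <= a)
```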

  50. Postprocessing – Rule-Based Approach • Used in artificial intelligence and expert systems • Mimics power engineers' thought processes • Serves as a final control process • Attempts to correct wrong decisions with standard rules, such as • Fast capacitor switching is likely when the disturbance spans a wide duration in the highest-resolution wavelet domain • These rules come from experienced PQ engineers, who have the best embedded classification algorithms
