
Filtering of Telemetry Using Entropy






Presentation Transcript


  1. Filtering of Telemetry Using Entropy
  SPACE SCIENCE CENTER
  by N. Huber, T. Carozzi, B. Popoola, P. Gough
  #132 MAPLD2005

  2. Breakdown of Presentation
  1) Preliminaries.
  2) “Filtering with Entropy” Concept Introduced.
  3) Case Study: Data Received from the ESA Cluster Mission.
  4) Entropy Calculation Algorithms Presented.
  5) Implementation of Entropy Calculation Algorithms in FPGAs, and Further Considerations.
  6) Experimental Results.
  7) Future Work and Conclusions.

  3. Preliminaries
  • Spacecraft instrumentation can produce vast datasets.
  • Scientifically interesting data is usually mixed with noise.
  • Telemetry can be overloaded by insignificant data, unnecessarily increasing the overall budget.
  • Data-mining techniques are then required on the ground to expose the significant data, potentially wasting many man-hours.
  • An algorithm that could determine the information content, and thus whether the data is “interesting enough” to transmit, would be advantageous.

  4. Filtering with Entropy
  • A method based on the Shannon entropy of an information source, H = −Σᵢ pᵢ log₂ pᵢ:
  • Estimates the minimum number of bits required to represent a dataset.
  • Random (noise-like) data exhibit higher entropy.
  • Structured (interesting) data exhibit lower entropy.
  • Proposed algorithm:
  • Calculate the entropy of an acquired dataset.
  • Compare it to a threshold defined by the entropy of white noise.
  • If lower, keep/transmit; if higher, discard/do not process.
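The threshold rule above can be sketched in a few lines. This is an illustrative sketch only, not the authors' flight implementation; the empirical symbol-probability estimate and the 1-bit margin are assumptions for demonstration.

```python
import math
from collections import Counter

def shannon_entropy_bits(samples):
    """Empirical Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def keep_block(samples, alphabet_size, margin_bits=1.0):
    """Transmit only if entropy sits below the white-noise ceiling by a margin.

    White noise over an alphabet of M symbols approaches log2(M) bits/symbol;
    structured (interesting) data falls below that ceiling.
    """
    max_entropy = math.log2(alphabet_size)
    return shannon_entropy_bits(samples) < max_entropy - margin_bits
```

With an 8-bit alphabet, a constant block (entropy 0) is kept, while a block that uses every byte value uniformly (entropy 8 bits) is discarded.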

  5. Case Study: The ESA Cluster Correlator
  • The main objective is to detect wave-particle interactions in space plasma.
  • Auto-correlation operations are carried out on accumulated counts of detected particles.
  • Due to the nature of the phenomenon, low count rates are expected; white noise is also the predominant feature of the received telemetry.
  • 210 bits are transmitted at a time. They could potentially all be of interest.

  6. Case Study cont.: Preliminary Study
  [Figure: entropies over a CLUSTER S/C2 orbit. Top panel: total entropy (bits, 0–80) versus time, with regions labelled Solar wind, Magnetosheath and Magnetosphere. Bottom panel: difference relative to maximum entropy (0 to −2.5 bits), with the white-noise level at 0; turbulent data of scientific interest dips furthest below it. Time axis: HH:MM since 2001-03-17 UT.]

  7. Case Study cont.: Points of Interest
  • Data counts affect the overall entropy, but not the difference from the maximum (white-noise) entropy for that data count.
  • Utilisation of bandwidth is less than 50% (a maximum of ≃80 bits out of a possible 210).
  • Areas of interest correspond to areas of lower relative entropy (e.g. in the magnetosheath, wave-particle interactions are expected, mainly through turbulence).
  • Had a 1-bit threshold been selected, nearly 80% of the telemetry need not have been transmitted, as it contained mainly noise.

  8. Entropy Calculation Algorithms
  • The algorithms selected are based on the “Maximum Entropy Method” for the optimisation of spectral-analysis techniques.
  • They were specifically chosen to follow on from auto-correlation functions (ACFs).
  • They compute the spectral entropy of a dataset.
  • Both algorithms were to be implemented in an FPGA (Xilinx 4VSX35). These devices offer high parallelism, with in-built DSP blocks that facilitate mathematical operations.

  9. Toeplitz Matrix Algorithm
  • Procedure:
  • Create a P × P symmetric Toeplitz matrix whose element (i, j) equals the |i − j|-th coefficient (lag) of the ACF.
  • Find the determinant of this matrix.
  • Compute the log2 of the determinant.
  • Average over the number of coefficients/lags.
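The steps above can be sketched as follows (a minimal reference sketch assuming NumPy, not the FPGA design; `slogdet` stands in for the LU-based determinant step and guards against overflow):

```python
import numpy as np

def toeplitz_entropy(acf):
    """Average log2-determinant of the P x P Toeplitz matrix built from ACF lags."""
    acf = np.asarray(acf, dtype=float)
    P = len(acf)
    # Symmetric Toeplitz matrix: T[i, j] = acf[|i - j|]
    T = acf[np.abs(np.arange(P)[:, None] - np.arange(P)[None, :])]
    sign, logdet = np.linalg.slogdet(T)  # LU-based log-determinant
    if sign <= 0:
        # Singular or negative determinant: entropy is undefined here,
        # one of the failure modes discussed on a later slide.
        raise ValueError("non-positive determinant; entropy undefined")
    return (logdet / np.log(2)) / P      # convert ln -> log2, average over lags
```

For a white-noise ACF such as [1, 0, 0, 0] the matrix is the identity, its determinant is 1, and the estimate is 0 bits.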

  10. FFT Algorithm
  • Procedure:
  • Pass the coefficients of the ACF through an FFT to obtain the power spectral density (PSD) of the original dataset.
  • Keep only the real coefficients, as the imaginary ones are irrelevant. Negative or zero real coefficients must be scaled accordingly.
  • Calculate the log2 of each coefficient.
  • Sum all the log2 results.
  • Average over the number of FFT points.
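The FFT procedure can be sketched as below (an illustrative sketch assuming NumPy; clamping non-positive spectral values to a small floor stands in for the slide's "scaled accordingly", and the floor value is an assumption):

```python
import numpy as np

def fft_spectral_entropy(acf, floor=1e-12):
    """Mean log2 of the PSD obtained by FFT-ing the ACF (Wiener-Khinchin)."""
    psd = np.real(np.fft.fft(acf))   # FFT of a symmetric ACF is real-valued
    psd = np.maximum(psd, floor)     # guard log2 against zero/negative values
    return np.mean(np.log2(psd))     # sum of log2 terms, averaged over points
```

For the white-noise ACF [1, 0, 0, 0] the spectrum is flat at 1 and the result is 0 bits, matching the Toeplitz estimate for the same input.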

  11. Entropy Calculation Algorithms in FPGAs. 1
  • Disadvantages of the Toeplitz-matrix implementation in FPGAs:
  • Requires LU decomposition of the matrix to obtain the determinant (mathematically very intensive).
  • Easily parallelised for each matrix element, but requires N³ iterations (N being the number of ACF lags).
  • Does not scale well.
  • Due to the nature of the phenomenon studied, most of the required assumptions do not hold.
  • Issues arise from singular matrices, negative determinants and determinants equal to zero.
  • Determinants obtained through decomposition are usually fractional, so accurate logarithms are very hard to acquire.
  • Not preferred.

  12. Entropy Calculation Algorithms in FPGAs. 2
  • Advantages of the FFT implementation in FPGAs:
  • A well-known technique, used in a wide range of applications.
  • FFTs have been implemented extensively in FPGAs.
  • Can be configured to use in-built DSP blocks, resulting in less fabric utilisation.
  • A fast method, especially for a small number of points.
  • Easily configurable IP cores that carry out FFTs are readily available for most FPGA families.
  • Preferred method. The main concern is accuracy issues arising from the rounding method used in the FFT; rounding is necessary for the accuracy of the log2 step.

  13. Further Considerations
  • Further issues:
  • Algorithms that calculate log2 to high accuracy are hard to come by; the use of Look-Up Tables (LUTs) is very common, but LUTs become inefficient as the range of inputs grows.
  • Fast algorithms can be very inaccurate, e.g. discarding the whole fractional part of the logarithm.
  • Thresholds set for the final calculated entropy have to be application-specific, since they depend on the count rate. A theoretical threshold can be calculated for each count rate in advance using the Toeplitz-matrix method.
  • Further threshold-setting methods can include averaging a number of previously calculated entropies. These may, however, fail to detect small fluctuations in entropy.
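The linear-approximation idea behind hardware-friendly, LUT-free log2 estimators can be illustrated with Mitchell's classic method: take the MSB position as the integer part and interpolate the mantissa linearly. The Sussex estimator mentioned on the next slides is not public, so this is a stand-in sketch of the general technique, not that design.

```python
def log2_linear(x):
    """Linear log2 approximation for a positive integer (Mitchell's method).

    Integer part from the MSB position; fractional part by linear
    interpolation of the mantissa. Always underestimates the true log2,
    with a worst-case error of about 0.086 bits.
    """
    assert x > 0
    k = x.bit_length() - 1            # floor(log2(x)) = index of the MSB
    frac = (x - (1 << k)) / (1 << k)  # mantissa in [0, 1)
    return k + frac
```

For exact powers of two the result is exact (e.g. log2_linear(8) = 3); for x = 6 it gives 2.5 against a true value of about 2.585, an underestimate, which matches the negative error bias reported in the experimental results.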

  14. Implementation. 1
  FFT entropy: the design consisted of a simple chain of elements carrying out the algorithm (shown as a block diagram in the original slide). This design calculates the absolute entropy of the original dataset, not the relative one.

  15. Implementation. 2
  • Inputs to the system were based on the ACF results received from CLUSTER; only 32 points were required originally.
  • The FFT module is a pipelined version with full internal precision. Its output is rounded to the closest integer for the next steps.
  • The log2 estimator is based on a linear-approximation algorithm developed at the University of Sussex. It is fast, very resource-efficient and very accurate, especially for larger numbers.
  • Control logic was minimal, consisting of a small Finite State Machine (FSM) to provide control signals.

  16. Experimental Results. 1
  • Synthesis results (8-bit inputs, 32 data points):
  • Only 7% of the FPGA was utilised.
  • Clock speeds of 220 MHz.
  • Two 16K RAM blocks used, as required by the FFT.
  • Synthesis results (8-bit inputs, extended to 64 data points):
  • 9% of the FPGA was utilised.
  • Clock speeds of 210 MHz.
  • Two 16K RAM blocks used, as required by the FFT.
  • Run time: roughly 4N clock cycles, where N is the number of points. This allows for the loading, processing and unloading of the FFT core, and for all further calculations.

  17. Experimental Results. 1 (cont.)
  • Logic usage: 7% for 32 data points; 9% for 64 data points (extended).
  • Speeds achieved: 220 MHz for 32 data points; 210 MHz for 64 data points (extended).

  18. Experimental Results. 2
  • Average error of the FPGA implementation compared to a Matlab reference implementation: about −3%. This accuracy allows for clear data selection through thresholding.
  • A large portion of the error is inevitable, as it is caused by the necessary rounding of the FFT output.
  • Further errors are introduced by the log2 estimator. This module is very accurate (<1% error) for larger numbers; CLUSTER data, however, are generally of small value, hence the errors.
  • The log2 estimator always undervalues the true logarithm, so the error inherent in the overall entropy system is always negative.

  19. Future Work
  • Generalisation of the entropy calculations to be ACF-independent.
  • Application in the Correlating Electron Spectrograph (CORES), a University of Sussex project scheduled to launch in 2006. We are currently investigating the directional entropy of monitored events.
  • Optimisation of the log2 estimation techniques for higher accuracy.
  • Generalised threshold-setting techniques.
  • Entropy algorithms to be developed for multi-channel applications.

  20. Conclusion
  • We have implemented a method that can separate scientifically interesting data from noise.
  • It is easy to implement in an FPGA, with high accuracy.
  • It can be included in space instruments for real-time data selection.
  • The CLUSTER case study shows that at least one specific portion of telemetry could have been reduced to just 20% of its original volume.
