
Chapter 12 Case Studies


Presentation Transcript


  1. Chapter 12 Case Studies

  2. Case Study - EEG Spike Detection • Outline: • Problem definition • Data acquisition and preprocessing • Alternative network paradigms and structures • Results

  3. Acknowledgments At Johns Hopkins University Applied Physics Laboratory: Russ Eberhart, Roy Dobbins, Chuck Spaur At Johns Hopkins Hospital: Bob Webber, Ron Lesser, Dale Roberts

  4. What is an EEG? EEG is an abbreviation for ‘electroencephalogram.’ It is the recording of brain electric potentials varying in time at frequencies up to a few tens of Hz and measuring from a few microvolts to a few millivolts. For diagnostic purposes, it is usually taken by placing electrodes at standard locations. The most commonly used electrode arrangement is the International 10-20 montage.

  5. EEG Electrode Placement Diagram of top view of the scalp with the nose (front of scalp) up, illustrating the EEG electrode positions in the 10-20 International System. C is central, F is frontal, P is parietal, O is occipital, and T is temporal. Odd numbers designate leads on the left side of the scalp, even numbers designate leads on the right side, and Z designates zero or midline. (Many more channels may be used.)

  6. Usually, electrodes are attached to the scalp. However, sometimes they are placed directly on the brain.

  7. EEG Spike Detection Project • A team effort • Always remember that “medicine drives engineering” and that the customer is always right • Data were taken real-time from multiple channels (32-64 typical) • Data rate is often 200–250 Hz continuously, resulting in 10–100 Mbytes per hour of data • Accurate interpretation is critical: surgical procedures often depend on results obtained • Replace expensive and time-consuming manual interpretation of EEG
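
A quick sanity check on the quoted data volume (the sample size is not given on the slide; 16-bit samples are assumed here):

```python
# Back-of-the-envelope check of the data volume quoted above.
# Assumes 2 bytes (16 bits) per sample, which is typical but not stated.
channels = 64          # upper end of the 32-64 channel range
sample_rate_hz = 250   # upper end of the 200-250 Hz range
bytes_per_sample = 2   # assumption: 16-bit samples

bytes_per_hour = channels * sample_rate_hz * bytes_per_sample * 3600
print(f"{bytes_per_hour / 1e6:.0f} MB per hour")  # ~115 MB/hour at the high end
```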

  8. Problems to be addressed Primary: On-line multi-channel analysis of EEG waveforms, including spike and seizure detection Secondary: Reduction in amount of data to be recorded and archived, especially paper records

  9. Design Considerations • Design process was iterative • Raw data versus pre-processed parameters • Network architectures • Minimize system cost (< $10K 1990 dollars) • Ambulatory system desirable

  10. System Performance Specifications • Multi-channel analysis capability • “Real-time” analysis • Minimal training required for each patient • Spikes are defined by agreement of neurologists (at least 4 out of 6) • Recall and precision (each > 0.8) are performance measures • Note: The first three were relatively easy to formulate; the last two were difficult.

  11. Spike Detection Performance Metrics Recall: The number of spikes correctly identified by the system divided by the number of spikes identified by the neurologists Precision: The number of spikes correctly identified by the system divided by the total number of spikes identified by the system (includes false positives)

  12. Contingency Matrix The matrix compares the system diagnosis with the gold standard (neurologist) diagnosis: TP = spike called a spike, FP = non-spike called a spike, FN = spike missed by the system, TN = non-spike correctly rejected. Recall is TP/(TP + FN). Precision is TP/(TP + FP).
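
The two metrics follow directly from the matrix counts. A minimal sketch, with hypothetical counts standing in for real project results:

```python
# Recall and precision from the contingency matrix, as defined above.
def recall(tp: int, fn: int) -> float:
    """Fraction of neurologist-identified spikes the system found."""
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    """Fraction of system-identified spikes that were real spikes."""
    return tp / (tp + fp)

tp, fp, fn = 85, 15, 10           # hypothetical counts, not project data
print(recall(tp, fn))             # 0.89... -> meets the > 0.8 spec
print(precision(tp, fp))          # 0.85    -> meets the > 0.8 spec
```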

  13. Data Preprocessing and Categorization Design effort focused on two main areas: • Preprocessing of data for NN input • Development of NN analysis tools Three main processing alternatives were considered: 1. Raw data using sliding windows (a max. 200 ms spike results in a 250 ms window; see the sketch below) 2. Preprocess raw data; present the raw candidate spike centered in the window 3. Preprocess to produce parameters and the spike center time
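
A minimal sketch of alternative 1 (raw-data sliding windows), assuming a 250 Hz sampling rate so that a 250 ms window is roughly 62 samples; the window length and step size are illustrative choices, not project settings:

```python
import numpy as np

def sliding_windows(channel: np.ndarray, fs: int = 250,
                    window_ms: int = 250, step: int = 1) -> np.ndarray:
    """Return overlapping raw-data windows from one EEG channel."""
    win = int(fs * window_ms / 1000)             # 62 samples at 250 Hz
    starts = range(0, len(channel) - win + 1, step)
    return np.stack([channel[s:s + win] for s in starts])

windows = sliding_windows(np.random.randn(250))  # one second of fake data
print(windows.shape)                             # (189, 62) with the defaults
```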

  14. Methods Selected: 2 and 3 The JHU Hospital software was set to minimize false negatives (resulting in 2-3 false positives for each real spike) Data “of interest” averaged about 1 spike per second, so the processing load was about 3-4 candidate classifications per second Used the JHU Hospital spike viewer, which produced 9 parameters for each candidate spike

  15. Spike Parameters

  16. Input Data Scaling • Scaling had a significant effect on results • OK to normalize uniformly across all channels for raw data • For parametric data, scaling across all channels didn’t work; neither did scaling each channel individually • For parametric data, success was achieved by scaling channels with the same units together, as sketched below: • amplitudes • times • sharpnesses
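
A sketch of the grouping idea that worked for the parametric data: min-max scale all columns that share the same units as one block. The column grouping and the [0, 1] target range are assumptions for illustration:

```python
import numpy as np

def scale_by_unit_group(X: np.ndarray, groups: dict) -> np.ndarray:
    """Min-max scale each group of same-unit columns jointly to [0, 1]."""
    Xs = X.astype(float)
    for cols in groups.values():
        block = Xs[:, cols]
        lo, hi = block.min(), block.max()        # one shared range per unit group
        Xs[:, cols] = (block - lo) / (hi - lo)
    return Xs

# Hypothetical layout of the 9 candidate-spike parameters by unit
groups = {"amplitudes": [0, 1, 2], "times": [3, 4, 5], "sharpnesses": [6, 7, 8]}
X_scaled = scale_by_unit_group(np.random.rand(100, 9), groups)
print(X_scaled.min(), X_scaled.max())            # both within [0, 1]
```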

  17. ROC Curve for Output 1

  18. ROC Curve for Output 2

  19. The “Universal Solution” Myth Minimization of false negatives is more important than minimization of false positives in the U.S. It is the reverse in New Zealand. System performance specifications are customer and application dependent.

  20. Case Study: Determining Battery State of Charge Using Computational Intelligence R. Eberhart, Y. Chen and S. Lyashevskiy Purdue School of Engineering and Technology Indianapolis, Indiana S. Sullivan and R. Brost Delphi Energy and Engine Management Systems Indianapolis, Indiana

  21. The Situation • Existing state of charge (SOC) estimation methods don’t perform satisfactorily for many applications. • Problems arise due to: • Charge-discharge cycles • Load profiles • Environmental conditions • Accuracy of existing systems was not better than about 10 percent.

  22. Dynamic Load Profile

  23. The Project Develop a system to determine SOC for a string of two or more lead-acid batteries Goal: Achieve errors significantly less than 5% over charge-discharge cycles, load and environmental variations

  24. Prior Technology • Amp-hour integration: high accumulation errors • Peukert’s relation (see the sketch below): requires constant-current discharge and invariant temperature and environmental conditions; discharge to 0% SOC required (hard on batteries!) • Families of capacity-voltage-current curves: battery aging invalidates calibration
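
For reference, one common form of Peukert's relation, which is why it needs a constant discharge current; the capacity, rated discharge time, and exponent below are illustrative lead-acid values, not project data:

```python
# Peukert's relation (one common form): t = H * (C / (I * H)) ** k,
# where C is rated capacity (Ah) at rated discharge time H (hours),
# I is the actual constant discharge current (A), and k is the Peukert exponent.
def peukert_runtime_hours(current_a: float, rated_capacity_ah: float = 100.0,
                          rated_hours: float = 20.0, k: float = 1.2) -> float:
    """Estimated runtime at a constant discharge current (illustrative constants)."""
    return rated_hours * (rated_capacity_ah / (current_a * rated_hours)) ** k

print(peukert_runtime_hours(5.0))    # 20 h at the rated C/20 current
print(peukert_runtime_hours(20.0))   # ~3.8 h at a heavier load
```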

  25. Electrochemical Principles • Need current, voltage, temperature, and amp-hours of individual batteries • Results in an input vector with 4n inputs for n batteries • Makes an online system difficult to implement computationally and economically • Additional goal: reduce the dimensionality of the inputs

  26. Data Acquisition Approach • Acquired numerous data sets • Included constant and dynamic loads • Varied temperature

  27. Initial Input Parameters • Discharge current of battery pack • Total ampere hours used • Average temperature of battery pack • Minimum battery voltage • Maximum battery voltage • Average battery voltage • Voltage difference between average and minimum battery voltages • Minimum battery voltage at previous sampling time

  28. Initial Design • Various approaches were tried • Selected a feedforward neural network trained with supervised learning • Initial design used all 8 parameters in the previous list in an 8-5-1 network configuration • Sigmoidal activation functions used for the hidden and output PEs • Each input was scaled to between 0 and 1 • Levenberg-Marquardt training algorithm chosen • Second-order Butterworth filter implemented for test/run only (sketched below) • Largest errors were at the beginning and end of the cycle
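
The second-order Butterworth smoothing applied at test/run time could look like the sketch below (using SciPy); the sampling rate and cutoff frequency are assumptions, since the slide does not give them:

```python
import numpy as np
from scipy.signal import butter, lfilter

def smooth_soc(raw_soc: np.ndarray, fs_hz: float = 1.0,
               cutoff_hz: float = 0.05) -> np.ndarray:
    """Causally low-pass filter the network's raw SOC estimates."""
    b, a = butter(2, cutoff_hz, btype="low", fs=fs_hz)   # 2nd-order Butterworth
    return lfilter(b, a, raw_soc)

noisy = 0.8 + 0.02 * np.random.randn(600)   # fake noisy SOC estimates
print(smooth_soc(noisy)[-5:])               # smoothed values near 0.8
```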

  29. Errors Worse Near 0 and 100 Percent

  30. Final Design • Reduced the number of processing elements to a 5-3-1 network configuration (used inputs 1-4 and 8) • Used a linear output processing element • Trained on 2,500 patterns • Tested on 24 data sets with various load and temperature profiles • Average sum-squared error per pattern ~0.0006 • Errors in SOC estimation generally less than 1%, always less than 2%
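
A minimal NumPy sketch of the final architecture as described: five inputs, three sigmoidal hidden PEs, and one linear output PE. The weights are random placeholders and the Levenberg-Marquardt training step is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 5)), np.zeros(3)   # hidden layer: 3 sigmoidal PEs
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # output layer: 1 linear PE

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def predict_soc(x: np.ndarray) -> float:
    """Forward pass: 5 scaled inputs -> estimated state of charge."""
    h = sigmoid(W1 @ x + b1)            # sigmoidal hidden activations
    return (W2 @ h + b2).item()         # linear output (no squashing)

print(predict_soc(rng.uniform(0.0, 1.0, size=5)))   # untrained estimate
```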

  31. Performance Results

  32. Learnings/Conclusions • 1. The methodology chosen met all project goals. • 2. Use only the data necessary to train the network; more is not always better. • 3. Match output processing element activations to the problem. • 4. Computational methods that are useful for either training or testing may not be useful or needed for both. • U.S. Patent 6,064,180 was issued May 16, 2000 for this technology.
