Presentation Transcript


  1. Automatic detection of microchiroptera echolocation calls from field recordings using machine learning algorithms Mark D. Skowronski and John G. Harris Computational Neuro-Engineering Lab Electrical and Computer Engineering University of Florida, Gainesville, FL, USA May 19, 2005

  2. Overview • Motivations for acoustic bat detection • Machine learning paradigm • Detection experiments • Conclusions

  3. Bat detection motivations • Bats are among the most diverse yet least studied mammals (~25% of all mammal species are bats). • Bats affect agriculture and carry diseases (directly or through parasites). • The acoustic domain is significant for echolocating bats and is non-invasive. • Recorded data can be voluminous → automated algorithms for objective and repeatable detection & classification are desired.

  4. Conventional methods • Conventional bat detection/classification parallels the acoustic-phonetic paradigm of automatic speech recognition (ASR) from the 1970s. • Characteristics of acoustic phonetics: • Originally mimicked human expert methods • First, boundaries between regions were determined • Second, features for each region were extracted • Third, features were compared using decision trees or discriminant function analysis (DFA) • Limitations: • Boundaries are ill-defined and sensitive to noise • Many feature extraction algorithms exist, with varying degrees of noise robustness

  5. Machine learning • Acoustic phonetics gave way to machine learning for ASR in the 1980s: • Advantages: • Decisions based on more information • Mature statistical foundation for algorithms • Frame-based features, informed by expert knowledge • Improved noise robustness • For bats: increased detection range

  6. Detection experiments • Database of bat calls • 7 different recording sites, 8 species • 1265 hand-labeled calls (from spectrogram readings) • Detection experiment design • Discrete events: 20-ms bins • Discrete outcomes: Yes or No: does a bin contain any part of a bat call?
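A minimal sketch of the bin labeling described above, assuming NumPy and hand-labeled call intervals given as (start, end) times in seconds; the function and argument names are illustrative, not from the talk:

```python
# Sketch: mark each 20-ms bin "Yes" if any part of a hand-labeled call overlaps it.
# `call_intervals` is assumed to be a list of (start_s, end_s) pairs from the hand labels.
import numpy as np

def label_bins(call_intervals, duration_s, bin_s=0.020):
    n_bins = int(np.ceil(duration_s / bin_s))
    labels = np.zeros(n_bins, dtype=bool)
    for start_s, end_s in call_intervals:
        first = int(start_s // bin_s)                 # first bin touched by the call
        last = min(int(end_s // bin_s), n_bins - 1)   # last bin touched by the call
        labels[first:last + 1] = True
    return labels
```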

  7. Detectors • Baseline • Threshold for frame energy • Gaussian mixture model (GMM) • Model of probability distribution of call features • Threshold for model output probability • Hidden Markov model (HMM) • Similar to GMM, but includes temporal constraints through piecewise-stationary states • Threshold for model output probability along Viterbi path
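The GMM detector above can be sketched with scikit-learn's GaussianMixture (an assumption; the talk does not name a toolkit). The idea is to fit a mixture model to features from hand-labeled call frames and threshold the per-frame log-likelihood; `call_features`, `test_features`, and `threshold` are placeholders:

```python
# Sketch of the GMM detector: model the distribution of call features, then flag
# frames whose log-likelihood under the call model exceeds a threshold. The HMM
# variant additionally constrains frames temporally along a Viterbi path.
from sklearn.mixture import GaussianMixture

def train_call_gmm(call_features, n_components=8):
    """Fit a GMM to frame features extracted from hand-labeled bat calls."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag", random_state=0)
    gmm.fit(call_features)
    return gmm

def gmm_detect(gmm, test_features, threshold):
    """Per-frame decision: log p(x | call model) > threshold."""
    return gmm.score_samples(test_features) > threshold
```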

  8. Feature extraction • Baseline • Normalization: session noise floor at 0 dB • Feature: frame power • Machine learning • Blackman window, zero-padded FFT • Normalization: log amplitude mean subtraction • Borrowed from ASR: analogous to cepstral mean subtraction • Removes the transfer function of the recording environment • Mean taken across time for each FFT bin • Features: • Maximum FFT amplitude, dB • Frequency at maximum amplitude, Hz • First and second temporal derivatives (slope, concavity)
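A minimal sketch of the machine-learning features listed above, assuming NumPy and a single-channel recording `x` at sample rate `fs`; the frame length, hop, and FFT size are illustrative choices, not values from the talk:

```python
# Sketch of the per-frame features: peak log amplitude, peak frequency, and their
# first and second temporal derivatives (six features per frame).
import numpy as np

def frame_features(x, fs, frame_len=256, hop=128, n_fft=1024):
    window = np.blackman(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    spectra = np.empty((n_frames, n_fft // 2 + 1))
    for i in range(n_frames):
        frame = x[i * hop:i * hop + frame_len] * window
        mag = np.abs(np.fft.rfft(frame, n=n_fft))      # zero-padded FFT
        spectra[i] = 20.0 * np.log10(mag + 1e-12)      # log amplitude, dB
    # Normalization: subtract the mean over time of each FFT bin,
    # analogous to cepstral mean subtraction in ASR.
    spectra -= spectra.mean(axis=0, keepdims=True)
    peak_db = spectra.max(axis=1)                      # maximum FFT amplitude, dB
    peak_hz = spectra.argmax(axis=1) * fs / n_fft      # frequency at the maximum, Hz
    base = np.column_stack([peak_db, peak_hz])
    d1 = np.gradient(base, axis=0)                     # slope
    d2 = np.gradient(d1, axis=0)                       # concavity
    return np.hstack([base, d1, d2])                   # six features per frame
```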

  9. Feature extraction examples

  10. Feature extraction examples

  11. Feature extraction examples • Six features: Power, Frequency, ΔP, ΔF, ΔΔP, ΔΔF

  12. Detection example

  13. Experiment results

  14. Experiment results

  15. Conclusions • Machine learning algorithms improve detection when specificity is high (> 0.6). • HMM is slightly superior to GMM: it uses more temporal information but is slower to train and test. • Hand labels were determined from spectrograms and are biased towards high-power calls. • Machine learning models are applicable to other species.

  16. Bioacoustic applications • To apply machine learning to other species: • Determine ground-truth training data through expert hand labels • Extract relevant frame-based features, considering domain-specific noise sources (echoes, propeller noise, other biological sources) • Train models of the features from the hand-labeled data • Consider training “silence” models for discriminant detection/classification
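The last bullet can be sketched as a likelihood-ratio test between a call model and a background/“silence” model; both are assumed here to be fitted GMMs as in the earlier detector sketch, and the variable names are placeholders:

```python
# Sketch of discriminant detection: a frame is flagged when the call model
# explains it sufficiently better than the silence/background model.
# `call_gmm` and `silence_gmm` are assumed to be fitted sklearn GaussianMixture models.
def likelihood_ratio_detect(call_gmm, silence_gmm, features, threshold=0.0):
    """Flag frames where log p(x|call) - log p(x|silence) exceeds the threshold."""
    llr = call_gmm.score_samples(features) - silence_gmm.score_samples(features)
    return llr > threshold
```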

  17. Further information • http://www.cnel.ufl.edu/~markskow • markskow@cnel.ufl.edu • Acknowledgements: bat data kindly provided by Brock Fenton, U. of Western Ontario, Canada
