
Higher-Level Clients to Leverage MUSTANG Metrics



Presentation Transcript


  1. Higher-Level Clients to Leverage MUSTANG Metrics • Dr. Mary Templeton, IRIS Data Management Center • Managing Data from Seismic Networks, September 9-17, 2015, Hanoi, Vietnam

  2. Why Have Multiple Clients? • Quality Assurance Practice at IRIS DMC • Finding problems • Analyst review • Tracking problems • Reporting problems

  3. Customizing Quality Assurance • Strategies for leveraging MUSTANG metrics • Scripting your own clients • wget • curl • R

  4. Quality Assurance Practice at IRIS DMC • Finding Problems: Automated Text Reports (internal use) • A script retrieves MUSTANG metrics • Metrics are grouped by problem type • Focuses on problem stations for further review
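A minimal sketch of this kind of report script, using the measurements service the same way the wget example on slide 15 does; the metric list, network, and per-metric file layout are illustrative choices for this sketch, not the DMC's actual report code:

    #!/usr/bin/env bash
    # Fetch one week of measurements for a few metrics and save each to its
    # own CSV file, ready to be grouped by problem type in a later step.
    BASE='http://service.iris.edu/mustang/measurements/1/query'
    WINDOW='2015-07-07T00:00:00,2015-07-14T00:00:00'
    for metric in percent_availability num_gaps clock_locked sample_mean; do
      wget -q -O "${metric}.csv" \
        "${BASE}?metric=${metric}&net=IU&cha=BH?&format=csv&timewindow=${WINDOW}"
    done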

  5. Quality Assurance Practice at IRIS DMC • Analyst review • Metrics: dead_channel_exp < 0.3 and pct_below_nlnm > 20 • Review plot using the MUSTANG noise-pdf service [plot annotation: Nepal Earthquake microseisms] • *IU.WCI.00.BHZ isn't completely dead; it still records some energy
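A review plot like this can also be requested from the command line; a sketch, where the target string (net.sta.loc.chan.quality), the parameter names, and the date window are my assumptions based on a reading of the noise-pdf service documentation:

    # Request a noise PDF plot for the channel under review; format=plot
    # should return an image. Target quality code and dates are assumptions.
    wget -O WCI_BHZ_pdf.png \
      'http://service.iris.edu/mustang/noise-pdf/1/query?target=IU.WCI.00.BHZ.M&starttime=2015-04-20&endtime=2015-05-04&format=plot'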

  6. Quality Assurance Practice at IRIS DMC • Analyst review • Review plot using the MUSTANG noise-mode-timeseries service [plot annotation: problem started on August 27, 2014]
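The corresponding command-line request, again with parameter names taken from my reading of the noise-mode-timeseries service documentation and a date window chosen as a placeholder to bracket the problem onset:

    # Plot how the PSD mode evolves with time, to spot when a problem began.
    wget -O WCI_BHZ_mode.png \
      'http://service.iris.edu/mustang/noise-mode-timeseries/1/query?target=IU.WCI.00.BHZ.M&starttime=2014-07-01&endtime=2014-10-01&format=plot'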

  7. Quality Assurance Practice at IRIS DMC • Analyst review • Review sample_mean plot using MUSTANG databrowser

  8. Quality Assurance Practice at IRIS DMC • Analyst review • Example: Channel Orientation Analysis The orientation_check metric finds observed channel orientations for shallow M >= 7 events by • Calculating the Hilbert transform of the Z component (H{Z}) for Rayleigh waves • Cross-correlating H{Z} with trial radial components calculated at varying azimuths until the correlation coefficient is maximized • The observed channel orientation is the difference between the calculated event back azimuth and the observed radial azimuth Stachnik, J.C., Sheehan, A.F., Zietlow, D.W., Yang, Z., Collins, J., and Ferris, A., 2012, Determination of New Zealand Ocean Bottom Seismometer Orientation via Rayleigh-Wave Polarization, Seismological Research Letters, v. 83, no. 4, p. 704-712.

  9. Quality Assurance Practice at IRIS DMC • Analyst review • orientation_check measurements from 2013 and 2014 for CU.ANWB having correlation coefficients > 0.4 [plot annotations: the median observed Y azimuth differed from the metadata by -2.79 degrees; one value was omitted from the median because it fell outside two standard deviations] • A discrepancy with the CU.TGUH.00 metadata orientation was found using this metric; its metadata has since been corrected.

  10. Why Have Multiple Clients? • You can browse small networks by channel • But for large networks, retrieving a list is faster [figure: percent_availability box plot]

  11. Quality Assurance Practice at IRIS DMC • Tracking Problems

  12. Quality Assurance Practice at IRIS DMC • Tracking Problems [figure: HTML report]

  13. Quality Assurance Practice at IRIS DMC • Reporting Problems • Virtual network report summarized by network • Links to analyst assessment of issues …

  14. Strategies for leveraging MUSTANG metrics • Use Metrics Thresholds • Find problems by retrieving channels that meet a meaningful metrics condition • Missing data have percent_availability = 0 • Channels with masses against the stops have very large abs(sample_mean) • Channels that report clock_locked = 0 have lost their GPS time reference
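As a sketch of the first test above, a threshold can be applied client-side with awk after fetching CSV; the column layout (value first, target second) is an assumption about the measurements service's CSV rows and should be checked against the header line:

    # Find channels with no data: keep rows whose measured value is 0 and
    # print the target (net.sta.loc.chan). Column order is assumed.
    wget -q -O - \
      'http://service.iris.edu/mustang/measurements/1/query?metric=percent_availability&net=IU&cha=BHZ&format=csv&timewindow=2015-07-07T00:00:00,2015-07-14T00:00:00' |
    awk -F, '$1 ~ /^[0-9.]+$/ && $1 == 0 { print $2 }' | sort -u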

  15. Strategies for leveraging MUSTANG metrics • Finding Metrics Thresholds • Retrieve measurements for your network: wget 'http://service.iris.edu/mustang/measurements/1/query?metric=sample_mean&net=IU&cha=BH[12ENZ]&format=csv&timewindow=2015-07-07T00:00:00,2015-07-14T00:00:00'

  16. Strategies for leveraging MUSTANG metrics • Finding Metrics Thresholds • Find the range of metrics values for problem channels • Threshold for pegged masses: abs(sample_mean) > 1e+7
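One way to see that range from the sample_mean CSV fetched on the previous slide; as before, value-in-column-1 and target-in-column-2 are assumptions about the CSV layout:

    # Largest absolute sample_mean per channel: pegged masses separate
    # cleanly from healthy channels when sorted.
    awk -F, '$1 ~ /^-?[0-9]/ {
      v = ($1 < 0 ? -$1 : $1)          # absolute value of the measurement
      if (v > max[$2]) max[$2] = v     # track the per-target maximum
    }
    END { for (t in max) printf "%s %g\n", t, max[t] }' sample_mean.csv |
    sort -k2 -g                        # channels above 1e+7 sit at the bottom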

  17. A Note About Amplitude Metrics • Metrics reported in counts may have different thresholds for different instrumentation • sample_max • sample_mean • sample_median • sample_min • sample_rms

  18. A Note About Amplitude Metrics • PSD-based metrics have their instrument responses removed, so one threshold works for similar (e.g., broadband) instrumentation • dead_channel_exp • pct_below_nlnm • pct_above_nhnm • transfer_function

  19. A Note About Amplitude Metrics • PDF: a “heat-density” plot of many Power Spectral Density curves [plot annotations: Calibration, New High Noise Model (NHNM), Healthy PSDs, New Low Noise Model (NLNM), Dead channel]

  20. Metrics Threshold Example Problem [figure: HHE poles vs. HHN poles; annotation: sign error]

  21. Strategies for leveraging MUSTANG metrics • Combine metrics • Dead channels have • almost linear PSDs (dead_channel_exp < 0.3) • and lie mainly below the NLNM (pct_below_nlnm > 20)

  22. Combined Metrics Example Problem dead_channel_exp < 0.3 && pct_below_nlnm > 20
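A sketch of this combined test as a two-file awk join, assuming both metrics were downloaded to CSV first (as in the loop after slide 4) and that value and target are the first two columns:

    # Join dead_channel_exp and pct_below_nlnm on target and require both
    # conditions. Repeated targets keep their last measurement (a
    # simplification for this sketch).
    awk -F, '
      NR == FNR { if ($1 ~ /^-?[0-9]/) dce[$2] = $1; next }
      $1 ~ /^-?[0-9]/ && ($2 in dce) &&
      dce[$2]+0 < 0.3 && $1+0 > 20 { print $2 }
    ' dead_channel_exp.csv pct_below_nlnm.csv | sort -u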

  23. Strategies for leveraging MUSTANG metrics • Metrics Arithmetic • Metrics averages • num_gaps / # measurements • num_spikes / # measurements • Metrics differences • pct_below_nlnm daily difference

  24. Metrics Arithmetic Example Problem A nonzero gap average across all channels, with no individual high-num_gaps days, may indicate an ongoing telemetry problem.
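A sketch of the gap-average calculation, under the same assumed CSV layout (value in column 1, target in column 2):

    # Average num_gaps per measurement for each channel, then flag channels
    # averaging >= 2 gaps per measurement (the avgGaps test on slide 25).
    awk -F, '$1 ~ /^[0-9]/ { sum[$2] += $1; n[$2]++ }
    END { for (t in n) printf "%s %.2f\n", t, sum[t] / n[t] }' num_gaps.csv |
    awk '$2 >= 2'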

  25. Strategies for leveraging MUSTANG metrics • Some favorite metrics tests for GSN data • noData: percent_availability = 0 • gapsGt12: num_gaps > 12 • avgGaps: average gaps/measurement >= 2 • noTime: clock_locked = 0 • dead: dead_channel_exp < 0.3 && pct_below_nlnm > 20 • pegged: abs(sample_rms) > 10e+7 • lowAmp: dead_channel_exp >= 0.3 && pct_below_nlnm > 20 • noise: dead_channel_exp < 0.3 && pct_above_nhnm > 20 • hiAmp: sample_rms > 50000 • avgSpikes: average spikes/measurement >= 100 • dcOffsets: dc_offset > 50 • badRESP: pct_above_nhnm > 90 || pct_below_nlnm > 90
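The single-metric tests in this list could be driven from one small script; a sketch under the same CSV-layout assumption, with each test written as name:metric:comparison:threshold (the multi-metric tests such as dead and noise would still need the join shown after slide 22):

    #!/usr/bin/env bash
    # Fetch each metric and flag targets whose values cross its threshold.
    # Test definitions below are illustrative, taken from this slide.
    BASE='http://service.iris.edu/mustang/measurements/1/query'
    WINDOW='2015-07-07T00:00:00,2015-07-14T00:00:00'
    for test in noData:percent_availability:eq:0 \
                gapsGt12:num_gaps:gt:12 \
                noTime:clock_locked:eq:0 \
                hiAmp:sample_rms:gt:50000; do
      IFS=: read -r name metric op th <<<"$test"
      echo "== $name =="
      wget -q -O - "${BASE}?metric=${metric}&net=IU&cha=BH?&format=csv&timewindow=${WINDOW}" |
        awk -F, -v op="$op" -v th="$th" '$1 ~ /^-?[0-9]/ {
          if ((op == "gt" && $1+0 > th+0) || (op == "lt" && $1+0 < th+0) ||
              (op == "eq" && $1+0 == th+0)) print "  " $2 }' | sort -u
    done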

  26. Strategies for leveraging MUSTANG metrics • Scripting your own client can take advantage of these strategies

  27. Strategies for leveraging MUSTANG metrics • Incorporate graphics

  28. IRIS DMC QA Website • http://ds.iris.edu/ds/nodes/dmc/quality-assurance/ • Currently has links to • Existing MUSTANG clients • MUSTANG resources and tutorials • Interpreting Power Spectral Density graphs • We hope to add tutorials on MUSTANG’s R-based metrics packages and other ways to script your own clients in the future

  29. Thank you
