
Fidelity and Yield in a Volcano Monitoring Sensor Network


Presentation Transcript


  1. Fidelity and Yield in a Volcano Monitoring Sensor Network

  2. Introduction is Clear
  • States goal
    • “Evaluate the ability of a sensor network to provide meaningful data to domain scientists”
  • States Challenges
    • High data rates
    • Need complete data for signal analysis => reliable data collection
    • Correlation across multiple sensors => accurate time-stamps
  • States Contributions
    • “…analysis of the efficacy and accuracy of a volcano-monitoring sensor network as a scientific instrument”
    • “first paper to our knowledge to take a science-centric view of a sensor network with such demanding data-quality requirements”
  • States Evaluation Metrics
    • Robustness
    • Event detection accuracy
    • Data transfer performance
    • Timing accuracy
    • Data fidelity

  3. ?
  • Robustness: “We find that the sensor nodes themselves were extremely reliable but that overall robustness was limited by power outages at the base station and a single three-day software failure. Discounting the power outages and this single failure, mean node uptime exceeded 96%.”

  4. Background is Motivational
  • Existing Instrumentation
    • Data-loggers with heavy power/communication infrastructure
  • Sensor Network Challenges
    • High-resolution signal collection
    • Triggered data acquisition required due to limited radio bandwidth
    • Timing accuracy

  5. Experimental Design for Data Validation
  • They document the context necessary for network validation
  • Includes GPS for time-synch validation
  • Standalone seismic stations, consisting of a broadband sensor and a Reftek 130 data logger

  6. Presenting Results Optimistically: Network Robustness
  • “…the base station infrastructure was very unreliable, and a single bug affecting the Deluge protocol caused a three-day outage of the entire network”
  • “Failures of the base station infrastructure were a significant source of network downtime during the deployment. This contrasts with common assumptions that the base station is generally reliable and operating on a continuous power source.”
  • About removing Deluge
    • “We did not see this behavior in the lab before deployment, although we had not rigorously tested this portion of the code. In retrospect, it was optimistic of us to rely on a complex network reboot protocol that had not been field-tested”…

  7. Event Detector Accuracy
  • “We intended to apply our event detection algorithm to the signals collected by the two broadband seismic stations to establish the algorithm’s accuracy. Unfortunately, we found this to be difficult for several reasons.
    • First, each of the broadband stations suffered intermittent power and software failures….
    • Second, the broadband stations deployed a more sensitive seismometer with a much wider frequency response.
    • Additionally, the broadband seismometers are much more sensitive, … As a result, the broadband sensors are able to detect much weaker seismic signals.”

  8. “We focus our attention on a single day of data where the broadband stations were recording clean data and the sensor network was relatively stable.” <= Good technique?
  • “One of the authors, a seismologist, visually extracted events from the broadband data; during this 24-hour period, a total of 589 events were recorded by the broadband sensors.”

  9. Results
  • Straight comparison => 1% detection accuracy
    • “During the same time, the sensor network triggered on just 7 events, suggesting that our detection accuracy is very low (about 1%).”
  • After compensating for differences between the baseline sensors and the sensor network => 5% detection accuracy
    • “Taking these two factors into account, the network’s detection accuracy is still only about 5%”
  • Discounting all events that occur while nodes are downloading data => ?% detection accuracy
    • “During the Fetch cycles on August 15, the broadband stations recorded 42 events, 24% of the total events detected. This indicates that, all else being equal, the sensor network could have detected approximately 24% more events had we designed the protocol to sample and download simultaneously. We plan to add this feature in the next version of our system.”

  10. How 24% more events?
  • Of 589 total events, they detected 7
    • 7/589 => 1%
  • Then, they only consider 136 events after compensating for differences in sensors
    • 7/136 => 5%
  • Discounting 42 events that occur during data download:
    • 136 - 42 = 94 remaining events
    • 7/94 => ~7.5% effective detection rate
  • So at that rate, of the additional 42 events, they would have detected 42 × 0.075 ≈ 3 additional events
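
  The arithmetic above is easy to reproduce; here is a minimal sketch in Python (the event counts 589, 136, 42, and 7 come from the slides above, the rounding choices are mine):

  ```python
  # Sanity check of the detection-rate arithmetic on slide 10.
  # Event counts are quoted from the paper on the preceding slides.
  total_events = 589        # events hand-picked from the broadband data on Aug 15
  comparable_events = 136   # events left after compensating for sensor differences
  fetch_events = 42         # events that fell inside Fetch (download) cycles
  detected = 7              # events the sensor network actually triggered on

  raw_rate = detected / total_events                               # ~0.012 -> ~1%
  compensated_rate = detected / comparable_events                  # ~0.051 -> ~5%
  effective_rate = detected / (comparable_events - fetch_events)   # ~0.074 (slide rounds to 7.5%)
  extra_detections = fetch_events * effective_rate                 # ~3.1 -> ~3 events

  print(f"{raw_rate:.1%}  {compensated_rate:.1%}  {effective_rate:.1%}  {extra_detections:.1f}")
  ```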

  11. “In the end, we believe that our low detection rate is the result of the parameters used in the EWMA-based event detection algorithm. These parameters were chosen prior to the deployment, based on our experience with detecting infrasonic events at a different volcano [27]. We did not experiment with modifying them in the field. Indeed, using our algorithm with these same parameters on the broadband data for August 15 detects only 101 events, a fraction of the events chosen manually by an expert.”
  • Is it really about the parameters? Isn’t 101 >>> 7?
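
  For context, an EWMA-based trigger of the kind the quote refers to typically compares a fast and a slow moving average of the signal amplitude and fires when their ratio crosses a threshold. A minimal sketch assuming that structure (the alpha values and threshold below are hypothetical placeholders, not the authors' parameters):

  ```python
  def ewma_trigger(samples, short_alpha=0.2, long_alpha=0.002, ratio_threshold=3.0):
      """Toy EWMA-ratio event detector (illustrative only).

      Tracks a fast and a slow exponentially weighted moving average of the
      signal amplitude and reports the sample indices where the fast average
      exceeds the slow one by ratio_threshold.  All constants are placeholders;
      the slide's point is that detection accuracy hinges on such constants.
      """
      short_avg = long_avg = abs(samples[0]) + 1e-9  # avoid division by zero
      triggers = []
      for i, s in enumerate(samples):
          amp = abs(s)
          short_avg = short_alpha * amp + (1 - short_alpha) * short_avg
          long_avg = long_alpha * amp + (1 - long_alpha) * long_avg
          if short_avg / long_avg > ratio_threshold:
              triggers.append(i)
      return triggers
  ```

  Re-tuning short_alpha, long_alpha, and ratio_threshold against expert picks is exactly the parameter experimentation the quote says was not done in the field.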

  12. Time Rectification: Systematic Time-stamp Cleaning
  • Necessary because they hit a few glitches (such as a rare TinyOS bug in the clock driver)…
  • “In the absence of failures, this mapping would be a straightforward process. However, in the field, we noticed that nodes would occasionally lose synchronization with the rest of the network and report FTSP global times with significant errors, sometimes exceeding several hours. We suspect that the sparse deployment conditions at the volcano might have led to different behavior in the time synchronization protocol than in the lab. For example, occasional message loss or failure of a neighbor could cause the node’s global time to drift from the rest of the network. However, in lab tests that constrained the network topology we did not observe these instabilities.”
  • Their process makes no assumptions about sampling rate, and seems to have very few parameters/thresholds:
    • Requires at least 2 status messages in a window for linear regression
    • Window is 5 minutes
  • Did several simple validation tests in the lab for verification
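
  A minimal sketch of the per-window linear-regression step described above, assuming only the 5-minute windows and 2-message minimum mentioned on the slide (the outlier filtering of badly-synchronized FTSP timestamps and the window bookkeeping of the real pipeline are omitted, and the names are mine):

  ```python
  import numpy as np

  def rectify_window(status_points, local_sample_times, min_points=2):
      """Map local sample timestamps to global time via a per-window linear fit.

      status_points: (local_time, global_time) pairs reported by a node's status
      messages within one window (the slide cites 5-minute windows and a minimum
      of 2 messages).  Returns rectified global timestamps, or None if the window
      has too few reference points to fit a line.
      """
      if len(status_points) < min_points:
          return None
      local, global_times = np.asarray(status_points, dtype=float).T
      slope, intercept = np.polyfit(local, global_times, 1)  # least-squares line
      return slope * np.asarray(local_sample_times, dtype=float) + intercept
  ```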

  13. ??
  • “To our knowledge, few if any sensor network deployments have attempted to use network time synchronization protocols for extended periods.”

  14. Data Fidelity
  • “The final and most important measure of our network is its ability to provide scientifically-meaningful data on the volcano’s activity.”
  • “We identified four events in our [acoustic] data set with a clear infrasonic component. For each event, we hand-picked the arrival time of the wave at each node using the time rectified signal.”
  • “Analyzing the seismic waves captured by our network is significantly more challenging…. However, we present a high-level measure of the consistency of the signals captured by our network: that is, we evaluate whether the seismic wave arrivals are consistent with expected volcanic activity… We took 15 seismic events”
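
  The slide does not show what such a consistency check looks like. As a rough illustration for the infrasonic case only, one can compare the hand-picked arrival times against the relative delays predicted by straight-line acoustic propagation from the vent; the function, node distances, and speed-of-sound value below are illustrative assumptions, not taken from the paper:

  ```python
  def arrival_residuals(picked_arrivals, vent_distances, sound_speed=340.0):
      """Compare picked infrasound arrival times across nodes with the relative
      delays expected from straight-line propagation at `sound_speed` (m/s).

      picked_arrivals: {node_id: arrival_time_in_seconds} from rectified signals
      vent_distances:  {node_id: distance_from_vent_in_meters}
      Returns {node_id: residual_seconds}; residuals near zero suggest the
      rectified timestamps are mutually consistent.  Illustrative sketch only.
      """
      ref = min(picked_arrivals, key=picked_arrivals.get)  # earliest-arriving node
      residuals = {}
      for node, t_arr in picked_arrivals.items():
          expected = (vent_distances[node] - vent_distances[ref]) / sound_speed
          observed = t_arr - picked_arrivals[ref]
          residuals[node] = observed - expected
      return residuals
  ```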

  15. Lessons Learned
  • Ground truth and self-validation mechanisms are critical – YES!!
  • Coping with infrastructure and protocol failures
  • Building confidence inside cross-domain scientific collaborations

  16. (Very) Brief Summary
  • Clear Introduction
  • Motivational Background
  • Systems description is incomplete – but they cite previous papers that describe it
  • Experimental Design – documented their context for future validation!
  • Evaluation of Results
    • In some instances, they do a great job of explaining every point on the graph! Awesome!
    • They also give explanations for larger results – good!
    • In others (Figures 6, 7) they reference anomalous patterns but do not even try to explain why
    • Did some comparison with “ground truth”, but ultimately not much: 1 day to analyze the event detection algorithm, and 4 acoustic and 15 seismic events to analyze data fidelity; the seismic events were difficult to compare, so they did a “consistency” check instead
  • Expectation of good behavior in the field led them astray
    • Didn’t sufficiently test Deluge
    • Didn’t expect base-station outages
    • Time-synch protocol was tested with a “constrained” network topology that doesn’t seem to have included “occasional message loss or neighbor failure”?
