  1. Stream Survey Protocol Comparisons; In search of truth, comparability and reality. Brett Roper (broper@fs.fed.us) Aquatic Monitoring Program Lead USDA Forest Service

2. Overview
• Precision – this addresses how repeatable a given monitoring protocol is. The more repeatable a protocol, the more confidence we have in any one observation (making a call at the stream reach).
• Are the results related to the "truth"?
• Are there relationships among protocols – can data on attributes be shared even if protocols are different?
• If the question is a high-level indicator, say stream health, are monitoring groups comparable?

3. Repeatability within a protocol
• Low coefficient of variation
• High signal-to-noise ratio
• Discrimination among streams
Repeatability ensures that different evaluations of a stream by a monitoring program draw the same conclusion regardless of who does the survey. A sketch of these metrics follows.
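As a rough illustration of the two repeatability metrics, here is a minimal Python sketch; the repeat-visit numbers are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical repeat-visit data: rows are streams, columns are
# independent crews surveying the same reach with one protocol.
gradient = np.array([
    [1.2, 1.3, 1.1],
    [3.5, 3.4, 3.8],
    [0.6, 0.7, 0.6],
    [2.1, 2.0, 2.3],
])

# Coefficient of variation within each stream: sd / mean of repeat visits.
cv = gradient.std(axis=1, ddof=1) / gradient.mean(axis=1)

# Signal-to-noise: variance among stream means divided by the
# pooled within-stream (crew-to-crew) variance.
signal = gradient.mean(axis=1).var(ddof=1)
noise = gradient.var(axis=1, ddof=1).mean()
print("mean CV:", cv.mean(), "S/N:", signal / noise)
```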

4. Let's look at the within-group variability of gradient for the 12 streams.

5. Most groups have repeatable protocols. A's and B's are good for sites; B's, C's, and D's are good for large-scale surveys that rely on large sample sizes or on detecting large changes. F's will be difficult to use in determining status or trend at a site or across sites.

6. If most protocols are OK (and no one can force them to change), the question becomes: are they related to each other and/or the "truth"?

7. Some attributes appear to crosswalk easily, even if there is some variability within a monitoring group.

8. Truth to protocol; Gradient. All are good even if they were not of the same repeatability.
Truth = 0.05307 + 1.02799(AREMP), r² = 0.995
Truth = -0.30709 + 1.20253(CFG), r² = 0.980
Truth = -0.19464 + 1.04283(EMAP), r² = 0.993
Truth = 0.09352 + 0.9498(ODFWS), r² = 0.963
Truth = 0.03793 + 1.05197(PIBO), r² = 0.993
Truth = -0.21219 + 1.00497(UC), r² = 0.992
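Each of these lines is an ordinary least-squares fit of the intensively surveyed "truth" against one protocol's values. A minimal sketch of how such a fit could be computed (the paired measurements below are hypothetical, not the study data):

```python
import numpy as np

# Hypothetical paired measurements: 'truth' is the intensively surveyed
# gradient, 'protocol' is one group's value at the same reaches.
truth = np.array([0.8, 1.5, 2.2, 3.1, 4.0, 5.6])
protocol = np.array([0.7, 1.4, 2.1, 3.0, 3.9, 5.4])

# Least-squares line of the form Truth = a + b(protocol),
# matching the slide's equations, plus r^2 for the fit.
b, a = np.polyfit(protocol, truth, 1)
r2 = np.corrcoef(protocol, truth)[0, 1] ** 2
print(f"Truth = {a:.5f} + {b:.5f}(protocol), r2 = {r2:.3f}")
```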

  9. Sinuosity; By Stream

  10. Sinuosity – By stream type

11. Truth to protocol; Sinuosity.
Truth = 0.10032 + 0.9172(AREMP), r² = 0.93
Truth = 0.18409 + 0.86456(EMAP), r² = 0.95
Truth = 0.31618 + 1.21918(PIBO), r² = 0.76
Truth = 0.40388 + 0.65961(UC), r² = 0.87

  12. Bankfull

  13. Bankfull Width

14. Truth to protocol; Bankfull.
Truth = -0.2986 + 1.2382(AREMP), r² = 0.59
Truth = 0.8448 + 1.1273(CFG), r² = 0.63
Truth = 0.6383 + 1.3139(EMAP), r² = 0.73
Truth = 1.8871 + 0.9654(NIFC), r² = 0.57
Truth = 1.6484 + 0.9898(ODFWS), r² = 0.65
Truth = 1.6163 + 1.1731(PIBO), r² = 0.59
Truth = 2.1586 + 1.2536(UC), r² = 0.52

15. Protocol to protocol; Bankfull. Pretty good across the board.
UC = 0.6621 + 0.4516(AREMP), r² = 0.91
UC = 0.1320 + 0.6352(CFG), r² = 0.95
UC = 0.1765 + 0.7939(EMAP), r² = 0.95
UC = 0.4258 + 0.6158(NIFC), r² = 0.98
UC = 0.6825 + 0.9898(ODFWS), r² = 0.97
UC = -0.6227 + 1.0137(PIBO), r² = 0.99
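Crosswalk equations like these can be applied directly to translate one group's measurements onto another's scale. For example, using the PIBO-to-UC line above:

```python
def pibo_to_uc(bankfull_pibo: float) -> float:
    """Crosswalk a PIBO bankfull width to its UC-protocol equivalent
    using the regression from the slide (r² = 0.99)."""
    return -0.6227 + 1.0137 * bankfull_pibo

# e.g., a 10 m PIBO bankfull width maps to about 9.51 m on the UC scale
print(pibo_to_uc(10.0))
```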

  16. Width-to-Depth

17. Sometimes protocol results may not be strongly related to the truth (this is difficult to say here, because the truth data set had no streams with large values), but are still related to each other. Which is more important?

18. All the discussion up to now has focused on single attributes – but what about some higher-level indicator called "stream health"?
• If you add up the outcomes of several indicators, do they give you the same picture across monitoring programs?
• How does standardizing results across several attributes affect signal-to-noise and conclusions about streams?

19. An example, using width-to-depth, % pools, residual pool depth, % fines, and large wood count as indicators. Standardize each of the attributes for each of the monitoring groups: one standardization equation for % pools, residual pool depth, and large wood count, and another for width-to-depth and % fines.
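A minimal sketch of one plausible standardization, assuming a z-score whose sign is flipped for attributes where smaller raw values indicate better condition (the slide's actual equations may differ):

```python
import numpy as np

def standardize(values, higher_is_better=True):
    """Z-score an attribute within a monitoring group, flipping the
    sign where smaller raw values indicate better condition."""
    z = (values - values.mean()) / values.std(ddof=1)
    return z if higher_is_better else -z

# Hypothetical per-stream values for two of the five indicators.
percent_pools = np.array([35.0, 50.0, 20.0, 42.0])   # higher is better
percent_fines = np.array([12.0, 30.0, 8.0, 22.0])    # lower is better

scores = np.vstack([
    standardize(percent_pools, higher_is_better=True),
    standardize(percent_fines, higher_is_better=False),
])
print(scores.mean(axis=0))  # average standardized score per stream
```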

20. If you average these values for each stream (chart of mean standardized scores for Big Bridge, Camas, Crane, Crawfish, Indian, Myrtle, Potamus, Tinker, Trail, WF Lick, and Whiskey).

21. Then, if you standardize to a scale of 1 to 100 (chart of rescaled index scores for the same streams; signal-to-noise by monitoring group: 4.59, 0.69, 2.76, 11.35, 1.84, 2.49, 5.32).
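One plausible way to put the averaged scores onto a 1-to-100 scale is a min-max rescale; the slide does not show the exact transformation, so this is only a sketch:

```python
import numpy as np

def rescale_1_100(x):
    """Min-max rescale mean standardized scores onto a 1-100 scale."""
    x = np.asarray(x, dtype=float)
    return 1 + 99 * (x - x.min()) / (x.max() - x.min())

# Hypothetical mean standardized scores for four streams.
print(rescale_1_100([-1.2, 0.3, 0.8, -0.4]))
```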

  22. Index Score – 1 = Best, 12 = Worst

23. Relationship in the ranks of the index: all are significantly related (P < 0.05) to each other, but the strength of the relationship varies. This will need more work to relate to the "truth".
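Rank relationships like these are typically tested with a Spearman rank correlation; a minimal sketch with hypothetical ranks:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical index ranks (1 = best, 12 = worst) of the same 12 streams
# under two monitoring groups' protocols.
ranks_a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
ranks_b = np.array([2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11])

rho, p = spearmanr(ranks_a, ranks_b)
print(f"Spearman rho = {rho:.2f}, P = {p:.4f}")
```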

24. So what did we learn from this protocol comparison?
• Monitoring groups vary in repeatability for different attributes.
• In most cases, monitoring group repeatability is at least acceptable and correlated with what other groups are measuring for that attribute.
• Results from monitoring groups are related to a more strictly and intensively defined "truth".
• The real truth is defined by the monitoring objective: what are you using bankfull width for? This question sometimes seems to be forgotten in the search for repeatability.
• If the real question is some higher-level indicator like "stream health", data from different groups seem to be correlated.
• If the stream population from which the sample came can be defined, then it may be possible to use monitoring group as a block effect when analyzing trend (when repeatability is acceptable; see the sketch below). Status may be more difficult, because each protocol measures a slightly different mental construct of an attribute.
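As a sketch of the block-effect idea in the last bullet, assuming hypothetical long-format data and an ordinary least-squares trend model (statsmodels):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: the same stream population sampled
# over several years by different monitoring groups.
df = pd.DataFrame({
    "year":  [2001, 2002, 2003, 2001, 2002, 2003, 2001, 2002, 2003],
    "group": ["AREMP"] * 3 + ["PIBO"] * 3 + ["UC"] * 3,
    "pools": [32.0, 34.5, 36.0, 28.0, 30.5, 31.5, 35.0, 36.5, 39.0],
})

# Trend model with monitoring group as a block (fixed) effect: the
# group term absorbs protocol-to-protocol offsets so the year
# coefficient estimates the shared trend.
fit = smf.ols("pools ~ year + C(group)", data=df).fit()
print(fit.params)
```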
