
Cosmogenic isotope measurement inter-comparison




Presentation Transcript


  1. Cosmogenic isotope measurement inter-comparison. Marian Scott (University of Glasgow) and Tim Jull (University of Arizona), July 2008

  2. The CRONUS inter-comparison • To assess the comparability of measurements made by the different laboratories • To assist in defining the overall uncertainty when results from different laboratories are used • Carried out through a programme of comparisons, with a small number of samples distributed to the laboratories.

  3. The quality of the measurement is determined by the laboratory. Sound, reliable, precise and accurate measurement requires traceability to community-agreed reference materials and standards.

  4. Reference materials • Basic purpose: improvement of the comparability of measurement results • Possible uses • for calibration, to demonstrate traceability • for quality control, to verify the performance of a method

  5. Reference materials • For calibration, the material is often artificially produced so that its properties are known with low uncertainty • For quality control, the material is often ‘real world’, so that it behaves as similarly as possible to the samples being measured.

  6. Objectives of TCN within CRONUS • To explore the comparability of results from the different laboratories • To generate consensus values for a range of reference materials • To assist laboratories in independently assessing quality • To quantify precision and accuracy

  7. What information can be quantified from an inter-comparison? • Accuracy (from known activity samples) • Laboratory precision (from duplicate samples) leading ultimately to quantification of • Measurement uncertainty • Quality Assurance (QA)

  8. What is QA? QA is an early warning system: it is retrospective and dynamic, based on judgement of the measurements on backgrounds, standards and reference samples (including internal laboratory materials). It is the laboratory paper trail and in-house checks.

  9. Quality issues • User concerns • How good are my results? What does the quoted uncertainty represent? • Laboratory concerns • Is my system stable? Are there any sources of contamination in the laboratory? How do the results compare to those expected? How good are my measurements?

  10. QA involves • Internal checking • Measurements made on a series of replicate samples (Polach (1989) noted ‘internal checking needs suitable quality control and reference materials’.) • Monitoring of background, standards, known-age and reference materials • Independent (external) checking • Laboratory inter-comparisons

  11. Time line of activities within CRONUS • Design the inter-comparison • Identify suitable samples (and criteria): done • Agree timescale for results: done; phase-one results due in April 2008 • Define format for reporting results: done • Inform laboratories and ask for expressions of interest to participate: done • Distribute samples (phase 1): done

  12. Timeline • For phase 1: distribution July 2007, results returned April 2008 • Archive material for future use • Location of archive: currently in Arizona; sufficient material for at least 10 more inter-comparisons of the same size • 23 laboratories sent samples

  13. Identified inter-calibration standards • Noble gases: there are two potential “standards”, the pyroxene of Schaefer and an EU sample (from Wieler) • Pyroxene sample distributed • AMS standards prepared and available from Nishiizumi; not distributed as part of CRONUS

  14. Identified inter-calibration samples, distributed July 2007 • Antarctic (A) sample: high in Al-26 and Be-10. Quartz was separated at the University of Vermont, etched three times in HF and washed. Recommended 5 g be used; approx. 37 g provided • Namibia (N): a low-latitude sample; recommended 20 g be used; approx. 75 g provided • For in-situ C-14, the same samples were provided in glass vials.

  15. Typical format for reporting results, e.g. Al-26 • Mass of sample (quartz) (g) used in the measurement • AMS standard used in the measurement • Half-life used • Background material used • Measured 26Al/27Al ratio (1σ uncertainty; specify units) • Mass (number of atoms of 27Al) in sample • Number of atoms of 26Al in the laboratory process blank. A sketch of such a record appears below.
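As a concrete illustration of the reporting format above, here is a minimal Python sketch of one result record. The class and field names are purely illustrative, not a prescribed CRONUS schema, and the example values are fictitious.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the reporting fields listed above;
# names and values are illustrative only.
@dataclass
class Al26Result:
    quartz_mass_g: float             # mass of quartz (g) used in the measurement
    ams_standard: str                # AMS standard used in the measurement
    half_life_yr: float              # half-life value adopted for Al-26 (years)
    background_material: str         # background material used
    ratio_26al_27al: float           # measured 26Al/27Al ratio (dimensionless)
    ratio_uncertainty_1sigma: float  # 1-sigma uncertainty on the ratio
    atoms_27al_in_sample: float      # number of 27Al atoms in the sample
    atoms_26al_in_blank: float       # number of 26Al atoms in the process blank

# Fictitious example entry, for illustration only:
example = Al26Result(
    quartz_mass_g=20.0,
    ams_standard="Z92-0222",
    half_life_yr=7.05e5,
    background_material="Al carrier blank",
    ratio_26al_27al=2.5e-13,
    ratio_uncertainty_1sigma=1.0e-14,
    atoms_27al_in_sample=3.0e19,
    atoms_26al_in_blank=2.0e4,
)
```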

  16. Participating laboratories • Al, Be, C: USA, UK, France, Switzerland, Germany, The Netherlands, Sweden, Australia, Canada (24 in total) • Noble gases: USA, UK, Switzerland, France, Germany (14 in total)

  17. Results so far in CRONUS • So far, results from 7 laboratories for samples A and N, and from 2 laboratories for sample P; laboratories have often reported replicate results • There is general consensus on the half-life used for Al, but some variability for Be (1.5, 1.51, 1.36 and 1.37 × 10⁶ years) • Standards used • Be: NIST SRM 4325, Nishiizumi SYD Be01-5-4 • Al: Z92-0222 (PRIME, Purdue), Nishiizumi STD Al 0143

  18. Results so far in CRONUS • Sample A • Be analysis: 13 results, coefficient of variation (CV = stdev/mean × 100%) is 4.92% • Al analysis: 6 results, CV 7.92%

  19. Results so far in CRONUS • Sample N (Be and Al concentrations lower than sample A by a factor of approximately 100) • Be analysis: 15 results, coefficient of variation (CV) is 9.34% • Al analysis: 6 results, CV 9.2%. A sketch of the CV calculation follows.
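For reference, the coefficient of variation quoted on the two preceding slides is simply the sample standard deviation divided by the mean, expressed as a percentage. A minimal sketch, using fictitious replicate values rather than the actual CRONUS results:

```python
import statistics

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation divided by the mean, times 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Fictitious 10Be concentrations (atoms/g), for illustration only.
replicates = [3.52e6, 3.61e6, 3.40e6, 3.58e6, 3.47e6]
print(f"CV = {coefficient_of_variation(replicates):.2f}%")
```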

  20. Potential analysis • Similar to the approach commonly used for the C-14 inter-comparisons to define reproducibility, but using z-scores, defined as standardised deviations from a consensus value (see the sketch after this slide) • For each sample, we define an agreed value (usually a robust estimate based on all results) • The z-score is defined as the difference between an individual result and the robust value, standardised to account for the uncertainty (also based on a robust estimate) • Properties of z-scores are well understood; they are used internationally in proficiency trials.
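A minimal sketch of the z-score calculation described above, assuming the median as the robust consensus value and a scaled median absolute deviation (MAD) as the robust spread estimate; the robust estimators actually adopted for the CRONUS analysis may differ.

```python
import statistics

def z_scores(results):
    """Standardised deviations of each result from a robust consensus value.

    Uses the median as the consensus and 1.4826 * MAD as a robust estimate
    of spread; other robust estimators could equally be used.
    """
    consensus = statistics.median(results)
    mad = statistics.median(abs(x - consensus) for x in results)
    robust_sd = 1.4826 * mad
    return [(x - consensus) / robust_sd for x in results]

# Fictitious laboratory results for one sample (illustration only):
labs = [101.2, 99.8, 100.5, 103.9, 98.7, 100.1]
for result, z in zip(labs, z_scores(labs)):
    print(f"result {result}: z = {z:+.2f}")
```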

  21. The error in a measurement • A single value, which represents the difference between the measured value and the true value • However, for these samples we do not know the true Al or Be concentration (atoms/g), so we use the inter-comparison to define this value (as a consensus from the participating laboratories)

  22. Key properties of measurement • Accuracy of the measurement refers to the deviation (difference) from the true value (or sometimes expected or consensus value) • Precision refers to the variation (expected or observed) in a series of replicate measurements (obtained under identical conditions). High precision, low uncertainty

  23. Accuracy and precision [Figure: illustrations of the combinations accurate and precise, inaccurate and precise, accurate and imprecise, inaccurate and imprecise]

  24. Evaluation of accuracy • In FIRI and VIRI, known-age material is used to define the ‘true’ age • The figure overleaf shows a measure of accuracy for individual laboratories

  25. Between-laboratory variation • Reproducibility: identical samples, different laboratories

  26. Reliability and reproducibility • Repeatability (r) refers to measurements made under identical conditions in one laboratory, • Reproducibility (R) refers to measurements made in different laboratories, under different conditions. • Reproducibility is the closeness of agreement between test results under conditions where the same method is used in different laboratories.

  27. Reliability and reproducibility • The reproducibility value R is the value below which the absolute difference between two single results obtained under reproducibility conditions may be expected to lie with probability 0.95. • A difference larger than R cannot be ascribed to random fluctuations and would warrant investigation of possible sources of systematic differences.
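The 0.95 probability statement above is where the factor 2.8 used on slide 30 comes from: under the usual normality assumption, the difference of two independent results, each with standard deviation σ_R, has standard deviation √2·σ_R, so

```latex
\[
  \operatorname{sd}(X_1 - X_2) = \sqrt{\sigma_R^2 + \sigma_R^2} = \sqrt{2}\,\sigma_R ,
  \qquad
  R = 1.96 \times \sqrt{2}\,\sigma_R \approx 2.8\,\sigma_R ,
\]
```

and analogously r ≈ 2.8 σ_r under repeatability conditions.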

  28. Reproducibility (VIRI phase 1)

  29. Estimation of r and R • Model: Y = m + B + e, where Y is the measurement, m is the average activity, B is the between-laboratory variation and e is the random error. • B is assumed random with var(B) = σ²_L • e is assumed random and, for a single laboratory, var(e) = σ²_W • σ²_W is assumed constant for all laboratories, with average value σ²_r.

  30. r and R • The repeatability value r is 2.8 σ_r • The reproducibility value R is 2.8 σ_R, where σ_R = √(σ²_L + σ²_W) • σ²_L, σ²_W and σ_r must all be estimated. A sketch of one way to estimate them follows.
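A minimal sketch of one way to estimate these quantities, assuming a balanced design (every laboratory returns the same number of replicates) and the classical one-way ANOVA moment estimators; real inter-comparison data are usually unbalanced, so an actual analysis would need a weighted or robust variant.

```python
import statistics
from math import sqrt

def repeatability_reproducibility(lab_results):
    """Estimate r and R from a balanced one-way random-effects ANOVA.

    lab_results: list of lists, one inner list of replicate values per laboratory.
    Moment estimators: sigma_W^2 ~ MS_within, sigma_L^2 ~ (MS_between - MS_within) / n.
    Assumes a balanced design (equal n per laboratory).
    """
    p = len(lab_results)            # number of laboratories
    n = len(lab_results[0])         # replicates per laboratory
    lab_means = [statistics.mean(lab) for lab in lab_results]
    grand_mean = statistics.mean(lab_means)

    ms_within = sum(
        sum((y - m) ** 2 for y in lab) for lab, m in zip(lab_results, lab_means)
    ) / (p * (n - 1))
    ms_between = n * sum((m - grand_mean) ** 2 for m in lab_means) / (p - 1)

    sigma2_w = ms_within
    sigma2_l = max((ms_between - ms_within) / n, 0.0)  # truncate negative estimates at zero

    r = 2.8 * sqrt(sigma2_w)                 # repeatability value
    R = 2.8 * sqrt(sigma2_l + sigma2_w)      # reproducibility value
    return r, R

# Fictitious data: 4 laboratories, 3 replicates each (illustration only).
data = [[100.2, 99.8, 100.5], [102.1, 101.7, 102.4],
        [98.9, 99.3, 99.0], [100.8, 101.2, 100.5]]
print(repeatability_reproducibility(data))
```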

  31. Conclusions • All measurement is subject to uncertainty, the test of which is to make replicate measurements • Inter-laboratory trials provide generic measures of reproducibility and assessment of laboratory comparability • For cosmogenic isotope work, this is still at an early stage • CRONUS inter-comparison has archived material for future use, but for satisfactory characterisation, we need more results

  32. Actions for 2008 • Finalise acquisition and preparation of the 2nd suite of samples • Agree a timescale and distribute the samples (ideally distribute shortly after the results of phase 1) • Await results: for phase 2, assuming samples are distributed in September 2008, the deadline for results is February 2009 • Analysis of results from phase 1 to be reported by August 2008.

  33. Other potential inter-calibration samples • Antarctic sample: 14C, 10Be, 26Al, 21Ne • Namibia: 14C, 10Be, 26Al, 21Ne • Maine: 14C, 10Be, 26Al, 21Ne • NMT basalt: 36Cl • Carbonate/sandstone: 36Cl, 10Be, 26Al. Some other potential materials are: • Lake Bonneville basalt sample • Promontory Point quartzite • Blank quartz

  34. Acknowledgements • All the participating laboratories, the sample providers (Paul Bierman and Joerg Schaefer), and NSF for funding.
