
Chemistry 114 Lecture Notes



  1. Chemistry 114 Lecture Notes Jim Gimzewski

  2. BASIC ERRORS IN MEASUREMENTS Every measurement that is made is subject to a number of errors. If you cannot measure it, you cannot know it. A. Einstein

  3. All scientific measurements are subject to error. These errors are reflected in the number of figures reported for the measurement. These errors are also reflected in the observation that two successive measurements of the same quantity differ. The word error in science has a different significance than it does in general human experience.

  4. 1. Errors Every measurement that is made is subject to a number of errors. The following is a list of possible sources of error: 1.1. Static Error Static error is an error that does not vary with time and is inherent in most instruments. A typical static error is the zero setting of the needle in an analog display. Some multimeters are designed to be used lying flat on their backs. If a measurement is made with such an instrument standing upright, then a static error inherent in the design of the instrument is introduced. 1.2. Dynamic Error A dynamic error can occur when the quantity being measured fluctuates as a function of time. An instrument may not give the correct reading if it is used to measure a voltage that varies slowly with time. Not all instruments can be used to measure periodic waveforms at higher frequencies.

  5. 1.3. Insertion and Loading Errors An important rule in making any measurement is that the measuring process must not significantly disturb or alter the phenomenon being measured. In practice, the measuring process will have some effect on the measurement being made, and this should be considered as a source of error. Such errors may be caused by “inserting” an ammeter with a non-zero impedance into a circuit, or by placing a voltmeter across a circuit that “loads” down the voltage being measured.

  6. 1.4. Instrument Error Any measuring instrument will be accurate only to a certain extent and only if it has been calibrated. 1.5. Human Error A human error is an error made by an observer when recording a measurement or by using an instrument incorrectly. Two types of human error are parallax reading error (caused by reading an instrument pointer from an angle) and interpolation error (made in “guessing” the correct value between two calibrated marks on the meter scale).

  7. 1.6. Theoretical Error Theoretical models are often used to determine the range of measurement that can be expected when measurements of phenomena are made. The actual phenomena, however, may be complex and the model used may be valid over a small limited range of values. This can result in a discrepancy between theoretical and measured results that is not caused by any experimental error, but by the inadequacy of the theoretical model to describe the phenomena being measured. 1.7. Miscellaneous Error A miscellaneous error is an error that does not fit in the above categories. An example of such an error is an error caused by taking measurements under different temperature conditions.

  8. Precision and Accuracy • Measurements that are close to the “correct” value are accurate. • Measurements that are close to each other are precise. • Measurements can be accurate and precise; precise but inaccurate; or neither accurate nor precise.

  9. Significant Figures The number of digits reported in a measurement reflects the accuracy of the measurement and the precision of the measuring device. In any calculation, the result is reported with the fewest significant figures of any input (for multiplication and division) or the fewest decimal places of any input (for addition and subtraction).
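
A quick worked illustration of this rule (my numbers, not from the slide): 4.56 × 1.4 = 6.384, reported as 6.4 (two significant figures, set by 1.4); 12.11 + 18.0 + 1.013 = 31.123, reported as 31.1 (one decimal place, set by 18.0).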

  10. Significant Figures Non-zero digits are always significant. Zeros between non-zero digits are always significant. Zeros before the first non-zero digit are not significant. (Example: 0.0003 has one significant figure.) Zeros at the end of a number after the decimal point are significant. Zeros at the end of a number with no decimal point are ambiguous (e.g. 10,300 g).

  11. Examples • 1 g: 1 significant figure • 2.54 m: 3 significant figures • 15,000 kg: 2 significant figures • 1.500 × 10³: 4 significant figures • 0.0002 sec: 1 significant figure • 221.001 V: 6 significant figures

  12. Rounding off • When the answer to a calculation contains too many significant figures, it must be rounded off. • There are 10 digits that can occur in the last decimal place of a calculation. One way of rounding off involves underestimating the answer for five of these digits (0, 1, 2, 3, and 4) and overestimating it for the other five (5, 6, 7, 8, and 9). This approach is summarized as follows. • If the digit to be dropped is smaller than 5, drop it and leave the remaining number unchanged. Thus, 1.684 becomes 1.68. • If the digit is 5 or larger, drop it and add 1 to the preceding digit. Thus, 1.247 becomes 1.25.
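
A minimal Python sketch of rounding to a fixed number of significant figures (the helper name round_sig is mine, not from the lecture):

```python
from math import floor, log10

def round_sig(x, n=3):
    """Round x to n significant figures (hypothetical helper, not course code)."""
    if x == 0:
        return 0.0
    # shift so the n-th significant digit sits in the last kept decimal place
    return round(x, -int(floor(log10(abs(x)))) + (n - 1))

print(round_sig(1.684))  # 1.68 (dropped digit 4 < 5, rest unchanged)
print(round_sig(1.247))  # 1.25 (dropped digit 7 >= 5, the 4 rounds up)
# Note: for exact ties Python's round() uses round-half-to-even, which can
# differ slightly from the "5 or larger rounds up" rule stated on the slide.
```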

  13. 2. Determinate (or Systematic) Errors. The terms determinate error and systematic error are synonyms. "Systematic" means that when the measurement of a quantity is repeated several times, the error has the same size and algebraic sign for every measurement. "Determinate" means that the size and sign of the errors are determinable (once the determinate error is recognized and identified). A common cause of determinate error is instrumental or procedural bias, for example a miscalibrated scale or instrument, or a color-blind observer matching colors. Another cause is an outright experimental blunder, for example using an incorrect value of a constant in the equations, using the wrong units, or reading a scale incorrectly. Every effort should be made to minimize the possibility of these errors, by careful calibration of the apparatus and by use of the best possible measurement techniques. Determinate errors can be more serious than indeterminate errors for three reasons: (1) there is no sure method for discovering and identifying them just by looking at the experimental data; (2) their effects cannot be reduced by averaging repeated measurements; (3) a determinate error has the same size and sign for each measurement in a set of repeated measurements, so there is no opportunity for positive and negative errors to offset each other.

  14. 3. Indeterminate Errors.[2] Indeterminate errors are present in all experimental measurements. The name "indeterminate" indicates that there is no way to determine the size or sign of the error in any individual measurement. Indeterminate errors cause a measuring process to give different values when that measurement is repeated many times (assuming all other conditions are held constant to the best of the experimenter's ability). Indeterminate errors can have many causes, including operator errors or biases, fluctuating experimental conditions, varying environmental conditions, and inherent variability of measuring instruments. The effect that indeterminate errors have on results can be somewhat reduced by taking repeated measurements and then calculating their average. The average is generally considered to be a "better" representation of the "true value" than any single measurement, because errors of positive and negative sign tend to compensate each other in the averaging process.
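
A small simulation makes the contrast with determinate errors concrete: averaging many readings suppresses the random part of the error but leaves a systematic offset untouched. This is an illustrative sketch only; the bias and noise values are made-up numbers, not data from the course.

```python
import random

random.seed(0)
true_value = 10.00          # quantity we are trying to measure
systematic_offset = 0.30    # made-up calibration bias (determinate error)
random_spread = 0.20        # made-up spread of the random (indeterminate) error

def one_reading():
    return true_value + systematic_offset + random.gauss(0, random_spread)

for n in (1, 10, 1000):
    average = sum(one_reading() for _ in range(n)) / n
    print(f"n = {n:4d}  average = {average:.3f}  error = {average - true_value:+.3f}")

# The error settles near +0.30 as n grows: averaging reduces the random part,
# but the systematic offset remains no matter how many readings are taken.
```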

  15. Example: to convert a length in meters to a length in inches: length in inches = (length in m) × (conversion factor for m → cm) × (conversion factor for cm → inches), using 2.54 cm = 1 inch and 100 cm = 1 m. In dimensional analysis always ask three questions: What data are we given? What quantity do we need? What conversion factors are available to take us from what we are given to what we need? Base dimensions: M (mass), L (length), T (time), Θ (temperature). SI derived units: a) energy: joule (J) = kg·m²/s²; b) force: newton (N) = kg·m/s²; c) frequency: hertz (Hz) = (cycles)·s⁻¹; d) power: watt (W) = J/s = kg·m²/s³; e) charge: coulomb (C) = A·s. Example: calculate the energy of a 77.9 g object traveling at a velocity of 120 m/s. E = ½mv², dimensions M·L²·T⁻², SI units kg·m²·s⁻². Order-of-magnitude estimate: ~100 g at ~100 m/s gives 0.1 kg × (10² m/s)² ≈ 10³ J.
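
The same bookkeeping can be checked numerically; a short Python sketch of the 77.9 g example (variable names are mine):

```python
mass_g = 77.9                  # given mass in grams
speed_m_per_s = 120.0          # given speed in m/s

mass_kg = mass_g / 1000.0      # conversion factor: 1000 g = 1 kg
energy_J = 0.5 * mass_kg * speed_m_per_s ** 2   # E = 1/2 m v^2

print(f"E = {energy_J:.0f} J")  # about 561 J, same order of magnitude as the ~10^3 J estimate
```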

  16. Data Acquisition (DAQ) • Data acquisition is the process by which events in the real world are sampled and translated into machine-readable signals. Sometimes abbreviated DAQ, data acquisition typically involves sensors, transmitters and other instruments to collect signals, waveforms etc. to be processed and analyzed with a computer.

  17. Binary 10000001 = 129 (8 bits): bit n has place value 2ⁿ, so for n = 0, 1, 2, …, 7 the place values are 1, 2, 4, 8, 16, 32, 64, 128. An ADC with 12-bit resolution can resolve a signal into 2¹² = 4096 possible amplitude values, which is adequate for most biological signals; an ADC with 16-bit resolution can resolve a signal into 2¹⁶ = 65,536 possible amplitude values.

  18. RANGE in Digitization For 12-bit digitization, a ±10 V input range, say, would be divided into about 4000 (2¹² = 4096) fixed steps from –10 V to +10 V; the minimum change in voltage that could be discerned at that range would be about 5 mV. At a ±10 mV range, the minimum discernible voltage change would be about 5 µV. ADC resolution is part of the hardware and cannot be set by the user, but the input range often can be.
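
The step size follows directly from the input range and the bit depth; a quick Python sketch, assuming an ideal ADC that divides the full span into 2^bits equal steps (function name is mine):

```python
def adc_step(v_min, v_max, bits):
    """Smallest discernible voltage change for an ideal ADC."""
    return (v_max - v_min) / 2 ** bits

print(adc_step(-10.0, 10.0, 12))    # ~0.0049 V   (about 5 mV)
print(adc_step(-0.010, 0.010, 12))  # ~4.9e-06 V  (about 5 microvolts)
print(adc_step(-10.0, 10.0, 16))    # ~0.0003 V   (16-bit: about 0.3 mV)
```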

  19. Digital Systems: Aliasing and Sampling Rates • Bits

  20. Filters: Remove information!

  21. Use Common Sense in Experiments • Was the data reliable? • Was enough data taken? • Was the data unbiased? • Check computer calculations against estimates • On the calculator, make approximate estimates and compare them with what comes out of the computer • Understand what the software does and its underlying assumptions

  22. Types of Error Determinate (systematic or technical) • Error that can be corrected • Caused by inadequate design, malfunctions, technician blunders, or faulty technique Indeterminate (random) • This error cannot be corrected • Error inherent to the object being measured

  23. Precision • Describes the spread of the individual measurements about the average value for the series • Describes the reproducibility of the measurement • Improves with reduction in random error

  24. Accuracy • Only obtained if measured values agree with the true values • Must reduce systematic & random error to improve accuracy • Always requires the use of, or comparison to, a known standard

  25. Random and Systematic Errors In most experiments we do not know the true value. Here we can assess the random errors but not the systematic errors.

  26. Errors • Error = Measured Value − True Value • Can be expressed as a percent error: percent error = 100% × (Measured Value − True Value) / True Value = 100% × (Observed − Expected) / Expected • We don’t know the true value, so we use the best value available at the moment
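
For instance (an illustration of my own, not from the slide): if a speed is measured as 118 m/s when the expected value is 120 m/s, the percent error is 100% × (118 − 120)/120 ≈ −1.7%.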

  27. Simple Starts Repeated measurements of a time give 1.0, 1.1, 0.9, 0.8, and 1.2 s. Best estimate = average = 1.0 s; probable range = 0.8 to 1.2 s, so the uncertainty is ±0.2 s. The measured value is reported as x_best ± δx, i.e. 1.0 ± 0.2 s.
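
A minimal Python sketch of this "simple start", taking half the spread as the uncertainty as on the slide:

```python
readings = [1.0, 1.1, 0.9, 0.8, 1.2]                 # repeated timings in seconds

best = sum(readings) / len(readings)                  # best estimate = average
uncertainty = (max(readings) - min(readings)) / 2     # half the probable range

print(f"x = {best:.1f} ± {uncertainty:.1f} s")        # 1.0 ± 0.2 s
```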

  28. The last significant figure in a stated answer should be of the same order of magnitude as the uncertainty • Measured weight 9.82 ± 0.02385 g should be stated as 9.82 ± 0.02 g • 6051.78 ± 30 m/s should be stated as 6050 ± 30 m/s • When stating an uncertainty, round the uncertainty to one significant figure, unless δx has a 1 as its leading digit (if δx = 0.14, keep 0.14 rather than 0.1) • During the calculation, though, you should retain one more significant figure than is finally justified
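
One way to automate these stated rules in Python (the helper name and its exact conventions are my own, not from the lecture):

```python
from math import floor, log10

def format_measurement(x, dx):
    """Round the uncertainty dx to one significant figure (two if its leading
    digit is 1) and round the value x to the same decimal place.
    Hypothetical helper following the slide's rules, not course code."""
    leading = int(f"{dx:e}"[0])        # leading digit of the uncertainty
    sig = 2 if leading == 1 else 1     # keep an extra figure when it is 1
    ndigits = -int(floor(log10(abs(dx)))) + (sig - 1)
    return f"{round(x, ndigits)} ± {round(dx, ndigits)}"

print(format_measurement(9.82, 0.02385))   # 9.82 ± 0.02
print(format_measurement(6051.78, 30))     # 6050.0 ± 30
print(format_measurement(3.27, 0.14))      # 3.27 ± 0.14 (leading 1: keep two figures)
```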

  29. Estimating error: basics. Significant figures: experimental uncertainties should be rounded to one significant digit. The last significant figure in any stated answer should usually be of the same order of magnitude (i.e. in the same decimal position) as the uncertainty.

  30. Discrepancy: the difference between two measured numbers. (a) Significant and (b) non-significant discrepancy.

  31. Significant & non-significant discrepancy • No error bars • With error bars, consistent with proportionality • Inconsistent with proportionality: m = H·x? or m = H·x²?

  32. Uncertainty in a Difference (Provisional Rule)
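
The formula on this slide did not survive the transcript; the standard provisional rule is: for a sum or difference q = x + y or q = x − y, the uncertainty is approximately δq ≈ δx + δy (the absolute uncertainties simply add).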

  33. Independent Uncertainties: Adding in quadrature
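
Again the equation itself is not preserved in the transcript; for independent, random uncertainties the standard quadrature rule is δq = √((δx)² + (δy)²), which is never larger than the simple sum δx + δy.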

  34. Error propagation: Uncertainty in a Product (Provisional Rule) • For q = x·y: δq/|q_best| ≈ δx/|x_best| + δy/|y_best| • Constants in equations: for q = B·x, δq = |B|·δx • Powers: for q = xⁿ, δq/|q_best| = n·δx/|x_best|
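
A Python sketch of these provisional rules, plus the quadrature combination from the earlier slide (the function names are hypothetical helpers of my own, not the course's code):

```python
from math import sqrt

def product_uncertainty(x, dx, y, dy):
    """q = x*y: fractional uncertainties add (provisional rule)."""
    q = x * y
    dq = abs(q) * (dx / abs(x) + dy / abs(y))
    return q, dq

def power_uncertainty(x, dx, n):
    """q = x**n: the fractional uncertainty is multiplied by n."""
    q = x ** n
    dq = abs(q) * n * dx / abs(x)
    return q, dq

def quadrature(*uncertainties):
    """Combine independent uncertainties in quadrature."""
    return sqrt(sum(u ** 2 for u in uncertainties))

# Example: area of a rectangle measured as 2.5 ± 0.1 cm by 4.0 ± 0.2 cm
print(product_uncertainty(2.5, 0.1, 4.0, 0.2))   # (10.0, 0.9) -> 10.0 ± 0.9 cm²
```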

  35. Propagation of errors: Provisional

  36. Propagation of errors: Provisional

  37. Propagation of errors
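
The general equation (shown as an image on the original slide and not captured in this transcript) for a quantity q = f(x, …, z) computed from several independently measured quantities is δq = √((∂q/∂x · δx)² + … + (∂q/∂z · δz)²).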
