
Chapter 4 - Measurement Accuracy



  1. Chapter 4 - Measurement Accuracy

  2. Precision vs Accuracy • In the fields of science, engineering, and statistics, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's true value. • The precision of a measurement system, related to reproducibility and repeatability, is the degree to which repeated measurements under unchanged conditions show the same results. [Target diagrams: "Accuracy is unattainable"; "Precision alone is no good"]

  3. Precision vs Accuracy • Accuracy: the degree of closeness of measurements of a quantity to that quantity's true value. • Precision: the degree to which repeated measurements under unchanged conditions show the same results.

  4. Precision vs Accuracy A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For example, if an experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve accuracy. The result would be a consistent yet inaccurate string of results from the flawed experiment. Eliminating the systematic error improves accuracy but does not change precision.
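A quick simulation makes the distinction concrete. This is an illustrative sketch only; the 3 mV offset, 2 mV noise level, and sample counts are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 100.0   # mV, the quantity being measured
offset = 3.0         # mV, a hypothetical systematic error
noise_rms = 2.0      # mV, random measurement noise

# Averaging more samples tightens the spread of the mean (better precision),
# but the 3 mV bias survives any amount of averaging (accuracy is unchanged).
for n in (10, 10000):
    samples = true_value + offset + rng.normal(0.0, noise_rms, n)
    print(f"N={n:5d}: mean = {samples.mean():7.3f} mV, "
          f"expected spread of mean = {noise_rms / np.sqrt(n):.3f} mV")
```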

  5. Measurement Accuracy • Terminology • Definitions of Accuracy • Closeness with which an instrument reading approaches the true value of the variable being measured. • The maximum error in the measurement of a physical quantity in terms of the output of an instrument when referred to the individual instrument calibrations. • The degree of conformance of a test instrument to absolute standards. • The ability to produce an average measured value which agrees with the true value or standard being used.


  7. Measurement Accuracy • Terminology • Precision • A measure of the reproducibility of the measurements. • Given a fixed value of a variable, precision is a measure of the degree to which successive measurements differ from one another. • The degree to which repeated measurements of a given quantity agree when obtained by the same method and under the same conditions. • Also called repeatability or reproducibility. • The ability to repeatedly measure the same product or service and obtain the same results.

  8. Measurement Accuracy • Book Terminology • Accuracy - refers to the overall closeness of an averaged measurement to the true value. • Repeatability - the consistency with which that measurement can be made. • The word precision will be avoided. • Accuracy takes all error sources into account • Systematic Errors • Random Errors • Resolution (Quantization Errors)

  9. Measurement Accuracy • Terminology • Systematic Errors • Errors that appear consistently from measurement to measurement. • Ideal value: 100 mV • Measurements: 101 mV, 103 mV, 102 mV, 101 mV, 102 mV, 103 mV, 103 mV, 101 mV, 102 mV • Average error: 2 mV • Caused by DC offsets, gain errors, and non-linearities in the DVM. • Systematic errors can often be reduced through calibration.
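The slide's numbers check out; a minimal sketch of the arithmetic:

```python
# The nine readings above, taken of an ideal 100 mV signal.
readings_mv = [101, 103, 102, 101, 102, 103, 103, 101, 102]
average = sum(readings_mv) / len(readings_mv)   # 102.0 mV
systematic_error = average - 100.0              # +2.0 mV average (systematic) error
print(average, systematic_error)
```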

  10. Measurement Accuracy • Terminology • Random Errors • Notice that the list of numbers on the last slide varies from 101 mV to 103 mV. • All measurement tools have random errors, even $2 million automated test instruments. • Random errors are perfectly normal in analog and mixed-signal measurements. • The big challenge is determining whether the random error is caused by a bad DIB design, a bad DUT design, or by the tester itself.

  11. Measurement Accuracy • Terminology • Resolution (Quantization Errors) • Notice that in the previous list of numbers, the measurement was always rounded off to the nearest millivolt. • Limited resolution results from the fact that continuous analog signals must be converted to digital format (using ADCs) before a computer can evaluate the test results. • The inherent rounding error in ADCs and measurement instrumentation is called quantization error. • Quantization error is a result of the conversion from an infinitely variable input voltage to a finite set of possible outputs from the ADC.
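A sketch of this conversion, assuming a simple ideal round-to-nearest-code ADC model; the 8-bit, ±5 V figures echo the meter in the problem below:

```python
def quantize(v: float, full_scale: float = 5.0, bits: int = 8) -> float:
    """Model an ideal ADC: round a continuous voltage on the +/-full_scale
    range to the nearest of 2**bits output codes."""
    lsb = 2.0 * full_scale / 2**bits      # code width: 10 V / 256 = 39.06 mV
    return round(v / lsb) * lsb           # reconstructed (quantized) voltage

v_in = 1.2345
v_out = quantize(v_in)
# The rounding is the quantization error, at most half a code width.
print(f"error = {(v_out - v_in) * 1e3:+.2f} mV")   # +15.55 mV here
```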

  12. Problem 4 - 15 Minutes • A 5 mV signal is measured with a meter ten times, resulting in the following sequence of readings: 5 mV, 6 mV, 9 mV, 8 mV, 4 mV, 7 mV, 5 mV, 7 mV, 8 mV, 11 mV. What is the average measured value? What is the systematic error? • A meter is rated at 8 bits and has a full-scale range of ±5 V. What is the measurement uncertainty of this meter? • A signal is to be measured with a maximum uncertainty of ±0.5 mV. How many bits of resolution are required by a meter having a ±1 V full-scale range?

  13. Solution - Problem 4 • A 5 mV signal is measured with a meter ten times, resulting in the following sequence of readings: 5 mV, 6 mV, 9 mV, 8 mV, 4 mV, 7 mV, 5 mV, 7 mV, 8 mV, 11 mV. What is the average measured value? What is the systematic error? • Average value = sum of measurements / N = 70 mV / 10 = 7 mV • Systematic error = difference of average value from actual value = 7 mV − 5 mV = 2 mV • A meter is rated at 8 bits and has a full-scale range of ±5 V. What is the measurement uncertainty of this meter? • Uncertainty = ±Vfs / 2^N = ±5 V / 2^8 = ±19.5 mV • A signal is to be measured with a maximum uncertainty of ±0.5 mV. How many bits of resolution are required by a meter having a ±1 V full-scale range? • ±0.5 mV = ±1 V / 2^N; solving for N gives N = log2(2000) ≈ 10.97, so 11 bits (2^11 = 2048, giving ±1 V / 2048 ≈ ±0.49 mV).
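The same solution worked in a few lines, as an illustrative check:

```python
import math

# Part 1: average and systematic error of the ten readings of a 5 mV signal
readings_mv = [5, 6, 9, 8, 4, 7, 5, 7, 8, 11]
average = sum(readings_mv) / len(readings_mv)     # 7.0 mV
systematic_error = average - 5.0                  # 2.0 mV

# Part 2: uncertainty of an 8-bit meter on a +/-5 V full-scale range
uncertainty_mv = 5.0 / 2**8 * 1e3                 # +/-19.5 mV

# Part 3: bits needed for +/-0.5 mV uncertainty on a +/-1 V range
bits = math.ceil(math.log2(1.0 / 0.5e-3))         # log2(2000) ~ 10.97 -> 11

print(average, systematic_error, uncertainty_mv, bits)
```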

  14. Measurement Accuracy • Terminology • Repeatability • Non-repeatable answers are a fact of life for mixed-signal test engineers. • They can be caused by random noise or other external influences. • If a test engineer gets exactly the same value multiple times in a row, it should raise questions about the ranging of the measurement instrument. • Repeatability is desirable, but it does not in itself guarantee accuracy.

  15. Measurement Accuracy • Terminology • Stability • The degree to which a series of supposedly identical measurements remains constant over time, temperature, humidity, and all other time-varying factors is referred to as stability. • Testers are equipped with temperature sensors to allow recalibration if a certain change in temperature occurs. • Caution must be exercised during tester power-up, since the temperature of the tester electronics must stabilize before calibrations are accurate. • Likewise, if the test cabinet or test head is opened, the temperature must stabilize before any calibrations can be performed.

  16. Measurement Accuracy • Terminology • Correlation • The ability to get the same answer using different pieces of hardware or software. • Tester-to-bench correlation • Tester-to-tester correlation • Program-to-program correlation • DIB-to-DIB correlation • Day-to-day correlation

  17. Measurement Accuracy • Terminology • Reproducibility • Reproducibility is often incorrectly used interchangeably with repeatability • Reproducibility is defined as the statistical deviations between a particular measurement taken by any operator on any group of testers on any given day using any DIB board. • Repeatability is used to describe the ability of a single tester and DIB board to get the same answer multiple times as the test program is repeatedly executed. • If a measurement is highly repeatable, but not reproducible, then the test program may consistently pass a particular DUT one day but then may consistently fail the same DUT on another day or on another tester.

  18. Calibration and Checkers • Traceability to Standards • National Institute of Standards and Technology (NIST) • Thermally stabilized standardized instrument • periodically replaced by a freshly calibrated source • Hardware Calibration • Any mechanical process which brings a piece of equipment back into agreement with calibration standards • usually not a convenient process • Robotic manipulations can be used to automate the process, but it is still not optimal

  19. Calibration and Checkers • Software Calibration • The basic idea behind software calibration is to separate the instrument's ideal operation from its non-idealities, so that a model of the instrument's non-ideal operation can be constructed and the non-ideal behavior then corrected by a mathematical routine written in software. • Most testers have extensive calibration processes for each measurement range in the tester instrumentation.
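A minimal sketch of the idea, assuming the non-ideality is a simple linear gain/offset error; the function names and reference values are invented for illustration:

```python
def build_calibration(ref_lo, meas_lo, ref_hi, meas_hi):
    """Fit the error model measured = gain * actual + offset from two known
    reference points, then return a function that corrects raw readings."""
    gain = (meas_hi - meas_lo) / (ref_hi - ref_lo)
    offset = meas_lo - gain * ref_lo
    return lambda raw: (raw - offset) / gain   # invert the error model

# Calibrate against 0 V and 2 V references that the meter reads slightly wrong:
correct = build_calibration(0.0, 0.012, 2.0, 2.031)
print(correct(1.020))   # a raw 1.020 V reading is corrected toward ~0.999 V
```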

  20. Calibration and Checkers • System Calibrations & Checkers • Checkers verify the functionality of the hardware instruments in the tester. • Calibration and checkers are often found in the same program. • Several levels of checkers and calibrations are used • Calibration reference source replacement and re-calibration is performed approximately every six months • Extensive performance evaluation (PV) process is used to verify the tester is in compliance with its published specifications • Automated calibrations on test floor as conditions warrant them

  21. Calibration and Checkers • Focused Instrument Calibrations • The accuracy of faster instruments can be improved by periodically referencing them back to slower, more accurate instruments. • Test-specific calibrations focus on the exact parameters of the test. • Tester-focused calibration may no longer be necessary for every test, but DIB-focused calibrations will remain a major task of the test engineer.

  22. Calibration and Checkers • Focused DIB Circuit Calibrations • Often, circuits are added to the DIB board to improve the accuracy of a particular test, or to buffer a weak output of a device before it is tested. • Since DIB circuits are added in series between the DUT and the tester, the contribution of their calibration factors must be treated accordingly. • It is critical that the test engineer have a clear understanding of which characteristics of each DIB circuit affect the test being performed.

  23. Calibration and Checkers • DIB Checkers • Verify the basic functionality of the DIB circuits. • Performed in the first run of the test program, along with the calibrations. • Every possible relay and circuit path should be checked to produce a go/no-go response, verifying the functionality of as much of the DIB board as possible.

  24. Calibration and Checkers • Tester Specifications • Test engineers must determine whether the tester instrument is capable of making the measurements they require. • Because the manufacturer's published specification values carry limited information, the test engineer needs to understand the conditions behind each spec and how departures from those conditions affect the performance of the instrument. • A good example is the specification of a tester's "noise floor": in a professionally shielded room with no digital circuits operating, the noise floor will be totally different from that of the same tester operating at a university.

  25. Calibration and Checkers • Tester Specifications • Example of a DC meter: • five output ranges (set by an internal PGA and calibrated) • accuracy is specified as a percentage of the measured value, with a limit of 1 mV or 2.5 mV • assumes the measurement is made 100 times and averaged • a single measurement may have greater measurement error along with repeatability error • the meter may also pass the signal through a low-pass filter, with the input either enabled or disabled • this implies extra settling time • if the filter is disabled, is the spec still valid?

  26. [Figure: block diagram of the DC meter — vDUT → Programmable Gain Amplifier (PGA) → ADC → tester computer, with the computer setting the PGA range via the Range Control.]

  27. Dealing with Measurement Error • Filtering • A filter acts as a hardware averaging circuit and allows only the desired frequencies to pass. • The closer the cutoff frequency is to the measurement frequency, the more noise is removed. • Unfortunately, the lower the cutoff frequency, the longer the test time required for settling. • Settling time is inversely proportional to the cutoff frequency.
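For a first-order low-pass filter this trade-off is easy to quantify: the time constant is τ = 1/(2π·f_c), and settling to within a fraction ε of the final value after a step takes t = τ·ln(1/ε). A sketch with illustrative cutoff frequencies:

```python
import math

def settle_time(cutoff_hz: float, tolerance: float = 0.001) -> float:
    """Time for a first-order low-pass filter to settle within `tolerance`
    (fraction of the step) after a step input: t = tau * ln(1/tolerance)."""
    tau = 1.0 / (2.0 * math.pi * cutoff_hz)   # filter time constant
    return tau * math.log(1.0 / tolerance)

# Halving the cutoff frequency doubles the settling time:
for fc in (100.0, 50.0, 25.0):               # cutoff frequencies in Hz
    print(f"fc = {fc:6.1f} Hz -> settles to 0.1% in {settle_time(fc)*1e3:.2f} ms")
```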

  28. Dealing with Measurement Error • Averaging • A form of discrete-time filtering that can be used to improve the repeatability of a measurement. • To reduce the RMS noise voltage by a factor of two, one has to take four times as many readings and average them. • This quickly reaches a point of diminishing returns with respect to test time. • Note: do not average values in dB; always convert to linear form, average, and then convert back to dB.
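Both points can be demonstrated numerically; a sketch with invented signal and noise values:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 1.0                      # volts
noise_rms = 0.010                     # 10 mV RMS noise

# RMS error of the mean shrinks as 1/sqrt(N): 4x the readings halves the noise.
for n in (16, 64):
    means = [rng.normal(true_value, noise_rms, n).mean() for _ in range(10000)]
    print(f"N={n:3d}: RMS error of the average = {np.std(means)*1e3:.2f} mV")

# dB pitfall: average in linear units, then convert back to dB.
readings_db = np.array([-0.5, -3.0])               # example gains in dB
linear = 10 ** (readings_db / 20)                  # dB -> linear
correct_db = 20 * np.log10(linear.mean())          # average, then back to dB
naive_db = readings_db.mean()                      # wrong: averaging dB directly
print(f"linear-domain average: {correct_db:.2f} dB, naive dB average: {naive_db:.2f} dB")
```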

  29. Dealing with Measurement Error • Guardbanding • If a particular measurement is known to be accurate and repeatable with a worst-case uncertainty of ±ε, then the final test limits should be tightened by ε to ensure that no bad devices are shipped to the customer. • Guardbanded positive test limit = positive test limit − ε • Guardbanded negative test limit = negative test limit + ε • The only way to reduce guardbanding is to increase accuracy and repeatability; this increases test time and may not be a viable option.
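A sketch of the limit-tightening arithmetic; the limit and uncertainty values are invented:

```python
def guardband(lower_limit: float, upper_limit: float, epsilon: float):
    """Tighten the datasheet test limits by the worst-case uncertainty epsilon
    so that no device outside the original limits can pass."""
    if 2 * epsilon >= upper_limit - lower_limit:
        raise ValueError("uncertainty consumes the entire limit window")
    return lower_limit + epsilon, upper_limit - epsilon

# Example: +/-100 mV datasheet limits, measured with +/-5 mV worst-case uncertainty
lo, hi = guardband(-0.100, 0.100, 0.005)
print(f"guardbanded limits: {lo*1e3:+.1f} mV to {hi*1e3:+.1f} mV")  # -95.0 to +95.0
```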

  30. Basic Data Analysis • Datalogs • A concise list of results generated by the test program • test number • test category • test description • maximum and minimum test limits • measured result • Pass / fail indication

  31. Example datalog:
Sequencer: S_continuity
 1000 Neg PPMU Cont    Failing Pins: 0
Sequencer: S_VDAC_SNR
 5000 DAC Gain Error   T_VDAC_SNR   -1.00 dB < -0.13 dB < 1.00 dB
 5001 DAC S/2nd        T_VDAC_SNR    60.0 dB <= 63.4 dB
 5002 DAC S/3rd        T_VDAC_SNR    60.0 dB <= 63.6 dB
 5003 DAC S/THD        T_VDAC_SNR   60.00 dB <= 60.48 dB
 5004 DAC S/N          T_VDAC_SNR    55.0 dB <= 70.8 dB
 5005 DAC S/N+THD      T_VDAC_SNR    55.0 dB <= 60.1 dB
Sequencer: S_UDAC_SNR
 6000 DAC Gain Error   T_UDAC_SNR   -1.00 dB < -0.10 dB < 1.00 dB
 6001 DAC S/2nd        T_UDAC_SNR    60.0 dB <= 86.2 dB
 6002 DAC S/3rd        T_UDAC_SNR    60.0 dB <= 63.5 dB
 6003 DAC S/THD        T_UDAC_SNR   60.00 dB <= 63.43 dB
 6004 DAC S/N          T_UDAC_SNR    55.0 dB <= 61.3 dB
 6005 DAC S/N+THD      T_UDAC_SNR    55.0 dB <= 59.2 dB
Sequencer: S_UDAC_Linearity
 7000 DAC POS ERR      T_UDAC_Lin  -100.0 mV < 7.2 mV < 100.0 mV
 7001 DAC NEG ERR      T_UDAC_Lin  -100.0 mV < 3.4 mV < 100.0 mV
 7002 DAC POS INL      T_UDAC_Lin   -0.90 lsb < 0.84 lsb < 0.90 lsb
 7003 DAC NEG INL      T_UDAC_Lin   -0.90 lsb < -0.84 lsb < 0.90 lsb
 7004 DAC POS DNL      T_UDAC_Lin   -0.90 lsb < 1.23 lsb (F) < 0.90 lsb
 7005 DAC NEG DNL      T_UDAC_Lin   -0.90 lsb < -0.83 lsb < 0.90 lsb
 7006 DAC LSB SIZE     T_UDAC_Lin    0.00 mV < 1.95 mV < 100.00 mV
 7007 DAC Offset V     T_UDAC_Lin  -100.0 mV < 0.0 mV < 100.0 mV
 7008 Max Code Width   T_UDAC_Lin    0.00 lsb < 1.23 lsb < 1.50 lsb
 7009 Min Code Width   T_UDAC_Lin    0.00 lsb < 0.17 lsb < 1.50 lsb
Bin: 10

  32. Basic Data Analysis • Histograms • A graphical method used to view the repeatability of numerical data. • Ideally, the values of the acquired data should be closely packed. • The statistical relevance of the data is determined by the number of samples taken; in test engineering, the minimum for statistical relevance is 100. • Histograms also give numerical values that indicate the fit to the standard bell curve, including the mean and standard deviation.
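A sketch of what such a histogram check looks like in code, using invented data drawn with the mean and sigma quoted in the Gaussian example on the next slide:

```python
import numpy as np

# 100 simulated gain measurements (mean -0.130 dB, sigma 0.0029 dB; the
# data itself is invented for illustration).
rng = np.random.default_rng(2)
data = rng.normal(-0.130, 0.0029, 100)

print(f"mean = {data.mean():.4f} dB, std dev = {data.std(ddof=1):.4f} dB")
counts, edges = np.histogram(data, bins=10)        # bin the data
for count, left_edge in zip(counts, edges):
    print(f"{left_edge:+.4f} dB | {'*' * count}")  # crude text histogram
```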

  33. Basic Data Analysis • Normal (Gaussian) Distributions • By the central limit theorem, the summation of a large number of independent random variables tends toward a Gaussian distribution. • The variation in a typical mixed-signal measurement comes from a summation of many different sources of noise and crosstalk in both the device and the tester instrument. • The standard deviation of a Gaussian distribution is roughly equal to one sixth of the total variation from the minimum value to the maximum value. • In the example, the mean is −0.130 dB and the standard deviation is 0.0029 dB, so we would expect to see values ranging from about −0.139 dB to −0.121 dB. These bounds are labeled "Mean −3 sigma" and "Mean +3 sigma".

  34. Basic Data Analysis • Non-Gaussian Distributions • Bimodal • Outliers

  35. Basic Data Analysis • Noise, Test Time, and Yield • Yield = total good devices / total tested devices. • There is a definite trade-off between test time and production yield. • The designer controls the design margins, which reduce the need for guardbanding. • Centering the design within the specifications: • may cost extra silicon or extra power • may make the test unnecessary.
