ME 392 Chapter 6 Data Processing March 12, 2012 week 9

Presentation Transcript


  1. ME 392 Chapter 6 Data Processing, March 12, 2012, week 9. Joseph Vignola

  2. Experimental Error There is always an assumption that what you measure is the sum, or some combination of the real phenomena plus some error

  4. Experimental Error There is always an assumption that what you measure is the sum, or some combination of the real phenomena plus some error Typically we measure a voltage that represents a physical quantity like temperature or velocity and then multiply or divide by some constant (the sensitivity)

  5. Experimental Error There is always an assumption that what you measure is the sum, or some combination of the real phenomena plus some error Typically we measure a voltage that represents a physical quantity like temperature or velocity and then multiply or divide by some constant (the sensitivity) For now we can use the voltage to represent whatever physical quantity we might be interested in.

  6. Experimental Error There is always an assumption that what you measure is the sum, or some combination of the real phenomena plus some error Typically we measure a voltage that represents a physical quantity like temperature or velocity and then multiply or divide by some constant (the sensitivity) For now we can use the voltage to represent whatever physical quantity we might be interested in. We use a sensitivity to relate the voltage to the thing we want

  7. Experimental Error There is always an assumption that what you measure is the sum, or some combination of the real phenomena plus some error Typically we measure a voltage that represents a physical quantity like temperature or velocity and then multiply or divide by some constant (the sensitivity) For now we can use the voltage to represent whatever physical quantity we might be interested in. We use a sensitivity to relate the voltage to the thing we want: for the accelerometer

  8. Experimental Error There is always an assumption that what you measure is the sum, or some combination of the real phenomena plus some error Typically we measure a voltage that represents a physical quantity like temperature or velocity and then multiply or divide by some constant (the sensitivity) For now we can use the voltage to represent whatever physical quantity we might be interested in. We use a sensitivity to relate the voltage to the thing we want: for the accelerometer, for the speaker

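A minimal Matlab sketch of this idea (the sensitivity value, data values, and variable names below are made up for illustration, not taken from the lecture):

      sensitivity = 0.1;                 % V per (m/s^2), a hypothetical accelerometer sensitivity
      v_measured  = [0.52 0.49 0.51];    % measured voltages in volts (made-up values)
      accel = v_measured/sensitivity;    % the physical quantity we actually want, in m/s^2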

  10. Experimental Error The error can be broken down into two categories

  11. Experimental Error • The error can be broken down into two categories • Systematic error (a.k.a. bias) • Random error (a.k.a. noise)

  12. Experimental Error • The error can be broken down into two categories • Systematic error (a.k.a. bias) • Random error (a.k.a. noise) Systematic error is error that occurs the same way every time an experiment is run.

  13. Experimental Error • The error can be broken down into two categories • Systematic error (a.k.a. bias) • Random error (a.k.a. noise) Systematic error is error that occurs the same way every time an experiment is run. Examples of systematic error

  14. Experimental Error • The error can be broken down into two categories • Systematic error (a.k.a. bias) • Random error (a.k.a. noise) • Systematic error is error that occurs the same way every time an experiment is run. • Examples of systematic error • Bad calibration

  15. Experimental Error • The error can be broken down into two categories • Systematic error (a.k.a. bias) • Random error (a.k.a. noise) • Systematic error is error that occurs the same way every time an experiment is run. • Examples of systematic error • Bad calibration • Non-linearity

  16. Experimental Error • The error can be broken down into two categories • Systematic error (a.k.a. bias) • Random error (a.k.a. noise) • Systematic error is error that occurs the same way every time an experiment is run. • Examples of systematic error • Bad calibration • Non-linearity • Digitizer noise (bit noise)

  17. Experimental Error • The error can be broken down into two categories • Systematic error (a.k.a. bias) • Random error (a.k.a. noise) • Systematic error is error that occurs the same way every time an experiment is run. • Examples of systematic error • Bad calibration • Non-linearity • Digitizer noise (bit noise) • Saturating the digitizer (hitting the rail)
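
A small sketch of what saturating the digitizer ("hitting the rail") does to a signal, assuming a hypothetical ±5 V input range (all numbers are illustrative):

      rail = 5;                                   % hypothetical digitizer input range, +/- 5 V
      t = linspace(0,1,1000);                     % 1 s of time
      v_true = 8*sin(2*pi*10*t);                  % an 8 V, 10 Hz signal that exceeds the range
      v_recorded = max(min(v_true,rail),-rail);   % what gets recorded: clipped flat at the rails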

  18. Experimental Error • The error can be broken down into two categories • Systematic error (a.k.a. bias) • Random error (a.k.a. noise)

  19. Experimental Error • The error can be broken down into two categories • Systematic error (a.k.a. bias) • Random error (a.k.a. noise) Random error is error that is different every time you run your experiment.

  20. Experimental Error • The error can be broken down into two categories • Systematic error (a.k.a. bias) • Random error (a.k.a. noise) Random error is error that is different every time you run your experiment. Examples of random error

  21. Experimental Error • The error can be broken down into two categories • Systematic error (a.k.a. bias) • Random error (a.k.a. noise) • Random error is error that is different every time you run your experiment. • Examples of random error • Electronic noise

  22. Experimental Error • The error can be broken down into two categories • Systematic error (a.k.a. bias) • Random error (a.k.a. noise) • Random error is error that is different every time you run your experiment. • Examples of random error • Electronic noise • Your lab partner

  23. Experimental Error • The error can be broken down into two categories • Systematic error (a.k.a. bias) • Random error (a.k.a. noise) • Random error is error that is different every time you run your experiment. • Examples of random error • Electronic noise • Your lab partner • Environmental effects (room temperature, humidity…)
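
A minimal sketch of this error model in Matlab, with a fixed bias standing in for systematic error and randn standing in for random error (all values are made up for illustration):

      true_value = 2.0;                                        % the real phenomenon
      bias       = 0.05;                                       % systematic error: the same every run
      noise_std  = 0.1;                                        % size of the random error
      measured   = true_value + bias + noise_std*randn(1,100); % 100 repeated measurements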

  24. Aggregate Comparisons There is always an assumption that what you measure is the sum of the real phenomena plus some error

  25. Aggregate Comparisons There is always an assumption that what you measure is the sum of the real phenomena plus some error Typically we measure a voltage that represents a physical quantity like temperature or velocity and then multiply or divide by some constant (the sensitivity)

  26. Aggregate Comparisons There is always an assumption that what you measure is the sum of the real phenomena plus some error Typically we measure a voltage that represents a physical quantity like temperature or velocity and then multiply or divide by some constant (the sensitivity) And we make the measurement many times

  28. Aggregate Comparisons There is always an assumption that what you measure is the sum of the real phenomena plus some error For this plot we can think of this as finding the “best” horizontal line that represents the data

  29. Aggregate Comparisons There is always an assumption that what you measure is the sum of the real phenomena plus some error For this plot we can think of this as finding the “best” horizontal line that represents the data But what if we don’t expect the data to remain the same with each measurement and we want to know something about the trend?
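
For the case where the data should not change from measurement to measurement, the "best" horizontal line is just the mean; a short sketch with made-up data:

      v = 2.0 + 0.1*randn(1,100);   % 100 repeated measurements of a constant quantity (made-up)
      v_avg = mean(v);              % the "best" horizontal line through the data
      v_std = std(v);               % the spread of the random error about that line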

  30. Curve Fitting If we have reason to expect that the data will follow some functional form But what if we don’t expect the data to remain the same with each measurement and we want to know something about the trend?

  33. Curve Fitting We use the polyfit command in Matlab to find the closest polynomial to a set of data The polynomial might be a Horizontal line (average)

  34. Curve Fitting We use the polyfit command in Matlab to find the closest polynomial to a set of data The polynomial might be a Sloping line (linear fit)

  36. Curve Fitting We use the polyfit command in Matlab to find the closest polynomial to a set of data The polynomial might be any polynomial

  37. Curve Fitting We use the polyfit command in Matlab to find the closest polynomial to a set of data The polynomial might be a Maybe a parabola

  38. Ultimately we would like to make a plot where we can compare data to some predictive model

  39. Using “polyfit.m” for Curve Fitting So in Matlab we can make these fits by first reading in the data
      infile = 'height_data.mat';
      load(infile) % this file has a pair of vectors, t & height (same length)
      N = length(t);

  40. Using “polyfit.m” for Curve Fitting … using “polyfit.m”
      infile = 'height_data.mat';
      load(infile) % this file has a pair of vectors, t & height (same length)
      N = length(t);
      p = polyfit(t,height,1); % p is a vector of the coefficients of the polynomial
  The first two inputs are your x – y data pair. The third input is the order of the fit, 1 for a straight line.

  41. Using “polyfit.m” for Curve Fitting … using “polyfit.m”
      infile = 'height_data.mat';
      load(infile) % this file has a pair of vectors, t & height (same length)
      N = length(t);
      p = polyfit(t,height,2); % p is a vector of the coefficients of the polynomial
  The first two inputs are your x – y data pair. The third input is the order of the fit, 2 for a parabola.

  42. Using “polyfit.m” for Curve Fitting … using “polyfit.m”
      infile = 'height_data.mat';
      load(infile) % this file has a pair of vectors, t & height (same length)
      N = length(t);
      p = polyfit(t,height,6); % p is a vector of the coefficients of the polynomial
  The first two inputs are your x – y data pair. The third input is the order of the fit, 6 for a …

  43. Using “polyfit.m” for Curve Fitting …and then evaluate the polynomial using “polyval.m”
      infile = 'height_data.mat';
      load(infile) % this file has a pair of vectors, t & height (same length)
      N = length(t);
      p = polyfit(t,height,2); % p is a vector of the coefficients of the polynomial
      [y_fit] = polyval(p,t); % polyval evaluates the polynomial
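
Note that polyval can evaluate the fit at any points, not just the measured times; a small sketch (assuming the t and height vectors from height_data.mat are loaded, as above) that evaluates the quadratic fit on a finer time vector so the plotted curve looks smooth:

      p = polyfit(t,height,2);                % the same quadratic fit as above
      t_fine = linspace(min(t),max(t),200);   % a finer time vector than the measurements
      y_fit_fine = polyval(p,t_fine);         % the fitted curve, evaluated on the fine grid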

  44. Using “polyfit.m” for Curve Fitting …and plotting as we have earlier.
      infile = 'height_data.mat';
      load(infile) % this file has a pair of vectors, t & height (same length)
      N = length(t);
      p = polyfit(t,height,2); % p is a vector of the coefficients of the polynomial
      [y_fit] = polyval(p,t); % polyval evaluates the polynomial
      zf(5) = figure(5);clf
      za(1) = axes;
      zp = plot(t,height,'o',t,y_fit);grid
      xlabel('time (months)')
      ylabel('child''s height (in)')
      ss0 = ['data'];
      ss1 = ['fit = ' num2str(p(1)) 'x^2 + ' num2str(p(2)) 'x + ' num2str(p(3))];
      zl = legend(ss0,ss1);

  46. Today’s Assignment There are notes on the web page for fitting to a function of your choosing

  47. Today's Assignment The current assignment is to put an accelerometer on the tip of the shaker and shake it

  48. Today's Assignment The current assignment is to put an accelerometer on the tip of the shaker and shake it At something like 10 amplitudes

  49. Today's Assignment The current assignment is to put an accelerometer on the tip of the shaker and shake it At something like 10 amplitudes And something like 3 frequencies, maybe 10 Hz, 100 Hz, and 1 kHz

  50. Today's Assignment Then, for each frequency …
