
Statistical Evaluation of Data


Presentation Transcript


  1. Statistical Evaluation of Data Chapter 15

  2. Descriptive / inferential • Descriptive statistics are methods that help researchers organize, summarize, and simplify the results obtained from research studies. • Inferential statistics are methods that use the results obtained from samples to help make generalizations about populations.

  3. Statistic / parameter • A summary value that describes a sample is called a statistic. M = 25, s = 2 • A summary value that describes a population is called a parameter. µ = 25, σ = 2

  4. Frequency Distributions One method of simplifying and organizing a set of scores is to group them into an organized display that shows the entire set.

  5. Example

  6. Histogram & Polygon

  7. Bar Graphs

  8. Bar Graph

  9. Central tendency The goal of central tendency is to identify the value that is most typical or most representative of the entire group.

  10. Central tendency • The mean is the arithmetic average. • The median measures central tendency by identifying the score that divides the distribution in half. • The mode is the most frequently occurring score in the distribution.
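As a quick sketch (using Python's standard library and made-up scores, not data from the chapter), the three measures of central tendency can be computed directly:

```python
# Mean, median, and mode of a small sample; scores are invented for illustration.
from statistics import mean, median, mode

scores = [3, 5, 5, 6, 7, 8, 10]

print(mean(scores))    # arithmetic average
print(median(scores))  # score that divides the distribution in half: 6
print(mode(scores))    # most frequently occurring score: 5
```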

  11. Variability Variability is a measure of the spread of scores in a distribution. • Range (the difference between min and max) • Standard deviation describes the average distance from the mean. • Variance measures variability by computing the average squared distance from the mean.

  12. Variance = the index of variability. Variance = (Sum of Squares) / N; SD = SQRT(Variance)

      X     X-M    (X-M)2
     10      4       16
      7      1        1
      9      3        9
      8      2        4
      7      1        1
      6      0        0
      5     -1        1
      4     -2        4
      3     -3        9
      1     -5       25
  Total = 60, Mean = 6, SS = 70
  Variance = 70/10 = 7; SD = SQRT(7) ≈ 2.65
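The slide's computation can be reproduced in a few lines of Python; the numbers below are the slide's own X values:

```python
# Population variance and SD exactly as on the slide:
# variance = SS / N, SD = sqrt(variance).
import math

X = [10, 7, 9, 8, 7, 6, 5, 4, 3, 1]
N = len(X)
M = sum(X) / N                      # mean = 60 / 10 = 6
SS = sum((x - M) ** 2 for x in X)   # sum of squared deviations = 70
variance = SS / N                   # 70 / 10 = 7
SD = math.sqrt(variance)            # sqrt(7), about 2.65

print(M, SS, variance, round(SD, 2))
```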

  13. Non-numerical Data Proportion or percentage in each category. For example, • 43% prefer Democrat candidate, • 28% prefer Republican candidate, • 29% are undecided

  14. Correlations A correlation is a statistical value that measures and describes the direction and degree of relationship between two variables.
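As a minimal sketch, a Pearson correlation can be computed from its definition, r = S_xy / sqrt(S_xx * S_yy); the data here are invented for illustration:

```python
# Pearson correlation coefficient from its definitional formula.
import math

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

mx = sum(x) / len(x)
my = sum(y) / len(y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))  # co-deviation
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)

r = sxy / math.sqrt(sxx * syy)
print(round(r, 2))  # 0.77 here: a fairly strong positive relationship
```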

  15. Types of correlation • Phi coefficient (for dichotomous data only) • Pearson's contingency coefficient, known as C • Cramer's V coefficient • Goodman and Kruskal's lambda coefficient http://www.andrews.edu/~calkins/math/edrm611/edrm13.htm

  16. Regression

  17. Regression • Whenever a linear relationship exists, it is possible to compute the equation for the straight line that provides the best fit for the data points. • The process of finding the linear equation is called regression, and the resulting equation is called the regression equation.

  18. Where is the regression line? [Scatterplot: STRENGTH (70-120) plotted against WEIGHT (140-220)]

  19. Which one is the regression line? [Scatterplot: STRENGTH (70-120) plotted against WEIGHT (140-220)]

  20. regression equation All linear equations have the same general structure and can be expressed as • Y = bX + a, where b is the slope and a is the Y-intercept. For example, Y = 2X + 1.
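A least-squares fit can be sketched from the standard formulas b = S_xy / S_xx and a = M_Y - b * M_X (the usual formulas, not reproduced from the slide). The points below are invented to lie exactly on Y = 2X + 1, so the fit recovers that example equation:

```python
# Least-squares slope and intercept for the regression equation Y = bX + a.
x = [0, 1, 2, 3, 4]
y = [1, 3, 5, 7, 9]  # constructed to satisfy Y = 2X + 1 exactly

mx = sum(x) / len(x)
my = sum(y) / len(y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
sxx = sum((xi - mx) ** 2 for xi in x)

b = sxy / sxx     # slope
a = my - b * mx   # intercept

print(b, a)  # 2.0 1.0
```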

  21. standardized form • Often the regression equation is reported in standardized form, which means that the original X and Y scores were standardized, or transformed into z-scores, before the equation was computed: ẑY = β(zX)

  22. Multiple Regression

  23. INFERENTIAL STATISTICS

  24. Sampling Error Random samples No treatment

  25. Is the difference due to a sampling error? Random samples Violent /Nonviolent TV

  26. Is the difference due to a sampling error? • Sampling error is the naturally occurring difference between a sample statistic and the corresponding population parameter. • The problem for the researcher is to decide whether the 4-point difference was caused by the treatments (the different television programs) or is just a case of sampling error.

  27. Hypothesis testing • A hypothesis test is a statistical procedure that uses sample data to evaluate the credibility of a hypothesis about a population.

  28. 5 elements of a hypothesis test 1. The Null Hypothesis The null hypothesis is a statement about the population, or populations, being examined, and always says that there is no effect, no change, or no relationship. 2. The Sample Statistic The data from the research study are used to compute the sample statistic.

  29. 5 elements of a hypothesis test 3. The Standard Error Standard error is a measure of the average, or standard, distance between a sample statistic and the corresponding population parameter. The "standard error of the mean, sM" refers to the standard deviation of the distribution of sample means taken from a population. 4. The Test Statistic A test statistic is a mathematical technique for comparing the sample statistic with the null hypothesis, using the standard error as a baseline.
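As an illustration of how these elements combine, here is a sketch of an independent-samples t statistic with a pooled variance estimate; the formula is a standard textbook form and the data are invented, not taken from the slides:

```python
# Two-sample t statistic: t = (M1 - M2) / standard error,
# where the standard error uses a pooled variance estimate.
import math

group1 = [8, 9, 7, 10, 9, 8]   # hypothetical sample 1
group2 = [6, 7, 5, 6, 8, 7]    # hypothetical sample 2

def mean(xs):
    return sum(xs) / len(xs)

def ss(xs):  # sum of squared deviations from the mean
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs)

n1, n2 = len(group1), len(group2)
df = (n1 - 1) + (n2 - 1)                # degrees of freedom = 10
sp2 = (ss(group1) + ss(group2)) / df    # pooled variance
se = math.sqrt(sp2 / n1 + sp2 / n2)     # standard error of the mean difference
t = (mean(group1) - mean(group2)) / se  # the test statistic

print(df, round(t, 2))
```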

  30. 5 elements of a hypothesis test 5. The Alpha Level (Level of Significance) The alpha level, or level of significance, for a hypothesis test is the maximum probability that the research result was obtained simply by chance. A hypothesis test with an alpha level of .05, for example, demands that there is less than a 5% (.05) probability that the results are caused only by chance.

  31. Reporting Results from a Hypothesis Test • In the literature, significance levels are reported as p values. For example, a research paper may report a significant difference between two treatments with p < .05. The expression p < .05 simply means that there is less than a .05 probability that the result is caused by chance.

  32. Errors in Hypothesis Testing If a researcher is misled by the results from the sample, it is likely that the researcher will reach an incorrect conclusion. Two kinds of errors can be made in hypothesis testing.

  33. Type I Errors • A Type I error occurs when a researcher finds evidence for a significant result when, in fact, there is no effect (no relationship) in the population. • The error occurs because the researcher has, by chance, selected an extreme sample that appears to show the existence of an effect when there is none. • The consequence of a Type I error is a false report. This is a serious mistake. • Fortunately, the likelihood of a Type I error is small, and the exact probability of this kind of mistake (the alpha level) is known to everyone who sees the research report.

  34. Type II error • A Type II error occurs when sample data do not show evidence of a significant effect when, in fact, a real effect does exist in the population. • This often occurs when the effect is so small that it does not show up in the sample.

  35. Factors that Influence the Outcome of a Hypothesis Test 1. The Sample Size A difference found with a large sample is more likely to be significant than the same difference found with a small sample. 2. The Size of the Variance When the variance is small, a mean difference between the two treatments stands out clearly and is more likely to be significant.

  36. Effect Size • Knowing that a difference is significant is not enough. We also need to know the size of the effect.

  37. Measuring Effect Size with Cohen’s d
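The slide's formula is not reproduced in this transcript; a common form of Cohen's d for two independent groups is d = (M1 - M2) / pooled SD. A sketch with invented data:

```python
# Cohen's d: mean difference divided by the pooled standard deviation.
import math

group1 = [8, 9, 7, 10, 9, 8]   # hypothetical data
group2 = [6, 7, 5, 6, 8, 7]

def mean(xs):
    return sum(xs) / len(xs)

def ss(xs):  # sum of squared deviations from the mean
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs)

df = (len(group1) - 1) + (len(group2) - 1)
pooled_sd = math.sqrt((ss(group1) + ss(group2)) / df)
d = (mean(group1) - mean(group2)) / pooled_sd

print(round(d, 2))  # about 1.91: a large effect by Cohen's conventions
```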

  38. Measuring Effect Size as a Percentage of Variance (r2) The effect size can also be measured by calculating the percentage of variance in the scores that is accounted for by the treatment: r2 = t2 / (t2 + df), where df = (n1-1)+(n2-1).
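Assuming the standard conversion r2 = t2 / (t2 + df) (a common textbook formula; the t and df values below are hypothetical):

```python
# Percentage of variance accounted for, computed from a t statistic and its df.
t = 3.30   # hypothetical t statistic
df = 10    # hypothetical degrees of freedom

r2 = t ** 2 / (t ** 2 + df)
print(round(r2, 2))  # about 0.52: roughly 52% of the variance is accounted for
```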

  39. Examples of hypothesis tests report • Two-Group Between-Subjects Test: df = (n1-1)+(n2-1) • Two-Treatment Within-Subjects Test: df = (n-1)

  40. ANOVA reports • Comparing More Than Two Levels of a Single Factor (ANOVA) • df(between) = k-1 • df(within) = (n1-1)+(n2-1)+(n3-1)+… • For a repeated-measures (within-subjects) ANOVA, df(error) = (k-1)(n-1)
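As a sketch of where these df values come from, here is a one-way between-subjects ANOVA computed by hand on three invented groups:

```python
# One-way ANOVA: F = MS(between) / MS(within),
# df(between) = k - 1, df(within) = N - k. Groups are hypothetical.
groups = [
    [4, 5, 6],
    [6, 7, 8],
    [8, 9, 10],
]

N = sum(len(g) for g in groups)                      # total participants
k = len(groups)                                      # number of treatment levels
grand_mean = sum(sum(g) for g in groups) / N

ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

df_between = k - 1
df_within = N - k
F = (ss_between / df_between) / (ss_within / df_within)

print(df_between, df_within, round(F, 2))  # 2 6 12.0
```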

  41. Post Hoc Tests Are necessary because the original ANOVA simply establishes that mean differences exist, but does not identify exactly which means are significantly different and which are not.

  42. Factorial Tests Report The simplest case, a two-factor design, requires a two-factor analysis of variance, or two-way ANOVA. The two-factor ANOVA consists of three separate hypothesis tests: one for each main effect and one for the interaction.

  43. Comparing Proportions: chi-square

  44. Reporting X2 • The report indicates that the researcher obtained a chi-square statistic with a value of 8.70, which is very unlikely to occur by chance (probability is equal to .02). The numbers in parentheses indicate that the chi-square statistic has degrees of freedom (df) equal to 3 and that there were 40 participants (n = 40) in the study. df = (C1 - 1)(C2 - 1), where C1 and C2 are the numbers of categories for the two variables.
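A sketch of how such a chi-square statistic is computed from a contingency table (the observed counts below are invented, not the study reported on the slide):

```python
# Chi-square test of independence:
# expected = (row total * column total) / grand total,
# chi2 = sum over cells of (observed - expected)**2 / expected.
observed = [
    [10, 20],
    [20, 10],
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / grand
        chi2 += (o - e) ** 2 / e

df = (len(observed) - 1) * (len(observed[0]) - 1)
print(df, round(chi2, 2))  # 1 6.67
```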

  45. Evaluating Correlations • r = 0.65, n = 40, p < .01 • The report indicates that the sample correlation is r = 0.65 for a group of n = 40 participants, which is very unlikely to have occurred if the population correlation is zero (probability less than .01).

  46. Reliability & Validity • Reliability refers to the consistency of a measurement, typically evaluated by the relationship between two sets of measurements.

  47. split-half Evaluate the internal consistency of the test by computing a measure of split-half reliability: scores on one half of the test items are correlated with scores on the other half. However, the two split-half scores obtained for each participant are based on only half of the test items, which tends to underestimate the reliability of the full test. This limitation can be addressed by using the Kuder-Richardson formula.

  48. Kuder-Richardson • The Kuder-Richardson Formula 20 estimates the average of all the possible split-half correlations that can be obtained from all of the possible ways to split a test in half.

  49. Cronbach's Alpha • One limitation of the K-R 20 is that it can be used only for tests in which each item is scored dichotomously (e.g., true/false or yes/no alternatives). • Cronbach's alpha can be used for items with scaled scores (e.g., ratings on a 1-5 scale).
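A sketch of Cronbach's alpha computed from its definitional formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the ratings below are invented:

```python
# Cronbach's alpha for a 4-item scale; rows are respondents, columns are items.
data = [
    [3, 4, 3, 4],
    [4, 5, 4, 5],
    [2, 3, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
]

def var(xs):  # sample variance (denominator n - 1)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(data[0])                                 # number of items
item_vars = [var(col) for col in zip(*data)]     # variance of each item
total_scores = [sum(row) for row in data]        # each respondent's total

alpha = k / (k - 1) * (1 - sum(item_vars) / var(total_scores))
print(round(alpha, 2))
```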
