Steps in Statistical Testing:
1) State the null hypothesis (Ho) and the alternative hypothesis (Ha).
2) Choose an acceptable and appropriate level of significance (α) and sample size (n) for your particular study design.
3) Determine the appropriate statistical technique and corresponding test statistic.
4) Collect the data and compute the value of the test statistic.
5) Calculate the number of degrees of freedom for the data set.
6) Compare the value of the test statistic with the critical values in a statistical table for the appropriate distribution and using the correct degrees of freedom.
7) Make a statistical decision and express the statistical decision in terms of the problem under study.
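The seven steps above can be traced end to end in a short sketch. The example below runs a one-sample t test; the data, the hypothesized mean of 50, and the alpha of 0.05 are hypothetical choices made only to illustrate the workflow (the critical value 2.365 is the standard two-tailed t-table entry for df = 7, alpha = 0.05):

```python
import math
import statistics

# Step 1: Ho: population mean = 50; Ha: population mean != 50
mu0 = 50.0

# Step 2: choose significance level and sample size
alpha = 0.05
data = [51.2, 49.8, 52.5, 50.9, 48.7, 53.1, 50.4, 51.8]  # hypothetical sample
n = len(data)

# Steps 3-4: appropriate technique is a one-sample t test;
# compute the test statistic t = (xbar - mu0) / (s / sqrt(n))
xbar = statistics.mean(data)
s = statistics.stdev(data)            # sample standard deviation (n - 1 divisor)
t = (xbar - mu0) / (s / math.sqrt(n))

# Step 5: degrees of freedom for a single sample
df = n - 1                            # 7

# Step 6: compare with the critical value from a t table (df = 7, two-tailed 0.05)
t_critical = 2.365

# Step 7: state the decision in terms of the problem under study
if abs(t) > t_critical:
    decision = "reject Ho: the population mean differs from 50"
else:
    decision = "fail to reject Ho: no evidence the mean differs from 50"
print(round(t, 3), decision)
```

With this hypothetical sample, t ≈ 2.069, which is below the critical value 2.365, so the decision is to fail to reject Ho.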
The number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary.
Estimates of statistical parameters can be based upon different amounts of information or data. The number of independent pieces of information that go into the estimate of a parameter is called the degrees of freedom.
A data set contains a number of observations, say, n. They constitute n individual pieces of information. These pieces of information can be used to estimate either parameters or variability. In general, each item being estimated costs one degree of freedom. The remaining degrees of freedom are used to estimate variability. All we have to do is count properly.
A single sample: There are n observations. There's one parameter (the mean) that needs to be estimated. That leaves n-1 degrees of freedom for estimating variability.
Two samples: There are n1+n2 observations. There are two means to be estimated. That leaves n1+n2-2 degrees of freedom for estimating variability.
One-way ANOVA with g groups: There are n1+...+ng observations. There are g means to be estimated. That leaves n1+...+ng-g degrees of freedom for estimating variability. This accounts for the denominator degrees of freedom for the F statistic.
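The counting rule for the three cases above (one degree of freedom spent per estimated mean) can be written out directly; the sample sizes here are hypothetical:

```python
# One sample of n observations: 1 mean estimated
n = 10
df_single = n - 1                  # 9 degrees of freedom left for variability

# Two samples: 2 means estimated
n1, n2 = 12, 15
df_two = n1 + n2 - 2               # 25

# One-way ANOVA with g groups: g means estimated
group_sizes = [8, 10, 9]           # g = 3 hypothetical groups
g = len(group_sizes)
df_anova = sum(group_sizes) - g    # 27 - 3 = 24 (denominator df for F)

print(df_single, df_two, df_anova)
```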
Nonparametric Tests
Chi Square Test - examines the relationship between 2 categorical variables in a contingency table
McNemar's Test - analysis of paired categorical variables; disagreement or change
Mantel-Haenszel Test - relationship between 2 variables, accounting for the influence of a 3rd variable
Mann-Whitney U or Kolmogorov-Smirnov (KS) - compare 2 independent groups; alternative to a t test
Wilcoxon Sign Test - alternative to a 1-sample t test
Kruskal-Wallis - compare 2 or more independent groups; alternative to ANOVA
Friedman Test - compare 2 or more related groups; alternative to repeated-measures ANOVA
Spearman's Rho - association between 2 variables; alternative to Pearson's r
General notation for a 2 x 2 contingency table.
For a 2 x 2 contingency table with cell counts a and b in the first row, c and d in the second row, and grand total N = a + b + c + d, the Chi Square statistic is calculated by the shortcut formula:

Chi square = N(ad - bc)² / [(a+b)(c+d)(a+c)(b+d)]
We now have our chi square statistic (χ² = 3.418), our predetermined alpha level of significance (0.05), and our degrees of freedom (df = 1). Entering the chi square distribution table with 1 degree of freedom and reading along the row, we find our value of χ² (3.418) lies between 2.706 and 3.841. The corresponding probability is 0.05 < P < 0.10. This is above the conventionally accepted significance level of 0.05 or 5%, so we fail to reject the null hypothesis that the two distributions are the same.
Applying the formula above we get:
Chi square = 105[(36)(25) - (14)(30)]² / [(50)(55)(39)(66)] = 3.418
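The arithmetic of this worked example can be checked directly. The cell counts a = 36, b = 14, c = 30, d = 25 (giving N = 105) are read off the numbers plugged into the formula above:

```python
# 2 x 2 table cell counts: a, b in the first row; c, d in the second
a, b, c, d = 36, 14, 30, 25
N = a + b + c + d                                  # 105

# Shortcut formula: chi2 = N(ad - bc)^2 / [(a+b)(c+d)(a+c)(b+d)]
chi2 = N * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

print(round(chi2, 3))  # 3.418
```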
To calculate the value of the Mann-Whitney U test, we use the following formula:

U = n1n2 + n1(n1 + 1)/2 - R1

where:
U = Mann-Whitney U test statistic
n1 = size of sample one
n2 = size of sample two
R1 = sum of the ranks of sample one
1. Arrange the data of both samples in a single series in ascending order.
2. Assign rank to them in ascending order. In the case of a repeated value, assign ranks to them by averaging their rank position.
3. Once this is complete, the ranks of the different samples are separated and summed as R1, R2, R3, etc.
4. To calculate the value of the Kruskal-Wallis test, apply the following formula:

H = [12 / (n(n + 1))] Σ(Ri² / ni) - 3(n + 1)

where:
H = Kruskal-Wallis test statistic
n = total number of observations in all samples
ni = number of observations in sample i
Ri = sum of the ranks of sample i
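The ranking procedure and both formulas can be sketched together. The two samples below are hypothetical, and the `ranks` helper is a simple illustration of step 2 (average ranks for ties), not an optimized implementation:

```python
from itertools import chain

def ranks(values):
    """Rank all values in ascending order, averaging rank positions for ties."""
    ordered = sorted(values)
    rank_of = {}
    for v in ordered:
        positions = [i + 1 for i, x in enumerate(ordered) if x == v]
        rank_of[v] = sum(positions) / len(positions)
    return [rank_of[v] for v in values]

# Hypothetical independent samples
sample1 = [3, 5, 8, 9]
sample2 = [1, 2, 4, 6, 7]
n1, n2 = len(sample1), len(sample2)

# Steps 1-3: pool, rank, and sum the ranks of sample one
pooled_ranks = ranks(sample1 + sample2)
R1 = sum(pooled_ranks[:n1])

# Mann-Whitney: U = n1*n2 + n1(n1 + 1)/2 - R1
U1 = n1 * n2 + n1 * (n1 + 1) / 2 - R1

# Kruskal-Wallis over the same groups:
# H = 12 / (n(n + 1)) * sum(Ri^2 / ni) - 3(n + 1)
groups = [sample1, sample2]
n = sum(len(g) for g in groups)
all_ranks = ranks(list(chain.from_iterable(groups)))
idx = 0
sum_term = 0.0
for g in groups:
    Ri = sum(all_ranks[idx:idx + len(g)])
    sum_term += Ri ** 2 / len(g)
    idx += len(g)
H = 12 / (n * (n + 1)) * sum_term - 3 * (n + 1)

print(U1, round(H, 3))  # 5.0 1.5
```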
Ho: The probability of a + difference is equal to the probability of a - difference
Ha: The probability of a + difference is not equal to the probability of a - difference
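Under Ho, the number of + differences follows a Binomial(n, 0.5) distribution, which gives an exact sign test. The paired differences below are hypothetical, used only to show the calculation:

```python
from math import comb

# Hypothetical paired differences (after - before); zeros are dropped
diffs = [2.1, -0.5, 1.3, 0.8, -1.1, 2.4, 0.6, 1.9, 0.2, -0.7]
nonzero = [d for d in diffs if d != 0]
n = len(nonzero)
plus = sum(1 for d in nonzero if d > 0)   # number of + differences

def binom_tail(k, n):
    """P(X >= k) for X ~ Binomial(n, 0.5)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Two-sided p-value: double the probability of the more extreme tail
k = max(plus, n - plus)
p_value = min(1.0, 2 * binom_tail(k, n))
print(plus, n, p_value)  # 7 10 0.34375
```

Here 7 of 10 differences are positive, and the two-sided p-value of about 0.34 exceeds 0.05, so Ho would not be rejected for this hypothetical data.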