
**1. **Chapter_14

**2. **What can you do with a MANOVA? Until now we had only measured a single dependent variable.
Therefore, ANOVA is called 'univariate'.
ANOVA can have one or more independent variables. A MANOVA is an ANOVA for several dependent variables.
Therefore, MANOVA is called 'multivariate'.
Like ANOVA, MANOVA can have one or more independent variables.

**3. **Why MANOVA? If you conducted separate ANOVAs, any relationship between the dependent variables would be ignored. However, there might be correlations between them.
→ MANOVA can tell us whether groups differ along a combination of dimensions.

**4. **Advantage of having multiple Dependent Variables
... multiple dependent variables as in MANOVA
Number of pedestrians killed
Number of lampposts hit
Number of cars they crash in

**5. **How many dependent variables? Do not add every dependent variable you can think of, but only reasonable ones which are theoretically and empirically motivated.
If you want to explore some novel dependent variables, you might run separate analyses for the theoretically motivated ones and for the exploratory ones.

**6. **Controversies MANOVA is a two-stage test:
1. Overall test
There are 4 possible ways of assessing the overall effect of a MANOVA:
- Pillai-Bartlett trace (V)
- Hotelling's T²
- Wilks's lambda (Λ)
- Roy's largest root
2. Separate tests for the various group differences
There are two main ways of following up on the group differences:
- Univariate ANOVAs
- Discriminant analysis

**7. **The power of MANOVA MANOVA has greater power than ANOVA in detecting differences between groups.
However, there is a complex relationship in that the power of MANOVA depends on a combination of the correlation between the DVs and the effect size.

**8. **The example throughout the chapter We want to assess the effects of cognitive behaviour therapy (CBT) on obsessive compulsive disorder (OCD).
CBT will be compared with behaviour therapy (BT) and with no treatment (NT) as a control condition.
Since OCD manifests itself both behaviourally (obsessive actions) and cognitively (obsessive thoughts), both will be measured.
Note that the two dependent variables are theoretically motivated!

**9. **The data from OCD.sav

**10. **The theory of MANOVA For understanding what is going on in a MANOVA we have to understand (a little bit of) matrices:
A matrix is a collection of numbers arranged in rows and columns.
Examples:
2x3 matrix:
1 2 3
4 5 6
5x4 matrix:
1 2 3 4
5 6 7 8
9 1 3 5
6 7 2 8
0 5 2 8

**11. **More on Matrices A square matrix is one with an equal number of rows and columns, e.g.:
5 3 5 7 8
4 2 1 0 5
1 3 9 7 4
1 3 5 8 0
9 6 3 7 2
The elements on the main diagonal (here 5, 2, 9, 8, 2) are the 'diagonal components'; the others are 'off-diagonal components'. An identity matrix is a square matrix in which the diagonal components are 1 and the off-diagonal components are 0:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1

**12. **More on matrices A matrix with data from only one person is called a 'row vector'. It can be thought of as a single person's scores on five different variables:
(5 3 5 7 8)
A matrix with only one column is called a 'column vector'. It can be thought of as five participants' scores on a single variable:
8
5
7
8
2
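These matrix forms are easy to write down in code. A minimal sketch with NumPy, using the numbers from the slides:

```python
import numpy as np

# The matrix shapes from slides 10-12, as NumPy arrays.
m23 = np.array([[1, 2, 3],
                [4, 5, 6]])                    # 2x3 matrix
identity = np.eye(4)                           # 4x4 identity matrix
row_vec = np.array([[5, 3, 5, 7, 8]])          # row vector: one person, 5 variables
col_vec = np.array([[8], [5], [7], [8], [2]])  # column vector: 5 people, 1 variable

print(m23.shape, row_vec.shape, col_vec.shape)  # (2, 3) (1, 5) (5, 1)
```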

**13. **Important matrices and their functions MANOVA uses a matrix that contains information about the variance accounted for by the independent variables, for each dependent variable.
For each variance portion – hypothesis (model), error, and total variance – there is a sum of squares and cross-products matrix:

**14. **What are 'cross-products'? In the 'sum of squares and cross-products matrix', what do the 'cross-products' mean?
A cross-product is the value for the total combined error between two variables. Cross-products represent the total correlation between two variables in an unstandardized way.
It is by means of these cross-products that MANOVA accounts for any correlation between the dependent variables.

**15. **Calculating MANOVA by hand (using the OCD data) First approach: two univariate ANOVAs Univariate ANOVA for DV1 (actions):
For the first dependent variable 'number of compulsive actions' we have to determine 3 variance portions:
Total, Model, and Residual Sum of Squares.

**16. **Calculating SST, SSM, and SSR for Action SST = s²grand (n − 1)
= 2.1195 (30 − 1)
= 61.47

**17. **Calculating SST, SSM, and SSR for Action SSM = summing up the squared differences between each group mean and the grand mean, each multiplied by the number of subjects in the group:
SSM = 10(4.9 − 4.53)² + 10(3.7 − 4.53)² + 10(5.0 − 4.53)²
= 10(0.37)² + 10(−0.83)² + 10(0.47)²
= 1.37 + 6.89 + 2.21
= 10.47

**18. **Calculating SST, SSM, and SSR for Action SSR = taking the variance within each group, multiplying it by the group's number of scores minus 1, and adding them all up:
SSR = s²CBT (nCBT − 1) + s²BT (nBT − 1) + s²NT (nNT − 1)
= 1.433(10 − 1) + 3.122(10 − 1) + 1.111(10 − 1)
= (1.433 × 9) + (3.122 × 9) + (1.111 × 9)
= 12.9 + 28.1 + 10.0
= 51.0
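The three sums of squares for 'actions' can be reproduced directly from the summary statistics quoted on the slides. A sketch using the reported group means and variances rather than the raw OCD.sav scores:

```python
# ANOVA sums of squares for the DV 'actions', from the slides' summary statistics.
n, N = 10, 30                    # subjects per group, total sample
grand_mean, grand_var = 4.53, 2.1195
group_means = {"CBT": 4.9, "BT": 3.7, "NT": 5.0}
group_vars = {"CBT": 1.433, "BT": 3.122, "NT": 1.111}

ss_t = grand_var * (N - 1)                                           # total SS
ss_m = sum(n * (m - grand_mean) ** 2 for m in group_means.values())  # model SS
ss_r = sum(v * (n - 1) for v in group_vars.values())                 # residual SS

# SS_T ≈ 61.47, SS_M ≈ 10.47, SS_R ≈ 51.0, matching the slide values.
print(ss_t, ss_m, ss_r)
```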

**19. **Calculating Mean Sum of Squares: MST, MSM, and MSR for Action We divide SSM and SSR by their degrees of freedom and derive the mean sums of squares.

**20. **Univariate ANOVA for DV2 (thought): For the second dependent variable 'number of compulsive thoughts' we also have to determine the 3 variance portions: Total, Model, and Residual Sum of Squares.

**21. **Calculating SST, SSM, and SSR for Thought

**22. **Calculating SST, SSM, and SSR for Thought SSM = summing up the squared differences between each group mean and the grand mean, each multiplied by the number of subjects in the group:
SSM = 10(13.40 − 14.53)² + 10(15.2 − 14.53)² + 10(15.0 − 14.53)²
= 10(−1.13)² + 10(0.67)² + 10(0.47)²
= 12.77 + 4.49 + 2.21
= 19.47

**23. **Calculating SST, SSM, and SSR for Thought SSR = taking the variance within each group, multiplying it by the group's number of scores minus 1, and adding them all up:
SSR = s²CBT (nCBT − 1) + s²BT (nBT − 1) + s²NT (nNT − 1)
= 3.6(10 − 1) + 4.4(10 − 1) + 5.56(10 − 1)
= (3.6 × 9) + (4.4 × 9) + (5.56 × 9)
= 32.4 + 39.6 + 50.0
= 122.0

**24. **Calculating Mean Sum of Squares: MST, MSM, and MSR for Thought We divide SSM and SSR by their degrees of freedom and derive the mean sums of squares.

**25. **The relationships between the 2 DVs MANOVA takes into consideration the relationship between the DVs by way of calculating their cross-products.
More specifically, there are 3 cross-products which relate to the 3 SSs that we have calculated within the Univariate ANOVAs:
- Total cross-product CPT
- Model cross-product CPM
- Residual cross-product CPR

**26. **Calculating the total cross-product, CPT The total cross-product between our two DVs 'action' and 'thought' is calculated by the following equation:
CPT = Σ[(xi(Actions) − x̄grand(Actions))(xi(Thoughts) − x̄grand(Thoughts))]
For all subjects, we add up the products of the differences between the individual scores for action and thought and the respective grand means.

**27. **Total Cross-Product CPT

**28. **Calculating the model cross-product, CPM The model cross-product tells us how the relationship between our two DVs 'action' and 'thought' is affected by our experimental manipulation:
CPM = Σ n[(x̄group(Actions) − x̄grand(Actions))(x̄group(Thoughts) − x̄grand(Thoughts))]
For all three experimental groups, we add up the products of the differences between the group means for action and thought and the respective grand means, each weighted by the group size n.

**29. **Model Cross-Product CPM

**30. **Calculating the residual cross-product, CPR The residual cross-product tells us how the relationship between our two DVs 'action' and 'thought' is affected by individual differences (errors) in the model:
CPR = Σ[(xi(Actions) − x̄group(Actions))(xi(Thoughts) − x̄group(Thoughts))]
For all subjects, we add up the products of the differences between the individual scores for action and thought and the respective group means.
An easier way to calculate CPR is to subtract CPM from CPT:
CPR = CPT − CPM = 5.47 − (−7.53) = 13
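The model cross-product, and from it the residual one, can be reproduced from the group and grand means on the slides. CPT = 5.47 is taken as given here, since computing it requires the raw scores:

```python
# Model and residual cross-products between 'actions' and 'thoughts'.
n = 10
action_means = {"CBT": 4.9, "BT": 3.7, "NT": 5.0}
thought_means = {"CBT": 13.4, "BT": 15.2, "NT": 15.0}
gm_action, gm_thought = 4.53, 14.53

cp_m = sum(n * (action_means[g] - gm_action) * (thought_means[g] - gm_thought)
           for g in action_means)
cp_t = 5.47           # total cross-product, given on slide 30
cp_r = cp_t - cp_m    # residual cross-product

print(round(cp_m, 2), round(cp_r, 2))  # ≈ -7.53 13.0
```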

**31. **Residual Cross Product CPR

**32. **The Sum of squares cross- product (SSCP) matrices

**33. **The total Sum of squares cross-product (SSCP) matrix (T) The T matrix represents the Total Sum of squares of each DV (SST(Actions) and SST(Thoughts)) as well as their cross-product (CPT), the total co-dependence between the two DVs.

**34. **The residual (error) Sum of squares cross-product (SSCP) matrix (E) The E matrix represents the Residual or Error Sum of squares of each DV (SSR(Actions) and SSR(Thoughts)) as well as their cross-product (CPR), the residual co-dependence between the two DVs.

**35. **The Model (Hypothesis) Sum of squares cross-product (SSCP) matrix (H) The H matrix represents the Model or Hypothesis Sum of squares of each DV (SSM(Actions) and SSM(Thoughts)) as well as their cross-product (CPM), the model co-dependence between the two DVs.

**36. **Checking the matrices You can calculate with matrices as you can with simple numbers. Thus, H + E = T.
This is a way of checking whether the numbers in the matrices are right. They are!
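The same check can be run numerically by assembling the three SSCP matrices from the SS and CP values computed earlier:

```python
import numpy as np

# SSCP matrices for the OCD example (actions first, thoughts second).
H = np.array([[10.47, -7.53],
              [-7.53, 19.47]])   # model (hypothesis) SSCP
E = np.array([[51.0, 13.0],
              [13.0, 122.0]])    # error (residual) SSCP
T = np.array([[61.47, 5.47],
              [5.47, 141.47]])   # total SSCP

print(np.allclose(H + E, T))     # True: the matrices add up
```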

**37. **Principle of the MANOVA test statistic In univariate ANOVA, we divide MSM by MSR in order to obtain the F-value.
In multivariate ANOVA, we would analogously have to divide H by E.
Problem: H and E are matrices, and matrices cannot be readily divided.
Solution: The equivalent of division for matrices is multiplication by the inverse, hence H is multiplied by the inverse of E, called E⁻¹. The product is HE⁻¹.

**38. **Matrix inversion Matrix inversion is too difficult to be dealt with here. In this example we simply take the inverse E⁻¹ and the product HE⁻¹ as given (the matrices themselves were shown on the slide).
Thus, our test statistics will be based on HE⁻¹.
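Rather than taking the inverse on faith, HE⁻¹ and its eigenvalues can be computed from the H and E matrices assembled above. A sketch with NumPy:

```python
import numpy as np

H = np.array([[10.47, -7.53],
              [-7.53, 19.47]])   # model SSCP
E = np.array([[51.0, 13.0],
              [13.0, 122.0]])    # error SSCP

HE_inv = H @ np.linalg.inv(E)    # multivariate analogue of MS_M / MS_R
eigvals = np.sort(np.linalg.eigvals(HE_inv).real)[::-1]
print(np.round(eigvals, 3))      # ≈ [0.335 0.073]
```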

**39. **Test statistic HE⁻¹ represents the ratio of the systematic variance of the model to the unsystematic variance of the error. So it is conceptually the same as the F-ratio.
Problem: The F-ratio is a single value, whereas now we have several values. We will always have as many as the square of the number of DVs, in our example 2² = 4.
Solution: the DVs are converted into underlying dimensions or factors, whose variances are the so-called 'eigenvalues'.
→ If you want to know how HE⁻¹ is computed, look into the Appendix for Chapter 14 on Field's CD.

**40. **Discriminant function variates Representing the DVs by underlying dimensions is like working back in a regression, namely to derive from a set of Dependent Variables the underlying Independent Variables. These linear combinations of the DVs are called variates, latent variables, or factors.
Knowing these linear variates, we can predict which group (here, therapy group) a person belongs to. Since the variates are used to discriminate groups of people, they are called discriminant function variates.

**41. **Discriminant function variates How do we find the discriminant function variates?
By maximization which means that the first discriminant function (V1) is the linear combination of dependent variables that maximizes the differences between groups.
Hence the ratio of systematic to unsystematic variance (SSM/SSR) will be maximized for V1. For subsequent variates (V2, etc.), this ratio will be smaller.
Practically, we obtain the maximum possible value of the F-ratio when we look at V1.

**42. **Discriminant function variates The variate V1 can be described by a linear regression equation, where the two DVs are the predictors and V1 is the predicted value:
Y = b0 + b1X1 + b2X2
V1 = b0 + b1DV1 + b2DV2
V1 = b0 + b1Actions + b2Thoughts
In linear regression, the b-values are the weights of the predictors. In discriminant function analysis they are obtained from the eigenvectors of the HE⁻¹ matrix.
b0 can be ignored since we are only interested in the discriminant function and not in the constant.

**43. **Discriminant function variates How many variates are there?
→ The smaller of p and (k − 1), where p is the number of DVs and k is the number of levels of the independent variable. In our case, both yield 2.
→ We will find 2 variates.
The b-values of the 2 variates are derived from the eigenvectors of the matrix HE⁻¹. There will be 2 such eigenvectors, one for each variate.

**44. **Discriminant function variates An eigenvector of a matrix is a vector which is unchanged by transformations of that matrix to a diagonal matrix, i.e., one with only diagonal elements.
By changing HE⁻¹ into a diagonal matrix we reduce the number of elements we have to consider for testing significance while preserving the ratio of systematic vs. unsystematic variance.
We won't calculate those eigenvectors ourselves but simply adopt them from the book (Field, 2005, p. 589):
eigenvector 1 (for variate 1) = (0.603, −0.335)
eigenvector 2 (for variate 2) = (0.425, 0.339)

**45. **The regression equation for the 2 variates V1 = b0 + b1Actions + b2Thoughts
(b0 can be omitted since it plays no role in the discriminant function)
Variate V1 = 0.603 Actions − 0.335 Thoughts
Variate V2 = 0.425 Actions + 0.339 Thoughts

**46. **The discriminant function The equations can be used to calculate a score for each subject on the variates.
Example: Subject 1 in the CBT group had 5 obsessive actions and 14 obsessive thoughts. His scores for variates 1 and 2 are:
V1 = (0.603 × 5) − (0.335 × 14) = −1.675
V2 = (0.425 × 5) + (0.339 × 14) = 6.871
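This scoring can be written as a small helper function, using the eigenvector weights from slide 45:

```python
# Scores on the two discriminant variates (b0 omitted, as on the slide).
def variate_scores(actions, thoughts):
    v1 = 0.603 * actions - 0.335 * thoughts
    v2 = 0.425 * actions + 0.339 * thoughts
    return v1, v2

v1, v2 = variate_scores(5, 14)       # subject 1 in the CBT group
print(round(v1, 3), round(v2, 3))    # -1.675 6.871
```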

**47. **The virtue of the data reduction If we calculate such scores for each subject and then recompute the SSCP matrices (H, E, T, and HE⁻¹) from them, the cross-products are zero. This is because the variates extracted from the data are uncorrelated, so no cross-products can arise. As a result, all off-diagonal elements (the cross-products) vanish; only the diagonal elements remain, and these represent the ratio of systematic to unsystematic variation. This is a welcome data reduction.
The HE⁻¹ matrix for our two variates is therefore diagonal (it was shown on the slide).

**48. **Eigenvalues as F-ratios The eigenvalues we have just derived (0.335 and 0.073) are the conceptual analogue of the F-ratio in univariate ANOVA.
Once we have them, they have to be compared against the values that would result by chance alone.
There are 4 ways in which those chance values can be calculated:
- Pillai-Bartlett trace (V)
- Hotelling's T²
- Wilks's lambda (Λ)
- Roy's largest root

**49. **Pillai-Bartlett trace (V)
V = Σ λi / (1 + λi), summed over the s variates (i = 1, ..., s)
= 0.335/(1 + 0.335) + 0.073/(1 + 0.073) = 0.319
λi is the eigenvalue for each of the discriminant variates, s is the number of variates. Pillai's trace is thus the sum of the proportions of explained variance on the discriminant functions. It corresponds directly to SSM/SST.

**50. **Hotelling's T² Hotelling's T² is simply the sum of the eigenvalues for each variate:
T = Σ λi = 0.335 + 0.073 = 0.408, summed over the s variates
Here, we sum SSM/SSR for each of the variates. It compares directly to the F-ratio in ANOVA.

**51. **Wilks's lambda (Λ) Wilks's lambda (Λ) is the product of the unexplained variance on each of the variates. The symbol Π is similar to the summation symbol Σ, but it means 'multiply' rather than 'add':
Λ = Π [1/(1 + λi)], taken over the s variates
= [1/(1 + 0.335)] × [1/(1 + 0.073)] = 0.698
Wilks's lambda represents the ratio of error variance to total variance (SSR/SST) for each variate.

**52. **Roy's largest root Roy's largest root is simply the largest eigenvalue, here 0.335, the eigenvalue of the first variate.
It is thus the ratio of explained to unexplained variance of the first discriminant function.
Again, it is conceptually the same as SSM/SSR in univariate ANOVA.
Since the first variate maximizes the discrimination of the between-group differences, using it as the test statistic is often the most powerful choice.
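All four statistics follow directly from the two eigenvalues; a short sketch reproducing the slide values:

```python
# The four MANOVA test statistics, computed from the eigenvalues of HE^-1.
eigs = [0.335, 0.073]

pillai = sum(lam / (1 + lam) for lam in eigs)   # Pillai-Bartlett trace V
hotelling = sum(eigs)                           # Hotelling's T^2 (trace)
wilks = 1.0
for lam in eigs:
    wilks *= 1 / (1 + lam)                      # Wilks's lambda
roy = max(eigs)                                 # Roy's largest root

print(round(pillai, 3), round(hotelling, 3), round(wilks, 3), roy)
# ≈ 0.319 0.408 0.698 0.335
```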

**53. **Assumptions of MANOVA MANOVA adds further assumptions to the familiar assumptions of ANOVA:
Independence: Observations have to be statistically independent.
Random sampling: Data have to be randomly sampled from the population and measured at the interval level.
Multivariate normality: The dependent variables have to show multivariate normality within groups, collectively.
Homogeneity of covariance matrices: The variances within all groups on all DVs have to be the same. Furthermore, the correlation between any two DVs has to be the same in all groups. It has to be tested whether the population variance-covariance matrices of the different groups in the analysis are equal.

**54. **What is 'multivariate normality'?

**55. **What does 'multivariate normality' look like? (Excerpt from http://nitro.biosci.arizona.edu/zdownload/Volume2/Appendix02.pdf)

**56. **Checking multivariate assumptions The assumption of multivariate normality cannot be checked directly in SPSS. Instead, univariate normality has to be checked for each DV separately. However, this is a necessary but not a sufficient condition.
The assumption of equality of covariance matrices presupposes equality of variances between groups. This can be checked with Levene's test, which should be non-significant for each of the dependent variables. The variance-covariance matrices then have to be compared between groups using Box's test. (Since Box's test relies on multivariate normality, this assumption always has to be checked first.)
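The Levene part of this check could be sketched with SciPy; the three arrays below are placeholder scores, not the actual OCD.sav data:

```python
import numpy as np
from scipy.stats import levene

# Levene's test for homogeneity of variance across the three therapy groups.
# Placeholder data only: substitute the 'actions' scores from OCD.sav.
rng = np.random.default_rng(42)
cbt = rng.normal(4.9, 1.2, 10)
bt = rng.normal(3.7, 1.8, 10)
nt = rng.normal(5.0, 1.1, 10)

stat, p = levene(cbt, bt, nt)
print(f"W = {stat:.3f}, p = {p:.3f}")  # a non-significant p supports homogeneity
```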

**57. **Choosing a test statistic with respect to power Which of the 4 test statistics shall we use?
For small and medium-sized groups all 4 have similar statistical power.
If groups differ mostly on the first variate, Roy's largest root is the most powerful statistic. If they differ on more than one variate, Pillai's trace is better.
If you only have a medium-sized sample, you should not use too many DVs.
All four statistics are relatively robust against violations of multivariate assumptions.

**58. **Follow-up analysis – univariate ANOVA Traditionally, MANOVA is always followed up by univariate ANOVAs for the single DVs. However, a Bonferroni correction should be applied to correct for the increased family-wise error.
Note that univariate ANOVAs are NOT in the spirit of MANOVAs since you can only find out about any single DV, not about the joint contribution of the various DVs.
Single ANOVAs are only justified after the overall MANOVA has proved significant.

**59. **Follow-up analysis – discriminant analysis An alternative is to use 'discriminant analysis' which finds the linear combination(s) of the dependent variables that best discriminate(s) between the experimental groups.
Here, emphasis is laid on the relationships that exist between the DVs.
'Discriminant analysis' reduces the DVs in terms of a set of underlying dimensions, not single dimensions as univariate ANOVAs do.

**60. **MANOVA on SPSS (using OCD.sav) Analyze → General Linear Model → Multivariate

**61. **Multiple comparisons – contrasts

**62. **Post-hoc tests

**63. **Options

**64. **Output of MANOVA Preliminary analysis and testing assumptions: Descriptives

**65. **Equality of variances

**66. **Sphericity

**67. **MANOVA test statistics All 4 test statistics are produced. Except for Hotelling's trace, they are all significant.
→ We can conclude that there is an overall effect. However, we don't know for which group or dependent variable the effect holds.

**68. **Following up on MANOVA (I): Univariate ANOVAs Given the equality of variances for both DVs, we can trust the MANOVA.

**69. **Univariate ANOVAs: Between-subjects effects How can we explain the contradictory results that MANOVA found a significant effect of 'group' while the univariate ANOVAs did not?
→ Because MANOVA has found that the groups differ along a combination of both DVs and not on any single one.

**70. **SSCP matrices: The model SSCP (H) and the residual SSCP (E) The large SSs in the error matrix (51 and 122) tell us that the MANOVA is significant because the effect comes through the relationship between the DVs.

**71. **SSCP Error matrix (E): The average SSCP Note: Bartlett's Test of sphericity is based on this matrix.

**72. **Contrasts The only significant contrast is between BT 'Behaviour therapy' (2) and NT 'No Treatment' (3) in the DV 'action'.
→ BT significantly reduces compulsive actions as compared to NT.
Note: The simple contrasts are carried out on each DV separately, hence like in univariate ANOVA.

**73. **Following up on MANOVA (II): Discriminant Analysis This second alternative is recommended over separate ANOVAs.
Analyze → Classify → Discriminant...

**74. **Following up on MANOVA (II): Discriminant Analysis: Statistics

**75. **Following up on MANOVA (II): Discriminant Analysis: Classify

**76. **Following up on MANOVA (II): Discriminant Analysis: Classify

**77. **Output of Discriminant Analysis D1×D2/(n − 1):
0.4/9 = 0.044
22.6/9 = 2.511
−10/9 = −1.111

**78. **Output of Discriminant Analysis

**79. **The eigenvalues of the 2 variates correspond to those we had calculated before. V1 accounts for 82.2% of the variance, V2 for 17.8%.
→ The group differences shown in MANOVA can be accounted for by a single underlying factor.

**80. **Output of Discriminant Analysis

**81. **Output of Discriminant Analysis

**82. **Output of Discriminant Analysis

**83. **Output of Discriminant Analysis

**84. **The final interpretation How can we interpret the statistical results of the MANOVA: what does all this mean?
The non-significant univariate ANOVAs tell us that the improvement is not simply in terms of 'actions' or 'thoughts'.
MANOVA tells us that the therapy groups can indeed be differentiated by a single underlying dimension which is neither 'action' nor 'thought' but 'obsessive compulsive disorder' (OCD) itself. OCD is composed of compulsive actions and thoughts alike.
The nature of the influence of therapy on OCD is unclear, however...

**85. **The final interpretation Which is the best therapy?
In order to decide this, we have to look at the relation between the 2 DVs in the data for all 3 therapy groups and compare their means.

**86. **Correlations between actions and thoughts in the 3 groups

**87. **Means between the DVs 'action' and 'thought' for each group

**88. **Comparing the 3 therapy groups The discriminant analysis told us that actions are more important in terms of OCD: the standardized canonical discriminant function coefficient for 'action' loaded highly positively on the first variate V1 (.829) while 'thoughts' loaded highly negatively (-.713). The same pattern was obtained in the structure matrix and for the distances between the group centroids (means).
→ Hence, behaviour therapy (BT) seems to be best since it reduces actions more than the other treatments, especially CBT. However, we do not know whether BT was any better than NT.

**89. **The 3 therapy groups and the construct OCD "...Behavior Therapy (BT) has the most influence on OCD as a construct, because of the relative importance of behaviours in that construct compared to cognitions." (Field, 2005, p. 616)

**90. **Univariate ANOVA AND Discriminant Analysis You should run univariate ANOVAs after MANOVA in order to fully understand the data. You should also run a discriminant analysis since it informs you best about the underlying dimensions of the various DVs.