
Experimental Research Methods in Language Learning

Presentation Transcript


  1. Experimental Research Methods in Language Learning Chapter 11 Correlational Analysis

  2. Correlational Analysis • Can you think of real life examples of correlations? • How do you express how much one thing is related to another thing? • What is a positive correlation? What is a negative one?

  3. Correlational Analysis • Correlation allows researchers to explore a hypothetical relationship between variables. • Correlational analysis is typically used in non-experimental research, such as survey research or correlational research, which aims to examine whether there is an association between the variables of interest and, if such an association exists, to what extent the variables are related.

  4. Correlational Analysis • Correlation is fundamental to several advanced statistical techniques, including factor analysis, reliability analysis, and regression analysis. • One of the best ways to start learning statistics for experimental research is to first learn how correlations are analyzed.

  5. The Size and Sign of a Correlation Coefficient There are five types of correlations introduced in this chapter: • Pearson Product Moment • Point Biserial Correlation • Spearman’s Rho Correlation • Kendall’s Tau Correlation • Phi Correlation.

  6. The Size and Sign of a Correlation Coefficient • The strength of a correlation coefficient is typically expressed on a scale from 0 (i.e., 0%, no relationship) to 1 (i.e., 100%, perfect relationship); the full coefficient ranges from -1 to +1, with the sign indicating direction. • If two variables are uncorrelated (i.e., 0), there is no systematic relationship between them and hence a prediction of one variable by the other is not possible. • A positive (+) correlation = two variables are associated and move in the same direction in a systematic way. • A negative (-) correlation suggests that the two variables are associated with each other, but move systematically in opposite directions.
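
A minimal illustration in Python (the score values below are invented for demonstration) of how the sign of the coefficient reflects the direction of the relationship:

import numpy as np

hours_studied = np.array([1, 2, 3, 4, 5, 6])
test_score = np.array([52, 58, 61, 70, 75, 83])   # rises with study time
absences = np.array([9, 8, 6, 5, 3, 1])           # falls with study time

# np.corrcoef returns a 2 x 2 correlation matrix; element [0, 1] is the coefficient
print(np.corrcoef(hours_studied, test_score)[0, 1])   # positive, close to +1
print(np.corrcoef(hours_studied, absences)[0, 1])     # negative, close to -1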

  7. Curvilinear Relationship • A curvilinear relationship = At some level, something can be positive, but when it exceeds a certain level, it can become negative. • For example, we all know that some level of anxiety/pressure is good for [test] performance (because we will work harder and try to overcome it), but too much anxiety is bad for test performance because it takes control of our emotions.
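
A minimal sketch in Python (invented data) of why Pearson's r can miss a curvilinear relationship: in the inverted-U pattern below, performance rises with anxiety up to a point and then falls, so the linear coefficient comes out near zero even though the two variables are clearly related.

import numpy as np

anxiety = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
performance = np.array([40, 55, 68, 78, 82, 78, 68, 55, 40])   # inverted-U shape

# A straight-line summary of an inverted-U pattern is approximately zero
print(np.corrcoef(anxiety, performance)[0, 1])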

  8. Hypothesis Testing in Correlation • Researchers seek to test a hypothesis of whether two variables are related. • The null hypothesis (H0) states that there is no relationship between variable A and variable B (i.e., correlation coefficient = 0). • The non-directional alternative hypothesis (H1) is that there is a relationship between variable A and variable B (i.e., correlation coefficient ≠ 0). • A directional alternative hypothesis is: there is a positive relationship between variable A and variable B (i.e., one-tailed). • The two-tailed test of significance is recommended.
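
A minimal sketch in Python of a two-tailed versus a one-tailed (directional) test of the null hypothesis that the correlation is zero, assuming scipy 1.9 or later (which added the alternative argument); the scores are invented:

from scipy.stats import pearsonr

motivation = [3, 5, 2, 4, 5, 1, 4, 3, 5, 2]
vocabulary = [55, 72, 48, 60, 78, 40, 66, 52, 80, 50]

r, p_two = pearsonr(motivation, vocabulary)                         # H1: r != 0 (two-tailed)
_, p_one = pearsonr(motivation, vocabulary, alternative='greater')  # H1: r > 0 (one-tailed)
print(r, p_two, p_one)   # for a positive r, the one-tailed p is half the two-tailed p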

  9. Hypothesis Testing in Correlation • A typical p-value threshold is 0.05. That is, a 5% chance of error in rejecting the null hypothesis is accepted. • The degrees of freedom (df) are determined by the total number of cases (or sample size) minus 2 (i.e., N-2).
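
A minimal sketch in Python (with an invented coefficient and sample size) of the test statistic behind the p-value: t = r × √(N − 2) / √(1 − r²), evaluated against a t distribution with N − 2 degrees of freedom:

import math
from scipy.stats import t as t_dist

r, n = 0.45, 30                        # hypothetical correlation and sample size
t_stat = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
df = n - 2
p_two_tailed = 2 * t_dist.sf(abs(t_stat), df)
print(t_stat, df, p_two_tailed)        # here p is below 0.05, so H0 would be rejected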

  10. Effect Size and R-squared (R2) • The sign of the correlation (i.e., + or -) is not related to the strength (or size) of the correlation. • A negative correlation does not mean that the finding is of no worth. • A correlation coefficient does not tell us how much one variable accounts for the other; R-squared (R2), obtained by squaring the coefficient, expresses this as the proportion of variance the two variables share. • In some research topics, fairly weak correlations can be very important (i.e., they can indicate theoretical or practical significance) and in some cases even a shared variance of 10% can be worth acting upon.
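
A minimal illustration (invented coefficient) of how the shared variance is obtained by squaring the coefficient:

r = 0.50
r_squared = r ** 2   # proportion of variance shared by the two variables
print(f"r = {r}, R-squared = {r_squared}, shared variance = {r_squared:.0%}")   # 25%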

  11. Shared Variance

  12. Correction for Attenuation • A correction for attenuation takes the reliability coefficients of the two measures into account in a Pearson correlation coefficient. • According to Hatch and Lazaraton (1991, p. 444), a correction for attenuation can be computed as: r_AB ÷ √(reliability of A × reliability of B), where r_AB is the observed correlation coefficient between measures A and B.
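
A minimal sketch in Python of the Hatch and Lazaraton (1991) formula above, with invented reliability values:

import math

def correct_for_attenuation(r_ab, reliability_a, reliability_b):
    # Disattenuate an observed correlation for the unreliability of the two measures
    return r_ab / math.sqrt(reliability_a * reliability_b)

# hypothetical values: observed r = .50, reliabilities (e.g., Cronbach's alpha) of .80 and .70
print(correct_for_attenuation(0.50, 0.80, 0.70))   # roughly .67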

  13. Pearson Correlations • The Pearson Product Moment correlation (Pearson’s r) is a parametric test for describing the relationship between two continuous variables. • It is also known as a simple bivariate correlation. • The Pearson correlation can be used for numeric variables on continuous scales, such as interval and ratio scales. • The two variables must come from the same participants. • The data set must be normally distributed, or close to normal in shape.
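
A minimal sketch in Python using scipy's pearsonr; the two sets of scores are invented and, as required, are treated as coming from the same ten participants:

from scipy.stats import pearsonr

reading = [45, 52, 61, 48, 70, 55, 66, 59, 73, 50]
listening = [40, 50, 58, 46, 68, 52, 60, 55, 71, 47]

r, p = pearsonr(reading, listening)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")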

  14. Five Statistical Assumptions for Pearson’s r • 1. Pairs of data are related (i.e., X and Y scores are from the same person) • 2. Continuous or interval-like data • 3. Normal distribution • 4. Linearity (i.e., an X-Y relationship that can be represented as a straight line) • 5. A spread of score variability • Assumptions 1 and 2 are easy to check. • Assumptions 3 and 5 can be addressed by computing descriptive statistics and creating a histogram. • Assumption 4 is checked through a scatterplot.
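
A minimal sketch in Python of how assumptions 3 and 4 might be screened, using scipy's Shapiro-Wilk test for normality and a matplotlib scatterplot for linearity (the scores are invented):

import matplotlib.pyplot as plt
from scipy.stats import shapiro

x = [45, 52, 61, 48, 70, 55, 66, 59, 73, 50]
y = [40, 50, 58, 46, 68, 52, 60, 55, 71, 47]

# Assumption 3: a non-significant Shapiro-Wilk p suggests no marked departure from normality
print(shapiro(x).pvalue, shapiro(y).pvalue)

# Assumption 4: inspect the scatterplot for a roughly straight-line pattern
plt.scatter(x, y)
plt.xlabel("X scores")
plt.ylabel("Y scores")
plt.savefig("scatterplot.png")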

  15. Point Biserial Correlations • The Point Biserial Correlation is a non-parametric test. • It is a special case of the Pearson correlation. • It can be used to examine the relationship between a dichotomous variable (e.g., male-female or yes-no) and a continuous variable (e.g., test scores).
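
A minimal sketch in Python using scipy's pointbiserialr; the 1/0 coding and the scores are invented:

from scipy.stats import pointbiserialr

homework = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]             # 1 = did homework, 0 = did not
test_score = [78, 60, 82, 74, 58, 62, 80, 55, 76, 64]

r_pb, p = pointbiserialr(homework, test_score)
print(f"point-biserial r = {r_pb:.2f}, p = {p:.3f}")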

  16. Spearman’s Rho Correlations • Spearman’s Rho correlation (ρ) is a non-parametric test. • Typically used for numeric variables on an ordinal or ranked scale (e.g., ranked list of test results, letter grades A-F, and steps on a Likert scale). • Can calculate the correlation of an ordinal score with an interval score. • Some information may be lost as continuous variables are ranked.
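
A minimal sketch in Python using scipy's spearmanr; the two sets of ranks (e.g., two raters ranking the same eight essays) are invented:

from scipy.stats import spearmanr

rater_1 = [1, 2, 3, 4, 5, 6, 7, 8]
rater_2 = [2, 1, 3, 5, 4, 7, 6, 8]

rho, p = spearmanr(rater_1, rater_2)
print(f"Spearman's rho = {rho:.2f}, p = {p:.3f}")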

  17. Kendall’s Tau b Correlation • Kendall’s tau-b correlation is a non-parametric alternative to the Spearman correlation. • It can be used to examine the level of agreement and disagreement between two sources of data. • For example, if a number of judges are used to score and rank candidates in order of performance, we would like to see the extent to which these judges agree in their rankings.
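
A minimal sketch in Python using scipy's kendalltau, which computes tau-b by default; the two judges' rankings are invented:

from scipy.stats import kendalltau

judge_a = [1, 2, 3, 4, 5, 6, 7, 8]
judge_b = [1, 3, 2, 4, 6, 5, 7, 8]

tau, p = kendalltau(judge_a, judge_b)
print(f"Kendall's tau-b = {tau:.2f}, p = {p:.3f}")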

  18. Phi Correlations • The Phi (φ) correlation is a non-parametric test. • It is not used much in correlational studies. • It is useful for examining the relationship between two dichotomous variables (e.g., male or female, living or dead, pass or fail, agree or disagree, correct or wrong, homework or no homework, pair work or individual work). • These variables can be coded as 1 or 0, or 1 or 2, depending on how we code them in SPSS.
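
A minimal sketch in Python: when both variables are coded 0/1, the phi coefficient equals the Pearson correlation of the two binary codings, so pearsonr can be reused (the homework and pass/fail data are invented):

from scipy.stats import pearsonr

homework = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]   # 1 = homework assigned, 0 = no homework
passed = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]     # 1 = passed the test, 0 = failed

phi, p = pearsonr(homework, passed)
print(f"phi = {phi:.2f}, p = {p:.3f}")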

  19. Factors Affecting Correlation Coefficients A correlation coefficient is affected by several factors, some known and some unknown, including: • Sample size • The correlational test being used • Outliers (extreme cases) • Reliability of the research instruments • Restricted data or score range (e.g., truncated data)
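
A minimal sketch in Python (invented data) of one of these factors, the influence of a single outlier on the coefficient:

import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
y = np.array([12, 15, 13, 18, 16, 20, 19, 22])
print(np.corrcoef(x, y)[0, 1])               # fairly strong positive correlation

# Adding one extreme case noticeably changes the coefficient
x_out = np.append(x, 9)
y_out = np.append(y, 2)                      # a case far below the overall trend
print(np.corrcoef(x_out, y_out)[0, 1])       # drops to near zero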

  20. Discussion • Why do you think it is not suitable to use correlation to explain causation? • What is the difference between a correlation coefficient and a shared variance? • Do you think setting the probability value at less than 0.05 is better or worse than setting it at 0.01? Why do you think so?
