
Measuring Correlation

Learn about correlation, its applications, and characteristics, including types of correlation, estimation methods, and average values.


Presentation Transcript


  1. Measuring Correlation

  2. Definition Correlation is a measure of the mutual correspondence between two variables and is expressed by the correlation coefficient.

  3. Applications and characteristics a) The simple correlation coefficient, also called Pearson's product-moment correlation coefficient, is used to indicate the extent to which two variables change with one another in a linear fashion.

  4. Applications and characteristics b) The correlation coefficient can range from -1 to +1 and is unitless (Fig. A, B, C).

  5. Applications and characteristics c) When the correlation coefficient approaches -1, a change in one variable is more highly, or strongly, associated with an inverse linear change (i.e., a change in the opposite direction) in the other variable.

  6. Applications and characteristics d) When the correlation coefficient equals zero, there is no association between the changes of the two variables.

  7. Applications and characteristics e) When the correlation coefficient approaches +1, a change in one variable is more highly, or strongly, associated with a direct linear change in the other variable.

  8. Applications and characteristics A correlation coefficient can be calculated validly only when both variables are subject to random sampling and each is chosen independently.

  9.–11. Correlation coefficient (formula slides; the formulas themselves are not included in the transcript)
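
For reference, the formula these slides most likely present is the standard Pearson product-moment coefficient: r = Σ(dx · dy) / √(Σdx² · Σdy²), where dx = x - Mx and dy = y - My are the deviations of the paired observations from their respective means. A minimal Python sketch of the calculation, using made-up height and body-mass data (the slides themselves contain no code), follows:

    import math

    # Hypothetical paired observations: height in cm (x) and body mass in kg (y)
    x = [162, 168, 171, 175, 180, 184]
    y = [58, 63, 65, 70, 74, 79]

    n = len(x)
    mx = sum(x) / n                                  # arithmetic mean of x (Mx)
    my = sum(y) / n                                  # arithmetic mean of y (My)
    dx = [xi - mx for xi in x]                       # deviations from Mx
    dy = [yi - my for yi in y]                       # deviations from My

    # Pearson product-moment correlation coefficient
    r = sum(a * b for a, b in zip(dx, dy)) / math.sqrt(
        sum(a * a for a in dx) * sum(b * b for b in dy))
    print(round(r, 3))                               # close to +1: strong direct linear association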

  12. Types of correlation There are the following types of correlation (relation) between phenomena and attributes in nature: • a) the cause-and-effect connection – the connection between factors and phenomena, between factor and resultant attributes; • b) the dependence of parallel changes of several attributes on some third quantity.

  13. Quantitative types of connection • functional – a connection in which each value of one attribute corresponds to a strictly determined value of the other (for example, a definite area of a circle corresponds to a given radius of the circle)

  14. Quantitative types of connection • correlational – a connection in which several values of one attribute correspond, on average, to each value of the associated attribute (for example, height and body mass are known to be linked; in a group of persons of identical height there are different values of body mass, yet these values vary within certain limits around their average).

  15. Correlative connection • A correlative connection describes a dependence between phenomena that does not have a strictly functional character. • A correlative connection shows up only in a mass of observations, that is, in a population. Establishing a correlative connection also requires revealing the causal connection that confirms the dependence of one phenomenon on the other.

  16. Correlative connection • By direction (character), a correlative connection can be direct or inverse. A correlation coefficient that characterizes a direct connection carries the plus sign (+), and one that characterizes an inverse connection carries the minus sign (-). • By strength, a correlative connection can be strong, moderate, or weak; it can also be complete (functional) or absent.

  17. Estimation of correlation by the correlation coefficient
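
The evaluation table belonging to this slide is not included in the transcript. The sketch below grades a coefficient by a widely used textbook convention (|r| below 0.3 weak, 0.3 to 0.7 middle, above 0.7 strong, 1 complete); these cut-offs are an assumption, since the slide's own values are not shown:

    def describe_correlation(r):
        """Grade a correlation coefficient by direction and strength.

        The 0.3 and 0.7 cut-offs follow a common textbook convention and are
        assumptions here; the original slide's table may use different values.
        """
        direction = "direct (+)" if r > 0 else "inverse (-)" if r < 0 else "none"
        strength = abs(r)
        if strength == 0:
            grade = "absent"
        elif strength < 0.3:
            grade = "weak"
        elif strength < 0.7:
            grade = "middle"
        elif strength < 1:
            grade = "strong"
        else:
            grade = "complete (functional)"
        return direction, grade

    print(describe_correlation(-0.85))               # ('inverse (-)', 'strong')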

  18. Types of correlative connection By direction • direct (+) – as one attribute increases, the mean value of the other attribute increases; • inverse (-) – as one attribute increases, the mean value of the other attribute decreases;

  19. Types of correlative connection By character • rectilinear – relatively even changes in the mean values of one attribute are accompanied by equal changes in the other (e.g., minimal and maximal arterial pressure); • curvilinear – with an even change of one attribute, the mean values of the other may increase or decrease unevenly.

  20. Average Values • Mean: the average of the data; sensitive to outlying values • Median: the middle of the data; not sensitive to outlying values • Mode: most commonly occurring value • Range: the difference between the largest observation and the smallest • Interquartile range: the spread of the data; commonly used for skewed data • Standard deviation: a single number which measures how much the observations vary around the mean • Symmetrical data: data that follow a normal distribution (mean = median = mode); report mean, standard deviation & n • Skewed data: not normally distributed (mean ≠ median ≠ mode); report median & interquartile range
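
As an illustration of these summary measures, here is a short Python sketch using the standard-library statistics module on an invented, right-skewed sample (the data and resulting figures are hypothetical, not from the slides):

    import statistics

    # Invented, right-skewed sample (one large outlying value)
    data = [3, 4, 4, 5, 5, 5, 6, 7, 9, 21]

    mean = statistics.mean(data)                     # sensitive to the outlier 21
    median = statistics.median(data)                 # middle value, robust to outliers
    mode = statistics.mode(data)                     # most commonly occurring value
    data_range = max(data) - min(data)               # largest minus smallest observation
    sd = statistics.stdev(data)                      # spread of the observations around the mean

    # Interquartile range: spread of the central 50%, preferred for skewed data
    q1, _, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1

    print(mean, median, mode, data_range, round(sd, 2), iqr)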

  21. Average Values • Limit is the pair of extreme variants of a variation series: lim = Vmin … Vmax

  22. Average Values • Amplitude is the difference between the extreme variants of a variation series: Am = Vmax - Vmin

  23. Average Values • The average quadratic deviation (standard deviation) characterizes the dispersion of the variants around the average value (the inner structure of the population).

  24. Average quadratic deviation Simple arithmetical method: σ = √(Σd² / n), where d is the deviation of each variant from the arithmetic mean (see the next slide) and n is the number of observations.

  25. Average quadratic deviation d = V - M, the true deviation of each variant from the arithmetic mean.

  26. Average quadratic deviation Method of moments (for grouped data): σ = i · √(Σ(a²p) / n - (Σ(ap) / n)²), where a is the conditional deviation expressed in class intervals, p the frequency, i the class interval, and n the number of observations.

  27. Average quadratic deviation is needed for: 1. Estimating the typicality of the arithmetic mean (M is typical for the series if σ is less than 1/3 of the average value). 2. Obtaining the error of the average value. 3. Determining the average norm of the phenomenon under study (M ± 1σ), the subnorm (M ± 2σ) and the edge deviations (M ± 3σ). 4. Constructing the sigma grid for estimating the physical development of an individual.

  28. Average quadratic deviation This dispersion of the variants around the average is characterized by the average quadratic deviation (σ).

  29. Coefficient of variation is the relative measure of variability: the percentage ratio of the standard deviation to the arithmetic mean, Cv = (σ / M) · 100%.
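
The following Python sketch ties slides 23–29 together: it computes σ by the simple arithmetical method with d = V - M, checks the typicality of M, prints the M ± 1σ/2σ/3σ ranges, and ends with the coefficient of variation. The variation series is invented for illustration:

    import math

    # Invented variation series, e.g. body mass in kg
    V = [60, 62, 64, 65, 65, 66, 68, 70, 72, 78]

    n = len(V)
    M = sum(V) / n                                   # arithmetic mean
    d = [v - M for v in V]                           # true deviations d = V - M
    sigma = math.sqrt(sum(di ** 2 for di in d) / n)  # simple arithmetical method

    # Typicality check: M is considered typical if sigma is below one third of M
    print("M is typical:", sigma < M / 3)

    # Average norm, subnorm and edge deviations (slide 27)
    for k in (1, 2, 3):
        print(f"M ± {k}σ: {M - k * sigma:.1f} .. {M + k * sigma:.1f}")

    # Coefficient of variation: relative measure of variability, in percent
    Cv = sigma / M * 100
    print(f"Cv = {Cv:.1f}%")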

  30. Terms Used To Describe The Quality Of Measurements • Reliability is the between-subject variability divided by the sum of the between-subject variability and the measurement error. • Validity refers to the extent to which a test or surrogate measures what we think it is measuring.

  31. Measures Of Diagnostic Test Accuracy • Sensitivity is defined as the ability of the test to identify correctly those who have the disease. • Specificity is defined as the ability of the test to identify correctly those who do not have the disease. • Predictive values are important for assessing how useful a test will be in the clinical setting at the individual patient level. The positive predictive value is the probability of disease in a patient with a positive test. Conversely, the negative predictive value is the probability that the patient does not have the disease if he has a negative test result. • The likelihood ratio indicates how much a given diagnostic test result will raise or lower the odds of having a disease relative to the prior probability of disease.

  32. Measures Of Diagnostic Test Accuracy
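
The formula table that presumably accompanies this slide is not included in the transcript. As an illustration, the Python sketch below derives the measures defined on slide 31 from a hypothetical 2 × 2 table (the counts are invented):

    # Hypothetical 2 x 2 table (all counts invented)
    TP, FP = 90, 30                                  # test positive: diseased / non-diseased
    FN, TN = 10, 170                                 # test negative: diseased / non-diseased

    sensitivity = TP / (TP + FN)                     # correctly identifies those with the disease
    specificity = TN / (TN + FP)                     # correctly identifies those without the disease
    ppv = TP / (TP + FP)                             # probability of disease given a positive test
    npv = TN / (TN + FN)                             # probability of no disease given a negative test

    lr_positive = sensitivity / (1 - specificity)    # how much a positive result raises the odds of disease
    lr_negative = (1 - sensitivity) / specificity    # how much a negative result lowers the odds of disease

    print(f"Se={sensitivity:.2f}  Sp={specificity:.2f}  PPV={ppv:.2f}  NPV={npv:.2f}")
    print(f"LR+={lr_positive:.2f}  LR-={lr_negative:.2f}")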

  33. Expressions Used When Making Inferences About Data • Confidence Intervals • The results of any study sample are an estimate of the true value in the entire population. The true value may actually be greater or less than what is observed. • Type I error (alpha) is the probability of incorrectly concluding there is a statistically significant difference in the population when none exists. • Type II error (beta) is the probability of incorrectly concluding that there is no statistically significant difference in a population when one exists. • Power is a measure of the ability of a study to detect a true difference.
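
A small worked example may help here. The sketch below computes a 95% confidence interval for a mean using the usual normal approximation, mean ± 1.96 · SE; the sample values are invented and the formula is a standard one rather than something shown on the slide:

    import math
    import statistics

    # Invented sample of systolic blood pressure measurements (mm Hg)
    sample = [118, 122, 125, 127, 130, 131, 134, 138, 141, 144]

    n = len(sample)
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)     # standard error of the mean

    # 95% confidence interval by the normal approximation (z = 1.96)
    low, high = mean - 1.96 * se, mean + 1.96 * se
    print(f"mean = {mean:.1f}, 95% CI: {low:.1f} to {high:.1f}")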

  34. Multivariable Regression Methods • Multiple linear regression is used when the outcome data is a continuous variable such as weight. For example, one could estimate the effect of a diet on weight after adjusting for the effect of confounders such as smoking status. • Logistic regression is used when the outcome data is binary such as cure or no cure. Logistic regression can be used to estimate the effect of an exposure on a binary outcome after adjusting for confounders.
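
As a sketch of both methods, the example below uses the statsmodels formula interface (assuming numpy, pandas and statsmodels are available); the variable names and effect sizes are simulated purely for illustration:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated data; the variable names and effect sizes are invented for illustration
    rng = np.random.default_rng(0)
    n = 200
    smoker = rng.integers(0, 2, n)                   # confounder
    diet = rng.integers(0, 2, n)                     # exposure of interest
    weight = 80 - 3 * diet + 2 * smoker + rng.normal(0, 5, n)                # continuous outcome
    cure = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * diet - 0.5 * smoker))))   # binary outcome
    df = pd.DataFrame({"weight": weight, "diet": diet, "smoker": smoker, "cure": cure})

    # Multiple linear regression: effect of the diet on weight, adjusted for smoking
    print(smf.ols("weight ~ diet + smoker", data=df).fit().params)

    # Logistic regression: effect of the diet on the odds of cure, adjusted for smoking
    print(smf.logit("cure ~ diet + smoker", data=df).fit(disp=0).params)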

  35. Survival Analysis • Kaplan-Meier analysis measures the proportion of surviving subjects (or those without an event) among the total number of subjects at risk for the event. Every time a subject has an event, the proportion is recalculated. These proportions are then used to generate a curve to graphically depict the probability of survival. • Cox proportional hazards analysis is similar to the logistic regression method described above, with the added advantage that it accounts for time to a binary event in the outcome variable. Thus, one can account for variation in follow-up time among subjects.
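
A minimal sketch of the Kaplan-Meier recalculation described above, written in plain Python with invented follow-up times (real analyses would normally rely on a survival library, and this simple per-subject loop assumes no tied event times):

    # Invented follow-up times in months and event flags (1 = event, 0 = censored)
    times  = [2, 3, 4, 5, 7, 8, 11, 12, 15, 16]
    events = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]

    survival = 1.0
    at_risk = len(times)
    curve = [(0, survival)]
    for t, e in sorted(zip(times, events)):
        if e == 1:
            # proportion surviving this event time among those still at risk
            survival *= (at_risk - 1) / at_risk
            curve.append((t, round(survival, 3)))
        at_risk -= 1                                 # the subject leaves the risk set (event or censoring)

    for t, s in curve:
        print(f"t = {t:2d}  S(t) = {s}")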

  36. Kaplan-Meier Survival Curves

  37. Why Use Statistics?

  38. Descriptive Statistics • Identifies patterns in the data • Identifies outliers • Guides choice of statistical test

  39. Percentage of Specimens Testing Positive for RSV (respiratory syncytial virus)

  40. Descriptive Statistics

  41. Distribution of Course Grades

  42. Describing the Data with Numbers Measures of Dispersion • RANGE • STANDARD DEVIATION • SKEWNESS

  43.–45. Measures of Dispersion • RANGE – highest to lowest values • STANDARD DEVIATION – how closely the values cluster around the mean value • SKEWNESS – refers to the symmetry of the curve
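
For completeness, a short Python sketch of these three measures on an invented sample; the skewness formula used here (the moment coefficient) is a common choice and is an assumption, since the slides do not specify one:

    import statistics

    # Invented right-skewed sample
    data = [2, 3, 3, 4, 4, 5, 5, 6, 9, 15]

    value_range = max(data) - min(data)              # RANGE
    sd = statistics.pstdev(data)                     # STANDARD DEVIATION (population form)
    mean = statistics.mean(data)

    # SKEWNESS (moment coefficient): 0 for a symmetrical curve, positive when the
    # tail stretches to the right, negative when it stretches to the left
    skewness = sum(((x - mean) / sd) ** 3 for x in data) / len(data)

    print(value_range, round(sd, 2), round(skewness, 2))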

  46. The Normal Distribution • Mean = median = mode • Skewness is zero • 68% of values fall within ±1 SD of the mean • 95% of values fall within ±2 SDs of the mean
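
A quick numerical check of the 68% / 95% rule, assuming numpy is available; the simulated mean and SD are arbitrary:

    import numpy as np

    # Simulate normally distributed values and check the 68% / 95% rule
    rng = np.random.default_rng(1)
    values = rng.normal(loc=100, scale=15, size=100_000)

    mean, sd = values.mean(), values.std()
    within_1sd = np.mean(np.abs(values - mean) <= 1 * sd)   # expected to be about 0.68
    within_2sd = np.mean(np.abs(values - mean) <= 2 * sd)   # expected to be about 0.95

    print(f"within ±1 SD: {within_1sd:.3f}, within ±2 SD: {within_2sd:.3f}")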
