
PSC 570

PSC 570, Weeks 6-7. Topics to be covered: controlled comparisons, experiments, and case studies; interactive, partial, and spurious relationships; the central limit theorem; the normal distribution and populations; confidence intervals; standard deviation and standard error; variance, z-scores, and probability.


Presentation Transcript


  1. PSC 570 Weeks 6-7

  2. Topics to be covered • Controlled comparisons, experiments, case studies • Interactive, partial and spurious relationships • Central limit theorem • Normal distribution, population • Confidence interval • Standard deviation, Standard error • Variance and Z-score, probability • Sampling techniques • Along the way: • some introductory experience with statistical inference

  3. Review:

  4. Inference • Causal and descriptive • Valid procedures • Description, explanation, and causation: how are they linked? Is there a difference between explaining and understanding causal effects? • Good description and causal inference are related • Good causal inference requires a good theory and explicit hypotheses (i.e., know what you are seeking to answer)

  5. Experiments • Many different types • Lab experiments: infrequently used in polsci, more so in sociology or psychology • Quasi-experiments (control of some elements) have questionable validity • Remember, in conducting social science research we can only be more or less certain; we are never 100%, absolutely, certain of any relationship. Like forensic scientists, we never PROVE anything definitively. BUT we can find relationships that, de facto, satisfy such demands

  6. As seen with cross-tabs we can include “controls”, and in certain settings we actually control for influences • The random-selection techniques discussed are useful, but not perfect, and not always applicable. When and why is this the case? • Assigning vs. matching groups (certain factors may affect people’s selection of groups) • BUT we still have to conduct social research…

  7. When designing research, please be aware of potentially spurious relationships, omitted variables (more later), and the need for controls • Spurious relationship. Ex: the higher the income, the more likely one is to vote (but what if race and gender are both found to have primary effects on income and likelihood to vote?) • Interactive relationship: the effect of one independent variable depends on the value of another (related to, but distinct from, collinearity; more later) • Partial relationship • Let’s turn to Bennett

  8. Bennett • Causality • Complexity • Covariation, congruity, sequencing, and path-dependency • Mill’s methods of “most similar” and “most different” (agreement & difference) • Probability and inference • Questions

  9. Case studies • Case studies give depth; they aim at, but never achieve, a holistic understanding. Why? • Validity and reliability depend on proper and explicit operationalization (conceptualization, and sound techniques) • Types: theory-generating (exploratory), testing (illustrative), comparative (both), policy-oriented, process tracing (also generating/exploratory) • Techniques: interviews, surveys, original and secondary sources. Most are qualitative, yet they can be quantitative or a combination

  10. Mean deviation, standard deviation, & variance • Mean deviation: the average absolute difference between a variable’s values and its mean. Not used much. Why? • Variance: the average of the squared deviations from the mean • The standard deviation: the square root of the variance • if you know one statistic, then you can calculate the other • Standard deviation is especially useful in comparing the distributions of variables, e.g. among different groups • The standard deviation is sensitive to extreme values • Both variance and standard deviation…make sure you know their differences
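
A minimal sketch of these three measures in Python (numpy assumed available; the data are made up for illustration):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # made-up sample
mean = x.mean()                                          # here: 5.0

mean_dev = np.abs(x - mean).mean()   # mean (absolute) deviation: 1.5
variance = ((x - mean) ** 2).mean()  # average squared deviation: 4.0
std_dev = np.sqrt(variance)          # square root of the variance: 2.0

print(mean_dev, variance, std_dev)   # knowing variance gives you the s.d., and vice versa
```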

  11. Population vs. sample variance • From the text, you know that the population and sample variances are not the same • The population variance is σ² = Σ(Xᵢ − μ)² / N • The sample variance is s² = Σ(Xᵢ − x̄)² / (N − 1) • If we have two observations and we calculate the mean, then if we know x₁ − x̄, we also know x₂ − x̄ • they are linearly dependent, so they each provide exactly the same information • That is (roughly) why the sample variance has (N − 1) in the denominator, while the population variance has N in the denominator
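
numpy exposes this choice of denominator directly through its ddof parameter; a short sketch with two made-up heights:

```python
import numpy as np

x = np.array([5.4, 6.1])      # two made-up heights
m = x.mean()                  # 5.75

# With two observations, the deviations from the mean must cancel:
print(x[0] - m, x[1] - m)     # -0.35 and 0.35 -- one determines the other

print(np.var(x, ddof=0))      # population formula: divides by N      (0.1225)
print(np.var(x, ddof=1))      # sample formula: divides by N - 1      (0.245)
```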

  12. Sample variance, cont. • Again, you have only one independent observation of the variance, because (x₁ − x̄) = (height of 1) − (mean of 1 and 2) is just a linear function of (x₂ − x̄) = (height of 2) − (mean of 1 and 2) (in fact, one is simply the negative of the other)

  13. So, guessing the population mean from a sample • After you meet one person, you can make a guess about the mean of the population, but you cannot guess the population variance (why not?) • When you have met a second person, only then can you guess the population variance • How will you guess? (Remember, deviations are the distances between the scores and the mean) • You will guess that the person you have met is average: the first person’s height equals the population mean

  14. Sample variance again • Your question: what is the average (squared) deviation from the mean? • As you meet more people, your guess will get better and better (you will assume that you have met a representative sample) • You can calculate the mean, since you have two independent observations---you simply take their average • But when you calculate the variance, you do not have two independent observations--you have only one.

  15. The standard error (more later) • The standard deviation of the sample mean is referred to as the standard error • It is the sample standard deviation divided by the square root of n, the number of observations • The standard error can be interpreted as a measure of the imprecision of our estimate of the mean • If the standard error is large, we will have less confidence in our estimate of the mean • This is important in statistical inference
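
A sketch of the computation (numpy assumed; the sample is simulated rather than real data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.75, scale=0.5, size=250)  # simulated heights

std_err = x.std(ddof=1) / np.sqrt(len(x))      # sample s.d. / sqrt(n)
print(x.mean(), std_err)                       # a larger SE means a less precise mean estimate
```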

  16. Average (mean) and standard deviation • If the average is 2.85 on a scale of 1-5, with a st.d. of 2.07 (average distance away from the mean) • Interpretation? Greatly dispersed responses or not? • Compare by sex: what’s your interpretation? • Women: mean = 2.7, std. dev. = 2.0 • Men: mean = 2.9, std. dev. = 2.1

  17. Sampling continued • In real social science research, we rarely get to sample and re-sample • We observe a single sample of a variable x • We rely on the central limit theorem to allow us to make estimates and inferences based on that sample • As noted and used in the examples above, there is no such thing as “population data”. The “population” is a theoretical abstraction • We return to sampling techniques in weeks 7 and 9

  18. Think about the central limit theorem • The sample mean has a known distribution: the normal distribution • This is true even if the underlying variable (from which we are sampling) is not normally distributed • Because of the central limit theorem, we can NOT assume that the data we observe are actually distributed as the normal distribution • Instead, we can assume that the sample mean itself, over repeated samples, is distributed normally, whatever the population looks like • An example ...

  19. Histogram of annual population data, 1948-95; US, Germany, Austria combined

  20. Reading the histogram • Note that the histogram represents a skewed, non-normally distributed variable (N=150) • Let’s see what happens when we repeatedly sample from these data and construct a sampling distribution of the mean • Taking 150 samples of size 50 from this distribution, and collecting the sample means in a new distribution, we get ...
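
The country-population data themselves are not reproduced here, so in this sketch a made-up skewed distribution stands in; the sampling procedure is the one the slide describes (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
skewed = rng.exponential(scale=10.0, size=10_000)  # stand-in skewed "population"

# 150 samples of size 50; keep each sample's mean
sample_means = np.array([rng.choice(skewed, size=50).mean() for _ in range(150)])

# The raw data are skewed, but the sample means pile up symmetrically
print(skewed.mean(), sample_means.mean(), sample_means.std(ddof=1))
```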

  21. [Histogram of the 150 sample means; x-axis: mean (63.8373 to 117.587), y-axis: fraction (up to .393333)] An approximately normally distributed variable

  22. The normal distribution and other probability functions are theoretical distributions • Frequency distributions are real distributions of observed data (we’ll see examples below) • Frequency distributions can resemble the normal distribution, but no real data can actually be distributed as the normal distribution • For continuous variables, histograms are given in terms of intervals rather than exact values. Why?

  23. Example: a normal distribution with mean=0 and st.dev.=0.5

  24. Another normal distribution

  25. Thus measures of dispersion indicate the amount of heterogeneity or variety in a distribution of values • As the number of samples increases, the central limit theorem dictates that the distribution of sample means increasingly looks “normal,” like a bell curve • The normal distribution is symmetric about its mean, and therefore has a median equal to its mean, and is characterized completely by two parameters: mean and standard deviation • The normal distribution is important for all kinds of statistical inference • the t distribution, the Chi-squared distribution, and the F distribution are all derived from it

  26. Still another normal distribution

  27. FYI The normal distribution…its mathematical formula: f(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²))
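
A sketch implementing that formula by hand and checking it against scipy’s built-in density (scipy assumed available):

```python
import numpy as np
from scipy.stats import norm

def normal_pdf(x, mu=0.0, sigma=1.0):
    """The normal density, computed straight from the formula above."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

print(normal_pdf(0.5, mu=0.0, sigma=0.5))  # hand-rolled: ~0.4839
print(norm.pdf(0.5, loc=0.0, scale=0.5))   # scipy agrees
```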

  28. The normal distribution and hypothesis testing • The main concept to understand for hypothesis testing is: the area under a portion of a probability distribution • the area under the whole distribution = 1 (why?) • the area to the right of the mean = 1/2 • the area between the mean and one (positive) standard deviation is 0.3413 • the area between the mean and two (positive) standard deviations is 0.4772 (see the text)
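
These areas can be checked with the normal cumulative distribution function; a sketch (scipy assumed), including the tail area that slide 31 refers to:

```python
from scipy.stats import norm

print(1 - norm.cdf(0))            # area right of the mean: 0.5
print(norm.cdf(1) - norm.cdf(0))  # mean to +1 s.d.: ~0.3413
print(norm.cdf(2) - norm.cdf(0))  # mean to +2 s.d.: ~0.4772
print(2 * (1 - norm.cdf(2)))      # more than 2 s.d. from the mean, either side: ~0.0455
```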

  29. Confidence interval • The 95% level of confidence usually suffices, but in some cases you seek (or require) greater confidence, other times perhaps less • A 95% confidence interval means that 5% of samples will yield results that fall outside it; when an observed result lands that far out, we reject our hypothesis • In standardized scores (z scores) you need a z score of 1.96 at the .05 level of significance (and 2.58 at the .01 level of significance)
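
A sketch of a 95% and a 99% interval around a sample mean, reusing the slide-16 summary statistics with a hypothetical sample size of 100 (numpy and scipy assumed):

```python
import numpy as np
from scipy.stats import norm

sample_mean, sample_sd, n = 2.85, 2.07, 100   # n is hypothetical
se = sample_sd / np.sqrt(n)

z95 = norm.ppf(0.975)   # 1.96
z99 = norm.ppf(0.995)   # 2.58

print(sample_mean - z95 * se, sample_mean + z95 * se)  # 95% interval
print(sample_mean - z99 * se, sample_mean + z99 * se)  # wider 99% interval
```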

  30. Standard scores (z scores) • The z score is a good thing to know • we will use it later when conducting hypothesis testing • Every variable has a mean and a standard deviation, measured in real units of the variable • We can standardize any variable so that it is measured not in its original units, but in units of standard deviation • The z score retains the original distributional information of the data, while providing a useful metric: standard deviation space instead of raw data space
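
A sketch of the standardization itself, on made-up 1-5 scale responses (numpy assumed):

```python
import numpy as np

x = np.array([1, 2, 3, 3, 4, 5, 5, 5], dtype=float)  # made-up responses
z = (x - x.mean()) / x.std(ddof=1)                   # z-score for each value

print(z)                        # same data, now in standard-deviation units
print(z.mean(), z.std(ddof=1))  # ~0 and 1 by construction
```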

  31. Basic probability • The point is this: suppose we take a single observation at random from the distribution… • What is the likelihood that we will pick a number more than two standard deviations away from the mean (either below or above it)? • The answer: just under 5% (about 0.05 probability) • “Statistical significance” = the probability with which we would see the observed frequencies given that there is no association between the variables

  32. Review of hypothesis testing on a single sample • Recall what we did when we used cross-tabulation: • We compared two subsets of people in the GSS data (for example, men and women) • We asked questions like: “Are men more likely to own a gun than women?” • We generated a table that allowed us to compare these frequencies • We found that men were more likely to own a gun

  33. Consider the following thought experiment: • You have data on the height of all of the people at ESU • From this, you can calculate the population mean, the population variance (average squared deviation from the mean), etc. • BUT we never know the population; we work with samples and infer to the population • …so now consider a second thought experiment: • You do not have any data on the people at ESU. Instead, you meet each person one at a time, and after meeting each person, you make guesses about the population statistics (mean and variance)

  34. Review of hypothesis testing (1) • Suppose that the average height in the general population is 5.75 feet, with a st.dev. of 0.5 • In our ESU sample it is 6 feet, and suppose that the sample size is 250 • The null hypothesis is: ESU people are not taller than average (one-directional) • In testing our hypothesis, we are posing a very simple question: • what is the probability that we would observe a sample mean of 6.0 if the null hypothesis were true?

  35. Review of hypothesis testing (2) • We know that our sampling mean is a random variable that follows a normal distribution (the central limit theorem) • We know (in this example) that the average height of the general public is 5.75, and the standard deviation is 0.5 • And we know that in our sample, the average height is 6.0 feet.

  36. Review of hypothesis testing (3) • We are interested in rejecting the null hypothesis • If the data in our sample would be observed with only 5% probability (or less), given that the null hypothesis is true, then we say the null hypothesis has been rejected • In other words...

  37. Review of hypothesis testing (4) • When I take a small sample from the population, the sample average will probably not be equal to the average of the entire population • But... • If my sample is very different from the general population, then I can say with some confidence that my sample is not representative of the general population • How different is “very different”? • The answer: so different that there is only a 0.05 probability that the sample data would be observed in a random sample of the population

  38. Real-world hypothesis testing • In real hypothesis testing, we don’t know the population standard deviation • For this reason, we can’t use the normal distribution to calculate the probability with which we can reject the null hypothesis (except in “toy” examples where the population s.d. is assumed known)

  39. Continuing along…. • Recall that the sample mean has a distribution….the normal distribution. • And remember the standard deviation of the sample mean (called the standard error) = sample s.d. / square root of sample size • Assume that in reality, ESU people are exactly the same height on average as people in the rest of the community…that is, assume that the null hypothesis is true • With what probability would we observe the sample mean 6 feet for people at ESU?

  40. Using probability • To answer this question, we do three things, all having to do with the distribution of the sample mean • (1) subtract the population mean from the ESU mean: 6.0 − 5.75 = 0.25 • (2) divide by the standard error • (3) compare the result to a table of values of the standard normal distribution • Remember that the st.dev. of the sampling distribution is called the standard error of the mean: s/√n = .5/√250 = .5/15.811 ≈ .0316

  41. Now what? • Now we want to use the table of the standard normal distribution • z = .25/.0316 ≈ 7.91. What is the probability that z > 7.91? • Using a table of the standard normal distribution, the answer is essentially zero, far below .05 (one- or two-tailed) • If the null hypothesis were true, we would almost never observe an ESU sample mean of 6 feet or taller • so, we reject the null hypothesis • In standardized scores (z scores) you need a z score of 1.96 at the .05 level of significance, and 2.58 at the .01 level
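
The same three steps in Python (numpy and scipy assumed), using the slide’s numbers:

```python
import numpy as np
from scipy.stats import norm

pop_mean, pop_sd = 5.75, 0.5
sample_mean, n = 6.0, 250

se = pop_sd / np.sqrt(n)           # (2) standard error: ~0.0316
z = (sample_mean - pop_mean) / se  # (1) and (2) combined: ~7.91

print(z, norm.sf(z))               # (3) one-tailed p-value: essentially zero -> reject
```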

  42. What have we done? • We state a hypothesis • We formulate the null hypothesis • Based on the data, we calculate the probability with which we could reject the null hypothesis • We tested a hypothesis using a “toy” example • We run the risk of a Type I error: rejecting a null hypothesis when it is true. Type II: failing to reject a null hypothesis when it is false (and the alternative hypothesis is true).
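
The Type I error rate is, by construction, the significance level we choose; a simulation sketch under a true null hypothesis (numpy and scipy assumed) shows rejections occurring roughly 5% of the time:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
alpha, n, trials = 0.05, 50, 10_000
critical = norm.ppf(1 - alpha / 2)   # 1.96 for a two-tailed .05 test

rejections = 0
for _ in range(trials):
    x = rng.normal(0.0, 1.0, size=n)                 # the null is TRUE: mean is 0
    z = x.mean() / (x.std(ddof=1) / np.sqrt(n))
    if abs(z) > critical:
        rejections += 1                              # a Type I error

print(rejections / trials)                           # close to 0.05
```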

  43. Accepting errors in hypothesis testing (H0 = null hypothesis)

  44. So, another example… • The height of all people is (we assume) known; it has a known mean and standard deviation (assume mean = 5.5 feet and standard deviation = 0.5 feet). • Now, suppose that the average height of people in our Stroud sample is 5.75 feet, and suppose that the sample size is 25

  45. Recall that the sample mean has a distribution. It is … • The normal distribution. • And remember the standard deviation of the sample mean (called the standard error) = sample s.d. / square root of sample size • Assume that in reality, Stroud people are exactly the same height on average as people in the rest of the world • that is, assume that the null hypothesis is true • With what probability would we observe the sample mean 5.75 for people in Stroud?

  46. So we again (1) subtract the population mean from the Stroud mean • (2) divide by the standard error • (3) compare the result to a table of values of the standard normal distribution • First, subtract 5.5 from 5.75, leaving .25 • note: our null hypothesis is that this difference = 0

  47. Then, divide by the standard error, s/√n • here, though, we can do better -- we “know” the population standard deviation s, so we use it in place of the sample s.d. • that gives us .25/(.5/√25) = 2.5 • What is the probability that z > 2.5? • Using a table of the standard normal distribution, the answer is .0062 (one-tailed) or .0124 (two-tailed) • If the null hypothesis were true, we would observe a Stroud sample mean of 5.75 feet or taller with probability .0062, or one at least that far from 5.5 in either direction with probability .0124 • so, we reject the null hypothesis at that level
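
A sketch verifying these tail probabilities (numpy and scipy assumed):

```python
import numpy as np
from scipy.stats import norm

z = (5.75 - 5.5) / (0.5 / np.sqrt(25))   # = 2.5

print(norm.sf(z))       # one-tailed: ~0.0062
print(2 * norm.sf(z))   # two-tailed: ~0.0124
```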

  48. Later... • Later in the course, we will need to use these concepts intensively to conduct hypothesis tests • so, now is a good time to make sure you understand them
