
    1. QUANTITATIVE RESEARCH HYPOTHESES

    2. HYPOTHESES [see pp. 176-179] I. Defining hypotheses A. Statements posing a relationship between 2 or more variables. B. Usually emerge out of substantial literature review. C. Types of relationships. 1. Covariance (correlative) a. Vary together but not causal. b. Can be positive or negative.
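
For a concrete feel for a covariance-type (correlative) relationship, here is a minimal Python sketch, not part of the original slides, that generates two invented variables and computes Pearson's r with SciPy; the variable names and numbers are assumptions made purely for illustration, and nothing about the correlation implies causation.

```python
# Minimal sketch: two variables that covary with a third, one positively
# and one negatively. Data are simulated; no causal claim is made.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
hours_studied = rng.uniform(0, 10, size=100)            # hypothetical variable

# Positively related variable: tends to rise as hours_studied rises.
exam_score = 50 + 4 * hours_studied + rng.normal(0, 5, size=100)
# Negatively related variable: tends to fall as hours_studied rises.
anxiety = 80 - 3 * hours_studied + rng.normal(0, 5, size=100)

r_pos, _ = stats.pearsonr(hours_studied, exam_score)
r_neg, _ = stats.pearsonr(hours_studied, anxiety)
print(f"positive covariation: r = {r_pos:.2f}")   # close to +1
print(f"negative covariation: r = {r_neg:.2f}")   # close to -1
```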

    3. Hypotheses cont. 2. Causal a. Concepts are related & changes in one precede changes in the other. b. Assumes a predictive relationship, usually between IV & DV. 3. Null, or no relationship--concepts operate independently of one another.

    4. Hypotheses, cont. D. Benefits of hypotheses: 1. Help rule out intervening & confounding variables (because must be precise, testable statements). 2. Permit quantification of variables (through operational definitions of concepts). 3. Provide directions for a quantitative study. 4. Eliminate trial-and-error research. E. Without a hypothesis, quantitative research lacks focus & clarity.

    5. Hypotheses, cont. F. Hypothesis testing is also known as significance testing. 1. Not all significance tests assume variables are normally distributed in the population. 2. Significance tests that do are called parametric; those that do not are called non-parametric. 3. Parametric tests compare the probability of a specified sample to assumed population conditions (parameters) via a sampling distribution.
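
As a hedged illustration of point F.2, the sketch below runs a parametric test (independent-samples t-test, which assumes roughly normal populations) and a non-parametric alternative (Mann-Whitney U, which does not) on the same invented data using SciPy; the group labels and values are assumptions for demonstration only.

```python
# Parametric vs non-parametric significance test on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)
group_b = rng.normal(loc=5.8, scale=1.0, size=30)

t_stat, p_parametric = stats.ttest_ind(group_a, group_b)        # parametric
u_stat, p_nonparametric = stats.mannwhitneyu(group_a, group_b)  # non-parametric

print(f"t-test:        p = {p_parametric:.4f}")
print(f"Mann-Whitney:  p = {p_nonparametric:.4f}")
```

With roughly normal data like this, the two tests usually agree; the non-parametric option matters more when the normality assumption is doubtful.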

    6. Hypotheses, cont. II. Developing a Research Hypothesis A. Remember, an RH is a tentative statement about the relationship between IV & DV. B. Should meet 4 criteria (Wimmer & Dominick): 1. Compatible with current knowledge 2. Logically consistent 3. Stated in most parsimonious form 4. Testable through empirical methods

    7. Hypotheses, cont. C. RH(s) attempt to answer RQs, as they relate to variables. 1. One-tailed predicts the nature of the relationship or difference (positive or negative)--directional. a. Less than (<) b. More than (>) 2. Two-tailed predicts a relationship but doesn't specify the nature of the relationship or its difference--non-directional (more conservative).
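
The directional/non-directional distinction maps onto the `alternative` argument of SciPy's independent-samples t-test (available in reasonably recent SciPy versions). The sketch below is illustrative only; the data and the predicted direction are invented assumptions.

```python
# One-tailed (directional) vs two-tailed (non-directional) test on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treatment = rng.normal(loc=6.0, scale=1.5, size=40)
control = rng.normal(loc=5.5, scale=1.5, size=40)

# Two-tailed: H1 says only that the means differ (non-directional, more conservative).
_, p_two_tailed = stats.ttest_ind(treatment, control, alternative="two-sided")
# One-tailed: H1 predicts treatment > control (directional).
_, p_one_tailed = stats.ttest_ind(treatment, control, alternative="greater")

print(f"two-tailed p = {p_two_tailed:.4f}")
print(f"one-tailed p = {p_one_tailed:.4f}")  # typically about half the two-tailed p
                                             # when the observed difference matches the prediction
```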

    8. Hypotheses, cont. D. Must specify empirical observations required to test the proposed answers. E. Can have several hypotheses (H1, H2, etc.) 1. Written out as statements of difference, so the null hypothesis (H0) can be crafted. 2. Null hypothesis states that no relationship exists between the variables (chance occurrence). 3. It is the null hypothesis that is tested, then either accepted or rejected.

    9. Hypotheses, cont. F. By testing the null, can confirm or disconfirm the RH. 1. To determine the level of rejection/acceptance, set a probability level [p value], against which the null hypothesis is tested. a. Probability levels (aka alpha levels or significance levels) represent confidence that the predicted difference is not due to chance. b. Expressed by a lowercase letter p followed by a less-than (<) or less-than-or-equal-to (≤) sign & a value: e.g. p < .01 (99% confidence the difference is not due to chance; 1% probability it is).

    10. Hypotheses, cont. 2. If results indicate a probability lower than this level, they are significant, & can reject the null. a. If can reject the null, RH assumed true (differences not due to chance). b. If one fails to reject, it means accepting the null, which means rejecting the RH. 3. Establishing a significance level depends on the amount of error researchers are willing to accept.
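
Slide 9's probability level and the decision in point 2 above amount to a simple rule: fix alpha before the study, compute p for the null, and reject only if p falls below alpha. A minimal sketch, assuming an independent-samples t-test and invented data:

```python
# Decision rule: set alpha in advance, then compare the obtained p-value to it.
import numpy as np
from scipy import stats

ALPHA = 0.05                      # significance level chosen before the study

rng = np.random.default_rng(2)
sample_a = rng.normal(loc=100, scale=15, size=50)
sample_b = rng.normal(loc=108, scale=15, size=50)

_, p_value = stats.ttest_ind(sample_a, sample_b)

if p_value < ALPHA:
    print(f"p = {p_value:.4f} < {ALPHA}: reject the null; RH supported")
else:
    print(f"p = {p_value:.4f} >= {ALPHA}: fail to reject the null; RH not supported")
```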

    11. Hypotheses, cont. G. Set a specific significance level prior to conducting a study & analyzing data, using a decision rule. 1. Common significance levels = .05 (1 chance in 20), .01 (1 chance in 100) & .001 (1 chance in 1000). 2. Some studies set a level at .10 or .20 (esp. if a pilot or preliminary study).

    12. Hypotheses, cont. 3. All the following are equivalent statements: The finding is significant at the .05 level; the alpha level is .05; α = .05; p = .05; the p-value is .05; the area of the region of rejection is .05; there is a 95% certainty (or confidence) that the result is not due to chance; there is a 1 in 20 chance of obtaining this result.

    13. Hypotheses, cont. H. Decision Errors 1. In deciding to accept or reject the null, we sometimes make an incorrect decision. 2. Type I error (alpha error)--reject a null when the null is probably true & should have been accepted. 3. Type II error (beta or acceptance error)--accept a null when it is probably false, thereby rejecting a sound RH.

    14. Hypotheses, cont. 4. Stringent significance levels--less chance of a Type I error, but more chance of a Type II error. 5. More liberal significance levels decrease the risk of committing a Type II error, but increase the chances of a Type I error. 6. Doing a pilot study may help avoid error. 7. If you can't do a pilot study, it is better to reduce the risk of a Type I error (erring on the side of caution). We'll come back to all this later!
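
To make the trade-off in points 4-5 concrete, the following rough Monte Carlo sketch (not from the slides) repeatedly simulates studies where the null is true and studies where it is false, then counts how often each alpha level produces Type I and Type II errors. The sample sizes, effect size, and trial counts are arbitrary assumptions chosen only to show the pattern.

```python
# Simulated Type I / Type II error rates at several significance levels.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
N_TRIALS, N = 2000, 30

for alpha in (0.10, 0.05, 0.01):
    type1 = type2 = 0
    for _ in range(N_TRIALS):
        # Null true: both groups drawn from the same population.
        a, b = rng.normal(0, 1, N), rng.normal(0, 1, N)
        if stats.ttest_ind(a, b).pvalue < alpha:
            type1 += 1                    # rejected a true null
        # Null false: groups differ by half a standard deviation.
        c, d = rng.normal(0, 1, N), rng.normal(0.5, 1, N)
        if stats.ttest_ind(c, d).pvalue >= alpha:
            type2 += 1                    # failed to reject a false null
    print(f"alpha={alpha}: Type I rate ~ {type1 / N_TRIALS:.3f}, "
          f"Type II rate ~ {type2 / N_TRIALS:.3f}")
```

Stricter alpha values should show fewer Type I errors and more Type II errors, matching the slide's caution about choosing a significance level.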
