
The Tools of Research


Presentation Transcript


    1. The Tools of Research

    2. Sampling Most survey research involves selecting a sample because of the cost and time involved in surveying the entire population.

    3. Types of Samples Probability Sampling Regarded as the best; most scientific Every member of the population has a known, nonzero chance of being selected Non-Probability Sampling Non-scientific Sample may not be (generally isn't) representative of the general population

    4. Probability Sampling Simple Random Sample Each individual in the population has an equal chance of being selected. An example: Put everyone's names in a hat and then draw them out.
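
If you would rather let the computer play the part of the hat, here is a minimal Python sketch (the population and sample size are made up for illustration):

```python
import random

# Hypothetical population: numbered names standing in for slips in a hat.
population = [f"Person {i}" for i in range(1, 501)]

# random.sample draws without replacement, so every individual has an
# equal chance of being selected -- the electronic version of the hat.
sample = random.sample(population, k=50)
print(sample[:5])
```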

    5. Probability Sampling Stratified Random Sample Used to ensure that sub-groups within a population are represented proportionally in the sample. Example: If the state is divided into six geographic regions and the Southeast Region contains 25% of the total population under study, then 25% of the sample has to be drawn from this region.

    6. Stratified Random Sample
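
A minimal Python sketch of proportional stratification (the regions and counts are invented, with the Southeast holding 25% of the population as in the example above):

```python
import random

# Made-up regions; the Southeast holds 25% of the total population.
strata = {
    "Southeast": [f"SE-{i}" for i in range(250)],
    "Northeast": [f"NE-{i}" for i in range(250)],
    "Central": [f"C-{i}" for i in range(500)],
}
total = sum(len(members) for members in strata.values())
sample_size = 100

sample = []
for region, members in strata.items():
    # Each region contributes in proportion to its population share,
    # so 25% of the sample is drawn from the Southeast.
    k = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, k))

print(len(sample))  # 100 total: 25 SE, 25 NE, 50 Central
```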

    7. Probability Sampling Cluster Sampling Random selection of groups that already exist. Example: To do a study of Horticulture I Ag students, you would randomly select schools, then randomly select Hort I classes from within the schools

    8. Cluster or Multi-Stage Sampling
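
A minimal Python sketch of the two-stage selection just described (the schools and class rosters are made up):

```python
import random

# Made-up data: schools mapped to their Hort I class rosters.
schools = {
    "School A": [["Amy", "Al"], ["Ann", "Art"]],
    "School B": [["Bea", "Ben"], ["Bob", "Bev"]],
    "School C": [["Cal", "Cam"], ["Cindy", "Carl"]],
}

# Stage 1: randomly select schools.
chosen_schools = random.sample(list(schools), k=2)

# Stage 2: randomly select a Hort I class within each chosen school;
# every student in a selected class ends up in the sample.
sample = []
for school in chosen_schools:
    chosen_class = random.choice(schools[school])
    sample.extend(chosen_class)

print(sample)
```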

    9. Probability Sampling Systematic Random Sample The sample is drawn from a numbered list of people. A person is randomly picked near the top of the list, then every Nth name is selected after that (Nth could be 3rd, 7th, 10th or whatever number is needed to get the correct sample size).

    10. Systematic Random Sample (every 3rd person selected) Bob Adams Billy Benham Sue Conners Ward Dunlap Teresa Elgin Bob Franks Cindy Gomez Dan Headley Aaron Jackson Sue Kimmons Todd Larson Barb Morris Helen Newcomb Inez Oppenheimer Tad Porter Linda Rush Robert Sims Tina Thompson
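
In code, the same procedure looks like this minimal Python sketch (using the list above; because the starting point is random, your six selections may differ):

```python
import random

names = [
    "Bob Adams", "Billy Benham", "Sue Conners", "Ward Dunlap",
    "Teresa Elgin", "Bob Franks", "Cindy Gomez", "Dan Headley",
    "Aaron Jackson", "Sue Kimmons", "Todd Larson", "Barb Morris",
    "Helen Newcomb", "Inez Oppenheimer", "Tad Porter", "Linda Rush",
    "Robert Sims", "Tina Thompson",
]

n = 3                        # take every Nth name
start = random.randrange(n)  # random starting point near the top
sample = names[start::n]     # every 3rd name from there onward
print(sample)                # 6 of the 18 names
```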

    11. Non-Probability Sampling Convenience (also called an accidental sample) The researcher selects whoever is convenient. Example: A researcher at the mall selects the first five people who walk by to get their opinion of a product.

    12. Non-Probability Sampling Purposive (or judgmental sample) Individuals are selected because of their expertise, specialized knowledge, or characteristics. Example: To learn more about emerging trends or issues in the field, you might want to survey the professional organization leaders.

    13. Non-Probability Sampling Snowball Sampling (also known as chain or network sampling) A small group is initially identified. After data are collected from them, they are asked to identify others who might have specialized knowledge regarding the topic; those thus identified recommend others.

    14. How Big Does the Sample Need to Be? Researchers and statisticians have developed formulas and tables that show how big the sample has to be. Generally, two things are needed in order to use these tools: How big is the population? How much chance error are you willing to accept (confidence level and confidence interval)?

    15. Common Sample Size Experts Cochran's formula Cochran, W. G. (1977). Sampling techniques (3rd ed.). New York: Wiley. Krejcie & Morgan Krejcie, R. V., & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement, 30, 607-610.

    16. Sample Size With the Cochran formula, you have to plug in data and manually calculate an answer. Krejcie and Morgan have developed a table (presented on the next page).
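
If you do want to plug in the numbers yourself, here is a minimal Python sketch of Cochran's formula with the finite-population correction (the 95% confidence and 5% margin defaults are just common choices, not requirements):

```python
import math

def cochran_sample_size(population_size, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with the finite-population correction.

    z=1.96 corresponds to a 95% confidence level; p=0.5 is the most
    conservative assumption about the population proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    n = n0 / (1 + (n0 - 1) / population_size)
    return math.ceil(n)

# A population of 1,000 at 95% confidence and a +/-5% margin:
print(cochran_sample_size(1000))  # 278, matching Krejcie & Morgan's table
```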

    17. Krejcie & Morgan Chart to come

    18. The Easy Way to Determine Sample Size Go to http://www.surveysystem.com/sscalc.htm and enter your figures.

    19. The Mechanics of Selecting a Sample Put everyone's name on a piece of paper and draw names out of a hat (not a very efficient use of time for large groups) Use a table of random numbers Number all the people in the population, then use a table of random numbers (found in statistics books or on the web) to identify which individuals to select.

    20. Selecting a Sample Go to http://www.randomizer.org/form.htm and have numbers automatically generated for you.

    21. Other Views According to Gay and Diehl (1992), the number of respondents acceptable for a study generally depends upon the type of research involved: descriptive, correlational, or experimental.

    22. Gay and Diehl (1992) For descriptive research, the sample should be 10% of the population. But if the population is small, then 20% may be required. (I am not sure if I agree with this, but some folks do.)

    23. Gay and Diehl (1992) In correlational research at least 30 subjects are required to establish a relationship. For experimental research, 30 subjects per group is often cited as the minimum.

    24. Instrumentation

    25. Instrumentation Since the instrument you use forms the core of your research, it has to be “right”

    26. Designing the Instrument Consider assembling your instrument as a booklet instead of just sheets of paper. This increases response rate and looks more professional. (Use 8 ½ x 11 sheets of paper in landscape mode.)

    27. Designing the Instrument Leave plenty of writing space for questions in which people write in answers Allow as much white space as possible Include only questions that are really needed

    28. Designing the Instrument Make the manner in which people respond as simple as possible

    29. Designing the Instrument Avoid “and” questions. Example: Do you like the convenience and cost of web-based courses? You might like the convenience but not the cost.

    30. Designing the Instrument Have answers that are mutually exclusive Have responses that include all possible answers

    31. Designing the Instrument Don’t use words that could influence the response. Example: Are you in favor of the Gestapo-like tactics used by the police in invading private homes in search of guns?

    32. Designing the Instrument Avoid using jargon, acronyms and words people may not know! Examples: The EFNEP program has improved nutrition in the inner city. EOCs are important to improve the quality of education. Environmental scanning is no longer needed in program planning.

    33. Designing the Instrument Allow a response for “I don’t know”, “other”, “not applicable” or “no opinion”

    34. Designing the Instrument In measuring attitudes, use both positive and negative statements. VoCATS has increased student learning. VoCATS costs too much for the benefits gained.

    35. Designing the Instrument Group questions in sections using some logical scheme Start the instrument with non-controversial questions but follow soon with the really important items

    36. Instrument Concerns Validity – Does the instrument measure what it is supposed to measure? Reliability – Does the instrument consistently yield the same results?

    37. Validity There are four types of validity: Content, Concurrent, Predictive, Construct

    38. Content Validity Researchers are most concerned with content or face validity: does the instrument really measure what it is supposed to measure? This is similar to asking whether a test in a class really covers the content that has been taught. Content validity is determined by having a panel of experts in the field examine the instrument and compare it to the research objectives. THERE IS NO STATISTICAL TEST TO DETERMINE CONTENT VALIDITY.

    39. Concurrent Validity Does the instrument yield similar results to other recognized instruments or tests? Example: If I developed an instrument to identify quality FFA chapters and most of the chapters I identified were recognized as 3 star FFA chapters by the National FFA organization, this means my instrument has concurrent validity.

    40. Predictive Validity Does the instrument predict how well subjects will perform at a later date? Example: I developed an instrument that I believed would identify freshmen in high school who would go to college. If, four years later, the students I identified did go to college, then my instrument has predictive validity.

    41. Construct Validity Does the instrument really measure some “abstract” or “mental” concept such as creativeness, common sense, loyalty, or honesty? This is very difficult to establish, thus we seldom are concerned with construct validity in educational research.

    42. Reliability An instrument first must be valid, then it must be reliable. It must measure accurately and consistently each time it is used. If I had a bathroom scale and stepped on it three times in a row, and got drastically different weights each time—then it would not be reliable. Some survey instruments that are not well designed behave in the same manner.

    43. Determining Instrument Reliability Test-Retest – administer the instrument, then administer it to the same group later, correlate the two scores (I really don’t like this technique. I think it is impractical and too time consuming)

    44. Determining Instrument Reliability Equivalent Forms – have two forms of the instrument, administer form A to a group then administer form B later, correlate the two scores (I really don’t like this technique. I think it is impractical and too time consuming)

    45. Determining Instrument Reliability Split-halves – divide the instrument in half and calculate a correlation between the halves. There should be a high correlation. This determines “internal consistency” which is generally regarded as being synonymous with reliability.
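
A minimal Python sketch of the split-halves calculation, assuming responses sit in a respondents-by-items matrix (the data here are random, so the coefficient will land near zero; well-designed real items should score far higher):

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(30, 10))  # 30 respondents, 10 Likert items

# Split the instrument into odd- and even-numbered items and
# total each half for every respondent.
half_a = scores[:, 0::2].sum(axis=1)
half_b = scores[:, 1::2].sum(axis=1)

# Correlate the halves, then apply the Spearman-Brown correction,
# which adjusts for each half being only half the instrument's length.
r = np.corrcoef(half_a, half_b)[0, 1]
split_half_reliability = 2 * r / (1 + r)
print(round(split_half_reliability, 2))
```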

    46. Determining Instrument Reliability Statistical calculations – There are several statistical procedures for determining internal consistency. Cronbach’s Alpha Kuder-Richardson 20 or 21 Reliability coefficients (scores) range from 0 to 1. The higher the score, the more reliable the instrument. Aim for .70 or higher.
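
Cronbach’s Alpha is also straightforward to compute directly; this sketch assumes the same kind of respondents-by-items matrix as above (with invented data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's Alpha for a respondents-by-items score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(30, 10))
print(round(cronbach_alpha(scores), 2))  # aim for .70+ on real data
```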

    47. Increasing Instrument Reliability In addition to doing a good job in designing an instrument, two factors affect the reliability of an instrument: Number of questions – the more questions, the more reliable (but more questions can reduce response rates) Number of people completing the instrument – the more who take it, the more reliable it will be

    48. Collecting Data The mailed survey is the most common method for collecting data. Personal interviews, phone interviews, group-administered instruments and web-based surveys can also be used. There are some obvious and not-so-obvious advantages and disadvantages of these data collection techniques. These are covered in AEE 579. In this class we will focus on the mailed survey.

    49. Instrumentation Have your graduate committee, colleagues or others review the instrument and provide suggestions before it is administered Field test the instrument before administering it

    50. Getting High Response Rates from Mailed Surveys Print on colored paper Mail on a Thursday or Friday Use incentives (pencils, pens, stamps, money, food, coupons, phone cards, stickers, drawings, etc.)

    51. Getting High Response Rates from Mailed Surveys Two weeks after your initial mailing, contact the non-respondents with a postcard, phone call or second mailing of the instrument (I prefer the latter) Consider a 3rd mailing if needed

    52. Getting High Response Rates from Mailed Surveys Include an addressed, stamped reply envelope Put your name and mailing address on the actual instrument Type “over” on the bottom of each page if there are questions on the back Make the instrument as short as possible (while still containing the questions you need)

    53. Getting High Response Rates from Mailed Surveys Include a cover letter where the importance and significance of the research is clearly described. Have a prominent person sign the cover letter. Have clear specific directions for completing the instrument.

    54. Non-Responders Consider a telephone interview with a random sample of non-responders Compare early and late responders. If they are the same on selected variables, one can conclude non-responders are similar to responders.
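
A minimal sketch of that early-versus-late comparison using an independent-samples t-test from SciPy (the groups and values are invented for illustration):

```python
from scipy import stats

# Invented values for a selected variable, e.g. years of experience.
early_responders = [12, 8, 15, 10, 9, 11, 14, 7]
late_responders = [13, 9, 10, 12, 8, 11, 15, 10]

# A non-significant difference (p > .05) supports concluding that
# late responders -- and by extension non-responders -- resemble
# early responders on this variable.
t, p = stats.ttest_ind(early_responders, late_responders)
print(f"t = {t:.2f}, p = {p:.3f}")
```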

    55. How do I know who the non-responders are? Have a code number on the instrument (address this in the cover letter and explain why it is there: only to contact non-responders) Have a code on the return envelope (a small number written inside the envelope, or a line in the return address that has no real meaning other than to identify the respondent, such as Department 87, 88, 89, etc.) Place a code number under the stamp on the return envelope

    56. What is a Good Response Rate? In Agricultural and Extension Education, 70% is the norm Some authors say 50% is OK (Gay), while others (Dillman) believe you should get 80%.
