
Introduction to Research



  1. Introduction to Research Chris Fowler

  2. Contents Part I: Introduction • What is Research? • How is research carried out? • What is Evaluation? Part II: Quantitative Methods • Notion of Causality • Simple Experimental Designs • Measurement Scales

  3. Part I: What is Research? • Your views!

  4. What is Research? • A systematic process for answering questions? • Questions about: • Why things happen (understanding) • How things happen (explaining) • What has happened (describing) • When things will happen (prediction)

  5. Different sources of questions • The nature of research will differ according to the source or origin of your research question: • Theories (Fundamental Research) • Problems (Applied Research) • Behaviours (Action Research)

  6. Testing Theories • Theory testing has its own language. • The research question is called a research or alternative hypothesis and is a testable prediction derived from a theory. • But to test a hypothesis you need to define theoretical concepts in empirical terms – an operational definition. • How would you operationalise the concept of intelligence?

  7. Different Philosophical positions • The nature of research may be different according to your philosophical position: • Positivistic research: the scientific approach (seeking causes and effects - human behaviour is no different from behaviour of anything else and can be subjected to the same methods) • Phenomenological research: the person is central, their experiences, interpretations and behaviours. (Describes experiences rather than explains them)

  8. What do we mean by systematic? • Methods or approaches to gathering, analysing and interpreting findings (data). • The methods generally fall into one of two types: • Quantitative • Qualitative

  9. Qualitative Approach: Characteristics • A qualitative approach is characterised by being: • explorative & insightful (understanding rather than explaining) • naturalistic (not laboratory bound) • holistic (the whole system) • participative (the researcher is a part of, not apart from, the approach) • subjective (analysis involves the perceptions and interpretations of the researcher) • non-numerical (analysis of texts, pictures, objects etc) • Associated with a more phenomenological approach to research

  10. Quantitative Approach: Characteristics • A quantitative approach is characterised by being: • focused (knowing what you are looking for) • explanatory (testing theories/models) • objective (removes the researcher bias) • statistical (analysis of numbers) • scientific (use of experimental methods) • Associated with a more positivistic approach to research.

  11. Types of Quantitative Methods • There are four main types or classes of quantitative methods: • Descriptive – describes the way things are according to a number of variables (e.g. learners in terms of their personality, intelligence, ability etc). • Correlational – seeking (non-causal) associations or relationships between variables (e.g. intelligence and academic performance). • Causal–comparative (or ex post facto) – no experimenter-induced manipulation. Starts with causes and tries to deduce effects (e.g. one class of students is very gifted at maths, another not so gifted – why?). • Experimental – attempts to establish a causal relationship with a manipulation (e.g. demonstrating that extra maths tuition (the manipulation) at home improves mathematical performance in school). • The causal strength increases from 1 to 4.

  12. The Research Process • All research shares some common factors or processes: • Asking the right question • Choosing the appropriate data collection method • Appropriate analysis & interpretation of the data • Reflections on implications for the question and the methods

  13. The Research Process (overview diagram) • The Research Philosophy: Positivistic or Phenomenological • The Research Question: Fundamental, Applied or Action • The Research Method: Qualitative or Quantitative • Analysis & Interpretation • Reflections (feeding back into the question and the methods)

  14. What is Evaluation? • Your views?

  15. What is evaluation? • Evaluation is determining the value of an object or process. It is not about testing theories, solving applied problems or changing behaviours; it is not, therefore, ‘real’ research. • It involves the creation of value criteria or claims, then determining whether the criteria or claims have been met. The more that are met, the more ‘valuable’ the object or process.

  16. Part II: Causality • Much of science is about demonstrating or proving causes and effects. The cause (A) must precede the effect (B). • A good experimental design ensures that no other cause could be responsible for the effect.

  17. The Simple Experiment • The experimenter changes (manipulates) one variable (the independent variable) and observes the effects of the change on another variable (the dependent variable). • However, you also need a group that does not receive the manipulation (the control group), to show that the effect does not occur in its absence. For example, you want to show that taking vitamin supplements improves school performance. One group takes the supplements (the exp group) and one doesn’t (the control group) – then compare the school performance of the two groups. O1 (control group) / X O2 (exp group) – a post-test only design
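
The post-test only design above can be sketched with simulated data. The group sizes, score distributions and the assumed +5 "supplement effect" are all invented for illustration; a real study would also apply a significance test to the difference.

```python
# Sketch of the post-test only design, using simulated data.
# All numbers (group size, means, the +5 effect) are invented.
import random
import statistics

random.seed(42)

# Post-test scores for 30 students per group (hypothetical)
control = [random.gauss(60, 10) for _ in range(30)]       # O1: no supplement
experimental = [random.gauss(65, 10) for _ in range(30)]  # O2: supplement taken

# Compare the two group means on the dependent variable
diff = statistics.mean(experimental) - statistics.mean(control)
print(f"Mean difference (exp - control): {diff:.1f}")
```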

  18. But…. • What if the two groups were not equivalent at the beginning? Perhaps the experimental group contained brighter students (a confounding variable) and that caused the effect. • Ensure equivalence by doing a pretest: O1 O2 (control group) / O3 X O4 (exp group). If O1 = O3 and O2 differs from O4, then the manipulation is most likely the cause of the effect. A pre-test/post-test design
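
A minimal sketch of the pre-test/post-test logic, with invented scores: check that the groups start roughly equal (O1 vs O3) before attributing the post-test difference (O2 vs O4) to the treatment.

```python
# Sketch of the pre-test/post-test design. All scores are invented.
import statistics

control_pre  = [58, 61, 59, 62, 60]   # O1
control_post = [59, 62, 60, 61, 61]   # O2
exp_pre      = [60, 59, 61, 58, 62]   # O3
exp_post     = [68, 66, 70, 65, 69]   # O4

baseline_gap = statistics.mean(exp_pre) - statistics.mean(control_pre)
control_gain = statistics.mean(control_post) - statistics.mean(control_pre)
exp_gain = statistics.mean(exp_post) - statistics.mean(exp_pre)

# If the groups start equal (O1 = O3) and only the experimental group
# gains, the manipulation is the most plausible cause of the effect.
print(f"baseline gap: {baseline_gap:.1f}")
print(f"control gain: {control_gain:.1f}, experimental gain: {exp_gain:.1f}")
```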

  19. And….. • What if the very act of giving the vitamin supplement has an effect (subject bias)? • Then give the control group a ‘placebo’ (dummy drug) so neither group knows whether they received the treatment (pill) or not. • What if there is an effect from the observer knowing which group received what treatment? • Then use the ‘double-blind’ procedure (where neither the observer nor the subject is aware of who took the vitamin tablet and who took the placebo).

  20. And….. • What if we used the same subject for both the treatment and control conditions? • This is called a within-subject design or repeated-measures design. • This would reduce any constant errors resulting from variations between different subjects, as each subject acts as their own control (it also needs fewer subjects!). • However, there can be carry-over effects, where performance in a previous condition can affect performance in subsequent conditions (fatigue and practice effects). • A between-subject design is when different subjects are used in the different treatment conditions.

  21. Between- and Within-subject Designs

  Within-subject design – each S provides four observations (scores); T1 = Control:

        T1   T2   T3   T4
  S1    O1   O2   O3   O4
  S2    O5   O6   O7   O8
  S3    O9   O10  O11  O12
  S4    O13  O14  O15  O16

  Between-subject design – each S (N = 16) provides one observation (score); T1 = Control:

        T1   T2   T3   T4
        O1   O2   O3   O4
        O5   O6   O7   O8
        O9   O10  O11  O12
        O13  O14  O15  O16

  22. But…. • Carry-over effects can be overcome by counter-balancing the treatments, so that each subject receives them in a different order; any order effect is then not constant. • This is an example of turning a constant error (order effect) into a random (non-systematic) error.

  23. Counter-balanced Design

  (test order)  1st  2nd  3rd  4th
  S1            T1   T2   T3   T4
  S2            T2   T1   T4   T3
  S3            T3   T4   T1   T2
  S4            T4   T3   T2   T1

  Each subject does the treatments in a different order. When the two factors are equal (4 in this case) it’s called a Latin square design.
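
The counter-balancing idea can be sketched as code. This uses a simple cyclic construction (each row is the previous one shifted by one place), which is one standard way to build a Latin square; the square on the slide above is a different but equally valid arrangement.

```python
# Sketch of generating a counter-balanced (Latin square) order for
# n treatments: each treatment appears exactly once in every row
# (subject) and every column (test position).

def latin_square(n):
    # Row i is the treatment sequence for subject i, rotated by i places
    return [[(i + j) % n + 1 for j in range(n)] for i in range(n)]

for row in latin_square(4):
    print(["T%d" % t for t in row])
```

A simple cyclic square balances position effects; fully balancing first-order carry-over (which treatment follows which) needs a more constrained construction, but the principle is the same.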

  24. Measurement Scales (1) • Quantitative methods are numerical, so where do the numbers come from? Measuring things. • Nominal scales: names or labels, but the categories must be mutually exclusive (each case belongs to only one category) and exhaustive (all possibilities are listed). Not true measurement, but often used for frequency counts. E.g. Sex: male or female; Ethnic origin: West Indian, East Indian, White, other

  25. Measurement Scales (2) • Ordinal scales: order – relative or ranked positions (greater than). Tells you nothing about the absolute values. E.g. winning order (1st, 2nd, 3rd etc) in a running race (but not the finishing times). • Interval scales: fixed intervals but no true or absolute zero (so you can’t make ratio statements). The intervals are fixed or equivalent, but the zero point is arbitrary. E.g. 30°C is 15 degrees higher than 15°C, but not twice as hot.

  26. Measurement Scales (3) • Ratio scales are like interval scales but have a true zero. So you can make ratio statements (twice, three times etc), and these hold irrespective of the measurement unit (something is twice as tall regardless of whether it is measured in inches or metres). E.g. zero on a weight scale corresponds to ‘no weight’ in reality, so 5g (0.18oz) is ½ of 10g (0.36oz), 15g (0.54oz) is 3× 5g, and so on.

  27. Measurement Scales (4) • Why are measurement scales important? • Interval and ratio scales allow all arithmetic operations (adding, subtracting, multiplying etc) to be used, thus allowing more powerful descriptors (e.g. means and variances) to be calculated, and more powerful tests (e.g. ANOVA) to be applied to them.
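
The point about scales and permissible operations can be sketched in Python: nominal data only supports frequency counts (and the mode), while ratio data supports means and ratio statements. The example values are invented for illustration.

```python
# Sketch of scale-appropriate statistics. Data values are invented.
from collections import Counter
import statistics

# Nominal: category labels -> frequency counts only (mode is meaningful,
# a "mean eye colour" is not)
eye_colour = ["brown", "blue", "brown", "green", "brown"]
print(Counter(eye_colour).most_common(1))   # -> [('brown', 3)]

# Ratio: true zero -> arithmetic and ratio statements are both valid
weights_g = [5, 10, 15]
print(statistics.mean(weights_g))           # mean is meaningful
print(weights_g[1] / weights_g[0])          # "twice as heavy" is valid
```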

  28. Some final thoughts… • True experiments are difficult to do in Education because: • they are not naturalistic • there are ethical considerations • it is difficult to control all the variables • Non-experimental quantitative techniques are suitable, but are less powerful (if you want to make causal inferences).
