Quantitative Research Methods: Part 2, for social workers Mike Griffiths m.griffiths@gold.ac.uk
Part 2? • I gave an introductory lecture on Tuesday • If you missed it, you should be able to follow this lecture, but I recommend you read through the previous one anyway.
Agenda • Some case studies • More theory, centred on those case studies • Sampling • Ways of getting data to analyse • Results and how they are presented • A bit more on statistical analysis and SPSS
Case studies: introduction • It is a good idea to read past research papers • Not just for their findings • But for their style, structure and methodology • I will use three papers as case studies, to illustrate some relevant theory • These examples all come from Social Work, July 2015, available from the library, hard copy or electronically (click the second option if searching electronically)
Case study 1: Pritzker and Applewhite, 2015 Going “Macro”: Exploring the careers of macro practitioners. Social Work, July 2015, pp 191-199 • Analyses data from a sample of MSW* graduates working as “macro practitioners**” • Based on a questionnaire the researchers devised and sent out This is an American study, using the following terminology: * Master of Social Work ** Broadly speaking, advising organisations rather than dealing with individuals
Case study 1: Headline findings • The majority of respondents work in non-profit agencies • Over 70% of them work in posts open to other professions • 33% work in a job requiring some kind of licence • Salary information (e.g. the median at entry level was US$ 45,000 – 45,999). • How often respondents carried out various responsibilities: e.g. 69% did Programme Planning and Development frequently or very frequently
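Percentages like these are plain descriptive statistics. As a sketch (the responses below are invented, not data from the paper), tallying answers to a closed question might look like:

```python
from collections import Counter

# Hypothetical responses to "How often do you do programme planning and development?"
responses = [
    "very frequently", "frequently", "frequently", "sometimes",
    "rarely", "very frequently", "frequently", "sometimes",
    "frequently", "never",
]

counts = Counter(responses)
percentages = {answer: 100 * n / len(responses) for answer, n in counts.items()}

# Combine the top two categories, as the paper's headline figure does
frequent = percentages["frequently"] + percentages["very frequently"]
print(f"{frequent:.0f}% did this frequently or very frequently")
```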
Case study 2: Oxhandler, Parrish, Torres and Achenbaum The integration of clients’ religion and spirituality in social work practice: A national survey. Social Work, July 2015, pp 228-237 • Investigated the factors that predict social workers’ orientation towards integrating spirituality and religion in their practice • Incorporated a previously published questionnaire, the RSIPAS (Religious/Spiritually Integrated Practice Assessment Scale), designed to measure this orientation • Together with the researchers’ own questions
Case study 2: Headline findings • The total score on the RSIPAS was significantly correlated with all four predictors tested (the practitioners’ level of: organizational religious activity, non-organizational religious activity, intrinsic religiosity, prior training) • The last two of these predictors remained significant in a multiple regression
Case study 3: Bronstein, Gould, Berkowitz, James, and Marks Impact of a social work care coordination intervention on hospital readmission: a randomized controlled trial. Social Work, July 2015, pp 248-255 • Investigated whether an intervention (a home visit and phone calls from a social work intern) • would reduce the likelihood of readmission to hospital • compared against patients receiving “usual care” (the control group)
Case study 3: Headline findings • The number of readmissions was: • zero in the intervention group • eight in the control group • This difference (in favour of the intervention group) was statistically significant.
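Whether 0 versus 8 readmissions could plausibly be chance can be checked with Fisher's exact test. The sketch below uses hypothetical group sizes of 40 per arm (the slide does not give the actual numbers) and computes the one-sided p-value directly from the hypergeometric distribution, using only the standard library:

```python
from math import comb

# Hypothetical group sizes (not given on the slide): 40 patients per arm
n_intervention, n_control = 40, 40
readmitted_intervention, readmitted_control = 0, 8

total = n_intervention + n_control
total_readmitted = readmitted_intervention + readmitted_control

def hypergeom_pmf(k):
    """P(exactly k of the readmissions fall in the intervention group),
    under the null hypothesis that group membership is irrelevant."""
    return (comb(total_readmitted, k)
            * comb(total - total_readmitted, n_intervention - k)
            / comb(total, n_intervention))

# One-sided Fisher's exact test: chance of a result at least this extreme
p_value = sum(hypergeom_pmf(k) for k in range(readmitted_intervention + 1))
print(f"one-sided p = {p_value:.4f}")
```

With these (invented) group sizes the p-value comes out well below .05, consistent with the paper's claim of significance.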
Theory 1: sampling (a) snowball samples • Pritzker and Applewhite used a “purposive snowball method” • Their target population was MSW graduates of a particular university, practising macro social work • Alumni relations staff, field office and macro faculty were asked to identify alumni engaged in macro practice • Those alumni were asked to fill in the questionnaire, and to pass the request on in turn to other alumni they knew who were engaged in macro practice.
Snowball sampling contd. • This passing on of the request (and ideally those recipients passing it on in turn) is what we mean by a “snowball sample” • As the paper acknowledges, this may limit how representative the sample is, e.g. of people who are not in touch with other alumni • Nonetheless, it is quite common for survey studies.
Sampling (b) random sampling • Oxhandler et al. selected licensed clinical social workers at random from a database • However the database could only be searched by zip code*, so they randomly selected 2,000 zip codes to search by • From those results, they selected only individual social workers • And excluded group practices, agencies, schools; and individuals without a social work degree, email and mailing address *the US equivalent of a postcode
Random sampling contd. • Oxhandler et al. wanted a sample size of 400, and a comparable previous study had a 52% response rate, so they selected 1,000 individuals • This was reduced to 984 for various reasons, e.g. invalid mail or email addresses • 482 responded (49%) • They got quite a good response rate – 10% is not unknown! • But in any survey, remember that the people who respond may not be typical of all the people who were asked to participate
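The arithmetic behind choosing how many people to invite is simple: divide the target sample by the expected response rate. A minimal sketch (rounding up to 1,000 was the researchers' own extra margin):

```python
from math import ceil

target_sample = 400            # responses Oxhandler et al. wanted
expected_response_rate = 0.52  # response rate in a comparable earlier study

# Invite enough people that, at the expected rate, the target is reached
invitations_needed = ceil(target_sample / expected_response_rate)
print(invitations_needed)  # 770 - they then rounded up further, to 1,000
```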
Sampling (c): Randomised Controlled Trials • Bronstein et al. did a randomised controlled trial • Participants were recruited from a clearly defined group, i.e. patients admitted to a particular hospital between certain dates • Clear eligibility criteria: aged 50 or older, with a moderate or high risk of readmission as determined by a scoring protocol • In clinical trials it is important to be clear about your inclusion and exclusion criteria • Participants were then allocated at random into the intervention group or the control group • This is the hallmark of an RCT • The aim is to ensure that the two groups are equivalent
Getting data: Your own questionnaires (a) Questions • Although there is nothing to stop you devising a questionnaire combining quantitative and qualitative elements … • In quantitative research, you should aim to know what you intend to do with the answers before you have collected them! • So we want to use closed questions • Closed questions are ones with a limited, predictable range of answers, e.g. • What is your age in years? ____ • What is your sex?: male / female • Forced choice (coming up) • Likert scales (coming up)
Questions continued: forced choice E.g. A questionnaire handed out to people at Goldsmiths might ask: • Are you • A student • Academic staff • Other staff • A contractor • But beware! Categories should be • Comprehensive (what if someone is a visitor?) • Mutually exclusive (what if someone is both a student and a teacher?)
Forced choice questions (continued) • One way to find problems may be in piloting (coming up) • If you really can’t make your questions comprehensive and mutually exclusive, you can say “tick which best applies”, and/or have an “other” category, with or without a box to fill in details • but think about how you will analyse the answers!
Forced choice questions (continued) • Don’t go for forced choice if more than one option can apply • E.g. Pritzker and Applewhite asked whether practitioners were in jobs open to other professions, such as MBAs, MPHs etc • So they would have asked respondents to tick as many boxes as applicable
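Analysing "tick as many as apply" answers means counting ticks per option across respondents; the percentages are percentages of respondents, so they need not sum to 100. A sketch with invented responses:

```python
from collections import Counter

# Hypothetical "tick all that apply" answers: each respondent returns the
# list of professions their job is open to (invented data)
responses = [
    ["social workers", "MBAs"],
    ["social workers"],
    ["social workers", "MBAs", "MPHs"],
    ["social workers", "MPHs"],
]

# Count each ticked option across all respondents
ticks = Counter(option for answer in responses for option in answer)

# Percentages are of respondents, so the columns need not sum to 100
pct = {option: 100 * n / len(responses) for option, n in ticks.items()}
print(pct)
```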
Likert Scales • E.g. to respond to the question “How do you feel about the service you received?” on a scale such as: Very dissatisfied / Dissatisfied / Neutral / Satisfied / Very satisfied • No hard and fast rules as to how many categories, or whether to include a middle point • But do avoid unbalanced scales, e.g. Poor / Good / Very good / Excellent (more positive options than negative ones)
Wording of questions • Advice may seem obvious, e.g. • avoid leading questions • avoid jargon your respondents won’t understand • But is often forgotten • So use a good checklist, e.g. in Robson, Real World Research (in the library)
(b) Format of the questionnaire • Plenty of advice in books (e.g. Robson) • Do read and follow it, but if you are doing an online questionnaire, also think about how it looks on screen (perhaps even on a mobile phone)
(c) Piloting • Test out a draft version of your questionnaire, ideally on one or more people from the target population • Get them to fill it in, also say how easy it was, and tell you any problems they foresee
(d) Issuing the questionnaire • A physical piece of paper is harder to ignore • But online is more common these days • Plenty of options for online forms • Survey Monkey is the most famous; it has sample questions, and even panels of people willing to do surveys • Google Forms probably gives the most free functionality • See comparison websites for a confusing amount of choice! • Whatever you choose, check you know how to download and analyse results before sending out the invitation
Getting data: Using scale questionnaires • Often known as “psychometric” questionnaires, although they are not limited to psychological concepts • These are questionnaires where the respondent answers a number of questions, then you add up the answers to get a score • E.g. the RSIPAS, used by Oxhandler et al., asked people to respond on a Likert scale to questions such as: • I know how to skilfully gather a history from my clients about their religious/spiritual beliefs and practices • I know what to do if my client brings up thoughts of being possessed by Satan or the Devil • There is a large selection of such tests, validated and published • Developing and validating a psychometric questionnaire is a significant piece of work in its own right
Scale questionnaires (continued) • Make sure you get hold of the instructions (in a manual or journal article). • Some important things to know • Are there any “reverse scored” questions? • E.g. if the RSIPAS had included the question “I struggle to deal with my clients’ spiritual issues”, you would need to turn low scores into high ones (and vice versa) before you added the score into the total • Do you add up the scores, or do you take an average? • Does the scale divide up into subscales? • E.g. RSIPAS has four subscales • If there are subscales, can they be combined into a grand total (as Oxhandler et al. did) or do they measure totally different things that need to be kept separate?
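Reverse scoring on a 1-to-5 Likert item just means replacing each answer a with 6 − a before summing. A sketch with hypothetical item responses (the items and values are invented for illustration):

```python
# Hypothetical answers to a four-item scale, 1 = strongly disagree .. 5 = strongly agree
answers = [4, 2, 5, 3]
reverse_scored = {1}  # zero-based index of the reverse-scored item (invented)

scale_max = 5
# On a 1..5 scale, reversing a score a gives (5 + 1) - a
adjusted = [
    (scale_max + 1 - a) if i in reverse_scored else a
    for i, a in enumerate(answers)
]
total = sum(adjusted)
print(adjusted, total)
```

Whether the final score is this total or its average is exactly the kind of detail the scale's manual should settle.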
Scale questionnaires (continued) • The manual or journal article should also give information on • The norm group (the people the scale was tested on) and what their mean score was • The reliability (i.e. consistency) of the scale (as tested on the norm group) • Certainly the internal reliability, measured by Cronbach’s alpha • As a rule of thumb, .7 is acceptable, .8 is good (1 is the maximum) • For the RSIPAS, Cronbach’s alpha for the norm group was .95 • Possibly, the validity of the scale (the extent to which it measures what it is supposed to measure) • although this is usually difficult to say objectively
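Cronbach's alpha can be computed straight from its definition: alpha = k/(k−1) × (1 − sum of item variances ÷ variance of the totals). A sketch on invented scores for a four-item scale:

```python
import statistics

# Hypothetical item scores: one row per respondent, one column per item
scores = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]

k = len(scores[0])               # number of items
items = list(zip(*scores))       # transpose: one tuple of scores per item

item_variances = [statistics.variance(item) for item in items]
total_variance = statistics.variance([sum(row) for row in scores])

# Cronbach's alpha: high when the items vary together (are internally consistent)
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```

These made-up respondents answer consistently across items, so alpha comes out high, as it does for the RSIPAS norm group (.95).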
Getting data: Randomised controlled trials In outline, here is what Bronstein et al. did: eligible patients were recruited → allocated at random to the intervention group (home visit and phone calls) or the control group (usual care) → the outcome (readmission) was then measured in each group. All RCTs follow a similar format, although often we also measure the DV after the random allocation, but before treatment.
Randomised controlled trials: Control groups • Why do we like a control (or comparison) group? • To compare the intervention group against what would have happened anyway, e.g. • In Bronstein et al., to show what would have happened without any intervention • When testing a treatment for a disease, to compare against the extent that people might have got better anyway, without any treatment • And to attempt to isolate the specific effect of the intervention from any general effects, e.g. just being given attention, or a pill that they believe works • The “placebo effect”
Randomised controlled trials: the ideal • The ideal is a triple-blind placebo-controlled trial • The control treatment looks just like the real treatment (e.g. a dummy pill that looks and tastes just like the real one, called a placebo) • Neither the participant, nor anyone in contact with them, knows who has the real treatment and who has the placebo (double blind) • Even the person doing the statistics does not know which group is which until they have finished their analysis (triple blind)
Randomised controlled trials (cont) • In social science studies it is unlikely anyone is going to be blind to the conditions • For ethical reasons, it is likely that even the control group know that there was an alternative treatment which they didn’t get • This means we have to be alert to possible internal validity issues, e.g. the practitioners treating people differently in other ways (such as being more positive when talking to them), the treatment group being enthusiastic or co-operative, the control group getting disillusioned, etc.
Randomised controlled trials • And in social science, we may not even allocate the participants at random • E.g. the treatment may be given to people in one region, and not to people in another region • This leads to other possible validity issues, because the regions may not be comparable • Nonetheless, RCTs are becoming popular, e.g. in public policy • See this paper by the Cabinet Office and Ben Goldacre, with more hints and examples: https://www.gov.uk/government/publications/test-learn-adapt-developing-public-policy-with-randomised-controlled-trials
3(a) Pritzker and Applewhite • All of their results were descriptive statistics: • Percentages of who did what, who earned what, etc. • Percentages who said they did, or did not, perform various functions in their current and prior positions • As I said in the last lecture, simple descriptive statistics can sometimes be useful • Whatever it is you research, your Introduction should justify why it is useful!
3(b) Oxhandler et al. • They gave descriptive statistics of the answers to the questions on the scale questionnaire • But their main results were about which characteristics of social workers correlate with the RSIPAS score • i.e. what characteristics predict the use of spirituality/religion in their practice • Note that this is quantitative research, so they asked closed, predetermined questions • the only characteristics they could find out about were the ones they asked about.
Oxhandler et al. (contd) • Note that they couldn’t resist also presenting regression results • It is quite usual for quantitative researchers to report regression results • or even more complicated results, such as moderation, mediation, SEM path analysis, etc. • Beware of going too far outside your comfort zone here! • And don’t do complicated analyses that don’t illuminate your hypothesis or research question • But I will explain in a few slides the difference between correlation and regression
Correlations, partial correlation and regression (1) • We can visualise a correlation as a Venn-style diagram: two overlapping circles, one labelled Depression, the other Time off work • Correlation asks about how much shared variance there is, and whether it is a significant proportion • The overlap represents the shared variance – the extent to which depression predicts time off work. (To be more precise, the overlap is the square of the correlation.)
Correlations, partial correlation and regression (2) • What happens when we take into account another variable – which might correlate with the first one? • Picture a third circle, Stress, overlapping both Depression and Time off work
Correlations, partial correlation and regression (3) • Label the overlaps with Time off work: a is the part shared only with Depression, c the part shared only with Stress, and b the part shared with both • We might ask how much they predict between them (a + b + c) – which is not just the two effects added up, because that would double-count b • Multiple regression can tell us this (in its overall result, R-squared)
Correlations, partial correlation and regression (4) • We might also ask: if we knew how much time off work was predicted by depression, does stress add any extra useful predictiveness (area c)?
Correlations, partial correlation and regression (5) • Whether area c is significant is measured by a partial correlation, or by whether stress is significant in a multiple regression* • *in the coefficients table
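The quantities in these diagrams can all be computed from the pairwise correlations alone. The sketch below uses invented correlation values (not figures from any paper) to get the single-predictor shared variance, the two-predictor R-squared (a + b + c), and the partial correlation of stress controlling for depression (area c):

```python
from math import sqrt

# Hypothetical correlations, invented for illustration:
r_yd = 0.60  # time off work ~ depression
r_ys = 0.50  # time off work ~ stress
r_ds = 0.40  # depression ~ stress

# Shared variance of one predictor is the squared correlation (areas a + b)
print(f"depression alone explains {r_yd**2:.0%} of the variance")

# R-squared from both predictors together (areas a + b + c);
# note it is less than r_yd² + r_ys², because that would double-count b
r_squared = (r_yd**2 + r_ys**2 - 2 * r_yd * r_ys * r_ds) / (1 - r_ds**2)

# Partial correlation of stress with time off work, controlling for
# depression: does area c add anything beyond depression?
partial_s = (r_ys - r_yd * r_ds) / sqrt((1 - r_yd**2) * (1 - r_ds**2))

print(f"R-squared = {r_squared:.3f}, partial r for stress = {partial_s:.3f}")
```

With these invented numbers, R-squared (about .44) exceeds depression's .36 but falls short of .36 + .25, exactly the "don't double-count b" point.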
Correlations, partial correlation and regression (6) • Such results are all very well, but sometimes it is simply the correlations that are important • Reminder: beware of going outside your comfort zone, and/or of doing clever analysis that doesn’t illuminate your research question
3 (c) Bronstein et al. • Randomised controlled trial • As previously mentioned, they gave the outcome for • The group who had been given the treatment • The control group who had not been given the treatment • And they said that this difference was statistically significant • Which brings us onto the next section!
Inferential statistics • In the last lecture, I said that if you report on a relationship between two variables, people usually expect to see inferential statistics • Do you remember my “straw man” case study?
Case study 1: dice throwing • Suppose a researcher believes that men and women differ in their ability at throwing dice • i.e. in his research hypothesis, the Independent Variable is sex and the Dependent Variable is dice score • He tests this by getting five men and five women to throw dice • N.B. this is not a good sample size, but we are only making a point here!
Case study 1: discussion • Of course, the differences between the two groups’ average scores arose by chance • We could easily have got this result if our two groups had both been men • So it is not good enough to simply find a relationship between two variables (e.g. between sex and dice throwing score) • We need to show that the result is too extreme to be plausibly explained by chance
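A quick simulation makes the point: if both groups throw the same fair dice, sizeable gaps between the group averages still turn up regularly. A sketch (the group size matches the slide; ten throws per person is an invented detail):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def mean_dice_score(n_people, throws_per_person=10):
    """Average score when every group member throws a fair die several times."""
    n_throws = n_people * throws_per_person
    return sum(random.randint(1, 6) for _ in range(n_throws)) / n_throws

# Run the "experiment" many times with NO real group difference:
# both groups throw identical fair dice, so any gap is pure chance
differences = [abs(mean_dice_score(5) - mean_dice_score(5)) for _ in range(10_000)]

big_gaps = sum(d >= 0.5 for d in differences) / len(differences)
print(f"{big_gaps:.0%} of chance-only runs show a gap of 0.5 or more")
```

Inferential statistics formalise exactly this question: how often would a gap this big appear if chance were the only thing operating?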