Three case studies of integrating qualitative data and QDAS into evaluations, with varied team sizes and funders' requirements, and the lessons learned about using qualitative data effectively in evaluation practice.
Teaching/managing practitioner-researchers in using N6: evaluations
Dr Chih Hoong Sin
Introduction
• Qualitative data, QDAS, and evaluations
  • Expectations for, and uses of, each
• Three case studies:
  • Different team sizes, distribution, and abilities
  • Different requirements by funders
  • Different types and amounts of qualitative data
• Lessons learned
Qualitative data and evaluation
• Cabinet Office guidance exists (Spencer et al. 2003), but there is still little understanding of quals or of QDAS
• Quant components of evaluation tend to be treated as the remit of 'specialists'
• Quals are thought of as something that 'non-specialists' can do: "just ask a few questions"
QDAS and evaluation
• 'Quality stamp' and 'wow factor'
• Unrealistic expectations (e.g. "it's quick, so let's collect more data!")
• Lack of planning to interrogate data systematically and thoroughly (a minimal sketch of the first step follows this slide):
  • Count
  • Describe
  • Theorise (under-performed)
• Can damage the quals and QDAS enterprise
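N6 is a GUI package with no scripting API, so the sketch below is purely illustrative of what the 'count' step of interrogation amounts to. It assumes coded segments have been exported to a hypothetical CSV file; the filename and column names are invented for illustration:

```python
# Illustrative only: assumes coded segments exported from the QDAS package
# to a CSV with (hypothetical) columns: document, code.
import csv
from collections import Counter

def count_codes(path):
    """Tally how often each code was applied across all documents."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["code"]] += 1
    return counts

for code, n in count_codes("coded_segments.csv").most_common():
    print(f"{code}: {n}")
```

Counting is only the shallowest level of interrogation; describing and theorising mean returning to the text behind each tally, which is exactly the step the slide flags as under-performed.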
Case 1: Street Wardens Evaluation
• The study:
  • 3-year, multi-component, programme/case studies, ODPM
  • Around 100 SSIs in each of 4 research cycles
• Funder's requirements:
  • Need outcome data
  • Need 'richness' of local accounts
• The team:
  • Ranged from 15 individuals employed by 2 organisations, multi-site, to 6 individuals in 1 organisation
  • Most had no quals or QDAS experience; 1 with quals expertise but not in QDAS
Street Wardens Evaluation
• How it worked:
  • Trained as a group, with outside expertise
  • Coding tree designed by one person, refined through group discussion
  • Codes as descriptive themes
  • Time to practise with real data
  • Everyone coded their assigned documents in their entirety
  • Coding conducted multi-site
  • Centrally merged on a weekly basis (a conceptual sketch follows this slide)
  • Ongoing support, internal
  • Analysis by a smaller core team, mainly descriptive
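N6 provided its own project-merge facility, so the following is only a conceptual sketch of what the weekly central merge amounts to; all names and data structures are invented. Each site is assumed to export a mapping of document to the set of codes applied:

```python
# Conceptual sketch of the weekly central merge (not an N6 API).
from collections import defaultdict

def merge_sites(site_exports):
    """Union the codes applied to each document across all sites.
    Each coder was assigned whole documents, so the same document
    appearing in two sites' exports signals a slip in the division
    of labour, not a legitimate overlap."""
    merged = defaultdict(set)
    seen_at = {}
    for site, coding in site_exports.items():
        for doc, codes in coding.items():
            if doc in seen_at and seen_at[doc] != site:
                print(f"warning: {doc} coded at both {seen_at[doc]} and {site}")
            seen_at[doc] = site
            merged[doc] |= codes
    return dict(merged)

merged = merge_sites({
    "site_a": {"interview_01": {"visibility", "reassurance"}},
    "site_b": {"interview_02": {"reassurance", "liaison"}},
})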
Case 2: Evaluation of MMDU
• The study:
  • Intended to run for 18 months, multi-component, Home Office
  • 21 SSIs, 3 focus groups
• Funder's requirements:
  • Hard outcome data
  • Reduce quals
• The team:
  • 1 full-time on-site researcher, 2 others
  • All had quals training at graduate level, but no QDAS experience
Evaluation of MMDU
• How it worked:
  • Trained as a group, internally
  • All involved in the design/conduct of fieldwork
  • All involved in generating the coding structure
  • Time to practise with real data
  • Each person coded all documents using a designated subset of codes (a completeness check is sketched after this slide)
  • Merged weekly
  • Regular group analysis and discussion, mainly descriptive but with rudimentary theory-building
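Because labour here was divided by codes rather than by documents, every coder had to touch every document. A hypothetical pre-merge completeness check might look like the sketch below; all names are invented, and N6 itself offers no such scripting:

```python
# Hypothetical check: has every coder finished every document before the merge?
coders = {"researcher_a": {"service_use", "outcomes"},
          "researcher_b": {"partnership", "context"}}
documents = ["ssi_01", "ssi_02", "focus_group_1"]

# completed[coder] = documents that coder has finished coding so far
completed = {"researcher_a": {"ssi_01", "ssi_02", "focus_group_1"},
             "researcher_b": {"ssi_01", "focus_group_1"}}

for coder in coders:
    missing = set(documents) - completed.get(coder, set())
    if missing:
        print(f"{coder} still to code: {sorted(missing)}")
```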
Case 3: Evaluation of NDST
• The study:
  • 2-year evaluation, process and outcome, NDST
  • 10 SSIs in the first research cycle
• Funder's requirements:
  • Hard outcome data
  • 'Grand' theory
• The team:
  • 6 individuals, with 2 working on quals/QDAS
  • 1 with no quals experience, 1 with extensive quals experience but not in QDAS
Evaluation of NDST
• How it worked:
  • Trained as a group, internally
  • All involved in the design/conduct of fieldwork
  • Small number of documents, so there was a temptation not to use N6
  • Analysis and coding done together
  • Two individuals working closely and in constant discussion
  • Theory-building
Lessons learned
• Balance research ideals with pragmatism (team size, abilities, distribution, needs)
• Group training is important
• You don't need to know everything
• Make it real (it won't self-destruct!)
• Play → familiarity → confidence
• Look ahead (e.g. housekeeping, longitudinal work)