
Teaching / managing practitioner-researchers in using N6: evaluations


Presentation Transcript


  1. Teaching / managing practitioner-researchers in using N6: evaluations Dr Chih Hoong Sin

  2. Introduction • Qualitative data, QDAS and evaluations • Expectations for, and uses of, QDAS • Three case studies: • Different team sizes, distribution and abilities • Different requirements by funders • Different types and amounts of qualitative data • Lessons learned

  3. Qualitative data and evaluation • Cabinet Office guidance exists (Spencer et al. 2003), but there is still little understanding of quals or of QDAS • Quant components of evaluation tend to be thought of as the remit of ‘specialists’ • Quals are thought of as something that ‘non-specialists’ can do: “just ask a few questions”

  4. QDAS and evaluation • ‘Quality stamp’ and ‘wow factor’ • Unrealistic expectations (e.g. “it’s quick, so let’s collect more data!”) • Lack of planning to interrogate data systematically and thoroughly: • Count • Describe • Theorise (under-performed) • Can damage the quals and QDAS enterprise

  5. Case 1: Street Wardens Evaluation • The study: • 3-year, multi-component, programme/case studies, ODPM • Around 100 SSIs in each of 4 research cycles • Funder’s requirements: • Need outcome data • Need ‘richness’ of local accounts • The team: • Ranged from 15 individuals employed by 2 organisations, multi-site, to 6 individuals in 1 organisation • Most have no quals or QDAS experience; 1 has quals expertise but not in QDAS

  6. Street Wardens Evaluation • How it worked: • Trained as a group, with outside expertise • Coding tree designed by one person, refined through group discussion • Codes used as descriptive themes • Time to practise with real data • Everyone coded their assigned documents in their entirety • Coding conducted multi-site • Centrally merged on a weekly basis • Ongoing support, provided internally • Analysis by a smaller core team, mainly descriptive
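To make the ideas of a coding tree and a weekly central merge concrete for readers unfamiliar with QDAS, here is a minimal Python sketch of a descriptive coding tree and a merge of coding done at different sites. The theme names, code names, site labels and document references are hypothetical illustrations, and the structure shown is not N6's own file format or API.

    from collections import defaultdict

    # Hypothetical descriptive coding tree: top-level themes with child codes.
    # Illustrative only -- not N6's internal structure.
    coding_tree = {
        "warden role": ["patrolling", "reassurance", "liaison"],
        "outcomes": ["perceived safety", "incident reports"],
        "implementation": ["recruitment", "training", "partnership working"],
    }

    def merge_site_coding(central, site_coding):
        """Fold one site's coded references into the central store (the weekly merge)."""
        for code, refs in site_coding.items():
            central[code].extend(refs)
        return central

    # Each site records only references to the coded passages (site, document, location).
    site_a = {"patrolling": [("site A", "interview 03", "paras 10-14")]}
    site_b = {"perceived safety": [("site B", "interview 11", "paras 2-5")]}

    central = defaultdict(list)
    for weekly_batch in (site_a, site_b):
        merge_site_coding(central, weekly_batch)

    print(sorted(central))  # ['patrolling', 'perceived safety']

In this sketch the coding_tree dictionary plays the role of the descriptive themes designed by one person and refined in group discussion, while merge_site_coding stands in for the central weekly merge of documents coded at different sites.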

  7. Case 2: Evaluation of MMDU • The study: • Intended for 18 months, multi-component, Home Office • 21 SSIs, 3 focus groups • Funder’s requirement: • Hard outcome data • Reduce quals • The team: • 1 full-time on-site researcher, 2 others • All have quals training at graduate level, no QDAS experience

  8. Evaluation of MMDU • How it worked: • Trained as a group, internally • All involved in the design and conduct of fieldwork • All involved in generating the coding structure • Time to practise with real data • Each coded all documents using a designated set of codes • Merged weekly • Regular group analysis and discussion, mainly descriptive but with rudimentary theory-building

  9. Case 3: Evaluation of NDST • The study: • 2-year evaluation, process and outcome, NDST • 10 SSIs in the first research cycle • Funder’s requirement: • Hard outcome data • ‘Grand’ theory • The team: • 6 individuals, with 2 on quals/QDAS • 1 with no quals experience, 1 with extensive quals experience but not in QDAS

  10. Evaluation of NDST • How it worked: • Trained as a group, internally • All involved in the design and conduct of fieldwork • Small number of documents, so there was a temptation not to use N6 • Analysis and coding done together • Two individuals working closely and in constant discussion • Theory-building

  11. Lessons learned • Balancing research ideals with pragmatism (team size, abilities, distribution, needs) • Group training is important • You don’t need to know everything • Make it real (it won’t self-destruct!) • Play → familiarity → confidence • Look ahead (e.g. housekeeping, longitudinal work)
