Evaluation Using a Collaborative Action Research Approach

Presentation Transcript


  1. Evaluation Using a Collaborative Action Research Approach Professor Eileen Piggot-Irvine, Royal Roads University. Presentation to BC CES AGM 2012

  2. A constant refrain from evaluators is that of ‘difficulty and struggle’: difficulty associated with evaluation design, co-operation, acceptance, impact … the list is endless! • Many believe that traditional evaluation has outlived its usefulness … but can we merge the best of the rigor from traditional approaches with more collaborative, responsive, ‘developmental’, ‘capacity building’ designs? • AR allows for such a ‘merger’ because it takes an eclectic approach to the inclusion of methods.

  3. Content • Traditional and recent views on evaluation • Alignment with action research (AR) • A case study of an AR evaluation approach

  4. Traditional Approaches to Evaluation • Evaluator role: external, distanced, clinical (= evaluator objectivity) • Compliance driven; predominantly summative • Non-participatory process: low ownership, limited learning? • Linear, deductive; rigid process models: limited development? • Outcomes driven; stops after the evaluation

  5. Recent, More ‘Developmental’ Ideas on Evaluation • Focus on growing capacity (ECB); on-going after the evaluation • More participatory, consultative and dialogue based; greater internal organisational involvement • Individual & organisational understanding & learning; fosters DLL; considers andragogy • Formative & summative; real-time feedback; emphasis still on data-based decision making? • Linked to change & development; responsive process models; iterative processes • Embedded, sustainable, self-renewing • Honesty about evaluator objectivity

  6. A Brave, or Foolish, Attempt to Bring Together the Best of Traditional and Recent Developmental Ideas

  7. Evaluation AR (EAR) (acknowledging that this interpretation of AR will not be universally loved!) An approach that takes the best of the rigor from traditional evaluation and fits it within a participatory, responsive, developmental design

  8. What is AR Generally? • AR is not easy to define, although a plethora of attempts exist in the literature … pursues the dual purpose of action and research … a cyclical or spiral process (Dick, 2002) • The word ‘action’ in action research is key. It is an approach that always involves participants making or implementing change, rather than just investigating an issue • The word ‘research’ in action research is also important. Rather than making ad hoc decisions, the participants in projects make informed decisions (Piggot-Irvine, 2010)

  9. Underpinnings • Transparency about values and ‘bias’ occurs in both the evaluation process and reporting • Bias can be minimised via evaluator reflexivity and transparency, stakeholder critique of all aspects of the process, triangulation of methods, and in-depth dialogue about analyses • Objectivity in AR “lies in the systematic and open attempt to check the interpretation of what happens against evidence … in checking with other people whether those interpretations are the most appropriate ones in the light of the data. … objectivity is ensured by seeking wide and continuous criticism of conclusions provisionally reached.” (Pring, 2000: 135)

  10. Underpinnings • Dialogical co-construction of improvement - processes and data collection designed to create a ‘critical community’ with relational ‘fabric’ in order to advance ownership of the evaluation outcomes • New knowledge created by stakeholders is central and valued more highly than external, distanced research that is seen to have limited impact (see Cohen, Manion & Morrison, 2007, for such critique) • These underpinnings are now well accepted amongst constructivist scholars (see Lincoln & Guba, 2005) … and they have many overlaps with ‘developmental evaluation’ underpinnings (Quinn Patton, 2010) • What differs in my model…

  11. The Evaluation AR (EAR) Model • Has cyclic (iterative) COLLABORATIVE phases of: issue identification; evaluation of the current situation (reconnaissance); implementation of improvements; evaluation of implementation; reporting; on-going improvement • But it also has more detailed planning and more rigorous data collection, extensive and intensive ‘real time’ feedback, stronger evidence-based reflection, and more objective summative evaluation than developmental models. It’s a merger of traditional and recent!

  12. A Case Study • Context: • Evaluation of an 18-month national, government/Ministry-funded, non-credentialed leadership development program. • Nationally designed; regionally interpreted, governed and facilitated. • The program included elements of: • residential seminar sessions • on-line learning • professional learning groups (PLGs) • mentoring • individual ‘inquiry’ projects

  13. Three key research questions guided the evaluation: 1. Is the program effective professional development for aspiring senior leaders? 2. At the conclusion of the program, are the aspirants confident and do they have the skills and knowledge required for senior leadership? 3. At the conclusion of the program, are the aspirants prepared for recruitment to senior leadership roles?

  14. The research examined a wide range of variables: • regional governance • program quality • design and delivery • recruitment and retention • program outcomes • … and involved a wide range of stakeholders: • funders (national Ministry officials) • designers (6 regional governing committees and regional co-ordinators) • facilitators in 6 regions • participants • current managers of aspirants

  15. Why I Used this AR Model as the Design for the Evaluation • The funders had never used improvement-oriented evaluation before – I wanted to stretch their thinking but I had to take small steps! • EAR contained both development and rigorous data collection – a safer bridge than going fully developmental • Dissension existed amongst multiple stakeholder groups – I thought on-going collaboration might reduce that • There was a high-level need for ownership of the evaluation results • Continuous program improvement was my goal – EAR allows for that • Immediate feedback was important (summative evaluation would be too late) • A Mixed Method (MM) approach could be employed to meet program complexity issues.

  16. Reconnaissance Phase • ‘Lead Team’ (LT) of key stakeholders and evaluators formed, and an action plan for current-situation data collection developed • Literature review on ‘effective leadership development’ conducted by the LT and translated into evaluation criteria • Data collected on the current situation in terms of: - pre-development knowledge, skills and experience of aspirants, via confidential surveys to aspirants and their managers; - interviews and focus groups with program designers and facilitators to determine espousals for the program; - documentary analysis of program policy, guidelines and plans. All of this was done collaboratively and consultatively.

  17. Improvement Phase • Reconnaissance data analyzed by evaluators • LT collaborated in decision-making on the program improvements needed and in action planning for those • Improvements communicated consultatively in national meetings to all designers and facilitators • Improvements rolled out • Comprehensive data collection (interviews, surveys, observation, focus groups, documentary analysis), followed by feedback and improvement, occurred three times during the program implementation – it was iterative!

  18. Evaluation Phase • Data collected at the end point of the program in terms of: - new knowledge, skills and experience gained by aspirants, via confidential surveys to aspirants and their managers; - interviews and focus groups with participants, their managers, program designers and facilitators to determine program impact; - documentary analysis of program feedback and participation; and - on-line use analysis. • Data collected 6 months post-program to determine participant success in gaining senior leadership positions and perceptions of program effectiveness.

  19. Distinctive Features of the Design - Rigorous data collection via MM - A real-time, intensive and specific feedback approach with one region

  20. MM Data Collection • Diverse demands of the evaluation required equally diverse data. MM met this. It: - is increasingly used (Russ-Eft & Preskill, 2009), particularly as an alternative to generalised quantitative surveys; - often requires large teams with a wide range of research skills (Rallis & Rossman, 2003); - sits in the middle of the post-positivistic to constructivist paradigm continuum (Leech, Dellinger, Brannagan & Tanaka, 2008) and is an eclectically orientated paradigm (Mutch, 2009); and - requires that quantitative (Qn) and qualitative (Ql) data complement each other, allowing for more robust analysis (Johnson & Onwuegbuzie, 2004). See Youngs and Piggot-Irvine (2011) for a full outline of the multi-level, multiphase MM approach employed.

  21. The Distinctive Feedback Approach • All sites and stakeholders were provided with on-going (non-intensive) formative feedback as well as three-monthly summative reporting • One site received immediate, real-time, action science based (one-to-one, intensive, non-defensive, bilateral dialogue) formative feedback to program designers and facilitators, as well as summative reporting • Use of intensive formative feedback differed markedly from the traditional provision of solely summative activity (usually perceived as time-consuming, less threatening and more ‘objective’) • This was new for a national evaluation • See Piggot-Irvine (2010) for substantive outcomes from this feedback approach on its own

  22. The enhanced feedback created deep critical analysis and immediate improvement

  23. Reflections on the EAR from the funding director

  24. Exceptionally positive feedback on the evaluation design and outcomes (e.g. reporting) was provided by the national funding director and all other stakeholders, especially those receiving intensive real-time feedback. The ultimate test of the impact of the evaluation design was whether it would be accepted for future evaluations. This eventuated the following year!

  25. References
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). Abingdon: Routledge.
Guskey, T. R. (2000). Evaluating professional development. Thousand Oaks: Corwin Press, Inc.
Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14-26.
Leech, N. L., Dellinger, A. B., Brannagan, K. B., & Tanaka, H. (2008). Evaluating mixed research studies: A mixed methods approach. Journal of Mixed Methods Research, 4(1), 17-31.
Lincoln, Y. S., & Guba, E. G. (2005). Paradigmatic controversies, contradictions, and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (pp. 191-215). Thousand Oaks: Sage Publications.
Mutch, C. (2009). Mixed method research: Methodological eclecticism or muddled thinking? Journal of Educational Leadership, Policy and Practice, 34(2), 18-30.
Piggot-Irvine, E. (2010). Confronting evaluation blindness: Evidence of influence of action science based feedback. American Journal of Evaluation, 31(3), 314-325. DOI: 10.1177/1098214010369251
Pring, R. (2000). Philosophy and educational research. London: Continuum.
Quinn Patton, M. (2010). Developmental evaluation: Applying complexity concepts to enhance innovation and use. Guilford Press.
Rallis, S. F., & Rossman, G. B. (2003). Mixed methods in evaluation contexts: A pragmatic framework. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 491-512). Thousand Oaks: Sage Publications.
Rossman, G. B., & Wilson, B. L. (1985). Numbers and words: Combining quantitative and qualitative methods in a single large-scale evaluation study. Evaluation Review, 9, 627-643.
Russ-Eft, D., & Preskill, H. (2009). Evaluation in organizations: A systematic approach to enhancing learning, performance, and change (2nd ed.). New York: Basic Books.
Youngs, H., & Piggot-Irvine, E. (2011). Evaluating a multiphase triangulation approach to mixed methods: The research of an aspiring school principal development program. Journal of Mixed Methods Research. DOI: 10.1177/1558689811420696
