
Techniques to Evaluate Effective Learning


Presentation Transcript


  1. Techniques to Evaluate Effective Learning Masterclass Two Dr Carol Marrow, Emeritus Associate Professor, University of Cumbria, Lancaster; Associate Professor, Robert Kennedy College, Zurich, Switzerland.

  2. Learning Outcomes • Understand quality assurance reporting techniques that develop knowledge to influence change and development. • Be able to creatively configure findings, drawing from existing policy, evidence, current practice and upcoming initiatives. • Be able to present findings in a high-level written report, seminars, posters and case studies.

  3. Introductions • About your role • About me • Name three things you hope to get from the day

  4. Evaluation • Write down briefly your understanding of the term evaluation

  5. Evaluation • What is evaluation? • “Evaluation determines the merit, worth, or value of things. The evaluation process identifies relevant values or standards that apply to what is being evaluated, performs empirical investigation using techniques from the social sciences, and then integrates conclusions with the standards into an overall evaluation or set of evaluations” (Scriven, 1991). Scriven, M. Reflecting on the past and future of evaluation. The Evaluation Exchange: A Periodical on Emerging Strategies in Evaluation, Volume IX, No. 4, Winter 2003/2004.

  6. Evaluation and Assessment What is the difference between evaluation and assessment? • Evaluation appraises the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness. • Assessment is an on-going process aimed at improving student learning, programs, and services that involves a process of: • publicly sharing expectations • defining criteria and standards for quality • gathering, analyzing, and interpreting evidence about how well performance matches the criteria • using the results to document, explain, and improve performance • Effective learning, development and evaluation help to promote a

  7. Why evaluate effectively? • The HEE Quality Framework 2016/17 lays out clear aims regarding quality assurance. • This can involve demonstrating particularly dynamic and contextual values, which go beyond core metrics, e.g.: • Continuous improvement of the quality of education and training • Empowering learners • Adaptability and receptivity to research and innovation

  8. Evaluating Transformative Learning • With the nearest person, talk through: • What activities do you currently (knowingly) evaluate? • Do you think this is effective evaluation? Why/why not? • What kind of things would show you that transformative learning had taken place? • Would you be happy to share your thoughts with the group?

  9. Principles and Practicalities • Evaluation often begins with a straightforward question: “Does it work?” or “What is the impact?” • But this is nearly always the wrong question.

  10. Principles and Practicalities • Evaluation is always political because… • Where there is policy, there is politics • Simply ‘collecting data’ without a sense of strategy is likely to be ineffective. • Evaluation often involves a ‘they’: • E.g. commissioners; line managers; stakeholders; participants • Before any evaluation, we need to ask: • Who are the ‘they’? • What do ‘they’ think they want? • What are ‘they’ going to find credible?

  11. Effective Evaluation • Effective evaluation should not tell us simply ‘what works’; • Rather, it should tell us: ‘what works in which circumstances, and for whom?’ • Or: “What works, for whom, in what respects, to what extent, in what contexts, and how?”

  12. Making Sense of it all: Contexts, Mechanisms, Outcomes • Any role, programme, intervention etc. in healthcare is complex. The numerous activities around it mean finding a single ‘theory’ to ‘drive’ it is difficult. • Instead, it can help to break down activities into different propositions. • Propositions can be formalised as contexts, mechanisms and outcomes: • An outcome happens because of the action of some mechanism which only operates in certain contexts. • If the right processes operate in the right conditions, then the right outcome appears.
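
Purely as an illustration of the proposition structure above, here is a minimal sketch in Python; the CMOConfiguration class, the programme_theory list and all example content are hypothetical, not part of the original material.

from dataclasses import dataclass

@dataclass
class CMOConfiguration:
    """One proposition: in this context, this mechanism produces this outcome."""
    context: str    # e.g. the institutional setting, staffing, learners' prior experience
    mechanism: str  # e.g. a specific activity or process the programme introduces
    outcome: str    # e.g. a result that would not have happened otherwise

# A hypothetical programme theory expressed as a small set of propositions.
programme_theory = [
    CMOConfiguration(
        context="placement area with a named mentor and protected supervision time",
        mechanism="weekly structured supervision and feedback",
        outcome="student achieves placement learning outcomes on schedule",
    ),
    CMOConfiguration(
        context="short-staffed placement area, mentor frequently reallocated",
        mechanism="ad hoc, opportunistic feedback only",
        outcome="learning outcomes achieved late or not at all",
    ),
]

for prop in programme_theory:
    print(f"IF {prop.context}\n  AND {prop.mechanism}\n  THEN {prop.outcome}\n")

Writing propositions down in this form makes each one something an evaluation can later test against data.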

  13. Task • Take a sheet of paper and make three columns: Context Mechanism Outcome

  14. Task • Think about what you do in your role. Try to formulate your role in terms of a context, a mechanism and an outcome: • A context: individual participants; their interrelationships; the institutional location; the surrounding infrastructure • A mechanism: an intervention; a process; a specific activity • An outcome: a result that would not have happened without the previous context and mechanism • NB. These are loose boundaries at this point, so don’t worry about getting the ‘right’ column! • Now share this with the people near you. How do your columns compare? How easy was the task?

  15. CMO and Evaluation • The point is not simply to list *everything* we do. This could be endless. • Rather, it is to notice how these are configured as a way of explaining what is happening. • Contexts, mechanisms and outcomes ‘take their meaning from their function in explanation and their role in testing those explanations.’ (Pawson 2013: 26) • They allow us to articulate the theory of what makes an activity effective. • This requires ‘reusable conceptual platforms’: the way we configure CMOs is based on our pre-understanding of what we know about an activity • We can apply this to any programme, activity, etc. in order to measure success and analyse causes. Pawson, R. (2013) The Science of Evaluation: A Realist Manifesto. Sage Publications.

  16. Break

  17. What can Evaluation tell us? • What can evaluation show, then? • It can identify regularities and patterns in outcomes; • It can offer interpretations and explanations for why those patterns are there; • It can identify specific contexts or mechanisms that are enabling or disabling • These are all driven by understanding the propositions, theories or logic being tested by the evaluation.

  18. Doing an Evaluation: More Circles [diagram taken from Pawson (2013)]

  19. The Effectiveness Cycle • The key here is not to replicate a ‘scientific experiment’ • ‘The weakness of the hypothetico-deductive system, insofar as it might profess to offer a complete account of the scientific process, lies in its disclaiming any power to explain how the hypotheses came into being.’ (Medawar 1982: 135) • But to allow for the dynamics of transformative learning to be captured… • …And to inform its development. • Evaluation, in this sense, must be an ongoing and dynamic ‘effectiveness cycle’ (Kazi 2003: 30) Kazi, M.A.F. (2003) Realist Evaluation in Practice: Health and Social Work. Sage Publications.

  20. Outcome evaluation • Outcome and Impact Evaluation • Outcome evaluations measure to what degree programme objectives have been achieved (i.e. short-term, intermediate, and long-term objectives). This form of evaluation assesses what has occurred because of the programme, and whether the programme has achieved its outcome objectives. • For example: the student received basic safety induction within the first 24 hours of the placement. If a number of students have problems with this outcome, the programme/placement outcomes will need examining and possibly revisiting. • The ‘value’ of the intervention and programme should be consistently assessed.

  21. Capturing Effectiveness and Impact • Outcome evaluations can be simple or complex, involving a single programme, the comparison of multiple programmes, or multiple systems. The main focus of outcome evaluation is to: • Compare programme participants before and after they receive the programme, to see if they have made improvements on key outcomes • Compare programme participants with an equivalent group of individuals who didn’t receive the programme (e.g. a control group, preferably randomly assigned if feasible) to determine if the programme group exceeded the gains made by the control group (a minimal sketch of these two comparisons follows below) • Changes in contexts of learners • Contexts are typically less likely to change during a programme; as such they can be tracked using a variety of existing measures, such as audit, as well as qualitative data • Changes in mechanisms of learning • Mechanisms can be tracked via programme procedures (assessment, routine recording etc.) • Changes in activities/content of the programme can be captured through qualitative data, such as focus groups or supervisions • Tracks what is being done, over time
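
The two comparisons described above can be illustrated with a minimal sketch, assuming SciPy is available; all scores are invented and the choice of t-tests is an illustrative assumption, not a prescribed method.

from scipy import stats

before = [52, 61, 48, 70, 55, 63]          # programme group, pre-programme scores (invented)
after = [68, 72, 60, 81, 66, 74]           # same participants, post-programme scores (invented)
control_change = [3, -1, 4, 2, 0, 5]       # change scores for a comparison group (invented)

# 1. Did programme participants improve on the key outcome? (paired, before/after)
t_paired, p_paired = stats.ttest_rel(after, before)

# 2. Did their gains exceed those of the comparison group? (independent samples)
programme_change = [a - b for a, b in zip(after, before)]
t_ind, p_ind = stats.ttest_ind(programme_change, control_change)

print(f"Pre/post change: t={t_paired:.2f}, p={p_paired:.3f}")
print(f"Programme vs comparison group: t={t_ind:.2f}, p={p_ind:.3f}")

A paired test handles the before/after comparison on the same participants; an independent-samples test on change scores compares the programme group against the comparison group.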

  22. Forms of Data • From this, we can see that there are three main forms of data used in evaluation: • Data based on the evaluator’s observation of what is happening • Data based on asking other people what is happening • Data based on existing documents, statistics, minutes, etc. • Ideally, a variety of data forms can be used; as these will inform different aspects of the context, mechanisms and outcomes.

  23. Forms of Data • When and where this data is taken also has an effect on what kind of questions can be answered: • Cross-sectional and Longitudinal • Case Studies and Representative Studies • Probability Sampling • Purposive Sampling
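
As a loose illustration of the last two items above, here is a sketch of probability versus purposive sampling in Python; the participant records and the selection criterion are invented for illustration.

import random

participants = [
    {"name": "A", "year": 1, "placement": "community"},
    {"name": "B", "year": 2, "placement": "acute"},
    {"name": "C", "year": 3, "placement": "acute"},
    {"name": "D", "year": 1, "placement": "mental health"},
    {"name": "E", "year": 3, "placement": "community"},
]

# Probability sampling: every participant has a known chance of selection,
# so findings can be generalised to the wider population.
probability_sample = random.sample(participants, k=3)

# Purposive sampling: participants chosen deliberately because they can speak
# to the proposition being tested (here, final-year students on acute placements).
purposive_sample = [p for p in participants
                    if p["year"] == 3 and p["placement"] == "acute"]

print("Probability sample:", probability_sample)
print("Purposive sample:", purposive_sample)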

  24. Data Quality • Given the different kinds of data available, it can be tempting to go for a ‘catch-all approach’. • But the quality of data is key: • Has the data been provided consistently? E.g. survey participants; ‘routine’ recording, etc. • Timeliness of data • Whether the data addresses the evaluation question you are asking • Whether different data sets ‘speak’ to each other • What your relationship with the data, or data provider, is

  25. Reliability and Validity • Data rarely ‘speak for themselves’ in the context of effective learning, as the number of influences on change is so great. • Data must be re-presented or coded in order to give meaning to the evaluation questions being asked. • The CMO configuration can guide us via a Template Analysis (a rough sketch follows below). • This also helps us to link together different data forms. • The programme theory acts as a lens to draw out significant data. In turn, data allow us to re-focus the programme theory.
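
As a rough illustration only (not the formal Template Analysis procedure), the sketch below files invented excerpts under Context, Mechanism and Outcome headings so that qualitative data can be read against the programme theory; all excerpt text is hypothetical.

# A CMO-shaped coding template: each excerpt is filed under one heading.
template = {"Context": [], "Mechanism": [], "Outcome": []}

excerpts = [
    ("Context", "The ward was short-staffed for most of my placement"),
    ("Mechanism", "My mentor set aside an hour each week to review my goals"),
    ("Outcome", "I signed off all my learning outcomes by week eight"),
]

for heading, text in excerpts:
    template[heading].append(text)

# The filled template links qualitative data back to the programme theory.
for heading, items in template.items():
    print(heading, "->", items)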

  26. Being Rigorous with Data • But how do we know data is reliable? • There are four general headings, each with qualitative and quantitative counterparts, that are useful guides for establishing the rigour of both kinds of data. • Qualitative Criteria • Credibility • Transferability • Dependability • Confirmability • Quantitative Criteria • Internal Validity • External Validity • Reliability • Objectivity

  27. Internal Validity and Credibility • Internal Validity (Quant): • Whether observed changes in a phenomenon can be attributed to your programme or intervention (i.e., the cause) and not to other possible causes (sometimes described as “alternative explanations” for the outcome). • Credibility (Qual): • The results of qualitative research are credible or believable from the perspective of the participants in the programme. Since, from this perspective, the purpose of qualitative research is to describe or understand the phenomena of interest through the participants’ eyes, the participants are the only ones who can legitimately judge the credibility of the results.

  28. External Validity and Transferability • External Validity (Quant): • The degree to which the conclusions of an evaluation would hold for other persons in other places and at other times. • Transferability (Qual): • The degree to which the results of qualitative analysis can be generalized or transferred to other contexts or settings. From a qualitative perspective transferability is primarily the responsibility of the one doing the generalizing. • This can be enhanced by doing a thorough job of describing the explanatory framework (CMOs), and the assumptions that were central to the research. The person who wishes to ‘transfer’ the results to a different context is then responsible for making the judgment of how sensible the transfer is.

  29. Reliability and Dependability • Reliability (Quant): • Whether we would obtain the same results if we could observe the same thing more than once. • This is chiefly about noticing what contextual factors are at play in data collection. • When evaluating effective change, this is often a hypothetical idea! • Dependability (Qual): • The evaluation accounts for the ever-changing context within which it occurs. The evaluator describes the changes that occur in the setting and how these changes affected the way they approached the study.

  30. Objectivity and Confirmability • Objectivity (Quant): • The degree to which the findings of the research are demonstrably uninfluenced by the personal and/or subjective stance of the researcher. • For example: routine recording can be seen as an objective metric, according to the context it is collected in. • Confirmability (Qual): • Given that each evaluation brings a unique perspective to a phenomenon, confirmability refers to the degree to which the results could be confirmed or corroborated by others as consistent with the evaluation process. • Both of these traits can be enhanced via multi-method approaches to data.

  31. Discretion, Doubt and Judgement • Clarifying contextual validity and reliability will always involve understanding the ‘value’ of evaluation • There are an infinite number of potential influences on a programme outcome.

  32. Discretion, Doubt and Judgement • Discretion is always needed, then, but should be governed by organised scepticism. • Glouberman and Zimmerman (2002) discuss the role of our discretion and judgement in dealing with complexity: • Simple activities – e.g. baking a cake • Involves following a formula, but previous experience is always useful • Complicated activities – e.g. sending a man into space • Formula-following is exact and precise; expertise is more useful than experience • Complex activities – e.g. raising a child • Limitations of existing formulae and previous experience

  33. Lunch Over lunch, think of a possible activity that you would evaluate. We will use this as the basis of the workshop this afternoon.

  34. Framework for Evaluation Design • Purpose • Theory • Evaluation Question • Methods • Sampling Strategy

  35. Matching Method to Theory • Some evaluation topics are naturally more suited to particular types of design, dependent on what aspect of the programme theory is being tested. Do you want to know about... • General trends? Conduct a survey; analyse existing metrics • Particular incidences? Use case studies • Meanings and concepts? Use interviews; analyse programme documents • But often allowing for multiple data sets will enable you to build a better picture of what propositions best capture activities.

  36. Using Surveys • Provides a ‘snapshot’ of issues within a population at a given time. • Can be re-administered to measure change over time in terms of multiple ‘snapshots’. • Provides for descriptive analyses, plus explorations of relationships and differences. • A survey should begin with a clear sense of what is being tested. • This means thinking through which questions are likely to show whether your programme theory is correct, or in need of revising. • So: ‘did you enjoy your placement?’ is unlikely to be helpful.

  37. Using Surveys • The CMOs can help to identify who the survey should be targeted at, or what key features of the population to explore and interrelate: • Nominal: Gender, occupation etc. • Ordinal: Attitude scales, belief measures, GCSE grades etc. • Ratio/scale: Age, weight, income, exam percentages, running times etc. • Choose an appropriate response format: • Exact responses and Category responses • Dichotomous Responses (Yes/No) • The Likert Scale (Strongly Agree, Agree, etc.) • Graphic Scales • Constant-sum Scales
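
A minimal sketch of how these response formats become analysable data (Python); the Likert mapping, question name and responses are invented for illustration.

from collections import Counter

# Recode the ordinal Likert categories as numbers so they can be summarised.
LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Agree": 3, "Strongly agree": 4}

responses = [
    {"role": "student", "q_supervision_helpful": "Agree"},
    {"role": "student", "q_supervision_helpful": "Strongly agree"},
    {"role": "mentor", "q_supervision_helpful": "Disagree"},
]

scores = [LIKERT[r["q_supervision_helpful"]] for r in responses]
print(f"Mean agreement: {sum(scores) / len(scores):.2f} (n={len(scores)})")

# Nominal variables such as role are counted, not averaged.
print(Counter(r["role"] for r in responses))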

  38. Some Final Comments on Surveys • The strength of a survey is to ask people about their first-hand experiences: what they’ve done, how they feel, etc. • But many surveys instead ask about, for example: • Information that would only be acquired second-hand • Hypothetical scenarios • Solutions to complex problems • Perceptions of causality • Rather, we should focus on understanding how participants relate context, mechanism and outcome.

  39. Face-to-Face Data Collection • It is imperative for evaluation that variation across your data reflects variation related to your programme theory. • Some variations in data, however, are the outcome of faults in the data collection methods themselves. This is problematic, as it gets in the way of offering a reliable explanation. • Talking to individuals, face-to-face, can overcome this, via: • Structured interview/dialogue • Semi-structured interview/dialogue • Unstructured dialogue

  40. Group Facilitation • Unlike interviews, where the researcher mostly guides the subject, in group discussions the participants themselves mostly take the initiative. • This means that the participants can address issues of their own choosing, rather than simply talking about what the researcher wants to hear. • Likewise, it can allow for differences of view to be discussed and reconciled between different stakeholders in the programme. • Remember to ensure that participants are happy with their contributions being recorded as ‘data’!

  41. Group Data is Particularly Useful When... • ...you want to know the range of possible issues surrounding your evaluation question. • ...you want to improve your understanding of definitions and concepts; this helps to clarify mechanisms. • ...you want to know about the sources and resources people have used in forming opinions; this helps to establish contexts. • ...you are interested in the impact on the use of language or the culture of particular groups. • …you want to explore the degree of consensus or conflict on a given topic.

  42. Keys to Analysis • Categorising data • Codes and themes – emerging • Codes and themes – axial (relationships) • Using templates • Measuring Frequency • Can be applied to both statistics and qualitative themes • Can identify ‘improvement’ in terms of specific categories • Testing your Analysis • Modifying delivery and continual monitoring • Feedback loops to participants
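
A minimal sketch of the ‘Measuring Frequency’ step above, assuming the data have already been coded; the codes and excerpts are invented for illustration.

from collections import Counter

coded_excerpts = [
    {"excerpt": "My mentor walked me through the safety induction", "codes": ["induction", "mentor support"]},
    {"excerpt": "I never knew who to ask when things went wrong", "codes": ["raising concerns"]},
    {"excerpt": "Weekly supervision helped me plan my learning goals", "codes": ["mentor support", "learning goals"]},
]

# Count how often each theme appears across the coded excerpts.
theme_counts = Counter(code for item in coded_excerpts for code in item["codes"])

# Frequencies can then be tracked over time to show 'improvement' per category.
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")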

  43. Group work • In a group discuss an element of your work • Consider how you collect information on this aspect of your work • Analyse that information • Identify themes (5 at the most) • Set quality improvement measures • Feedback and discussion

  44. More group work Evaluative report: • Cover page/title • Executive summary • Introduction – overview of project, timeline, aims, key stakeholders/audience etc. • Evaluation framework • Purpose of the evaluation – key questions • Evaluation team • Evaluation methods, including limitations • Evaluation findings – key evaluation questions/categories – present, interpret and make a value judgement • Conclusions and recommendations – key results, success lessons, recommendations; how findings will be used in policy and future projects • Any references and appendices • Feedback and discussion

  45. PARE Practice Assessment Record and Evaluation • Section 1 Quality – 18 questions • Section 2 Support – 10 questions • Section 3 Experience – 7 questions • Section 4 Resources – 1 question • Section 5 Other – 1 question • Response options: Strongly agree, Agree, Disagree, Strongly disagree • NHS Health Education North West, info@onlinepare.net, https://onlinepare.net

  46. PARE Practice Assessment Record and Evaluation • Sufficient preparatory information prior to my placement(s) was available? • I received an orientation to the staff and working practices, policies and procedures? • I received basic safety induction within my first 24 hours of placement? • My named Mentor or Placement Educator was identified prior to being on placement? • My supernumerary status was upheld? • Practice learning opportunities were identified and relevant to my current stage in the programme of study? • My learning needs were recognised and help was offered with attainment of outcomes, action plans, and goals? • I was encouraged to undertake a range of learning activities relevant to my stage in the programme of study? • I was able to achieve my placement learning outcomes? • I had the opportunity to engage with members of the multidisciplinary team, and participate in the delivery of care to Service Users via 'care pathways'?

  47. PARE Practice Assessment Record and Evaluation (continued) • I was able to learn with and from Service Users and Carers where applicable and appropriate? • I was able to learn with students/trainees from different professions where applicable to care pathways? • I was given my shifts/hours of work for the first week before my placement began? • I knew who to contact if I had any safety issues (i.e. personal safety, patient/Service User safety or safety of other staff in placement) or other concerns regarding placement experiences at all times? • I felt able to raise concerns regarding standards of care if/where required? • I was encouraged to promote dignity and respect for the diversity of culture and values of Service Users and carers? • My placement enabled me to learn from team working and care delivery consistent with core NHS values and behaviours? • Should a loved one require care, I would be happy for them to be cared for within this placement area?

  48. Key Points from Today • Evaluations will never give ‘one’ answer; rather, effective evaluation should show what works, for whom, and under what circumstances. • Unpacking the context, choices and constraints on a programme can help to refine what needs to be evaluated. • Evaluations are never ‘neutral’: they link together policy, strategy, localised knowledge, etc. • Working from a theory of an activity allows an effectiveness cycle to capture the variations of transformative learning. • Using multiple data sources allows evaluations to test the way contexts, mechanisms and outcomes are configured.
