Techniques to Evaluate Effective Learning



  1. Techniques to Evaluate Effective Learning Masterclass Two Dr Carol Marrow, Emeritus Associate Professor, University of Cumbria, Lancaster; Associate Professor, Robert Kennedy College, Zurich, Switzerland.

  2. Learning Outcomes • Understand quality assurance reporting techniques that develop knowledge to influence change and development. • Be able to creatively configure findings, drawing from existing policy, evidence, current practice and upcoming initiatives. • Be able to present findings in high-level written reports, seminars, posters and case studies.

  3. Introductions • About you – who do you teach and support in practice? • About me • Name three things you hope to get from the day

  4. Evaluation • Write down briefly your understanding of the term evaluation

  5. Evaluation • What is evaluation? • “Evaluation determines the merit, worth, or value of things. The evaluation process identifies relevant values or standards that apply to what is being evaluated, performs empirical investigation using techniques from the social sciences, and then integrates conclusions with the standards into an overall evaluation or set of evaluations” (Scriven, 1991). • Scriven, M. Reflecting on the past and future of evaluation. The Evaluation Exchange: A Periodical on Emerging Strategies in Evaluation, Vol. IX, No. 4, Winter 2003/2004.

  6. Evaluation and Assessment What is the difference between evaluation and assessment? • Evaluation appraises the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness. • Assessment is an ongoing process aimed at improving student learning, programs, and services that involves a process of: • publicly sharing expectations • defining criteria and standards for quality • gathering, analyzing, and interpreting evidence about how well performance matches the criteria • using the results to document, explain, and improve performance • Effective learning, development and evaluation help to promote a

  7. Purpose of evaluation • In a group write down the purpose of your evaluations

  8. Purpose of evaluation • What is the purpose of the evaluation? Effective practice, for example: • the quality of the educational provision (the product) – which could be the whole programme, a course (module), or a class (lecture, seminar, laboratory, clinical practice etc.) • the performance of the provider(s) – the academic staff, tutors, support staff, mentors etc. involved in the delivery of this programme/course/class • the experience of the students as partners in the process – their experience of what is provided and of the providers, and their motivation and approach to learning • a combination of these things – provided that the various purposes are made absolutely clear to those asked to make the evaluation

  9. Focus of evaluations • Still in your group now write down the focus of your evaluations

  10. Focus of the evaluation • For example, you might want to know about • the clarity of the stated educational aims and learning outcomes • curriculum and content – perceptions of relevance/usefulness • the appropriateness of the methods of assessment • the appropriateness of the style of teaching, and the performance of the teacher/mentor • the quality of feedback to the student on their performance • the motivation/attitudes of the student • the educational challenge presented to the students • the workload – how reasonable, how realistic • the support available to students – coursebooks/resources for independent learning • the effort made by the student, and the take-up of support/guidance • the student's overall experience of the teaching and support for learning

  11. Why evaluate effectively? • The HEE Quality Framework 2016/17 lays out clear aims regarding quality assurance. • This can involve demonstrating particularly dynamic and contextual values, which go beyond core metrics, e.g.: • Continuous improvement of the quality of education and training • Empowering learners • Adaptability and receptivity to research and innovation

  12. Evaluating Transformative Learning • With the nearest person, talk through: • What activities do you currently (knowingly) evaluate? • Do you think this is effective evaluation? Why/why not? • What kind of things would show you that transformative learning had taken place? • Would you be willing to share your thoughts with the group?

  13. Principles and Practicalities • Evaluation often begins with a straightforward question: “Does it work?” or “What is the impact?” • But this is nearly always the wrong question.

  14. Principles and Practicalities • Evaluation is always political because… • Where there is policy, there is politics • Simply ‘collecting data’ without a sense of strategy is likely to be ineffective. • Evaluation often involves a ‘they’: • E.g. commissioners; line managers; stakeholders; participants • Before any evaluation, we need to ask: • Who are the ‘they’? • What do ‘they’ think they want? • What are ‘they’ going to find credible?

  15. Effective Evaluation • Effective evaluation should not tell us simply ‘what works’; • Rather, it should tell us: ‘what works in which circumstances, and for whom?’ • Or: “What works, for whom, in what respects, to what extent, in what contexts, and how?”

  16. Break

  17. What can Evaluation tell us? • What can evaluation show, then? • It can identify regularities and patterns in outcomes; • It can offer interpretations and explanations for why those patterns are there; • It can identify specific contexts or mechanisms that are enabling or disabling • These are all driven by understanding the propositions, theories or logic being tested by the evaluation.

  18. Outcome evaluation • Outcome and Impact Evaluation • Outcome evaluations measure to what degree programme objectives have been achieved (i.e. short-term, intermediate, and long-term objectives). This form of evaluation assesses what has occurred because of the programme, and whether the programme has achieved its outcome objectives. • So for example: the student received basic safety induction within the first 24 hours of the placement. If a number of students have problems with this outcome, the programme/placement outcomes will need examining and possibly revisiting; a minimal sketch of this kind of check follows below. • The “value” of the intervention and programme should be consistently assessed.
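
A minimal sketch of this kind of outcome check in Python (the response data and the 90% review threshold are hypothetical illustrations, not figures from the programme):

```python
# 1 = student received basic safety induction within 24 hours, 0 = did not
induction_within_24h = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # hypothetical data

achievement_rate = sum(induction_within_24h) / len(induction_within_24h)
print(f"Outcome achieved by {achievement_rate:.0%} of students")  # 70%

REVIEW_THRESHOLD = 0.90  # assumed local quality standard, for illustration only
if achievement_rate < REVIEW_THRESHOLD:
    print("Below threshold: examine and possibly revisit the placement outcomes")
```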

  19. Finding the Right Questions • Look at an activity that you would like to evaluate and write down all of the questions you could ask around it. The more the better! • Once you have a number of questions, try and arrange them into a hierarchy of questions (i.e. think about whether some fit within others): General Research Question(s) Specific Research Question(s) Data Collection Question(s)

  20. Finding the right questions It can be useful to remember the difference between types of research question: • What questions – require description • How questions – look at process, change, interventions and outcomes • Why questions – look at causes, reasons, relationships and activities Remember that the aim is to discover how context, mechanism and outcome are configured: or, what worked, for whom, and in what circumstances.

  21. Forms of Data • From this, we can see that there are three main forms of data used in evaluation: • Data based on the evaluator’s observation of what is happening • Data based on asking other people what is happening • Data based on existing documents, statistics, minutes, etc. • Ideally, a variety of data forms can be used, as these will inform different aspects of the context, mechanisms and outcomes.

  22. Forms of Data • When and where this data is taken also has an effect on what kind of questions can be answered: • Cross-sectional and Longitudinal • Case Studies and Representative Studies • Probability Sampling • Purposive Sampling
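
A minimal Python sketch contrasting the last two strategies, probability and purposive sampling (the student records and the completed_placement flag are invented for illustration):

```python
import random

# Hypothetical student records; the 'completed_placement' flag is invented
# purely to give the purposive strategy something to select on.
students = [{"id": i, "completed_placement": i % 3 == 0} for i in range(1, 31)]

# Probability sampling: every student has an equal, known chance of selection.
probability_sample = random.sample(students, k=5)

# Purposive sampling: deliberately select the cases most relevant to the
# evaluation question, e.g. only students who have completed a placement.
purposive_sample = [s for s in students if s["completed_placement"]][:5]

print(len(probability_sample), len(purposive_sample))  # 5 5
```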

  23. Data Quality • Given the different kinds of data available, it can be tempting to go for a ‘catch-all approach’. • But the quality of data is key: • Has the data been provided consistently? E.g. survey participants; ‘routine’ recording, etc. • Timeliness of data • Whether the data addresses the evaluation question you are asking • Whether different data sets ‘speak’ to each other • What your relationship with the data, or data provider, is

  24. Measures of Central Tendency • One way of showing basic patterns in quantitative data is to describe or summarise data according to ‘central tendency’ • This also gives a ‘typical’ score against which other scores can be compared • 3 kinds of measures of central tendency: The Mean The Median The Mode

  25. The Mean (average) • Add up the scores and divide by the number of scores: 7 7 4 6 7 9 2 7+7+4+6+7+9+2 = 42 (7 scores) 42 ÷ 7 = 6 Mean is 6

  26. The Median (middle value) • Put a set of numbers in order of rank • The median number is the one with as many scores above it as below it. E.g.: 7 7 4 6 7 9 2 In rank order is 2 4 6 7 7 7 9 Median is 7 – the middle value, with the same number of scores above and below it

  27. The Mode (most frequent) • The most frequently occurring number in a set: 7 7 4 6 7 9 2 Mode is 7 • There can be more than 1 mode! 7 7 4 4 4 7 9 2 Modes are 4 and 7
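
All three measures can be computed directly with Python's standard statistics module (multimode needs Python 3.8 or later); a small sketch using the example scores from these slides:

```python
from statistics import mean, median, multimode

scores = [7, 7, 4, 6, 7, 9, 2]  # the example scores from the slides

print(mean(scores))       # 42 / 7 = 6
print(median(scores))     # rank order 2 4 6 7 7 7 9 -> middle value is 7
print(multimode(scores))  # [7] -- the most frequent score

# multimode reports every mode when there is more than one:
print(multimode([7, 7, 4, 4, 4, 7, 9, 2]))  # [7, 4]
```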

  28. Reliability and Validity • Data rarely ‘speaks for itself’ in the context of effective learning, as the number of influences on change is so great. • It must be re-presented or coded in order to give meaning to the evaluation questions being asked. • The programme theory acts as a lens to draw out significant data. In turn, data allows us to re-focus the programme theory.

  29. Being Rigorous with Data • But how do we know data is reliable? • There are four parallel headings for quantitative and qualitative data that serve as useful guides for establishing the rigour of each. • Qualitative Criteria • Credibility • Transferability • Dependability • Confirmability • Quantitative Criteria • Internal Validity • External Validity • Reliability • Objectivity

  30. Internal Validity and Credibility • Internal Validity (Quant): • Whether observed changes in a phenomenon can be attributed to your programme or intervention (i.e., the cause) and not to other possible causes (sometimes described as “alternative explanations” for the outcome). • Credibility (Qual): • The results of qualitative research are credible or believable from the perspective of the participant in the programme. Since from this perspective, the purpose of qualitative research is to describe or understand the phenomena of interest from the participant's eyes, the participants are the only ones who can legitimately judge the credibility of the results.

  31. External Validity and Transferability • External Validity (Quant): • The degree to which the conclusions of an evaluation would hold for other persons in other places and at other times. • Transferability (Qual): • The degree to which the results of qualitative analysis can be generalized or transferred to other contexts or settings. From a qualitative perspective transferability is primarily the responsibility of the one doing the generalizing. • This can be enhanced by doing a thorough job of describing the explanatory framework and the assumptions that were central to the research. The person who wishes to ‘transfer’ the results to a different context is then responsible for making the judgment of how sensible the transfer is.

  32. Reliability and Dependability • Reliability (Quant): • Whether we would obtain the same results if we could observe the same thing more than once. • This is chiefly about noticing what contextual factors are at play in data collection. • When evaluating effective change, this is often a hypothetical idea! • Dependability (Qual): • The evaluation accounts for the ever-changing context within which it occurs. The evaluator describes the changes that occur in the setting and how these changes affected the way they approached the study.

  33. Objectivity and Confirmability • Objectivity (Quant): • The degree to which the findings of the research are demonstrably uninfluenced by the personal and/or subjective stance of the researcher. • For example: routine recording can be seen as an objective metric, according to the context it is collected in. • Confirmability (Qual): • Given that each evaluation brings a unique perspective to a phenomenon, confirmability refers to the degree to which the results could be confirmed or corroborated by others as consistent with the evaluation process. • Both of these traits can be enhanced via multi-method approaches to data.

  34. Discretion, Doubt and Judgement • Clarifying contextual validity and reliability will always involve understanding the ‘value’ of evaluation • There are an infinite number of potential influences on a programme outcome.

  35. Discretion, Doubt and Judgement • Discretion is always needed, then, but should be governed by organised scepticism. • Glouberman and Zimmerman (2002) discuss the role of our discretion and judgement in dealing with complexity: • Simple activities – e.g. baking a cake • Involves following a formula, but previous experience is always useful • Complicated activities – e.g. sending a man into space • Formula-following is exact and precise; expertise is more useful than experience • Complex activities – e.g. raising a child • Limitations of existing formulae and previous experience

  36. Lunch Over lunch, think of a possible activity that you would evaluate. We will use this as the basis of the workshop this afternoon.

  37. Framework for Evaluation Design: Purpose → Theory → Evaluation Question → Methods → Sampling Strategy

  38. Matching Method to Theory • Some evaluation topics are naturally more suited to particular types of design, dependent on what aspect of the programme theory is being tested. Do you want to know about... • General trends? Conduct a survey; analyse existing metrics • Particular incidences? Use case studies • Meanings and concepts? Use interviews; analyse programme documents • But often allowing for multiple data sets will enable you to build a better picture of what propositions best capture activities.

  39. Using Surveys • Provides ‘snapshot’ of issues within a population at a given time. • Can be re-administered to measure change over time in terms of multiple ‘snapshots’. • Provides for descriptive analyses, plus explorations of relationships and differences. • A survey should begin with a clear sense of what is being tested. • This means thinking through which questions are likely to show whether your programme theory is correct, or in need of revising. • So: ‘did you enjoy your placement?’ is unlikely to be helpful.

  40. Using Surveys • Choose an appropriate level of measurement: • Nominal: Gender, occupation etc. • Ordinal: Attitude scales, belief measures, GCSE grades etc. • Ratio/scale: Age, weight, income, exam percentages, running times etc. • Choose an appropriate response format: • Exact responses and Category responses • Dichotomous Responses (Yes/No) • The Likert Scale (Strongly Agree, Agree, etc.) • Graphic Scales • Constant-sum Scales
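
Likert responses are usually coded as ordinal numbers before analysis; a minimal Python sketch, assuming a 4-point scale like the PARE one shown later in these slides (the responses themselves are hypothetical):

```python
from statistics import median

# A 4-point Likert coding; the scale mirrors the PARE response options
# shown later in these slides, and the responses are hypothetical.
LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Agree": 3, "Strongly agree": 4}

responses = ["Agree", "Strongly agree", "Disagree", "Agree", "Strongly agree"]
scores = [LIKERT[r] for r in responses]

# Likert data is ordinal: the gaps between categories are not guaranteed
# to be equal, so the median is usually a safer summary than the mean.
print(median(scores))  # 3
```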

  41. Some Final Comments on Surveys • The strength of a survey is to ask people about their first-hand experiences: what they’ve done, how they feel, etc. • But many surveys instead ask about, e.g.: • information that would only be acquired second-hand • hypothetical scenarios • solutions to complex problems • perceptions of causality • Instead, we should focus on understanding how participants relate context, mechanism and outcome.

  42. Face-to-Face Data Collection • It is imperative for evaluation that variation across your data reflects variation related to your programme theory. • Some variations in data, however, are an outcome of faults in the data collection methods themselves. This is problematic, as it gets in the way of offering a reliable explanation. • Talking to individuals, face-to-face, can overcome this, via: • Structured interview/dialogue • Semi-structured interview/dialogue • Unstructured dialogue

  43. Group Facilitation • Unlike interviews, where the researcher mostly guides the subject, in group discussions the participants themselves mostly take the initiative. • This means that the participants can address issues of their own choosing, rather than simply talking about what the researcher wants to hear. • Likewise, it can allow for differences of view to be discussed and reconciled between different stakeholders in the programme. • Remember to ensure that participants are happy with their contributions being recorded as ‘data’!

  44. Group Data is Particularly Useful When... • ...you want to know the range of possible issues surrounding your evaluation question. • ...you want to improve your understanding of definitions and concepts; this helps to clarify mechanisms. • ...you want to know about the sources and resources people have used in forming opinions; this helps to establish contexts. • ...you are interested in the impact on the use of language or the culture of particular groups. • ...you want to explore the degree of consensus or conflict on a given topic.

  45. Keys to Analysis • Categorising data • Codes and themes – emerging • Codes and themes – axial (relationships) • Using templates • Measuring Frequency • Can be applied to both statistics and qualitative themes • Can identify ‘improvement’ in terms of specific categories • Testing your Analysis • Modifying delivery and continual monitoring • Feedback loops to participants
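
Measuring frequency across coded themes is straightforward once the coding is done; a minimal Python sketch using hypothetical theme codes:

```python
from collections import Counter

# Hypothetical theme codes assigned to fragments of free-text feedback.
coded_fragments = [
    "support", "workload", "support", "feedback",
    "workload", "support", "resources",
]

# Tally how often each theme occurs, most frequent first.
theme_counts = Counter(coded_fragments)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")  # support: 3, workload: 2, ...
```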

  46. Group work • In a group discuss an element of your work • Consider how you collect information on this aspect of your work • Analyse that information • Identify themes (5 at the most) • Set quality improvement measures • Feedback and discussion

  47. Group work – developing a skeleton evaluative report • Cover page/title • Executive summary • Introduction – overview of project, timeline, aims, key stakeholders/audience etc. • Evaluation framework • Purpose of the evaluation – key questions • Evaluation team • Evaluation methods, including limitations • Evaluation findings – key evaluation questions/categories – present, interpret and make a value judgement • Conclusions and recommendations – key results, success lessons, recommendations; how findings will be used in policy and future projects • Any references and appendices

  48. PARE Practice Assessment Record and Evaluation • Section 1 Quality – 18 questions • Section 2 Support – 10 questions • Section 3 Experience – 7 questions • Section 4 Resources – 1 question • Section 5 Other – 1 question • Response scale: Strongly agree, Agree, Disagree, Strongly disagree • NHS Health Education North West • info@onlinepare.net • https://onlinepare.net

  49. PARE Practice Assessment Record and Evaluation • Sufficient preparatory information prior to my placement(s) was available? • I received an orientation to the staff and working practices, policies and procedures? • I received basic safety induction within my first 24 hours of placement? • My named Mentor or Placement Educator was identified prior to being on placement? • My supernumerary status was upheld? • Practice learning opportunities were identified and relevant to my current stage in the programme of study? • My learning needs were recognised and help was offered with attainment of outcomes, action plans, and goals? • I was encouraged to undertake a range of learning activities relevant to my stage in the programme of study? • I was able to achieve my placement learning outcomes? • I had the opportunity to engage with members of the multidisciplinary team, and participate in the delivery of care to Service Users via 'care pathways'?

  50. PARE Practice Assessment Record and Evaluation (continued) • I was able to learn with and from Service Users and Carers where applicable and appropriate? • I was able to learn with students/trainees from different professions where applicable to care pathways? • I was given my shifts/hours of work for the first week before my placement began? • I knew who to contact if I had any safety issues (i.e. personal safety, patient/Service User safety or safety of other staff in placement) or other concerns regarding placement experiences at all times? • I felt able to raise concerns regarding standards of care if/where required? • I was encouraged to promote dignity and respect for the diversity of culture and values of Service Users and carers? • My placement enabled me to learn from team working and care delivery consistent with core NHS values and behaviours? • Should a loved one require care, I would be happy for them to be cared for within this placement area?
