
How can we inform our teaching with evidence from education and psychological research?

Presentation Transcript


  1. Evidence into practice
  How can we inform our teaching with evidence from education and psychological research? How can we evaluate our innovations using research methods?

  2. Evidence into practice
  What’s this group about?
  Proposed Aim #1: Share evidence about teaching and learning
  • Find interesting articles and pieces of research on education and learning. Share them within this group.
  • Identify novel ideas to guide our own innovations in teaching. Select ones that could be useful resources to support coaching.
  Proposed Aim #2: Move beyond opinion as the basis of evaluation
  • Use simple research methods to evaluate the impact of our innovations in teaching.
  • Create an evidence base to drive the development of our professional practice.
  • Model the process and share the results with other teachers.

  3. Evidence in education
  There’s lots of evidence out there … it’s just that many teachers and politicians don’t seem very interested in it.
  • Educational journals. Advantage: relate directly to education. Disadvantage: teaching and learning often take a back seat to policy.
  • Newspaper articles. Advantage: frequently offer practical advice in easy-to-understand terms. Disadvantage: rarely make much explicit reference to evidence.
  • Psychological research. Advantage: quality of evidence is typically high. Disadvantage: theory-based approaches often require work to apply to teaching.
  • Education blogs and social media. Advantage: free and easy to access. Disadvantage: some are practical and evidence-based; others are more about politics and opinion.

  4. Evidence in education
  Research ‘bites’ and tasters: what did you think of the CUREE resources?
  • Route maps. Advantage: guide teachers through a set of resources. Disadvantage: many people didn’t find them ‘deep’ enough or especially helpful.
  Some teachers really liked them; others didn’t find them particularly useful. Could we find better resources?
  Homework: Assume we have a small budget … What resources would be of most interest and use to this group? Should we subscribe to a journal or a newspaper paywall? Are there books we should read?

  5. Evaluating innovation
  Quality of evidence (strongest first):
  • Double-blind randomised controlled trials involve participants being randomly allocated to conditions, with neither the participants nor the researchers knowing who is in the treatment group and who is in the control group. Probably not possible in an education context.
  • Randomised controlled trials involve participants (e.g. individuals, classes or schools) being randomly allocated to a treatment or a control group. This is possible in education (e.g. Penn Resilience) but typically requires cooperation between schools.
  • A cohort study compares a group of individuals with a common characteristic to a similar group – without randomisation.
  • Case control studies compare a treatment group to a fairly similar non-treatment group – without randomisation.
  • Case series studies describe the outcomes of a group of individuals in the same treatment group – without a comparison group.
  • Case report studies are a detailed description of a single individual in a treatment group – without a comparison.
  • Ideas and opinions – what ‘gurus’, consultants and politicians think. Experience has its place, but it is frequently subjective, biased or just plain wrong. This is by far the weakest level of ‘evidence’, though typically it is what we rely on in education.
  Don’t worry about remembering any of this information – it’s all on the handout!

  6. Evaluating innovation
  Choosing a level of evidence: to evaluate the impact of innovations we make (e.g. in our coaching groups), what level of evidence can we realistically work towards?
  • Cohort study: just about possible – a faculty could compare outcomes of its current Year 9s with last year’s – but usually requires cooperation between ‘similar’ schools. (This was my suggested level for trying to evaluate BLP.)
  • Case control study: possible – a teacher could do this with two similar classes, or pair up with another teacher in their faculty who has a similar mixed-ability group.
  • Case series study: fairly straightforward – could track the impact of an innovation with a specific group over a half term. This is the level of evidence for things like BLP (though when you look closely, a great deal is opinion).
  • Case report study: easy – this is about the level I achieved in coaching last year due to time constraints. For example, looking at whether individual students maintained a misconception after I changed the way I taught a topic; or seeing if a student improved their recall of a specific study after using mnemonics to support learning.
  (Quality of evidence increases as you move up this list.)

  7. Evaluating innovation
  Generating a hypothesis: a testable statement which says how a change in our teaching will affect a measurable outcome.
  Some example hypotheses:
  • Teaching students to use a mnemonic strategy improves recall of psychology studies.
  • Using a keyword ‘ladder’ will improve the number of keywords that students include in a written assessment.
  • Teaching students explicitly about a misconception in science will reduce the number of students repeating that misconception in a homework task.
  • Students who use the new writing frame will write improved evaluations compared to students using the old writing frame.
  • Student self-report ratings of confidence on a particular topic will improve after practice questions on the topic compared to ‘free revision’.
  A good hypothesis is ‘operationalised’ – i.e. it’s clear exactly what you are going to change or compare and how you are going to measure it.
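
  One way to make ‘operationalised’ concrete is to write the hypothesis down as a structured record, so no element can be left vague. Below is a minimal Python sketch of that idea (mine, not from the slides); all the field values are invented examples.

```python
# A minimal sketch: an operationalised hypothesis as a structured record.
# Every field value here is an illustrative assumption, not prescribed content.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    change: str       # the innovation: what you will do differently
    measure: str      # the measurable outcome and how it is scored
    comparison: str   # what the outcome is compared against
    prediction: str   # the direction of the expected effect

mnemonic = Hypothesis(
    change="Teach students a mnemonic strategy for psychology studies",
    measure="Number of studies correctly recalled in a 10-minute test",
    comparison="Recall scores on the same test before the strategy was taught",
    prediction="Recall scores increase after the strategy is taught",
)

# If any field is hard to fill in concretely, the hypothesis
# is not yet fully operationalised.
print(mnemonic)
```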

  8. Evaluating innovation
  Choosing a design: if we want to generate evidence for the impact of our innovations, we need a baseline comparison.
  • A-B: a two-part design composed of a baseline (‘A’) phase with no changes, and an experimental or innovation (‘B’) phase. If there is a change, the innovation may be said to have had an effect. However, there are lots of other factors that may have caused the change, making strong conclusions difficult.
  • Reversal or A-B-A: a more powerful design showing a strong reversal from baseline (‘A’) to treatment (‘B’) and back again. If the variable returns to the baseline measure without the innovation, the researcher can have greater confidence in the efficacy of that treatment. However, many interventions cannot be reversed – some for ethical reasons (e.g. the effect is highly beneficial) and some for practical reasons (they cannot be unlearned, like a skill).
  • Multiple baselines or A-B-B, A-A-B: this design avoids some of the practical and ethical issues that arise with the reversal design. Innovations are introduced in a staggered way – a change is made to one group but not the other, and then later to the second group. Differential changes that occur help to strengthen the basic A-B design.
  • Repeated acquisitions or A-B-A-B: like the reversal design, this includes a second baseline phase, but it overcomes some of the ethical problems of ending a study on a baseline by reintroducing the innovation after seeing whether the effect reverses in its absence.
  Don’t worry about remembering any of this information – it’s all on the handout!
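
  To show what an A-B design looks like as data, here is a minimal Python sketch – assuming weekly quiz scores for one class, with all the numbers invented for illustration:

```python
# A minimal sketch of an A-B design: labelled phases and a comparison
# of phase means. Scores are invented weekly quiz marks.
from statistics import mean

phases = {
    "A (baseline)":   [11, 12, 10, 13],  # scores before the innovation
    "B (innovation)": [14, 15, 16, 15],  # scores after introducing it
}

for label, scores in phases.items():
    print(f"{label}: scores={scores}, mean={mean(scores):.1f}")

# A rise from phase A to phase B is consistent with the innovation having
# an effect, but (as noted above) other factors could explain it. An A-B-A
# design would add a second baseline phase to check for a reversal.
```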

  9. Evaluating innovation
  Choosing our measurements: if we want to generate evidence for the impact of our innovations, what will we measure?
  Quantitative methods. The easiest data to analyse are ones that produce numbers. There are lots of ways to do this:
  • Test scores: use a formative assessment test to produce scores for students on a particular skill or topic, then introduce your innovation, then test the outcome during summative assessment and see if there is an improvement (an A-B design).
  • Self-report: get students in two of your teaching groups to rate their confidence on a particular topic using a questionnaire. In one group introduce the innovation, whilst teaching the other group as you would normally have done. Get the groups to rate their confidence again and see if the treatment group gained confidence more than your non-treatment group. Then introduce the innovation to the other group – and test again. (A-B, A-A-B design.)
  • Behavioural checklist: observe the behaviour of a specific student in a lesson (perhaps get a TA or another teacher to observe) – e.g. time on task, shouting out, distracting others, turning around. Count the frequency of these behaviours before and after an intervention (e.g. a change in seating plan, from working individually to working in pairs). (A-B design.)
  • Content analysis: analyse a piece of work produced by a student (e.g. the number of keywords used in answers), then introduce your innovation (e.g. a keyword ‘ladder’) and compare scores, perhaps then returning to the former method to see if the effect reverses (A-B-A design).
  Going further – inferential statistical tests: using quantitative measures allows you to statistically compare the outcomes of your innovations to see if they are significant. Non-parametric inferential statistical tests are easy to do and don’t require specialist software (just paper, pencil and a calculator, or Excel). If the data meet parametric assumptions (e.g. they are roughly normally distributed), we can also calculate the size of the effect.
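
  If you would rather let a computer do the arithmetic, here is a minimal Python sketch of the ‘going further’ step – a non-parametric Mann-Whitney U test comparing two groups, plus a hand-calculated Cohen’s d effect size. The confidence ratings are invented, and scipy is assumed to be installed.

```python
# A minimal sketch: comparing invented confidence ratings (1-10) from a
# treatment group and a non-treatment group. Requires scipy.
from statistics import mean, stdev
from scipy.stats import mannwhitneyu

treatment     = [7, 8, 6, 9, 7, 8, 7]  # group taught with the innovation
non_treatment = [5, 6, 5, 7, 6, 5, 6]  # group taught as normal

# Non-parametric test: makes no assumption that the data are normally distributed.
stat, p = mannwhitneyu(treatment, non_treatment, alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p:.3f}")

# Effect size (Cohen's d with a pooled standard deviation) - meaningful when
# the data roughly meet parametric assumptions; reported alongside the
# significance test, not instead of it.
n1, n2 = len(treatment), len(non_treatment)
pooled_sd = (((n1 - 1) * stdev(treatment) ** 2 +
              (n2 - 1) * stdev(non_treatment) ** 2) / (n1 + n2 - 2)) ** 0.5
d = (mean(treatment) - mean(non_treatment)) / pooled_sd
print(f"Cohen's d = {d:.2f}")
```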

  10. Evaluating innovation
  Choosing our measurements: if we want to generate evidence for the impact of our innovations, what will we measure?
  Qualitative methods. Genuine qualitative analysis is time-consuming and somewhat subjective – but it can give you rich detail about a specific individual or group. There are several ways you can do this:
  • Questionnaires: using open questions on a questionnaire (e.g. ‘What do you find difficult in psychology lessons?’) allows students to give open-ended responses that you can analyse for themes and categories. This has advantages – you’ll likely discover things you didn’t anticipate – but it is limited by the students’ ability to express themselves in writing.
  • Interviews: you could have another teacher in your coaching group interview a student about some aspect of your lessons or teaching, again asking open questions and recording the answers. These can be transcribed and analysed for the themes that emerge – which is very time-consuming – but this overcomes issues of weak literacy and written expression.
  • Observations: you (or a coaching partner) could film your lesson and analyse aspects of your teaching – for example, the styles of questions you ask or the body language you project – and look for patterns that emerge.
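
  Once themes have been assigned, the tallying can be automated. Here is a minimal Python sketch of that final step – the responses and theme codes are invented, and the hard, subjective work (reading each response and deciding which themes it expresses) still has to be done by a person:

```python
# A minimal sketch: tallying researcher-assigned theme codes from open
# questionnaire responses. All data here are invented for illustration.
from collections import Counter

# Each response has already been read and tagged with one or more themes.
coded_responses = [
    {"response": "I forget the study names",         "themes": ["recall"]},
    {"response": "Essays are hard to structure",     "themes": ["writing"]},
    {"response": "I can't remember the researchers", "themes": ["recall"]},
    {"response": "I don't know what evaluate means", "themes": ["writing", "terminology"]},
]

theme_counts = Counter(
    theme for item in coded_responses for theme in item["themes"]
)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```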

  11. Evaluating innovation
  Have a go … Use the list below to design an evaluation of some aspect of teaching. (Just for fun – you may or may not choose to actually do this one!)
  • What level of evidence will you work towards? Something quick and simple like a case report study of one student? Or will you try something longer-term and more ambitious?
  • What’s your hypothesis? What are you going to change? What are you going to measure?
  • What design will you use? A simple ‘before-and-after’ comparison (A-B) or something more complex?
  • How will you measure the outcome? Will you use a quantitative or qualitative measure? Are you going to look at test scores, a specific piece of writing, or the behaviour of a specific student?
  I turn this into a game for my Y13s called ‘mad psychologist’ to get them to practise formulating research ideas.

  12. Evidence into practice
  What do you want from this group? Do you agree with these aims?
  Proposed Aim #1: Share evidence about teaching and learning
  • Find interesting articles and pieces of research on education and learning. Share them within this group.
  • Identify novel ideas to guide our own innovations in teaching. Select ones that could be useful resources to support coaching.
  Proposed Aim #2: Move beyond opinion as the basis of evaluation
  • Use simple research methods to evaluate the impact of our innovations in teaching.
  • Create an evidence base to drive the development of our professional practice.
  • Model the process and share the results with other teachers.
  Homework: Assume we have a small budget … What resources would be of most interest and use to this group? Should we subscribe to a journal or a newspaper paywall? Are there books we should read?
  Planners out!
