
Use of Contextual Data at Oxford University

The admissions process at Oxford:
- 15 October application deadline
- 15,000 applications
- 75% of applicants take a pre-interview test in November
- Short-listing process in November
- 10,000 applicants called for interview in December
- 3,500 offers made for 3,200 places
- All decisions are made by academics


Presentation Transcript


1. Use of Contextual Data at Oxford University – Introduction Aims of the presentation: to see admissions in an Oxford context, and to explain why we moved to collecting contextual data in the way that we do (and what we were already doing).

2. The admissions process at Oxford Summary of how Oxford admissions differs from other institutions. Short-listing is based on a range of criteria: personal statement, reference, GCSEs, predicted grades, test performance and any written work. Some subjects short-list more mechanistically than others. The interview figure is at capacity. This presentation shows how we use contextual data to inform the interview short-listing stage, not the offer-making stage. It is very much a case study of one institution's approach, which will doubtless not be applicable for all.

3. Why collect contextual data?
- UCAS – firstly (2007) collecting data on parental participation (which we didn't want); secondly the looked-after-children question.
- Other institutions – some other Russell Group universities flag applicants according to school performance data, EMA entitlement or socio-economic classification; some operate local partnerships with, e.g., schools in their area.
- Internal pressure – large departments wanting advice on how to use the information alongside existing short-listing criteria: "As all offers in Oxford require an applicant to be interviewed, and more courses are reducing the numbers invited to interview, there is concern that a candidate's 'potential' will be overlooked where interview selection metrics are applied that focus on academic achievement" [internal consultation document].
- Changes to our own application forms – the Access Scheme was an additional form allowing applicants to identify themselves as 'access cases', but it required applicants and schools to know about it; it was only funded for a certain period and was discontinued in 2007. Up to 2008 entry we used the separate OAF; its final iteration was expanded to include contextual information – numbers progressing to HE, numbers achieving entry requirements, numbers receiving EMA – but there was no systematic treatment, collection or comparison of this information, which was very much at the individual tutor's discretion. For 2009 entry the OAF was also abolished (following Cambridge), so there are no additional forms.
- Going paperless – ADSS imports UCAS data into a web-based system; users can batch-print PDFs of applications by subject, college etc., and these include the information flags (example later).

4. What we decided to use … A mixture of academic and socio-economic datasets. We look at where we get them from later.

5. … and how we decided to use it ADSS is the web-based system. Reason for the caveats: ideally we wanted to say that colleges had to interview flagged candidates, but this was felt to verge on positive discrimination, so the two provisions allow a degree of flexibility. Realistically a prediction below AAA from a school that knows the candidate is applying to Oxford will not look favourable, but the test criterion is very wide. Also worth noting that both ADSS and the PDF display all five individual flags, so tutors can see exactly which flags were ticked – e.g. if a candidate is flagged on all five criteria but falls just outside the top 80%, the tutor can see this, as they can school profile data where results are lower than expected.
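As a minimal sketch only (not the real ADSS logic), the two caveats described above might be modelled as follows; the field names are invented, and the below-AAA check and bottom-20% test threshold are taken from the caveats mentioned here and on slide 9:

```python
# A sketch of the flagged-candidate interview expectation with its two caveats.
# Field names and representation are assumptions, not the actual ADSS system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    wp_flag: bool                     # overall contextual-data flag
    predicted_aaa: bool               # predicted grades of AAA or better
    test_percentile: Optional[float]  # pre-interview test percentile, if taken

def interview_expected(c: Candidate) -> bool:
    """Flagged candidates are expected to be called to interview unless
    one of the two caveats applies: a prediction below AAA, or a score
    in the lowest 20% of the pre-interview test."""
    if not c.wp_flag:
        return False   # no guarantee; normal short-listing criteria apply
    if not c.predicted_aaa:
        return False   # caveat 1: prediction below AAA
    if c.test_percentile is not None and c.test_percentile < 20.0:
        return False   # caveat 2: lowest 20% on the pre-interview test
    return True
```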

6. Example of an ADSS-produced PDF Mentioned earlier re going paperless – this is how. This is what the application PDF looks like to the tutor, either on-screen or printed out. The other side shows the statement and reference. It is a real applicant with identifying information blocked out. The contextual data section indicates the flags. The applicant is Welsh, so NA for the qualification profiles; overall flag Yes (1 of 3).

7. Example of the ADSS web view Again a little crudely done – here we are logged in as St John's College and filtering by WP flag ('A' = Access candidate). The headers are all sortable, and tutors can also see the actual scores for the school qualification profiles, i.e. the percentage achieving five A*–C grades at GCSE and the average QCA tariff qualification score. The system is available to every individual involved in admissions, with either read-only or editable user access.
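For illustration only, here is the kind of filter-and-sort the web view performs; ADSS itself is a web application, and these records, names and values are invented to mirror the columns described above:

```python
# Invented records mirroring the web-view columns; purely illustrative.
applicants = [
    {"name": "Applicant 1", "wp_flag": "A", "gcse_5ac_pct": 47.0, "avg_qca_tariff": 612.0},
    {"name": "Applicant 2", "wp_flag": "",  "gcse_5ac_pct": 93.0, "avg_qca_tariff": 885.0},
    {"name": "Applicant 3", "wp_flag": "A", "gcse_5ac_pct": 55.0, "avg_qca_tariff": 640.0},
]

# Filter by WP flag ('A' = Access candidate), as in the St John's example.
flagged = [a for a in applicants if a["wp_flag"] == "A"]

# Every header is sortable; e.g. sort by the school's average QCA tariff score.
for a in sorted(flagged, key=lambda a: a["avg_qca_tariff"]):
    print(a["name"], a["gcse_5ac_pct"], a["avg_qca_tariff"])
```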

8. Additional contextual aspect for Medicine

9. Impact modelling and validation The 2008 data modelling shows the limited impact of this approach (most candidates are picked up by existing mechanisms anyway). Of the 177, assuming an even distribution, ~35 would not have scored in the lowest 20% of the pre-interview test. The main subject areas affected are Medicine, Law, and Maths & Joint Schools – all heavily reliant on pre-interview tests (BMAT, LNAT, Maths). But Medicine and Law already have mechanisms for considering WP criteria and add borderline candidates on to their short-listed contingents anyway, so the approach can be used to inform this process rather than bring about wholesale change. In the most populous subjects this is still only 1–2 candidates per college, so it does not threaten the capacity for total interviewees. Division of labour + minimal impact = happy tutors. 2009 entry – figures mostly as expected: a rise in applications and a rise in flags. We wondered whether publicising our approach would lead to more applications from the areas likely to be flagged – but in my opinion the effect is probably statistically insignificant (has anyone done a chi-squared test?). The number not short-listed is lower than in the 2008 modelling, and:
- 110 of these had missing information, so may well include candidates who were given the benefit of the doubt on qualification profiles
- of the remaining 55, about half had predicted grades lower than we would expect
- the remaining 25–30 are mostly Medics and Mathematicians, who may well have underperformed in the test
- we are only left with a very small number with no obvious reason for not being short-listed
Current situation and next steps: an annual monitoring exercise was part of the package approved by the various committees, and following it we can change any of the metrics accordingly. A three-year review has been requested by the Educational Policy Standards Committee.
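A sketch of the kind of monitoring tally behind the breakdown above; the function arguments and example cases are invented, and the 20% test threshold is the assumption carried over from the short-listing caveats:

```python
# Tally flagged-but-not-short-listed candidates into the 2009 breakdown
# categories. Inputs and sample data are invented for illustration.
from collections import Counter
from typing import Optional

def reason_not_shortlisted(profile_missing: bool,
                           predicted_aaa: bool,
                           test_percentile: Optional[float]) -> str:
    """Bucket a candidate into the categories used in the breakdown above."""
    if profile_missing:
        return "missing information (benefit of the doubt on profiles)"
    if not predicted_aaa:
        return "predicted grades lower than expected"
    if test_percentile is not None and test_percentile < 20.0:
        return "likely underperformed in the pre-interview test"
    return "no obvious reason"

# Invented examples of flagged candidates who were not short-listed.
cases = [
    (True,  True,  None),   # missing school profile data
    (False, False, 55.0),   # prediction below AAA
    (False, True,  12.0),   # bottom 20% of the test
    (False, True,  60.0),   # no obvious reason
]
print(Counter(reason_not_shortlisted(*c) for c in cases))
```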

10. Data sources: positives and negatives
- ST/OYA – self-declared on the UCAS application; cross-referenced with our own lists.
- Postcode – the ACORN dataset. WP holds this licence for research purposes. It provides a list of ~2m postcodes and categorises each into one of five major groups – Wealthy Achievers, Urban Prosperity, Comfortably Off, Moderate Means, Hard-Pressed – each subdivided into 'types', totalling 56, of which we flag the bottom 20 (the bottom two groups). It is a very specific classification based on >400 variables, including Census demographics – like upmystreet.com. We considered using EMA data, but there are no national or regional comparison figures available.
- GCSE and A-level school profiles – two main issues. One is that the data is only available for England. Some data is available for certain school types in Wales and Northern Ireland, but it involves trawling different websites in piecemeal fashion. We would very much like comparative figures for non-English schools where the curriculum is standard (obviously trickier for Scottish qualifications, but we could still have some comparison). Can SPA help? The second issue is that the UCAS school code is not always up to date with the DCSF school code, so a few different matching processes (automated and manual) are needed.
Where information is missing and we cannot get it from elsewhere, the applicant is neither flagged nor unflagged on that criterion (so 2 of 4, etc. – benefit of the doubt, leading to more 'overall flags' than there probably should be). This means some candidates get flagged when they probably shouldn't be – e.g. a Scottish applicant gets one postcode flag but actually attends a really good school. Two big positives: everything is automated or mostly automated, and no information is sought from the applicants. Sorry to pick on Cambridge, but they replaced the CAF with a 22-page online questionnaire requiring school permission; our system uses public sources of information. And no candidates are disadvantaged – anyone identified under this scheme and then invited to interview as a result is not invited at the expense of another, non-flagged candidate (the numbers involved suggest at most 1–2 applicants per college; division of labour). The exercise is intended to catch the few students who might 'slip through the net'.
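As a rough sketch of the postcode criterion: ACORN is a licensed commercial dataset, so the table layout, the example postcodes and the assumption that the bottom two groups map to types 37–56 are all invented for illustration; the missing-data behaviour mirrors the benefit-of-the-doubt rule described above:

```python
# Hypothetical ACORN lookup: postcode -> type number (1..56).
# The real dataset is licensed; all values here are invented.
ACORN_TYPE = {
    "AB1 2CD": 4,    # e.g. a 'Wealthy Achievers' type
    "EF3 4GH": 49,   # e.g. a 'Hard-Pressed' type
}

# Flag the bottom 20 of the 56 types (the bottom two groups); the
# exact type numbers are an assumption for this sketch.
FLAGGED_TYPES = set(range(37, 57))

def postcode_flag(postcode: str):
    """Return True/False when the postcode is known; None when it is not,
    so the criterion is neither flagged nor unflagged (benefit of the doubt)."""
    acorn_type = ACORN_TYPE.get(postcode.strip().upper())
    if acorn_type is None:
        return None
    return acorn_type in FLAGGED_TYPES
```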

11. Conclusion Low impact – by this I mean that we are not reinventing the wheel. Our admissions procedures are academically demanding, and internally we would not be allowed to introduce any element of positive discrimination or social bias. As far as we can see from its early stages, the system accurately identifies access candidates, but it merely ensures they are not overlooked rather than giving them a specific advantage. It is designed as a 'softly-softly' approach that does not drastically affect existing admissions procedures. More criteria – by using both socio-economic and academic criteria we are able to monitor the process from different angles, and perhaps in future compare it with application success rates. Not offer-making – all our offers are set at AAA without exception. It may be appropriate for other institutions to give automatic interviews or lower offers to candidates identified in this way, but this was felt to be inappropriate within Oxford.
