
Presentation Transcript


  1. Production of comparative language tests: The European Survey on Language Competences. Neil Jones, Cambridge ESOL. SQA, 28 February 2013

  2. Key aims:
  The ESLC set out to:
  • provide information on the general level of foreign language knowledge of pupils
  • provide strategic information to policy makers, teachers and learners
  using the following instruments:
  • Language tests:
  • English, French, German, Italian, Spanish
  • 3 skills (Reading, Listening, Writing)
  • levels A1 to B2 of the CEFR
  • Contextual questionnaires:
  • addressing 13 language policy issues
  • for students, teachers, principals and countries

  3. Validity as inference to some "real world" [Diagram: an inference runs from the world of the test to the "real world" of language use]

  4. Inference to some "real world": a sequence of steps [Diagram: four numbered steps link test construction to test performance: (1) test/task features, (2) processes and knowledge, (3) learner features, (4) the "real world" (target situation of use); the test score serves as the measure]

  5. Evaluation, generalization, extrapolation, alignment [Diagram: the same inference chain annotated with validation stages, running from context-specific to context-neutral]
  • Test construction (context validity, theory-based validity): what to observe, and how?
  • Evaluation (scoring validity): how can we score what we observe?
  • Generalization (measurement validity; scale construction, measurement): are scores consistent and interpretable?
  • Extrapolation: does the test score reflect the candidate's actual ability?
  • Alignment to framework levels (standard setting, interpretation): how does the specific learning/testing context relate to a more general proficiency framework?

  6. Approach to developing the language testing framework
  • Identify the language testing objectives of the ESLC.
  • For each skill, identify test content and testable subskills derived from:
  • a socio-cognitive model of language proficiency
  • language functions or competences salient at levels A1 to B2 of the CEFR
  • Identify appropriate task types to test these subskills.
  • Develop specifications, item writer guidelines and a collaborative test development process, shared across languages, in order to produce comparable language tests.

  7. Common European Framework model of language use/learning • “…the actions performed by persons who as individuals and as social agents develop a range of competences, both general and in particular communicative language competences. They draw on the competences at their disposal in various contexts under various conditions and constraints to engage in language activities involving language processes to produce and/or receive texts in relation to themes in specific domains, activating those strategies which seem most appropriate for carrying out the tasks to be accomplished. The monitoring of these actions by the participants leads to the reinforcement or modification of their competences.” (Council of Europe 2001:9, emphasis in original).

  8. The CEFR's model of language use and learning [Diagram: the language learner/user, drawing on knowledge and processes, applies strategies to carry out a task on a topic (situation, theme) as a language activity within a domain of use; monitoring and assessment feed back into the learner's competences]

  9. Test tasks reflect TLU tasks: an interactional view [Diagram: test tasks mirror tasks in the target language use (TLU) domain; the learner/user brings the same strategies, processes and knowledge to both] The learner's engagement with tasks has interactional authenticity, so test performance enables inference to performance in the TLU domain.

  10. A model for reading (after Weir 2005) [Diagram: visual input is handled by a central processing core, drawing on knowledge stores and regulated by metacognitive mechanisms/strategies]
  • Goal setter: selecting an appropriate type of reading
  • careful reading, local (understand the sentence) or global (comprehend the main idea(s), the overall text, or overall texts)
  • expeditious reading, local (scan for specifics) or global (skim for gist; search for main ideas and important detail)
  • Central processing core: word recognition, lexical access, parsing, establishing propositional meaning at clause and sentence levels, inferencing, building a mental model (integrating new information, enriching the proposition), creating a text-level structure (an organised representation of the text or texts)
  • Knowledge: lexicon (form: orthography, phonology, morphology; lemma: meaning, word class), syntactic knowledge, general knowledge of the world, topic knowledge, text structure knowledge (genre, rhetorical tasks), and the meaning representation of the text(s) so far
  • Monitor: goal checking, with remediation where necessary

  11. Domains of language use

  12. Features of approach
  • Implementation of construct: subskills mapped to specific task types
  • Reading and Listening: objectively marked; Writing: subjectively marked
  • Four task development stages: Pilot (2008), Pretesting (2009), Field Trial (2010), Main Study (2011)
  • Task adaptation across languages
  • Cross-language vetting

  13. Reading – an A1 task
  English version: You will read a notice about a cat. For the next 4 questions, answer A, B or C.
  Leo is lost. He's my little cat. He's white with black paws. He's small and very sweet. He has brown eyes. He wears a grey collar. He didn't come home on Monday and it's Thursday today. That's a long time for a little cat! Leo often sits on top of the houses near here between Smith's baker's shop and King Street. If you find him in your garden or under your car, please telephone me immediately. Please note – Leo doesn't like it when people pick him up, and he doesn't like milk. Thank you for your help! Sophie Martin tel: 798286
  Spanish version (the parallel adaptation of the same task): Busco a mi gato Leo. Ha desaparecido. Es blanco con las patas negras. Es pequeño, tiene 7 meses y es muy bonito. Tiene los ojos marrones. Lleva un collar gris. Le gusta sentarse en los tejados de las casas que están entre la panadería García y la calle de la Victoria. No veo a Leo desde el lunes y hoy es jueves. Es mucho tiempo para un gato tan pequeño. Leo no bebe leche y no come pan. Si lo ves cerca de tu casa o debajo de un coche, llámame. Gracias por tu ayuda. Sofía Alonso 626 537 548

  14. Reading – an A1 task

  15. Marking of Writing
  • Responsibility of countries
  • Central "trickle-down" training sessions held for national coordinators
  • A proportion of multiple marking in each country: a check on in-country rater agreement
  • But all multiple-marked scripts also centrally marked: an additional check on leniency/severity
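The two checks on the last two bullets can be sketched in a few lines. This is an illustrative Python sketch, not the ESLC's actual analysis: the scripts, marks and function names are invented, and a real study would typically also report a chance-corrected statistic such as Cohen's kappa.

```python
# Sketch of the double-marking checks: in-country agreement between two
# national raters, and a leniency/severity comparison against central marks.

def exact_agreement(marks_a, marks_b):
    """Proportion of scripts on which two raters award the same band."""
    assert len(marks_a) == len(marks_b)
    same = sum(1 for a, b in zip(marks_a, marks_b) if a == b)
    return same / len(marks_a)

def mean_severity(national, central):
    """Mean national-minus-central difference: positive means the national
    raters are more lenient than the central markers, negative more severe."""
    return sum(n - c for n, c in zip(national, central)) / len(national)

# Hypothetical marks for five multiple-marked scripts in one country
rater_1 = [3, 4, 2, 5, 3]
rater_2 = [3, 4, 3, 5, 3]   # in-country second marking
central = [2, 4, 2, 4, 3]   # central re-marking of the same scripts

print(exact_agreement(rater_1, rater_2))  # prints 0.8
print(mean_severity(rater_1, central))    # prints 0.4 (slightly lenient)
```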

  16. [Diagram: in each of Countries A, B and C, scripts receive single marking or multiple marking; the multiple-marked scripts from every country are also marked centrally by a common team of central markers]

  17. A. Communication
  • how many of the content points are dealt with (clearly)
  • how well the points are expanded
  • style – register
  B. Language
  • coherence
  • vocabulary
  • cohesion
  • accuracy

  18. 1 3 ~~~~~~~~~ ~~~~~~~~~ ~~ ~~~~~~~~~ ~~~~~~~~~ ~~ 2 lower higher

  19. [Diagram: the same comparison extended to a five-band scale: exemplars anchor bands 1, 3 and 5, with scripts placed at bands 2 and 4 between the lower and higher exemplars]

  20. Item response theory and item-banking [Diagram: a single measurement scale, running from about 30 to 90, with CEFR levels A1, A2 and B1 marked on it and Tests 1, 2 and 3 pitched at different regions]
  • Standards consistently applied
  • Tests at appropriate level
  • Learners located on scale
  • Item bank links all levels
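The core of the item-banking idea is that one latent scale underlies all the tests. As a minimal illustration (the item difficulties and response pattern below are invented, and the ESLC's actual calibration is far more elaborate), here is a Rasch-model sketch that locates a learner on the common logit scale from their responses to calibrated items:

```python
import math

def p_correct(theta, b):
    """Rasch model: probability that a learner of ability theta answers an
    item of difficulty b correctly (both expressed on the same logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, difficulties, lo=-6.0, hi=6.0):
    """Maximum-likelihood ability estimate by bisection: at the MLE the
    expected score equals the observed score. (Undefined for all-correct
    or all-wrong patterns, which real systems handle separately.)"""
    observed = sum(responses)
    for _ in range(60):
        mid = (lo + hi) / 2
        expected = sum(p_correct(mid, b) for b in difficulties)
        if expected < observed:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical calibrated item difficulties drawn from the bank (logits)
bank = [-1.5, -0.5, 0.0, 0.8, 1.6]
answers = [1, 1, 1, 0, 0]          # this learner's right/wrong pattern
theta = estimate_ability(answers, bank)
```

Because every item's difficulty is expressed on the same scale, learners who sat different tests (and items from all levels) end up located on one measurement continuum, which is what lets standards be applied consistently across tests.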

  21. Targeted language testing [Diagram: a routing test directs each learner to one of three overlapping targeted tests: A1-A2, A2-B1 or B1-B2]
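The routing principle reduces to a small lookup: a short routing test assigns each student one of three overlapping targeted tests, so nobody sits a test far above or below their level. The cut-scores and labels below are invented for illustration; the ESLC's actual routing rules are not specified here.

```python
# Illustrative routing logic: score bands on a hypothetical 0-30 routing
# test map to the three overlapping targeted test levels.
ROUTES = [
    (0, 10, "A1-A2 test"),
    (11, 20, "A2-B1 test"),
    (21, 30, "B1-B2 test"),
]

def route(routing_score):
    """Return the targeted test for a given routing-test score."""
    for lo, hi, test in ROUTES:
        if lo <= routing_score <= hi:
            return test
    raise ValueError("score outside routing-test range")

print(route(7))    # prints A1-A2 test
print(route(25))   # prints B1-B2 test
```

The overlap between adjacent targeted tests matters: it gives the item bank common items across levels, which is what links the separate tests onto the single measurement scale of the previous slide.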

  22. Test design

  23. Standard setting to the CEFR
  • Standard reference: the CoE Manual for relating language exams to the CEFR:
  • http://www.coe.int/t/dg4/linguistic/manuel1_en.asp
  • Jones, N (2009) A comparative approach to constructing a multilingual proficiency framework: constraining the role of standard setting:
  • http://www.coe.int/t/dg4/linguistic/Proceedings_CITO_EN.pdf
  • See also the CoE Manual for language test development and examining (ALTE):
  • http://www.coe.int/t/dg4/linguistic/ManualtLangageTest-Alte2011_EN.pdf

  24. Standard setting to the CEFR • My conclusions: • Build on what you already know; • Performance skills are a more practical target for standard setting judgment than indirectly observable, objectively marked skills; • Comparative judgments are easier than absolute judgments, and therefore ranking may offer more than rating; • In a multilingual framework it is essential to minimize the role of subjective judgment.
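The preference for ranking over rating can be made concrete: judges' rankings decompose into pairwise "X reads better than Y" comparisons, and a Bradley-Terry model then recovers a common scale with no absolute level judgments at all. A toy sketch with invented scripts and judges (this names one standard comparative-judgment technique; it is not a description of the ESLC's actual procedure):

```python
from itertools import combinations
from collections import defaultdict

# Best-first rankings of three scripts by three judges (invented data)
rankings = [
    ["s3", "s1", "s2"],
    ["s3", "s2", "s1"],
    ["s1", "s3", "s2"],
]

# Decompose each ranking into directed pairwise wins
wins = defaultdict(int)
for order in rankings:
    for better, worse in combinations(order, 2):
        wins[(better, worse)] += 1

# Bradley-Terry fixed-point iteration: a script's strength is its win count
# divided by its comparison exposure, weighted by current strengths.
scripts = sorted({s for order in rankings for s in order})
strength = {s: 1.0 for s in scripts}
for _ in range(200):
    new = {}
    for s in scripts:
        w = sum(n for (a, b), n in wins.items() if a == s)
        denom = sum(n / (strength[a] + strength[b])
                    for (a, b), n in wins.items() if s in (a, b))
        new[s] = w / denom if denom else strength[s]
    total = sum(new.values())                       # normalise the scale
    strength = {s: v * len(scripts) / total for s, v in new.items()}

ranking = sorted(scripts, key=strength.get, reverse=True)
print(ranking)   # prints ['s3', 's1', 's2']
```

Seeding such a comparison set with scripts in different languages is what makes a ranking study usable for cross-language alignment: the fitted strengths place all scripts on one scale, and only then are level boundaries drawn on it.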

  25. Cross-language alignment
  • In the ESLC an alignment study was possible for Writing:
  • a ranking study, cf. Sèvres (2008) for Speaking

  26. Ranking approach to cross-language comparison (Speaking, CIEP 2008) [Chart: the standard set from rankings plotted against the levels from rating, A1 to C1 on both axes]

  27. ESLC Writing alignment: five languages on a single scale [Chart: vertical axis shows level]

  28. First target language (Skills averaged)

  29. Second target language (Skills averaged)

  30. Asset Languages link between GCSE and CEFR

  31. GCSE grades and CEFR levels

  32. http://www.surveylang.org
