
Writing assessment items and instructional texts for English Second Language speakers

Marise Ph. Born, Erasmus University Rotterdam, The Netherlands

&

Cheryl Foxcroft, Nelson Mandela Metropolitan University, South Africa

[email protected], [email protected]


Part 1: General Challenges

  • English is a global language, widely used as a medium in business and higher education.

  • Most instructional texts and journals are published in English, and many higher education institutions use English as the main medium of instruction and assessment. Given the high cost of adapting tests, together with the argument that a certain level of English proficiency is needed in many countries to function effectively in the workplace, many assessment measures (tests) are available and/or administered only in English. This can pose difficulties for people whose first language is not English, as English can be confusing at times.


Part 1: General Challenges (cont.)

  • The same word can mean different things. For example, “tire” could mean that you are getting tired, or, if the American spelling is used, the rubber covering of a motor-car wheel (and the differences between British and American spellings of a word add to the confusion).

  • A word used as part of a term can take on a different meaning. For example, “random” usually refers to a chance/accidental/haphazard happening, but “random sampling” is not haphazard; it is a systematically planned probability sampling technique that gives everyone an equal chance of being selected (see the sketch below).
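To illustrate the statistical sense of the term, a minimal sketch of simple random sampling in Python (our own illustration; the population and sample sizes are invented):

```python
import random

# A population of 100 people, numbered 1-100 (invented for illustration).
population = list(range(1, 101))

# Simple random sampling: every person has an equal chance of selection.
sample = random.sample(population, 10)
print(sample)
```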


Part 1: General Challenges (cont.)

  • Unfortunately, many writers of texts and developers of tests have English first-language speakers in mind when they pen (write) texts and test items, yet many second- and third-language (etc.) English speakers will want to derive meaning and knowledge from the texts, or their futures may depend on answering the test items correctly. But do such texts and test items not immediately put non-native English speakers at a disadvantage?


Part 1: General Challenges (cont.)

  • According to Nell (2000), in a multilingual country such as South Africa, “language is generally regarded as the most important single moderator of test performance”. This is because performance on assessment measures could be the product of language difficulties rather than ability factors if a measure is administered in a language other than the test-taker’s home language (in Foxcroft & Roodt, 2005, p. 230).


Part 1: General Challenges (cont.)

  • “Language rarely constitutes the trait being assessed directly. Instead, language commonly is used as a vehicle of communication between the test (i.e., through written text or presented orally by an examiner) and the person taking the test. Although a test may not be designed to assess language, scores may be attenuated by a test-taker’s deficiencies in listening or reading … or in speaking or writing. These and other qualities that may artificially depress test performance have the potential to promote construct irrelevant variance and thus decrease the test’s validity. Efforts to minimize the impact of language-related qualities, including reading, when designing or adapting tests that assess other traits are needed” (Oakland & Lane, 2004, p. 242).


Readability defined

  • Among other things, there is a relationship between the reading difficulty of a test and the extent to which construct-irrelevant variance could arise. Thus, knowledge of the methods used to assess reading difficulty may lead to the development/adaptation of better texts and tests for non-native English speakers.

  • “Readability is the sum total (including the interactions) of all those elements within a given piece of printed material that affect the success a group of readers have with it. The success is the extent to which they understand it, read it at an optimum speed, and find it interesting” (Dale & Chall, 1949, p. 23).


Factors that contribute to text difficulty (Oakland & Lane, 2004, p. 248)

[Diagram: text factors (syntax, vocabulary, idea density, language) and reader factors (reading fluency, background knowledge, cognitive load, motivation & engagement) jointly determine text difficulty.]


Part 2: Readability Study

Readability of Instructional Material in English for first- and second-language readers

Marise Ph. Born, Erasmus University Rotterdam, The Netherlands

&

Cheryl Foxcroft, Nelson Mandela Metropolitan University, South Africa


Overview

  • Context of study: ITC ORTA-project

  • Readability: matching reader and text

    • Factors determining readability

    • First- and second-language readers

    • Measuring readability

  • Purpose of study

  • Study among South African students: Method and Results

  • Implications for follow-up research and for practice


On-Line Readings in Testing and Assessment (ORTA) Project

Cheryl Foxcroft (South Africa) and Marise Born (The Netherlands)

Goal: To provide on-line, free-of-charge readings on aspects of testing and assessment to students and scholars from developing countries in particular.

How: Inviting knowledgeable authors in the domain of testing and assessment to provide extremely reader-friendly texts.

  • No royalties involved. Authors contribute to advance knowledge across the globe.

  • Framework of topics, invitation letter, guide for authors available from [email protected], [email protected]

  • ORTA is a dynamic project, where texts are added to the website as they become available and the framework of topics may be expanded.


Readability

  • How to determine the ORTA instructional material’s readability?

  • One possibility: the Flesch Reading Ease formula, derived from average sentence length and syllables per word. The material receives a reading-ease score. This measures surface-structure characteristics (see the sketch after this list).

  • Problem: assumption of readability as an inherent property of the text.

  • Runs counter to the meaning of readability as a match between reader and text:

  • How do different readers respond to a particular text?
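A minimal sketch of the Flesch Reading Ease computation (the coefficients are the published formula; the syllable counter is a rough heuristic of our own, not part of the slides or the study):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups; drop one for a silent final 'e'.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text: str) -> float:
    # Flesch (1948): 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    # Higher scores mean easier text; 60-70 is commonly read as plain English.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```

Note that the score uses nothing but sentence length and syllable counts, which is exactly why it treats readability as a property of the text alone.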


Readability cont’d

We can distinguish between:

  • collection of individuals with given interests and reading skills

  • collection of reading materials, differing in content, style and complexity

    Match of both sides determines the extent to which the material can be read with profit.

    Profit: not only comprehensible but also compelling information!


Factors determining readability

Readers:

  • Goals: interest in and motivation to read the text
  • Background knowledge of the topic
  • Reading fluency
  • Language: knowledge of the words in the text

Instructional text:

  • Surface-structure features of the text (paragraphing, titles, sentence length, text aids such as tables and graphs…)
  • Word ease/frequency
  • Causal structure of, and inferences required in, the text


First- and second-language readers

Have FL and SL reader differences been an issue at all in the readability literature until now?

Gap in research:

  • Linguists: Language Learning, Modern Language Journal, Foreign Language Annals

  • Psychologists: how can instructional texts be improved?

  • Marsh et al. studies (2000; 2002) among Chinese students: second-language instruction in the early high-school years had negative effects on academic self-concept and academic achievement, particularly in non-language subjects.


Measuring readability

Many measures have been used (Wagenaar et al., 1987):

Objective ones:

  • Flesch score (surface-level feature)
  • reading time, eye movements during reading
  • recall tests, sentence-completion tests
  • number of required inferences per 100 words (structure-level feature), etc.

Subjective ones:

  • perceived difficulty of the text, assessment of text readability
  • compellingness, etc.


Purpose of present study

St Paul’s first letter to the Corinthians 14:9:

“Except ye utter by tongue words easy to be understood how shall it be known what is spoken?”

Determining readability differences for first- and second-language readers of English university-level instructional texts.


Participants: Undergraduate students at the NMMU South campus

Characteristics of Sample (N = 59):

  • 3rd-year students in an introductory psychological assessment course


Stimulus Material

  • Given a passage to read on history of psychological testing (6 pages)

  • Some statistics:

    • Counts: words = 1847; characters = 10433; paragraphs = 25; sentences = 78

    • Averages: sentences per paragraph = 4.5; words per sentence = 22.5

    • Readability: passive sentences = 25%; Flesch Reading Ease = 20.4 (the mean should be 60-70); grade level = 12 (a sketch of how such statistics can be computed follows this list).

  • Why we did not use these as objective measures: readability measures not sensitive to academic writing style
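A minimal sketch of how such surface statistics can be reproduced for any passage (the tokenisation rules are our own simplifying assumptions; word-processor readability statistics, which the counts above resemble, use more elaborate rules, so results may differ slightly):

```python
import re

def surface_stats(text: str) -> dict:
    # Count words, sentences and paragraphs, then derive the averages
    # reported on the slide (words per sentence, sentences per paragraph).
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "words": len(words),
        "sentences": len(sentences),
        "paragraphs": len(paragraphs),
        "words_per_sentence": round(len(words) / len(sentences), 1),
        "sentences_per_paragraph": round(len(sentences) / len(paragraphs), 1),
    }

sample = ("Testing has a long history.\n\n"
          "It began in ancient China. Civil servants were examined.")
print(surface_stats(sample))
```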


Procedure

  • Two classes – counterbalanced order of presentation.

  • Class 1 (n=21).

    • Pretest.

    • Cloze test.

    • Read text (stimulus material).

    • Answer questions regarding word ease, word meaning, drawing inferences, and rating of surface features.

    • Post-test

  • Class 2 (n=38).

    • Pretest.

    • Read text (stimulus material).

    • Answer questions regarding word ease, word meaning, drawing inferences, and rating of surface features.

    • Cloze test.

    • Post-test


Results

The pretest correlates significantly with some of the readability measures (e.g., perceived difficulty = .3 and cloze = .6).
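For readers unfamiliar with how such correlations are obtained, a minimal sketch (the score arrays below are invented for illustration and are not the study's data):

```python
# Pearson correlation between pretest scores and a readability measure.
from scipy.stats import pearsonr

# Invented scores for eight students (not the study's data).
pretest = [12, 15, 9, 18, 14, 11, 16, 13]
cloze   = [20, 26, 15, 30, 24, 18, 27, 21]

r, p = pearsonr(pretest, cloze)
print(f"r = {r:.2f}, p = {p:.3f}")
```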


Results continued: Post-test

All groups showed significant improvement, but the ESL-Xhosa group showed the least.

There were significant differences among the three language groups: the EFL mean exceeded the ESL-Xhosa mean.


Results continued: Cloze Test

NB: Only half the sample could complete the task.

For example – A further milestone in the development of ……… psychological assessment came from the work of …….., a professor of philosophy in Germany. …….. to McReynolds (1986), Thomasius made two …….. contributions to the emerging field of assessment.
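A cloze test of this kind is usually constructed by deleting every n-th word from a passage; a minimal sketch (the every-5th-word rule is a common convention assumed here, not a detail taken from the study):

```python
import re

def make_cloze(text: str, every: int = 5) -> str:
    # Replace every n-th word with a blank, keeping punctuation attached.
    words = text.split()
    for i in range(every - 1, len(words), every):
        words[i] = re.sub(r"[\w'-]+", "……", words[i], count=1)
    return " ".join(words)

print(make_cloze("A further milestone in the development of modern "
                 "psychological assessment came from the work of Thomasius, "
                 "a professor of philosophy in Germany."))
```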


Results continued: Perceived Difficulty (Word Ease)

[Charts: EFL, ESL-Afrikaans & ESL-Xhosa groups; EFL versus ESL comparison.]


Results continued: Word Meaning


Results continued: Rating of Importance of Visual/Surface Features


Results continued: Causal Inferences


Example: Implications for practice

Differences in causal inferences.

  • Implication: enhance instructional texts by leaving the inferences in the text, i.e., spelling them out rather than leaving them for the reader to make (Britton & Gulgoz, 1991).

    • Do not use a different word when mentioning a concept for the second time.

    • Realize that the automated inferences a subject-matter expert (SME) makes while writing an instructional text are the very ones a novice must still work through.

    • Training writers: is this possible (Britton et al., 1989; T.M. Duffy et al., 1989)?

    • Implication: enhance underdeveloped inference-making skills in certain ESL groups.


Implications for follow-up research and writing of instructional texts

  • Investigate validity of readability measures:

    • Is the cloze test measuring something different in the ESL-group?

  • Do not treat all ESL people as the same

  • Use further measures of readability (reading rate, free-recall tests).

  • Use covariates (motivation, interest: Matthew effect (Stanovich, 1986)).

  • Use experimental design: several enhanced conditions (improved causal structure, surface structure, word difficulty…)

  • Information on writing instructional texts for ESL students is available from Marise and Cheryl.

