
TESL 4340 -- Evaluating CALL Packages: Curriculum/Pedagogical/Linguistics



Presentation Transcript


  1. TESL 4340 -- Evaluating CALL Packages: Curriculum/Pedagogical/Linguistics Dr. Henry Tao GUO Email: henryguo@uic.edu.hk Office: B 418

  2. Outline • Part one: Language program evaluation (Lynch 1996: 1-9) • Part two: Evaluating CALL (Chapelle 2001: 44-94)

  3. Part one: Language program evaluation • Critical issues • Context-adaptive model (CAM)

  4. Evaluation • Evaluation here is defined as the systematic attempt to gather information in order to make judgments or decisions.

  5. As such, evaluative information can be gathered through different methods such as • Observation • Questionnaire • Interview …

  6. Why CALL evaluation? • Internally, to better a program • Externally, to justify program funding

  7. Context Adaptive Model (CAM) • Who needs the information? And what are the goals of evaluation? What are the roles of evaluators? • Contextual questions • Which aspect of the program should the evaluation focus on? • Data collection and analysis • Evaluation report (Adapted from Lynch 1990a)

  8. Who needs the information? Audience • Who is requesting the evaluation? • Who will be affected by the evaluation? Stakeholders (or clients) & Peripheral audience

  9. What are the goals of evaluation? • Why is the evaluation being conducted? • What information is being requested and why? The goals of evaluation may be determined by the audience

  10. What is the role of the evaluator? • The role of the evaluator is determined by the particular audience and their goals • Someone providing consultation • An expert standing in judgment • A collaborator in program development • A decision-making facilitator

  11. Hiring external evaluators for greater objectivity or internal evaluators for improving the curriculum?

  12. Contextual questions 1. Is there a comparison group? 2. Are there reliable and valid measures of language skills? 3. Are there various types of evaluation expertise? 4. Timing of the evaluation 5. How students are admitted into the program

  13. 6. What the program students are like 7. What the program staff are like 8. Size and intensity of the program 9. Instructional materials and resources 10. Perspective and purpose of the program 11. Social and political climate

  14. Preliminary thematic framework (PTF) • Where should the evaluator begin? • What aspects of the program should the evaluator investigate in detail?

  15. An example of PTF developed for an EST reading program 1. Effects of focusing instruction on reading only 2. Effects of focusing instruction on reading skills and strategies 3. Effects of using authentic reading texts 4. Feasibility of using students’ mother tongue versus target language for instruction

  16. 5. Availability of classrooms 6. Feasibility of using a ‘modified adjunct model’ approach 7. Feasibility and effects of conducting classroom-centered research 8. Levels of student proficiency in English upon entering the program (Adapted from Lynch 1990a)

  17. Data collection design/system • What type of data needs to be gathered: quantitative, qualitative, or both? • What will be the best method for gathering the data? • The contextual questions can be used to determine which type of data collection design is possible.

  18. Data collection and analysis • How is data collection to be conducted? • How are the results to be interpreted?

  19. In the case of quantitative designs, have the assumptions of the design and statistical models been met? • In the case of qualitative designs, have the procedures for data gathering been portrayed accurately, and have alternative interpretations of the data been pursued?

  20. Evaluation report • The evaluator must be sensitive to the audience and goals of the evaluation • The social and political climate dimension of the context needs to be considered carefully at this stage

  21. The critical issue is how to communicate the findings of the evaluation honestly and successfully. • Prepare multiple reports or express the findings in different ways, depending on the intended audiences.

  22. Part two: Evaluating CALL • The problem of instructed SLA • Principles for CALL evaluation • Judgmental evaluation of CALL • Empirical evaluation of CALL

  23. The problem of instructed SLA • Cognitive conditions for SLA • Social-affective conditions for SLA • Other factors

  24. Cognitive conditions for SLA • Choose a range of target structures • Choose tasks which meet the utility condition • Select and sequence tasks to achieve balanced goal development • Maximize the chances of focus on form through attentional manipulation • Use cycles of accountability

  25. Social-affective conditions for SLA • Willingness to communicate (WTC) • The desire to communicate with a particular person • Communicative self confidence at that particular moment • Interpersonal motivation • Intergroup motivation

  26. Self confidence • Intergroup attitudes (e.g., integrativeness) • Social situation (i.e., features of context affecting communication) • Communicative competence • Intergroup climate • Personality

  27. Other factors • Individual differences in cognitive characteristics of learners due to, for example, age or cognitive style. • Learning situation such as the effects of task choice on teachers and learners and others who may be involved with the task • Practical factors such as the available resources

  28. Principles for CALL evaluation • Evaluation of CALL is a situation-specific argument • Judgmental analysis of software and planned tasks and empirical analysis of learners’ performances

  29. Criteria should come from theory and research on second language teaching • Criteria should be applied in view of the purpose of the task • Language learning potential should be the central criterion in evaluation of CALL

  30. Judgmental and empirical analyses

  31. Criteria from theory and research on SLA • Language learning potential • Learner fit • Meaning focus • Authenticity • Positive impact • Practicality

  32. Criteria applied based on task purpose • These criteria for CALL appropriateness need to be applied in view of the purpose of a CALL task. Moreover, tasks may have different purposes at various stages of instruction (Doughty & Williams, 1998). Whatever the goal of the CALL task, however, evaluation of the task requires that it have a stated purpose.

  33. The centrality of language learning • Even though the importance of each of the six criteria may vary depending on the purpose of the tasks, language learning potential should be considered the most critical for CALL activities

  34. Judgmental evaluation of CALL

  35. Cases (Chapelle 2001: 58-67) • Computer-assisted classroom discussion • Microworld • Text analysis • Storyboard • Concordancing

  36. Summary

  37. Empirical evaluation of CALL • Why? ‘Students are often doing something very different from what [language teachers] assume they are doing’ (Hosenfeld, 1976: 123). In other words, it is necessary to identify the observable data that provide evidence of CALL qualities.

  38. How? Look for evidence

  39. Language learning potential • Focus on form • Modified interaction • Modified output

  40. Learner fit • Level of linguistic difficulty • Individual differences

  41. Meaning focus • Effects of meaning-based instruction • Assessing engagement with meaning

  42. Authenticity • Comparing CALL with non-CALL activities

  43. Positive impact

  44. Practicality • Given the role of resources for the success of CALL, 1) some formal mechanism needs to be in place to monitor adequacy, and 2) an argument about CALL appropriateness should include a statement about sufficiency of resources.

  45. Conclusion • Evidence concerning the six ideal qualities of a particular CALL task needs to be combined to form an evaluative argument about the appropriateness of a CALL task for particular learners at a given point in time.

  46. Integrated research is needed to examine the types of CALL activities and to seek evidence about CALL appropriateness in particular settings.
