Giving useful feedback on open questions in the law faculty The development of “cases” and a “supported-open” question type
Content • Problem • Beyond closed exercises: typology of exercise types • Two possible solutions • Strategy 1: decision tree or “case” • Strategy 2: open supported • Quantitative & qualitative evidence • Work in progress
1. Problem How can we give useful feedback on open questions (with many correct answers) in an online (language) learning environment?
Challenges: most exercise and test platforms focus only on closed questions (with a very limited number of possible answers) for which corrective feedback is offered. They also include an open question type (e.g. Blackboard: essay question) that allows for free-text answers, but without corrective feedback. -> Goal: go beyond closed exercises while keeping corrective feedback.
Why go beyond closed exercises? To allow for more complex cognitive activities, which is crucial in CAL: • go beyond drill & practice • allow for skills-oriented tasks with a focus on meaning (and not only on form) • essential for intermediate and advanced learners
All results have been integrated into our generic exercise and test platform, used to create a considerable number of language learning and testing environments • at K.U.Leuven, used with a single sign-on (SSO) web service from within the Blackboard LMS • also fully SCORM compliant e.g. http://www.franel.eu, a free online learning environment for French and Dutch with more than 16,000 registered users
Context of this presentation: 2 research projects granted by K.U.Leuven OI-project Jurim@tic (2005-2007) OI-project Juriflex (2007-2009) Prof. Bernard Tilleman, director, law faculty, Prof. Piet Desmet, co-director, faculty of arts/linguistics, Isabelle Demortier & Bart Loosvelt, lawyers, junior researchers, Bert Wylin, educational technology, Televic Education Eddy Demeersseman, educational scientist, educational support, K.U.Leuven Campus Kortrijk
Focus of these projects: • The electronic revolution seemed to have more or less passed by the Law Faculty • One of the reasons is that questioning in law school is far more open-ended • The typical educational approach of the Law Faculty is to work with cases -> How can we reduce the complexity of law questioning and working with cases so that it can be handled in an online learning environment? -> More generic question: how to integrate open questions into an e-learning environment
These 2 projects focus on CALI (Computer Assisted Legal Instruction), but are transposable to other domains, like CALL: 1) CLIL (Content and Language Integrated Learning): one learns a subject through the medium of a foreign language 2) TBL (Task-Based Learning): goal-oriented real-life tasks focusing on meaning (not on form) 3) Comprehension activities (reading and listening): meaning-focused questions 4) …
2. Beyond closed exercises: typology of exercise types Parameters • input from learner • number of correct answers • level of freedom • possible answers given beforehand • output software • type of correction • reliability of automated correction
3. Two possible solutions Strategy 1: reformulate an open question (e.g. case) as a decision tree with a series of closed or half-closed exercises (allows for automatic corrective feedback) Strategy 2: create a supported open question type that allows for half-automated correction and feedback
4. Strategy 1: decision tree or “case” Could open questions like cases be seen as decision trees with branches? • one case reformulated into multiple connected questions • answer path is clarified • The path to knowledge is as important as the knowledge itself • Get the right answer for the right reason
Example Several questions must be answered if a Belgian statutory regulation is claimed to be in conflict with the Constitution: • Which norm has to be tested? • Which norm can serve as a basis for the examination? • Which court is competent for examining this conflict? • Is there a time limit to be observed? • Which procedure has to be followed? • What is the result of this examination (abeyance or nullification)? • Etc. These questions are the path to the solution of the conflict.
Technical solution: “case” • a case is not really a question type • it is a complex or multiple question type • it links several items together in one case • one overview is provided for the entire case • the overview is optional • the overview is progressive
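The decision-tree idea above can be sketched in a few lines: a case is a chain of linked questions, and the learner's answer path is tracked so feedback can target the exact branch where the path breaks. This is a minimal illustrative sketch, not the platform's actual data model; the names (`CaseNode`, `walk_case`) and the two-question fragment of the constitutional-review example are invented for illustration.

```python
class CaseNode:
    """One closed or half-closed question inside a case."""
    def __init__(self, prompt, answers, feedback=""):
        self.prompt = prompt
        self.answers = answers      # maps accepted answer -> next CaseNode (None at a leaf)
        self.feedback = feedback

def walk_case(root, chosen):
    """Follow the learner's chosen answers through the tree.
    Returns the path taken and whether the whole path was completed,
    so corrective feedback can be attached at the first wrong branch."""
    path, node = [], root
    for answer in chosen:
        if node is None or answer not in node.answers:
            return path, False      # wrong branch: stop here and give feedback
        path.append((node.prompt, answer))
        node = node.answers[answer]
    return path, node is None       # True only if a leaf was reached

# A two-step fragment of the constitutional-review case (illustrative):
leaf = CaseNode("What is the result of this examination?",
                {"nullification": None})
root = CaseNode("Which court is competent for examining this conflict?",
                {"Constitutional Court": leaf})
path, done = walk_case(root, ["Constitutional Court", "nullification"])
```

Because the path itself is recorded, “get the right answer for the right reason” becomes checkable: a learner who reaches the right leaf via a wrong branch is still stopped at that branch.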
“Case” allows the integration of all existing exercise types + the development of 2 new half-closed exercise types: 1) connect: making logical relations explicit 2) select text: selection of relevant text fragments
Connect • connect is a half-closed exercise type (possible answers are not given beforehand) • horizontal or vertical matching in 2 stages • 1st stage: match elements • 2nd stage: define the relations • with several feedback options
Select text • half-closed (possible answers not given, limited level of freedom) • mark the keyword(s) in a given text (sentence or paragraph) • define ranges for selection • ranges don’t count • linked keywords • grouped keywords • feedback
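The select-text scoring described above can be sketched as a span check: the teacher defines character ranges that count as keywords, and each span the learner marks is scored against them. This is a hypothetical sketch of the idea, not the platform's implementation; the function name and the example ranges are invented.

```python
def score_selection(keyword_ranges, selected_spans):
    """keyword_ranges: teacher-defined (start, end) character ranges that count.
    selected_spans: the learner's marked (start, end) spans.
    Returns (hits, misses): spans inside a keyword range vs. outside all of them."""
    hits, misses = 0, 0
    for start, end in selected_spans:
        inside = any(ks <= start and end <= ke for ks, ke in keyword_ranges)
        hits += inside
        misses += not inside
    return hits, misses

text = "Which norm can serve as a basis for the examination?"
# teacher marks "norm" (chars 6..10) and "basis" (chars 26..31) as keywords
ranges = [(6, 10), (26, 31)]
# learner selects "norm" (correct) and "Which" (not a keyword)
hits, misses = score_selection(ranges, [(6, 10), (0, 5)])
```

Linked or grouped keywords would add a layer on top of this (e.g. requiring all members of a group to be selected), which is omitted here for brevity.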
“Case”: useful, but … • The level of freedom of the learner input is limited • Clarifying the path to knowledge is interesting in the first phase of the learning path, but less so afterwards -> strategy 2: supported open
5. Strategy 2: supported open • open question with free learner input, with due date • generation of feedback on the basis of: • model answer • keyword matching • white list (+ score) • black list (0 or – score) • negations (and range)
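The keyword-matching idea behind supported open questions can be sketched as list-based scoring: white-list words add to the score, black-list words zero or subtract from it. This is a minimal sketch of the principle only; the word lists, weights, and function name are invented examples, not the project's actual data, and the negation/range handling mentioned above is omitted.

```python
import re

def score_answer(answer, white_list, black_list):
    """Score a free-text answer by keyword matching.
    white_list / black_list map keyword -> score delta (positive / negative)."""
    words = set(re.findall(r"\w+", answer.lower()))
    score = sum(delta for kw, delta in white_list.items() if kw in words)
    score += sum(delta for kw, delta in black_list.items() if kw in words)
    return max(score, 0)  # a black-list hit can zero the score, not make it negative

# Invented example lists for the constitutional-review case:
white = {"nullification": 2, "constitutional": 1}
black = {"abeyance": -2}
score = score_answer("The Constitutional Court decides on nullification.", white, black)
```

The half-automated workflow then layers human verification on top: the teacher reviews the proposed scores and extends the lists, and the automatic scores are recomputed.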
4 functions of the supported open exercise type: • Creation of an open question with model answer, black list, white list, elaborated feedback, etc. • Publication of this item: fix due date, select student groups, follow up received answers, etc. • Half-automated correction of the answers: correction proposal on the basis of the available info; manual correction of scores and adaptation of black list & white list (-> update of automatic scores) • Generation of feedback report: individualised feedback, fix scores, add personal comments; notify all users by automatically generated mail
Use of supported open exercises in three steps • Step 1: try-out as a marking and feedback tool (aid) used by teaching staff -> human verification and improvement of the black & white list is necessary • Step 2: learning — the result of step 1 can be used as an exercise with fully automatic corrective and elaborated feedback (with human intervention!) -> human verification and e-mail feedback • Step 3: exam simulation — the results of step 2 can be used as an exercise with full immediate automatic corrective and elaborated feedback (without human intervention!)
6. Quantitative & qualitative evidence • Effectiveness of the half-automated correction tool • User statistics: intensity of use • Student satisfaction
Effectiveness of the half-automated correction tool • N = 200, exam January 2008, one supported open question • Manual correction vs (half-)automatic correction: total match for 195/200 answers = 97.5%! • mismatch for 5 answers: due to incoherence of the student answer (1), due to incoherence of the human correction (4) -> high marginal profit (quality of correction) and low marginal cost (people & time)
Effectiveness of the half-automated correction tool • N = 200, exam January 2008, one supported open question • Feedback (by analysis of student input): 5 black-list words predicted by the teacher • analysis of answers 1-50: + 3 • analysis 51-100: + 1 • analysis 101-200: + 0
User statistics (1) Example 1: first bachelor Law (at Kortrijk) • 68 students • 1771 sessions • 554 hours activity • 26 sessions/student • Mean of 8 hours 9 minutes of activity per student • Mean length of a session: 19 minutes
User statistics (2) Example 2: first bachelor Law (Leuven) • 285 students • 8806 sessions • 3232 hours • 30 sessions/student • Mean of 11 hours 20 minutes per student • Mean length of a session: 22 minutes
User statistics (3) Learning visits, June 2006 (1st BA Political and Social Sciences) [chart omitted]
Student satisfaction: survey, N = 135 • statement rated: “The content of the cases matches the content of the course.” [results chart omitted]
7. Work in progress • Avoiding meaningless errors (e.g. typing errors): • by improving input (development of e.g. a generator of “law articles”) • by correction of input (before meaning-focused correction) • spell checker (SPL) • approximate string matching: Levenshtein and Dice coefficients
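The two approximate-string-matching measures named above can be sketched as follows, as they would be used to catch typing errors before meaning-focused correction. These are textbook formulations (edit distance; Dice coefficient on character bigrams), not the project's implementation.

```python
def levenshtein(a, b):
    """Levenshtein distance: minimum number of single-character
    insertions, deletions and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def dice(a, b):
    """Dice coefficient on character bigrams: 1.0 = identical bigram sets."""
    A = {a[i:i + 2] for i in range(len(a) - 1)}
    B = {b[i:i + 2] for i in range(len(b) - 1)}
    return 2 * len(A & B) / (len(A) + len(B)) if A or B else 1.0
```

A misspelled white-list keyword could then be accepted when its distance to the intended word is small (or its Dice coefficient high), so the meaning-focused scoring is not derailed by typos.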
This presentation is available @ http://www.itec-research.eu/presentations/worldcall contact: piet.desmet@kuleuven-kortrijk.be, bert.wylin@kuleuven-kortrijk.be, b.wylin@televic-education.com Eduma-tic authoring tool @ http://www.edumatic.be