
Improving the Student Learning Experience for SQL using Automatic Marking


Presentation Transcript


  1. Improving the Student Learning Experience for SQL using Automatic Marking Dr. Gordon Russell, Andrew Cumming. Napier University, Edinburgh, Scotland. CELDA 2004, Lisbon, Portugal.

  2. Introduction • SQL tutorials can be highly repetitive for the tutor. • Tutorials run with 50 students to 1 lecturer (plus 2 helpers). • Tutorials run best when the tutor: • Gets asked interesting questions • Gets around to everyone in a single tutorial period • Has time to chat with the students about life • Tutorials run best for the student when they: • Can progress easily without it seeming difficult • Can speak to the tutor whenever they want to • Have immediate feedback • Can work from home • Can avoid tutorials when they want to

  3. Feedback • Part of the problem in improving tutorials is understanding “feedback”. • Feedback can mean many things to a student; in this talk the feedback of interest is the answers to some key frequently asked student questions, where the answer must be individualised on a per-student basis: • Is my SQL right? • Am I heading in the right direction? • When do I get the coursework assessment? • What mark did I get in the assessment and why? • Do I have to keep working on this stuff? • I have forgotten my username/password/notes/brain, so how are you going to help me?

  4. Assessment • Originally the students submitted some SQL in answer to a number of assessment questions. • These were submitted after 9 weeks of study. • At my peak I could mark 6 of these per hour, and each one had a feedback sheet attached to it. • Back then I had 280 students… Including sanity breaks this took about 2 weeks. • Maintaining a consistent marking scheme for so many students was difficult. • We also franchise this material and moderate other people marking similar work, which is another significant reason why a new method was needed.

  5. Main Targets • Perform automatic marking of student SQL assessments • Provide feedback on SQL written as part of tutorials. • Support incremental assessments scheduled under student control. • Support some of the issues of distance learning of SQL. • Manage students online • Provide multi-campus support for franchised module. • Gather student statistics to understand student behaviour • Modify student behaviour to improve module performance.

  6. Marking • Marking the SQL from students can easily become subjective. • As a starting point, the original marking scheme was edited to remove everything that could not be formally stated as a “marking rule”. • There are many different ways of writing an SQL statement to answer a specific question, and many of those statements are equally valid. • The new scheme was divided into two categories: • Accuracy of the result of executing the specified SQL • Simplistic quality measures of the SQL

  7. Accuracy • The accuracy measure used is, put simply, how similar the output of executing the student’s SQL is to the output of the sample solution I wrote. • The basic algorithm takes the table produced by the student’s SQL statement and compares each cell of that table against the sample solution. • If the cell is in both tables then score +1. • If the cell is not in the sample solution then score -1. • Divide the final score by the number of cells in the bigger of the sample solution table and the student’s table.
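As a rough illustration of the cell-comparison scoring described above, the following Python sketch scores two result tables represented as lists of row tuples. The function name, the multiset handling of duplicate cell values, and the clamping of negative totals at zero are assumptions of this sketch, not details taken from the ActiveSQL implementation.

from collections import Counter

def accuracy(student_rows, solution_rows):
    # Count every cell value in each table (a table is a list of row tuples).
    student_cells = Counter(cell for row in student_rows for cell in row)
    solution_cells = Counter(cell for row in solution_rows for cell in row)

    score = 0
    for cell, count in student_cells.items():
        matched = min(count, solution_cells.get(cell, 0))
        score += matched           # +1 for each cell that is also in the solution
        score -= count - matched   # -1 for each cell not in the sample solution

    biggest = max(sum(student_cells.values()), sum(solution_cells.values()), 1)
    return max(score, 0) / biggest  # clamping at zero is an assumption of this sketch

For example, accuracy([(1, "Smith"), (7, "Jones")], [(1, "Smith"), (7, "Jones")]) returns 1.0, while extra or missing cells pull the score down, as in the worked example on the next slide.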

  8. Example: Accuracy • Student’s attempt: SELECT * FROM people • Correct answer: SELECT ID, Lastname FROM people WHERE ID IN (1,7) • Comparing the two result tables gives an accuracy of 4/9, roughly 44%.

  9. Tutorial Index

  10. Algorithm Complexity • It was decided to avoid penalising students if they chose a different column order from the sample solution. • In general, the order of the rows does not affect the quality of the answer. • Producing a comparison algorithm which is row and column order insensitive is an expensive operation if not done carefully. • The algorithm used employs tree pruning and other optimisations to allow this to be done efficiently.
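The talk does not spell out the pruning algorithm, but a much simpler (and weaker) way to get row- and column-order insensitivity is to canonicalise both tables before comparing them, as in the Python sketch below. The names canonicalise and same_result are illustrative; note that this shortcut can mis-order two columns whose value multisets happen to be identical, which is exactly the kind of case a careful tree-pruning search handles properly.

def canonicalise(rows):
    # Compare everything as text so mixed column types still sort consistently.
    rows = [tuple(str(cell) for cell in row) for row in rows]
    if not rows:
        return []
    columns = list(zip(*rows))                 # transpose to columns
    columns.sort(key=lambda col: sorted(col))  # column order no longer matters
    return sorted(zip(*columns))               # transpose back; row order no longer matters

def same_result(student_rows, solution_rows):
    return canonicalise(student_rows) == canonicalise(solution_rows)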

  11. Faking the output • If the question asked “how many employees are 17 years old” a student could count them (say 5) and write SELECT 5 FROM DUAL; • This gives the right answer, and thus perfect accuracy, but it is cheating! • For this reason the system performs a “hidden database check”. • If the measured accuracy is 100%, the query is executed again on a different dataset which uses the same database schema. This dataset is specially constructed so that it produces different answers to all the queries. • A cheat like this one produces the same output on the different dataset, and so fails the test. • We take 30 points off the accuracy for this.
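A minimal sketch of that check, assuming SQLite databases, an accuracy expressed as a percentage, and a helper run_query that executes a statement and returns its sorted rows; the function names, database paths, and the way the 30-point deduction is applied are illustrative assumptions rather than the real system’s API.

import sqlite3

def run_query(db_path, sql):
    with sqlite3.connect(db_path) as conn:
        return sorted(conn.execute(sql).fetchall())

def checked_accuracy(student_sql, solution_sql, hidden_db, accuracy):
    # Only queries that look perfect on the visible dataset are re-checked.
    if accuracy < 100:
        return accuracy
    # Re-run both queries on the hidden dataset; a hard-coded answer such as
    # SELECT 5 FROM DUAL keeps producing the same output and so fails here.
    if run_query(hidden_db, student_sql) != run_query(hidden_db, solution_sql):
        return accuracy - 30
    return accuracy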

  12. Quality measures • In addition to accuracy, points are lost for poor quality. • The algorithm for measuring this is simplistic, as SQL quality is hard to calculate. • Some things which are related to quality are easy to measure: • The student’s SQL is much longer than the sample solution • Using LIKE where the string comparison has no wildcards • Creating a view without dropping it afterwards
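The three checks listed above could be approximated with simple pattern matching, as in the sketch below; the penalty values, the “more than twice as long” threshold, and the regular expressions are assumptions for illustration, not the module’s actual rules.

import re

def quality_penalty(student_sql, solution_sql):
    penalty = 0
    # Student SQL much longer than the sample solution (here: over twice as long).
    if len(student_sql) > 2 * len(solution_sql):
        penalty += 10
    # LIKE used, but the pattern contains no % or _ wildcard.
    for pattern in re.findall(r"LIKE\s+'([^']*)'", student_sql, re.IGNORECASE):
        if "%" not in pattern and "_" not in pattern:
            penalty += 5
    # CREATE VIEW without a matching DROP VIEW.
    if re.search(r"CREATE\s+VIEW", student_sql, re.IGNORECASE) and \
       not re.search(r"DROP\s+VIEW", student_sql, re.IGNORECASE):
        penalty += 5
    return penalty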

  13. Environment • To use marking for assessments we need to identify who is responsible for which submissions. • This requires the concept of users. • In general, user accounts can be created either by the users themselves or by an administrator. • Administering 300 students per semester by hand is not a nice thought… • Registration by the users themselves can be problematic…

  14. Safeguarding Registrations • ActiveSQL uses an email registration confirmation system. • When someone registers they record an email address. • An email is sent to that address containing a web link. • If they do not click on that link their account locks within 14 days. • If they forget their system password they can have a link emailed to that address which allows them to set a new password.
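The confirmation flow amounts to a random token, a mailed link, and a 14-day deadline; the Python sketch below shows one way this could look. Only the 14-day window comes from the slide; the field names, token scheme, and the example.org confirmation URL are hypothetical.

import secrets
from datetime import datetime, timedelta

def register(email):
    token = secrets.token_urlsafe(32)
    account = {
        "email": email,
        "token": token,
        "confirmed": False,
        "lock_after": datetime.now() + timedelta(days=14),  # 14-day window from the slide
    }
    # A mailer (not shown) would send something like:
    #   send_mail(email, "Confirm at https://example.org/confirm?token=" + token)
    return account

def confirm(account, token):
    if secrets.compare_digest(token, account["token"]):
        account["confirmed"] = True
    return account["confirmed"]

def is_locked(account):
    return not account["confirmed"] and datetime.now() > account["lock_after"]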

  15. Some observations • With registration, the student’s name appears on the screen. • Some students see the hidden database as a sneaky trick. • In “real-life” the hidden database always bites back. • Hours worked, progress so far and success rate are all visible to the student at all times. Many students find this self-monitoring useful. • Students failing to register is a continuing problem. • A balance must be struck between: • Managing and monitoring the students effectively. • Encouraging self-reliance.

  16. Statistics • One of the main objectives of this work is to support the investigation of student behaviour. • This allows us to make changes to the system and then measure the impact on student behaviour. • We measure: • How long a student takes to get 100% on a question • Which tutorials have been worked on, and the scores achieved • How long a student spends logged into the system • What their overall scores were • What the students scored in the related exam

  17. Experiment 1: Time management • We found that a significant number of students waited until the last possible minute before starting their coursework. • We wanted instead to encourage students to work on this incrementally over a long period. • To attempt this we: • Changed from 1 assessment to 4 smaller assessments. • Imposed a rule that an assessment can only be taken once the corresponding tutorial has been completed. • In this way completing tutorials had some value (more than just learning).

  18. Typical Students on Target

  19. Typical students on Target • A weekly “progress target” was defined. • This is the amount of material a typical student should have completed by each week of the semester. • The chart shows that more of the 2003 students achieved the target. • This improvement was immediate and lasted throughout the semester. • In the final week more students were able to recover with a final push before the deadline.

  20. Good Students On Target

  21. Good Students on Target • The proportion of good students on target has decreased slightly. The likely issues here are: • We have crushed their “push ahead” spirit… • Having assessments early means less time pushing ahead and more time on the assessed workload. • They were better at controlling their time than I am. • Luckily the effect on the good students is small, and the effect on the average student is positive enough to call this a positive change overall.

  22. Experiment 2: Reward Effort • Student feedback identified that students were unhappy completing a whole tutorial, then its assessment, only to discover they achieved 0% in that assessment. • This left them with the “why did I bother” feeling… • To counter this, we changed the assessments so that: • Each assessment had two questions. • The first question was at the difficulty level of the tutorial just completed. • The other question was from the tutorial level below that.

  23. Time vs. Final Mark - BEFORE

  24. Time vs. Final Mark - AFTER

  25. Result #2 • Some shape changes, but no significant change to the overall statistics. • The change was implemented in a way that meant not all students would have benefited. • We will know more when the experiment is repeated next year. • It just shows that not all changes result in a change!

  26. Side Effects • Some aspects of the system were deliberate, but had surprising side-effects… • On each question, the time taken to do that question was shown to the student. • This resulted in many statements like “I have been working on this for 30 minutes” as a reason why we should tell them the answer… • It is possible to write rubbish SQL and get a good accuracy (a coincidence), but this always results in a failed hidden database check. Students will say this is unfair: if it has a high accuracy it must be right, and therefore my marking scheme must be wrong.

  27. In assessments not all the accuracy measures (and sometimes none of the measures) are shown to the student until they “close” an assessment. • This is often cited as a problem… “How do you expect me to know if my SQL is right unless the system tells me?” • Actually, the hiding of the accuracy measures in assessments is something to be looked at next year, as it is not clear that this is such a good thing with respect to producing good module mark statistics. • Are we working towards good statistics, or good teaching?

  28. More observations…

  29. Future Work • Investigate the following questions: • Exam vs. coursework mark… correlation? • Is rewarding effort important, or just achievement? • Is cram learning better than incremental? • Are statistics more important than actual learning (from the perspective of teachers, students, and management)? • Is ActiveSQL only 100 times worse than SQLZoo, or is it higher? • Will this talk ever end?
