Core Methods in Educational Data Mining

Presentation Transcript


  1. Core Methods in Educational Data Mining EDUC691 Spring 2019

  2. We lost a class session • Let’s see if we can find a day and time for a substitute session • Would adding a class on May 8 at the regular class time work? • Or… we can run through all possible times • Alternative: we drop causal mining and merge correlation mining into network analysis session

  3. Diagnostic Metrics -- HW • Any questions about any metrics? • Does anyone want to discuss any of the problems?

  4. Diagnostic Metrics -- HW • When do you want to use fail-soft interventions?

  5. Diagnostic Metrics -- HW • When do you not want to use fail-soft interventions?

  6. Textbook/Readings

  7. Detector Confidence • Any questions about detector confidence?

  8. Detector Confidence • What is a detector confidence value?

  9. Detector Confidence • What are the pluses and minuses of making sharp distinctions at 50% confidence?

  10. Detector Confidence • Is it any better to have two cut-offs?

  11. Detector Confidence • How would you determine where to place the two cut-offs?
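
As a minimal sketch (not from the slides) of what a two-cut-off policy could look like: confidences above an upper cut-off trigger a strong intervention, confidences below a lower cut-off trigger nothing, and the band in between gets a fail-soft intervention. The function name and cut-off values below are hypothetical placeholders.

```python
def choose_intervention(confidence, low_cutoff=0.3, high_cutoff=0.7):
    """Map a detector confidence value to an intervention level.

    The cut-off values are arbitrary placeholders; in practice they would be
    chosen based on the costs and benefits of each intervention.
    """
    if confidence >= high_cutoff:
        return "strong intervention"
    elif confidence >= low_cutoff:
        return "fail-soft intervention"
    else:
        return "no intervention"

for c in (0.15, 0.55, 0.90):
    print(c, "->", choose_intervention(c))
```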

  12. Cost-Benefit Analysis • Why don’t more people do cost-benefit analysis of automated detectors?

  13. Detector Confidence • Is there any way around having intervention cut-offs somewhere?

  14. Goodness Metrics

  15. Exercise • What is accuracy?

  16. Exercise • What is kappa?

  17. Accuracy • Why is it bad?

  18. Kappa • What are its pluses and minuses?
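
A minimal sketch (made-up labels, not from the slides) of why accuracy alone can mislead: with 90% of data points in the majority class, a detector that never fires still gets 90% accuracy but a kappa of 0. The example uses scikit-learn's accuracy_score and cohen_kappa_score.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Made-up labels: 90 negatives, 10 positives; the detector predicts negative every time.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100

print("accuracy:", accuracy_score(y_true, y_pred))     # 0.90
print("kappa:   ", cohen_kappa_score(y_true, y_pred))  # 0.0 -- no better than chance
```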

  19. ROC Curve

  20. Is this a good model or a bad model?

  21. Is this a good model or a bad model?

  22. Is this a good model or a bad model?

  23. Is this a good model or a bad model?

  24. Is this a good model or a bad model?

  25. ROC Curve • What are its pluses and minuses?

  26. AUC ROC • What are its pluses and minuses?

  27. Any questions about AUC ROC?
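
A minimal sketch (made-up data, not from the slides) of computing AUC ROC from detector confidences rather than thresholded 0/1 predictions, using scikit-learn's roc_auc_score:

```python
from sklearn.metrics import roc_auc_score

# Made-up ground truth and detector confidences.
y_true     = [0,   0,   1,    0,   1,   1,   0,    1]
confidence = [0.1, 0.4, 0.35, 0.2, 0.8, 0.7, 0.55, 0.9]

# AUC ROC: the probability that a randomly chosen positive example gets a
# higher confidence than a randomly chosen negative example.
print("AUC ROC:", roc_auc_score(y_true, confidence))  # 0.875 for these values
```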

  28. Precision and Recall • Precision = TP / (TP + FP) • Recall = TP / (TP + FN)

  29. Precision and Recall • What do they mean?

  30. What do these mean? • Precision = The probability that a data point classified as true is actually true • Recall = The probability that a data point that is actually true is classified as true
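
A minimal sketch with made-up confusion-matrix counts (not from the slides), computing both metrics directly from the definitions above:

```python
# Hypothetical confusion-matrix counts.
TP, FP, FN = 30, 10, 20

precision = TP / (TP + FP)  # P(actually true | classified as true)
recall    = TP / (TP + FN)  # P(classified as true | actually true)

print("precision:", precision)  # 0.75
print("recall:   ", recall)     # 0.60
```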

  31. Precision and Recall • What are their pluses and minuses?

  32. Correlation vs RMSE • What is the difference between correlation and RMSE? • What are their relative merits?

  33. What does it mean? • High correlation, low RMSE • Low correlation, high RMSE • High correlation, high RMSE • Low correlation, low RMSE
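
A minimal sketch (made-up numbers, not from the slides) of two of these cases: a constant offset gives high correlation with high RMSE, while a near-constant prediction of a near-constant target gives low RMSE with essentially zero correlation.

```python
import numpy as np

def rmse(actual, predicted):
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# High correlation, high RMSE: predictions track the actual values exactly
# but carry a constant offset of 0.5.
actual_a = np.array([0.2, 0.4, 0.6, 0.8])
pred_a = actual_a + 0.5
print(np.corrcoef(actual_a, pred_a)[0, 1], rmse(actual_a, pred_a))  # 1.0, 0.5

# Low correlation, low RMSE: the target barely varies, so a near-constant
# prediction stays numerically close while explaining none of the variation.
actual_b = np.array([0.50, 0.52, 0.48, 0.50])
pred_b   = np.array([0.52, 0.50, 0.50, 0.48])
print(np.corrcoef(actual_b, pred_b)[0, 1], rmse(actual_b, pred_b))  # ~0.0, 0.02
```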

  34. AIC/BIC vs Cross-Validation • AIC is asymptotically equivalent to LOOCV • BIC is asymptotically equivalent to k-fold cv • Why might you still want to use cross-validation instead of AIC/BIC? • Why might you still want to use AIC/BIC instead of cross-validation?
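
As a minimal sketch of the trade-off (synthetic data, not from the slides): AIC/BIC need only one fit per candidate model plus a likelihood-based penalty, while LOOCV refits the model once per data point. The example compares polynomial degrees using AIC/BIC computed from the Gaussian log-likelihood (up to a constant) and a hand-rolled LOOCV loop.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 2 * x + 0.5 + rng.normal(scale=0.1, size=x.size)  # truly linear data

def loocv_mse(x, y, degree):
    """Leave-one-out mean squared error for a polynomial fit of the given degree."""
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        coefs = np.polyfit(x[mask], y[mask], degree)
        errors.append((np.polyval(coefs, x[i]) - y[i]) ** 2)
    return float(np.mean(errors))

for degree in (1, 2, 5):
    coefs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coefs, x) - y) ** 2))
    n, k = len(x), degree + 1  # k = number of estimated coefficients
    aic = n * np.log(rss / n) + 2 * k          # Gaussian AIC, up to a constant
    bic = n * np.log(rss / n) + k * np.log(n)  # Gaussian BIC, up to a constant
    print(degree, round(aic, 1), round(bic, 1), round(loocv_mse(x, y, degree), 4))
```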

  35. AIC vs BIC • Any comments or questions?

  36. LOOCV vs k-fold CV • Any comments or questions?
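
A minimal sketch (synthetic data and an arbitrary classifier, not from the slides) showing the practical difference: LOOCV fits the model once per data point, while 10-fold cross-validation fits it ten times.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

# Synthetic data, purely for illustration.
X, y = make_classification(n_samples=60, n_features=5, random_state=0)
model = LogisticRegression()

loo_scores   = cross_val_score(model, X, y, cv=LeaveOneOut())        # 60 fits
kfold_scores = cross_val_score(model, X, y, cv=KFold(n_splits=10))   # 10 fits

print("LOOCV accuracy:  ", loo_scores.mean())
print("10-fold accuracy:", kfold_scores.mean())
```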

  37. Other questions, comments, concerns about textbook?

  38. Thoughts on the Knowles reading?

  39. Thoughts on the Jeni reading?

  40. Thoughts on the Kitto reading?

  41. Kitto et al.’s warnings • Warning 1. For some educational scenarios, reporting improvement in algorithmic performance is insufficient as a form of validation. • Warning 2. Being able to report upon a metric does not mean that you should use it, either in the tool, or in reporting its worth. • Warning 3. Feedback should not necessarily be set at the same resolution that the analytics make possible. • Warning 4. Overemphasising computational accuracy is likely to delay the adoption of LA tools that could already be used productively.

  42. Please explain why Kitto says these; do you agree? • Warning 1. For some educational scenarios, reporting improvement in algorithmic performance is insufficient as a form of validation. • Warning 2. Being able to report upon a metric does not mean that you should use it, either in the tool, or in reporting its worth. • Warning 3. Feedback should not necessarily be set at the same resolution that the analytics make possible. • Warning 4. Overemphasising computational accuracy is likely to delay the adoption of LA tools that could already be used productively.

  43. Kitto et al.’s suggestion • Once the analytics is embedded in an appropriate learning design we can see that its purpose is to provide enough scaffolding to “start a conversation” between the student and the analytics-driven feedback, or between peers. • What are the pluses and minuses of this framing?

  44. Other questions or comments?

  45. Creative Assignment 2 • Feature Engineering

  46. Next Class • Wednesday, February 27 • Feature Engineering • Baker, R.S. (2014) Big Data and Education. Ch. 3, V3, V4, V5 • Sao Pedro, M., Baker, R.S.J.d., Gobert, J. (2012) Improving Construct Validity Yields Better Models of Systematic Inquiry, Even with Less Information. Proceedings of the 20th International Conference on User Modeling, Adaptation and Personalization (UMAP 2012), 249-260. • Creative: Feature Engineering
