Bayesian student modeling, user interfaces and feedback: A sensitivity analysis

In IJAIED 12 (2)

Abstract

The Andes physics tutoring system includes a student modeler based on Bayesian networks. Although the student modeler had been evaluated once with positive results, we conducted a sensitivity analysis in order to better understand it, and student modeling in general. That is, we studied the effects on assessment accuracy of varying both numerical parameters of the student modeler (e.g., the prior probabilities) and structural parameters (e.g., whether the tutor gives feedback; whether the tutor insists that students correct errors; whether missing entries are counted as errors). Many of the results were surprising. For instance, leaving feedback on when testing students improved the assessor's accuracy; long tests harmed accuracy in certain circumstances; and CAI-style user interfaces often yielded higher accuracy than ITS-style user interfaces. Furthermore, we discovered that the most important problem confronting the Andes student modeler was not the classic assignment of credit and blame, which is what Bayesian student modeling was designed to solve. Rather, it is that if students do not keep moving along a solution path, knowledge that they have mastered may never get a chance to be applied, and thus the student modeler cannot detect it. This factor had more impact on assessment accuracy than any other numerical or structural parameter. It is arguably a problem for all student modelers, and for other assessment technologies as well.
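The closing observation can be illustrated with a minimal sketch, not the actual Andes network: a single-skill Bayesian mastery update with assumed slip and guess probabilities. Each observed application of a skill moves its mastery estimate away from the prior, but a skill the student never has occasion to apply generates no observations, so its estimate stays frozen at the prior regardless of what the student actually knows.

```python
# Illustrative sketch only (not the Andes implementation): a one-skill
# Bayesian mastery update. The slip/guess values are assumptions.

def update_mastery(p_mastery, correct, p_slip=0.1, p_guess=0.2):
    """Posterior P(mastered) after observing one application of the skill."""
    if correct:
        likelihood_mastered = 1 - p_slip    # mastered students rarely slip
        likelihood_unmastered = p_guess     # unmastered students sometimes guess
    else:
        likelihood_mastered = p_slip
        likelihood_unmastered = 1 - p_guess
    num = likelihood_mastered * p_mastery
    den = num + likelihood_unmastered * (1 - p_mastery)
    return num / den

# A skill that gets applied accumulates evidence...
p = 0.5
for observed_correct in [True, True, True]:
    p = update_mastery(p, observed_correct)
print(round(p, 3))  # → 0.989, well above the 0.5 prior

# ...but a mastered skill that is never applied yields no observations,
# so its estimate never moves off the prior: the modeler cannot detect it.
```

This is the sense in which the opportunity-to-apply problem dominates: no choice of priors or network structure can recover evidence that was never generated.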