Are We There Yet? Evaluating the Effectiveness of a Recurrent Neural Network-Based Stopping Algorithm for an Adaptive Assessment

Publication Information


  • Jeffrey Matayoshi, McGraw-Hill Education/ALEKS Corporation
  • Eric Cosyn, McGraw-Hill Education/ALEKS Corporation
  • Hasan Uzun, McGraw-Hill Education/ALEKS Corporation


  • Pages: 304–336


  • Keywords: Recurrent neural networks, Adaptive assessment, Knowledge Space Theory, Deep learning


  • Abstract: Many recent studies have looked at the viability of applying recurrent neural networks (RNNs) to educational data. In most cases, this is done by comparing their performance to that of existing models in the artificial intelligence in education (AIED) and educational data mining (EDM) fields. While there is increasing evidence that, in many situations, RNN models can improve on the performance of these existing methods, in this work we take a different approach. Rather than directly comparing RNNs with other models, we are instead interested in the results when RNNs are combined with one of these existing models. In particular, we attempt to improve the performance of ALEKS (“Assessment and LEarning in Knowledge Spaces”), an adaptive learning and assessment system based on Knowledge Space Theory, through the use of RNN models. Using data from more than 1.4 million ALEKS assessments, we first build an RNN classifier that attempts to predict the final result of each assessment. After verifying the accuracy of these predictions, we develop our stopping algorithm, with the goal of improving the efficiency of the ALEKS assessment by reducing the total number of questions that are asked. Based on this stopping algorithm, we give a comprehensive analysis of the possible effects it would have on students. We show that the combination of an RNN with the ALEKS assessment can reduce the average assessment length by over 26% while maintaining a high degree of accuracy.
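
The abstract's core idea — feeding a running sequence of (question, response) pairs into an RNN and stopping the assessment once the per-item predictions are confidently settled — can be sketched as follows. This is a minimal illustrative toy, not the ALEKS implementation: the vanilla-RNN architecture, the randomly initialized weights (standing in for a trained model), the item pool size, and the confidence threshold are all assumptions for demonstration purposes.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ITEMS = 8   # size of a toy item pool (assumption, not ALEKS's)
HIDDEN = 16   # hidden-state dimension (assumption)

# Randomly initialized weights stand in for a trained RNN.
W_xh = rng.normal(0, 0.1, (HIDDEN, 2 * N_ITEMS))  # input: item one-hot + correctness
W_hh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_hy = rng.normal(0, 0.1, (N_ITEMS, HIDDEN))      # per-item outcome prediction

def step(h, item, correct):
    """One vanilla-RNN step over a single (question, response) pair."""
    x = np.zeros(2 * N_ITEMS)
    x[item] = 1.0                          # which item was asked
    x[N_ITEMS + item] = float(correct)     # whether it was answered correctly
    return np.tanh(W_xh @ x + W_hh @ h)

def predict(h):
    """Sigmoid confidence that each item would end up mastered."""
    return 1.0 / (1.0 + np.exp(-(W_hy @ h)))

def run_assessment(responses, threshold=0.05):
    """Process responses in order; stop early once every per-item
    prediction is within `threshold` of 0 or 1 (i.e., confidently
    settled), mimicking an RNN-based stopping rule."""
    h = np.zeros(HIDDEN)
    for t, (item, correct) in enumerate(responses, start=1):
        h = step(h, item, correct)
        p = predict(h)
        if np.all(np.minimum(p, 1.0 - p) < threshold):
            return t, p   # stopped early after t questions
    return len(responses), predict(h)

# A toy response sequence of (item, correct) pairs.
answers = [(i % N_ITEMS, i % 3 != 0) for i in range(25)]
stop_at, probs = run_assessment(answers)
```

With untrained weights the predictions hover near 0.5 and the rule rarely fires; the point is only the control flow, in which a trained classifier's confidence determines how many questions are asked.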