Predicting Affective States expressed through an Emote-Aloud Procedure from AutoTutor's Mixed-Initiative Dialogue

In IJAIED 16 (1)

Abstract

This paper investigates how frequent conversation patterns from a mixed-initiative dialogue with an intelligent tutoring system, AutoTutor, can significantly predict users' affective states (e.g., confusion, eureka, frustration). The study adopted an emote-aloud procedure in which participants were recorded as they verbalized their affective states while interacting with AutoTutor. The tutor-tutee interaction was coded on scales of conversational directness (the amount of information provided by the tutor to the learner, with a theoretical ordering of assertion > prompt for particular information > hint), feedback (positive, neutral, negative), and content coverage scores for each student contribution, all obtained from the tutor's log files. Correlation and regression analyses confirmed the hypothesis that dialogue features significantly predict the affective states of confusion, eureka, and frustration. Standard classification techniques were used to assess the reliability of automatically detecting learners' affect from the conversation features. We discuss the prospects of extending AutoTutor into an affect-sensing tutoring system.