A Systematic Review of Automatic Question Generation for Educational Purposes

Publication Information


  • Ghader Kurdi, The University of Manchester
  • Jared Leo, The University of Manchester
  • Bijan Parsia, The University of Manchester
  • Uli Sattler, The University of Manchester
  • Salam Al-Emari, Umm Al-Qura University


  • Pages: 121-204


  • Keywords: Automatic question generation, Semantic Web, Education, Natural language processing, Natural language generation, Assessment, Difficulty prediction


  • Abstract: While exam-style questions are a fundamental educational tool serving a variety of purposes, manual construction of questions is a complex process that requires training, experience, and resources. This, in turn, hinders and slows down the adoption of educational activities (e.g. providing practice questions) and new advances (e.g. adaptive testing) that require a large pool of questions. To reduce the expense of manual question construction and to satisfy the need for a continuous supply of new questions, automatic question generation (AQG) techniques were introduced. This review extends a previous review covering the AQG literature published up to late 2014. It includes 93 papers, published between 2015 and early 2019, that tackle the automatic generation of questions for educational purposes. The aims of this review are to: provide an overview of the AQG community and its activities, summarise the current trends and advances in AQG, highlight the changes that the area has undergone in recent years, and suggest areas for improvement and future opportunities for AQG. Similar to what was found previously, there is little focus in the current literature on generating questions of controlled difficulty, enriching question forms and structures, automating template construction, improving presentation, and generating feedback. Our findings also suggest the need to further improve experimental reporting, harmonise evaluation metrics, and investigate other evaluation methods that are more feasible.