Distractor quality evaluation in Multiple Choice Questions
Abstract
Multiple choice questions are a widely used assessment format, yet writing items that properly evaluate student learning is a complex task. Guidelines have been developed for manual item creation, but automatic item quality evaluation would be a helpful tool for teachers. In this paper, we present a method for evaluating option quality, based on Natural Language Processing criteria that assess the syntactic and semantic homogeneity of the options. We evaluate this method on a large MCQ corpus and show that combining several measures makes it possible to validate distractors.
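The homogeneity idea can be illustrated with a minimal sketch. The following is not the paper's method: it uses simple lexical (token-overlap) similarity as a crude stand-in for the syntactic and semantic criteria the abstract describes, flagging the option that is least similar to the rest of the option set.

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two option strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def homogeneity_scores(options: list[str]) -> dict[str, float]:
    """Average pairwise similarity of each option to all the others.

    A low score suggests the option is lexically heterogeneous with
    the rest of the set -- a crude proxy for a poor distractor.
    """
    scores = {}
    for opt in options:
        others = [o for o in options if o is not opt]
        scores[opt] = sum(jaccard(opt, o) for o in others) / len(others)
    return scores

# Hypothetical example item: the last option is clearly out of place.
options = [
    "the French Revolution",
    "the Russian Revolution",
    "the Industrial Revolution",
    "a bowl of soup",
]
scores = homogeneity_scores(options)
outlier = min(scores, key=scores.get)  # -> "a bowl of soup"
```

In practice, one would replace the Jaccard measure with the kinds of syntactic (e.g. part-of-speech patterns) and semantic (e.g. embedding similarity) measures the paper combines.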
Origin: files produced by the author(s)