Using Automated Questions to Assess Reading Comprehension, Vocabulary, and Effects of Tutorial Interventions
Jack Mostow, Joseph Beck, Juliet Bey, Andrew Cuneo, June Sison, Brian Tobin and Joseph Valeri
We describe the automated generation and use of 69,326 comprehension cloze questions and 5,668 vocabulary matching questions in the 2001-2002 version of Project LISTEN’s Reading Tutor used by 364 students in grades 1-9 at seven schools. To validate our methods, we used students’ performance on these multiple-choice questions to predict their scores on the Woodcock Reading Mastery Test. A model based on students’ cloze performance predicted their Passage Comprehension scores with correlation R = .85. The percentage of vocabulary words that students matched correctly to their definitions predicted their Word Comprehension scores with correlation R = .61. We used both types of questions in a within-subject automated experiment to compare four ways to preview new vocabulary before a story – defining the word, giving a synonym, asking about the word, and doing nothing. Outcomes included comprehension as measured by performance on multiple-choice cloze questions during the story, and vocabulary as measured by matching words to their definitions in a posttest after the story. A synonym or short definition significantly improved posttest performance compared to just encountering the word in the story – but only for words students didn’t already know, and only if they had a grade 4 or better vocabulary. Such a preview significantly improved performance during the story on cloze questions involving the previewed word – but only for students with a grade 1-3 vocabulary.
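The abstract does not specify how the cloze questions were generated; as a purely illustrative sketch (not the Reading Tutor's actual algorithm), a multiple-choice cloze item can be formed by blanking a target word in a story sentence and shuffling it among distractor words. All names and the example sentence below are hypothetical:

```python
import random

def make_cloze_question(sentence, target, distractors, rng=None):
    """Illustrative cloze-item builder: blank out the first occurrence of
    `target` in `sentence` and shuffle it among `distractors`.
    Returns (stem, choices, index_of_correct_answer)."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    stem = sentence.replace(target, "_____", 1)
    choices = distractors + [target]
    rng.shuffle(choices)
    return stem, choices, choices.index(target)

stem, choices, answer = make_cloze_question(
    "The tutor reads the story aloud.", "story",
    ["river", "mountain", "breakfast"])
print(stem)             # The tutor reads the _____ aloud.
print(choices[answer])  # story
```

A student's response is scored by comparing the selected choice against the returned answer index; aggregating these scores per student yields the cloze-performance percentage used in the predictive model described above.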