
Towards Data-driven Instruction
Arnon Hershkovitz

With the current flood of studies in the fields of Educational Data Mining (EDM) and Learning Analytics (LA), it is hard to believe that these terms were coined only a few years ago. The first International Conference on Educational Data Mining was held in 2008 (EDM workshops had been held annually since 2005, with relevant workshops held as early as 2000 and 2004), and the first International Conference on Learning Analytics and Knowledge (LAK) was held in 2011. Today, there are societies dedicated to promoting studies in the field, IEDMS (International Educational Data Mining Society) and SoLAR (Society for Learning Analytics Research), which hold annual, constantly growing conferences and other events and publish journals (JEDM, JLA). Despite the differences between them, many scholars conveniently take part in both communities’ initiatives; moreover, leaders of the two communities have been actively collaborating to promote quality research that will eventually improve teaching and learning processes (e.g., Siemens & Baker, 2012; Siemens et al., 2011).

As research in these fields expands, it is quite difficult to find a definition that suits all the studies taking place under these umbrellas. EDM has been broadly defined as “the area of scientific inquiry centered around the development of methods for making discoveries within the unique kinds of data that come from educational settings, and using those methods to better understand students and the settings which they learn in” (Baker, 2010). LA is defined as “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs”1. Both definitions focus on students and learners, that is, on learning. This has indeed been the focus of research in these fields from the very beginning; in recent years, however, studies have emerged that focus on the other side of the same coin, that is, on instructors and instruction. These issues stand at the heart of this special issue.

Data-driven instruction is still a broad field with many shades. One promising way of adding learning analytics to traditional teaching is to offer teachers accessible, data-driven information, either in a dashboard style or via a UI with which they can perform their own analyses of student data (e.g., Ben-Naim, Bain, & Marcus, 2009; Dyckhoff, Zielke, Bültman, Chatti, & Schroeder, 2012; Murray et al., 2013; Pedraza-Perez, Romero, & Ventura, 2011; Verbert, Duval, Klerkx, Govaertz, & Santos, 2013). Similarly, students may be offered data-driven mechanisms for reflecting on their own learning, which might increase self-regulation and eventually promote learner-centered instruction (e.g., May, George, & Prévôt, 2011; Rivera-Pelayo, Munk, Zacharias, & Braun, 2013). Other common methods of improving instructional processes using analytics are supplying students with immediate feedback and assisting teachers with automatic assessment. These kinds of approaches are discussed in this special issue.

Fossati, Di Eugenio, Ohlsson, Brown and Chen (2015) suggest a novel EDM application: automatic generation of procedural knowledge models (PKMs). Such a model contains information about students’ problem-solving behavior; specifically, it takes into consideration all solution paths that have been observed and computes their respective goodness towards reaching a solution. The approach is theoretically rooted in the notion of Dialogue Acts (cf. Litman & Forbes-Riley, 2006), and using it, Fossati et al. are able to provide timely reactive and proactive feedback. They have developed a tutor, implementing these ideas and aiming at replacing human tutoring, for graduate students learning the notion of lists in Computer Science. Their empirical results are promising.
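The idea of scoring observed solution paths by their goodness can be illustrated with a minimal sketch (hypothetical names and data, not Fossati et al.'s actual algorithm): each problem state's goodness is estimated as the fraction of observed paths through it that ended in a correct solution.

```python
from collections import defaultdict

def state_goodness(observed_paths):
    """Estimate each state's 'goodness': the fraction of observed
    solution paths passing through it that ended in a correct solution.
    observed_paths: list of (states, solved) pairs, where states is a
    sequence of problem states and solved is a boolean."""
    passes = defaultdict(int)     # times a state appeared on any path
    successes = defaultdict(int)  # times it appeared on a solved path
    for states, solved in observed_paths:
        for s in set(states):  # count each state once per path
            passes[s] += 1
            if solved:
                successes[s] += 1
    return {s: successes[s] / passes[s] for s in passes}

# Toy example: three student attempts at a list-manipulation problem.
paths = [
    (["start", "split", "merge", "done"], True),
    (["start", "split", "loop"], False),
    (["start", "copy", "merge", "done"], True),
]
goodness = state_goodness(paths)
# "merge" only ever appears on solved paths, "loop" only on a failed one.
```

A tutor with such estimates could react when a student enters a low-goodness state, or proactively suggest a move toward a high-goodness one.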

Ming and Ming (2015) present a novel application of learning analytics to formative assessment of unstructured text. The novelty of this approach lies in the fact that it requires no expert content knowledge, yet it can still reveal students’ conceptual understanding and predict final grades. The authors apply a unique tokenizing technique called phrasal pursuit, which learns statistically meaningful phrases of arbitrary length, and then apply topic modeling techniques using probabilistic latent semantic analysis (pLSA) and hierarchical latent Dirichlet allocation (hLDA). An assessment and visualization tool based on this approach may assist teachers in better understanding student thinking, in turn providing them with valuable, content-relevant feedback.
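The flavor of discovering statistically meaningful phrases without expert knowledge can be conveyed with a toy sketch: score adjacent word pairs by pointwise mutual information (PMI) and keep pairs that co-occur more often than chance would predict. This is only illustrative — phrasal pursuit itself handles phrases of arbitrary length, and the example corpus is invented.

```python
import math
from collections import Counter

def significant_bigrams(docs, min_count=2, pmi_threshold=1.0):
    """Keep adjacent word pairs whose co-occurrence rate exceeds what
    their individual frequencies would predict (high PMI)."""
    words = Counter()
    pairs = Counter()
    total = 0
    for doc in docs:
        tokens = doc.lower().split()
        words.update(tokens)
        pairs.update(zip(tokens, tokens[1:]))
        total += len(tokens)
    phrases = {}
    for (a, b), n in pairs.items():
        if n < min_count:
            continue
        # PMI = log( P(a,b) / (P(a) * P(b)) )
        pmi = math.log((n / total) / ((words[a] / total) * (words[b] / total)))
        if pmi >= pmi_threshold:
            phrases[(a, b)] = pmi
    return phrases

docs = [
    "natural selection acts on variation",
    "darwin described natural selection",
    "selection of natural habitats",
]
phrases = significant_bigrams(docs)
# "natural selection" recurs as a unit, so it surfaces as a phrase.
```

Phrases found this way could then serve as the vocabulary for a downstream topic model such as pLSA or hLDA.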

Similarly, Py, Despres and Jacoboni’s (2015) efforts are aimed at providing teachers with usable, actionable information about the current state of their learners’ knowledge and conceptions. As in Ming and Ming’s (2015) work, Py, Despres and Jacoboni also rely on content-dependent cues and aim to build an open learner model (i.e., a learner model that is directly open to, and accessible by, learners or teachers). Open learner models have been shown to be effective in promoting learner awareness and reflection (Bull, 2012; Kay, 1997); nevertheless, their construction is still a complex task (Self, 1988). The approach presented in this paper is based on analyzing student-tutor interactions and fits both constraint-based and indicator-based tutors. It produces an ontology of concepts along with the learner’s level of proficiency in each, and hence can easily be used by the learner or by the teacher.
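The core of an open learner model — per-concept proficiency estimates, updated from student-tutor interactions and inspectable by both learner and teacher — can be sketched as follows. The structure and update rule are hypothetical, not the authors' actual ontology-based model.

```python
class OpenLearnerModel:
    """Minimal sketch: proficiency per concept, updated from interaction
    outcomes and exposed via a plain report (the 'open' part)."""

    def __init__(self, rate=0.3):
        self.rate = rate
        self.proficiency = {}  # concept -> estimate in [0, 1]

    def record(self, concept, correct):
        # Exponential moving average toward 1 (correct) or 0 (incorrect),
        # starting from an uninformed prior of 0.5.
        p = self.proficiency.get(concept, 0.5)
        target = 1.0 if correct else 0.0
        self.proficiency[concept] = p + self.rate * (target - p)

    def report(self):
        # A view the learner or teacher can inspect directly.
        return dict(sorted(self.proficiency.items()))

olm = OpenLearnerModel()
for concept, correct in [("loops", True), ("loops", True), ("recursion", False)]:
    olm.record(concept, correct)
# Two successes push "loops" above 0.5; one failure pulls "recursion" below.
```

A real system would attach these estimates to nodes of a concept ontology rather than a flat dictionary, so that proficiency can be aggregated up the concept hierarchy.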

Finally, Roscoe, Snow, Varner and McNamara (2015) present a method of enhancing formative assessment (Nicol, 2006; Pellegrino, 2001; Shute, 2008) – which is part of a writing ITS (W-Pal) – by brilliantly taking into account students’ essay-revision patterns. Applying ideas from computational linguistics, this approach assists instructors with time-saving assessment (just think of the tedious work of examining students’ revisions…) and students with timely feedback.
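To make the idea of analyzing revision patterns concrete, here is a small sketch that extracts simple revision features (word insertions, deletions, replacements) from two versions of an essay. This is illustrative only; W-Pal's actual analysis relies on much richer computational-linguistics features.

```python
import difflib

def revision_summary(draft, revision):
    """Count inserted, deleted, and replaced words between two
    versions of a text, using word-level sequence alignment."""
    a, b = draft.split(), revision.split()
    counts = {"insert": 0, "delete": 0, "replace": 0}
    matcher = difflib.SequenceMatcher(a=a, b=b)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "insert":
            counts["insert"] += j2 - j1
        elif op == "delete":
            counts["delete"] += i2 - i1
        elif op == "replace":
            counts["replace"] += max(i2 - i1, j2 - j1)
    return counts

draft = "the cat sat on mat"
revision = "the small cat sat on the mat"
summary = revision_summary(draft, revision)
# The revision only adds words ("small", "the"), so it is pure insertion.
```

Aggregating such counts across drafts gives a crude profile of a student's revising behavior — e.g., whether they mostly add new material or rework existing sentences.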

Overall, the first issue of this double special issue gives a broad picture of the state of the art in the field, demonstrating how instruction in various domains and at various grade levels can be assisted with analytics. Improving instruction might be a crucial step towards improving learning.
