Model-Based Feedback for Improving Expertise and Expert Performance
In the field of learning and instruction, feedback is considered an elementary component for supporting and regulating learning processes. Feedback plays a particularly important role in highly self-regulated, model-centered learning environments because it facilitates the development of mental models, thereby improving expertise and expert performance. In this article, we investigate different types of model-based feedback generated automatically with our highly integrated model assessment technology and tools. Seventy-four participants were assigned to three experimental groups in order to examine the effects of different forms of model-based feedback. Using seven automatically calculated measures, we investigated changes in the participants’ understanding of the subject domain “climate change”, represented as causal diagrams. Our results show that even when the structural complexity of a representation increases, its semantic correctness does not automatically increase as well. Hence, when guided only by expert feedback, learners may integrate a large number of concepts into their understanding of the phenomenon that do not necessarily lead them to a better or more correct solution to the problem. This supports our assumption that the mental model building process underlying expertise and expert performance should be trained in a more direct way, such as with simulation environments.
Keywords: Feedback, mental models, HIMATT, expert, novice.