Innovative Design with Rigorous Evaluation
We conduct interdisciplinary research at the intersection of information systems, computer science, and human-computer interaction to increase learning success with AI-powered education tools such as conversational agents, smart personal assistants, and intelligent feedback systems.
We evaluate our tools in large-scale experiments using rigorous quantitative statistical analysis.

AL: An Adaptive Learning Support System for Argumentation Skills
Recent advances in Natural Language Processing (NLP) offer the opportunity to analyze the argumentation quality of texts. This can be leveraged to provide students with individual, adaptive feedback on their personal learning journey. To test whether individual feedback on students' argumentation helps them write more convincing texts, we developed AL, an adaptive IT tool that gives students feedback on the argumentation structure of a given text. In a study with 54 students, we compared AL to a proven argumentation support tool. Students using AL wrote more convincing texts with better formal argumentation quality than those using the traditional approach. The measured technology acceptance provided promising results for using this tool as a feedback application in different learning settings. The results suggest that NLP-based learning applications can help students develop better writing and reasoning in traditional learning settings.
Published at CHI 2020. Link to paper.
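As a rough illustration of the kind of structural feedback such a tool can give, here is a toy sketch. It is not AL's actual model: the marker lists, thresholds, and function names are hypothetical, and real systems use trained NLP classifiers rather than keyword rules.

```python
# Toy sketch only (not AL's actual model): label sentences as claims or
# premises via hypothetical discourse markers, then compare the counts.

CLAIM_MARKERS = {"should", "must", "believe", "therefore"}
PREMISE_MARKERS = {"because", "since", "for example", "due to"}

def label_sentence(sentence: str) -> str:
    """Label a sentence as 'claim', 'premise', or 'other' via marker words."""
    s = sentence.lower()
    if any(m in s for m in PREMISE_MARKERS):
        return "premise"
    if any(m in s for m in CLAIM_MARKERS):
        return "claim"
    return "other"

def argumentation_feedback(text: str) -> dict:
    """Count claims and premises and suggest support for unbacked claims."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    labels = [label_sentence(s) for s in sentences]
    claims = labels.count("claim")
    premises = labels.count("premise")
    hint = "Add supporting premises." if claims > premises else "Balanced structure."
    return {"claims": claims, "premises": premises, "feedback": hint}
```

A trained sequence classifier would replace the marker lists, but the feedback logic on top of the labels would look similar.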

MyTeachingBot: A platform for educators to develop their own chatbots without technical know-how
Would you like to offer your students individual support 24/7?
Research shows that learners in online courses lack individual support, which results in poor learning outcomes and high attrition rates. Smart Personal Assistants can offer learners individual support tailored to their needs. Despite a growing body of research on the design and use of Smart Personal Assistants in online education, educators still lack the support needed to create their own agents, which is necessary to scale Smart Personal Assistants in online education. This is why we created myTeachingBot, a Smart Personal Assistant platform that offers educators a drag-and-drop interface for building their own assistant. Educators can iteratively test their assistant until they are satisfied with it. Finally, they can share the assistant with their students and track students' performance.
Link to website.

Sara, the Lecturer: Improving Learning in Online Education with a Scaffolding-Based Conversational Agent
Enrollment in online courses has sharply increased in higher education. Although online education can be scaled to large audiences, the lack of interaction between educators and learners is difficult to replace and remains a primary challenge in the field. Conversational agents may alleviate this problem by engaging in natural interaction and by scaffolding learners' understanding similarly to educators. However, whether this approach can also be used to enrich online video lectures has largely remained unknown. We developed Sara, a conversational agent that appears during an online video lecture. She provides scaffolds by voice and text when needed and includes a voice-based input mode. An evaluation with 182 learners in a 2 × 2 lab experiment demonstrated that Sara, compared to more traditional conversational agents, significantly improved learning in a programming task. This study highlights the importance of including scaffolding and voice-based conversational agents in online videos to improve meaningful learning.
Published at CHI 2020. Link to paper.

A Conversational Agent to Improve Response Quality in Course Evaluations
Recent advances in Natural Language Processing (NLP) offer the opportunity to design new forms of human-computer interaction with conversational interfaces. We hypothesize that these interfaces can interactively engage students and increase the response quality of course evaluations compared to the common standard of web surveys. Past research indicates that web surveys come with disadvantages, such as poor response quality caused by inattention, survey fatigue, or satisficing behavior. To test whether conversational interfaces have a positive impact on enjoyment and response quality, we designed an NLP-based conversational agent, deployed it in a field experiment with 127 students in our lecture, and compared it with a web survey as a baseline. Our findings indicate that using conversational agents for evaluations results in higher response quality and enjoyment, making them a promising approach to increase the effectiveness of surveys in general.
Published at CHI 2020. Link to paper.
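One simple mechanism by which a conversational interface can raise response quality is probing terse answers for more detail. The sketch below is only an illustration of that idea, not our agent's actual dialog logic; the word-count threshold and prompt texts are hypothetical.

```python
# Toy sketch only (not our agent's actual dialog logic): probe short survey
# answers for more detail before moving on. The threshold is hypothetical.

MIN_WORDS = 5  # answers shorter than this trigger a follow-up probe

def next_prompt(answer: str) -> str:
    """Return a follow-up probe for terse answers, otherwise acknowledge."""
    if len(answer.split()) < MIN_WORDS:
        return "Could you tell me a bit more about that?"
    return "Thanks! Next question:"
```

In a real agent, NLP-based intent and sentiment signals would complement such surface checks.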

ArgueTutor: An Adaptive Dialog-Based Learning System for Argumentation Skills
Natural Language Processing offers techniques to design new dialog-based forms of human-computer interaction and to analyze the argumentation quality of texts. This can be leveraged to provide students with adaptive tutoring during a persuasive writing exercise. To test whether individual tutoring on students' argumentation helps them write more convincing texts, we developed ArgueTutor, a conversational agent that tutors students with adaptive argumentation feedback on their learning journey. In a study with 55 students, we compared ArgueTutor to a traditional writing tool. Students using ArgueTutor wrote more convincing texts with better argumentation quality than those using the alternative approach. The measured enjoyment and ease of use provide promising results for using our tool in traditional learning settings. Our results indicate that dialog-based learning applications combined with NLP text feedback can foster better writing skills.

ELEA: An Adaptive Learning Support System for Empathy Skills
Advances in Natural Language Processing offer techniques to detect the empathy level of texts. This can be leveraged to provide students with adaptive feedback on their personal learning journey. To test whether individual feedback on students' empathy level helps them foster their empathy skills, we developed ELEA, an adaptive writing support system that gives students feedback on the cognitive and emotional empathy structures of their texts. In a study with 119 students, we compared ELEA to a proven empathy support tool. Students using ELEA judged their empathy skill learning to be significantly higher than those using the alternative approach. The high technology acceptance and enjoyment of ELEA provide promising results for using the tool as a feedback application in traditional learning settings. Our results indicate that learning applications based on NLP text feedback can foster students' empathy skills.
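To illustrate the distinction between cognitive and emotional empathy cues that such a system gives feedback on, here is a toy lexicon-count sketch. ELEA itself relies on trained NLP models; the cue lists and hint texts below are hypothetical placeholders.

```python
# Toy sketch only (not ELEA's actual model): count hypothetical emotional
# and cognitive empathy cues in a text and derive a simple writing hint.

EMOTIONAL_CUES = {"feel", "sorry", "happy", "worried", "glad"}
COGNITIVE_CUES = {"understand", "perspective", "imagine", "realize"}

def empathy_profile(text: str) -> dict:
    """Count emotional vs. cognitive empathy cue words in a text."""
    words = [w.strip(".,!?") for w in text.lower().split()]  # crude tokenization
    emotional = sum(w in EMOTIONAL_CUES for w in words)
    cognitive = sum(w in COGNITIVE_CUES for w in words)
    hint = ("Try acknowledging the other person's feelings."
            if emotional == 0 else "Emotional empathy expressed.")
    return {"emotional": emotional, "cognitive": cognitive, "hint": hint}
```

The two counts mirror the cognitive/emotional structure feedback described above, though a deployed system would score cues with a trained classifier rather than word lists.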