
We publish our research in top journals and conferences in the fields of Information Systems (IS), Human-Computer Interaction (HCI) and Educational Technology.
Publications
2020: Unlocking Transfer Learning in Argumentation Mining: A Domain-Independent Modelling Approach in 15th International Conference on Wirtschaftsinformatik, Potsdam, Germany.
Abstract: Argument identification is the fundamental building block of every Argumentation Mining pipeline, which in turn is a young, emerging field with applications ranging from strategy support to opinion mining and news fact-checking. We developed a model that tackles the two biggest practical and academic challenges of the research field today: first, the lack of corpus-agnostic models, and second, the high cost of developing labor-intensive NLP models. We do so by suggesting and implementing an easy-to-use solution that utilizes the latest advancements in natural language Transfer Learning. The result is a two-fold contribution: a system that delivers state-of-the-art results on multiple corpora and opens up a new way of advancing the field through Transfer Learning, and the architecture for an easy-to-use tool that can be applied in practice without domain-specific knowledge.
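The paper's code is not reproduced here, but the core idea of the approach, reusing a generically pretrained language model so that only a thin task-specific layer must be trained on argument labels, can be sketched in a few lines. The model name, label set, and example sentences below are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch only: fine-tune a pretrained encoder for binary
# argument identification (argumentative vs. non-argumentative sentence).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # assumed; any pretrained encoder would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

sentences = [
    "We should ban plastic bags because they pollute the oceans.",  # argument
    "The meeting starts at nine o'clock.",                          # no argument
]
labels = torch.tensor([1, 0])

# One fine-tuning step on labeled sentences from any corpus; because the
# encoder is generically pretrained, only the thin head is task-specific.
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()

# Inference: classify an unseen sentence.
model.eval()
with torch.no_grad():
    logits = model(**tokenizer("Solar power cuts emissions, so cities should adopt it.",
                               return_tensors="pt")).logits
print("argumentative" if logits.argmax(-1).item() == 1 else "non-argumentative")
```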
2020: Designing a Conversational Agent as a Formative Course Evaluation Tool in 15th International Conference on Wirtschaftsinformatik, Potsdam, Germany.
Abstract: Today’s graduating students face ever-changing environments when they enter working life. Educational institutions must therefore continuously develop their course structure and content to prepare their students to be future employees. An important means for developing courses is students’ course evaluations. Due to financial and organizational restrictions, these evaluations are usually carried out quantitatively and at the end of the semester. However, past research has shown that this kind of evaluation faces constraints such as low acceptance rates, insights only at a single point in time, and low-quality answers that do little to help the lecturer improve the course. Drawing on social response theory, we propose that conversational agents, used as a formative course evaluation tool, can address these problems by interactively engaging with students. We therefore propose a set of design principles and evaluate them with our prototype Eva.
2020: Empowering Mainstream Educators to Create Their Own Smart Personal Assistants in 53rd Hawaii International Conference on System Sciences, Maui, Hawaii.
Abstract: Despite a growing body of research about the design and use of Smart Personal Assistants such as Amazon’s Alexa or Google’s Assistant, little is known about their ability to help educators offer individual support in large-scale learning environments. Smart Personal Assistant ecosystems empower educators to develop their own agents without deep technological knowledge. The objective of this paper is to design and validate a method that helps educators create Smart Personal Assistants for their learning environments. Using a design science research approach, we first gather requirements from students and educators as well as from information systems and education theory. Next, we create an alpha version of our method and evaluate it with a focus group before we instantiate our artifact in an everyday learning environment. The findings of the paper indicate that our method empowers educators to design Smart Personal Assistants that significantly improve students’ learning success.
2020: AL: An Adaptive Learning Support System for Argumentation Skills in ACM CHI Conference on Human Factors in Computing Systems.
Abstract: Recent advances in Natural Language Processing (NLP) bear the opportunity to analyze the argumentation quality of texts. This can be leveraged to provide students with individual, adaptive feedback on their personal learning journey. To test whether individual feedback on students’ argumentation helps them write more convincing texts, we developed AL, an adaptive IT tool that gives students feedback on the argumentation structure of a given text. In a study with 54 students, we compared AL to a proven argumentation support tool. Students using AL wrote more convincing texts with better formal quality of argumentation than those using the traditional approach. The measured technology acceptance was promising, supporting the tool's use as a feedback application in different learning settings. The results suggest that NLP-based learning applications can help students develop better writing and reasoning in traditional learning settings.
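The abstract does not specify AL's internals. As a rough sketch of how sentence-level NLP output can be turned into formative feedback on argumentation structure, the labels, rules, and thresholds below are hypothetical, not AL's published logic:

```python
# Hypothetical sketch of a feedback layer on top of an NLP classifier;
# the real tool's metrics and thresholds are not given in the abstract.
from dataclasses import dataclass

@dataclass
class Sentence:
    text: str
    label: str  # "claim", "premise", or "none" (from an upstream classifier)

def argumentation_feedback(sentences: list[Sentence]) -> list[str]:
    """Turn sentence-level argument labels into formative feedback messages."""
    claims = sum(s.label == "claim" for s in sentences)
    premises = sum(s.label == "premise" for s in sentences)
    feedback = []
    if claims == 0:
        feedback.append("Your text states no explicit claim; add a clear position.")
    elif premises / claims < 2:
        feedback.append("Support each claim with at least two premises.")
    if not feedback:
        feedback.append("Good structure: your claims are backed by premises.")
    return feedback

# Example with labels as an upstream model might return them.
doc = [Sentence("Remote work should stay.", "claim"),
       Sentence("Commutes waste hours every day.", "premise")]
print("\n".join(argumentation_feedback(doc)))
```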
2020: A Conversational Agent to Improve Response Quality in Course Evaluations in ACM CHI Conference on Human Factors in Computing Systems Extended Abstracts.
Abstract: Recent advances in Natural Language Processing (NLP) bear the opportunity to design new forms of human-computer interaction with conversational interfaces. We hypothesize that these interfaces can interactively engage students to increase the response quality of course evaluations compared to the common standard of web surveys. Past research indicates that web surveys come with disadvantages such as poor response quality caused by inattention, survey fatigue, or satisficing behavior. To test whether conversational interfaces have a positive impact on the level of enjoyment and the response quality, we design an NLP-based conversational agent, deploy it in a field experiment with 127 students in our lecture, and compare it with a web survey as a baseline. Our findings indicate that using conversational agents for evaluations results in higher response quality and enjoyment, making them a promising approach to increase the effectiveness of surveys in general.
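As a sketch of the interaction pattern described above, interactive probing to counter short, satisficing answers, the minimal rule-based dialogue below is an assumption for illustration, not the deployed agent:

```python
# Minimal sketch (assumed design, not the study's agent): an evaluation
# dialogue that asks a follow-up when an answer looks too short to be useful.
QUESTIONS = [
    "How did you like today's lecture?",
    "What should we improve next week?",
]

def probe(answer: str) -> str | None:
    """Return a follow-up question for overly short answers."""
    if len(answer.split()) < 5:
        return "Could you tell me a bit more about that?"
    return None

def run_evaluation() -> list[tuple[str, str]]:
    transcript = []
    for question in QUESTIONS:
        answer = input(question + " ")
        follow_up = probe(answer)
        if follow_up:
            # Interactive engagement: one probe per question to raise quality.
            answer += " " + input(follow_up + " ")
        transcript.append((question, answer))
    return transcript

if __name__ == "__main__":
    print(run_evaluation())
```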
2019: Towards Designing an Adaptive Argumentation Learning Tool in Proceedings of the International Conference on Information Systems (ICIS) 2019, Munich, Germany.
Abstract: Digitalization triggers a shift in the composition of skills and knowledge needed by students in their future work life. Hence, higher order thinking skills are becoming more important for solving future challenges. One subclass of these skills, which contributes significantly to communication, collaboration and problem-solving, is the skill of arguing in a structured, reflective and well-formed way. However, educational organizations face difficulties in providing the boundary conditions necessary to develop this skill, due to increasing student numbers paired with financial constraints. In this short paper, we present the first steps of our design science research project on how to design an adaptive IT tool that helps students develop their argumentation skill through formative feedback in large-scale lectures. Based on scientific learning theory and user interviews, we propose preliminary requirements and design principles for an adaptive argumentation learning tool. Furthermore, we present a first instantiation of those principles.
2019: Looking Beneath the Tip of the Iceberg: The Two-Sided Nature of Chatbots and their Roles for Digital Feedback Exchange in Proceedings of the European Conference on Information Systems (ECIS), Stockholm, Sweden.
Abstract: Enterprises are forecast to spend more on chatbots than on mobile app development by 2021. Yet, to date, little is known about the roles chatbots play in facilitating feedback exchange. Digitization and automation put pressure on companies to set up digital work environments that enable the reskilling of employees. We therefore conducted a structured analysis of feedback-related chatbots for Slack. Our results propose six archetypes that reveal the roles of chatbots in facilitating feedback exchange on performance, culture and ideas. We show that chatbots do not only consist of conversational agents integrated into an instant messenger but are tightly linked to complementary front-end systems such as mobile and web apps. Like the upper part of an iceberg, the conversational agent is above water and visible within the chat, whereas many user interactions of feedback-related chatbots are only possible outside the instant messenger. Further, we extract six design principles for chatbots as digital feedback systems by analyzing chatbots and linking empirically observed design features to (meta-)requirements derived from explanatory theory on feedback, self-determination and persuasive systems. The results suggest that chatbots benefit from both the social setting of conversational agents and the rich graphical user interfaces of external applications.
2019: Insights into Using IT-Based Peer Feedback to Practice the Students’ Providing-Feedback Skill in 52nd Hawaii International Conference on System Sciences, Maui, Hawaii.
Abstract: The skills students need have changed over the last decades, shifting more and more towards higher order thinking skills such as critical thinking, collaboration and communication. One of the main ways of practicing these skills is through formative feedback, which in our setting consists of self-assessment and peer assessment. However, today’s lecturers face the challenge that the number of students per lecture is continuously increasing while the available budget is stagnating. Hence, large-scale lectures often lack feedback due to this scarcity of resources. To overcome this issue, we propose a teaching-learning scenario that uses IT to provide formative feedback at scale. In this paper, we focus on the students’ providing-feedback skill, which is important for collaborative tasks. In our experiment with 101 master’s students, we were able to show that the students’ ability to provide feedback significantly improved through participation in IT-based peer feedback iterations.
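The paper does not publish its tooling. The following sketch shows one plausible building block of such an IT-based scenario, a double-blind round-robin assignment of peer reviewers for feedback iterations, as an assumption rather than the authors' implementation:

```python
# Illustrative sketch (not the study's system): assign each student's text
# to other students for peer feedback, with no self-review possible.
import random

def assign_peer_reviewers(students: list[str], reviewers_per_text: int = 2):
    """Map each author to the peers who will give feedback on their text."""
    shuffled = students[:]
    random.shuffle(shuffled)  # blind the pairing
    n = len(shuffled)
    assignment = {}
    for i, author in enumerate(shuffled):
        # Fixed offsets around the shuffled ring guarantee no self-review
        # as long as reviewers_per_text < number of students.
        assignment[author] = [shuffled[(i + k) % n]
                              for k in range(1, reviewers_per_text + 1)]
    return assignment

print(assign_peer_reviewers(["ana", "ben", "caro", "dev"]))
```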
2018: Knowing What Learners Like? Developing a Culturally Sensitive Peer Assessment Process in MOOCs in Proceedings of the Multikonferenz Wirtschaftsinformatik (MKWI), Lüneburg, Germany.
Abstract: MOOCs attract learners from various cultural backgrounds with differing educational beliefs and learning preferences. Research has long acknowledged that culture has an impact on the adoption and use of information technology. Cultural differences can cause conflicts, especially when learners provide each other with feedback during the peer assessment process. In this paper, we use a design science approach to create a culturally sensitive peer assessment process in MOOCs. Based on Hofstede’s cultural dimensions theory, we derive design elements and evaluate them in a qualitative and comparative study with Swiss and Chinese students. Our results show that different cultures prefer different designs. Consequently, our key contribution is the practical elaboration of design elements that can be integrated into MOOCs to provide a better learning experience. Further, we contribute to cross-cultural theory by adapting an existing framework to a new and relevant phenomenon: MOOCs.
2018: Design and Evaluation of an IT-based Formative Feedback Tool to Foster Student Performance in Proceedings of the International Conference on Information Systems (ICIS) 2018, San Francisco, CA, USA.
Abstract: Today’s world is changing faster than ever before. Students need to constantly acquire new knowledge, and the required skills shift more and more towards higher order thinking skills. One major approach to supporting students in this shift is formative feedback, which initiates self-regulated learning processes. However, given steadily rising student numbers while public spending stagnates, a major challenge is how to effectively provide formative feedback in large-scale lectures. The aim of our project is to develop an IT tool that provides students with formative feedback to increase their performance in large-scale lectures. Our results show that using the IT tool significantly increases both objective and perceived student performance. By documenting our design knowledge according to the eight components of Gregor and Jones (2007), we take a first step towards a design theory for formative feedback tools.