Pinned post

Hello world, this is my introduction:
I am a doctoral candidate at the Research Group for Applied Software Engineering (Prof. Bruegge) at the Technical University of Munich (TUM). My research focuses on computer science education and the automated assessment of textual exercises. In addition, I teach the lectures Introduction to Software Engineering (~2,200 students) and Patterns in Software Engineering (~700 students) at TUM.

We then evaluated CoFee in a large course at the Technical University of Munich from 2019 to 2021, with up to 2,200 enrolled students per course. We collected data from 34 exercises offered in these courses. On average, CoFee suggested feedback for 45% of the submissions. 92% of these suggestions (positive predictive value) were precise and therefore accepted by the instructors.

A language model builds an intermediate representation of the text segments. Hierarchical clustering identifies groups of similar text segments to reduce the grading overhead. We first demonstrated the CoFee framework in a small laboratory experiment in 2019, which showed that the grading overhead could be reduced by 85%. This experiment confirmed the feasibility of automating the grading process for problem-solving exercises.
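A minimal sketch of the clustering step described above, assuming each text segment has already been embedded as a fixed-size vector. The toy embeddings, the linkage method, and the distance threshold are illustrative choices, not CoFee's actual parameters:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# toy 2-D "embeddings" for six text segments (real embeddings are high-dimensional)
embeddings = np.array([
    [0.0, 0.1], [0.1, 0.0],   # segments about concept A
    [5.0, 5.1], [5.1, 5.0],   # segments about concept B
    [9.0, 0.2], [9.1, 0.1],   # segments about concept C
])

# build the dendrogram with average-linkage hierarchical clustering
tree = linkage(embeddings, method="average", metric="euclidean")

# cut the dendrogram at a distance threshold to obtain flat clusters
labels = fcluster(tree, t=1.0, criterion="distance")
print(labels)  # pairs of similar segments share a cluster label
```

Segments that land in the same cluster can then receive the same feedback, which is where the grading-overhead reduction comes from.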

This rise has led to large courses that cause a heavy workload for instructors, especially if they provide individual feedback to students. This article presents CoFee, a framework to generate and suggest computer-aided feedback for textual exercises based on machine learning. CoFee utilizes a segment-based grading concept, which links feedback to text segments. CoFee automates grading based on topic modeling and an assessment knowledge repository acquired during previous assessments.

Abstract: Many engineering disciplines require problem-solving skills, which cannot be learned by memorization alone. Open-ended textual exercises allow students to acquire these skills. Students can learn from their mistakes when instructors provide individual feedback. However, grading these exercises is often a manual, repetitive, and time-consuming activity. The number of computer science students graduating per year has steadily increased over the last decade.

Article "Machine learning based feedback on textual student answers in large courses" 

The pre-proof version of my article "Machine learning based feedback on textual student answers in large courses" with S. Krusche and B. Bruegge is now published in the journal Computers and Education: Artificial Intelligence. DOI:



Evaluating 3D Human Motion Capture on Mobile Devices

by Lara Marie Reimer, Maximilian Kapsecker, Takashi Fukushima, and Stephan M. Jonas

"[...] In this study, we performed a laboratory experiment with ten subjects, comparing the joint angles in eight different body-weight exercises tracked by Apple ARKit, a mobile 3D motion capture framework, against a gold-standard system for motion capture: [...]"

We must stop #Chatkontrolle!
"As a fundamentally misguided technology, chat control is to be rejected on principle," says the CCC.
You can find more on the background here:

Prevent #Chatkontrolle!

@digitalcourage @digiges

"Protecting digital rights and freedoms in legislation to effectively combat child abuse"

Crossposted from Twitter ( 

EU Commissioner @YlvaJohansson is preparing to launch a new law to force the mass surveillance of private online communications but has refused to meet with privacy experts like @edri.

Stop it now! Protect our privacy!

We implemented this approach in a reference implementation called Athene and integrated it into Artemis. We used Athene to review 17 textual exercises in two large courses at the Technical University of Munich with 2,300 registered students and 53 teachers. On average, Athene suggested feedback for 26% of the submissions. Of these suggestions, 85% were accepted by the teachers, 5% were extended with a comment and then accepted, and 10% were changed.

This paper presents CoFee, a machine learning approach designed to suggest computer-aided feedback in open-ended textual exercises. The approach uses topic modeling to split student answers into text segments and language embeddings to transform these segments. It then applies clustering to group the text segments by similarity so that the same feedback can be applied to all segments within the same cluster.
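The reuse step at the end — apply stored feedback to a new segment that is sufficiently similar to an already-assessed cluster — could be sketched as follows. The cluster names, centroid vectors, feedback texts, and similarity threshold are all invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# hypothetical centroid embedding and stored (comment, score) per assessed cluster
assessed_clusters = {
    "definition-of-coupling": ([0.9, 0.1, 0.0], ("Correct definition.", 2.0)),
    "confuses-cohesion":      ([0.1, 0.9, 0.0], ("Mixes up cohesion and coupling.", 0.5)),
}

def suggest_feedback(segment_embedding, threshold=0.8):
    """Return the stored (comment, score) of the closest cluster, or None."""
    best_name, best_sim = None, threshold
    for name, (centroid, _) in assessed_clusters.items():
        sim = cosine(segment_embedding, centroid)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return assessed_clusters[best_name][1] if best_name else None

# a new segment close to the first cluster inherits that cluster's feedback
print(suggest_feedback([0.85, 0.15, 0.0]))
```

Segments below the threshold get no suggestion and fall back to manual grading, which matches the observation that only a fraction of submissions receive suggested feedback.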

Open-ended textual exercises facilitate the comprehension of problem-solving skills. Students can learn from their mistakes when teachers provide individual feedback. However, courses with hundreds of students cause a heavy workload for teachers: providing individual feedback is mostly a manual, repetitive, and time-consuming activity.

Paper "A Machine Learning Approach for Suggesting Feedback in Textual Exercises in Large Courses" 

The third and last paper I want to toot about is titled "A Machine Learning Approach for Suggesting Feedback in Textual Exercises in Large Courses" and was presented at the 8th ACM Conference on Learning @ Scale (L@S) in 2021. DOI: Preprint:

This paper presents two things: (1) CoFee (approach) and (2) Athene (reference implementation). 🧵

We evaluated the algorithm qualitatively by comparing automatically produced segments with manually created ones. The results show that the system can produce topically coherent segments. The segmentation algorithm based on topic modeling is superior to approaches based purely on syntax and punctuation.
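One standard way to compare an automatic segmentation against a manual reference is the WindowDiff metric (Pevzner & Hearst, 2002). The paper does not state that exactly this metric was used, so treat the following as an illustrative sketch with made-up boundary data:

```python
def window_diff(reference, hypothesis, k):
    """WindowDiff: fraction of length-k windows whose boundary counts
    disagree between reference and hypothesis (0.0 = identical)."""
    errors = 0
    n = len(reference) - k
    for i in range(n):
        r = sum(reference[i:i + k])   # boundaries the human placed in this window
        h = sum(hypothesis[i:i + k])  # boundaries the system placed in this window
        if r != h:
            errors += 1
    return errors / n

# binary boundary vectors over eight sentence gaps (invented example)
manual    = [0, 0, 1, 0, 0, 1, 0, 0]   # human boundaries after positions 2 and 5
automatic = [0, 0, 1, 0, 1, 0, 0, 0]   # system's second boundary is off by one
print(window_diff(manual, automatic, k=2))
```

Unlike exact boundary matching, the sliding window gives partial credit for near-miss boundaries, which is why it is a common choice for judging topical segmenters.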

The goal is to reduce the workload for instructors while providing timely and consistent feedback to students. We present the design and a prototypical implementation of an algorithm that uses topic modeling to segment submissions into smaller blocks. The system thereby derives smaller units for assessment, allowing the creation of reusable and structured feedback.

Employing tutors in the process introduces new challenges. Feedback should be consistent and fair for all students. Additionally, interactive teaching models strive for real-time feedback and multiple submissions.

We propose a support system for grading textual exercises using an automatic segment-based assessment concept. The system aims to provide suggestions to instructors by reusing comments and scores from previous assessments.

Abstract: Growing student numbers at universities worldwide pose new challenges for instructors. Providing feedback on textual exercises in large courses is challenging, yet important for students' learning success. Exercise submissions and their grading are a primary and individual communication channel between instructors and students. The sheer number of submissions makes it impossible for a single instructor to provide regular feedback to large student bodies.
