CS1 courses with large student numbers commonly use autograders to provide students with automated feedback on basic programming exercises. Programming such feedback and integrating it into an autograder is a non-trivial and time-consuming task for teachers. Furthermore, such feedback is often based only on expected outputs for a given input, or on the teacher’s perception of errors that students may make, rather than on the errors they actually make. We present an early prototype of a tool, and a supporting methodology, to address these problems. After mining the source code of earlier students’ responses to exercises for frequent structural patterns, and classifying the discovered patterns according to these students’ scores, our tool automatically generates unit tests that correspond to bad practices, errors, or code smells observed in students’ submissions. These unit tests can then be used or adapted by a teacher and integrated into an autograder, in order to provide higher-quality feedback to future generations of students.
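The abstract's pipeline (mined structural pattern → generated unit test for an autograder) could be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes Python submissions, picks one hypothetical mined smell (redundant comparison to `True`/`False`), and the helper names `has_redundant_bool_compare` and `make_smell_test` are invented for this example.

```python
# Sketch only: one mined code-smell pattern turned into a unit test
# that a teacher could adapt and plug into an autograder.
import ast
import unittest


def has_redundant_bool_compare(source: str) -> bool:
    """Detect a hypothetical mined smell: comparing a value to a
    boolean literal with == or != (e.g. `if done == True:`)."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Compare):
            for op, comparator in zip(node.ops, node.comparators):
                if (isinstance(op, (ast.Eq, ast.NotEq))
                        and isinstance(comparator, ast.Constant)
                        and isinstance(comparator.value, bool)):
                    return True
    return False


def make_smell_test(submission_source: str) -> type:
    """Generate a unittest.TestCase that fails, with a teaching
    message, when the submission exhibits the mined smell."""
    class SmellTest(unittest.TestCase):
        def test_no_redundant_bool_compare(self):
            self.assertFalse(
                has_redundant_bool_compare(submission_source),
                "Avoid comparing to True/False with ==; "
                "use the boolean value directly.",
            )
    return SmellTest
```

In the approach described, such checks would be derived automatically from patterns mined in past submissions and weighted by how those submissions scored, rather than hand-written as here.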
Lienard, Julien; Mens, Kim; Nijssen, Siegfried; et al. Extracting Unit Tests from Patterns Mined in Student Code to Provide Improved Feedback in Autograders. Seminar Series on Advanced Techniques & Tools for Software Evolution (SATToSE) (Salerno, Italy, 12/06/2023 to 15/06/2023). In: Seminar on Advanced Techniques & Tools for Software Evolution, Vol. 3483, p. 48--56 (13 Sep 2023)