An automated grading tool for evaluating user stories, developed for EPFL's CS-311 (Software Enterprise) software engineering course.
This tool automatically assesses the quality of user stories using structured rubrics, heuristics, and custom NLP embeddings. It provides consistent, scalable grading and pedagogical feedback for software engineering assignments.
- Automated Quality Assessment: Evaluates user stories based on clarity, structure, and adherence to best practices
- Structured Rubrics: Uses well-defined criteria to ensure fair and consistent grading
- NLP-based Analysis: Employs custom embeddings to assess semantic quality (a minimal sketch of this idea follows the list)
- Scalable Grading: Handles large numbers of submissions efficiently
- Pedagogical Feedback: Generates constructive feedback to help students improve
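To make the rubric-plus-embeddings idea concrete, here is a minimal sketch of how heuristic checks and embedding similarity could be combined into a single score. The `sentence-transformers` model, the reference story, the weights, and the function names are all illustrative assumptions, not the repository's actual implementation (that lives in the notebooks):

```python
# Hypothetical sketch: combine cheap rubric heuristics with embedding
# similarity. Model choice, reference story, and weights are illustrative.
import re

from sentence_transformers import SentenceTransformer, util  # assumed dependency

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

REFERENCE = "As a student, I want to submit my work online so that I receive feedback quickly."

def heuristic_score(story: str) -> float:
    """Cheap rubric checks: template, length, and an explicit rationale."""
    score = 0.0
    if re.search(r"as an?\b.+?\bI want\b.+?\bso that\b", story, re.IGNORECASE | re.DOTALL):
        score += 0.5  # follows the "As a... I want... So that..." template
    if 10 <= len(story.split()) <= 60:
        score += 0.25  # neither too terse nor too long
    if "so that" in story.lower():
        score += 0.25  # states the benefit explicitly
    return score

def semantic_score(story: str) -> float:
    """Cosine similarity to a known-good reference story."""
    emb = model.encode([story, REFERENCE], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]))

def grade(story: str) -> float:
    # Equal weighting of heuristics and semantics is an arbitrary choice here.
    return 0.5 * heuristic_score(story) + 0.5 * semantic_score(story)

print(round(grade("As a user, I want to log in so that my data stays private."), 2))
```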
User stories are evaluated on multiple dimensions (a toy checker illustrating each is sketched after the list):
- Structure: Proper format (As a... I want... So that...)
- Clarity: Clear and unambiguous language
- Completeness: All necessary components present
- Specificity: Appropriate level of detail
- Testability: Can be verified through acceptance criteria
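Each dimension could map to a simple programmatic check. The toy checker below is a sketch under obvious simplifications: the thresholds and the vagueness proxy are invented for illustration, and the real rubric is defined in `grading.json`:

```python
# Hypothetical per-dimension checks mirroring the rubric above;
# thresholds and word lists are illustrative guesses, not the real rubric.
import re

TEMPLATE = re.compile(r"as an?\b.+?\bI want\b.+?\bso that\b", re.IGNORECASE | re.DOTALL)

def check_dimensions(story: str, acceptance_criteria: list[str]) -> dict[str, bool]:
    words = story.split()
    return {
        "structure": bool(TEMPLATE.search(story)),      # template is followed
        "clarity": not re.search(r"\b(etc|stuff|things)\b", story, re.IGNORECASE),  # crude vagueness proxy
        "completeness": "so that" in story.lower(),     # benefit clause present
        "specificity": 10 <= len(words) <= 60,          # detailed but not bloated
        "testability": len(acceptance_criteria) > 0,    # criteria to verify against
    }

print(check_dimensions(
    "As a student, I want to upload my APK so that the grader can test it.",
    ["Upload succeeds for APK files under 50 MB"],
))
```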
The repository contains:

- `grading.ipynb`: Main grading notebook
- `grading-indiv.ipynb`: Individual story grading variant
- `user_stories.json`: Input user stories
- `graded_user_stories.json`: Graded results
- `grade_comparison_plots.png`: Visualization of grading results
- `grading.json`: Grading configuration and rubrics
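As a rough picture of how these files fit together, the sketch below reads `user_stories.json`, grades each entry, and writes `graded_user_stories.json`. The JSON field names (`id`, `story`) and the placeholder `grade` function are assumptions; check the actual files and notebooks for the real schema and logic:

```python
# Hypothetical end-to-end run; the JSON schema here is an assumption
# (inspect user_stories.json for the real field names).
import json

def grade(story: str) -> float:
    """Placeholder standing in for the grading logic sketched earlier."""
    return 1.0 if "so that" in story.lower() else 0.5

with open("user_stories.json") as f:
    stories = json.load(f)  # assumed: a list of {"id": ..., "story": ...} objects

graded = [{**s, "grade": grade(s["story"])} for s in stories]

with open("graded_user_stories.json", "w") as f:
    json.dump(graded, f, indent=2)
```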
This tool was developed for CS-311 (Software Enterprise) at EPFL, where students learn Android application development and software engineering practices. The grader helps manage the evaluation of user stories submitted during the course bootcamp and project phases.
See also: CommitGrader - Automated grading for commit messages