This is an automarker for the CMEE bootcamp (and other beginner coding exercises) that uses LLMs to give feedback. It will:
- Use the rubric to evaluate the quality of coding practices followed
- Use the test output to show errors and warnings raised when the code is run on a clean Linux install
- Generate feedback for each criterion (see the rubric for more details, and the sketch after this list):
- Repository organisation & workflow discipline
- Code completeness & functional correctness
- Code quality & style
- Documentation
- Version-control practice
- Basic error-handling and input validation
- Problem-solving approach and method implementation
- Reporting and communication of results
- Learning progression demonstration
- Provide the good, the bad and the ugly feedback
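These criteria can be held in a simple data structure that the later prompt-building step iterates over. A minimal sketch follows; the variable name is an illustrative assumption, not the actual implementation:

```python
# Illustrative only: the rubric criteria above as a Python list that a
# prompt-building step could loop over. RUBRIC_CRITERIA is an assumed name.
RUBRIC_CRITERIA = [
    "Repository organisation & workflow discipline",
    "Code completeness & functional correctness",
    "Code quality & style",
    "Documentation",
    "Version-control practice",
    "Basic error-handling and input validation",
    "Problem-solving approach and method implementation",
    "Reporting and communication of results",
    "Learning progression demonstration",
]
```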
Python
The script runs the submission code and saves the code, along with any warnings and errors produced, to a .txt file. It:
- Takes StudentsFile (a CSV of information about the students) and RepoPath (the directory containing the students' repos)
- Can take expected_files.csv as an input, and will check the files in the code folder(s) against the list of expected files
- Can just git pull or git push if specified
- Can test one specific week or all weeks
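A minimal sketch of the run-and-capture step, assuming a layout like RepoPath/student/week/code; the function name and paths are illustrative, not the actual script:

```python
# Sketch only: run each .py script in a student's code folder and save the
# source plus any stdout/stderr (warnings and errors) to a .txt file.
import subprocess
from pathlib import Path

def run_and_capture(code_dir: Path, out_file: Path, timeout: int = 60) -> None:
    """Run every .py script in code_dir and write code + output to out_file."""
    with out_file.open("w") as out:
        for script in sorted(code_dir.glob("*.py")):
            out.write(f"\n===== {script.name} =====\n")
            out.write(script.read_text(errors="replace"))  # the submitted code itself
            try:
                result = subprocess.run(
                    ["python3", script.name],
                    cwd=code_dir, capture_output=True, text=True, timeout=timeout,
                )
                out.write(f"\n--- stdout ---\n{result.stdout}")
                out.write(f"\n--- stderr (warnings/errors) ---\n{result.stderr}")
            except subprocess.TimeoutExpired:
                out.write("\n--- ERROR: script timed out ---\n")

# Hypothetical usage:
# run_and_capture(Path("RepoPath/student1/week1/code"), Path("results/student1_week1.txt"))
```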
This script loads the submission and feedback .txt file, builds a prompt from it, and runs an LLM to produce detailed feedback.
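A minimal sketch of the prompt-building step; the exact wording of the real prompt is not shown in this README, so the structure and names below are assumptions:

```python
# Sketch only: combine the captured submission text with the rubric criteria
# into a single prompt string for the LLM.
from pathlib import Path

def build_prompt(submission_txt: Path, criteria: list[str]) -> str:
    submission = submission_txt.read_text(errors="replace")
    criteria_block = "\n".join(f"- {c}" for c in criteria)
    return (
        "You are marking a beginner coding submission from the CMEE bootcamp.\n"
        "Evaluate it against each of the following criteria, giving the good, "
        "the bad and the ugly for each:\n"
        f"{criteria_block}\n\n"
        "Submission (code plus captured warnings and errors):\n"
        f"{submission}"
    )

# Hypothetical usage, reusing the RUBRIC_CRITERIA list sketched earlier:
# prompt = build_prompt(Path("results/student1_week1.txt"), RUBRIC_CRITERIA)
```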
This script loads an LLM, either Claude or ChatGPT.
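A minimal sketch of such a loader, assuming the official anthropic and openai Python SDKs are installed and that API keys are set in the environment; model names are placeholders, and the real script may wrap the clients differently:

```python
# Sketch only: send a prompt to either Claude (Anthropic SDK) or ChatGPT
# (OpenAI SDK) and return the text of the reply.
def ask_llm(prompt: str, provider: str = "claude", model: str | None = None) -> str:
    if provider == "claude":
        import anthropic
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        response = client.messages.create(
            model=model or "claude-3-5-sonnet-latest",  # placeholder model name
            max_tokens=2000,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text
    if provider == "chatgpt":
        from openai import OpenAI
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model=model or "gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    raise ValueError(f"Unknown provider: {provider}")
```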
First, let's set this up specifically for the bootcamp final submission.