Automarker

This is an automarker for the CMEE bootcamp (and other beginner coding exercises) that uses LLMs to give feedback.

Objectives

  • Use the rubric to evaluate the quality of the coding practices followed
  • Use test output to surface the errors and warnings raised when the code is run on a clean Linux install
  • Generate feedback for each criterion (see the rubric for more details):
    • Repository organisation & workflow discipline
    • Code completeness & functional correctness
    • Code quality & style
    • Documentation
    • Version-control practice
    • Basic error handling and input validation
    • Problem-solving approach and method implementation
    • Reporting and communication of results
    • Learning progression demonstration
  • Provide "the good, the bad and the ugly" in the feedback

Dependencies

Python

Structure

Testing script - feedback.py

The script runs each submission and saves the code, along with the warnings and errors produced, to a .txt file. It takes StudentsFile (a CSV of information about the students) and RepoPath (the directory containing the students' repos). It can optionally take expected_files.csv as input, in which case it checks the files in the code folder(s) against that list of expected files. It can also just git pull or git push if specified, and it can test one specific week or all weeks. A rough sketch of the run-and-check steps is shown below.
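The following is a minimal sketch of how the run-and-capture and expected-files steps might look; the function names, output layout, and CSV format are assumptions, not the actual feedback.py implementation.

```python
# Hypothetical sketch of the run-and-capture step described above.
import csv
import subprocess
from pathlib import Path

def run_submission(script: Path, out_dir: Path) -> None:
    """Run one student script and save its code plus any warnings/errors."""
    try:
        result = subprocess.run(
            ["python3", str(script)],
            capture_output=True, text=True,
            timeout=60,  # assumed guard against hanging submissions
        )
        stdout, stderr = result.stdout, result.stderr
    except subprocess.TimeoutExpired:
        stdout, stderr = "", "ERROR: script timed out after 60 s"
    report = out_dir / f"{script.stem}_feedback.txt"
    report.write_text(
        f"=== CODE ===\n{script.read_text()}\n"
        f"=== STDOUT ===\n{stdout}\n"
        f"=== ERRORS / WARNINGS ===\n{stderr}\n"
    )

def missing_expected_files(code_dir: Path, expected_csv: Path) -> list[str]:
    """Compare a code folder against expected_files.csv; return what's missing."""
    with open(expected_csv, newline="") as f:
        expected = [row[0] for row in csv.reader(f) if row]
    return [name for name in expected if not (code_dir / name).exists()]
```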

main.py

This script loads the submission and feedback .txt files, builds a prompt from them, and runs an LLM to generate detailed feedback.
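A rough sketch of the prompt assembly this describes; the prompt wording and the criterion subset are placeholders, not the real prompt.

```python
# Sketch of how main.py might combine the saved .txt output with the rubric
# to build an LLM prompt. The prompt wording is illustrative only.
from pathlib import Path

RUBRIC_CRITERIA = [
    "Repository organisation & workflow discipline",
    "Code completeness & functional correctness",
    "Code quality & style",
    "Documentation",
    "Version-control practice",
]  # subset of the criteria listed under Objectives

def build_prompt(feedback_txt: Path) -> str:
    """Turn one saved submission/feedback file into a marking prompt."""
    submission = feedback_txt.read_text()
    criteria = "\n".join(f"- {c}" for c in RUBRIC_CRITERIA)
    return (
        "You are marking a beginner coding exercise.\n"
        "Give the good, the bad and the ugly for each criterion:\n"
        f"{criteria}\n\n"
        "Submission, with errors and warnings from a clean test run:\n"
        f"{submission}"
    )
```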

llm.py

This script loads an LLM, either Claude or ChatGPT.
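A hedged sketch of what such a wrapper could look like, using the public anthropic and openai Python SDKs; the model names and environment variables are assumptions, not necessarily what llm.py uses.

```python
# Illustrative two-provider wrapper; model names are assumptions.
import os

def ask_llm(prompt: str, provider: str = "claude") -> str:
    """Send a prompt to Claude or ChatGPT and return the text reply."""
    if provider == "claude":
        import anthropic
        client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
        msg = client.messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text
    else:  # ChatGPT
        import openai
        client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
```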

Notes

First, let's set this up specifically for the bootcamp final submission.
