This is the R-based implementation of the AI autograding feedback system. It generates structured LLM-powered feedback on student assignment submissions across code, text, and image scopes. The system uses markdown prompts and templates, and supports multiple LLM backends including OpenAI and Claude (via API keys).
This version exposes a direct `main()` function instead of a CLI, making it easy to call programmatically from R scripts, notebooks, or services.
- Supports image, text, and code scopes
- Function-based interface (no CLI)
- Uses `.md` prompt templates and markdown output formats
- Modular file structure
- LLM support via the OpenAI API (extensible to Claude and remote models)
| Parameter | Description | Required |
|---|---|---|
| `prompt` | Path to prompt file | ❌ ** |
| `prompt_text` | Inline string prompt to concatenate or use in place of a file | ❌ ** |
| `prompt_custom` | Direct prompt content as string | ❌ ** |
| `scope` | Processing scope (`code`, `text`, `image`) | ✅ |
| `submission` | Submission file path | ✅ |
| `solution` | Solution file path | ❌ |
| `model` | Model name: `openai`, `claude`, `remote` | ✅ |
| `remote_model` | Optional remote model string (used with the `remote` model) | ❌ |
| `output` | Markdown output file path | ❌ |
| `submission_image` | Path to image (image scope) | ❌ |
| `solution_image` | Path to reference image | ❌ |
| `output_template` | Path to output template file | ❌ |
| `system_prompt` | Path to system prompt file | ❌ |
| `question` | Exact question/subquestion to test | ❌ |
| `marking_instructions` | Marking instructions/rubric file path | ❌ |
| `json_schema` | JSON schema for structured response format | ❌ |
**Note**: You must provide one of `prompt`, `prompt_text`, or `prompt_custom` (the parameters marked `❌ **` above).
The behavior of `main()` adapts based on the selected `scope`:
- **Code scope**
  - Input: student + solution code files (e.g., `.R`, `.py`)
  - Use case: error analysis, hint generation, annotated feedback
- **Text scope**
  - Input: student + solution PDFs or text files
  - Use case: compare student writing to rubric (see the example call after this list)
- **Image scope**
  - Input: submission image + optional reference image (supports QMD/RMD with automatic plot extraction)
  - Use case: plot evaluation, visual structure checking
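All scopes go through the same `main()` entry point; only the parameters change. As a minimal sketch of a text-scope call, assuming the documented parameter names above (the file paths here are hypothetical placeholders, not shipped fixtures):

```r
library(aifeedbackr)

# Compare a student's written answer (PDF) against a reference solution,
# using the Claude backend and a rubric file. Paths are illustrative only.
main(
  prompt = "prompts/text_instructions.md",
  scope = "text",
  model = "claude",
  submission = "submissions/student_report.pdf",
  solution = "solutions/model_report.pdf",
  marking_instructions = "rubrics/q2_rubric.md",
  output = "output/q2_feedback.md"
)
```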
In R (script or console):

```r
library(aifeedbackr)

main(
  prompt = "tests/fixtures/code_example/instructions.md",
  scope = "code",
  model = "openai",
  submission = "tests/fixtures/code_example/fail_submission/fail_submission.R",
  solution = "tests/fixtures/code_example/solution.R",
  output = "output/q1.md"
)
```
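Because any one of `prompt`, `prompt_text`, or `prompt_custom` satisfies the prompt requirement, the instructions can also be passed inline instead of via a file. A hedged variant of the call above (the inline wording is illustrative, not a recommended rubric):

```r
# Same submission, but the grading instructions are supplied inline via
# prompt_text rather than read from a prompt file.
main(
  prompt_text = "Review the student's R code for correctness and style, and give three concrete hints.",
  scope = "code",
  model = "openai",
  submission = "tests/fixtures/code_example/fail_submission/fail_submission.R",
  solution = "tests/fixtures/code_example/solution.R",
  output = "output/q1_inline.md"
)
```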
Prompts may include placeholders such as the following (an illustrative template appears after the list):

- `{context}`
- `{file_contents}`
- `{file_references}`
- `{submission_image}`
- `{solution_image}`
- `{marking_instructions}`
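For illustration only, a minimal prompt file could combine a few of these placeholders like this; the surrounding wording is hypothetical, and only the placeholder names come from the list above:

```md
You are grading a student submission.

Marking instructions:
{marking_instructions}

Files under review:
{file_references}

Submission contents:
{file_contents}
```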
Output templates can be provided as file paths via the `output_template` parameter. They support the following placeholders (a sample template follows the list):

- `{model}` – Model name used
- `{request}` – Final constructed prompt
- `{response}` – Model response
- `{timestamp}` – Generation time
- `{submission}` – Path to submission

Default behavior: if no template is provided, only the `{response}` content is written.
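As an illustration, a simple output template could arrange these placeholders as follows; the layout is an assumption for this example, not a template shipped with the package:

```md
# Feedback for {submission}

- Model: {model}
- Generated: {timestamp}

{response}
```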
```
aifeedbackr/
├── R/
│   ├── code_processing.R
│   ├── text_processing.R
│   ├── image_processing.R
│   ├── main.R
│   ├── template_utils.R
│   ├── constants.R
│   ├── ClaudeModel.R
│   ├── OpenAIModel.R
│   └── RemoteModel.R
├── man/
├── inst/
│   └── markus_test_scripts/
└── tests/
    ├── fixtures/
    ├── testthat/
    └── testthat.R
```
Imports:

- base64enc
- callr
- dotenv
- ggplot2
- httr
- jsonlite
- magick
- optparse
- pdftools
- R6
- readr
- stringr
- tools
- withr

Suggests:

- testthat (>= 3.0.0)
- here
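When setting the package up from source, the CRAN dependencies above can be installed first. A minimal sketch (`tools` ships with base R and needs no installation):

```r
# Install the declared dependencies from CRAN (tools is part of base R).
install.packages(c(
  "base64enc", "callr", "dotenv", "ggplot2", "httr", "jsonlite",
  "magick", "optparse", "pdftools", "R6", "readr", "stringr", "withr",
  "testthat", "here"
))
```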
Create a `.env` file in the project root:

```
OPENAI_API_KEY=your_openai_key_here
CLAUDE_API_KEY=your_claude_key_here
REMOTE_API_KEY=your_remote_key_here
```
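Since the package imports `dotenv`, the keys can be loaded into the session explicitly before calling `main()`. This is a minimal sketch; whether `main()` loads the `.env` file on its own is not documented here, so the explicit step is a safe assumption:

```r
library(dotenv)

# Load API keys from the project's .env file into environment variables.
load_dot_env(file = ".env")

# Confirm a key is visible to the session (prints TRUE/FALSE, not the key).
nzchar(Sys.getenv("OPENAI_API_KEY"))
```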
This project is derived from the original Python version and follows the same academic fair use and research-oriented licensing assumptions.