Hi! I've been reviewing Python projects that use LLMs to understand what the EU AI Act means in practice for developers.
From pyproject.toml, your project uses:
- PyTorch for model inference
- Poetry for dependency management
- Local model approach (which is actually better for compliance)
Quick analysis
Risk category: Likely Limited (Article 50 — AI-generated text)
Since this is a RAG chatbot that generates text responses based on retrieved context, the main EU AI Act obligations are:
1. Transparency (Article 50)
Users should know they're interacting with AI-generated content. Adding a disclosure to chat responses is the simplest fix:
```python
response = {
    "answer": generated_text,
    "ai_disclosure": "This response was generated by an AI system.",
    "sources": retrieved_docs,
}
```

2. Documentation
Add an AI_COMPLIANCE.md file documenting:
- System purpose
- Model(s) used
- Known limitations (e.g. residual hallucination risk even with RAG grounding)
- Data handling for uploaded documents
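A minimal skeleton for that file could look like this (the section names and placeholder wording are suggestions, not text mandated by the Act):

```markdown
# AI Compliance

## System purpose
RAG chatbot answering questions over user-uploaded documents.

## Models used
Local PyTorch model (name, version, license).

## Known limitations
Answers may be incomplete or wrong when retrieval misses relevant passages.

## Data handling
Uploaded documents are processed locally; retention policy: <fill in>.
```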
What's already good:
- Local models = better data control than API-based approaches
- RAG architecture = responses are grounded in provided documents (reduces hallucination)
- Open-source = transparency by default
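For context on the grounding point above: in a typical RAG setup the retrieved passages are simply inlined into the prompt so the model answers from them. A minimal sketch (the function name and prompt wording are illustrative, not your pipeline):

```python
def build_prompt(question: str, docs: list[str]) -> str:
    """Assemble a grounded prompt: the model is asked to answer
    only from the retrieved context and to cite sources as [n]."""
    context = "\n\n".join(f"[{i + 1}] {d}" for i, d in enumerate(docs))
    return (
        "Answer using only the sources below; cite them as [n].\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    "What is the capital of France?",
    ["France's capital is Paris."],
)
```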
Context
The EU AI Act applies to AI systems available in the EU (including open-source). Full enforcement starts August 2026. For limited-risk systems like chatbots, the obligations are manageable — mainly transparency and documentation.
I built a free compliance scanner that detects AI frameworks and maps them to EU AI Act requirements, in case it's useful.
Feel free to close this if not relevant. Just trying to help the ecosystem prepare.