EU AI Act compliance analysis for this project #18

@desiorac

Description

Hi! I've been reviewing Python projects that use LLMs to understand what the EU AI Act means in practice for developers.

From pyproject.toml, your project uses:

  • PyTorch for model inference
  • Poetry for dependency management
  • Local model inference (keeping inference local generally simplifies compliance, since user data never leaves your infrastructure)

Quick analysis

Risk category: Likely Limited (Article 50 — AI-generated text)

Since this is a RAG chatbot that generates text responses based on retrieved context, the main EU AI Act obligations are:

1. Transparency (Article 50)

Users should know they're interacting with AI-generated content. Adding a disclosure to chat responses is the simplest fix:

response = {
    "answer": generated_text,  # text produced by the model
    "ai_disclosure": "This response was generated by an AI system.",
    "sources": retrieved_docs,  # retrieved context the answer is grounded in
}

2. Documentation

Maintain an AI_COMPLIANCE.md file documenting:

  • System purpose
  • Model(s) used
  • Known limitations (e.g. residual hallucination risk, even with RAG grounding)
  • Data handling for uploaded documents
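A check like the following can keep that documentation honest in CI. This is a minimal sketch under my own assumptions (the file layout and section names are illustrative, not any standard):

```python
# Hypothetical CI check: verify AI_COMPLIANCE.md covers each topic
# listed above. Section names are assumptions for illustration.
from pathlib import Path

REQUIRED_SECTIONS = [
    "System purpose",
    "Models used",
    "Known limitations",
    "Data handling",
]

def missing_sections(path: str = "AI_COMPLIANCE.md") -> list[str]:
    """Return the required headings that do not appear in the file."""
    p = Path(path)
    text = p.read_text(encoding="utf-8").lower() if p.exists() else ""
    return [s for s in REQUIRED_SECTIONS if s.lower() not in text]
```

Failing the build on a non-empty result keeps the compliance doc from silently drifting out of date as the project evolves.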

What's already good:

  • Local models = better data control than API-based approaches
  • RAG architecture = responses are grounded in provided documents (reduces hallucination)
  • Open-source = transparency by default

Context

The EU AI Act applies to AI systems made available in the EU, and its open-source exemption does not cover the Article 50 transparency obligations. Most provisions, including those obligations, apply from 2 August 2026. For limited-risk systems like chatbots, the obligations are manageable: mainly transparency and documentation.

I built a free compliance scanner that detects AI frameworks and maps them to EU AI Act requirements, in case it's useful.

Feel free to close this if not relevant. Just trying to help the ecosystem prepare.
