Context
LangChain is widely used as a backbone for LLM applications that may fall under the EU AI Act (Article 6 et seq., covering high-risk AI systems). Many users are building systems that process personal data, make decisions impacting rights, or operate in regulated sectors.
Problem
There is currently no documented guidance on how LangChain components handle or should handle EU AI Act compliance requirements, specifically:
- Article 6 Risk Classification - How do LangChain-based systems self-assess whether they are high-risk?
- Data Governance - Which components process personal data? How are consent and retention managed?
- Transparency - What model cards and documentation are required under Articles 12 and 13?
Proposal
Add a compliance reference section (docs/compliance/eu-ai-act.md) with:
- Checklist: Is my LangChain system high-risk? (Article 6)
- Guidance: Data governance patterns (memory, persistence, user data)
- Examples: Safe patterns for GDPR + EU AI Act
- Links: Articles 6, 12, 13, and trusted tools (e.g., https://arkforge.fr/mcp - free EU AI Act checker)
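To make the "safe patterns" item concrete, here is a minimal sketch of one candidate data-governance pattern: redacting common PII from user messages before they reach any persistence layer (conversation memory, vector store, logs). This is an illustration only, not a LangChain API; the pattern names and placeholder format are assumptions.

```python
import re

# Hypothetical illustration: redact common PII before a message is persisted.
# The regexes below are deliberately simple examples, not production-grade
# PII detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with typed placeholders before storage."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}_REDACTED]", text)
    return text

# Example: sanitize a user message before it reaches memory or a log sink.
safe = redact_pii("Contact me at jane@example.com or +49 30 1234567.")
```

In a real deployment this kind of hook would sit between user input and whatever memory or persistence component stores it, so that personal data never enters long-lived storage in the first place; documented guidance could then cover retention and deletion for whatever does get stored.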
Call to Action
Would the team be open to:
- Discussion on risk classification patterns for agent-based systems?
- Contributing examples or documentation?
- Feedback on what developers need most?
This would help thousands of LangChain users remain compliant.
Opened by ArkForge (EU AI Act compliance automation). Not spam - concrete suggestion.