Description
Goal
Ensure Camunda docs are in an 'AI-ready' state.
AI-ready: Docs can be reliably consumed and acted on by AI systems (copilots, agents, search/QA models) with minimal human correction, because the docs themselves are structured, explicit, accurate, and governed for AI use.
“Docs provide everything needed for efficient and safe AI usage.”
User problem
In this context, the user could be:
- An AI coding assistant, for example Claude Code, using the Camunda documentation to build an AI agent.
- A Camunda developer interacting with the docs within their IDE.
Either of these could currently struggle to access correct and relevant documentation when building with Camunda, because the documentation is aimed at human personas (such as a Java developer or a business analyst).
For an example of feedback on the current AI experience, see AI-Assisted Camunda Development: Developer Experience Feedback, which describes a scenario where AI bypasses the Modeler experience completely and relies heavily on the documentation.
Things to consider
As there is no concrete definition of what AI-ready means for a docs site, we should investigate the following tools, technologies, and concepts to define what it looks like for Camunda:
- LLMs.txt
- Agent Skills files
- Docs MCP server
- Documentation structure and model
- Page templates
- New pages, e.g. specific 'Build with AI' page that covers using an agent to build with Camunda
Example llms.txt for Camunda Docs.pdf
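For reference, the llms.txt proposal (llmstxt.org) defines a simple Markdown layout: an H1 title, a blockquote summary, and H2 sections containing annotated link lists. A minimal hypothetical sketch for the Camunda docs (the URLs and descriptions below are illustrative, not confirmed) might look like:

```markdown
# Camunda 8 Documentation

> Camunda 8 is a process orchestration platform. These docs cover modeling,
> deployment, connectors, and APIs for building and running BPMN/DMN processes.

## Guides

- [Getting started](https://docs.camunda.io/docs/guides/): Introductions for first-time users
- [Connectors](https://docs.camunda.io/docs/components/connectors/): Catalog of out-of-the-box connectors

## Optional

- [Release notes](https://docs.camunda.io/docs/reference/release-notes/): Version-specific changes
```

The "Optional" section is part of the proposal: it marks links an AI consumer can skip when context is limited.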
Discovery
TBD
Implementation
1. New targeted work items
- ...
2. In-progress/existing work
The following work is already informally aligned with the goal, and may or may not be complete (but needs to be explicitly validated against this epic's goal).
Docs MCP server
AI-Assisted Camunda Development: Developer Experience Feedback
- [AI DX] Improve AI Agent connector discoverability #8208
- [AI DX] Document connector input contracts #8209
- [AI DX] Document AI Agent tool I/O contract #8210
- [AI DX] Add programmatic reference for webhook connector properties #8211
- [AI DX] Clearly document Enum values #8212
- [AI DX] Duplicate message subscriptions #8213
- [AI DX] AI Agent Task vs Sub-process implementation #8214
Example success criteria
The following are examples of what the success criteria for validating AI-ready docs could be.
Note: We need to balance implementation for AI needs against what is best for our main human user base. These two audiences typically benefit from the same good documentation practices, but we must be explicit whenever a choice favors one over the other.
Scope & Clarity
- Each page has a single, clearly stated primary purpose, or at minimum opens with one introductory sentence stating its scope.
- Assumptions and prerequisites are explicitly listed (versions, environment, roles).
- Supported vs. unsupported scenarios are called out.
- Edge cases and failure modes are documented (not only happy path).
Structure & Semantics
- Consistent heading structure (H1/H2/H3) across pages.
- One main concept per page/section; no “kitchen sink” guides.
- Important entities (APIs, configs, resources, BPMN/DMN, connectors) have stable anchors/IDs.
- Key workflows have a dedicated “Overview → Steps → Outcomes” structure.
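As a sketch, a workflow page following this structure could look like the skeleton below (the section names and version constraint are illustrative, not an existing Camunda page template):

```markdown
# Deploy a process with the Zeebe client

## Overview
What this guide achieves, and when to use it instead of deploying via Modeler.

## Prerequisites
- A running Camunda 8 cluster (version noted explicitly)
- Client credentials with permission to deploy

## Steps
1. Configure the client connection.
2. Deploy the BPMN file.
3. Verify the deployment in Operate.

## Outcomes
- The process definition is available for new instances.
```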
Machine-friendly Formatting
- Commands, code, configs, and error messages appear as text/code blocks (not images/PDF-only).
- Options/parameters are captured in tables with name, type, default, required?, description.
- No critical information is only present in diagrams/screenshots without accompanying text.
- Markdown/HTML is syntactically valid and free of giant unstructured blobs.
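For instance, a connector option documented in the table format described above might look like the following (the parameters shown are hypothetical, for illustration only):

```markdown
| Name       | Type   | Default | Required | Description                                 |
| ---------- | ------ | ------- | -------- | ------------------------------------------- |
| `url`      | string | (none)  | yes      | Endpoint the connector calls.               |
| `timeout`  | number | `30`    | no       | Request timeout in seconds.                 |
| `authType` | enum   | `none`  | no       | One of `none`, `basic`, `bearer`, `oauth2`. |
```

Tables like this are trivially parseable by both humans and AI systems, and the explicit `Default` and `Required` columns remove the ambiguity that prose descriptions often leave.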
Explicit “Contracts” for Workflows
For each important workflow / how-to:
- Inputs: variables, payloads, forms, and their types are listed.
- Outputs: what “success” produces (state changes, artifacts, events) is defined.
- Side-effects: external systems, data mutations, and long-running consequences are described.
- Decision logic (“if X then Y”) is written down rather than implied.
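A workflow contract covering these four points could be written as a short, structured block like this (the workflow, variables, and thresholds are hypothetical examples):

```markdown
## Contract: Invoice approval workflow

**Inputs**
- `invoiceId` (string): identifier of the invoice to approve.
- `amount` (number): invoice total in EUR.

**Outputs**
- On success: the process instance completes and `approved` (boolean) is set.

**Side-effects**
- Sends an approval notification via the configured email connector.

**Decision logic**
- If `amount > 10000`, the instance routes to manual review; otherwise it is auto-approved.
```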
Agent-usable Instruction Slices
- Repeated tasks have short, copy-pastable “micro-instruction” blocks (e.g. “To add connector X …”).
- Each micro-instruction is self-contained (doesn’t rely on hidden context from elsewhere).
- Example prompts/snippets are labeled and scoped (what they are safe/good for).
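A micro-instruction block of this kind might read as follows (the connector name, steps, and scoping note are illustrative, not taken from the current docs):

```markdown
### To add a REST connector to a task

1. In the Modeler, select the task and open the element template picker.
2. Choose the REST connector template.
3. Set `url` and `method` in the properties panel.

Safe for: adding outbound HTTP calls.
Not for: inbound webhooks (covered in a separate guide).
```

The final "Safe for / Not for" lines implement the labeling and scoping criterion: an agent can decide from the block alone whether it applies.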
Versioning & Governance
- Every page states the applicable product version(s) and compatibility constraints.
- Deprecated content is clearly marked with guidance on what to use instead.
- Release notes link to, and are linked from, the canonical docs they affect.
- There is a named owner or team for each major doc area, with a defined review path.
Access, Safety & Policy
- Docs needed by AI tools are in accessible locations (public or consistently reachable to MCP/QA systems).
- Clear “never do X” and safety/guardrail notes exist where misuse would be risky (e.g. destructive commands, production data).
- Any required anonymization or data-handling rules are spelled out in the relevant guides.
Validation with AI Tools
- Representative pages have been successfully used in at least one AI-assisted workflow (e.g. MCP, Copilot, Kapa/Docs AI, Claude Code) without heavy manual correction.
- Known gaps discovered via AI usage (hallucinations, repeated misunderstandings) are tracked and fed back into doc improvements.
Analytics/testing
TBD