[Feature] Add AI chat icon to Code Link hover overlay for scoped implementation assistance #859
Description
Summary
Extend the Code Link hover overlay (introduced in #856) with an additional icon that opens a context-aware Claude AI chat session, pre-loaded with the scoped XML context of the hovered service task. This gives developers a one-click path from a BPMN diagram element to focused AI assistance for implementing its logic.
Motivation
BPMN files can be large and complex. When a developer wants AI help implementing a specific service task, sending the entire BPMN XML is wasteful — most of it is irrelevant. By extracting only the relevant element XML (task definition, I/O parameters, field injections, surrounding pool/lane) and using it as the initial context for an AI session, we get:
- Lower token usage — only relevant XML is sent
- More focused answers — the agent knows exactly the I/O contract
- Better UX — one click on the overlay icon opens a pre-seeded chat, no copy-pasting
Proposed UX Flow
- User hovers a service task in the BPMN modeler → the existing Code Link overlay appears
- Next to the existing "navigate to implementation" link, a new AI chat icon is displayed
- Clicking the icon sends a message to the extension host with the `activityId`
- The extension host extracts the scoped XML context for that element
- A webview chat panel opens (side panel or editor column), pre-seeded with the extracted XML as context
- User types e.g. "implement this in JavaScript for Camunda 7"
- Claude responds with focused, implementation-ready logic
- (Stretch) An "Insert into editor" button writes the generated script back into the virtual document editor
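The stretch step above can be prototyped as a pure text transformation before wiring it to the real editor API. The marker comment and helper below are illustrative assumptions, not part of the existing codebase; the actual implementation would apply the change through a `vscode.WorkspaceEdit` against the virtual document:

```typescript
// Hypothetical helper for the "Insert into editor" stretch goal: splice the
// AI-generated script into the virtual document's text at a marker comment.
// The marker name and pure-function shape are assumptions for illustration.
const INSERT_MARKER = "// <ai-generated>";

function insertGeneratedScript(documentText: string, script: string): string {
  const idx = documentText.indexOf(INSERT_MARKER);
  if (idx === -1) {
    // No marker present: append at the end so nothing is lost.
    return documentText + "\n" + script + "\n";
  }
  // Replace the marker with the generated script.
  const end = idx + INSERT_MARKER.length;
  return documentText.slice(0, idx) + script + documentText.slice(end);
}
```

Keeping the splice logic pure makes it trivial to unit-test independently of the VS Code API.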
Technical Design
Overlay Enhancement
Add a second icon/button to the existing implementation-link overlay module (`libs/implementation-link/`). The new icon triggers an `OpenAiChatCommand` message (webview → extension host) carrying the `activityId`.
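A minimal sketch of the webview side, assuming access to the bpmn-js `overlays` service and the standard webview messaging bridge. The overlay type name, icon, and positioning are illustrative assumptions, not the shipped code:

```typescript
// Sketch only: adds an AI-chat button next to the hovered element and posts
// an OpenAiChatCommand to the extension host on click. The overlay type
// ("ai-chat-link") and position offsets are assumptions for illustration.
declare function acquireVsCodeApi(): { postMessage(msg: unknown): void };

function buildOpenAiChatMessage(activityId: string) {
  return { type: "OpenAiChatCommand", activityId };
}

function addAiChatOverlay(modeler: any, activityId: string): void {
  const vscode = acquireVsCodeApi();
  const overlays = modeler.get("overlays"); // bpmn-js overlays service

  const button = document.createElement("button");
  button.title = "Ask AI to implement this task";
  button.textContent = "✨";
  button.addEventListener("click", () =>
    vscode.postMessage(buildOpenAiChatMessage(activityId))
  );

  overlays.add(activityId, "ai-chat-link", {
    position: { top: -24, right: 0 },
    html: button,
  });
}
```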
Message Protocol Extension
| Message | Direction | Payload |
|---|---|---|
| `OpenAiChatCommand` | Webview → Extension | `activityId` |
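On the extension-host side, the new message can join the webview → host protocol as one more member of a discriminated union. The type names below are assumptions taken from the table; the repo's actual message union may be shaped differently:

```typescript
// Illustrative protocol typing for the new message.
interface OpenAiChatCommand {
  type: "OpenAiChatCommand";
  activityId: string;
}

type WebviewToHostMessage = OpenAiChatCommand; // | ...existing messages

// Type guard for the extension host's message dispatcher: narrows an
// untyped postMessage payload to OpenAiChatCommand.
function isOpenAiChatCommand(msg: unknown): msg is OpenAiChatCommand {
  return (
    typeof msg === "object" &&
    msg !== null &&
    (msg as any).type === "OpenAiChatCommand" &&
    typeof (msg as any).activityId === "string"
  );
}
```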
XML Extraction
Extract the relevant service task context using bpmn-io's internal model:
```js
const elementRegistry = modeler.get('elementRegistry');
const element = elementRegistry.get(elementId);
const businessObject = element.businessObject;

// Note: modeler.saveXML() serializes the whole diagram. To scope the output
// to a single element, serialize its business object through the moddle
// writer instead:
const moddle = modeler.get('moddle');
const { xml } = await moddle.toXML(businessObject);
```

Relevant fields to include:
- `<serviceTask>` element with implementation attributes (`camunda:class`, `camunda:expression`, `camunda:delegateExpression`, `zeebe:taskDefinition`)
- `<camunda:inputParameter>` / `<camunda:outputParameter>` (the I/O contract)
- `<camunda:field>` injections
- Surrounding pool/lane name for business context
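Collecting those fields can be sketched as a pure filter over the business object's extension elements. The property shapes below follow camunda-bpmn-moddle conventions (`$type`, `extensionElements.values`), but the helper itself is hypothetical, not the extension's actual extraction code:

```typescript
// Hypothetical scoping helper: pulls the fields listed above out of a
// service-task businessObject-like structure. Shapes are assumptions based
// on camunda-bpmn-moddle, for illustration only.
interface ModdleElement {
  $type: string;
  [key: string]: any;
}

function extractTaskContext(businessObject: ModdleElement) {
  const values: ModdleElement[] =
    businessObject.extensionElements?.values ?? [];

  // camunda:InputOutput holds the input/output parameter mapping.
  const ioMapping = values.find((v) => v.$type === "camunda:InputOutput");

  return {
    implementation: {
      class: businessObject.class,
      expression: businessObject.expression,
      delegateExpression: businessObject.delegateExpression,
    },
    inputParameters: ioMapping?.inputParameters ?? [],
    outputParameters: ioMapping?.outputParameters ?? [],
    fieldInjections: values.filter((v) => v.$type === "camunda:Field"),
  };
}
```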
Agent Integration
Use either the Claude Agent SDK (TypeScript) or the Claude CLI in non-interactive mode (`claude -p`) to start a session with the extracted XML as a system prompt:

```js
const systemPrompt = `You are helping implement task logic for a BPMN process.
Here is the relevant service task definition:
${extractedXml}
The user will ask you to implement the execution logic.`;
```

Webview Chat Panel
- Opens as a VS Code webview panel (side panel or editor column)
- Displays the extracted XML context as a collapsed preview so the user can see what was sent
- Streams Claude's response in real time
- (Stretch) Offers a button to insert the generated script directly into the script task's virtual document editor
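The collapsed context preview described above maps naturally onto a `<details>` element in the panel's HTML. The markup, CSS classes, and helper names below are illustrative assumptions about the panel, not a final design:

```typescript
// Sketch of the chat panel's initial HTML: the extracted XML is shown in a
// collapsed <details> block so the user can inspect exactly what was sent.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

function renderChatPanelHtml(extractedXml: string): string {
  return `<!DOCTYPE html>
<html>
  <body>
    <details class="context-preview">
      <summary>Context sent to the AI (click to expand)</summary>
      <pre><code>${escapeHtml(extractedXml)}</code></pre>
    </details>
    <div id="messages"></div>
    <textarea id="prompt"
      placeholder="e.g. implement this in JavaScript for Camunda 7"></textarea>
  </body>
</html>`;
}
```

Escaping the XML before interpolation keeps the raw `<serviceTask>` markup from being parsed as panel HTML.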
Acceptance Criteria
- AI chat icon appears in the Code Link hover overlay alongside the existing navigation link
- Clicking the icon sends `OpenAiChatCommand` with the `activityId` to the extension host
- Extension host extracts scoped XML for the element (task definition + I/O parameters + field injections)
- Webview chat panel opens with extracted XML shown as collapsed context preview
- AI session is started with the XML as system prompt
- Response streams into the panel
- (Stretch) "Insert into editor" button writes the script back to the virtual document
Open Questions
- Should the chat panel support multi-turn conversation, or is a single-shot query sufficient for v1?
- Which runtime targets should be pre-configured in the system prompt (JavaScript/Nashorn, Groovy, Python, Ruby)? Should the user pick via a dropdown?
- Should this integrate with the existing autocompletion strategy (Camunda 7 API hints), or remain a separate feature?
- Should the XML context be editable by the user before sending?
- Should the existing persisted implementation map data (I/O parameters, resolved file paths) be included in the context alongside the raw XML?
Related
- [Feature] Code Link: Navigate from BPMN tasks to their source code implementation #856 — Code Link feature (hover overlay + implementation map)
- Script task editing via Virtual Documents (custom URI scheme)
- `libs/implementation-link/` — existing overlay module to extend