Enhance OpenTelemetry documentation and update package dependencies (kaiban-ai#250)
- Expanded the README.md to include detailed span hierarchy and span
kinds for better understanding of the tracing structure.
- Updated event descriptions for task and agent events to reflect new
statuses.
- Added new attributes for tasks and agents to improve observability.
- Included span context management details to clarify how spans are
correlated.
- Updated package-lock.json to include new dependencies and version
updates for improved functionality.
- `removeAgentSpan(agentId: string)` - Remove an agent span from the context

### Context Lifecycle

1. **Task Execution**: Task spans are created
2. **Agent Thinking**: Agent thinking spans are nested under task spans
3. **Task Completion**: All spans are completed and the context is cleared

### Span Correlation

The context ensures proper parent-child relationships between spans:

- Task spans are parents of agent thinking spans
- All spans maintain proper trace context for distributed tracing
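The lifecycle and correlation rules above can be sketched as a small context manager. This is a hypothetical illustration, not the package's actual implementation: the `SpanContext` class, its method names other than `removeAgentSpan`, and the simplified `Span` shape are assumptions for the sketch.

```typescript
// Minimal stand-in for a span; the real spans come from @opentelemetry/api.
type Span = { name: string; parent?: Span; ended: boolean };

class SpanContext {
  private taskSpans = new Map<string, Span>();
  private agentSpans = new Map<string, Span>();

  // 1. Task Execution: a task span is created and tracked.
  startTaskSpan(taskId: string): Span {
    const span: Span = { name: "task.execute", ended: false };
    this.taskSpans.set(taskId, span);
    return span;
  }

  // 2. Agent Thinking: the agent span is parented to the owning task span.
  startAgentSpan(agentId: string, taskId: string): Span {
    const parent = this.taskSpans.get(taskId);
    const span: Span = { name: "kaiban.agent.thinking", parent, ended: false };
    this.agentSpans.set(agentId, span);
    return span;
  }

  removeAgentSpan(agentId: string): void {
    this.agentSpans.delete(agentId);
  }

  // 3. Task Completion: every open span is ended and the context is cleared.
  completeTask(taskId: string): void {
    for (const span of this.agentSpans.values()) span.ended = true;
    const task = this.taskSpans.get(taskId);
    if (task) task.ended = true;
    this.taskSpans.clear();
    this.agentSpans.clear();
  }
}
```

Because the agent span records its task span as `parent`, an exporter can emit both with a shared trace context, which is what makes the nesting visible in tracing backends.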
## Metrics
- `kaiban.llm.request.start_time` - When the thinking process started
- `kaiban.llm.request.status` - Status of the request (started, interrupted, completed)
- `kaiban.llm.request.input_length` - Length of the input messages
- `kaiban.llm.request.has_metadata` - Whether metadata is available
- `kaiban.llm.request.metadata_keys` - Available metadata keys
### LLM Usage Attributes (`kaiban.llm.usage.*`)
### LLM Response Attributes (`kaiban.llm.response.*`)

- `kaiban.llm.response.status` - Status of the response (completed, error, etc.)
- `kaiban.llm.response.output_length` - Length of the output messages
### Task Attributes (`task.*`)

- `task.id` - Unique task identifier
- `task.name` - Task title
- `task.description` - Task description
- `task.status` - Task status (started, completed, errored, aborted)
- `task.start_time` - When task execution started
- `task.end_time` - When task execution ended
- `task.duration_ms` - Task execution duration in milliseconds
- `task.iterations` - Number of iterations performed
- `task.total_cost` - Total cost for the task
- `task.total_tokens_input` - Total input tokens used
- `task.total_tokens_output` - Total output tokens generated
- `task.has_metadata` - Whether the task has metadata
- `task.metadata_keys` - Available metadata keys
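As an illustration, the `task.*` attributes above can be assembled into a flat record before being set on a span. This is a sketch only: the `TaskResult` interface and the `taskAttributes` helper are assumptions for the example, not the package's actual types.

```typescript
// Hypothetical task shape -- the real KaibanJS task object may differ.
interface TaskResult {
  id: string;
  title: string;
  description: string;
  status: "started" | "completed" | "errored" | "aborted";
  startTime: number;   // epoch milliseconds
  endTime: number;     // epoch milliseconds
  iterations: number;
  totalCost: number;
  tokensInput: number;
  tokensOutput: number;
  metadata?: Record<string, unknown>;
}

// Flatten a task into the `task.*` semantic-convention attributes.
function taskAttributes(
  task: TaskResult
): Record<string, string | number | boolean> {
  const metadataKeys = Object.keys(task.metadata ?? {});
  return {
    "task.id": task.id,
    "task.name": task.title,
    "task.description": task.description,
    "task.status": task.status,
    "task.start_time": task.startTime,
    "task.end_time": task.endTime,
    "task.duration_ms": task.endTime - task.startTime,
    "task.iterations": task.iterations,
    "task.total_cost": task.totalCost,
    "task.total_tokens_input": task.tokensInput,
    "task.total_tokens_output": task.tokensOutput,
    "task.has_metadata": metadataKeys.length > 0,
    "task.metadata_keys": metadataKeys.join(","),
  };
}
```

Keeping the attributes flat (strings, numbers, booleans) matches what OpenTelemetry span attributes accept, so the record can be passed to a span's attribute setter directly.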
### Agent Attributes (`agent.*`)

- `agent.id` - Unique agent identifier
- `agent.name` - Agent name
- `agent.role` - Agent role description
### Error Attributes (`error.*`)

- `error.message` - Error message
- `error.type` - Error type
- `error.stack` - Error stack trace
### Span Types

- `task.execute` - Task execution spans
- `kaiban.agent.thinking` - Agent thinking spans (nested under task spans)
These conventions ensure that observability services like Langfuse, Phoenix, and others can automatically recognize and properly display LLM-related data in their dashboards.