---
title: Instrument AI Agents
sidebar_order: 500
description: "Learn how to manually instrument your code to use Sentry's Agents module."
---

As a prerequisite to setting up [AI Agents](/product/insights/agents/), you’ll need to first <PlatformLink to="/tracing/">set up tracing</PlatformLink>. Once this is done, the Python SDK will automatically instrument AI agents created with the `openai-agents` library. If that doesn't fit your use case, you can set up the custom instrumentation described below.

## Custom Instrumentation

For your AI agents data to show up in Sentry's Agents Insights module, you need to create several different kinds of spans. These spans must have well-defined names and attributes.

### Common Span Attributes

Some attributes are common to all types of AI Agents spans:

| Data Attribute           | Type   | Description                                                                           |
| :----------------------- | :----- | :------------------------------------------------------------------------------------ |
| `gen_ai.system`          | string | The Generative AI product as identified by the client or server instrumentation. [1]  |
| `gen_ai.request.model`   | string | The name of the AI model a request is being made to.                                   |
| `gen_ai.operation.name`  | string | The name of the operation being performed. [2]                                         |
| `gen_ai.agent.name`      | string | The name of the agent this span belongs to.                                             |

**[1]** Well-defined values for the data attribute `gen_ai.system`:

| Value             | Description                       |
| :---------------- | :-------------------------------- |
| `anthropic`       | Anthropic                         |
| `aws.bedrock`     | AWS Bedrock                       |
| `az.ai.inference` | Azure AI Inference                |
| `az.ai.openai`    | Azure OpenAI                      |
| `cohere`          | Cohere                            |
| `deepseek`        | DeepSeek                          |
| `gcp.gemini`      | Gemini                            |
| `gcp.gen_ai`      | Any Google generative AI endpoint |
| `gcp.vertex_ai`   | Vertex AI                         |
| `groq`            | Groq                              |
| `ibm.watsonx.ai`  | IBM Watsonx AI                    |
| `mistral_ai`      | Mistral AI                        |
| `openai`          | OpenAI                            |
| `perplexity`      | Perplexity                        |
| `xai`             | xAI                               |

**[2]** Well-defined values for the data attribute `gen_ai.operation.name`:

| Value              | Description                                                               |
| :----------------- | :------------------------------------------------------------------------ |
| `chat`             | Chat completion operation, such as the OpenAI Chat API                     |
| `create_agent`     | Create a GenAI agent                                                       |
| `embeddings`       | Embeddings operation, such as the OpenAI Create Embeddings API             |
| `execute_tool`     | Execute a tool                                                             |
| `generate_content` | Multimodal content generation operation, such as Gemini Generate Content   |
| `invoke_agent`     | Invoke a GenAI agent                                                       |

### Invoke Agent Span

This span wraps one invocation of an agent.

- `span.op` = `"gen_ai.invoke_agent"`
- `span.name` = `"gen_ai.invoke_agent {gen_ai.agent.name}"` (Example: `"gen_ai.invoke_agent Weather Forecast Agent"`)
- Span attributes (a minimal sketch of setting these follows the list):
  - `gen_ai.request.model`: The model that is used.
  - `gen_ai.request.available_tools`: An array of objects that describe the tools available to the agent.
  - `gen_ai.request.frequency_penalty`: Model configuration
  - `gen_ai.request.max_tokens`: Model configuration
  - `gen_ai.request.presence_penalty`: Model configuration
  - `gen_ai.request.temperature`: Model configuration
  - `gen_ai.request.top_p`: Model configuration

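The following is a minimal sketch of creating such a span manually with `sentry_sdk.start_span` and `span.set_data`, assuming the SDK is already initialized; the agent name, model, and tool list are placeholder values, not something defined by the SDK:

```python
import sentry_sdk

# Placeholder agent metadata, used for illustration only.
AGENT_NAME = "Weather Forecast Agent"

with sentry_sdk.start_span(
    op="gen_ai.invoke_agent",
    name=f"gen_ai.invoke_agent {AGENT_NAME}",
) as span:
    # Common attributes shared by all AI Agents spans.
    span.set_data("gen_ai.operation.name", "invoke_agent")
    span.set_data("gen_ai.system", "openai")
    span.set_data("gen_ai.agent.name", AGENT_NAME)
    span.set_data("gen_ai.request.model", "gpt-4o-mini")
    # Model configuration and available tools.
    span.set_data("gen_ai.request.temperature", 0.1)
    span.set_data(
        "gen_ai.request.available_tools",
        [{"name": "get_forecast", "description": "Look up a weather forecast"}],
    )
    # ... run the agent here ...
```
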
### Execute Tool Span

This span wraps the execution of a tool.

- `span.op` = `"gen_ai.execute_tool"`
- `span.name` = `"gen_ai.execute_tool {tool.name}"` (Example: `"gen_ai.execute_tool query_database"`)
- Span attributes (see the sketch after this list):
  - `gen_ai.request.available_tools`: An array of objects that describe the tools available to the agent.
  - `gen_ai.request.frequency_penalty`: Model configuration
  - `gen_ai.request.max_tokens`: Model configuration
  - `gen_ai.request.model`: The model that is used.
  - `gen_ai.request.presence_penalty`: Model configuration
  - `gen_ai.request.temperature`: Model configuration
  - `gen_ai.request.top_p`: Model configuration
  - `gen_ai.tool.description`: Description of the tool being executed.
  - `gen_ai.tool.input`: The input passed to the tool. (Example: `{"max": 10}`)
  - `gen_ai.tool.name`: The name of the tool being executed. (Example: `"random_number"`)
  - `gen_ai.tool.output`: The output returned by the tool.
  - `gen_ai.tool.type`: The type of the tool.

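A minimal sketch of wrapping a tool call, again assuming an initialized `sentry_sdk`; the `random_number` tool and its input are hypothetical examples:

```python
import json
import random

import sentry_sdk


def random_number(max: int) -> int:
    # Hypothetical tool, used only for illustration.
    return random.randint(0, max)


tool_input = {"max": 10}

with sentry_sdk.start_span(
    op="gen_ai.execute_tool",
    name="gen_ai.execute_tool random_number",
) as span:
    # Common attributes plus tool-specific attributes.
    span.set_data("gen_ai.operation.name", "execute_tool")
    span.set_data("gen_ai.system", "openai")
    span.set_data("gen_ai.tool.name", "random_number")
    span.set_data("gen_ai.tool.description", "Generate a random number up to a maximum")
    span.set_data("gen_ai.tool.input", json.dumps(tool_input))
    result = random_number(**tool_input)
    span.set_data("gen_ai.tool.output", json.dumps(result))
```
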
### AI Client Span

This span wraps the request to an LLM.

- `span.op` = `"gen_ai.{gen_ai.operation.name}"` (Example: `"gen_ai.chat"`)
- `span.name` = `"{gen_ai.operation.name} {model.name}"` (Example: `"chat gpt-4o-mini"`)
- Span attributes (a minimal sketch follows the list):
  - `gen_ai.request.available_tools`
  - `gen_ai.request.frequency_penalty`
  - `gen_ai.request.max_tokens`
  - `gen_ai.request.messages`
  - `gen_ai.request.model`
  - `gen_ai.request.presence_penalty`
  - `gen_ai.request.temperature`
  - `gen_ai.request.top_p`
  - `gen_ai.response.tool_calls`
  - `gen_ai.system`
  - `gen_ai.system.message`
  - `gen_ai.usage.input_tokens`
  - `gen_ai.usage.input_tokens.cached`
  - `gen_ai.usage.output_tokens`
  - `gen_ai.usage.output_tokens.reasoning`
  - `gen_ai.usage.total_tokens`
  - `gen_ai.user.message`

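A minimal sketch of wrapping an LLM request, assuming an initialized `sentry_sdk` and the official `openai` client; if Sentry's automatic OpenAI instrumentation is enabled, it may already create a similar span, so treat this purely as an illustration of the attribute names:

```python
import json

import sentry_sdk
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"
messages = [{"role": "user", "content": "What is the weather like in Paris?"}]

with sentry_sdk.start_span(op="gen_ai.chat", name=f"chat {model}") as span:
    # Request attributes recorded before the call.
    span.set_data("gen_ai.operation.name", "chat")
    span.set_data("gen_ai.system", "openai")
    span.set_data("gen_ai.request.model", model)
    span.set_data("gen_ai.request.messages", json.dumps(messages))

    response = client.chat.completions.create(model=model, messages=messages)

    # Usage attributes recorded from the response.
    span.set_data("gen_ai.usage.input_tokens", response.usage.prompt_tokens)
    span.set_data("gen_ai.usage.output_tokens", response.usage.completion_tokens)
    span.set_data("gen_ai.usage.total_tokens", response.usage.total_tokens)
```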
