---
title: Set Up
sidebar_order: 0
description: "Learn how to set up Sentry AI Monitoring"
---

Sentry AI Monitoring is easiest to use with the Python SDK and an official integration like OpenAI.

To start sending AI data to Sentry, make sure you've created a Sentry project for your AI-enabled repository, then follow one of the guides below:

## Official AI Integrations

- [OpenAI](/platforms/python/integrations/openai/)
- [Langchain](/platforms/python/integrations/langchain/)

<Alert level="note" title="Don't see your platform?">

We'll be adding AI integrations continuously. You can also instrument your AI workloads manually with the Sentry Python SDK.

</Alert>
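
With an official integration, setup is mostly a matter of initializing the SDK with tracing enabled. Below is a minimal sketch for the OpenAI integration; the specific options shown (`send_default_pii`, `include_prompts`) are assumptions to verify against the integration guide linked above.

```python
import sentry_sdk
from sentry_sdk.integrations.openai import OpenAIIntegration

sentry_sdk.init(
    dsn="___PUBLIC_DSN___",  # placeholder; use your project's DSN
    # Tracing must be enabled for AI pipelines to show up in the
    # AI Monitoring dashboard.
    traces_sample_rate=1.0,
    # Assumed options: sending prompt and response text is opt-in.
    send_default_pii=True,
    integrations=[
        OpenAIIntegration(include_prompts=True),
    ],
)
```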

## Pipelines and LLMs

The Sentry AI Monitoring feature relies on you having an orchestrator (like Langchain) that creates pipelines of one or more AI models (such as gpt-4). In the AI Monitoring dashboard, we show you a table of your AI pipelines and pull the token usage from your AI models.

If you're using OpenAI without Langchain, you'll need to manually create pipelines with the `@ai_track` decorator. If you're using Langchain without OpenAI, you might have to manually record token usage with `record_token_usage()`. Both manual helpers are documented below.

### Python SDK Decorators

The [Python SDK](/platforms/python) includes an `@ai_track` decorator which marks functions as AI-related and causes them to show up in the AI Monitoring dashboard.

```python
import time

import requests
import sentry_sdk
from openai import OpenAI
from sentry_sdk.ai_monitoring import ai_track, record_token_usage

# This example assumes sentry_sdk.init(...) has already been called
# (see the setup sketch above) so the spans are actually sent to Sentry.


@ai_track(description="AI tool")
def some_workload_function():
    """
    This function is an example of calling arbitrary code with @ai_track
    so that it shows up in the Sentry trace.
    """
    time.sleep(5)


@ai_track(description="LLM")
def some_llm_call():
    """
    This function is an example of calling an LLM provider that isn't
    officially supported by Sentry.
    """
    with sentry_sdk.start_span(op="ai.chat_completions.create.examplecom", description="Example.com LLM") as span:
        result = requests.get("https://example.com/api/llm-chat?question=say+hello").json()
        # Annotate the tokens used by the LLM so that they show up in the
        # graphs in the dashboard.
        record_token_usage(span, total_tokens=result["usage"]["total_tokens"])
        return result["text"]


@ai_track(description="My AI pipeline")
def some_pipeline():
    """
    The topmost function with @ai_track gets the operation "ai.pipeline",
    which makes it show up in the table of AI pipelines in the Sentry
    AI Monitoring dashboard.
    """
    client = OpenAI()
    some_workload_function()
    some_llm_call()
    response = (
        client.chat.completions.create(
            model="some-model", messages=[{"role": "system", "content": "say hello"}]
        )
        .choices[0]
        .message.content
    )
    print(response)


with sentry_sdk.start_transaction(op="ai-inference", name="The result of the AI inference"):
    some_pipeline()
```
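
When this script runs, `some_pipeline` becomes the `ai.pipeline` span inside the enclosing transaction, with `some_workload_function` and `some_llm_call` nested under it as child spans. The token counts recorded with `record_token_usage()` are what feed the token usage graphs in the dashboard.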