# Sentry and Python LLM Rules

You are an expert in application architecture and the configuration of observability tools like Sentry. Follow these instructions while taking the user's request into account.

Always install Sentry with `pip install --upgrade sentry-sdk` so that the latest version of the SDK is used, unless explicitly told otherwise.

## Configuration examples

Use these examples to configure Sentry functionality in Python applications. Treat them as the default configuration, adjusted for the user's requests.

### Sentry Setup (applies to all configurations)

Configure Sentry using the following parameters unless explicitly told otherwise.

```python
import sentry_sdk

sentry_sdk.init(
    dsn="<sentry dsn>",

    # Add request headers and IP addresses for users
    send_default_pii=True,

    # Set traces_sample_rate to 1.0 to capture 100%
    # of transactions for tracing.
    traces_sample_rate=1.0,

    # Set profiles_sample_rate to 1.0 to profile 100%
    # of sampled transactions.
    # Adjust this value in production.
    profiles_sample_rate=1.0,
)
```

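If the application distinguishes deployments, the standard `environment` and `release` init options can be added as well. A minimal sketch, assuming the values come from environment variables (the variable names `APP_ENV` and `APP_RELEASE` are illustrative, not required by Sentry):

```python
import os

import sentry_sdk

sentry_sdk.init(
    dsn="<sentry dsn>",
    send_default_pii=True,
    traces_sample_rate=1.0,
    profiles_sample_rate=1.0,
    # Optional: tag events with the deployment environment and release.
    # APP_ENV and APP_RELEASE are illustrative variable names.
    environment=os.environ.get("APP_ENV", "development"),
    release=os.environ.get("APP_RELEASE"),
)
```
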
### Error Tracking and Exception Catching

Instrument errors throughout the application using the following approaches:

```python
import sentry_sdk

# Explicitly capture an exception
try:
    division_by_zero = 1 / 0
except Exception as e:
    sentry_sdk.capture_exception(e)

# Capture a custom message with additional context
sentry_sdk.capture_message(
    "Something went wrong",
    level="error",
    extras={"additional_context": "value"},
)
```

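Errors are easier to triage when they carry user and tag context. A small sketch using the SDK's scope helpers (the tag name, tag value, and user fields here are examples, and `risky_operation` is a placeholder for application code):

```python
import sentry_sdk

# Attach searchable tags and user context to subsequent events
sentry_sdk.set_tag("feature", "checkout")  # example tag name/value
sentry_sdk.set_user({"id": "42", "email": "user@example.com"})  # example user

try:
    risky_operation()  # placeholder for application code
except Exception as e:
    sentry_sdk.capture_exception(e)
```
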
### Tracing and Performance Monitoring

Use the following patterns for tracing and performance monitoring:

```python
import sentry_sdk

with sentry_sdk.start_transaction(name="task_name", op="task"):
    # Start a child span for a sub-operation within the transaction
    with sentry_sdk.start_span(op="subtask", name="Subtask description") as span:
        try:
            # Your code here
            span.set_data("key", "value")
        except Exception:
            # Record failure information before re-raising
            span.set_status("internal_error")
            raise
```

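For instrumenting individual functions, recent SDK versions also expose a `sentry_sdk.trace` decorator that wraps the function in a child span. A brief sketch, assuming an SDK version that provides the decorator and an active transaction:

```python
import sentry_sdk

@sentry_sdk.trace
def load_data():
    # Runs inside a child span of the current transaction
    return [1, 2, 3]

with sentry_sdk.start_transaction(name="task_name", op="task"):
    load_data()
```
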
### AI/LLM Monitoring

For AI and LLM monitoring:

```python
import sentry_sdk
from sentry_sdk.ai.monitoring import ai_track

sentry_sdk.init(
    dsn="<sentry dsn>",
    # Tracing must be enabled for AI pipelines to be recorded
    traces_sample_rate=1.0,
    # To include AI prompts and completions, set send_default_pii=True
    send_default_pii=True,
)

@ai_track("My AI pipeline")
def my_pipeline():
    with sentry_sdk.start_transaction(op="ai-inference", name="AI operation"):
        # AI operation code
        pass
```

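If the application calls OpenAI through the official `openai` package, the SDK's OpenAI integration can capture LLM calls automatically. A sketch assuming a recent SDK version that ships `OpenAIIntegration` (prompt and completion text can be kept out of Sentry by setting `include_prompts=False`):

```python
import sentry_sdk
from sentry_sdk.integrations.openai import OpenAIIntegration

sentry_sdk.init(
    dsn="<sentry dsn>",
    send_default_pii=True,
    traces_sample_rate=1.0,
    integrations=[
        # include_prompts controls whether prompt/completion text is recorded
        OpenAIIntegration(include_prompts=True),
    ],
)
```
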
### Framework Integrations

The Python SDK automatically enables integrations for frameworks it detects in your environment. To configure them explicitly:

```python
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration
from sentry_sdk.integrations.flask import FlaskIntegration

sentry_sdk.init(
    dsn="<sentry dsn>",
    integrations=[
        # Only include integrations for the frameworks the application uses
        FlaskIntegration(),
        DjangoIntegration(),
        # Add other integrations as needed
    ],
)
```
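
If explicit control over integrations is preferred, automatic enabling can be turned off so that only the listed integrations are used. A sketch relying on the `auto_enabling_integrations` init option:

```python
import sentry_sdk
from sentry_sdk.integrations.flask import FlaskIntegration

sentry_sdk.init(
    dsn="<sentry dsn>",
    # Disable automatic detection and use only the integrations listed below
    auto_enabling_integrations=False,
    integrations=[FlaskIntegration()],
)
```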