By default, deployed apps use Application Insights to trace each request and to log errors.
* [Performance](#performance)
* [Failures](#failures)
* [Dashboard](#dashboard)
* [Customizing the traces](#customizing-the-traces)

## Performance

To see the performance data, go to the Application Insights resource in your resource group, click on the "Investigate -> Performance" blade and navigate to any HTTP request to see the timing data.
To inspect the performance of chat requests, use the "Drill into Samples" button to see end-to-end traces of all the API calls made for any chat request:



## Failures

To see any exceptions and server errors, navigate to the "Investigate -> Failures" blade and use the filtering tools to locate a specific exception. You can see Python stack traces on the right-hand side.

## Dashboard

You can see chart summaries on a dashboard by running the following command:

```shell
azd monitor
```

You can modify the contents of that dashboard by updating `infra/backend-dashboard.bicep`, which is a Bicep file that defines the dashboard contents and layout.

## Customizing the traces

The tracing is done using these OpenTelemetry Python packages:

* [azure-monitor-opentelemetry](https://pypi.org/project/azure-monitor-opentelemetry/)
* [opentelemetry-instrumentation-asgi](https://pypi.org/project/opentelemetry-instrumentation-asgi/)
* [opentelemetry-instrumentation-httpx](https://pypi.org/project/opentelemetry-instrumentation-httpx/)
* [opentelemetry-instrumentation-aiohttp-client](https://pypi.org/project/opentelemetry-instrumentation-aiohttp-client/)
* [opentelemetry-instrumentation-openai](https://pypi.org/project/opentelemetry-instrumentation-openai/)

Those packages are configured in the `app.py` file (relevant imports shown here for context):

```python
import os

from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry.instrumentation.aiohttp_client import AioHttpClientInstrumentor
from opentelemetry.instrumentation.asgi import OpenTelemetryMiddleware
from opentelemetry.instrumentation.httpx import HTTPXClientInstrumentor
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

if os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING"):
    configure_azure_monitor()
    # This tracks HTTP requests made by aiohttp:
    AioHttpClientInstrumentor().instrument()
    # This tracks HTTP requests made by httpx:
    HTTPXClientInstrumentor().instrument()
    # This tracks OpenAI SDK requests:
    OpenAIInstrumentor().instrument()
    # This middleware tracks app route requests:
    app.asgi_app = OpenTelemetryMiddleware(app.asgi_app)
```

You can pass parameters to `configure_azure_monitor()` to customize the tracing, for example to add custom span processors.
You can also set [OpenTelemetry environment variables](https://opentelemetry.io/docs/reference/specification/sdk-environment-variables/) to customize the tracing, for example to set the sampling rate.
See the [azure-monitor-opentelemetry](https://pypi.org/project/azure-monitor-opentelemetry/) documentation for more details.
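
As an illustration, here is a minimal sketch of a custom span processor that stamps an attribute onto every span. It assumes the `span_processors` keyword argument of `configure_azure_monitor()` (available in recent versions of the package) and a hypothetical `APP_VERSION` environment variable:

```python
import os

from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry.sdk.trace import SpanProcessor


class AppVersionSpanProcessor(SpanProcessor):
    """Adds an app.version attribute to every span as it starts."""

    def on_start(self, span, parent_context=None):
        # APP_VERSION is a hypothetical environment variable, used only for illustration.
        span.set_attribute("app.version", os.getenv("APP_VERSION", "dev"))


if os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING"):
    # Pass the processor to the Azure Monitor distro at setup time:
    configure_azure_monitor(span_processors=[AppVersionSpanProcessor()])
```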

By default, [opentelemetry-instrumentation-openai](https://pypi.org/project/opentelemetry-instrumentation-openai/) traces all requests made to the OpenAI API, including the messages and responses. To disable that for privacy reasons, set the `TRACELOOP_TRACE_CONTENT=false` environment variable.

To set environment variables, update `appEnvVariables` in `infra/main.bicep` and re-run `azd up`.
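
As a hypothetical sketch (the actual shape and contents of `appEnvVariables` in this template's `infra/main.bicep` may differ), the change could look like:

```bicep
// Hypothetical sketch: merge the new setting into the existing appEnvVariables
// definition in infra/main.bicep rather than replacing it.
var appEnvVariables = {
  TRACELOOP_TRACE_CONTENT: 'false'
}
```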