
Commit 7685893

Merge branch 'master' into cursor/set-up-flutter-web-for-chrome-extensions-1df2
2 parents: c8ec7f8 + 859be5c

45 files changed (+1199, -170 lines)


develop-docs/integrations/index.mdx

Lines changed: 67 additions & 1 deletion
@@ -4,4 +4,70 @@ description: How to setup Sentry integrations. Integrations connect the Sentry s
sidebar_order: 80
---

- <PageGrid />
+ <PageGrid/>

## Integration Backup and Restore Scripts

### Overview

When working on integration development locally, your database contains important configuration that makes integrations work properly. If you run `make reset-db` or need to delete your local environment, you lose all this setup and have to configure integrations from scratch.

Two scripts help you back up and restore your local Sentry integration configuration and setup data.

These scripts allow you to:

- **Backup**: Save your current integration state to a JSON file
- **Restore**: Load the integration state back into a clean database
### What Data is Backed Up

The scripts handle the following Sentry models in the correct dependency order:

- `IdentityProvider` - Authentication provider configurations
- `Integration` - Integration instances and settings
- `Identity` - User identity mappings
- `OrganizationIntegration` - Organization-specific integration configurations
### Prerequisites

- Sentry development environment set up locally
- Python environment with Sentry dependencies installed
- Access to your local Sentry database

### Script Files

There are two scripts (they exist in both `sentry` and `getsentry`):

- `save_integration_data` - Backs up integration data
- `load_integration_data` - Restores integration data
### Usage Instructions

#### Step 1: Save Your Integration Data

Before running `make reset-db` or making any destructive changes, back up your current integration state:

```bash
# Navigate to your Sentry project directory
cd /path/to/sentry # or /path/to/getsentry

# Run the save script
bin/save_integration_data --output-file integration_backup.json
```
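For orientation, the backup is a single JSON file listing the saved records. Assuming the script relies on Django's standard model serialization (an assumption for illustration, not documented behavior of the script), the entries would follow the model order above, roughly like this:

```json
[
  {"model": "sentry.identityprovider", "pk": 1, "fields": {"...": "..."}},
  {"model": "sentry.integration", "pk": 1, "fields": {"...": "..."}},
  {"model": "sentry.identity", "pk": 1, "fields": {"...": "..."}},
  {"model": "sentry.organizationintegration", "pk": 1, "fields": {"...": "..."}}
]
```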
#### Step 2: Restore Your Integration Data

After your database is reset and ready, restore your integration configuration:

```bash
# Basic restore (preserves original organization IDs)
bin/load_integration_data --input-file integration_backup.json
```

**Or, if you need to change the organization ID:**

```bash
# Restore and update organization ID for OrganizationIntegration objects
bin/load_integration_data --input-file integration_backup.json --org-id 123
```
After the restore completes, your previous integration data is available again and you can continue using the integrations for local development.

develop-docs/sdk/data-model/envelope-items.mdx

Lines changed: 1 addition & 1 deletion
@@ -400,7 +400,7 @@ _None_
Item type `"log"` contains an array of log payloads encoded as JSON. This allows for multiple log payloads to be sent in a single envelope item.

- Only a single log item is allowed per envelope. The `item_count` field in the envelope item header must match the number of logs sent; it is not optional. A `content_type` field in the envelope item header must be set to `application/vnd.sentry.items.log+json`.
+ Only a single log container is allowed per envelope. The `item_count` field in the envelope item header must match the number of logs sent; it is not optional. A `content_type` field in the envelope item header must be set to `application/vnd.sentry.items.log+json`.
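For illustration, an envelope item header that satisfies these requirements could look like the following (the count is an example value):

```json
{"type": "log", "item_count": 3, "content_type": "application/vnd.sentry.items.log+json"}
```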

It's okay to mix logs from different traces into the same log envelope item, but if you do, you MUST not attach a DSC (dynamic sampling context) to the envelope header.

develop-docs/sdk/telemetry/replays.mdx

Lines changed: 19 additions & 0 deletions
@@ -172,3 +172,22 @@ The other sub-item is the replay recording's instructions set. This payload shou
{"segment_id":0}
/* gzipped JSON payload */
```

## SDK Behavior

### Session mode

When an SDK records Session Replay in `session` mode (`sessionSampleRate` is specified), the recording should start when the SDK is initialized and should be continuously streamed to the Sentry servers. The SDK should send a replay envelope every 5 seconds. The maximum duration of the recording should not exceed 60 minutes (or 15 minutes without activity on Web). After the hard limit has been reached, the SDK should stop recording, clear out the current `replay_id`, and remove it from the Scope so that subsequent events don't get associated with it.

For SDKs that support a disk cache, the recording should pause when there's no internet connection or the SDK is being rate-limited. This is necessary to prevent overflowing the disk cache, which could result in losing more critical envelopes. When the internet connection is restored or the rate limit is lifted, the recording should resume.
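As a rough illustration of those timing rules (not an actual SDK API; names like `sendReplayEnvelope` are made up for this sketch):

```typescript
const FLUSH_INTERVAL_MS = 5_000; // send a replay envelope every 5 seconds
const MAX_SESSION_DURATION_MS = 60 * 60_000; // hard limit: 60 minutes

function startSessionRecording(scope: { setReplayId(id: string | undefined): void }): void {
  const replayId = crypto.randomUUID();
  const startedAt = Date.now();
  scope.setReplayId(replayId);

  const timer = setInterval(() => {
    if (Date.now() - startedAt >= MAX_SESSION_DURATION_MS) {
      // Hard limit reached: stop recording and detach the replay from the scope,
      // so subsequent events are no longer associated with it.
      clearInterval(timer);
      scope.setReplayId(undefined);
      return;
    }
    // Flush whatever was recorded since the last tick as the next segment.
    sendReplayEnvelope(replayId, collectPendingRecording());
  }, FLUSH_INTERVAL_MS);
}

// Illustrative stubs — a real SDK wires these to its recorder and transport.
function collectPendingRecording(): unknown {
  return {};
}
function sendReplayEnvelope(replayId: string, recording: unknown): void {
  console.log(`would send a segment for replay ${replayId}`, recording);
}
```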
### Buffer mode

When an SDK records Session Replay in `buffer` mode (`onErrorSampleRate` is specified), the recording should start when the SDK is initialized and should be buffered in memory (and on disk, if the SDK supports a disk cache) in a ring buffer holding up to the last 30 seconds. Capturing the recording may be triggered when one of the following conditions is met:

- A crash or error event occurs and is captured by the SDK
- The `flush` API has been called manually on the replay (for SDKs that support a manual API)

After the initial (buffered) segment has been captured, the SDK should continue recording in `session` mode. Note, however, that the `replay_type` field of the following segments should still be set to `buffer` to reflect the original `replay_type`.

If the crash or error event has been dropped in `beforeSend`, the replay should **not** be captured.
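A sketch of the buffer-mode behavior described above, again with illustrative names only:

```typescript
const BUFFER_WINDOW_MS = 30_000; // keep roughly the last 30 seconds in memory

type BufferedEvent = { timestamp: number; data: unknown };

class ReplayRingBuffer {
  private events: BufferedEvent[] = [];

  push(data: unknown): void {
    const now = Date.now();
    this.events.push({ timestamp: now, data });
    // Drop everything older than the 30-second window.
    this.events = this.events.filter((e) => now - e.timestamp <= BUFFER_WINDOW_MS);
  }

  drain(): BufferedEvent[] {
    const drained = this.events;
    this.events = [];
    return drained;
  }
}

// Called when a crash/error is captured (and not dropped in beforeSend),
// or when the replay's flush API is invoked manually.
function captureBufferedReplay(buffer: ReplayRingBuffer): void {
  const initialSegment = buffer.drain();
  sendSegment(initialSegment, { replay_type: "buffer", segment_id: 0 });
  // From here on, the SDK keeps recording in session mode, but replay_type
  // stays "buffer" on the following segments.
}

// Illustrative stub for the transport.
function sendSegment(events: BufferedEvent[], header: { replay_type: string; segment_id: number }): void {
  console.log(`would send ${events.length} buffered events`, header);
}
```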

develop-docs/sdk/telemetry/traces/modules/ai-agents.mdx

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ Additional attributes on the span:
  Describes a tool execution.

  - The span's `op` MUST be `"gen_ai.execute_tool"`.
- - The span's `name` SHOULD be `"gen_ai.execute_tool {gen_ai.tool.name}"`. (e.g. `"gen_ai.execute_tool query_database"`)
+ - The span's `name` SHOULD be `"execute_tool {gen_ai.tool.name}"`. (e.g. `"execute_tool query_database"`)
  - The `gen_ai.tool.name` attribute SHOULD be set to the name of the tool. (e.g. `"query_database"`)
  - All [Common Span Attributes](#common-span-attributes) SHOULD be set (all `required` common attributes MUST be set).

develop-docs/self-hosted/index.mdx

Lines changed: 7 additions & 1 deletion
@@ -40,7 +40,13 @@ Depending on your traffic volume, you may want to increase your system specifica

If increasing the disk storage space isn't possible, you can migrate your storage to use external storage such as AWS S3 or Google Cloud Storage (GCS). Decreasing your `SENTRY_RETENTION_DAYS` environment variable to lower numbers will save some storage space from being full, at the cost of having a shorter data retention period. See the [Event Retention](/self-hosted/configuration#event-retention) section.

- There are known issues on installing self-hosted Sentry on RHEL-based Linux distros, such as CentOS, Rocky Linux, and Alma Linux. It is also not possible to install on an Alpine Linux distro. Most people succeed with Debian/Ubuntu-based distros. If you successfully install on another distro, please let us know on the [Sentry's Discord](https://discord.gg/sentry)!
+ Below is a breakdown of self-hosted Sentry installation compatibility with various Linux distributions:
+ * **Debian/Ubuntu-based** distros are preferred; most users succeed with them, and they're used on Sentry's dogfood instance.
+ * **RHEL-based Linux** distributions (e.g., CentOS, Rocky Linux, Alma Linux) have known installation issues. While some users have made it work by disabling SELinux, this is highly discouraged.
+ * **Amazon Linux 2023**, a Fedora Linux derivative, has seen one person successfully run self-hosted Sentry. This was achieved with SELinux enabled and by adding their user to the `docker` group.
+ * **Alpine Linux** is unsupported due to install script incompatibility.
+
+ If you successfully install Sentry on a different distribution, please share your experience on [Sentry's Discord](https://discord.gg/sentry)!

## Getting Started

develop-docs/self-hosted/troubleshooting/kafka.mdx

Lines changed: 65 additions & 9 deletions
@@ -16,33 +16,89 @@ This happens where Kafka and the consumers get out of sync. Possible reasons are
2. Having a sustained event spike that causes very long processing times, causing Kafka to drop messages as they go past the retention time
3. Date/time out of sync issues due to a restart or suspend/resume cycle

### Visualize

You can visualize the Kafka consumers and their offsets by bringing an additional container, such as [Kafka UI](https://github.com/provectus/kafka-ui) or [Redpanda Console](https://github.com/redpanda-data/console), into your Docker Compose.

Kafka UI:
```yaml
kafka-ui:
  image: provectuslabs/kafka-ui:latest
  restart: on-failure
  environment:
    KAFKA_CLUSTERS_0_NAME: "local"
    KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: "kafka:9092"
    DYNAMIC_CONFIG_ENABLED: "true"
  ports:
    - "8080:8080"
  depends_on:
    - kafka
```

Or, you can use Redpanda Console:
```yaml
redpanda-console:
  image: docker.redpanda.com/redpandadata/console:latest
  restart: on-failure
  entrypoint: /bin/sh
  command: -c "echo \"$$CONSOLE_CONFIG_FILE\" > /tmp/config.yml; /app/console"
  environment:
    CONFIG_FILEPATH: "/tmp/config.yml"
    CONSOLE_CONFIG_FILE: |
      kafka:
        brokers: ["kafka:9092"]
        sasl:
          enabled: false
      schemaRegistry:
        enabled: false
      kafkaConnect:
        enabled: false
  ports:
    - "8080:8080"
  depends_on:
    - kafka
```

Ideally, you want to have zero lag for all consumer groups. If a consumer group has a lot of lag, you need to investigate whether it's caused by a disconnected consumer (e.g., a Sentry/Snuba container that's disconnected from Kafka) or a consumer that's stuck processing a certain message. If it's a disconnected consumer, you can either restart the container or reset the Kafka offset to "earliest". Otherwise, you can reset the Kafka offset to "latest".
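If you prefer the CLI over a web UI, the `kafka-consumer-groups` tool bundled with the Kafka container can print the lag of every group in one go (the `--all-groups` flag assumes Kafka 2.4 or newer):

```shell
docker compose run --rm kafka kafka-consumer-groups --bootstrap-server kafka:9092 --describe --all-groups
```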

### Recovery

<Alert level="warning" title="Warning">
These solutions may result in data loss for the duration of your Kafka event retention (defaults to 24 hours) when resetting the offset of the consumers.
</Alert>

#### Proper solution

The _proper_ solution is as follows ([reported](https://github.com/getsentry/self-hosted/issues/478#issuecomment-666254392) by [@rmisyurev](https://github.com/rmisyurev)). This example uses `snuba-consumers` with the `events` topic; your consumer group name and topic name may be different.

1. Shut down the corresponding Sentry/Snuba containers that use the consumer group (you can find them by inspecting the `docker-compose.yml` file):
   ```shell
   docker compose stop snuba-errors-consumer snuba-outcomes-consumer snuba-outcomes-billing-consumer
   ```
2. Receive consumers list:
   ```shell
   docker compose run --rm kafka kafka-consumer-groups --bootstrap-server kafka:9092 --list
   ```
3. Get group info:
   ```shell
   docker compose run --rm kafka kafka-consumer-groups --bootstrap-server kafka:9092 --group snuba-consumers --describe
   ```
4. Watch what is going to happen with the offset by using a dry run (optional):
   ```shell
   docker compose run --rm kafka kafka-consumer-groups --bootstrap-server kafka:9092 --group snuba-consumers --topic events --reset-offsets --to-latest --dry-run
   ```
5. Set offset to latest and execute:
   ```shell
   docker compose run --rm kafka kafka-consumer-groups --bootstrap-server kafka:9092 --group snuba-consumers --topic events --reset-offsets --to-latest --execute
   ```
6. Start the previously stopped Sentry/Snuba containers:
   ```shell
   docker compose start snuba-errors-consumer snuba-outcomes-consumer snuba-outcomes-billing-consumer
   ```

<Alert level="info" title="Tips">
* You can replace <code>snuba-consumers</code> with other consumer groups or <code>events</code> with other topics when needed.
* You can reset the offset to "earliest" instead of "latest" if you want to start from the beginning.
* If you have Kafka UI or Redpanda Console, you can reset the offsets through the web UI instead of the CLI.
</Alert>

#### Another option

develop-docs/self-hosted/troubleshooting/sentry.mdx

Lines changed: 8 additions & 0 deletions
@@ -15,6 +15,14 @@ CSRF_TRUSTED_ORIGINS = ["https://sentry.example.com", "http://10.100.10.10", "ht

See [Django's documentation on CSRF](https://docs.djangoproject.com/en/4.2/ref/settings/#std-setting-CSRF_TRUSTED_ORIGINS) for further detail.

18+
## Containers taking too much CPU/RAM usage
19+
20+
If you're seeing a higher incoming traffic load, then it's expected. If this is the case, you might want to increase your machine resources.
21+
22+
However, if you have very low incoming traffic and the CPU/RAM constantly spikes, you might want to check on your `top` (or `htop`, or similar) command to see which process are taking the most resources. You can then track down the corresponding Docker container and see if there are any logs that might help you identify the issue.
23+
24+
Usually, most containers that are not a dependency (like Postgres or Kafka) will consume some good amount of CPU & RAM during startup, specifically for catching up with backlogs. If they are experiencing a [bootloop](https://en.wikipedia.org/wiki/Booting#Bootloop), it usually comes down to invalid configuration or a broken dependency.
25+
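For example, the following commands are one way to map resource usage back to a container and check its recent logs (`web` is just an example service name):

```shell
# One-shot snapshot of CPU/RAM usage per container
docker stats --no-stream

# Recent logs of a suspicious container (replace "web" with the relevant service)
docker compose logs --tail 100 web
```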
## `sentry-data` volume not being cleaned up

You may see the `sentry-data` volume taking up too much disk space. You can clean it up manually (or put the cleanup cronjob in place).

docs/concepts/otlp/index.mdx

Lines changed: 46 additions & 0 deletions
@@ -0,0 +1,46 @@
---
title: OpenTelemetry Protocol (OTLP)
sidebar_order: 400
description: "Learn how to send OpenTelemetry trace data directly to Sentry from OpenTelemetry SDKs."
keywords: ["otlp", "otel", "opentelemetry"]
---

<Include name="feature-available-alpha-otlp.mdx" />

Sentry can ingest [OpenTelemetry](https://opentelemetry.io) traces directly via the [OpenTelemetry Protocol](https://opentelemetry.io/docs/specs/otel/protocol/). If you have existing OpenTelemetry trace instrumentation, you can configure your OpenTelemetry exporter to send traces to Sentry directly. Sentry's OTLP ingestion endpoint is currently in development and has a few known limitations:

- Span events are not supported. All span events are dropped during ingestion.
- Span links are partially supported. We ingest and display span links, but they cannot be searched, filtered, or aggregated. Links are shown in the [Trace View](https://docs.sentry.io/concepts/key-terms/tracing/trace-view/).
- Array attributes are partially supported. We ingest and display array attributes, but they cannot be searched, filtered, or aggregated. Array attributes are shown in the [Trace View](https://docs.sentry.io/concepts/key-terms/tracing/trace-view/).
- Sentry does not support ingesting OTLP metrics or OTLP logs.

The easiest way to configure an OpenTelemetry exporter is with environment variables. You'll need to configure the trace endpoint URL, as well as the authentication headers. Set these variables on the server where your application is running.

```bash {filename: .env}
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="___OTLP_TRACES_URL___"
export OTEL_EXPORTER_OTLP_TRACES_HEADERS="x-sentry-auth=sentry sentry_key=___PUBLIC_KEY___"
```

Alternatively, you can configure the OpenTelemetry Exporter directly in your application code. Here is an example with the OpenTelemetry Node SDK:

```typescript {filename: app.ts}
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: "___OTLP_TRACES_URL___",
    headers: {
      "x-sentry-auth": "sentry sentry_key=___PUBLIC_KEY___",
    },
  }),
});

sdk.start();
```

You can find the values of Sentry's OTLP endpoint and public key in your Sentry project settings.

1. Go to the [Settings > Projects](https://sentry.io/orgredirect/organizations/:orgslug/settings/projects/) page in Sentry.
2. Select a project from the list.
3. Go to the "Client Keys (DSN)" sub-page for this project under the "SDK Setup" heading.
