**develop-docs/integrations/index.mdx** (+67 −1)
@@ -4,4 +4,70 @@ description: How to setup Sentry integrations. Integrations connect the Sentry s
sidebar_order: 80
---

-<PageGrid />
+<PageGrid/>

## Integration Backup and Restore Scripts

### Overview

When working on integration development locally, your database contains important configuration that makes integrations work properly. If you run `make reset-db` or need to delete your local environment, you lose all of this setup and have to configure integrations from scratch.

Two scripts help you back up and restore your local Sentry integration configuration and setup data.

These scripts allow you to:

- **Backup**: Save your current integration state to a JSON file
- **Restore**: Load the integration state back into a clean database
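The scripts themselves aren't shown in this hunk; a minimal sketch of the backup/restore idea might look like the following. The model names and dependency order here are placeholders for illustration, not Sentry's actual models.

```python
import json

# Hypothetical sketch of a JSON backup/restore flow; the model names and
# dependency order are placeholders, not Sentry's real models.
DEPENDENCY_ORDER = ["Integration", "OrganizationIntegration", "Identity"]

def backup(models: dict, path: str) -> None:
    """Serialize each model's rows to JSON, in dependency order."""
    payload = [{"model": name, "rows": models[name]} for name in DEPENDENCY_ORDER]
    with open(path, "w") as f:
        json.dump(payload, f, indent=2)

def restore(path: str) -> dict:
    """Load rows back in the same order so foreign keys resolve first."""
    with open(path) as f:
        payload = json.load(f)
    return {entry["model"]: entry["rows"] for entry in payload}
```

Restoring in the same order the models were dumped is what makes a clean database accept the rows, since dependent rows reference earlier ones.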

### What Data is Backed Up

The scripts handle the following Sentry models in the correct dependency order:
**develop-docs/sdk/data-model/envelope-items.mdx** (+1 −1)
@@ -400,7 +400,7 @@ _None_

Item type `"log"` contains an array of log payloads encoded as JSON. This allows multiple log payloads to be sent in a single envelope item.

-Only a single log item is allowed per envelope. The `item_count` field in the envelope item header must match the number of logs sent; it is not optional. A `content_type` field in the envelope item header must be set to `application/vnd.sentry.items.log+json`.
+Only a single log container is allowed per envelope. The `item_count` field in the envelope item header must match the number of logs sent; it is not optional. A `content_type` field in the envelope item header must be set to `application/vnd.sentry.items.log+json`.

It's okay to mix logs from different traces into the same log envelope item, but if you do, you MUST NOT attach a DSC (dynamic sampling context) to the envelope header.
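To make the header requirements concrete, here is a hedged Python sketch of assembling a log envelope item. Only `item_count` and `content_type` come from the text above; the individual log fields and the `items` wrapper are illustrative assumptions, not the normative payload schema.

```python
import json

# Two example log payloads; their fields are illustrative only.
logs = [
    {"timestamp": 1718000000.0, "level": "info", "body": "user logged in"},
    {"timestamp": 1718000001.0, "level": "error", "body": "payment failed"},
]

# Envelope item header: item_count must equal the number of logs, and
# content_type must be the log-container media type.
item_header = {
    "type": "log",
    "item_count": len(logs),
    "content_type": "application/vnd.sentry.items.log+json",
}

# The payload is the array of logs encoded as JSON; wrapping it in an
# "items" key is an assumption about the container shape.
payload = json.dumps({"items": logs})
envelope_item = json.dumps(item_header) + "\n" + payload
```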
**develop-docs/sdk/telemetry/replays.mdx** (+19)
@@ -172,3 +172,22 @@ The other sub-item is the replay recording's instructions set. This payload shou
{"segment_id":0}
/* gzipped JSON payload */
```

## SDK Behavior

### Session mode

When an SDK records Session Replay in `session` mode (`sessionSampleRate` is specified), the recording should start when the SDK is initialized and should be continuously streamed to the Sentry servers. The SDK should send a replay envelope every 5 seconds. The maximum duration of the recording should not exceed 60 minutes (or 15 minutes without activity on Web). After the hard limit has been reached, the SDK should stop recording, clear the current `replay_id`, and remove it from the Scope so that subsequent events are not associated with it.

For SDKs that support a disk cache, the recording should pause when there is no internet connection or when the SDK is being rate-limited. This is necessary to prevent overflowing the disk cache, which could result in losing more critical envelopes. When the connection is restored or the rate limit is lifted, the recording should resume.
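The timing rules above can be sketched as follows. This is an illustrative model of the limits only; the class and method names are hypothetical, not part of any real Sentry SDK API.

```python
import time

# Illustrative model of session-mode limits; names are hypothetical.
FLUSH_INTERVAL = 5.0      # send a replay envelope every 5 seconds
MAX_DURATION = 60 * 60.0  # 60-minute hard limit (the 15-minute
                          # inactivity limit on Web is not modeled here)

class SessionReplayRecorder:
    def __init__(self, now=time.monotonic):
        self.now = now
        self.started_at = now()
        self.last_flush = self.started_at
        self.replay_id = "hypothetical-replay-id"

    def tick(self):
        """Periodic check; returns "stop", "flush", or None."""
        t = self.now()
        if t - self.started_at >= MAX_DURATION:
            # Hard limit reached: stop recording and clear the replay_id
            # so subsequent events are no longer associated with it.
            self.replay_id = None
            return "stop"
        if t - self.last_flush >= FLUSH_INTERVAL:
            self.last_flush = t
            return "flush"
        return None
```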

### Buffer mode

When an SDK records Session Replay in `buffer` mode (`onErrorSampleRate` is specified), the recording should start when the SDK is initialized and should be buffered in memory (and on disk, if the SDK supports a disk cache) in a ring buffer holding up to the last 30 seconds. Capturing the recording may be triggered when one of the following conditions is met:

- A crash or error event occurs and is captured by the SDK
- The `flush` API has been called manually on the replay (for SDKs that support a manual API)

After the initial (buffered) segment has been captured, the SDK should continue recording in `session` mode. Note, however, that the `replay_type` field of the following segments should still be set to `buffer` to reflect the original `replay_type`.

If the crash or error event has been dropped in `beforeSend`, the replay should **not** be captured.
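The 30-second ring buffer described above can be sketched like this; the event shape and class names are illustrative, not a real Sentry SDK API.

```python
from collections import deque

# Minimal sketch of buffer mode's 30-second ring buffer.
BUFFER_WINDOW = 30.0  # seconds of recording kept in memory

class ReplayRingBuffer:
    def __init__(self):
        self.events = deque()  # (timestamp, payload) pairs

    def record(self, timestamp, payload):
        self.events.append((timestamp, payload))
        # Evict anything that fell out of the 30-second window.
        while self.events and timestamp - self.events[0][0] > BUFFER_WINDOW:
            self.events.popleft()

    def capture(self):
        """Drain the buffered segment, e.g. when an error event is kept."""
        segment = list(self.events)
        self.events.clear()
        return segment
```

After `capture()` drains the buffer, the recorder would switch to `session` mode while still reporting `replay_type: buffer` on subsequent segments.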
**develop-docs/self-hosted/index.mdx** (+7 −1)
@@ -40,7 +40,13 @@ Depending on your traffic volume, you may want to increase your system specifica

If increasing the disk storage space isn't possible, you can migrate your storage to external storage such as AWS S3 or Google Cloud Storage (GCS). Decreasing your `SENTRY_RETENTION_DAYS` environment variable will save some storage space, at the cost of a shorter data retention period. See the [Event Retention](/self-hosted/configuration#event-retention) section.

-There are known issues on installing self-hosted Sentry on RHEL-based Linux distros, such as CentOS, Rocky Linux, and Alma Linux. It is also not possible to install on an Alpine Linux distro. Most people succeed with Debian/Ubuntu-based distros. If you successfully install on another distro, please let us know on the [Sentry's Discord](https://discord.gg/sentry)!
+Below is a breakdown of self-hosted Sentry installation compatibility with various Linux distributions:
+
+* **Debian/Ubuntu-based** distros are preferred; most users succeed with them, and they're used on Sentry's dogfood instance.
+* **RHEL-based** distributions (e.g., CentOS, Rocky Linux, Alma Linux) have known installation issues. While some users have made it work by disabling SELinux, this is highly discouraged.
+* **Amazon Linux 2023**, a Fedora derivative, has seen one user successfully run self-hosted Sentry, with SELinux enabled and their user added to the `docker` group.
+* **Alpine Linux** is unsupported due to install script incompatibility.
+
+If you successfully install Sentry on a different distribution, please share your experience on [Sentry's Discord](https://discord.gg/sentry)!
**develop-docs/self-hosted/troubleshooting/kafka.mdx** (+65 −9)
@@ -16,33 +16,89 @@ This happens where Kafka and the consumers get out of sync. Possible reasons are

2. Having a sustained event spike that causes very long processing times, causing Kafka to drop messages as they go past the retention time
3. Date/time out of sync issues due to a restart or suspend/resume cycle

### Visualize

You can visualize the Kafka consumers and their offsets by bringing an additional container, such as [Kafka UI](https://github.com/provectus/kafka-ui) or [Redpanda Console](https://github.com/redpanda-data/console), into your Docker Compose setup.

Ideally, you want zero lag for all consumer groups. If a consumer group has a lot of lag, investigate whether it's caused by a disconnected consumer (e.g., a Sentry/Snuba container that's disconnected from Kafka) or a consumer that's stuck processing a certain message. If it's a disconnected consumer, you can either restart the container or reset the Kafka offset to 'earliest'. Otherwise, you can reset the Kafka offset to 'latest'.
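As a sketch, Kafka UI could be added to your Compose file like this. The service name, image tag, and the assumption that the broker is reachable at `kafka:9092` are all illustrative; adjust them to your setup.

```yaml
# Hypothetical docker-compose service; broker address assumes the
# default self-hosted setup where Kafka listens on kafka:9092.
kafka-ui:
  image: provectuslabs/kafka-ui:latest
  ports:
    - "8080:8080"
  environment:
    KAFKA_CLUSTERS_0_NAME: local
    KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
```

Once up, the UI at port 8080 lists each consumer group's lag per topic.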

### Recovery

-Note: These solutions may result in data loss when resetting the offset of the snuba consumers.
+<Alert level="warning" title="Warning">
+These solutions may result in data loss for the duration of your Kafka event retention (defaults to 24 hours) when resetting the offset of the consumers.
+</Alert>

#### Proper solution

-The _proper_ solution is as follows ([reported](https://github.com/getsentry/self-hosted/issues/478#issuecomment-666254392) by [@rmisyurev](https://github.com/rmisyurev)):
+The _proper_ solution is as follows ([reported](https://github.com/getsentry/self-hosted/issues/478#issuecomment-666254392) by [@rmisyurev](https://github.com/rmisyurev)). This example uses the `snuba-consumers` group with the `events` topic; your consumer group and topic names may differ.

-1. Receive consumers list:
+1. Shut down the corresponding Sentry/Snuba container that's using the consumer group (you can see the corresponding containers by inspecting the `docker-compose.yml` file):
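The commands for these steps are truncated in this hunk. Under the same assumptions (`snuba-consumers` group, `events` topic, broker at `kafka:9092`, illustrative container name), a sketch of the flow might look like:

```shell
# Stop the consumer that owns the group (container name is illustrative).
docker compose stop snuba-errors-consumer

# List consumer groups to find the one that is lagging.
docker compose run --rm kafka kafka-consumer-groups \
    --bootstrap-server kafka:9092 --list

# Reset the group's offset on the affected topic, then bring
# everything back up.
docker compose run --rm kafka kafka-consumer-groups \
    --bootstrap-server kafka:9092 \
    --group snuba-consumers --topic events \
    --reset-offsets --to-latest --execute
docker compose up -d
```

Use `--to-earliest` instead of `--to-latest` if the consumer was merely disconnected and you want to replay retained messages.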
See [Django's documentation on CSRF](https://docs.djangoproject.com/en/4.2/ref/settings/#std-setting-CSRF_TRUSTED_ORIGINS) for further detail.

## Containers taking too much CPU/RAM usage

If you're seeing higher incoming traffic, elevated CPU/RAM usage is expected; in that case, you might want to increase your machine's resources.

However, if you have very low incoming traffic and the CPU/RAM constantly spikes, check `top` (or `htop`, or similar) to see which processes are taking the most resources. You can then track down the corresponding Docker container and see whether its logs help you identify the issue.

Usually, most containers that are not a dependency (like Postgres or Kafka) will consume a good amount of CPU and RAM during startup, specifically while catching up with backlogs. If a container is experiencing a [bootloop](https://en.wikipedia.org/wiki/Booting#Bootloop), it usually comes down to an invalid configuration or a broken dependency.
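One way to map the hungry process to a container, assuming a standard Docker setup (the container name below is a placeholder):

```shell
# Show a one-shot snapshot of CPU/memory usage per container.
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"

# Then inspect that container's recent logs for errors or a bootloop.
docker compose logs --tail 100 <container-name>
```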

## `sentry-data` volume not being cleaned up

You may see the `sentry-data` volume taking too much disk space. You can clean it up manually (or put the cleanup cronjob in place).

Sentry can ingest [OpenTelemetry](https://opentelemetry.io) traces directly via the [OpenTelemetry Protocol](https://opentelemetry.io/docs/specs/otel/protocol/). If you have existing OpenTelemetry trace instrumentation, you can configure your OpenTelemetry exporter to send traces to Sentry directly. Sentry's OTLP ingestion endpoint is currently in development and has a few known limitations:

- Span events are not supported. All span events are dropped during ingestion.
- Span links are partially supported. We ingest and display span links, but they cannot be searched, filtered, or aggregated. Links are shown in the [Trace View](https://docs.sentry.io/concepts/key-terms/tracing/trace-view/).
- Array attributes are partially supported. We ingest and display array attributes, but they cannot be searched, filtered, or aggregated. Array attributes are shown in the [Trace View](https://docs.sentry.io/concepts/key-terms/tracing/trace-view/).
- Sentry does not support ingesting OTLP metrics or OTLP logs.

The easiest way to configure an OpenTelemetry exporter is with environment variables. You'll need to configure the trace endpoint URL, as well as the authentication headers. Set these variables on the server where your application is running.
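A sketch of that configuration using the standard OpenTelemetry exporter environment variables. The endpoint URL, project ID, and key below are placeholders, and the exact Sentry endpoint path is an assumption to verify against your project's settings:

```shell
# Standard OTLP exporter variables (from the OpenTelemetry spec).
# All concrete values below are placeholders, not real credentials.
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://oXXXXXX.ingest.sentry.io/api/YOUR_PROJECT_ID/otlp/v1/traces"
export OTEL_EXPORTER_OTLP_TRACES_HEADERS="x-sentry-auth=sentry sentry_key=YOUR_PUBLIC_KEY"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
```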