Commit c3d3494 (parent 5bd251a)
docs(self-hosted): experimental external kafka

2 files changed: +116 −2

Lines changed: 113 additions & 0 deletions
---
title: Self Hosted External Kafka
sidebar_title: External Kafka
sidebar_order: 91
---

<Alert title="Important" level="warning">
These are community-contributed docs. Sentry does not officially provide support for self-hosted configurations beyond the default install.
</Alert>
Kafka plays a significant role in Sentry's infrastructure, carrying events from ingestion through processing until they land in ClickHouse or on the filesystem for permanent storage (subject to your configured event retention days). Since Kafka requires substantial resources on the server host, and some infrastructures already have a Kafka cluster set up, it is possible to use an external Kafka cluster for Sentry.

Sentry (the company) runs a Kafka cluster in production with a very tailored setup, especially for authentication. Some Kafka configuration options (such as the `SASL_SSL` security protocol) might not be available in every service, but since everything is open source, you are encouraged to contribute implementations for the missing pieces.

If you are using authentication, make sure the configured user is able to create new topics. As of now, there is no support for prefixed topic names.
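If your cluster enforces authorization through ACLs, the required grant can be sketched with Kafka's bundled `kafka-acls` tool. This is a hypothetical example: the bootstrap address, the `admin.properties` client config, and the `User:sentry` principal are placeholders for your own cluster's values, and your distribution may invoke the tool as `kafka-acls.sh`.

```shell
# Hypothetical: allow the principal Sentry authenticates as to create
# topics cluster-wide. Adjust the principal, bootstrap server, and
# client config for your environment.
kafka-acls --bootstrap-server kafka-node1:9092 \
  --command-config admin.properties \
  --add --allow-principal User:sentry \
  --operation Create --cluster
```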
<Alert title="Note" level="info">
After changing configuration files, re-run the <code>./install.sh</code> script to rebuild and restart the containers. See the <Link to="/self-hosted/#configuration">configuration section</Link> for more information.
</Alert>
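Before pointing each service at the cluster, it can help to confirm that every broker is reachable from the Sentry host. Below is a minimal sketch using only the Python standard library; the `kafka-nodeN` host names are the placeholders used throughout this page, and this checks TCP reachability only, not Kafka-level authentication:

```python
import socket

# Placeholder broker list matching the examples on this page.
BROKERS = ["kafka-node1:9092", "kafka-node2:9092", "kafka-node3:9092"]

def reachable(broker, timeout=3.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    host, port = broker.rsplit(":", 1)
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:
        return False

for broker in BROKERS:
    print(broker, "ok" if reachable(broker) else "UNREACHABLE")
```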
## Sentry

Sentry uses the confluent-kafka library, which leverages the [default Kafka config from librdkafka](https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md). Modify your `sentry.conf.py` file like so:
```python
# The DEFAULT_KAFKA_OPTIONS variable is already defined in sentry.conf.py.
# Make sure you don't end up with a duplicate variable declaration.
DEFAULT_KAFKA_OPTIONS = {
    "bootstrap.servers": "kafka-node1:9092,kafka-node2:9092,kafka-node3:9092",
    "message.max.bytes": 50000000,
    "socket.timeout.ms": 1000,
    "security.protocol": "PLAINTEXT",  # Valid options are PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
    # If you don't use any of the options below, you can remove them or set them to `None`.
    "sasl.mechanism": "PLAIN",  # Valid options are PLAIN, SCRAM-SHA-256, SCRAM-SHA-512. Other mechanisms might be unavailable.
    "sasl.username": "username",
    "sasl.password": "password",
    "ssl.ca.location": "/path/to/ca.pem",
    "ssl.certificate.location": "/path/to/client.pem",
    "ssl.key.location": "/path/to/client.key",
}
```
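As noted in the comments above, unused options can be removed or set to `None`. If you would rather keep the full template in place, one approach is to filter out the `None` entries before the dict is used; the `prune_kafka_options` helper below is a hypothetical sketch, not part of Sentry:

```python
# Hypothetical helper: drop options left as None so only explicitly
# configured values remain in the final dict.
def prune_kafka_options(options):
    return {key: value for key, value in options.items() if value is not None}

DEFAULT_KAFKA_OPTIONS = prune_kafka_options({
    "bootstrap.servers": "kafka-node1:9092,kafka-node2:9092,kafka-node3:9092",
    "message.max.bytes": 50000000,
    "security.protocol": "PLAINTEXT",
    "sasl.mechanism": None,  # unused without SASL
    "sasl.username": None,
    "sasl.password": None,
})
# Only the three configured keys survive the pruning.
```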
## Snuba

Although Snuba also uses confluent-kafka under the hood, not every configuration option is available. Modify your `docker-compose.yml` file like so:
```yaml
x-snuba-defaults: &snuba_defaults
  # ...
  environment:
    # ...
    DEFAULT_BROKERS: "kafka-node1:9092,kafka-node2:9092,kafka-node3:9092"
    KAFKA_SECURITY_PROTOCOL: "plaintext" # Valid options are PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. SSL is not supported for rust-consumer.
    KAFKA_SSL_CA_PATH:
    KAFKA_SSL_CERT_PATH:
    KAFKA_SSL_KEY_PATH:
    KAFKA_SASL_MECHANISM: "PLAIN" # Valid options are PLAIN, SCRAM-SHA-256, SCRAM-SHA-512.
    KAFKA_SASL_USERNAME: "username"
    KAFKA_SASL_PASSWORD: "password"
```
If you encounter failures on startup, try using `consumer` instead of `rust-consumer`.
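That switch is made per consumer service in `docker-compose.yml`. The excerpt below is hypothetical: the service name and the remaining arguments vary between self-hosted releases, and only the leading subcommand changes:

```yaml
# Hypothetical excerpt: only the leading subcommand changes, the
# rest of the arguments stay as they are in your release.
snuba-errors-consumer:
  <<: *snuba_defaults
  # Was: command: rust-consumer --storage errors ...
  command: consumer --storage errors ...
```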
## Relay

Modify your `relay/config.yml` file like so:
```yaml
processing:
  kafka_config:
    - {name: "bootstrap.servers", value: "kafka-node1:9092,kafka-node2:9092,kafka-node3:9092"}
    - {name: "message.max.bytes", value: 50000000} # 50MB
    - {name: "security.protocol", value: "PLAINTEXT"}
    - {name: "sasl.mechanism", value: "PLAIN"}
    - {name: "sasl.username", value: "username"}
    - {name: "sasl.password", value: "password"}
    - {name: "ssl.ca.location", value: "/path/to/ca.pem"}
    - {name: "ssl.certificate.location", value: "/path/to/client.pem"}
    - {name: "ssl.key.location", value: "/path/to/client.key"}
```
## Vroom

As of the time of writing, Vroom does not support any kind of authentication.

Modify your `docker-compose.yml` file like so:
```yaml
vroom:
  # ...
  environment:
    # ...
    SENTRY_KAFKA_BROKERS_PROFILING: "kafka-node1:9092,kafka-node2:9092,kafka-node3:9092"
    SENTRY_KAFKA_BROKERS_OCCURRENCES: "kafka-node1:9092,kafka-node2:9092,kafka-node3:9092"
```
Once [vroom#530](https://github.com/getsentry/vroom/pull/530) is merged, authentication will be supported. You will then need to modify your `docker-compose.yml` file like so:
```yaml
vroom:
  # ...
  environment:
    # ...
    SENTRY_KAFKA_BROKERS_PROFILING: "kafka-node1:9092,kafka-node2:9092,kafka-node3:9092"
    SENTRY_KAFKA_BROKERS_OCCURRENCES: "kafka-node1:9092,kafka-node2:9092,kafka-node3:9092"
    SENTRY_KAFKA_SECURITY_PROTOCOL: "plaintext" # Valid options are PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
    SENTRY_KAFKA_SSL_CA_PATH: ""
    SENTRY_KAFKA_SSL_CERT_PATH: ""
    SENTRY_KAFKA_SSL_KEY_PATH: ""
    SENTRY_KAFKA_SASL_MECHANISM: "PLAIN" # Valid options are PLAIN, SCRAM-SHA-256, SCRAM-SHA-512.
    SENTRY_KAFKA_SASL_USERNAME: "username"
    SENTRY_KAFKA_SASL_PASSWORD: "password"
```

develop-docs/self-hosted/experimental/external-storage.mdx

Lines changed: 3 additions & 2 deletions

The introductory paragraph is moved below the warning alert, so after the frontmatter the file now reads:

<Alert title="Important" level="warning">
These are community-contributed docs. Sentry does not officially provide support for self-hosted configurations beyond the default install.
</Alert>

In some cases, storing Sentry data on-disk is not really something people can do. Sometimes, it's better to offload it into some bucket storage (like AWS S3 or Google Cloud Storage).

<Alert title="Note" level="info">
After changing configuration files, re-run the <code>./install.sh</code> script to rebuild and restart the containers. See the <Link to="/self-hosted/#configuration">configuration section</Link> for more information.
</Alert>