---
title: Separate Ingest Box
sidebar_title: Separate Ingest Box
sidebar_order: 2
---

In addition to having a [separate domain](/self-hosted/experimental/reverse-proxy/#expose-only-ingest-endpoint-publicly) for viewing the web UI and ingesting data, you can deploy a dedicated server for data ingestion that relays events to your main server. This setup is recommended for high-traffic installations and environments with multiple data centers.

This architecture helps mitigate DDoS attacks by distributing ingestion across multiple endpoints, while the main Sentry instance that serves the web UI stays protected on a private network (accessible via VPN). Invalid payloads sent to your Relay instances are dropped immediately, and if the main server becomes unreachable, Relay keeps retrying to deliver the data it has received.

Note that the region names in the diagram below are for illustration purposes only.

```mermaid
graph TB
  subgraph main [Main Sentry Server]
    direction TB
    nginx[External Nginx]
    sentry[Self-Hosted Sentry]

    nginx --> sentry
  end

  subgraph "US Ingest Server"
    direction TB
    internet1[Public Internet]
    relay1[Sentry Relay]
  end

  subgraph "Asia Ingest Server"
    direction TB
    internet2[Public Internet]
    relay2[Sentry Relay]
  end

  subgraph "Europe Ingest Server"
    direction TB
    internet3[Public Internet]
    relay3[Sentry Relay]
  end

  internet1 --> relay1 -- Through VPN or tunnel --> main
  internet2 --> relay2 -- Through VPN or tunnel --> main
  internet3 --> relay3 -- Through VPN or tunnel --> main
```
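
Once an ingest box is up, applications in that region send events to it instead of to the main server: the DSN keeps the same project key and ID, and only the host (and port) changes. Below is a rough sketch of this idea; every hostname, the key, and the port are made-up values, and the scheme and port must match whatever actually listens in front of each Relay (Relay itself, or a reverse proxy terminating TLS).

```yaml
# Illustration only -- hostnames, key, and ports are placeholders.
# The same project DSN key is used everywhere; only the host differs.
main_server_dsn:   "https://examplePublicKey@sentry.yourcompany.com/1"         # private, reachable over VPN only
us_ingest_dsn:     "http://examplePublicKey@ingest-us.yourcompany.com:3000/1"
asia_ingest_dsn:   "http://examplePublicKey@ingest-asia.yourcompany.com:3000/1"
europe_ingest_dsn: "http://examplePublicKey@ingest-eu.yourcompany.com:3000/1"
```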

To set up the relay, install Sentry Relay on your machine by following the [Relay Getting Started Guide](https://docs.sentry.io/product/relay/getting-started/). Configure Relay to run in `managed` mode and point it at your main Sentry server. You can customize the port and protocol (HTTP or HTTPS) as needed.

After installing Relay (via Docker or the standalone executable) and running the `relay config init` command, you can configure it with settings like the following:

```yaml
# Please see the relevant documentation.
# Performance tuning: https://docs.sentry.io/product/relay/operating-guidelines/
# All config options: https://docs.sentry.io/product/relay/options/
relay:
  mode: managed
  instance: default
  upstream: https://sentry.yourcompany.com/
  host: 0.0.0.0
  port: 3000

limits:
  max_concurrent_requests: 20

# To avoid out-of-memory issues,
# it's recommended to enable the envelope spooler.
spool:
  envelopes:
    path: /var/lib/sentry-relay/spool.db # make sure this path exists
    max_memory_size: 200MB
    max_disk_size: 1000MB

# metrics:
#   statsd: "100.100.123.123:8125"

# Relay reports its own errors to this DSN. Point it at a project on your
# main Sentry instance; the key below is a placeholder.
sentry:
  enabled: true
  dsn: "https://examplePublicKey@sentry.yourcompany.com/1"
```
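
If you run Relay on the ingest box with Docker, a minimal Compose file could look like the sketch below. The service name, mounted host paths, and image tag are assumptions for illustration; it presumes the directory mounted at `/work/.relay` contains the `config.yml` above together with the credentials generated by `relay config init`.

```yaml
# docker-compose.yml on the ingest box -- a sketch, not a complete production setup.
services:
  relay:
    # Pin the tag to the same release as your self-hosted Sentry (version shown is a placeholder).
    image: getsentry/relay:24.8.0
    command: run
    restart: unless-stopped
    volumes:
      - ./relay-config:/work/.relay          # config.yml and credentials.json
      - ./relay-spool:/var/lib/sentry-relay  # backs spool.envelopes.path from the config above
    ports:
      - "3000:3000" # must match relay.port in config.yml
```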

While it's possible to run Relay on a different version than your self-hosted instance, we recommend keeping both Relay and Sentry on the same version. Remember to upgrade Relay whenever you upgrade your self-hosted Sentry installation.

<Alert level="info" title="Fun Fact">
  Sentry SaaS uses a similar setup for its ingestion servers, behind Google Anycast IP addresses.
</Alert>