```yaml
spec:
  # Other configurations as needed
```

## High Availability Deployment
For production environments, it is recommended to deploy the Connectors system in a high availability (HA) configuration to ensure service continuity and fault tolerance.
### Configuring Replicas
You can increase the number of replicas for each workload to achieve high availability. This is done through the `workloads` field in the component spec. For production environments, we recommend configuring at least 3 replicas for each workload to ensure service continuity during node failures or rolling updates.
Below are specific examples for each major connector component:
#### ConnectorsCore
ConnectorsCore includes three main workloads: API server, controller manager, and proxy. For high availability, configure all three with multiple replicas:
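The snippet below is a sketch only: the `apiVersion`, `kind`, and the workload keys `apiServer`, `controllerManager`, and `proxy` are illustrative assumptions; check the ConnectorsCore CRD in your cluster for the exact names.

```yaml
# Hedged sketch: the apiVersion, kind, and workload key names are
# assumptions; verify them against the ConnectorsCore CRD in your cluster.
apiVersion: operator.connectors.io/v1alpha1   # assumed group/version
kind: ConnectorsCore
metadata:
  name: connectors-core
spec:
  workloads:
    apiServer:
      replicas: 3          # at least 3 replicas recommended for production
    controllerManager:
      replicas: 3
    proxy:
      replicas: 3
  # Other configurations as needed
```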
The following connector components do not have Deployment workloads and therefore do not require replica configuration:
- ConnectorsGitLab
- ConnectorsK8S
- ConnectorsNPM
- ConnectorsPyPI
### Built-in Pod Anti-Affinity
The system includes built-in pod anti-affinity rules to ensure that replicas are distributed across different nodes. By default, the system uses `preferredDuringSchedulingIgnoredDuringExecution` with a weight of `100`, which means the scheduler will try to place pods on different nodes when possible, but will still schedule them on the same node if no other options are available.
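For reference, the built-in rule corresponds roughly to the following standard Kubernetes pod anti-affinity snippet. The `app: connectors-core` label is an illustrative placeholder for the labels the operator actually applies.

```yaml
# Approximation of the built-in rule; the label selector is a placeholder,
# since the operator manages the real pod labels.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: connectors-core
        topologyKey: kubernetes.io/hostname
```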
This default configuration ensures:
- Pods are spread across different nodes when possible
- Deployments remain schedulable even when the cluster has only a few nodes
- Automatic failover capability when a node becomes unavailable
### Customizing Affinity Rules
If the default affinity rules do not meet your requirements, you can override them through the `workloads` configuration. The `template.spec.affinity` field allows you to specify custom affinity rules.
For multi-zone clusters, you can configure zone-aware scheduling to spread pods across availability zones. The following example uses `requiredDuringSchedulingIgnoredDuringExecution` to enforce zone-level distribution, combined with `preferredDuringSchedulingIgnoredDuringExecution` to prefer node-level distribution within each zone:
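A sketch under the same assumptions as above (the `apiServer` workload key and the `app: connectors-core` label selector are illustrative):

```yaml
# Sketch only: the workload key and label selector are illustrative.
spec:
  workloads:
    apiServer:
      replicas: 3
      template:
        spec:
          affinity:
            podAntiAffinity:
              # Hard requirement: no two replicas in the same zone
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app: connectors-core
                topologyKey: topology.kubernetes.io/zone
              # Soft preference: also spread across nodes within a zone
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app: connectors-core
                  topologyKey: kubernetes.io/hostname
```

Note that a hard zone requirement means the cluster needs at least as many zones as replicas; otherwise, the excess pods will remain unschedulable.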