docs/README.md (+2 -2)
@@ -16,7 +16,7 @@ Using k0smotron the clusters controlplane and workerplane are truly separated. T
### Bring your own workers
With k0smotron you can connect worker nodes from ANY infrastructure to your cluster control plane.
## How does it work
@@ -34,7 +34,7 @@ Often when running integration and end-to-end testing for your software running
### Edge
Running Kubernetes on the network edge usually means running on low-resource infrastructure. This often means that setting up the controlplane is either a challenge or mission impossible. Running the controlplane on an existing cluster, on separate dedicated infrastructure, removes that challenge and lets you focus on the real edge.
Running on the edge often also means a large number of clusters to manage. Do you really want to dedicate nodes to each cluster's controlplane and manage all the infrastructure for those?
@@ -169,7 +169,7 @@ k0smotron, running in a management cluster in AWS, supports flexible networking
If you prefer using an NLB instead of an ELB, you must specify annotations for the Service in the `k0smotronControlPlane`. These annotations guide the AWS Cloud Controller Manager (CCM) or the AWS Load Balancer Controller to create the respective services.
```yaml
[...]
service:
  type: LoadBalancer
  apiPort: 6443
```
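As an illustrative sketch (not the documented k0smotron default), asking the AWS CCM for an NLB typically means adding the load-balancer-type annotation to the Service; here it is shown under a hypothetical `annotations` field of the `service` block:

```yaml
[...]
service:
  type: LoadBalancer
  apiPort: 6443
  annotations:
    # Hedged example: tells the AWS CCM to provision a Network Load
    # Balancer instead of a Classic ELB. Verify the annotation set
    # against the CCM / controller version you run.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
```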
@@ -184,7 +184,7 @@ For scenarios involving Classic ELBs or NLBs without special options, the AWS CC
If you aim to use the NLB and set the scheme to `internal`, the target group attribute `preserve_client_ip.enabled=false` is required due to "hairpinning" (NAT loopback). In such cases, the AWS CCM cannot be used because it doesn't support setting Target Group Attributes. Therefore, the AWS Load Balancer Controller, which can set Target Group Attributes, becomes necessary. Follow [this guide](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html) to install the AWS Load Balancer Controller.
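A minimal sketch of the internal-NLB case, assuming the AWS Load Balancer Controller is installed and reconciling the Service (the annotation names come from that controller's documentation and are not k0smotron-specific):

```yaml
service:
  type: LoadBalancer
  apiPort: 6443
  annotations:
    # Hand the Service to the AWS Load Balancer Controller (NLB).
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    # Make the NLB internal rather than internet-facing.
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
    # Disable client IP preservation to avoid the NAT-loopback
    # ("hairpinning") problem described above.
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "preserve_client_ip.enabled=false"
```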
docs/capi-bootstrap.md (+1 -1)
@@ -112,6 +112,6 @@ spec:
# More details about the aws machine template can be set here
```
This example creates a `MachineDeployment` with 2 replicas, using k0smotron as the bootstrap provider. The `infrastructureRef` is used to specify the infrastructure requirements for the machines, in this case, AWS.
Check the [examples](capi-examples.md) pages for more detailed examples of how k0smotron can be used with various Cluster API infrastructure providers.
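The full manifest is truncated in the hunk above; the shape of such a `MachineDeployment` can be sketched roughly as follows (all names, the namespace, and the Kubernetes version are placeholders, and the exact API versions depend on the provider releases in use):

```yaml
# Illustrative sketch only; values are placeholders, not k0smotron defaults.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: example-md
  namespace: default
spec:
  clusterName: example-cluster
  replicas: 2
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: example-cluster
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: example-cluster
    spec:
      clusterName: example-cluster
      version: v1.27.2
      bootstrap:
        configRef:
          # k0smotron acts as the bootstrap provider via this reference.
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: K0sWorkerConfigTemplate
          name: example-md
      infrastructureRef:
        # Infrastructure requirements for the machines; AWS in this example.
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate
        name: example-md
```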
@@ -69,7 +69,7 @@ For a full reference on `K0sControlPlane` configurability see the [reference doc
**WARNING: Downscaling is a potentially dangerous operation.**
Kubernetes uses etcd as its backing store. It's crucial to have a quorum of etcd nodes available at all times. Always run etcd as a cluster with an **odd** number of members.

When downscaling the control plane, you first need to deregister the node from the etcd cluster. k0smotron does this automatically for you.
**NOTE:** k0smotron assigns node names sequentially, and on downscaling it removes the "latest" nodes. For instance, if you have a `k0smotron-test` cluster of 5 nodes and you downscale to 3 nodes, the nodes `k0smotron-test-3` and `k0smotron-test-4` will be removed.
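As an illustration of the downscaling behavior (resource name and API version are assumptions based on a typical `K0sControlPlane` object; only the relevant fields are shown), scaling the replica count from 5 down to 3 would trigger the removal described above:

```yaml
# Illustrative fragment; other required fields omitted.
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: K0sControlPlane
metadata:
  name: k0smotron-test
spec:
  # Scaling replicas from 5 down to 3 removes the highest-numbered
  # nodes: k0smotron-test-4 and k0smotron-test-3. k0smotron deregisters
  # them from the etcd cluster automatically.
  replicas: 3
```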