# Bring Your Own AWS Infrastructure
Normally, Cluster API will create infrastructure on AWS when standing up a new workload cluster. However, it is possible to have Cluster API re-use external AWS infrastructure instead of creating its own infrastructure.
There are two possible ways to do this:
* By consuming existing AWS infrastructure
* By using externally managed AWS infrastructure
> **IMPORTANT NOTE**: This externally managed AWS infrastructure should not be confused with EKS-managed clusters.
Follow the instructions below to configure Cluster API to consume existing AWS infrastructure.
## Consuming Existing AWS Infrastructure
### Overview
CAPA supports using existing AWS resources while creating AWS clusters, which gives users the flexibility to bring their own existing resources into the cluster instead of having CAPA create new ones.
### Prerequisites
In order to have Cluster API consume existing AWS infrastructure, you will need to have already created the following resources:
If you want to use an existing control plane load balancer, specify its name.
### Tagging AWS Resources
Cluster API itself does tag AWS resources it creates. The `sigs.k8s.io/cluster-api-provider-aws/cluster/<cluster-name>` (where `<cluster-name>` matches the `metadata.name` field of the Cluster object) tag, with a value of `owned`, tells Cluster API that it has ownership of the resource. In this case, Cluster API will modify and manage the lifecycle of the resource.
Finally, if the controller manager isn't started with the `--configure-cloud-routes: "false"` parameter, the route table(s) will also need the `kubernetes.io/cluster/<cluster-name>` tag. (This parameter can be added by customizing the `KubeadmConfigSpec` object of the `KubeadmControlPlane` object.)
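As a sketch (the resource name here is hypothetical, and the exact `apiVersion` may differ across Cluster API releases), the parameter can be set through the `KubeadmControlPlane` like so:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-control-plane        # hypothetical name
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      controllerManager:
        extraArgs:
          configure-cloud-routes: "false"
```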
### Configuring the AWSCluster Specification
Specifying existing infrastructure for Cluster API to use takes place in the specification for the AWSCluster object. Specifically, you will need to add an entry with the VPC ID and the IDs of all applicable subnets into the `network` field. Here is an example:
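A minimal sketch of such a specification follows; the VPC and subnet IDs are placeholders, and the `apiVersion` may differ in your CAPA release:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
  name: my-cluster                         # hypothetical name
spec:
  network:
    vpc:
      id: vpc-0123456789abcdef0            # ID of the existing VPC (placeholder)
    subnets:
      - id: subnet-0123456789abcdef0       # IDs of the existing subnets (placeholders)
      - id: subnet-0fedcba9876543210
```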
When you use `kubectl apply` to apply the Cluster and AWSCluster specifications to the management cluster, Cluster API will use the specified VPC ID and subnet IDs, and will not create a new VPC, new subnets, or other associated resources. It _will_, however, create a new ELB and new security groups.
### Placing EC2 Instances in Specific AZs
To distribute EC2 instances across multiple AZs, you can add information to the Machine specification. This is optional and only necessary if control over AZ placement is desired.
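A sketch of AZ placement via the `failureDomain` field (the machine name and AZ below are placeholders; verify the field against your Cluster API version):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Machine
metadata:
  name: my-machine              # hypothetical name
spec:
  failureDomain: "us-east-1b"   # placeholder availability zone
```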
Note that all replicas within a MachineDeployment will reside in the same AZ.
### Placing EC2 Instances in Specific Subnets
To specify that an EC2 instance should be placed in a specific subnet, add this to the AWSMachine specification:
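A sketch of subnet placement on the AWSMachine (the subnet ID is a placeholder, and the `apiVersion` may differ in your CAPA release):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachine
metadata:
  name: my-machine                   # hypothetical name
spec:
  subnet:
    id: subnet-0123456789abcdef0     # ID of the existing subnet (placeholder)
```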
Users may either specify `failureDomain` on the Machine or MachineDeployment objects, _or_ users may explicitly specify subnet IDs on the AWSMachine or AWSMachineTemplate objects. If both are specified, the subnet ID is used and the `failureDomain` is ignored.
### Security Groups
To use existing security groups for instances for a cluster, add this to the AWSCluster specification:
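A sketch using the `securityGroupOverrides` field of the network spec (the security group IDs are placeholders, and the set of override keys should be checked against your CAPA version):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
  name: my-cluster                            # hypothetical name
spec:
  network:
    securityGroupOverrides:
      controlplane: sg-0123456789abcdef0      # existing security group IDs (placeholders)
      node: sg-0fedcba9876543210
```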
### Control Plane Load Balancer
The cluster control plane is accessed through a Classic ELB. By default, Cluster API creates the Classic ELB. To use an existing Classic ELB, add its name to the AWSCluster specification:
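A sketch of referencing the existing load balancer by name (the ELB name is a placeholder, and the `apiVersion` may differ in your CAPA release):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
  name: my-cluster            # hypothetical name
spec:
  controlPlaneLoadBalancer:
    name: my-existing-elb     # name of the pre-created Classic ELB (placeholder)
```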
136
151
@@ -142,20 +157,60 @@ spec:
142
157
143
158
As control plane instances are added or removed, Cluster API will register and deregister them, respectively, with the Classic ELB.
> **WARNING:** Using an existing Classic ELB is an advanced feature. **If you use an existing Classic ELB, you must correctly configure it, and attach subnets to it.**
>
> An incorrectly configured Classic ELB can easily lead to a non-functional cluster. We strongly recommend you let Cluster API create the Classic ELB.
### Caveats/Notes
* When both public and private subnets are available in an AZ, CAPI will choose the private subnet in the AZ over the public subnet for placing EC2 instances.
* If you configure CAPI to use existing infrastructure as outlined above, CAPI will _not_ create an SSH bastion host. Combined with the previous bullet, this means you must make sure you have established some form of connectivity to the instances that CAPI will create.
## Using Externally Managed AWS Clusters
### Overview
Alternatively, CAPA supports externally managed cluster infrastructure, which is useful for scenarios where a different persona is managing the cluster infrastructure out-of-band (via an external system) while still wanting to use CAPI for automated machine management.
Users can make use of existing AWSCluster CRDs in their externally managed clusters.
### How to Use Externally Managed Clusters
Users have to add the `cluster.x-k8s.io/managed-by: "<name-of-system>"` annotation to indicate that the AWS resources are managed externally. If CAPA controllers encounter this annotation on any AWS resource during reconciliation, they will ignore the resource and not perform any reconciliation (including creating or modifying any AWS resources, or their status).
If the `AWSCluster` resource includes a "cluster.x-k8s.io/managed-by" annotation then the [controller will skip any reconciliation](https://cluster-api.sigs.k8s.io/developer/providers/cluster-infrastructure.html#normal-resource).
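As a sketch (the cluster name and system name below are placeholders, and the `apiVersion` may differ in your CAPA release), the annotation is set on the AWSCluster like so:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
  name: my-cluster                             # hypothetical name
  annotations:
    cluster.x-k8s.io/managed-by: "my-system"   # name of the external management system (placeholder)
```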
A predicate `ResourceIsNotExternallyManaged` is exposed by Cluster API which allows CAPA controllers to differentiate between externally managed and CAPA-managed resources. For example:

```go
// Sketch: register the AWSCluster controller with an event filter that
// skips externally managed resources (verify the predicate signature
// against your Cluster API version).
if err := ctrl.NewControllerManagedBy(mgr).
	For(&infrav1.AWSCluster{}).
	WithEventFilter(predicates.ResourceIsNotExternallyManaged(log)).
	Complete(r); err != nil {
	return errors.Wrap(err, "failed setting up with a controller manager")
}
```
The external system must provide all required fields within the spec of the AWSCluster, must adhere to the CAPI provider contract, and must set the AWSCluster status to ready when it is appropriate to do so.
> **IMPORTANT NOTE**: External controllers should take care to skip reconciliation of externally managed resources in their mapping functions while enqueuing requests. For example:
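A sketch of such a mapping function follows; it assumes Cluster API's `util/annotations` helper `IsExternallyManaged` and controller-runtime's handler package, and the exact signatures should be checked against your library versions:

```go
// Sketch: enqueue reconcile requests only for AWSClusters that are
// not externally managed (assumes CAPI util/annotations helpers).
handler.EnqueueRequestsFromMapFunc(func(ctx context.Context, o client.Object) []reconcile.Request {
	awsCluster, ok := o.(*infrav1.AWSCluster)
	if !ok || annotations.IsExternallyManaged(awsCluster) {
		// Externally managed: do not reconcile.
		return nil
	}
	return []reconcile.Request{{NamespacedName: client.ObjectKeyFromObject(awsCluster)}}
})
```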
Once a user has created an externally managed AWSCluster, it cannot be converted to a CAPA-managed cluster. However, converting a CAPA-managed cluster to an externally managed one is allowed.
Users should only use this feature if their cluster infrastructure lifecycle management has constraints that the reference implementation does not support. See [user stories](https://github.com/kubernetes-sigs/cluster-api/blob/10d89ceca938e4d3d94a1d1c2b60515bcdf39829/docs/proposals/20210203-externally-managed-cluster-infrastructure.md#user-stories) for more details.