docs: Clarify baremetal network requirements and kubectl capabilities (#10429)
- Qualify L2 network as network/PXE boot only, not required for ISO boot
- Clarify DHCP only needed for network/PXE boot mode
- Add TinkerbellIP vs Control Plane Endpoint explanation
- Clarify kubectl works for management and workload cluster operations
- Add hardware CSV operational guidance (when to use CSV vs kubectl)
- Add cross-references between related documentation
These changes address common customer confusion about network requirements
for different boot modes and the scope of kubectl usage for cluster
lifecycle management.
docs/content/en/docs/clustermgmt/cluster-scale/baremetal-scale.md (+1 −1)
@@ -65,7 +65,7 @@ If you don't have any available hardware that match this requirement in the clus
 ```
 As noted earlier, adding the `--kubeconfig` option tells `eksctl` to use the management cluster identified by that kubeconfig file to create a different workload cluster.

-2. **kubectl CLI**: The cluster lifecycle feature lets you use kubectl to talk to the Kubernetes API to upgrade a workload cluster. To use kubectl, run:
+2. **kubectl CLI**: The cluster lifecycle feature lets you use kubectl to talk to the Kubernetes API to scale a workload cluster. kubectl can also be used for management cluster scaling, except when upgrading the EKS Anywhere CLI version, which requires `eksctl anywhere upgrade`. To use kubectl, run:
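The scaling path the new `+` line describes boils down to editing the worker count in the cluster spec and re-applying it with kubectl. A hedged sketch, with all names and values as placeholders rather than anything taken from this PR:

```yaml
# Hypothetical workload cluster spec fragment (w01-cluster.yaml).
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: w01
spec:
  workerNodeGroupConfigurations:
    - name: md-0
      count: 3   # raise or lower to scale this node group
```

Applied against the management cluster with something like `kubectl apply -f w01-cluster.yaml --kubeconfig <mgmt-kubeconfig>`, assuming enough unowned hardware is registered.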
docs/content/en/docs/clustermgmt/cluster-upgrades/baremetal-upgrades.md (+2 −2)
@@ -136,9 +136,9 @@ and then you will run the [upgrade cluster command]({{< relref "baremetal-upgrad
 #### Upgrade cluster command

-* **kubectl CLI**: The cluster lifecycle feature lets you use kubectl to talk to the Kubernetes API to upgrade a workload cluster. To use kubectl, run:
+* **kubectl CLI**: The cluster lifecycle feature lets you use kubectl to talk to the Kubernetes API to upgrade a workload cluster. kubectl can also be used for management cluster upgrades (Kubernetes version, scaling, configuration changes), except when upgrading the EKS Anywhere CLI version, which requires `eksctl anywhere upgrade`. To use kubectl, run:
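For the upgrade case the same pattern applies: bump `kubernetesVersion` in the cluster spec and apply it with kubectl. A hedged sketch (names and versions are placeholders, not from this diff):

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: w01
spec:
  kubernetesVersion: "1.29"   # bumped from the previous version, e.g. "1.28"
```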
docs/content/en/docs/getting-started/baremetal/bare-preparation.md (+22 −4)
@@ -25,12 +25,30 @@ Once the hardware is in place, you need to:
 ## Prepare hardware inventory

 Create a CSV file to provide information about all physical machines that you are ready to add to your target Bare Metal cluster.
-This file will be used:
-* When you generate the hardware file to be included in the cluster creation process described in the Create Bare Metal production cluster Getting Started guide.
-* To provide information that is passed to each machine from the Tinkerbell DHCP server when the machine is initially network booted.
-**NOTE**:While using kubectl, GitOps and Terraform for workload cluster creation, please make sure to refer [this]({{< relref "./baremetal-getstarted/#create-separate-workload-clusters" >}}) section.
+{{% alert title="Hardware CSV vs kubectl: When to Use Each" color="primary" %}}
+**Use hardware CSV for**:
+- Initial cluster creation
+- Adding new hardware when there is insufficient hardware available for scaling/upgrades
+- The system automatically selects from available hardware (machines without the `ownerName` label)
+- If hardware is insufficient: add more via a hardware CSV with the `--hardware-csv` flag or `kubectl apply`, then perform the operation
+
+**Do NOT use the CSV for**:
+- Removing hardware from the cluster (use CAPI machine delete annotations)
+- Trying to force specific hardware selection during operations (the system auto-selects based on availability)
+
+See [Scale Bare Metal Cluster]({{< relref "../../clustermgmt/cluster-scale/baremetal-scale" >}}) for operational examples.
+{{% /alert %}}
+
+**NOTE**: While using kubectl, GitOps and Terraform for workload cluster creation, please make sure to refer to [this section]({{< relref "./baremetal-getstarted/#create-separate-workload-clusters" >}}).

 The following is an example of an EKS Anywhere Bare Metal hardware CSV file:
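The example CSV itself sits outside this hunk. As a stand-in, here is a hedged sketch that embeds a one-row CSV using the column layout documented upstream (all values are placeholders; verify the header list against the current docs) and sanity-checks the fields Tinkerbell needs:

```python
import csv
import io

# Hypothetical hardware CSV content; the column names follow the format
# documented for EKS Anywhere Bare Metal (worth verifying against current docs).
CSV_TEXT = """hostname,bmc_ip,bmc_username,bmc_password,mac,ip_address,netmask,gateway,nameservers,labels,disk
eksa-cp01,10.10.0.10,Admin,secret,00:00:00:00:00:01,10.10.1.10,255.255.255.0,10.10.1.1,8.8.8.8,type=cp,/dev/sda
"""

rows = list(csv.DictReader(io.StringIO(CSV_TEXT)))
# Every row must carry a MAC and an IP for Tinkerbell to serve the machine.
for row in rows:
    assert row["mac"] and row["ip_address"], f"incomplete row: {row}"
print(len(rows), rows[0]["hostname"])
```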
docs/content/en/docs/getting-started/baremetal/bare-prereq.md (+27 −6)
@@ -45,17 +45,19 @@ When upgrading without an extra machine, keep in mind that your control plane an
 Each machine should include the following features:

-* Network Interface Cards: at least one NIC is required. It must be capable of network booting.
+* Network Interface Cards: at least one NIC is required. For network/PXE boot mode, it must be capable of network booting. See [Boot Modes]({{< relref "customize/bare-metal-boot-modes" >}}) for boot configuration options.

 * BMC integration (recommended): an IPMI or Redfish implementation (such as Dell iDRAC, Redfish-compatible, legacy or HP iLO) on the computer's motherboard or on a separate expansion card. This feature is used to allow remote management of the machine, such as turning the machine on and off.

 > **_NOTE:_** BMC integration is not required for an EKS Anywhere cluster. However, without BMC integration, upgrades are not supported and you will have to physically turn machines off and on when appropriate.

 Here are other network requirements:

-* All EKS Anywhere machines, including the Admin, control plane and worker machines, must be on the same layer 2 network and have network connectivity to the BMC (IPMI, Redfish, and so on).
+* **For network/PXE boot mode** (default): All EKS Anywhere machines, including the Admin, control plane and worker machines, must be on the same layer 2 network and have network connectivity to the BMC (IPMI, Redfish, and so on).

-* You must be able to run DHCP on the control plane/worker machine network.
+* **For ISO boot mode**: Layer 2 network connectivity is not required. Machines only need Layer 3 (routable) connectivity to the management cluster and BMC access for virtual media mounting. See [Boot Modes]({{< relref "customize/bare-metal-boot-modes" >}}) for details on boot mode options.
+
+* **For network/PXE boot mode**: You must be able to run DHCP on the control plane/worker machine network. DHCP is not required for ISO boot mode.

 > **_NOTE:_** If you have another DHCP service running on the network, you need to prevent it from interfering with the EKS Anywhere DHCP service. You can do that by configuring the other DHCP service to explicitly block all MAC addresses and exclude all IP addresses that you plan to use with your EKS Anywhere clusters.
@@ -77,14 +79,33 @@ Here are other network requirements:
 * `sts.amazonaws.com`: only if AWS IAM Authenticator is enabled

-* Two IP addresses routable from the cluster, but excluded from DHCP offering. One IP address is to be used as the Control Plane Endpoint IP. The other is for the Tinkerbell IP address on the target cluster. Below are some suggestions to ensure that these IP addresses are never handed out by your DHCP server. You may need to contact your network engineer to manage these addresses.
+* Two IP addresses routable from the cluster, but excluded from DHCP offering:
+
+{{% alert title="Understanding the Two Required IPs" color="primary" %}}
+**Control Plane Endpoint IP** (`controlPlaneConfiguration.endpoint.host`):
+- Used as the cluster's Kubernetes API server endpoint
+- Must be reachable from the admin machine and all cluster nodes
+
+**Tinkerbell IP** (`tinkerbellIP`):
+- Virtual IP for Tinkerbell stack services (Smee, Tink-server, Hegel)
+- Used during node provisioning and lifecycle operations
+- Must be reachable from all machines being provisioned
+- On workload clusters, should be the same as the management cluster's Tinkerbell IP
+
+**Both IPs must be**:
+- Outside the DHCP range
+- Routable from the cluster subnet
+- Not assigned to any physical interface
+{{% /alert %}}
+
+Below are some suggestions to ensure that these IP addresses are never handed out by your DHCP server. You may need to contact your network engineer to manage these addresses:

 * Pick IP addresses reachable from the cluster subnet that are excluded from the DHCP range, or

 * Create an IP reservation for these addresses on your DHCP server. This is usually accomplished by adding a dummy mapping of this IP address to a non-existent MAC address.
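The dummy-MAC reservation trick can be expressed, for example, in ISC dhcpd syntax (addresses and host names below are placeholders, not from this PR):

```
# /etc/dhcp/dhcpd.conf fragment: pin the two reserved IPs to MAC addresses
# that will never appear on the network, so the server never offers them.
host eksa-cp-endpoint {
  hardware ethernet 00:00:00:00:00:aa;   # non-existent MAC
  fixed-address 10.10.1.50;              # Control Plane Endpoint IP
}
host eksa-tinkerbell {
  hardware ethernet 00:00:00:00:00:ab;   # non-existent MAC
  fixed-address 10.10.1.51;              # Tinkerbell IP
}
```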
-> **_NOTE:_** When you set up your cluster configuration YAML file, the endpoint and Tinkerbell addresses are set in the `controlPlaneConfiguration.endpoint.host` and `tinkerbellIP` fields, respectively.
-
 * Ports must be open to the Admin machine and cluster machines as described in the [Cluster Networking documentation]({{< relref "../ports" >}}).
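The `controlPlaneConfiguration.endpoint.host` and `tinkerbellIP` fields mentioned in this hunk land in the cluster configuration roughly like this (a hedged sketch; the IPs are placeholders, and `tinkerbellIP` sits on the `TinkerbellDatacenterConfig` object rather than the `Cluster` object):

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
spec:
  controlPlaneConfiguration:
    endpoint:
      host: "10.10.1.50"      # Control Plane Endpoint IP
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellDatacenterConfig
spec:
  tinkerbellIP: "10.10.1.51"  # Tinkerbell IP
```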
docs/content/en/docs/getting-started/baremetal/baremetal-getstarted.md (+1 −1)
@@ -184,7 +184,7 @@ Follow these steps if you want to use your initial cluster to create and manage
 ```
 As noted earlier, adding the `--kubeconfig` option tells `eksctl` to use the management cluster identified by that kubeconfig file to create a different workload cluster.

-* **kubectl CLI**: The cluster lifecycle feature lets you use kubectl to talks to the Kubernetes API to create a workload cluster. To use kubectl, run:
+* **kubectl CLI**: The cluster lifecycle feature lets you use kubectl to talk to the Kubernetes API to create a workload cluster. kubectl can also be used for management cluster operations (scaling, Kubernetes version upgrades), except when upgrading the EKS Anywhere CLI version, which requires `eksctl anywhere upgrade`. To use kubectl, run:
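The workload-cluster creation path via kubectl hinges on the spec naming its management cluster; a hedged sketch (all names are placeholders, not from this PR):

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: w01
  namespace: default
spec:
  kubernetesVersion: "1.29"
  managementCluster:
    name: mgmt   # the management cluster that reconciles this workload cluster
```

Creating the workload cluster is then something like `kubectl apply -f w01-cluster.yaml --kubeconfig <mgmt-kubeconfig>` run against the management cluster.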