
Commit d4b6914

docs: Clarify baremetal network requirements and kubectl capabilities (#10429)
- Qualify L2 network as network/PXE boot only, not required for ISO boot - Clarify DHCP only needed for network/PXE boot mode - Add TinkerbellIP vs Control Plane Endpoint explanation - Clarify kubectl works for management and workload cluster operations - Add hardware CSV operational guidance (when to use CSV vs kubectl) - Add cross-references between related documentation These changes address common customer confusion about network requirements for different boot modes and the scope of kubectl usage for cluster lifecycle management.
1 parent e31bc06 commit d4b6914

File tree: 5 files changed (+53, −14 lines)


docs/content/en/docs/clustermgmt/cluster-scale/baremetal-scale.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -65,7 +65,7 @@ If you don't have any available hardware that match this requirement in the clus
    ```
    As noted earlier, adding the `--kubeconfig` option tells `eksctl` to use the management cluster identified by that kubeconfig file to create a different workload cluster.

-2. **kubectl CLI**: The cluster lifecycle feature lets you use kubectl to talk to the Kubernetes API to upgrade a workload cluster. To use kubectl, run:
+2. **kubectl CLI**: The cluster lifecycle feature lets you use kubectl to talk to the Kubernetes API to scale a workload cluster. kubectl can also be used for management cluster scaling, except when upgrading the EKS Anywhere CLI version which requires `eksctl anywhere upgrade`. To use kubectl, run:
    ```bash
    kubectl apply -f eksa-w01-cluster.yaml --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
    ```
````
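The scale flow this hunk describes is spec-driven: you edit the worker count in the cluster file and re-apply it. A minimal sketch of the relevant fragment of `eksa-w01-cluster.yaml`, where the node group name `md-0` and the count value are illustrative assumptions, not content from this commit:

```yaml
# Illustrative fragment only -- the node group name "md-0" and the
# count value are assumptions for this sketch.
spec:
  workerNodeGroupConfigurations:
    - name: md-0
      count: 3   # raise or lower this, then re-apply the file
```

The `kubectl apply` command shown in the hunk then pushes the changed count through the management cluster.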

docs/content/en/docs/clustermgmt/cluster-upgrades/baremetal-upgrades.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -136,9 +136,9 @@ and then you will run the [upgrade cluster command]({{< relref "baremetal-upgrad
 #### Upgrade cluster command
-* **kubectl CLI**: The cluster lifecycle feature lets you use kubectl to talk to the Kubernetes API to upgrade a workload cluster. To use kubectl, run:
+* **kubectl CLI**: The cluster lifecycle feature lets you use kubectl to talk to the Kubernetes API to upgrade a workload cluster. kubectl can also be used for management cluster upgrades (Kubernetes version, scaling, configuration changes), except when upgrading the EKS Anywhere CLI version which requires `eksctl anywhere upgrade`. To use kubectl, run:
    ```bash
-   kubectl apply -f eksa-w01-cluster.yaml
+   kubectl apply -f eksa-w01-cluster.yaml
       --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
    ```
````
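Upgrades via kubectl follow the same apply-the-spec pattern as scaling: bump the version field and re-apply the file with the management cluster's kubeconfig. A hedged sketch of the fragment of `eksa-w01-cluster.yaml` you would typically change (the version value is an assumption, not taken from this commit):

```yaml
# Illustrative fragment -- the version value is an assumption.
spec:
  kubernetesVersion: "1.28"   # bump, then `kubectl apply` with the
                              # management cluster's kubeconfig
```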

docs/content/en/docs/getting-started/baremetal/bare-preparation.md

Lines changed: 22 additions & 4 deletions
````diff
@@ -25,12 +25,30 @@ Once the hardware is in place, you need to:

 ## Prepare hardware inventory
 Create a CSV file to provide information about all physical machines that you are ready to add to your target Bare Metal cluster.
-This file will be used:
-
-* When you generate the hardware file to be included in the cluster creation process described in the Create Bare Metal production cluster Getting Started guide.
-* To provide information that is passed to each machine from the Tinkerbell DHCP server when the machine is initially network booted.
-
-**NOTE**:While using kubectl, GitOps and Terraform for workload cluster creation, please make sure to refer [this]({{< relref "./baremetal-getstarted/#create-separate-workload-clusters" >}}) section.
+
+{{% alert title="Hardware CSV vs kubectl: When to Use Each" color="primary" %}}
+**Hardware CSV** creates the initial hardware catalog (Hardware objects, BMC Machines, credentials).
+
+**Use the hardware CSV for**:
+- Initial cluster creation
+- Adding new hardware when insufficient hardware is available for scaling or upgrades
+
+**Use kubectl for hardware management**:
+- Adding hardware after initial creation: `eksctl anywhere generate hardware -z hardware.csv > hardware.yaml && kubectl apply -f hardware.yaml`
+
+**For cluster operations (scaling, upgrades)**:
+- Update `count` in the cluster specification
+- The system automatically selects from available hardware (machines without an `ownerName` label)
+- If hardware is insufficient, add more via a hardware CSV with the `--hardware-csv` flag or `kubectl apply`, then perform the operation
+
+**Do NOT use the CSV for**:
+- Removing hardware from a cluster (use CAPI machine delete annotations)
+- Forcing selection of specific hardware during operations (the system auto-selects based on availability)
+
+See [Scale Bare Metal Cluster]({{< relref "../../clustermgmt/cluster-scale/baremetal-scale" >}}) for operational examples.
+{{% /alert %}}
+
+**NOTE**: While using kubectl, GitOps and Terraform for workload cluster creation, please make sure to refer to [this section]({{< relref "./baremetal-getstarted/#create-separate-workload-clusters" >}}).

 The following is an example of an EKS Anywhere Bare Metal hardware CSV file:
````
docs/content/en/docs/getting-started/baremetal/bare-prereq.md

Lines changed: 27 additions & 6 deletions
````diff
@@ -45,17 +45,19 @@ When upgrading without an extra machine, keep in mind that your control plane an

 Each machine should include the following features:

-* Network Interface Cards: at least one NIC is required. It must be capable of network booting.
+* Network Interface Cards: at least one NIC is required. For network/PXE boot mode, it must be capable of network booting. See [Boot Modes]({{< relref "customize/bare-metal-boot-modes" >}}) for boot configuration options.

 * BMC integration (recommended): an IPMI or Redfish implementation (such a Dell iDRAC, RedFish-compatible, legacy or HP iLO) on the computer's motherboard or on a separate expansion card. This feature is used to allow remote management of the machine, such as turning the machine on and off.

 > **_NOTE:_** BMC integration is not required for an EKS Anywhere cluster. However, without BMC integration, upgrades are not supported and you will have to physically turn machines off and on when appropriate.

 Here are other network requirements:

-* All EKS Anywhere machines, including the Admin, control plane and worker machines, must be on the same layer 2 network and have network connectivity to the BMC (IPMI, Redfish, and so on).
+* **For network/PXE boot mode** (default): All EKS Anywhere machines, including the Admin, control plane and worker machines, must be on the same layer 2 network and have network connectivity to the BMC (IPMI, Redfish, and so on).

-* You must be able to run DHCP on the control plane/worker machine network.
+  **For ISO boot mode**: Layer 2 network connectivity is not required. Machines only need Layer 3 (routable) connectivity to the management cluster and BMC access for virtual media mounting. See [Boot Modes]({{< relref "customize/bare-metal-boot-modes" >}}) for details on boot mode options.
+
+* **For network/PXE boot mode**: You must be able to run DHCP on the control plane/worker machine network. DHCP is not required for ISO boot mode.

 > **_NOTE:_** If you have another DHCP service running on the network, you need to prevent it from interfering with the EKS Anywhere DHCP service. You can do that by configuring the other DHCP service to explicitly block all MAC addresses and exclude all IP addresses that you plan to use with your EKS Anywhere clusters.
@@ -77,14 +79,33 @@ Here are other network requirements:

 * `sts.amazonaws.com`: only if AWS IAM Authenticator is enabled

-* Two IP addresses routable from the cluster, but excluded from DHCP offering. One IP address is to be used as the Control Plane Endpoint IP. The other is for the Tinkerbell IP address on the target cluster. Below are some suggestions to ensure that these IP addresses are never handed out by your DHCP server. You may need to contact your network engineer to manage these addresses.
+* Two IP addresses routable from the cluster, but excluded from DHCP offering:
+
+  {{% alert title="Understanding the Two Required IPs" color="primary" %}}
+  **Control Plane Endpoint IP** (`controlPlaneConfiguration.endpoint.host`):
+  - Virtual IP for the Kubernetes API server
+  - Managed by kube-vip on control plane nodes
+  - Used by kubectl and all Kubernetes clients
+  - Must be reachable from the admin machine and all cluster nodes
+
+  **Tinkerbell IP** (`tinkerbellIP`):
+  - Virtual IP for Tinkerbell stack services (Smee, Tink-server, Hegel)
+  - Used during node provisioning and lifecycle operations
+  - Must be reachable from all machines being provisioned
+  - On workload clusters, should be the same as the management cluster's Tinkerbell IP
+
+  **Both IPs must be**:
+  - Outside the DHCP range
+  - Routable from the cluster subnet
+  - Not assigned to any physical interface
+  {{% /alert %}}
+
+  Below are some suggestions to ensure that these IP addresses are never handed out by your DHCP server. You may need to contact your network engineer to manage these addresses:

 * Pick IP addresses reachable from the cluster subnet that are excluded from the DHCP range or

 * Create an IP reservation for these addresses on your DHCP server. This is usually accomplished by adding a dummy mapping of this IP address to a non-existent mac address.

-> **_NOTE:_** When you set up your cluster configuration YAML file, the endpoint and Tinkerbell addresses are set in the `controlPlaneConfiguration.endpoint.host` and `tinkerbellIP` fields, respectively.
-
 * Ports must be open to the Admin machine and cluster machines as described in the [Cluster Networking documentation]({{< relref "../ports" >}}).

 ## Validated hardware
````
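The two addresses discussed in this file map to the spec fields named in the text (`controlPlaneConfiguration.endpoint.host` and `tinkerbellIP`). A hedged sketch of where they sit in the cluster configuration YAML; the object names and IP addresses are illustrative assumptions, and `tinkerbellIP` is shown on the `TinkerbellDatacenterConfig` object per the EKS Anywhere Bare Metal spec:

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: mgmt                  # illustrative name
spec:
  controlPlaneConfiguration:
    endpoint:
      host: "10.0.0.10"       # Control Plane Endpoint IP (kube-vip VIP)
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellDatacenterConfig
metadata:
  name: mgmt                  # illustrative name
spec:
  tinkerbellIP: "10.0.0.11"   # Tinkerbell stack VIP
```

Both addresses must sit outside the DHCP range, as the requirements above state.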

docs/content/en/docs/getting-started/baremetal/baremetal-getstarted.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -184,7 +184,7 @@ Follow these steps if you want to use your initial cluster to create and manage
    ```
    As noted earlier, adding the `--kubeconfig` option tells `eksctl` to use the management cluster identified by that kubeconfig file to create a different workload cluster.

-* **kubectl CLI**: The cluster lifecycle feature lets you use kubectl to talks to the Kubernetes API to create a workload cluster. To use kubectl, run:
+* **kubectl CLI**: The cluster lifecycle feature lets you use kubectl to talk to the Kubernetes API to create a workload cluster. kubectl can also be used for management cluster operations (scaling, Kubernetes version upgrades), except when upgrading the EKS Anywhere CLI version which requires `eksctl anywhere upgrade`. To use kubectl, run:
    ```bash
    kubectl apply -f eksa-w01-cluster.yaml --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
    ```
````
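The workload-cluster file that `kubectl apply` consumes ties itself back to the management cluster inside the spec. A minimal, hedged skeleton; the cluster names are illustrative assumptions, and the `managementCluster` field placement follows the EKS Anywhere cluster spec rather than anything shown in this commit:

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: w01                   # illustrative workload cluster name
spec:
  managementCluster:
    name: mgmt                # the management cluster that owns this one
```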
