**README.md** (18 additions, 72 deletions)
This is the official [cluster-api](https://github.com/kubernetes-sigs/cluster-api) provider …
## Upgrading from v0.3.X to v1.1.X
**IMPORTANT** - Before you upgrade, please note that multi-tenancy support has changed in versions after v0.3.X

* We no longer support running multiple instances of the provider in the same management cluster. Typically this was done to enable multiple credentials for managing devices in more than one project.
* If you currently have a management cluster with multiple instances of the provider, it's recommended you use `clusterctl move` to migrate them to another cluster before upgrading, as sketched below.
* [See more information about `clusterctl move` here](https://cluster-api.sigs.k8s.io/clusterctl/commands/move.html)
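A minimal sketch of such a move, assuming the destination management cluster is reachable through a `target-kubeconfig.yaml` file (the file name is illustrative):

```sh
# Move the Cluster API objects from the current management cluster
# to the destination cluster identified by --to-kubeconfig.
clusterctl move --to-kubeconfig=target-kubeconfig.yaml
```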
* Upgrade your clusterctl to version 1.1.3 or later.
* Backup your Cluster API objects from your management cluster by using the `clusterctl backup` command (a sketch follows this list).
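A sketch of the backup step; the `--directory` flag is an assumption based on clusterctl v1.1-era behavior, so confirm with `clusterctl backup --help` for your version:

```sh
# Write the management cluster's Cluster API objects to a local directory.
clusterctl backup --directory=/tmp/capi-backup
```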
You can now apply the upgrade by executing the following command:

```
clusterctl upgrade apply --contract v1beta1
```
* Go ahead and run `clusterctl upgrade apply --contract v1beta1`
* After this, if you'd like to continue and upgrade Kubernetes, it's a normal upgrade flow where you upgrade the control plane by editing the machine templates and the KubeadmControlPlane, and the workers by editing the MachineSets and MachineDeployments. Full details [here](https://cluster-api.sigs.k8s.io/tasks/upgrading-clusters.html). Below is a very basic example upgrade of a small cluster:
```bash
kubectl get PacketMachineTemplate example-control-plane -o yaml > example-control-plane.yaml
# Using a text editor, edit the spec.version field to the new kubernetes version
kubectl apply -f example-control-plane.yaml

kubectl get machineDeployment example-worker-a -o yaml > example-worker-a.yaml
# Using a text editor, edit the spec.template.spec.version to the new kubernetes version
kubectl apply -f example-worker-a.yaml
```
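Equivalently, as a sketch that reuses the example object names above, the same version bumps can be applied non-interactively with `kubectl patch`; the target version shown is illustrative:

```sh
# Bump the control plane Kubernetes version on the KubeadmControlPlane.
kubectl patch kubeadmcontrolplane example-control-plane --type merge \
  -p '{"spec":{"version":"v1.24.0"}}'

# Bump the worker Kubernetes version on the MachineDeployment.
kubectl patch machinedeployment example-worker-a --type merge \
  -p '{"spec":{"template":{"spec":{"version":"v1.24.0"}}}}'
```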
## Using

The following section describes how to use the cluster-api provider for packet (CAPP) as a regular user.
You do _not_ need to clone this repository, or install any special tools, other than the standard
`kubectl` and `clusterctl`; see below.
### Requirements

Once you have your cluster, ensure your `KUBECONFIG` environment variable is set.
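For example, a minimal sketch (the kubeconfig path is illustrative):

```sh
# Point kubectl and clusterctl at the management cluster.
export KUBECONFIG="$HOME/.kube/config"
kubectl cluster-info  # sanity check that the cluster is reachable
```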
### Getting Started

You should then follow the [Cluster API Quick Start Guide](https://cluster-api.sigs.k8s.io/user/quick-start.html), selecting the 'Equinix Metal' tabs where offered.
#### Defaults

If you do not change the generated `yaml` files, the defaults below will be used. You can look in the [templates/cluster-template.yaml](./templates/cluster-template.yaml) file for details.
* `CLUSTER_NAME` (defaults to `my-cluster`)
* `CONTROL_PLANE_MACHINE_COUNT` (defaults to `1`)
* `KUBE_VIP_VERSION` (defaults to `v0.4.2`)
* `NODE_OS` (defaults to `ubuntu_18_04`)
* `POD_CIDR` (defaults to `192.168.0.0/16`)
* `SERVICE_CIDR` (defaults to `172.26.0.0/16`)
* `WORKER_MACHINE_COUNT` (defaults to `0`)
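As a sketch, any of these variables can be overridden in the environment before rendering the template; the values below are illustrative:

```sh
# Override a few template defaults, then render the cluster manifest.
export CLUSTER_NAME="prod-cluster"
export CONTROL_PLANE_MACHINE_COUNT=3
export WORKER_MACHINE_COUNT=3
clusterctl generate cluster "$CLUSTER_NAME" --infrastructure packet > "$CLUSTER_NAME.yaml"
```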
## Community, discussion, contribution, and support

You can reach the maintainers of this project at:

* Chat with us on [Slack](http://slack.k8s.io/) in the [#cluster-api-provider-packet][#cluster-api-provider-packet slack] channel
* Subscribe to the [SIG Cluster Lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle) Google Group for access to documents and calendars
## Development and Customizations

* To build CAPP and to deploy individual components, see [docs/BUILD.md](./docs/BUILD.md).
* To build CAPP and to cut a proper release, see [docs/RELEASE.md](./docs/RELEASE.md).
### Code of conduct
Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).
When using the CAPI quickstart, follow the [Calico install instructions from Tigera](https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart).

## Flannel

### Installing Flannel

Follow the instructions at <https://github.com/flannel-io/flannel#deploying-flannel-manually> (ignoring the instruction to create a `flanneld` binary on each node).

When declaring your cluster, set the `POD_CIDR` to `10.244.0.0/16`, which is the default `Network` (`net-conf.json`) for Flannel, or update the Flannel manifest to match the desired pod CIDR.
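A sketch of the first option, assuming manifests are rendered with `clusterctl` as in the README:

```sh
# Match Flannel's default Network before rendering the cluster manifest.
export POD_CIDR="10.244.0.0/16"
clusterctl generate cluster my-cluster --infrastructure packet > my-cluster.yaml
```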
**docs/experiences/flavors.md** (30 additions, 7 deletions)
# Flavors & Custom Templates

## Kube-VIP

### API Server VIP Management Choice

By default CPEM will be used to manage the EIP that serves as the VIP for the api-server. As of v0.6.0 you can choose to use kube-vip to manage the api-server VIP instead of CPEM.

### Choosing Kube-VIP

To use kube-vip, when generating the template with `clusterctl`, pass in the `--flavor kube-vip` flag. For example, your `clusterctl generate` command might look like the following:

```sh
clusterctl generate cluster capi-quickstart \
  --kubernetes-version v1.24.0 \
  --control-plane-machine-count=3 \
  --worker-machine-count=3 \
  --infrastructure packet \
  --flavor kube-vip \
  > capi-quickstart.yaml
```

## Custom Templates
When using `clusterctl` you can generate your own cluster spec from a template.
… automation. Here are a few examples:

Let's suppose you want `flannel`; you can add the following line to `postKubeadmCommands` for the `KubeadmControlPlane` resource:
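As a sketch of the elided line, it would be a shell command along these lines (the manifest URL is an assumption; check the Flannel repository for the current one):

```sh
# Runs on the control plane node after kubeadm init; admin.conf is the
# cluster-admin kubeconfig that kubeadm writes on the node.
kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```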