5 changes: 0 additions & 5 deletions docs/book/src/SUMMARY.md
@@ -10,14 +10,9 @@
- [Prerequisites](./topics/vpc/prerequisites.md)
- [Uploading an image](topics/vpc/uploading-an-image.md)
- [Creating a cluster](./topics/vpc/creating-a-cluster.md)
- [Creating a cluster with Load Balancer and External Cloud Provider](./topics/vpc/load-balancer.md)
- [Creating a cluster from ClusterClass](./topics/vpc/clusterclass-cluster.md)
- [PowerVS Cluster](./topics/powervs/index.md)
- [Prerequisites](./topics/powervs/prerequisites.md)
- [Creating a cluster](./topics/powervs/creating-a-cluster.md)
- [Creating a cluster with External Cloud Provider](./topics/powervs/external-cloud-provider.md)
- [Creating a cluster from ClusterClass](./topics/powervs/clusterclass-cluster.md)
- [Creating a cluster by auto creating required resources](./topics/powervs/create-resources.md)
- [Using autoscaler with scaling from 0 machine](./topics/powervs/autoscaler-scalling-from-0.md)
- [capibmadm CLI](./topics/capibmadm/index.md)
- [PowerVS Commands](./topics/capibmadm/powervs/index.md)
13 changes: 5 additions & 8 deletions docs/book/src/developer/tilt.md
@@ -55,7 +55,8 @@ podman machine start

## Create a kind cluster

First, make sure you have a kind cluster and that your `KUBECONFIG` is set up correctly:
First, make sure you have a kind cluster and that your `KUBECONFIG` is set up correctly.
> **Note:** Execute the following from the `cluster-api-provider-ibmcloud` repository.
``` bash
make kind-cluster
@@ -97,11 +98,9 @@ extra_args:
> **Note:** Currently, both [ClusterClass](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-class/index.html) and [ClusterResourceset](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-resource-set.html) are experimental features.
### 1. Configuration to deploy PowerVS workload cluster with external cloud controller manager
### 1. Configuration to deploy workload cluster with external cloud controller manager
To deploy workload cluster with [PowerVS cloud controller manager](/topics/powervs/external-cloud-provider.html)(experimental) or to deploy workload cluster with [cloud controller manager](/topics/vpc/load-balancer.html)(experimental), set `PROVIDER_ID_FORMAT` to `v2` and enable cluster resourceset feature gate under kustomize_substitutions.

This requires setting the feature gate `EXP_CLUSTER_RESOURCE_SET` to `true` under kustomize_substitutions.
To deploy a workload cluster with the cloud controller manager, set `PROVIDER_ID_FORMAT` to `v2` and enable the ClusterResourceSet feature gate by setting `EXP_CLUSTER_RESOURCE_SET` to `true` under kustomize_substitutions.

```yaml
default_registry: "gcr.io/you-project-name-here"
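# A sketch of the remaining settings for this configuration, using assumed and
# placeholder values (registry above, API key below); adapt them to your setup.
provider_repos:
- ../cluster-api-provider-ibmcloud
enable_providers:
- ibmcloud
- kubeadm-bootstrap
- kubeadm-control-plane
kustomize_substitutions:
  IBMCLOUD_API_KEY: "XXXXXXXXXXXXXXXXXX" # placeholder API key
  PROVIDER_ID_FORMAT: "v2"
  EXP_CLUSTER_RESOURCE_SET: "true"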
@@ -119,9 +118,7 @@ kustomize_substitutions:

### 2. Configuration to deploy workload cluster from ClusterClass template

To deploy workload cluster with [clusterclass-template](/topics/powervs/clusterclass-cluster.html), set the `PROVIDER_ID_FORMAT` to `v2` under kustomize_substitutions.

This requires setting the feature gates `EXP_CLUSTER_RESOURCE_SET` and `CLUSTER_TOPOLOGY` to `true` under kustomize_substitutions.
To deploy a workload cluster with the [clusterclass-template](/topics/powervs/clusterclass-cluster.html), set `PROVIDER_ID_FORMAT` to `v2` and enable the feature gates by setting `EXP_CLUSTER_RESOURCE_SET` and `CLUSTER_TOPOLOGY` to `true` under kustomize_substitutions.

```yaml
default_registry: "gcr.io/you-project-name-here"
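# As in the previous sketch; only kustomize_substitutions changes, gaining
# CLUSTER_TOPOLOGY for ClusterClass support (assumed and placeholder values).
kustomize_substitutions:
  IBMCLOUD_API_KEY: "XXXXXXXXXXXXXXXXXX" # placeholder API key
  PROVIDER_ID_FORMAT: "v2"
  EXP_CLUSTER_RESOURCE_SET: "true"
  CLUSTER_TOPOLOGY: "true"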
26 changes: 0 additions & 26 deletions docs/book/src/getting-started.md
@@ -87,29 +87,3 @@ it into a management cluster using `clusterctl`.
```

6. Once the management cluster is ready with the required providers up and running, proceed to provisioning the workload cluster. Check the respective sections for [VPC](/topics/vpc/creating-a-cluster.html) and [PowerVS](/topics/powervs/creating-a-cluster.html) to deploy the cluster.

7. For deploying with your workload cluster with Cloud Controller manager or Cluster Class template, refer to [deploy with cloud controller manager](#deploy-with-cloud-controller-manager) and [deploy PowerVS cluster with cluster class template](#deploy-powervs-cluster-with-clusterclass-template) sections respectively.


### Deploy with Cloud Controller manager

To deploy VPC workload cluster with [IBM cloud controller manager](/topics/vpc/load-balancer.html), or with [PowerVS cloud controller manager](/topics/powervs/external-cloud-provider.html), set the `PROVIDER_ID_FORMAT` environmental variable to `v2`.

```console
export PROVIDER_ID_FORMAT=v2
export EXP_CLUSTER_RESOURCE_SET=true
```

> Note: `EXP_CLUSTER_RESOURCE_SET` should be set for deploying workload cluster with Cloud Controller manager.
### Deploy PowerVS cluster or VPC cluster with ClusterClass template

To deploy workload cluster with [PowerVS clusterclass-template](/topics/powervs/clusterclass-cluster.html) or [VPC clusterclass-template](/topics/VPC/clusterclass-cluster.html). Set the following environmental variables.

```console
export PROVIDER_ID_FORMAT=v2
export EXP_CLUSTER_RESOURCE_SET=true
export CLUSTER_TOPOLOGY=true
```

> Note: Currently, both [ClusterClass](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-class/index.html) and [ClusterResourceSet](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-resource-set.html) are experimental feature so we need to enable the feature gate by setting `EXP_CLUSTER_RESOURCE_SET`, `CLUSTER_TOPOLOGY` environmental variables.
20 changes: 10 additions & 10 deletions docs/book/src/topics/powervs/autoscaler-scalling-from-0.md
@@ -45,8 +45,8 @@ go build .
```

Note:
1. autoscaler can be run in different ways the possible ways are described [here](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/clusterapi/README.md#connecting-cluster-autoscaler-to-cluster-api-management-and-workload-clusters).
2. autoscaler supports various command line flags and more details about it can be found [here](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca).
1. The autoscaler can be run in different ways; the possible ways are described [here](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/clusterapi/README.md#connecting-cluster-autoscaler-to-cluster-api-management-and-workload-clusters) (a minimal local invocation is sketched below).
2. The autoscaler supports various command-line flags; more details about them can be found [here](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca).
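
A minimal sketch of running the locally built autoscaler against the two clusters, assuming the kubeconfig paths below; see the README linked above for the full set of connection options and flags:
```
./cluster-autoscaler \
    --cloud-provider=clusterapi \
    --kubeconfig=/path/to/workload-cluster.kubeconfig \
    --cloud-config=/path/to/management-cluster.kubeconfig
```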

## Use case of cluster-autoscaler

@@ -97,17 +97,17 @@ busybox-deployment-7c87788568-t26bb 0/1 Pending 0 5s
5. On the management cluster verify that the new machine creation is being triggered by autoscaler
```
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
karthik-ibm-powervs-control-plane-smvf7 karthik-ibm-powervs karthik-ibm-powervs-control-plane-pgwmz ibmpowervs://osa/osa21/3229a-af54-4212-bf60-6202b6fd0a07/809cd0f2-7502-4112-bf44-84d178020d8a Running 82m v1.24.2
karthik-ibm-powervs-md-0-6b4d67ccf4-npdbm karthik-ibm-powervs karthik-ibm-powervs-md-0-qch8f ibmpowervs://osa/osa21/3229a-af54-4212-bf60-6202b6fd0a07/50f841e5-f58c-4569-894d-b40ba0d2696e Running 76m v1.24.2
karthik-ibm-powervs-md-0-6b4d67ccf4-v7xv9 karthik-ibm-powervs Provisioning 3m19s v1.24.2
ibm-powervs-control-plane-smvf7 ibm-powervs ibm-powervs-control-plane-pgwmz ibmpowervs://osa/osa21/3229a-af54-4212-bf60-6202b6fd0a07/809cd0f2-7502-4112-bf44-84d178020d8a Running 82m v1.24.2
ibm-powervs-md-0-6b4d67ccf4-npdbm ibm-powervs ibm-powervs-md-0-qch8f ibmpowervs://osa/osa21/3229a-af54-4212-bf60-6202b6fd0a07/50f841e5-f58c-4569-894d-b40ba0d2696e Running 76m v1.24.2
ibm-powervs-md-0-6b4d67ccf4-v7xv9 ibm-powervs Provisioning 3m19s v1.24.2
```
6. After sometime verify that the new node being added to the cluster and pod is in running state
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
karthik-ibm-powervs-control-plane-pgwmz Ready control-plane 92m v1.24.2
karthik-ibm-powervs-md-0-n8c6d Ready <none> 42s v1.24.2
karthik-ibm-powervs-md-0-qch8f Ready <none> 85m v1.24.2
ibm-powervs-control-plane-pgwmz Ready control-plane 92m v1.24.2
ibm-powervs-md-0-n8c6d Ready <none> 42s v1.24.2
ibm-powervs-md-0-qch8f Ready <none> 85m v1.24.2

kubectl get pods
NAME READY STATUS RESTARTS AGE
@@ -120,7 +120,7 @@

kubectl get nodes
NAME STATUS ROLES AGE VERSION
karthik-ibm-powervs-control-plane-pgwmz Ready control-plane 105m v1.24.2
karthik-ibm-powervs-md-0-qch8f Ready <none> 98m v1.24.2
ibm-powervs-control-plane-pgwmz Ready control-plane 105m v1.24.2
ibm-powervs-md-0-qch8f Ready <none> 98m v1.24.2
```

28 changes: 0 additions & 28 deletions docs/book/src/topics/powervs/clusterclass-cluster.md

This file was deleted.

28 changes: 0 additions & 28 deletions docs/book/src/topics/powervs/create-resources.md

This file was deleted.

113 changes: 108 additions & 5 deletions docs/book/src/topics/powervs/creating-a-cluster.md
@@ -1,5 +1,12 @@
### Provision workload cluster in IBM Cloud PowerVS

> **Note:**
> A PowerVS cluster can be deployed with different customisations. Pick the template below that suits your needs and fulfill the [prerequisites](prerequisites.md) before proceeding with cluster creation.
> - [PowerVS cluster with user provided resources](#deploy-a-powervs-cluster-with-user-provided-resources)
> - [PowerVS cluster with infrastructure creation](#deploy-a-powervs-cluster-with-infrastructure-creation)
> - [PowerVS cluster with external cloud provider](#deploy-a-powervs-cluster-with-external-cloud-provider)
> - [PowerVS cluster with cluster class](#deploy-a-powervs-cluster-with-cluster-class)

Now that we have a management cluster ready, you can create your workload cluster by
following the steps below.

@@ -29,11 +36,11 @@ following the steps below.
capi-test-port 163.68.65.6 192.168.167.6 fa:16:3e:89:c8:80 c7e7b6e0-0b0d-4a11-a90b-6ea293deb5ac DOWN
```

2. Use clusterctl to render the yaml through templates and deploy the cluster

**Note:** To deploy workload cluster with PowerVS cloud controller manager which is currently in experimental stage follow [these](/topics/powervs/external-cloud-provider.html) steps.
2. Use clusterctl to render the yaml through templates and deploy the cluster.
**Replace the following snippet with the template of your choice.**

**Note:** the `IBMPOWERVS_IMAGE_ID` value below should reflect the ID of the custom qcow2 image, the `kubernetes-version` value below should reflect the kubernetes version of the custom qcow2 image.
> **Note:**
> The `IBMPOWERVS_IMAGE_ID` value below should reflect the ID of the custom image, and the `kubernetes-version` value should reflect the Kubernetes version of that image.

```console
IBMPOWERVS_SSHKEY_NAME="my-pub-key" \
@@ -123,4 +130,100 @@ following the steps below.
ibm-powervs-1-control-plane-rg6xv Ready master 41h v1.26.2
ibm-powervs-1-md-0-4dc5c Ready <none> 41h v1.26.2
ibm-powervs-1-md-0-dbxb7 Ready <none> 20h v1.26.2
```

### Deploy a PowerVS cluster with user provided resources

```
IBMPOWERVS_SSHKEY_NAME="my-pub-key" \
IBMPOWERVS_VIP="192.168.167.6" \
IBMPOWERVS_VIP_EXTERNAL="163.68.65.6" \
IBMPOWERVS_VIP_CIDR="29" \
IBMPOWERVS_IMAGE_NAME="capibm-powervs-centos-streams8-1-26-2" \
IBMPOWERVS_SERVICE_INSTANCE_ID="3229a94c-af54-4212-bf60-6202b6fd0a07" \
IBMPOWERVS_NETWORK_NAME="capi-test" \
clusterctl generate cluster ibm-powervs-1 --kubernetes-version v1.26.2 \
--target-namespace default \
--control-plane-machine-count=3 \
--worker-machine-count=1 \
--flavor=powervs | kubectl apply -f -
```
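
After applying the template, you can watch the cluster come up from the management cluster. A quick check, assuming the cluster name `ibm-powervs-1` used above:
```
kubectl get machines -A
clusterctl describe cluster ibm-powervs-1 -n default
```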
> **Review discussion**
>
> **Contributor:** Can we add more content here explaining the usage of existing resources and the different combinations a user can pass, like TG, VPC and PowerVS service instance?
>
> **Contributor:** Currently the user-provided TG is expected to have both PowerVS and VPC connections.
> 1. When the connections are not there, create them and, during the delete phase, clean up only the connections. This requires another field in the status to mark whether the connection was created by the controller or not.
> 2. We can expect a combination of PowerVS, VPC and TG to be passed, with their connections already in place.
>
> If we go with the 2nd approach, we need to document that. IMO we can go with the second approach and keep things simple for the reuse case. Wdyt? @Amulyam24 @Karthik-K-N @mkumatag
>
> **Contributor:** Yeah, I think it's better to go with the second approach and document it.
>
> **Contributor (Author),** replying to the comment above: I don't think this is relevant with the latest code, which supports all combinations of TG. Added a generic comment, PTAL.

### Deploy a PowerVS cluster with infrastructure creation

#### Prerequisites:
- Set `EXP_CLUSTER_RESOURCE_SET` to `true`, as the cluster will be deployed with the external cloud provider, which will create the resources needed to run the cloud controller manager.
- Set the `provider-id-fmt` [flag](https://github.com/kubernetes-sigs/cluster-api-provider-ibmcloud/blob/5e7f80878f2252c6ab13c16102de90c784a2624d/main.go#L168-L173) to `v2` via the `PROVIDER_ID_FORMAT` environment variable (see the export snippet after this list).
- Already existing infrastructure resources can be used for cluster creation by setting either the ID or the name in the spec. If neither is specified, the cluster name will be used to construct the resource name. For example, if the cluster name is `capi-powervs`, the PowerVS workspace will be created with the name `capi-powervs-serviceInstance`.
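
One way to satisfy the first two prerequisites, assuming the same shell is used to initialise the management cluster and generate the cluster template:
```
export EXP_CLUSTER_RESOURCE_SET=true
export PROVIDER_ID_FORMAT=v2
```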

```
IBMCLOUD_API_KEY=XXXXXXXXXXXX \
IBMPOWERVS_SSHKEY_NAME="my-ssh-key" \
COS_BUCKET_REGION="us-south" \
COS_BUCKET_NAME="power-oss-bucket" \
COS_OBJECT_NAME=capibm-powervs-centos-streams8-1-28-4-1707287079.ova.gz \
IBMACCOUNT_ID="<account_id>" \
IBMPOWERVS_REGION="wdc" \
IBMPOWERVS_ZONE="wdc06" \
IBMVPC_REGION="us-east" \
IBM_RESOURCE_GROUP="ibm-resource-group" \
BASE64_API_KEY=$(echo -n $IBMCLOUD_API_KEY | base64) \
clusterctl generate cluster capi-powervs --kubernetes-version v1.28.4 \
--target-namespace default \
--control-plane-machine-count=3 \
--worker-machine-count=1 \
--flavor=powervs-create-infra | kubectl apply -f -
```

### Deploy a PowerVS cluster with external cloud provider

#### Prerequisites:
- Set `EXP_CLUSTER_RESOURCE_SET` to `true`, as the cluster will be deployed with the external cloud provider, which will create the resources needed to run the cloud controller manager.
- Set the `provider-id-fmt` [flag](https://github.com/kubernetes-sigs/cluster-api-provider-ibmcloud/blob/5e7f80878f2252c6ab13c16102de90c784a2624d/main.go#L168-L173) to `v2` via the `PROVIDER_ID_FORMAT` environment variable.

```
IBMPOWERVS_SSHKEY_NAME="my-pub-key" \
IBMPOWERVS_VIP="192.168.167.6" \
IBMPOWERVS_VIP_EXTERNAL="163.68.65.6" \
IBMPOWERVS_VIP_CIDR="29" \
IBMPOWERVS_IMAGE_NAME="capibm-powervs-centos-streams8-1-26-2" \
IBMPOWERVS_SERVICE_INSTANCE_ID="3229a94c-af54-4212-bf60-6202b6fd0a07" \
IBMPOWERVS_NETWORK_NAME="capi-test" \
IBMACCOUNT_ID="ibm-accountid" \
IBMPOWERVS_REGION="osa" \
IBMPOWERVS_ZONE="osa21" \
BASE64_API_KEY=$(echo -n $IBMCLOUD_API_KEY | base64) \
clusterctl generate cluster ibm-powervs-1 --kubernetes-version v1.26.2 \
--target-namespace default \
--control-plane-machine-count=3 \
--worker-machine-count=1 \
--flavor=powervs-cloud-provider | kubectl apply -f -
```
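
Since the cloud controller manager is delivered through a ClusterResourceSet, one way to sanity-check the deployment from the management cluster (the exact resource name is defined by the template and may differ):
```
kubectl get clusterresourcesets -n default
```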

### Deploy a PowerVS cluster with cluster class

#### Prerequisites:
- To deploy a cluster using [ClusterClass](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-class/index.html), set the `CLUSTER_TOPOLOGY` environment variable to `true`.
- Set `EXP_CLUSTER_RESOURCE_SET` to `true`, as the cluster will be deployed with the external cloud provider, which will create the resources needed to run the cloud controller manager.
- Set the `provider-id-fmt` [flag](https://github.com/kubernetes-sigs/cluster-api-provider-ibmcloud/blob/5e7f80878f2252c6ab13c16102de90c784a2624d/main.go#L168-L173) to `v2` via the `PROVIDER_ID_FORMAT` environment variable (see the export snippet after this list).
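
As with the other flavors, these can be exported in the shell used to initialise the management cluster and generate the template, for example:
```
export CLUSTER_TOPOLOGY=true
export EXP_CLUSTER_RESOURCE_SET=true
export PROVIDER_ID_FORMAT=v2
```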

```
IBMPOWERVS_CLUSTER_CLASS_NAME="powervs-cc" \
IBMPOWERVS_SSHKEY_NAME="my-pub-key" \
IBMPOWERVS_VIP="192.168.167.6" \
IBMPOWERVS_VIP_EXTERNAL="163.68.65.6" \
IBMPOWERVS_VIP_CIDR="29" \
IBMPOWERVS_IMAGE_NAME="capibm-powervs-centos-streams8-1-26-2" \
IBMPOWERVS_SERVICE_INSTANCE_ID="3229a94c-af54-4212-bf60-6202b6fd0a07" \
IBMPOWERVS_NETWORK_NAME="capi-test" \
IBMACCOUNT_ID="ibm-accountid" \
IBMPOWERVS_REGION="osa" \
IBMPOWERVS_ZONE="osa21" \
BASE64_API_KEY=$(echo -n $IBMCLOUD_API_KEY | base64) \
clusterctl generate cluster ibm-powervs-1 --kubernetes-version v1.26.2 \
--target-namespace default \
--control-plane-machine-count=3 \
--worker-machine-count=1 \
--flavor=powervs-clusterclass | kubectl apply -f -
```