
Commit a6b8746

Merge pull request #536 from adity1raut/main
📖 docs: remove OCM-type control plane usage examples
2 parents: 8a38a4b + 6715ff2

File tree

2 files changed: +0 −223 lines

docs/contributors.md

Lines changed: 0 additions & 75 deletions
@@ -91,78 +91,3 @@ LATEST_TAG=<tag used for image> make ko-build-push-cmupdate
1. To avoid leaving a time bomb, delete that `brew` branch after it was merged into `main` (the goreleaser will fail to create the new `brew` branch if one already exists).
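The cleanup can be done from the command line; a minimal sketch, assuming the remote is named `origin` and the branch is literally named `brew`:

```shell
# Remove the remote brew branch so goreleaser can recreate it on the next release
git push origin --delete brew

# Remove the local copy as well, if one exists
git branch -D brew
```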
## OCM Images
### Commands to build & publish multicluster-controlplane from fork

At this time the official multicluster-controlplane has issues and a build from a fork is required. We expect to re-evaluate once new images are published by the OCM community.

1. Build and publish the image from the fork:

   ```shell
   git clone https://github.com/pdettori/multicluster-controlplane.git
   cd multicluster-controlplane
   git checkout kubeflex
   make image
   docker tag quay.io/open-cluster-management/multicluster-controlplane:latest quay.io/kubestellar/multicluster-controlplane:v0.2.0-kf-alpha.1
   docker push quay.io/kubestellar/multicluster-controlplane:v0.2.0-kf-alpha.1
   ```

### Commands to package and push a chart to an OCI registry

First clone and build the image as explained in [Commands to build & publish multicluster-controlplane from fork](#commands-to-build--publish-multicluster-controlplane-from-fork), then:

```shell
helm package charts/multicluster-controlplane --version v0.2.0-kf-alpha.1
helm push *.tgz oci://quay.io/kubestellar
```

### Hosting a copy of postgresql chart and image on quay.io

1. Select the desired postgresql chart tag:

   For example `CHART_TAG=13.1.5`.

2. Pull the chart from the Bitnami Docker registry:

   ```shell
   helm pull oci://registry-1.docker.io/bitnamicharts/postgresql:$CHART_TAG
   ```

3. Unpack the downloaded `postgresql-$CHART_TAG.tgz` archive:

   ```shell
   tar xf postgresql-$CHART_TAG.tgz
   ```

4. In the newly created `postgresql` folder, open `values.yaml` and find the `image:` section:

   - Change the `registry:` from `docker.io` to `quay.io`.
   - Change the `repository:` from `bitnami/postgresql` to `kubestellar/postgresql`.
   - Take note of the image tag, for example `IMAGE_TAG=16.0.0-debian-11-r13`.

5. Repackage the chart in a tarball with the same name:

   ```shell
   tar czf postgresql-$CHART_TAG.tgz postgresql/
   ```

6. Push the customized chart to quay.io:

   ```shell
   helm login quay.io
   helm push postgresql-$CHART_TAG.tgz oci://quay.io/kubestellar/charts
   ```

7. Update the postgresql Helm chart reference that is hard-coded in [postgresql.yaml](../chart/templates/postgresql.yaml#L97) to match the quay.io registry, repository, and tag used in the Helm push command of step 6.

8. Make a multi-arch copy of the postgresql container image from `docker.io` to `quay.io` to match the customized chart image reference:

   ```shell
   docker login quay.io
   docker buildx imagetools create --tag quay.io/kubestellar/postgresql:$IMAGE_TAG docker.io/bitnami/postgresql:$IMAGE_TAG
   ```

   The registry, repository, and tag used in this command must match the values included in the customized chart at step 4.
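The manual edits in step 4 can also be scripted; a minimal sketch using `sed -i` (GNU sed form), assuming the default Bitnami layout where `registry:`, `repository:`, and `tag:` appear under the `image:` section of `values.yaml`:

```shell
# Point the chart at the quay.io copies of the image
sed -i 's|registry: docker.io|registry: quay.io|' postgresql/values.yaml
sed -i 's|repository: bitnami/postgresql|repository: kubestellar/postgresql|' postgresql/values.yaml

# Record the image tag for the buildx copy in step 8
IMAGE_TAG=$(grep -m1 'tag:' postgresql/values.yaml | awk '{print $2}')
echo "$IMAGE_TAG"
```

Note that `grep -m1 'tag:'` picks the first `tag:` key in the file, which in this layout is the image tag; verify the value before relying on it.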

docs/users.md

Lines changed: 0 additions & 148 deletions
@@ -464,154 +464,6 @@ NAME TYPE DATA AGE
admin-kubeconfig   Opaque   1      4m47s
```

## Working with an OCM control plane

Let's create an OCM control plane:

```console
$ kflex create cp3 --type ocm
✔ Checking for saved hosting cluster context...
✔ Switching to hosting cluster context...
✔ Creating new control plane cp3...
✔ Waiting for API server to become ready...
```

We may check the CRDs available for the OCM control plane:

```console
$ kubectl get crds
NAME                                                           CREATED AT
addondeploymentconfigs.addon.open-cluster-management.io        2023-07-08T21:17:44Z
addonplacementscores.cluster.open-cluster-management.io        2023-07-08T21:17:44Z
clustermanagementaddons.addon.open-cluster-management.io       2023-07-08T21:17:44Z
managedclusteraddons.addon.open-cluster-management.io          2023-07-08T21:17:44Z
managedclusters.cluster.open-cluster-management.io             2023-07-08T21:17:44Z
managedclustersetbindings.cluster.open-cluster-management.io   2023-07-08T21:17:44Z
managedclustersets.cluster.open-cluster-management.io          2023-07-08T21:17:44Z
manifestworks.work.open-cluster-management.io                  2023-07-08T21:17:44Z
placementdecisions.cluster.open-cluster-management.io          2023-07-08T21:17:44Z
placements.cluster.open-cluster-management.io                  2023-07-08T21:17:44Z
```

We may also register clusters with the OCM control plane and deploy workloads using the `ManifestWork` API. To do that, you first need to install the Open Cluster Management [clusteradm CLI](https://open-cluster-management.io/getting-started/installation/start-the-control-plane/), e.g.:

```shell
curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
```

With the current context set to the OCM control plane, we can use `clusteradm` to retrieve a token used to register managed clusters:

```shell
clusteradm get token --use-bootstrap-token
```

The command returns the `clusteradm join` command to run on the managed cluster (actual token value not shown in the example):

```shell
clusteradm join --hub-token <some value> --hub-apiserver https://cp3.localtest.me:9443/ --cluster-name <cluster_name>
```

Now create a kind cluster to register with OCM:

```shell
kind create cluster --name cluster1
```

Once the cluster is ready, run the command above, replacing `<cluster_name>` with `cluster1` and keeping the actual token value. Most importantly, make sure to add the flag `--force-internal-endpoint-lookup`, which allows the managed cluster to communicate with the OCM control plane over the docker network that all kind clusters share. Note that the `kind create cluster` command switches the context to the new cluster `kind-cluster1`, so the `clusteradm join` command is run using the new cluster context.

```shell
clusteradm join --hub-token <some value> --hub-apiserver https://cp3.localtest.me:9443/ --cluster-name cluster1 --force-internal-endpoint-lookup
```

At this point, switch the context back to the OCM control plane with the command:

```shell
kflex ctx cp3
```

and verify that a Certificate Signing Request (CSR) has been created on the OCM control plane by running `kubectl get csr`. The CSR is part of the mechanism used by OCM to register a new cluster. You should see output similar to the following:

```console
$ kubectl get csr
NAME             AGE   SIGNERNAME                            REQUESTOR                 REQUESTEDDURATION   CONDITION
cluster1-zx5x5   7s    kubernetes.io/kube-apiserver-client   system:bootstrap:j5bork   <none>              Pending
```

Approve the CSR to complete the cluster registration with the command:

```shell
clusteradm accept --clusters cluster1
```

You can now see the new cluster in the OCM inventory:

```console
$ kubectl get managedclusters
NAME       HUB ACCEPTED   MANAGED CLUSTER URLS                  JOINED   AVAILABLE   AGE
cluster1   true           https://cluster1-control-plane:6443   True     True        3m25s
```

Finally, you may deploy a workload on the managed cluster using the ManifestWork API:

```shell
kubectl apply -f - <<EOF
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace: cluster1
  name: deployment1
spec:
  workload:
    manifests:
    - apiVersion: v1
      kind: ServiceAccount
      metadata:
        namespace: default
        name: my-sa
    - apiVersion: apps/v1
      kind: Deployment
      metadata:
        namespace: default
        name: nginx-deployment
        labels:
          app: nginx
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            serviceAccountName: my-sa
            containers:
            - name: nginx
              image: nginx:1.14.2
              ports:
              - containerPort: 80
EOF
```

To check that the workload has been deployed, switch the context back to the managed cluster and list the deployments:

```shell
kflex ctx kind-cluster1
```

```console
$ kubectl get deployments.apps
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           20s
```

## Working with a vcluster control plane
Let's create a vcluster control plane:
