2 changes: 1 addition & 1 deletion .github/workflows/buildAndTest.yaml
@@ -7,7 +7,7 @@ on:
    branches: [ "master" ]

jobs:
  build-and-test:

    runs-on: ubuntu-latest

@@ -17,6 +17,9 @@ spec:
  scope: Cluster
  versions:
  - additionalPrinterColumns:
    - jsonPath: .spec.clusterID
      name: ClusterID
      type: string
    - jsonPath: .status.phase
      name: Phase
      type: string
145 changes: 100 additions & 45 deletions docs/quick-start.md
@@ -1,6 +1,11 @@
---
cwd: ../
---

# Quick Start

This document introduces:

- Deploying kubeocean components in a local KIND (Kubernetes in Docker) cluster
- Binding two worker clusters into kubeocean and extracting computing resources to form virtual computing nodes
- Creating Pods on computing nodes that can work normally
@@ -16,90 +21,111 @@ This document introduces:
## Build Environment and Deploy kubeocean Components

1. Clone the repository and enter the directory

```sh
git clone https://github.com/gocrane/kubeocean
cd kubeocean
```

2. Modify inotify kernel parameters to support KIND multi-cluster

```sh
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
```
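To confirm the change took effect, the limits can be read back without root via procfs (standard Linux paths; a generic check, not a kubeocean script):

```sh
# Read back the current inotify limits (no root required)
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances
```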

3. Build 3 KIND clusters locally

```sh
make kind-create-all
```

The above command will create 3 k8s clusters locally, named kubeocean-manager, kubeocean-worker1 and kubeocean-worker2.
You can use the following command to switch between different cluster contexts:

```sh
# CLUSTER_NAME can be kubeocean-manager, kubeocean-worker1, or kubeocean-worker2
export CLUSTER_NAME=kubeocean-worker1
kubectl config use-context kind-$CLUSTER_NAME
```

4. Deploy kubernetes-intranet and kube-dns-intranet Services

```sh
make kind-deploy-pre
```

The above command deploys the kubernetes-intranet and kube-dns-intranet Services in the kubeocean-manager cluster to prepare for deploying and using the kubeocean components.

5. Deploy kubeocean components in kubeocean-manager cluster

```sh
# Load images
KIND_CLUSTER_NAME=kubeocean-manager make kind-load-images
# Switch to manager cluster and deploy components
kubectl config use-context kind-kubeocean-manager
# Get current version
version=$(git describe --tags --always --dirty)-amd64

# Install components using helm
helm upgrade --install kubeocean charts/kubeocean \
--set global.imageRegistry="ccr.ccs.tencentyun.com/tke-eni-test" \
--set manager.image.tag=${version} \
--set syncer.image.tag=${version} \
--set proxier.image.tag=${version} \
--wait
# Or use preset make command to install
INSTALL_IMG_TAG=${version} make install-manager
```
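The image tag above is derived from `git describe`. As a toy illustration of the format it produces (throwaway repo and made-up tag, not the kubeocean repository):

```sh
# Create a throwaway repo with one tagged commit and inspect the tag format
cd "$(mktemp -d)"
git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git tag v0.1.0
git describe --tags --always --dirty
# → v0.1.0  (later commits would yield e.g. v0.1.0-2-gabc1234)
```

Uncommitted changes append a `-dirty` suffix, which is why the quick start captures the value once into `$version` and reuses it for every chart value.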

## Bind Worker Clusters and Extract Computing Nodes

0. Set environment variables

```sh
export CLUSTER_NAME=kubeocean-worker1
export CLUSTERID=cls-worker1
# Set CLUSTER_NAME to kubeocean-worker2 and CLUSTERID to cls-worker2, then re-execute the following steps to register the second worker cluster
```

1. Deploy kubeocean-worker in worker cluster

```sh
kubectl config use-context kind-$CLUSTER_NAME
# Install using helm
helm upgrade --install kubeocean-worker charts/kubeocean-worker --wait
# Or use preset make command to install
make install-worker
```

2. Extract kubeconfig from kubeocean-worker

```sh
# Use script to extract kubeconfig
bash hack/kubeconfig.sh kubeocean-syncer kubeocean-worker /tmp/kubeconfig-$CLUSTER_NAME
# Replace the API server's localhost address with the corresponding Docker container address
WORKER1_IP=$(docker inspect $CLUSTER_NAME-control-plane --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}')
sed -i "s|server:.*|server: \"https://${WORKER1_IP}:6443\"|" /tmp/kubeconfig-$CLUSTER_NAME
```
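The `sed` command above only rewrites the `server:` field of the extracted kubeconfig. A self-contained sketch of that rewrite against a dummy file (the path, cluster name, and IP are illustrative, not taken from the real clusters):

```sh
# Dummy kubeconfig pointing at a localhost API server (illustrative)
cat > /tmp/kubeconfig-demo << 'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:40143
  name: demo-worker
EOF

# In the real flow this address comes from `docker inspect`
WORKER_IP=172.18.0.2
sed -i "s|server:.*|server: \"https://${WORKER_IP}:6443\"|" /tmp/kubeconfig-demo

grep 'server:' /tmp/kubeconfig-demo   # now points at the container IP
```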

3. Create related secrets in manager cluster

```sh
kubectl config use-context kind-kubeocean-manager
kubectl -nkubeocean-system create secret generic $CLUSTER_NAME-kubeconfig --from-file=kubeconfig=/tmp/kubeconfig-$CLUSTER_NAME
```

4. Bind worker cluster

```sh
cat > cb.yaml << EOF
apiVersion: cloud.tencent.com/v1beta1
kind: ClusterBinding
metadata:
  name: cb-$CLUSTER_NAME
  namespace: kubeocean-system
spec:
  clusterID: $CLUSTERID
  mountNamespace: kubeocean-worker
  nodeSelector:
    nodeSelectorTerms:
Expand All @@ -109,37 +135,48 @@ spec:
        values:
        - worker
  secretRef:
    name: $CLUSTER_NAME-kubeconfig
    namespace: kubeocean-system
EOF

```
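Note that the heredoc delimiter `EOF` above is unquoted, so the shell substitutes `$CLUSTER_NAME` and `$CLUSTERID` as the manifest is written. A minimal illustration of the difference (the variable value is just an example):

```sh
export CLUSTER_NAME=kubeocean-worker1

# Unquoted delimiter: the variable is expanded
cat << EOF
name: cb-$CLUSTER_NAME
EOF
# → name: cb-kubeocean-worker1

# Quoted delimiter: the text is written literally
cat << 'EOF'
name: cb-$CLUSTER_NAME
EOF
# → name: cb-$CLUSTER_NAME
```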

Create the above ClusterBinding object in the manager cluster:

```sh
kubectl config use-context kind-kubeocean-manager
kubectl apply -f cb.yaml
```

After the above command is executed, you can check whether the corresponding ClusterBinding status is Ready:

```sh
kubectl get cb cb-$CLUSTER_NAME
```

Expected execution result:

```sh
NAME                   CLUSTERID     PHASE
cb-kubeocean-worker1   cls-worker1   Ready
```

At the same time, after the cluster is bound, the corresponding worker and proxier pods are created in the kubeocean-system namespace; they can be viewed with the following command:

```sh
kubectl -nkubeocean-system get po -owide
```

5. Extract computing resources to form virtual nodes

```sh
cat > rlp.yaml << EOF
apiVersion: cloud.tencent.com/v1beta1
kind: ResourceLeasingPolicy
metadata:
  name: rlp-$CLUSTER_NAME
spec:
  cluster: cb-$CLUSTER_NAME
  forceReclaim: true
  nodeSelector:
    nodeSelectorTerms:
Expand All @@ -156,19 +193,26 @@ spec:
    percent: 80 # Take the smaller of 4 CPUs or 80% of available CPUs
  - resource: memory
    percent: 90 # Take 90% of available memory
EOF
```
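Reading the cpu entry above, the leased amount appears to be the minimum of the absolute limit (4 CPUs) and the percentage of available CPU; the min() rule is inferred from the comment, not from kubeocean source. A quick arithmetic sketch:

```sh
# Assumed leasing math: leased = min(limit, allocatable * percent / 100)
ALLOCATABLE_CPU=8   # hypothetical node with 8 allocatable CPUs
LIMIT=4
PERCENT=80

BY_PERCENT=$(( ALLOCATABLE_CPU * PERCENT / 100 ))      # 80% of 8 = 6
LEASED=$(( BY_PERCENT < LIMIT ? BY_PERCENT : LIMIT ))  # min(6, 4)
echo "leased CPUs: $LEASED"
# → leased CPUs: 4
```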

Create the above ResourceLeasingPolicy object in the worker cluster to extract computing nodes:

```sh
kubectl config use-context kind-$CLUSTER_NAME
kubectl apply -f rlp.yaml
```

After the above command is executed, you can check in the manager cluster whether the computing nodes were extracted:

```sh
kubectl config use-context kind-kubeocean-manager
kubectl get node
```

If nodes starting with vnode are created, computing resource extraction succeeded:

```sh
NAME STATUS ROLES AGE VERSION
kubeocean-manager-control-plane Ready control-plane 92m v1.28.0
kubeocean-manager-worker Ready <none> 91m v1.28.0
@@ -179,8 +223,8 @@ vnode-cls-worker1-kubeocean-worker1-worker2 Ready <none> 5m v1.2

## Create and Deploy Sample Pod

```sh
cat > job.yaml << EOF
kind: Job
apiVersion: batch/v1
metadata:
@@ -199,9 +243,12 @@ spec:
      tolerations:
      - operator: Exists
        key: kubeocean.io/vnode
EOF
```

Deploy the above job in the manager cluster. You can cordon the non-virtual nodes first so that the job is scheduled onto a virtual node:

```sh
# Pull image
docker pull busybox:latest
bin/kind load docker-image busybox:latest --name kubeocean-worker1
@@ -211,8 +258,16 @@ kubectl config use-context kind-kubeocean-manager
kubectl cordon kubeocean-manager-control-plane kubeocean-manager-worker kubeocean-manager-worker2
kubectl create -f job.yaml
```

After deployment, use `kubectl` to watch the results:

```sh
kubectl get po -owide -w
```

You can observe that the job runs and completes normally:

```sh
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-job-9ln8m 0/1 ContainerCreating 0 3s <none> vnode-cls-worker1-kubeocean-worker1-worker2 <none> <none>
test-job-9ln8m 1/1 Running 0 8s 10.242.1.2 vnode-cls-worker1-kubeocean-worker1-worker2 <none> <none>