Commit 7705a31

Merge pull request #14 from gocrane/feature/support-github-actions
workflow: update workflow job name to build-and-test
2 parents: 130f4f0 + 10e35fb

41 files changed: +7918 −777 lines

.github/workflows/buildAndTest.yaml

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@ on:
     branches: [ "master" ]
 
 jobs:
-  build:
+  build-and-test:
 
     runs-on: ubuntu-latest
 
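For context, the renamed job sits in the workflow roughly as follows. Only `jobs:`, the job name, and `runs-on: ubuntu-latest` are visible in the hunk; the `on:`/`push:` trigger block around them is an assumed reconstruction from the hunk header.

```yaml
# Assumed surrounding structure; only the jobs block below
# appears in the diff itself.
on:
  push:
    branches: [ "master" ]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
```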

charts/kubeocean/templates/crds/cloud.tencent.com_clusterbindings.yaml

Lines changed: 3 additions & 0 deletions

@@ -17,6 +17,9 @@ spec:
   scope: Cluster
   versions:
   - additionalPrinterColumns:
+    - jsonPath: .spec.clusterID
+      name: ClusterID
+      type: string
     - jsonPath: .status.phase
       name: Phase
       type: string
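The effect of the added printer column: `kubectl get clusterbindings` will now render a CLUSTERID column resolved from each object's `.spec.clusterID`, alongside the existing Phase column. As a rough stand-in (not the CRD machinery itself), the same field extraction can be sketched in plain shell against a sample object modeled on the quick start's cb-worker1; the `/tmp` path is illustrative.

```shell
# Sample ClusterBinding mirroring the cb-worker1 example in docs/quick-start.md.
cat > /tmp/cb-sample.yaml << 'EOF'
apiVersion: cloud.tencent.com/v1beta1
kind: ClusterBinding
metadata:
  name: cb-worker1
spec:
  clusterID: cls-worker1
status:
  phase: Ready
EOF

# kubectl renders one column per additionalPrinterColumns entry;
# here we pull the same two fields by hand.
clusterid=$(awk '/clusterID:/ {print $2}' /tmp/cb-sample.yaml)
phase=$(awk '/phase:/ {print $2}' /tmp/cb-sample.yaml)
printf 'NAME         CLUSTERID     PHASE\ncb-worker1   %s   %s\n' "$clusterid" "$phase"
```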

docs/quick-start.md

Lines changed: 100 additions & 45 deletions
@@ -1,6 +1,11 @@
+---
+cwd: ../
+---
+
 # Quick Start
 
 This document introduces:
+
 - Deploying kubeocean components in a local KIND (kubernetes in docker) cluster
 - Binding two worker clusters into kubeocean and extracting computing resources to form virtual computing nodes
 - Creating Pods on computing nodes that can work normally
@@ -16,90 +21,111 @@ This document introduces:
 ## Build Environment and Deploy kubeocean Components
 
 1. Clone the repository and enter the directory
-```
+
+```sh
 git clone https://github.com/gocrane/kubeocean
 cd kubeocean
 ```
 
 2. Modify inotify kernel parameters to support KIND multi-cluster
-```
+
+```sh
 sudo sysctl fs.inotify.max_user_watches=524288
 sudo sysctl fs.inotify.max_user_instances=512
 ```
 
 3. Build 3 KIND clusters locally
-```
+
+```sh
 make kind-create-all
 ```
+
 The above command will create 3 k8s clusters locally, named kubeocean-manager, kubeocean-worker1 and kubeocean-worker2.
 You can use the following command to switch between different cluster contexts:
-```
-# <clusterName> can be kubeocean-manager, kubeocean-worker1 and kubeocean-worker2
-kubectl config use-context kind-<clusterName>
+
+```sh
+# CLUSTER_NAME can be kubeocean-manager, kubeocean-worker1 or kubeocean-worker2
+export CLUSTER_NAME=kubeocean-worker1
+kubectl config use-context kind-$CLUSTER_NAME
 ```
 
 4. Deploy kubernetes-intranet and kube-dns-intranet Services
-```
+
+```sh
 make kind-deploy-pre
 ```
+
 The above command will deploy kubernetes-intranet and kube-dns-intranet Services in the created kubeocean-manager cluster to prepare for kubeocean component deployment and usage.
 
 5. Deploy kubeocean components in kubeocean-manager cluster
-```
+
+```sh
 # Load images
 KIND_CLUSTER_NAME=kubeocean-manager make kind-load-images
 # Switch to manager cluster and deploy components
 kubectl config use-context kind-kubeocean-manager
+# Get current version
+version=$(git describe --tags --always --dirty)-amd64
+
 # Install components using helm
-version=$(git describe --tags --always --dirty)
 helm upgrade --install kubeocean charts/kubeocean \
 --set global.imageRegistry="ccr.ccs.tencentyun.com/tke-eni-test" \
 --set manager.image.tag=${version} \
 --set syncer.image.tag=${version} \
 --set proxier.image.tag=${version} \
 --wait
 # Or use preset make command to install
-make install-manager
+INSTALL_IMG_TAG=${version} make install-manager
 ```
 
 ## Bind Worker Clusters and Extract Computing Nodes
 
-**Note: Replace kubeocean-worker1 with kubeocean-worker2 in the following commands to complete worker2 cluster binding**
+0. Set environment variables
 
-1. Deploy kubeocean-worker in worker cluster
+```sh
+export CLUSTER_NAME=kubeocean-worker1
+export CLUSTERID=cls-worker1
+# Set CLUSTER_NAME to kubeocean-worker2 and CLUSTERID to cls-worker2, then re-execute to complete the second worker cluster registration
 ```
-kubectl config use-context kind-kubeocean-worker1
+
+1. Deploy kubeocean-worker in worker cluster
+
+```sh
+kubectl config use-context kind-$CLUSTER_NAME
 # Install using helm
 helm upgrade --install kubeocean-worker charts/kubeocean-worker --wait
 # Or use preset make command to install
 make install-worker
 ```
 
 2. Extract kubeconfig from kubeocean-worker
-```
+
+```sh
 # Use script to extract kubeconfig
-bash hack/kubeconfig.sh kubeocean-syncer kubeocean-worker /tmp/kubeconfig-worker1
+bash hack/kubeconfig.sh kubeocean-syncer kubeocean-worker /tmp/kubeconfig-$CLUSTER_NAME
 # Replace APIServer's localhost address with corresponding docker container address
-WORKER1_IP=$(docker inspect kubeocean-worker1-control-plane --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}')
-sed -i "s|server:.*|server: \"https://${WORKER1_IP}:6443\"|" /tmp/kubeconfig-worker1
+WORKER1_IP=$(docker inspect $CLUSTER_NAME-control-plane --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}')
+sed -i "s|server:.*|server: \"https://${WORKER1_IP}:6443\"|" /tmp/kubeconfig-$CLUSTER_NAME
 ```
 
 3. Create related secrets in manager cluster
-```
+
+```sh
 kubectl config use-context kind-kubeocean-manager
-kubectl -nkubeocean-system create secret generic worker1-cluster-kubeconfig --from-file=kubeconfig=/tmp/kubeconfig-worker1
+kubectl -nkubeocean-system create secret generic $CLUSTER_NAME-kubeconfig --from-file=kubeconfig=/tmp/kubeconfig-$CLUSTER_NAME
 ```
 
 4. Bind worker cluster
-```
-# cb1.yaml
+
+```sh
+cat > cb.yaml << EOF
 apiVersion: cloud.tencent.com/v1beta1
 kind: ClusterBinding
 metadata:
-  name: cb-worker1
+  name: cb-$CLUSTER_NAME
   namespace: kubeocean-system
 spec:
-  clusterID: cls-worker1
+  clusterID: $CLUSTERID
   mountNamespace: kubeocean-worker
   nodeSelector:
     nodeSelectorTerms:
@@ -109,37 +135,48 @@ spec:
         values:
         - worker
   secretRef:
-    name: worker1-cluster-kubeconfig
+    name: $CLUSTER_NAME-kubeconfig
     namespace: kubeocean-system
+EOF
+
 ```
+
 Create the above clusterbinding object in manager cluster:
-```
+
+```sh
 kubectl config use-context kind-kubeocean-manager
-kubectl apply -f cb1.yaml
+kubectl apply -f cb.yaml
 ```
+
 After the above command is executed, you can check if the corresponding clusterbinding status is Ready:
+
+```sh
+kubectl get cb cb-$CLUSTER_NAME
 ```
-kubectl get cb cb-worker1
-```
+
 Expected execution result:
+
+```sh
+NAME                   PHASE   AGE
+cb-kubeocean-worker1   Ready   Xs
 ```
-NAME         CLUSTERID     PHASE
-cb-worker1   cls-worker1   Ready
-```
+
 At the same time, after cluster binding, corresponding worker and proxier pods will be synchronously created in the kubeocean-system namespace, which can be viewed with the following command:
-```
+
+```sh
 kubectl -nkubeocean-system get po -owide
 ```
 
 5. Extract computing resources to form virtual nodes
-```
-# rlp1.yaml
+
+```sh
+cat > rlp.yaml << EOF
 apiVersion: cloud.tencent.com/v1beta1
 kind: ResourceLeasingPolicy
 metadata:
-  name: rlp-worker1
+  name: rlp-$CLUSTER_NAME
 spec:
-  cluster: cb-worker1
+  cluster: cb-$CLUSTER_NAME
   forceReclaim: true
   nodeSelector:
     nodeSelectorTerms:
@@ -156,19 +193,26 @@ spec:
     percent: 80 # Take the smaller of 4 CPUs or 80% of available CPUs
   - resource: memory
     percent: 90 # Take 90% of available memory
+EOF
 ```
+
 Create the above ResourceLeasingPolicy object in worker1 cluster to extract computing nodes:
+
+```sh
+kubectl config use-context kind-$CLUSTER_NAME
+kubectl apply -f rlp.yaml
 ```
-kubectl config use-context kind-kubeocean-worker1
-kubectl apply -f rlp1.yaml
-```
+
 After the above command is executed, you can observe in the manager cluster whether computing nodes are extracted normally:
-```
+
+```sh
 kubectl config use-context kind-kubeocean-manager
 kubectl get node
 ```
+
 If nodes starting with vnode are created, it means computing resource extraction is successful:
-```
+
+```sh
 NAME                              STATUS   ROLES           AGE   VERSION
 kubeocean-manager-control-plane   Ready    control-plane   92m   v1.28.0
 kubeocean-manager-worker          Ready    <none>          91m   v1.28.0
@@ -179,8 +223,8 @@ vnode-cls-worker1-kubeocean-worker1-worker2 Ready <none> 5m v1.2
 
 ## Create and Deploy Sample Pod
 
-```
-# job.yaml
+```sh
+cat > job.yaml << EOF
 kind: Job
 apiVersion: batch/v1
 metadata:
@@ -199,9 +243,12 @@ spec:
   tolerations:
   - operator: Exists
     key: kubeocean.io/vnode
+EOF
 ```
+
 Deploy the above job in manager cluster. You can cordon non-virtual nodes for better verification effect:
-```
+
+```sh
 # Pull image
 docker pull busybox:latest
 bin/kind load docker-image busybox:latest --name kubeocean-worker1
@@ -211,8 +258,16 @@ kubectl config use-context kind-kubeocean-manager
 kubectl cordon kubeocean-manager-control-plane kubeocean-manager-worker kubeocean-manager-worker2
 kubectl create -f job.yaml
 ```
-Use `kubectl get po -owide -w` to view the results. You can observe that the job can run and complete normally:
+
+After deployment, use `kubectl` to view the results
+
+```sh
+kubectl get po -owide -w
 ```
+
+You can observe that the job can run and complete normally:
+
+```sh
 NAME             READY   STATUS              RESTARTS   AGE   IP           NODE                                          NOMINATED NODE   READINESS GATES
 test-job-9ln8m   0/1     ContainerCreating   0          3s    <none>       vnode-cls-worker1-kubeocean-worker1-worker2   <none>           <none>
 test-job-9ln8m   1/1     Running             0          8s    10.242.1.2   vnode-cls-worker1-kubeocean-worker1-worker2   <none>           <none>
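Two mechanisms the rewritten quick start now relies on can be rehearsed standalone, outside any cluster: the unquoted `cat << EOF` heredoc expands `$CLUSTER_NAME`/`$CLUSTERID` at generation time, and the `sed` call repoints the extracted kubeconfig's `server:` field at the worker container. A minimal sketch with placeholder `/tmp` paths and a placeholder IP standing in for the `docker inspect` result:

```shell
set -eu
export CLUSTER_NAME=kubeocean-worker1
export CLUSTERID=cls-worker1

# Heredoc expansion, as in "4. Bind worker cluster": variables are
# substituted when the file is written, not when it is applied.
cat > /tmp/cb-demo.yaml << EOF
metadata:
  name: cb-$CLUSTER_NAME
spec:
  clusterID: $CLUSTERID
EOF
grep "clusterID:" /tmp/cb-demo.yaml

# Server-address rewrite, as in "2. Extract kubeconfig"; the IP below is
# a stand-in for the docker inspect output.
WORKER1_IP=172.18.0.3
printf 'clusters:\n- cluster:\n    server: https://127.0.0.1:34567\n' > /tmp/kubeconfig-demo
sed -i "s|server:.*|server: \"https://${WORKER1_IP}:6443\"|" /tmp/kubeconfig-demo
grep "server:" /tmp/kubeconfig-demo
```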
