Commit 7db0b04

FrankYang0529, dariavladykina, and jillian-maroket authored
feat: add fleet doc (#867)
Signed-off-by: PoAn Yang <[email protected]>
Signed-off-by: PoAn Yang <[email protected]>
Co-authored-by: Daria Vladykina <[email protected]>
Co-authored-by: Jillian Maroket <[email protected]>
1 parent 8c64809 commit 7db0b04

File tree

1 file changed: +354 additions, −1 deletion

docs/troubleshooting/installation.md

Lines changed: 354 additions & 1 deletion
@@ -119,7 +119,7 @@ Please include the following information in a bug report when reporting a failed

```
supportconfig -k -c
```

The command output contains the path of the generated tarball. In the following example, the path is `/var/log/scc_aaa_220520_1021_804d65d-c9ba-4c54-b12d-859631f892c5.txz`:

![](/img/v1.2/troubleshooting/installation-support-config-example.png)

@@ -129,3 +129,356 @@ Please include the following information in a bug report when reporting a failed

A failed PXE boot installation automatically generates a tarball if the [`install.debug`](../install/harvester-configuration.md#installdebug) field is set to `true` in the Harvester configuration file.

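A minimal sketch of the relevant setting in the Harvester configuration file (only the field documented at the link above; everything else is omitted):

```yaml
# Harvester configuration file (excerpt): enable installer debug output
install:
  debug: true
```
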
:::

## Check Chart Status

Harvester uses the following chart CRDs:

- `HelmChart`: Maintains RKE2 charts.

  - `rke2-runtimeclasses`
  - `rke2-multus`
  - `rke2-metrics-server`
  - `rke2-ingress-nginx`
  - `rke2-coredns`
  - `rke2-canal`

- `ManagedChart`: Manages Rancher and Harvester charts.

  - `rancher-monitoring-crd`
  - `rancher-logging-crd`
  - `kubeovn-operator-crd`
  - `harvester-crd`
  - `harvester`

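To inspect the chart custom resources directly, you can query both CRDs; a minimal sketch (assuming the `helm.cattle.io` and `management.cattle.io` API groups used by RKE2 and Rancher):

```shell
# RKE2 charts are HelmChart objects
kubectl get helmcharts.helm.cattle.io -A

# Rancher and Harvester charts are ManagedChart objects
kubectl get managedcharts.management.cattle.io -A
```
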
You can use the `helm list -A` command to retrieve a list of installed charts.

Example output:

```shell
NAME                                          NAMESPACE                        REVISION  UPDATED                                  STATUS    CHART                                      APP VERSION
fleet                                         cattle-fleet-system              4         2025-09-24 09:07:10.801764068 +0000 UTC  deployed  fleet-107.0.0+up0.13.0                     0.13.0
fleet-agent-local                             cattle-fleet-local-system        1         2025-09-24 08:59:28.686781982 +0000 UTC  deployed  fleet-agent-local-v0.0.0+s-d4f65a6f642cca930c78e6e2f0d3f9bbb7d3ba47cf1cce34ac3d6b8770ce5
fleet-crd                                     cattle-fleet-system              1         2025-09-24 08:58:28.396419747 +0000 UTC  deployed  fleet-crd-107.0.0+up0.13.0                 0.13.0
harvester                                     harvester-system                 1         2025-09-24 08:59:37.718646669 +0000 UTC  deployed  harvester-0.0.0-master-ac070598            master-ac070598
harvester-crd                                 harvester-system                 1         2025-09-24 08:59:35.341316526 +0000 UTC  deployed  harvester-crd-0.0.0-master-ac070598        master-ac070598
kubeovn-operator-crd                          kube-system                      1         2025-09-24 08:59:34.783356576 +0000 UTC  deployed  kubeovn-operator-crd-1.13.13               v1.13.13
mcc-local-managed-system-upgrade-controller   cattle-system                    1         2025-09-24 08:59:10.656784284 +0000 UTC  deployed  system-upgrade-controller-107.0.0          v0.16.0
rancher                                       cattle-system                    1         2025-09-24 08:57:20.690330683 +0000 UTC  deployed  rancher-2.12.0                             8815e66-dirty
rancher-logging-crd                           cattle-logging-system            1         2025-09-24 08:59:36.262080367 +0000 UTC  deployed  rancher-logging-crd-107.0.1+up4.10.0-rancher.10
rancher-monitoring-crd                        cattle-monitoring-system         1         2025-09-24 08:59:35.287099045 +0000 UTC  deployed  rancher-monitoring-crd-107.1.0+up69.8.2-rancher.15
rancher-provisioning-capi                     cattle-provisioning-capi-system  1         2025-09-24 08:59:00.561162307 +0000 UTC  deployed  rancher-provisioning-capi-107.0.0+up0.8.0  1.10.2
rancher-webhook                               cattle-system                    2         2025-09-24 09:02:38.774660489 +0000 UTC  deployed  rancher-webhook-107.0.0+up0.8.0            0.8.0
rke2-canal                                    kube-system                      1         2025-09-24 08:57:25.248839867 +0000 UTC  deployed  rke2-canal-v3.30.2-build2025071100         v3.30.2
rke2-coredns                                  kube-system                      1         2025-09-24 08:57:25.341016864 +0000 UTC  deployed  rke2-coredns-1.42.302                      1.12.2
rke2-ingress-nginx                            kube-system                      3         2025-09-24 09:01:31.331647555 +0000 UTC  deployed  rke2-ingress-nginx-4.12.401                1.12.4
rke2-metrics-server                           kube-system                      1         2025-09-24 08:57:42.162046899 +0000 UTC  deployed  rke2-metrics-server-3.12.203               0.7.2
rke2-multus                                   kube-system                      1         2025-09-24 08:57:25.341560394 +0000 UTC  deployed  rke2-multus-v4.2.106                       4.2.1
rke2-runtimeclasses                           kube-system                      1         2025-09-24 08:57:40.137168056 +0000 UTC  deployed  rke2-runtimeclasses-0.1.000                0.1.0
```

### HelmChart CRD

`HelmChart` resources are installed by Kubernetes jobs. You can determine the name and status of each job by running the following command on a Harvester node:

```shell
$ kubectl get helmcharts -A -o jsonpath='{range .items[*]}{"Namespace: "}{.metadata.namespace}{"\nName: "}{.metadata.name}{"\nStatus:\n"}{range .status.conditions[*]}{" - Type: "}{.type}{"\n   Status: "}{.status}{"\n   Reason: "}{.reason}{"\n   Message: "}{.message}{"\n"}{end}{"JobName: "}{.status.jobName}{"\n\n"}{end}'
```

Example output:

```shell
Namespace: kube-system
Name: rke2-canal
Status:
 - Type: JobCreated
   Status: True
   Reason: Job created
   Message: Applying HelmChart using Job kube-system/helm-install-rke2-canal
 - Type: Failed
   Status: False
   Reason:
   Message:
JobName: helm-install-rke2-canal

Namespace: kube-system
Name: rke2-coredns
Status:
 - Type: JobCreated
   Status: True
   Reason: Job created
   Message: Applying HelmChart using Job kube-system/helm-install-rke2-coredns
 - Type: Failed
   Status: False
   Reason:
   Message:
JobName: helm-install-rke2-coredns

Namespace: kube-system
Name: rke2-ingress-nginx
Status:
 - Type: JobCreated
   Status: True
   Reason: Job created
   Message: Applying HelmChart using Job kube-system/helm-install-rke2-ingress-nginx
 - Type: Failed
   Status: False
   Reason:
   Message:
JobName: helm-install-rke2-ingress-nginx

Namespace: kube-system
Name: rke2-metrics-server
Status:
 - Type: JobCreated
   Status: True
   Reason: Job created
   Message: Applying HelmChart using Job kube-system/helm-install-rke2-metrics-server
 - Type: Failed
   Status: False
   Reason:
   Message:
JobName: helm-install-rke2-metrics-server

Namespace: kube-system
Name: rke2-multus
Status:
 - Type: JobCreated
   Status: True
   Reason: Job created
   Message: Applying HelmChart using Job kube-system/helm-install-rke2-multus
 - Type: Failed
   Status: False
   Reason:
   Message:
JobName: helm-install-rke2-multus

Namespace: kube-system
Name: rke2-runtimeclasses
Status:
 - Type: JobCreated
   Status: True
   Reason: Job created
   Message: Applying HelmChart using Job kube-system/helm-install-rke2-runtimeclasses
 - Type: Failed
   Status: False
   Reason:
   Message:
JobName: helm-install-rke2-runtimeclasses
```

You can use the information in the following ways:

- Determine the cause of a failed job: Check the `Reason` and `Message` values of the `Failed` condition.
- Rerun a job: Remove the `status` field of the corresponding `HelmChart` resource; the controller then deploys a new job (see the sketch below).

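For example, assuming the failed chart is `rke2-canal` (substitute the chart name from your output), a minimal sketch of both steps:

```shell
# Inspect the install job's pod logs for failure details
kubectl logs -n kube-system job/helm-install-rke2-canal

# Rerun the job: delete the entire `status:` block in the editor;
# on save, the controller creates a new install job
kubectl edit helmchart rke2-canal -n kube-system
```
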
### ManagedChart CRD

Rancher uses [Fleet](https://fleet.rancher.io/) to install charts on target clusters. Harvester has only one target cluster (`fleet-local/local`).

Fleet deploys an agent on each target cluster via `helm install`, so you can find the `fleet-agent-local` chart in the `helm list -A` output. The `cluster.fleet.cattle.io` resource contains the agent's status.

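You can retrieve this resource with a command like the following (name and namespace taken from the example below):

```shell
kubectl get clusters.fleet.cattle.io local -n fleet-local -o yaml
```
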
```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: Cluster
metadata:
  name: local
  namespace: fleet-local
spec:
  agentAffinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: fleet.cattle.io/agent
            operator: In
            values:
            - "true"
        weight: 1
  agentNamespace: cattle-fleet-local-system
  clientID: xd8cgpm2gq5w25qf46r8ml6qxvhsg858g64s5k7wj5h947vs5sxbwd
  kubeConfigSecret: local-kubeconfig
  kubeConfigSecretNamespace: fleet-local
  redeployAgentGeneration: 1
status:
  agent:
    lastSeen: "2025-09-01T07:09:28Z"
    namespace: cattle-fleet-local-system
  agentAffinityHash: f50425c0999a8e18c2d104cdb8cb063762763f232f538b5a7c8bdb61
  agentDeployedGeneration: 1
  agentMigrated: true
  agentNamespaceMigrated: true
  agentTLSMode: system-store
  apiServerCAHash: 158866807fdf372a1f1946bb72d0fbcdd66e0e63c4799f9d4df0e18b
  apiServerURL: https://10.53.0.1:443
  cattleNamespaceMigrated: true
  conditions:
  - lastUpdateTime: "2025-08-28T04:43:02Z"
    status: "True"
    type: Processed
  - lastUpdateTime: "2025-08-28T10:08:31Z"
    status: "True"
    type: Imported
  - lastUpdateTime: "2025-08-28T10:08:30Z"
    status: "True"
    type: Reconciled
  - lastUpdateTime: "2025-08-28T10:09:30Z"
    status: "True"
    type: Ready
```

Rancher converts each `ManagedChart` resource into a `Bundle` resource with an `mcc-` prefix. The Fleet agent watches for `Bundle` resources and deploys them to the target cluster. The `BundleDeployment` resource contains the deployment status.

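Because of this naming convention, you can list the converted bundles directly; a minimal sketch (assuming the bundles live in the `fleet-local` namespace, as in this cluster):

```shell
# Bundles created from ManagedChart resources carry the mcc- prefix
kubectl get bundles -n fleet-local
```
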
The Fleet controller does not push data to the agent. Instead, the agent polls `Bundle` resource data from the cluster on which the Fleet controller is installed. In Harvester, the Fleet controller and agent are on the same cluster, so network issues are not a concern.

```shell
$ kubectl get bundledeployments -A -o jsonpath='{range .items[*]}{"Namespace: "}{.metadata.namespace}{"\nName: "}{.metadata.name}{"\nStatus:\n"}{range .status.conditions[*]}{" - Type: "}{.type}{"\n   Status: "}{.status}{"\n   Reason: "}{.reason}{"\n   Message: "}{.message}{"\n"}{end}{"\n"}{end}'

Namespace: cluster-fleet-local-local-1a3d67d0a899
Name: fleet-agent-local
Status:
 - Type: Installed
   Status: True
   Reason:
   Message:
 - Type: Deployed
   Status: True
   Reason:
   Message:
 - Type: Ready
   Status: True
   Reason:
   Message:
 - Type: Monitored
   Status: True
   Reason:
   Message:

Namespace: cluster-fleet-local-local-1a3d67d0a899
Name: mcc-harvester
Status:
 - Type: Installed
   Status: True
   Reason:
   Message:
 - Type: Deployed
   Status: True
   Reason:
   Message:
 - Type: Ready
   Status: True
   Reason:
   Message:
 - Type: Monitored
   Status: True
   Reason:
   Message:

Namespace: cluster-fleet-local-local-1a3d67d0a899
Name: mcc-harvester-crd
Status:
 - Type: Installed
   Status: True
   Reason:
   Message:
 - Type: Deployed
   Status: True
   Reason:
   Message:
 - Type: Ready
   Status: True
   Reason:
   Message:
 - Type: Monitored
   Status: True
   Reason:
   Message:

Namespace: cluster-fleet-local-local-1a3d67d0a899
Name: mcc-kubeovn-operator-crd
Status:
 - Type: Installed
   Status: True
   Reason:
   Message:
 - Type: Deployed
   Status: True
   Reason:
   Message:
 - Type: Ready
   Status: True
   Reason:
   Message:
 - Type: Monitored
   Status: True
   Reason:
   Message:

Namespace: cluster-fleet-local-local-1a3d67d0a899
Name: mcc-rancher-logging-crd
Status:
 - Type: Installed
   Status: True
   Reason:
   Message:
 - Type: Deployed
   Status: True
   Reason:
   Message:
 - Type: Ready
   Status: True
   Reason:
   Message:
 - Type: Monitored
   Status: True
   Reason:
   Message:

Namespace: cluster-fleet-local-local-1a3d67d0a899
Name: mcc-rancher-monitoring-crd
Status:
 - Type: Installed
   Status: True
   Reason:
   Message:
 - Type: Deployed
   Status: True
   Reason:
   Message:
 - Type: Ready
   Status: True
   Reason:
   Message:
 - Type: Monitored
   Status: True
   Reason:
   Message:
```

If you change the `harvester-system/harvester` deployment image, the Fleet agent detects the change and updates the corresponding status in the `BundleDeployment` resource.

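For example, a hypothetical way to trigger such a change (the container name `apiserver` and the image are taken from the status message below):

```shell
# Point the Harvester API server at a different image; the Fleet agent records the drift
kubectl -n harvester-system set image deployment/harvester apiserver=frankyang/harvester:fix-renovate-head
```
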
Example of the resulting `BundleDeployment` status:

```yaml
463+
status:
464+
appliedDeploymentID: s-89f9ce3f33c069befb4ebdceaa103af7b71db0e70a39760cb6653366964e5:1cd9188211e318033f89b77acf7b996
465+
e5bb3d9a25319528c47dc052528056f78
466+
conditions:
467+
- lastUpdateTime: "2025-08-28T04:44:18Z"
468+
status: "True"
469+
type: Installed
470+
- lastUpdateTime: "2025-08-28T04:44:18Z"
471+
status: "True"
472+
type: Deployed
473+
- lastUpdateTime: "2025-09-01T07:40:28Z"
474+
message: deployment.apps harvester-system/harvester modified {"spec":{"template":{"spec":{"containers":[{"env":[{"
475+
name":"HARVESTER_SERVER_HTTPS_PORT","value":"8443"},{"name":"HARVESTER_DEBUG","value":"false"},{"name":"HARVESTER_SERV
476+
ER_HTTP_PORT","value":"0"},{"name":"HCI_MODE","value":"true"},{"name":"RANCHER_EMBEDDED","value":"true"},{"name":"HARV
477+
ESTER_SUPPORT_BUNDLE_IMAGE_DEFAULT_VALUE","value":"{\"repository\":\"rancher/support-bundle-kit\",\"tag\":\"master-hea
478+
d\",\"imagePullPolicy\":\"IfNotPresent\"}"},{"name":"NAMESPACE","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath"
479+
:"metadata.namespace"}}}],"image":"frankyang/harvester:fix-renovate-head","imagePullPolicy":"IfNotPresent","name":"api
480+
server","ports":[{"containerPort":8443,"name":"https","protocol":"TCP"},{"containerPort":6060,"name":"profile","protoc
481+
ol":"TCP"}],"resources":{"requests":{"cpu":"250m","memory":"256Mi"}},"securityContext":{"appArmorProfile":{"type":"Unc
482+
onfined"},"capabilities":{"add":["SYS_ADMIN"]}},"terminationMessagePath":"/dev/termination-log","terminationMessagePol
483+
icy":"File"}]}}}}
484+
```
