
Commit 1c6045c

doc(upgrade): add known issue for v1.4.2 to v1.4.3 upgrade path (#770)
* doc(upgrade): add known issue for v1.4.2 to v1.4.3 upgrade path

  Signed-off-by: Zespre Chang <[email protected]>

* fix(upgrade): correct typos

  Signed-off-by: Zespre Chang <[email protected]>

---------

Signed-off-by: Zespre Chang <[email protected]>
1 parent b277277 commit 1c6045c

File tree

12 files changed (+121 / -11 lines changed)

docs/upgrade/troubleshooting.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 11
+sidebar_position: 12
 sidebar_label: Troubleshooting
 title: "Troubleshooting"
 ---

docs/upgrade/v1-1-2-to-v1-2-0.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 10
+sidebar_position: 11
 sidebar_label: Upgrade from v1.1.2 to v1.2.0 (not recommended)
 title: "Upgrade from v1.1.2 to v1.2.0 (not recommended)"
 ---

docs/upgrade/v1-2-0-to-v1-2-1.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 9
+sidebar_position: 10
 sidebar_label: Upgrade from v1.1.2/v1.1.3/v1.2.0 to v1.2.1
 title: "Upgrade from v1.1.2/v1.1.3/v1.2.0 to v1.2.1"
 ---

docs/upgrade/v1-2-1-to-v1-2-2.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 8
+sidebar_position: 9
 sidebar_label: Upgrade from v1.2.1 to v1.2.2
 title: "Upgrade from v1.2.1 to v1.2.2"
 ---

docs/upgrade/v1-2-2-to-v1-3-1.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 7
+sidebar_position: 8
 sidebar_label: Upgrade from v1.2.2/v1.3.0 to v1.3.1
 title: "Upgrade from v1.2.2/v1.3.0 to v1.3.1"
 ---

docs/upgrade/v1-3-1-to-v1-3-2.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 6
+sidebar_position: 7
 sidebar_label: Upgrade from v1.3.1 to v1.3.2
 title: "Upgrade from v1.3.1 to v1.3.2"
 ---

docs/upgrade/v1-3-2-to-v1-4-0.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 5
+sidebar_position: 6
 sidebar_label: Upgrade from v1.3.2 to v1.4.0
 title: "Upgrade from v1.3.2 to v1.4.0"
 ---

docs/upgrade/v1-4-0-to-v1-4-1.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 4
+sidebar_position: 5
 sidebar_label: Upgrade from v1.4.0 to v1.4.1
 title: "Upgrade from v1.4.0 to v1.4.1"
 ---

docs/upgrade/v1-4-1-to-v1-4-2.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 3
+sidebar_position: 4
 sidebar_label: Upgrade from v1.4.1 to v1.4.2
 title: "Upgrade from v1.4.1 to v1.4.2"
 ---

docs/upgrade/v1-4-2-to-v1-4-3.md

Lines changed: 110 additions & 0 deletions
@@ -0,0 +1,110 @@
---
sidebar_position: 3
sidebar_label: Upgrade from v1.4.2 to v1.4.3
title: "Upgrade from v1.4.2 to v1.4.3"
---

<head>
  <link rel="canonical" href="https://docs.harvesterhci.io/v1.4/upgrade/v1-4-2-to-v1-4-3"/>
</head>

## General information

An **Upgrade** button appears on the **Dashboard** screen whenever a new Harvester version that you can upgrade to becomes available. For more information, see [Start an upgrade](./automatic.md#start-an-upgrade).

For air-gapped environments, see [Prepare an air-gapped upgrade](./automatic.md#prepare-an-air-gapped-upgrade).

## Known issues

---

### 1. Air-gapped upgrade stuck with `ImagePullBackOff` error in Fluentd and Fluent Bit pods

The upgrade may become stuck at the very beginning of the process, as indicated by 0% progress and items marked **Pending** in the **Upgrade** dialog of the Harvester UI.

![](/img/v1.5/upgrade/upgrade-dialog-with-empty-status.png)

Specifically, Fluentd and Fluent Bit pods may become stuck in the `ImagePullBackOff` status. To check the status of the pods, run the following commands:

```bash
$ kubectl -n harvester-system get upgrades -l harvesterhci.io/latestUpgrade=true
NAME                 AGE
hvst-upgrade-x2hz8   7m14s

$ kubectl -n harvester-system get upgradelogs -l harvesterhci.io/upgrade=hvst-upgrade-x2hz8
NAME                            UPGRADE
hvst-upgrade-x2hz8-upgradelog   hvst-upgrade-x2hz8

$ kubectl -n harvester-system get pods -l harvesterhci.io/upgradeLog=hvst-upgrade-x2hz8-upgradelog
NAME                                                        READY   STATUS             RESTARTS   AGE
hvst-upgrade-x2hz8-upgradelog-downloader-6cdb864dd9-6bw98   1/1     Running            0          7m7s
hvst-upgrade-x2hz8-upgradelog-infra-fluentbit-2nq7q         0/1     ImagePullBackOff   0          7m42s
hvst-upgrade-x2hz8-upgradelog-infra-fluentbit-697wf         0/1     ImagePullBackOff   0          7m42s
hvst-upgrade-x2hz8-upgradelog-infra-fluentbit-kd8kl         0/1     ImagePullBackOff   0          7m42s
hvst-upgrade-x2hz8-upgradelog-infra-fluentd-0               0/2     ImagePullBackOff   0          7m42s
```

This occurs because the following container images are neither preloaded in the cluster nodes nor pulled from the internet:

- `ghcr.io/kube-logging/fluentd:v1.15-ruby3`
- `ghcr.io/kube-logging/config-reloader:v0.0.5`
- `fluent/fluent-bit:2.1.8`
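
Before applying any of the fixes, you can optionally confirm on a cluster node that the images are indeed absent. The following is a minimal check, assuming containerd's `k8s.io` namespace (the same namespace used by the image import command later in this section):

```bash
# Run on a cluster node; empty output means the images are not preloaded
ctr -n k8s.io images ls | grep -E 'kube-logging/fluentd|kube-logging/config-reloader|fluent/fluent-bit'
```
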
To fix the issue, perform any of the following actions:

- Update the Logging CR to use the images that are already preloaded in the cluster nodes. To do this, run the following commands against the cluster:

  ```bash
  # Get the Logging CR names
  OPERATOR_LOGGING_NAME=$(kubectl get loggings -l app.kubernetes.io/name=rancher-logging -o jsonpath="{.items[0].metadata.name}")
  INFRA_LOGGING_NAME=$(kubectl get loggings -l harvesterhci.io/upgradeLogComponent=infra -o jsonpath="{.items[0].metadata.name}")

  # Gather image info from the operator's Logging CR
  FLUENTD_IMAGE_REPO=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentd.image.repository}")
  FLUENTD_IMAGE_TAG=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentd.image.tag}")

  FLUENTBIT_IMAGE_REPO=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentbit.image.repository}")
  FLUENTBIT_IMAGE_TAG=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentbit.image.tag}")

  CONFIG_RELOADER_IMAGE_REPO=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentd.configReloaderImage.repository}")
  CONFIG_RELOADER_IMAGE_TAG=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentd.configReloaderImage.tag}")

  # Patch the Logging CR
  kubectl patch logging $INFRA_LOGGING_NAME --type=json -p="[{\"op\":\"replace\",\"path\":\"/spec/fluentbit/image\",\"value\":{\"repository\":\"$FLUENTBIT_IMAGE_REPO\",\"tag\":\"$FLUENTBIT_IMAGE_TAG\"}}]"
  kubectl patch logging $INFRA_LOGGING_NAME --type=json -p="[{\"op\":\"replace\",\"path\":\"/spec/fluentd/image\",\"value\":{\"repository\":\"$FLUENTD_IMAGE_REPO\",\"tag\":\"$FLUENTD_IMAGE_TAG\"}}]"
  kubectl patch logging $INFRA_LOGGING_NAME --type=json -p="[{\"op\":\"replace\",\"path\":\"/spec/fluentd/configReloaderImage\",\"value\":{\"repository\":\"$CONFIG_RELOADER_IMAGE_REPO\",\"tag\":\"$CONFIG_RELOADER_IMAGE_TAG\"}}]"
  ```

  After the Logging CR is updated, the status of the Fluentd and Fluent Bit pods should change to `Running` within a few moments and the upgrade process should continue. If the Fluentd pod remains in the `ImagePullBackOff` status, delete it with the following command to force it to restart:

  ```bash
  UPGRADE_NAME=$(kubectl -n harvester-system get upgrades -l harvesterhci.io/latestUpgrade=true -o jsonpath='{.items[0].metadata.name}')
  UPGRADELOG_NAME=$(kubectl -n harvester-system get upgradelogs -l harvesterhci.io/upgrade=$UPGRADE_NAME -o jsonpath='{.items[0].metadata.name}')

  kubectl -n harvester-system delete pods -l harvesterhci.io/upgradeLog=$UPGRADELOG_NAME,harvesterhci.io/upgradeLogComponent=aggregator
  ```
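
  Either way, you can verify that the pods recover by watching them until they all report `Running`. This is a minimal verification sketch, assuming the `$UPGRADELOG_NAME` variable from the previous snippet is still set in your shell:

  ```bash
  # Watch the upgrade-log pods; press Ctrl-C once they are all Running
  kubectl -n harvester-system get pods -l harvesterhci.io/upgradeLog=$UPGRADELOG_NAME -w
  ```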

- On a computer with internet access, pull the required container images and then export them to a TAR file. Next, transfer the TAR file to the cluster nodes and then import the images by running the following commands on each node:

  ```bash
  # Pull down the three container images
  docker pull ghcr.io/kube-logging/fluentd:v1.15-ruby3
  docker pull ghcr.io/kube-logging/config-reloader:v0.0.5
  docker pull fluent/fluent-bit:2.1.8

  # Export the images to a tar file
  docker save \
    ghcr.io/kube-logging/fluentd:v1.15-ruby3 \
    ghcr.io/kube-logging/config-reloader:v0.0.5 \
    fluent/fluent-bit:2.1.8 > upgradelog-images.tar

  # After transferring the tar file to the cluster nodes, import the images (this needs to be run on each node)
  ctr -n k8s.io images import upgradelog-images.tar
  ```
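
  If the internet-connected machine has containerd but not Docker, the following is a minimal equivalent sketch using `ctr` (an assumption, not part of the procedure above; `ctr` requires fully qualified references, so the Fluent Bit image is written as `docker.io/fluent/fluent-bit:2.1.8`):

  ```bash
  # Pull the images with containerd's ctr instead of Docker
  ctr images pull ghcr.io/kube-logging/fluentd:v1.15-ruby3
  ctr images pull ghcr.io/kube-logging/config-reloader:v0.0.5
  ctr images pull docker.io/fluent/fluent-bit:2.1.8

  # Export them to a single tar archive for transfer to the cluster nodes
  ctr images export upgradelog-images.tar \
    ghcr.io/kube-logging/fluentd:v1.15-ruby3 \
    ghcr.io/kube-logging/config-reloader:v0.0.5 \
    docker.io/fluent/fluent-bit:2.1.8
  ```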

  The upgrade process should continue after the images are preloaded.

- (Not recommended) Restart the upgrade process with logging disabled. Ensure that the **Enable Logging** checkbox in the **Upgrade** dialog is not selected.

Related issues:
- [[BUG] AirGap Upgrades Seem Blocked with Fluentbit/FluentD](https://github.com/harvester/harvester/issues/7955)
