Commit 3bea418

doc: description and workaround for issue #5841
Signed-off-by: Webber Huang <[email protected]>
Co-authored-by: Jillian <[email protected]>


versioned_docs/version-v1.2/vm/backup-restore.md

Lines changed: 78 additions & 0 deletions

Harvester v1.3.0 fixes this issue by changing the metadata file path to `<storage-path>/harvester/vmbackups/<vmbackup-namespace>/<vmbackup-name>.cfg`. If you are using an earlier version, however, ensure that VM backup names do not cause the described file naming conflicts.

### Failure to Create Backup for Stopped VM

When you create a backup of a stopped VM, the Harvester UI may display an error message caused by a known issue.

![](/img/v1.2/vm/vm_backup_fail.png)

To determine if the [issue](https://github.com/harvester/harvester/issues/5841) has occurred, locate the VM backup on the **Dashboard** screen and perform the following steps:

1. Obtain the names of the problematic `VolumeSnapshot` resources that are related to the VM backup.

   ```
   $ kubectl get virtualmachinebackups.harvesterhci.io <VM backup name> -o json | jq -r '.status.volumeBackups[] | select(.readyToUse == false) | .name'
   ```

   Example:

   ```
   $ kubectl get virtualmachinebackups.harvesterhci.io extra-default.off -o json | jq -r '.status.volumeBackups[] | select(.readyToUse == false) | .name'
   extra-default.off-volume-vm-extra-default-rootdisk-vp3py
   extra-default.off-volume-vm-extra-default-disk-1-oohjf
   ```

2. Obtain the name of the `VolumeSnapshotContent` resource that is bound to each problematic volume snapshot.

   ```
   $ SNAPSHOT_CONTENT=$(kubectl get volumesnapshot <VolumeSnapshot name> -o json | jq -r '.status.boundVolumeSnapshotContentName')
   ```

   Example:

   ```
   $ SNAPSHOT_CONTENT=$(kubectl get volumesnapshot extra-default.off-volume-vm-extra-default-rootdisk-vp3py -o json | jq -r '.status.boundVolumeSnapshotContentName')
   ```

3. Derive the names of the related Longhorn `Snapshot` resources. Each Longhorn snapshot name is the `VolumeSnapshotContent` name with its `snapcontent-` prefix replaced by `snapshot-`.

   ```
   $ LH_SNAPSHOT=snapshot-$(echo "$SNAPSHOT_CONTENT" | sed 's/^snapcontent-//')
   ```

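   For example, if the bound `VolumeSnapshotContent` were named as follows (the UID shown here is hypothetical, for illustration only), the derivation would produce:

   ```
   $ echo "$SNAPSHOT_CONTENT"
   snapcontent-2b9c1f3a-6d4e-4c8a-9f01-0a1b2c3d4e5f
   $ echo "$LH_SNAPSHOT"
   snapshot-2b9c1f3a-6d4e-4c8a-9f01-0a1b2c3d4e5f
   ```
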
4. Check whether the `readyToUse` status of the related Longhorn `Snapshot` resources is `true`.

   ```
   $ kubectl -n longhorn-system get snapshots.longhorn.io $LH_SNAPSHOT -o json | jq '.status.readyToUse'
   ```

   Example:

   ```
   $ kubectl -n longhorn-system get snapshots.longhorn.io $LH_SNAPSHOT -o json | jq '.status.readyToUse'
   true
   ```

5. Check the state of the related Longhorn `Backup` resources.

   ```
   $ kubectl -n longhorn-system get backups.longhorn.io -o json | jq -r --arg snapshot "$LH_SNAPSHOT" '.items[] | select(.spec.snapshotName == $snapshot) | .status.state'
   ```

   Example:

   ```
   $ kubectl -n longhorn-system get backups.longhorn.io -o json | jq -r --arg snapshot "$LH_SNAPSHOT" '.items[] | select(.spec.snapshotName == $snapshot) | .status.state'
   Completed
   ```

:::info important

You must perform the listed actions for all problematic `VolumeSnapshot` resources identified in step 1. A loop that automates the checks is sketched after this note.

:::

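The following is a minimal sketch that chains steps 1 through 5 for every problematic `VolumeSnapshot` in one loop. It only combines the commands shown above; `<VM backup name>` is a placeholder you must replace, and the commands are assumed to run in the namespace that contains the VM backup.

```
BACKUP_NAME=<VM backup name>

# Step 1: list the volume backups that are not ready.
for vs in $(kubectl get virtualmachinebackups.harvesterhci.io "$BACKUP_NAME" -o json \
    | jq -r '.status.volumeBackups[] | select(.readyToUse == false) | .name'); do
  # Step 2: find the bound VolumeSnapshotContent.
  content=$(kubectl get volumesnapshot "$vs" -o json | jq -r '.status.boundVolumeSnapshotContentName')
  # Step 3: derive the Longhorn snapshot name.
  lh_snapshot=snapshot-${content#snapcontent-}
  # Step 4: check whether the Longhorn snapshot is ready to use.
  ready=$(kubectl -n longhorn-system get snapshots.longhorn.io "$lh_snapshot" -o json | jq '.status.readyToUse')
  # Step 5: check the state of the related Longhorn backup.
  state=$(kubectl -n longhorn-system get backups.longhorn.io -o json \
    | jq -r --arg snapshot "$lh_snapshot" '.items[] | select(.spec.snapshotName == $snapshot) | .status.state')
  echo "$vs: snapshot readyToUse=$ready, backup state=$state"
done
```
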
The issue has likely occurred if the `readyToUse` status of the related Longhorn `Snapshot` resources is `true` and the state of the related Longhorn `Backup` resources is `Completed`, even though the `VolumeSnapshot` resources from step 1 are not ready.

To prevent the issue from occurring again, start the VM before creating the VM backup. If you still choose to create the backup while the VM is stopped, fix the state of the VM backup by restarting the `csi-snapshotter` deployment:

```
$ kubectl -n longhorn-system rollout restart deployment csi-snapshotter
```

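After the restart, you can verify that the VM backup has recovered by re-running the command from step 1; once all volume backups are ready, it should return no names:

```
$ kubectl get virtualmachinebackups.harvesterhci.io <VM backup name> -o json | jq -r '.status.volumeBackups[] | select(.readyToUse == false) | .name'
```
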
Related issue:
- [[BUG] Fail to backup a Stopped/Off VM due to volume error state](https://github.com/harvester/harvester/issues/5841)
