Commit 8597af7

xzpeter authored and Fabiano Rosas committed
migration/block: Rewrite disk activation
This patch proposes a flag to maintain disk activation status globally. It
mostly rewrites disk activation management for QEMU, including COLO and the
QMP command xen_save_devices_state.

Backgrounds
===========

We have two problems with disk activation: one resolved, one not.

Problem 1: disk activation recovery (for switchover interruptions)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When migration is either cancelled or fails during switchover, especially
after the disks have been inactivated, QEMU needs to remember to re-activate
the disks before the VM starts. It used to be done separately in two paths:
one in qmp_migrate_cancel(), the other in the failure path of
migration_completion(). It was fixed in different commits, scattered across
QEMU. These are the relevant changes I found (the list may be incomplete):

- In 2016, commit fe904ea ("migration: regain control of images when
  migration fails to complete")

- In 2017, commit 1d2acc3 ("migration: re-active images while migration
  been canceled after inactive them")

- In 2023, commit 6dab4c9 ("migration: Attempt disk reactivation in more
  failure scenarios")

Now that we have a slightly better picture, we can unify the reactivation
into a single path. One side benefit of doing so is that we can move the
disk operation out of the QMP command "migrate_cancel". It's possible that
in the future we may want to make "migrate_cancel" OOB-compatible, which
requires that the command not need the BQL in the first place. This change
already does that and makes the migrate_cancel command lightweight.

Problem 2: disk invalidation on top of invalidated disks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is an unresolved bug in current QEMU; see the "Resolves:" link at the
end. It turns out that besides the source switchover phase (problem 1
above), QEMU also needs to remember the block activation status on the
destination. Consider two migrations in a row, where the VM is paused
throughout.

In that scenario, the disks are still not activated when the 1st round of
migration completes. When the 2nd round starts, if QEMU doesn't know the
status of the disks, it will try to inactivate them again. The issue is
that the block layer API bdrv_inactivate_all() will crash QEMU if invoked
on already inactive disks, as happens in the 2nd migration. For details,
see the bug link at the end.

Implementation
==============

This patch proposes to maintain disk activation with a global flag, so we
know:

- If we inactivated the disks for migration, but migration got cancelled
  or failed, QEMU will know it should reactivate the disks.

- On the incoming side, if the disks were never activated but another
  migration is triggered, QEMU can tell that inactivation is not needed
  for the 2nd migration.

We used to have disk_inactive, but it only solves the 1st issue, not the
2nd. Also, it's handled in completely separate paths, so it's extremely
hard to follow either how the flag changes, or the duration for which the
flag is valid, and when we will reactivate the disks.

Convert the existing disk_inactive flag into that global flag (also
inverting its name), and maintain the disk activation status for the whole
lifecycle of QEMU, including the incoming QEMU.

Put both of the error cases of source migration (failure, cancelled)
together into migration_iteration_finish(), which is invoked in either
scenario, so in that respect QEMU should behave the same as before.
However, with such global maintenance of the disk activation status, we
not only clean up quite a few ad-hoc paths that tried to maintain the
disk activation status (e.g. in the postcopy code), but also fix the
crash of problem 2 in one shot.

For a freshly started QEMU, the flag is initialized to TRUE, showing that
QEMU owns the disks by default. For an incoming migrated QEMU, the flag is
initialized to FALSE once and for all, showing that the destination QEMU
doesn't own the disks until switchover. That is guaranteed by the "once"
variable.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2395
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Fabiano Rosas <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
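The core idea of the patch, a single BQL-protected flag that makes both activation and inactivation idempotent, can be illustrated with a minimal standalone C sketch. This is not QEMU code: the fake_* helpers below are hypothetical stand-ins for bdrv_activate_all()/bdrv_inactivate_all(), and locking/tracing are omitted.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the real block layer calls. */
static bool fake_activate_all(void)   { return true; }
static bool fake_inactivate_all(void) { return true; }

/* The global cache: a fresh QEMU owns the disks, an incoming one does not. */
static bool block_active = true;

static bool block_activate(void)
{
    if (block_active) {
        return true;                 /* already active: nothing to do */
    }
    if (!fake_activate_all()) {
        return false;
    }
    block_active = true;
    return true;
}

static bool block_inactivate(void)
{
    if (!block_active) {
        return true;                 /* already inactive: skip the call that
                                      * would otherwise crash (problem 2) */
    }
    if (!fake_inactivate_all()) {
        return false;
    }
    block_active = false;
    return true;
}
```

With this shape, problem 2's scenario (a paused VM migrated twice in a row) turns the second inactivation into a no-op, and every failure path can unconditionally call block_activate() without separately tracking whether inactivation actually happened.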
1 parent 8c97c5a commit 8597af7

File tree: 9 files changed (+140, -91 lines)


include/migration/misc.h

Lines changed: 4 additions & 0 deletions
@@ -104,4 +104,8 @@ bool migration_incoming_postcopy_advised(void);
 /* True if background snapshot is active */
 bool migration_in_bg_snapshot(void);
 
+/* Wrapper for block active/inactive operations */
+bool migration_block_activate(Error **errp);
+bool migration_block_inactivate(void);
+
 #endif

migration/block-active.c

Lines changed: 94 additions & 0 deletions
@@ -0,0 +1,94 @@
+/*
+ * Block activation tracking for migration purpose
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (C) 2024 Red Hat, Inc.
+ */
+#include "qemu/osdep.h"
+#include "block/block.h"
+#include "qapi/error.h"
+#include "migration/migration.h"
+#include "qemu/error-report.h"
+#include "trace.h"
+
+/*
+ * Migration-only cache to remember the block layer activation status.
+ * Protected by BQL.
+ *
+ * We need this because..
+ *
+ * - Migration can fail after block devices are invalidated (during
+ *   switchover phase). When that happens, we need to be able to recover
+ *   the block drive status by re-activating them.
+ *
+ * - Currently bdrv_inactivate_all() is not safe to be invoked on top of
+ *   invalidated drives (even if bdrv_activate_all() is actually safe to be
+ *   called any time!). It means remembering this could help migration to
+ *   make sure it won't invalidate twice in a row, crashing QEMU. It can
+ *   happen when we migrate a PAUSED VM from host1 to host2, then migrate
+ *   again to host3 without starting it. TODO: a cleaner solution is to
+ *   allow safe invoke of bdrv_inactivate_all() at anytime, like
+ *   bdrv_activate_all().
+ *
+ * For freshly started QEMU, the flag is initialized to TRUE reflecting the
+ * scenario where QEMU owns block device ownerships.
+ *
+ * For incoming QEMU taking a migration stream, the flag is initialized to
+ * FALSE reflecting that the incoming side doesn't own the block devices,
+ * not until switchover happens.
+ */
+static bool migration_block_active;
+
+/* Setup the disk activation status */
+void migration_block_active_setup(bool active)
+{
+    migration_block_active = active;
+}
+
+bool migration_block_activate(Error **errp)
+{
+    ERRP_GUARD();
+
+    assert(bql_locked());
+
+    if (migration_block_active) {
+        trace_migration_block_activation("active-skipped");
+        return true;
+    }
+
+    trace_migration_block_activation("active");
+
+    bdrv_activate_all(errp);
+    if (*errp) {
+        error_report_err(error_copy(*errp));
+        return false;
+    }
+
+    migration_block_active = true;
+    return true;
+}
+
+bool migration_block_inactivate(void)
+{
+    int ret;
+
+    assert(bql_locked());
+
+    if (!migration_block_active) {
+        trace_migration_block_activation("inactive-skipped");
+        return true;
+    }
+
+    trace_migration_block_activation("inactive");
+
+    ret = bdrv_inactivate_all();
+    if (ret) {
+        error_report("%s: bdrv_inactivate_all() failed: %d",
+                     __func__, ret);
+        return false;
+    }
+
+    migration_block_active = false;
+    return true;
+}

migration/colo.c

Lines changed: 1 addition & 1 deletion
@@ -836,7 +836,7 @@ static void *colo_process_incoming_thread(void *opaque)
 
     /* Make sure all file formats throw away their mutable metadata */
     bql_lock();
-    bdrv_activate_all(&local_err);
+    migration_block_activate(&local_err);
     bql_unlock();
     if (local_err) {
         error_report_err(local_err);

migration/meson.build

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@ migration_files = files(
 
 system_ss.add(files(
   'block-dirty-bitmap.c',
+  'block-active.c',
   'channel.c',
   'channel-block.c',
   'cpu-throttle.c',

migration/migration.c

Lines changed: 19 additions & 61 deletions
@@ -738,7 +738,6 @@ static void qemu_start_incoming_migration(const char *uri, bool has_channels,
 
 static void process_incoming_migration_bh(void *opaque)
 {
-    Error *local_err = NULL;
     MigrationIncomingState *mis = opaque;
 
     trace_vmstate_downtime_checkpoint("dst-precopy-bh-enter");
@@ -769,11 +768,7 @@ static void process_incoming_migration_bh(void *opaque)
          * Make sure all file formats throw away their mutable
          * metadata. If error, don't restart the VM yet.
          */
-        bdrv_activate_all(&local_err);
-        if (local_err) {
-            error_report_err(local_err);
-            local_err = NULL;
-        } else {
+        if (migration_block_activate(NULL)) {
             vm_start();
         }
     } else {
@@ -1560,16 +1555,6 @@ static void migrate_fd_cancel(MigrationState *s)
             }
         }
     }
-    if (s->state == MIGRATION_STATUS_CANCELLING && s->block_inactive) {
-        Error *local_err = NULL;
-
-        bdrv_activate_all(&local_err);
-        if (local_err) {
-            error_report_err(local_err);
-        } else {
-            s->block_inactive = false;
-        }
-    }
 }
 
 void migration_add_notifier_mode(NotifierWithReturn *notify,
@@ -1853,6 +1838,12 @@ void qmp_migrate_incoming(const char *uri, bool has_channels,
         return;
     }
 
+    /*
+     * Newly setup incoming QEMU. Mark the block active state to reflect
+     * that the src currently owns the disks.
+     */
+    migration_block_active_setup(false);
+
     once = false;
 }
 
@@ -2505,7 +2496,6 @@ static int postcopy_start(MigrationState *ms, Error **errp)
     QIOChannelBuffer *bioc;
     QEMUFile *fb;
     uint64_t bandwidth = migrate_max_postcopy_bandwidth();
-    bool restart_block = false;
     int cur_state = MIGRATION_STATUS_ACTIVE;
 
     if (migrate_postcopy_preempt()) {
@@ -2541,13 +2531,10 @@ static int postcopy_start(MigrationState *ms, Error **errp)
         goto fail;
     }
 
-    ret = bdrv_inactivate_all();
-    if (ret < 0) {
-        error_setg_errno(errp, -ret, "%s: Failed in bdrv_inactivate_all()",
-                         __func__);
+    if (!migration_block_inactivate()) {
+        error_setg(errp, "%s: Failed in bdrv_inactivate_all()", __func__);
         goto fail;
     }
-    restart_block = true;
 
     /*
      * Cause any non-postcopiable, but iterative devices to
@@ -2617,8 +2604,6 @@ static int postcopy_start(MigrationState *ms, Error **errp)
         goto fail_closefb;
     }
 
-    restart_block = false;
-
     /* Now send that blob */
     if (qemu_savevm_send_packaged(ms->to_dst_file, bioc->data, bioc->usage)) {
         error_setg(errp, "%s: Failed to send packaged data", __func__);
@@ -2663,17 +2648,7 @@ static int postcopy_start(MigrationState *ms, Error **errp)
 fail:
     migrate_set_state(&ms->state, MIGRATION_STATUS_POSTCOPY_ACTIVE,
                       MIGRATION_STATUS_FAILED);
-    if (restart_block) {
-        /* A failure happened early enough that we know the destination hasn't
-         * accessed block devices, so we're safe to recover.
-         */
-        Error *local_err = NULL;
-
-        bdrv_activate_all(&local_err);
-        if (local_err) {
-            error_report_err(local_err);
-        }
-    }
+    migration_block_activate(NULL);
     migration_call_notifiers(ms, MIG_EVENT_PRECOPY_FAILED, NULL);
     bql_unlock();
     return -1;
@@ -2771,31 +2746,6 @@ static void migration_completion_postcopy(MigrationState *s)
     trace_migration_completion_postcopy_end_after_complete();
 }
 
-static void migration_completion_failed(MigrationState *s,
-                                        int current_active_state)
-{
-    if (s->block_inactive && (s->state == MIGRATION_STATUS_ACTIVE ||
-                              s->state == MIGRATION_STATUS_DEVICE)) {
-        /*
-         * If not doing postcopy, vm_start() will be called: let's
-         * regain control on images.
-         */
-        Error *local_err = NULL;
-
-        bql_lock();
-        bdrv_activate_all(&local_err);
-        if (local_err) {
-            error_report_err(local_err);
-        } else {
-            s->block_inactive = false;
-        }
-        bql_unlock();
-    }
-
-    migrate_set_state(&s->state, current_active_state,
-                      MIGRATION_STATUS_FAILED);
-}
-
 /**
  * migration_completion: Used by migration_thread when there's not much left.
  * The caller 'breaks' the loop when this returns.
@@ -2849,7 +2799,8 @@ static void migration_completion(MigrationState *s)
         error_free(local_err);
     }
 
-    migration_completion_failed(s, current_active_state);
+    migrate_set_state(&s->state, current_active_state,
+                      MIGRATION_STATUS_FAILED);
 }
 
 /**
@@ -3279,6 +3230,11 @@ static void migration_iteration_finish(MigrationState *s)
     case MIGRATION_STATUS_FAILED:
     case MIGRATION_STATUS_CANCELLED:
     case MIGRATION_STATUS_CANCELLING:
+        /*
+         * Re-activate the block drives if they're inactivated. Note, COLO
+         * shouldn't use block_active at all, so it should be no-op there.
+         */
+        migration_block_activate(NULL);
         if (runstate_is_live(s->vm_old_state)) {
             if (!runstate_check(RUN_STATE_SHUTDOWN)) {
                 vm_start();
@@ -3852,6 +3808,8 @@ static void migration_instance_init(Object *obj)
     ms->state = MIGRATION_STATUS_NONE;
     ms->mbps = -1;
     ms->pages_per_second = -1;
+    /* Freshly started QEMU owns all the block devices */
+    migration_block_active_setup(true);
     qemu_sem_init(&ms->pause_sem, 0);
     qemu_mutex_init(&ms->error_mutex);

migration/migration.h

Lines changed: 3 additions & 3 deletions
@@ -370,9 +370,6 @@ struct MigrationState {
     /* Flag set once the migration thread is running (and needs joining) */
     bool migration_thread_running;
 
-    /* Flag set once the migration thread called bdrv_inactivate_all */
-    bool block_inactive;
-
     /* Migration is waiting for guest to unplug device */
     QemuSemaphore wait_unplug_sem;
 
@@ -556,4 +553,7 @@ void migration_bitmap_sync_precopy(bool last_stage);
 /* migration/block-dirty-bitmap.c */
 void dirty_bitmap_mig_init(void);
 
+/* migration/block-active.c */
+void migration_block_active_setup(bool active);
+
 #endif

migration/savevm.c

Lines changed: 13 additions & 20 deletions
@@ -1547,19 +1547,18 @@ int qemu_savevm_state_complete_precopy_non_iterable(QEMUFile *f,
     }
 
     if (inactivate_disks) {
-        /* Inactivate before sending QEMU_VM_EOF so that the
-         * bdrv_activate_all() on the other end won't fail. */
-        ret = bdrv_inactivate_all();
-        if (ret) {
-            error_setg(&local_err, "%s: bdrv_inactivate_all() failed (%d)",
-                       __func__, ret);
+        /*
+         * Inactivate before sending QEMU_VM_EOF so that the
+         * bdrv_activate_all() on the other end won't fail.
+         */
+        if (!migration_block_inactivate()) {
+            error_setg(&local_err, "%s: bdrv_inactivate_all() failed",
+                       __func__);
             migrate_set_error(ms, local_err);
             error_report_err(local_err);
-            qemu_file_set_error(f, ret);
+            qemu_file_set_error(f, -EFAULT);
             return ret;
         }
-        /* Remember that we did this */
-        s->block_inactive = true;
     }
     if (!in_postcopy) {
         /* Postcopy stream will still be going */
@@ -2123,7 +2122,6 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
 
 static void loadvm_postcopy_handle_run_bh(void *opaque)
 {
-    Error *local_err = NULL;
     MigrationIncomingState *mis = opaque;
 
     trace_vmstate_downtime_checkpoint("dst-postcopy-bh-enter");
@@ -2146,12 +2144,11 @@ static void loadvm_postcopy_handle_run_bh(void *opaque)
          * Make sure all file formats throw away their mutable metadata.
          * If we get an error here, just don't restart the VM yet.
          */
-        bdrv_activate_all(&local_err);
+        bool success = migration_block_activate(NULL);
+
         trace_vmstate_downtime_checkpoint("dst-postcopy-bh-cache-invalidated");
-        if (local_err) {
-            error_report_err(local_err);
-            local_err = NULL;
-        } else {
+
+        if (success) {
             vm_start();
         }
     } else {
@@ -3193,11 +3190,7 @@ void qmp_xen_save_devices_state(const char *filename, bool has_live, bool live,
      * side of the migration take control of the images.
      */
     if (live && !saved_vm_running) {
-        ret = bdrv_inactivate_all();
-        if (ret) {
-            error_setg(errp, "%s: bdrv_inactivate_all() failed (%d)",
-                       __func__, ret);
-        }
+        migration_block_inactivate();
     }
 }

migration/trace-events

Lines changed: 3 additions & 0 deletions
@@ -383,3 +383,6 @@ migration_pagecache_insert(void) "Error allocating page"
 # cpu-throttle.c
 cpu_throttle_set(int new_throttle_pct) "set guest CPU throttled by %d%%"
 cpu_throttle_dirty_sync(void) ""
+
+# block-active.c
+migration_block_activation(const char *name) "%s"
