10 changes: 6 additions & 4 deletions lib/host.py
@@ -359,9 +359,10 @@ def pool_has_vm(self, vm_uuid, vm_type='vm'):
else:
return self.xe('vm-list', {'uuid': vm_uuid}, minimal=True) == vm_uuid

def install_updates(self):
def install_updates(self, enablerepo=None):
Contributor:

enablerepo sounds like a bool; extra_repo, maybe?
Also, adding a type hint would help.
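
A possible shape for this suggestion (the extra_repo name and the Optional hint are the reviewer's proposal, not the code in this PR):

    from typing import Optional

    def install_updates(self, extra_repo: Optional[str] = None):
        # Run a plain 'yum update -y', optionally enabling one extra repository.
        logging.info("Install updates on host %s" % self)
        extra_args = ['--enablerepo=%s' % extra_repo] if extra_repo is not None else []
        return self.ssh(['yum', 'update', '-y'] + extra_args)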

logging.info("Install updates on host %s" % self)
return self.ssh(['yum', 'update', '-y'])
enablerepo_cmd = ['--enablerepo=%s' % enablerepo] if enablerepo is not None else []
Contributor:

That is not a command, so it should not be named _cmd.
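
For instance (enablerepo_args is an illustrative name, not what the PR uses):

    # Extra arguments appended to the yum invocation, not a command of their own.
    enablerepo_args = ['--enablerepo=%s' % enablerepo] if enablerepo is not None else []
    return self.ssh(['yum', 'update', '-y'] + enablerepo_args)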

return self.ssh(['yum', 'update', '-y'] + enablerepo_cmd)

def restart_toolstack(self, verify=False):
logging.info("Restart toolstack on host %s" % self)
@@ -376,10 +377,11 @@ def is_enabled(self) -> bool:
# If XAPI is not ready yet, or the host is down, this will throw. We return False in that case.
return False

def has_updates(self):
def has_updates(self, enablerepo=None):
enablerepo_cmd = ['--enablerepo=%s' % enablerepo] if enablerepo is not None else []
try:
# yum check-update returns 100 if there are updates, 1 if there's an error, 0 if no updates
self.ssh(['yum', 'check-update'])
self.ssh(['yum', 'check-update'] + enablerepo_cmd)
# returned 0, else there would have been a SSHCommandFailed
return False
except commands.SSHCommandFailed as e:
1 change: 1 addition & 0 deletions pytest.ini
@@ -5,6 +5,7 @@ markers =
default_vm: mark a test with a default VM in case no --vm parameter was given.

# *** Markers used to select tests at collect stage ***
upgrade_test: mark a test which will upgrade packages from testing repo
Contributor:

This line does not warrant a separate commit; it logically belongs to the next commit.

Contributor:

From what I gather, this is about an "update" (i.e. using yum), not an "upgrade" (which uses the installer ISO).


# * Host-related markers, automatically set based on fixtures
hostA2: a second member in the first pool.
18 changes: 16 additions & 2 deletions tests/storage/linstor/conftest.py
@@ -49,9 +49,20 @@ def storage_pool_name(provisioning_type):
def provisioning_type(request):
return request.param

def pytest_configure(config):
config._linstor_upgrade_test = False

def pytest_collection_modifyitems(config, items):
Contributor (on lines +52 to +55):

A note that these are pytest hooks (and, for the second one, when it is called) would be useful to the reader (see the sketch below).

for item in items:
if item.get_closest_marker("upgrade_test"):
config._linstor_upgrade_test = True
break
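
A sketch of how that note could look (docstring wording is a suggestion, not part of the change):

    def pytest_configure(config):
        """pytest hook, called once after command-line options have been parsed."""
        config._linstor_upgrade_test = False

    def pytest_collection_modifyitems(config, items):
        """pytest hook, called after collection with the final list of test items.

        Records whether any collected test carries the upgrade_test marker, so that
        the pool_with_linstor fixture below can adapt its repo selection.
        """
        for item in items:
            if item.get_closest_marker("upgrade_test"):
                config._linstor_upgrade_test = True
                break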

@pytest.fixture(scope='package')
def pool_with_linstor(hostA2, lvm_disks, pool_with_saved_yum_state):
def pool_with_linstor(hostA2, lvm_disks, pool_with_saved_yum_state, request):
import concurrent.futures

dont_use_testing_repo = request.config._linstor_upgrade_test
Contributor:

It is not clear at first read what this does: this line should have an explanation for future readers of the file, and the commit message should give an overview. Using a variable with a negative name does not help either: something like use_testing_repo = not request.config._linstor_upgrade_test would be more readable, but that would not be enough in itself to understand the logic.

That's especially hard to follow, as the extra testing repo seems to be configured only for the upgrade test.
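
A sketch of the positively-named variant suggested above, with the missing explanation as a comment (the comment wording is an assumption about the intent, to be confirmed by the author):

    # The upgrade test must start from the released LINSTOR packages so it has
    # something to update to; only non-upgrade runs install straight from the
    # testing repository here.
    use_testing_repo = not request.config._linstor_upgrade_test
    ...
    if use_testing_repo:
        host.yum_install([LINSTOR_PACKAGE], enablerepo="xcp-ng-linstor-testing")
    else:
        host.yum_install([LINSTOR_PACKAGE])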

pool = pool_with_saved_yum_state

def check_linstor_installed(host):
@@ -66,7 +77,10 @@ def check_linstor_installed(host):
def install_linstor(host):
logging.info(f"Installing {LINSTOR_PACKAGE} on host {host}...")
host.yum_install([LINSTOR_RELEASE_PACKAGE])
host.yum_install([LINSTOR_PACKAGE], enablerepo="xcp-ng-linstor-testing")
if dont_use_testing_repo:
host.yum_install([LINSTOR_PACKAGE])
else:
host.yum_install([LINSTOR_PACKAGE], enablerepo="xcp-ng-linstor-testing")
# Needed because the linstor driver is not in the xapi sm-plugins list
# before installing the LINSTOR packages.
host.ssh(["systemctl", "restart", "multipathd"])
50 changes: 50 additions & 0 deletions tests/storage/linstor/test_linstor_sr.py
Member:

The tricky part here is ensuring that we do have packages to test the rolling pool update. That means starting with a pool which has an older version of the packages compared to the ones we want to test. That also likely means moving this kind of test to a specific job, or having an orchestrator test (or fixtures) that prepares a nested pool for this test.

See with @ydirson, as I think there are common challenges with the installation and upgrade tests.

Contributor Author (@rushikeshjadhav rushikeshjadhav, May 27, 2025):

So should we limit this to xcp-ng-linstor-testing packages, to keep it within the LINSTOR tests? Alongside the main update tests, this one can be specific to LINSTOR testing-package updates. For this test, the pool_with_linstor fixture will have to run without enablerepo="xcp-ng-linstor-testing", so that we can reliably test the "upgrade" from the main-repo LINSTOR to the new one.

Larger installation and upgrade tests can happen in a different job.

# repoquery -a --repoid=xcp-ng-linstor-testing
linstor-common-0:1.29.2-1.el7_9.noarch
linstor-controller-0:1.29.2-1.el7_9.noarch
linstor-satellite-0:1.29.2-1.el7_9.noarch
# repoquery -a --repoid=xcp-ng-linstor
linstor-common-0:1.29.0-1.el7_9.noarch
linstor-controller-0:1.29.0-1.el7_9.noarch
linstor-satellite-0:1.29.0-1.el7_9.noarch

Contributor Author:

@stormi @ydirson can you check whether the latest approach of using an upgrade_test marker to conditionally select the testing repo is useful?

Contributor:

See my other comments; I don't really grasp what you tried to achieve with that.

Contributor:

> The tricky part here is ensuring that we do have packages to test the rolling pool update. That means starting with a pool which has an older version of the packages compared to the ones we want to test. That also likely means moving this kind of test to a specific job, or having an orchestrator test (or fixtures) that prepares a nested pool for this test.

What we want to do depends on what we want to test. If we're testing that a rolling update works, one very important parameter is which version we're updating from, which raises the question of which such updates are supported (and, incidentally, what the product does, and what we should check, for unsupported updates). Need input from @Wescoeur and @Nambrok here.

My feeling here is that we want to set up specific pools first (nested, I'd say), and set up the LINSTOR cluster with the specific version (a test parameter) that we want to upgrade from.

As to which repo to take the update from, we may want to allow devs to select it (i.e. have control over what they want to test), so I'd rather have this simply controlled by data.py.
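
For illustration, a data.py-driven variant could look like this (LINSTOR_UPDATE_REPO is a hypothetical setting, and how data.py is read here is an assumption):

    # data.py (hypothetical entry)
    LINSTOR_UPDATE_REPO = "xcp-ng-linstor-testing"

    # in the test
    import data
    repo = getattr(data, "LINSTOR_UPDATE_REPO", None)
    if host.has_updates(enablerepo=repo):
        host.install_updates(enablerepo=repo)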

Member:

I think we want to test progressive updates, so as not to complicate our lives for the moment. The advantage of nested pools is that they give us an image at a given version and let us roll back easily (and, of course, select any LINSTOR version). However, this does not allow us to test on "real machines"; if we want to do without nested pools, we would have to save the initial /var/lib/linstor folder as well as several config files, which is not necessarily easy to do...

If we don't have a lot of time for now, the nested solution "should" be ok.

@@ -131,6 +131,56 @@ def test_linstor_missing(self, linstor_sr, host):
if not linstor_installed:
host.yum_install([LINSTOR_PACKAGE])

@pytest.mark.reboot
@pytest.mark.small_vm
@pytest.mark.upgrade_test
def test_linstor_sr_pool_update(self, linstor_sr, vm_on_linstor_sr):
"""
Perform update on the Linstor SR pool hosts while ensuring VM availability.
1. Identify all hosts in the SR pool and order them with the master first.
2. Update all hosts if updates are available.
3. Reboot all hosts.
4. Sequentially ensure that the VM can start on all hosts.
"""
import concurrent.futures, threading
Contributor:

Not much reason not to do that at the top of the file.
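
I.e. something like (module path as in this PR):

    # at the top of tests/storage/linstor/test_linstor_sr.py
    import concurrent.futures
    import threading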


sr = linstor_sr
vm = vm_on_linstor_sr
updates_applied = []
updates_lock = threading.Lock()

# Sort hosts so that pool master is first (optional)
hosts = sorted(sr.pool.hosts, key=lambda h: h != sr.pool.master)

# RPU is disabled for pools with XOSTOR SRs.
# LINSTOR expects that we always use satellites and controllers with the same version on all hosts.
def install_updates_on(host):
logging.info("Checking on host %s", host.hostname_or_ip)
if host.has_updates(enablerepo="xcp-ng-linstor-testing"):
host.install_updates(enablerepo="xcp-ng-linstor-testing")
Contributor (on lines +159 to +160):

We don't want a test for updates to just do nothing when, by mistake, it was given no updates; that should be a test error.
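
One way to make that a hard failure rather than a silent no-op (a sketch; the error wording and the choice to fail per host are assumptions):

    def install_updates_on(host):
        logging.info("Checking on host %s", host.hostname_or_ip)
        # An update test that finds nothing to update is a setup error, not a pass.
        if not host.has_updates(enablerepo="xcp-ng-linstor-testing"):
            raise RuntimeError("no updates available for host %s" % host.hostname_or_ip)
        host.install_updates(enablerepo="xcp-ng-linstor-testing")
        with updates_lock:
            updates_applied.append(host)

    with concurrent.futures.ThreadPoolExecutor() as executor:
        # list() consumes the iterator so exceptions raised in workers propagate
        # and fail the test instead of being dropped.
        list(executor.map(install_updates_on, hosts))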

with updates_lock:
updates_applied.append(host)
else:
logging.info("No updates available for host %s", host.hostname_or_ip)

with concurrent.futures.ThreadPoolExecutor() as executor:
executor.map(install_updates_on, hosts)

# Reboot updated hosts
def reboot_updated(host):
host.reboot(verify=True)

with concurrent.futures.ThreadPoolExecutor() as executor:
executor.map(reboot_updated, updates_applied)

# Ensure VM is able to boot on all the hosts
for h in hosts:
vm.start(on=h.uuid)
vm.wait_for_os_booted()
vm.shutdown(verify=True)

sr.scan()

# *** End of tests with reboots

# --- Test diskless resources --------------------------------------------------