Step 2: Writing more complex tests
In this round we are going to write a more complex test. The test is twofold: it comprises two more or less independent subvariants that get passed different sets of parameters. We will also demonstrate how to add a tarball to the test and have it extracted on the guest. Finally, we will show a couple of optional ways to run code on a remote VM from the host machine. These are a bit simpler than the ones we will optionally use (provided we have satisfied additional dependencies) in the third tutorial.
As in the previous section, this test will receive its own file `tutorial_step_2.py` in the `tests` directory.
The novelty in this iteration is a tarball `check_names.tar.gz`, which must be located at `data/tutorial_step_2`. It contains a sample shell script that performs two simple checks:
```sh
#!/bin/sh
if test "$(uname)" = "Linux" && test "$(hostname)" = "$1"; then
    exit 0
fi
exit 1
```
Besides the test function itself (i.e. the `run()` function), there are several other functions defined as part of the test. Each of them is responsible for executing a portion of our whole test process. We put these in a separate helpers section and a few test constants in a constants section, while the `run()` function remains in the main module section.
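As a rough sketch of this layout (the concrete function bodies appear throughout this section; the import guard for `session_ops`, its exact import path, and the value of `TARBALL_DESTINATION` are assumptions for illustration only), the module could look like:

```python
# tests/tutorial_step_2.py -- overall layout (sketch, not the verbatim module)
import os
import logging

from avocado.core import exceptions
from sample_utility import sleep

# optional dependency: session operations from an upgraded aexpect
# (the exact import path is an assumption and depends on the aexpect version)
try:
    from aexpect import ops_linux as session_ops
    OPS_AVAILABLE = True
except ImportError:
    session_ops = None
    OPS_AVAILABLE = False

# constants section
TARBALL_DESTINATION = "/tmp"  # assumed extraction target on the guest


# helpers section
def extract_tarball(params, vm):
    ...


def run_extracted_script(params, vm):
    ...


def check_files(params, vm):
    ...


# main module section
def run(test, params, env):
    ...
```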
Our goal is to define a test `tutorial2` with two subtests which get passed different values in their `params` dictionary.
This can be accomplished by defining two subvariants of the actual test:
- `files`: Checks two lists of files for presence or absence, respectively, at a certain location in the file system. The lists are passed as `must_exist` and `must_not_exist`.
- `names`: Runs the script contained in the companion tarball and checks its return value. The name of the script as well as the tarball (minus their extensions) is specified by the variable `script`. Additionally, the MD5 hash of the tarball is passed as `md5sum` to verify that the correct file has been transferred to the guest.
Defining these subvariants creates two tests that are run separately, each with its own distinct `params` dictionary.
The Cartesian configuration ensures that the files subtest gets passed only the `must_exist` and `must_not_exist` keys, whereas the names test receives the `script` and `md5sum` keys -- neither set of entries is visible during the other test.
Each test receives a different `check_kind` key that indicates the subvariant being run.
The complete entry in `groups.cfg` is given in the snippet below:
```
...
- quicktest:
    ...
    variants:
        ...
        - tutorial2:
            type = tutorial_step_2
            files_prefix = /etc
            variants:
                - files:
                    check_kind = files
                    must_exist = fstab
                    must_not_exist = foo shunned/file
                - names:
                    check_kind = names
                    script = check_names
                    md5sum = e497f33127254842e12417a64a09d536
...
```
Note that we could have named our subvariants differently. For instance, the `files` subvariant could have been named `tutorial_check_files_exist_or_not`. At first glance this detailed name may look better than the simple `files`, but it suffers from the following problems:
- It unnecessarily repeats information. After running this test it will be listed in the report as `quicktest.tutorial2.tutorial_check_files_exist_or_not`, meaning that the `tutorial` part will be duplicated. Since we are already on the `tutorial2` variant there's no need to specify it again in the subvariant name.
- It is too detailed. Any refactoring that changes how the test behaves may force us to rename it (e.g. if we change it to stop checking for the absence of files).
Since the Cartesian configuration is hierarchically structured, each subvariant name should add more detail to the test hierarchy we are running. There is one caveat, though: we can run tests by specifying the name of a single subvariant, so it is usually recommended to avoid repeating names among subvariants that are completely unrelated, because doing so would cause them to be run together.
Another important point regarding structuring into variants is to be explicit about all the variants we want to produce. This means that a configuration like
```
...
- tutorial2:
    testA = no
    testB = no
    variants:
        - featureA:
            testA = yes
        - featureB:
            testB = yes
...
```
which might be written this way to save on test runs or to avoid incompatible combinations, is better written explicitly, based on which variants are mutually exclusive:
```
...
- tutorial2:
    variants:
        - featureA:
            testA = yes
        - no_featureA:
            testA = no
            only featureB
    variants:
        - featureB:
            testB = yes
        - no_featureB:
            testB = no
            only featureA
...
```
and then restrictions of any kind are specified explicitly on top of these definitions using `only` or other Cartesian configuration keywords.
To recapitulate: the variant tutorial2 is the test entry for this section. All its subtests receive the key `files_prefix` in their `params` dictionary, containing the value `/etc`. Then two subtests -- files and names -- are defined, which will be executed one after another.
!DANGER!
When adapting your `groups.cfg`, don't forget to hash the actual tarball you intend to send and modify the value of `md5sum` accordingly.
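If you need to (re)generate that checksum, one quick way to obtain it is shown below (the path is an assumption; use wherever your tarball actually resides):

```python
import hashlib

# print the MD5 digest to paste as the md5sum value in groups.cfg
with open("data/tutorial_step_2/check_names.tar.gz", "rb") as tarball:
    print(hashlib.md5(tarball.read()).hexdigest())
```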
This is the main function of our test script; it simply checks the parameters dictionary and decides, based on the `check_kind` variable, which test to execute:
```python
def run(test, params, env):
    vmnet = env.get_vmnet()
    vm, _ = vmnet.get_single_vm_with_session()
    sleep(3)
    if params["check_kind"] == "names":
        extract_tarball(params, vm)
        run_extracted_script(params, vm)
    else:
        check_files(params, vm)
```
Each branch shows a different approach to handling config parameters:
- The `names` branch runs the `extract_tarball()` function and the `run_extracted_script()` function right after.
- The `files` branch executes the `check_files()` function as usual, i.e. on the host.
We also make a short call to `sleep(3)` from a test utility to demonstrate code which is reusable across tests. The function is imported as `from sample_utility import sleep` and the utility `sample_utility` is placed in the `utils` directory.
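The utility itself is not reproduced in this tutorial; a minimal sketch of what it could look like (assuming it does nothing more than a logged sleep) is:

```python
# utils/sample_utility.py -- minimal sketch, the actual utility may differ
import logging
import time


def sleep(seconds):
    """Sleep for the given number of seconds as a demo of reusable test code."""
    logging.info("Sleeping for %s seconds", seconds)
    time.sleep(seconds)
```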
For this test variant we first need to extract the files from the tarball. We do that by calling the `extract_tarball()` function, which checks whether the hash of the file matches the expected one before extracting:
```python
def extract_tarball(params, vm):
    tarball_path = os.path.join(
        params["deployed_test_data_path"],
        "tutorial_step_2",
        "check_names.tar.gz"
    )
    hash_out = vm.session.cmd_output("md5sum %s" % tarball_path)
    if params["md5sum"] not in hash_out:
        raise exceptions.TestError("MD5 checksum mismatch of file %s: "
                                   "expected %s, got:\n%s"
                                   % (tarball_path, params["md5sum"], hash_out))
    vm.session.cmd("tar -xf %s -C %s" % (tarball_path, TARBALL_DESTINATION))
```
When this is finished we execute the file that was extracted from the tarball. This is done in the `run_extracted_script()` function:
```python
def run_extracted_script(params, vm):
    scriptdir = params["script"]
    scriptname = scriptdir + ".sh"
    scriptabspath = os.path.join(TARBALL_DESTINATION, scriptdir, scriptname)
    vm.session.cmd("test -f " + scriptabspath)
    if OPS_AVAILABLE:
        if not session_ops.is_regular_file(vm.session, scriptabspath):
            raise exceptions.TestError("Expected file %s to have been extracted, "
                                       "but it doesn't exist." % scriptabspath)
        else:
            logging.info("The extracted script was also verified through session ops")
    vm.session.cmd(scriptabspath + " " + vm.name)
```
Here we show two separate ways to execute remote commands on the guest VM. First we make sure that the file we want to execute exists. We could use the basic session approach covered in the previous tutorial; however, depending on availability, we could also use the optional `session_ops` module from an upgraded aexpect dependency, which provides just the right function for this task and for any task involving simple file and shell operations. Once this is asserted, we run the file using the session object provided for each virtual machine; this call also checks the exit code of the script and throws an exception if it is not admissible, making sure that the script was successfully executed.
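For reference, the three session calls used in this test differ mainly in how they treat the command's exit status. The sketch below (a hypothetical helper, summarizing the behavior relied upon above rather than specifying the full API) contrasts them:

```python
def demo_session_calls(vm, path):
    """Hypothetical helper contrasting the session call flavors."""
    # returns the exit status as an integer and never raises on failure
    status = vm.session.cmd_status("test -f " + path)
    # returns the command's output regardless of the exit status
    output = vm.session.cmd_output("md5sum " + path)
    # returns the output but raises an exception on a nonzero exit status
    vm.session.cmd("test -f " + path)
    return status, output
```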
The test for missing and present files is straightforward. From the two lists of files it receives, it constructs two new lists: the mandatory files that are missing and the unwanted files that are present in the directory specified via the `files_prefix` entry in the Cartesian configuration.
```python
def check_files(params, vm):
    must_exist = params["must_exist"].split(" ")
    must_not_exist = params["must_not_exist"].split(" ")
    files_prefix = params["files_prefix"]

    def aux(f):
        """Return True if the file exists in the prefix directory."""
        fullpath = os.path.join(files_prefix, f)
        # "! test -f" succeeds (status 0) exactly when the file is absent,
        # so a nonzero status means the file exists
        result = vm.session.cmd_status("! test -f " + fullpath) != 0
        if OPS_AVAILABLE:
            result2 = session_ops.is_regular_file(vm.session, fullpath)
            assert result == result2
        return result

    missing = [f for f in must_exist if not aux(f)]
    unwanted = [f for f in must_not_exist if aux(f)]
```
Next it checks whether one or both of those lists are non-empty and raises an appropriately formatted exception:
```python
    if missing and unwanted:
        raise exceptions.TestFail(
            "%d mandatory files not found in path %s: \"%s\";\n"
            "%d unwanted files in path %s: \"%s\"."
            % (len(missing), files_prefix, ", ".join(missing),
               len(unwanted), files_prefix, ", ".join(unwanted)))
    elif missing:
        raise exceptions.TestFail("%d mandatory files not found in path %s: \"%s\"."
                                  % (len(missing), files_prefix, ", ".join(missing)))
    elif unwanted:
        raise exceptions.TestFail("%d unwanted files in path %s: \"%s\"."
                                  % (len(unwanted), files_prefix, ", ".join(unwanted)))
```
These exceptions signal a test failure; as before, a clean exit from the function means success. A test error, in contrast, denotes any exception that is not related to what is actually being tested but is a product of the test conditions themselves.
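To illustrate the distinction with a hypothetical helper (the import path is the one typically used in Avocado tests and is assumed here):

```python
from avocado.core import exceptions  # assumed import path for Avocado tests


def report(transfer_ok, missing_files):
    """Hypothetical helper contrasting the two exception kinds."""
    if not transfer_ok:
        # a problem with the test conditions themselves -> test error
        raise exceptions.TestError("tarball transfer failed")
    if missing_files:
        # the tested property does not hold -> test failure
        raise exceptions.TestFail("mandatory files missing: %s"
                                  % ", ".join(missing_files))
```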