Step 1: Writing a basic test
In this part of the tutorial we add a basic test to the existing sample test suite tree, which will run at the behest of the testing framework. We also give instructions on making the framework aware of that addition and explain how to run the test through the command line.

By convention, test scripts are located in the `tests` directory. Each test defines a main `run` function, which receives three parameters: a test object, a dictionary of parameters, and an environment object. This function is called by the framework when running the tests and should be the last member of the Python module (for readability of larger tests).
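Schematically, such a module ends with a `run` definition like the following (the body here is just a placeholder, not real test logic):

```python
def run(test, params, env):
    """Entry point invoked by the testing framework.

    :param test: test object for the current test
    :param params: dictionary of test parameters
    :param env: environment object providing access to the VMs
    """
    # ... actual test logic goes here ...
```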
When Avocado invokes the test module, that module will run on the host machine. We are then able to spawn and control virtual machines as we see fit, setting them up, running tools, and asserting on the expected outcome. In this first part of the tutorial we will use a single virtual machine, on which we will run a couple of operations and check their output.
`tutorial_step_1.py` defines a minimal `run()` function that creates and writes to a file on a remote VM and then reads that file back, asserting that its contents match those that have been written:
```python
import logging
import random

from avocado.core import exceptions


def run(test, params, env):
    vmnet = env.get_vmnet()
    _, session = vmnet.get_single_vm_with_session()

    contents_to_write = params.get("file_contents", "some content")
    session.cmd("echo %s > /root/sample.txt" % contents_to_write)
    contents_written = session.cmd_output("cat /root/sample.txt").strip()
    if contents_written != contents_to_write:
        raise exceptions.TestFail(
            "File contents (%s) differ from the expected (%s)."
            % (contents_written, contents_to_write)
        )
```
The test starts out by spawning a VM session object from the only VM available for this test (we have a single VM available because that is the requirement for quick tests, explained in more detail in the section below). Through this session object we invoke remote shell commands, which we use to create a file remotely, write to it, and read its contents. We finally assert that what was written matches what we intended to write.
The test also shows the common return pattern: in case of a successful test the `run` function simply exits silently, without returning anything. Upon error it bails out by raising an exception that is defined in the Avocado core library.
The fresh addition must now be registered with the Avocado plugin through the configuration files in the `configs` directory. Before you add anything, it is recommended to familiarize yourself with the notion of Cartesian configuration, Avocado VT's and the relevant plugins' means of specifying and addressing test cases. For simple tests it is sufficient to add them to the variants of the `quicktest` test group in `groups.cfg`.
```
- quicktest: install setup image_copy unattended_install.cdrom
    vms = vm1
    get_state = on_customize
    variants:
        - tutorial1:
            type = tutorial_step_1
            file_contents = "avocado tutorial 1"
        - tutorial2:
            ...
```
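Each entry under `variants` becomes a test case of its own; when several variant axes are combined, the Cartesian configuration expands them multiplicatively. As a rough illustration of that expansion in plain Python (the axis and variant names here are made up, not taken from any real config):

```python
from itertools import product

# Hypothetical variant axes; the Cartesian parser combines every
# variant of one axis with every variant of the others.
axes = {
    "guest": ["vm1"],
    "test": ["tutorial1", "tutorial2"],
}

# Each combination corresponds to one generated test case name.
cases = [".".join(combo) for combo in product(*axes.values())]
print(cases)  # ['vm1.tutorial1', 'vm1.tutorial2']
```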
Before executing any tests the VMs need to be installed and running. In the avocado directory, issue
```
$ avocado run --auto --loaders cartesian_graph -- "only=tutorial1"
# or equivalently using our more general tools
$ avocado manu only=tutorial1
```
Now grab yourself a cup of coffee and wait for the virtual machine to be prepared for you. Afterwards you can just reuse the VMs that are already on. This step needs to be executed only once per session as long as you work with the same virtual machine combination. Needless to say, adding further guests, as in the next example, requires installing those other machines as well.
Subsequent test runs may omit every setup that was already performed:
```
$ avocado manu only=tutorial1
```
After the integrated Cartesian runner or, alternatively, the `manu` command (above) is invoked, the test run activity can be monitored from the console output. The log will contain a summary of each step in the JOB LOG section. At the end of the test run, a results line is printed with the number of tests that passed, the number that failed or errored, and so on.
Note that the `results` directory will contain a directory for each test run. Inside those directories we can find a `job.log` file, which contains more detailed information, including debug messages. This file is useful when we need to diagnose a test failure, since standard output won't contain full stack traces. The `results` folder also contains a `latest` symlink that points to the most recent run.
Running VMs, as long as they produce graphical output, can be inspected via VNC by connecting to their VNC channels. For example, VM no. 1 is broadcasting at local channel 1, no. 2 at channel 2, and so on. You can grep `job.log` for the parameters passed to `qemu-kvm`; the VNC channel where a VM is broadcast will be in there somewhere.
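For instance, a quick way to pull the display number out of a logged `qemu-kvm` command line (the exact log format is an assumption here; adapt the pattern to what you actually find in `job.log`):

```python
import re


def vnc_display(qemu_cmdline):
    """Return the VNC display number from a qemu-kvm command line,
    or None if no -vnc argument is present."""
    match = re.search(r"-vnc\s+:(\d+)", qemu_cmdline)
    return int(match.group(1)) if match else None


# Hypothetical command line as it might appear in job.log:
print(vnc_display("/usr/bin/qemu-kvm -name vm1 -vnc :1 -m 1024"))  # 1
```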
Since we have only been toying with `vm1` so far, we can jack into that one:
```
$ vncviewer -shared :1
```
If this fails, then the VM is not running and you will have to bring it up
explicitly.
When running a series of tests the guests can be left booted for easier inspection in case of failure (of the last test, but you can run just one such test to debug it specifically). In some cases, though, the tests will run against a set of parameters that require shutdown and eventually killing of the VM. In cases like this you can investigate the setup before the guests power off again or, better yet, boot them with a manual step (using the `manu` tool). In order to boot a machine separately and have it stay up without triggering a test, you can pass the directive `boot` to the `setup` parameter of the `manu` command:
```
$ avocado manu setup=boot vms=vm1
```
This will bring up all the machines specified in the `vms` list. To take them down again use the `shutdown` directive. Note that since we are doing a manual preparation (booting the VM) and not running tests, we need to add `setup=boot`, but there are other valid tools invoked through the `setup` parameter (Pro Tip: check `guest-base.cfg` for details).
Now you can log on to the VMs with SSH; each guest receives a separate /24 subnet: the first one listens at 192.168.1.1, number two at 192.168.2.1, etc. (Pro Tip: take a look at the networking configuration in the file `objects.cfg`, especially the host NIC interfaces.)
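Since the addressing scheme is regular, a guest's address can be derived directly from its index; a small sketch of that mapping (the helper name is ours, not part of the framework):

```python
def guest_address(vm_index):
    """Return the IP at which guest number `vm_index` listens,
    following the one-/24-subnet-per-guest scheme described above."""
    return "192.168.%d.1" % vm_index


print(guest_address(1))  # 192.168.1.1
print(guest_address(2))  # 192.168.2.1
```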
It is advisable to add host specifications for each VM to your
~/.ssh/config
to prevent their key signatures from being added to your
~/.ssh/known_hosts
:
```
Host vm1
    HostName 192.168.1.1
    AddressFamily inet
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    User root
    LogLevel QUIET
```
You can also use this for SSH keys deployed during the automated VM setup. Otherwise, for reference, the password is `test1234`.