Test framework

Alan B. Christie edited this page Aug 15, 2025 · 9 revisions

The engine is Event Driven, so to test it we simply have to send events (Protocol Buffer messages) to its handle_message() method.

This sounds simple enough, but a key element of the engine is the manipulation of database records and the asynchronous launching of Jobs. As a result, under test, the engine needs to run in a multi-process world, as it does in the DM, using a "mock" message bus to allow the transmission and reception of messages. The engine relies on the behaviour of an InstanceLauncher and a Workflow API Adapter in order to progress through the numerous states a running workflow experiences. All of this makes the test framework a challenging environment to construct, requiring a significant amount of code before even the simplest of tests can be executed.

The test framework is based on the operation of the following test modules: -

  1. A unit test InstanceLauncher. A class able to simulate the execution of DM Jobs.
  2. A unit test WorkflowAPIAdapter. A class able to simulate the expected DM database tables.
  3. A unit test MessageDispatcher. A class that is used by the instance launcher to simulate the transmission of protocol buffer messages.
  4. A unit test MessageQueue. A class that is used by the message dispatcher to simulate the existence of the DM's RabbitMQ bus.

Their implementations can be found in the repository's tests directory: -

  • tests/instance_launcher.py
  • tests/wapi_adapter.py
  • tests/message_dispatcher.py
  • tests/message_queue.py

Each of these test modules has its own unit test module, which verifies the simulated behaviour is as expected: -

  • tests/test_test_instance_launcher.py
  • tests/test_test_wapi_adapter.py
  • tests/test_test_message_dispatcher.py
  • tests/test_test_message_queue.py

All of the above are required to simulate the services that are offered by the DM. They are essential if the engine's behaviour is to be thoroughly tested.

message_queue.py

A simulation of the RabbitMQ queue in the DM. It has a put() method that serializes ProtocolBuffer messages and places them on a queue. A run() method picks messages off the queue and deserializes them back into ProtocolBuffer objects before sending them to a receiver function.

In the UnitTest framework the receiver is the WorkflowEngine's handle_message() method.
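The put()/run() behaviour described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: it uses pickle as a stand-in for Protocol Buffer serialization, drains the queue synchronously rather than in a background process, and the class and variable names are invented for the example.

```python
import pickle
import queue


class MockMessageQueue:
    """Simulates the DM's RabbitMQ bus: put() serializes messages onto an
    internal queue and run() deserializes them for a receiver function."""

    def __init__(self, receiver):
        self._queue = queue.Queue()
        # In the real framework the receiver is WorkflowEngine.handle_message().
        self._receiver = receiver

    def put(self, message):
        # Serialize before queuing (stands in for the real framework's
        # Protocol Buffer SerializeToString()).
        self._queue.put(pickle.dumps(message))

    def run(self):
        # Pick messages off the queue, deserialize and deliver them.
        while not self._queue.empty():
            payload = self._queue.get()
            self._receiver(pickle.loads(payload))


received = []
mq = MockMessageQueue(receiver=received.append)
mq.put({"type": "PodMessage", "exit_code": 0})
mq.run()
print(received)  # → [{'type': 'PodMessage', 'exit_code': 0}]
```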

message_dispatcher.py

This is a very simple object that relies on an underlying message_queue.py queue. The initialiser creates an instance of the queue and a send() method puts messages on the queue.
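That relationship can be sketched in a few lines. The names below are illustrative, not taken from the repository, and a trivial list-backed queue stands in for the mock message queue:

```python
class ListQueue:
    """Trivial stand-in for the mock message queue."""

    def __init__(self):
        self.items = []

    def put(self, message):
        self.items.append(message)


class MockMessageDispatcher:
    """Creates a queue instance on construction and places
    messages on it via send()."""

    def __init__(self):
        self._queue = ListQueue()

    def send(self, message):
        self._queue.put(message)


md = MockMessageDispatcher()
md.send("WorkflowMessage")
print(md._queue.items)  # → ['WorkflowMessage']
```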

wapi_adapter.py

This simulates the responses that would be received from the DM API. It uses Python dictionaries to represent the database tables used in the DM. Because the workflow engine runs in an event-driven multiprocessing environment the dictionaries cannot be held in memory. To ensure the simulated tables are available to all processes, the objects are pickled to (and unpickled from) the file system (in 'tests/pickle-files').

A separate pickle file is used for each simulated database table. The class initialiser resets all the pickle files.
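The per-table pickling can be sketched as below. The helper names and the use of a temporary directory are assumptions for the example (the real adapter uses 'tests/pickle-files'):

```python
import pickle
import tempfile
from pathlib import Path

# Stands in for tests/pickle-files in the real framework.
PICKLE_DIR = Path(tempfile.mkdtemp())


def save_table(name, rows):
    # Each simulated table is a dictionary pickled to its own file,
    # so every process sees the same state.
    (PICKLE_DIR / f"{name}.pickle").write_bytes(pickle.dumps(rows))


def load_table(name):
    return pickle.loads((PICKLE_DIR / f"{name}.pickle").read_bytes())


def reset_tables(*names):
    # The adapter's initialiser resets every table in this way.
    for name in names:
        save_table(name, {})


reset_tables("running_workflow", "instance")
save_table("running_workflow", {"r-1": {"done": False, "success": False}})
print(load_table("running_workflow"))
```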

instance_launcher.py

The instance launcher offers a launch() method where Jobs are represented by python modules located in tests/jobs.

When the launcher object is initialised it is given unit-test instances of the workflow API adapter (it too has to make API calls) and the message dispatcher, as it needs to simulate the generation of PodMessages that are sent via the dispatcher's internal queue to the engine's handle_message() function to simulate the end of a Job.
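The launch-then-notify behaviour can be sketched as follows. This is a simplified illustration: jobs are plain callables rather than modules imported from tests/jobs, the PodMessage is a dictionary rather than a Protocol Buffer object, and all names are invented for the example:

```python
class MockInstanceLauncher:
    """Simulates the execution of DM Jobs."""

    def __init__(self, wapi_adapter, message_dispatcher):
        self._wapi = wapi_adapter            # for the launcher's own API calls
        self._dispatcher = message_dispatcher

    def launch(self, job, instance_id):
        # Run the simulated Job (synchronously, for the sketch)...
        exit_code = job()
        # ...then simulate the end-of-Job PodMessage, sent via the
        # dispatcher's internal queue to the engine's handle_message().
        self._dispatcher.send(
            {"type": "PodMessage", "instance": instance_id, "exit_code": exit_code}
        )


sent = []


class _Dispatcher:
    """Collects dispatched messages for inspection."""

    def send(self, message):
        sent.append(message)


launcher = MockInstanceLauncher(wapi_adapter=None, message_dispatcher=_Dispatcher())
launcher.launch(job=lambda: 0, instance_id="i-000")
print(sent)  # → [{'type': 'PodMessage', 'instance': 'i-000', 'exit_code': 0}]
```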

Where do we put..?

Workflow definitions
Test workflows can be found in tests/workflow-definitions
Job definitions
Test job definitions can be found in tests/job-definitions
Jobs
Test jobs, those run by the unit test instance launcher, can be found in tests/jobs
The test filesystem
The simulated Project directory can be found in tests/project-root

The process of testing a workflow definition

A brief tour of the operation of the test framework for a unit test follows. This simple diagram illustrates the objects that play a role in the test: -

https://github.com/user-attachments/assets/ec45304c-c65b-4ca8-8e57-c97ee1934b25

All the example workflows that need to be tested (those in the tests/workflow-definitions directory) are tested by the tests/test_workflow_engine_examples.py pytest-based module. It consists of a fixture that initiates the following objects: -

  • An API adapter
  • A message queue
  • A message dispatcher
  • An instance launcher
  • A workflow engine

All of these are required to test a workflow.

In tests/test_workflow_engine_examples.py a start_workflow() function simplifies the start of a workflow by emulating the user's use of the DM /workflow/{id}/run API logic. It creates the expected database records, loads the workflow file to be tested and then simulates the WorkflowMessage that will be handled by the engine under test. Every unit test in the module uses the start_workflow() function, each passing its own workflow definition to be tested.

A wait_for_workflow() function simply waits for the corresponding RunningWorkflow to become done, with a success value that matches expectations (workflows under test can be expected to be successful or to have failed).
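The waiting logic amounts to a poll loop over the simulated RunningWorkflow table. The sketch below is an assumption about its shape, not the repository's code: the adapter method name get_running_workflow() and the record keys are invented for illustration:

```python
import time


def wait_for_workflow(adapter, r_wfid, expect_success=True, timeout_s=10.0):
    """Poll the (simulated) RunningWorkflow record until it is done,
    then check its success value against expectations."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        record = adapter.get_running_workflow(r_wfid)  # hypothetical call
        if record["done"]:
            assert record["success"] == expect_success
            return record
        time.sleep(0.1)
    raise TimeoutError(f"RunningWorkflow {r_wfid} did not finish")


class _FakeAdapter:
    """Always reports the workflow as done and successful."""

    def get_running_workflow(self, r_wfid):
        return {"done": True, "success": True}


record = wait_for_workflow(_FakeAdapter(), "r-workflow-1")
print(record)  # → {'done': True, 'success': True}
```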

Regardless of the workflow under test, a typical unit test function that expects the workflow execution to be successful therefore looks something like this: -

def test_workflow_engine_example_two_step_nop(basic_engine):
    # Arrange
    md, da = basic_engine

    # Act
    r_wfid = start_workflow(md, da, "example-two-step-nop", {})

    # Assert
    wait_for_workflow(da, r_wfid)
    # Any additional checks that are workflow-specific...

See also

  1. The "Unit testing the WorkflowEngine" section of the Shortcut doc Workflow engine design