🧪 MLIR - Advanced Testing for Transformation passes #899

@DRovara

Description

Currently, we use LLVM's LIT framework to test transformation passes.
For each pass, we have a single test file that is split into multiple slices for individual test cases.
These test cases use FileCheck and require assertions to be written by hand, which is tedious and error-prone.
It would therefore be worthwhile to set up a more sophisticated testing infrastructure.

LLVM Testing Infrastructure Guidelines

A good standard for how to set up testing is the LLVM Testing Infrastructure Guide (https://llvm.org/docs/TestingGuide.html). It makes the following suggestions:

  • Put related tests into a single file (as we already do)
  • Use check-prefixes to test different settings, e.g. different architectures that might require different behaviour (we currently do not really have a use case for this)
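A check-prefix setup can be sketched as follows; the tool name (`quantum-opt`), pass name, and option below are placeholders for illustration, not actual names from this repository:

```mlir
// Hypothetical RUN lines: the same input is tested under two settings,
// sharing common CHECK lines and diverging only where behaviour differs.
// RUN: quantum-opt --my-transform %s | FileCheck %s --check-prefixes=CHECK,DEFAULT
// RUN: quantum-opt --my-transform --aggressive %s | FileCheck %s --check-prefixes=CHECK,AGGRESSIVE

// CHECK-LABEL: func.func @example
// DEFAULT: <output expected only in the default setting>
// AGGRESSIVE: <output expected only in the aggressive setting>
```

Lines prefixed with the shared `CHECK:` prefix are verified in both runs, so only the setting-specific expectations need to be duplicated.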

The LLVM framework also distinguishes between three classes of tests:

  • Regression Tests (the "main" tests in the form of small LIT tests)
  • "Whole Program Tests" (LIT tests with bigger programs, in a different directory)
  • Unit Tests (tests that use gtest)
    According to the guidelines, unit tests should only cover internal data structures and utilities, not the functionality of the passes themselves.

The LLVM framework also has additional tools that help them with LIT tests:

  1. They have their own scripts that automatically generate assertions (it's considered best practice to almost exclusively use auto-generated checks)
  2. They support custom operations that can be used inside the tests (not sure if we have a use case for this currently, but it's best to keep it in mind)
  3. They suggest expanding LIT configurations (https://github.com/munich-quantum-toolkit/core/blob/4e08f11fb89491ff61e352a20ce1172aec4c6d9e/mlir/test/lit.cfg.py) to tailor them to the needs of the specific tests
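For point 1, upstream MLIR ships `mlir/utils/generate-test-checks.py`, which captures the output of `mlir-opt` and emits the corresponding `CHECK` lines. A test with auto-generated-style checks might look like the following sketch (using a standard upstream pass for illustration):

```mlir
// RUN: mlir-opt --canonicalize %s | FileCheck %s

// Checks in the style emitted by generate-test-checks.py: the folded
// constant is captured once and reused in later check lines.
// CHECK-LABEL: func.func @fold_add
// CHECK: %[[C3:.*]] = arith.constant 3 : i32
// CHECK: return %[[C3]] : i32
func.func @fold_add() -> i32 {
  %c1 = arith.constant 1 : i32
  %c2 = arith.constant 2 : i32
  %sum = arith.addi %c1, %c2 : i32
  return %sum : i32
}
```

Because the checks are regenerated mechanically, updating a test after a pass changes its output becomes a rerun of the script rather than a manual rewrite.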

While it is not necessary to follow all of these suggestions, it would be helpful to implement at least one of them, since writing new tests is currently quite laborious.

Deviating from the LLVM Guidelines

It is not strictly necessary to follow these guidelines. We might instead implement more gtest-based unit tests in place of the large number of LIT tests (see https://github.com/PennyLaneAI/catalyst/blob/9f1c8f0b84906c2f8e65c899f5f305f449064585/mlir/CMakeLists.txt#L100-L145 for an example)

In any case, improvements of any kind to the MLIR testing infrastructure would be quite welcome.

Adding Tests

Once the new testing setup is in place, we should also add tests that are currently missing for some features.

In particular, "failing" tests for all trait verifiers are important to show that they reject invalid input as expected.
We probably also need more tests for transformation passes and individual dialect features.
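Such "failing" tests can use `mlir-opt`'s diagnostic verification mode, in which each expected error is declared next to the offending operation; the tool name, dialect, op, and message below are placeholders:

```mlir
// RUN: quantum-opt --split-input-file --verify-diagnostics %s
// Hypothetical op and message; --verify-diagnostics makes the test fail
// unless exactly the declared diagnostics are emitted.

func.func @trait_violation() {
  // expected-error@+1 {{op requires exactly one operand}}
  "mydialect.myop"() : () -> ()
  return
}
```

With `--split-input-file`, many such negative cases can live in one file, separated by `// -----` markers, so each verifier failure is exercised in isolation.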

Labels: MLIR (anything related to MLIR), enhancement (improvement of existing feature), refactor (anything related to code refactoring)
