
Feature request: support for Google Test #64

@alfaix

Description

Hello!

First of all, thanks for all the work put into this plugin, it's great and I never want to run tests in a separate tmux pane ever again :)

On to the issue: I'm working on creating an adapter for the Google Test framework for C++. It is not supported by vim-test, as far as I know, due to architectural difficulties. There is a separate plugin that ports that support, but the functionality is limited.

The plugin that I'm writing can be found in my repo. It works, but many features are still WIP, so I decided to open this issue in case somebody else wants to work on this (which would be much appreciated!), and to bring up some issues and suggestions.

So far I've only discovered a single issue: test discovery breaks down when opening a file outside the current project.
If I'm working on /home/me/myproject/foo.cpp and use go-to-definition to jump into /usr/include/something.h, the whole filesystem tree gets parsed, which breaks Neovim in a number of ways, from exceeding the ulimit on open files to freezing outright while it tries to parse every test file it finds. The discovery mechanism seems way too eager to discover :) Similar behavior has already been mentioned here, and is probably being worked on, but if I can help, I would love to.
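For what it's worth, the behavior I'd expect is a root check before discovery kicks in: a file should only be parsed if it lives under the project root. A minimal sketch in Python (the plugin itself is Lua; the helper name here is hypothetical):

```python
import os

def is_within_root(path, root):
    # Hypothetical guard: only discover tests in files under the project root.
    # realpath normalizes symlinks and "..", so /usr/include paths reached via
    # go-to-definition won't slip through.
    path = os.path.realpath(path)
    root = os.path.realpath(root)
    return os.path.commonpath([path, root]) == root

# A file inside the project is eligible for discovery...
print(is_within_root("/home/me/myproject/foo.cpp", "/home/me/myproject"))   # True
# ...a system header opened via go-to-definition is not.
print(is_within_root("/usr/include/something.h", "/home/me/myproject"))     # False
```

With a check like this, opening a header outside the root would simply yield no tests instead of triggering a filesystem-wide parse.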

Furthermore, if it's okay, I would also like to suggest a couple of minor improvements. If you think they are a good fit for the plugin, I think I can add them myself.

  1. Support canceling a run from build_spec.
    If an adapter returns nil from build_spec, an error occurs. With Google Test, the adapter has to find the executable to run. Sometimes that may require user input, and I would like to give the user an opportunity to cancel during that input (e.g., "enter path to the executable, empty to cancel"). Otherwise the user has to press <C-C> and see errors they have nothing to do with.
  2. Consider supporting errors in a different file.
    Consider the following use case: a test calls a function defined in another file, and that function throws an error. Do we want to put a diagnostic at the line where the error happened? Currently the only way to report this is to display the error in the test's short summary. However, putting diagnostics in a different file could result in hundreds of tests reporting the same error in that one file, so maybe it's best left as is.
  3. Keeping previous results would be helpful (in a text file somewhere).
    I think pytest does this best, creating a /tmp/pytest-of-username/pytest-run-<counter> directory. I implemented something similar myself for Google Test (code here); perhaps it would be generally useful? I sometimes check old test runs to see when it all went so wrong.
  4. Providing an interface for an adapter to store a persistent state to disk would be nice.
    Adapters may want to keep some tiny state. Of course, they can store it themselves under stdpath('data'), but it would be nice to store all test state in a centralized fashion. My particular adapter wants to store a simple JSON file mapping test files to the executables they are compiled into.
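To make point 4 concrete, here is a sketch of the kind of centralized store I have in mind: one JSON file, keyed by adapter name, so each adapter gets its own namespace. This is illustrative Python (the plugin is Lua), and the file path and function names are my own invention, not any existing API:

```python
import json
import os
import tempfile

# Hypothetical central state file; a real implementation would live under
# something like stdpath('data') rather than the temp directory.
STATE_FILE = os.path.join(tempfile.gettempdir(), "test-adapter-state.json")

def load_state(adapter):
    """Return the saved state dict for one adapter, or {} if none exists."""
    try:
        with open(STATE_FILE) as f:
            return json.load(f).get(adapter, {})
    except FileNotFoundError:
        return {}

def save_state(adapter, state):
    """Merge one adapter's state into the shared file without clobbering others."""
    try:
        with open(STATE_FILE) as f:
            data = json.load(f)
    except FileNotFoundError:
        data = {}
    data[adapter] = state
    with open(STATE_FILE, "w") as f:
        json.dump(data, f)

# My use case: remember which executable each test file compiles into.
save_state("gtest", {"tests/foo_test.cpp": "build/foo_test"})
print(load_state("gtest"))
```

The point is less the implementation than the contract: an adapter hands over a small serializable table and gets it back on the next session, with the framework deciding where on disk it lives.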

Finally, I need some guidance with parametrized tests: is there a definitive way to work with them? E.g., @pytest.mark.parametrize in pytest or TEST_P in Google Test. These are really multiple tests masquerading as one, and I'm not sure how to report them: should they be different nodes in the tree? Or should it be a single test, with the adapter reporting all errors against that one test whenever it's run?
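To illustrate the "different nodes" option: pytest expands one parametrized function into several node IDs of the form file::test[param], and an adapter could mirror that by generating one child node per parameter set. A toy sketch of that expansion (the helper is hypothetical, not a pytest or plugin API):

```python
def expand_parametrized(file, test_name, param_ids):
    # Hypothetical helper: turn one parametrized test into one node ID per
    # parameter set, using pytest's "file::test[param]" naming convention.
    return [f"{file}::{test_name}[{p}]" for p in param_ids]

ids = expand_parametrized("test_math.py", "test_add", ["1-2", "3-4"])
print(ids)
# ['test_math.py::test_add[1-2]', 'test_math.py::test_add[3-4]']
```

Google Test's TEST_P instantiations (Suite/Instantiation.Test/0, /1, ...) would map onto the same idea, so whichever convention the core picks, both frameworks could feed it.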

Sorry for jamming all this into a single issue; if you think any of these should be worked on, I'll create separate ones.
