
Conversation

monksy commented Oct 22, 2025

This PR adds support for enforcing test coverage minimums for your project, and gives you the ability to break the build when test coverage falls below that threshold.

It reproduces the coverage-guarantee behavior provided by https://github.com/scoverage/sbt-scoverage.
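As a rough sketch of the intended usage (the coverageMinimum setting name below is only illustrative; validateCoverageMinimums is the task this PR adds, but check the diff for the exact API):

```scala
// build.mill -- sketch only; the mill-contrib-scoverage plugin must be on the
// build classpath (plugin import omitted here).
package build
import mill._, scalalib._
import mill.contrib.scoverage.ScoverageModule

object foo extends ScoverageModule {
  def scalaVersion = "2.13.16"
  def scoverageVersion = "2.1.1"

  // Illustrative setting: minimum coverage percentage before the build breaks
  def coverageMinimum = 80.0

  object test extends ScoverageTests with TestModule.Munit {
    def mvnDeps = Seq(mvn"org.scalameta::munit:1.0.0")
  }
}
```

Running mill foo.validateCoverageMinimums would then fail the build if the measured coverage falls below the configured minimum.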

monksy marked this pull request as draft October 22, 2025 05:49
monksy changed the title from Feature/update scala coverage metrics to Feature/update scala coverage metrics: Add validation checks for coverage minimums. Oct 22, 2025
monksy (Author) commented Oct 22, 2025

I'm not sure how to improve the test case to actually run coverage on code in the test. So I'm not sure how to test this thoroughly.

lefou (Member) commented Oct 22, 2025

I'm not sure how to improve the test case to actually run coverage on code in the test. So I'm not sure how to test this thoroughly.

You might want to look at / extend the example under example/scalalib/linting/2-contrib-scoverage, which serves both as an integration test and as documentation (see https://mill-build.org/mill/scalalib/linting.html#_code_coverage_with_scoverage).

Run it with mill example.scalalib.linting[2-contrib-scoverage].packaged.daemon.

monksy (Author) commented Oct 22, 2025

@lefou In the example where would I be able to run this new plugin functionality? It looks like the packaged functionality has something special set up, and it's not clear what's going on there.

Also, how would I force this to execute the tests before it as well?

lefou (Member) commented Oct 25, 2025

@monksy I'm not sure I understand what you're asking here. I'll try to explain nevertheless.

@lefou In the example where would I be able to run this new plugin functionality?

Please add a description of the added functionality to the PR description (first comment), otherwise it's reverse engineering and guesswork. Assuming you want to exercise that the build fails if a minimal test coverage is not met, you might need to add that case to the example project, e.g. by adding a sub-module and then running the check task for that sub-module.

It looks like the packaged functionality has something special set up, and it's not clear what's going on there.

The example tests run against a local build, which is all handled by Mill. You control everything via the build.mill file. All comments will be documentation, rendered in the docs. Special usage comments (/** Usage */) are the test cases. The prompt, e.g. > mill scoverage.consoleReport, is the command to be executed. The following lines are the expected output. Three dots (...) are wildcards. The test will fail if some expected output is missing. If you prepend an error: to the expected output, the test asserts that the command fails.
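So an added test case for this PR could look roughly like this in the example's build.mill (module names and output lines here are only illustrative):

```scala
/** Usage

> mill foo.scoverage.consoleReport
...
Statement coverage.: ...%
...

> mill failing.validateCoverageMinimums
error: ...

*/
```

where failing would be a sub-module whose coverage is intentionally below the configured minimum.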

Also, how would I force this to execute the tests before it as well?

I don't know what you mean.

monksy (Author) commented Oct 25, 2025

Please add a description of the added functionality to the PR description (first comment), otherwise it's reverse engineering and guesswork.

Done.

Assuming you want to exercise that the build fails if a minimal test coverage is not met, you might need to add that case to the example project, e.g. by adding a sub-module and then running the check task for that sub-module.

That's what I was trying to do. I know how to update the linting example. However, I'm not sure how you would execute that particular example with a specific task (validateCoverageMinimums) under the new build. That is what I was trying to ask in the previous question. It's not clear which commands are executed for that example, or how they are run.

The example tests run against a local build, which is all handled by Mill. You control everything via the build.mill file. All comments will be documentation, rendered in the docs. Special usage comments (/** Usage */) are the test cases. The prompt, e.g. > mill scoverage.consoleReport, is the command to be executed. The following lines are the expected output. Three dots (...) are wildcards. The test will fail if some expected output is missing. If you prepend an error: to the expected output, the test asserts that the command fails.

How is this configured and set up to render the documentation and execute those commands?

Also, how would I force this to execute the tests before it as well? I don't know what you mean.

I'm asking: how would I make the test task a dependency of the validateCoverageMinimums task?
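In other words, would I just reference the test task (or something derived from its output) inside the body of validateCoverageMinimums so that it becomes an upstream dependency? Roughly the usual Mill pattern, sketched here with plain, hypothetical targets rather than the PR's actual code:

```scala
import mill._

trait CoverageChecksSketch extends Module {
  // Hypothetical task that produces a coverage report file under its dest folder.
  def coverageReport = Task {
    val out = Task.dest / "coverage-report.txt"
    // ... run whatever produces the report and writes it to `out` ...
    PathRef(out)
  }

  // Referencing coverageReport() inside this body makes it an upstream
  // dependency, so Mill always evaluates it (and anything it depends on,
  // such as the test run) before this check executes.
  def validateCoverageMinimums = Task {
    val report = coverageReport().path
    // ... parse `report` and fail the build (e.g. sys.error(...)) if the
    // measured coverage is below the configured minimum ...
  }
}
```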

monksy marked this pull request as ready for review October 25, 2025 22:24