Create staging lint checks workflow #785
base: main
Conversation
This PR is moving things in the right direction. I have a few questions about how rules are evaluated and whether there's a way to infer what to check instead of relying on the user to supply that data.
run: |
  npm --version
  npm init -y
  npm i @microsoft.azure/openapi-validator-rulesets @stoplight/spectral-core @stoplight/json-ref-resolver js-yaml
Does this ensure installation of @microsoft.azure/openapi-validator-rulesets at whatever version is tagged latest on NPM? If so, I think this pipeline would only be checking released rules.
If that is the case, one possible improvement here would be to build and install the rulesets from the state of this repo in the PR. This would allow you to test changes to the rules right in your PR without releasing any packages.
I agree with Daniel. My idea was that this workflow would run on PRs to this repo and test the build of aov produced from your PR against specs from main.
on:
  pull_request:
    types: [opened, edited, synchronize, labeled, unlabeled]
The combination of different triggers (opened, edited, labeled, etc.) means that a PR could run different tests depending on user input. So, for example, if PR 1 edits rules A and B and has labels to check rules A and B against some specs, you might get the test coverage you want. But if someone like me creates PR 2 to edit rule B and add rule C, I would need to know to add labels to test those appropriately... and if I added the wrong label, the checks might look green but they wouldn't be fully testing my changes.
I am reviewing this with little knowledge of the code structure in the rest of the repository, but it could be worth considering whether you can test only the rules whose logic changed in a PR, if that is simple to do.
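For example, something along these lines could infer the rule names from the PR diff itself (a rough sketch only; the directory pattern and the file-name-to-rule-name mapping are assumptions, not this repo's actual layout):

// Hypothetical helper: derive rule names to test from the files changed in the PR.
// Assumes rule implementations live in files named after the rule and that
// origin/main has been fetched in the runner; both are assumptions.
const { execSync } = require("child_process")
const path = require("path")

function inferChangedRules(baseRef = "origin/main") {
  const changed = execSync(`git diff --name-only ${baseRef}...HEAD`, { encoding: "utf8" })
  return changed
    .split("\n")
    .filter((f) => /rulesets|functions/.test(f) && /\.(ts|js)$/.test(f)) // assumed location of rule code
    .map((f) => path.basename(f).replace(/\.(ts|js)$/, ""))
}

console.log(`Rules inferred from diff: ${inferChangedRules().join(", ") || "<none>"}`)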
test('no rules -> no-op with summary and 0 exit', () => {
  const outFile = path.join(artifactsDir, 'no-rules.txt')
  const res = run({ SPEC_ROOT: specsDir, RULE_NAMES: '', OUTPUT_FILE: outFile })
Running the entire program end to end is broader than typical unit testing and doesn't offer much documentation on the intended behavior of individual functions inside the program. In the future, consider moving functions out of run-selected-rules.js and into separate files where the functions themselves can be tested individually.
Test coverage is almost always good and this is a great start.
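As an illustration of the split (the file and function names here are hypothetical, and the test body works with either jest or vitest globals):

// .github/src/collect-specs.js: a pure helper extracted from the runner
const path = require("path")

function filterSpecPaths(files, specRoot) {
  // keep only JSON specs under <specRoot>/specification
  return files.filter(
    (f) => f.startsWith(path.join(specRoot, "specification") + path.sep) && f.endsWith(".json")
  )
}

module.exports = { filterSpecPaths }

// collect-specs.test.js: the helper can now be tested without running the whole tool
const { filterSpecPaths } = require("./collect-specs")

test("keeps only JSON files under specification/", () => {
  const files = ["specs/specification/widget/widget.json", "specs/specification/widget/readme.md"]
  expect(filterSpecPaths(files, "specs")).toEqual(["specs/specification/widget/widget.json"])
})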
@@ -0,0 +1,318 @@
"use strict"
I wonder if all of this is necessary. Would it be possible to instead run autorest using an AOV and rules built from the current state of code in the PR and parse the outputs? That could eliminate a lot of this code.
I agree we should try to make our test as close to the real end-to-end as possible, which means running autorest with a command like this (but using the locally built aov packages):
npm exec -- autorest \
--level=warning \
--v3 --spectral --azure-validator --semantic-validator=false --model-validator=false \
--use=$(pwd)/node_modules/@microsoft.azure/openapi-validator \
--openapi-type=arm --openapi-subtype=arm \
--tag=package-2021-11-01 \
specification/widget/resource-manager/Microsoft.Widget/Widget/readme.md
If autorest/aov doesn't already support it, the gap might be: how do we run only a subset of rules from the code? If we can't flow that data through a command-line option, we could probably use an env var in the worst case.
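For the env-var fallback, the filtering itself could be as small as this (a sketch; it assumes the ruleset exposes its rules as an object keyed by rule name, as the azARM snippet later in this PR suggests, and that aov can be handed a custom ruleset object, which I have not verified):

const { spectralRulesets } = require("@microsoft.azure/openapi-validator-rulesets")

// Filter a Spectral ruleset down to the rules named in RULE_NAMES; empty selection runs everything.
function selectRules(ruleset, envValue = process.env.RULE_NAMES || "") {
  const wanted = envValue.split(",").map((s) => s.trim()).filter(Boolean)
  if (wanted.length === 0) return ruleset
  const rules = Object.fromEntries(
    Object.entries(ruleset.rules).filter(([name]) => wanted.includes(name))
  )
  return { ...ruleset, rules }
}

const subset = selectRules(spectralRulesets.azARM)
console.log(`Selected ${Object.keys(subset.rules).length} rule(s)`)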
Assuming we still have some script code to run autorest on specs in a loop, this tool should be refactored to be similar to the "spec-model" tool in the specs repo (but ignore the "shared" folder):
- "bin" entrypoint in .github/package.json
- Small script under .github/cmd, e.g. https://github.com/Azure/azure-rest-api-specs/blob/main/.github/shared/cmd/spec-model.js
- Source code (unit testable) under .github/src, e.g. https://github.com/Azure/azure-rest-api-specs/blob/main/.github/shared/src/spec-model.js
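Roughly, that split could look like this (file names are hypothetical, mirroring the spec-model layout; the "bin" entry in .github/package.json would point at the cmd script):

// .github/cmd/staging-lint.js: thin entrypoint that reads env, calls into src, and sets the exit code
const { runSelectedRules } = require("../src/run-selected-rules")

async function main() {
  const exitCode = await runSelectedRules({
    ruleNames: process.env.RULE_NAMES || "",
    specRoot: process.env.SPEC_ROOT || "specs",
    failOnErrors: process.env.FAIL_ON_ERRORS === "true",
    outputFile: process.env.OUTPUT_FILE,
  })
  process.exit(exitCode)
}

main().catch((err) => {
  console.error(err)
  process.exit(1)
})

// .github/src/run-selected-rules.js would export runSelectedRules() and hold the actual
// logic, so it can be unit tested without spawning the CLI.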
const { spectralRulesets } = require('@microsoft.azure/openapi-validator-rulesets')
const tentative = spectralRulesets?.azARM?.rules ? Object.keys(spectralRulesets.azARM.rules) : []
const unknown = RULE_NAMES.filter(n => !tentative.includes(n))
if (tentative.length > 0 && unknown.length === RULE_NAMES.length) {
I'm skimming over this code with little context at the moment... I wonder whether a user supplying any unknown rule names should be a cause for a CI failure, especially if rules can be supplied as user input in PR comments.
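If we do want that to be a hard failure, the check quoted above could be tightened to reject any unknown name (a sketch reusing the same variables as the snippet):

// Fail CI if even one supplied rule name is unknown, so a typo in a label or PR
// comment doesn't silently skip the checks the author intended to run.
if (unknown.length > 0) {
  console.error(`Unknown rule name(s): ${unknown.join(", ")}`)
  console.error(`Known rules include: ${tentative.slice(0, 10).join(", ")}, ...`)
  process.exit(1)
}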
uses: actions/github-script@v7
with:
  script: |
    const labels = (context.payload.pull_request?.labels || []).map(l => l.name || "");
Move this code into a separate JS file under workflows/src, similar to the specs repo.
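A sketch of what that extraction might look like (the file name is illustrative; the test- label prefix and the "rules: A,B" body line follow what this PR's description says the workflow reads):

// workflows/src/select-rules.js: plain function, unit testable, kept out of the workflow YAML
function selectRuleNames(pullRequest) {
  const labels = (pullRequest?.labels || []).map((l) => l.name || "")
  const fromLabels = labels
    .filter((name) => name.startsWith("test-"))
    .map((name) => name.slice("test-".length))

  const bodyMatch = (pullRequest?.body || "").match(/^rules:\s*(.+)$/im)
  const fromBody = bodyMatch ? bodyMatch[1].split(",").map((s) => s.trim()) : []

  return [...new Set([...fromLabels, ...fromBody])].filter(Boolean)
}

module.exports = { selectRuleNames }

// The github-script step then reduces to requiring this file and calling
// selectRuleNames(context.payload.pull_request).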
- name: Setup Node.js
  uses: actions/setup-node@v4
  with:
    node-version: 18
Node 18 is deprecated. I would use Node 22 (latest LTS) unless you have a reason to use 20.
core.info(`Selected rules: ${value || "<none>"}`);

- name: Checkout specs repository
  uses: actions/checkout@v4
Checkout of the whole specs repo is expensive. Can we do a sparse checkout of just some folders? Do we test everything under specification, or a known subset of specs?
Move to .github/package.json, like the specs repo.
"@microsoft.azure/openapi-validator-rulesets": "^2.1.7", | ||
"@stoplight/spectral-core": "^1.18.3", | ||
"@stoplight/json-ref-resolver": "^3.1.6", |
"@microsoft.azure/openapi-validator-rulesets": "^2.1.7", | |
"@stoplight/spectral-core": "^1.18.3", | |
"@stoplight/json-ref-resolver": "^3.1.6", |
These shouldn't be necessary if we build aov from the repo and then run it.
"test": "jest --runInBand" | ||
}, | ||
"devDependencies": { | ||
"jest": "^29.7.0" |
"jest": "^29.7.0" | |
"vitest": "^3.1.2" |
We are using vitest instead of jest for all new tests going forward (and will be converting older projects to vitest eventually).
should be able to revert this after converting to vitest
@AkhilaIlla: @danieljurek and I were discussing, and we think this might work better if we put the workflow in the specs repo instead. It would work like this:
This has several benefits:
I can create a skeleton WF in the specs repo to help get you started.
This PR introduces a workflow to validate linter rules that are under development. It ensures these rules are tested against the specifications, allowing us to build confidence in them before merging and releasing to production.
How the staging-lint-checks workflow runs selected rules
Root workflow: staging-lint-checks.yaml
Triggers: pull_request events (opened, edited, synchronize, labeled, unlabeled)
Flow:
- Checks out this repo and sets up Node 18
- Derives RULE_NAMES from PR labels (prefixed with test-) or a “rules: A,B” line in the PR body
- Checks out azure-rest-api-specs into specs/
- Installs minimal deps for the runner
- Runs run-selected-rules.js with env: RULE_NAMES, SPEC_ROOT=specs/, FAIL_ON_ERRORS, OUTPUT_FILE
- The runner filters to JSON files under specification/
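For reference, a condensed sketch of the runner step described above (the real run-selected-rules.js in this PR is a bit over 300 lines per the diff; the rule evaluation itself is elided here and the helper names are simplified):

const fs = require("fs")
const path = require("path")

// Recursively collect JSON files under <specRoot>/specification
function listSpecs(specRoot) {
  const out = []
  const walk = (dir) => {
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
      const full = path.join(dir, entry.name)
      if (entry.isDirectory()) walk(full)
      else if (entry.name.endsWith(".json")) out.push(full)
    }
  }
  walk(path.join(specRoot, "specification"))
  return out
}

async function main() {
  const ruleNames = (process.env.RULE_NAMES || "").split(",").map((s) => s.trim()).filter(Boolean)
  const specs = listSpecs(process.env.SPEC_ROOT || "specs")

  let errorCount = 0
  // ...evaluate the selected rules against each spec and update errorCount (elided)...

  const summary = `Rules: ${ruleNames.join(", ") || "<none>"}\nSpecs checked: ${specs.length}\nErrors: ${errorCount}`
  if (process.env.OUTPUT_FILE) fs.writeFileSync(process.env.OUTPUT_FILE, summary)
  process.exit(process.env.FAIL_ON_ERRORS === "true" && errorCount > 0 ? 1 : 0)
}

main()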