Conversation

@AkhilaIlla (Collaborator) commented Aug 20, 2025:

This PR introduces a workflow to validate linter rules that are under development. It ensures these rules are tested against the specifications, allowing us to build confidence in them before merging and releasing to production.

How the staging-lint-checks workflow runs selected rules

- Root workflow: staging-lint-checks.yaml
- Triggers: pull_request events (opened, edited, synchronize, labeled, unlabeled)
- Flow:
  1. Checks out this repo and sets up Node 18
  2. Derives RULE_NAMES from PR labels with the test- prefix, or from a "rules: A,B" line in the PR body
  3. Checks out azure-rest-api-specs into specs/
  4. Installs minimal deps for the runner
  5. Runs the script run-selected-rules.js with env: RULE_NAMES, SPEC_ROOT=specs/, FAIL_ON_ERRORS, OUTPUT_FILE
  6. The runner filters to JSON files under the resource-manager/stable folders of key RPs in specification/ and executes only the selected Spectral rules
  7. Uploads artifacts/linter-findings.txt as a build artifact
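A rough sketch of the workflow skeleton described above (abbreviated; step names are illustrative and the "rules: A,B" PR-body parsing is omitted):

on:
  pull_request:
    types: [opened, edited, synchronize, labeled, unlabeled]

jobs:
  staging-lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Derive RULE_NAMES from PR labels
        id: rules
        uses: actions/github-script@v7
        with:
          result-encoding: string
          script: |
            const labels = (context.payload.pull_request?.labels || []).map(l => l.name || "");
            return labels.filter(n => n.startsWith("test-")).map(n => n.slice("test-".length)).join(",");
      - name: Checkout specs repository
        uses: actions/checkout@v4
        with:
          repository: Azure/azure-rest-api-specs
          path: specs
      - name: Install runner deps
        run: npm init -y && npm i @microsoft.azure/openapi-validator-rulesets @stoplight/spectral-core @stoplight/json-ref-resolver js-yaml
      - name: Run selected rules
        run: node run-selected-rules.js
        env:
          RULE_NAMES: ${{ steps.rules.outputs.result }}
          SPEC_ROOT: specs/
          FAIL_ON_ERRORS: "true"
          OUTPUT_FILE: artifacts/linter-findings.txt
      - name: Upload findings
        uses: actions/upload-artifact@v4
        with:
          name: linter-findings
          path: artifacts/linter-findings.txt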

@mikeharder mikeharder self-assigned this Aug 25, 2025
mikeharder and others added 2 commits August 25, 2025 11:31
… into akhilailla/add_workflow_to_simulate_test_pipeline_for-_linter_rules
@danieljurek (Member) left a comment:

This PR is moving things in the right direction. I have a few questions about how rules are evaluated and whether there's a way to infer what to check instead of relying on the user to supply that data.

run: |
  npm --version
  npm init -y
  npm i @microsoft.azure/openapi-validator-rulesets @stoplight/spectral-core @stoplight/json-ref-resolver js-yaml
Member:

Does this install @microsoft.azure/openapi-validator-rulesets at whatever version is tagged latest on NPM? If so, I think this pipeline would only be checking released rules.

If that is the case, one possible improvement would be to build and install the rulesets from the state of this repo in the PR. This would allow you to test changes to the rules right in your PR without releasing any packages.
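A rough sketch of what that could look like (the build command and the packages/rulesets path are assumptions about this repo's layout):

- name: Build and pack the rulesets from this PR
  run: |
    npm ci
    npm run build                     # whatever builds packages/rulesets in this repo
    npm pack ./packages/rulesets      # produces microsoft.azure-openapi-validator-rulesets-<version>.tgz
- name: Install the local build instead of the npm "latest" tag
  run: npm i ./microsoft.azure-openapi-validator-rulesets-*.tgz @stoplight/spectral-core @stoplight/json-ref-resolver js-yaml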

Member:

I agree with Daniel. My idea was that this workflow would run on PRs to this repo and test the build of aov produced from your PR against specs from main.


on:
  pull_request:
    types: [opened, edited, synchronize, labeled, unlabeled]
Member:

The combination of different triggers (opened, edited, labeled, etc.) means that a PR could run different tests depending on user input. So, for example, if PR 1 edits rules A and B and has labels to check rules A and B against some specs, you might get the test coverage you want. But if someone like me creates PR 2 that edits rule B and adds rule C, I would need to know to add labels to test those appropriately... and if I added the wrong label, the checks might look green but they wouldn't be fully testing my changes.

I am reviewing this with little knowledge of the code structure in the rest of the repository, but it could be worth considering whether you can test only the rules whose logic changes in a PR, if that is simple to do.
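One way to infer that (a sketch only; it assumes each rule's Spectral function lives in its own file under packages/rulesets/src/spectral/functions whose basename matches the rule name, which may not hold for every rule):

- name: Infer changed rules from the PR diff
  run: |
    git fetch origin "${{ github.base_ref }}" --depth=1
    # Turn changed rule-function files into a comma-separated RULE_NAMES list
    CHANGED=$(git diff --name-only FETCH_HEAD HEAD -- packages/rulesets/src/spectral/functions \
      | xargs -r -n1 basename | sed 's/\.[jt]s$//' | paste -sd, -)
    echo "RULE_NAMES=$CHANGED" >> "$GITHUB_ENV"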


test('no rules -> no-op with summary and 0 exit', () => {
  const outFile = path.join(artifactsDir, 'no-rules.txt')
  const res = run({ SPEC_ROOT: specsDir, RULE_NAMES: '', OUTPUT_FILE: outFile })
Member:

Running the entire program end to end is broader than typical unit testing and doesn't offer much documentation on the intended behavior of individual functions inside the program. In the future, consider moving functions out of run-selected-rules.js and into separate files where the functions themselves can be tested individually.

Test coverage is almost always good and this is a great start.
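For example, a hypothetical parseRuleNames helper pulled out of run-selected-rules.js could then be tested on its own (a sketch, using vitest as suggested later in this review):

// src/parse-rule-names.js — hypothetical extraction from run-selected-rules.js
export function parseRuleNames(raw) {
  // Split a comma-separated RULE_NAMES string into trimmed, de-duplicated names.
  return [...new Set((raw || "").split(",").map((s) => s.trim()).filter(Boolean))];
}

// test/parse-rule-names.test.js
import { describe, expect, it } from "vitest";
import { parseRuleNames } from "../src/parse-rule-names.js";

describe("parseRuleNames", () => {
  it("returns an empty list for empty input", () => {
    expect(parseRuleNames("")).toEqual([]);
  });
  it("trims and de-duplicates names", () => {
    expect(parseRuleNames(" RuleA, RuleB ,RuleA")).toEqual(["RuleA", "RuleB"]);
  });
});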

@@ -0,0 +1,318 @@
"use strict"
Member:

I wonder if all of this is necessary. Would it be possible to instead run autorest using an AOV and rules built from the current state of code in the PR and parse the outputs? That could eliminate a lot of this code.

Member:

I agree we should try to make our test as close to the real end-to-end run as possible, which means running autorest with a command like this (but using the locally built aov packages):

npm exec -- autorest \
  --level=warning \
  --v3 --spectral --azure-validator --semantic-validator=false --model-validator=false \
  --use=$(pwd)/node_modules/@microsoft.azure/openapi-validator \
  --openapi-type=arm --openapi-subtype=arm \
  --tag=package-2021-11-01 \
  specification/widget/resource-manager/Microsoft.Widget/Widget/readme.md

If autorest/aov doesn't already support it, the gap might be, how do we run only a subset of rules from the code? If we can't flow the data through a command-line option, we could probably use an env var in the worst case.
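In that worst case, the hook could look something like this (a sketch only; AOV_RULE_FILTER and the place it would plug into aov are hypothetical, not existing options):

const { spectralRulesets } = require("@microsoft.azure/openapi-validator-rulesets");

// Disable every Spectral rule except those named in AOV_RULE_FILTER (e.g. "RuleA,RuleB").
function applyRuleFilter(ruleset, rawFilter = process.env.AOV_RULE_FILTER) {
  const keep = new Set((rawFilter || "").split(",").map((s) => s.trim()).filter(Boolean));
  if (keep.size === 0) return ruleset; // no filter: run the full ruleset as usual
  for (const name of Object.keys(ruleset.rules)) {
    if (!keep.has(name)) ruleset.rules[name].severity = "off"; // "off" disables a rule in Spectral
  }
  return ruleset;
}

module.exports = { applyRuleFilter };
// usage: const filtered = applyRuleFilter(spectralRulesets.azARM);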

Member:

Assuming we still have some script code to run autorest on specs in a loop, this tool should be refactored to be similar to the "spec-model" tool in the specs repo (but ignore the "shared" folder):

  1. "bin" entrypoint in .github/package.json

     https://github.com/Azure/azure-rest-api-specs/blob/09695f5a97dfcf201766ea2f2e7e3b53aec49144/.github/shared/package.json#L30

  2. Small script under .github/cmd

     https://github.com/Azure/azure-rest-api-specs/blob/main/.github/shared/cmd/spec-model.js

  3. Source code (unit testable) under .github/src

     https://github.com/Azure/azure-rest-api-specs/blob/main/.github/shared/src/spec-model.js
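A sketch of that shape for this tool (all file and function names below are hypothetical; the "bin" entry in .github/package.json would point at the cmd script):

#!/usr/bin/env node
// .github/cmd/run-selected-rules.js — thin CLI wrapper, no logic of its own
import { runSelectedRules } from "../src/run-selected-rules.js";
await runSelectedRules({
  ruleNames: process.env.RULE_NAMES,
  specRoot: process.env.SPEC_ROOT,
  outputFile: process.env.OUTPUT_FILE,
});

// .github/src/run-selected-rules.js — unit-testable logic, no direct process/env access
export async function runSelectedRules({ ruleNames, specRoot, outputFile }) {
  // loop over specs under specRoot, run autorest/aov with the selected rules,
  // and write findings to outputFile
}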

const { spectralRulesets } = require('@microsoft.azure/openapi-validator-rulesets')
const tentative = spectralRulesets?.azARM?.rules ? Object.keys(spectralRulesets.azARM.rules) : []
const unknown = RULE_NAMES.filter(n => !tentative.includes(n))
if (tentative.length > 0 && unknown.length === RULE_NAMES.length) {
Member:

I'm skimming over this code with little context at the moment... but I wonder whether a user supplying any unknown rule name should be a cause for a CI failure (the check above only reacts when every supplied name is unknown), especially since rules can be supplied as user input on the PR.
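A sketch of the stricter behavior, building on the quoted snippet above:

// Fail on any unknown rule name so a typo in a label or PR body can't
// silently produce a green check that tested nothing extra.
if (tentative.length > 0 && unknown.length > 0) {
  console.error(`Unknown rule name(s): ${unknown.join(", ")}`);
  process.exit(1);
}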

uses: actions/github-script@v7
with:
  script: |
    const labels = (context.payload.pull_request?.labels || []).map(l => l.name || "");
Member:

Move this code into a separate JS file under workflows/src, similar to the specs repo. Example:

https://github.com/Azure/azure-rest-api-specs/blob/09695f5a97dfcf201766ea2f2e7e3b53aec49144/.github/workflows/arm-incremental-typespec.yaml#L55-L57
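A sketch of that split (the module path, function name, and import pattern here are approximations of the linked example, not the exact specs-repo code):

// .github/workflows/src/selected-rules.js
export function getSelectedRules({ core, context }) {
  const labels = (context.payload.pull_request?.labels || []).map((l) => l.name || "");
  const rules = labels.filter((n) => n.startsWith("test-")).map((n) => n.slice("test-".length));
  core.info(`Selected rules: ${rules.join(",") || "<none>"}`);
  return rules.join(",");
}

The inline script in the workflow then shrinks to something like:

uses: actions/github-script@v7
with:
  result-encoding: string
  script: |
    const { getSelectedRules } = await import("${{ github.workspace }}/.github/workflows/src/selected-rules.js");
    return getSelectedRules({ core, context });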

- name: Setup Node.js
  uses: actions/setup-node@v4
  with:
    node-version: 18
Member:

Node 18 is deprecated. I would use Node 22 (latest LTS) unless you have a reason to use 20.
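i.e. something like:

- name: Setup Node.js
  uses: actions/setup-node@v4
  with:
    node-version: 22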

core.info(`Selected rules: ${value || "<none>"}`);

- name: Checkout specs repository
  uses: actions/checkout@v4
@mikeharder (Member) commented Sep 5, 2025:

Checkout of the whole specs repo is expensive. Can we do a sparse checkout of just some folders? Do we test everything under specification, or a known subset of specs?
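For example, actions/checkout supports a sparse checkout (the folder list here is illustrative):

- name: Checkout specs repository (sparse)
  uses: actions/checkout@v4
  with:
    repository: Azure/azure-rest-api-specs
    path: specs
    sparse-checkout: |
      specification/network
      specification/compute
    sparse-checkout-cone-mode: true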

Member:

Move to .github/package.json, like the specs repo.

Comment on lines +12 to +14
"@microsoft.azure/openapi-validator-rulesets": "^2.1.7",
"@stoplight/spectral-core": "^1.18.3",
"@stoplight/json-ref-resolver": "^3.1.6",
Member:

Suggested change (remove these lines):

  "@microsoft.azure/openapi-validator-rulesets": "^2.1.7",
  "@stoplight/spectral-core": "^1.18.3",
  "@stoplight/json-ref-resolver": "^3.1.6",

These shouldn't be necessary if we build aov from the repo and then run it.

"test": "jest --runInBand"
},
"devDependencies": {
"jest": "^29.7.0"
Member:

Suggested change (replace jest with vitest):

  "jest": "^29.7.0"
  "vitest": "^3.1.2"

We are using vitest instead of jest for all new tests going forward (and will be converting older projects to vitest eventually).

Member:

should be able to revert this after converting to vitest

@mikeharder (Member) commented Sep 5, 2025:

@AkhilaIlla: @danieljurek and I were discussing, and we think this might work better if we put the workflow in the specs repo instead. It would work like this:

  1. Trigger the WF in the specs repo manually
  2. When triggering, specify parameters like "aov PR number", "rules to test", "specs to test", etc.
  3. The WF downloads private builds from the aov PR's artifacts (or clones and builds locally)
  4. The WF runs the rules against the selected specs, using the private build of aov

This has several benefits:

  1. We can test specs in any branch of specs or specs-pr. If the WF lives in the aov repo, it can only test the public specs repo.
  2. The specs repo already has a lot of shared code and example WFs.
  3. The WF could be slightly tweaked to allow testing published versions of aov in addition to private builds.
  4. Long-term, I'd like to move aov into the specs repo, so we don't need to publish/consume at all.

I can create a skeleton WF in the specs repo to help get you started.
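A possible shape for that skeleton's trigger (a sketch; input names are illustrative):

on:
  workflow_dispatch:
    inputs:
      aov-pr-number:
        description: "aov PR whose private build to test"
        required: true
      rules:
        description: "Comma-separated rule names to run"
        required: true
      specs:
        description: "Spec folders to lint, e.g. specification/widget"
        required: false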
