feat: Add comprehensive pipeline validation test infrastructure #271
Conversation
Claude code description: Introduce a new testing framework for validating pipeline checkers with both positive and negative test cases. This ensures pipeline validators work correctly before release and prevents regressions.

- Add tests/docs-test.yaml with comprehensive docs pipeline validation
  - Positive test: giflib-doc (real Wolfi package with valid docs)
  - Negative tests: glibc (non-docs package), binaries, empty packages
  - All negative tests capture and display checker output for debugging
- Add tests/README.md documenting test structure and best practices
- Configure tests to use provider-priority: 0 for proper Wolfi precedence
- Restructure test targets into three categories:
  - test-melange: Tests main tw package
  - test-projects: Tests individual project tools
  - test-pipelines: Tests pipeline validators (new)
- Add test-all target to run the complete test suite
- Add granular targets: build-pipeline-tests, run-pipeline-tests
- Fix MELANGE_TEST_OPTS to include proper repository configuration
  - Add TEST_DIR and TEST_OUT_DIR for test package isolation
  - Include both main and test package repositories
  - Add pipeline-dirs, keyring, and arch configuration
- Rename pattern rule from test-% to test-project/%
  - Prevents conflicts with the test-pipelines and test-melange targets
  - Uses a slash separator for clearer intent (e.g., test-project/gosh)
- Update clean target to remove the test package output directory
- Add informative echo statements for better CI/CD visibility
- Update workflow to use test-projects (renamed from test)
- Add test-pipelines step to validate pipeline checkers
  - Ensures all three test types run in CI
- Expand testing section with comprehensive test type documentation
  - Document all make targets with usage examples
  - Explain positive vs. negative test concepts
  - Add test files structure and purpose
- Rename "Testing locally" to "Testing with Original Repositories"
  - Clarify the workflow for testing changes in wolfi-os/enterprise-packages
  - Add an explanation of --repository-append usage
- Ignore .DS_Store files (macOS)
- Ignore tests/packages/ directory (test build artifacts)

Previously, pipeline validators were tested manually or not at all. This led to:
1. Regressions when modifying checkers
2. Inconsistent behavior across different package types
3. Difficulty validating edge cases

This infrastructure provides:
1. Automated validation of pipeline checkers
2. Both positive (should pass) and negative (should fail) test coverage
3. Clear documentation for adding new pipeline tests
4. Separation of test artifacts from main package builds
5. Reproducible local testing matching the CI environment

The test-pipelines target runs a clean build to ensure tests use the latest checker implementations, preventing false positives from stale builds.

Verified all test targets work correctly:
- make test-melange: ✓
- make test-projects: ✓
- make test-pipelines: ✓
- make test-all: ✓
- make test-project/package-type-check: ✓

Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
… staticpackage tests

Complete rewrite of tests/README.md to provide comprehensive guidance on writing pipeline validation tests, emphasizing critical requirements and common pitfalls. Add staticpackage-test.yaml with both positive and negative test scenarios.

**New Structure:**
- Focus on "how to write tests" rather than "what tests exist"
- Step-by-step guidance for creating new pipeline tests
- Comprehensive examples with explanations

**Critical Configuration Rules Section:**
1. Always use version `0.0.0` - explains precedence behavior
2. Set `provider-priority: 0` - explains Wolfi package testing
3. Don't test the main package - explains organizational benefits
4. Use subpackages for scenarios - explains test isolation

**Testing Real Wolfi Packages:**
- Detailed explanation of how to test real packages (giflib-doc, glibc)
- How version `0.0.0` + `provider-priority: 0` enables this
- Benefits: validates against real-world package structures

**Positive Test Guidelines:**
- Simple, focused examples
- No special test logic needed
- Create realistic package content

**Negative Test Requirements (Critical Section):**
Four critical requirements with detailed explanations:
1. `set +e` - why it's needed to continue after the checker fails
2. Capture output - debugging and documentation benefits
3. Validate failure - how to check exit codes correctly
4. Add package-type-check - why it must be in the environment

**Common Mistakes Section:**
- 6 common errors with solutions
- Based on real development experience
- Helps prevent repetitive debugging

**Test Checklist:**
- Actionable 14-point checklist
- Covers all critical requirements
- Ensures consistency across test files

**Best Practices:**
- 8 practical guidelines
- Emphasizes maintainability and clarity
- Real-world testing strategies

**Simplified Environment:**
- Changed from `build-base` + `busybox` to just `busybox`
- `busybox` provides `/bin/sh` and basic utilities (sufficient)
- Reduces unnecessary dependencies

Complete test suite for static package pipeline validation:

**Positive Tests:**
1. `gdcm-static` - Real Wolfi static package (production validation)
2. `contains-only-static` - Synthetic package with only .a files

**Negative Tests:**
3. `glibc` - Real Wolfi non-static package (should be rejected)
4. `contains-static-and-more` - Synthetic package with .so files (should be rejected)

All negative tests follow best practices:
- Use `set +e` to handle expected failures
- Capture and display checker output
- Validate rejection with proper exit codes

**Configuration:**
- Version `0.0.0` for proper precedence
- `provider-priority: 0` enables Wolfi package testing
- Minimal environment (busybox only)
- Main package has only a log line

**Version Fix:**
- Changed from `0.0.1` to `0.0.0` for consistency
- Ensures proper precedence with Wolfi packages

**Environment Simplification:**
- Removed `build-base` (not needed for simple tests)
- Kept only `busybox` (provides /bin/sh)

**Pipeline Clarification:**
- Changed from an empty echo to a descriptive message
- Clearly states this is a test package

Added a link to tests/README.md for detailed pipeline test documentation, making it discoverable from the main README.

The original tests/README.md was more of a catalog than a guide. It listed what tests existed but didn't explain how to create new ones or why certain patterns were important.

Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
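As a sketch of the negative-test pattern this commit describes (`set +e`, capture output, validate the failure, put `package-type-check` in the environment), a melange-style test subpackage might look like the following. The subpackage name and the exact checker invocation are illustrative assumptions, not taken from the PR:

```yaml
subpackages:
  - name: reject-non-docs
    test:
      environment:
        contents:
          packages:
            - busybox             # provides /bin/sh and basic utilities
            - package-type-check  # the checker must be in the test environment
      pipeline:
        - runs: |
            # The checker is EXPECTED to fail for a non-docs package,
            # so disable exit-on-error before invoking it.
            set +e
            output=$(package-type-check 2>&1)   # illustrative invocation
            rc=$?
            set -e
            # Capture and display the checker output for debugging.
            echo "$output"
            # Validate the failure: exit code 0 here means the checker
            # wrongly accepted the package, so the test itself must fail.
            if [ "$rc" -eq 0 ]; then
              echo "ERROR: checker accepted a package it should have rejected"
              exit 1
            fi
```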
The tests/ directory contains pipeline validation test packages, not project code with test targets. Exclude it from PROJECT_DIRS to prevent `make test-projects` from attempting to run `make -C tests test`.

Fixes CI error:

```
make[1]: *** No rule to make target 'test'. Stop.
make: *** [test-project/tests] Error 2
```

Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
I like the idea of using real-world packages to verify pipelines are working as expected. However, the proposed infrastructure results in a significant amount of code duplication (melange YAMLs for the test infra). I propose a slightly different approach: autogenerate them at test time. We could have a number of YAML files describing test cases. Then, we would have a runner script (in Python, Go, whatever) that, for each package, generates the same melange YAMLs as you envisioned, copies the pipelines as defined in the test case, executes the tests, and verifies the exit code. What do you think? We could also have several pipelines in a single test case: the behavior is the same, our runner generates the melange YAMLs for each package, copies all the pipelines, tests, and verifies the exit_code.
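The runner loop proposed here could be sketched in Go roughly as follows. Everything below (the `TestCase` struct, the `matches` helper, the `melange test` invocation) is a hypothetical illustration of the proposal, not code from the PR:

```go
package main

import (
	"fmt"
	"os/exec"
)

// TestCase mirrors the declarative format the reviewer sketches:
// one package, a set of pipelines, and an expected outcome.
type TestCase struct {
	Name       string
	Package    string
	Pipelines  []string
	ExpectPass bool
}

// matches reports whether an observed melange exit code satisfies the
// expectation: expect_pass means exit code 0, and a negative test
// "passes" exactly when the exit code is non-zero.
func matches(tc TestCase, exitCode int) bool {
	return tc.ExpectPass == (exitCode == 0)
}

// run executes a generated config with melange and verifies the exit
// code against the test case's expectation.
func run(tc TestCase, configPath string) bool {
	cmd := exec.Command("melange", "test", configPath)
	code := 0
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode()
		} else {
			code = 1 // melange not found or failed to start
		}
	}
	return matches(tc, code)
}

func main() {
	tc := TestCase{Name: "glibc rejected by docs checker", Package: "glibc", ExpectPass: false}
	// Simulate a checker failure (exit code 1): for a negative test,
	// that failure counts as a test pass.
	fmt.Println(matches(tc, 1)) // true
}
```

The key point of the design is that the runner, not each hand-written melange YAML, owns the "did this pass or fail as expected" logic.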
Implement declarative YAML-based test system that auto-generates Melange
configs from test case definitions, eliminating code duplication.
Key features:
- Single package per test (1:1 mapping) for clear test isolation
- Declarative test case format with expect_pass for positive/negative tests
- Test-time dependencies via test-dependencies field and --append-packages flag
- Directory-based organization (pass-*/ and fail-*/ directories)
- Detailed failure reporting with config paths and re-run commands
- Makefile integration with suite filtering support
Benefits:
- Eliminates repetitive Melange YAML boilerplate
- Easy to add new test cases (just add YAML entry)
- Tests real Wolfi packages against pipelines
- Clear 1:1 mapping between tests and packages
Usage:

```bash
make test-pipelines-autogen                # Run all tests
make test-pipelines-autogen/docs-pipeline  # Run specific suite
make generate-pipeline-tests               # Generate without running
```

Test case format:

```yaml
testcases:
  - name: Test description
    package: package-name
    pipelines:
      - uses: test/tw/pipeline
    expect_pass: true
    test-dependencies: [dep1, dep2]  # optional
```
Implementation:
- Go-based test runner in tests/pipeline-runner/
- Auto-generates Melange configs to tests/generated/
- Deduplicates dependencies (wolfi-base + global + per-test)
- Validates results against expectations
Documentation in tests/README.md and tests/AUTOGEN-TESTS.md
Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
Add constants for magic values, validation, and deterministic output:
- Define constants for common values (basePackage, defaultVersion, file permissions, directory names) to improve maintainability
- Add TestCase.Validate() method to catch missing required fields early
- Sort dependency packages for deterministic, reproducible output
- Enhance error messages to include file paths for easier debugging
- Add context cancellation checks in processing loops for graceful shutdown

Benefits:
- Errors now show full context: "failed to parse YAML from tests/foo.yaml"
- Validation catches issues like a missing package field before generation
- Dependencies always sorted: [busybox, curl, git, wolfi-base, zlib]
- Can interrupt long test runs with Ctrl+C

All existing functionality preserved, tests pass.

Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
Replace verbose/quiet flags with a single debug flag. Normal mode (default) shows test results without melange output. Debug mode shows full melange output and internal details. Update Makefile to use DEBUG=1 flag and update documentation accordingly. Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
aborrero left a comment:
good work! some comments inline.
```makefile
	@echo "==> Building pipeline test packages..."
	@rc=0; for test_file in $(TEST_PIPELINE_FILES); do \
		echo "Building $$test_file"; \
		$(MELANGE) build --runner docker $$test_file $(MELANGE_OPTS) --signing-key=${KEY} --pipeline-dir ${TOP_D}/pipelines --out-dir=${TEST_OUT_DIR} || rc=$$?; \
```
Maybe this --runner parameter can be omitted so melange uses whatever system config is present.
same for the other melange calls in this Makefile.
Yup thanks
this is still unresolved.
Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
aborrero left a comment:
some cleanups are still required.
```markdown
- Testing specific edge cases with custom configurations
- One-off tests that don't fit the autogen pattern

**Location:** `tests/*-test.yaml` files
```
we don't have any test in this category at the moment, no? I would suggest we completely remove this reference, otherwise this is just documenting dead code that doesn't exist.
**Autogenerated Approach** (testcases/docs-pipeline.yaml):
```yaml
# Clear 1:1 mapping between test cases and packages
name: Docs pipeline validation tests
```
this is documented in README.md
Duplicating the schema docs will create toil in the future when we need to make changes. I'd suggest having just one README file, with all the content.
Tests the pipeline validators located in `pipelines/test/tw/` using test packages in `tests/`.

```bash
make test-pipelines
```
maybe have a prompt here so it's in the same format as the other commands.
4. Runs pipeline validation tests against those packages

**Test files structure:**
- `tests/docs-test.yaml` - Tests the `pipelines/test/tw/docs.yaml` pipeline
we don't have any of this. I suggest we remove this until this code actually exists.
```makefile
	@echo "==> Building pipeline test packages..."
	@rc=0; for test_file in $(TEST_PIPELINE_FILES); do \
		echo "Building $$test_file"; \
		$(MELANGE) build --runner docker $$test_file $(MELANGE_OPTS) --signing-key=${KEY} --pipeline-dir ${TOP_D}/pipelines --out-dir=${TEST_OUT_DIR} || rc=$$?; \
```
this is still unresolved.
```makefile
test-pipelines:
	@echo "==> Running complete pipeline test suite..."
	$(MAKE) build-pipeline-tests
	$(MAKE) run-pipeline-tests
```
we don't have any of these tests at the moment. I suggest we remove support for these cases, and if we introduce them in a later PR, then we put that code there. Otherwise this PR would be introducing dead code with no usage.
Claude code description:

Introduce a new testing framework for validating pipeline checkers with both positive and negative test cases. This ensures pipeline validators work correctly before release and prevents regressions.

What's New:

1. Test Files
- tests/docs-test.yaml: Validates the docs pipeline
- tests/README.md: Documents test structure and best practices
- .gitignore: Ignore .DS_Store and tests/packages/ (build artifacts)

2. Test Targets
Reorganized into three categories:
- make test-melange: Tests the main tw package
- make test-projects: Tests individual project tools (e.g., make test-project/gosh)
- make test-pipelines: Tests pipeline validators (new)
- make test-all: Runs the complete test suite

3. Configuration Updates
- provider-priority: 0 for proper Wolfi precedence
- TEST_DIR and TEST_OUT_DIR for test package isolation
- MELANGE_TEST_OPTS with proper repository and keyring configuration

4. Pattern Rule Changes
- Renamed test-% to test-project/% to prevent naming conflicts
- Uses a slash separator for clearer intent (e.g., test-project/gosh)

5. CI/CD Updates
- test-pipelines step to validate checkers
- Granular targets: build-pipeline-tests, run-pipeline-tests

Verified Working: