This guide covers how to write and execute end-to-end tests for Reqvire. Tests validate complete functionality by running reqvire commands and comparing outputs against expected results.
From the root directory:
```shell
# Run all tests
./tests/run_tests.sh

# Run a single test
./tests/run_tests.sh test-diagram-generation
```

Tests use the debug binary:

```shell
cargo build
./tests/run_tests.sh
```

Each test is a directory containing:
```
test-feature-name/
├── test.sh              # Test execution script (REQUIRED)
└── specifications/      # Test markdown files
    ├── Requirements.md  # Test requirements
    └── Verifications/   # Test verifications
```
Use `ls tests/` to see all available test directories. Each `test-*` directory contains end-to-end tests for specific functionality.
```shell
#!/bin/bash
# Test: Feature Description
# Acceptance Criteria:
# - List specific behaviors to test
# - Define success conditions
#
# Test Criteria:
# - Command exits with success (0)
# - Output matches expected format
# - Files are modified as expected

# Run reqvire command
OUTPUT=$(cd "$TEST_DIR" && "$REQVIRE_BIN" command 2>&1)
EXIT_CODE=$?

# Save output for debugging
printf "%s\n" "$OUTPUT" > "${TEST_DIR}/test_results.log"

# Check exit code
if [ $EXIT_CODE -ne 0 ]; then
    echo "❌ FAILED: Command returned error: $EXIT_CODE"
    cat "${TEST_DIR}/test_results.log"
    exit 1
fi

# Perform specific validations
# ... test-specific checks ...

exit 0
```

Available in test scripts:
- `$TEST_DIR` - Temporary test directory with copied files
- `$REQVIRE_BIN` - Path to the reqvire binary
- A git repository is initialized in `$TEST_DIR`
The test runner (run_tests.sh):
- Creates a temporary directory for each test
- Copies test files to the temp directory
- Initializes a git repository
- Runs `test.sh` in the test context
- Reports pass/fail results
- Cleans up temporary files
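The per-test flow above can be sketched as a shell function. This is an illustrative approximation only — `run_one_test` and its details are hypothetical, not the actual run_tests.sh implementation:

```shell
#!/bin/bash
# Hypothetical sketch of the runner's per-test flow; the real
# run_tests.sh may differ in details.
run_one_test() {
    local test_dir="$1"
    local tmp
    tmp=$(mktemp -d)               # fresh temporary directory per test
    cp -r "$test_dir"/. "$tmp"/    # copy test files into it
    git -C "$tmp" init -q          # initialize a git repository
    git -C "$tmp" add -A
    git -C "$tmp" -c user.email=t@t -c user.name=t commit -qm init
    local status=0
    # Run test.sh with the environment variables tests expect
    if (cd "$tmp" && TEST_DIR="$tmp" REQVIRE_BIN="${REQVIRE_BIN:-}" bash ./test.sh); then
        echo "PASS: $(basename "$test_dir")"
    else
        echo "FAIL: $(basename "$test_dir")"
        status=1
    fi
    rm -rf "$tmp"                  # clean up the temporary directory
    return $status
}
```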
This is the preferred and REQUIRED pattern for file-modification tests. Instead of inline grep checks, tests MUST compare against expected output files using `diff -u`. This ensures:
- Clear visibility of what changed when tests fail
- Easy updates when intentional changes are made
- Deterministic, reproducible test results
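As a toy illustration of why this helps: when a `diff -u` comparison fails, it prints the exact divergent lines. The files below are invented purely for the demo:

```shell
# Two hypothetical files that differ in one line
printf 'line one\nline two\n' > /tmp/expected.md
printf 'line one\nline TWO\n' > /tmp/actual.md

# diff -u exits non-zero and pinpoints the divergence
if ! diff -u /tmp/expected.md /tmp/actual.md; then
    echo "mismatch reported above"
fi
```

The unified diff marks the old line with `-` and the new line with `+`, so a failing test shows precisely what changed rather than a bare "FAILED".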
```
test-feature-name/
├── test.sh                   # Test execution script
├── .reqvireignore            # Excludes expected/ from reqvire parsing
├── expected/                 # Expected output files (committed to git)
│   ├── 01-after-step1.md     # Expected state after step 1
│   ├── 02-after-step2.md     # Expected state after step 2
│   └── output.txt            # Expected command output
└── specifications/           # Initial test files
    └── Requirements.md
```
IMPORTANT - When to use `.reqvireignore` vs `.gitignore`:

- Use `.reqvireignore` (PREFERRED for most tests):
  - Excludes files from reqvire parsing ONLY
  - Files remain tracked by git and committed to the repository
  - Expected output files can be version controlled
  - Use this for tests with `expected/` directories that should be committed
- Use `.gitignore` (ONLY for specific .gitignore functionality tests):
  - Excludes files from BOTH git AND reqvire
  - Files are untracked and not committed to the repository
  - Only use in tests that specifically test .gitignore behavior:
    - `test-gitignore-integration` - Tests .gitignore integration
    - `test-crud-target-location-validation` - Tests .gitignore exclusion validation
For regular tests with expected output files, create a `.reqvireignore`:

```
# .reqvireignore in test directory
expected/
```
This prevents reqvire from parsing expected files (which contain # Elements headers that would cause duplicate element errors) while allowing them to be committed to git for version control.
```shell
#!/bin/bash
set -uo pipefail  # NOTE: Do NOT use -e; it causes silent failures with diff

TEST_SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Helper function to compare files and show diff on failure
assert_file_matches() {
    local expected="$1"
    local actual="$2"
    local description="$3"
    if ! diff -u "$expected" "$actual"; then
        echo "❌ FAILED: $description"
        echo ""
        echo "If changes are intentional, update $expected"
        exit 1
    fi
}

# Usage in tests:
cd "$TEST_DIR" && "$REQVIRE_BIN" some-command > /dev/null 2>&1
assert_file_matches "${TEST_SCRIPT_DIR}/expected/01-after-command.md" \
    "$TEST_DIR/specifications/Requirements.md" \
    "File content after command does not match expected"

echo "✅ Test passed"
```

When creating a new test or updating expected outputs:
```shell
# Set up a scratch test environment (run the cp from the repository root,
# so the tests/ path resolves)
mkdir /tmp/test-gen && cp -r tests/test-feature/* /tmp/test-gen/
cd /tmp/test-gen && git init && git add -A && git commit -m "init"

# Run commands and capture expected outputs
reqvire some-command
cp specifications/Requirements.md /path/to/tests/test-feature/expected/01-after-command.md
```

Always prefer the diff-based check over a bare grep:

```shell
# ❌ BAD - Silent failure, unclear what's wrong
if ! grep -q "expected content" "$FILE"; then
    echo "FAILED"
    exit 1
fi

# ✅ GOOD - Shows exactly what differs
if ! diff -u "$EXPECTED" "$ACTUAL"; then
    echo "FAILED - see diff above"
    exit 1
fi
```
```shell
# Run command that produces JSON
OUTPUT=$(cd "$TEST_DIR" && "$REQVIRE_BIN" model-summary --json 2>&1)

# Validate that the output is well-formed JSON
if ! echo "$OUTPUT" | jq . >/dev/null 2>&1; then
    echo "❌ FAILED: Output is not valid JSON"
    exit 1
fi

# Validate JSON structure
if ! echo "$OUTPUT" | jq 'has("files")' | grep -q true; then
    echo "❌ FAILED: JSON missing expected structure"
    exit 1
fi

# Check specific values
TOTAL=$(echo "$OUTPUT" | jq '.global_counters.total_elements')
if [ "$TOTAL" -ne 5 ]; then
    echo "❌ FAILED: Expected 5 elements, got $TOTAL"
    exit 1
fi
```
```shell
# Make backup for comparison
mkdir -p "$TEST_DIR/backup"
cp -r "$TEST_DIR/specifications" "$TEST_DIR/backup/"

# Run command that modifies files
OUTPUT=$(cd "$TEST_DIR" && "$REQVIRE_BIN" generate-diagrams 2>&1)

# Check files were modified
if cmp -s "$TEST_DIR/specifications/file.md" "$TEST_DIR/backup/specifications/file.md"; then
    echo "❌ FAILED: Expected file to be modified"
    exit 1
fi

# Check for specific content
if ! grep -q 'expected-content' "$TEST_DIR/specifications/file.md"; then
    echo "❌ FAILED: Missing expected content"
    exit 1
fi
```
```shell
# Test different filter combinations
FILTERS=(
    "--filter-type=requirement"
    "--filter-is-not-verified"
    "--filter-file=specifications/*.md"
    "--filter-name=.*auth.*"
    "--filter-content=SHALL"
)

for filter in "${FILTERS[@]}"; do
    OUTPUT=$(cd "$TEST_DIR" && "$REQVIRE_BIN" model-summary $filter --json 2>&1)
    if [ $? -ne 0 ]; then
        echo "❌ FAILED: Filter failed: $filter"
        exit 1
    fi
done
```
```shell
# Test invalid regex
OUTPUT=$(cd "$TEST_DIR" && "$REQVIRE_BIN" model-summary --filter-name="[invalid" 2>&1)
EXIT_CODE=$?

if [ $EXIT_CODE -eq 0 ]; then
    echo "❌ FAILED: Expected error for invalid regex"
    exit 1
fi

if ! echo "$OUTPUT" | grep -q "Invalid regex"; then
    echo "❌ FAILED: Expected 'Invalid regex' error message"
    exit 1
fi
```

Create testable requirements in specifications/Requirements.md:
```markdown
### Test Feature Requirement

The system SHALL generate diagrams when requested.

#### Metadata
* type: requirement

#### Relations
* verifiedBy: Verifications/Tests.md#diagram-generation-test
```

Create the verification in specifications/Verifications/Tests.md:
```markdown
### Diagram Generation Test

Test verifies that diagrams are generated correctly:
1. Run generate-diagrams command
2. Verify mermaid diagrams are added to files
3. Confirm REQVIRE-AUTOGENERATED-DIAGRAM markers present

Expected: Files contain valid mermaid diagrams with proper markers

#### Metadata
* type: test-verification

#### Relations
* verify: ../Requirements.md#test-feature-requirement
```
"$REQVIRE_BIN" model-summary --json
# With filters
"$REQVIRE_BIN" model-summary --filter-type="requirement"
"$REQVIRE_BIN" model-summary --filter-is-not-verified
"$REQVIRE_BIN" model-summary --filter-file="specifications/*.md"# Generate diagrams
"$REQVIRE_BIN" generate-diagrams
# Check for mermaid content
grep -q '```mermaid' specifications/file.md
# Verify autogenerated markers
grep -q 'REQVIRE-AUTOGENERATED-DIAGRAM' specifications/file.md# Preview changes (default dry-run mode)
"$REQVIRE_BIN" format
# Apply formatting
"$REQVIRE_BIN" format --fix# Analyze changes
"$REQVIRE_BIN" change-impact --git-commit=HEAD~1 --json# Generate HTML
"$REQVIRE_BIN" html
# Check output directory
test -d output/
test -f output/index.html# Generate traces
"$REQVIRE_BIN" traces --json
# Validate trace structure
echo "$OUTPUT" | jq '.traces | length'# Test logs are saved to test_results.log
```shell
# Test logs are saved to test_results.log
cat /path/to/test/directory/test_results.log
```
```shell
# Navigate to test directory
cd tests/test-feature-name

# Set environment variables
export TEST_DIR=$(pwd)
export REQVIRE_BIN="../../target/debug/reqvire"

# Run test script
./test.sh
```
```shell
# Check test specifications
ls tests/test-feature-name/specifications/

# Review test logic
cat tests/test-feature-name/test.sh
```

- Clear Test Names: Use descriptive test directory names
- Focused Tests: Each test should validate specific functionality
- Comprehensive Checks: Validate both success and error conditions
- Deterministic Results: Tests should produce consistent outputs
- Clean Specifications: Use minimal test data that demonstrates features
- Error Messages: Provide clear failure messages with context
- Documentation: Include acceptance criteria in test scripts
- Git Setup: Tests run in temporary git repositories
- Cleanup: Test runner handles cleanup automatically
- Fast Execution: Keep tests efficient for CI/CD pipelines
- Silent Success: Never add debug or echo outputs except for failure messages - tests should be silent on success
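Put together, the expected-file pattern and silent-on-success practice look like this self-contained sketch. The sandbox, file names, and file contents below are invented for the demo; a real test would run `$REQVIRE_BIN` against `$TEST_DIR` instead of writing the file directly:

```shell
#!/bin/bash
set -uo pipefail

# Throwaway sandbox standing in for $TEST_DIR plus an expected/ directory
SANDBOX=$(mktemp -d)
mkdir -p "$SANDBOX/specifications" "$SANDBOX/expected"
printf '# Elements\nThe system SHALL do X.\n' > "$SANDBOX/expected/01-after-command.md"

# Stand-in for the reqvire command that would rewrite the file
printf '# Elements\nThe system SHALL do X.\n' > "$SANDBOX/specifications/Requirements.md"

# Silent on success; on failure the diff itself explains what changed
if ! diff -u "$SANDBOX/expected/01-after-command.md" \
             "$SANDBOX/specifications/Requirements.md"; then
    echo "❌ FAILED: file does not match expected output"
fi
```

On success this prints nothing at all, matching the Silent Success rule above.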
- Create test directory: `mkdir tests/test-new-feature`
- Create test script:

  ```shell
  touch tests/test-new-feature/test.sh
  chmod +x tests/test-new-feature/test.sh
  ```

- Add test specifications:

  ```shell
  mkdir -p tests/test-new-feature/specifications/Verifications
  # Create markdown files with test requirements and verifications
  ```

- Run the test: `./tests/run_tests.sh test-new-feature`
- Add to the test suite (automatic - any `test-*` directory is included)