This document provides clear, enforceable guidelines for AI agents (and human developers) working on the Tethys.Results project to ensure consistent quality, testing standards, and documentation practices.
- NEVER write implementation code before tests
- ALWAYS verify tests fail before implementing
- ALWAYS run tests after implementation to verify they pass
- Code without documentation is incomplete
- Tests without descriptions are incomplete
- Features without examples are incomplete
## Pre-Implementation Verification

Before writing ANY implementation code, agents MUST:
- [ ] Review DEVELOPMENT-PLAN.md for the feature requirements
- [ ] Create or locate the test file for the feature
- [ ] Write comprehensive test cases covering:
- [ ] Happy path scenarios
- [ ] Error/failure scenarios
- [ ] Null parameter handling
- [ ] Edge cases and boundaries
- [ ] Thread safety (if applicable)
- [ ] Async variants (if applicable)
- [ ] Run tests to ensure they FAIL (Red phase)
- [ ] Commit the failing tests with message: "test: Add failing tests for [feature]"

## Implementation Verification

When implementing features, agents MUST:
- [ ] Write MINIMAL code to make tests pass
- [ ] Run ALL tests (not just new ones) to ensure no regressions
- [ ] Add XML documentation to ALL public members
- [ ] Ensure code follows existing patterns in the codebase
- [ ] Run code coverage to verify >95% coverage for new code
- [ ] Commit implementation with message: "feat: Implement [feature]"

## Post-Implementation Verification

After implementation, agents MUST:
- [ ] Refactor code while keeping tests green
- [ ] Update README.md if feature is user-facing
- [ ] Add usage examples to docs/examples.md
- [ ] Run performance benchmarks (if applicable)
- [ ] Update CHANGELOG.md with the new feature
- [ ] Create or update integration tests
- [ ] Verify thread safety with concurrent tests
- [ ] Commit refinements with message: "refactor: Improve [feature] implementation"

Create the following automated checks in CI/CD:

```yaml
# .github/workflows/quality-gates.yml
name: Quality Gates

on: [push, pull_request]

jobs:
  test-coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check Test-First Development
        run: |
          # Verify test commits exist before implementation commits
          # This script should check git history
          ./scripts/verify-test-first.sh
      - name: Run Tests with Coverage
        run: |
          dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=opencover
      - name: Check Coverage Threshold
        run: |
          # Fail if coverage < 95% for new code
          ./scripts/check-coverage-threshold.sh
      - name: Verify Documentation
        run: |
          # Check all public APIs have XML docs
          ./scripts/verify-documentation.sh
```

Create pre-commit hooks to enforce standards:
```bash
#!/bin/bash
# .git/hooks/pre-commit

# Check for staged implementation files without test files
if git diff --cached --name-only | grep -E "\.cs$" | grep -v "Test" > /dev/null; then
    echo "⚠️ Warning: Committing implementation files without test files"
    echo "Have you written tests first? (y/n)"
    read answer < /dev/tty  # hooks run without a terminal on stdin
    if [ "$answer" != "y" ]; then
        echo "❌ Commit aborted. Please write tests first."
        exit 1
    fi
fi

# Check for missing XML documentation
if ! dotnet build -p:TreatWarningsAsErrors=true -p:NoWarn=""; then
    echo "❌ Build failed. Missing XML documentation?"
    exit 1
fi
```

Create a PR template that enforces the checklist:
<!-- .github/pull_request_template.md -->
## Description
Brief description of changes
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update
## Testing Checklist
- [ ] Tests written BEFORE implementation
- [ ] All tests pass
- [ ] Coverage >95% for new code
- [ ] No regression in existing tests
- [ ] Thread safety tests added (if applicable)
- [ ] Performance benchmarks run (if applicable)
## Documentation Checklist
- [ ] XML documentation added for all public APIs
- [ ] README.md updated (if user-facing)
- [ ] Examples added to docs/
- [ ] CHANGELOG.md updated
## Code Quality
- [ ] Follows existing code patterns
- [ ] No compiler warnings
- [ ] Ran code formatter
- [ ] Peer review requested
## Evidence of Test-First Development
Provide links to commits showing:
1. Test commit (failing tests):
2. Implementation commit (tests passing):
3. Refactoring commit (if applicable):

Agents MUST structure their responses as follows:
## Implementing [Feature Name]
### Step 1: Writing Tests
I'll first create comprehensive tests for [feature]...
[Show test code]
[Run tests to show they fail]
### Step 2: Implementation
Now I'll implement the minimum code to make tests pass...
[Show implementation code]
[Run tests to show they pass]
### Step 3: Documentation and Refactoring
Adding documentation and improving code quality...
[Show final code with docs]
[Run coverage report]

Agents MUST NOT:
- Write implementation code without showing tests first
- Commit code without running tests
- Add features without updating documentation
- Skip error handling or null checks
- Ignore thread safety considerations
- Use placeholder or TODO comments in production code
Agents MUST:
- Show test execution results in their responses
- Include code coverage reports for new code
- Demonstrate that tests fail before implementation
- Update all affected documentation
- Follow the exact naming conventions in the codebase
- Use meaningful commit messages following conventional commits
Example verification scripts:

```bash
#!/bin/bash
# scripts/verify-test-first.sh
# Verify test files are committed before implementation files
for file in $(git diff --name-only HEAD~1); do
    if [[ $file == *"Test"* ]]; then
        echo "✓ Test file found: $file"
    elif [[ $file == *.cs ]]; then
        # Check if a corresponding test file exists
        # (anchor the replacement to the .cs suffix)
        testfile="${file%.cs}Tests.cs"
        if [ ! -f "$testfile" ]; then
            echo "❌ Missing test file for: $file"
            exit 1
        fi
    fi
done
```

```bash
#!/bin/bash
# scripts/check-coverage-threshold.sh
# Check code coverage meets threshold
coverage=$(dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=json | grep -o '"line":[0-9.]*' | cut -d: -f2)
threshold=95
if (( $(echo "$coverage < $threshold" | bc -l) )); then
    echo "❌ Coverage $coverage% is below threshold of $threshold%"
    exit 1
else
    echo "✓ Coverage $coverage% meets threshold"
fi
```

Here's an example of how an agent should approach implementing the Match feature:
User: Implement the Match method for Result<T>
Agent: I'll implement the Match method following TDD principles. Let me start by writing comprehensive tests.
## Step 1: Writing Tests First
First, I'll create tests for the Match functionality:
[Creates MatchTests.cs with all test cases]
Now let me run these tests to ensure they fail:
[Runs: dotnet test --filter "FullyQualifiedName~MatchTests"]
[Shows failing test output]
✓ Tests are failing as expected (Red phase)
## Step 2: Implementing Match Method
Now I'll implement the minimum code to make these tests pass:
[Shows Match implementation in Result.cs and GenericResult.cs]
Let me run the tests again:
[Runs: dotnet test --filter "FullyQualifiedName~MatchTests"]
[Shows all tests passing]
✓ All tests are now passing (Green phase)
## Step 3: Documentation and Code Coverage
Adding XML documentation and checking coverage:
[Shows updated code with XML documentation]
[Runs: dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=opencover]
[Shows coverage report: 98% coverage for new code]
✓ Documentation complete and coverage exceeds 95%
## Step 4: Updating Project Documentation
[Updates README.md with Match examples]
[Updates CHANGELOG.md]
[Creates example in docs/examples.md]
The Match feature is now complete with tests, implementation, and documentation.

- Review git history for TDD compliance
- Check coverage trends
- Audit documentation completeness
- Review agent interactions for compliance
- Test-to-implementation commit ratio
- Code coverage percentage
- Documentation coverage (public APIs with XML docs)
- Build failure rate due to quality gates
- Time from test to implementation
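The test-to-implementation ratio above can be computed from commit subjects. This is a hedged sketch, not a script that exists in the repo: the sample log is made up, and in practice you would pipe `git log --pretty=%s` instead of a hard-coded string.

```shell
# Sketch: test-to-implementation commit ratio from commit subjects.
# The sample log below is made up; in practice use: git log --pretty=%s
log='test: Add failing tests for Match
feat: Implement Match
test: Add failing tests for Equality
feat: Implement Equality
docs: Update README'
tests=$(printf '%s\n' "$log" | grep -c '^test')
feats=$(printf '%s\n' "$log" | grep -c '^feat')
echo "test:feat commit ratio = $tests:$feats"
```

A ratio well below 1:1 suggests implementation commits are landing without preceding test commits.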
```bash
# 1. First, check the plan
cat DEVELOPMENT-PLAN.md | grep -A 20 "feature-name"

# 2. Create test file FIRST
touch test/Tethys.Test/FeatureNameTests.cs

# 3. Write failing tests
# ... implement tests ...

# 4. Run tests to verify they fail
dotnet test --filter "FeatureNameTests"

# 5. Implement feature
# ... implement in src/Tethys.Results/...

# 6. Run tests again
dotnet test --filter "FeatureNameTests"

# 7. Check coverage
dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=opencover

# 8. Update docs
# - Add XML docs to all public members
# - Update README.md if user-facing
# - Update CHANGELOG.md
```

When implementing any feature, structure your response like this:
## Implementing [Feature Name]

### Step 1: Understanding Requirements
[Brief summary of what needs to be implemented]

### Step 2: Writing Tests First (Red Phase)
```csharp
// Show test code here
```
[Run tests and show they fail]

### Step 3: Implementation (Green Phase)
```csharp
// Show implementation code here
```
[Run tests and show they pass]

### Step 4: Documentation and Refactoring
- Added XML documentation
- Updated README.md (if applicable)
- Updated CHANGELOG.md
- Coverage: XX%

### Step 5: Final Verification
[Show final test run and coverage report]
### 🛑 Never Do These
1. ❌ Write implementation before tests
2. ❌ Commit without running tests
3. ❌ Skip XML documentation
4. ❌ Use `Console.WriteLine` (use proper patterns)
5. ❌ Leave TODO comments in production code
6. ❌ Ignore thread safety
7. ❌ Skip null parameter validation
8. ❌ Implement features not in DEVELOPMENT-PLAN.md
### ✅ Always Do These
1. ✅ Write tests first (TDD)
2. ✅ Check tests fail before implementing
3. ✅ Run ALL tests, not just new ones
4. ✅ Add XML docs to ALL public members
5. ✅ Update README.md for user-facing changes
6. ✅ Follow existing code patterns
7. ✅ Check coverage is >95% for new code
8. ✅ Handle null parameters appropriately
### 🔧 Useful Commands
```bash
# Run all tests
dotnet test
# Run specific test class
dotnet test --filter "FullyQualifiedName~MatchTests"
# Run with coverage
dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=opencover
# Format code
dotnet format
# Build with warnings as errors
dotnet build -warnaserror
# Check what files changed
git status
git diff --cached
# Run verification scripts
./scripts/verify-test-first.sh
./scripts/check-coverage-threshold.sh
./scripts/verify-documentation.sh
```

### 📝 Commit Message Convention

```
type(scope): description
```

Types: feat, fix, docs, test, refactor, perf, chore
Scope: Result, Result<T>, Extensions, etc.

Examples:

```
feat(Result): Add Match method for pattern matching
test(Match): Add comprehensive tests for Match feature
docs(README): Update examples for Match method
```
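The convention above could also be machine-checked in a `commit-msg` hook. This is a sketch under assumptions: the type list mirrors the one given here, and in a real `.git/hooks/commit-msg` hook the message would be read from the file passed as `$1` rather than hard-coded.

```shell
# Sketch of a commit-msg check for the convention above.
# In a real .git/hooks/commit-msg hook, the message comes from "$1".
msg="feat(Result): Add Match method for pattern matching"
if printf '%s' "$msg" | grep -qE '^(feat|fix|docs|test|refactor|perf|chore)(\([^)]+\))?: .+'; then
    echo "valid commit message"
else
    echo "invalid commit message"
    # in a hook: exit 1
fi
```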
For EVERY feature, ensure tests for:
- Happy Path - Normal successful operation
- Error Cases - Various failure scenarios
- Null Handling - Null parameters, null values
- Edge Cases - Empty collections, boundaries
- Thread Safety - Concurrent access (if applicable)
- Async Behavior - For async methods
- Type Variations - Value types, reference types, nullable
Coverage requirements:

- New code: >95% coverage
- Overall project: >90% coverage

Check coverage with:

```bash
dotnet test /p:CollectCoverage=true
```
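One practical note on enforcing the threshold: `bc` (used in `check-coverage-threshold.sh` above) is missing from some minimal CI images, while `awk` handles floating-point comparison and is virtually always present. A sketch of the comparison with a hard-coded sample value; a real script would parse the coverage figure from the coverlet output:

```shell
# Sketch: float-safe threshold comparison without bc.
coverage=92.5   # sample value; a real script parses this from coverlet output
threshold=95
if awk -v c="$coverage" -v t="$threshold" 'BEGIN { exit !(c < t) }'; then
    echo "❌ Coverage $coverage% is below threshold of $threshold%"
else
    echo "✓ Coverage $coverage% meets threshold"
fi
```

`awk` exits 0 when the coverage is below the threshold, so the `if` branch reports the failure case.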
Run this checklist:

```bash
# 1. All tests pass?
dotnet test

# 2. Coverage good?
dotnet test /p:CollectCoverage=true

# 3. No warnings?
dotnet build -warnaserror

# 4. Docs complete?
./scripts/verify-documentation.sh

# 5. Following TDD?
./scripts/verify-test-first.sh
```

"The test is the first user of your code. If it's hard to test, it's hard to use."
Always think about the developer experience when designing APIs!
When working as part of a multi-agent team, you MUST follow this workflow:

```bash
# MANDATORY: Create your feature branch
git checkout -b feature/agent-N-description
git branch --show-current  # MUST show this output
# Expected output: feature/agent-N-description

# MANDATORY: Create progress tracking
mkdir -p progress
echo "# Agent N Status - $(date)" > progress/agent-N-status.md
ls progress/  # MUST show this output
# Expected output: agent-N-status.md

# MANDATORY: Update progress
echo "## $(date) - Completed: [description]" >> progress/agent-N-status.md

# MANDATORY: Run verification scripts
./scripts/verify-test-first.sh           # MUST show output
./scripts/check-coverage-threshold.sh    # MUST show output
./scripts/verify-documentation.sh        # MUST show output

# When writing tests - MUST show failure first
dotnet test --filter "NewFeatureTests"
# Expected output: Failed! - Failed: X, Passed: 0

# After implementation - MUST show success
dotnet test --filter "NewFeatureTests"
# Expected output: Passed! - Failed: 0, Passed: X

# MANDATORY: Show final status
git status
git log --oneline -5

# MANDATORY: Create completion marker
touch progress/READY-agent-N
ls progress/READY-*  # MUST show this output
```

## Agent 3: Implementing Equality
### Branch Setup
```bash
git checkout -b feature/agent-3-equality
# OUTPUT: Switched to a new branch 'feature/agent-3-equality'
git branch --show-current
# OUTPUT: feature/agent-3-equality

mkdir -p progress
echo "# Agent 3 Status - $(date)" > progress/agent-3-status.md
ls progress/
# OUTPUT: agent-3-status.md
```

### Writing Tests (Red Phase)
[test code here]
```bash
dotnet test --filter "EqualityTests"
# OUTPUT: Failed! - Failed: 15, Passed: 0
# ❌ Tests failing as expected
```

### Implementation (Green Phase)
[implementation code here]
```bash
dotnet test --filter "EqualityTests"
# OUTPUT: Passed! - Failed: 0, Passed: 15
# ✅ All tests passing
```

### Verification
```bash
./scripts/verify-test-first.sh
# OUTPUT: ✅ Test-first verified for: Result.cs, GenericResult.cs
./scripts/check-coverage-threshold.sh
# OUTPUT: ✅ Coverage 98% meets threshold of 95%

touch progress/READY-agent-3
ls progress/READY-*
# OUTPUT: progress/READY-agent-3
```
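A coordinator (human or agent) could gate the merge on every READY marker being present. This is a sketch under assumptions: the marker naming follows the convention above, the agent count is fixed at three for illustration, and a temp directory stands in for the real `progress/` directory.

```shell
# Sketch: merge gate that waits for every agent's READY marker.
progress=$(mktemp -d)   # stands in for the real progress/ directory
agents=3
touch "$progress/READY-agent-1" "$progress/READY-agent-2" "$progress/READY-agent-3"
set -- "$progress"/READY-agent-*   # glob the markers; $# is the count
if [ "$#" -eq "$agents" ]; then
    echo "all $agents agents ready - safe to merge"
else
    echo "waiting: $# of $agents agents ready"
fi
```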
### Common Mistakes to AVOID
1. ❌ **Working without creating a branch**
   ```bash
   # WRONG - No branch creation shown
   "I'll implement the equality feature..."
   ```

2. ❌ **Not showing command outputs**
   ```bash
   # WRONG - No output shown
   dotnet test
   "Tests are passing"
   ```

3. ❌ **Skipping verification scripts**
   ```bash
   # WRONG - No script execution
   "Implementation complete"
   ```

4. ❌ **No progress tracking**
   ```bash
   # WRONG - No status files created
   "Moving on to the next task"
   ```

5. ❌ **BYPASSING QUALITY CHECKS**
   ```bash
   # ABSOLUTELY FORBIDDEN without explicit user consent
   git commit --no-verify
   dotnet build -p:TreatWarningsAsErrors=false
   # NEVER suggest or use these without user explicitly asking
   ```
NEVER bypass or work around quality checks:
- Pre-commit hooks exist for a reason
- Build warnings must be fixed, not ignored
- Test failures must be resolved, not skipped
- Coverage thresholds must be met, not lowered
If blocked by quality checks:
- Fix the underlying issue
- If truly blocked, explain the issue to the user
- ONLY bypass with explicit user consent like "Yes, use --no-verify"
- Document why the bypass was necessary
Remember: The goal is quality code, not just completed tasks
By following these guidelines and using the provided enforcement mechanisms, we ensure that all contributors (human and AI) maintain high standards for testing and documentation. The automated checks and clear processes make it difficult to bypass these requirements, resulting in a more maintainable and reliable codebase.