---
<!-- List of authors who contributed to this decision. Include full names and roles if applicable. -->
authors:
- Martin Stühmer

<!--
The patterns this decision applies to. Each entry is a glob pattern that matches files affected by this decision.
-->
applyTo:
- "**/*.csproj"
- "**/*.fsproj"
- "**/*.vbproj"
- "**/*.sln"
- "**/*.slnx"
- "**/tests/**/*.cs"
- "AGENTS.md"
- ".github/copilot-instructions.md"

<!-- The date this ADR was initially created in YYYY-MM-DD format. -->
created: 2025-11-01

<!--
The most recent date this ADR was updated in YYYY-MM-DD format.
IMPORTANT: Update this field whenever the decision is modified.
-->
lastModified: 2025-11-01

<!--
The current state of this ADR. If superseded, include references to the superseding ADR.
Valid values: proposed, accepted, deprecated, superseded
-->
state: accepted

<!--
A compact, LLM-compatible definition of this decision.
This should be a precise, structured description that AI systems can easily parse and understand.
Include the core decision, key rationale, and primary impact in 1-2 concise sentences.
-->
instructions: |
  AI agents are authorized to execute dotnet build, restore, and test commands for all projects in the solution to verify code correctness and maintain quality standards.
  All operations MUST be executed at the solution level using the .sln or .slnx file found in the repository root directory.
  Before committing changes, AI agents MUST run restore, build, and test operations to ensure no regressions are introduced.
---
# Decision: AI Agent Authorization for Build, Restore, and Test Operations

AI coding assistants such as GitHub Copilot are authorized to execute `dotnet build`, `dotnet restore`, and `dotnet test` commands for all projects within the solution to verify code correctness, ensure compilation success, and validate functionality through automated tests.

## Context

AI coding assistants are increasingly capable of making complex code changes across multiple files and projects. However, without the ability to verify their changes, several issues arise:

- **Undetected Compilation Errors**: Code changes may introduce syntax errors, missing references, or type mismatches that prevent successful compilation.
- **Broken Dependencies**: Package references may be incomplete or incorrect, causing restore failures.
- **Regression Bugs**: Changes may break existing functionality that is covered by automated tests.
- **Integration Issues**: Multi-project changes may compile individually but fail when integrated.
- **Quality Degradation**: Without verification, the quality of AI-generated code cannot be assured before commit.
- **Developer Burden**: Human developers must manually verify all AI-generated changes, reducing productivity gains.

To maximize the effectiveness of AI coding assistants while maintaining code quality, agents need the capability to validate their own work through standard .NET development operations.

## Decision

We authorize AI coding assistants to execute the following .NET CLI commands (a minimal usage sketch follows the list):

1. **`dotnet restore`**: Restore NuGet packages and dependencies for projects and solutions.
2. **`dotnet build`**: Compile projects and solutions to verify code correctness and identify compilation errors.
3. **`dotnet test`**: Execute unit tests, integration tests, and other automated tests to validate functionality.
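
The sketch below shows these commands run at the solution level; `MySolution.slnx` is a placeholder name for whatever `.sln` or `.slnx` file actually sits in the repository root.

```bash
# Placeholder solution name; substitute the real .sln/.slnx file from the repository root.
dotnet restore MySolution.slnx
dotnet build MySolution.slnx --no-restore
dotnet test MySolution.slnx --no-build
```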

### Implementation Guidelines

AI agents MUST:

- Detect the solution file (`.sln` or `.slnx`) in the repository root directory and use it for all build, restore, and test operations.
- Execute all commands at the solution level to ensure consistency across all projects and detect integration issues (see the workflow sketch after this list).
- Execute `dotnet restore` before building if package references have been added or modified.
- Execute `dotnet build` after making code changes to verify compilation success.
- Execute `dotnet test` after code changes that may affect functionality to ensure no regressions.
- Report compilation errors, warnings, and test failures clearly to the user.
- Iterate on fixes when errors or test failures are detected.
- Use appropriate command-line options (e.g., `--configuration Release`, `--no-restore`) as context requires.
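
The following is a minimal sketch of one way these requirements could be satisfied from a POSIX shell; the solution-detection logic and the Release configuration are illustrative assumptions, not mandated tooling.

```bash
#!/usr/bin/env bash
# Illustrative pre-commit verification sequence (not an official script).
set -euo pipefail

# Locate the solution file (.sln or .slnx) in the repository root.
shopt -s nullglob
candidates=( ./*.sln ./*.slnx )
if (( ${#candidates[@]} == 0 )); then
  echo "ERROR: no .sln or .slnx file found in the repository root." >&2
  exit 1
fi
sln="${candidates[0]}"

# Restore, build, and test at the solution level before committing.
dotnet restore "$sln"
dotnet build "$sln" --configuration Release --no-restore
dotnet test "$sln" --configuration Release --no-build
```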

AI agents SHOULD:

- Use `--verbosity` options appropriately to provide useful diagnostic information (illustrated below).
- Consider build performance implications and avoid unnecessary rebuilds.
- Verify that the solution file exists before executing commands and report an error if missing.
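
As a sketch of the first two points, the verbosity levels below are standard `dotnet` CLI values (`quiet`, `minimal`, `normal`, `detailed`, `diagnostic`); `$sln` refers to the solution path located in the workflow sketch above, and the existence check itself is already shown there.

```bash
# Keep diagnostic output readable while avoiding redundant restore and rebuild work.
dotnet build "$sln" --no-restore --verbosity minimal
dotnet test "$sln" --no-build --verbosity normal
```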

AI agents MAY:

- Execute `dotnet clean` before building to ensure a clean build environment.
- Use additional CLI options (e.g., `--framework`, `--runtime`) when targeting specific configurations.
- Run specific test filters (e.g., `--filter Category=Unit`) when appropriate, as shown below.
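
The examples below sketch these optional operations; the target framework `net10.0`, runtime identifier `linux-x64`, and test category `Unit` are placeholder values chosen for illustration.

```bash
dotnet clean "$sln"                           # start from a clean build output
dotnet build "$sln" --framework net10.0      # target a specific framework (placeholder TFM)
dotnet build "$sln" --runtime linux-x64      # target a specific runtime identifier
dotnet test "$sln" --filter "Category=Unit"   # run only a filtered subset of tests
```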

## Consequences

### Positive Consequences

- **Improved Code Quality**: AI-generated code is validated before being presented to developers.
- **Faster Development Cycles**: Errors are caught and fixed immediately by the AI agent rather than discovered later by developers.
- **Reduced Human Oversight**: Developers can trust that AI-generated changes compile and pass tests.
- **Better Error Context**: AI agents can see and respond to actual compiler and test errors rather than relying on static analysis alone.
- **Continuous Verification**: Changes are validated in real-time as part of the AI's workflow.
- **Test-Driven Development Support**: AI agents can run tests iteratively while implementing features.

### Potential Negative Consequences

- **Resource Usage**: Build and test operations consume CPU, memory, and disk I/O resources.
- **Time Overhead**: Each verification cycle adds time to AI operations.
- **Potential for Build Loops**: Poorly designed fixes could lead to repeated failed build attempts.
- **Complexity**: AI agents need to interpret and respond appropriately to build and test output.

### Mitigation Strategies

- Use incremental builds where possible to minimize resource usage.
- Set reasonable timeout limits for build and test operations (see the sketch after this list).
- Implement clear error handling and reporting mechanisms.
- Monitor for and prevent infinite fix-attempt loops.
- Allow users to disable automatic verification if needed.
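
One possible shape for the timeout and loop-prevention points is sketched below; `timeout` is the GNU coreutils utility, and the ten-minute limit and three-attempt cap are arbitrary example values.

```bash
# Bound wall-clock time and the number of fix attempts (example limits only).
for attempt in 1 2 3; do
  if timeout 600 dotnet build "$sln" --no-restore; then
    echo "Build succeeded on attempt $attempt."
    break
  fi
  echo "Build failed on attempt $attempt; apply a fix before retrying." >&2
done
```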

## Alternatives Considered

### 1. Manual Verification Only

**Description**: Require human developers to manually run build and test commands after AI-generated changes.

**Why Not Chosen**:
- Reduces the value proposition of AI assistance
- Increases developer workload and slows down development
- Errors are discovered later in the workflow
- Does not leverage AI's ability to iterate on fixes

### 2. Static Analysis Only

**Description**: Use static code analysis and linting without actual compilation or test execution.

**Why Not Chosen**:
- Cannot detect runtime issues or test failures
- May miss compilation errors not caught by static analysis
- Less comprehensive verification than actual build/test
- Doesn't validate package dependencies and restore operations

### 3. Sandbox/Mock Build Environment

**Description**: Create a separate, isolated environment for AI build verification that doesn't use actual project files.

**Why Not Chosen**:
- Adds significant complexity to implementation
- May not accurately reflect actual build behavior
- Difficult to maintain parity with real build environment
- Adds maintenance overhead for separate build configuration

### 4. Read-Only Analysis Without Execution

**Description**: Allow AI agents to analyze code but not execute any commands.

**Why Not Chosen**:
- Severely limits AI effectiveness
- Cannot verify changes work correctly
- Misses the primary benefit of automated verification
- Contradicts the goal of autonomous AI assistance

## Related Decisions

- [Centralized Package Version Management](./2025-07-10-centralized-package-version-management.md) - AI agents must respect centralized package versions when executing restore operations
- [.NET 10 and C# 13 Adoption](./2025-07-11-dotnet-10-csharp-13-adoption.md) - Build operations must target the correct .NET version and C# language version