To clean up all the resources created by this sample:
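
The cleanup command itself is not included in this excerpt; as a minimal sketch, assuming the sample is torn down with the Azure Developer CLI (the testing section below references azd environments), the step typically looks like this:

```bash
# Sketch only: assumes the sample was provisioned with the Azure Developer CLI (azd).
# `azd down` deletes the resources provisioned for the current azd environment;
# --purge additionally purges soft-deleted resources for services that support soft delete.
azd down --purge
```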

All the Azure and Power Platform resources will be deleted.

## Testing

This solution includes tests that validate both Copilot Studio and Azure AI Search components after deployment.

### Copilot Studio Agent Test

Located in `tests/Copilot/`, this test validates:

- **Conversation Flow**: End-to-end conversation test with the deployed agent
- **Integration**: Validation that Copilot Studio can successfully query Azure AI Search

Currently, [the Copilot Studio Client in the Agent SDK does not support the use of Service Principals for authentication](https://github.com/microsoft/Agents/blob/main/samples/basic/copilotstudio-client/dotnet/README.md#create-an-application-registration-in-entra-id---service-principal-login), so testing requires a cloud-native app registration and a test account with MFA turned off. The test user account must have access to the Power Platform environment containing the agent as well as to the agent itself.

#### Running Tests After Local Deployment Execution

After a successful local deployment, the local `.env` file contains most of the information needed to run the end-to-end Copilot Studio test. Alternatively, any test input can be set directly through environment variables.

Run the commands below to execute the test after a deployment.
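
The full command listing is not included in this excerpt; as a minimal sketch, assuming the test is a .NET project under `tests/Copilot/` (the decision log below references the .NET Agent SDK tooling) and reads its inputs from environment variables:

```bash
# Load the deployment outputs captured in the local .env file so the test can read them
# (the variable names are whatever the deployment wrote; none are hard-coded here)
set -a
source .env
set +a

# Run the Copilot Studio end-to-end test project
cd tests/Copilot
dotnet test
```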

### Azure AI Search Tests

Located in `tests/AISearch/`, these tests validate:

- **Configuration Validation**: Check that resource configurations match expected settings
- **Content Verification**: Validate that the index contains the expected documents and supports search
- **Pipeline Integration**: End-to-end validation of the complete search pipeline

Because the Copilot agent end-to-end test indirectly validates the AI Search functionality, these tests only need to be run when direct validation or troubleshooting of the AI Search resources is required.

#### Prerequisites for AI Search Tests

Before running AI Search tests, you must complete the following configuration:

1. **Make AI Search Endpoint Public**: Unless the test is run on the same virtual network as the AI Search resource, the AI Search service must be made reachable by the test script. Configure network access in the Azure portal:
   - Navigate to your AI Search service
   - Go to **Networking** → **Firewalls and virtual networks**
   - Select **All networks** or add the test runner's IP to **Selected IP addresses**

2. **Assign RBAC Roles**: The user or service principal running the tests must have the following roles, assigned via the portal steps below or the CLI sketch after this list:
   - Navigate to your AI Search service in the Azure portal
   - Go to **Access control (IAM)** → **Add role assignment**
   - Select the **Search Index Data Contributor** role and assign it to the user or service principal that will execute the tests
   - Add another role assignment for the **Search Service Contributor** role to the same user or service principal
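
As an alternative to the portal steps above, the same role assignments can be made with the Azure CLI. This is a sketch; the scope and assignee values are placeholders you must substitute:

```bash
# Placeholders: substitute your subscription, resource group, search service name, and principal
SEARCH_SCOPE="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service-name>"
ASSIGNEE="<user-upn-or-service-principal-object-id>"

# Grant the two roles the tests require on the AI Search service
az role assignment create --assignee "$ASSIGNEE" --role "Search Index Data Contributor" --scope "$SEARCH_SCOPE"
az role assignment create --assignee "$ASSIGNEE" --role "Search Service Contributor" --scope "$SEARCH_SCOPE"
```
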
#### Running AI Search Tests Locally

```bash
# Ensure you're authenticated and have an azd environment deployed
az login

# Run the test script
cd tests/AISearch
./run-tests.sh
```

The tests automatically discover configuration from your azd environment outputs.
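
The contents of `run-tests.sh` are not shown here; as a minimal sketch, a script can discover those outputs with the azd CLI as below, where the echoed variable name is an illustrative placeholder rather than a key this sample necessarily emits:

```bash
# Load the active azd environment's outputs as shell variables (printed as KEY="value" pairs)
eval "$(azd env get-values)"

# Example of consuming one discovered value (placeholder name)
echo "Search endpoint under test: ${AZURE_SEARCH_ENDPOINT:-<not set>}"
```
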
# Decision Log 005: Agent SDK Authentication Strategy for Testing

**Date:** 2025-07-30

**Status:** Approved

## Context

Our end-to-end testing strategy requires automated testing of Copilot Studio agents to validate the end-to-end flow of responses from the Azure resources to the Copilot Studio agent. We evaluated multiple approaches for implementing these tests, considering authentication requirements, platform support, and operational complexity.

The key testing approaches considered were:

- **Agent SDK with Service Principal Authentication**: Programmatic access using client credentials
- **Agent SDK with User-Based Authentication**: Username/password authentication with real user accounts
- **Copilot Studio Built-in Evaluation Tools**: Native evaluation capabilities within the Copilot Studio platform

Our solution requires reliable, automated testing that can be integrated into CI/CD pipelines while providing comprehensive validation of the AI search integration functionality.

## Decision

We will use **Agent SDK with User-Based Authentication** as our primary testing approach for Copilot Studio agent validation.

## Rationale

1. **Broadest Platform Support**: The Agent SDK provides comprehensive support across Microsoft's AI technologies, including Copilot Studio, Foundry, and other conversational AI tools. This gives us maximum flexibility and ensures our testing approach aligns with Microsoft's recommended practices.

2. **Current Authentication Limitations**: The Agent SDK does not currently support service principal authentication for Copilot Studio scenarios. While this limits our automation options, user-based authentication is the only viable approach for programmatic testing at this time.

3. **Proven Functional Capability**: We have successfully validated that user-based authentication works reliably for our testing scenarios, providing the functionality needed to execute comprehensive end-to-end tests.

4. **Native SDK Benefits**: Using the official Agent SDK ensures we stay aligned with Microsoft's evolving API surface and receive support for new features as they become available.

## Trade-offs and Considerations

### Accepted Trade-offs

- **MFA Requirement**: User-based authentication requires test accounts with Multi-Factor Authentication (MFA) disabled or configured with app passwords, which introduces security considerations that must be managed through proper test account governance.

- **Credential Management**: User credentials require more careful handling compared to service principals, necessitating additional security measures in our testing infrastructure.

- **Account Lifecycle**: Test user accounts require ongoing management and may be subject to organizational password policies and account lifecycle requirements.

### Alternative Approaches Considered

1. **Copilot Studio Built-in Evaluation Tools**
   - **Status**: Currently in preview
   - **Benefits**: Lower-effort setup, designed specifically for low-code users, integrated with the Copilot Studio platform
   - **Limitations**: Not generally available, limited programmatic control, unclear CI/CD integration capabilities
   - **Future Consideration**: Will be reevaluated when these tools reach general availability

2. **Service Principal Authentication**
   - **Status**: Not supported by the Agent SDK for Copilot Studio
   - **Benefits**: Better security model, easier credential management, standard for enterprise automation
   - **Limitations**: Currently not available for our use case
   - **Future Consideration**: Will be adopted as soon as the Agent SDK adds this capability

3. **Test Channels**
   - **Status**: Deprioritized in favor of preserving a secure base agent
   - **Limitations**: Testing against a specific channel adds a layer of abstraction that could obscure issues in the Copilot itself. The go-to test channel (Direct Line) is not secured, and the default agent should not include an unsecured channel.
   - **Future Consideration**: Tests against channel-specific functionality may be added in the future if a particular channel proves to be heavily utilized.

4. **Python-Based Tests**
   - **Status**: Pending SDK updates
   - **Limitations**: At test creation time, the Python SDK did not include the full testing feature set and documentation available in the .NET tools.
   - **Future Consideration**: The tests could be converted to Python when the Python SDK reaches feature parity.

## Implementation Requirements

1. **Test Account Management**: Establish dedicated test accounts with appropriate security configurations
2. **Credential Security**: Implement secure credential storage and handling practices in CI/CD pipelines (see the sketch after this list)
3. **Error Handling**: Implement robust authentication error handling and retry logic for transient failures
4. **Monitoring**: Track authentication success/failure rates and account health
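
To make requirements 2 and 3 concrete, here is a hedged sketch of how a CI job might consume the test account credentials and retry transient authentication failures; the variable names, test path, and retry policy are illustrative, not the pipeline's actual configuration:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Credentials come from the pipeline's secret store; fail fast if they are missing
# (names are placeholders for whatever secrets the pipeline actually defines)
: "${COPILOT_TEST_USERNAME:?provide via the CI secret store, never commit}"
: "${COPILOT_TEST_PASSWORD:?provide via the CI secret store, never commit}"

# Retry the end-to-end test a few times to absorb transient authentication failures
for attempt in 1 2 3; do
  if dotnet test tests/Copilot; then
    exit 0
  fi
  echo "Attempt ${attempt} failed; retrying..." >&2
  sleep $((attempt * 10))
done
exit 1
```
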
## Future Considerations

- **Service Principal Support**: Monitor Agent SDK releases for service principal authentication support and migrate when available
- **Copilot Studio Evaluation Tools**: Evaluate built-in tools when they reach general availability and assess integration with our testing strategy
- **Hybrid Approach**: Consider combining multiple testing approaches as capabilities mature to provide comprehensive coverage

## Related Decisions

- This decision supports the overall testing strategy established in our CI/CD pipeline design
- Security considerations align with our credential management policies outlined in infrastructure security practices