Howie/samples for tests #44139
Conversation
Pull request overview
This PR introduces improvements to make samples more testable by adding environment variable support for user input, implementing proper try-catch-finally cleanup patterns, and adding assertions to validate outputs. The PR also reorganizes MCP sample files by moving sample_mcp_tool_async.py to a new mcp_client directory with supporting assets.
Key changes:
- Added optional environment variable `AI_SEARCH_USER_INPUT` to skip interactive prompts in the AI Search sample
- Implemented try-finally pattern for reliable agent cleanup in `sample_agent_ai_search.py`
- Added output assertion to validate sample execution success
- Relocated MCP tool sample to dedicated `mcp_client` directory with improved documentation
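The env-var pattern described in the first bullet can be sketched like this (a minimal illustration; `get_user_input` is a hypothetical helper, not the sample's actual code):

```python
import os

def get_user_input(prompt: str = "Enter your query: ") -> str:
    # If AI_SEARCH_USER_INPUT is set, use it so the sample can run
    # non-interactively (e.g. in CI); otherwise fall back to input().
    value = os.getenv("AI_SEARCH_USER_INPUT")
    if value is not None:
        return value
    return input(prompt)
```

Reading the variable once per prompt keeps the interactive path intact while letting a test harness inject deterministic input.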
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| sdk/ai/azure-ai-projects/samples/mcp_client/sample_mcp_tool_async.py | New location for MCP sample with enhanced documentation and workflow description |
| sdk/ai/azure-ai-projects/samples/mcp_client/assets/product_info.md | Asset file for file search demonstration in MCP sample |
| sdk/ai/azure-ai-projects/samples/agents/tools/sample_agent_ai_search.py | Refactored with main() function, environment variable support, try-finally cleanup, and output assertions |
| sdk/ai/azure-ai-projects/samples/agents/sample_mcp_tool_async_.py | Removed old sample location (appears to be deleted/moved) |
| sdk/ai/azure-ai-projects/CHANGELOG.md | Updated to document the improved MCP client sample |
```python
vector_store_file = await openai_client.vector_stores.files.upload_and_poll(
    vector_store_id=vector_store.id,
    file=open(
        os.path.abspath(os.path.join(os.path.dirname(__file__), "./assets/product_info.md")),
        "rb",
    ),
)
```
Copilot AI · Nov 21, 2025
The file handle opened on line 123 is never closed. While the async with context manager will clean up the session, the file opened with open() should be properly closed to prevent resource leaks.
Consider using a context manager:
```python
with open(
    os.path.abspath(os.path.join(os.path.dirname(__file__), "./assets/product_info.md")),
    "rb",
) as f:
    vector_store_file = await openai_client.vector_stores.files.upload_and_poll(
        vector_store_id=vector_store.id,
        file=f,
    )
```
```python
# Create a vector store
vector_store = await openai_client.vector_stores.create(
    name="sample_vector_store",
)

vector_store_file = await openai_client.vector_stores.files.upload_and_poll(
    vector_store_id=vector_store.id,
    file=open(
        os.path.abspath(os.path.join(os.path.dirname(__file__), "./assets/product_info.md")),
        "rb",
    ),
)

print(f"\n\nUploaded file, file ID: {vector_store_file.id} to vector store ID: {vector_store.id}")

# Call the file_search tool
file_search_result = await session.call_tool(
    name="file_search",
    arguments={"queries": ["What feature does Smart Eyewear offer?"]},
    meta={"vector_store_ids": [vector_store.id]},
)
print(f"\n\nFile Search Output: {file_search_result.content}")
```
Copilot AI · Nov 21, 2025
The sample creates a vector store but never cleans it up. This can lead to resource accumulation over time. Consider adding cleanup in a try-finally block to ensure the vector store is deleted even if an error occurs, similar to how other samples handle agent cleanup.
Example pattern:
```python
vector_store = None
try:
    vector_store = await openai_client.vector_stores.create(...)
    # ... use vector store ...
finally:
    if vector_store:
        await openai_client.vector_stores.delete(vector_store_id=vector_store.id)
```
```python
async with (
    DefaultAzureCredential() as credential,
    AIProjectClient(endpoint=endpoint, credential=credential) as project_client,
    project_client.get_openai_client() as openai_client,
    streamablehttp_client(
        url=f"{endpoint}/mcp_tools?api-version=2025-05-15-preview",
        headers={"Authorization": f"Bearer {(await credential.get_token('https://ai.azure.com')).token}"},
    ) as (read_stream, write_stream, _),
    ClientSession(read_stream, write_stream) as session,
):
    # Initialize the connection
    await session.initialize()

    # List available tools
    tools = await session.list_tools()
    print(f"Available tools: {[tool.name for tool in tools.tools]}")

    # For each tool, print its details
    for tool in tools.tools:
        print(f"\n\nTool Name: {tool.name}, Input Schema: {tool.inputSchema}")

    # Run the code interpreter tool
    code_interpreter_result = await session.call_tool(
        name="code_interpreter",
        arguments={"code": "print('Hello from Microsoft Foundry MCP Code Interpreter tool!')"},
    )
    print(f"\n\nCode Interpreter Output: {code_interpreter_result.content}")

    # Run the image_generation tool
    image_generation_result = await session.call_tool(
        name="image_generation",
        arguments={"prompt": "Draw a cute puppy riding a skateboard"},
        meta={"imagegen_model_deployment_name": os.getenv("IMAGE_GEN_DEPLOYMENT_NAME", "")},
    )

    # Save the image generation output to a file
    if image_generation_result.content and isinstance(image_generation_result.content[0], ImageContent):
        filename = "puppy.png"
        file_path = os.path.abspath(filename)
        print(f"\nImage saved to: {file_path}")

        with open(file_path, "wb") as f:
            f.write(base64.b64decode(image_generation_result.content[0].data))

    # Create a vector store
    vector_store = await openai_client.vector_stores.create(
        name="sample_vector_store",
    )

    vector_store_file = await openai_client.vector_stores.files.upload_and_poll(
        vector_store_id=vector_store.id,
        file=open(
            os.path.abspath(os.path.join(os.path.dirname(__file__), "./assets/product_info.md")),
            "rb",
        ),
    )

    print(f"\n\nUploaded file, file ID: {vector_store_file.id} to vector store ID: {vector_store.id}")

    # Call the file_search tool
    file_search_result = await session.call_tool(
        name="file_search",
        arguments={"queries": ["What feature does Smart Eyewear offer?"]},
        meta={"vector_store_ids": [vector_store.id]},
    )
    print(f"\n\nFile Search Output: {file_search_result.content}")
```
Copilot AI · Nov 21, 2025
[nitpick] The sample creates a file (puppy.png) but doesn't clean it up afterwards. For a sample that may be run multiple times, consider cleaning up generated artifacts.
You could add cleanup after the sample completes:
```python
try:
    # ... existing code ...
finally:
    # Clean up generated files
    if os.path.exists("puppy.png"):
        os.remove("puppy.png")
```
```python
    extra_body={"agent": {"name": agent.name, "type": "agent_reference"}},
)

output = None
```
Copilot AI · Nov 21, 2025
The `output` variable is assigned `None` on line 102 but then only conditionally assigned within the event loop. If the loop completes without encountering a `response.completed` event, `output` will remain `None`, causing the assertion on line 138 to fail.
This initialization should be removed, since `output` is already initialized as `None` at the module level (line 45), and the assertion message should be made more descriptive about what went wrong.
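A minimal sketch of a more descriptive assertion (the `check_output` helper is illustrative, not the sample's actual code):

```python
def check_output(output):
    # Fail with a message explaining *why* output can be None: the
    # stream ended without a response.completed event.
    assert output is not None, (
        "Sample produced no output: no response.completed event was "
        "received before the stream ended."
    )
    return output
```

A message like this tells the reader which event was missing instead of a bare `AssertionError`.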
```python
print(f"Error occurred: {e}")
raise e
```
Copilot AI · Nov 21, 2025
The exception is caught and printed but then re-raised, making the error message on line 127 redundant, since the exception will propagate anyway with its original message. Consider one of the following:
- Removing the print statement and just re-raising the exception
- Logging the error properly instead of printing
- Not re-raising if you want to handle it gracefully
Example:

```python
except Exception:
    # Let the exception propagate with its original traceback
    raise
```
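If logging is preferred over printing, one possible sketch using the standard `logging` module (`run_step` and `main` are stand-ins for the sample's code, not its actual functions):

```python
import logging

logger = logging.getLogger(__name__)

def run_step():
    # Stand-in for the sample body; raises to exercise the handler below.
    raise RuntimeError("boom")

def main():
    try:
        run_step()
    except Exception:
        # logger.exception records the message plus the full traceback;
        # a bare `raise` then re-raises the original exception unchanged.
        logger.exception("Sample failed")
        raise
```

This keeps the diagnostic output while preserving the original traceback for the caller.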
Before running the sample:

```shell
pip install "azure-ai-projects>=2.0.0b1" azure-identity python-dotenv mcp
```
`azure-identity` is no longer needed in this line after my PR goes in.
```python
    )
)
# [END tool_declaration]
```
Do you need to refresh code snippets in package README.md?
This is a POC to be extended for all tests for tools.