@@ -186,7 +186,8 @@
**4️⃣ Networking**
- **🌐 Public Networking**: If your gateway endpoints are accessible over the public internet, no separate setup is required
- **🔐 Fully Secured Network Setup**: Use Agents BYO VNet feature
  - **For APIM**: Use this [Bicep template for secure APIM setup](https://github.com/azure-ai-foundry/foundry-samples/tree/main/infrastructure/infrastructure-setup-bicep/16-private-network-standard-agent-apim-setup-preview). If you are using your own setup, just ensure that APIM is accessible from the agents subnet in the BYO VNet.
  - **For ModelGateway**: Ensure that your gateway endpoint is accessible from the agents subnet in the BYO VNet (a quick reachability check is sketched below).
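
For the secured setup, a check like the sketch below, run from a VM or container inside the agents subnet, can confirm reachability before you create the connection; the endpoint URL is a placeholder for your own APIM or gateway address.

```bash
# Hypothetical endpoint - replace with your APIM or ModelGateway URL.
GATEWAY_URL="https://my-apim.azure-api.net/foundry/chat/completions"

# Any HTTP status code back (even 401 or 404) proves the endpoint is reachable
# from this subnet; a timeout points to a networking or private DNS problem.
curl -sS -o /dev/null -w "HTTP %{http_code}\n" --connect-timeout 10 "$GATEWAY_URL" \
  || echo "Endpoint not reachable from this subnet"
```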

---

@@ -2,14 +2,19 @@

This folder contains Azure Bicep templates for creating APIM (API Management) connections to Azure AI Foundry projects.

> **⚠️ IMPORTANT**: Before running any deployment, follow the [Setup Guide](./apim-setup-guide-for-agents.md) to properly configure your APIM service and obtain all applicable parameters. If you encounter issues, see the [Troubleshooting Guide](./troubleshooting-guide.md). Make sure to collect these parameters to avoid 404/400/ResourceNotFound/DeploymentNotFound errors during Agent API execution:
> 1. **inferenceApiVersion** - The API version for chat completions calls, if an api-version is required.
> 2. **deploymentApiVersion** - The API version for deployment operations (list and get deployment), if you use dynamic discovery and an api-version is required.
> 3. **apiName** - The name of your API in APIM (e.g., "foundry", "openai")
> 4. **deploymentInPath** - Whether the deployment ID is passed in the URL path or as the `model` field in the chat completions request body.
>
> These parameters must match your actual APIM configuration to ensure successful deployments.

## 📚 Documentation

- **[Setup Guide](./apim-setup-guide-for-agents.md)** - Complete configuration guide for APIM connections
- **[Troubleshooting Guide](./troubleshooting-guide.md)** - Common issues and solutions with validation script usage

## Prerequisites

1. **Azure CLI** installed and configured
@@ -2,6 +2,8 @@

> **⚠️ IMPORTANT: Test Your Configuration First**
> **Before deploying your APIM connection Bicep template in Azure AI Foundry, [jump to the validation section](#-connection-validation) to test your configuration and ensure it works with the Agents SDK.**
>
> **🆘 Need Help?** If you encounter issues, check the [Troubleshooting Guide](./troubleshooting-guide.md) for solutions and use the validation script mentioned below.

> **🎯 Step-by-Step Configuration**
> This guide shows you how to configure Azure API Management (APIM) to make it ready for use by Foundry Agents as a connection.
@@ -96,6 +98,7 @@ Once chat completions are working, you need to configure how Foundry Agents will discover available models.
- How to set the model.format field
  1. Use `OpenAI` if you are using an OpenAI model (hosted anywhere: OpenAI, Azure OpenAI, Foundry, or any other hosting provider).
  2. Use `OpenAI` for Gemini models if you are using Gemini's OpenAI-compatible chat completions endpoint.
  3. Use `OpenAI` if your gateway's chat completions endpoint is fully compatible with the OpenAI API contract (supports tools, tool_choice, reasoning_effort, response_format, etc.); see the compatibility probe sketched after this list.
  4. Use `Anthropic` if you are using an Anthropic model's /messages API; use `OpenAI` if you are using Anthropic's /chat/completions API.
  5. Use `NonOpenAI` for everything else.
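
If you are unsure whether your endpoint is OpenAI-compatible enough for option 3, a probe like the minimal sketch below can help; the URL, deployment name, key, and auth header are placeholders for your own setup, and the request simply exercises `tools` to see whether the endpoint honors it.

```bash
# Hypothetical values - replace with your own endpoint, deployment, and key.
# The auth header name depends on your setup (e.g. api-key or Ocp-Apim-Subscription-Key).
URL="https://my-apim.azure-api.net/foundry/deployments/gpt-4o/chat/completions?api-version=2024-10-21"
KEY="<your-key>"

# A response containing a tool call (or at least no error about unknown fields)
# suggests the endpoint is OpenAI-compatible; otherwise prefer NonOpenAI.
curl -sS -X POST "$URL" \
  -H "Content-Type: application/json" \
  -H "api-key: $KEY" \
  -d '{
    "messages": [{"role": "user", "content": "What is the weather in Seattle?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }],
    "tool_choice": "auto"
  }'
```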

@@ -115,6 +118,8 @@ If you choose dynamic discovery, you need to manually add **2 operations** to your APIM API:
1. **📋 List Deployments Operation** - Returns all available models/deployments
2. **🎯 Get Deployment Operation** - Returns details for a specific model/deployment

> **📝 Note**: The setup below is specific to using an Azure OpenAI or Azure AI Foundry resource as the APIM backend. For any other backend service, make sure you set up and test the operations properly; otherwise, use static discovery for simplicity.
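
Once you have added both operations (steps below), it can be worth sanity-checking them from the command line before relying on dynamic discovery; the paths, key header, and api-version in this sketch are placeholders, so substitute the URL templates you actually configured.

```bash
# Hypothetical values - replace with the URL templates configured for your operations.
APIM_BASE="https://my-apim.azure-api.net/foundry"
KEY="<your-subscription-key>"
API_VERSION="2024-10-21"   # your deploymentApiVersion, if one is required

# List Deployments - should return every model/deployment agents may use.
curl -sS "$APIM_BASE/deployments?api-version=$API_VERSION" -H "api-key: $KEY"

# Get Deployment - should return the details of one specific deployment.
curl -sS "$APIM_BASE/deployments/gpt-4o?api-version=$API_VERSION" -H "api-key: $KEY"
```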

##### 🛠️ Adding Get Deployment Operation

1. **📍 Navigate to Your API**: In APIM, go to your imported API (e.g., `agent-aoai`)
@@ -253,20 +258,20 @@ Once your APIM operations are configured, you need to collect the following details:

#### 🔧 2. Inference API Version

1. **📋 Check API Version Parameter**: In the chat completions test, look for an **api-version** parameter. If it is not required, keep this value as an empty string.
2. **📝 Note the Value**: If an API version is required when hitting chat completions, record that value
3. **📄 Common Values**: Typically values like `2024-02-01`, `2023-12-01-preview`, etc.

#### 🛤️ 3. Deployment in Path

Determine if your chat completions URL includes the deployment name in the path:

- **✅ Set to "true"**: If your URL is like `/deployments/{deploymentName}/chat/completions`
- **❌ Set to "false"**: If your URL is like `/chat/completions` (the deployment is passed in the request body as the `model` field)

**Examples:**
- `"true"`: `/deployments/gpt-4/chat/completions`
- `"false"`: `/chat/completions?deployment=gpt-4`
- `"false"`: `/chat/completions`

> **📝 Note**: These values will be used when creating your APIM connection in Foundry using the Bicep templates.
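
As a quick cross-check of the values collected above, the sketch below sends the same test prompt in the two URL shapes; every value is a placeholder, and the api-version query string should only be appended if step 2 showed it is required. Whichever shape succeeds tells you what to set `deploymentInPath` to.

```bash
# Hypothetical values - replace with your APIM details from the steps above.
BASE="https://my-apim.azure-api.net/foundry"   # https://<apim>.azure-api.net/<apiName>
KEY="<your-subscription-key>"
DEPLOYMENT="gpt-4o"

# deploymentInPath = "true": the deployment name travels in the URL path.
curl -sS -X POST "$BASE/deployments/$DEPLOYMENT/chat/completions?api-version=2024-10-21" \
  -H "Content-Type: application/json" -H "api-key: $KEY" \
  -d '{"messages":[{"role":"user","content":"ping"}]}'

# deploymentInPath = "false": the deployment name travels as the model field in the body.
curl -sS -X POST "$BASE/chat/completions" \
  -H "Content-Type: application/json" -H "api-key: $KEY" \
  -d '{"model":"'"$DEPLOYMENT"'","messages":[{"role":"user","content":"ping"}]}'
```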

@@ -52,7 +52,7 @@ param isSharedToAll bool = false

// 1. REQUIRED - Basic APIM Configuration
@allowed(['true', 'false'])
@description('Whether deployment name is in the URL path vs the request body')
param deploymentInPath string = 'true'

@description('API version for inference calls (chat completions, embeddings)')
@@ -21,10 +21,12 @@
"value": false
},
"deploymentInPath": {
"value": "true"
"_comment": "true or false - Set based on setup guide to avoid 404 errors in agents",
"value": "false"
},
"inferenceAPIVersion": {
"value": "2025-03-01-preview"
"_comment": "Set a proper value or leave empty if your APIM backend does not need it. Set based on setup guide to avoid 404 errors in agents",
"value": ""
},
"deploymentAPIVersion": {
"value": ""
@@ -47,6 +49,16 @@
"format": "OpenAI"
}
}
},
{
"name": "YOUR-MODEL-DEPLOYMENT-NAME-2",
"properties": {
"model": {
"name": "YOUR-MODEL-NAME-2",
"version": "YOUR-MODEL-VERSION-2",
"format": "OpenAI"
}
}
}
]
}
@@ -21,10 +21,12 @@
"value": false
},
"deploymentInPath": {
"value": "true"
"_comment": "true or false - Set based on setup guide to avoid 404 errors in agents",
"value": "false"
},
"inferenceAPIVersion": {
"value": "2025-03-01-preview"
"_comment": "Set a proper value or leave empty if your APIM backend does not need it. Set based on setup guide to avoid 404 errors in agents",
"value": ""
},
"deploymentAPIVersion": {
"value": ""
@@ -46,6 +48,16 @@
"format": "OpenAI"
}
}
},
{
"name": "YOUR-MODEL-DEPLOYMENT-NAME-2",
"properties": {
"model": {
"name": "YOUR-MODEL-NAME-2",
"version": "YOUR-MODEL-VERSION-2",
"format": "OpenAI"
}
}
}
]
}
@@ -21,10 +21,12 @@
"value": false
},
"deploymentInPath": {
"value": "true"
"_comment": "true or false - Set based on setup guide to avoid 404 errors in agents",
"value": "false"
},
"inferenceAPIVersion": {
"value": "2025-03-01-preview"
"_comment": "Set a proper value or leave empty if your APIM backend does not need it. Set based on setup guide to avoid 404 errors in agents",
"value": ""
},
"deploymentAPIVersion": {
"value": ""
@@ -21,10 +21,12 @@
"value": false
},
"deploymentInPath": {
"value": "true"
"_comment": "true or false - Set based on setup guide to avoid 404 errors in agents",
"value": "false"
},
"inferenceAPIVersion": {
"value": "2025-03-01-preview"
"_comment": "Set a proper value or leave empty if your APIM backend does not need it. Set based on setup guide to avoid 404 errors in agents",
"value": ""
},
"deploymentAPIVersion": {
"value": ""
@@ -0,0 +1,37 @@
# APIM Troubleshooting Guide

This guide helps you diagnose and fix common issues when setting up APIM connections for Azure AI Foundry Agents.

## 🔧 Quick Diagnosis

**Before troubleshooting, use the validation script from the [setup guide](./apim-setup-guide-for-agents.md#-connection-validation) to test your configuration.**

This script will help identify the root cause of most issues.

---

## 🚨 Common Issues

### 1. 404/400 Resource Not Found or Deployment Not Found Errors

#### **Symptoms:**
- Agents return "404 Resource Not Found" errors
- "400 Bad Request - Deployment Not Found" errors
- Connection test fails with resource/deployment errors

#### **Root Cause:**
Generally caused by incorrectly configured APIM parameters in your connection.

#### **Critical Parameters to Check:**

| Parameter | Description |
|-----------|-------------|
| **`apiName`** | Name of the API in APIM that routes to your backend |
| **`inferenceAPIVersion`** | API version for chat completions calls |
| **`deploymentInPath`** | Whether model name is in URL path vs request body |

#### **Solution:**
1. **Follow the [Setup Guide](./apim-setup-guide-for-agents.md)** to verify your APIM configuration
2. **Run the validation script** to identify incorrect parameters (see [connection validation](./apim-setup-guide-for-agents.md#-connection-validation) for instructions)
3. **Update your connection** with the correct parameters identified by the script
4. **Test again** to verify the fixes
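
If the validation script is not at hand, a rough probe like the sketch below can narrow things down by trying the common URL and api-version combinations; every value is a placeholder, and the auth header name depends on how your APIM API is secured. The combination that returns HTTP 200 corresponds to the `deploymentInPath` and `inferenceAPIVersion` values your connection should use.

```bash
#!/usr/bin/env bash
# Hypothetical values - replace with your own APIM details.
BASE="https://my-apim.azure-api.net/foundry"   # https://<apim>.azure-api.net/<apiName>
KEY="<your-subscription-key>"
DEPLOYMENT="gpt-4o"
API_VERSION="2024-10-21"

probe() {
  local label="$1" url="$2" body="$3" code
  code=$(curl -sS -o /dev/null -w "%{http_code}" -X POST "$url" \
    -H "Content-Type: application/json" -H "api-key: $KEY" -d "$body")
  echo "$label -> HTTP $code"
}

BODY_PATH='{"messages":[{"role":"user","content":"ping"}]}'
BODY_MODEL='{"model":"'"$DEPLOYMENT"'","messages":[{"role":"user","content":"ping"}]}'

probe "path + api-version   " "$BASE/deployments/$DEPLOYMENT/chat/completions?api-version=$API_VERSION" "$BODY_PATH"
probe "path, no api-version " "$BASE/deployments/$DEPLOYMENT/chat/completions" "$BODY_PATH"
probe "body + api-version   " "$BASE/chat/completions?api-version=$API_VERSION" "$BODY_MODEL"
probe "body, no api-version " "$BASE/chat/completions" "$BODY_MODEL"
```
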
@@ -2,12 +2,17 @@

This folder contains Azure Bicep templates for creating ModelGateway connections to Azure AI Foundry projects.

> **⚠️ IMPORTANT**: Before running any deployment, follow the [Setup Guide](./modelgateway-setup-guide-for-agents.md) to properly configure your ModelGateway service and obtain all required parameters. If you encounter issues, see the [Troubleshooting Guide](./troubleshooting-guide.md). Make sure to collect these critical parameters to avoid 404/deploymentNotFound errors during Agent API execution:
> 1. **inferenceApiVersion** - The API version for chat completions calls, if an api-version query parameter is required.
> 2. **deploymentApiVersion** - The API version for deployment operations, if you use dynamic discovery and an api-version is required.
> 3. **deploymentInPath** - Whether the deployment ID is passed in the URL path or in the chat completions request body.
>
> These parameters must match your actual ModelGateway configuration to ensure successful deployments.

## 📚 Documentation

- **[Setup Guide](./modelgateway-setup-guide-for-agents.md)** - Complete configuration guide for ModelGateway connections
- **[Troubleshooting Guide](./troubleshooting-guide.md)** - Common issues and solutions.

## Prerequisites

1. **Azure CLI** installed and configured
@@ -28,6 +33,18 @@ az deployment group create \
--parameters apiKey=<your-api-key>
```


### Foundry AzureAI ModelGateway Connection
```bash
# 1. Edit samples/parameters-foundryazureai.json with your resource IDs
# 2. Deploy with your API key
az deployment group create \
--resource-group <your-resource-group> \
--template-file connection-modelgateway.bicep \
--parameters @samples/parameters-foundryazureai.json \
--parameters apiKey=<your-api-key>
```

### Foundry AzureOpenAI ModelGateway Connection
```bash
# 1. Edit samples/parameters-foundryopenai.json with your resource IDs
@@ -2,6 +2,8 @@

> **⚠️ IMPORTANT: Test Your Configuration First**
> **Before creating your ModelGateway connection in Azure AI Foundry, [jump to the validation section](#-connection-validation) to test your configuration and ensure it works with the Agents SDK.**
>
> **🆘 Need Help?** If you encounter issues, check the [Troubleshooting Guide](./troubleshooting-guide.md) for solutions and use the validation script mentioned below.

> **🎯 Step-by-Step Configuration**
> This guide shows you how to configure your self-hosted or third-party gateway to make it ready for use by Foundry Agents as a ModelGateway connection.
@@ -143,6 +145,7 @@ You need to choose how Foundry Agents will discover available models through your gateway.
- How to set the model.format field
  1. Use `OpenAI` if you are using an OpenAI model (hosted anywhere: OpenAI, Azure OpenAI, Foundry, or any other hosting provider).
  2. Use `OpenAI` for Gemini models if you are using Gemini's OpenAI-compatible chat completions endpoint.
  3. Use `OpenAI` if your gateway's chat completions endpoint is fully compatible with the OpenAI contract (supports tools, tool_choice, reasoning_effort, response_format, etc.).
  4. Use `Anthropic` if you are using an Anthropic model's /messages API; use `OpenAI` if you are using Anthropic's /chat/completions API.
  5. Use `NonOpenAI` for everything else.

@@ -45,9 +45,11 @@
]
},
"deploymentInPath": {
"value": "true"
"_comment": "true or false - Set based on setup guide to avoid 404 errors in agents",
"value": "false"
},
"inferenceAPIVersion": {
"_comment": "Set a proper value or leave empty if your gateway does not need it. Set based on setup guide to avoid 404 errors in agents",
"value": ""
},
"authConfig": {
@@ -33,9 +33,11 @@
"value": ""
},
"deploymentInPath": {
"_comment": "true or false - Set based on setup guide to avoid 404 errors in agents",
"value": "false"
},
"inferenceAPIVersion": {
"_comment": "Set a proper value or leave empty if your gateway does not need it. Set based on setup guide to avoid 404 errors in agents",
"value": ""
},
"customHeaders": {
@@ -0,0 +1,56 @@
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"projectResourceId": {
"value": "/subscriptions/YOUR-SUBSCRIPTION-ID/resourceGroups/YOUR-RG/providers/Microsoft.CognitiveServices/accounts/YOUR-AI-FOUNDRY-ACCOUNT/projects/YOUR-PROJECT"
},
"targetUrl": {
"value": "https://<your-foundry-resource>.openai.azure.com/openai"
},
"gatewayName": {
"value": "YOUR-GATEWAY-NAME"
},
"connectionName": {
"value": "YOUR-CONNECTION-NAME"
},
"authType": {
"value": "ApiKey"
},
"deploymentInPath": {
"_comment": "true or false - Set based on setup guide to avoid 404 errors in agents",
"value": "false"
},
"inferenceAPIVersion": {
"_comment": "Set a proper value or leave empty if your gateway does not need it. Set based on setup guide to avoid 404 errors in agents",
"value": ""
},
"staticModels": {
"value": [
{
"name": "YOUR-DEPLOYMENT-NAME-1",
"properties": {
"model": {
"name": "YOUR-MODEL-NAME-1",
"version": "YOUR-MODEL-VERSION-1",
"format": "OpenAI"
}
}
},
{
"name": "YOUR-DEPLOYMENT-NAME-2",
"properties": {
"model": {
"name": "YOUR-MODEL-NAME-2",
"version": "YOUR-MODEL-VERSION-2",
"format": "OpenAI"
}
}
}
]
},
"isSharedToAll": {
"value": false
}
}
}
@@ -18,9 +18,11 @@
"value": "ApiKey"
},
"deploymentInPath": {
"_comment": "true or false - Set based on setup guide to avoid 404 errors in agents",
"value": "true"
},
"inferenceAPIVersion": {
"_comment": "Set a proper value or leave empty if your gateway does not need it. Set based on setup guide to avoid 404 errors in agents",
"value": "2025-03-01-preview"
},
"staticModels": {
@@ -30,9 +30,11 @@
"value": ["YOUR-SCOPE-1", "YOUR-SCOPE-2"]
},
"deploymentInPath": {
"_comment": "true or false - Set based on setup guide to avoid 404 errors in agents",
"value": "false"
},
"inferenceAPIVersion": {
"_comment": "Set a proper value or leave empty if your gateway does not need it. Set based on setup guide to avoid 404 errors in agents",
"value": ""
},
"staticModels": {