---
title: Import an Azure AI Foundry API - Azure API Management
description: How to import an API from Azure AI Foundry as a REST API in Azure API Management.
ms.service: azure-api-management
author: dlepow
ms.author: danlep
ms.topic: how-to
ms.date: 05/16/2025
ms.collection: ce-skilling-ai-copilot
ms.custom: template-how-to, build-2024
---

# Import an Azure AI Foundry API

[!INCLUDE [api-management-availability-all-tiers](../../includes/api-management-availability-all-tiers.md)]

You can import AI model endpoints deployed in Azure AI Foundry to your API Management instance as APIs. Use AI gateway policies and other capabilities in API Management to simplify integration, improve observability, and enhance control over the model endpoints.

Learn more about managing AI APIs in API Management:

* [AI gateway capabilities in Azure API Management](genai-gateway-capabilities.md)

## Client compatibility options

API Management supports two client compatibility options for AI APIs. Choose the option that suits your model deployment. The option determines how clients call the API and how the API Management instance routes requests to the AI service. Example requests for both options appear in the sketch after this list.

* **Azure AI** - Manage model endpoints in Azure AI Foundry that are exposed through the [Azure AI Model Inference API](/azure/ai-studio/reference/reference-model-inference-api).

    Clients call the deployment at a `/models` endpoint such as `/my-model/models/chat/completions`. The deployment name is passed in the request body. Use this option if you want flexibility to switch between models exposed through the Azure AI Model Inference API and those deployed in Azure OpenAI Service.

* **Azure OpenAI Service** - Manage model endpoints deployed in Azure OpenAI Service.

    Clients call the deployment at an `/openai` endpoint such as `/openai/deployments/my-deployment/chat/completions`. The deployment name is passed in the request path. Use this option if your AI service only includes Azure OpenAI Service model deployments.

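To make the two request shapes concrete, here's a minimal sketch using Python's `requests` library. The gateway URL, base paths, deployment name, subscription key, and `api-version` values are placeholders; substitute the values from your own instance and deployment.

```python
import requests

# Placeholder values for illustration only.
GATEWAY = "https://contoso.azure-api.net"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-apim-subscription-key>"}
MESSAGES = [{"role": "user", "content": "Hello!"}]

# Azure AI option: the call targets a /models endpoint under the API's base
# path, and the deployment (model) name goes in the request body.
r = requests.post(
    f"{GATEWAY}/my-model/models/chat/completions",
    headers=HEADERS,
    params={"api-version": "2024-05-01-preview"},  # example version
    json={"model": "my-deployment", "messages": MESSAGES},
)
print(r.status_code)

# Azure OpenAI Service option: the deployment name goes in the request path.
r = requests.post(
    f"{GATEWAY}/my-api/openai/deployments/my-deployment/chat/completions",
    headers=HEADERS,
    params={"api-version": "2024-06-01"},  # example version
    json={"messages": MESSAGES},
)
print(r.status_code)
```
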
## Prerequisites

- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
- An Azure AI service in your subscription with one or more models deployed. Examples include models deployed in Azure AI Foundry or Azure OpenAI Service.
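
If you want to confirm which models a service exposes before importing, here's a short sketch using the `azure-mgmt-cognitiveservices` management SDK; the subscription, resource group, and account names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

# Placeholders - substitute your own values.
client = CognitiveServicesManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
)

# Enumerate the model deployments in the AI service account.
for deployment in client.deployments.list(
    resource_group_name="<resource-group>",
    account_name="<ai-service-name>",
):
    model = deployment.properties.model
    print(deployment.name, model.name, model.version)
```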

## Import an AI Foundry API using the portal

Use the following steps to import an AI API to API Management.

When you import the API, API Management automatically configures:

* Operations for each of the API's REST API endpoints
* A system-assigned identity with the necessary permissions to access the AI service deployment
* A [backend](backends.md) resource and a [set-backend-service](set-backend-service-policy.md) policy that direct API requests to the AI service endpoint (you can verify the generated backend with the sketch after this list)
* Authentication to the backend using the instance's system-assigned managed identity
* Optionally, policies to help you monitor and manage the API

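After the import completes, a short sketch with the `azure-mgmt-apimanagement` SDK can confirm the generated backend resource; the subscription, resource group, and instance names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.apimanagement import ApiManagementClient

# Placeholders - substitute your own values.
client = ApiManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
)

# List the backends in the instance; the import creates one that points to
# the AI service endpoint.
for backend in client.backend.list_by_service(
    resource_group_name="<resource-group>",
    service_name="<apim-instance-name>",
):
    print(backend.name, backend.url)
```
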
To import an AI Foundry API to API Management:

1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
1. In the left menu, under **APIs**, select **APIs** > **+ Add API**.
1. Under **Create from Azure resource**, select **Azure AI Foundry**.

    :::image type="content" source="media/azure-ai-foundry-api/ai-foundry-api.png" alt-text="Screenshot of creating an OpenAI-compatible API in the portal." :::
1. On the **Select AI service** tab:
    1. Select the **Subscription** in which to search for AI services. To get information about the model deployments in a service, select the **deployments** link next to the service name.
        :::image type="content" source="media/azure-ai-foundry-api/deployments.png" alt-text="Screenshot of deployments for an AI service in the portal.":::
    1. Select an AI service.
    1. Select **Next**.
1. On the **Configure API** tab:
    1. Enter a **Display name** and optional **Description** for the API.
    1. In **Base path**, enter a path that your API Management instance uses to access the deployment endpoint.
    1. Optionally select one or more **Products** to associate with the API.
    1. In **Client compatibility**, select one of the following options based on the types of clients you intend to support. See [Client compatibility options](#client-compatibility-options) for more information.
        * **Azure OpenAI** - Select this option if your clients only need to access Azure OpenAI Service model deployments.
        * **Azure AI** - Select this option if your clients need to access other models in Azure AI Foundry.
    1. Select **Next**.

    :::image type="content" source="media/azure-ai-foundry-api/client-compatibility.png" alt-text="Screenshot of AI Foundry API configuration in the portal.":::

1. On the **Manage token consumption** tab, optionally enter settings or accept defaults that define the following policies to help monitor and manage the API (a client-side sketch for handling the token limit appears after these steps):
    * [Manage token consumption](llm-token-limit-policy.md)
    * [Track token usage](llm-emit-token-metric-policy.md)
1. On the **Apply semantic caching** tab, optionally enter settings or accept defaults that define policies to help optimize performance and reduce latency for the API:
    * [Enable semantic caching of responses](azure-openai-enable-semantic-caching.md)
1. On the **AI content safety** tab, optionally enter settings or accept defaults to configure the Azure AI Content Safety service to block prompts with unsafe content:
    * [Enforce content safety checks on LLM requests](llm-content-safety-policy.md)
1. Select **Review**.
1. After settings are validated, select **Create**.
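
If you configured the **Manage token consumption** policy, the gateway rejects requests that exceed the configured token rate limit. Here's a minimal client-side sketch, assuming the policy responds with HTTP 429 and a `Retry-After` header when the limit is exceeded; the URL, headers, and payload come from the earlier placeholder examples:

```python
import time

import requests

def call_with_retry(url: str, headers: dict, payload: dict, max_attempts: int = 3):
    """POST to an APIM-fronted AI API, backing off when the token limit returns 429."""
    for attempt in range(max_attempts):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 429:
            return response
        # Honor Retry-After if the gateway sends it; otherwise back off exponentially.
        delay = float(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    return response
```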

## Test the AI API

To ensure that your AI API is working as expected, test it in the API Management test console. You can also run an equivalent test from code, as shown in the sketch at the end of this section.

1. Select the API you created in the previous step.
1. Select the **Test** tab.
1. Select an operation that's compatible with the model deployment.
    The page displays fields for parameters and headers.
1. Enter parameters and headers as needed. Depending on the operation, you might need to configure or update a **Request body**.
    > [!NOTE]
    > In the test console, API Management automatically populates an **Ocp-Apim-Subscription-Key** header, and configures the subscription key of the built-in [all-access subscription](api-management-subscriptions.md#all-access-subscription). This key enables access to every API in the API Management instance. Optionally display the **Ocp-Apim-Subscription-Key** header by selecting the "eye" icon next to the **HTTP Request**.
1. Select **Send**.

    When the test is successful, the backend responds with a successful HTTP response code and some data. Appended to the response is token usage data to help you monitor and manage your language model token consumption.

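To run the test from code, here's a minimal sketch that reuses the placeholder gateway URL, base path, and subscription key from the earlier examples; it prints the token usage reported with an Azure OpenAI-style chat completions response:

```python
import requests

# Placeholders - substitute your gateway URL, base path, deployment, and key.
response = requests.post(
    "https://contoso.azure-api.net/my-api/openai/deployments/my-deployment/chat/completions",
    headers={"Ocp-Apim-Subscription-Key": "<your-apim-subscription-key>"},
    params={"api-version": "2024-06-01"},  # example version
    json={"messages": [{"role": "user", "content": "Hello!"}]},
)
response.raise_for_status()

body = response.json()
print(body["choices"][0]["message"]["content"])
# Chat completions responses carry a usage object with token counts.
print(body.get("usage"))  # for example: prompt_tokens, completion_tokens, total_tokens
```
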

[!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]