---
title: Import an Azure AI Foundry API - Azure API Management
description: How to import an API from Azure AI Foundry as a REST API in Azure API Management.
ms.service: azure-api-management
author: dlepow
ms.author: danlep
ms.topic: how-to
ms.date: 05/15/2025
ms.collection: ce-skilling-ai-copilot
ms.custom: template-how-to, build-2024
---

# Import an Azure AI Foundry API

[!INCLUDE [api-management-availability-all-tiers](../../includes/api-management-availability-all-tiers.md)]

This article shows how to import an API from an Azure AI service, such as an Azure OpenAI deployment or an Azure AI Foundry project, into your Azure API Management instance as a REST API. Importing the API lets you manage, secure, and monitor access to your model deployments through the API Management gateway.

Learn more about managing AI APIs in API Management:

* [Generative AI gateway capabilities in Azure API Management](genai-gateway-capabilities.md)

| 23 | + |
## AI service options

* **Azure OpenAI service** - The deployment name of the model is passed in the URL path of the API request.

* **Azure AI** - Models that are available in Azure AI Foundry through the [Azure AI Model Inference API](/azure/ai-studio/reference/reference-model-inference-api). The deployment name of the model is passed in the request body of the API request.

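The difference between the two options comes down to where the deployment name appears in the request. The following minimal sketch illustrates the two request shapes; the endpoint values, deployment name, and API version are placeholder examples, not values from your service:

```python
def azure_openai_url(endpoint: str, deployment: str,
                     api_version: str = "2024-02-01") -> str:
    """Azure OpenAI service: the deployment name is part of the URL path."""
    return (f"{endpoint}/openai/deployments/{deployment}"
            f"/chat/completions?api-version={api_version}")


def azure_ai_request(endpoint: str, deployment: str, prompt: str):
    """Azure AI Model Inference API: the deployment name goes in the request body."""
    url = f"{endpoint}/models/chat/completions"
    body = {
        "model": deployment,  # passed in the body, not the URL path
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, body
```

Clients written against one shape can't call the other without changes, which is why the import wizard asks you to choose a client-compatibility option later in this article.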
| 29 | + |
## Prerequisites

- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
- One or more Azure AI services with models deployed, such as:
  - An Azure OpenAI resource. For information about model deployment in Azure OpenAI service, see the [resource deployment guide](/azure/ai-services/openai/how-to/create-resource).
  - An Azure AI Foundry project. For information about creating a project, see [Create a project in the Azure AI Foundry portal](/azure/ai-foundry/how-to/create-projects).

## Import an AI Foundry API using the portal

Use the following steps to import an AI Foundry API directly to API Management.

[!INCLUDE [api-management-workspace-availability](../../includes/api-management-workspace-availability.md)]
| 44 | + |
When you import the API, API Management automatically configures:

* Operations for each of the API's REST API endpoints.
* A system-assigned identity with the necessary permissions to access the AI service deployment.
* A [backend](backends.md) resource and a [set-backend-service](set-backend-service-policy.md) policy that direct API requests to the AI service endpoint.
* Authentication to the backend using the instance's system-assigned managed identity.
* Optionally, policies to help you monitor and manage the API.
| 52 | + |
To import an AI Foundry API to API Management:

1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
1. In the left menu, under **APIs**, select **APIs** > **+ Add API**.
1. Under **Create from Azure resource**, select **Azure AI Foundry**.

    :::image type="content" source="media/azure-ai-foundry-api/ai-foundry-api.png" alt-text="Screenshot of creating an OpenAI-compatible API in the portal." :::
1. On the **Select AI service** tab:
    1. Select the **Subscription** in which to search for AI services (Azure OpenAI services or Azure AI Foundry projects). To get information about the deployments in a service, select the **deployments** link next to the service name.
        :::image type="content" source="media/azure-ai-foundry-api/deployments.png" alt-text="Screenshot of deployments for an AI service in the portal.":::
    1. Select an AI service.
    1. Select **Next**.
1. On the **Configure API** tab:
    1. Enter a **Display name** and optional **Description** for the API.
    1. In **Path**, enter a path that your API Management instance uses to access the API endpoints.
    1. Optionally select one or more **Products** to associate with the API.
    1. In **Client compatibility**, select either of the following based on the types of client you intend to support:
        * **Azure OpenAI** - Clients call the model deployment using the OpenAI API format. Select this option if you use only Azure OpenAI deployments.
        * **Azure AI** - Clients call the model deployment by passing the deployment name of the model in the request body, using the Azure AI Model Inference API format.
    1. In **Access key**, optionally enter the authorization header name and API key used to access the LLM API.
    1. Select **Next**.
1. On the **Manage token consumption** tab, optionally enter settings or accept defaults that define the following policies to help monitor and manage the API:
    * [Manage token consumption](llm-token-limit-policy.md)
    * [Track token usage](llm-emit-token-metric-policy.md)
1. On the **Apply semantic caching** tab, optionally enter settings or accept defaults that define the policies to help optimize performance and reduce latency for the API:
    * [Enable semantic caching of responses](azure-openai-enable-semantic-caching.md)
1. On the **AI content safety** tab, optionally enter settings or accept defaults to configure the Azure AI Content Safety service checks for API requests:
    * [Enforce content safety checks on LLM requests](llm-content-safety-policy.md)
1. Select **Review**.
1. After settings are validated, select **Create**.
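To give an intuition for what the token consumption policy enforces, the following Python sketch simulates a tokens-per-minute (TPM) quota. This is an illustration of the concept only, not the gateway's actual implementation, and the limit value is an arbitrary example:

```python
import time
from collections import deque
from typing import Optional


class TokenRateLimiter:
    """Conceptual sketch of a tokens-per-minute quota, similar in
    spirit to what a token limit policy enforces at the gateway."""

    def __init__(self, tokens_per_minute: int):
        self.limit = tokens_per_minute
        self.events = deque()  # (timestamp, tokens) pairs within the window

    def allow(self, tokens: int, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Discard usage older than the 60-second sliding window.
        while self.events and now - self.events[0][0] >= 60:
            self.events.popleft()
        used = sum(t for _, t in self.events)
        if used + tokens > self.limit:
            return False  # the gateway would reject the request here
        self.events.append((now, tokens))
        return True
```

Requests that would push usage past the configured limit are rejected until older usage ages out of the window, which protects your model deployment's quota from being exhausted by a single consumer.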
| 83 | + |
## Test the LLM API

To ensure that your LLM API is working as expected, test it in the API Management test console.

1. Select the API you created in the previous step.
1. Select the **Test** tab.
1. Select an operation that's compatible with the model in the LLM API.
    The page displays fields for parameters and headers.
1. Enter parameters and headers as needed. Depending on the operation, you may need to configure or update a **Request body**.
    > [!NOTE]
    > In the test console, API Management automatically populates an **Ocp-Apim-Subscription-Key** header and configures the subscription key of the built-in [all-access subscription](api-management-subscriptions.md#all-access-subscription). This key enables access to every API in the API Management instance. Optionally display the **Ocp-Apim-Subscription-Key** header by selecting the "eye" icon next to the **HTTP Request**.
1. Select **Send**.

    When the test is successful, the backend responds with a successful HTTP response code and some data. Appended to the response is token usage data to help you monitor and manage your LLM API token consumption.
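Outside the test console, any HTTP client can call the imported API by sending the same subscription key header. The following minimal sketch uses only the Python standard library; the gateway URL, API path, deployment name, and API version are placeholders for your own values, and the URL shape assumes the **Azure OpenAI** client-compatibility option:

```python
import json
import urllib.request


def build_chat_request(gateway: str, api_path: str, subscription_key: str,
                       deployment: str, prompt: str) -> urllib.request.Request:
    """Build a chat completions request for an API imported with the
    Azure OpenAI client-compatibility option (deployment name in the path)."""
    url = (f"{gateway}/{api_path}/openai/deployments/{deployment}"
           f"/chat/completions?api-version=2024-02-01")  # placeholder API version
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            # The same header the test console populates automatically:
            "Ocp-Apim-Subscription-Key": subscription_key,
        },
        method="POST",
    )


# To send the request:
# response = urllib.request.urlopen(
#     build_chat_request("https://contoso.azure-api.net", "my-llm-api",
#                        "<subscription-key>", "gpt-4o", "Hello"))
```

In production, prefer a scoped product subscription key over the all-access key that the test console uses.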

[!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]