
Commit 66f3c78

committed
work in progress
1 parent 5aac2d1 commit 66f3c78

File tree

8 files changed: +113 −14 lines changed

Lines changed: 99 additions & 0 deletions

@@ -0,0 +1,99 @@
---
title: Import an Azure AI Foundry API - Azure API Management
description: How to import an API from Azure AI Foundry as a REST API in Azure API Management.
ms.service: azure-api-management
author: dlepow
ms.author: danlep
ms.topic: how-to
ms.date: 05/15/2025
ms.collection: ce-skilling-ai-copilot
ms.custom: template-how-to, build-2024
---

# Import an Azure AI Foundry API

[!INCLUDE [api-management-availability-all-tiers](../../includes/api-management-availability-all-tiers.md)]

[INTRO]

Learn more about managing AI APIs in API Management:

* [Generative AI gateway capabilities in Azure API Management](genai-gateway-capabilities.md)

## AI service options

* **Azure OpenAI service** - The deployment name of a model is passed in the URL path of the API request.

* **Azure AI** - These are models that are available in Azure AI Foundry through the [Azure AI Model Inference API](/azure/ai-studio/reference/reference-model-inference-api). The deployment name of a model is passed in the request body of the API request.

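The two options differ only in where the deployment name appears in the request. The following sketch illustrates the difference; the gateway hostname, URL paths, and the `gpt-4o` deployment name are hypothetical placeholders, not values from this article, and the exact routes depend on the service version:

```python
# Sketch of the two request styles; hostnames, paths, and the
# "gpt-4o" deployment name are hypothetical placeholders.

def openai_style_request(gateway: str, deployment: str) -> tuple[str, dict]:
    """Azure OpenAI service style: the deployment name goes in the URL path."""
    url = f"https://{gateway}/openai/deployments/{deployment}/chat/completions"
    body = {"messages": [{"role": "user", "content": "Hello"}]}
    return url, body

def azure_ai_style_request(gateway: str, deployment: str) -> tuple[str, dict]:
    """Azure AI Model Inference style: the deployment name goes in the request body."""
    url = f"https://{gateway}/models/chat/completions"
    body = {"model": deployment,
            "messages": [{"role": "user", "content": "Hello"}]}
    return url, body

url_a, _ = openai_style_request("my-apim.azure-api.net", "gpt-4o")
url_b, body_b = azure_ai_style_request("my-apim.azure-api.net", "gpt-4o")
```

This placement difference is what the **Client compatibility** setting described later in this article controls.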
## Prerequisites

- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
- One or more Azure AI services with models deployed, such as:
    - An Azure OpenAI resource. For information about model deployment in Azure OpenAI service, see the [resource deployment guide](/azure/ai-services/openai/how-to/create-resource).
    - An Azure AI Foundry project. For information about creating a project, see [Create a project in the Azure AI Foundry portal](/azure/ai-foundry/how-to/create-projects).

## Import AI Foundry API using the portal

Use the following steps to import an AI Foundry API directly to API Management.

[!INCLUDE [api-management-workspace-availability](../../includes/api-management-workspace-availability.md)]

When you import the API, API Management automatically configures:

* Operations for each of the API's REST API endpoints.
* A system-assigned identity with the necessary permissions to access the AI service deployment.
* A [backend](backends.md) resource and a [set-backend-service](set-backend-service-policy.md) policy that direct API requests to the AI service endpoint.
* Authentication to the backend using the instance's system-assigned managed identity.
* Optionally, policies to help you monitor and manage the API.

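For illustration, the set-backend-service policy that the import generates looks similar to the following fragment. The `backend-id` value here is a hypothetical placeholder; API Management generates the actual backend ID at import time:

```xml
<policies>
    <inbound>
        <base />
        <!-- Routes requests to the backend resource created during import.
             "ai-foundry-backend" is a placeholder backend ID. -->
        <set-backend-service backend-id="ai-foundry-backend" />
    </inbound>
</policies>
```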
To import an AI Foundry API to API Management:

1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
1. In the left menu, under **APIs**, select **APIs** > **+ Add API**.
1. Under **Create from Azure resource**, select **Azure AI Foundry**.

    :::image type="content" source="media/azure-ai-foundry-api/ai-foundry-api.png" alt-text="Screenshot of creating an API from Azure AI Foundry in the portal." :::

1. On the **Select AI service** tab:
    1. Select the **Subscription** in which to search for AI services (Azure OpenAI services or Azure AI Foundry projects). To get information about the deployments in a service, select the **deployments** link next to the service name.

        :::image type="content" source="media/azure-ai-foundry-api/deployments.png" alt-text="Screenshot of deployments for an AI service in the portal.":::

    1. Select an AI service.
    1. Select **Next**.
1. On the **Configure API** tab:
    1. Enter a **Display name** and optional **Description** for the API.
    1. In **Path**, enter a path that your API Management instance uses to access the API endpoints.
    1. Optionally select one or more **Products** to associate with the API.
    1. In **Client compatibility**, select either of the following based on the types of clients you intend to support:
        * **Azure OpenAI** - Clients call the model deployment using the OpenAI API format. Select this option if you use only Azure OpenAI deployments.
        * **Azure AI** - Clients call the model deployment by passing the deployment name in the request body of the API request.
    1. In **Access key**, optionally enter the authorization header name and API key used to access the LLM API.
    1. Select **Next**.
1. On the **Manage token consumption** tab, optionally enter settings or accept defaults that define the following policies to help monitor and manage the API:
    * [Manage token consumption](llm-token-limit-policy.md)
    * [Track token usage](llm-emit-token-metric-policy.md)
1. On the **Apply semantic caching** tab, optionally enter settings or accept defaults that define the policies to help optimize performance and reduce latency for the API:
    * [Enable semantic caching of responses](azure-openai-enable-semantic-caching.md)
1. On the **AI content safety** tab, optionally enter settings or accept defaults to configure the Azure AI Content Safety service checks for API requests:
    * [Enforce content safety checks on LLM requests](llm-content-safety-policy.md)
1. Select **Review**.
1. After settings are validated, select **Create**.

## Test the LLM API

To ensure that your LLM API is working as expected, test it in the API Management test console.

1. Select the API you created in the previous step.
1. Select the **Test** tab.
1. Select an operation that's compatible with the model in the LLM API.

    The page displays fields for parameters and headers.

1. Enter parameters and headers as needed. Depending on the operation, you may need to configure or update a **Request body**.

    > [!NOTE]
    > In the test console, API Management automatically populates an **Ocp-Apim-Subscription-Key** header, and configures the subscription key of the built-in [all-access subscription](api-management-subscriptions.md#all-access-subscription). This key enables access to every API in the API Management instance. Optionally display the **Ocp-Apim-Subscription-Key** header by selecting the "eye" icon next to the **HTTP Request**.

1. Select **Send**.

When the test is successful, the backend responds with a successful HTTP response code and some data. Appended to the response is token usage data to help you monitor and manage your Azure OpenAI API token consumption.
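Outside the test console, an equivalent client request can be sketched as follows. The gateway hostname, API path, deployment name, API version, and key below are hypothetical placeholders; adjust them to your instance and to the client compatibility option you chose:

```python
import json
import urllib.request

# All values below are hypothetical placeholders.
GATEWAY = "my-apim.azure-api.net"
API_PATH = "my-foundry-api"
DEPLOYMENT = "gpt-4o"
KEY = "<subscription-key>"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build (without sending) a chat completions request through the gateway."""
    url = (f"https://{GATEWAY}/{API_PATH}/openai/deployments/"
           f"{DEPLOYMENT}/chat/completions?api-version=2024-02-01")
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            # The header that the test console populates automatically:
            "Ocp-Apim-Subscription-Key": KEY,
        },
        method="POST",
    )

req = build_chat_request("Hello")
# urllib.request.urlopen(req) would send the request; omitted here.
```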
[!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]

articles/api-management/azure-openai-api-from-specification.md

Lines changed: 1 addition & 1 deletion

@@ -26,7 +26,7 @@ Learn more about managing AI APIs in API Management:
 ## Prerequisites
 
 - An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
-- An Azure OpenAI resource with a model deployed. For more information about model deployment, see the [resource deployment guide](/azure/ai-services/openai/how-to/create-resource).
+- An Azure OpenAI resource with a model deployed. For more information about model deployment in Azure OpenAI service, see the [resource deployment guide](/azure/ai-services/openai/how-to/create-resource).
 
 Make a note of the ID (name) of the deployment. You'll need it when you test the imported API in API Management.
5 binary image files changed (7.63 KB, 130 KB, 67.6 KB, 91.8 KB, 2.17 KB).

articles/api-management/openai-compatible-llm-api.md

Lines changed: 13 additions & 13 deletions

@@ -1,16 +1,16 @@
 ---
-title: Import an LLM API as REST API - Azure API Management
-description: How to import an OpenAI-compatible LLM API or other AI model as a REST API in Azure API Management.
+title: Import a Language Model API as REST API - Azure API Management
+description: How to import an OpenAI-compatible language model API or other AI model as a REST API in Azure API Management.
 ms.service: azure-api-management
 author: dlepow
 ms.author: danlep
 ms.topic: how-to
-ms.date: 05/14/2025
+ms.date: 05/15/2025
 ms.collection: ce-skilling-ai-copilot
 ms.custom: template-how-to, build-2024
 ---
 
-# Import an LLM API
+# Import a language model API
 
 [!INCLUDE [api-management-availability-all-tiers](../../includes/api-management-availability-all-tiers.md)]
 
@@ -23,20 +23,20 @@ Learn more about managing AI APIs in API Management:
 ## Prerequisites
 
 - An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
-- A self-hosted LLM with an API endpoint. You can use an OpenAI-compatible LLM that's exposed by an inference provider such as [Hugging Face Text Generation Inference (TGI)](hhttps://huggingface.co/docs/text-generation-inference/en/index). Alternatively, you can access an LLM through a provider such as [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html).
+- A self-hosted LLM with an API endpoint. You can use an OpenAI-compatible LLM that's exposed by an inference provider such as [Hugging Face Text Generation Inference (TGI)](https://huggingface.co/docs/text-generation-inference/en/index). Alternatively, you can access an LLM through a provider such as [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html).
 
 > [!NOTE]
 > API Management policies such as [llm-token-limit](llm-token-limit-policy.md) and [llm-emit-token-metric](llm-emit-token-metric-policy.md) are supported for APIs available through the [Azure AI Model Inference API](/azure/ai-studio/reference/reference-model-inference-api) or with OpenAI-compatible models served through third-party inference providers.
 
-## Import LLM API using the portal
+## Import language model API using the portal
 
-Jse the following steps to import an LLM API directly to API Management.
+Use the following steps to import an LLM API directly to API Management.
 
 [!INCLUDE [api-management-workspace-availability](../../includes/api-management-workspace-availability.md)]
 
 Depending on the API type you select to import, API Management automatically configures different operations to call the API:
 
-* **OpenAI-compatible API** - Operations for the LLM API's chat completion endpoint
+* **OpenAI-compatible API** - An operation for the LLM API's chat completion endpoint
 * **Passthrough API** - Wildcard operations for standard verbs `GET`, `HEAD`, `OPTIONS`, and `TRACK`. When you call the API, append any required path or parameters to the API request to pass a request to an LLM API endpoint.
 
 For an OpenAI-compatible API, you can optionally configure policies to help you monitor and manage the API.
@@ -45,28 +45,28 @@ To import an LLM API to API Management:
 
 1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
 1. In the left menu, under **APIs**, select **APIs** > **+ Add API**.
-1. Under **Define a new API**, select **OpenAI API**.
+1. Under **Define a new API**, select **Language Model API**.
 
     :::image type="content" source="media/openai-compatible-llm-api/openai-api.png" alt-text="Screenshot of creating an OpenAI-compatible API in the portal." :::
 
 1. On the **Configure API** tab:
     1. Enter a **Display name** and optional **Description** for the API.
     1. Enter the **URL** to the LLM API endpoint.
-    1. Optionally select one or more **Products**l to associate with the API.
+    1. Optionally select one or more **Products** to associate with the API.
     1. In **Path**, append a path that your API Management instance uses to access the LLM API endpoints.
     1. In **Type**, select either **Create OpenAI API** or **Create a passthrough API**.
     1. In **Access key**, optionally enter the authorization header name and API key used to access the LLM API.
     1. Select **Next**.
 1. On the **Manage token consumption** tab, optionally enter settings or accept defaults that define the following policies to help monitor and manage the API:
     * [Manage token consumption](llm-token-limit-policy.md)
-    * [Track token usage](llm-token-metric-policy.md)
+    * [Track token usage](llm-emit-token-metric-policy.md)
 1. On the **Apply semantic caching** tab, optionally enter settings or accept defaults that define the policies to help optimize performance and reduce latency for the API:
     * [Enable semantic caching of responses](azure-openai-enable-semantic-caching.md)
-1. On the **AI content safety**, optionally enter settings or accept defaults to configure [Azure AI Content Safety](llm-content-safety-policy.md) for the API.
+1. On the **AI content safety**, optionally enter settings or accept defaults to configure the Azure AI Content Safety service checks for API requests:
+    * [Enforce content safety checks on LLM requests](llm-content-safety-policy.md)
 1. Select **Review**.
 1. After settings are validated, select **Create**.
 
-
 ## Test the LLM API
 
 To ensure that your LLM API is working as expected, test it in the API Management test console.

0 commit comments
