Commit 1ccad0c ("fixes")
1 parent 9421ccb commit 1ccad0c

6 files changed (+9 additions, -12 deletions)

articles/ai-foundry/model-inference/concepts/endpoints.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -64,13 +64,13 @@ For a chat model, you can create a request as follows:
 
 [!INCLUDE [code-create-chat-completion](../includes/code-create-chat-completion.md)]
 
-If you specify a model name that doesn't match any given model deployment, you get an error that the model doesn't exist. You can control which models are available for users by creating model deployments as explained at [add and configure model deployments](create-model-deployments.md).
+If you specify a model name that doesn't match any given model deployment, you get an error that the model doesn't exist. You can control which models are available for users by creating model deployments as explained at [add and configure model deployments](../how-to/create-model-deployments.md).
 
 ## Key-less authentication
 
 Models deployed to Azure AI Foundry Models in Azure AI Services support key-less authorization using Microsoft Entra ID. Key-less authorization enhances security, simplifies the user experience, reduces operational complexity, and provides robust compliance support for modern development. It makes it a strong choice for organizations adopting secure and scalable identity management solutions.
 
-To use key-less authentication, [configure your resource and grant access to users](configure-entra-id.md) to perform inference. Once configured, then you can authenticate as follows:
+To use key-less authentication, [configure your resource and grant access to users](../how-to/configure-entra-id.md) to perform inference. Once configured, then you can authenticate as follows:
 
 [!INCLUDE [code-create-chat-client-entra](../includes/code-create-chat-client-entra.md)]
```
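The key-less flow in the changed passage swaps an API key for a Microsoft Entra ID bearer token. The referenced include file is not shown in this diff, so the following is a hedged sketch of what such a request could look like at the HTTP level; the endpoint URL, API version, and deployment name are illustrative assumptions, not the include's actual content.

```python
# Hedged sketch: building a key-less (Entra ID) chat-completions request for a
# Foundry Models endpoint. URL path and api-version are assumptions.

def build_entra_request(endpoint: str, token: str, deployment: str, prompt: str):
    """Return (url, headers, body) for a key-less chat completion call."""
    url = f"{endpoint}/models/chat/completions?api-version=2024-05-01-preview"
    headers = {
        # Key-less auth: a Bearer token from Entra ID replaces the api-key header.
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = {
        "model": deployment,  # routes the request to a specific deployment
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, body

url, headers, body = build_entra_request(
    "https://my-resource.services.ai.azure.com",  # hypothetical resource endpoint
    "<entra-id-access-token>",                    # placeholder, not a real token
    "deepseek-v3-0324",
    "Hello!",
)
print(url)
```

In practice the token would typically come from `azure.identity.DefaultAzureCredential().get_token("https://cognitiveservices.azure.com/.default")`, which is the usual scope for Azure AI Services resources.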

articles/ai-foundry/model-inference/how-to/inference.md

Lines changed: 1 addition & 5 deletions
```diff
@@ -14,11 +14,7 @@ ms.reviewer: fasantia
 
 # Use Foundry Models
 
-Azure AI Foundry Models allows customers to consume the most powerful models from flagship model providers using a single endpoint and credentials. This means that you can switch between models and consume them from your application without changing a single line of code.
-
-This article explains how to use the inference endpoint to invoke them.
-
-There are two different APIs to use models in Azure AI Foundry Models:
+Once you have [deployed a model in Azure AI Foundry](create-model-deployments.md), you can consume its capabilities via Azure AI Foundry APIs. There are two different endpoints and APIs to use models in Azure AI Foundry Models.
 
 ## Models inference endpoint
```
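The rewritten intro says there are two different endpoints and APIs for models in Azure AI Foundry Models. A hedged sketch of the contrast, assuming the URL shapes commonly exposed by an Azure AI Services resource (exact paths and API versions are assumptions, not taken from this diff):

```python
# Hedged sketch of the two endpoint shapes: the shared models inference
# endpoint vs. the Azure OpenAI per-deployment endpoint.

def models_inference_url(resource: str) -> str:
    # Shared endpoint: the target deployment is chosen via the "model" field
    # in the request body, not via the URL.
    return f"{resource}/models/chat/completions?api-version=2024-05-01-preview"

def azure_openai_url(resource: str, deployment: str) -> str:
    # Azure OpenAI API: the deployment name is part of the URL path itself.
    return (f"{resource}/openai/deployments/{deployment}"
            f"/chat/completions?api-version=2024-10-21")

resource = "https://my-resource.services.ai.azure.com"  # hypothetical
print(models_inference_url(resource))
print(azure_openai_url(resource, "deepseek-v3-0324"))
```

The design difference matters for routing: with the shared endpoint one client object can reach every deployment on the resource, while the Azure OpenAI shape binds each URL to a single deployment.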

articles/ai-foundry/model-inference/how-to/quickstart-ai-project.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -21,7 +21,7 @@ You can change this behavior and deploy both types of models to Azure AI Foundry
 Additionally, deploying models to Azure AI Foundry Models brings the extra benefits of:
 
 > [!div class="checklist"]
-> * [Routing capability](../concepts/endpoints.md#routing).
+> * [Routing capability](inference.md#routing).
 > * [Custom content filters](../concepts/content-filter.md).
 > * Global capacity deployment type.
 > * [Key-less authentication](configure-entra-id.md) with role-based access control.
@@ -79,7 +79,7 @@ To configure the project to use the Foundry Models capability in Azure AI Foundr
 :::image type="content" source="../media/quickstart-ai-project/overview-endpoint-and-key.png" alt-text="Screenshot of the landing page for the project, highlighting the location of the connected resource and the associated inference endpoint." lightbox="../media/quickstart-ai-project/overview-endpoint-and-key.png":::
 
 > [!TIP]
-> Each Azure AI Foundry Services resource has a single **Foundry Models endpoint** which can be used to access any model deployment on it. The same endpoint serves multiple models depending on which ones are configured. Learn about [how the endpoint works](../concepts/endpoints.md#azure-openai-inference-endpoint).
+> Each Azure AI Foundry Services resource has a single **Foundry Models endpoint** which can be used to access any model deployment on it. The same endpoint serves multiple models depending on which ones are configured. Learn about [how the endpoint works](inference.md#azure-openai-inference-endpoint).
 
 5. Take note of the endpoint URL and credentials.
@@ -136,7 +136,7 @@ Generate your first chat completion:
 
 [!INCLUDE [code-create-chat-completion](../includes/code-create-chat-completion.md)]
 
-Use the parameter `model="<deployment-name>` to route your request to this deployment. *Deployments work as an alias of a given model under certain configurations*. See [Routing](../concepts/endpoints.md#routing) concept page to learn how Azure AI Services route deployments.
+Use the parameter `model="<deployment-name>` to route your request to this deployment. *Deployments work as an alias of a given model under certain configurations*. See [Routing](inference.md#routing) page to learn how Azure AI Foundry Models routes deployments.
 
 
 ## Move from standard deployments to Foundry Models
```
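The routing behavior this hunk documents (deployments acting as aliases of a model under a given configuration) can be illustrated with a minimal sketch; the deployment names below are hypothetical:

```python
# Hedged illustration of routing: the same endpoint serves several
# deployments, and only the "model" field of the request body changes.

def chat_body(deployment: str, prompt: str) -> dict:
    # "model" carries the *deployment* name, which acts as an alias for the
    # underlying model under a specific configuration.
    return {
        "model": deployment,
        "messages": [{"role": "user", "content": prompt}],
    }

# Two requests to the same endpoint, routed to different deployments:
a = chat_body("deepseek-v3-0324", "Summarize this text.")
b = chat_body("my-gpt4o-deployment", "Summarize this text.")
assert a["messages"] == b["messages"] and a["model"] != b["model"]
```

Switching models is therefore a one-field change in the payload, which is why no client code beyond the `model` parameter needs to change.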

articles/ai-foundry/model-inference/how-to/quickstart-github-models.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -75,7 +75,7 @@ Generate your first chat completion:
 
 [!INCLUDE [code-create-chat-completion](../includes/code-create-chat-completion.md)]
 
-Use the parameter `model="<deployment-name>` to route your request to this deployment. *Deployments work as an alias of a given model under certain configurations*. See [Routing](../concepts/endpoints.md#routing) concept page to learn how Azure AI Services route deployments.
+Use the parameter `model="<deployment-name>` to route your request to this deployment. *Deployments work as an alias of a given model under certain configurations*. See [Routing](inference.md#routing) concept page to learn how Azure AI Services route deployments.
 
 > [!IMPORTANT]
 > As opposite to GitHub Models where all the models are already configured, the Azure AI Services resource allows you to control which models are available in your endpoint and under which configuration. Add as many models as you plan to use before indicating them in the `model` parameter. Learn how to [add more models](create-model-deployments.md) to your resource.
```

articles/ai-foundry/model-inference/includes/code-create-openai-client.md

Lines changed: 1 addition & 0 deletions
````diff
@@ -107,4 +107,5 @@ Content-Type: application/json
 ```
 
 Here, `deepseek-v3-0324` is the name of a model deployment in the Azure AI Foundry resource.
+
 ---
````

articles/ai-foundry/model-inference/supported-languages-openai.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -9,7 +9,7 @@ ms.custom: ignite-2024, github-universe-2024
 ms.topic: conceptual
 ms.date: 1/21/2025
 ms.author: fasantia
-zone_pivot_groups: azure-ai-foundry-models-samples
+zone_pivot_groups: openai-supported-languages
 ---
 
 # Supported programming languages for Azure OpenAI SDK
```

0 commit comments
