Commit 83dc5cb

fix broken links
1 parent 5f40744 commit 83dc5cb

6 files changed: +8 −7 lines changed

articles/ai-foundry/concepts/ai-resources.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -76,7 +76,7 @@ Projects also have specific settings that only hold for that project:
 
 ## Azure AI services API access keys
 
-The hub allows you to set up connections to existing Azure OpenAI or Azure AI Services resource types, which can be used to host model deployments. You can access these model deployments from connected resources in Azure AI Foundry portal. Keys to connected resources can be listed from the Azure AI Foundry portal or Azure portal. For more information, see [Find Azure AI Foundry Service in the Azure portal](#find-azure-ai-foundry-resources-in-the-azure-portal).
+The hub allows you to set up connections to existing Azure OpenAI or Azure AI Services resource types, which can be used to host model deployments. You can access these model deployments from connected resources in Azure AI Foundry portal. Keys to connected resources can be listed from the Azure AI Foundry portal or Azure portal. For more information, see [Find Azure AI Foundry Service in the Azure portal](#find-azure-ai-foundry-services-in-the-azure-portal).
 
 ### Virtual networking
 
```

articles/ai-foundry/concepts/deployments-overview.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -50,7 +50,7 @@ Azure AI Foundry offers four different deployment options:
 
 Azure AI Foundry encourages you to explore various deployment options and choose the one that best suits your business and technical needs. In general, consider using the following approach to select a deployment option:
 
-* Start with [Foundry Models](../../ai-foundry/model-inference/overview.md), which is the option with the largest scope. This option allows you to iterate and prototype faster in your application without having to rebuild your architecture each time you decide to change something. If you're using Azure AI Foundry hubs or projects, enable this option by [turning on the Foundry Models feature](../model-inference/how-to/quickstart-ai-project.md#configure-the-project-to-use-azure-ai-model-inference).
+* Start with [Foundry Models](../../ai-foundry/model-inference/overview.md), which is the option with the largest scope. This option allows you to iterate and prototype faster in your application without having to rebuild your architecture each time you decide to change something. If you're using Azure AI Foundry hubs or projects, enable this option by [turning on the Foundry Models feature](../model-inference/how-to/quickstart-ai-project.md#configure-the-project-to-use-foundry-models).
 
 * When you're looking to use a specific model:
 
```

articles/ai-foundry/how-to/develop/llama-index.md

Lines changed: 2 additions & 1 deletion
````diff
@@ -86,7 +86,8 @@ llm = AzureAICompletionsModel(
 ```
 
 > [!TIP]
-> If your model deployment is hosted in Azure OpenAI in Foundry Models or Azure AI Services resource, configure the client as indicated at [Azure OpenAI models and Foundry Models service](#azure-openai-models-and-azure-ai-model-inference-service).
+> If your model deployment is hosted in Azure OpenAI in Foundry Models or Azure AI Services resource, configure the client as indicated at [Azure OpenAI models and Foundry Models service](#azure-openai-models-and-foundry-models-service).
+
 
 If your endpoint is serving more than one model, like with the [Foundry Models service](../../model-inference/overview.md) or [GitHub Models](https://github.com/marketplace/models), you have to indicate the `model_name` parameter:
 
````
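The `model_name` requirement mentioned in the hunk above reflects how a multi-model endpoint routes traffic: the model identifier travels in the request payload rather than in the URL. A minimal standard-library sketch of the underlying HTTP request (the resource name, deployment name, and `api-version` value are illustrative assumptions, and nothing is actually sent):

```python
import json
import urllib.request

# Hypothetical resource and deployment names, used only for illustration.
endpoint = "https://my-resource.services.ai.azure.com/models"
payload = {
    "model": "my-deployment",  # selects one deployment on the shared endpoint
    "messages": [{"role": "user", "content": "Hello"}],
}
request = urllib.request.Request(
    url=endpoint + "/chat/completions?api-version=2024-05-01-preview",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "api-key": "<your-key>"},
    method="POST",
)
# The request is only constructed here; urllib.request.urlopen(request) would send it.
print(request.full_url)
```

SDK parameters such as `model_name` ultimately populate this `model` field, which is why omitting it on a multi-model endpoint leaves the service unable to route the call.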

articles/ai-foundry/model-inference/how-to/quickstart-ai-project.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -79,7 +79,7 @@ To configure the project to use the Foundry Models capability in Azure AI Foundr
 :::image type="content" source="../media/quickstart-ai-project/overview-endpoint-and-key.png" alt-text="Screenshot of the landing page for the project, highlighting the location of the connected resource and the associated inference endpoint." lightbox="../media/quickstart-ai-project/overview-endpoint-and-key.png":::
 
 > [!TIP]
-> Each Azure AI Foundry Services resource has a single **Foundry Models endpoint** which can be used to access any model deployment on it. The same endpoint serves multiple models depending on which ones are configured. Learn about [how the endpoint works](../concepts/endpoints.md#azure-ai-inference-endpoint).
+> Each Azure AI Foundry Services resource has a single **Foundry Models endpoint** which can be used to access any model deployment on it. The same endpoint serves multiple models depending on which ones are configured. Learn about [how the endpoint works](../concepts/endpoints.md#azure-openai-inference-endpoint).
 
 5. Take note of the endpoint URL and credentials.
 
@@ -189,7 +189,7 @@ For each model deployed as Serverless API Endpoints, follow these steps:
 Consider the following limitations when configuring your project to use Foundry Models:
 
 * Only models supporting pay-as-you-go billing (Models as a Service) are available for deployment to Foundry Models. Models requiring compute quota from your subscription (Managed Compute), including custom models, can only be deployed within a given project as Managed Online Endpoints and continue to be accessible using their own set of endpoint URI and credentials.
-* Models available as both pay-as-you-go billing and managed compute offerings are, by default, deployed to Foundry Models in Azure AI Foundry Services resources. Azure AI Foundry portal doesn't offer a way to deploy them to Managed Online Endpoints. You have to turn off the feature mentioned at [Configure the project to use Foundry Models](#configure-the-project-to-use-azure-ai-model-inference) or use the Azure CLI/Azure ML SDK/ARM templates to perform the deployment.
+* Models available as both pay-as-you-go billing and managed compute offerings are, by default, deployed to Foundry Models in Azure AI Foundry Services resources. Azure AI Foundry portal doesn't offer a way to deploy them to Managed Online Endpoints. You have to turn off the feature mentioned at [Configure the project to use Foundry Models](#configure-the-project-to-use-foundry-models) or use the Azure CLI/Azure ML SDK/ARM templates to perform the deployment.
 
 ## Next steps
 
```

articles/ai-foundry/model-inference/includes/create-model-deployments/portal.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -12,7 +12,7 @@ zone_pivot_groups: azure-ai-models-deployment
 
 * An AI project connected to your Azure AI Foundry resource with the feature **Deploy models to Azure AI model inference service** on.
 
-  * You can follow the steps at [Configure Azure AI model inference service in my project](../../how-to/quickstart-ai-project.md#configure-the-project-to-use-azure-ai-model-inference) in Azure AI Foundry.
+  * You can follow the steps at [Configure Azure AI model inference service in my project](../../how-to/quickstart-ai-project.md#configure-the-project-to-use-foundry-models) in Azure AI Foundry.
 
 ## Add a model
 
```

articles/ai-foundry/model-inference/supported-languages.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -19,7 +19,7 @@ Azure AI Foundry Models can be consumed with different SDKs and programming mode
 
 All models deployed to Azure AI Foundry Models support the [Foundry Models API](https://aka.ms/azureai/modelinference) and its associated family of SDKs.
 
-To use these SDKs, connect them to the [Foundry Models URI](concepts/endpoints.md#azure-ai-inference-endpoint) (usually in the form `https://<resource-name>.services.ai.azure.com/models`).
+To use these SDKs, connect them to the [Foundry Models URI](concepts/endpoints.md#azure-openai-inference-endpoint) (usually in the form `https://<resource-name>.services.ai.azure.com/models`).
 
 ### Azure AI Inference package
 
```
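The URI shape quoted in that hunk can be sketched as a tiny helper that derives the Foundry Models URI from a resource name (the function name and its validation are illustrative, not part of any SDK):

```python
def foundry_models_uri(resource_name: str) -> str:
    """Return the Foundry Models URI for an Azure AI Foundry resource name.

    Follows the documented shape https://<resource-name>.services.ai.azure.com/models.
    """
    # Guard against callers passing a full hostname or path instead of a bare name.
    if not resource_name or any(ch in resource_name for ch in "/."):
        raise ValueError("expected a bare resource name, not a path or hostname")
    return f"https://{resource_name}.services.ai.azure.com/models"

print(foundry_models_uri("my-resource"))
```

This is the value you would hand to an SDK client as its endpoint; the `/models` suffix distinguishes it from the resource's other endpoints.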
