
Commit ee1151c

updating managed deployment article date to reflect changes done for Build
1 parent 2d572fc commit ee1151c

File tree: 1 file changed (+2, -2 lines)


articles/ai-foundry/how-to/deploy-models-managed.md

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@ ms.service: azure-ai-foundry
 ms.custom:
 - build-2024
 ms.topic: how-to
-ms.date: 03/24/2025
+ms.date: 05/19/2025
 ms.reviewer: fasantia
 reviewer: santiagxf
 ms.author: mopeakande
@@ -16,7 +16,7 @@ author: msakande
 
 # How to deploy and inference a managed compute deployment with code
 
-The Azure AI Foundry portal [model catalog](../how-to/model-catalog-overview.md) offers over 1,600 models, and the most common way to deploy these models is to use the managed compute deployment option, which is also sometimes referred to as a managed online deployment.
+The Azure AI Foundry portal [model catalog](../how-to/model-catalog-overview.md) offers over 1,600 models, and a common way to deploy these models is to use the managed compute deployment option, which is also sometimes referred to as a managed online deployment.
 
 Deployment of a large language model (LLM) makes it available for use in a website, an application, or other production environment. Deployment typically involves hosting the model on a server or in the cloud and creating an API or other interface for users to interact with the model. You can invoke the deployment for real-time inference of generative AI applications such as chat and copilot.
 
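The article this commit touches describes invoking a managed compute (managed online) deployment for real-time inference. As a minimal sketch of what such an invocation looks like: the endpoint URL, API key, and payload shape below are illustrative assumptions (managed online endpoints commonly accept a JSON POST with key-based bearer auth), not values taken from this commit.

```python
# Hypothetical sketch: calling a managed online deployment's scoring endpoint.
# The URL, key, and payload schema are placeholders; substitute the values
# shown for your endpoint in the Azure AI Foundry portal.
import json
import urllib.request


def invoke_deployment(endpoint_url: str, api_key: str, payload: dict) -> dict:
    """POST a JSON payload to a scoring endpoint and return the parsed reply."""
    req = urllib.request.Request(
        endpoint_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # key-based auth assumed
        },
    )
    # Sending the request requires a live endpoint; this call fails otherwise.
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example call (requires a real endpoint and key):
# invoke_deployment(
#     "https://<endpoint-name>.<region>.inference.ml.azure.com/score",
#     "<endpoint-key>",
#     {"input_data": {"input_string": ["Hello"]}},
# )
```

Only standard-library modules are used here; the Azure SDKs offer higher-level clients, but a plain HTTPS POST illustrates the request/response shape the article's deployments expose.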

0 commit comments
