Commit 77e6028

Merge pull request #5506 from msakande/peer-review-deploy-serverless-article

implement peer review feedback and freshness

2 parents: 70a0723 + 16ead01

11 files changed: +404 −433 lines

articles/ai-foundry/how-to/deploy-models-serverless.md

377 additions & 433 deletions (large diffs are not rendered by default)

13 additions & 0 deletions
@@ -0,0 +1,13 @@
+---
+title: Include file
+description: Include file
+ms.author: mopeakande
+author: msakande
+ms.service: azure-ai-foundry
+ms.topic: include
+ms.date: 06/13/2025
+ms.custom: include
+---
+
+> [!NOTE]
+> We recommend that you deploy Azure AI Foundry Models to **Azure AI Foundry resources** so that you can consume your deployments in the resource through a single endpoint, with the same authentication and schema used to generate inference. The endpoint follows the [Azure AI Model Inference API](/rest/api/aifoundry/modelinference/), which all Foundry Models support. To learn how to deploy a Foundry Model to Azure AI Foundry resources, see [Add and configure models to Azure AI Foundry Models](../model-inference/how-to/create-model-deployments.md).
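The note's "single endpoint with the same authentication and schema" point can be sketched with a minimal request payload in the chat-completions shape used by the Azure AI Model Inference API. This is an illustrative sketch, not code from this commit: the deployment names are hypothetical placeholders, and only the payload's `model` field changes between deployments while the rest of the schema stays identical.

```python
import json


def build_chat_request(model: str, user_message: str) -> dict:
    """Build a chat-completions payload in the shared Model Inference API shape.

    The same payload schema is reused for any Foundry Model deployed to the
    resource; only the `model` field (the deployment name) differs.
    """
    return {
        "model": model,  # deployment name -- hypothetical placeholder
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 128,
    }


# The same builder serves two different (made-up) deployments unchanged,
# which is the point of consuming them through one endpoint and schema.
for deployment in ("my-phi-deployment", "my-mistral-deployment"):
    payload = build_chat_request(deployment, "Hello")
    print(json.dumps(payload, indent=2))
```

In a real call you would POST this payload to the resource's inference endpoint (or use a client SDK) with your credential; the sketch stops at constructing the request to stay self-contained.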
7 binary image files changed (221 KB, 55.8 KB, 107 KB, 256 KB, 272 KB, 250 KB, 211 KB); previews not shown.

0 commit comments
