
Commit 6c34721 ("more edits")
1 parent: 9738d98

File tree

1 file changed: +18 −18 lines


articles/ai-foundry/concepts/deployments-overview.md

Lines changed: 18 additions & 18 deletions
@@ -17,39 +17,39 @@ The model catalog in Azure AI Foundry is the hub to discover and use a wide rang
 
 ## Deployment options
 
-Azure AI Foundry provides multiple deployment options depending on the type of models and resources you need to provision. The following deployment options are available:
+Azure AI Foundry provides several deployment options depending on the type of models and resources you need to provision. The following deployment options are available:
 
 - Standard deployment in Azure AI Foundry resources
-- Deployment to serverless API endpoint
-- Deployment to managed compute
+- Deployment to serverless API endpoints
+- Deployment to managed computes
 
 ### Standard deployment in Azure AI Foundry resources
 
-Azure AI Foundry resources (formerly referred to as Azure AI model inference, in Azure AI Services), is **the preferred deployment option** in Azure AI Foundry. It offers the widest range of options including regional, data zone, or global processing, and it offers standard and [provisioned throughput (PTU)](../../ai-services/openai/concepts/provisioned-throughput.md) options. Flagship models in Azure AI Foundry Models support this deployment option.
+Azure AI Foundry resources (formerly referred to as Azure AI model inference in Azure AI Services) are **the preferred deployment option** in Azure AI Foundry. They offer the widest range of capabilities, including regional, data zone, and global processing, as well as standard and [provisioned throughput (PTU)](../../ai-services/openai/concepts/provisioned-throughput.md) options. Flagship models in Azure AI Foundry Models support this deployment option.
 
 This deployment option is available in:
 
-* Azure OpenAI resources<sup>1</sup>
 * Azure AI Foundry resources
+* Azure OpenAI resources<sup>1</sup>
 * Azure AI hub, when connected to an Azure AI Foundry resource (requires the [Deploy models to Azure AI Foundry resources](#configure-azure-ai-foundry-portal-for-deployment-options) feature to be turned on).
 
-<sup>1</sup>If you're using Azure OpenAI resources, the model catalog only shows Azure OpenAI in Foundry Models for deployment. You can get the full list of Foundry Models by upgrading to an Azure AI Foundry resource.
+<sup>1</sup>If you're using Azure OpenAI resources, the model catalog shows only Azure OpenAI in Foundry Models for deployment. You can get the full list of Foundry Models by upgrading to an Azure AI Foundry resource.
 
-To get started with standard deployment in Azure AI Foundry resources, see [How-to: Deploy models to Azure AI Foundry Models](../model-inference/how-to/create-model-deployments.md).
+To get started with standard deployment in Azure AI Foundry resources, see [How-to: Deploy models to Azure AI Foundry Models](../foundry-models/how-to/create-model-deployments.md).
 
 ### Serverless API endpoint
 
-This option is available **only in** [Azure AI hub resources](ai-resources.md) and it allows the creation of dedicated endpoints to host the model, accessible via API. Azure AI Foundry Models support serverless API endpoints with pay-as-you-go billing.
+This deployment option is available **only in** [Azure AI hub resources](ai-resources.md), and it allows the creation of dedicated endpoints to host the model, accessible via API. Azure AI Foundry Models support serverless API endpoints with pay-as-you-go billing.
 
 Only regional deployments can be created for serverless API endpoints, and to use it, you _must_ **turn off** the "Deploy models to Azure AI Foundry resources" option.
 
 To get started with deployment to a serverless API endpoint, see [Deploy models as serverless API deployments](../how-to/deploy-models-serverless.md).
 
-### Managed Compute
+### Managed compute
 
-This option is available **only in** [Azure AI hub resources](ai-resources.md) and it allows the creation of a dedicated endpoint to host the model in a **dedicated compute**. You need to have compute quota in your subscription to host the model and you're billed per compute uptime.
+This deployment option is available **only in** [Azure AI hub resources](ai-resources.md), and it allows the creation of a dedicated endpoint that hosts the model on **dedicated compute**. You need compute quota in your subscription to host the model, and you're billed per compute uptime.
 
-This deployment option is required for model collections such as these:
+Managed compute deployment is required for model collections that include:
 
 * Hugging Face
 * NVIDIA inference microservices (NIMs)
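To make the contrast between the three options concrete, the sketch below shows how each endpoint style might be addressed from code. The hostname patterns, the hard-coded region, and the helper itself are illustrative assumptions, not taken from the article or this diff; the Azure AI Foundry portal shows the real endpoint URL for any given deployment.

```python
# Hypothetical sketch: endpoint URL shapes for the three deployment
# options discussed above. The hostname formats and region suffix are
# assumptions for illustration; check the portal for real endpoints.

def endpoint_url(deployment_kind: str, resource_name: str) -> str:
    """Return an illustrative inference URL for a deployment kind."""
    if deployment_kind == "foundry-standard":
        # Standard deployment in an Azure AI Foundry resource.
        return f"https://{resource_name}.services.ai.azure.com/models"
    if deployment_kind == "serverless":
        # Serverless API endpoint created from an AI hub project.
        return f"https://{resource_name}.eastus2.models.ai.azure.com"
    if deployment_kind == "managed-compute":
        # Managed online endpoint hosted on dedicated compute.
        return f"https://{resource_name}.eastus2.inference.ml.azure.com/score"
    raise ValueError(f"unknown deployment kind: {deployment_kind}")
```

The point of the sketch is only that standard deployments are addressed through the Foundry resource itself, while the other two options each create a dedicated per-model endpoint.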
@@ -63,17 +63,17 @@ To get started, see [How to deploy and inference a managed compute deployment](.
 
 We recommend using [Standard deployments in Azure AI Foundry resources](#standard-deployment-in-azure-ai-foundry-resources) whenever possible, as it offers the largest set of capabilities among the available deployment options. The following table lists details about specific capabilities available for each deployment option:
 
-| Capability | Azure OpenAI | Azure AI Foundry | Serverless API Endpoint | Managed compute |
+| Capability | Azure OpenAI | Standard deployment in Azure AI Foundry resources | Serverless API Endpoint | Managed compute |
 |-------------------------------|----------------------|-------------------|----------------|-----------------|
-| Which models can be deployed? | [Azure OpenAI models](../../ai-services/openai/concepts/models.md) | [Azure OpenAI models and Foundry Models with pay-as-you-go billing](../../ai-foundry/model-inference/concepts/models.md) | [Foundry Models with pay-as-you-go billing](../how-to/model-catalog-overview.md) | [Open and custom models](../how-to/model-catalog-overview.md#availability-of-models-for-deployment-as-managed-compute) |
-| Deployment resource | Azure OpenAI resource | Azure AI Foundry resource | AI project (in AI Hub resource) | AI project (in AI Hub resource) |
+| Which models can be deployed? | [Azure OpenAI models](../../ai-services/openai/concepts/models.md) | [Foundry Models](../../ai-foundry/foundry-models/concepts/models.md) | [Foundry Models with pay-as-you-go billing](../how-to/model-catalog-overview.md) | [Open and custom models](../how-to/model-catalog-overview.md#availability-of-models-for-deployment-as-managed-compute) |
+| Deployment resource | Azure OpenAI resource | Azure AI Foundry resource | AI project (in AI hub resource) | AI project (in AI hub resource) |
 | Requires AI Hubs | No | No | Yes | Yes |
 | Data processing options | Regional <br /> Data-zone <br /> Global | Regional <br /> Data-zone <br /> Global | Regional | Regional |
 | Private networking | Yes | Yes | Yes | Yes |
 | Content filtering | Yes | Yes | Yes | No |
 | Custom content filtering | Yes | Yes | No | No |
 | Key-less authentication | Yes | Yes | No | No |
-| Billing bases | Token usage & [provisioned throughput units](../../ai-services/openai/concepts/provisioned-throughput.md) | Token usage | Token usage<sup>1</sup> | Compute core hours<sup>2</sup> |
+| Billing bases | Token usage & [provisioned throughput units](../../ai-services/openai/concepts/provisioned-throughput.md) | Token usage & [provisioned throughput units](../../ai-services/openai/concepts/provisioned-throughput.md) | Token usage<sup>1</sup> | Compute core hours<sup>2</sup> |
 
 <sup>1</sup> A minimal endpoint infrastructure is billed per minute. You aren't billed for the infrastructure that hosts the model in standard deployment. After you delete the endpoint, no further charges accrue.
 
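The "Billing bases" row is the main cost lever between the options: token usage bills only what you consume, while compute core hours bill for uptime. A toy comparison, with every price invented for the example (real rates depend on model, region, and SKU):

```python
# Toy cost model for the "Billing bases" row above. All prices are
# made up for illustration; real rates vary by model, region, and SKU.

def token_cost(tokens: int, price_per_1k_tokens: float) -> float:
    """Token-usage billing (standard and serverless API deployments)."""
    return tokens / 1000 * price_per_1k_tokens

def compute_cost(hours: float, cores: int, price_per_core_hour: float) -> float:
    """Compute-core-hour billing (managed compute): you pay for uptime,
    whether or not the endpoint serves any requests."""
    return hours * cores * price_per_core_hour

# 2 million tokens at a hypothetical $0.01 per 1,000 tokens:
tokens_bill = token_cost(2_000_000, 0.01)       # 20.0

# A hypothetical 4-core instance running for a week at $0.50 per core hour:
compute_bill = compute_cost(24 * 7, 4, 0.50)    # 336.0
```

Under these invented rates, steady high-volume traffic can favor dedicated compute, while bursty or low-volume traffic favors token-based billing, which is the trade-off the table encodes.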
@@ -89,8 +89,8 @@ Once the **Deploy models to Azure AI Foundry resources** feature is enabled, mod
 
 ## Related content
 
-* [Configure your AI project to use Foundry Models](../../ai-foundry/model-inference/how-to/quickstart-ai-project.md)
-* [Add and configure models to Foundry Models](../model-inference/how-to/create-model-deployments.md)
+* [Configure your AI project to use Foundry Models](../../ai-foundry/foundry-models/how-to/quickstart-ai-project.md)
+* [Add and configure models to Foundry Models](../foundry-models/how-to/create-model-deployments.md)
 * [Deploy Azure OpenAI models with Azure AI Foundry](../how-to/deploy-models-openai.md)
 * [Deploy open models with Azure AI Foundry](../how-to/deploy-models-managed.md)
-* [Model catalog and collections in Azure AI Foundry portal](../how-to/model-catalog-overview.md)
+* [Explore Azure AI Foundry Models](../how-to/model-catalog-overview.md)
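Several statements in the diff hinge on one portal setting, the "Deploy models to Azure AI Foundry resources" feature: AI hub projects need it turned on to use standard deployments in Foundry resources, and turned off to create serverless API endpoints. A minimal sketch of that rule (the helper and its return shape are mine, not the article's):

```python
# Sketch of the portal feature-flag rule described in the article.
# The helper is hypothetical; only the on/off behavior is from the text.

def hub_deployment_options(deploy_to_foundry_resources: bool) -> list[str]:
    """Deployment options an Azure AI hub project offers, given the
    'Deploy models to Azure AI Foundry resources' feature state."""
    options = ["managed compute"]  # available in AI hubs either way
    if deploy_to_foundry_resources:
        # Hub connected to a Foundry resource uses standard deployments.
        options.append("standard deployment in Azure AI Foundry resources")
    else:
        # Serverless API endpoints require the feature to be turned off.
        options.append("serverless API endpoint")
    return options
```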
