**articles/ai-foundry/model-inference/includes/create-resources/bicep.md** (2 additions, 16 deletions)

````diff
@@ -30,19 +30,9 @@ The files for this example are in:
 cd azureai-model-inference-bicep/infra
 ```
 
-## Understand the resources
-
-The tutorial helps you create:
-
-> [!div class="checklist"]
-> * An Azure AI Services resource.
-> * A model deployment in the Global standard SKU for each of the models supporting pay-as-you-go.
-> * (Optionally) An Azure AI project and hub.
-> * (Optionally) A connection between the hub and the models in Azure AI Services.
-
-Notice that **you have to deploy an Azure AI project and hub** if you plan to use the Azure AI Foundry portal for managing the resource, using playground, or any other feature from the portal.
+## Create the resources
 
-You are using the following assets to create those resources:
+Follow these steps:
 
 1. Use the template `modules/ai-services-template.bicep` to describe your Azure AI Services resource:
 
@@ -72,10 +62,6 @@
````
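The diff doesn't include the body of `modules/ai-services-template.bicep`. As a rough illustration only (not the tutorial's actual template), a minimal Bicep file describing an Azure AI Services resource might look like the following; the API version, SKU, and parameter names are assumptions:

```bicep
@description('Name for the Azure AI Services account.')
param accountName string

@description('Azure region for the resource.')
param location string = resourceGroup().location

// An Azure AI Services account; kind 'AIServices' is the multi-service
// kind that exposes the model inference endpoint.
resource account 'Microsoft.CognitiveServices/accounts@2023-05-01' = {
  name: accountName
  location: location
  kind: 'AIServices'
  sku: {
    name: 'S0'
  }
  properties: {
    // A custom subdomain is required for keyless (Entra ID) authentication.
    customSubDomainName: toLower(accountName)
  }
}

output accountId string = account.id
output endpoint string = account.properties.endpoint
```

A template like this would typically be deployed from the `infra` folder as a module referenced by a top-level `main.bicep`.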
**articles/ai-foundry/model-inference/includes/create-resources/intro.md** (16 additions, 0 deletions)

````diff
@@ -11,6 +11,22 @@ ms.topic: include
 
 In this article, you learn how to create the resources required to use Azure AI model inference and consume flagship models from the Azure AI model catalog.
 
+## Understand the resources
+
+Azure AI model inference is a capability of Azure AI Services resources in Azure. You can create model deployments under the resource to consume their predictions. You can also connect the resource to Azure AI hubs and projects in Azure AI Foundry to create intelligent applications if needed. The following picture shows the high-level architecture.
+
+:::image type="content" source="../../media/create-resources/resources-architecture.png" alt-text="A diagram showing the high level architecture of the resources created in the tutorial." lightbox="../../media/create-resources/resources-architecture.png":::
+
+Azure AI Services resources don't require AI projects or AI hubs to operate, and you can create them to consume flagship models from your applications. However, additional capabilities, such as the playground and agents, are available if you **deploy an Azure AI project and hub**.
+
+The tutorial helps you create:
+
+> [!div class="checklist"]
+> * An Azure AI Services resource.
+> * A model deployment for each of the models supported with pay-as-you-go.
+> * (Optionally) An Azure AI project and hub.
+> * (Optionally) A connection between the hub and the models in Azure AI Services.
````
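The model deployments in the checklist above are child resources of the Azure AI Services account. A hedged Bicep sketch of one such deployment (the model name, version, format, and API version are illustrative assumptions, not values from the tutorial):

```bicep
param accountName string

// Hypothetical model identifiers for illustration; real values come from
// the Azure AI model catalog entry you want to deploy.
param modelName string = 'Phi-3.5-mini-instruct'
param modelVersion string = '2'
param modelFormat string = 'Microsoft'

// Reference the existing Azure AI Services account.
resource account 'Microsoft.CognitiveServices/accounts@2023-05-01' existing = {
  name: accountName
}

// Pay-as-you-go model deployments use the Global standard SKU.
resource modelDeployment 'Microsoft.CognitiveServices/accounts/deployments@2023-05-01' = {
  parent: account
  name: modelName
  sku: {
    name: 'GlobalStandard'
    capacity: 1
  }
  properties: {
    model: {
      format: modelFormat
      name: modelName
      version: modelVersion
    }
  }
}
```

One deployment resource of this shape would be declared (or looped with a `for` expression) per model you want to consume.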
**articles/ai-services/speech-service/batch-transcription-create.md** (1 addition, 1 deletion)

````diff
@@ -246,7 +246,7 @@ To use a custom speech model for batch transcription, you need the model's URI.
 > [!TIP]
 > A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use custom speech with the batch transcription service. You can conserve resources if you use the [custom speech model](how-to-custom-speech-train-model.md) only for batch transcription.
 
-Batch transcription requests for expired models fail with a 4xx error. Set the `model` property to a base model or custom model that isn't expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [Custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
+Batch transcription requests for expired models fail with a 4xx error. Set the `model` property to a base model or custom model that isn't expired. Otherwise, don't include the `model` property to always use the latest base model. For more information, see [Choose a model](./custom-speech-overview.md#choose-your-model) and [Custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
````
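For context on the `model` property discussed in this hunk: in a batch transcription create request, `model` is a reference to the model's URI. A sketch of a request body, with placeholder host, container URL, and model ID (the API version segment is illustrative):

```json
{
  "displayName": "My transcription job",
  "locale": "en-US",
  "contentUrls": [
    "https://example.blob.core.windows.net/audio/sample.wav"
  ],
  "model": {
    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/<model-id>"
  },
  "properties": {
    "wordLevelTimestampsEnabled": true
  }
}
```

Omitting the `model` object entirely makes the service use the latest base model for the locale, which is the behavior the paragraph above recommends when you don't need a custom model.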