articles/ai-foundry/model-inference/includes/configure-content-filters/code.md (+1 -1)
@@ -9,7 +9,7 @@ ms.topic: include
 ## Account for content filtering in your code

-Once content filtering has been applied to your model deployment, request may be intercepted by the service depending on the inputs and outputs. When a content filter is triggered, a 400 error code is returned with the description of the rule triggered.
+Once content filtering has been applied to your model deployment, requests can be intercepted by the service depending on the inputs and outputs. When a content filter is triggered, a 400 error code is returned with the description of the rule triggered.
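For context, here is a minimal sketch of how client code might account for that 400 response, using the `azure-ai-inference` Python package. The environment variable names, the deployment name, and the exact shape of the error payload are assumptions for illustration, not details confirmed by this change:

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

# Hypothetical environment variable names; substitute your resource's
# endpoint and API key.
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

try:
    response = client.complete(
        messages=[UserMessage(content="Tell me about the history of chemistry.")],
        model="mistral-large",  # a deployment name, assumed for illustration
    )
    print(response.choices[0].message.content)
except HttpResponseError as ex:
    if ex.status_code == 400:
        # A triggered content filter surfaces as HTTP 400; the error message
        # carries the description of the rule that was triggered.
        print("Request was filtered:", ex.message)
    else:
        raise
```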
-* Some of the commands in this tutorial use the `jq` tool, which may not be installed in your system. For installation instructions, see [Download `jq`](https://stedolan.github.io/jq/download/).
+* Some of the commands in this tutorial use the `jq` tool, which might not be installed in your system. For installation instructions, see [Download `jq`](https://stedolan.github.io/jq/download/).

 * Identify the following information:
@@ -77,7 +77,7 @@ To add a model, you first need to identify the model that you want to deploy. Yo
 }
 ```

-6. Identify the model you want to deploy. You need the properties `name`, `format`, `version`, and `sku`. Capacity may also be needed depending on the type of deployment.
+6. Identify the model you want to deploy. You need the properties `name`, `format`, `version`, and `sku`. Capacity might also be needed depending on the type of deployment.
 > [!TIP]
 > Notice that not all the models are available in all the SKUs.
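As a point of reference, those properties plus capacity are what end up in the deployment request body. The following is a hypothetical sketch of such a body; the schema, SKU name, and version values here are illustrative assumptions, not taken from this change:

```json
{
  "properties": {
    "model": {
      "name": "Mistral-Large",
      "format": "Mistral",
      "version": "1"
    }
  },
  "sku": {
    "name": "GlobalStandard",
    "capacity": 1
  }
}
```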
@@ -98,7 +98,7 @@ To add a model, you first need to identify the model that you want to deploy. Yo

 8. The model is ready to be consumed.

-You can deploy the same model multiple times if needed as long as it's under a different deployment name. This capability may be useful in case you want to test different configurations for a given model, including content safety.
+You can deploy the same model multiple times if needed as long as it's under a different deployment name. This capability might be useful in case you want to test different configurations for a given model, including content safety.
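Because each deployment gets its own name, comparing two configurations of the same model is just a change of the `model` parameter at request time. Continuing from the earlier sketch (the same `client` and imports), and with both deployment names invented for illustration:

```python
# "mistral-large" and "mistral-large-strict" are hypothetical deployments of
# the same model, the second with a stricter content filter configuration.
for deployment_name in ("mistral-large", "mistral-large-strict"):
    response = client.complete(
        messages=[UserMessage(content="Summarize the plot of Hamlet.")],
        model=deployment_name,  # routes the request to that deployment
    )
    print(deployment_name, "->", response.choices[0].message.content[:80])
```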
articles/ai-foundry/model-inference/includes/create-model-deployments/portal.md (+9 -9)
@@ -20,33 +20,33 @@ You can add models to the Azure AI model inference endpoint using the following

 1. Go to **Model catalog** section in [Azure AI Foundry portal](https://ai.azure.com/explore/models).

-2. Scroll to the model you are interested in and select it.
+2. Scroll to the model you're interested in and select it.

 :::image type="content" source="../../media/add-model-deployments/models-search-and-deploy.gif" alt-text="An animation showing how to search models in the model catalog and select one for viewing its details." lightbox="../../media/add-model-deployments/models-search-and-deploy.gif":::

 3. You can review the details of the model in the model card.

 4. Select **Deploy**.

-5. For models providers that require additional terms of contract, you will be asked to accept those terms. This is the case for Mistral models for instance. Accept the terms on those cases by selecting **Subscribe and deploy**.
+5. For model providers that require more contract terms, you'll be asked to accept those terms. This is the case for Mistral models, for instance. In those cases, accept the terms by selecting **Subscribe and deploy**.

 :::image type="content" source="../../media/add-model-deployments/models-deploy-agree.png" alt-text="A screenshot showing how to agree to the terms and conditions of a Mistral-Large model." lightbox="../../media/add-model-deployments/models-deploy-agree.png":::

-6. You can configure the deployment settings at this time. By default, the deployment receives the name of the model you are deploying. The deployment name is used in the `model` parameter for request to route to this particular model deployment. This allows you to also configure specific names for your models when you attach specific configurations. For instance `o1-preview-safe` for a model with a strict content safety content filter.
+6. You can configure the deployment settings at this time. By default, the deployment receives the name of the model you're deploying. The deployment name is used in the `model` parameter for requests to route to this particular model deployment. This also allows you to configure specific names for your models when you attach specific configurations, for instance `o1-preview-safe` for a model with a strict content safety filter.

 > [!TIP]
-> Each model may support different deployments types, providing different data residency or throughput guarantees. See [deployment types](../../concepts/deployment-types.md) for more details.
+> Each model can support different deployment types, providing different data residency or throughput guarantees. See [deployment types](../../concepts/deployment-types.md) for more details.

-5. We automatically select an Azure AI Services connection depending on your project. Use the **Customize** option to change the connection based on your needs. If you are deploying under the **Standard** deployment type, the models needs to be available in the region of the Azure AI Services resource.
+5. We automatically select an Azure AI Services connection depending on your project. Use the **Customize** option to change the connection based on your needs. If you're deploying under the **Standard** deployment type, the models need to be available in the region of the Azure AI Services resource.

 :::image type="content" source="../../media/add-model-deployments/models-deploy-customize.png" alt-text="A screenshot showing how to customize the deployment if needed." lightbox="../../media/add-model-deployments/models-deploy-customize.png":::

 > [!TIP]
-> If the desired resource is not listed, you may need to create a connection to it. See [Configure Azure AI model inference service in my project](../../how-to/configure-project-connection.md) in Azure AI Foundry portal.
+> If the desired resource isn't listed, you might need to create a connection to it. See [Configure Azure AI model inference service in my project](../../how-to/configure-project-connection.md) in Azure AI Foundry portal.

 6. Select **Deploy**.

-7. Once the deployment completes, the new model will be listed in the page and it's ready to be used.
+7. Once the deployment completes, the new model is listed on the page and ready to be used.

 ## Manage models

@@ -58,7 +58,7 @@ You can manage the existing model deployments in the resource using Azure AI Fou

 :::image type="content" source="../../media/quickstart-ai-project/endpoints-ai-services-connection.png" alt-text="A screenshot showing the list of models available under a given connection." lightbox="../../media/quickstart-ai-project/endpoints-ai-services-connection.png":::

-3. You see a list of models available under each connection. Select the model deployment you are interested in.
+3. You see a list of models available under each connection. Select the model deployment you're interested in.

 4. **Edit** or **Delete** the deployment as needed.

@@ -74,7 +74,7 @@ You can interact with the new model in Azure AI Foundry portal using the playgro

 2. Depending on the type of model you deployed, select the playground needed. In this case we select **Chat playground**.

-3. In the **Deployment** drop down, under **Setup** select the name of the model deployment you have just created.
+3. In the **Deployment** dropdown, under **Setup**, select the name of the model deployment you created.

 :::image type="content" source="../../media/add-model-deployments/playground-chat-models.png" alt-text="A screenshot showing how to select a model deployment to use in playground." lightbox="../../media/add-model-deployments/playground-chat-models.png":::
articles/ai-foundry/model-inference/includes/create-resources/portal.md (+8 -8)
@@ -23,32 +23,32 @@ To create a project with an Azure AI Services account, follow these steps:

 3. Give the project a name, for example "my-project".

-4. In this tutorial, we will create a brand new project under a new AI hub, hence, select **Create new hub**.
+4. In this tutorial, we create a brand-new project under a new AI hub, so select **Create new hub**.

 5. Give the hub a name, for example "my-hub" and select **Next**.

-6. The wizard will update with details about the resources that are going to be created. Select **Azure resources to be created** to see the details.
+6. The wizard updates with details about the resources to be created. Select **Azure resources to be created** to see the details.

 :::image type="content" source="../../media/create-resources/create-project-with-hub-details.png" alt-text="A screenshot showing the details of the project and hub to be created." lightbox="../../media/create-resources/create-project-with-hub-details.png":::

 7. You can see that the following resources are created:

 | Property | Description |
 | -------------- | ----------- |
-| Resource group | The main container for all the resources in Azure. This helps get resources that work together organized. It also helps to have an scope for the costs associated with the entire project. |
-| Location | The region of the resources that your are creating. |
+| Resource group | The main container for all the resources in Azure. It helps keep resources that work together organized and gives you a scope for the costs associated with the entire project. |
+| Location | The region of the resources that you're creating. |
 | Hub | The main container for AI projects in Azure AI Foundry. Hubs promote collaboration and allow you to store information for your projects. |
-| AI Services | The resource enabling access to the flagship models in Azure AI model catalog. In this tutorial, a new account is created, but Azure AI services resources can be shared across multiple hubs and projects. Hubs uses a connection to the resource to have access to the model deployments available there. To learn how you can create connections between projects and Azure AI Services to consume Azure AI model inference you can read [Connect your AI project](../../how-to/configure-project-connection.md). |
+| AI Services | The resource enabling access to the flagship models in the Azure AI model catalog. In this tutorial, a new account is created, but Azure AI services resources can be shared across multiple hubs and projects. Hubs use a connection to the resource to access the model deployments available there. To learn how to create connections between projects and Azure AI Services to consume Azure AI model inference, see [Connect your AI project](../../how-to/configure-project-connection.md). |

-8. Select **Create**. The resources creation process will start.
+8. Select **Create**. The resource creation process starts.

-9. Once completed your project is ready to be configured.
+9. Once completed, your project is ready to be configured.

 10. Azure AI model inference is a Preview feature that needs to be turned on in Azure AI Foundry. On the top navigation bar, near the right corner, select the **Preview features** icon. A contextual blade shows up at the right of the screen.

 11. Turn the feature **Deploy models to Azure AI model inference service** on.

-    :::image type="content" source="../../media/quickstart-ai-project/ai-project-inference-endpoint.gif" alt-text="An animation showing how to turn on the Deploy models to Azure AI model inference service feature in Azure AI Foundry portal." lightbox="../../media/quickstart-ai-project/ai-project-inference-endpoint.gif":::
+    :::image type="content" source="../../media/quickstart-ai-project/ai-project-inference-endpoint.gif" alt-text="An animation showing how to turn on the Azure AI model inference service deploy models feature in Azure AI Foundry portal." lightbox="../../media/quickstart-ai-project/ai-project-inference-endpoint.gif":::
articles/ai-foundry/model-inference/includes/github/add-model-deployments.md (+6 -6)
@@ -7,33 +7,33 @@ ms.author: fasantia
 author: santiagxf
 ---

-As opposite to GitHub Models where all the models are already configured, the Azure AI Services resource allow you to control which models are available in your endpoint and under which configuration.
+As opposed to GitHub Models, where all the models are already configured, the Azure AI Services resource allows you to control which models are available in your endpoint and under which configuration.

 You can add all the models you need in the endpoint by using [Azure AI Studio for GitHub](https://ai.azure.com/github). In the following example, we add a `Mistral-Large` model in the service:

 1. Go to **Model catalog** section in [Azure AI Studio for GitHub](https://ai.azure.com/github).

-2. Scroll to the model you are interested in and select it.
+2. Scroll to the model you're interested in and select it.

 :::image type="content" source="../../media/add-model-deployments/models-search-and-deploy.gif" alt-text="An animation showing how to search models in the model catalog and select one for viewing its details." lightbox="../../media/add-model-deployments/models-search-and-deploy.gif":::

 3. You can review the details of the model in the model card.

 4. Select **Deploy**.

-5. For models providers that require additional terms of contract, you will be asked to accept those terms. This is the case for Mistral models for instance. Accept the terms on those cases by selecting **Subscribe and deploy**.
+5. For model providers that require more contract terms, you are asked to accept those terms. This is the case for Mistral models, for instance. In those cases, accept the terms by selecting **Subscribe and deploy**.

 :::image type="content" source="../../media/add-model-deployments/models-deploy-agree.png" alt-text="A screenshot showing how to agree to the terms and conditions of a Mistral-Large model." lightbox="../../media/add-model-deployments/models-deploy-agree.png":::

-6. You can configure the deployment settings at this time. By default, the deployment receives the name of the model you are deploying. The deployment name is used in the `model` parameter for request to route to this particular model deployment. This allow you to also configure specific names for your models when you attach specific configurations. For instance `o1-preview-safe` for a model with a strict content safety content filter.
+6. You can configure the deployment settings at this time. By default, the deployment receives the name of the model you're deploying. The deployment name is used in the `model` parameter for requests to route to this particular model deployment. This also allows you to configure specific names for your models when you attach specific configurations, for instance `o1-preview-safe` for a model with a strict content safety filter.

 > [!TIP]
-> Each model may support different deployments types, providing different data residency or throughput guarantees. See [deployment types](../../concepts/deployment-types.md) for more details.
+> Each model can support different deployment types, providing different data residency or throughput guarantees. See [deployment types](../../concepts/deployment-types.md) for more details.

 7. Use the **Customize** option if you need to change settings like [content filter](../../concepts/content-filter.md).

 :::image type="content" source="../../media/add-model-deployments/models-deploy-customize.png" alt-text="A screenshot showing how to customize the deployment if needed." lightbox="../../media/add-model-deployments/models-deploy-customize.png":::

 8. Select **Deploy**.

-9. Once the deployment completes, the new model will be listed in the page and it's ready to be used.
+9. Once the deployment completes, the new model is listed on the page and ready to be used.
articles/ai-foundry/model-inference/includes/use-chat-completions/rest.md (+1 -1)
@@ -603,7 +603,7 @@ The response is as follows, where you can see the model's usage statistics:

         "index": 0,
         "message": {
             "role": "assistant",
-            "content": "The chart illustrates that larger models tend to perform better in quality, as indicated by their size in billions of parameters. However, there are exceptions to this trend, such as Phi-3-medium and Phi-3-small, which outperform smaller models in quality. This suggests that while larger models generally have an advantage, there may be other factors at play that influence a model's performance.",
+            "content": "The chart illustrates that larger models tend to perform better in quality, as indicated by their size in billions of parameters. However, there are exceptions to this trend, such as Phi-3-medium and Phi-3-small, which outperform smaller models in quality. This suggests that while larger models generally have an advantage, there might be other factors at play that influence a model's performance.",
0 commit comments