### A model deployment

**Deployment to serverless APIs**

Phi-3.5 chat model with vision can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.

Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure Machine Learning studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](how-to-deploy-models-serverless.md).

> [!div class="nextstepaction"]
> [Deploy models as serverless API endpoints](how-to-deploy-models-serverless.md)

**Deployment to a self-hosted managed compute**

Phi-3.5 chat model with vision can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served.

> [!NOTE]
> Currently, serverless API endpoints do not support using Microsoft Entra ID for authentication.

### Get the model's capabilities

The `/info` route returns information about the model that is deployed to the endpoint. Return the model's information by calling the following method:
| `n` | How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. | `int` |
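Because billing counts generated tokens across all `n` choices, the expected completion cost grows roughly linearly with `n`. A minimal sketch of that accounting (the function name and token counts are illustrative, not values from the API):

```python
def billed_completion_tokens(tokens_per_choice: list[int]) -> int:
    """Sum the generated tokens across every returned choice.

    With n > 1, every choice is billed, so the total is the sum of
    all choices, not the tokens of a single completion.
    """
    return sum(tokens_per_choice)

# A request with n=3 that returns choices of 120, 95, and 110 tokens
# is billed for 325 completion tokens.
print(billed_completion_tokens([120, 95, 110]))  # 325
```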
### Apply content safety

The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.

The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
```csharp
try
{
    requestOptions = new ChatCompletionsOptions()
    {
        Messages = {
            new ChatRequestSystemMessage("You are an AI assistant that helps people find information."),
            new ChatRequestUserMessage(
                "Chopping tomatoes and cutting them into cubes or wedges are great ways to practice your knife skills."
            ),
        },
    };

    response = client.Complete(requestOptions);
    Console.WriteLine(response.Value.Content);
}
catch (RequestFailedException ex)
{
    if (ex.ErrorCode == "content_filter")
    {
        Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}");
    }
    else
    {
        throw;
    }
}
```
> [!TIP]
> To learn more about how you can configure and control Azure AI content safety settings, check the [Azure AI content safety documentation](https://aka.ms/azureaicontentsafety).

> [!NOTE]
> Azure AI content safety is only available for models deployed as serverless API endpoints.
## Use chat completions with images

Phi-3.5-vision-Instruct can reason across text and images and generate text completions based on both kinds of input. In this section, you explore the capabilities of Phi-3.5-vision-Instruct for vision in a chat fashion:
### A model deployment

**Deployment to serverless APIs**

Phi-3.5 chat model with vision can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.

Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure Machine Learning studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](how-to-deploy-models-serverless.md).

> [!div class="nextstepaction"]
> [Deploy models as serverless API endpoints](how-to-deploy-models-serverless.md)

**Deployment to a self-hosted managed compute**

Phi-3.5 chat model with vision can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served.
> [!NOTE]
> Currently, serverless API endpoints do not support using Microsoft Entra ID for authentication.

### Get the model's capabilities

The `/info` route returns information about the model that is deployed to the endpoint. Return the model's information by calling the following method:
| `n` | How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. | `int` |
### Apply content safety

The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.

The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
```csharp
try
{
    requestOptions = new ChatCompletionsOptions()
    {
        Messages = {
            new ChatRequestSystemMessage("You are an AI assistant that helps people find information."),
            new ChatRequestUserMessage(
                "Chopping tomatoes and cutting them into cubes or wedges are great ways to practice your knife skills."
            ),
        },
    };

    response = client.Complete(requestOptions);
    Console.WriteLine(response.Value.Content);
}
catch (RequestFailedException ex)
{
    if (ex.ErrorCode == "content_filter")
    {
        Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}");
    }
    else
    {
        throw;
    }
}
```
> [!TIP]
> To learn more about how you can configure and control Azure AI content safety settings, check the [Azure AI content safety documentation](https://aka.ms/azureaicontentsafety).

> [!NOTE]
> Azure AI content safety is only available for models deployed as serverless API endpoints.
## Use chat completions with images

Phi-3.5-vision-Instruct can reason across text and images and generate text completions based on both kinds of input. In this section, you explore the capabilities of Phi-3.5-vision-Instruct for vision in a chat fashion:
### A model deployment

**Deployment to serverless APIs**

Phi-3.5 chat model with vision can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.

Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure Machine Learning studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](how-to-deploy-models-serverless.md).

> [!div class="nextstepaction"]
> [Deploy models as serverless API endpoints](how-to-deploy-models-serverless.md)

**Deployment to a self-hosted managed compute**

Phi-3.5 chat model with vision can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served.
> [!NOTE]
> Currently, serverless API endpoints do not support using Microsoft Entra ID for authentication.

### Get the model's capabilities

The `/info` route returns information about the model that is deployed to the endpoint. Return the model's information by calling the following method:
| `n` | How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. | `int` |
### Apply content safety

The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.

The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
```csharp
try
{
    requestOptions = new ChatCompletionsOptions()
    {
        Messages = {
            new ChatRequestSystemMessage("You are an AI assistant that helps people find information."),
            new ChatRequestUserMessage(
                "Chopping tomatoes and cutting them into cubes or wedges are great ways to practice your knife skills."
            ),
        },
    };

    response = client.Complete(requestOptions);
    Console.WriteLine(response.Value.Content);
}
catch (RequestFailedException ex)
{
    if (ex.ErrorCode == "content_filter")
    {
        Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}");
    }
    else
    {
        throw;
    }
}
```
> [!TIP]
> To learn more about how you can configure and control Azure AI content safety settings, check the [Azure AI content safety documentation](https://aka.ms/azureaicontentsafety).

> [!NOTE]
> Azure AI content safety is only available for models deployed as serverless API endpoints.
## Use chat completions with images

Phi-3.5-vision-Instruct can reason across text and images and generate text completions based on both kinds of input. In this section, you explore the capabilities of Phi-3.5-vision-Instruct for vision in a chat fashion:
### A model deployment

**Deployment to serverless APIs**

Phi-3.5 chat model with vision can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.

Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure Machine Learning studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](how-to-deploy-models-serverless.md).

> [!div class="nextstepaction"]
> [Deploy models as serverless API endpoints](how-to-deploy-models-serverless.md)

**Deployment to a self-hosted managed compute**

Phi-3.5 chat model with vision can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served.
When you deploy the model to a self-hosted online endpoint with **Microsoft Entra ID** support, you can use the following code snippet to create a client.

> [!NOTE]
> Currently, serverless API endpoints do not support using Microsoft Entra ID for authentication.

### Get the model's capabilities

The `/info` route returns information about the model that is deployed to the endpoint. Return the model's information by calling the following method:
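As a rough illustration of what `/info` returns, a client can read the deployed model's metadata before sending requests. The field values below are assumptions for the sketch, not output from a live endpoint:

```python
# Illustrative /info response body; the values are assumptions,
# not taken from a live deployment.
info = {
    "model_name": "Phi-3.5-vision-instruct",
    "model_type": "chat-completion",
    "model_provider_name": "Microsoft",
}

# A client can branch on the model type before building requests.
assert info["model_type"] == "chat-completion"
print(info["model_name"])
```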
| `n` | How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. | `int` |
### Apply content safety

The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.

The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
```json
{
    "messages": [
        {
            "role": "system",
            "content": "You are an AI assistant that helps people find information."
        },
        {
            "role": "user",
            "content": "Chopping tomatoes and cutting them into cubes or wedges are great ways to practice your knife skills."
        }
    ]
}
```
```json
{
    "error": {
        "message": "The response was filtered due to the prompt triggering Microsoft's content management policy. Please modify your prompt and retry.",
        "type": null,
        "param": "prompt",
        "code": "content_filter",
        "status": 400
    }
}
```
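A client can branch on the `code` field of that error body to distinguish content-safety blocks from other failures. A minimal sketch (the helper name is illustrative; the payload mirrors the error example above):

```python
def is_content_filtered(error_payload: dict) -> bool:
    """Return True when an error body indicates an Azure AI content safety block."""
    err = error_payload.get("error") or {}
    # Content safety blocks surface as HTTP 400 with code "content_filter".
    return err.get("code") == "content_filter" and err.get("status") == 400

blocked = {
    "error": {
        "message": "The response was filtered due to the prompt triggering Microsoft's content management policy. Please modify your prompt and retry.",
        "type": None,
        "param": "prompt",
        "code": "content_filter",
        "status": 400,
    }
}

print(is_content_filtered(blocked))  # True
```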
> [!TIP]
> To learn more about how you can configure and control Azure AI content safety settings, check the [Azure AI content safety documentation](https://aka.ms/azureaicontentsafety).

> [!NOTE]
> Azure AI content safety is only available for models deployed as serverless API endpoints.
## Use chat completions with images

Phi-3.5-vision-Instruct can reason across text and images and generate text completions based on both kinds of input. In this section, you explore the capabilities of Phi-3.5-vision-Instruct for vision in a chat fashion:
## Cost and quota considerations for Phi-3 family models deployed as serverless API endpoints

Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
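A back-of-the-envelope reading of those limits: whichever cap you hit first governs throughput. The sketch below just restates the stated per-minute limits as arithmetic:

```python
TOKENS_PER_MINUTE = 200_000
REQUESTS_PER_MINUTE = 1_000

# Saturating the request cap leaves an average budget of
# 200,000 / 1,000 = 200 tokens (prompt plus completion) per request.
avg_tokens_per_request = TOKENS_PER_MINUTE // REQUESTS_PER_MINUTE
print(avg_tokens_per_request)  # 200

# Larger calls hit the token cap first: at roughly 4,000 tokens per call,
# only 50 requests fit in a minute, well under the request cap.
calls_per_minute_at_4k = TOKENS_PER_MINUTE // 4_000
print(calls_per_minute_at_4k)  # 50
```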
## Cost and quota considerations for Phi-3 family models deployed to managed compute
Phi-3 family models deployed to managed compute are billed based on core hours of the associated compute instance. The cost of the compute instance is determined by the size of the instance, the number of instances running, and the run duration.