
Commit 284f376

Commit message: fixing conflict
2 parents: 9c1c838 + 1febf05

922 files changed, +6513 −4148 lines


.openpublishing.publish.config.json

Lines changed: 1 addition & 0 deletions
@@ -1330,6 +1330,7 @@
   "articles/spatial-anchors/.openpublishing.redirection.spatial-anchors.json",
   "articles/spring-apps/.openpublishing.redirection.spring-apps.json",
   "articles/static-web-apps/.openpublishing.redirection.static-web-apps.json",
+  "articles/storage/.openpublishing.redirection.storage.json",
   "articles/storsimple/.openpublishing.redirection.storsimple.json",
   "articles/stream-analytics/.openpublishing.redirection.stream-analytics.json",
   "articles/synapse-analytics/.openpublishing.redirection.synapse-analytics.json",

articles/advisor/advisor-alerts-bicep.md

Lines changed: 0 additions & 2 deletions
@@ -1,9 +1,7 @@
 ---
 title: Create Azure Advisor alerts for new recommendations using Bicep
 description: Learn how to set up an alert for new recommendations from Azure Advisor using Bicep.
-author: orspod
 ms.topic: quickstart
-ms.author: orspodek
 ms.custom: subject-armqs, mode-arm, devx-track-bicep
 ms.date: 04/26/2022
 ---

articles/advisor/advisor-azure-resource-graph.md

Lines changed: 0 additions & 3 deletions
@@ -1,11 +1,8 @@
 ---
 title: Advisor data in Azure Resource Graph
 description: Make queries for Advisor data in Azure Resource Graph
-author: orspod
 ms.topic: article
 ms.date: 03/12/2020
-ms.author: orspodek
-
 ---
 
 # Query for Advisor data in Resource Graph Explorer (Azure Resource Graph)

articles/advisor/advisor-quick-fix.md

Lines changed: 0 additions & 3 deletions
@@ -1,11 +1,8 @@
 ---
 title: Quick Fix remediation for Advisor recommendations
 description: Perform bulk remediation using Quick Fix in Advisor
-author: orspod
 ms.topic: article
 ms.date: 03/13/2020
-ms.author: orspodek
-
 ---
 
 # Quick Fix remediation for Advisor

articles/advisor/advisor-recommendations-digest.md

Lines changed: 0 additions & 4 deletions
@@ -1,12 +1,8 @@
 ---
-
 title: Recommendation digest for Azure Advisor
 description: Get periodic summary for your active recommendations
-author: orspod
 ms.topic: article
 ms.date: 03/16/2020
-ms.author: orspodek
-
 ---
 
 # Configure periodic summary for recommendations

articles/ai-services/cognitive-services-virtual-networks.md

Lines changed: 13 additions & 3 deletions
@@ -8,7 +8,7 @@ manager: nitinme
 ms.service: azure-ai-services
 ms.custom: devx-track-azurepowershell, devx-track-azurecli
 ms.topic: how-to
-ms.date: 02/13/2024
+ms.date: 03/25/2024
 ms.author: aahi
 ---
 
@@ -595,11 +595,21 @@ curl -i -X PATCH https://management.azure.com$rid?api-version=2023-10-01-preview
 '
 ```
 
-To revoke the exception, set `networkAcls.bypass` to `None`.
-
 > [!NOTE]
 > The trusted service feature is only available using the command line described above, and cannot be done using the Azure portal.
 
+To revoke the exception, set `networkAcls.bypass` to `None`.
+
+To verify that the trusted service has been enabled from the Azure portal:
+
+1. Use the **JSON View** from the Azure OpenAI resource overview page.
+
+   :::image type="content" source="media/vnet/azure-portal-json-view.png" alt-text="A screenshot showing the JSON view option for resources in the Azure portal." lightbox="media/vnet/azure-portal-json-view.png":::
+
+1. Choose your latest API version under **API versions**. Only the latest API version is supported, `2023-10-01-preview`.
+
+   :::image type="content" source="media/vnet/virtual-network-trusted-service.png" alt-text="A screenshot showing the trusted service is enabled." lightbox="media/vnet/virtual-network-trusted-service.png":::
+
 ### Pricing
 
 For pricing details, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link).
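The revoke step above ("set `networkAcls.bypass` to `None`") can be sketched as a small helper that builds the PATCH body. This is an illustrative sketch, not the article's method: the `properties.networkAcls.bypass` property path mirrors the management-plane payload style used with the `2023-10-01-preview` API and should be verified against your API version.

```python
import json

def bypass_patch_body(bypass: str) -> str:
    """Build the JSON body for the management-plane PATCH that grants or
    revokes the trusted-service exception. "AzureServices" grants it;
    "None" revokes it. (Hypothetical helper; property path is assumed.)"""
    if bypass not in ("None", "AzureServices"):
        raise ValueError("bypass must be 'None' or 'AzureServices'")
    return json.dumps({"properties": {"networkAcls": {"bypass": bypass}}})

# Revoking the exception means setting networkAcls.bypass to None:
print(bypass_patch_body("None"))
```

The body would be sent with the same `curl -i -X PATCH` call shown in the hunk above.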

articles/ai-services/computer-vision/how-to/background-removal.md

Lines changed: 2 additions & 2 deletions
@@ -70,7 +70,7 @@ Where we used this helper function to read the value of an environment variable:
 #### [REST API](#tab/rest)
 -->
 
-Authentication is done by adding the HTTP request header **Ocp-Apim-Subscription-Key** and setting it to your vision key. The call is made to the URL `https://<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview`, where `<endpoint>` is your unique Computer Vision endpoint URL. See [Select a mode ](./background-removal.md#select-a-mode) section for another query string you add to this URL.
+Authentication is done by adding the HTTP request header **Ocp-Apim-Subscription-Key** and setting it to your vision key. The call is made to the URL `<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview`, where `<endpoint>` is your unique Computer Vision endpoint URL. See [Select a mode ](./background-removal.md#select-a-mode) section for another query string you add to this URL.
 
 
 ## Select the image to analyze
@@ -164,7 +164,7 @@ Set the query string *mode* to one of these two values. This query string is man
 | `mode` | `backgroundRemoval` | Outputs an image of the detected foreground object with a transparent background. |
 | `mode` | `foregroundMatting` | Outputs a gray-scale alpha matte image showing the opacity of the detected foreground object. |
 
-A populated URL for backgroundRemoval would look like this: `https://<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval`
+A populated URL for backgroundRemoval would look like this: `<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval`
 
 
 ## Get results from the service
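As a sanity check on the URL format this file documents, here is a hypothetical helper (the function name is mine, not from the article) that composes the segmentation URL from an endpoint and a `mode` value:

```python
def segment_url(endpoint: str, mode: str) -> str:
    """Compose the Background Removal REST URL described in the article:
    <endpoint>/computervision/imageanalysis:segment with the mandatory
    api-version and mode query strings. (Illustrative sketch only.)"""
    if mode not in ("backgroundRemoval", "foregroundMatting"):
        raise ValueError("mode must be backgroundRemoval or foregroundMatting")
    return (f"{endpoint}/computervision/imageanalysis:segment"
            f"?api-version=2023-02-01-preview&mode={mode}")

print(segment_url("<endpoint>", "backgroundRemoval"))
```

With `mode=backgroundRemoval` this reproduces the populated URL shown in the diff.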

articles/ai-services/computer-vision/how-to/call-analyze-image.md

Lines changed: 3 additions & 3 deletions
@@ -86,7 +86,7 @@ You can specify which features you want to use by setting the URL query paramete
 
 A populated URL might look like this:
 
-`https://<endpoint>/vision/v3.2/analyze?visualFeatures=Tags`
+`<endpoint>/vision/v3.2/analyze?visualFeatures=Tags`
 
 #### [C#](#tab/csharp)
 
@@ -135,7 +135,7 @@ The following URL query parameter specifies the language. The default value is `
 
 A populated URL might look like this:
 
-`https://<endpoint>/vision/v3.2/analyze?visualFeatures=Tags&language=en`
+`<endpoint>/vision/v3.2/analyze?visualFeatures=Tags&language=en`
 
 #### [C#](#tab/csharp)
 
@@ -183,7 +183,7 @@ This section shows you how to parse the results of the API call. It includes the
 > [!NOTE]
 > **Scoped API calls**
 >
-> Some of the features in Image Analysis can be called directly as well as through the Analyze API call. For example, you can do a scoped analysis of only image tags by making a request to `https://<endpoint>/vision/v3.2/tag` (or to the corresponding method in the SDK). See the [reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) for other features that can be called separately.
+> Some of the features in Image Analysis can be called directly as well as through the Analyze API call. For example, you can do a scoped analysis of only image tags by making a request to `<endpoint>/vision/v3.2/tag` (or to the corresponding method in the SDK). See the [reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) for other features that can be called separately.
 
 #### [REST](#tab/rest)
 
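The Analyze-URL pattern in the hunks above (a `visualFeatures` list plus an optional `language` parameter) can be sketched as a small builder. This is a hypothetical helper for illustration; the path and parameter names come from the diff.

```python
def analyze_url(endpoint, visual_features, language=None):
    """Compose the v3.2 Analyze URL from the article: visualFeatures is a
    comma-joined list; language defaults to en on the service side, so it
    is only appended when given. (Illustrative sketch only.)"""
    url = f"{endpoint}/vision/v3.2/analyze?visualFeatures={','.join(visual_features)}"
    if language:
        url += f"&language={language}"
    return url

print(analyze_url("<endpoint>", ["Tags"], language="en"))
```

With `["Tags"]` and `language="en"` this reproduces the two populated URLs shown in the diff.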

articles/ai-services/computer-vision/how-to/image-retrieval.md

Lines changed: 4 additions & 2 deletions
@@ -17,6 +17,8 @@ ms.custom: references_regions
 
 The Multimodal embeddings APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. This allows the user to search a set of images using text, without the need to use image tags or other metadata. Semantic closeness often produces better results in search.
 
+The `2024-02-01` API includes a multi-lingual model that supports text search in 102 languages. The original English-only model is still available, but it cannot be combined with the new model in the same search index. If you vectorized text and images using the English-only model, these vectors won't be compatible with multi-lingual text and image vectors.
+
 > [!IMPORTANT]
 > These APIs are only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
 
@@ -46,7 +48,7 @@ The `retrieval:vectorizeImage` API lets you convert an image's data to a vector.
 1. Optionally, change the `model-version` parameter to an older version. `2022-04-11` is the legacy model that supports only English text. Images and text that are vectorized with a certain model aren't compatible with other models, so be sure to use the same model for both.
 
 ```bash
-curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeImage?api-version=2023-02-01-preview&model-version=2023-04-15" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
+curl.exe -v -X POST "<endpoint>/computervision/retrieval:vectorizeImage?api-version=2024-02-01&model-version=2023-04-15" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
 {
 'url':'https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png'
 }"
@@ -73,7 +75,7 @@ The `retrieval:vectorizeText` API lets you convert a text string to a vector. To
 1. Optionally, change the `model-version` parameter to an older version. `2022-04-11` is the legacy model that supports only English text. Images and text that are vectorized with a certain model aren't compatible with other models, so be sure to use the same model for both.
 
 ```bash
-curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeText?api-version=2023-02-01-preview&model-version=2023-04-15" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
+curl.exe -v -X POST "<endpoint>/computervision/retrieval:vectorizeText?api-version=2024-02-01&model-version=2023-04-15" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
 {
 'text':'cat jumping'
 }"

articles/ai-services/computer-vision/how-to/model-customization.md

Lines changed: 4 additions & 5 deletions
@@ -235,7 +235,6 @@ prediction = prediction_client.predict(model_name, img, content_type='image/png'
 logging.info(f'Prediction: {prediction}')
 ```
 
-<!-- nbend -->
 -->
 
 #### [Vision Studio](#tab/studio)
@@ -369,7 +368,7 @@ The `datasets/<dataset-name>` API lets you create a new dataset object that refe
 1. In the request body, set the `"annotationFileUris"` array to an array of string(s) that show the URI location(s) of your COCO file(s) in blob storage.
 
 ```bash
-curl.exe -v -X PUT "https://<endpoint>/computervision/datasets/<dataset-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
+curl.exe -v -X PUT "<endpoint>/computervision/datasets/<dataset-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
 {
 'annotationKind':'imageClassification',
 'annotationFileUris':['<URI>']
@@ -387,7 +386,7 @@ The `models/<model-name>` API lets you create a new custom model and associate i
 1. In the request body, set `"modelKind"` to either `"Generic-Classifier"` or `"Generic-Detector"`, depending on your project.
 
 ```bash
-curl.exe -v -X PUT "https://<endpoint>/computervision/models/<model-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
+curl.exe -v -X PUT "<endpoint>/computervision/models/<model-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
 {
 'trainingParameters': {
 'trainingDatasetName':'<dataset-name>',
@@ -408,7 +407,7 @@ The `models/<model-name>/evaluations/<eval-name>` API evaluates the performance
 1. In the request body, set `"testDatasetName"` to the name of the dataset you want to use for evaluation. If you don't have a dedicated dataset, you can use the same dataset you used for training.
 
 ```bash
-curl.exe -v -X PUT "https://<endpoint>/computervision/models/<model-name>/evaluations/<eval-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
+curl.exe -v -X PUT "<endpoint>/computervision/models/<model-name>/evaluations/<eval-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
 {
 'evaluationParameters':{
 'testDatasetName':'<dataset-name>'
@@ -431,7 +430,7 @@ The `imageanalysis:analyze` API does ordinary Image Analysis operations. By spec
 1. In the request body, set `"url"` to the URL of a remote image you want to test your model on.
 
 ```bash
-curl.exe -v -X POST "https://<endpoint>/computervision/imageanalysis:analyze?model-name=<model-name>&api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
+curl.exe -v -X POST "<endpoint>/computervision/imageanalysis:analyze?model-name=<model-name>&api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
 {'url':'https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png'
 }"
 ```
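The dataset-creation step above takes a small JSON body; here is a hypothetical builder for it (the function name is mine, and only the two fields shown in the curl example are included -- the real API accepts more):

```python
import json

def dataset_body(annotation_kind, annotation_file_uris):
    """Build the request body for the datasets/<dataset-name> PUT call
    shown above: an annotationKind plus the blob-storage URIs of the
    COCO annotation file(s). (Illustrative sketch only.)"""
    return json.dumps({
        "annotationKind": annotation_kind,
        "annotationFileUris": list(annotation_file_uris),
    })

print(dataset_body("imageClassification", ["<URI>"]))
```

The resulting string is what the `--data-ascii` payload in the curl example carries.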
