
Commit b832eb7

Learn Build Service GitHub App authored and committed
Merging changes synced from https://github.com/MicrosoftDocs/azure-docs-pr (branch live)
2 parents 5697c57 + 9e2e596 commit b832eb7

131 files changed: +3766 additions, −615 deletions


.openpublishing.redirection.json

Lines changed: 6 additions & 0 deletions

````diff
@@ -2770,6 +2770,11 @@
       "redirect_url": "/azure/container-registry/tutorial-customer-managed-keys",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/container-registry/container-registry-enable-conditional-access-policy.md",
+      "redirect_url": "/azure/container-registry/container-registry-configure-conditional-access.md",
+      "redirect_document_id": false
+    },
     {
       "source_path": "articles/site-recovery/vmware-physical-secondary-disaster-recovery.md",
       "redirect_url": "/azure/site-recovery/vmware-physical-secondary-architecture",
@@ -25794,5 +25799,6 @@
       "redirect_url": "https://azure.microsoft.com/updates/preview-ai-toolchain-operator-addon-for-aks/",
       "redirect_document_id": false
     }
+
   ]
 }
````

articles/ai-services/computer-vision/concept-object-detection-40.md

Lines changed: 3 additions & 0 deletions

````diff
@@ -24,6 +24,9 @@ Try out the capabilities of object detection quickly and easily in your browser
 > [!div class="nextstepaction"]
 > [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
 
+> [!TIP]
+> You can use the Object detection feature through the [Azure OpenAI](/azure/ai-services/openai/overview) service. The **GPT-4 Turbo with Vision** model lets you chat with an AI assistant that can analyze the images you share, and the Vision Enhancement option uses Image Analysis to give the AI assistant more details (readable text and object locations) about the image. For more information, see the [GPT-4 Turbo with Vision quickstart](/azure/ai-services/openai/gpt-v-quickstart).
+
 ## Object detection example
 
 The following JSON response illustrates what the Analysis 4.0 API returns when detecting objects in the example image.
````
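
The section above points at the JSON response that the Analyze Image 4.0 API returns for object detection. As a rough illustration of how a client might walk such a response, here is a small Python sketch; the payload shape is modeled on the 4.0 `objectsResult` structure and the sample values are made up, not real service output.

```python
# Illustrative sample payload modeled on an Analyze Image 4.0 object
# detection response; field names and values are assumptions for this sketch.
sample_response = {
    "objectsResult": {
        "values": [
            {
                "boundingBox": {"x": 655, "y": 83, "w": 263, "h": 365},
                "tags": [{"name": "kitchen appliance", "confidence": 0.501}],
            }
        ]
    }
}

def list_detected_objects(response: dict) -> list:
    """Flatten the objectsResult section into (name, confidence, box) tuples."""
    results = []
    for obj in response.get("objectsResult", {}).get("values", []):
        box = obj["boundingBox"]
        for tag in obj.get("tags", []):
            results.append((tag["name"], tag["confidence"], box))
    return results

for name, confidence, box in list_detected_objects(sample_response):
    print(f"{name} ({confidence:.3f}) at x={box['x']}, y={box['y']}")
```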

articles/ai-services/computer-vision/concept-ocr.md

Lines changed: 3 additions & 0 deletions

````diff
@@ -24,6 +24,9 @@ OCR traditionally started as a machine-learning-based technique for extracting t
 
 The new Computer Vision Image Analysis 4.0 REST API offers the ability to extract printed or handwritten text from images in a unified performance-enhanced synchronous API that makes it easy to get all image insights including OCR results in a single API operation. The Read OCR engine is built on top of multiple deep learning models supported by universal script-based models for [global language support](./language-support.md).
 
+> [!TIP]
+> You can use the OCR feature through the [Azure OpenAI](/azure/ai-services/openai/overview) service. The **GPT-4 Turbo with Vision** model lets you chat with an AI assistant that can analyze the images you share, and the Vision Enhancement option uses Image Analysis to give the AI assistant more details (readable text and object locations) about the image. For more information, see the [GPT-4 Turbo with Vision quickstart](/azure/ai-services/openai/gpt-v-quickstart).
+
 ## Text extraction example
 
 The following JSON response illustrates what the Image Analysis 4.0 API returns when extracting text from the given image.
````
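
The section stresses that OCR results now come back from a single synchronous Image Analysis call. A minimal Python sketch of composing that call's URL follows; the `2023-10-01` API version and the endpoint host are assumptions for illustration, and the request itself would be a POST with your key in the `Ocp-Apim-Subscription-Key` header and a JSON body such as `{"url": "<image-url>"}`.

```python
# Sketch: compose the single Analyze Image call with OCR requested via the
# "read" feature. The api-version and endpoint host here are illustrative.
from urllib.parse import urlencode

def build_read_url(endpoint: str, api_version: str = "2023-10-01") -> str:
    """Return the imageanalysis:analyze URL with OCR (features=read) enabled."""
    query = urlencode({"api-version": api_version, "features": "read"})
    return f"{endpoint}/computervision/imageanalysis:analyze?{query}"

url = build_read_url("https://my-resource.cognitiveservices.azure.com")
print(url)
```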

articles/ai-services/computer-vision/how-to/background-removal.md

Lines changed: 9 additions & 4 deletions

````diff
@@ -38,6 +38,7 @@ To authenticate against the Image Analysis service, you need an Azure AI Vision
 
 The SDK example assumes that you defined the environment variables `VISION_KEY` and `VISION_ENDPOINT` with your key and endpoint.
 
+<!--
 #### [C#](#tab/csharp)
 
 Start by creating a [VisionServiceOptions](/dotnet/api/azure.ai.vision.common.visionserviceoptions) object using one of the constructors. For example:
@@ -67,15 +68,16 @@ Where we used this helper function to read the value of an environment variable:
 [!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/how-to/how-to.cpp?name=get_env_var)]
 
 #### [REST API](#tab/rest)
+-->
 
 Authentication is done by adding the HTTP request header **Ocp-Apim-Subscription-Key** and setting it to your vision key. The call is made to the URL `https://<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview`, where `<endpoint>` is your unique Azure AI Vision endpoint URL. See the [Select a mode](./background-removal.md#select-a-mode) section for another query string you add to this URL.
 
----
 
 ## Select the image to analyze
 
 The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features.
 
+<!--
 #### [C#](#tab/csharp)
 
 Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource.FromUrl](/dotnet/api/azure.ai.vision.common.visionsource.fromurl).
@@ -117,15 +119,17 @@ Create a new **VisionSource** object from the URL of the image you want to analy
 > You can also analyze a local image by passing in the full-path image file name (see [VisionSource::FromFile](/cpp/cognitive-services/vision/input-visionsource#fromfile)), or by copying the image into the SDK's input buffer (see [VisionSource::FromImageSourceBuffer](/cpp/cognitive-services/vision/input-visionsource#fromimagesourcebuffer)). For more details, see [Call the Analyze API](./call-analyze-image-40.md?pivots=programming-language-cpp#select-the-image-to-analyze).
 
 #### [REST API](#tab/rest)
+-->
 
 When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"https://learn.microsoft.com/azure/ai-services/computer-vision/images/windows-kitchen.jpg"}`. The **Content-Type** should be `application/json`.
 
 To analyze a local image, you'd put the binary image data in the HTTP request body. The **Content-Type** should be `application/octet-stream` or `multipart/form-data`.
 
----
+
 
 ## Select a mode
 
+<!--
 ### [C#](#tab/csharp)
 
 Create a new [ImageAnalysisOptions](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and set the property [SegmentationMode](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.segmentationmode#azure-ai-vision-imageanalysis-imageanalysisoptions-segmentationmode). This property must be set if you want to do segmentation. See [ImageSegmentationMode](/dotnet/api/azure.ai.vision.imageanalysis.imagesegmentationmode) for supported values.
@@ -151,6 +155,7 @@ Create a new [ImageAnalysisOptions](/cpp/cognitive-services/vision/imageanalysis
 [!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/segmentation/segmentation.cpp?name=segmentation_mode)]
 
 ### [REST](#tab/rest)
+-->
 
 Set the query string **mode** to one of these two values. This query string is mandatory if you want to do image segmentation.
 
@@ -161,12 +166,12 @@ Set the query string **mode** to one of these two values. This query string is ma
 
 A populated URL for backgroundRemoval would look like this: `https://<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval`
 
----
 
 ## Get results from the service
 
 This section shows you how to make the API call and parse the results.
 
+<!--
 #### [C#](#tab/csharp)
 
 The following code calls the Image Analysis API and saves the resulting segmented image to a file named **output.png**. It also displays some metadata about the segmented image.
@@ -196,10 +201,10 @@ The following code calls the Image Analysis API and saves the resulting segmente
 [!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/segmentation/segmentation.cpp?name=segment)]
 
 #### [REST](#tab/rest)
+-->
 
 The service returns a `200` HTTP response on success with `Content-Type: image/png`, and the body contains the returned PNG image in the form of a binary stream.
 
----
 
 As an example, assume background removal is run on the following image:
 
````
articles/ai-services/computer-vision/how-to/model-customization.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -473,4 +473,4 @@ The API call returns an **ImageAnalysisResult** JSON object, which contains all
 In this guide, you created and trained a custom image classification model using Image Analysis. Next, learn more about the Analyze Image 4.0 API, so you can call your custom model from an application using REST or library SDKs.
 
 * See the [Model customization concepts](../concept-model-customization.md) guide for a broad overview of this feature and a list of frequently asked questions.
-* [Call the Analyze Image API](./call-analyze-image-40.md). Note the sections [Set model name when using a custom model](./call-analyze-image-40.md#set-model-name-when-using-a-custom-model) and [Get results using custom model](./call-analyze-image-40.md#get-results-using-custom-model).
+* [Call the Analyze Image API](./call-analyze-image-40.md). <!--Note the sections [Set model name when using a custom model](./call-analyze-image-40.md#set-model-name-when-using-a-custom-model) and [Get results using custom model](./call-analyze-image-40.md#get-results-using-custom-model).-->
````

articles/ai-services/computer-vision/how-to/shelf-model-customization.md

Lines changed: 1 addition & 2 deletions

````diff
@@ -46,15 +46,14 @@ When your custom model is trained and ready (you've completed the steps in the [
 The API call will look like this:
 
 ```bash
-curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/productrecognition/ms-pretrained-product-detection/models/<your_model_name>/runs/<your_run_name>?api-version=2023-04-01-preview" -d "{
+curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/productrecognition/ms-pretrained-product-detection/runs/<your_run_name>?api-version=2023-04-01-preview" -d "{
 'url':'<your_url_string>'
 }"
 ```
 
 1. Make the following changes in the command where needed:
    1. Replace the `<subscriptionKey>` with your Vision resource key.
    1. Replace the `<endpoint>` with your Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.
-   1. Replace the `<your_model_name>` with your unique custom model name. This will be the name of the customized model you have trained with your own data. For example, `.../models/mymodel1/runs/...`
    2. Replace the `<your_run_name>` with your unique test run name for the task queue. It is an async API task queue name for you to be able to retrieve the API response later. For example, `.../runs/test1?api-version...`
    1. Replace the `<your_url_string>` contents with the blob URL of the image
 1. Open a command prompt window.
````
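
The change above drops the `/models/<your_model_name>` segment from the URL, so only the run name varies per call. A small Python sketch of assembling the updated URL (endpoint and run name are illustrative placeholders):

```python
# Sketch: build the updated product-recognition run URL from the diff above;
# note there is no /models/<model_name> path segment anymore.
def build_run_url(endpoint: str, run_name: str) -> str:
    """Return the productrecognition run URL for the pretrained detector."""
    return (f"{endpoint}/computervision/productrecognition/"
            f"ms-pretrained-product-detection/runs/{run_name}"
            "?api-version=2023-04-01-preview")

url = build_run_url("https://my-resource.cognitiveservices.azure.com", "test1")
print(url)
```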

articles/ai-services/computer-vision/includes/how-to-guides/analyze-image-40-cpp.md

Lines changed: 4 additions & 3 deletions

````diff
@@ -80,6 +80,7 @@ Create a new [ImageAnalysisOptions](/cpp/cognitive-services/vision/imageanalysis
 [!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/how-to/how-to.cpp?name=visual_features)]
 
 
+<!--
 ### Set model name when using a custom model
 
 You can also do image analysis with a custom trained model. To create and train a model, see [Create a custom Image Analysis model](/azure/ai-services/computer-vision/how-to/model-customization). Once your model is trained, all you need is the model's name. You don't need to specify visual features if you use a custom model.
@@ -88,7 +89,7 @@ You can also do image analysis with a custom trained model. To create and train
 To use a custom model, create the [ImageAnalysisOptions](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions) object and call the [SetModelName](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setmodelname) method. You don't need to call any other methods on **ImageAnalysisOptions**. There's no need to call [SetFeatures](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setfeatures) as you do with the standard model, since your custom model already implies the visual features the service extracts.
 
 [!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/custom-model/custom-model.cpp?name=model_name)]
-
+-->
 
 ### Specify languages
 
@@ -148,7 +149,7 @@ The code uses the following helper method to display the coordinates of a boundi
 
 [!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/how-to/how-to.cpp?name=polygon_to_string)]
 
-
+<!--
 ### Get results using custom model
 
 This section shows you how to make an analysis call to the service, when using a custom model.
@@ -157,7 +158,7 @@ This section shows you how to make an analysis call to the service, when using a
 The code is similar to the standard model case. The only difference is that results from the custom model are available by calling the **GetCustomTags** and/or **GetCustomObjects** methods of the [ImageAnalysisResult](/cpp/cognitive-services/vision/imageanalysis-imageanalysisresult) object.
 
 [!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/custom-model/custom-model.cpp?name=analyze)]
-
+-->
 
 ## Error codes
 
````
articles/ai-services/computer-vision/includes/how-to-guides/analyze-image-40-csharp.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -78,7 +78,7 @@ Create a new [ImageAnalysisOptions](/dotnet/api/azure.ai.vision.imageanalysis.im
 
 [!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/how-to/program.cs?name=visual_features)]
 
-
+<!--
 ### Set model name when using a custom model
 
 You can also do image analysis with a custom trained model. To create and train a model, see [Create a custom Image Analysis model](/azure/ai-services/computer-vision/how-to/model-customization). Once your model is trained, all you need is the model's name. You don't need to specify visual features if you use a custom model.
@@ -87,7 +87,7 @@ You can also do image analysis with a custom trained model. To create and train
 To use a custom model, create the [ImageAnalysisOptions](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and set the [ModelName](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.modelname#azure-ai-vision-imageanalysis-imageanalysisoptions-modelname) property. You don't need to set any other properties on **ImageAnalysisOptions**. There's no need to set the [Features](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.features#azure-ai-vision-imageanalysis-imageanalysisoptions-features) property, as you do with the standard model, since your custom model already implies the visual features the service extracts.
 
 [!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/custom-model/program.cs?name=model_name)]
-
+-->
 
 ### Specify languages
 
@@ -144,7 +144,7 @@ This section shows you how to make an analysis call to the service using the sta
 
 [!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/how-to/program.cs?name=analyze)]
 
-
+<!--
 ### Get results using custom model
 
 This section shows you how to make an analysis call to the service, when using a custom model.
@@ -153,7 +153,7 @@ This section shows you how to make an analysis call to the service, when using a
 The code is similar to the standard model case. The only difference is that results from the custom model are available on the **CustomTags** and/or **CustomObjects** properties of the [ImageAnalysisResult](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisresult) object.
 
 [!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/custom-model/program.cs?name=analyze)]
-
+-->
 
 ## Error codes
 
````
articles/ai-services/computer-vision/includes/how-to-guides/analyze-image-40-java.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -83,15 +83,15 @@ Create a new [ImageAnalysisOptions](/java/api/com.azure.ai.vision.imageanalysis.
 
 [!code-java[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/java/image-analysis/how-to/ImageAnalysis.java?name=visual_features)]
 
-
+<!--
 ### Set model name when using a custom model
 
 You can also do image analysis with a custom trained model. To create and train a model, see [Create a custom Image Analysis model](/azure/ai-services/computer-vision/how-to/model-customization). Once your model is trained, all you need is the model's name.
 
 To use a custom model, create the [ImageAnalysisOptions](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisoptions) object and call the [setModelName](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisoptions#com-azure-ai-vision-imageanalysis-imageanalysisoptions-setmodelname(java-lang-string)) method. You don't need to set any other properties on **ImageAnalysisOptions**. There's no need to call [setFeatures](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisoptions#com-azure-ai-vision-imageanalysis-imageanalysisoptions-setfeatures(java-util-enumset(com-azure-ai-vision-imageanalysis-imageanalysisfeature))), as you do with the standard model, since your custom model already implies the visual features the service extracts.
 
 [!code-java[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/java/image-analysis/custom-model/ImageAnalysis.java?name=model_name)]
-
+-->
 
 ### Specify languages
 
@@ -143,7 +143,7 @@ This section shows you how to make an analysis call to the service using the sta
 
 [!code-java[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/java/image-analysis/how-to/ImageAnalysis.java?name=analyze)]
 
-
+<!--
 ### Get results using custom model
 
 This section shows you how to make an analysis call to the service, when using a custom model.
@@ -152,7 +152,7 @@ This section shows you how to make an analysis call to the service, when using a
 The code is similar to the standard model case. The only difference is that results from the custom model are available by calling **getCustomTags** and/or **getCustomObjects** methods on the [ImageAnalysisResult](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisresult) object.
 
 [!code-java[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/java/image-analysis/custom-model/ImageAnalysis.java?name=analyze)]
-
+-->
 
 ## Error codes
 
````
articles/ai-services/computer-vision/includes/how-to-guides/analyze-image-40-python.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -73,15 +73,15 @@ Create a new [ImageAnalysisOptions](/python/api/azure-ai-vision/azure.ai.vision.
 
 [!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/how-to/main.py?name=visual_features)]
 
-
+<!--
 ### Set model name when using a custom model
 
 You can also do image analysis with a custom trained model. To create and train a model, see [Create a custom Image Analysis model](/azure/ai-services/computer-vision/how-to/model-customization). Once your model is trained, all you need is the model's name. You don't need to specify visual features if you use a custom model.
 
 To use a custom model, create the [ImageAnalysisOptions](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions) object and set the [model_name](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-model-name) property. You don't need to set any other properties on **ImageAnalysisOptions**. There's no need to set the [features](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-features) property, as you do with the standard model, since your custom model already implies the visual features the service extracts.
 
 [!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/custom-model/main.py?name=model_name)]
-
+-->
 
 ### Specify languages
 
@@ -137,7 +137,7 @@ This section shows you how to make an analysis call to the service using the sta
 
 [!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/how-to/main.py?name=analyze)]
 
-
+<!--
 ### Get results using custom model
 
 This section shows you how to make an analysis call to the service, when using a custom model.
@@ -146,7 +146,7 @@ This section shows you how to make an analysis call to the service, when using a
 The code is similar to the standard model case. The only difference is that results from the custom model are available on the **custom_tags** and/or **custom_objects** properties of the [ImageAnalysisResult](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisresult) object.
 
 [!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/custom-model/main.py?name=analyze)]
-
+-->
 
 ## Error codes
 
````
