articles/ai-services/computer-vision/includes/how-to-guides/analyze-image-40-csharp.md (+10 −10)
@@ -20,9 +20,9 @@ This guide assumes you've followed the steps mentioned in the [quickstart](/azur
 To authenticate against the Image Analysis service, you need a Computer Vision key and endpoint URL. This guide assumes that you've defined the environment variables `VISION_KEY` and `VISION_ENDPOINT` with your key and endpoint.

 > [!TIP]
-> Don't include the key directly in your code, and never post it publicly. See the Azure AI services [security](/azure/ai-services/security-features) article for more authentication options like [Azure Key Vault](/azure/ai-services/use-key-vault).
+> Don't include the key directly in your code, and never post it publicly. See the Azure AI services [security](/azure/ai-services/security-features) article for more authentication options like [Azure Key Vault](/azure/ai-services/use-key-vault).

-Start by creating a **ImageAnalysisClient** object. For example:
+Start by creating an [ImageAnalysisClient](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisclient) object. For example:

-Alternatively, you can pass the image data to the SDK through a **BinaryData** object. For example, read from a local image file you want to analyze.
+Alternatively, you can pass the image data to the SDK through a [BinaryData](/dotnet/api/system.binarydata) object. For example, read from a local image file you want to analyze.
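For context on what these changed lines describe, here is a minimal sketch of the client setup and image inputs, assuming the current `Azure.AI.Vision.ImageAnalysis` .NET package and the `VISION_KEY`/`VISION_ENDPOINT` environment variables mentioned above; the sample image URL and file name are placeholders, and the article's own snippet may differ.

```csharp
using System;
using System.IO;
using Azure;
using Azure.AI.Vision.ImageAnalysis;

// Read the endpoint and key from the environment variables defined earlier.
string endpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT");
string key = Environment.GetEnvironmentVariable("VISION_KEY");

// Create the ImageAnalysisClient used throughout this guide.
ImageAnalysisClient client = new ImageAnalysisClient(
    new Uri(endpoint),
    new AzureKeyCredential(key));

// Analyze a remote image by URL (placeholder URL)...
Uri imageUrl = new Uri("https://aka.ms/azsdk/image-analysis/sample.jpg");

// ...or load a local file into a BinaryData buffer instead (placeholder file name).
BinaryData imageData = BinaryData.FromBytes(File.ReadAllBytes("sample.jpg"));
```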
@@ -49,7 +49,7 @@ Alternatively, you can pass the image data to the SDK through a **BinaryData** o
 ## Select visual features

-The Analysis 4.0 API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](/azure/ai-services/computer-vision/overview-image-analysis) for a description of each feature. The example in this section adds all of the available visual features, but for practical usage you likely need fewer.
+The Analysis 4.0 API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](/azure/ai-services/computer-vision/overview-image-analysis) for a description of each feature. The example in this section adds all of the [available visual features](/dotnet/api/azure.ai.vision.imageanalysis.visualfeatures), but for practical usage you likely need fewer.

 > [!IMPORTANT]
 > The visual features `Captions` and `DenseCaptions` are only supported in the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
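As a small illustration of the feature selection this hunk documents (a sketch, not the article's exact sample), the .NET SDK expresses the chosen operations as a combined `VisualFeatures` flags value:

```csharp
using Azure.AI.Vision.ImageAnalysis;

// Request every feature for demonstration; in practice, select only what your scenario needs.
// Caption and DenseCaptions are limited to the Azure regions listed in the note above.
VisualFeatures visualFeatures =
    VisualFeatures.Caption |
    VisualFeatures.DenseCaptions |
    VisualFeatures.Objects |
    VisualFeatures.People |
    VisualFeatures.Read |
    VisualFeatures.SmartCrops |
    VisualFeatures.Tags;
```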
@@ -72,20 +72,20 @@ To use a custom model, create the [ImageAnalysisOptions](/dotnet/api/azure.ai.vi
 ## Select analysis options

-Use an **ImageAnalysisOptions** object to specify various options for the Analyze API call.
+Use an [ImageAnalysisOptions](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object to specify various options for the Analyze API call.

 - **Language**: You can specify the language of the returned data. The language is optional, with the default being English. See [Language support](https://aka.ms/cv-languages) for a list of supported language codes and which visual features are supported for each language.
-- **Gender neutral captions**: If you're extracting captions or dense captions (using **VisualFeatures.Caption** or **VisualFeatures.DenseCaptions**), you can ask for gender neutral captions. Gender neutral captions are optional, with the default being gendered captions. For example, in English, when you select gender neutral captions, terms like **woman** or **man** are replaced with **person**, and **boy** or **girl** are replaced with **child**.
-- **Crop aspect ratio**: An aspect ratio is calculated by dividing the target crop width by the height. Supported values are from 0.75 to 1.8 (inclusive). Setting this property is only relevant when **VisualFeatures.SmartCrops** was selected as part the visual feature list. If you select **VisualFeatures.SmartCrops** but don't specify aspect ratios, the service returns one crop suggestion with an aspect ratio it sees fit. In this case, the aspect ratio is between 0.5 and 2.0 (inclusive).
+- **Gender neutral captions**: If you're extracting captions or dense captions (using [VisualFeatures.Caption](/dotnet/api/azure.ai.vision.imageanalysis.visualfeatures) or [VisualFeatures.DenseCaptions](/dotnet/api/azure.ai.vision.imageanalysis.visualfeatures)), you can ask for gender neutral captions. Gender neutral captions are optional, with the default being gendered captions. For example, in English, when you select gender neutral captions, terms like **woman** or **man** are replaced with **person**, and **boy** or **girl** are replaced with **child**.
+- **Crop aspect ratio**: An aspect ratio is calculated by dividing the target crop width by the height. Supported values are from 0.75 to 1.8 (inclusive). Setting this property is only relevant when [VisualFeatures.SmartCrops](/dotnet/api/azure.ai.vision.imageanalysis.visualfeatures) was selected as part of the visual feature list. If you select [VisualFeatures.SmartCrops](/dotnet/api/azure.ai.vision.imageanalysis.visualfeatures) but don't specify aspect ratios, the service returns one crop suggestion with an aspect ratio it sees fit. In this case, the aspect ratio is between 0.5 and 2.0 (inclusive).

-This section shows you how to make an analysis call to the service.
+This section shows you how to make an analysis call to the service.

-Call the **Analyze** method on the **ImageAnalysisClient** object, as shown here. The call is synchronous, and will block until the service returns the results or an error occurred. Alternatively, you can call the non-blocking **AnalyzeAsync** method.
+Call the [Analyze](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisclient#methods) method on the [ImageAnalysisClient](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisclient) object, as shown here. The call is synchronous, and blocks until the service returns the results or an error occurs. Alternatively, you can call the non-blocking [AnalyzeAsync](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisclient#methods) method.

 Use the input objects created in the above sections. To analyze from an image buffer instead of URL, replace `imageURL` in the method call with the `imageData` variable.
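Tying the options and the Analyze call together, a hedged end-to-end sketch against the current .NET SDK might look like the following; `client`, `imageUrl`, and `visualFeatures` refer to the earlier sketches (swap in `imageData` to analyze a buffer), and the article's own sample may be structured differently.

```csharp
// Optional settings described above: language, gender neutral captions, smart-crop aspect ratios.
ImageAnalysisOptions options = new ImageAnalysisOptions
{
    Language = "en",
    GenderNeutralCaption = true,
    SmartCropsAspectRatios = new float[] { 0.9F, 1.33F }
};

// Synchronous call; use AnalyzeAsync for the non-blocking equivalent.
ImageAnalysisResult result = client.Analyze(imageUrl, visualFeatures, options);

// Print the caption and any OCR text lines returned by the service.
Console.WriteLine($"Caption: {result.Caption.Text} (confidence {result.Caption.Confidence:F4})");
foreach (var block in result.Read.Blocks)
{
    foreach (var line in block.Lines)
    {
        Console.WriteLine($"Text line: {line.Text}");
    }
}
```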
articles/ai-services/computer-vision/includes/how-to-guides/analyze-image-40-java.md (+8 −8)
@@ -23,7 +23,7 @@ To authenticate with the Image Analysis service, you need a Computer Vision key
 > [!TIP]
 > Don't include the key directly in your code, and never post it publicly. See the Azure AI services [security](/azure/ai-services/security-features) article for more authentication options like [Azure Key Vault](/azure/ai-services/use-key-vault).

-Start by creating a **ImageAnalysisClient** object. For example:
+Start by creating an [ImageAnalysisClient](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisclient) object. For example:
@@ -47,11 +47,11 @@ Alternatively, you can pass in the image as a data array using a **BinaryData**
 ## Select visual features

-The Analysis 4.0 API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](/azure/ai-services/computer-vision/overview-image-analysis) for a description of each feature. The example in this section adds all of the available visual features, but for practical usage you likely need fewer.
+The Analysis 4.0 API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](/azure/ai-services/computer-vision/overview-image-analysis) for a description of each feature. The example in this section adds all of the [available visual features](/java/api/com.azure.ai.vision.imageanalysis.models.visualfeatures), but for practical usage you likely need fewer.

 > [!IMPORTANT]
-> The visual features `Captions` and `DenseCaptions` are only supported in the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
+> The visual features [Captions](/java/api/com.azure.ai.vision.imageanalysis.models.visualfeatures#com-azure-ai-vision-imageanalysis-models-visualfeatures-caption) and [DenseCaptions](/java/api/com.azure.ai.vision.imageanalysis.models.visualfeatures#com-azure-ai-vision-imageanalysis-models-visualfeatures-dense-captions) are only supported in the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
@@ -70,11 +70,11 @@ To use a custom model, create the [ImageAnalysisOptions](/java/api/com.azure.ai.
 ## Select analysis options

-Use an **ImageAnalysisOptions** object to specify various options for the Analyze API call.
+Use an [ImageAnalysisOptions](/java/api/com.azure.ai.vision.imageanalysis.models.imageanalysisoptions) object to specify various options for the Analyze API call.

 - **Language**: You can specify the language of the returned data. The language is optional, with the default being English. See [Language support](https://aka.ms/cv-languages) for a list of supported language codes and which visual features are supported for each language.
-- **Gender neutral captions**: If you're extracting captions or dense captions (using **VisualFeatures.caption** or **VisualFeatures.denseCaptions**), you can ask for gender neutral captions. Gender neutral captions are optional, with the default being gendered captions. For example, in English, when you select gender neutral captions, terms like **woman** or **man** are replaced with **person**, and **boy** or **girl** are replaced with **child**.
-- **Crop aspect ratio**: An aspect ratio is calculated by dividing the target crop width by the height. Supported values are from 0.75 to 1.8 (inclusive). Setting this property is only relevant when **VisualFeatures.smartCrops** was selected as part the visual feature list. If you select **VisualFeatures.smartCrops** but don't specify aspect ratios, the service returns one crop suggestion with an aspect ratio it sees fit. In this case, the aspect ratio is between 0.5 and 2.0 (inclusive).
+- **Gender neutral captions**: If you're extracting captions or dense captions (using [VisualFeatures.CAPTION](/java/api/com.azure.ai.vision.imageanalysis.models.visualfeatures#com-azure-ai-vision-imageanalysis-models-visualfeatures-caption) or [VisualFeatures.DENSE_CAPTIONS](/java/api/com.azure.ai.vision.imageanalysis.models.visualfeatures#com-azure-ai-vision-imageanalysis-models-visualfeatures-dense-captions)), you can ask for gender neutral captions. Gender neutral captions are optional, with the default being gendered captions. For example, in English, when you select gender neutral captions, terms like **woman** or **man** are replaced with **person**, and **boy** or **girl** are replaced with **child**.
+- **Crop aspect ratio**: An aspect ratio is calculated by dividing the target crop width by the height. Supported values are from 0.75 to 1.8 (inclusive). Setting this property is only relevant when [VisualFeatures.SMART_CROPS](/java/api/com.azure.ai.vision.imageanalysis.models.visualfeatures#com-azure-ai-vision-imageanalysis-models-visualfeatures-smart-crops) was selected as part of the visual feature list. If you select [VisualFeatures.SMART_CROPS](/java/api/com.azure.ai.vision.imageanalysis.models.visualfeatures#com-azure-ai-vision-imageanalysis-models-visualfeatures-smart-crops) but don't specify aspect ratios, the service returns one crop suggestion with an aspect ratio it sees fit. In this case, the aspect ratio is between 0.5 and 2.0 (inclusive).
@@ -85,7 +85,7 @@ Use an **ImageAnalysisOptions** object to specify various options for the Analyz
 This section shows you how to make an analysis call to the service.

-Call the **analyze** method on the **ImageAnalysisClient** object, as shown here. The call is synchronous, and will block until the service returns the results or an error occurred. Alternatively, you can use a **ImageAnalysisAsyncClient** object instead, and call its **analyze** method which is non-blocking.
+Call the [analyze](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisclient#method-summary) method on the [ImageAnalysisClient](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisclient) object, as shown here. The call is synchronous, and blocks until the service returns the results or an error occurs. Alternatively, you can use an [ImageAnalysisAsyncClient](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisasyncclient) object instead, and call its [analyze](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisasyncclient#method-summary) method, which is non-blocking.

 Use the input objects created in the above sections. To analyze from an image buffer instead of URL, replace `imageURL` in the method call with `imageData`.
articles/ai-services/computer-vision/includes/how-to-guides/analyze-image-40-python.md (+5 −5)
@@ -25,7 +25,7 @@ To authenticate against the Image Analysis service, you need a Computer Vision k
-Start by creating a **ImageAnalysisClient** object using one of the constructors. For example:
+Start by creating an [ImageAnalysisClient](/python/api/azure-ai-vision-imageanalysis/azure.ai.vision.imageanalysis.imageanalysisclient) object using one of the constructors. For example:
@@ -49,7 +49,7 @@ Alternatively, you can pass in the image as a data array. For example, read from
 ## Select visual features

-The Analysis 4.0 API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](/azure/ai-services/computer-vision/overview-image-analysis) for a description of each feature. The example in this section adds all of the available visual features, but for practical usage you likely need fewer.
+The Analysis 4.0 API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](/azure/ai-services/computer-vision/overview-image-analysis) for a description of each feature. The example in this section adds all of the [available visual features](/python/api/azure-ai-vision-imageanalysis/azure.ai.vision.imageanalysis.models.visualfeatures), but for practical usage you likely need fewer.
@@ -65,18 +65,18 @@ To use a custom model, create the [ImageAnalysisOptions](/python/api/azure-ai-vi
 ## Call the Analyze API with options

-The following code calls the Analyze API with the features you selected above and additional options, defined below. To analyze from an image buffer instead of URL, replace `image_url=image_url` in the method call with `image_data=image_data`.
+The following code calls the [Analyze API](/python/api/azure-ai-vision-imageanalysis/azure.ai.vision.imageanalysis.imageanalysisclient#azure-ai-vision-imageanalysis-imageanalysisclient-analyze) with the features you selected above and additional options, defined below. To analyze from an image buffer instead of URL, replace `image_url=image_url` in the method call with `image_data=image_data`.

-An aspect ratio is calculated by dividing the target crop width by the height. Supported values are from 0.75 to 1.8 (inclusive). Setting this property is only relevant when **VisualFeatures.SMART_CROPS** was selected as part the visual feature list. If you select **VisualFeatures.SMART_CROPS** but don't specify aspect ratios, the service returns one crop suggestion with an aspect ratio it sees fit. In this case, the aspect ratio is between 0.5 and 2.0 (inclusive).
+An aspect ratio is calculated by dividing the target crop width by the height. Supported values are from 0.75 to 1.8 (inclusive). Setting this property is only relevant when [VisualFeatures.SMART_CROPS](/python/api/azure-ai-vision-imageanalysis/azure.ai.vision.imageanalysis.models.visualfeatures) was selected as part of the visual feature list. If you select [VisualFeatures.SMART_CROPS](/python/api/azure-ai-vision-imageanalysis/azure.ai.vision.imageanalysis.models.visualfeatures) but don't specify aspect ratios, the service returns one crop suggestion with an aspect ratio it sees fit. In this case, the aspect ratio is between 0.5 and 2.0 (inclusive).

 ### Select gender neutral captions

-If you're extracting captions or dense captions (using **VisualFeatures.CAPTION** or **VisualFeatures.DENSE_CAPTIONS**), you can ask for gender neutral captions. Gender neutral captions are optional, with the default being gendered captions. For example, in English, when you select gender neutral captions, terms like **woman** or **man** are replaced with **person**, and **boy** or **girl** are replaced with **child**.
+If you're extracting captions or dense captions (using [VisualFeatures.CAPTION](/python/api/azure-ai-vision-imageanalysis/azure.ai.vision.imageanalysis.models.visualfeatures) or [VisualFeatures.DENSE_CAPTIONS](/python/api/azure-ai-vision-imageanalysis/azure.ai.vision.imageanalysis.models.visualfeatures)), you can ask for gender neutral captions. Gender neutral captions are optional, with the default being gendered captions. For example, in English, when you select gender neutral captions, terms like **woman** or **man** are replaced with **person**, and **boy** or **girl** are replaced with **child**.