The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features.
You can select an image by providing a publicly accessible image URL, a local image file name, or by copying the image into the SDK's input buffer. See [Image requirements](../../overview-image-analysis.md?tabs=4-0#image-requirements) for supported image formats.
### Image URL
Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource.fromUrl](/java/api/azure.ai.vision.common.visionsource.fromurl).
**VisionSource** implements **AutoCloseable**, therefore create the object in a try-with-resources block, or explicitly call the **close** method on this object when you're done analyzing the image.
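For example, a minimal sketch (the image URL is a placeholder, and this assumes `fromUrl` accepts a `java.net.URL`, as in the SDK samples; package names follow the reference links above):

```java
import com.azure.ai.vision.common.VisionSource;
import java.net.URL;

// Placeholder: use a publicly accessible URL of the image you want to analyze.
// Note that new URL(...) can throw MalformedURLException, so declare or handle it.
URL imageUrl = new URL("https://example.com/sample.jpg");

try (VisionSource imageSource = VisionSource.fromUrl(imageUrl)) {
    // Pass imageSource to the ImageAnalyzer, as shown later in this guide.
}
```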
### Image file
Create a new **VisionSource** object from the local image file you want to analyze, using the static constructor [VisionSource.fromFile](/java/api/azure.ai.vision.common.visionsource.fromfile).
**VisionSource** implements **AutoCloseable**, therefore create the object in a try-with-resources block, or explicitly call the **close** method on this object when you're done analyzing the image.
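For example, a minimal sketch (the file name is a placeholder, and this assumes `fromFile` accepts the full path as a `String`):

```java
import com.azure.ai.vision.common.VisionSource;

// Placeholder: use the full path of the local image file you want to analyze.
try (VisionSource imageSource = VisionSource.fromFile("sample.jpg")) {
    // Pass imageSource to the ImageAnalyzer, as shown later in this guide.
}
```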
### Image buffer
Create a new **VisionSource** object from a memory buffer containing the image data, by using the static constructor [VisionSource.fromImageSourceBuffer](/java/api/azure.ai.vision.common.visionsource.fromimagesourcebuffer).
Start by creating a new [ImageSourceBuffer](/java/api/azure.ai.vision.common.imagesourcebuffer), then get access to its [ImageWriter](/java/api/azure.ai.vision.common.imagewriter) object and write the image data into it. In the following code example, `imageBuffer` is a variable of type `ByteBuffer` containing the image data.
Both **VisionSource** and **ImageSourceBuffer** implement **AutoCloseable**, therefore create the objects in a try-with-resources block, or explicitly call the **close** method on these objects when you're done analyzing the image.
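A minimal sketch of that sequence; it assumes the writer is obtained with a `getWriter()` method that accepts the `ByteBuffer` through `write(...)`, as the description above suggests (verify the exact names against the SDK reference):

```java
import com.azure.ai.vision.common.ImageSourceBuffer;
import com.azure.ai.vision.common.VisionSource;
import java.nio.ByteBuffer;

// 'imageBuffer' is a ByteBuffer already filled with the image data (see above).
try (ImageSourceBuffer imageSourceBuffer = new ImageSourceBuffer()) {
    // getWriter()/write(ByteBuffer) are assumed names; check the SDK reference.
    imageSourceBuffer.getWriter().write(imageBuffer);
    try (VisionSource imageSource = VisionSource.fromImageSourceBuffer(imageSourceBuffer)) {
        // Pass imageSource to the ImageAnalyzer, as shown later in this guide.
    }
}
```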
## Select analysis options
Visual features 'Captions' and 'DenseCaptions' are only supported in certain Azure regions.
> The REST API uses the terms **Smart Crops** and **Smart Crops Aspect Ratios**. The SDK uses the terms **Crop Suggestions** and **Cropping Aspect Ratios**. They both refer to the same service operation. Similarly, the REST API uses the term **Read** for detecting text in the image, whereas the SDK uses the term **Text** for the same operation.
Create a new [ImageAnalysisOptions](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and specify the visual features you'd like to extract by calling the [setFeatures](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions.features#azure-ai-vision-imageanalysis-imageanalysisoptions-features) method. The [ImageAnalysisFeature](/java/api/azure.ai.vision.imageanalysis.imageanalysisfeature) enum defines the supported values.
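For example, to request a caption and text (OCR) results. This is a sketch: the `EnumSet` parameter and the `CAPTION`/`TEXT` constant names follow the SDK samples, so verify them against your SDK version.

```java
import com.azure.ai.vision.imageanalysis.ImageAnalysisFeature;
import com.azure.ai.vision.imageanalysis.ImageAnalysisOptions;
import java.util.EnumSet;

ImageAnalysisOptions analysisOptions = new ImageAnalysisOptions();

// Request a caption and text (OCR) results; add or remove features as needed.
analysisOptions.setFeatures(EnumSet.of(
        ImageAnalysisFeature.CAPTION,
        ImageAnalysisFeature.TEXT));
```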
You can also do image analysis with a custom trained model. To create and train a model, see [Create a custom Image Analysis model](/azure/ai-services/computer-vision/how-to/model-customization). Once your model is trained, all you need is the model's name. You don't need to specify visual features if you use a custom model.
To use a custom model, create the [ImageAnalysisOptions](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and call the [setModelName](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions.modelname#azure-ai-vision-imageanalysis-imageanalysisoptions-modelname) method. You don't need to set any other properties on **ImageAnalysisOptions**. There's no need to call [setFeatures](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions.features#azure-ai-vision-imageanalysis-imageanalysisoptions-features), as you do with the standard model, since your custom model already implies the visual features the service extracts.
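For example, a minimal sketch (`MyCustomModelName` is a placeholder for your trained model's name):

```java
import com.azure.ai.vision.imageanalysis.ImageAnalysisOptions;

ImageAnalysisOptions analysisOptions = new ImageAnalysisOptions();

// Placeholder: replace with the name of your trained custom model.
analysisOptions.setModelName("MyCustomModelName");
```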
You can specify the language of the returned data. The language is optional, with the default being English.
The language option only applies when you're using the standard model.
Call the [setLanguage](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions.language) method on your **ImageAnalysisOptions** object to specify a language.
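For example, to request results in English (a sketch; `analysisOptions` is the **ImageAnalysisOptions** object created earlier):

```java
// "en" is the language code for English.
analysisOptions.setLanguage("en");
```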
If you're extracting captions or dense captions, you can ask for gender neutral captions.
The gender neutral caption option only applies when you're using the standard model.
To enable gender neutral captions, call the [setGenderNeutralCaption](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions.genderneutralcaption) method on your **ImageAnalysisOptions** object with `true` as the argument.
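For example (`analysisOptions` is the **ImageAnalysisOptions** object created earlier):

```java
// Ask the service to return gender neutral captions.
analysisOptions.setGenderNeutralCaption(true);
```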
An aspect ratio is calculated by dividing the target crop width by the height. Supported values are from 0.75 to 1.8 (inclusive). Setting this property is only relevant when the **smartCrop** option (REST API) or **CropSuggestions** (SDK) was selected as part of the visual feature list. If you select smartCrop/CropSuggestions but don't specify aspect ratios, the service returns one crop suggestion with an aspect ratio it sees fit. In this case, the aspect ratio is between 0.5 and 2.0 (inclusive).
Smart cropping aspect ratios only apply when you're using the standard model.
Call the [setCroppingAspectRatios](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions.croppingaspectratios) method on your **ImageAnalysisOptions** with a list of aspect ratios. For example, to set aspect ratios of 0.9 and 1.33:
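(A minimal sketch; this assumes the method accepts a `List<Double>`, as in the SDK samples, and that `analysisOptions` is the **ImageAnalysisOptions** object created earlier.)

```java
import java.util.Arrays;

// Request crop suggestions at aspect ratios of 0.9 and 1.33.
analysisOptions.setCroppingAspectRatios(Arrays.asList(0.9, 1.33));
```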
## Get results using the standard model

This section shows you how to make an analysis call to the service using the standard model, and get the results.
1. Using the **VisionServiceOptions**, **VisionSource** and **ImageAnalysisOptions** objects, construct a new [ImageAnalyzer](/java/api/azure.ai.vision.imageanalysis.imageanalyzer) object. **ImageAnalyzer** implements **AutoCloseable**, therefore create the object in a try-with-resources block, or explicitly call the **close** method on this object when you're done analyzing the image.
1. Call the **analyze** method on the **ImageAnalyzer** object, as shown here. The call is synchronous and blocks until the service returns the results or an error occurs. Alternatively, you can call the nonblocking **analyzeAsync** method.
1. Call the **getReason** method on the [ImageAnalysisResult](/java/api/azure.ai.vision.imageanalysis.imageanalysisresult) object to determine whether analysis succeeded or failed.
1. If analysis succeeded, call the relevant result methods based on your selected visual features, as shown here. Additional information (not commonly needed) can be obtained by constructing the [ImageAnalysisResultDetails](/java/api/azure.ai.vision.imageanalysis.imageanalysisresultdetails) object.
1. If analysis failed, you can construct the [ImageAnalysisErrorDetails](/java/api/azure.ai.vision.imageanalysis.imageanalysiserrordetails) object to get information on the failure.
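A sketch of those steps, based on the SDK's public samples. The `serviceOptions`, `imageSource`, and `analysisOptions` objects come from earlier in this guide; the result and error member names used here (such as `ImageAnalysisResultReason.ANALYZED`, `getCaption`, and `ImageAnalysisErrorDetails.fromResult`) should be verified against your SDK version.

```java
import com.azure.ai.vision.imageanalysis.ImageAnalysisErrorDetails;
import com.azure.ai.vision.imageanalysis.ImageAnalysisResult;
import com.azure.ai.vision.imageanalysis.ImageAnalysisResultReason;
import com.azure.ai.vision.imageanalysis.ImageAnalyzer;

// Step 1: construct the analyzer; try-with-resources calls close() for you.
try (ImageAnalyzer analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions)) {

    // Step 2: blocking call (use analyzeAsync for the nonblocking variant).
    ImageAnalysisResult result = analyzer.analyze();

    // Step 3: check whether analysis succeeded.
    if (result.getReason() == ImageAnalysisResultReason.ANALYZED) {

        // Step 4 (success): read the results for the features you selected.
        if (result.getCaption() != null) {
            System.out.println("Caption: \"" + result.getCaption().getContent()
                    + "\", confidence " + result.getCaption().getConfidence());
        }
    } else {
        // Step 4 (failure): construct the error details object.
        ImageAnalysisErrorDetails errorDetails = ImageAnalysisErrorDetails.fromResult(result);
        System.out.println("Analysis failed."
                + " Reason: " + errorDetails.getReason()
                + ". Code: " + errorDetails.getErrorCode()
                + ". Message: " + errorDetails.getMessage());
    }
}
```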
## Get results using the custom model

This section shows you how to make an analysis call to the service when using a custom model.
The code is similar to the standard model case. The only difference is that results from the custom model are available by calling the **getCustomTags** and/or **getCustomObjects** methods on the [ImageAnalysisResult](/java/api/azure.ai.vision.imageanalysis.imageanalysisresult) object.
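For example, a sketch (the `getName` and `getConfidence` accessors on the returned items are assumptions based on the SDK samples; `result` is the **ImageAnalysisResult** returned by `analyzer.analyze()`):

```java
// Results from a custom model come back as custom tags and/or custom objects.
if (result.getCustomTags() != null) {
    for (var tag : result.getCustomTags()) {
        System.out.println("Custom tag: " + tag.getName()
                + ", confidence " + tag.getConfidence());
    }
}
if (result.getCustomObjects() != null) {
    for (var detectedObject : result.getCustomObjects()) {
        System.out.println("Custom object: " + detectedObject.getName());
    }
}
```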
## Error codes

The sample code for getting analysis results shows how to handle errors and get the [ImageAnalysisErrorDetails](/java/api/azure.ai.vision.imageanalysis.imageanalysiserrordetails) object that contains the error information. The error information includes:
* Error reason. See enum [ImageAnalysisErrorReason](/java/api/azure.ai.vision.imageanalysis.imageanalysiserrorreason).
* Error code and error message. Click on the **REST API** tab to see a list of some common error codes and messages.
In addition to those errors, the SDK has a few other error messages, including:
* `Missing Image Analysis options: You must set at least one visual feature (or model name) for the 'analyze' operation. Or set segmentation mode for the 'segment' operation`
* `Invalid combination of Image Analysis options: You cannot set both visual features (or model name), and segmentation mode`
Make sure the [ImageAnalysisOptions](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object is set correctly to fix these errors.
To help resolve issues, look at the [Image Analysis Samples](https://github.com/Azure-Samples/azure-ai-vision-sdk) repository and run the closest sample to your scenario. Search the [GitHub issues](https://github.com/Azure-Samples/azure-ai-vision-sdk/issues) to see if your issue was already addressed. If not, create a new one.