File: articles/ai-services/computer-vision/includes/how-to-guides/analyze-image-40-java.md (+17/-17)
@@ -25,7 +25,7 @@ To authenticate against the Image Analysis service, you need a Computer Vision k
The SDK example assumes that you defined the environment variables `VISION_KEY` and `VISION_ENDPOINT` with your key and endpoint.

- Start by creating a [VisionServiceOptions](/java/api/azure.ai.vision.common.visionserviceoptions) object using one of the constructors. For example:
+ Start by creating a [VisionServiceOptions](/java/api/com.azure.ai.vision.common.visionserviceoptions) object using one of the constructors. For example:
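As an illustration of that step, a minimal sketch assuming the 4.0 preview package `com.azure.ai.vision.common` and the `VisionServiceOptions(URL, String)` constructor used in the SDK samples:

```java
import com.azure.ai.vision.common.VisionServiceOptions;
import java.net.URL;

// Read the endpoint and key from the environment variables defined above.
// Note: new URL(...) throws MalformedURLException; declare or catch it.
VisionServiceOptions serviceOptions = new VisionServiceOptions(
    new URL(System.getenv("VISION_ENDPOINT")),
    System.getenv("VISION_KEY"));
```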
@@ -36,15 +36,15 @@ You can select an image by providing a publicly accessible image URL, a local im
### Image URL

- Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource.fromUrl](/java/api/azure.ai.vision.common.visionsource.fromurl).
+ Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource.fromUrl](/java/api/com.azure.ai.vision.common.visionsource#com-azure-ai-vision-common-visionsource-fromurl(java-net-url)).

**VisionSource** implements **AutoCloseable**, so create the object in a try-with-resources block, or explicitly call the **close** method on this object when you're done analyzing the image.
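A sketch of that step, with a placeholder image URL:

```java
import com.azure.ai.vision.common.VisionSource;
import java.net.URL;

// try-with-resources closes the VisionSource when analysis is done.
try (VisionSource imageSource = VisionSource.fromUrl(
        new URL("https://example.com/sample.jpg"))) { // placeholder URL
    // Pass imageSource to the analyzer (see the analysis section below).
}
```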
### Image file

- Create a new **VisionSource** object from the local image file you want to analyze, using the static constructor [VisionSource.fromFile](/java/api/azure.ai.vision.common.visionsource.fromfile).
+ Create a new **VisionSource** object from the local image file you want to analyze, using the static constructor [VisionSource.fromFile](/java/api/com.azure.ai.vision.common.visionsource#com-azure-ai-vision-common-visionsource-fromfile(java-lang-string)).
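Likewise for a local file, with a placeholder path:

```java
// try-with-resources closes the VisionSource when analysis is done.
try (VisionSource imageSource = VisionSource.fromFile("sample.jpg")) { // placeholder path
    // Pass imageSource to the analyzer.
}
```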
Create a new **VisionSource** object from a memory buffer containing the image data, using the static constructor [VisionSource.fromImageSourceBuffer](/java/api/com.azure.ai.vision.common.visionsource).
- Start by creating a new [ImageSourceBuffer](/java/api/azure.ai.vision.common.imagesourcebuffer), then get access to its [ImageWriter](/java/api/azure.ai.vision.common.imagewriter) object and write the image data into it. In the following code example, `imageBuffer` is a variable of type `ByteBuffer` containing the image data.
+ Start by creating a new [ImageSourceBuffer](/java/api/com.azure.ai.vision.common.imagesourcebuffer), then get access to its [ImageWriter](/java/api/com.azure.ai.vision.common.imagewriter) object and write the image data into it. In the following code example, `imageBuffer` is a variable of type `ByteBuffer` containing the image data.
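A sketch of the buffer path, assuming `ImageSourceBuffer` is `AutoCloseable` and exposes its writer through `getWriter()`, mirroring the SDK's C# and C++ samples:

```java
import com.azure.ai.vision.common.ImageSourceBuffer;
import com.azure.ai.vision.common.VisionSource;

try (ImageSourceBuffer imageSourceBuffer = new ImageSourceBuffer()) {
    // imageBuffer is a java.nio.ByteBuffer containing the image data (see above).
    imageSourceBuffer.getWriter().write(imageBuffer);
    try (VisionSource imageSource = VisionSource.fromImageSourceBuffer(imageSourceBuffer)) {
        // Pass imageSource to the analyzer.
    }
}
```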
@@ -79,7 +79,7 @@ Visual features 'Captions' and 'DenseCaptions' are only supported in the followi
> The REST API uses the terms **Smart Crops** and **Smart Crops Aspect Ratios**. The SDK uses the terms **Crop Suggestions** and **Cropping Aspect Ratios**. They both refer to the same service operation. Similarly, the REST API uses the term **Read** for detecting text in the image, whereas the SDK uses the term **Text** for the same operation.

- Create a new [ImageAnalysisOptions](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and specify the visual features you'd like to extract, by call the [setFeatures](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions.features#azure-ai-vision-imageanalysis-imageanalysisoptions-features) method. [ImageAnalysisFeature](/java/api/azure.ai.vision.imageanalysis.imageanalysisfeature) enum defines the supported values.
+ Create a new [ImageAnalysisOptions](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisoptions) object and specify the visual features you'd like to extract by calling the [setFeatures](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisoptions#com-azure-ai-vision-imageanalysis-imageanalysisoptions-setfeatures(java-util-enumset(com-azure-ai-vision-imageanalysis-imageanalysisfeature))) method. The [ImageAnalysisFeature](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisfeature) enum defines the supported values.
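A sketch of feature selection; the enum constant names (`CAPTION`, `TAGS`, `TEXT`) are assumed from the SDK samples:

```java
import com.azure.ai.vision.imageanalysis.ImageAnalysisFeature;
import com.azure.ai.vision.imageanalysis.ImageAnalysisOptions;
import java.util.EnumSet;

ImageAnalysisOptions analysisOptions = new ImageAnalysisOptions();

// Request a caption, image tags, and text (OCR) in a single call.
analysisOptions.setFeatures(EnumSet.of(
    ImageAnalysisFeature.CAPTION,
    ImageAnalysisFeature.TAGS,
    ImageAnalysisFeature.TEXT));
```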
@@ -88,7 +88,7 @@ Create a new [ImageAnalysisOptions](/java/api/azure.ai.vision.imageanalysis.imag
You can also do image analysis with a custom-trained model. To create and train a model, see [Create a custom Image Analysis model](/azure/ai-services/computer-vision/how-to/model-customization). Once your model is trained, all you need is the model's name.

- To use a custom model, create the [ImageAnalysisOptions](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and call the [setModelName](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions.modelname#azure-ai-vision-imageanalysis-imageanalysisoptions-modelname) method. You don't need to set any other properties on **ImageAnalysisOptions**. There's no need to call [setFeatures](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions.features#azure-ai-vision-imageanalysis-imageanalysisoptions-features), as you do with the standard model, since your custom model already implies the visual features the service extracts.
+ To use a custom model, create the [ImageAnalysisOptions](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisoptions) object and call the [setModelName](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisoptions#com-azure-ai-vision-imageanalysis-imageanalysisoptions-setmodelname(java-lang-string)) method. You don't need to set any other properties on **ImageAnalysisOptions**. There's no need to call [setFeatures](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisoptions#com-azure-ai-vision-imageanalysis-imageanalysisoptions-setfeatures(java-util-enumset(com-azure-ai-vision-imageanalysis-imageanalysisfeature))), as you do with the standard model, since your custom model already implies the visual features the service extracts.
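A sketch of that step; the model name is a placeholder for your own trained model:

```java
ImageAnalysisOptions analysisOptions = new ImageAnalysisOptions();

// No setFeatures call: the custom model determines the extracted features.
analysisOptions.setModelName("MyCustomModelName"); // placeholder name
```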
@@ -99,7 +99,7 @@ You can specify the language of the returned data. The language is optional, wit
The language option only applies when you're using the standard model.

- Call the [setLanguage](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions.language) method on your **ImageAnalysisOptions** object to specify a language.
+ Call the [setLanguage](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisoptions#com-azure-ai-vision-imageanalysis-imageanalysisoptions-setlanguage(java-lang-string)) method on your **ImageAnalysisOptions** object to specify a language.
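For example, to request results in English (`setLanguage(String)`, per the reference link above):

```java
// "en" is English; see the service documentation for supported language codes.
analysisOptions.setLanguage("en");
```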
@@ -110,7 +110,7 @@ If you're extracting captions or dense captions, you can ask for gender neutral
The gender neutral caption option only applies when you're using the standard model.

- To enable gender neutral captions, call the [setGenderNeutralCaption](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions.genderneutralcaption) method on your **ImageAnalysisOptions** object with `true` as the argument.
+ To enable gender neutral captions, call the [setGenderNeutralCaption](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisoptions#com-azure-ai-vision-imageanalysis-imageanalysisoptions-setgenderneutralcaption(java-lang-boolean)) method on your **ImageAnalysisOptions** object with `true` as the argument.
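For example (`setGenderNeutralCaption(Boolean)`, per the reference link above):

```java
// Use gender neutral terms such as "person" instead of "man" or "woman".
analysisOptions.setGenderNeutralCaption(true);
```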
@@ -120,7 +120,7 @@ An aspect ratio is calculated by dividing the target crop width by the height. S
Smart cropping aspect ratios only apply when you're using the standard model.

- Call the [setCroppingAspectRatios](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions.croppingaspectratios) method on your **ImageAnalysisOptions** with a list of aspect ratios. For example, to set aspect ratios of 0.9 and 1.33:
+ Call the [setCroppingAspectRatios](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisoptions#com-azure-ai-vision-imageanalysis-imageanalysisoptions-setcroppingaspectratios(java-util-list(java-lang-double))) method on your **ImageAnalysisOptions** with a list of aspect ratios. For example, to set aspect ratios of 0.9 and 1.33:
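A sketch of that call, using `java.util.Arrays` to build the list:

```java
import java.util.Arrays;

// Request crop suggestions at aspect ratios (width / height) of 0.9 and 1.33.
analysisOptions.setCroppingAspectRatios(Arrays.asList(0.9, 1.33));
```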
@@ -131,15 +131,15 @@ Call the [setCroppingAspectRatios](/java/api/azure.ai.vision.imageanalysis.image
This section shows you how to make an analysis call to the service using the standard model, and get the results.

- 1. Using the **VisionServiceOptions**, **VisionSource** and **ImageAnalysisOptions** objects, construct a new [ImageAnalyzer](/java/api/azure.ai.vision.imageanalysis.imageanalyzer) object. **ImageAnalyzer** implements **AutoCloseable**, therefore create the object in a try-with-resources block, or explicitly call the **close** method on this object when you're done analyzing the image.
+ 1. Using the **VisionServiceOptions**, **VisionSource**, and **ImageAnalysisOptions** objects, construct a new [ImageAnalyzer](/java/api/com.azure.ai.vision.imageanalysis.imageanalyzer) object. **ImageAnalyzer** implements **AutoCloseable**, so create the object in a try-with-resources block, or explicitly call the **close** method on this object when you're done analyzing the image.
1. Call the **analyze** method on the **ImageAnalyzer** object, as shown here. The call is synchronous, and blocks until the service returns the results or an error occurs. Alternatively, you can call the nonblocking **analyzeAsync** method.

- 1. Call the **getReason** method on the [ImageAnalysisResult](/java/api/azure.ai.vision.imageanalysis.imageanalysisresult) object, to determine if analysis succeeded or failed.
+ 1. Call the **getReason** method on the [ImageAnalysisResult](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisresult) object to determine if analysis succeeded or failed.
- 1. If succeeded, proceed to call the relevant result methods based on your selected visual features, as shown here. Additional information (not commonly needed) can be obtained by constructing the [ImageAnalysisResultDetails](/java/api/azure.ai.vision.imageanalysis.imageanalysisresultdetails) object.
+ 1. If it succeeded, call the relevant result methods based on your selected visual features, as shown here. Additional information (not commonly needed) can be obtained by constructing the [ImageAnalysisResultDetails](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisresultdetails) object.

- 1. If failed, you can construct the [ImageAnalysisErrorDetails](/java/api/azure.ai.vision.imageanalysis.imageanalysisresultdetails) object to get information on the failure.
+ 1. If it failed, you can construct the [ImageAnalysisErrorDetails](/java/api/com.azure.ai.vision.imageanalysis.imageanalysiserrordetails) object to get information on the failure.
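Putting the numbered steps together, a minimal sketch of the standard-model flow; the enum value `ImageAnalysisResultReason.ANALYZED` and the caption accessors are assumed from the SDK samples:

```java
import com.azure.ai.vision.imageanalysis.ImageAnalysisResult;
import com.azure.ai.vision.imageanalysis.ImageAnalysisResultReason;
import com.azure.ai.vision.imageanalysis.ImageAnalyzer;

// Step 1: construct the analyzer in a try-with-resources block.
try (ImageAnalyzer analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions)) {

    // Step 2: synchronous call; analyzeAsync is the nonblocking alternative.
    ImageAnalysisResult result = analyzer.analyze();

    // Steps 3-5: check the reason, then read results or error details.
    if (result.getReason() == ImageAnalysisResultReason.ANALYZED) {
        if (result.getCaption() != null) {
            System.out.println("Caption: \"" + result.getCaption().getContent()
                + "\", confidence " + result.getCaption().getConfidence());
        }
    } else {
        // See the error handling section below for ImageAnalysisErrorDetails.
        System.out.println("Analysis failed: " + result.getReason());
    }
}
```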
@@ -149,22 +149,22 @@ This section shows you how to make an analysis call to the service using the sta
This section shows you how to make an analysis call to the service when using a custom model.

- The code is similar to the standard model case. The only difference is that results from the custom model are available by calling **getCustomTags** and/or **getCustomObjects** methods on the [ImageAnalysisResult](/java/api/azure.ai.vision.imageanalysis.imageanalysisresult) object.
+ The code is similar to the standard model case. The only difference is that results from the custom model are available by calling the **getCustomTags** and/or **getCustomObjects** methods on the [ImageAnalysisResult](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisresult) object.
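A sketch of reading custom-model results, assuming the `ContentTag` type with `getName()`/`getConfidence()` accessors and an iterable tag collection, as in the SDK samples:

```java
// result is the ImageAnalysisResult from analyzer.analyze().
if (result.getReason() == ImageAnalysisResultReason.ANALYZED
        && result.getCustomTags() != null) {
    for (ContentTag tag : result.getCustomTags()) {
        System.out.println("Custom tag: " + tag.getName()
            + ", confidence " + tag.getConfidence());
    }
}
```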
- The sample code for getting analysis results shows how to handle errors and get the [ImageAnalysisErrorDetails](/java/api/azure.ai.vision.imageanalysis.imageanalysiserrordetails) object that contains the error information. The error information includes:
+ The sample code for getting analysis results shows how to handle errors and get the [ImageAnalysisErrorDetails](/java/api/com.azure.ai.vision.imageanalysis.imageanalysiserrordetails) object that contains the error information. The error information includes:

- * Error reason. See enum [ImageAnalysisErrorReason](/java/api/azure.ai.vision.imageanalysis.imageanalysiserrorreason).
+ * Error reason. See enum [ImageAnalysisErrorReason](/java/api/com.azure.ai.vision.imageanalysis.imageanalysiserrorreason).
* Error code and error message. Click on the **REST API** tab to see a list of some common error codes and messages.

In addition to those errors, the SDK has a few other error messages, including:

* `Missing Image Analysis options: You must set at least one visual feature (or model name) for the 'analyze' operation. Or set segmentation mode for the 'segment' operation`
* `Invalid combination of Image Analysis options: You cannot set both visual features (or model name), and segmentation mode`
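A sketch of reading those fields from a failed result, assuming the `fromResult` factory method used in the SDK samples:

```java
import com.azure.ai.vision.imageanalysis.ImageAnalysisErrorDetails;

ImageAnalysisErrorDetails errorDetails = ImageAnalysisErrorDetails.fromResult(result);
System.out.println("Error reason:  " + errorDetails.getReason());   // an ImageAnalysisErrorReason value
System.out.println("Error code:    " + errorDetails.getErrorCode());
System.out.println("Error message: " + errorDetails.getMessage());
```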
- Make sure the [ImageAnalysisOptions](/java/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object is set correctly to fix these errors.
+ Make sure the [ImageAnalysisOptions](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisoptions) object is set correctly to fix these errors.
To help resolve issues, look at the [Image Analysis Samples](https://github.com/Azure-Samples/azure-ai-vision-sdk) repository and run the closest sample to your scenario. Search the [GitHub issues](https://github.com/Azure-Samples/azure-ai-vision-sdk/issues) to see if your issue was already addressed. If not, create a new one.
File: articles/ai-services/computer-vision/includes/quickstarts-sdk/image-analysis-java-sdk-40.md (+1/-1)
@@ -16,7 +16,7 @@ ms.author: pafarley
Use the Image Analysis client SDK for Java to analyze an image to read text and generate an image caption. This quickstart analyzes a remote image and prints the results to the console.
> The Analysis 4.0 API can do many different operations. See the [Analyze Image how-to guide](../../how-to/call-analyze-image-40.md) for examples that showcase all of the available features.
File: articles/ai-services/computer-vision/sdk/overview-sdk.md (+1/-1)
@@ -28,7 +28,7 @@ The Vision SDK supports the following languages and platforms:
| C# <sup>1</sup> |[quickstart](../quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-csharp)|[reference](/dotnet/api/azure.ai.vision.imageanalysis)| Windows, UWP, Linux |
| C++ <sup>2</sup> |[quickstart](../quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-cpp)|[reference](/cpp/cognitive-services/vision)| Windows, Linux |
| Python |[quickstart](../quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-python)|[reference](/python/api/azure-ai-vision)| Windows, Linux |
- | Java |[quickstart](../quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-java)|[reference](/java/api/azure-ai-vision)| Windows, Linux |
+ | Java |[quickstart](../quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-java)|[reference](/java/api/com.azure.ai.vision.imageanalysis)| Windows, Linux |
<sup>1 The Vision SDK for C# is based on .NET Standard 2.0. See [.NET Standard](/dotnet/standard/net-standard?tabs=net-standard-2-0#net-implementation-support) documentation.</sup>