Commit 4df9c55

Merge pull request #266330 from MicrosoftDocs/main
02/14 PM Publishing
2 parents 950766b + 8afd4c3 commit 4df9c55

File tree

204 files changed

+1865
-815
lines changed


articles/ai-services/computer-vision/how-to/video-retrieval.md

Lines changed: 5 additions & 32 deletions
@@ -1,7 +1,7 @@
 ---
 title: Do video retrieval using vectorization - Image Analysis 4.0
 titleSuffix: Azure AI services
-description: Learn how to call the Spatial Analysis Video Retrieval APIs to vectorize video frames and search terms.
+description: Learn how to call the Video Retrieval APIs to vectorize video frames and search terms.
 #services: cognitive-services
 author: PatrickFarley
 manager: nitinme
@@ -14,7 +14,7 @@ ms.author: pafarley

 # Do video retrieval using vectorization (version 4.0 preview)

-Azure AI Spatial Analysis Video Retrieval APIs are part of Azure AI Vision and enable developers to create an index, add documents (videos and images) to it, and search with natural language. Developers can define metadata schemas for each index and ingest metadata to the service to help with retrieval. Developers can also specify what features to extract from the index (vision, speech) and filter their search based on features.
+Azure AI Video Retrieval APIs are part of Azure AI Vision and enable developers to create an index, add documents (videos and images) to it, and search with natural language. Developers can define metadata schemas for each index and ingest metadata to the service to help with retrieval. Developers can also specify what features to extract from the index (vision, speech) and filter their search based on features.

 ## Prerequisites

@@ -24,38 +24,11 @@ Azure AI Spatial Analysis Video Retrieval APIs are part of Azure AI Vision and e

 ## Input requirements

-### Supported formats
-
-| File format | Description |
-| ----------- | ----------- |
-| `asf` | ASF (Advanced / Active Streaming Format) |
-| `avi` | AVI (Audio Video Interleaved) |
-| `flv` | FLV (Flash Video) |
-| `matroskamm`, `webm` | Matroska / WebM |
-| `mov`,`mp4`,`m4a`,`3gp`,`3g2`,`mj2` | QuickTime / MOV |
-
-### Supported video codecs
-
-| Codec | Format |
-| ----------- | ----------- |
-| `h264` | H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 |
-| `h265` | H.265/HEVC |
-| `libvpx-vp9` | libvpx VP9 (codec vp9) |
-| `mpeg4` | MPEG-4 part 2 |
-
-### Supported audio codecs
-
-| Codec | Format |
-| ----------- | ----------- |
-| `aac` | AAC (Advanced Audio Coding) |
-| `mp3` | MP3 (MPEG audio layer 3) |
-| `pcm` | PCM (uncompressed) |
-| `vorbis` | Vorbis |
-| `wmav2` | Windows Media Audio 2 |
+[!INCLUDE [video-retrieval-input](../includes/video-retrieval-input.md)]

 ## Call the Video Retrieval APIs

-To use the Spatial Analysis Video Retrieval APIs in a typical pattern, you would do the following steps:
+To use the Video Retrieval APIs in a typical pattern, follow these steps:

 1. Create an index using **PUT - Create an index**.
 2. Add video documents to the index using **PUT - CreateIngestion**.
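The indexing pattern above boils down to a handful of REST calls. The following is a minimal Python sketch of step 1, "PUT - Create an index". The endpoint path (`/computervision/retrieval/indexes/{indexName}`), the `api-version` value, and the body field names are assumptions based on the 4.0 preview conventions; verify them against the current Video Retrieval REST reference before use.

```python
# Hedged sketch of "PUT - Create an index" as a raw HTTP request.
# Path, api-version, and field names are assumptions, not a verified contract.
import json

API_VERSION = "2023-05-01-preview"  # assumed preview API version

def build_create_index_request(endpoint: str, index_name: str) -> dict:
    """Assemble the method, URL, headers, and JSON body for index creation."""
    url = (f"{endpoint}/computervision/retrieval/indexes/{index_name}"
           f"?api-version={API_VERSION}")
    body = {
        # An optional metadata schema lets you filter searches later.
        "metadataSchema": {
            "fields": [
                {"name": "cameraId", "searchable": False,
                 "filterable": True, "type": "string"},
                {"name": "timestamp", "searchable": False,
                 "filterable": True, "type": "datetime"},
            ]
        },
        # Which features to extract from ingested videos.
        "features": [{"name": "vision", "domain": "surveillance"},
                     {"name": "speech"}],
    }
    headers = {"Ocp-Apim-Subscription-Key": "<your-key>",
               "Content-Type": "application/json"}
    return {"method": "PUT", "url": url,
            "headers": headers, "body": json.dumps(body)}

request = build_create_index_request(
    "https://<your-resource>.cognitiveservices.azure.com", "my-video-index")
```

Send the assembled request with any HTTP client; the remaining steps use the same URL pattern underneath the index.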
@@ -65,7 +38,7 @@ To use the Spatial Analysis Video Retrieval APIs in a typical pattern, you would

 ### Use Video Retrieval APIs for metadata-based search

-The Spatial Analysis Video Retrieval APIs allows a user to add metadata to video files. Metadata is additional information associated with video files such as "Camera ID," "Timestamp," or "Location" that can be used to organize, filter, and search for specific videos. This example demonstrates how to create an index, add video files with associated metadata, and perform searches using different features.
+The Video Retrieval APIs allow a user to add metadata to video files. Metadata is additional information associated with video files such as "Camera ID," "Timestamp," or "Location" that can be used to organize, filter, and search for specific videos. This example demonstrates how to create an index, add video files with associated metadata, and perform searches using different features.

 ### Step 1: Create an Index
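The metadata flow described above (ingest videos with metadata, then run filtered searches) can also be sketched as raw REST calls. In the Python sketch below, the paths (`/ingestions/{name}`, `:queryByText`), the api-version, and the body field names (`videos`, `documentUrl`, `queryText`, `filters`) are assumptions drawn from the 4.0 preview docs, not a verified contract.

```python
# Hedged sketch: "PUT - CreateIngestion" with per-video metadata, plus a
# metadata-filtered natural-language search. Field names are assumptions.
import json

API_VERSION = "2023-05-01-preview"  # assumed preview API version

def build_ingestion_request(endpoint, index_name, ingestion_name, videos):
    """videos: list of (document_url, metadata_dict) pairs to add."""
    url = (f"{endpoint}/computervision/retrieval/indexes/{index_name}"
           f"/ingestions/{ingestion_name}?api-version={API_VERSION}")
    body = {"videos": [{"mode": "add", "documentUrl": doc_url,
                        "metadata": metadata}
                       for doc_url, metadata in videos]}
    return {"method": "PUT", "url": url, "body": json.dumps(body)}

def build_search_request(endpoint, index_name, query, filters=None):
    """POST a natural-language query, optionally filtered on metadata."""
    url = (f"{endpoint}/computervision/retrieval/indexes/{index_name}"
           f":queryByText?api-version={API_VERSION}")
    body = {"queryText": query}
    if filters:
        body["filters"] = filters
    return {"method": "POST", "url": url, "body": json.dumps(body)}

ingest = build_ingestion_request(
    "https://<your-resource>.cognitiveservices.azure.com", "my-video-index",
    "my-ingestion",
    [("https://example.com/video1.mp4", {"cameraId": "camera1"})])
search = build_search_request(
    "https://<your-resource>.cognitiveservices.azure.com", "my-video-index",
    "people walking a dog",
    {"stringFilters": [{"fieldName": "cameraId", "values": ["camera1"]}]})
```

Separating request assembly from sending keeps the sketch testable and makes the assumed shapes easy to swap out once checked against the REST reference.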

articles/ai-services/computer-vision/includes/how-to-guides/analyze-image-40-java.md

Lines changed: 7 additions & 15 deletions
@@ -34,27 +34,23 @@ You can select an image by providing a publicly accessible image URL, or by read

 ### Image URL

-Create a [URL](https://docs.oracle.com/javase/8/docs/api/java/net/URL.html) object for the image you want to analyze.
+Create an `imageUrl` string to hold the publicly accessible URL of the image you want to analyze.

 [!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/4-0/ImageAnalysisHowTo.java?name=snippet_url)]

-
 ### Image buffer

-Alternatively, you can pass in the image as a data array using a **BinaryData** object. For example, read from a local image file you want to analyze.
+Alternatively, you can pass in the image as a memory buffer using a [BinaryData](/java/api/com.azure.core.util.binarydata) object. For example, read from a local image file you want to analyze.

 [!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/4-0/ImageAnalysisHowTo.java?name=snippet_file)]

 ## Select visual features

 The Analysis 4.0 API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](/azure/ai-services/computer-vision/overview-image-analysis) for a description of each feature. The example in this section adds all of the [available visual features](/java/api/com.azure.ai.vision.imageanalysis.models.visualfeatures), but for practical usage you likely need fewer.

-
 > [!IMPORTANT]
 > The visual features [Captions](/java/api/com.azure.ai.vision.imageanalysis.models.visualfeatures#com-azure-ai-vision-imageanalysis-models-visualfeatures-caption) and [DenseCaptions](/java/api/com.azure.ai.vision.imageanalysis.models.visualfeatures#com-azure-ai-vision-imageanalysis-models-visualfeatures-dense-captions) are only supported in the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.

-
-
 [!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/4-0/ImageAnalysisHowTo.java?name=snippet_features)]

 <!--
@@ -67,7 +63,6 @@ To use a custom model, create the [ImageAnalysisOptions](/java/api/com.azure.ai.
 [!code-java[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/java/image-analysis/custom-model/ImageAnalysis.java?name=model_name)]
 -->

-
 ## Select analysis options

 Use an [ImageAnalysisOptions](/java/api/com.azure.ai.vision.imageanalysis.models.imageanalysisoptions) object to specify various options for the Analyze API call.
@@ -78,16 +73,13 @@ Use an [ImageAnalysisOptions](/java/api/com.azure.ai.vision.imageanalysis.models

 [!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/4-0/ImageAnalysisHowTo.java?name=snippet_options)]

+## Call the analyzeFromUrl method

+This section shows you how to make an analysis call to the service.

+Call the [analyzeFromUrl](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisclient#method-summary) method on the [ImageAnalysisClient](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisclient) object, as shown here. The call is synchronous, and blocks until the service returns the results or an error occurs. Alternatively, you can use an [ImageAnalysisAsyncClient](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisasyncclient) object instead, and call its [analyzeFromUrl](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisasyncclient#method-summary) method, which is non-blocking.

-## Call the Analyze API
-
-This section shows you how to make an analysis call to the service.
-
-Call the [analyze](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisclient#method-summary) method on the [ImageAnalysisClient](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisclient) object, as shown here. The call is synchronous, and will block until the service returns the results or an error occurred. Alternatively, you can use a [ImageAnalysisAsyncClient](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisasyncclient) object instead, and call its [analyze](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisasyncclient#method-summary) method which is non-blocking.
-
-Use the input objects created in the above sections. To analyze from an image buffer instead of URL, replace `imageURL` in the method call with `imageData`.
+To analyze from an image buffer instead of a URL, call the [analyze](/java/api/com.azure.ai.vision.imageanalysis.imageanalysisclient#method-summary) method instead, and pass in `imageData` as the first argument.

 [!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/4-0/ImageAnalysisHowTo.java?name=snippet_call)]

@@ -104,7 +96,7 @@ The code is similar to the standard model case. The only difference is that resu

 ## Get results from the service

-The following code shows you how to parse the results of the various Analyze operations.
+The following code shows you how to parse the results from the **analyzeFromUrl** and **analyze** operations.

 [!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/4-0/ImageAnalysisHowTo.java?name=snippet_results)]

articles/ai-services/computer-vision/includes/how-to-guides/analyze-image-40-python.md

Lines changed: 4 additions & 4 deletions
@@ -43,7 +43,7 @@ You can use the following sample image URL.

 ### Image buffer

-Alternatively, you can pass in the image as a data array. For example, read from a local image file you want to analyze.
+Alternatively, you can pass in the image as a [bytes](https://docs.python.org/3/library/stdtypes.html#bytes-objects) object. For example, read from a local image file you want to analyze.

 [!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/4-0/how-to.py?name=snippet_file)]

@@ -63,9 +63,9 @@ To use a custom model, create the [ImageAnalysisOptions](/python/api/azure-ai-vi
 [!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/custom-model/main.py?name=model_name)]
 -->

-## Call the Analyze API with options
+## Call the analyze_from_url method with options

-The following code calls the [Analyze API](/python/api/azure-ai-vision-imageanalysis/azure.ai.vision.imageanalysis.imageanalysisclient#azure-ai-vision-imageanalysis-imageanalysisclient-analyze) with the features you selected above and additional options, defined below. To analyze from an image buffer instead of URL, replace `image_url=image_url` in the method call with `image_data=image_data`.
+The following code calls the [analyze_from_url](/python/api/azure-ai-vision-imageanalysis/azure.ai.vision.imageanalysis.imageanalysisclient#azure-ai-vision-imageanalysis-imageanalysisclient-analyzefromurl) method on the client with the features you selected above and the additional options defined below. To analyze from an image buffer instead of a URL, call the [analyze](/python/api/azure-ai-vision-imageanalysis/azure.ai.vision.imageanalysis.imageanalysisclient#azure-ai-vision-imageanalysis-imageanalysisclient-analyze) method instead, with `image_data=image_data` as the first argument.

 [!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/4-0/how-to.py?name=snippet_call)]
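The URL-versus-buffer branching described above can be captured in a tiny helper. This is a hedged sketch: it only assembles the method name and keyword arguments, and the names `analyze_from_url`, `analyze`, `image_url`, and `image_data` mirror the `azure-ai-vision-imageanalysis` package as described here; check the current SDK reference before relying on them.

```python
# Hedged sketch: decide between analyze_from_url and analyze based on the
# input type. Method/parameter names mirror the azure-ai-vision-imageanalysis
# SDK as an assumption; verify against the current package reference.
def choose_analyze_call(source, visual_features, language="en"):
    """Return (method_name, kwargs). A str source is treated as a public
    image URL; bytes (or bytearray) as an in-memory image buffer."""
    kwargs = {"visual_features": visual_features, "language": language}
    if isinstance(source, (bytes, bytearray)):
        kwargs["image_data"] = bytes(source)
        return "analyze", kwargs
    kwargs["image_url"] = source
    return "analyze_from_url", kwargs

method, kwargs = choose_analyze_call("https://example.com/photo.jpg",
                                     ["CAPTION", "READ"])
# Dispatch with: result = getattr(client, method)(**kwargs)
```

Here `client` would be an `ImageAnalysisClient`; the helper keeps the rest of the calling code identical for both input kinds.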

@@ -85,7 +85,7 @@ You can specify the language of the returned data. The language is optional, wit

 ## Get results from the service

-The following code shows you how to parse the results of the various **analyze** operations.
+The following code shows you how to parse the results from the **analyze_from_url** or **analyze** operations.

 [!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/4-0/how-to.py?name=snippet_result)]
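As an illustration of the parsing step, here is a hedged sketch that walks a result shaped like the SDK's result model (`caption.text`/`caption.confidence`, and `read.blocks[].lines[].text`). The attribute layout is an assumption drawn from the SDK docs, and the stand-in object below exists only for demonstration.

```python
from types import SimpleNamespace as NS

# Hedged sketch: summarize a result object whose attribute layout mirrors
# the azure-ai-vision-imageanalysis result model (an assumption).
def summarize_result(result):
    """Collect human-readable lines for the caption and any read text."""
    summary = []
    caption = getattr(result, "caption", None)
    if caption is not None:
        summary.append(f"Caption: '{caption.text}' ({caption.confidence:.2f})")
    read = getattr(result, "read", None)
    if read is not None:
        for block in read.blocks:
            for line in block.lines:
                summary.append(f"Text: {line.text}")
    return summary

# Stand-in for a real service response, for demonstration only.
demo = NS(caption=NS(text="a dog", confidence=0.87),
          read=NS(blocks=[NS(lines=[NS(text="EXIT")])]))
print(summarize_result(demo))
```

Guarding each attribute with `getattr` keeps the helper usable when a visual feature was not requested and the corresponding field is absent.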

articles/ai-services/computer-vision/includes/quickstarts-sdk/image-analysis-java-sdk-40.md

Lines changed: 15 additions & 9 deletions
@@ -35,31 +35,37 @@ Use the Image Analysis client SDK for Java to analyze an image to read text and
 Open a console window and create a new folder for your quickstart application.

 1. Open a text editor and copy the following content to a new file. Save the file as `pom.xml` in your project directory
-    <!-- [!INCLUDE][](https://raw.githubusercontent.com/Azure-Samples/azure-ai-vision-sdk/main/docs/learn.microsoft.com/java/image-analysis/quick-start/pom.xml)] -->
+    <!-- [!INCLUDE][](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-quickstart-code/master/java/ComputerVision/4-0/pom.xml)] -->
     ```xml
     <project xmlns="http://maven.apache.org/POM/4.0.0"
-        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+            xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
       <modelVersion>4.0.0</modelVersion>
-      <groupId>azure.ai.vision.imageanalysis.samples</groupId>
-      <artifactId>image-analysis-quickstart</artifactId>
-      <version>0.0</version>
+      <groupId>com.example</groupId>
+      <artifactId>my-application-name</artifactId>
+      <version>1.0.0</version>
       <dependencies>
+        <!-- https://mvnrepository.com/artifact/com.azure/azure-ai-vision-imageanalysis -->
         <dependency>
           <groupId>com.azure</groupId>
           <artifactId>azure-ai-vision-imageanalysis</artifactId>
-          <version>1.0.0-beta.1</version>
+          <version>1.0.0-beta.2</version>
         </dependency>
+        <!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-nop -->
+        <!-- Optional: provide a slf4j implementation. Here we use a no-op implementation
+        just to make the slf4j console spew warning go away. We can still use the internal
+        logger in azure.core library. See
+        https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/vision/azure-ai-vision-imageanalysis#enable-http-requestresponse-logging -->
         <dependency>
           <groupId>org.slf4j</groupId>
           <artifactId>slf4j-nop</artifactId>
-          <version>1.7.36</version>
+          <version>1.7.36</version>
         </dependency>
       </dependencies>
     </project>
     ```

-1. Update the version value (`1.0.0-beta.1`) based on the latest available version of the [azure-ai-vision-imageanalysis](https://aka.ms/azsdk/image-analysis/package/maven) package in the Maven repository.
+1. Update the version value (`1.0.0-beta.2`) based on the latest available version of the [azure-ai-vision-imageanalysis](https://aka.ms/azsdk/image-analysis/package/maven) package in the Maven repository.
 1. Install the SDK and dependencies by running the following in the project directory:
     ```console
     mvn clean dependency:copy-dependencies
articles/ai-services/computer-vision/includes/video-retrieval-input.md (new file)

Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
+---
+title: "Video Retrieval input requirements"
+titleSuffix: "Azure AI services"
+#services: cognitive-services
+author: PatrickFarley
+manager: nitinme
+ms.service: azure-ai-vision
+ms.custom:
+ms.topic: include
+ms.date: 02/12/2024
+ms.author: pafarley
+---
+
+
+
+### Supported formats
+
+| File format | Description |
+| ----------- | ----------- |
+| `asf` | ASF (Advanced / Active Streaming Format) |
+| `avi` | AVI (Audio Video Interleaved) |
+| `flv` | FLV (Flash Video) |
+| `matroskamm`, `webm` | Matroska / WebM |
+| `mov`,`mp4`,`m4a`,`3gp`,`3g2`,`mj2` | QuickTime / MOV |
+
+### Supported video codecs
+
+| Codec | Format |
+| ----------- | ----------- |
+| `h264` | H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 |
+| `h265` | H.265/HEVC |
+| `libvpx-vp9` | libvpx VP9 (codec vp9) |
+| `mpeg4` | MPEG-4 part 2 |
+
+### Supported audio codecs
+
+| Codec | Format |
+| ----------- | ----------- |
+| `aac` | AAC (Advanced Audio Coding) |
+| `mp3` | MP3 (MPEG audio layer 3) |
+| `pcm` | PCM (uncompressed) |
+| `vorbis` | Vorbis |
+| `wmav2` | Windows Media Audio 2 |

articles/ai-services/computer-vision/index.yml

Lines changed: 6 additions & 3 deletions
@@ -115,17 +115,20 @@ conceptualContent:
     footerLink:
       text: More
       url: overview-identity.md
-  - title: Spatial Analysis
+  - title: Video Analysis
     links:
       - itemType: overview
-        text: About Spatial Analysis
+        text: About Video Analysis
         url: intro-to-spatial-analysis-public-preview.md
       - itemType: quickstart
         text: Get started with Spatial Analysis
         url: spatial-analysis-container.md
       - itemType: how-to-guide
         text: Configure Spatial Analysis operations
         url: spatial-analysis-operations.md
+      - itemType: how-to-guide
+        text: Call the Video Retrieval APIs
+        url: how-to/video-retrieval.md
       - itemType: concept
         text: Zone and line placement
         url: spatial-analysis-zone-line-placement.md
@@ -137,7 +140,7 @@ conceptualContent:
         url: spatial-analysis-logging.md
     footerLink:
       text: More
-      url: index-spatial-analysis.yml
+      url: intro-to-spatial-analysis-public-preview.md

 tools:
   title: Software development kits (SDKs)
