Commit 4bce020

comvis freshness
1 parent 132503d commit 4bce020

File tree

5 files changed: +54 -54 lines changed


articles/ai-services/computer-vision/faq.yml

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ metadata:
 ms.service: azure-ai-vision
 ms.topic: faq
-ms.date: 02/27/2024
+ms.date: 07/28/2025
 ms.collection: "ce-skilling-fresh-tier2, ce-skilling-ai-copilot"
 ms.update-cycle: 365-days
 ms.author: pafarley

articles/ai-services/computer-vision/how-to/analyze-video.md

Lines changed: 24 additions & 24 deletions
@@ -9,26 +9,26 @@ ms.update-cycle: 365-days
 ms.author: pafarley
 ms.service: azure-ai-vision
 ms.topic: how-to
-ms.date: 02/27/2024
+ms.date: 07/28/2025
 ms.devlang: csharp
 ms.custom: devx-track-csharp, cogserv-non-critical-vision
 ---

 # Analyze videos in near real time

-This article demonstrates how to use the Azure AI Vision API to perform near real-time analysis on frames that are taken from a live video stream. The basic elements of such an analysis are:
+This article shows how to use the Azure AI Vision API to analyze frames from a live video stream in near real time. The basic elements of this analysis are:

-- Acquiring frames from a video source
-- Selecting which frames to analyze
-- Submitting these frames to the API
-- Consuming each analysis result that's returned from the API call
+- Getting frames from a video source
+- Choosing which frames to analyze
+- Sending these frames to the API
+- Using each analysis result that the API returns

 > [!TIP]
 > The samples in this article are written in C#. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) page on GitHub.

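The "choosing which frames to analyze" step above commonly means sampling the stream at a fixed rate rather than submitting every captured frame. The following sketch is an illustration only, not code from this commit or the GitHub sample; the `FrameSelector` class and its members are hypothetical names:

```csharp
using System;

// Hypothetical helper: decides which captured frames to analyze,
// based on a target analysis rate. The real sample uses a
// timer-driven approach instead.
public class FrameSelector
{
    private readonly int _everyNth;
    private int _count;

    public FrameSelector(double cameraFps, double analysisFps)
    {
        // Analyze roughly one out of every N captured frames.
        _everyNth = Math.Max(1, (int)(cameraFps / analysisFps));
    }

    public bool ShouldAnalyze()
    {
        return _count++ % _everyNth == 0;
    }
}

public static class FrameSelectorDemo
{
    public static void Main()
    {
        // 30 fps camera, ~1 analysis call per second => every 30th frame.
        var selector = new FrameSelector(30, 1);
        int analyzed = 0;
        for (int frame = 0; frame < 90; frame++)
        {
            if (selector.ShouldAnalyze()) analyzed++;
        }
        Console.WriteLine(analyzed); // frames 0, 30, 60 => 3
    }
}
```

Sampling like this keeps the request rate to the cloud API predictable regardless of the camera's capture rate.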
 ## Approaches to running near real-time analysis

-You can solve the problem of running near real-time analysis on video streams using a variety of approaches. This article outlines three of them, in increasing levels of sophistication.
+You can solve the problem of running near real-time analysis on video streams by using a variety of approaches. This article outlines three of them, in increasing levels of sophistication.

 ### Method 1: Design an infinite loop

@@ -46,11 +46,11 @@ while (true)
 }
 ```

-If your analysis were to consist of a lightweight, client-side algorithm, this approach would be suitable. However, when the analysis occurs in the cloud, the resulting latency means that an API call might take several seconds. During this time, you're not capturing images, and your thread is essentially doing nothing. Your maximum frame rate is limited by the latency of the API calls.
+If your analysis consists of a lightweight, client-side algorithm, this approach is suitable. However, when the analysis occurs in the cloud, the resulting latency means that an API call might take several seconds. During this time, you don't capture images, and your thread is essentially doing nothing. Your maximum frame rate is limited by the latency of the API calls.

 ### Method 2: Allow the API calls to run in parallel

-Although a simple, single-threaded loop makes sense for a lightweight, client-side algorithm, it doesn't fit well with the latency of a cloud API call. The solution to this problem is to allow the long-running API call to run in parallel with the frame-grabbing. In C#, you could do this by using task-based parallelism. For example, you can run the following code:
+Although a simple, single-threaded loop makes sense for a lightweight, client-side algorithm, it doesn't fit well with the latency of a cloud API call. The solution to this problem is to allow the long-running API call to run in parallel with the frame-grabbing. In C#, you can do this by using task-based parallelism. For example, you can run the following code:

 ```csharp
 while (true)
@@ -68,13 +68,13 @@ while (true)
 ```

 With this approach, you launch each analysis in a separate task. The task can run in the background while you continue grabbing new frames. The approach avoids blocking the main thread as you wait for an API call to return. However, the approach can present certain disadvantages:
-* It costs you some of the guarantees that the simple version provided. That is, multiple API calls might occur in parallel, and the results might get returned in the wrong order.
-* It could also cause multiple threads to enter the ConsumeResult() function simultaneously, which might be dangerous if the function isn't thread-safe.
+* You lose some of the guarantees that the simple version provided. That is, multiple API calls might occur in parallel, and the results might get returned in the wrong order.
+* It could cause multiple threads to enter the `ConsumeResult()` function simultaneously, which might be dangerous if the function isn't thread-safe.
 * Finally, this simple code doesn't keep track of the tasks that get created, so exceptions silently disappear. Thus, you need to add a "consumer" thread that tracks the analysis tasks, raises exceptions, kills long-running tasks, and ensures that the results get consumed in the correct order, one at a time.

 ### Method 3: Design a producer-consumer system

-To design a "producer-consumer" system, you build a producer thread that looks similar to the previous section's infinite loop. Then, instead of consuming the analysis results as soon as they're available, the producer simply places the tasks in a queue to keep track of them.
+To design a "producer-consumer" system, build a producer thread that looks similar to the previous section's infinite loop. Then, instead of consuming the analysis results as soon as they're available, the producer simply places the tasks in a queue to keep track of them.

 ```csharp
 // Queue that will contain the API call tasks.
@@ -111,7 +111,7 @@ while (true)
 }
 ```

-You also create a consumer thread, which takes tasks off the queue, waits for them to finish, and either displays the result or raises the exception that was thrown. By using this queue, you can guarantee that the results get consumed one at a time, in the correct order, without limiting the maximum frame rate of the system.
+Also create a consumer thread, which takes tasks off the queue, waits for them to finish, and either displays the result or raises the exception that was thrown. By using this queue, you can guarantee that the results get consumed one at a time, in the correct order, without limiting the maximum frame rate of the system.

 ```csharp
 // Consumer thread.
@@ -139,13 +139,13 @@ while (true)

 ### Get sample code

-To help get your app up and running as quickly as possible, we've implemented the system that's described in the previous section. It's intended to be flexible enough to accommodate many scenarios, while being easy to use. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) repo on GitHub.
+To help you get your app running as quickly as possible, we implemented the system described in the previous section. It's intended to be flexible enough to accommodate many scenarios, while being easy to use. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) repo on GitHub.

-The library contains the `FrameGrabber` class, which implements the producer-consumer system to process video frames from a webcam. Users can specify the exact form of the API call, and the class uses events to let the calling code know when a new frame is acquired, or when a new analysis result is available.
+The library contains the `FrameGrabber` class, which implements the producer-consumer system to process video frames from a webcam. Users can specify the exact form of the API call, and the class uses events to let the calling code know when a new frame is acquired or when a new analysis result is available.

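The event-driven flow that the `FrameGrabber` paragraph describes can be sketched as follows. This is a minimal, synchronous stand-in, not the sample's real `FrameGrabber` API; the event and property names here are assumptions made for illustration, and the "analysis" is simulated:

```csharp
using System;
using System.Collections.Generic;

// Toy stand-in for an event-driven frame grabber: the caller supplies
// the exact form of the "API call" and subscribes to events for new
// frames and new results. Real frames would come from a webcam.
public class MiniFrameGrabber
{
    // Raised whenever a new frame is acquired.
    public event Action<int> NewFrameAcquired;
    // Raised whenever an analysis result is available.
    public event Action<string> NewResultAvailable;
    // Caller-supplied analysis function (stands in for the API call).
    public Func<int, string> AnalysisFunction { get; set; }

    public void Run(int frameCount)
    {
        for (int frameId = 0; frameId < frameCount; frameId++)
        {
            NewFrameAcquired?.Invoke(frameId);
            if (AnalysisFunction != null)
            {
                NewResultAvailable?.Invoke(AnalysisFunction(frameId));
            }
        }
    }
}

public static class MiniFrameGrabberDemo
{
    public static void Main()
    {
        var grabber = new MiniFrameGrabber
        {
            AnalysisFunction = frameId => $"frame {frameId}: 1 face"
        };
        var results = new List<string>();
        grabber.NewResultAvailable += results.Add;
        grabber.Run(2);
        Console.WriteLine(string.Join("; ", results));
        // frame 0: 1 face; frame 1: 1 face
    }
}
```

The design choice here mirrors the description above: events decouple the grabbing loop from whatever the calling code wants to do with frames and results.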

 ### View sample implementations

-To illustrate some of the possibilities, we've provided two sample apps that use the library.
+To illustrate some of the possibilities, we provide two sample apps that use the library.

 The first sample app is a simple console app that grabs frames from the default webcam and then submits them to the Face service for face detection. A simplified version of the app is represented in the following code:

@@ -221,27 +221,27 @@ The second sample app offers more functionality. It allows you to choose which A

 In most modes, there's a visible delay between the live video on the left and the visualized analysis on the right. This delay is the time that it takes to make the API call. An exception is in the `EmotionsWithClientFaceDetect` mode, which performs face detection locally on the client computer by using OpenCV before it submits any images to Azure AI services.

-By using this approach, you can visualize the detected face immediately. You can then update the attributes later, after the API call returns. This demonstrates the possibility of a "hybrid" approach. That is, some simple processing can be performed on the client, and then Azure AI services APIs can be used to augment this processing with more advanced analysis when necessary.
+By using this approach, you can visualize the detected face immediately. You can then update the attributes later, after the API call returns. This demonstrates a "hybrid" approach: some simple processing can be performed on the client, and then Azure AI services APIs can augment this processing with more advanced analysis when necessary.

 ![The LiveCameraSample app displaying an image with tags](../images/frame-by-frame.jpg)
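The hybrid pattern described above, a cheap local check first and a cloud call only when needed, can be outlined as follows. This is an editor's illustration with hypothetical names, not code from the sample; the real LiveCameraSample uses OpenCV for the local step:

```csharp
using System;

// Illustrative hybrid gate: run a cheap local detector on every frame,
// and only forward frames that pass it to the expensive cloud API.
// LocalFaceLikely and the counting logic are hypothetical stand-ins.
public static class HybridGateDemo
{
    // Pretend local detector: a cheap heuristic that rejects half the frames.
    static bool LocalFaceLikely(int frameId) => frameId % 2 == 0;

    public static int CountCloudCalls(int frameCount)
    {
        int cloudCalls = 0;
        for (int frameId = 0; frameId < frameCount; frameId++)
        {
            if (LocalFaceLikely(frameId))
            {
                // The expensive cloud analysis call would happen here.
                cloudCalls++;
            }
        }
        return cloudCalls;
    }

    public static void Main()
    {
        // Only half of 10 frames reach the cloud.
        Console.WriteLine(CountCloudCalls(10)); // 5
    }
}
```

Gating the cloud call this way reduces both cost and the number of requests in flight, at the price of occasionally missing a frame the local heuristic rejects.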

 ### Integrate samples into your codebase

-To get started with this sample, do the following:
+To get started with this sample, complete the following steps:

-1. Create an [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you already have one, you can skip to the next step.
+1. Create an [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you already have an account, go to the next step.
 1. Create resources for Azure AI Vision and Face in the Azure portal to get your key and endpoint. Make sure to select the free tier (F0) during setup.
    - [Azure AI Vision](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision)
    - [Face](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace)
-   After the resources are deployed, select **Go to resource** to collect your key and endpoint for each resource.
+   After the portal deploys the resources, select **Go to resource** to collect your key and endpoint for each resource.
 1. Clone the [Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) GitHub repo.
-1. Open the sample in Visual Studio 2015 or later, and then build and run the sample applications:
-   - For BasicConsoleSample, the Face key is hard-coded directly in [BasicConsoleSample/Program.cs](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/blob/master/Windows/BasicConsoleSample/Program.cs).
-   - For LiveCameraSample, enter the keys in the **Settings** pane of the app. The keys are persisted across sessions as user data.
+1. Open the sample in Visual Studio 2015 or later, then build and run the sample applications:
+   - For BasicConsoleSample, hard-code the Face key directly in [BasicConsoleSample/Program.cs](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/blob/master/Windows/BasicConsoleSample/Program.cs).
+   - For LiveCameraSample, enter the keys in the **Settings** pane of the app. The app persists the keys across sessions as user data.

 When you're ready to integrate the samples, reference the **VideoFrameAnalyzer** library from your own projects.

-The image-, voice-, video-, and text-understanding capabilities of **VideoFrameAnalyzer** use Azure AI services. Microsoft receives the images, audio, video, and other data that you upload (through this app) and might use them for service-improvement purposes. We ask for your help in protecting the people whose data your app sends to Azure AI services.
+The image-, voice-, video-, and text-understanding capabilities of **VideoFrameAnalyzer** use Azure AI services. Microsoft receives the images, audio, video, and other data that you upload through this app and might use them for service-improvement purposes. We ask for your help in protecting the people whose data your app sends to Azure AI services.

 ## Next steps

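As a closing note on this file's changes: the producer-consumer pattern the article teaches can be sketched end to end as follows. This is a compact editor's illustration, not the sample's actual `FrameGrabber` implementation; `AnalyzeAsync` stands in for the real Azure AI Vision request, and the delay simulates network latency:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class ProducerConsumerDemo
{
    // Stand-in for the real Azure AI Vision API call.
    private static async Task<string> AnalyzeAsync(int frameId)
    {
        await Task.Delay(10); // simulate API latency
        return $"result {frameId}";
    }

    public static async Task<List<string>> RunAsync(int frameCount)
    {
        // Queue that holds the in-flight analysis tasks, in launch order.
        var queue = new BlockingCollection<Task<string>>();

        // Producer: launch each analysis without waiting for it to finish.
        var producer = Task.Run(() =>
        {
            for (int i = 0; i < frameCount; i++)
            {
                queue.Add(AnalyzeAsync(i));
            }
            queue.CompleteAdding();
        });

        // Consumer: drain the queue one task at a time, preserving order.
        var results = new List<string>();
        foreach (var task in queue.GetConsumingEnumerable())
        {
            results.Add(await task); // exceptions would surface here
        }
        await producer;
        return results;
    }

    public static async Task Main()
    {
        foreach (var result in await RunAsync(5))
        {
            Console.WriteLine(result); // result 0 ... result 4, in order
        }
    }
}
```

Because the consumer awaits the tasks in the order they were enqueued, results are delivered one at a time and in order, while the producer keeps launching new analyses in parallel.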

articles/ai-services/computer-vision/includes/model-customization-deprecation.md

Lines changed: 2 additions & 2 deletions
@@ -12,6 +12,6 @@ ms.author: pafarley
 ---

 > [!IMPORTANT]
-> This feature is now deprecated. On March 31, 2025, Azure AI Image Analysis 4.0 Custom Image Classification, Custom Object Detection, and Product Recognition preview API will be retired. After this date, API calls to these services will fail.
+> This feature is now retired. On March 31, 2025, the Azure AI Image Analysis 4.0 Custom Image Classification, Custom Object Detection, and Product Recognition preview APIs were retired. API calls to these services now fail.
 >
-> To maintain a smooth operation of your models, transition to [Azure AI Custom Vision](/azure/ai-services/Custom-Vision-Service/overview), which is now generally available. Custom Vision offers similar functionality to these retiring features.
+> Transition to [Azure AI Custom Vision](/azure/ai-services/Custom-Vision-Service/overview), which is now generally available. Custom Vision offers similar functionality to these retired features.

articles/ai-services/computer-vision/sdk/install-sdk.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-vision
 ms.topic: quickstart
-ms.date: 06/01/2024
+ms.date: 07/28/2025
 ms.collection: "ce-skilling-fresh-tier2, ce-skilling-ai-copilot"
 ms.update-cycle: 365-days
 ms.author: pafarley
