This article shows how to use the Azure AI Vision API to analyze frames from a live video stream in near real time. The basic elements of this analysis are:
- Getting frames from a video source
- Choosing which frames to analyze
- Sending these frames to the API
- Using each analysis result that the API returns
> [!TIP]
> The samples in this article are written in C#. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) page on GitHub.
## Approaches to running near real-time analysis
You can solve the problem of running near real-time analysis on video streams by using a variety of approaches. This article outlines three of them, in increasing levels of sophistication.
### Method 1: Design an infinite loop
```csharp
while (true)
{
    // Grab a frame, analyze it, and wait before grabbing the next one.
    // (The loop body is elided in the source.)
}
```
If your analysis consists of a lightweight, client-side algorithm, this approach is suitable. However, when the analysis occurs in the cloud, the resulting latency means that an API call might take several seconds. During this time, you don't capture images, and your thread is essentially doing nothing. Your maximum frame rate is limited by the latency of the API calls.
### Method 2: Allow the API calls to run in parallel
Although a simple, single-threaded loop makes sense for a lightweight, client-side algorithm, it doesn't fit well with the latency of a cloud API call. The solution to this problem is to allow the long-running API call to run in parallel with the frame-grabbing. In C#, you can do this by using task-based parallelism. For example, you can run the following code:
```csharp
while (true)
{
    // Grab a frame and launch its analysis in a background task.
    // (The loop body is elided in the source.)
}
```
With this approach, you launch each analysis in a separate task. The task can run in the background while you continue grabbing new frames. The approach avoids blocking the main thread as you wait for an API call to return. However, the approach can present certain disadvantages:
* Finally, this simple code doesn't keep track of the tasks that get created, so exceptions silently disappear. Thus, you need to add a "consumer" thread that tracks the analysis tasks, raises exceptions, kills long-running tasks, and ensures that the results get consumed in the correct order, one at a time.
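To make the tradeoff concrete, here's a minimal, self-contained sketch of the task-based approach. This is only an illustration of the pattern, not the sample's code: the type and method names (`ParallelAnalysisSketch`, `RunAsync`) are hypothetical, and `Task.Delay` stands in for the cloud API call.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class ParallelAnalysisSketch
{
    // Each "frame" gets its own analysis task, so frame-grabbing is never
    // blocked by API latency, but results can complete out of order.
    public static async Task<int> RunAsync(int frameCount)
    {
        var results = new ConcurrentBag<string>();
        var tasks = new Task[frameCount];

        for (int frame = 0; frame < frameCount; frame++)
        {
            int f = frame; // capture the loop variable for the lambda
            tasks[f] = Task.Run(async () =>
            {
                // Stand-in for a cloud API call with variable latency.
                await Task.Delay(10 * ((f % 3) + 1));
                results.Add($"Result for frame {f}");
            });
        }

        // Real "fire-and-forget" code would skip this step, which is
        // exactly why exceptions and result ordering become a problem.
        await Task.WhenAll(tasks);
        return results.Count;
    }
}
```

Because nothing orders the completions, consuming `results` here can interleave frames arbitrarily, which is the disadvantage a producer-consumer design addresses.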
### Method 3: Design a producer-consumer system
To design a "producer-consumer" system, build a producer thread that looks similar to the previous section's infinite loop. Then, instead of consuming the analysis results as soon as they're available, the producer simply places the tasks in a queue to keep track of them.
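That queueing idea can be sketched as a small, self-contained program. This is not the sample's implementation; `AnalyzeAsync` simulates the long-running cloud call, and all names are hypothetical.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class ProducerConsumerSketch
{
    public static async Task<List<string>> RunAsync(int frameCount)
    {
        // The producer places analysis *tasks* in a queue as frames arrive.
        var queue = new BlockingCollection<Task<string>>(boundedCapacity: 8);

        var producer = Task.Run(() =>
        {
            for (int frame = 0; frame < frameCount; frame++)
            {
                queue.Add(AnalyzeAsync(frame)); // start analysis, keep a handle
            }
            queue.CompleteAdding();
        });

        // The consumer awaits each task in queue order, so results are
        // consumed one at a time, in order, and exceptions surface here.
        var results = new List<string>();
        foreach (var task in queue.GetConsumingEnumerable())
        {
            results.Add(await task);
        }
        await producer;
        return results;
    }

    // Stand-in for the long-running cloud API call.
    static async Task<string> AnalyzeAsync(int frame)
    {
        await Task.Delay(20);
        return $"Result for frame {frame}";
    }
}
```

Awaiting the queued tasks in order is what restores ordering and exception visibility without ever blocking the producer's frame-grabbing loop.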
To help you get your app running as quickly as possible, we implemented the system described in the previous section. It's intended to be flexible enough to accommodate many scenarios, while being easy to use. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) repo on GitHub.
The library contains the `FrameGrabber` class, which implements the producer-consumer system to process video frames from a webcam. Users can specify the exact form of the API call, and the class uses events to let the calling code know when a new frame is acquired or when a new analysis result is available.
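The event-driven shape described here can be illustrated with a simplified, self-contained class. This is not the sample's actual `FrameGrabber`; every member name below is hypothetical, and integers stand in for real video frames.

```csharp
using System;
using System.Threading.Tasks;

// A toy stand-in for an event-driven frame grabber: the caller supplies
// the analysis function and subscribes to events for frames and results.
public class MiniFrameGrabber<TResult>
{
    public Func<int, Task<TResult>> AnalysisFunction { get; set; }
    public event Action<int> NewFrameProvided;
    public event Action<TResult> NewResultAvailable;

    public async Task ProcessFramesAsync(int frameCount)
    {
        for (int frame = 0; frame < frameCount; frame++)
        {
            NewFrameProvided?.Invoke(frame);        // a frame was acquired
            if (AnalysisFunction != null)
            {
                TResult result = await AnalysisFunction(frame);
                NewResultAvailable?.Invoke(result); // a result is ready
            }
        }
    }
}
```

Calling code sets `AnalysisFunction` to its API call and handles the two events, which mirrors the division of labor the paragraph describes.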
### View sample implementations
To illustrate some of the possibilities, we've provided two sample apps that use the library.
The second sample app offers more functionality. It allows you to choose which A
In most modes, there's a visible delay between the live video on the left and the visualized analysis on the right. This delay is the time that it takes to make the API call. An exception is in the `EmotionsWithClientFaceDetect` mode, which performs face detection locally on the client computer by using OpenCV before it submits any images to Azure AI services.
articles/ai-services/computer-vision/includes/model-customization-deprecation.md
---
> [!IMPORTANT]
> This feature is now retired. On March 31, 2025, the Azure AI Image Analysis 4.0 Custom Image Classification, Custom Object Detection, and Product Recognition preview APIs were retired. API calls to these services will fail.
>
> Transition to [Azure AI Custom Vision](/azure/ai-services/Custom-Vision-Service/overview), which is now generally available. Custom Vision offers similar functionality to these retired features.