
Commit 10f3a2d

Merge pull request #203815 from PatrickFarley/freshness-pass
[cog svcs] Freshness pass
2 parents 94122ba + 724cbf9 commit 10f3a2d

10 files changed (+56, -61 lines)

articles/cognitive-services/Computer-vision/concept-brand-detection.md

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: computer-vision
 ms.topic: conceptual
-ms.date: 01/05/2022
+ms.date: 07/05/2022
 ms.author: pafarley
 ---

articles/cognitive-services/Computer-vision/concept-categorizing-images.md

Lines changed: 4 additions & 4 deletions
@@ -1,24 +1,24 @@
 ---
 title: Image categorization - Computer Vision
 titleSuffix: Azure Cognitive Services
-description: Learn concepts related to the image categorization feature of the Computer Vision API.
+description: Learn concepts related to the image categorization feature of the Image Analysis API.
 services: cognitive-services
 author: PatrickFarley
 manager: nitinme
 
 ms.service: cognitive-services
 ms.subservice: computer-vision
 ms.topic: conceptual
-ms.date: 04/17/2019
+ms.date: 07/05/2022
 ms.author: pafarley
 ms.custom: seodec18
 ---
 
 # Categorize images by subject matter
 
-In addition to tags and a description, Computer Vision returns the taxonomy-based categories detected in an image. Unlike tags, categories are organized in a parent/child hereditary hierarchy, and there are fewer of them (86, as opposed to thousands of tags). All category names are in English. Categorization can be done by itself or alongside the newer tags model.
+In addition to tags and a description, Image Analysis can return the taxonomy-based categories detected in an image. Unlike tags, categories are organized in a parent/child hierarchy, and there are fewer of them (86, as opposed to thousands of tags). All category names are in English. Categorization can be done by itself or alongside the newer tags model.
 
-## The 86-category concept
+## The 86-category hierarchy
 
 Computer vision can categorize an image broadly or specifically, using the list of 86 categories in the following diagram. For the full taxonomy in text format, see [Category Taxonomy](category-taxonomy.md).

articles/cognitive-services/Computer-vision/concept-detecting-adult-content.md

Lines changed: 1 addition & 4 deletions
@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: computer-vision
 ms.topic: conceptual
-ms.date: 10/01/2019
+ms.date: 07/05/2022
 ms.author: pafarley
 ms.custom: seodec18
 ---
@@ -18,9 +18,6 @@ ms.custom: seodec18
 
 Computer Vision can detect adult material in images so that developers can restrict the display of these images in their software. Content flags are applied with a score between zero and one so developers can interpret the results according to their own preferences.
 
-> [!NOTE]
-> Much of this functionality is offered by the [Azure Content Moderator](../content-moderator/overview.md) service. See this alternative for solutions to more rigorous content moderation scenarios, such as text moderation and human review workflows.
-
 Try out the adult content detection features quickly and easily in your browser using Vision Studio.
 
 > [!div class="nextstepaction"]

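The zero-to-one content flags that this page describes are meant to be compared against thresholds the developer chooses. A minimal Python sketch (the response shape and the `adultScore`/`racyScore` field names are assumptions modeled on the Analyze Image `adult` feature, not taken from this commit):

```python
def flag_image(adult_result: dict, adult_threshold: float = 0.5,
               racy_threshold: float = 0.5) -> dict:
    """Interpret zero-to-one content scores against caller-chosen thresholds.

    `adult_result` is assumed to look like the `adult` object of an
    Analyze Image response, e.g. {"adultScore": 0.93, "racyScore": 0.54}.
    """
    return {
        "block_adult": adult_result.get("adultScore", 0.0) >= adult_threshold,
        "block_racy": adult_result.get("racyScore", 0.0) >= racy_threshold,
    }

# A stricter app can lower the thresholds; a more permissive one can raise them.
decision = flag_image({"adultScore": 0.93, "racyScore": 0.10}, adult_threshold=0.8)
```

Because the service returns scores rather than a single verdict, each application can set its own cutoff rather than inherit a fixed one.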
articles/cognitive-services/Computer-vision/how-to/analyze-video.md

Lines changed: 5 additions & 4 deletions
@@ -8,7 +8,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: computer-vision
 ms.topic: how-to
-ms.date: 09/09/2019
+ms.date: 07/05/2022
 ms.devlang: csharp
 ms.custom: [seodec18, devx-track-csharp, cogserv-non-critical-vision]
 ---
@@ -135,7 +135,7 @@ while (true)
 
 ## Implement the solution
 
-### Get started quickly
+### Get sample code
 
 To help get your app up and running as quickly as possible, we've implemented the system that's described in the preceding section. It's intended to be flexible enough to accommodate many scenarios, while being easy to use. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) page on GitHub.
 
@@ -221,7 +221,7 @@ By using this approach, you can visualize the detected face immediately. You can
 
 ![The LiveCameraSample app displaying an image with tags](../../Video/Images/FramebyFrame.jpg)
 
-### Integrate the samples into your codebase
+### Integrate samples into your codebase
 
 To get started with this sample, do the following:
 
@@ -239,9 +239,10 @@ When you're ready to integrate the samples, reference the VideoFrameAnalyzer lib
 
 The image-, voice-, video-, and text-understanding capabilities of VideoFrameAnalyzer use Azure Cognitive Services. Microsoft receives the images, audio, video, and other data that you upload (via this app) and might use them for service-improvement purposes. We ask for your help in protecting the people whose data your app sends to Azure Cognitive Services.
 
-## Summary
+## Next steps
 
 In this article, you learned how to run near real-time analysis on live video streams by using the Face and Computer Vision services. You also learned how you can use our sample code to get started.
 
 Feel free to provide feedback and suggestions in the [GitHub repository](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/). To provide broader API feedback, go to our [UserVoice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) site.
 
+- [Call the Image Analysis API (how to)](call-analyze-image.md)

articles/cognitive-services/Computer-vision/how-to/identity-analyze-video.md

Lines changed: 16 additions & 17 deletions
@@ -3,36 +3,36 @@ title: "Example: Real-time video analysis - Face"
 titleSuffix: Azure Cognitive Services
 description: Use the Face service to perform near-real-time analysis on frames taken from a live video stream.
 services: cognitive-services
-author: SteveMSFT
+author: PatrickFarley
 manager: nitinme
 
 ms.service: cognitive-services
-ms.subservice: face-api
+ms.subservice: computer-vision
 ms.topic: how-to
-ms.date: 03/01/2018
-ms.author: sbowles
+ms.date: 07/05/2022
+ms.author: pafarley
 ms.devlang: csharp
 ms.custom: devx-track-csharp
 ---
 
-# Example: How to Analyze Videos in Real-time
+# Example: How to analyze videos in real time
 
 [!INCLUDE [Gate notice](../includes/identity-gate-notice.md)]
 
-This guide will demonstrate how to perform near-real-time analysis on frames taken from a live video stream. The basic components in such a system are:
+This guide will demonstrate how to perform near-real-time analysis on frames taken from a live video stream. The basic steps in this system are:
 
 - Acquire frames from a video source
 - Select which frames to analyze
 - Submit these frames to the API
 - Consume each analysis result that is returned from the API call
 
-These samples are written in C# and the code can be found on GitHub here: [https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/).
+These samples are written in C# and the code can be found [on GitHub](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/).
 
-## The Approach
+## Methods
 
 There are multiple ways to solve the problem of running near-real-time analysis on video streams. We will start by outlining three approaches in increasing levels of sophistication.
 
-### A Simple Approach
+### Using infinite loop
 
 The simplest design for a near-real-time analysis system is an infinite loop, where each iteration grabs a frame, analyzes it, and then consumes the result:

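The infinite-loop design described above can be sketched outside C# as well. A minimal Python version, where `grab_frame`, `analyze`, and `consume_result` are hypothetical stand-ins for the camera read, the API call, and the result handler:

```python
import time

def run_simple_loop(grab_frame, analyze, consume_result,
                    interval=1.0, max_iterations=None):
    """Grab a frame, analyze it, consume the result -- one iteration at a time.

    Each API call blocks the loop, so the effective frame rate is limited
    by the analysis latency; that is the weakness the later designs fix.
    """
    n = 0
    while max_iterations is None or n < max_iterations:
        frame = grab_frame()          # acquire a frame from the video source
        result = analyze(frame)       # blocking call to the analysis API
        consume_result(result)        # handle the result before the next grab
        time.sleep(interval)          # wait before sampling the next frame
        n += 1
```

The `max_iterations` parameter exists only so the sketch can terminate; the real loop runs indefinitely.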
@@ -71,7 +71,7 @@ while (true)
 
 This code launches each analysis in a separate Task, which can run in the background while we continue grabbing new frames. With this method we avoid blocking the main thread while waiting for an API call to return, but we have lost some of the guarantees that the simple version provided. Multiple API calls might occur in parallel, and the results might get returned in the wrong order. This could also cause multiple threads to enter the ConsumeResult() function simultaneously, which could be dangerous, if the function is not thread-safe. Finally, this simple code does not keep track of the Tasks that get created, so exceptions will silently disappear. Therefore, the final step is to add a "consumer" thread that will track the analysis tasks, raise exceptions, kill long-running tasks, and ensure that the results get consumed in the correct order.
 
-### A Producer-Consumer Design
+### Producer-consumer design
 
 In our final "producer-consumer" system, we have a producer thread that looks similar to our previous infinite loop. However, instead of consuming analysis results as soon as they are available, the producer simply puts the tasks into a queue to keep track of them.

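The producer-consumer arrangement described above can be sketched with a thread-safe queue (Python here rather than the sample's C#; `grab_frame` and `analyze` are hypothetical stand-ins). The producer enqueues pending analysis tasks while a consumer thread dequeues them in submission order, which restores the ordering and error-reporting guarantees that the fire-and-forget version lost:

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

def run_producer_consumer(grab_frame, analyze, consume_result, n_frames):
    """Producer enqueues analysis futures; consumer dequeues them in order."""
    tasks: queue.Queue = queue.Queue()
    pool = ThreadPoolExecutor(max_workers=4)

    def producer():
        for _ in range(n_frames):
            frame = grab_frame()
            tasks.put(pool.submit(analyze, frame))  # don't wait; just enqueue
        tasks.put(None)  # sentinel: no more frames

    def consumer():
        while True:
            future = tasks.get()
            if future is None:
                break
            # .result() blocks until this task finishes and re-raises any
            # exception it threw, so failures no longer disappear silently.
            consume_result(future.result())

    t_prod = threading.Thread(target=producer)
    t_cons = threading.Thread(target=consumer)
    t_prod.start(); t_cons.start()
    t_prod.join(); t_cons.join()
    pool.shutdown()
```

Because the queue preserves submission order and the consumer waits on each future in turn, results are consumed in order even when the analysis calls complete out of order.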
@@ -134,13 +134,13 @@ while (true)
 }
 ```
 
-## Implementing the Solution
+## Implementation
 
-### Getting Started
+### Get sample code
 
 To get your app up and running as quickly as possible, you will use a flexible implementation of the system described above. To access the code, go to [https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis).
 
-The library contains the class FrameGrabber, which implements the producer-consumer system discussed above to process video frames from a webcam. The user can specify the exact form of the API call, and the class uses events to let the calling code know when a new frame is acquired or a new analysis result is available.
+The library contains the class **FrameGrabber**, which implements the producer-consumer system discussed above to process video frames from a webcam. The user can specify the exact form of the API call, and the class uses events to let the calling code know when a new frame is acquired or a new analysis result is available.
 
 To illustrate some of the possibilities, there are two sample apps that use the library. The first is a simple console app, and a simplified version of it is reproduced below. It grabs frames from the default webcam, and submits them to the Face service for face detection.
 
@@ -152,7 +152,7 @@ In most modes, there will be a visible delay between the live video on the left,
 
 ![HowToAnalyzeVideo](../../Video/Images/FramebyFrame.jpg)
 
-### Integrating into your codebase
+### Integrate into your codebase
 
 To get started with this sample, follow these steps:
 
@@ -167,13 +167,12 @@ To get started with this sample, follow these steps:
 - For LiveCameraSample, the keys should be entered into the Settings pane of the app. They will be persisted across sessions as user data.
 
 
-When you're ready to integrate, **reference the VideoFrameAnalyzer library from your own projects.**
+When you're ready to integrate, reference the **VideoFrameAnalyzer** library from your own projects.
 
-## Summary
+## Next steps
 
 In this guide, you learned how to run near-real-time analysis on live video streams using the Face, Computer Vision, and Emotion APIs, and how to use our sample code to get started.
 
 Feel free to provide feedback and suggestions in the [GitHub repository](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) or, for broader API feedback, on our [UserVoice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) site.
 
-## Related Topics
 - [Call the detect API](identity-detect-faces.md)

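The FrameGrabber class mentioned in this file uses events to notify callers of new frames and new results. That event pattern can be sketched language-neutrally; the class and attribute names below are illustrative, not the sample's actual C# API:

```python
class EventedGrabber:
    """Minimal event-driven grabber: callers register callbacks instead of polling."""

    def __init__(self, analysis_fn):
        self.analysis_fn = analysis_fn  # caller chooses the exact API call
        self.on_frame = []              # callbacks fired when a frame arrives
        self.on_result = []             # callbacks fired when analysis completes

    def process(self, frame):
        for cb in self.on_frame:
            cb(frame)                   # e.g. draw the raw frame immediately
        result = self.analysis_fn(frame)
        for cb in self.on_result:
            cb(result)                  # e.g. overlay the (delayed) analysis
        return result
```

This is the design choice the sample describes: the library owns frame acquisition, while the caller supplies both the analysis call and the reactions to each event.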
articles/cognitive-services/Custom-Vision-Service/export-model-python.md

Lines changed: 2 additions & 2 deletions
@@ -9,15 +9,15 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: custom-vision
 ms.topic: how-to
-ms.date: 01/05/2022
+ms.date: 07/05/2022
 ms.author: pafarley
 ms.devlang: python
 ms.custom: devx-track-python
 ---
 
 # Tutorial: Run a TensorFlow model in Python
 
-After you have [exported your TensorFlow model](./export-your-model.md) from the Custom Vision Service, this quickstart will show you how to use this model locally to classify images.
+After you've [exported your TensorFlow model](./export-your-model.md) from the Custom Vision Service, this quickstart will show you how to use this model locally to classify images.
 
 > [!NOTE]
 > This tutorial applies only to models exported from "General (compact)" image classification projects. If you exported other models, please visit our [sample code repository](https://github.com/Azure-Samples/customvision-export-samples).

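Once a locally exported classifier produces per-class probabilities, mapping them back to the project's tags is a small final step. A sketch in Python (the label-file layout, one tag per line in output order, is an assumption about the export format rather than something stated in this commit):

```python
def top_prediction(probabilities, labels):
    """Pair each probability with its tag and return the best (tag, score)."""
    if len(probabilities) != len(labels):
        raise ValueError("model output size must match the number of labels")
    best = max(range(len(labels)), key=lambda i: probabilities[i])
    return labels[best], probabilities[best]

# If the export includes a labels file, it would be read along these lines:
# labels = open("labels.txt").read().splitlines()
tag, score = top_prediction([0.05, 0.90, 0.05], ["cat", "dog", "bird"])
```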
articles/cognitive-services/Custom-Vision-Service/export-your-model.md

Lines changed: 2 additions & 2 deletions
@@ -9,13 +9,13 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: custom-vision
 ms.topic: how-to
-ms.date: 10/27/2021
+ms.date: 07/05/2022
 ms.author: pafarley
 ---
 
 # Export your model for use with mobile devices
 
-Custom Vision Service allows classifiers to be exported to run offline. You can embed your exported classifier into an application and run it locally on a device for real-time classification.
+Custom Vision Service lets you export your classifiers to be run offline. You can embed your exported classifier into an application and run it locally on a device for real-time classification.
 
 ## Export options
