articles/active-directory/saas-apps/documo-tutorial.md (6 additions, 1 deletion)
@@ -144,7 +144,12 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
 ### Create Documo test user

-In this section, a user called Britta Simon is created in Documo. Documo supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Documo, a new one is created after authentication.
+In this section, a user called B.Simon is created in Documo.
+
+1. Navigate to the [Users page](https://app.documo.com?redirectTo=/users) on the Documo app.
+1. Click the **New user** button.
+1. Fill out the user form with name, email, phone number, user role, and password information. Make sure the **email** field matches the email for B.Simon in **Azure AD**.
articles/azure-percept/azure-percept-devkit-container-release-notes.md (5 additions, 0 deletions)
@@ -14,6 +14,11 @@ This page provides information of changes and fixes for Azure Percept DK Contain
 To download the container updates, go to [Azure Percept Studio](https://ms.portal.azure.com/#blade/AzureEdgeDevices/main/overview), select Devices from the left navigation pane, choose the specific device, and then select Vision and Speech tabs to initiate container downloads.

+## December (2112) Release
+
+- Removed lines in the image frames using automatic image capture in Azure Percept Studio. This issue was introduced in the 2108 module release.
+- Security fixes for docker services running as root in azureeyemodule, azureearspeechclientmodule, and webstreammodule.
articles/azure-video-analyzer/video-analyzer-docs/live-pipeline-topologies.md (12 additions, 12 deletions)
@@ -1,11 +1,11 @@
 ---
-title: Live pipeline topologies
-description: This article describes the supported by Azure Video Analyzer live pipeline topologies in detail.
+title: List of pipeline topologies
+description: This article lists the live pipeline topologies supported by Azure Video Analyzer.
 ms.topic: conceptual
-ms.date: 12/13/2021
+ms.date: 12/15/2021
 ---

-# Live pipeline topologies
+# List of pipeline topologies

 The tables that follow list the [live pipeline topologies](terminology.md#pipeline-topology) supported by Azure Video Analyzer. The tables also provide
@@ -15,9 +15,9 @@ The tables that follow list the [live pipeline topologies](terminology.md#pipeli
 Clicking on a topology name redirects to the corresponding JSON file located in [this GitHub folder](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/); clicking on a sample redirects to the corresponding sample document.

-## Pipeline topology tables
+## Live pipeline topologies

-## Continuous video recording
+### Continuous video recording

 Name | Description | Samples | VSCode Name
 :----- | :---- | :---- | :---
@@ -28,7 +28,7 @@ Name | Description | Samples | VSCode Name
 [cvr-with-motion](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/cvr-with-motion/topology.json) | Perform CVR. When motion is detected from a live video feed, relevant inferencing events are published to the IoT Edge Hub. | | Continuous Video Recording > Record on motion detection
 [audio-video](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/audio-video/topology.json) | Perform CVR and record audio using the outputSelectors property. | | Continuous Video Recording > Record audio with video

-## Event-based video recording
+### Event-based video recording

 Name | Description | Samples | VSCode Name
 :----- | :---- | :---- | :---
@@ -40,23 +40,23 @@ Name | Description | Samples | VSCode Name
 [evr-motion-video-sink](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/evr-motion-video-sink/topology.json) | When motion is detected, those events are published to the IoT Edge Hub. In addition, the motion events are used to trigger the signal gate processor node that will send frames to the video sink node when motion is detected. As a result, new video clips are appended to the Azure Video Analyzer video, corresponding to when motion was detected. | [Detect motion, record video to Video Analyzer](edge/detect-motion-record-video-clips-cloud.md) | Event-based Video Recording > Record motion events to Video Analyzer video
 [evr-motion-file-sink](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/evr-motion-file-sink/topology.json) | When motion is detected from a live video feed, events are sent to a signal gate processor node that opens, sending frames to a file sink node. As a result, new files are created on the local file system of the edge device, containing the frames where motion was detected. | [Detect motion and record video on edge devices](edge/detect-motion-record-video-edge-devices.md) | Event-based Video Recording > Record motion events to local files

-## Motion detection
+### Motion detection

 Name | Description | Samples | VSCode Name
 :----- | :---- | :---- | :---
 [motion-detection](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/motion-detection/topology.json) | Detect motion in a live video feed. When motion is detected, those events are published to the IoT Hub. | [Get started with Azure Video Analyzer](edge/get-started-detect-motion-emit-events.md), [Get started with Video Analyzer in the portal](edge/get-started-detect-motion-emit-events-portal.md), [Detect motion and emit events](detect-motion-emit-events-quickstart.md) | Motion Detection > Publish motion events to IoT Hub
 [motion-with-grpcExtension](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/motion-with-grpcExtension/topology.json) | Perform event-based recording in the presence of motion. When motion is detected from a live video feed, those events are published to the IoT Edge Hub. In addition, the motion events are used to trigger a signal gate processor node that will send frames to a video sink node only when motion is present. As a result, new video clips are appended to the Azure Video Analyzer video, corresponding to when motion was detected. Additionally, run video analytics only when motion is detected. Upon detecting motion, a subset of the video frames is sent to an external AI inference engine via the gRPC extension. The results are then published to the IoT Edge Hub. | [Analyze live video with your own model - gRPC](analyze-live-video-use-your-model-grpc.md) | Motion Detection > Publish motion events using gRPC Extension
 [motion-with-httpExtension](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/motion-with-httpExtension/topology.json) | Perform event-based recording in the presence of motion. When motion is detected in a live video feed, those events are published to the IoT Edge Hub. In addition, the motion events are used to trigger a signal gate processor node that will send frames to a video sink node only when motion is present. As a result, new video clips are appended to the Azure Video Analyzer video, corresponding to when motion was detected. Additionally, run video analytics only when motion is detected. Upon detecting motion, a subset of the video frames is sent to an external AI inference engine via the HTTP extension. The results are then published to the IoT Edge Hub. | [Analyze live video with your own model - HTTP](edge/analyze-live-video-use-your-model-http.md#generate-and-deploy-the-iot-edge-deployment-manifest) | Motion Detection > Publish motion events using HTTP Extension

-## Extensions
+### Extensions

 Name | Description | Samples | VSCode Name
 :----- | :---- | :---- | :---
 [grpcExtensionOpenVINO](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/grpcExtensionOpenVINO/topology.json) | Run video analytics on a live video feed. The gRPC extension allows you to create images at video frame rate from the camera that are converted to images, and sent to the [OpenVINO™ DL Streamer - Edge AI Extension module](https://aka.ms/ava-intel-ovms) provided by Intel. The results are then published to the IoT Edge Hub. | [Analyze live video with Intel OpenVINO™ DL Streamer – Edge AI Extension](use-intel-grpc-video-analytics-serving-tutorial.md)
 [httpExtension](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/httpExtension/topology.json) | Run video analytics on a live video feed. A subset of the video frames from the camera are converted to images, and sent to an external AI inference engine. The results are then published to the IoT Edge Hub. | [Analyze live video with your own model - HTTP](analyze-live-video-use-your-model-http.md), [Analyze live video with Azure Video Analyzer on IoT Edge and Azure Custom Vision](edge/analyze-live-video-custom-vision.md) | Extensions > Analyzer video using HTTP Extension
 [httpExtensionOpenVINO](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/httpExtensionOpenVINO/topology.json) | Run video analytics on a live video feed. A subset of the video frames from the camera are converted to images, and sent to the [OpenVINO™ Model Server – AI Extension module](https://aka.ms/ava-intel-ovms) provided by Intel. The results are then published to the IoT Edge Hub. | [Analyze live video using OpenVINO™ Model Server – AI Extension from Intel](https://aka.ms/ava-intel-ovms-tutorial) | Extensions > Analyzer video with Intel OpenVINO Model Server

-## Computer vision
+### Computer vision

 Name | Description | Samples | VSCode Name
 :----- | :---- | :---- | :---
@@ -66,13 +66,13 @@ Name | Description | Samples | VSCode Name
 [spatial-analysis/person-distance-operation-topology](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/spatial-analysis/person-distance-operation-topology.json) | Live video is sent to an external [spatialAnalysis](../../cognitive-services/computer-vision/spatial-analysis-operations.md) module that tracks when people violate a distance rule. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | | Computer Vision > Person distance operation with Computer Vision for Spatial Analysis
 [spatial-analysis/custom-operation-topology](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/spatial-analysis/custom-operation-topology.json) | Live video is sent to an external [spatialAnalysis](../../cognitive-services/computer-vision/spatial-analysis-operations.md) module that carries out a supported AI operation. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | | Computer Vision > Custom operation with Computer Vision for Spatial Analysis

-## AI composition
+### AI composition

 Name | Description | Samples | VSCode Name
 :----- | :---- | :---- | :---
 [ai-composition](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/ai-composition/topology.json) | Run 2 AI inferencing models of your choice. In this example, classified video frames are sent from an AI inference engine using the [Tiny YOLOv3 model](https://github.com/Azure/video-analyzer/tree/main/edge-modules/extensions/yolo/tinyyolov3/grpc-cpu) to another engine using the [YOLOv3 model](https://github.com/Azure/video-analyzer/tree/main/edge-modules/extensions/yolo/yolov3/grpc-cpu). Having such a topology enables you to trigger a heavy AI module, only when a light AI module indicates a need to do so. | [Analyze live video streams with multiple AI models using AI composition](edge/analyze-ai-composition.md) | AI composition > Record to the Video Analyzer service using multiple AI models
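For orientation, the topology.json files linked in the tables above share a common structure: sources feed processors, which feed sinks, with nodes wired together by name. The following is an abridged, illustrative sketch only (node names and parameter values are hypothetical, and some required properties are omitted); consult the linked files in the GitHub folder for the authoritative schema.

```json
{
  "@apiVersion": "1.1",
  "name": "ExampleMotionTopology",
  "properties": {
    "description": "Illustrative sketch: RTSP source -> motion detection -> video sink",
    "parameters": [
      { "name": "rtspUrl", "type": "String" }
    ],
    "sources": [
      {
        "@type": "#Microsoft.VideoAnalyzer.RtspSource",
        "name": "rtspSource",
        "endpoint": {
          "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
          "url": "${rtspUrl}"
        }
      }
    ],
    "processors": [
      {
        "@type": "#Microsoft.VideoAnalyzer.MotionDetectionProcessor",
        "name": "motionDetection",
        "inputs": [ { "nodeName": "rtspSource" } ]
      }
    ],
    "sinks": [
      {
        "@type": "#Microsoft.VideoAnalyzer.VideoSink",
        "name": "videoSink",
        "videoName": "sample-motion-video",
        "inputs": [ { "nodeName": "motionDetection" } ]
      }
    ]
  }
}
```

Each topology in the tables varies this pattern mainly in which processor and sink nodes it declares (signal gate processors, gRPC/HTTP extension processors, file sinks, and so on).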
articles/role-based-access-control/transfer-subscription.md (1 addition, 1 deletion)
@@ -14,7 +14,7 @@ ms.author: rolyon
 # Transfer an Azure subscription to a different Azure AD directory

-Organizations might have several Azure subscriptions. Each subscription is associated with a particular Azure Active Directory (Azure AD) directory. To make management easier, you might want to transfer a subscription to a different Azure AD directory. When you transfer a subscription to a different Azure AD directory, some resources are not transferred to the target directory. For example, all role assignments and custom roles in Azure role-based access control (Azure RBAC) are **permanently** deleted from the source directory and are not be transferred to the target directory.
+Organizations might have several Azure subscriptions. Each subscription is associated with a particular Azure Active Directory (Azure AD) directory. To make management easier, you might want to transfer a subscription to a different Azure AD directory. When you transfer a subscription to a different Azure AD directory, some resources are not transferred to the target directory. For example, all role assignments and custom roles in Azure role-based access control (Azure RBAC) are **permanently** deleted from the source directory and are not transferred to the target directory.

 This article describes the basic steps you can follow to transfer a subscription to a different Azure AD directory and re-create some of the resources after the transfer.