articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
---
title: What is Spatial Analysis?
titleSuffix: Azure Cognitive Services
description: This document explains the basic concepts and features of the Azure Spatial Analysis container.
services: cognitive-services
author: nitinme
manager: nitinme
ms.author: nitinme
ms.service: cognitive-services
ms.subservice: computer-vision
ms.topic: overview
ms.date: 10/06/2021
ms.custom: contperf-fy22q2
---
# What is Spatial Analysis?
Spatial Analysis is an AI service that helps organizations maximize the value of their physical spaces by understanding people's movements and presence within a given area. It allows you to ingest video from CCTV or surveillance cameras, extract insights from the video streams, and generate events to be used by other systems. With input from a camera stream, the service can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines.
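The events can be consumed by any downstream system. The following is a minimal sketch, in Python, of what such a consumer might look like, assuming the container's events are forwarded to an Azure IoT hub and read from the hub's Event Hubs-compatible built-in endpoint; the connection string and event hub name are placeholders.

```python
# A minimal event consumer sketch. It assumes the container's events are forwarded to an
# Azure IoT hub and read from the hub's Event Hubs-compatible built-in endpoint; the
# connection string and event hub name below are placeholders.
from azure.eventhub import EventHubConsumerClient


def on_event(partition_context, event):
    # Each message is a JSON document describing what happened in the video.
    print(f"Partition {partition_context.partition_id}: {event.body_as_str()}")


client = EventHubConsumerClient.from_connection_string(
    conn_str="<Event Hubs-compatible connection string>",
    consumer_group="$Default",
    eventhub_name="<Event Hubs-compatible name>",
)

with client:
    # Read from the start of each partition and print every event as it arrives.
    client.receive(on_event=on_event, starting_position="-1")
```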
<!--This documentation contains the following types of articles:
* The [quickstarts](./quickstarts-sdk/analyze-image-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
## What it does
The core operations of Spatial Analysis are built on a system that ingests video, detects people in the video, tracks the people as they move around over time, and generates events as people interact with regions of interest.
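The following is a simplified, conceptual sketch of that flow. It takes per-frame person detections as input; the toy tracker and region logic are stand-ins for illustration only, not the models or logic the Spatial Analysis container actually uses.

```python
# Illustrative detect -> track -> event flow. The nearest-center tracker and zone test
# below are toy stand-ins, not the Spatial Analysis implementation.
from dataclasses import dataclass


@dataclass
class Track:
    person_id: int
    box: tuple            # (x, y, width, height)
    in_region: bool = False


def box_center(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)


class NearestCenterTracker:
    """Toy tracker: associate each detection with the closest existing track."""

    def __init__(self, max_distance=50.0):
        self.tracks = []
        self.next_id = 0
        self.max_distance = max_distance

    def update(self, detections):
        for box in detections:
            cx, cy = box_center(box)
            best = None
            for track in self.tracks:
                tx, ty = box_center(track.box)
                dist = ((cx - tx) ** 2 + (cy - ty) ** 2) ** 0.5
                if dist <= self.max_distance and (best is None or dist < best[0]):
                    best = (dist, track)
            if best:
                best[1].box = box          # update an existing track
            else:
                self.tracks.append(Track(self.next_id, box))
                self.next_id += 1
        return self.tracks


def in_region(box, region):
    cx, cy = box_center(box)
    rx, ry, rw, rh = region
    return rx <= cx <= rx + rw and ry <= cy <= ry + rh


def process(frames_of_detections, region):
    """Raise an event each time a tracked person enters the region of interest."""
    tracker = NearestCenterTracker()
    events = []
    for frame_index, detections in enumerate(frames_of_detections):
        for person in tracker.update(detections):
            now_inside = in_region(person.box, region)
            if now_inside and not person.in_region:
                events.append({"event": "zoneEnter",
                               "personId": person.person_id,
                               "frame": frame_index})
            person.in_region = now_inside
    return events


# Two frames of (x, y, width, height) person detections and a rectangular zone.
print(process([[(10, 10, 20, 40)], [(60, 60, 20, 40)]], region=(50, 50, 100, 100)))
```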
## Spatial Analysis features
| Feature | Definition |
|------|------------|
|**People Detection**| This component answers the question, "Where are the people in this image?" It finds people in an image and passes bounding box coordinates indicating the location of each person to the **People Tracking** component. |
35
+
|**People Tracking**| This component connects the people detections over time as people move around in front of a camera. It uses temporal logic about how people typically move and basic information about the overall appearance of the people. It does not track people across multiple cameras. If a person exits the field of view for longer than approximately one minute and then reenters the view, the system will perceive them as a new person. People Tracking does not uniquely identify individuals across cameras. It does not use facial recognition or gait tracking. |
36
+
|**Face Mask Detection**| This component detects the location of a person's face in the camera's field of view and identifies the presence of a face mask. The AI operation scans images from the video; where a face is detected, the service provides a bounding box around the face. Using object detection capabilities, it identifies the presence of face masks within the bounding box. Face Mask Detection does not distinguish one face from another, predict or classify facial attributes, or perform facial recognition. |
37
+
|**Region of Interest**| This component is a user-defined zone or line in the input video frame. When a person interacts with this region on the video, the system generates an event. For example, for the **PersonCrossingLine** operation, a line is defined in the video frame. When a person crosses that line, an event is generated. |
38
+
|**Event**| An event is the primary output of Spatial Analysis. Each operation raises a specific event either periodically (like once per minute) or whenever a specific trigger occurs. The event includes information about what occurred in the input video but does not include any images or video. For example, the **PeopleCount** operation can raise an event containing the updated count every time the count of people changes (trigger) or once every minute (periodically). |
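As an illustration of the last row, the following sketch reacts to count events. The payload shown is a simplified, hypothetical shape for a **PeopleCount** event, not the service's exact output schema.

```python
# Illustrative handling of count events. The payload below is a simplified, hypothetical
# shape for a PeopleCount event, not the service's exact output schema.
import json

sample_event = json.dumps({
    "operation": "PeopleCount",
    "trigger": "countChange",          # the alternative would be a periodic emission
    "zone": "lobby",
    "count": 7,
    "timestamp": "2021-10-06T17:04:22Z",
})

last_counts = {}                       # most recent count seen per zone


def handle_event(raw_message):
    event = json.loads(raw_message)
    zone = event["zone"]
    if event["count"] != last_counts.get(zone):
        print(f"Zone '{zone}' now has {event['count']} people")
        last_counts[zone] = event["count"]


handle_event(sample_event)
```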
## Get started
Follow the [quickstart](spatial-analysis-container.md) to set up the Spatial Analysis container and begin analyzing video.