
Commit a51e048

Merge pull request #174747 from PatrickFarley/comvis-updates
[cog svcs] low perf article update
2 parents 682568d + 9520de8


articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md

Lines changed: 11 additions & 10 deletions
@@ -1,20 +1,21 @@
 ---
 title: What is Spatial Analysis?
 titleSuffix: Azure Cognitive Services
-description: This document explains the basic concepts and features of a Computer Vision Spatial Analysis container.
+description: This document explains the basic concepts and features of the Azure Spatial Analysis container.
 services: cognitive-services
 author: nitinme
 manager: nitinme
 ms.author: nitinme
 ms.service: cognitive-services
 ms.subservice: computer-vision
 ms.topic: overview
-ms.date: 06/21/2021
+ms.date: 10/06/2021
+ms.custom: contperf-fy22q2
 ---

 # What is Spatial Analysis?

-The Spatial Analysis service helps organizations maximize the value of their physical spaces by understanding people's movements and presence within a given area. It allows you to ingest video from CCTV or surveillance cameras, run AI operations to extract insights from the video streams, and generate events to be used by other systems. With input from a camera stream, an AI operation can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines.
+Spatial Analysis is an AI service that helps organizations maximize the value of their physical spaces by understanding people's movements and presence within a given area. It allows you to ingest video from CCTV or surveillance cameras, extract insights from the video streams, and generate events to be used by other systems. With input from a camera stream, the service can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines.

 <!--This documentation contains the following types of articles:
 * The [quickstarts](./quickstarts-sdk/analyze-image-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
@@ -24,21 +25,21 @@ The Spatial Analysis service helps organizations maximize the value of their phy

 ## What it does

-The core operations of Spatial Analysis are all built on a pipeline that ingests video, detects people in the video, tracks the people as they move around over time, and generates events as people interact with regions of interest.
+The core operations of Spatial Analysis are built on a system that ingests video, detects people in the video, tracks the people as they move around over time, and generates events as people interact with regions of interest.

 ## Spatial Analysis features

 | Feature | Definition |
 |------|------------|
-| **People Detection** | This component answers the question, "Where are the people in this image?" It finds people in an image and passes a bounding box indicating the location of each person to the people tracking component. |
-| **People Tracking** | This component connects the people detections over time as people move around in front of a camera. It uses temporal logic about how people typically move and basic information about the overall appearance of the people. It does not track people across multiple cameras. If a person exits the field of view for longer than approximately a minute and then re-enters the camera view, the system will perceive this as a new person. People Tracking does not uniquely identify individuals across cameras. It does not use facial recognition or gait tracking. |
-| **Face Mask Detection** | This component detects the location of a person's face in the camera's field of view and identifies the presence of a face mask. The AI operation scans images from video; where a face is detected the service provides a bounding box around the face. Using object detection capabilities, it identifies the presence of face masks within the bounding box. Face Mask detection does not involve distinguishing one face from another face, predicting or classifying facial attributes or performing facial recognition. |
-| **Region of Interest** | This is a user-defined zone or line in the input video frame. When a person interacts with this region on the video, the system generates an event. For example, for the PersonCrossingLine operation, a line is defined in the video. When a person crosses that line an event is generated. |
-| **Event** | An event is the primary output of Spatial Analysis. Each operation emits a specific event either periodically (like once per minute) or whenever a specific trigger occurs. The event includes information about what occurred in the input video but does not include any images or video. For example, the PeopleCount operation can emit an event containing the updated count every time the count of people changes (trigger) or once every minute (periodically). |
+| **People Detection** | This component answers the question, "Where are the people in this image?" It finds people in an image and passes bounding box coordinates indicating the location of each person to the **People Tracking** component. |
+| **People Tracking** | This component connects the people detections over time as people move around in front of a camera. It uses temporal logic about how people typically move and basic information about the overall appearance of the people. It does not track people across multiple cameras. If a person exits the field of view for longer than approximately one minute and then reenters the view, the system will perceive them as a new person. People Tracking does not uniquely identify individuals across cameras. It does not use facial recognition or gait tracking. |
+| **Face Mask Detection** | This component detects the location of a person's face in the camera's field of view and identifies the presence of a face mask. The AI operation scans images from video; where a face is detected the service provides a bounding box around the face. Using object detection capabilities, it identifies the presence of face masks within the bounding box. Face Mask detection does not involve distinguishing one face from another face, predicting or classifying facial attributes or doing face recognition. |
+| **Region of Interest** | This component is a user-defined zone or line in the input video frame. When a person interacts with this region on the video, the system generates an event. For example, for the **PersonCrossingLine** operation, a line is defined in the video frame. When a person crosses that line, an event is generated. |
+| **Event** | An event is the primary output of Spatial Analysis. Each operation raises a specific event either periodically (like once per minute) or whenever a specific trigger occurs. The event includes information about what occurred in the input video but does not include any images or video. For example, the **PeopleCount** operation can raise an event containing the updated count every time the count of people changes (trigger) or once every minute (periodically). |

 ## Get started

-Follow the [quickstart](spatial-analysis-container.md) to set up the container and begin analyzing video.
+Follow the [quickstart](spatial-analysis-container.md) to set up the Spatial Analysis container and begin analyzing video.

 ## Responsible use of Spatial Analysis technology

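The **Event** row in the updated table describes the output only in prose. The sketch below is purely illustrative: it shows how a downstream system might parse and react to a **PeopleCount**-style event. The JSON field names (`operation`, `zone`, `personCount`, `timestamp`) and the sample values are assumptions for illustration, not the documented Spatial Analysis event schema; only the consumption pattern is shown.

```python
import json

# Hypothetical event payload; field names and values are illustrative only,
# not the documented Spatial Analysis event schema.
SAMPLE_EVENT = """
{
    "operation": "PeopleCount",
    "zone": "lobby-entrance",
    "personCount": 7,
    "timestamp": "2021-10-06T14:03:00Z"
}
"""

def handle_event(raw_event: str) -> None:
    """Parse a hypothetical PeopleCount event and act on the updated count."""
    event = json.loads(raw_event)
    if event.get("operation") == "PeopleCount":
        count = event.get("personCount", 0)
        zone = event.get("zone", "unknown zone")
        # A real consumer might update a dashboard or trigger an alert here.
        print(f"{zone}: {count} people at {event.get('timestamp')}")

if __name__ == "__main__":
    handle_event(SAMPLE_EVENT)
```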