demos/background_subtraction_demo/python/README.md

This demo shows how to perform background subtraction using OpenVINO.
## How It Works
The demo application expects an instance segmentation or background matting model in the Intermediate Representation (IR) format with the following constraints:
1. for instance segmentation models based on `Mask RCNN` approach:
    * One input: `image` for input image.
    * At least three outputs including:
...

    * `conf` with confidence scores for each class for all boxes
    * `mask` with fixed-size mask channels for all boxes.
    * `proto` with fixed-size segmentation heat maps prototypes for all boxes.
3. for image background matting models:
    * Two inputs:
        * `src` for input image
        * `bgr` for input real background
    * At least two outputs including:
        * `fgr` with foreground normalized to the [0, 1] range
        * `pha` with alpha normalized to the [0, 1] range
4. for video background matting models based on RNN architecture:
    * Five inputs:
        * `src` for input image
        * recurrent inputs: `r1`, `r2`, `r3`, `r4`
    * At least six outputs including:
        * `fgr` with foreground normalized to the [0, 1] range
        * `pha` with alpha normalized to the [0, 1] range
        * recurrent outputs: `rr1`, `rr2`, `rr3`, `rr4`
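The I/O constraints above can be checked programmatically from a model's tensor names. A minimal sketch (the names `src`, `bgr`, `fgr`, `pha`, `r1`-`r4`, and `rr1`-`rr4` come from the list above; the classification logic itself is illustrative, not the demo's actual model-selection code):

```python
def classify_model(input_names, output_names):
    """Guess which supported model type a set of tensor names describes.

    Heuristic sketch based on the I/O constraints listed above; the demo's
    real model-wrapper selection may differ.
    """
    inputs, outputs = set(input_names), set(output_names)
    # Video matting: recurrent inputs r1..r4 and recurrent outputs rr1..rr4.
    if {"src", "r1", "r2", "r3", "r4"} <= inputs and {"rr1", "rr2", "rr3", "rr4"} <= outputs:
        return "video background matting (RNN)"
    # Image matting: src + bgr inputs, fgr + pha outputs.
    if {"src", "bgr"} <= inputs and {"fgr", "pha"} <= outputs:
        return "image background matting"
    # Instance segmentation variants take a single `image` input.
    if "image" in inputs:
        return "instance segmentation"
    return "unsupported"
```

For example, `classify_model({"src", "bgr"}, {"fgr", "pha"})` returns `"image background matting"`.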
The use case for the demo is an online conference where only the foreground (the people) needs to be shown while the background is hidden or replaced. Accordingly, an instance segmentation model must be trained at least for the person class.
As input, the demo application accepts a path to a single image file, a video file, or a numeric ID of a web camera, specified with the command-line argument `-i`.
> **NOTE**: If you use an image background matting model, the `--background` argument should be specified. It is a background image that matches the real background behind the person in the input frame, and it must have the same shape as the input image.
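With the `fgr` and `pha` outputs in hand, a replacement background is typically blended per pixel as `out = fgr * pha + background * (1 - pha)`. A NumPy sketch of that composition step (array names are illustrative, not the demo's variables):

```python
import numpy as np

def composite(fgr, pha, background):
    """Blend foreground over a replacement background using the alpha matte.

    fgr:        HxWx3 float array in [0, 1] (predicted foreground colors)
    pha:        HxW   float array in [0, 1] (predicted alpha matte)
    background: HxWx3 float array in [0, 1], same shape as the frame
    """
    if background.shape != fgr.shape:
        raise ValueError("background must have the same shape as the frame")
    alpha = pha[..., np.newaxis]  # broadcast the alpha over the color channels
    return fgr * alpha + background * (1.0 - alpha)
```

Where the matte is 1 the foreground color is kept; where it is 0 the new background shows through.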
The demo workflow is the following:
1. The demo application reads image/video frames one by one, resizes them to fit into the input image blob of the network (`image`).
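The resize in step 1 can be illustrated with a minimal nearest-neighbor sketch (the demo itself relies on OpenCV and Model API preprocessing; this only shows the idea of mapping a frame onto the network's input resolution):

```python
import numpy as np

def resize_nearest(frame, out_h, out_w):
    """Nearest-neighbor resize of an HxW(xC) frame to out_h x out_w.

    Illustrative only; the demo uses its own preprocessing pipeline.
    """
    h, w = frame.shape[:2]
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w  # source column for each output column
    return frame[rows[:, None], cols]
```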
...

* instance-segmentation-person-????
* yolact-resnet50-fpn-pytorch
* background-matting-mobilenetv2
* robust-video-matting
> **NOTE**: Refer to the tables [Intel's Pre-Trained Models Device Support](../../../models/intel/device_support.md) and [Public Pre-Trained Models Device Support](../../../models/public/device_support.md) for the details on models inference support at different devices.