
Commit 5169619

kiranpradeepalalek authored and committed
Merge pull request #493 from kiranpradeep:bg_segm_documentation_fix
Correcting bgsegm module descriptions. (#493)

* Correcting bgsegm module descriptions. The algorithm implementation doesn't have multi-target tracking as mentioned in the original paper; it only does foreground/background segmentation.
* Removing opencv_ from heading and from description.
1 parent 23c0256 commit 5169619

File tree

3 files changed (+9, -4 lines)

modules/README.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ $ cmake -D OPENCV_EXTRA_MODULES_PATH=<opencv_contrib>/modules -D BUILD_opencv_<r
 - **aruco**: ArUco and ChArUco Markers -- Augmented reality ArUco marker and "ChARUco" markers where ArUco markers embedded inside the white areas of the checker board.
 
-- **bgsegm**: Background Segmentation -- Improved Adaptive Background Mixture Model and use for real time human tracking under Variable-Lighting Conditions.
+- **bgsegm**: Background segmentation algorithm combining statistical background image estimation and per-pixel Bayesian segmentation.
 
 - **bioinspired**: Biological Vision -- Biologically inspired vision model: minimize noise and luminance variance, transient event segmentation, high dynamic range tone mapping methods.

modules/bgsegm/README.md

Lines changed: 7 additions & 2 deletions

@@ -1,5 +1,10 @@
 Improved Background-Foreground Segmentation Methods
 ===================================================
 
-1. Adaptive Background Mixture Model for Real-time Tracking
-2. Visual Tracking of Human Visitors under Variable-Lighting Conditions.
+This algorithm combines statistical background image estimation and per-pixel Bayesian segmentation. It [1] was introduced by Andrew B. Godbehere, Akihiro Matsukawa, and Ken Goldberg in 2012. As per the paper, the system ran a successful interactive audio art installation called "Are We There Yet?" from March 31 to July 31, 2011 at the Contemporary Jewish Museum in San Francisco, California.
+
+It uses the first few (120 by default) frames for background modelling. It employs a probabilistic foreground segmentation algorithm that identifies possible foreground objects using Bayesian inference. The estimates are adaptive; newer observations are weighted more heavily than older observations to accommodate variable illumination. Several morphological filtering operations, such as closing and opening, are applied to remove unwanted noise. You will see a black window during the first few frames.
+
+References
+----------
+[1]: A. B. Godbehere, A. Matsukawa, K. Goldberg. Visual tracking of human visitors under variable-lighting conditions for a responsive audio art installation. American Control Conference (2012), pp. 4305–4312.
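The modelling loop described in the new README text (an initial background-modelling period, a per-pixel Bayesian decision, and recency weighting of observations) can be illustrated with a toy NumPy sketch. This is not the OpenCV GMG implementation: the function name, bin count, learning rate, and simple probability threshold below are all assumptions for illustration, and the morphological filtering step is omitted.

```python
import numpy as np

def segment_gmg_sketch(frames, init_frames=120, threshold=0.8, lr=0.025, bins=16):
    """Toy per-pixel background model loosely inspired by GMG.

    Hypothetical helper, not the OpenCV implementation. Each pixel's
    intensity is quantized into `bins` bins, and a per-pixel histogram
    serves as the background model; a pixel is marked foreground when
    the estimated probability that its current bin belongs to the
    background falls low enough.
    """
    h, w = frames[0].shape
    hist = np.zeros((h, w, bins), dtype=np.float64)
    masks = []
    for i, frame in enumerate(frames):
        q = (frame.astype(np.int64) * bins) // 256      # bin index per pixel
        obs = np.eye(bins)[q]                           # one-hot observation
        if i < init_frames:
            hist += obs                                 # plain accumulation while modelling
            masks.append(np.zeros((h, w), dtype=bool))  # the "black window" during init
        else:
            p_bg = np.take_along_axis(
                hist / hist.sum(-1, keepdims=True), q[..., None], axis=-1)[..., 0]
            masks.append((1.0 - p_bg) > threshold)      # Bayesian-flavoured decision
            # Recency weighting: newer observations count more than old ones.
            hist = (1 - lr) * hist + lr * obs * hist.sum(-1, keepdims=True)
    return masks
```

A real pipeline would follow each mask with morphological opening and closing to suppress speckle noise, as the README notes.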

modules/bgsegm/src/bgfg_gmg.cpp

Lines changed: 1 addition & 1 deletion

@@ -41,7 +41,7 @@
 //M*/
 
 /*
- * This class implements an algorithm described in "Visual Tracking of Human Visitors under
+ * This class implements a particular BackgroundSubtraction algorithm described in "Visual Tracking of Human Visitors under
  * Variable-Lighting Conditions for a Responsive Audio Art Installation," A. Godbehere,
  * A. Matsukawa, K. Goldberg, American Control Conference, Montreal, June 2012.
  *
