
Commit e8ed375

Merge pull request #1477 from berak:fix_non_ascii
2 parents 60a510c + 102c80a

File tree: 11 files changed (+17, -17 lines)

modules/bgsegm/README.md
Lines changed: 3 additions & 3 deletions

@@ -1,10 +1,10 @@
-Improved Background-Foreground Segmentation Methods
+Improved Background-Foreground Segmentation Methods
 ===================================================
 
-This algorithm combines statistical background image estimation and per-pixel Bayesian segmentation. It[1] was introduced by Andrew B. Godbehere, Akihiro Matsukawa, Ken Goldberg in 2012. As per the paper, the system ran a successful interactive audio art installation called Are We There Yet? from March 31 - July 31 2011 at the Contemporary Jewish Museum in San Francisco, California.
+This algorithm combines statistical background image estimation and per-pixel Bayesian segmentation. It[1] was introduced by Andrew B. Godbehere, Akihiro Matsukawa, Ken Goldberg in 2012. As per the paper, the system ran a successful interactive audio art installation called "Are We There Yet?" from March 31 - July 31 2011 at the Contemporary Jewish Museum in San Francisco, California.
 
 It uses first few (120 by default) frames for background modelling. It employs probabilistic foreground segmentation algorithm that identifies possible foreground objects using Bayesian inference. The estimates are adaptive; newer observations are more heavily weighted than old observations to accommodate variable illumination. Several morphological filtering operations like closing and opening are done to remove unwanted noise. You will get a black window during first few frames.
 
 References
 ----------
-[1]: A.B. Godbehere, A. Matsukawa, K. Goldberg. Visual tracking of human visitors under variable-lighting conditions for a responsive audio art installation. American Control Conference. (2012), pp. 4305–4312
+[1]: A.B. Godbehere, A. Matsukawa, K. Goldberg. Visual tracking of human visitors under variable-lighting conditions for a responsive audio art installation. American Control Conference. (2012), pp. 4305–4312

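The algorithm this README describes is the GMG background subtractor exposed by the bgsegm module. A minimal usage sketch, assuming the default camera as the video source; the parameter values simply restate the documented defaults (120 initialization frames, 0.8 decision threshold):

```cpp
#include <opencv2/bgsegm.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;

int main()
{
    VideoCapture cap(0);  // assumption: default camera as input
    // 120 initialization frames and 0.8 decision threshold are the defaults
    Ptr<BackgroundSubtractor> gmg = bgsegm::createBackgroundSubtractorGMG(120, 0.8);
    Mat frame, fgMask;
    while (cap.read(frame))
    {
        gmg->apply(frame, fgMask);        // mask stays black while the model initializes
        imshow("GMG foreground", fgMask);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
```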
modules/datasets/include/opencv2/datasets/dataset.hpp
Lines changed: 1 addition & 1 deletion

@@ -485,7 +485,7 @@ Implements loading dataset:
 
 "VOT 2015 dataset comprises 60 short sequences showing various objects in challenging backgrounds.
 The sequences were chosen from a large pool of sequences including the ALOV dataset, OTB2 dataset,
-non-tracking datasets, Computer Vision Online, Professor Bob Fishers Image Database, Videezy,
+non-tracking datasets, Computer Vision Online, Professor Bob Fisher's Image Database, Videezy,
 Center for Research in Computer Vision, University of Central Florida, USA, NYU Center for Genomics
 and Systems Biology, Data Wrangling, Open Access Directory and Learning and Recognition in Vision
 Group, INRIA, France. The VOT sequence selection protocol was applied to obtain a representative

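The doc comment above annotates the datasets module's VOT loader. A minimal loading sketch, assuming the module's TRACK_vot class with its getDatasetsNum() accessor and a local path to an unpacked VOT 2015 tree (both path and accessor usage are illustrative assumptions):

```cpp
#include <opencv2/datasets/track_vot.hpp>
#include <cstdio>
using namespace cv::datasets;

int main()
{
    cv::Ptr<TRACK_vot> vot = TRACK_vot::create();
    vot->load("/path/to/vot2015/");  // hypothetical dataset root
    std::printf("sequences loaded: %d\n", vot->getDatasetsNum());
    return 0;
}
```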
modules/face/include/opencv2/face.hpp
Lines changed: 1 addition & 1 deletion

@@ -70,7 +70,7 @@ which is available since the 2.4 release. I suggest you take a look at its descr
 
 Algorithm provides the following features for all derived classes:
 
-- So called virtual constructor. That is, each Algorithm derivative is registered at program
+- So called "virtual constructor". That is, each Algorithm derivative is registered at program
 start and you can get the list of registered algorithms and create instance of a particular
 algorithm by its name (see Algorithm::create). If you plan to add your own algorithms, it is
 good practice to add a unique prefix to your algorithms to distinguish them from other

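As a concrete illustration of the "virtual constructor" idea in the quoted doc comment: the string-based Algorithm::create lookup it mentions is the 2.4-era mechanism, while in the 3.x API each Algorithm derivative exposes its own static create() factory. A minimal sketch using the face module's LBPH recognizer:

```cpp
#include <opencv2/face.hpp>
#include <iostream>
using namespace cv;

int main()
{
    // Each Algorithm derivative is constructed through its own factory...
    Ptr<face::LBPHFaceRecognizer> model = face::LBPHFaceRecognizer::create();
    // ...and inherits Algorithm's uniform facilities, e.g. name introspection.
    std::cout << model->getDefaultName() << std::endl;
    return 0;
}
```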
modules/fuzzy/doc/fuzzy.bib
Lines changed: 2 additions & 2 deletions

@@ -52,7 +52,7 @@ @article{Fusion:AFS12
 }
 
 @incollection{IPMU2012,
-  title={$F^1$-transform edge detector inspired by cannys algorithm},
+  title={$F^1$-transform edge detector inspired by canny's algorithm},
   author={Perfilieva, Irina and Hod'{\'a}kov{\'a}, Petra and Hurtík, Petr},
   booktitle={Advances on Computational Intelligence},
   pages={230--239},
@@ -75,4 +75,4 @@ @inproceedings{vlavsanek2015patch
   pages={235--240},
   year={2015},
   organization={IEEE}
-}
+}

modules/saliency/include/opencv2/saliency/saliencyBaseClasses.hpp
Lines changed: 1 addition & 1 deletion

@@ -93,7 +93,7 @@ class CV_EXPORTS_W StaticSaliency : public virtual Saliency
 targets, a segmentation by clustering is performed, using *K-means algorithm*. Then, to gain a
 binary representation of clustered saliency map, since values of the map can vary according to
 the characteristics of frame under analysis, it is not convenient to use a fixed threshold. So,
-*Otsus algorithm* is used, which assumes that the image to be thresholded contains two classes
+*Otsu's algorithm* is used, which assumes that the image to be thresholded contains two classes
 of pixels or bi-modal histograms (e.g. foreground and back-ground pixels); later on, the
 algorithm calculates the optimal threshold separating those two classes, so that their
 intra-class variance is minimal.

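The K-means clustering plus Otsu thresholding described in this doc comment is what StaticSaliency::computeBinaryMap performs. A minimal sketch with the module's spectral-residual detector; the input file name is an assumption:

```cpp
#include <opencv2/saliency.hpp>
#include <opencv2/imgcodecs.hpp>
using namespace cv;
using namespace cv::saliency;

int main()
{
    Mat image = imread("frame.png");  // hypothetical input image
    Ptr<StaticSaliencySpectralResidual> sal = StaticSaliencySpectralResidual::create();
    Mat saliencyMap, binaryMap;
    if (sal->computeSaliency(image, saliencyMap))
        sal->computeBinaryMap(saliencyMap, binaryMap);  // K-means + Otsu, as described
    return 0;
}
```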
modules/sfm/src/libmv_light/libmv/correspondence/feature_matching.h
Lines changed: 1 addition & 1 deletion

@@ -77,7 +77,7 @@ void FindCandidateMatches(const FeatureSet &left,
 // method.
 // I.E: A match is considered as strong if the following test is true :
 // I.E distance[0] < fRatio * distances[1].
-// From David Lowe Distinctive Image Features from Scale-Invariant Keypoints.
+// From David Lowe "Distinctive Image Features from Scale-Invariant Keypoints".
 // You can use David Lowe's magic ratio (0.6 or 0.8).
 // 0.8 allow to remove 90% of the false matches while discarding less than 5%
 // of the correct matches.

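The comment block above documents Lowe's ratio test. A self-contained sketch of the same test written against OpenCV's brute-force matcher rather than libmv's internals; the float descriptors and the 0.8 default simply follow the comment:

```cpp
#include <opencv2/features2d.hpp>
#include <vector>
using namespace cv;

// Keep only matches whose best distance beats fRatio times the second best.
std::vector<DMatch> strongMatches(const Mat &descLeft, const Mat &descRight,
                                  float fRatio = 0.8f)
{
    BFMatcher matcher(NORM_L2);
    std::vector<std::vector<DMatch>> knn;
    matcher.knnMatch(descLeft, descRight, knn, 2);  // two nearest neighbours per query
    std::vector<DMatch> strong;
    for (const std::vector<DMatch> &m : knn)
        if (m.size() == 2 && m[0].distance < fRatio * m[1].distance)
            strong.push_back(m[0]);                 // passes Lowe's test
    return strong;
}
```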
modules/structured_light/include/opencv2/structured_light/graycodepattern.hpp
Lines changed: 2 additions & 2 deletions

@@ -137,7 +137,7 @@ class CV_EXPORTS_W GrayCodePattern : public StructuredLightPattern
 * @param patternImages The pattern images acquired by the camera, stored in a grayscale vector < Mat >.
 * @param x x coordinate of the image pixel.
 * @param y y coordinate of the image pixel.
-* @param projPix Projector's pixel corresponding to the camera's pixel: projPix.x and projPix.y are the image coordinates of the projectors pixel corresponding to the pixel being decoded in a camera.
+* @param projPix Projector's pixel corresponding to the camera's pixel: projPix.x and projPix.y are the image coordinates of the projector's pixel corresponding to the pixel being decoded in a camera.
 */
 CV_WRAP
 virtual bool getProjPixel( InputArrayOfArrays patternImages, int x, int y, Point &projPix ) const = 0;
@@ -146,4 +146,4 @@ class CV_EXPORTS_W GrayCodePattern : public StructuredLightPattern
 //! @}
 }
 }
-#endif
+#endif

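A minimal sketch of the getProjPixel call documented above. The projector resolution is an assumption, and the generated pattern images are fed back in as stand-in "captures" (a real setup would pass the camera's photos of the projected patterns):

```cpp
#include <opencv2/structured_light.hpp>
#include <iostream>
#include <vector>
using namespace cv;
using namespace cv::structured_light;

int main()
{
    GrayCodePattern::Params params;
    params.width = 1024;   // assumed projector resolution
    params.height = 768;
    Ptr<GrayCodePattern> pattern = GrayCodePattern::create(params);

    std::vector<Mat> patternImages;
    pattern->generate(patternImages);  // images to be projected

    // Stand-in for real camera captures of the projection.
    Point projPix;
    if (pattern->getProjPixel(patternImages, 320, 240, projPix))
        std::cout << "(320,240) -> projector ("
                  << projPix.x << "," << projPix.y << ")\n";
    return 0;
}
```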
modules/structured_light/include/opencv2/structured_light/structured_light.hpp
Lines changed: 2 additions & 2 deletions

@@ -53,7 +53,7 @@ namespace structured_light {
 // other algorithms can be implemented
 enum
 {
-    DECODE_3D_UNDERWORLD = 0 //!< Kyriakos Herakleous, Charalambos Poullis. 3DUNDERWORLD-SLS: An Open-Source Structured-Light Scanning System for Rapid Geometry Acquisition, arXiv preprint arXiv:1406.6595 (2014).
+    DECODE_3D_UNDERWORLD = 0 //!< Kyriakos Herakleous, Charalambos Poullis. "3DUNDERWORLD-SLS: An Open-Source Structured-Light Scanning System for Rapid Geometry Acquisition", arXiv preprint arXiv:1406.6595 (2014).
 };
 
 /** @brief Abstract base class for generating and decoding structured light patterns.
@@ -88,4 +88,4 @@ class CV_EXPORTS_W StructuredLightPattern : public virtual Algorithm
 
 }
 }
-#endif
+#endif

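For completeness, a sketch of how the DECODE_3D_UNDERWORLD flag above is passed to StructuredLightPattern::decode. The stereo layout of capturedImages (one inner vector of photos per camera) is an assumption taken from the Gray-code decoder's expectations:

```cpp
#include <opencv2/structured_light.hpp>
#include <vector>
using namespace cv;
using namespace cv::structured_light;

// 'pattern' is any concrete StructuredLightPattern; 'capturedImages' holds the
// acquired pattern photos, one inner vector per camera (assumed stereo pair).
bool decodeToDisparity(const Ptr<StructuredLightPattern> &pattern,
                       const std::vector<std::vector<Mat> > &capturedImages,
                       Mat &disparityMap)
{
    return pattern->decode(capturedImages, disparityMap, noArray(), noArray(),
                           DECODE_3D_UNDERWORLD);  // the only decode flag defined above
}
```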
modules/text/include/opencv2/text/textDetector.hpp
Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@
 #ifndef __OPENCV_TEXT_TEXTDETECTOR_HPP__
 #define __OPENCV_TEXT_TEXTDETECTOR_HPP__
 
-#include"ocr.hpp"
+#include "ocr.hpp"
 
 namespace cv
 {

modules/text/tutorials/install_tesseract/install_tesseract.markdown
Lines changed: 1 addition & 1 deletion

@@ -113,4 +113,4 @@ CMAKE_OPTIONS='-DBUILD_PERF_TESTS:BOOL=OFF -DBUILD_TESTS:BOOL=OFF -DBUILD_DOCS:B
 @endcode
 -# now we need the language files from tesseract. either clone https://github.com/tesseract-ocr/tessdata, or copy only those language files you need to a folder (example c:\\lib\\install\\tesseract\\tessdata). If you don't want to add a new folder you must copy language file in same folder than your executable
 -# if you created a new folder, then you must add a new variable, TESSDATA_PREFIX with the value c:\\lib\\install\\tessdata to your system's environment
--# add c:\\Lib\\install\\leptonica\\bin and c:\\Lib\\install\\tesseract\\bin to your PATH environment. If you don't want to modify the PATH then copy tesseract400.dll and leptonica-1.74.4.dll to the same folder than your exe file.
+-# add c:\\Lib\\install\\leptonica\\bin and c:\\Lib\\install\\tesseract\\bin to your PATH environment. If you don't want to modify the PATH then copy tesseract400.dll and leptonica-1.74.4.dll to the same folder than your exe file.

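Once tessdata is reachable (via TESSDATA_PREFIX or the datapath argument), the text module can call into Tesseract. A minimal sketch; the NULL datapath defers to TESSDATA_PREFIX as set up in the tutorial, and the image name is an assumption:

```cpp
#include <opencv2/text.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>
using namespace cv;

int main()
{
    // NULL datapath -> Tesseract falls back to TESSDATA_PREFIX; "eng" assumes
    // the English language file was installed as described above.
    Ptr<text::OCRTesseract> ocr = text::OCRTesseract::create(NULL, "eng");
    Mat image = imread("sample.png");  // hypothetical input image
    std::string recognized;
    ocr->run(image, recognized);
    std::cout << recognized << std::endl;
    return 0;
}
```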