
Commit b74d285 (1 parent: ee08b01)

Author: Pashchenkov Maxim

Commit message: Applying comments + added check for negative value of source

File tree: 3 files changed (+20, -4 lines)


demos/gesture_recognition_demo/cpp_gapi/README.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -22,9 +22,9 @@ The demo workflow is the following:
 
 > **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#general-conversion-parameters).
 
-## Creating a Gallery for gestures window
+## Creating a Gallery for Gestures Window
 
-The gallery of sample videos can be created to show the sample gestures on an additional window:
+The gallery of sample videos and list of paths to gesture videos must be created to show the sample gestures on an additional window:
 
 1. Put videos containing gestures to a separate empty folder. Each video must have only one gesture.
 2. Run the `python3 <omz_dir>/demos/gesture_recognition_demo/cpp_gapi/create_list.py --classes_map <path_to_a_file_with_gesture_classes> --gesture_storage <path_to_directory_with_gesture_videos>` command, which will create a `gesture_gallery.json` file with list of gestures and paths to appropriate videos.
```
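The gallery-building step described in the README can be sketched in Python. The helper below is hypothetical: the real `create_list.py` and the exact `gesture_gallery.json` schema are not shown in this commit, so the one-video-per-class file-naming convention and the flat class-to-path mapping are assumptions.

```python
import json
from pathlib import Path

def build_gallery(classes, storage_dir):
    """Map each known gesture class to the path of its sample video.

    Assumes one video per gesture and that the file stem names the class;
    the actual create_list.py may use a different convention.
    """
    gallery = {}
    for video in sorted(Path(storage_dir).glob('*.mp4')):
        if video.stem in classes:
            gallery[video.stem] = str(video)
    return gallery

# Hypothetical usage, mirroring what the README's step 2 produces:
# gallery = build_gallery(['hello', 'stop'], 'gestures/')
# with open('gesture_gallery.json', 'w') as f:
#     json.dump(gallery, f, indent=2)
```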

demos/gesture_recognition_demo/cpp_gapi/create_list.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -20,7 +20,7 @@
 import json
 import argparse
 
-parser = argparse.ArgumentParser(description='')
+parser = argparse.ArgumentParser()
 
 parser.add_argument('--gesture_storage',
                     help='Path to the gesture directory')
```
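The one-line change above drops the empty `description=''`: with no argument, `argparse` leaves the description unset (`None`) instead of rendering a blank description line in `--help`. A minimal sketch of the parser after the change (the sample value `gestures/` is illustrative):

```python
import argparse

# Parser as it looks after the change: no empty description string.
parser = argparse.ArgumentParser()
parser.add_argument('--gesture_storage',
                    help='Path to the gesture directory')

args = parser.parse_args(['--gesture_storage', 'gestures/'])
print(args.gesture_storage)  # gestures/
```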

demos/gesture_recognition_demo/cpp_gapi/include/stream_source.hpp

Lines changed: 17 additions & 1 deletion

```diff
@@ -5,6 +5,7 @@
 #pragma once
 
 #include <utils/images_capture.h>
+#include <utils/slog.hpp>
 #include <opencv2/gapi.hpp>
 #include <chrono>
 #include <thread>
@@ -93,7 +94,12 @@ class CustomCapSource : public cv::gapi::wip::IStreamSource
                     const cv::Size& frame_size,
                     const int batch_size,
                     const float batch_fps)
-        : cap(cap), producer(batch_size, batch_fps) {
+        : cap(cap), producer(batch_size, batch_fps), source_fps(cap->fps()) {
+        if (source_fps <= 0.) {
+            source_fps = 30.;
+            wait_gap = true;
+            slog::warn << "Got a non-positive value as FPS of the input. Interpret it as 30 FPS" << slog::endl;
+        }
         /** Create and get first image for batch **/
         GAPI_Assert(first_batch.empty());
         if (batch_size == 0 || batch_size == 1) {
@@ -119,6 +125,8 @@ class CustomCapSource : public cv::gapi::wip::IStreamSource
 protected:
     std::shared_ptr<ImagesCapture> cap; // wrapper for cv::VideoCapture
     BatchProducer producer; // class batch-constructor
+    double source_fps = 0.; // input source framerate
+    bool wait_gap = false; // waiting for fast frame reading (stop main thread when got a non-positive FPS value)
     bool first_pulled = false; // is first already pulled
     std::vector<cv::Mat> first_batch; // batch from constructor
     cv::Mat fast_frame; // frame from cv::VideoCapture
@@ -147,6 +155,14 @@ class CustomCapSource : public cv::gapi::wip::IStreamSource
 
     /** Put fast frame to the batch **/
     producer.fillFastFrame(fast_frame);
+    if (wait_gap) {
+        const auto cur_step = std::chrono::steady_clock::now() - read_time;
+        const auto gap = std::chrono::duration_cast<std::chrono::milliseconds>(cur_step).count();
+        const int time_step = int(1000.f / float(source_fps));
+        if (gap < time_step) {
+            std::this_thread::sleep_for(std::chrono::milliseconds(time_step - gap));
+        }
+    }
 
     /** Put pulled batch to GRunArg data **/
     cv::detail::VectorRef ref(std::move(producer.getBatch()));
```
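The C++ additions above do two things: substitute 30 FPS when the capture reports a non-positive rate, and then throttle the fast-read loop so frames are consumed at that substituted rate. A standalone Python sketch of the same arithmetic, with names mirroring the diff (the `make_pacer` helper itself is hypothetical, not part of the demo):

```python
def make_pacer(reported_fps):
    """Return (effective_fps, sleep_ms) mirroring the commit's logic:
    a non-positive reported FPS becomes 30, and only then is the
    read loop throttled to that rate."""
    source_fps = reported_fps
    wait_gap = False
    if source_fps <= 0.0:
        source_fps = 30.0  # same fallback the commit warns about
        wait_gap = True
    time_step = int(1000.0 / source_fps)  # ms budget per frame

    def sleep_ms(elapsed_ms):
        # Milliseconds still to wait after a frame read took `elapsed_ms`;
        # sources with a valid FPS are never throttled (wait_gap is False).
        if not wait_gap or elapsed_ms >= time_step:
            return 0
        return time_step - elapsed_ms

    return source_fps, sleep_ms

fps, sleep_ms = make_pacer(0.0)  # e.g. a camera that reports FPS = 0
print(fps, sleep_ms(10))  # 30.0 23  (33 ms budget, 10 ms already spent)
```

Throttling only when the FPS had to be guessed preserves the old fast-read behavior for sources that report a valid rate, which is why the commit gates the sleep behind `wait_gap` rather than pacing every input.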
