# Frequently asked questions (SVET sample application)

## Where can I find the description of the options used in a par file?
See chapter 2.4 in doc/svet_sample_application_user_guide_2020.1.0.pdf.
Running the SVET sample application with the option "-?" prints the usage of all options.

## Why does the system need to be switched to text mode before running the sample application?
The sample application uses libDRM to render the video directly to the display, so it needs to act as the DRM master, which isn't allowed while an X server is running.
If the par file doesn't include a display session, there is no need to switch to text mode.
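On systemd-based distributions, one common way to switch between text mode and the graphical desktop is shown below; the exact commands may differ on your system.
```bash
# Switch to text mode (stops the display manager / X server)
sudo systemctl isolate multi-user.target

# Switch back to the graphical desktop afterwards
sudo systemctl isolate graphical.target
```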

## Why is "su -p" needed to switch to the root user before running the sample application?
Becoming the DRM master requires root privileges. The "-p" option preserves environment variables such as LIBVA_DRIVERS_PATH, LIBVA_DRIVER_NAME and LD_LIBRARY_PATH. Without "-p", these environment variables are reset and the sample application will run into problems.
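For example, assuming the media stack variables were exported in the current user's shell (the paths below are illustrative):
```bash
# Media stack environment exported by the normal user (illustrative paths)
export LIBVA_DRIVERS_PATH=/usr/lib/x86_64-linux-gnu/dri
export LIBVA_DRIVER_NAME=iHD
export LD_LIBRARY_PATH=/opt/intel/mediasdk/lib:$LD_LIBRARY_PATH

# "-p" keeps the variables above when switching to root;
# a plain "su" resets them and the sample application may fail to load the media driver
su -p
```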

## The loading time of the 16-channel face detection demo is too long
Please enable cl_cache by running the commands "mkdir -p /tmp/cl_cache" and "export cl_cache_dir=/tmp/cl_cache". After the first run of the 16-channel face detection demo, the compiled OpenCL kernels are cached, and the model loading time of subsequent runs of the demo will only take about 10 seconds.
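For example:
```bash
# Create the cache directory and point the OpenCL compiler cache at it
mkdir -p /tmp/cl_cache
export cl_cache_dir=/tmp/cl_cache
```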
More details about cl_cache can be found at https://github.com/intel/compute-runtime/blob/master/opencl/doc/FAQ.md

## Can the number of sources for "-vpp_comp_only" or "-vpp_comp" be different from the number of decoding sessions?
No. The number of sources for "-vpp_comp_only" or "-vpp_comp" must be equal to the number of decoding sessions. Otherwise, the sample application will fail during pipeline initialization or while running.
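For illustration, a hypothetical par-file fragment with two decoding sessions feeding a two-source composition session could look like the following; the decode and composition options shown here are placeholders, see chapter 2.4 of the user guide for the actual syntax:
```
-i::h264 video/input_0.h264 -hw -async 4 -o::sink
-i::h264 video/input_1.h264 -hw -async 4 -o::sink
-vpp_comp_only 2 -w 1920 -h 1080 -hw -async 4 -i::source
```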

## How do I limit the fps of the whole pipeline to 30?
Add "-fps 30" to every decoding session.
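For example, in the hypothetical two-session fragment below, every decoding session line carries "-fps 30" (the other options are placeholders):
```
-i::h264 video/input_0.h264 -hw -async 4 -fps 30 -o::sink
-i::h264 video/input_1.h264 -hw -async 4 -fps 30 -o::sink
-vpp_comp_only 2 -w 1920 -h 1080 -hw -async 4 -i::source
```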

## How do I limit the number of input frames to 1000?
Add "-n 1000" to every decoding session. However, this option won't work if both "-vpp_comp_only" and "-vpp_comp" are set.
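Analogous to the "-fps" example above, a hypothetical decoding session line would look like this (options other than "-n 1000" are placeholders):
```
-i::h264 video/input_0.h264 -hw -async 4 -n 1000 -o::sink
```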

## Where can I find tutorials for the inference engine?
Please refer to https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html

## Where can I find information about the models?
Please refer to https://github.com/opencv/open_model_zoo/tree/master/models/intel. The models used in the sample application are
face-detection-retail-0004, human-pose-estimation-0001, vehicle-attributes-recognition-barrier-0039 and vehicle-license-plate-detection-barrier-0106.

## Can I use an OpenVINO version other than 2019 R3?
Yes, but you have to modify some code because the interfaces have changed, and you also need to download the IR files and copy them to ./model manually. Please refer to script/download_and_copy_models.sh for how to download the IR files.
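As a rough sketch, the IR files can be fetched with the Open Model Zoo downloader and copied into ./model; the downloader location, output layout and precision directory below are assumptions that vary between OpenVINO releases:
```bash
# Illustrative paths; adjust to your OpenVINO installation
cd /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader
python3 downloader.py --name face-detection-retail-0004 -o /tmp/omz_models

# Copy the downloaded IR files into the sample application's model directory
cp /tmp/omz_models/intel/face-detection-retail-0004/FP16/* <path_to_svet_repo>/model/
```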