This repository was archived by the owner on Sep 30, 2024. It is now read-only.
Main changes:
1. Upgrade MediaSDK to oneVPL dispatcher v2022.0.3 and oneVPL GPU runtime 22.3.2. The libva, gmmlib and media-driver versions are also upgraded.
2. Upgrade the OpenVINO version to 2022.1.
3. Add support for Intel ADL platforms.
4. Drop support for old Intel platforms such as Skylake, Kaby Lake, Apollo Lake and Coffee Lake.

Signed-off-by: Elaine Wang <[email protected]>
This script installs the dependent software packages by running `apt install`, so it will ask for the sudo password. It then downloads the libva, libva-utils, gmmlib, media-driver, onevpl and onevpl_gpu source code and installs these libraries. It might take 10 to 20 minutes depending on the network bandwidth.
After the script finishes, the sample application video_e2e_sample can be found under ./bin.
In order to let the libva/media-driver/onevpl installed by SVET coexist with different versions of libva/media-driver/onevpl installed on the same computer, we suggest (and our build script does this) setting the SVET media stack environment variables only in the current shell, rather than saving them to the global system environment.
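The shell-local approach can be sketched as below. The variable names are the real ones libva consults, but the `$HOME/svet` prefix is an illustrative install location, not the actual path used by svet_env_setup.sh:

```shell
# Export library search paths only for the current shell session,
# instead of persisting them in ~/.bashrc or /etc/environment.
# The $HOME/svet prefix below is an illustrative install location.
export LIBVA_DRIVERS_PATH="$HOME/svet/lib/dri"
export LD_LIBRARY_PATH="$HOME/svet/lib:$LD_LIBRARY_PATH"
echo "$LIBVA_DRIVERS_PATH"
```

Because nothing is written to shell startup files, closing the terminal discards these settings, which is why the setup script must be re-sourced in every new shell.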
So please run `source ./svet_env_setup.sh` first whenever you start a new shell (or change the user in the shell, e.g. with `su -`) before running ./bin/video_e2e_sample.
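The reason the script must be sourced (not executed) can be demonstrated with any variable; `SVET_DEMO` below is a made-up name used purely for illustration:

```shell
# Exports made inside a child process do not survive into the parent
# shell, so running a setup script as ./script.sh would have no effect;
# it has to be sourced into the current shell instead.
sh -c 'export SVET_DEMO=1'            # child shell exits, export is lost
echo "after subshell: ${SVET_DEMO:-unset}"   # prints "after subshell: unset"
```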
Please refer to "Run sample application" in the [user guide](./doc/concurrent_video_analytic_sample_application_user_guide.pdf) for details.
n1nodsp.par is a basic test to check if the media stack works correctly.
n2fd.par runs two channels of video decode plus face detection inference without display. It can be used to check whether OpenVINO and the NEO driver are installed correctly.
Please refer to "Run sample application" in the [user guide](./doc/concurrent_video_analytic_sample_application_user_guide.pdf) for more use cases.
# Known limitations
The sample application has been validated on the Intel® platforms Tiger Lake U (i7-1185G7E, i5-1135G7E, Celeron 6305E) and Alder Lake (i5-12400).
```shell
        echo "For TigerLake CPU, please refer to user guide chapter 1.3 to upgrade the kernel with https://github.com/intel/linux-intel-lts/releases/tag/lts-v5.4.102-yocto-210310T010318Z"
    fi
    echo "Please run ./bin/video_e2e_sample for testing"
else
    echo "Please run the cmd below to setup running environment:"
    echo ""
    echo "source ./svet_env_setup.sh"
    echo ""
    echo "Then use ./bin/video_e2e_sample for testing"
    echo "IMPORTANT NOTICE: please run 'source ./svet_env_setup.sh' first when you start a new shell (or change user in shell such as run 'su -') to run ./bin/video_e2e_sample"
```
doc/FAQ.md (0 additions & 7 deletions):
Add "-fps 30" to every decoding session.
## How to limit the frame number of input to 1000?
Add "-n 1000" to every decoding session. But please do not add "-n" to encode, display or fake-sink sessions; these sink sessions stop automatically when the source session stops. Note, this option won't work if both "-vpp_comp_only" and "-vpp_comp" are set.
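As an illustration, a minimal two-channel par file using "-n" might look like the fragment below. The stream paths and composition options are placeholders in the sample_multi_transcode-style par syntax this application uses, and the "#" lines are explanatory rather than par-file syntax:

```
# decoding sessions: each stops after 1000 frames
-i::h264 video/input0.h264 -join -hw -n 1000 -o::sink
-i::h264 video/input1.h264 -join -hw -n 1000 -o::sink
# display (sink) session: no "-n" here, it stops with the sources
-vpp_comp_only 2 -w 1920 -h 1080 -join -hw
```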
## Why is the HDDL card usage ratio low for face detection inference?
It can be caused by decoded frames not being fed to the inference engine efficiently. The default inference interval of face detection is 6. You can try setting the inference interval to a lower value when using HDDL as the inference target device. For example, with 3 HDDL L2 cards, adding "-infer::interval 1" to a 16-channel face detection par file can increase the HDDL usage ratio to 100%.
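A quick back-of-the-envelope check shows why the interval matters; the 16 channels follow the example above, and 30 fps input is an assumption:

```shell
# Inference requests per second reaching the HDDL cards is roughly
# channels * fps / interval: interval 6 feeds only 80 req/s, while
# interval 1 feeds 480 req/s, enough to keep 3 L2 cards busy.
channels=16
fps=30
for interval in 6 1; do
  echo "interval=$interval -> $((channels * fps / interval)) req/s"
done
```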
## Can I use an OpenVINO version other than 2021.3?
Yes, but you have to modify some code due to interface changes. You also need to download the IR files and copy them to ./model manually. Please refer to script/download_and_copy_models.sh for how to download the IR files.