[RFC] UVC Video encoder (zephyr,videoenc) support #97425
Conversation
The UVC class used to decide by itself which formats were sent to the host. Move this logic out of the UVC class and introduce uvc_add_format() to give the application the freedom to choose which formats to list. Signed-off-by: Josuah Demangeon <[email protected]>
The UVC class now lets the application select the format list sent to the host. Leverage this in the sample to filter out any format that is not expected to work (buffer too big, rarely supported formats). Signed-off-by: Josuah Demangeon <[email protected]>
Add the USB UVC device's new uvc_add_format() function to the release notes, and document the semantic changes of the UVC APIs in the migration guide. Signed-off-by: Josuah Demangeon <[email protected]>
Currently the DCMIPP driver relies on a Kconfig option to select the right sensor resolution / format to pick. This also makes exposing the caps easier, since they can be exposed as: DUMP pipe: the same caps as selected via Kconfig; MAIN pipe: any format supported on this pipe, at resolutions from the selected sensor resolution down to 64 times smaller (the maximum downscale); AUX pipe: same as MAIN, but without the semi-planar and planar formats. Signed-off-by: Alain Volmat <[email protected]>
Some devices allow downscaling / upscaling via the set_selection compose API. When using it, a set_selection of the compose target must be performed prior to setting the format. To let compose-unaware applications benefit from it, introduce a helper which takes care of setting the compose prior to setting the format. Signed-off-by: Alain Volmat <[email protected]>
Simplify the code by using the video_set_compose_format helper. Signed-off-by: Alain Volmat <[email protected]>
Honor the CONFIG_VIDEO_BUFFER_POOL_ALIGN config by using the video_buffer_aligned_alloc function instead of video_buffer_alloc in order to provide properly aligned buffers to drivers. Signed-off-by: Alain Volmat <[email protected]>
Use the helper video_set_compose_format in order to allow controlling the compose. Signed-off-by: Alain Volmat <[email protected]>
Select from commonly used resolutions when the video device advertises capabilities using ranges. Signed-off-by: Alain Volmat <[email protected]>
Add board-specific conf files for the stm32n6570_dk. Signed-off-by: Alain Volmat <[email protected]>
Allow creating a pipeline as follows: camera receiver -> encoder -> uvc. If the chosen zephyr,videoenc is available, the sample will pipe the camera receiver to the encoder and then to the UVC device, instead of piping the camera receiver directly to the UVC. To keep the change as simple as possible, the source device of the UVC device is renamed from video_dev to uvc_src_dev since, depending on the configuration, the UVC source might be either video_dev or encoder_dev. The current implementation has several points hardcoded for the time being: 1. the intermediate pixel format between the camera receiver and the encoder is set to NV12. This is temporary until a proper analysis of the video_dev caps and the encoder caps is done, allowing the common format of the two devices to be selected. 2. the encoder device is assumed NOT to perform any resolution change, and the encoder output resolution is directly based on the camera receiver resolution. Thanks to this, the UVC exposed formats are the encoder output pixel format combined with the camera receiver resolutions. This has been tested using the STM32N6-DK and the JPEG codec, leading to the following pipe: IMX335 -> CSI/DCMIPP -> JPEG -> UVC. Signed-off-by: Alain Volmat <[email protected]>
Moving assignee to the Video subsystem maintainer, as it seems more appropriate to the content of the RFC.
Adding a DNM flag for the PR's dependencies.
Thanks for this submission. It makes sense to me as it is, and AFAICT the FIXMEs are for planning future infrastructure of the video area, not for this particular video sample.
ret = video_stream_start(video_dev, VIDEO_BUF_TYPE_OUTPUT);
if (ret != 0) {
	LOG_ERR("Failed to start %s", video_dev->name);
	return ret;
}

ret = video_stream_start(uvc_src_dev, VIDEO_BUF_TYPE_INPUT);
if (ret != 0) {
	LOG_ERR("Failed to start %s", uvc_src_dev->name);
	return ret;
}
#endif
How about using videoenc_dev for both here? Then if a videoscaler_dev is introduced, this becomes another #if DT_HAS_CHOSEN(zephyr_videoscaler) that uses the associated videoscaler_dev for both input/output?
ret = video_stream_start(uvc_src_dev, VIDEO_BUF_TYPE_OUTPUT);
if (ret != 0) {
	LOG_ERR("Failed to start %s", video_dev->name);
	return ret;
}
How about using video_dev here?
#if DT_HAS_CHOSEN(zephyr_videoenc)
	vbuf_enc_in = &(struct video_buffer){.type = VIDEO_BUF_TYPE_OUTPUT};

	if (video_dequeue(video_dev, &vbuf_enc_in, K_NO_WAIT) == 0) {
This works well for this sample but this is where things will start to become difficult.
One way to solve this may be to have a configuration table at the top:
video_dst = video_dev;
prev = &video_dst;
#if DT_HAS_CHOSEN(zephyr_videoisp)
videoisp_src = *prev;
*prev = videoisp_dev;
prev = &videoisp_dst;
#endif
#if DT_HAS_CHOSEN(zephyr_videoscaler)
videoscaler_src = *prev;
*prev = videoscaler_dev;
prev = &videoscaler_dst;
#endif
#if DT_HAS_CHOSEN(zephyr_videoenc)
videoenc_src = *prev;
*prev = videoenc_dev;
prev = &videoenc_dst;
#endif
videousb_src = *prev;
*prev = videousb_dev;
Then in each block, it is possible to just use _src and _dst.
Of course a component-based system is better, but this would be a different sample for libMP. :]
This is just for discussion and not necessarily to implement on this PR, though, as there is only one M2M device.
Allow creating a pipeline as follows:
camera receiver -> encoder -> uvc
I post this PR while there are still some points to improve (cf. the hardcoded parts detailed below) in order to get some first feedback. Since this depends on the UVC PR (PR #93192) and the DCMIPP UVC PR (PR #94562), there are lots of commits in this PR; however, only the LAST COMMIT is relevant for this PR.
If the chosen zephyr,videoenc is available, the sample will pipe the camera receiver to the encoder and then the UVC device instead of directly the camera receiver to the UVC.
To keep the change as simple as possible, the source device of the UVC device is renamed from video_dev to uvc_src_dev since, depending on the configuration, the UVC source might be either video_dev or encoder_dev.
The current implementation has several points hardcoded for the time being:
1. The intermediate pixel format between the camera receiver and the
   encoder is set to NV12. This is temporary until a proper analysis of
   the video_dev caps and the encoder caps is done, allowing the common
   format of the two devices to be selected.
2. The encoder device is assumed NOT to perform any resolution change,
   and the encoder output resolution is directly based on the camera
   receiver resolution. Thanks to this, the UVC exposed formats are the
   encoder output pixel format combined with the camera receiver
   resolutions.
This has been tested using the STM32N6-DK and the JPEG codec, leading to the following pipe:
IMX335 -> CSI/DCMIPP -> JPEG -> UVC