
Conversation


@avolmat-st avolmat-st commented Oct 12, 2025

Allow creating a pipeline as follows:
camera receiver -> encoder -> uvc

I am posting this PR while there are still some points to improve (cf. the hardcoded items detailed below) in order to get initial feedback. Since this depends on the UVC PR (PR #93192) and the DCMIPP UVC PR (PR #94562), there are many commits in this PR; however only the LAST COMMIT is relevant for this PR.

If the chosen zephyr,videoenc is available, the sample will pipe the camera receiver to the encoder and then to the UVC device, instead of piping the camera receiver directly to the UVC.

In order to keep the change as simple as possible, the source device of the UVC device is renamed from video_dev to uvc_src_dev since, depending on the configuration, the UVC source might be either the video_dev or the encoder_dev.

The current implementation has several points hardcoded for the time being:
1. The intermediate pixel format between the camera receiver and the encoder
is set to NV12. This is temporary until a proper analysis of the video_dev
caps and the encoder caps is done, allowing a format common to the two
devices to be selected.
2. It is assumed that the encoder device does NOT perform any resolution
change and that the encoder output resolution is directly based on the
camera receiver resolution. Thanks to this, the UVC exposed formats
are the encoder output pixel format combined with the camera receiver
resolutions.

This has been tested using the STM32N6-DK and the JPEG codec, leading to the following pipe:
IMX335 -> CSI/DCMIPP -> JPEG -> UVC

josuah and others added 11 commits October 10, 2025 20:10
The UVC class was deciding by itself which formats were sent to the host.
Move this logic out of the UVC class and introduce uvc_add_format() to
give the application the freedom to choose which formats to list.

Signed-off-by: Josuah Demangeon <[email protected]>
The UVC class now lets the application select the format list sent to the
host. Leverage this in the sample to filter out any format that is not
expected to work (buffer too big, rarely supported formats).

Signed-off-by: Josuah Demangeon <[email protected]>
Add USB UVC device's new uvc_add_format() function to the release note,
and document the semantic changes of UVC APIs in the migration guide.

Signed-off-by: Josuah Demangeon <[email protected]>
Currently the DCMIPP driver relies on a Kconfig option in order to
select the sensor resolution / format to pick.
This also makes exposing the caps easier, since they can
be exposed as:
  DUMP pipe: same caps as mentioned in Kconfig
  MAIN pipe: any format supported on this pipe and resolution
             starting at sensor selected resolution down to
             64 times smaller (which is the maximum of the
             downscale)
  AUX pipe: same as MAIN except without the semi-planar and
            planar formats

Signed-off-by: Alain Volmat <[email protected]>
Some devices allow downscaling / upscaling via the set_selection
compose API. When using it, it is necessary to perform a
set_selection of the compose target prior to setting the format.
In order to allow non-compose-aware applications to benefit from
it, introduce a helper which takes care of setting the compose
prior to setting the format.

Signed-off-by: Alain Volmat <[email protected]>
Simplify the code by using the video_set_compose_format helper.

Signed-off-by: Alain Volmat <[email protected]>
Honor the CONFIG_VIDEO_BUFFER_POOL_ALIGN config by using the
video_buffer_aligned_alloc function instead of video_buffer_alloc
in order to provide properly aligned buffers to drivers.

Signed-off-by: Alain Volmat <[email protected]>
Use the helper video_set_compose_format in order to
allow controlling the compose.

Signed-off-by: Alain Volmat <[email protected]>
Select from commonly used resolutions when the video device
advertises capabilities using ranges.

Signed-off-by: Alain Volmat <[email protected]>
Add board-specific conf files for the stm32n6570_dk

Signed-off-by: Alain Volmat <[email protected]>
Allow creating a pipeline as follows:
   camera receiver -> encoder -> uvc

If the chosen zephyr,videoenc is available, the sample will pipe
the camera receiver to the encoder and then to the UVC device,
instead of piping the camera receiver directly to the UVC.

In order to keep the change as simple as possible, the source
device of the UVC device is renamed from video_dev to uvc_src_dev
since, depending on the configuration, the UVC source might be
either the video_dev or the encoder_dev.

The current implementation has several points hardcoded for the time
being:
1. The intermediate pixel format between the camera receiver and the
   encoder is set to NV12. This is temporary until a proper analysis
   of the video_dev caps and the encoder caps is done, allowing a
   format common to the two devices to be selected.
2. It is assumed that the encoder device does NOT perform any
   resolution change and that the encoder output resolution is
   directly based on the camera receiver resolution. Thanks to this,
   the UVC exposed formats are the encoder output pixel format
   combined with the camera receiver resolutions.

This has been tested using the STM32N6-DK and the JPEG codec, leading
to the following pipe:

IMX335 -> CSI/DCMIPP -> JPEG -> UVC

Signed-off-by: Alain Volmat <[email protected]>

@erwango erwango assigned josuah and unassigned jfischer-no Oct 13, 2025

erwango commented Oct 13, 2025

Moving assignee to Video subsystem maintainer, as it seems more appropriate to the content of the RFC

@josuah josuah added the DNM This PR should not be merged (Do Not Merge) label Oct 14, 2025

josuah commented Oct 14, 2025

Adding a DNM flag for the PR dependencies.
Please feel free to remove it once the dependencies are merged and this PR is rebased.

@josuah josuah left a comment

Thanks for this submission. It makes sense to me as it is, and as far as I can tell the FIXMEs are for planning future infrastructure of the video area, not for this particular video sample.

Comment on lines 359 to +369
ret = video_stream_start(video_dev, VIDEO_BUF_TYPE_OUTPUT);
if (ret != 0) {
	LOG_ERR("Failed to start %s", video_dev->name);
	return ret;
}

ret = video_stream_start(uvc_src_dev, VIDEO_BUF_TYPE_INPUT);
if (ret != 0) {
	LOG_ERR("Failed to start %s", uvc_src_dev->name);
	return ret;
}
#endif

How about using videoenc_dev for both here?

Then if a videoscaler_dev is introduced, this becomes another #if DT_HAS_CHOSEN(zephyr_videoscaler) that uses the associated videoscaler_dev for both input/output?

Comment on lines +371 to +375
ret = video_stream_start(uvc_src_dev, VIDEO_BUF_TYPE_OUTPUT);
if (ret != 0) {
	LOG_ERR("Failed to start %s", video_dev->name);
	return ret;
}

How about using video_dev here?

#if DT_HAS_CHOSEN(zephyr_videoenc)
vbuf_enc_in = &(struct video_buffer){.type = VIDEO_BUF_TYPE_OUTPUT};

if (video_dequeue(video_dev, &vbuf_enc_in, K_NO_WAIT) == 0) {

This works well for this sample but this is where things will start to become difficult.

One way to solve this maybe is to have a configuration table at the top:

video_dst = video_dev;
prev = &video_dst;

#if DT_HAS_CHOSEN(zephyr_videoisp)
videoisp_src = *prev;
*prev = videoisp_dev;
prev = &videoisp_dst;
#endif

#if DT_HAS_CHOSEN(zephyr_videoscaler)
videoscaler_src = *prev;
*prev = videoscaler_dev;
prev = &videoscaler_dst;
#endif

#if DT_HAS_CHOSEN(zephyr_videoenc)
videoenc_src = *prev;
*prev = videoenc_dev;
prev = &videoenc_dst;
#endif

videousb_src = *prev;
*prev = videousb_dev;

Then in each block, it is possible to just use _src and _dst.

Of course a component-based system is better, but this would be a different sample for libMP. :]


This is just for discussion and not necessarily to implement in this PR, though, as there is only one M2M device.


Labels

area: Samples
area: USB (Universal Serial Bus)
area: Video (Video subsystem)
DNM (This PR should not be merged, Do Not Merge)
platform: STM32 (ST Micro STM32)
Release Notes (To be mentioned in the release notes)

5 participants