
Conversation

@NicolasHug
Contributor

I could be wrong!
But if I'm not, I think we should just indicate that it's ignored on CPU. I don't think it's worth raising an error when users pass num_ffmpeg_threads=3, device="cuda", because that makes it really annoying to run generic loops and comparisons, e.g. code like

for device in ("cuda", "cpu"):
    decoder = VideoDecoder(..., num_ffmpeg_threads=1, device=device)  # no point erroring here

@facebook-github-bot added the CLA Signed label (managed by the Meta Open Source bot) on Nov 8, 2024
@scotts
Contributor

scotts commented Nov 8, 2024

Seems correct to me!

@ahmadsharif1
Contributor

I don't think this is ignored for CUDA.

https://github.com/pytorch/torchcodec/blob/43ee8076e861a975a93fd019a95bf1860f827edf/src/torchcodec/decoders/_core/VideoDecoder.cpp#L438

It would be nice to run some experiments to see how it affects CUDA performance. If we can confirm it doesn't affect performance, we can gate setting the threads on device == "cpu".
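A minimal sketch of the gating idea described above. This is a hypothetical helper, not TorchCodec's actual implementation; the function name and the fallback value of 0 (FFmpeg's "auto" thread count) are assumptions for illustration:

```python
# Hypothetical sketch: only honor the user-supplied FFmpeg thread count
# on CPU, and silently fall back to FFmpeg's automatic thread selection
# (thread_count=0) on CUDA or when the option is unset.
from typing import Optional


def resolve_ffmpeg_thread_count(device: str, num_ffmpeg_threads: Optional[int]) -> int:
    """Return the thread count to hand to FFmpeg for the given device."""
    if device == "cpu" and num_ffmpeg_threads is not None:
        return num_ffmpeg_threads
    # On CUDA the option is ignored rather than raising, so generic
    # loops over devices (see the snippet above) keep working.
    return 0
```

This keeps the permissive behavior NicolasHug asked for: passing num_ffmpeg_threads with device="cuda" is a no-op instead of an error.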

@NicolasHug
Contributor Author

Thanks for your input @ahmadsharif1. I'll close this, and we can follow up on #353 then.
