
Conversation

@jonrecker commented Aug 23, 2025

  • Add check_decoder_support function to benchmark_decoders_library (see the sketch after this list)
  • Add argument parsing in generate_readme_data:
    --decoders option to specify list of decoders to test
    --resize_device option to specify device for resize
  • Warn and skip tests if requested backends (xpu, cuda) or implementations (torchaudio, torchvision) are not supported
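
A minimal sketch of what a support check along these lines could look like. The function name comes from the PR description, but the body below is an illustrative assumption, not the code from this change:

import importlib.util
import torch

def check_decoder_support(decoder_type: str) -> bool:
    # Illustrative sketch only; the actual function in the PR may differ.
    if decoder_type == "cuda":
        return torch.cuda.is_available()
    if decoder_type == "xpu":
        # torch.xpu only exists in builds with XPU support, hence the hasattr guard.
        return hasattr(torch, "xpu") and torch.xpu.is_available()
    if decoder_type in ("torchvision", "torchaudio"):
        # Only checks importability; decoding can still fail for other reasons.
        return importlib.util.find_spec(decoder_type) is not None
    # "cpu" (and anything unrecognized here) is treated as supported.
    return True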

New options for generate_readme_data.py:

python3 generate_readme_data.py --help
usage: generate_readme_data.py [-h] [--decoders DECODERS] [--resize_device RESIZE_DEVICE]

options:
  -h, --help            show this help message and exit
  --decoders DECODERS   Comma-separated list of decoders to benchmark. Valid options: cpu, cuda, xpu, torchvision, torchaudio
  --resize_device RESIZE_DEVICE
                        Device for resize. Default: cuda if available, else cpu. Valid options: cpu, cuda, xpu
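
The corresponding argument parsing in generate_readme_data.py could be set up roughly as follows. This is a sketch that paraphrases the --help output above; the real defaults and wiring may differ:

import argparse
import torch

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--decoders",
        # The actual default is not shown in the --help output above.
        default=None,
        help="Comma-separated list of decoders to benchmark. "
        "Valid options: cpu, cuda, xpu, torchvision, torchaudio",
    )
    parser.add_argument(
        "--resize_device",
        default="cuda" if torch.cuda.is_available() else "cpu",
        help="Device for resize. Default: cuda if available, else cpu. "
        "Valid options: cpu, cuda, xpu",
    )
    return parser.parse_args()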

Sample command lines for running the benchmark scripts with xpu enabled:

python3 benchmark_decoders.py --decoders torchcodec_core:device=xpu,torchcodec_core_batch:device=xpu,torchcodec_public:device=xpu,torchcodec_public_nonbatch:device=xpu
python3 gpu_benchmark.py --devices xpu,cpu --resize_devices xpu,cpu,native,none
python3 generate_readme_data.py --decoders cpu,xpu
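
The benchmark_decoders.py invocation above passes per-decoder options in a name:key=value form. A small helper to split such a spec string might look like the following; this is a guess at the shape of the parsing, not the script's actual code:

def parse_decoder_specs(spec: str) -> dict:
    # e.g. "torchcodec_core:device=xpu,torchcodec_public:device=xpu"
    decoders = {}
    for entry in spec.split(","):
        name, _, opts = entry.partition(":")
        options = {}
        if opts:
            # The separator for multiple options is an assumption; the
            # command lines above only show a single device=... option.
            for opt in opts.split("+"):
                key, _, value = opt.partition("=")
                options[key] = value
        decoders[name] = options
    return decoders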

@jonrecker force-pushed the jonrecker/enable-xpu branch from 5025b6d to 97c1ec2 on August 24, 2025 03:46
@jonrecker marked this pull request as ready for review on August 25, 2025 01:09
In benchmarks/decoders/benchmark_decoders_library.py (excerpt):

return True

if decoder_type == "torchvision":
try:
Owner:
This does not quite help:

$ python3 benchmark_decoders.py --decoders decord,decord_batch,torchcodec_core,torchcodec_core_batch,torchcodec_core_compiled,torchcodec_public,torchcodec_public_nonbatch
Traceback (most recent call last):
  File "/home/dvrogozh/git/pytorch/torchcodec/benchmarks/decoders/benchmark_decoders.py", line 172, in <module>
    main()
  File "/home/dvrogozh/git/pytorch/torchcodec/benchmarks/decoders/benchmark_decoders.py", line 129, in main
    decoders_to_run[display] = kind(**options)
  File "/home/dvrogozh/git/pytorch/torchcodec/benchmarks/decoders/benchmark_decoders_library.py", line 382, in __init__
    from torchvision.transforms import v2 as transforms_v2
ModuleNotFoundError: No module named 'torchvision'

Author (jonrecker):

This check was only added in generate_readme_data.py, since the other scripts let the user specify which decoders to test. But it is no problem to add checks to the other benchmarks as well. The latest commit updates benchmark_decoders.py and gpu_benchmark.py to fail gracefully if dependencies are missing and to run the remaining tests where possible.
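
One common pattern for this kind of graceful degradation is to probe construction of each decoder up front and skip just that decoder with a warning. The helper name and structure below are hypothetical, not the code from the latest commit:

import warnings

def try_build_decoder(display_name, kind, options):
    # Hypothetical helper: build a decoder entry, but warn and skip it instead
    # of crashing when an optional dependency (e.g. torchvision) is missing.
    try:
        return kind(**options)
    except ImportError as e:
        warnings.warn(
            f"Skipping {display_name}: missing dependency ({e}). "
            "Remaining decoders will still be benchmarked."
        )
        return None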

