
RuntimeError: Could not load libtorchcodec when torchcodec is installed along with torch 2.9 RC #912

@ydshieh

Description


🐛 Describe the bug

Install transformers

git clone https://github.com/huggingface/transformers.git && cd transformers && git fetch origin && git checkout bb45d3631ec7026db04a77d33a52b31766372160 && pip install -e .[torch,testing]

Install torch, torchvision and torchcodec

python3 -m pip uninstall -y torch torchvision torchaudio codecarbon torchcodec
python3 -m pip install --no-cache-dir torch==2.9.0 torchvision --index-url https://download.pytorch.org/whl/test/cu126
pip install --progress-bar=off --pre torchcodec --index-url="https://download.pytorch.org/whl/nightly/cu126"

What we get

torch==2.9.0+cu126
torchcodec==0.7.0.dev20250929+cu126
torchvision==0.24.0+cu126
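With this combination the failure reproduces without transformers at all: the datasets audio feature simply imports torchcodec, and a bare import should already raise the RuntimeError from the title. A minimal sketch:

# Minimal repro sketch: importing the decoders runs
# load_torchcodec_shared_libraries() at import time, which fails here with
# "RuntimeError: Could not load libtorchcodec" under torch 2.9.0+cu126.
from torchcodec.decoders import AudioDecoder  # noqa: F401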

Same error if

python3 -m pip install --no-cache-dir torch==2.9.0 torchvision torchcodec --index-url https://download.pytorch.org/whl/test/cu126

torch==2.9.0+cu126
torchcodec==0.4.0+cu126
torchvision==0.24.0+cu126

No error if

python3 -m pip install torch torchvision torchcodec --index-url https://download.pytorch.org/whl/cu126


torch==2.8.0+cu126
torchcodec==0.7.0+cu126
torchvision==0.23.0+cu126

(so it should not be an issue with FFmpeg on the CI runner)
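As a rough sanity check for that claim, something like the sketch below (assuming the runner's FFmpeg exposes the usual libav* shared-library names) confirms the FFmpeg runtime libraries are visible to the dynamic linker:

# Rough sanity check (assumption: FFmpeg on the runner ships the usual libav* names).
# If these resolve, the FFmpeg runtime itself is unlikely to be what breaks the load.
from ctypes.util import find_library

for name in ("avutil", "avcodec", "avformat", "avdevice", "avfilter"):
    print(f"{name}: {find_library(name)}")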

Run test

RUN_SLOW=1 python3 -m pytest -v tests/models/clvp/test_modeling_clvp.py::ClvpModelForConditionalGenerationTest::test_config_attn_implementation_setter

Error

================================================================================================ test session starts ================================================================================================
platform linux -- Python 3.10.12, pytest-8.4.2, pluggy-1.6.0 -- /usr/bin/python3
cachedir: .pytest_cache
hypothesis profile 'default'
rootdir: /transformers
configfile: pyproject.toml
plugins: typeguard-4.4.4, hydra-core-1.3.2, anyio-4.11.0, hypothesis-6.140.2, asyncio-1.2.0, order-1.3.0, rerunfailures-15.1, rich-0.2.0, timeout-2.4.0, xdist-3.8.0
asyncio: mode=strict, debug=False, asyncio_default_fixture_loop_scope=function, asyncio_default_test_loop_scope=function
collected 1 item                                                                                                                                                                                                    

tests/models/clvp/test_modeling_clvp.py::ClvpModelForConditionalGenerationTest::test_config_attn_implementation_setter FAILED                                                                                 [100%]

===================================================================================================== FAILURES ======================================================================================================
___________________________________________________________________ ClvpModelForConditionalGenerationTest.test_config_attn_implementation_setter ____________________________________________________________________

self = <tests.models.clvp.test_modeling_clvp.ClvpModelForConditionalGenerationTest testMethod=test_config_attn_implementation_setter>

    def test_config_attn_implementation_setter(self):
>       config, _ = self.model_tester.prepare_config_and_inputs_for_common()

tests/test_modeling_common.py:4355:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/models/clvp/test_modeling_clvp.py:398: in prepare_config_and_inputs_for_common
    config_and_inputs = self.prepare_config_and_inputs()
tests/models/clvp/test_modeling_clvp.py:376: in prepare_config_and_inputs
    audio = ds.sort("id")[0]["audio"]
/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2862: in __getitem__
    return self._getitem(key)
/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2844: in _getitem
    formatted_output = format_table(
/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:658: in format_table
    return formatter(pa_table, query_type=query_type)
/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:411: in __call__
    return self.format_row(pa_table)
/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:460: in format_row
    row = self.python_features_decoder.decode_row(row)
/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:224: in decode_row
    return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
/usr/local/lib/python3.10/dist-packages/datasets/features/features.py:2096: in decode_example
    return {
/usr/local/lib/python3.10/dist-packages/datasets/features/features.py:2097: in <dictcomp>
    column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
/usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1409: in decode_nested_example
    return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None                                                                                                              
/usr/local/lib/python3.10/dist-packages/datasets/features/audio.py:179: in decode_example
    from ._torchcodec import AudioDecoder
/usr/local/lib/python3.10/dist-packages/datasets/features/_torchcodec.py:2: in <module>
    from torchcodec.decoders import AudioDecoder as _AudioDecoder
/usr/local/lib/python3.10/dist-packages/torchcodec/__init__.py:10: in <module>
    from . import decoders, samplers  # noqa                                                                                                                                                                         
/usr/local/lib/python3.10/dist-packages/torchcodec/decoders/__init__.py:7: in <module>
    from .._core import AudioStreamMetadata, VideoStreamMetadata
/usr/local/lib/python3.10/dist-packages/torchcodec/_core/__init__.py:8: in <module>
    from ._metadata import (
/usr/local/lib/python3.10/dist-packages/torchcodec/_core/_metadata.py:16: in <module>
    from torchcodec._core.ops import (
/usr/local/lib/python3.10/dist-packages/torchcodec/_core/ops.py:84: in <module>
    load_torchcodec_shared_libraries()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def load_torchcodec_shared_libraries():
        # Successively try to load the shared libraries for each version of FFmpeg                                                                                                                                   
        # that we support. We always start with the highest version, working our way                                                                                                                                 
        # down to the lowest version. Once we can load ALL shared libraries for a                                                                                                                                    
        # version of FFmpeg, we have succeeded and we stop.                                                                                                                                                          
        #                                                                                                                                                                                                            
        # Note that we use two different methods for loading shared libraries:                                                                                                                                       
        #                                                                                                                                                                                                            
        #   1. torch.ops.load_library(): For PyTorch custom ops and the C++ only                                                                                                                                     
        #      libraries the custom ops depend on. Loading libraries through PyTorch                                                                                                                                 
        #      registers the custom ops with PyTorch's runtime and the ops can be                                                                                                                                    
        #      accessed through torch.ops after loading.                                                                                                                                                             
        #                                                                                                                                                                                                            
        #   2. importlib: For pybind11 modules. We load them dynamically, rather                                                                                                                                     
        #      than using a plain import statement. A plain import statement only                                                                                                                                    
        #      works when the module name and file name match exactly. Our shared                                                                                                                                    
        #      libraries do not meet those conditions.                                                                                                                                                               

        exceptions = []
        for ffmpeg_major_version in (7, 6, 5, 4):
            pybind_ops_module_name = _get_pybind_ops_module_name(ffmpeg_major_version)
            decoder_library_name = f"libtorchcodec_core{ffmpeg_major_version}"                                                                                                                                       
            custom_ops_library_name = f"libtorchcodec_custom_ops{ffmpeg_major_version}"                                                                                                                              
            pybind_ops_library_name = f"libtorchcodec_pybind_ops{ffmpeg_major_version}"                                                                                                                              
            try:
                torch.ops.load_library(_get_extension_path(decoder_library_name))
                torch.ops.load_library(_get_extension_path(custom_ops_library_name))

                pybind_ops_library_path = _get_extension_path(pybind_ops_library_name)
                global _pybind_ops
                _pybind_ops = _load_pybind11_module(
                    pybind_ops_module_name, pybind_ops_library_path
                )
                return                                                                                                                                                                                               
            except Exception as e:
                # TODO: recording and reporting exceptions this way is OK for now as  it's just for debugging,                                                                                                       
                # but we should probably handle that via a proper logging mechanism.                                                                                                                                 
                exceptions.append((ffmpeg_major_version, e))

        traceback = (
            "\n[start of libtorchcodec loading traceback]\n"                                                                                                                                                         
            + "\n".join(f"FFmpeg version {v}: {str(e)}" for v, e in exceptions)
            + "\n[end of libtorchcodec loading traceback]."                                                                                                                                                          
        )
>       raise RuntimeError(
            f"""Could not load libtorchcodec. Likely causes:                                                                                                                                                         
              1. FFmpeg is not properly installed in your environment. We support                                                                                                                                    
                 versions 4, 5, 6 and 7.                                                                                                                                                                             
              2. The PyTorch version ({torch.__version__}) is not compatible with                                                                                                                                    
                 this version of TorchCodec. Refer to the version compatibility                                                                                                                                      
                 table:                                                                                                                                                                                              
                 https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.                                                                                                                     
              3. Another runtime dependency; see exceptions below.                                                                                                                                                   
            The following exceptions were raised as we tried to load libtorchcodec:                                                                                                                                  
            """                                                                                                                                                                                                      
            f"{traceback}"                                                                                                                                                                                           
        )
E       RuntimeError: Could not load libtorchcodec. Likely causes:
E                 1. FFmpeg is not properly installed in your environment. We support
E                    versions 4, 5, 6 and 7.
E                 2. The PyTorch version (2.9.0+cu126) is not compatible with
E                    this version of TorchCodec. Refer to the version compatibility
E                    table:
E                    https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.
E                 3. Another runtime dependency; see exceptions below.
E               The following exceptions were raised as we tried to load libtorchcodec:
E
E       [start of libtorchcodec loading traceback]
E       FFmpeg version 7: Could not load this library: /usr/local/lib/python3.10/dist-packages/torchcodec/libtorchcodec_core7.so
E       FFmpeg version 6: Could not load this library: /usr/local/lib/python3.10/dist-packages/torchcodec/libtorchcodec_core6.so
E       FFmpeg version 5: Could not load this library: /usr/local/lib/python3.10/dist-packages/torchcodec/libtorchcodec_core5.so
E       FFmpeg version 4: Could not load this library: /usr/local/lib/python3.10/dist-packages/torchcodec/libtorchcodec_core4.so
E       [end of libtorchcodec loading traceback].

/usr/local/lib/python3.10/dist-packages/torchcodec/_core/ops.py:69: RuntimeError
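torchcodec's error only reports "Could not load this library" for each libtorchcodec_core*.so, without the underlying loader message. A hypothetical diagnostic like the sketch below (not part of the test) surfaces the raw dlopen error, which usually names the missing shared object or undefined symbol, e.g. an ABI mismatch against the installed libtorch:

# Hypothetical diagnostic: load each libtorchcodec_core*.so directly with ctypes
# so the raw dlopen error (missing .so, undefined symbol, ...) is printed instead
# of torchcodec's generic "Could not load this library" message.
import ctypes
import importlib.util
import os

import torch  # load libtorch first so its symbols are already in the process

# Locate the installed torchcodec package without importing it
# (importing it would raise the RuntimeError shown above).
spec = importlib.util.find_spec("torchcodec")
pkg_dir = os.path.dirname(spec.origin)

for major in (7, 6, 5, 4):
    lib = os.path.join(pkg_dir, f"libtorchcodec_core{major}.so")
    try:
        ctypes.CDLL(lib)
        print(f"{lib}: loaded OK")
    except OSError as e:
        print(f"{lib}: {e}")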

Versions

Collecting environment information...
PyTorch version: 2.9.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35

Python version: 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.242-239.961.amzn2.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA A10G
Nvidia driver version: 570.172.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R32
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 0
BogoMIPS: 5599.85
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl
nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsa: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] mypy_extensions==1.1.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-nccl-cu12==2.27.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.19.0
[pip3] onnxruntime==1.23.0
[pip3] onnxruntime-tools==1.7.0
[pip3] torch==2.9.0+cu126
[pip3] torchcodec==0.7.0.dev20250929+cu126
[pip3] torchvision==0.24.0+cu126
[pip3] triton==3.5.0
[conda] Could not collect
