ValueError: could not determine the shape of object type 'VideoMetadata' #10246

@markmochi200

Description

Reminder

  • I have read the above rules and searched the existing issues.

System Info

  • llamafactory version: 0.9.5.dev0
  • Platform: Linux-6.8.0-94-generic-x86_64-with-glibc2.35
  • Python version: 3.12.12
  • PyTorch version: 2.10.0+cu128 (GPU)
  • Transformers version: 5.3.0.dev0
  • Datasets version: 4.0.0
  • Accelerate version: 1.11.0
  • PEFT version: 0.18.1
  • GPU type: NVIDIA GeForce RTX 2080 Ti
  • GPU number: 1
  • GPU memory: 21.48GB
  • TRL version: 0.24.0
  • vLLM version: 0.16.1rc1.dev198+g70c73df69
  • Default data directory: not detected

Reproduction

[WARNING|2026-03-04 14:06:48] llamafactory.chat.hf_engine:155 >> There is no current event loop, creating a new one.
Traceback (most recent call last):
  File "/home/lx3005/anaconda3/envs/GrokkingTrf/lib/python3.12/site-packages/gradio/queueing.py", line 759, in process_events
    response = await route_utils.call_process_api(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lx3005/anaconda3/envs/GrokkingTrf/lib/python3.12/site-packages/gradio/route_utils.py", line 354, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lx3005/anaconda3/envs/GrokkingTrf/lib/python3.12/site-packages/gradio/blocks.py", line 2191, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lx3005/anaconda3/envs/GrokkingTrf/lib/python3.12/site-packages/gradio/blocks.py", line 1710, in call_function
    prediction = await utils.async_iteration(iterator)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lx3005/anaconda3/envs/GrokkingTrf/lib/python3.12/site-packages/gradio/utils.py", line 760, in async_iteration
    return await anext(iterator)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/lx3005/anaconda3/envs/GrokkingTrf/lib/python3.12/site-packages/gradio/utils.py", line 751, in __anext__
    return await anyio.to_thread.run_sync(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lx3005/anaconda3/envs/GrokkingTrf/lib/python3.12/site-packages/anyio/to_thread.py", line 63, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lx3005/anaconda3/envs/GrokkingTrf/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2502, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/home/lx3005/anaconda3/envs/GrokkingTrf/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 986, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lx3005/anaconda3/envs/GrokkingTrf/lib/python3.12/site-packages/gradio/utils.py", line 734, in run_sync_iterator_async
    return next(iterator)
           ^^^^^^^^^^^^^^
  File "/home/lx3005/anaconda3/envs/GrokkingTrf/lib/python3.12/site-packages/gradio/utils.py", line 898, in gen_wrapper
    response = next(iterator)
               ^^^^^^^^^^^^^^
  File "/home/lx3005/Desktop/GrokkingTrf/LlamaFactory/src/llamafactory/webui/chatter.py", line 218, in stream
    for new_text in self.stream_chat(
                    ^^^^^^^^^^^^^^^^^
  File "/home/lx3005/Desktop/GrokkingTrf/LlamaFactory/src/llamafactory/chat/chat_model.py", line 135, in stream_chat
    yield task.result()
          ^^^^^^^^^^^^^
  File "/home/lx3005/anaconda3/envs/GrokkingTrf/lib/python3.12/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/home/lx3005/anaconda3/envs/GrokkingTrf/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/home/lx3005/Desktop/GrokkingTrf/LlamaFactory/src/llamafactory/chat/chat_model.py", line 150, in astream_chat
    async for new_token in self.engine.stream_chat(
  File "/home/lx3005/Desktop/GrokkingTrf/LlamaFactory/src/llamafactory/chat/hf_engine.py", line 394, in stream_chat
    stream = self._stream_chat(*input_args)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lx3005/anaconda3/envs/GrokkingTrf/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 124, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/lx3005/Desktop/GrokkingTrf/LlamaFactory/src/llamafactory/chat/hf_engine.py", line 281, in _stream_chat
    gen_kwargs, _ = HuggingfaceEngine._process_args(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lx3005/Desktop/GrokkingTrf/LlamaFactory/src/llamafactory/chat/hf_engine.py", line 190, in _process_args
    value = torch.tensor(value)
            ^^^^^^^^^^^^^^^^^^^
ValueError: could not determine the shape of object type 'VideoMetadata'

Others

I am chatting with Qwen3.5-0.8B through the web UI and run into this issue when doing inference with a video input.
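For context, a minimal sketch of what seems to go wrong (all names here are hypothetical stand-ins, not the actual llamafactory code): the processor output for video inputs mixes numeric arrays with metadata objects such as `VideoMetadata`, and blindly calling `torch.tensor(value)` on every dict value fails on the metadata entries because they have no array-like shape. A possible workaround is to tensorize only array-like values and pass the rest through unchanged:

```python
# Pure-Python sketch (hypothetical names) of the failure mode and a workaround.
from dataclasses import dataclass

@dataclass
class VideoMetadata:  # stand-in for transformers' VideoMetadata object
    fps: float
    total_num_frames: int

def is_tensorable(value):
    """Return True only for values torch.tensor() could convert:
    numbers, or (nested) sequences of numbers."""
    if isinstance(value, (int, float, bool)):
        return True
    if isinstance(value, (list, tuple)):
        return all(is_tensorable(v) for v in value)
    return False  # VideoMetadata and other plain objects fall through here

# Shape of a processor output dict for a video prompt (illustrative only).
processor_output = {
    "pixel_values_videos": [[0.1, 0.2], [0.3, 0.4]],
    "video_grid_thw": [[1, 2, 2]],
    "video_metadata": [VideoMetadata(fps=2.0, total_num_frames=16)],
}

# Only tensorize array-like entries; keep metadata objects as-is.
tensorable = {k: v for k, v in processor_output.items() if is_tensorable(v)}
passthrough = {k: v for k, v in processor_output.items() if not is_tensorable(v)}
```

With this split, `torch.tensor` would only ever see `tensorable` values, so the `could not determine the shape of object type 'VideoMetadata'` error cannot occur; whether the engine then needs the metadata downstream is a separate question.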

    Labels

    bug (Something isn't working) · pending (This problem is yet to be addressed)