Traceback (most recent call last):
  File "/VCD/experiments/eval/object_hallucination_vqa_llava.py", line 128, in <module>
    eval_model(args)
  File "/VCD/experiments/eval/object_hallucination_vqa_llava.py", line 60, in eval_model
    image_preprocessed = image_processor.preprocess(image, return_tensors='pt')
  File "/opt/conda/envs/vcd/lib/python3.9/site-packages/transformers/models/clip/image_processing_clip.py", line 337, in preprocess
    return BatchFeature(data=data, tensor_type=return_tensors)
  File "/opt/conda/envs/vcd/lib/python3.9/site-packages/transformers/feature_extraction_utils.py", line 78, in __init__
    self.convert_to_tensors(tensor_type=tensor_type)
  File "/opt/conda/envs/vcd/lib/python3.9/site-packages/transformers/feature_extraction_utils.py", line 181, in convert_to_tensors
    raise ValueError(
ValueError: Unable to create tensor, you should probably activate padding with 'padding=True' to have batched tensors with the same length.
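For context, transformers raises this ValueError when the preprocessed arrays are ragged (not all the same shape), so they cannot be stacked into one tensor. Below is a minimal, hypothetical sketch of that rectangularity condition in plain Python; it is not the transformers implementation, just an illustration of what makes a batch stackable:

```python
def is_stackable(batch):
    """Return True if every row has the same length, i.e. the nested
    list is rectangular and could be stacked into a single tensor."""
    lengths = {len(row) for row in batch}
    return len(lengths) <= 1

# Rectangular rows stack fine; ragged rows trigger the padding error upstream.
print(is_stackable([[1, 2, 3], [4, 5, 6]]))  # True
print(is_stackable([[1, 2, 3], [4, 5]]))     # False
```

In practice this often points to an input image with an unexpected shape or mode reaching `image_processor.preprocess`, so it may be worth inspecting the offending image before the call.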
I followed the README to create my environment and then changed some path parameters in object_hallucination_vqa_llava.py. Please let me know if you need more information.
Thank you