❓ ImportError: open_clip is required for VLM models #2891
Replies: 10 comments
-
Thanks for submitting this issue! It has been added to our triage queue. A maintainer will review it shortly.
-
Hi @enrico310786, the error message is outdated. I will make a PR to fix it.
-
Hi. The issue still exists. I have to comment this out:
-
Another issue, this one with a former version (not sure whether it still exists):
-
Hello @mvanaee The number of epochs for PaDiM has to be 1. Models like PaDiM or PatchCore are memory-bank-based models: they are fitted for one epoch to extract features from the images, process them, and save them in a memory bank. After this one epoch the model is done, meaning that setting epochs to more than 1 is redundant. What you want for EfficientAD is basically what they implemented for PaDiM or PatchCore.
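For illustration, a minimal sketch of fitting such a memory-bank model with anomalib's Engine (dataset root and category are placeholders; the datamodule is named MVTec in pre-2.0 releases):

```python
from anomalib.data import MVTecAD  # named MVTec in older releases
from anomalib.engine import Engine
from anomalib.models import Padim

# Dataset root and category are placeholders.
datamodule = MVTecAD(root="./datasets/MVTecAD", category="bottle")
model = Padim()

# Memory-bank models declare their own trainer settings: Padim is
# fitted for exactly one epoch to build its feature memory bank,
# so no max_epochs needs to be (or should be) set here.
engine = Engine()
engine.fit(model=model, datamodule=datamodule)
```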
-
Hi @mvanaee
-
Your comment on batch_size being 1 by default / overwritten for EfficientAD is valid. I will discuss and fix this.
-
@waschsalz @rajeshgangireddy I have another question. I appreciate your time and guidance.
-
I always worked with the TorchInferencer. Short: if you have a CUDA-ready GPU --> TorchInferencer (don't use it on a CPU...). For whichever case, you need to export the trained model first; a short example follows, and for predicting an image you can then simply use the inferencer's predict call.
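A minimal sketch, assuming anomalib's Engine.export and TorchInferencer deploy API (the Padim model, the paths, and the image file are placeholders; attribute names on the prediction object can differ between versions):

```python
from anomalib.deploy import ExportType, TorchInferencer
from anomalib.engine import Engine
from anomalib.models import Padim

# After training, export the model as a plain .pt file.
model = Padim()
engine = Engine()
# engine.fit(model=model, datamodule=datamodule)  # training step elided
engine.export(model=model, export_type=ExportType.TORCH)

# Load the exported model on the GPU and predict a single image.
inferencer = TorchInferencer(path="results/weights/torch/model.pt", device="cuda")
prediction = inferencer.predict(image="sample.png")
print(prediction.pred_score, prediction.pred_label)
```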
-
This isn't straightforward, as a datamodule is created independently of the model. To force the batch_size, we might have to (re)set up the datamodule in the model, which is not the best way to do it.
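In the meantime, setting the batch size explicitly on the datamodule works around this from the user side; a minimal sketch, assuming a custom Folder dataset (name, root, and directory names are hypothetical):

```python
from anomalib.data import Folder
from anomalib.engine import Engine
from anomalib.models import EfficientAd

# EfficientAD expects a training batch size of 1, so set it explicitly
# on the datamodule rather than relying on the model to enforce it.
datamodule = Folder(
    name="my_dataset",             # hypothetical dataset name
    root="./datasets/my_dataset",  # hypothetical path
    normal_dir="good",
    abnormal_dir="defect",
    train_batch_size=1,
)
engine = Engine()
engine.fit(model=EfficientAd(), datamodule=datamodule)
```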
-
Your Question
Hi,
I installed anomalib using a requirements.txt file. First I create a venv:
python3.10 -m venv venv
then I install all the packages using:
pip install -r requirements.txt
Inside the requirements there is "anomalib==2.1.0". The installed version of anomalib is checked by:
pip show anomalib
Name: anomalib
Version: 2.1.0
Now, I run the training script to train a ReverseDistillation model on my custom dataset, but I receive the error:
Traceback (most recent call last):
File "/home/randellini/progetto_eltek/train_test_anomalib_models/train_reversedistillation_anomalib.py", line 6, in
from anomalib.models import ReverseDistillation
File "/home/randellini/progetto_eltek/venv/lib/python3.10/site-packages/anomalib/models/init.py", line 57, in
from .image import (
File "/home/randellini/progetto_eltek/venv/lib/python3.10/site-packages/anomalib/models/image/init.py", line 67, in
from .winclip import WinClip
File "/home/randellini/progetto_eltek/venv/lib/python3.10/site-packages/anomalib/models/image/winclip/init.py", line 30, in
from .lightning_model import WinClip
File "/home/randellini/progetto_eltek/venv/lib/python3.10/site-packages/anomalib/models/image/winclip/lightning_model.py", line 48, in
from .torch_model import WinClipModel
File "/home/randellini/progetto_eltek/venv/lib/python3.10/site-packages/anomalib/models/image/winclip/torch_model.py", line 49, in
raise ImportError(msg)
ImportError: open_clip is required for VLM models. Install it with: pip install anomalib[vlm_clip]
I did "pip install anomalib[vlm_clip]", but I continue to receive the same error.
Can you help me?