Replies: 2 comments
-
Hi, threshold computation might be an issue with this approach. How about something like this to handle threshold computation for each category?

```python
# First, train the model on normal samples only
datamodule_train = Folder(
    name="bottle",
    root=dataset_path,
    normal_dir="train/good",
)
datamodule_train.setup()

# Train the model
model = Patchcore()
engine = Engine()
engine.fit(datamodule=datamodule_train, model=model)

# For each defect type, validate to compute its specific threshold
for defect_type in ["1", "2", "3", "4", "5", "6"]:
    # Create a validation datamodule for this defect type
    val_datamodule = Folder(
        name=f"bottle_defect_{defect_type}",
        root=dataset_path,
        normal_dir="train/good",
        abnormal_dir=f"test/{defect_type}",
        normal_test_dir="test/good",
    )
    val_datamodule.setup()

    # Validate to compute the threshold for this defect type
    engine.validate(model=model, datamodule=val_datamodule)

    # Test with the computed threshold
    test_datamodule = Folder(
        name=f"bottle_defect_{defect_type}",
        root=dataset_path,
        normal_dir="train/good",
        abnormal_dir=f"test/{defect_type}",
        mask_dir=f"mask/{defect_type}",
    )
    test_datamodule.setup()
    predictions = engine.predict(model=model, datamodule=test_datamodule)
```

With this approach you train the model once on normal samples. For each defect type, you then validate on that defect type to compute an appropriate threshold, and finally use that threshold when testing on the same defect type.
-
Hi, thanks for the tips, much appreciated.

> OutOfMemoryError: CUDA out of memory. Tried to allocate 764.00 MiB. GPU 0 has a total capacity of 9.65 GiB of which 702.75 MiB is free. Process 4103259 has 8.91 GiB memory in use. Of the allocated memory 8.60 GiB is allocated by PyTorch, and 54.03 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.

Is the solution to use different parameters in the datamodule setup, such as num_workers, batch_size and others, or do I need a more powerful GPU to resolve the problem? (Since I work in Docker, I think I could change the GPU model.) Thanks
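As a rough aid when shrinking batch_size to fit the card, a back-of-the-envelope estimate of the input-tensor memory per batch can help. This is only a lower bound on real GPU usage (weights, activations, and the PatchCore memory bank dominate in practice), and the image size and dtype below are assumptions, not values from this thread.

```python
def batch_input_mib(batch_size: int, h: int = 256, w: int = 256,
                    channels: int = 3, bytes_per_elem: int = 4) -> float:
    """Rough lower bound on the memory (MiB) one input batch occupies.

    Assumes float32 images of h x w x channels; real GPU usage also
    includes model weights and activations, so treat this only as a
    sanity check when picking a smaller batch_size.
    """
    return batch_size * h * w * channels * bytes_per_elem / (1024 ** 2)

print(batch_input_mib(32))  # 24.0 MiB of raw input for 32 float32 256x256x3 images
```

Halving batch_size halves this bound, which is usually the first knob to try before looking at a bigger GPU.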
-
I have different kinds of defect folders, such as 1, 2, 3, 4, 5, 6. If I want to train a Patchcore model and test it on every category, is it correct to change the code as follows,

and then

```python
eng_patchcore.fit(model=model, datamodule=datamodule_bottle)
```

and, for every type of defect folder, write

```python
predictions = eng_patchcore.predict(model=model, datamodule=datamodule)
```

where `datamodule` is the previous one used, but with the `abnormal_dir` specified? Is this correct, or am I making a mistake?
Thanks a lot
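Since each defect type maps to its own `test/<defect>` folder, a small guard that checks the folders actually exist before building any datamodule can catch a typo in the folder list early. The layout below (`test/` under the dataset root) matches the paths used in this thread; the helper name is my own.

```python
from pathlib import Path

def existing_defect_dirs(dataset_root: str, defect_types: list[str]) -> list[str]:
    # Keep only the defect folders actually present under test/, so a typo
    # in the list fails fast instead of erroring mid-run.
    root = Path(dataset_root)
    return [d for d in defect_types if (root / "test" / d).is_dir()]
```

You would then loop over `existing_defect_dirs(dataset_path, ["1", "2", "3", "4", "5", "6"])` instead of the raw list when building the per-defect datamodules.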