How can we fine-tune the trained PatchCore model? #1578
zhanghuayu-seu asked this question in Q&A
-
Hello, if I understood correctly, you would need to train a new PatchCore model on both your original dataset and the new images; you cannot fine-tune it. PatchCore is not trained in the usual gradient-based way; instead, it extracts features from the training dataset and then uses these features to determine how different a new input is. You can find more details here.
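To illustrate the point above: PatchCore's "training" amounts to filling a memory bank with features extracted from normal images, and scoring a new input is a nearest-neighbor lookup against that bank. The following is a minimal pure-Python sketch of that idea (illustrative only; the real model uses WideResNet patch embeddings and coreset subsampling, not 2-D points):

```python
import math

def build_memory_bank(normal_features):
    """'Training' is just storing features extracted from normal images."""
    return list(normal_features)

def anomaly_score(feature, memory_bank):
    """Score a new input by its distance to the nearest stored normal feature."""
    return min(math.dist(feature, m) for m in memory_bank)

# Incorporating new normal images means rebuilding or extending the bank,
# not gradient-based fine-tuning:
bank = build_memory_bank([(0.0, 0.0), (1.0, 0.0)])
bank += [(0.0, 1.0)]  # "update" with features from new normal samples
print(anomaly_score((0.0, 0.9), bank))  # small: close to a stored normal feature
```

This is why there are no weights to fine-tune: retraining with the combined dataset simply rebuilds the memory bank.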
-
As @abc-125 mentioned, those images may not help with training the PatchCore model. They could, however, help during validation when tuning your threshold value to get better performance.
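A hypothetical sketch of what "tuning your threshold value" on the new images could look like: sweep candidate thresholds over the validation anomaly scores and keep the one with the best F1 score (anomalib's `adaptive` threshold method is based on a similar F1-maximizing idea):

```python
def best_f1_threshold(scores, labels):
    """Try each observed score as a threshold; keep the one with the highest F1.

    scores: anomaly scores on validation images
    labels: 0 = normal, 1 = anomalous
    """
    best_t, best_f1 = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

scores = [0.1, 0.2, 0.8, 0.9]
labels = [0, 0, 1, 1]
print(best_f1_threshold(scores, labels))  # 0.8 separates the two classes
```

So the ~50 new images are most useful as validation data for picking this threshold, rather than as additional training data.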
-
What is the motivation for this task?
We need to update the model in the system.
Describe the solution you'd like
I have trained the PatchCore model on my dataset and saved the weights as .ckpt and ONNX files. Now we have a few new images (around 50); how can we update the trained PatchCore model with this small dataset? We look forward to your reply.
Additional context
```yaml
dataset:
  name: anomaly
  format: folder
  path: ./datasets/Anomaly
  normal_dir: normal
  abnormal_dir: anomaly
  mask_dir: mask/anomaly
  normal_test_dir: null
  extensions: null
  task: segmentation
  category: bottle
  train_batch_size: 1
  eval_batch_size: 1
  num_workers: 1
  image_size: 256
  center_crop: 224
  normalization: imagenet
  transform_config:
    train: null
    eval: null
  test_split_mode: from_dir
  test_split_ratio: 0.2
  val_split_mode: same_as_test
  val_split_ratio: 0.5
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: false
    random_tile_count: 16

model:
  name: patchcore
  backbone: wide_resnet50_2
  pre_trained: true
  layers:
  coreset_sampling_ratio: 0.1
  num_neighbors: 1
  normalization_method: min_max

metrics:
  image:
  pixel:
  threshold:
    method: adaptive
    manual_image: null
    manual_pixel: null

visualization:
  show_images: false
  save_images: true
  log_images: true
  image_save_path: null
  mode: custom

project:
  seed: 0
  path: ./results

logging:
  logger: []
  log_graph: false

optimization:
  export_mode: onnx

trainer:
  enable_checkpointing: true
  default_root_dir: null
  gradient_clip_val: 0
  gradient_clip_algorithm: norm
  num_nodes: 1
  devices: 1
  enable_progress_bar: true
  overfit_batches: 0.0
  track_grad_norm: -1
  check_val_every_n_epoch: 1
  fast_dev_run: false
  accumulate_grad_batches: 1
  max_epochs: 1
  min_epochs: null
  max_steps: -1
  min_steps: null
  max_time: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  limit_predict_batches: 1.0
  val_check_interval: 1.0
  log_every_n_steps: 50
  accelerator: cpu
  strategy: null
  sync_batchnorm: false
  precision: 32
  enable_model_summary: true
  num_sanity_val_steps: 0
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_n_epochs: 0
  auto_lr_find: false
  replace_sampler_ddp: true
  detect_anomaly: false
  auto_scale_batch_size: false
  plugins: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
```
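The `coreset_sampling_ratio: 0.1` setting in the config above controls PatchCore's greedy coreset subsampling: only about 10% of extracted patch features are kept in the memory bank, chosen so they cover the feature space well. A minimal pure-Python sketch of the k-center-greedy idea (illustrative only; the real implementation works on CNN patch embeddings and uses a random seed point):

```python
import math

def greedy_coreset(points, ratio):
    """Keep ~ratio of the points, greedily picking the point farthest
    from the already-selected set (k-center greedy).

    Simplification: the first point seeds the selection deterministically.
    """
    k = max(1, int(len(points) * ratio))
    selected = [points[0]]
    while len(selected) < k:
        farthest = max(points, key=lambda p: min(math.dist(p, s) for s in selected))
        selected.append(farthest)
    return selected

# Two far-apart clusters of two points each; a 50% coreset keeps one from each:
pts = [(0.0, 0.0), (0.1, 0.0), (10.0, 0.0), (10.1, 0.0)]
print(greedy_coreset(pts, 0.5))
```

Because the coreset is recomputed from the full training set, adding ~50 new images means re-running training on the combined dataset so the bank is rebuilt, which with `max_epochs: 1` is a single feature-extraction pass rather than a long optimization.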