-
How long does the training take? It's not unusual for the training to take more than one hour.
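If you want to see where the time is actually going (data loading, training steps, validation), you can enable the Lightning Trainer's built-in profiler from the same config. This is a generic Lightning option, not anything EfficientAd-specific:

```yaml
trainer:
  profiler: simple  # prints per-hook timings (dataloading, training batches, validation) at the end of fit
```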
-
I tried to train my model with EfficientAd, but it seems to take very long.
Can anyone check my YAML file and tell me what the problem is?
```yaml
seed_everything: true
trainer:
  enable_checkpointing: true
  default_root_dir: null
  gradient_clip_val: 0
  gradient_clip_algorithm: norm
  callbacks:
    class_path: lightning.pytorch.callbacks.EarlyStopping
    init_args:
      mode: max
      monitor: image_F1Score
      patience: 5
  num_nodes: 1
  devices: "cuda"
  enable_progress_bar: true
  overfit_batches: 0.0
  check_val_every_n_epoch: 1 # Don't validate before extracting features.
  fast_dev_run: false
  accumulate_grad_batches: 1
  max_epochs: 10
  min_epochs: 3
  max_steps: -1
  min_steps: null
  max_time: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  limit_predict_batches: 1.0
  val_check_interval: 1.0 # Don't validate before extracting features.
  log_every_n_steps: 50
  accelerator: gpu # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  strategy: auto
  sync_batchnorm: false
  precision: 32
  enable_model_summary: true
  num_sanity_val_steps: 0
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_n_epochs: 0
  detect_anomaly: false
  plugins: null
normalization:
  normalization_method: MIN_MAX
task: SEGMENTATION
metrics:
  image:
  pixel: null
  threshold:
    class_path: anomalib.metrics.F1AdaptiveThreshold
    init_args:
      default_value: 0.5
      thresholds: null
      ignore_index: null
      validate_args: true
      compute_on_cpu: false
      dist_sync_on_step: false
      sync_on_compute: true
      compute_with_cache: true
logging:
  log_graph: false
default_root_dir: results
ckpt_path: null
data:
  class_path: anomalib.data.MVTec
  init_args:
    root: datasets\MVTec
    category: led
    train_batch_size: 1
    eval_batch_size: 1
    num_workers: 23
    image_size: null
    transform: null
    train_transform: null
    eval_transform: null
    test_split_mode: FROM_DIR
    test_split_ratio: 0.2
    val_split_mode: SAME_AS_TEST
    val_split_ratio: 0.5
    seed: null
model:
  class_path: anomalib.models.EfficientAd
  init_args:
    teacher_out_channels: 384
    model_size: S
    lr: 0.0001
    weight_decay: 1.0e-05
    padding: false
    pad_maps: true
metrics:
  pixel:
    - AUROC
trainer:
  max_epochs: 1000
  max_steps: 70000
```