wandb_logger.experiment.config has no attribute 'update' for GPUs > 1 #13157
Answered by drscotthawley
Asked in: Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Hi, there's a little snippet from the WandB Logger docs under "Add other config parameters:":

```python
import pytorch_lightning as pl

wandb_logger = pl.loggers.WandbLogger(project='my_project')
wandb_logger.experiment.config.update({"my_key": "my_value"})

# and then later I run...
trainer = pl.Trainer(gpus=args.num_gpus, ...)
```

When I use this in my full training code and run the trainer on only 1 GPU, I have no problems. But if I try to run on more than one GPU, that `config.update` line raises the `has no attribute 'update'` error in the title.

What's going on here, and how do I fix it? Thanks.
Answered by drscotthawley · May 26, 2022
Update. Ok so according to this issue, the higher-rank GPUs don't get "real" experiments? So my stupid hack, which seems to work, is just to check that the attribute exists! 💯

```python
if hasattr(wandb_logger.experiment.config, 'update'):
    wandb_logger.experiment.config.update({"my_key": "my_value"})
```
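A variant of the same idea, if the `hasattr` check feels too hacky: since (if I'm reading that issue right) only the global-rank-0 process owns the real wandb run under DDP, you can restrict the update to rank zero with Lightning's `rank_zero_only` utility. A rough sketch, assuming the same multi-GPU setup; the `update_wandb_config` helper name is just for illustration:

```python
import pytorch_lightning as pl
from pytorch_lightning.utilities import rank_zero_only

wandb_logger = pl.loggers.WandbLogger(project='my_project')

# rank_zero_only runs the wrapped function only on the global-rank-0
# process, which is the one that holds the real wandb run under DDP.
@rank_zero_only
def update_wandb_config(logger, cfg):
    logger.experiment.config.update(cfg)

update_wandb_config(wandb_logger, {"my_key": "my_value"})
```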
Answer selected by drscotthawley