This repository was archived by the owner on Nov 3, 2023. It is now read-only.
Hi, thank you for the great integration of Lightning & Ray!
I found that using 16-bit precision raises the following error: `pytorch_lightning.utilities.exceptions.MisconfigurationException: You have asked for native AMP on CPU, but AMP is only available on GPU`
The same script with 32-bit precision works fine.
I believe this happens because the number of GPUs is set only in the Ray plugin, not in the Lightning Trainer, and Lightning runs this check before the Ray plugin is actually used. More generally, it may be a bit dangerous to leave the Trainer's GPU count unset when GPUs are actually intended, since Lightning performs other internal checks that could lead to unexpected behavior like this one.
Would appreciate any tips on getting this integration to work with half precision training, thank you!
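For reference, a minimal sketch of the setup that triggers this, plus the workaround suggested by the diagnosis above. This is illustrative only: the `RayPlugin` arguments shown are my assumptions about the typical usage, and the `gpus=1` workaround is untested.

```python
# Sketch of the failing setup (assumes ray_lightning's RayPlugin API;
# exact argument values are illustrative, not from a verified config).
import pytorch_lightning as pl
from ray_lightning import RayPlugin

plugin = RayPlugin(num_workers=2, use_gpu=True)  # GPUs requested only here

# Fails: Lightning's native-AMP check runs before the Ray plugin allocates
# GPUs, so the Trainer believes it is on CPU and rejects precision=16.
trainer = pl.Trainer(precision=16, plugins=[plugin])

# Possible workaround (untested): also tell the Trainer about the GPUs so
# the AMP-on-CPU check passes before the plugin takes over.
trainer = pl.Trainer(precision=16, gpus=1, plugins=[plugin])
```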