Remove remaining GPU/CUDA mentions in torch_xla directory. (#9608)
This PR removes the remaining CUDA-specific code from the PyTorch/XLA
package (i.e. the `torch_xla` directory), as well as a few other related files.
This is in line with the CUDA deprecation that started in release 2.8.
**Key Changes:**
- (`CONTRIBUTING.md`) Removed mentions of CUDA-specific environment
variables
- (`configuration.yaml`) Removed descriptions of CUDA-specific
environment variables
- (`docs/source/learn/_pjrt.md`) Removed the CUDA section of the PJRT
documentation
- (`torch_xla/amp`) Removed CUDA-specific branches, as well as
`GradScaler` (see the sketch after this list)
- (`torch_xla/core/xla_env_vars.py`) Removed CUDA-specific environment
variables
- (`torch_xla/utils/checkpoint.py`) Fixed an incorrect function name
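
With the CUDA-only `GradScaler` gone from `torch_xla/amp`, mixed precision on an XLA device goes through autocast alone. A minimal sketch of what that looks like, assuming the existing `torch_xla.amp.autocast` API is unchanged; the toy model, optimizer, and data below are illustrative only:

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm
from torch_xla.amp import autocast

device = xm.xla_device()
model = nn.Linear(16, 4).to(device)   # hypothetical toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

inputs = torch.randn(8, 16, device=device)
targets = torch.randn(8, 4, device=device)

optimizer.zero_grad()
with autocast(device):                # autocast on the XLA device; no GradScaler involved
  loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
xm.mark_step()                        # materialize the lazy XLA graph for this step
```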
`torch_xla/_internal/pjrt.py` (1 addition, 1 deletion)

@@ -205,7 +205,7 @@ def spawn(fn: Callable,
     return _run_singleprocess(spawn_fn)
   elif nprocs is not None:
     raise ValueError(
-        'Unsupported nprocs (%d). Please use nprocs=1 or None (default). If None, spawn will use all available devices. Use the environment variable X_NUM_DEVICES (where X is CPU, GPU, TPU, NEURONCORE, etc) to limit the number of devices used.'
+        'Unsupported nprocs (%d). Please use nprocs=1 or None (default). If None, spawn will use all available devices. Use the environment variable X_NUM_DEVICES (where X is CPU, TPU, NEURONCORE, etc) to limit the number of devices used.'
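
The updated message drops GPU from the list of device types and keeps the per-device-type `X_NUM_DEVICES` knob. A minimal sketch of the contract it describes, assuming a CPU-only PJRT run; the `_mp_fn` body and the chosen device count are illustrative only:

```python
import os
# Limit how many devices spawn will use; set before torch_xla initializes PJRT.
os.environ.setdefault("PJRT_DEVICE", "CPU")
os.environ.setdefault("CPU_NUM_DEVICES", "4")

import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
  # Each replica reports its index and XLA device.
  print(index, xm.xla_device())

if __name__ == "__main__":
  # nprocs=None (the default) uses all available devices; nprocs=1 runs a single process.
  # Any other value raises the ValueError shown in the diff above.
  xmp.spawn(_mp_fn)
```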