Commit add9727

authored
Fixes as per Grammarly
1 parent 328f220 commit add9727

File tree

1 file changed: 6 additions, 6 deletions


docs/src/gpu.md

Lines changed: 6 additions & 6 deletions
````diff
@@ -97,9 +97,9 @@ Some of the common workflows involving the use of GPUs are presented below.
 
 
 ### Transferring Training Data
 
-In order to train the model using the GPU both model and the training data have to be transferred to GPU memory. This process can be done with the `gpu` function in two different ways:
+In order to train the model using the GPU both model and the training data have to be transferred to GPU memory. This process can be done with the `gpu` function in two different ways:
 
-1. Iterating over the batches in a [DataLoader](@ref) object transfering each one of the training batches at a time to the GPU.
+1. Iterating over the batches in a [DataLoader](@ref) object transferring each one of the training batches at a time to the GPU.
 ```julia
 train_loader = Flux.DataLoader((xtrain, ytrain), batchsize = 64, shuffle = true)
 # ... model, optimizer and loss definitions
@@ -112,14 +112,14 @@ In order to train the model using the GPU both model and the training data have
 end
 ```
 
-2. Transferring all training data to the GPU at once before creating the [DataLoader](@ref) object. This is usually performed for smaller datasets which are sure to fit in the available GPU memory. Some possitilities are:
+2. Transferring all training data to the GPU at once before creating the [DataLoader](@ref) object. This is usually performed for smaller datasets which are sure to fit in the available GPU memory. Some possibilities are:
 ```julia
 gpu_train_loader = Flux.DataLoader((xtrain |> gpu, ytrain |> gpu), batchsize = 32)
 ```
 ```julia
 gpu_train_loader = Flux.DataLoader((xtrain, ytrain) |> gpu, batchsize = 32)
 ```
-Note that both `gpu` and `cpu` are smart enough to recurse through tuples and namedtuples. Other possibility is to use [`MLUtils.mapsobs`](https://juliaml.github.io/MLUtils.jl/dev/api/#MLUtils.mapobs) to push the data movement invocation into the background thread:
+Note that both `gpu` and `cpu` are smart enough to recurse through tuples and namedtuples. Another possibility is to use [`MLUtils.mapsobs`](https://juliaml.github.io/MLUtils.jl/dev/api/#MLUtils.mapobs) to push the data movement invocation into the background thread:
 ```julia
 using MLUtils: mapobs
 # ...
@@ -159,7 +159,7 @@ let model = cpu(model)
 BSON.@save "./path/to/trained_model.bson" model
 end
 
-# is equivalente to the above, but uses `key=value` storing directve from BSON.jl
+# is equivalent to the above, but uses `key=value` storing directive from BSON.jl
 BSON.@save "./path/to/trained_model.bson" model = cpu(model)
 ```
 The reason behind this is that models trained in the GPU but not transferred to the CPU memory scope will expect `CuArray`s as input. In other words, Flux models expect input data coming from the same kind device in which they were trained on.
@@ -181,4 +181,4 @@ $ export CUDA_VISIBLE_DEVICES='0,1'
 ```
 
 
-More information for conditional use of GPUs in CUDA.jl can be found in its [documentation](https://cuda.juliagpu.org/stable/installation/conditional/#Conditional-use), and information about the specific use of the variable is described in the [Nvidia CUDA blogpost](https://developer.nvidia.com/blog/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/).
+More information for conditional use of GPUs in CUDA.jl can be found in its [documentation](https://cuda.juliagpu.org/stable/installation/conditional/#Conditional-use), and information about the specific use of the variable is described in the [Nvidia CUDA blog post](https://developer.nvidia.com/blog/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/).
````
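
The two data-transfer styles discussed in the patched docs can be sketched end-to-end as follows. This is an illustration only, not part of the commit: it assumes Flux.jl is installed and uses made-up toy arrays for `xtrain`/`ytrain`; without a loaded GPU backend, `gpu` is a no-op, so the sketch also runs on CPU-only machines.

```julia
using Flux  # provides `gpu`, `cpu`, and `DataLoader` (re-exported from MLUtils)

# Toy data: 64 samples with 10 features, 2 targets each (hypothetical shapes).
xtrain = rand(Float32, 10, 64)
ytrain = rand(Float32, 2, 64)

# Style 1: keep the dataset on the CPU and transfer one batch at a time.
train_loader = Flux.DataLoader((xtrain, ytrain), batchsize = 32, shuffle = true)
for (x, y) in train_loader
    x, y = gpu(x), gpu(y)   # per-batch transfer to GPU memory
    # ... training step on (x, y)
end

# Style 2: transfer everything up front (small datasets only); `gpu`
# recurses through the tuple, moving both arrays.
gpu_train_loader = Flux.DataLoader((xtrain, ytrain) |> gpu, batchsize = 32)
```

With a CUDA device available (and the CUDA backend loaded), the same code moves data to GPU memory; the choice between the two styles trades per-batch transfer overhead against GPU memory footprint.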
