Note that both `gpu` and `cpu` are smart enough to recurse through tuples and namedtuples.
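For example, a tuple of arrays can be moved in one call rather than transferring each element individually. The following is a minimal sketch (the variable names are illustrative, not from the original docs):

```julia
using Flux

x = rand(Float32, 3)
y = rand(Float32, 4)

# `gpu` recurses through the tuple, transferring each array;
# on a machine without a working GPU it is a no-op and the
# data is returned unchanged.
xg, yg = gpu((x, y))
```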
### Saving GPU-Trained Models
# in this approach the cpu-transferred model (referenced by the variable `model`)
# only exists inside the `let` statement
let model = cpu(model)
  # ...
  BSON.@save "./path/to/trained_model.bson" model
end

# is equivalent to the above, but uses the `key = value` storing directive from BSON.jl
BSON.@save "./path/to/trained_model.bson" model = cpu(model)
```
The reason for this is that models trained on the GPU but not transferred back to CPU memory will expect `CuArray`s as input. In other words, Flux models expect input data from the same kind of device they were trained on.
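Correspondingly, when loading a model that was saved after a `cpu` transfer, it must be moved back to the GPU before it can consume GPU-resident data. A minimal sketch (the path mirrors the saving example above):

```julia
using Flux, BSON

# load the CPU copy saved earlier, then transfer it back to the GPU
BSON.@load "./path/to/trained_model.bson" model
model = gpu(model)
```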