Commit 2adbfef

updates to vignettes + examples
1 parent 7193743 commit 2adbfef

File tree (6 files changed: 44 additions & 41 deletions)

- vignettes-src/distributed_training_with_tensorflow.Rmd
- vignettes-src/examples/nlp/text_classification_from_scratch.Rmd
- vignettes-src/getting_started.Rmd
- vignettes/custom_train_step_in_tensorflow.Rmd
- vignettes/distribution.Rmd
- vignettes/functional_api.Rmd

vignettes-src/distributed_training_with_tensorflow.Rmd

Lines changed: 10 additions & 7 deletions

````diff
@@ -123,20 +123,16 @@ get_compiled_model <- function() {
   model |> compile(
     optimizer = optimizer_adam(),
     loss = loss_sparse_categorical_crossentropy(from_logits = TRUE),
-    metrics = list(metric_sparse_categorical_accuracy()),
-
-    # XLA compilation is temporarily disabled due to a bug
-    # https://github.com/keras-team/keras/issues/19005
-    jit_compile = FALSE
+    metrics = list(metric_sparse_categorical_accuracy())
   )
   model
 }

 get_dataset <- function(batch_size = 64) {

   c(c(x_train, y_train), c(x_test, y_test)) %<-% dataset_mnist()
-  x_train <- array_reshape(x_train, c(-1, 784))
-  x_test <- array_reshape(x_test, c(-1, 784))
+  x_train <- array_reshape(x_train, c(-1, 784)) / 255
+  x_test <- array_reshape(x_test, c(-1, 784)) / 255

   # Reserve 10,000 samples for validation.
   val_i <- sample.int(nrow(x_train), 10000)
@@ -145,18 +141,25 @@ get_dataset <- function(batch_size = 64) {
   x_train = x_train[-val_i,]
   y_train = y_train[-val_i]

+  y_train <- array_reshape(y_train, c(-1, 1))
+  y_val <- array_reshape(y_val, c(-1, 1))
+  y_test <- array_reshape(y_test, c(-1, 1))
+
   # Prepare the training dataset.
   train_dataset <- list(x_train, y_train) |>
+    lapply(np_array, "float32") |>
     tensor_slices_dataset() |>
     dataset_batch(batch_size)

   # Prepare the validation dataset.
   val_dataset <- list(x_val, y_val) |>
+    lapply(np_array, "float32") |>
     tensor_slices_dataset() |>
     dataset_batch(batch_size)

   # Prepare the test dataset.
   test_dataset <- list(x_test, y_test) |>
+    lapply(np_array, "float32") |>
     tensor_slices_dataset() |>
     dataset_batch(batch_size)
````
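For context (not part of the commit), here is a minimal sketch of the dataset pipeline pattern the updated `get_dataset()` relies on: each R array in the list is cast to a float32 NumPy array with `np_array()` (reticulate's helper) before being sliced into a batched TF Dataset. The array shapes and names below are illustrative only.

```r
library(reticulate)                 # provides np_array()
library(tfdatasets, exclude = "shape")

# Illustrative stand-ins for the MNIST arrays prepared in the vignette.
x <- array(runif(100 * 784), dim = c(100, 784))
y <- array(sample(0:9, 100, replace = TRUE), dim = c(100, 1))

# Cast both arrays to float32, then build a batched tf.data.Dataset,
# mirroring the lapply(np_array, "float32") step added above.
dataset <- list(x, y) |>
  lapply(np_array, "float32") |>
  tensor_slices_dataset() |>
  dataset_batch(64)
```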

vignettes-src/examples/nlp/text_classification_from_scratch.Rmd

Lines changed: 0 additions & 1 deletion

````diff
@@ -25,7 +25,6 @@ word splitting & indexing.
 library(tensorflow, exclude = c("shape", "set_random_seed"))
 library(tfdatasets, exclude = "shape")
 library(keras3)
-use_virtualenv("r-keras")
 ```

 ## Load the data: IMDB movie review sentiment classification
````

vignettes-src/getting_started.Rmd

Lines changed: 1 addition & 1 deletion

````diff
@@ -44,7 +44,7 @@ install.packages("keras3")
 or install the development version with:

 ```{r, eval=FALSE}
-remotes::install_github("rstudio/keras")
+remotes::install_github("rstudio/keras3")
 ```

 The Keras R interface requires that a backend engine be installed. This is [TensorFlow](https://www.tensorflow.org/) by default.
````
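As a usage note (a sketch, not part of the diff): after installing the R package from CRAN or from GitHub as above, the backend engine is typically provisioned with `install_keras()`, which installs TensorFlow by default.

```r
# Sketch of the typical setup flow after installing the R package.
# install_keras() provisions the Python backend (TensorFlow by default).
install.packages("keras3")        # or: remotes::install_github("rstudio/keras3")
keras3::install_keras()
```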

vignettes/custom_train_step_in_tensorflow.Rmd

Lines changed: 15 additions & 14 deletions

````diff
@@ -127,11 +127,11 @@ model |> fit(x, y, epochs = 3)

 ```
 ## Epoch 1/3
-## 32/32 - 1s - 23ms/step - loss: 3.2271 - mae: 1.4339
+## 32/32 - 1s - 24ms/step - mae: 1.4339 - loss: 3.2271
 ## Epoch 2/3
-## 32/32 - 0s - 1ms/step - loss: 2.9034 - mae: 1.3605
+## 32/32 - 0s - 2ms/step - mae: 1.3605 - loss: 2.9034
 ## Epoch 3/3
-## 32/32 - 0s - 1ms/step - loss: 2.6272 - mae: 1.2960
+## 32/32 - 0s - 2ms/step - mae: 1.2960 - loss: 2.6272
 ```

 ## Going lower-level
@@ -212,11 +212,11 @@ model |> fit(x, y, epochs = 3)

 ```
 ## Epoch 1/3
-## 32/32 - 1s - 22ms/step - loss: 2.5170 - mae: 1.2923
+## 32/32 - 1s - 23ms/step - loss: 2.5170 - mae: 1.2923
 ## Epoch 2/3
-## 32/32 - 0s - 1ms/step - loss: 2.2689 - mae: 1.2241
+## 32/32 - 0s - 2ms/step - loss: 2.2689 - mae: 1.2241
 ## Epoch 3/3
-## 32/32 - 0s - 1ms/step - loss: 2.0578 - mae: 1.1633
+## 32/32 - 0s - 3ms/step - loss: 2.0578 - mae: 1.1633
 ```

 ## Supporting `sample_weight` & `class_weight`
@@ -282,11 +282,11 @@ model |> fit(x, y, sample_weight = sw, epochs = 3)

 ```
 ## Epoch 1/3
-## 32/32 - 1s - 26ms/step - loss: 0.1681 - mae: 1.3434
+## 32/32 - 1s - 26ms/step - mae: 1.3434 - loss: 0.1681
 ## Epoch 2/3
-## 32/32 - 0s - 9ms/step - loss: 0.1394 - mae: 1.3364
+## 32/32 - 0s - 2ms/step - mae: 1.3364 - loss: 0.1394
 ## Epoch 3/3
-## 32/32 - 0s - 1ms/step - loss: 0.1148 - mae: 1.3286
+## 32/32 - 0s - 2ms/step - mae: 1.3286 - loss: 0.1148
 ```

 ## Providing your own evaluation step
@@ -332,15 +332,16 @@ model |> evaluate(x, y)
 ```

 ```
-## 32/32 - 0s - 10ms/step - loss: 0.0000e+00 - mae: 1.3871
+## 32/32 - 0s - 10ms/step - mae: 1.3871 - loss: 0.0000e+00
 ```

 ```
 ## $loss
-## [1] 0
+## tf.Tensor(0.0, shape=(), dtype=float32)
 ##
-## $mae
-## [1] 1.387149
+## $compile_metrics
+## $compile_metrics$mae
+## tf.Tensor(1.3871489, shape=(), dtype=float32)
 ```

 ## Wrapping up: an end-to-end GAN example
@@ -507,7 +508,7 @@ gan |> fit(
 ```

 ```
-## 100/100 - 5s - 54ms/step - d_loss: 0.0000e+00 - g_loss: 0.0000e+00
+## 100/100 - 5s - 55ms/step - d_loss: 0.0000e+00 - g_loss: 0.0000e+00
 ```

 The ideas behind deep learning are simple, so why should their implementation be painful?
````
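One practical consequence visible in the new `evaluate()` output above: metric values are now nested under `$compile_metrics` and come back as scalar tensors rather than plain numbers. A hedged sketch (assuming a compiled `model` and data `x`, `y` as in the vignette, and that `as.numeric()` converts an eager tensor, as provided by the R tensorflow package) of extracting plain R values:

```r
# `model`, `x`, and `y` are assumed to already exist as in the vignette.
results <- model |> evaluate(x, y)

# Loss and compiled metrics are returned as scalar tensors; convert them
# to plain R doubles before printing.
loss <- as.numeric(results$loss)
mae  <- as.numeric(results$compile_metrics$mae)
cat("loss:", loss, "- mae:", mae, "\n")
```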

vignettes/distribution.Rmd

Lines changed: 10 additions & 10 deletions

````diff
@@ -184,24 +184,24 @@ model |> fit(dataset, epochs = 3)

 ```
 ## Epoch 1/3
-## 8/8 - 0s - 38ms/step - loss: 1.0768
+## 8/8 - 0s - 40ms/step - loss: 1.1398
 ## Epoch 2/3
-## 8/8 - 0s - 6ms/step - loss: 0.9754
+## 8/8 - 0s - 7ms/step - loss: 1.0593
 ## Epoch 3/3
-## 8/8 - 0s - 5ms/step - loss: 0.9347
+## 8/8 - 0s - 5ms/step - loss: 1.0071
 ```

 ``` r
 model |> evaluate(dataset)
 ```

 ```
-## 8/8 - 0s - 7ms/step - loss: 0.8936
+## 8/8 - 0s - 6ms/step - loss: 0.9609
 ```

 ```
 ## $loss
-## [1] 0.8935966
+## [1] 0.9609067
 ```

@@ -278,24 +278,24 @@ model |> fit(dataset, epochs = 3)

 ```
 ## Epoch 1/3
-## 8/8 - 0s - 29ms/step - loss: 1.0836
+## 8/8 - 0s - 25ms/step - loss: 1.1612
 ## Epoch 2/3
-## 8/8 - 0s - 4ms/step - loss: 1.0192
+## 8/8 - 0s - 4ms/step - loss: 1.0702
 ## Epoch 3/3
-## 8/8 - 0s - 4ms/step - loss: 0.9821
+## 8/8 - 0s - 5ms/step - loss: 1.0184
 ```

 ``` r
 model |> evaluate(dataset)
 ```

 ```
-## 8/8 - 0s - 8ms/step - loss: 0.9576
+## 8/8 - 0s - 7ms/step - loss: 0.9844
 ```

 ```
 ## $loss
-## [1] 0.9576273
+## [1] 0.9844129
 ```
````

vignettes/functional_api.Rmd

Lines changed: 8 additions & 8 deletions

````diff
@@ -208,17 +208,17 @@ history <- model |> fit(

 ```
 ## Epoch 1/2
-## 750/750 - 2s - 3ms/step - accuracy: 0.8979 - loss: 0.3540 - val_accuracy: 0.9448 - val_loss: 0.1903
+## 750/750 - 2s - 3ms/step - accuracy: 0.8980 - loss: 0.3540 - val_accuracy: 0.9444 - val_loss: 0.1898
 ## Epoch 2/2
-## 750/750 - 1s - 784us/step - accuracy: 0.9509 - loss: 0.1635 - val_accuracy: 0.9597 - val_loss: 0.1397
+## 750/750 - 1s - 1ms/step - accuracy: 0.9512 - loss: 0.1633 - val_accuracy: 0.9605 - val_loss: 0.1387
 ```

 ``` r
 test_scores <- model |> evaluate(x_test, y_test, verbose=2)
 ```

 ```
-## 313/313 - 0s - 1ms/step - accuracy: 0.9595 - loss: 0.1328
+## 313/313 - 1s - 2ms/step - accuracy: 0.9598 - loss: 0.1322
 ```

@@ -229,8 +229,8 @@ cat("Test accuracy:", test_scores$accuracy, "\n")
 ```

 ```
-## Test loss: 0.132778
-## Test accuracy: 0.9595
+## Test loss: 0.1321879
+## Test accuracy: 0.9598
 ```

 For further reading, see the [training and evaluation](training_with_built_in_methods.html) guide.
@@ -643,9 +643,9 @@ model |> fit(

 ```
 ## Epoch 1/2
-## 40/40 - 3s - 66ms/step - department_loss: 381.0319 - loss: 498.7886 - priority_loss: 117.7568
+## 40/40 - 3s - 65ms/step - department_loss: 2.8465 - loss: 0.7669 - priority_loss: 0.1976
 ## Epoch 2/2
-## 40/40 - 0s - 10ms/step - department_loss: 358.5705 - loss: 438.4252 - priority_loss: 79.8546
+## 40/40 - 0s - 6ms/step - department_loss: 2.8554 - loss: 0.7538 - priority_loss: 0.1828
 ```

 When calling fit with a `Dataset` object, it should yield either a
@@ -776,7 +776,7 @@ model |> fit(
 ```

 ```
-## 13/13 - 5s - 375ms/step - acc: 0.1250 - loss: 2.3001 - val_acc: 0.1400 - val_loss: 2.2938
+## 13/13 - 5s - 367ms/step - acc: 0.1250 - loss: 2.3003 - val_acc: 0.1250 - val_loss: 2.2946
 ```

 ## Shared layers
````
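For readers skimming the diff, a minimal functional-API sketch (not from the commit) of the kind of model whose fit/evaluate output changed above; layer sizes and the choice of optimizer are illustrative.

```r
library(keras3)

# Minimal functional-API model: declare an input, chain layers, wrap as a model.
inputs  <- keras_input(shape = c(784))
outputs <- inputs |>
  layer_dense(64, activation = "relu") |>
  layer_dense(10)

model <- keras_model(inputs, outputs)

model |> compile(
  optimizer = "adam",
  loss = loss_sparse_categorical_crossentropy(from_logits = TRUE),
  metrics = "accuracy"
)
```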
