Commit 92d3370: render guides
1 parent 2adbfef

9 files changed: +115 -111 lines changed

vignettes/custom_train_step_in_tensorflow.Rmd

Lines changed: 8 additions & 8 deletions
@@ -127,7 +127,7 @@ model |> fit(x, y, epochs = 3)

 ```
 ## Epoch 1/3
-## 32/32 - 1s - 24ms/step - mae: 1.4339 - loss: 3.2271
+## 32/32 - 1s - 29ms/step - mae: 1.4339 - loss: 3.2271
 ## Epoch 2/3
 ## 32/32 - 0s - 2ms/step - mae: 1.3605 - loss: 2.9034
 ## Epoch 3/3
@@ -212,11 +212,11 @@ model |> fit(x, y, epochs = 3)

 ```
 ## Epoch 1/3
-## 32/32 - 1s - 23ms/step - loss: 2.5170 - mae: 1.2923
+## 32/32 - 1s - 20ms/step - loss: 2.5170 - mae: 1.2923
 ## Epoch 2/3
 ## 32/32 - 0s - 2ms/step - loss: 2.2689 - mae: 1.2241
 ## Epoch 3/3
-## 32/32 - 0s - 3ms/step - loss: 2.0578 - mae: 1.1633
+## 32/32 - 0s - 2ms/step - loss: 2.0578 - mae: 1.1633
 ```

 ## Supporting `sample_weight` & `class_weight`
@@ -282,11 +282,11 @@ model |> fit(x, y, sample_weight = sw, epochs = 3)

 ```
 ## Epoch 1/3
-## 32/32 - 1s - 26ms/step - mae: 1.3434 - loss: 0.1681
+## 32/32 - 1s - 28ms/step - mae: 1.3434 - loss: 0.1681
 ## Epoch 2/3
-## 32/32 - 0s - 2ms/step - mae: 1.3364 - loss: 0.1394
+## 32/32 - 0s - 3ms/step - mae: 1.3364 - loss: 0.1394
 ## Epoch 3/3
-## 32/32 - 0s - 2ms/step - mae: 1.3286 - loss: 0.1148
+## 32/32 - 0s - 3ms/step - mae: 1.3286 - loss: 0.1148
 ```

 ## Providing your own evaluation step
@@ -332,7 +332,7 @@ model |> evaluate(x, y)
 ```

 ```
-## 32/32 - 0s - 10ms/step - mae: 1.3871 - loss: 0.0000e+00
+## 32/32 - 0s - 9ms/step - mae: 1.3871 - loss: 0.0000e+00
 ```

 ```
@@ -508,7 +508,7 @@ gan |> fit(
 ```

 ```
-## 100/100 - 5s - 55ms/step - d_loss: 0.0000e+00 - g_loss: 0.0000e+00
+## 100/100 - 6s - 57ms/step - d_loss: 0.0000e+00 - g_loss: 0.0000e+00
 ```

 The ideas behind deep learning are simple, so why should their implementation be painful?
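
For context, the hunks above only change step timings in the rendered `fit()` and `evaluate()` output; the metrics themselves are unchanged. A minimal sketch of the kind of call that produces the `32/32 - ... - mae/loss` lines, including the `sample_weight` variant, is shown below. The data sizes and layer sizes are assumptions for illustration, not the vignette's exact code.

``` r
library(keras3)

# Assumed data: 1000 samples of 32 features, so fit() reports 32 steps per
# epoch at the default batch size of 32, as in the rendered output above.
x <- random_normal(c(1000, 32))
y <- random_normal(c(1000, 1))

model <- keras_model_sequential() |> layer_dense(units = 1)
model |> compile(optimizer = "adam", loss = "mse", metrics = "mae")

model |> fit(x, y, epochs = 3, verbose = 2)

# Per-sample weights passed to fit() are forwarded to the train step.
sw <- runif(1000)
model |> fit(x, y, sample_weight = sw, epochs = 3, verbose = 2)
```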

vignettes/distribution.Rmd

Lines changed: 10 additions & 10 deletions
@@ -184,24 +184,24 @@ model |> fit(dataset, epochs = 3)

 ```
 ## Epoch 1/3
-## 8/8 - 0s - 40ms/step - loss: 1.1398
+## 8/8 - 0s - 38ms/step - loss: 1.1533
 ## Epoch 2/3
-## 8/8 - 0s - 7ms/step - loss: 1.0593
+## 8/8 - 0s - 5ms/step - loss: 1.0621
 ## Epoch 3/3
-## 8/8 - 0s - 5ms/step - loss: 1.0071
+## 8/8 - 0s - 7ms/step - loss: 1.0163
 ```

 ``` r
 model |> evaluate(dataset)
 ```

 ```
-## 8/8 - 0s - 6ms/step - loss: 0.9609
+## 8/8 - 0s - 7ms/step - loss: 0.9673
 ```

 ```
 ## $loss
-## [1] 0.9609067
+## [1] 0.9673058
 ```

@@ -278,24 +278,24 @@ model |> fit(dataset, epochs = 3)

 ```
 ## Epoch 1/3
-## 8/8 - 0s - 25ms/step - loss: 1.1612
+## 8/8 - 0s - 42ms/step - loss: 1.1424
 ## Epoch 2/3
-## 8/8 - 0s - 4ms/step - loss: 1.0702
+## 8/8 - 0s - 7ms/step - loss: 1.0528
 ## Epoch 3/3
-## 8/8 - 0s - 5ms/step - loss: 1.0184
+## 8/8 - 0s - 7ms/step - loss: 1.0393
 ```

 ``` r
 model |> evaluate(dataset)
 ```

 ```
-## 8/8 - 0s - 7ms/step - loss: 0.9844
+## 8/8 - 0s - 9ms/step - loss: 1.0088
 ```

 ```
 ## $loss
-## [1] 0.9844129
+## [1] 1.008847
 ```
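
Here too only timings and the re-run loss values change. The hunks show `fit()` and `evaluate()` on a batched dataset with 8 steps per epoch; ignoring the distribution-specific setup this guide is actually about, output of that shape could come from a sketch like the one below (sample count and batch size are assumptions).

``` r
library(keras3)
library(tfdatasets)

# Assumed sizes: 512 samples batched by 64 give the 8 steps per epoch
# seen in the rendered output above.
x <- matrix(rnorm(512 * 32), ncol = 32)
y <- matrix(rnorm(512), ncol = 1)
dataset <- tensor_slices_dataset(list(x, y)) |> dataset_batch(64)

model <- keras_model_sequential() |> layer_dense(units = 1)
model |> compile(optimizer = "adam", loss = "mse")

model |> fit(dataset, epochs = 3, verbose = 2)
model |> evaluate(dataset, verbose = 2)  # returns a named list, e.g. $loss
```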

vignettes/functional_api.Rmd

Lines changed: 7 additions & 7 deletions
@@ -208,17 +208,17 @@ history <- model |> fit(

 ```
 ## Epoch 1/2
-## 750/750 - 2s - 3ms/step - accuracy: 0.8980 - loss: 0.3540 - val_accuracy: 0.9444 - val_loss: 0.1898
+## 750/750 - 2s - 3ms/step - accuracy: 0.8979 - loss: 0.3540 - val_accuracy: 0.9448 - val_loss: 0.1903
 ## Epoch 2/2
-## 750/750 - 1s - 1ms/step - accuracy: 0.9512 - loss: 0.1633 - val_accuracy: 0.9605 - val_loss: 0.1387
+## 750/750 - 1s - 2ms/step - accuracy: 0.9509 - loss: 0.1635 - val_accuracy: 0.9597 - val_loss: 0.1397
 ```

 ``` r
 test_scores <- model |> evaluate(x_test, y_test, verbose=2)
 ```

 ```
-## 313/313 - 1s - 2ms/step - accuracy: 0.9598 - loss: 0.1322
+## 313/313 - 1s - 2ms/step - accuracy: 0.9595 - loss: 0.1328
 ```

@@ -229,8 +229,8 @@ cat("Test accuracy:", test_scores$accuracy, "\n")
 ```

 ```
-## Test loss: 0.1321879
-## Test accuracy: 0.9598
+## Test loss: 0.132778
+## Test accuracy: 0.9595
 ```

 For further reading, see the [training and evaluation](training_with_built_in_methods.html) guide.
@@ -643,7 +643,7 @@ model |> fit(

 ```
 ## Epoch 1/2
-## 40/40 - 3s - 65ms/step - department_loss: 2.8465 - loss: 0.7669 - priority_loss: 0.1976
+## 40/40 - 3s - 74ms/step - department_loss: 2.8465 - loss: 0.7669 - priority_loss: 0.1976
 ## Epoch 2/2
 ## 40/40 - 0s - 6ms/step - department_loss: 2.8554 - loss: 0.7538 - priority_loss: 0.1828
 ```
@@ -776,7 +776,7 @@ model |> fit(
 ```

 ```
-## 13/13 - 5s - 367ms/step - acc: 0.1250 - loss: 2.3003 - val_acc: 0.1250 - val_loss: 2.2946
+## 13/13 - 6s - 453ms/step - acc: 0.1250 - loss: 2.3006 - val_acc: 0.1300 - val_loss: 2.2969
 ```

 ## Shared layers
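
The re-rendered MNIST numbers above come from the guide's small functional model. A sketch of that pattern is below; the exact architecture, batch size, and validation split are assumptions chosen to be consistent with the 750 training steps per epoch shown (48,000 samples at batch size 64).

``` r
library(keras3)

# Functional API: build a graph of layers from an explicit input node.
inputs <- keras_input(shape = 784)
outputs <- inputs |>
  layer_dense(units = 64, activation = "relu") |>
  layer_dense(units = 64, activation = "relu") |>
  layer_dense(units = 10)
model <- keras_model(inputs = inputs, outputs = outputs)

model |> compile(
  optimizer = optimizer_rmsprop(),
  loss = loss_sparse_categorical_crossentropy(from_logits = TRUE),
  metrics = "accuracy"
)

# history <- model |> fit(x_train, y_train, batch_size = 64, epochs = 2,
#                         validation_split = 0.2)
# test_scores <- model |> evaluate(x_test, y_test, verbose = 2)
```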

vignettes/intro_to_keras_for_engineers.Rmd

Lines changed: 13 additions & 13 deletions
@@ -178,25 +178,25 @@ model |> fit(

 ```
 ## Epoch 1/10
-## 399/399 - 8s - 20ms/step - acc: 0.7476 - loss: 0.7467 - val_acc: 0.9663 - val_loss: 0.1179
+## 399/399 - 7s - 18ms/step - acc: 0.7496 - loss: 0.7385 - val_acc: 0.9641 - val_loss: 0.1228
 ## Epoch 2/10
-## 399/399 - 2s - 5ms/step - acc: 0.9384 - loss: 0.2066 - val_acc: 0.9770 - val_loss: 0.0765
+## 399/399 - 3s - 7ms/step - acc: 0.9382 - loss: 0.2037 - val_acc: 0.9769 - val_loss: 0.0774
 ## Epoch 3/10
-## 399/399 - 2s - 5ms/step - acc: 0.9569 - loss: 0.1467 - val_acc: 0.9817 - val_loss: 0.0622
+## 399/399 - 3s - 7ms/step - acc: 0.9567 - loss: 0.1458 - val_acc: 0.9816 - val_loss: 0.0636
 ## Epoch 4/10
-## 399/399 - 2s - 5ms/step - acc: 0.9652 - loss: 0.1170 - val_acc: 0.9860 - val_loss: 0.0499
+## 399/399 - 3s - 7ms/step - acc: 0.9658 - loss: 0.1163 - val_acc: 0.9866 - val_loss: 0.0468
 ## Epoch 5/10
-## 399/399 - 2s - 5ms/step - acc: 0.9709 - loss: 0.0999 - val_acc: 0.9873 - val_loss: 0.0447
+## 399/399 - 3s - 9ms/step - acc: 0.9719 - loss: 0.0975 - val_acc: 0.9880 - val_loss: 0.0433
 ## Epoch 6/10
-## 399/399 - 2s - 5ms/step - acc: 0.9752 - loss: 0.0863 - val_acc: 0.9877 - val_loss: 0.0400
+## 399/399 - 3s - 8ms/step - acc: 0.9758 - loss: 0.0853 - val_acc: 0.9874 - val_loss: 0.0413
 ## Epoch 7/10
-## 399/399 - 2s - 5ms/step - acc: 0.9764 - loss: 0.0787 - val_acc: 0.9890 - val_loss: 0.0395
+## 399/399 - 3s - 7ms/step - acc: 0.9765 - loss: 0.0782 - val_acc: 0.9891 - val_loss: 0.0398
 ## Epoch 8/10
-## 399/399 - 2s - 5ms/step - acc: 0.9794 - loss: 0.0678 - val_acc: 0.9874 - val_loss: 0.0432
+## 399/399 - 3s - 8ms/step - acc: 0.9797 - loss: 0.0678 - val_acc: 0.9881 - val_loss: 0.0419
 ## Epoch 9/10
-## 399/399 - 2s - 5ms/step - acc: 0.9802 - loss: 0.0658 - val_acc: 0.9894 - val_loss: 0.0395
+## 399/399 - 3s - 7ms/step - acc: 0.9805 - loss: 0.0652 - val_acc: 0.9897 - val_loss: 0.0381
 ## Epoch 10/10
-## 399/399 - 2s - 5ms/step - acc: 0.9825 - loss: 0.0584 - val_acc: 0.9914 - val_loss: 0.0342
+## 399/399 - 3s - 8ms/step - acc: 0.9831 - loss: 0.0576 - val_acc: 0.9912 - val_loss: 0.0340
 ```

 ``` r
@@ -227,7 +227,7 @@ predictions <- model |> predict(x_test)
 ```

 ```
-## 313/313 - 0s - 2ms/step
+## 313/313 - 1s - 2ms/step
 ```

 ``` r
@@ -362,7 +362,7 @@ model |> fit(
 ```

 ```
-## 399/399 - 6s - 15ms/step - acc: 0.7343 - loss: 0.7741 - val_acc: 0.9269 - val_loss: 0.2399
+## 399/399 - 7s - 18ms/step - acc: 0.7344 - loss: 0.7749 - val_acc: 0.9259 - val_loss: 0.2411
 ```

 ## Training models on arbitrary data sources
@@ -439,7 +439,7 @@ model |> fit(train_dataset, epochs = 1, validation_data = test_dataset)
 ```

 ```
-## 469/469 - 7s - 14ms/step - acc: 0.7499 - loss: 0.7454 - val_acc: 0.9051 - val_loss: 0.3089
+## 469/469 - 8s - 17ms/step - acc: 0.7493 - loss: 0.7476 - val_acc: 0.9123 - val_loss: 0.2965
 ```

 ## Further reading
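
Again only timings and run-to-run metric noise change. The `313/313 - ... /step` hunk is the rendered output of `predict()` over the test set; a short usage sketch with a base R post-processing step follows (the 10,000 x 10 output shape is an assumption).

``` r
# predict() returns one row of class scores per test image (assumed 10000 x 10).
predictions <- model |> predict(x_test)
dim(predictions)

# Recover the predicted digit (0-9) for each image.
predicted_digits <- apply(predictions, 1, which.max) - 1
head(predicted_digits)
```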

vignettes/making_new_layers_and_models_via_subclassing.Rmd

Lines changed: 7 additions & 6 deletions
@@ -96,10 +96,11 @@ linear_layer$weights

 ```
 ## [[1]]
-## <KerasVariable shape=(2, 4), dtype=float32, path=linear/variable>
+## <Variable path=linear/variable, shape=(2, 4), dtype=float32, value=[[-0.06251299 0.05335509 0.01485647 -0.00985784]
+## [ 0.08404355 0.10115016 0.00569303 0.05479009]]>
 ##
 ## [[2]]
-## <KerasVariable shape=(4), dtype=float32, path=linear/variable_1>
+## <Variable path=linear/variable_1, shape=(4), dtype=float32, value=[0. 0. 0. 0.]>
 ```

 ## Layers can have non-trainable weights
@@ -480,7 +481,7 @@ model |> fit(random_normal(c(2, 3)), random_normal(c(2, 3)), epochs = 1)
 ```

 ```
-## 1/1 - 0s - 144ms/step - loss: 1.9081
+## 1/1 - 0s - 142ms/step - loss: 1.9081
 ```

 ``` r
@@ -492,7 +493,7 @@ model |> fit(random_normal(c(2, 3)), random_normal(c(2, 3)), epochs = 1)
 ```

 ```
-## 1/1 - 0s - 78ms/step - loss: -2.2532e-03
+## 1/1 - 0s - 115ms/step - loss: 1.6613
 ```

 ## You can optionally enable serialization on your layers
@@ -841,7 +842,7 @@ vae |> fit(x_train, x_train, epochs = 2, batch_size = 64)

 ```
 ## Epoch 1/2
-## 938/938 - 4s - 4ms/step - loss: 0.0748
+## 938/938 - 5s - 5ms/step - loss: 0.0748
 ## Epoch 2/2
-## 938/938 - 1s - 810us/step - loss: 0.0676
+## 938/938 - 1s - 2ms/step - loss: 0.0676
 ```
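
Beyond timings, the substantive change in this file is the rendered representation of layer weights: the older `<KerasVariable shape=..., dtype=..., path=...>` printout is replaced by the newer `<Variable path=..., shape=..., dtype=..., value=...>` format, which also shows the values. For context, `linear_layer$weights` in this guide comes from a custom layer that creates its weights with `add_weight()`; a rough sketch is below, assuming the keras3 subclassing helper `Layer()` together with `shape()` and `op_matmul()`. It is illustrative only, not the vignette's exact code.

``` r
library(keras3)

# A hypothetical Linear layer with an (input_dim, units) kernel and a bias.
layer_linear <- Layer(
  classname = "Linear",
  initialize = function(units = 4, input_dim = 2, ...) {
    super$initialize(...)
    self$w <- self$add_weight(
      shape = shape(input_dim, units),
      initializer = "random_normal",
      trainable = TRUE
    )
    self$b <- self$add_weight(
      shape = shape(units),
      initializer = "zeros",
      trainable = TRUE
    )
  },
  call = function(inputs) {
    op_matmul(inputs, self$w) + self$b
  }
)

linear_layer <- layer_linear(units = 4, input_dim = 2)
# Printing the weights shows the (2, 4) kernel and the length-4 bias,
# now rendered in the <Variable ...> format seen in the diff above.
linear_layer$weights
```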

vignettes/sequential_model.Rmd

Lines changed: 5 additions & 2 deletions
@@ -167,10 +167,13 @@ layer$weights # Now it has weights, of shape (4, 3) and (3,)

 ```
 ## [[1]]
-## <KerasVariable shape=(4, 3), dtype=float32, path=dense_9/kernel>
+## <Variable path=dense_9/kernel, shape=(4, 3), dtype=float32, value=[[ 0.48581433 0.78749573 0.61015 ]
+## [ 0.7962619 0.7261175 -0.8046875 ]
+## [-0.6189915 0.37973273 0.50559556]
+## [-0.5455791 -0.60714126 0.19791973]]>
 ##
 ## [[2]]
-## <KerasVariable shape=(3), dtype=float32, path=dense_9/bias>
+## <Variable path=dense_9/bias, shape=(3), dtype=float32, value=[0. 0. 0.]>
 ```

 Naturally, this also applies to Sequential models. When you instantiate a
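
The same weight-repr update appears here: `layer$weights` now prints `<Variable ...>` entries with their values. The surrounding guide text concerns deferred weight creation; a minimal sketch of that behaviour is below, with the input shape chosen to match the (4, 3) kernel and (3) bias in the diff.

``` r
library(keras3)

# A freshly created Dense layer has no weights yet; they are built lazily
# on its first call, once the input shape is known.
layer <- layer_dense(units = 3)
length(layer$weights)  # 0

# Calling the layer on a batch of shape (1, 4) builds the (4, 3) kernel
# and the length-3 bias shown in the diff above.
out <- layer(op_ones(c(1, 4)))
length(layer$weights)  # 2
layer$weights
```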
