@@ -201,20 +201,20 @@ cat("Target: "); str(y)
 
 ```
 ## Input: List of 13
-##  $ age     :<tf.Tensor: shape=(), dtype=int32, numpy=45>
+##  $ age     :<tf.Tensor: shape=(), dtype=int32, numpy=59>
 ##  $ sex     :<tf.Tensor: shape=(), dtype=int32, numpy=1>
-##  $ cp      :<tf.Tensor: shape=(), dtype=int32, numpy=1>
-##  $ trestbps:<tf.Tensor: shape=(), dtype=int32, numpy=110>
-##  $ chol    :<tf.Tensor: shape=(), dtype=int32, numpy=264>
-##  $ fbs     :<tf.Tensor: shape=(), dtype=int32, numpy=0>
-##  $ restecg :<tf.Tensor: shape=(), dtype=int32, numpy=0>
-##  $ thalach :<tf.Tensor: shape=(), dtype=int32, numpy=132>
+##  $ cp      :<tf.Tensor: shape=(), dtype=int32, numpy=4>
+##  $ trestbps:<tf.Tensor: shape=(), dtype=int32, numpy=164>
+##  $ chol    :<tf.Tensor: shape=(), dtype=int32, numpy=176>
+##  $ fbs     :<tf.Tensor: shape=(), dtype=int32, numpy=1>
+##  $ restecg :<tf.Tensor: shape=(), dtype=int32, numpy=2>
+##  $ thalach :<tf.Tensor: shape=(), dtype=int32, numpy=90>
 ##  $ exang   :<tf.Tensor: shape=(), dtype=int32, numpy=0>
-##  $ oldpeak :<tf.Tensor: shape=(), dtype=float32, numpy=1.2>
+##  $ oldpeak :<tf.Tensor: shape=(), dtype=float32, numpy=1.0>
 ##  $ slope   :<tf.Tensor: shape=(), dtype=int32, numpy=2>
-##  $ ca      :<tf.Tensor: shape=(), dtype=int32, numpy=0>
-##  $ thal    :<tf.Tensor: shape=(), dtype=string, numpy=b'reversible'>
-## Target: <tf.Tensor: shape=(), dtype=int32, numpy=0>
+##  $ ca      :<tf.Tensor: shape=(), dtype=int32, numpy=2>
+##  $ thal    :<tf.Tensor: shape=(), dtype=string, numpy=b'fixed'>
+## Target: <tf.Tensor: shape=(), dtype=int32, numpy=1>
 ```
 
 Let's batch the datasets:
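The batching code itself falls outside this hunk; with {tfdatasets} it amounts to something like the sketch below, where the dataset names `train_ds` and `val_ds` are assumptions about objects created earlier in the document.

``` r
# A minimal sketch of the batching step, assuming unbatched tf.data datasets
# named train_ds and val_ds exist from the preceding section.
library(tfdatasets)

train_ds <- train_ds |> dataset_batch(32)
val_ds   <- val_ds   |> dataset_batch(32)
```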
@@ -371,13 +371,13 @@ preprocessed_x
 
 ```
 ## tf.Tensor(
-## [[0. 0. 0. ... 0. 0. 0.]
+## [[0. 0. 0. ... 1. 0. 0.]
 ##  [0. 0. 0. ... 0. 0. 0.]
 ##  [0. 0. 0. ... 0. 0. 0.]
 ##  ...
 ##  [0. 0. 0. ... 0. 0. 0.]
 ##  [0. 0. 0. ... 0. 0. 0.]
-##  [0. 0. 0. ... 0. 0. 0.]], shape=(32, 136), dtype=float32)
+##  [0. 0. 1. ... 0. 0. 0.]], shape=(32, 136), dtype=float32)
 ```
 
 ## Two ways to manage preprocessing: as part of the `tf.data` pipeline, or in the model itself
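The heading above contrasts two placements of the `FeatureSpace` preprocessing. A rough sketch of both options, assuming {tfdatasets} and {tensorflow} are attached and that the `feature_space` object and the batched `train_ds` come from earlier in the document:

``` r
# Option 1 (sketch): apply preprocessing inside the tf.data pipeline, so the
# model receives already-encoded features and encoding runs asynchronously
# on the CPU while the model trains.
preprocessed_train_ds <- train_ds |>
  dataset_map(\(x, y) list(feature_space(x), y),
              num_parallel_calls = tf$data$AUTOTUNE) |>
  dataset_prefetch()

# Option 2 (sketch): make preprocessing part of the model itself, so raw
# features go in and encoding happens as the first step of the forward pass.
inputs <- feature_space$get_inputs()
encoded_features <- feature_space$get_encoded_features()
```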
@@ -463,45 +463,45 @@ training_model |> fit(
 
 ```
 ## Epoch 1/20
-## 8/8 - 2s - 280ms/step - accuracy: 0.4689 - loss: 0.7471 - val_accuracy: 0.5167 - val_loss: 0.7019
+## 8/8 - 2s - 274ms/step - accuracy: 0.4647 - loss: 0.7443 - val_accuracy: 0.4833 - val_loss: 0.7008
 ## Epoch 2/20
-## 8/8 - 1s - 140ms/step - accuracy: 0.5602 - loss: 0.6785 - val_accuracy: 0.6333 - val_loss: 0.6491
+## 8/8 - 0s - 29ms/step - accuracy: 0.6141 - loss: 0.6784 - val_accuracy: 0.6167 - val_loss: 0.6540
 ## Epoch 3/20
-## 8/8 - 0s - 46ms/step - accuracy: 0.6307 - loss: 0.6478 - val_accuracy: 0.7000 - val_loss: 0.6053
+## 8/8 - 0s - 32ms/step - accuracy: 0.5809 - loss: 0.6657 - val_accuracy: 0.7167 - val_loss: 0.6160
 ## Epoch 4/20
-## 8/8 - 0s - 12ms/step - accuracy: 0.6432 - loss: 0.6246 - val_accuracy: 0.7667 - val_loss: 0.5692
+## 8/8 - 0s - 30ms/step - accuracy: 0.6763 - loss: 0.6155 - val_accuracy: 0.7333 - val_loss: 0.5833
 ## Epoch 5/20
-## 8/8 - 0s - 13ms/step - accuracy: 0.7178 - loss: 0.5813 - val_accuracy: 0.7667 - val_loss: 0.5359
+## 8/8 - 0s - 30ms/step - accuracy: 0.7386 - loss: 0.5935 - val_accuracy: 0.7500 - val_loss: 0.5565
 ## Epoch 6/20
-## 8/8 - 0s - 13ms/step - accuracy: 0.7344 - loss: 0.5371 - val_accuracy: 0.7833 - val_loss: 0.5067
+## 8/8 - 0s - 30ms/step - accuracy: 0.7261 - loss: 0.5560 - val_accuracy: 0.7500 - val_loss: 0.5304
 ## Epoch 7/20
-## 8/8 - 0s - 13ms/step - accuracy: 0.7884 - loss: 0.5158 - val_accuracy: 0.8333 - val_loss: 0.4810
+## 8/8 - 0s - 30ms/step - accuracy: 0.7718 - loss: 0.5114 - val_accuracy: 0.7333 - val_loss: 0.5076
 ## Epoch 8/20
-## 8/8 - 0s - 13ms/step - accuracy: 0.7759 - loss: 0.5011 - val_accuracy: 0.8500 - val_loss: 0.4569
+## 8/8 - 0s - 30ms/step - accuracy: 0.7925 - loss: 0.5025 - val_accuracy: 0.7500 - val_loss: 0.4875
 ## Epoch 9/20
-## 8/8 - 0s - 13ms/step - accuracy: 0.7676 - loss: 0.4865 - val_accuracy: 0.8500 - val_loss: 0.4354
+## 8/8 - 0s - 30ms/step - accuracy: 0.7510 - loss: 0.5042 - val_accuracy: 0.7500 - val_loss: 0.4698
 ## Epoch 10/20
-## 8/8 - 0s - 13ms/step - accuracy: 0.7925 - loss: 0.4601 - val_accuracy: 0.8333 - val_loss: 0.4161
+## 8/8 - 0s - 30ms/step - accuracy: 0.8008 - loss: 0.4562 - val_accuracy: 0.7500 - val_loss: 0.4555
 ## Epoch 11/20
-## 8/8 - 0s - 13ms/step - accuracy: 0.7967 - loss: 0.4617 - val_accuracy: 0.8667 - val_loss: 0.3976
+## 8/8 - 0s - 30ms/step - accuracy: 0.8714 - loss: 0.4418 - val_accuracy: 0.7667 - val_loss: 0.4431
 ## Epoch 12/20
-## 8/8 - 0s - 13ms/step - accuracy: 0.7967 - loss: 0.4316 - val_accuracy: 0.8667 - val_loss: 0.3796
+## 8/8 - 0s - 33ms/step - accuracy: 0.8506 - loss: 0.4182 - val_accuracy: 0.7500 - val_loss: 0.4327
 ## Epoch 13/20
-## 8/8 - 0s - 13ms/step - accuracy: 0.8506 - loss: 0.4058 - val_accuracy: 0.8833 - val_loss: 0.3643
+## 8/8 - 0s - 30ms/step - accuracy: 0.8465 - loss: 0.3950 - val_accuracy: 0.7667 - val_loss: 0.4239
 ## Epoch 14/20
-## 8/8 - 0s - 13ms/step - accuracy: 0.8174 - loss: 0.4197 - val_accuracy: 0.8833 - val_loss: 0.3510
+## 8/8 - 0s - 30ms/step - accuracy: 0.8382 - loss: 0.3905 - val_accuracy: 0.7667 - val_loss: 0.4166
 ## Epoch 15/20
-## 8/8 - 0s - 14ms/step - accuracy: 0.8299 - loss: 0.3888 - val_accuracy: 0.8833 - val_loss: 0.3405
+## 8/8 - 0s - 30ms/step - accuracy: 0.8465 - loss: 0.3661 - val_accuracy: 0.7833 - val_loss: 0.4104
 ## Epoch 16/20
-## 8/8 - 0s - 13ms/step - accuracy: 0.8257 - loss: 0.3820 - val_accuracy: 0.8833 - val_loss: 0.3294
+## 8/8 - 0s - 30ms/step - accuracy: 0.8631 - loss: 0.3725 - val_accuracy: 0.8000 - val_loss: 0.4053
 ## Epoch 17/20
-## 8/8 - 0s - 13ms/step - accuracy: 0.8299 - loss: 0.3746 - val_accuracy: 0.8833 - val_loss: 0.3223
+## 8/8 - 0s - 30ms/step - accuracy: 0.8299 - loss: 0.3679 - val_accuracy: 0.8167 - val_loss: 0.4014
 ## Epoch 18/20
-## 8/8 - 0s - 13ms/step - accuracy: 0.8506 - loss: 0.3487 - val_accuracy: 0.8833 - val_loss: 0.3153
+## 8/8 - 0s - 30ms/step - accuracy: 0.8714 - loss: 0.3501 - val_accuracy: 0.8167 - val_loss: 0.3984
 ## Epoch 19/20
-## 8/8 - 0s - 14ms/step - accuracy: 0.8465 - loss: 0.3558 - val_accuracy: 0.8667 - val_loss: 0.3093
+## 8/8 - 0s - 30ms/step - accuracy: 0.8714 - loss: 0.3322 - val_accuracy: 0.8167 - val_loss: 0.3949
 ## Epoch 20/20
-## 8/8 - 0s - 14ms/step - accuracy: 0.8672 - loss: 0.3570 - val_accuracy: 0.8667 - val_loss: 0.3036
+## 8/8 - 0s - 30ms/step - accuracy: 0.8506 - loss: 0.3303 - val_accuracy: 0.8167 - val_loss: 0.3924
 ```
 
 We quickly get to 80% validation accuracy.
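For reference, a `compile()`/`fit()` pair of roughly the following shape produces the 20-epoch, `verbose = 2` log shown above; the optimizer, loss, and dataset names are assumptions consistent with a binary sigmoid output.

``` r
# A sketch of the training setup implied by the log above; exact optimizer
# settings and the preprocessed dataset names are assumptions.
training_model |> compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = "accuracy"
)

training_model |> fit(
  preprocessed_train_ds,
  epochs = 20,
  validation_data = preprocessed_val_ds,
  verbose = 2
)
```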
@@ -534,7 +534,7 @@ predictions <- inference_model |> predict(input_dict)
 ```
 
 ```
-## 1/1 - 0s - 394ms/step
+## 1/1 - 0s - 341ms/step
 ```
 
 ``` r
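The `input_dict` passed to `predict()` above is a named list of scalar tensors describing a single patient. A sketch of how such an input can be assembled; the particular feature values and the use of `op_convert_to_tensor()` are assumptions, and `inference_model` is the end-to-end model defined earlier in the document.

``` r
# A sketch of building a one-sample input for the inference model that
# includes preprocessing; the feature values below are illustrative.
sample <- list(
  age = 60, sex = 1, cp = 1, trestbps = 145, chol = 233, fbs = 1,
  restecg = 2, thalach = 150, exang = 0, oldpeak = 2.3, slope = 3,
  ca = 0, thal = "fixed"
)
input_dict <- lapply(sample, \(x) op_convert_to_tensor(array(x)))
predictions <- inference_model |> predict(input_dict)
```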
@@ -545,6 +545,6 @@ glue::glue(r"---(
 ```
 
 ```
-## This particular patient had a 44.8% probability
+## This particular patient had a 51.4% probability
 ## of having a heart disease, as evaluated by our model.
 ```
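The sentence above comes from the `glue::glue()` call named in the hunk header; turning the model's sigmoid output into that percentage amounts to something like the sketch below, where `predictions` is the 1 x 1 matrix returned by `predict()` above.

``` r
# A sketch of formatting the predicted probability as a percentage.
p <- round(100 * predictions[1, 1], 1)
glue::glue(
  "This particular patient had a {p}% probability",
  "of having a heart disease, as evaluated by our model.",
  .sep = "\n"
)
```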