  - `layer_stacked_rnn_cells()`

  To learn more, including how to make a custom cell layer, see the new vignette:
  "Working with RNNs".

- New dataset loader `text_dataset_from_directory()`.

- New layers:
  - `layer_additive_attention()`
  - `layer_conv_lstm_1d()`
  - `layer_conv_lstm_3d()`

- `layer_cudnn_gru()` and `layer_cudnn_lstm()` are deprecated.
  `layer_gru()` and `layer_lstm()` will automatically use CuDNN if it is available.

- `layer_lstm()` and `layer_gru()`:
  default value for `recurrent_activation` changed
  from `"hard_sigmoid"` to `"sigmoid"`.

- `layer_gru()`: default value `reset_after` changed from `FALSE` to `TRUE`.
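
  Code that depends on the old RNN defaults must now request them explicitly; a
  minimal sketch (the `units` value is arbitrary, and the keras package is
  assumed to be attached):

  ```r
  library(keras)

  # Reproduce the pre-change defaults explicitly:
  layer <- layer_gru(
    units = 32,
    recurrent_activation = "hard_sigmoid",  # new default is "sigmoid"
    reset_after = FALSE                     # new default is TRUE
  )
  ```

  Note that the new defaults match what the fast CuDNN kernel requires, so
  overriding them as above also opts out of the CuDNN code path.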
- New applications:
  - MobileNet V3: `application_mobilenet_v3_large()`, `application_mobilenet_v3_small()`
  - ResNet: `application_resnet101()`, `application_resnet152()`, `resnet_preprocess_input()`
  - ResNet V2: `application_resnet50_v2()`, `application_resnet101_v2()`,
    `application_resnet152_v2()` and `resnet_v2_preprocess_input()`
  - EfficientNet: `application_efficientnet_b{0,1,2,3,4,5,6,7}()`

  `keepdims` argument with default value `FALSE`.

- Signatures for layer functions are in the process of being simplified.
  Standard layer arguments are moving to `...` where appropriate
  (and will need to be provided as named arguments).
  Standard layer arguments include:
  `input_shape`, `batch_input_shape`, `batch_size`, `dtype`,
  `name`, `trainable`, `weights`.
  Layers updated:
  `layer_global_{max,average}_pooling_{1,2,3}d()`,
  `time_distributed()`, `bidirectional()`,
  `layer_gru()`, `layer_lstm()`, `layer_simple_rnn()`

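Because standard layer arguments now pass through `...` in the updated layers,
they must be supplied by name; a hedged sketch (the layer choice and argument
values are illustrative only):

```r
library(keras)

# `name` and `input_shape` travel through `...`, so positional
# matching no longer works -- name them explicitly:
pool <- layer_global_average_pooling_2d(
  name = "avg_pool",
  input_shape = c(32, 32, 3)
)
```
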
- All backend functions with a shape argument, `k_*(shape = )`, now accept a
  mix of integer tensors and R numerics in the supplied list.

- All layer functions now accept `NA` as a synonym for `NULL` in arguments
  that specify shape as a vector of dimension values,
  e.g., `input_shape`, `batch_input_shape`.

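A hedged illustration of the `NA`/`NULL` equivalence (the dense layer and
dimension sizes are arbitrary):

```r
library(keras)

# These two specifications are now equivalent: NA marks an
# unknown (variable) dimension, just like NULL.
layer_a <- layer_dense(units = 8, input_shape = list(NULL, 16))
layer_b <- layer_dense(units = 8, input_shape = c(NA, 16))
```
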
- `k_random_uniform()` now automatically casts `minval` and `maxval` to the output dtype.

# keras 2.6.1