  "Working with RNNs".

- New layers:
  - `layer_additive_attention()`
  - `layer_conv_lstm_1d()`
  - `layer_conv_lstm_3d()`

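  A minimal usage sketch for one of the new layers (the shapes here are
  illustrative, not from the changelog):

  ```r
  library(keras)
  # ConvLSTM over 1-D sequences: input is (batch, time, spatial, channels)
  inputs <- layer_input(shape = c(10, 32, 3))
  outputs <- inputs %>%
    layer_conv_lstm_1d(filters = 16, kernel_size = 3)
  model <- keras_model(inputs, outputs)
  ```
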
- `layer_lstm()` default value for `recurrent_activation` changed from `"hard_sigmoid"` to `"sigmoid"`.

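  Code that depends on the old default can request it explicitly; a minimal
  sketch:

  ```r
  # restore the pre-change default explicitly
  layer <- layer_lstm(units = 32, recurrent_activation = "hard_sigmoid")
  ```
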
- `layer_cudnn_gru()` and `layer_cudnn_lstm()` are deprecated. `layer_gru()` and `layer_lstm()` will
  automatically use CuDNN if it is available.

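  A sketch of the migration (the unit count is illustrative):

  ```r
  # before (deprecated): layer_cudnn_lstm(units = 64)
  # after: the standard layer selects the CuDNN kernel when available
  layer <- layer_lstm(units = 64)
  ```
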
- New vignette: "Transfer learning and fine-tuning".

- New applications:
  - MobileNet V3: `application_mobilenet_v3_large()`, `application_mobilenet_v3_small()`
  - ResNet: `application_resnet101()`, `application_resnet152()`, `resnet_preprocess_input()`
  - ResNet V2: `application_resnet50_v2()`, `application_resnet101_v2()`,
    `application_resnet152_v2()`, and `resnet_v2_preprocess_input()`
  - EfficientNet: `application_efficientnet_b{0,1,2,3,4,5,6,7}()`

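  A minimal usage sketch for one of the new applications:

  ```r
  library(keras)
  # EfficientNet-B0 with pretrained ImageNet weights
  model <- application_efficientnet_b0(weights = "imagenet")
  ```
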
- Many existing `application_*()` functions gain a `classifier_activation` argument, with default `'softmax'`.
  Affected: `application_{xception, inception_resnet_v2, inception_v3, mobilenet, vgg16, vgg19}()`

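  A sketch, assuming `classifier_activation = NULL` yields a linear (logits)
  classifier head, as in Python Keras:

  ```r
  # randomly initialized VGG16 whose classifier returns logits
  model <- application_vgg16(weights = NULL, classifier_activation = NULL)
  ```
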
- New function `%<-active%`, an ergonomic wrapper around `makeActiveBinding()`
  for constructing Python `@property` decorated methods in `%py_class%`.

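  A minimal sketch, assuming the bound function follows `makeActiveBinding()`
  semantics (called with no argument to get, with the assigned value to set);
  the `Counter` class and its `_x` field are hypothetical:

  ```r
  library(keras)
  Counter %py_class% {
    initialize <- function() {
      self$`_x` <- 0
    }
    x %<-active% function(value) {
      if (missing(value)) {
        self$`_x`            # getter: no value supplied
      } else {
        self$`_x` <- value   # setter: value being assigned
      }
    }
  }
  ```
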
- `bidirectional()` sequence processing layer wrapper gains a `backwards_layer` argument.

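  A sketch, using the argument name as spelled in this entry; the backward
  layer must itself process sequences in reverse:

  ```r
  model <- keras_model_sequential() %>%
    bidirectional(
      layer = layer_lstm(units = 8, return_sequences = TRUE),
      backwards_layer = layer_gru(units = 8, return_sequences = TRUE,
                                  go_backwards = TRUE),
      input_shape = c(5, 10)
    )
  ```
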
- Global pooling layers `layer_global_{max,average}_pooling_{1,2,3}d()` gain a
  `keepdims` argument with default value `FALSE`.

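  A minimal sketch of the effect (shapes are illustrative):

  ```r
  x <- layer_input(shape = c(28, 28, 3))
  # keepdims = TRUE keeps the pooled spatial axes as length-1 dimensions:
  # output shape (batch, 1, 1, 3) rather than (batch, 3)
  y <- x %>% layer_global_average_pooling_2d(keepdims = TRUE)
  ```
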
- Signatures for layer functions are in the process of being simplified.
  Standard layer arguments are moving to `...` where appropriate
  (and will need to be provided as named arguments).
  Standard layer arguments include:
  `input_shape`, `batch_input_shape`, `batch_size`, `dtype`,
  `name`, `trainable`, `weights`.
  Layers updated:
  `layer_global_{max,average}_pooling_{1,2,3}d()`,
  `time_distributed()`, `bidirectional()`.

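  In practice this means standard arguments must be spelled out; a sketch:

  ```r
  model <- keras_model_sequential() %>%
    # standard args like input_shape and name now pass through `...`
    # and must be supplied as named arguments
    layer_global_max_pooling_1d(input_shape = c(100, 8), name = "pool")
  ```
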
- All backend functions with a shape argument, `k_*(shape =)`, now accept
  a mix of integer tensors and R numerics in the supplied list.

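  A sketch of mixing a scalar integer tensor with R numerics in a shape list
  (the constant here is illustrative):

  ```r
  n <- k_constant(3L, dtype = "int32")  # an integer tensor
  z <- k_ones(shape = list(n, 4))       # shape (3, 4)
  ```
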
- `k_random_uniform()` now automatically casts `minval` and `maxval` to the output dtype.

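  A sketch of a case where the cast matters (integer output dtype with
  R numeric bounds):

  ```r
  # minval/maxval are R doubles; they are now cast to int32 automatically
  x <- k_random_uniform(shape = c(2, 2), minval = 0, maxval = 10,
                        dtype = "int32")
  ```
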
# keras 2.6.1