
Commit d9b32b5: update applications
Parent: f2d9b08
16 files changed (+1211 / -420 lines)

NAMESPACE

Lines changed: 9 additions & 0 deletions

```diff
@@ -105,10 +105,17 @@ export(application_inception_resnet_v2)
 export(application_inception_v3)
 export(application_mobilenet)
 export(application_mobilenet_v2)
+export(application_mobilenet_v3_large)
+export(application_mobilenet_v3_small)
 export(application_nasnet)
 export(application_nasnetlarge)
 export(application_nasnetmobile)
+export(application_resnet101)
+export(application_resnet101_v2)
+export(application_resnet152)
+export(application_resnet152_v2)
 export(application_resnet50)
+export(application_resnet50_v2)
 export(application_vgg16)
 export(application_vgg19)
 export(application_xception)
@@ -556,6 +563,8 @@ export(regularizer_l1)
 export(regularizer_l1_l2)
 export(regularizer_l2)
 export(reset_states)
+export(resnet_preprocess_input)
+export(resnet_v2_preprocess_input)
 export(run_dir)
 export(save_model_hdf5)
 export(save_model_tf)
```
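
The new exports can be exercised end to end. The snippet below is a minimal sketch, assuming `application_resnet50_v2()` takes the same arguments as the existing `application_resnet50()` (`weights`, `include_top`, ...) and that `resnet_v2_preprocess_input()` plays the role `imagenet_preprocess_input()` plays for the original ResNet; the image path is a placeholder.

```r
library(keras)

# Sketch only: assumes application_resnet50_v2() mirrors the signature of
# application_resnet50() and that ImageNet weights are available locally.
model <- application_resnet50_v2(weights = "imagenet", include_top = TRUE)

# "elephant.jpg" is a placeholder path to any RGB image.
img <- image_load("elephant.jpg", target_size = c(224, 224))
x <- image_to_array(img)
x <- array_reshape(x, c(1, dim(x)))      # add the batch dimension
x <- resnet_v2_preprocess_input(x)       # newly exported preprocessing helper

preds <- model %>% predict(x)
imagenet_decode_predictions(preds, top = 3)
```

The same pattern should carry over to the other new applications (`application_resnet101_v2()`, `application_mobilenet_v3_large()`, and so on), each paired with its matching preprocessing helper.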

NEWS.md

Lines changed: 25 additions & 12 deletions

```diff
@@ -15,36 +15,49 @@
   "Working with RNNs".

 - New layers:
-  - `layer_additive_attention()`
-  - `layer_conv_lstm_1d()`
+  - `layer_additive_attention()`
+  - `layer_conv_lstm_1d()`
   - `layer_conv_lstm_3d()`
-
+
 - `layer_lstm()` default value for `recurrent_activation` changed from `"hard_sigmoid"` to `"sigmoid"`.

 - `layer_cudnn_gru()` and `layer_cudnn_lstm()` are deprecated. `layer_gru()` and `layer_lstm()` will
   automatically use CuDNN if it is available.

 - New vignette: "Transfer learning and fine-tuning".

-- New `application_efficientnet_b{1,2,3,4,5,6,7}()`.
+- New applications:
+  - MobileNet V3: `application_mobilenet_v3_large()`, `application_mobilenet_v3_small()`
+  - ResNet: `application_resnet101()`, `application_resnet152()`, `resnet_preprocess_input()`
+  - ResNet V2: `application_resnet50_v2()`, `application_resnet101_v2()`,
+    `application_resnet152_v2()` and `resnet_v2_preprocess_input()`
+  - EfficientNet: `application_efficientnet_b{0,1,2,3,4,5,6,7}()`
+
+- Many existing `application_*()`s gain a `classifier_activation` argument, with default `'softmax'`.
+  Affected: `application_{xception, inception_resnet_v2, inception_v3, mobilenet, vgg16, vgg19}()`

 - New function `%<-active%`, an ergonomic wrapper around `makeActiveBinding()`
   for constructing Python `@property` decorated methods in `%py_class%`.

 - `bidirectional()` sequence processing layer wrapper gains a `backwards_layer` argument.

-- Global pooling layers `layer_global_{max,average}_pooling_{1,2,3}d()` gain a
+- Global pooling layers `layer_global_{max,average}_pooling_{1,2,3}d()` gain a
   `keepdims` argument with default value `FALSE`.

-- Signatures for layer functions are in the process of being simplified.
-  Standard layer arguments are moving to `...` where appropriate (and will need to be provided as named arguments).
-  Standard layer arguments include: `input_shape`, `batch_input_shape`, `batch_size`, `dtype`, `name`, `trainable`, `weights`.
-  Layers updated: `layer_global_{max,average}_pooling_{1,2,3}d()`, `time_distributed()`, `bidirectional()`.
-
-- All the backend function with a shape argument `k_*(shape =)` that now accept a
+- Signatures for layer functions are in the process of being simplified.
+  Standard layer arguments are moving to `...` where appropriate
+  (and will need to be provided as named arguments).
+  Standard layer arguments include:
+  `input_shape`, `batch_input_shape`, `batch_size`, `dtype`,
+  `name`, `trainable`, `weights`.
+  Layers updated:
+  `layer_global_{max,average}_pooling_{1,2,3}d()`,
+  `time_distributed()`, `bidirectional()`.
+
+- All the backend functions with a shape argument `k_*(shape =)` now accept
   a mix of integer tensors and R numerics in the supplied list.

-- `k_random_uniform()` now automatically coerces `minval` and `maxval` to the output dtype.
+- `k_random_uniform()` now automatically casts `minval` and `maxval` to the output dtype.

 # keras 2.6.1

```
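
Two of the changelog entries above are easiest to see in code: the new `keepdims` argument on the global pooling layers, and standard layer arguments being passed through `...` as named arguments. The following is a hypothetical sketch based only on the changelog wording, not code taken from this commit.

```r
library(keras)

model <- keras_model_sequential(name = "pooling_demo") %>%
  # Standard layer arguments (input_shape, name, ...) are supplied as
  # *named* arguments, since they are moving into `...`.
  layer_conv_2d(
    filters = 16, kernel_size = 3, activation = "relu",
    input_shape = c(32, 32, 3), name = "conv"
  ) %>%
  # keepdims = TRUE retains the pooled spatial axes as length-1 dimensions,
  # giving output shape (batch, 1, 1, 16) instead of (batch, 16).
  layer_global_average_pooling_2d(keepdims = TRUE, name = "gap")

summary(model)
```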
