
Commit 508d117

Merge pull request #2247 from mcabbott/more_news
Update NEWS
2 parents d022b9f + 5a183e9


NEWS.md

Lines changed: 29 additions & 9 deletions
@@ -1,5 +1,11 @@
 # Flux Release Notes
 
+See also [GitHub's releases page](https://github.com/FluxML/Flux.jl/releases) for a complete list of PRs merged before each release.
+
+## v0.13.16
+* Most greek-letter keyword arguments are deprecated in favour of ascii.
+Thus `LayerNorm(3; ϵ=1e-4)` (not `ε`!) should become `LayerNorm(3; eps=1e-4)`.
+
 ## v0.13.15
 * Added [MultiHeadAttention](https://github.com/FluxML/Flux.jl/pull/2146) layer.
 * `f16, f32, f64` now specifically target floating point arrays (i.e. integer arrays and other types are preserved).
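To make the v0.13.16 rename concrete, a small sketch (assuming Flux ≥ 0.13.16; `f16` is the recursive converter from v0.13.13, and the layer sizes are invented):

```julia
using Flux

ln = LayerNorm(3; eps=1e-4)   # new ascii keyword
# LayerNorm(3; ϵ=1e-4)        # old greek keyword: deprecated, still accepted

# f16/f32/f64 convert only floating-point arrays, recursively through a model:
m = f16(Dense(2 => 3))        # weight and bias become Float16
```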
@@ -14,22 +20,36 @@
 
 ## v0.13.13
 * Added `f16` which changes precision to `Float16`, recursively.
+* Most layers standardise their input to `eltype(layer.weight)`, [#2156](https://github.com/FluxML/Flux.jl/pull/2156),
+to limit the cost of accidental Float64 promotion.
+* Friendlier errors from size mismatches [#2176](https://github.com/FluxML/Flux.jl/pull/2176).
 
 ## v0.13.12
 * CUDA.jl 4.0 compatibility.
+* Use `dropout` from NNlib as back-end for `Dropout` layer.
 
-## v0.13.7
-* Added [`@autosize` macro](https://github.com/FluxML/Flux.jl/pull/2078)
+## v0.13.9
 * New method of `train!` using Zygote's "explicit" mode. Part of a move away from "implicit" `Params`.
+* Added [Flux.setup](https://github.com/FluxML/Flux.jl/pull/2082), which is `Optimisers.setup` with extra checks,
+and translation from deprecated "implicit" optimisers like `Flux.Optimise.Adam` to new ones from Optimisers.jl.
+
+## v0.13.7
+* Added [`@autosize` macro](https://github.com/FluxML/Flux.jl/pull/2078), as another way to use `outputsize`.
+* Export `Embedding`.
+
+## v0.13.6
+* Use the package [OneHotArrays.jl](https://github.com/FluxML/OneHotArrays.jl) instead of having the same code here.
 
 ## v0.13.4
 * Added [`PairwiseFusion` layer](https://github.com/FluxML/Flux.jl/pull/1983)
+* Re-name `ADAM` to `Adam`, etc (with deprecations).
+
+## v0.13 (April 2022)
 
-## v0.13
 * After a deprecations cycle, the datasets in `Flux.Data` have
-been removed in favour of MLDatasets.jl.
+been removed in favour of [MLDatasets.jl](https://github.com/JuliaML/MLDatasets.jl).
 * `params` is not exported anymore since it is a common name and is also exported by Distributions.jl
-* `flatten` is not exported anymore due to clash with Iterators.flatten.
+* `flatten` is not exported anymore due to clash with `Iterators.flatten`.
 * Remove Juno.jl progress bar support as it is now obsolete.
 * `Dropout` gained improved compatibility with Int and Complex arrays and is now twice-differentiable.
 * Notation `Dense(2 => 3, σ)` for channels matches `Conv`; the equivalent `Dense(2, 3, σ)` still works.
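The v0.13.9 entries above change the surface of the training loop. A minimal sketch (the model, data, and learning rate are invented for illustration):

```julia
using Flux

model = Dense(2 => 1)
data = [(randn(Float32, 2, 8), randn(Float32, 1, 8))]  # one (x, y) batch

# Flux.setup wraps Optimisers.setup, translating old "implicit" optimisers
# like Flux.Optimise.Adam into their Optimisers.jl equivalents:
opt_state = Flux.setup(Adam(0.01), model)

# "Explicit" train!: the model is an argument of the loss, no implicit Params:
Flux.train!(model, data, opt_state) do m, x, y
    Flux.mse(m(x), y)
end
```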
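And the v0.13.7 `@autosize` entry: the macro runs `outputsize` on the given input size and fills in each `_` (a sketch; the layer sizes are invented, and `Dense(_ => 10)` uses the `=>` notation introduced in v0.13):

```julia
using Flux

# Each `_` is inferred from the input size (28, 28, 1, 32):
model = @autosize (28, 28, 1, 32) Chain(
    Conv((3, 3), _ => 5, relu),
    Flux.flatten,
    Dense(_ => 10),
)
```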
@@ -70,7 +90,7 @@ been removed in favour of MLDatasets.jl.
 * CUDA.jl 3.0 support
 * Bug fixes and optimizations.
 
-## v0.12.0
+## v0.12 (March 2021)
 
 * Add [identity_init](https://github.com/FluxML/Flux.jl/pull/1524).
 * Add [Orthogonal Matrix initialization](https://github.com/FluxML/Flux.jl/pull/1496) as described in [Exact solutions to the nonlinear dynamics of learning in deep linear neural networks](https://arxiv.org/abs/1312.6120).
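The two v0.12 initialisers above are plain functions returning arrays, so they can also be passed as a layer's `init` (a sketch on a current Flux; the sizes are invented):

```julia
using Flux

W = Flux.orthogonal(4, 4)                    # semi-orthogonal matrix: W'W ≈ I
D = Dense(3 => 3; init=Flux.identity_init)   # starts as an identity mapping
```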
@@ -95,7 +115,7 @@ been removed in favour of MLDatasets.jl.
 * Adds the [AdaBelief](https://arxiv.org/abs/2010.07468) optimiser.
 * Other new features and bug fixes (see GitHub releases page)
 
-## v0.11
+## v0.11 (July 2020)
 
 * Moved CUDA compatibility to use [CUDA.jl instead of CuArrays.jl](https://github.com/FluxML/Flux.jl/pull/1204)
 * Add [kaiming initialization](https://arxiv.org/abs/1502.01852) methods: [kaiming_uniform and kaiming_normal](https://github.com/FluxML/Flux.jl/pull/1243)
@@ -116,14 +136,14 @@ been removed in favour of MLDatasets.jl.
 * Functors have now moved to [Functors.jl](https://github.com/FluxML/Flux.jl/pull/1174) to allow for their use outside of Flux.
 * Added [helper functions](https://github.com/FluxML/Flux.jl/pull/873) `Flux.convfilter` and `Flux.depthwiseconvfilter` to construct weight arrays for convolutions outside of layer constructors so as to not have to depend on the default layers for custom implementations.
 * `dropout` function now has a mandatory [active](https://github.com/FluxML/Flux.jl/pull/1263)
-keyword argument. The `Dropout` struct *whose behavior is left unchanged) is the recommended choice for common usage.
+keyword argument. The `Dropout` struct (whose behavior is left unchanged) is the recommended choice for common usage.
 * and many more fixes and additions...
 
 ## v0.10.1 - v0.10.4
 
 See GitHub's releases.
 
-## v0.10.0
+## v0.10.0 (November 2019)
 
 * The default AD engine has switched from [Tracker to Zygote.jl](https://github.com/FluxML/Flux.jl/pull/669)
 - The dependency on Tracker.jl has been removed.
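The v0.10 switch from Tracker to Zygote is why gradients today are taken with plain function calls rather than tracked array types (a minimal sketch on a current Flux; the model and data are invented):

```julia
using Flux

m = Dense(2 => 1)
x = randn(Float32, 2, 4)

# Zygote differentiates an ordinary closure; no TrackedArray involved:
grads = gradient(m -> sum(abs2, m(x)), m)
```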
