1 parent 0ae05d6 commit bb71298
README.md
@@ -21,8 +21,8 @@
 Optimisers.jl defines many standard gradient-based optimisation rules, and tools for applying them to deeply nested models.
 
-This is the future of training for [Flux.jl](https://github.com/FluxML/Flux.jl) neural networks,
-and the present for [Lux.jl](https://github.com/avik-pal/Lux.jl).
+This was written as the new training system for [Flux.jl](https://github.com/FluxML/Flux.jl) neural networks,
+and also used by [Lux.jl](https://github.com/avik-pal/Lux.jl).
 
 But it can be used separately on any array, or anything else understood by [Functors.jl](https://github.com/FluxML/Functors.jl).
## Installation
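
For context on the paragraph this commit rewords, here is a minimal sketch of the usage pattern the README describes, based on Optimisers.jl's documented `setup`/`update` API; the toy model and gradient below are illustrative placeholders, not part of the commit.

```julia
using Optimisers

# Any nested structure understood by Functors.jl works, not just Flux models.
model = (W = randn(3, 3), b = zeros(3))

# Attach optimiser state to every trainable array in the structure.
state = Optimisers.setup(Optimisers.Adam(0.001), model)

# A gradient with the same nested shape (normally produced by an AD package).
grad = (W = ones(3, 3), b = ones(3))

# update returns fresh state and model, treating both functionally.
state, model = Optimisers.update(state, model, grad)
```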