[![Build Status](https://github.com/TuringLang/NormalizingFlows.jl/actions/workflows/CI.yml/badge.svg?branch=main)](https://github.com/TuringLang/NormalizingFlows.jl/actions/workflows/CI.yml?query=branch%3Amain)
- **Last updated: 2023-Aug-23**
+ **Last updated: 2025-Mar-04**
A normalizing flow library for Julia.
@@ -21,16 +21,16 @@ See the [documentation](https://turinglang.org/NormalizingFlows.jl/dev/) for mor
To install the package, run the following command in the Julia REPL:
```julia
] # enter Pkg mode
- (@v1.9) pkg> add git@github.com:TuringLang/NormalizingFlows.jl.git
+ (@v1.11) pkg> add NormalizingFlows
```
Then simply run the following command to use the package:
```julia
using NormalizingFlows
```
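
The flow objects themselves are typically built from [Bijectors.jl](https://github.com/TuringLang/Bijectors.jl) layers on top of a base distribution. As a rough, illustrative sketch (the choice of `PlanarLayer`, the dimension, and the number of layers below are arbitrary assumptions for demonstration, not something the package prescribes):

```julia
using Distributions, Bijectors, LinearAlgebra

# Base (reference) distribution q0: a 2-dimensional standard Gaussian.
q0 = MvNormal(zeros(2), I)

# Compose a few invertible flow layers; PlanarLayer comes from Bijectors.jl.
layers = reduce(∘, [PlanarLayer(2) for _ in 1:4])

# The flow is the pushforward of q0 through the composed layers.
flow = Bijectors.transformed(q0, layers)

# The resulting object supports i.i.d. sampling and density evaluation.
x = rand(flow)       # draw one sample
logpdf(flow, x)      # evaluate the log density at x
```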
## Quick recap of normalizing flows
- Normalizing flows transform a simple reference distribution $q_0$ (sometimes known as base distribution) to
- a complex distribution $q$ using invertible functions.
+ Normalizing flows transform a simple reference distribution $q_0$ (sometimes referred to as the base distribution)
+ to a complex distribution $q$ using invertible functions.
In more detail, given the base distribution, usually a standard Gaussian distribution, i.e., $q_0 = \mathcal{N}(0, I)$,
we apply a series of parameterized invertible transformations (called flow layers), $T_{1,\theta_1}, \cdots, T_{N,\theta_N}$, yielding that
@@ -56,7 +56,7 @@ Given the feasibility of i.i.d. sampling and density evaluation, normalizing flo
\text{Reverse KL:}\quad
&\arg\min _{\theta} \mathbb{E}_{q_{\theta}}\left[\log q_{\theta}(Z)-\log p(Z)\right] \\
&= \arg\min _{\theta} \mathbb{E}_{q_0}\left[\log \frac{q_\theta(T_N\circ \cdots \circ T_1(Z_0))}{p(T_N\circ \cdots \circ T_1(Z_0))}\right] \\
- &= \arg\max _{\theta} \mathbb{E}_{q_0}\left[ \log p\left(T_N \circ \cdots \circ T_1(Z_0)\right)-\log q_0(X)+\sum_{n=1}^N \log J_n\left(F_n \circ \cdots \circ F_1 (X)\right)\right]
+ &= \arg\max _{\theta} \mathbb{E}_{q_0}\left[ \log p\left(T_N \circ \cdots \circ T_1(Z_0)\right)-\log q_0(Z_0)+\sum_{n=1}^N \log J_n\left(T_n \circ \cdots \circ T_1 (Z_0)\right)\right]
\end{aligned}
```
and
@@ -76,10 +76,12 @@ normalizing constant.
In contrast, forward KL minimization is typically used for **generative modeling**,
where one wants to learn the underlying distribution of some data.
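
To make the reverse-KL objective concrete, here is a minimal sketch of how ELBO-based training might look with this package. It assumes the `train_flow`/`elbo` interface shown in the documentation; the target `logp`, the sample count, and keyword names such as `max_iters`, `optimiser`, and `ADbackend` are illustrative and may differ across package versions:

```julia
using NormalizingFlows
using ADTypes, Optimisers, Zygote   # AD backend selection, optimiser rules, and the AD engine

# Unnormalized target log-density; replace with the log density of your target.
logp(z) = -sum(abs2, z) / 2

# `flow` is a variational distribution, e.g. a Bijectors.jl TransformedDistribution
# as sketched in the installation section above.
sample_per_iter = 10    # Monte Carlo samples used per gradient step

flow_trained, stats, _ = train_flow(
    elbo,                        # reverse-KL (ELBO) objective
    flow,
    logp,
    sample_per_iter;
    max_iters=5_000,
    optimiser=Optimisers.Adam(1e-2),
    ADbackend=AutoZygote(),
)
```

Forward-KL (maximum-likelihood) training on observed data follows the same pattern with the corresponding data-based objective; see the documentation for details.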
- ## Current status and TODOs
+ ## Current status and to-dos
- [x] general interface development
- [x] documentation
+ - [ ] integrating [Lux.jl](https://lux.csail.mit.edu/stable/tutorials/intermediate/7_RealNVP) and [Reactant.jl](https://github.com/EnzymeAD/Reactant.jl).
+ This could potentially solve the GPU compatibility issue as well.
- [ ] including more NF examples/tutorials
  - WIP: [PR #11](https://github.com/TuringLang/NormalizingFlows.jl/pull/11)
- [ ] GPU compatibility