Flow-based generative model

A flow-based generative model is a generative model used in machine learning that explicitly models a probability distribution by leveraging normalizing flows,[1][2][3] a statistical method that uses the change-of-variables law of probability to transform a simple distribution into a complex one.

The direct modeling of likelihood provides many advantages. For example, the negative log-likelihood can be directly computed and minimized as the loss function. Additionally, novel samples can be generated by sampling from the initial distribution, and applying the flow transformation.

In contrast, many alternative generative modeling methods, such as the variational autoencoder (VAE) and the generative adversarial network (GAN), do not explicitly represent the likelihood function.

Method

[Figure: scheme for normalizing flows]

Let $z_0$ be a (possibly multivariate) random variable with distribution $p_0(z_0)$.

For $i = 1, \dots, K$, let $z_i = f_i(z_{i-1})$ be a sequence of random variables transformed from $z_0$. The functions $f_1, \dots, f_K$ should be invertible, i.e. the inverse function $f_i^{-1}$ exists. The final output $z_K$ models the target distribution.

The log likelihood of $z_K$ is (see derivation below):

$$\log p_K(z_K) = \log p_0(z_0) - \sum_{i=1}^{K} \log \left| \det \frac{df_i(z_{i-1})}{dz_{i-1}} \right|$$

To compute the log likelihood efficiently, the functions $f_1, \dots, f_K$ should be (1) easy to invert and (2) have Jacobian determinants that are easy to compute. In practice, the functions $f_1, \dots, f_K$ are modeled using deep neural networks and are trained to minimize the negative log-likelihood of data samples from the target distribution. These architectures are usually designed so that only the forward pass of the neural network is required for both the inverse and the Jacobian-determinant computations. Examples of such architectures include NICE,[4] RealNVP,[5] and Glow.[6]
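
As an illustration, the following is a minimal NumPy sketch (not taken from the cited architectures) of the change-of-variables computation for a chain of element-wise affine maps $f_i(z) = a_i \odot z + b_i$; the parameters and helper names are hypothetical, but the log-likelihood follows the formula above.

```python
# Minimal NumPy sketch: a chain of element-wise affine flows and the exact
# log-likelihood via the change-of-variables formula. All names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
dim, K = 3, 4
a = rng.uniform(0.5, 2.0, size=(K, dim))   # per-layer scales (nonzero => invertible)
b = rng.normal(size=(K, dim))              # per-layer shifts

def forward(z0):
    """Apply f_K ∘ ... ∘ f_1 to a base sample z0 (used for sampling)."""
    z = z0
    for i in range(K):
        z = a[i] * z + b[i]
    return z

def log_likelihood(x):
    """log p_K(x) = log p_0(z_0) - sum_i log|det df_i/dz_{i-1}|."""
    z = x
    log_det_sum = 0.0
    for i in reversed(range(K)):                      # invert the chain
        z = (z - b[i]) / a[i]
        log_det_sum += np.sum(np.log(np.abs(a[i])))   # diagonal Jacobian
    log_p0 = -0.5 * np.sum(z**2) - 0.5 * dim * np.log(2 * np.pi)  # N(0, I) base
    return log_p0 - log_det_sum

x = forward(rng.normal(size=dim))   # draw a novel sample by pushing z_0 through the flow
print(log_likelihood(x))
```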

Derivation of log likelihood

Consider $z_1$ and $z_0$. Note that $z_0 = f_1^{-1}(z_1)$.

By the change-of-variables formula, the distribution of $z_1$ is:

$$p_1(z_1) = p_0(z_0) \left| \det \frac{df_1^{-1}(z_1)}{dz_1} \right|$$

where $\det \frac{df_1^{-1}(z_1)}{dz_1}$ is the determinant of the Jacobian matrix of $f_1^{-1}$.

By the inverse function theorem:

$$p_1(z_1) = p_0(z_0) \left| \det \left( \frac{df_1(z_0)}{dz_0} \right)^{-1} \right|$$

By the identity $\det(A^{-1}) = \det(A)^{-1}$ (where $A$ is an invertible matrix), we have:

$$p_1(z_1) = p_0(z_0) \left| \det \frac{df_1(z_0)}{dz_0} \right|^{-1}$$

The log likelihood is thus:

$$\log p_1(z_1) = \log p_0(z_0) - \log \left| \det \frac{df_1(z_0)}{dz_0} \right|$$

In general, the above applies to any $z_i$ and $z_{i-1}$. Since $\log p_i(z_i)$ is $\log p_{i-1}(z_{i-1})$ minus a non-recursive correction term, we can infer by induction that:

$$\log p_K(z_K) = \log p_0(z_0) - \sum_{i=1}^{K} \log \left| \det \frac{df_i(z_{i-1})}{dz_{i-1}} \right|$$

Training method

As is generally done when training a deep learning model, the goal with normalizing flows is to minimize the Kullback–Leibler divergence between the model's likelihood and the target distribution to be estimated. Denoting by $p_\theta$ the model's likelihood and by $p^*$ the target distribution to learn, the (forward) KL-divergence is:

$$D_{KL}[p^*(x) \,\|\, p_\theta(x)] = -\mathbb{E}_{p^*(x)}[\log(p_\theta(x))] + \mathbb{E}_{p^*(x)}[\log(p^*(x))]$$

The second term on the right-hand side of the equation is the entropy of the target distribution and is independent of the parameter $\theta$ that the model learns, which only leaves the expectation of the negative log-likelihood under the target distribution to minimize. This otherwise intractable term can be approximated by Monte Carlo estimation: if we have a dataset $\{x_i\}_{i=1:N}$ of samples each independently drawn from the target distribution $p^*(x)$, then this term can be estimated as:

$$-\hat{\mathbb{E}}_{p^*(x)}[\log(p_\theta(x))] = -\frac{1}{N} \sum_{i=1}^{N} \log(p_\theta(x_i))$$

Therefore, the learning objective

$$\underset{\theta}{\operatorname{arg\,min}}\; D_{KL}[p^*(x) \,\|\, p_\theta(x)]$$

is replaced by

$$\underset{\theta}{\operatorname{arg\,max}}\; \sum_{i=1}^{N} \log(p_\theta(x_i))$$

In other words, minimizing the Kullback–Leibler divergence between the model's likelihood and the target distribution is equivalent to maximizing the model likelihood under observed samples of the target distribution.[7]

Pseudocode for training normalizing flows is as follows (a minimal training-loop sketch is given after the pseudocode):[8]

  • INPUT. dataset $x_{1:n}$, normalizing flow model $f_\theta(\cdot)$, base distribution $p_0$.
  • SOLVE. $\max_\theta \sum_j \ln p_\theta(x_j)$ by gradient descent
  • RETURN. $\hat{\theta}$
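
Below is a minimal PyTorch sketch of this training loop, using a single learnable element-wise affine flow $x = e^{s} \odot z + t$ as a stand-in for $f_\theta$; the dataset, parameters, and hyperparameters are illustrative, not from the cited pseudocode.

```python
# Minimal PyTorch sketch of maximum-likelihood training; f_theta is a single
# learnable element-wise affine flow x = exp(s) * z + t (an illustrative choice).
import math
import torch

torch.manual_seed(0)
dim = 2
data = torch.randn(1000, dim) * 2.0 + 1.0          # stand-in dataset x_{1:n}

s = torch.zeros(dim, requires_grad=True)           # log-scale parameters
t = torch.zeros(dim, requires_grad=True)           # shift parameters
opt = torch.optim.Adam([s, t], lr=1e-2)

def log_prob(x):
    z = (x - t) * torch.exp(-s)                    # inverse map f_theta^{-1}
    log_p0 = -0.5 * (z**2).sum(dim=1) - 0.5 * dim * math.log(2 * math.pi)
    return log_p0 - s.sum()                        # minus log|det df/dz| = sum_i s_i

for step in range(500):                            # SOLVE: max_theta sum_j ln p_theta(x_j)
    opt.zero_grad()
    loss = -log_prob(data).mean()                  # negative log-likelihood
    loss.backward()
    opt.step()
# RETURN: the fitted parameters theta_hat = (s, t)
```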

Variants

Planar Flow

The earliest example.[9] Fix some activation function $h$, and let $\theta = (u, w, b)$ with the appropriate dimensions; then

$$x = f_\theta(z) = z + u\, h(\langle w, z \rangle + b)$$
The inverse $f_\theta^{-1}$ has no closed-form solution in general.

The Jacobian determinant is $|\det(I + h'(\langle w, z \rangle + b)\, u w^T)| = |1 + h'(\langle w, z \rangle + b)\, \langle u, w \rangle|$.

For $f_\theta$ to be invertible everywhere, the determinant must be nonzero everywhere. For example, $h = \tanh$ with $\langle u, w \rangle > -1$ satisfies the requirement.
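
The following hypothetical NumPy sketch implements one planar flow layer with $h = \tanh$ and the Jacobian-determinant formula above; the parameter values are illustrative and chosen so that $\langle u, w \rangle > -1$.

```python
# Hypothetical NumPy sketch of one planar flow layer with h = tanh.
import numpy as np

def planar_forward(z, u, w, b):
    a = np.tanh(w @ z + b)                       # h(<w, z> + b)
    x = z + u * a                                # f(z) = z + u h(<w, z> + b)
    h_prime = 1.0 - a**2                         # tanh'
    log_det = np.log(np.abs(1.0 + h_prime * (u @ w)))
    return x, log_det

rng = np.random.default_rng(1)
w = rng.normal(size=4)
u = 0.5 * w                                      # ensures <u, w> > -1 (invertibility)
z = rng.normal(size=4)
x, log_det = planar_forward(z, u, w, b=0.1)
```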

Nonlinear Independent Components Estimation (NICE)

Let $x, z \in \mathbb{R}^{2n}$ be even-dimensional, and split them in the middle.[4] Then the normalizing flow functions are

$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = f_\theta(z) = \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} + \begin{bmatrix} 0 \\ m_\theta(z_1) \end{bmatrix}$$
where $m_\theta$ is any neural network with weights $\theta$.

The inverse $f_\theta^{-1}$ is simply $z_1 = x_1,\; z_2 = x_2 - m_\theta(x_1)$, and the Jacobian determinant is 1; that is, the flow is volume-preserving.

When $n = 1$, this can be seen as a curvy shearing along the $x_2$ direction.
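
A minimal NumPy sketch of one NICE additive coupling layer is shown below; the network $m_\theta$ is replaced by a hypothetical stand-in function m, since any network works.

```python
# Illustrative NumPy sketch of a NICE additive coupling layer; m is a stand-in
# for "any neural network" m_theta.
import numpy as np

def m(z1):                                   # stand-in for m_theta
    return 2.0 * np.tanh(z1)

def nice_forward(z):
    z1, z2 = np.split(z, 2)
    return np.concatenate([z1, z2 + m(z1)])  # x1 = z1, x2 = z2 + m(z1)

def nice_inverse(x):
    x1, x2 = np.split(x, 2)
    return np.concatenate([x1, x2 - m(x1)])  # log|det J| = 0 (volume-preserving)

z = np.arange(6, dtype=float)
assert np.allclose(nice_inverse(nice_forward(z)), z)
```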

Real Non-Volume Preserving (Real NVP)

The Real Non-Volume Preserving model generalizes the NICE model by:[5]

$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = f_\theta(z) = \begin{bmatrix} z_1 \\ e^{s_\theta(z_1)} \odot z_2 \end{bmatrix} + \begin{bmatrix} 0 \\ m_\theta(z_1) \end{bmatrix}$$

Its inverse is $z_1 = x_1,\; z_2 = e^{-s_\theta(x_1)} \odot (x_2 - m_\theta(x_1))$, and its Jacobian determinant is $\prod_{i=1}^{n} e^{s_\theta(z_1)_i}$. The NICE model is recovered by setting $s_\theta = 0$. Since the Real NVP map keeps the first and second halves of the vector $x$ separate, a permutation $(x_1, x_2) \mapsto (x_2, x_1)$ is usually added after every Real NVP layer.
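
Below is a hypothetical NumPy sketch of one Real NVP affine coupling layer, with stand-in functions s and m in place of the networks $s_\theta$ and $m_\theta$; it checks invertibility and returns the log of the Jacobian determinant.

```python
# Illustrative NumPy sketch of a Real NVP affine coupling layer; s and m are
# stand-ins for the scale and shift networks s_theta and m_theta.
import numpy as np

def s(z1): return 0.5 * np.tanh(z1)          # stand-in scale network
def m(z1): return np.sin(z1)                 # stand-in shift network

def realnvp_forward(z):
    z1, z2 = np.split(z, 2)
    x2 = np.exp(s(z1)) * z2 + m(z1)
    log_det = np.sum(s(z1))                  # log|det J| = sum_i s(z1)_i
    return np.concatenate([z1, x2]), log_det

def realnvp_inverse(x):
    x1, x2 = np.split(x, 2)
    return np.concatenate([x1, np.exp(-s(x1)) * (x2 - m(x1))])

z = np.linspace(-1.0, 1.0, 6)
x, log_det = realnvp_forward(z)
assert np.allclose(realnvp_inverse(x), z)
```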

Generative Flow (Glow)

In the generative flow model,[6] each layer has three parts:

  • channel-wise affine transform
    $y_{cij} = s_c(x_{cij} + b_c)$
    with Jacobian determinant $\prod_c s_c^{HW}$.
  • invertible 1x1 convolution
    $z_{cij} = \sum_{c'} K_{cc'} y_{c'ij}$
    with Jacobian determinant $\det(K)^{HW}$. Here $K$ is any invertible matrix.
  • Real NVP, with Jacobian determinant as described in the Real NVP section.

The idea of using the invertible 1x1 convolution is to mix all channels with a learned, general linear map, instead of merely swapping the first and second halves, as in Real NVP.
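
The following NumPy sketch (an illustration, not the reference implementation) applies an invertible 1x1 convolution to a $C \times H \times W$ tensor and computes its log Jacobian determinant $HW \log|\det K|$; the matrix $K$ here is an arbitrary example.

```python
# Illustrative NumPy sketch of an invertible 1x1 convolution: every pixel's
# channel vector is multiplied by the same invertible matrix K, so the
# log Jacobian determinant is H*W*log|det K|.
import numpy as np

rng = np.random.default_rng(2)
C, H, W = 3, 4, 4
K = rng.normal(size=(C, C)) + 3.0 * np.eye(C)   # (almost surely) invertible

def conv1x1_forward(y):
    z = np.einsum('cd,dhw->chw', K, y)          # z_{cij} = sum_{c'} K_{cc'} y_{c'ij}
    _, logabsdet = np.linalg.slogdet(K)
    return z, H * W * logabsdet

def conv1x1_inverse(z):
    return np.einsum('cd,dhw->chw', np.linalg.inv(K), z)

y = rng.normal(size=(C, H, W))
z, log_det = conv1x1_forward(y)
assert np.allclose(conv1x1_inverse(z), y)
```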

Masked autoregressive flow (MAF)

An autoregressive model of a distribution on $\mathbb{R}^n$ is defined as the following stochastic process:[10]

$$\begin{aligned} x_1 &\sim N(\mu_1, \sigma_1^2) \\ x_2 &\sim N(\mu_2(x_1), \sigma_2(x_1)^2) \\ &\;\;\vdots \\ x_n &\sim N(\mu_n(x_{1:n-1}), \sigma_n(x_{1:n-1})^2) \end{aligned}$$
where $\mu_i : \mathbb{R}^{i-1} \to \mathbb{R}$ and $\sigma_i : \mathbb{R}^{i-1} \to (0, \infty)$ are fixed functions that define the autoregressive model.

By the reparametrization trick, the autoregressive model is generalized to a normalizing flow:

$$\begin{aligned} x_1 &= \mu_1 + \sigma_1 z_1 \\ x_2 &= \mu_2(x_1) + \sigma_2(x_1) z_2 \\ &\;\;\vdots \\ x_n &= \mu_n(x_{1:n-1}) + \sigma_n(x_{1:n-1}) z_n \end{aligned}$$
The autoregressive model is recovered by setting $z \sim N(0, I_n)$.

The forward mapping is slow (because it's sequential), but the backward mapping is fast (because it's parallel).

The Jacobian matrix is lower-triangular, so its determinant is $\sigma_1 \sigma_2(x_1) \cdots \sigma_n(x_{1:n-1})$.
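
A hypothetical NumPy sketch of MAF with stand-in functions $\mu_i, \sigma_i$ follows; it shows the sequential forward map $z \mapsto x$ and the inverse map $x \mapsto z$, which also yields the log Jacobian determinant $\sum_i \log \sigma_i$.

```python
# Hypothetical NumPy sketch of MAF with stand-in autoregressive functions.
import numpy as np

def mu(prefix):    return 0.3 * np.sum(prefix)          # stand-in mu_i(x_{1:i-1})
def sigma(prefix): return np.exp(0.1 * np.sum(prefix))  # stand-in sigma_i > 0

def maf_forward(z):                 # z -> x: sequential, one dimension at a time
    x = np.zeros_like(z)
    for i in range(len(z)):
        x[i] = mu(x[:i]) + sigma(x[:i]) * z[i]
    return x

def maf_inverse(x):                 # x -> z: each z_i depends only on x_{1:i-1},
    z = np.empty_like(x)            # so all dimensions could be computed in parallel
    log_det = 0.0
    for i in range(len(x)):
        sig = sigma(x[:i])
        z[i] = (x[i] - mu(x[:i])) / sig
        log_det += np.log(sig)      # log|det dx/dz| = sum_i log sigma_i
    return z, log_det

z = np.array([0.5, -1.0, 0.2])
x = maf_forward(z)
z_back, _ = maf_inverse(x)
assert np.allclose(z_back, z)
```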

Reversing the two maps $f_\theta$ and $f_\theta^{-1}$ of MAF results in the Inverse Autoregressive Flow (IAF), which has fast forward mapping and slow backward mapping.[11]

Continuous Normalizing Flow (CNF)

Instead of constructing a flow by function composition, another approach is to formulate the flow as a continuous-time dynamic.[12][13] Let $z_0$ be the latent variable with distribution $p(z_0)$. Map this latent variable to data space with the following flow function:

$$x = F(z_0) = z_T = z_0 + \int_0^T f(z_t, t)\, dt$$

where $f$ is an arbitrary function and can be modeled with e.g. neural networks.

The inverse function is then naturally:[12]

$$z_0 = F^{-1}(x) = z_T + \int_T^0 f(z_t, t)\, dt = z_T - \int_0^T f(z_t, t)\, dt$$

And the log-likelihood of $x$ can be found as:[12]

$$\log(p(x)) = \log(p(z_0)) - \int_0^T \operatorname{Tr}\left[\frac{\partial f}{\partial z_t}\right] dt$$

Since the trace depends only on the diagonal of the Jacobian $\partial_{z_t} f$, this allows a "free-form" Jacobian.[14] Here, "free-form" means that there is no restriction on the Jacobian's form. It is contrasted with previous discrete models of normalizing flow, where the Jacobian is carefully designed to be only upper- or lower-triangular, so that the determinant can be evaluated efficiently.
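
The sketch below (NumPy, with a simple Euler integrator standing in for a proper ODE solver and a fixed linear vector field rather than a learned network) integrates the state and the log-density change jointly, following the formula above; all names are illustrative.

```python
# Minimal NumPy sketch of a continuous normalizing flow with a fixed linear
# vector field f(z, t) = A z and a plain Euler integrator (illustrative only).
import numpy as np

A = np.array([[0.0, -1.0], [1.0, -0.2]])        # hypothetical vector field

def f(z, t):
    return A @ z

def trace_jacobian(z, t):
    return np.trace(A)                          # exact trace; cheap for linear f

def cnf_push_forward(z0, T=1.0, steps=1000):
    z, delta_logp = z0.copy(), 0.0
    dt = T / steps
    for k in range(steps):
        t = k * dt
        delta_logp -= trace_jacobian(z, t) * dt  # accumulates -∫ Tr(∂f/∂z_t) dt
        z = z + f(z, t) * dt                     # Euler step for dz/dt = f(z, t)
    return z, delta_logp                         # log p(x) = log p(z_0) + delta_logp

x, delta_logp = cnf_push_forward(np.array([1.0, 0.5]))
```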

The trace can be estimated by "Hutchinson's trick":[15][16]

Given any matrix $W \in \mathbb{R}^{n \times n}$, and any random $u \in \mathbb{R}^n$ with $E[uu^T] = I$, we have $E[u^T W u] = \operatorname{tr}(W)$. (Proof: expand the expectation directly.)

Usually, the random vector is sampled from $N(0, I)$ (normal distribution) or $\{\pm 1\}^n$ (Rademacher distribution).
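
A quick NumPy check of Hutchinson's trick, assuming Rademacher probe vectors; the matrix W and the number of probes are arbitrary.

```python
# Quick NumPy check of Hutchinson's trick with Rademacher probes (illustrative).
import numpy as np

rng = np.random.default_rng(3)
n = 5
W = rng.normal(size=(n, n))                     # arbitrary matrix

U = rng.choice([-1.0, 1.0], size=(100_000, n))  # Rademacher: E[u u^T] = I
estimates = np.einsum('bi,ij,bj->b', U, W, U)   # u^T W u for each probe u
print(estimates.mean(), np.trace(W))            # the two values should be close
```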

When $f$ is implemented as a neural network, neural ODE methods[17] are needed. Indeed, CNF was first proposed in the same paper that proposed the neural ODE.

There are two main deficiencies of CNF. One is that a continuous flow must be a homeomorphism, and thus preserve orientation and ambient isotopy (for example, it is impossible to flip a left hand into a right hand by a continuous deformation of space, and it is impossible to turn a sphere inside out or undo a knot). The other is that the learned flow $f$ might be ill-behaved due to degeneracy; that is, there are an infinite number of possible $f$ that all solve the same problem.

By adding extra dimensions, the CNF gains enough freedom to reverse orientation and go beyond ambient isotopy (just as one can pick up a polygon from a desk and flip it over in 3-space, or unknot a knot in 4-space), yielding the "augmented neural ODE".[18]

Any homeomorphism of $\mathbb{R}^n$ can be approximated by a neural ODE operating on $\mathbb{R}^{2n+1}$; this is proved by combining the Whitney embedding theorem for manifolds with the universal approximation theorem for neural networks.[19]

To regularize the flow $f$, one can impose regularization losses. The paper[15] proposed the following regularization loss based on optimal transport theory:

$$\lambda_K \int_0^T \left\| f(z_t, t) \right\|^2 dt + \lambda_J \int_0^T \left\| \nabla_z f(z_t, t) \right\|_F^2 dt$$
where $\lambda_K, \lambda_J > 0$ are hyperparameters. The first term penalizes the model for oscillating the flow field over time, and the second term penalizes it for oscillating the flow field over space. Both terms together guide the model toward a flow that is smooth (not "bumpy") over space and time.
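
The following hypothetical NumPy sketch accumulates the two regularization integrals along an Euler trajectory, reusing the same illustrative linear field as in the CNF sketch above.

```python
# Hypothetical NumPy sketch of the kinetic and Jacobian-norm regularizers,
# accumulated along an Euler trajectory of the same linear field as above.
import numpy as np

A = np.array([[0.0, -1.0], [1.0, -0.2]])

def f(z, t):
    return A @ z

def regularizers(z0, T=1.0, steps=1000):
    z, dt = z0.copy(), T / steps
    kinetic, jac_norm = 0.0, 0.0
    for k in range(steps):
        t = k * dt
        kinetic += np.sum(f(z, t) ** 2) * dt   # ∫ ||f(z_t, t)||^2 dt
        jac_norm += np.sum(A ** 2) * dt        # ∫ ||∇_z f||_F^2 dt (exact for linear f)
        z = z + f(z, t) * dt
    return kinetic, jac_norm

lam_K, lam_J = 0.01, 0.01                      # hyperparameters lambda_K, lambda_J
K_term, J_term = regularizers(np.array([1.0, 0.5]))
reg_loss = lam_K * K_term + lam_J * J_term
```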

Downsides

Despite the success of normalizing flows in estimating high-dimensional densities, some downsides still exist in their design. First, the latent space onto which input data are mapped is not lower-dimensional; therefore, flow-based models do not allow compression of data by default and require a lot of computation. However, it is still possible to perform image compression with them.[20]

Flow-based models are also notorious for failing to estimate the likelihood of out-of-distribution samples (i.e., samples that were not drawn from the same distribution as the training set).[21] Several hypotheses have been formulated to explain this phenomenon, among them the typical set hypothesis,[22] estimation issues when training models,[23] and fundamental issues due to the entropy of the data distributions.[24]

One of the most interesting properties of normalizing flows is the invertibility of their learned bijective map. This property is ensured by constraints in the design of the models (cf. RealNVP, Glow) which guarantee theoretical invertibility. The integrity of the inverse is important for the applicability of the change-of-variables theorem, the computation of the Jacobian of the map, and sampling with the model. However, in practice this invertibility can be violated, and the inverse map can explode because of numerical imprecision.[25]

Applications

Flow-based generative models have been applied on a variety of modeling tasks, including:

  • Audio generation[26]
  • Image generation[6]
  • Molecular graph generation[27]
  • Point-cloud modeling[28]
  • Video generation[29]
  • Lossy image compression[20]
  • Anomaly detection[30]

References

  1. ^ Tabak, Esteban G.; Vanden-Eijnden, Eric (2010). "Density estimation by dual ascent of the log-likelihood". Communications in Mathematical Sciences. 8 (1): 217–233. doi:10.4310/CMS.2010.v8.n1.a11.
  2. ^ Tabak, Esteban G.; Turner, Cristina V. (2012). "A family of nonparametric density estimation algorithms". Communications on Pure and Applied Mathematics. 66 (2): 145–164. doi:10.1002/cpa.21423. hdl:11336/8930. S2CID 17820269.
  3. ^ Papamakarios, George; Nalisnick, Eric; Jimenez Rezende, Danilo; Mohamed, Shakir; Lakshminarayanan, Balaji (2021). "Normalizing flows for probabilistic modeling and inference". Journal of Machine Learning Research. 22 (1): 2617–2680. arXiv:1912.02762.
  4. ^ a b Dinh, Laurent; Krueger, David; Bengio, Yoshua (2014). "NICE: Non-linear Independent Components Estimation". arXiv:1410.8516 [cs.LG].
  5. ^ a b Dinh, Laurent; Sohl-Dickstein, Jascha; Bengio, Samy (2016). "Density estimation using Real NVP". arXiv:1605.08803 [cs.LG].
  6. ^ a b c Kingma, Diederik P.; Dhariwal, Prafulla (2018). "Glow: Generative Flow with Invertible 1x1 Convolutions". arXiv:1807.03039 [stat.ML].
  7. ^ Papamakarios, George; Nalisnick, Eric; Rezende, Danilo Jimenez; Mohamed, Shakir; Lakshminarayanan, Balaji (March 2021). "Normalizing Flows for Probabilistic Modeling and Inference". Journal of Machine Learning Research. 22 (57): 1–64. arXiv:1912.02762.
  8. ^ Kobyzev, Ivan; Prince, Simon J.D.; Brubaker, Marcus A. (November 2021). "Normalizing Flows: An Introduction and Review of Current Methods". IEEE Transactions on Pattern Analysis and Machine Intelligence. 43 (11): 3964–3979. arXiv:1908.09257. doi:10.1109/TPAMI.2020.2992934. ISSN 1939-3539. PMID 32396070. S2CID 208910764.
  9. ^ Danilo Jimenez Rezende; Mohamed, Shakir (2015). "Variational Inference with Normalizing Flows". arXiv:1505.05770 [stat.ML].
  10. ^ Papamakarios, George; Pavlakou, Theo; Murray, Iain (2017). "Masked Autoregressive Flow for Density Estimation". Advances in Neural Information Processing Systems. 30. Curran Associates, Inc. arXiv:1705.07057.
  11. ^ Kingma, Durk P; Salimans, Tim; Jozefowicz, Rafal; Chen, Xi; Sutskever, Ilya; Welling, Max (2016). "Improved Variational Inference with Inverse Autoregressive Flow". Advances in Neural Information Processing Systems. 29. Curran Associates, Inc. arXiv:1606.04934.
  12. ^ a b c Grathwohl, Will; Chen, Ricky T. Q.; Bettencourt, Jesse; Sutskever, Ilya; Duvenaud, David (2018). "FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models". arXiv:1810.01367 [cs.LG].
  13. ^ Lipman, Yaron; Chen, Ricky T. Q.; Ben-Hamu, Heli; Nickel, Maximilian; Le, Matt (2022-10-01). "Flow Matching for Generative Modeling". arXiv:2210.02747 [cs.LG].
  14. ^ Grathwohl, Will; Chen, Ricky T. Q.; Bettencourt, Jesse; Sutskever, Ilya; Duvenaud, David (2018-10-22). "FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models". arXiv:1810.01367 [cs.LG].
  15. ^ a b Finlay, Chris; Jacobsen, Joern-Henrik; Nurbekyan, Levon; Oberman, Adam (2020-11-21). "How to Train Your Neural ODE: the World of Jacobian and Kinetic Regularization". International Conference on Machine Learning. PMLR: 3154–3164. arXiv:2002.02798.
  16. ^ Hutchinson, M.F. (January 1989). "A Stochastic Estimator of the Trace of the Influence Matrix for Laplacian Smoothing Splines". Communications in Statistics - Simulation and Computation. 18 (3): 1059–1076. doi:10.1080/03610918908812806. ISSN 0361-0918.
  17. ^ Chen, Ricky T. Q.; Rubanova, Yulia; Bettencourt, Jesse; Duvenaud, David (2018). "Neural Ordinary Differential Equations". arXiv:1806.07366 [cs.LG].
  18. ^ Dupont, Emilien; Doucet, Arnaud; Teh, Yee Whye (2019). "Augmented Neural ODEs". Advances in Neural Information Processing Systems. 32. Curran Associates, Inc.
  19. ^ Zhang, Han; Gao, Xi; Unterman, Jacob; Arodz, Tom (2019-07-30). "Approximation Capabilities of Neural ODEs and Invertible Residual Networks". arXiv:1907.12998 [cs.LG].
  20. ^ a b Helminger, Leonhard; Djelouah, Abdelaziz; Gross, Markus; Schroers, Christopher (2020). "Lossy Image Compression with Normalizing Flows". arXiv:2008.10486 [cs.CV].
  21. ^ Nalisnick, Eric; Matsukawa, Akihiro; Teh, Yee Whye; Gorur, Dilan; Lakshminarayanan, Balaji (2018). "Do Deep Generative Models Know What They Don't Know?". arXiv:1810.09136v3 [stat.ML].
  22. ^ Nalisnick, Eric; Matsukawa, Akihiro; Teh, Yee Whye; Lakshminarayanan, Balaji (2019). "Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality". arXiv:1906.02994 [stat.ML].
  23. ^ Zhang, Lily; Goldstein, Mark; Ranganath, Rajesh (2021). "Understanding Failures in Out-of-Distribution Detection with Deep Generative Models". Proceedings of Machine Learning Research. 139: 12427–12436. PMC 9295254. PMID 35860036.
  24. ^ Caterini, Anthony L.; Loaiza-Ganem, Gabriel (2022). "Entropic Issues in Likelihood-Based OOD Detection". pp. 21–26. arXiv:2109.10794 [stat.ML].
  25. ^ Behrmann, Jens; Vicol, Paul; Wang, Kuan-Chieh; Grosse, Roger; Jacobsen, Jörn-Henrik (2020). "Understanding and Mitigating Exploding Inverses in Invertible Neural Networks". arXiv:2006.09347 [cs.LG].
  26. ^ Ping, Wei; Peng, Kainan; Gorur, Dilan; Lakshminarayanan, Balaji (2019). "WaveFlow: A Compact Flow-based Model for Raw Audio". arXiv:1912.01219 [cs.SD].
  27. ^ Shi, Chence; Xu, Minkai; Zhu, Zhaocheng; Zhang, Weinan; Zhang, Ming; Tang, Jian (2020). "GraphAF: A Flow-based Autoregressive Model for Molecular Graph Generation". arXiv:2001.09382 [cs.LG].
  28. ^ Yang, Guandao; Huang, Xun; Hao, Zekun; Liu, Ming-Yu; Belongie, Serge; Hariharan, Bharath (2019). "PointFlow: 3D Point Cloud Generation with Continuous Normalizing Flows". arXiv:1906.12320 [cs.CV].
  29. ^ Kumar, Manoj; Babaeizadeh, Mohammad; Erhan, Dumitru; Finn, Chelsea; Levine, Sergey; Dinh, Laurent; Kingma, Durk (2019). "VideoFlow: A Conditional Flow-Based Model for Stochastic Video Generation". arXiv:1903.01434 [cs.CV].
  30. ^ Rudolph, Marco; Wandt, Bastian; Rosenhahn, Bodo (2021). "Same Same But DifferNet: Semi-Supervised Defect Detection with Normalizing Flows". arXiv:2008.12577 [cs.CV].

External links

  • Flow-based Deep Generative Models
  • Normalizing flow models