In 1991, Brenier proved a theorem that generalizes the polar decomposition of square matrices (factored as PSD times unitary) to any vector field $F:\mathbb{R}^d \rightarrow \mathbb{R}^d$. The theorem, known as the polar factorization theorem, states that any field $F$ can be recovered as the composition of the gradient of a convex function $u$ with a measure-preserving map $M$, namely $F = \nabla u \circ M$. We propose a practical implementation of this far-reaching theoretical result, and explore possible uses within machine learning. The theorem is closely related to optimal transport (OT) theory, and we borrow from recent advances in the field of neural optimal transport to parameterize the potential $u$ as an input convex neural network. The map $M$ can be either evaluated pointwise using $u^*$, the convex conjugate of $u$, through the identity $M = \nabla u^* \circ F$, or learned as an auxiliary network. Because $M$ is, in general, not injective, we consider the additional task of estimating the ill-posed inverse map that can approximate the pre-image measure $M^{-1}$ using a stochastic generator. We illustrate possible applications of Brenier's polar factorization to non-convex optimization problems, as well as sampling of densities that are not log-concave.
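As a concrete anchor for the matrix case the abstract alludes to, the following sketch (not from the paper; a standard NumPy computation under the stated linear analogy) checks the polar decomposition $A = P U$ of a square matrix via the SVD, and reads it as the vector field $F(x) = Ax$ factoring into $F = \nabla u \circ M$ with $u(y) = \tfrac{1}{2} y^\top P y$ (convex, so $\nabla u(y) = Py$) and $M(x) = Ux$ (orthogonal, hence measure-preserving). It also verifies the conjugate identity $M = \nabla u^* \circ F$, which for this quadratic $u$ reduces to $\nabla u^*(z) = P^{-1} z$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# Polar decomposition from the SVD: A = W diag(s) V^T = (W diag(s) W^T)(W V^T)
W, s, Vt = np.linalg.svd(A)
P = W @ np.diag(s) @ W.T   # symmetric PSD factor (gradient of the convex quadratic u)
U = W @ Vt                 # orthogonal factor (the measure-preserving map M)

assert np.allclose(A, P @ U)                       # A = P U
assert np.allclose(U @ U.T, np.eye(3))             # U is orthogonal
assert np.all(np.linalg.eigvalsh(P) >= -1e-10)     # P is PSD

# Vector-field reading: F = grad(u) o M, pointwise
x = rng.standard_normal(3)
assert np.allclose(A @ x, P @ (U @ x))

# Conjugate identity: M = grad(u*) o F, with grad(u*)(z) = P^{-1} z here
assert np.allclose(U @ x, np.linalg.solve(P, A @ x))
```

In the general nonlinear setting treated in the paper, $P$ is replaced by the gradient of a learned input convex neural network and $\nabla u^*$ is no longer available in closed form, which is what motivates the conjugate-based or auxiliary-network estimates of $M$.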