Sampling Multimodal Distributions with the Vanilla Score: Benefits of Data-Based Initialization
Abstract
There is a long history, as well as a recent explosion of interest, in statistical and generative modeling approaches based on score functions -- derivatives of the log-likelihood of a distribution. In seminal works, Hyvärinen proposed vanilla score matching as a way to learn distributions from data by computing an estimate of the score function of the underlying ground truth, and established connections between this method and existing techniques such as Contrastive Divergence and Pseudolikelihood estimation. It is by now well known that vanilla score matching has significant difficulties learning multimodal distributions. Although there are various ways to overcome this difficulty, the following question has remained unanswered -- is there a natural way to sample multimodal distributions using just the vanilla score? Inspired by a long line of related experimental works, we prove that the Langevin diffusion with early stopping, initialized at the empirical distribution and run on a score function estimated from data, successfully generates natural multimodal distributions (mixtures of log-concave distributions).
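The procedure described above can be sketched concretely: discretize the Langevin diffusion, start the chains at the data points themselves (the empirical distribution), and stop after a fixed small number of steps rather than running to stationarity. The sketch below is illustrative only, not the paper's exact algorithm; `mixture_score` uses the exact score of a two-component Gaussian mixture as a stand-in for a score function estimated from data, and all step-size and iteration-count choices are arbitrary.

```python
import numpy as np

def langevin_from_data(data, score, step_size=0.01, n_steps=500, rng=None):
    """Discretized Langevin diffusion with data-based initialization
    and early stopping:
        x_{t+1} = x_t + step_size * score(x_t) + sqrt(2 * step_size) * noise
    Chains start at the data points, so every mode represented in the
    data is represented at initialization.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(data, dtype=float, copy=True)  # initialize at the empirical distribution
    for _ in range(n_steps):  # early stopping: a fixed, modest number of steps
        noise = rng.standard_normal(x.shape)
        x = x + step_size * score(x) + np.sqrt(2.0 * step_size) * noise
    return x

def mixture_score(x, mus=(-4.0, 4.0), sigma=1.0):
    """Exact score d/dx log p(x) for an equal-weight 1D Gaussian mixture,
    standing in here for a learned score estimate."""
    w = np.stack([np.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for m in mus])
    w = w / w.sum(axis=0)  # posterior responsibility of each component
    grads = np.stack([-(x - m) / sigma ** 2 for m in mus])
    return (w * grads).sum(axis=0)

# Toy usage: data drawn from both modes; samples should cover both modes,
# since each chain starts at a data point and only diffuses locally.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-4.0, 1.0, 100), rng.normal(4.0, 1.0, 100)])
samples = langevin_from_data(data, mixture_score, rng=rng)
```

The key contrast is with a chain initialized at a single point (or at noise): without data-based initialization, a vanilla-score Langevin chain can take exponentially long to cross between well-separated modes, whereas starting at the data sidesteps the mixing problem entirely.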