Title: Diffusion Models for Distribution Estimation and Optimization
Abstract: Diffusion models achieve state-of-the-art performance in a variety of generation tasks. However, their theoretical foundations lag far behind. In this talk, we explore the methodology and theory of diffusion models, especially when the data are supported on an unknown low-dimensional subspace. In the first part of the talk, we will introduce how diffusion models generate samples and establish sample complexity bounds for diffusion models learning nonparametric distributions. The obtained sample complexity depends on the intrinsic dimension of the data, implying that diffusion models can circumvent the curse of ambient dimensionality. In the second part, we further consider directing diffusion models toward generating samples with desired properties, as measured by an abstract reward function. We propose a learning-labeling-generating algorithm that incorporates the reward as guidance for the diffusion model. Theoretically, we show that in the offline setting, the samples generated under guidance provably improve the average reward and closely respect the intrinsic structure of the data. Empirically, we deploy our algorithm on vision and reinforcement learning tasks to support our theory.
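To make the idea of reward guidance concrete, here is a minimal one-dimensional sketch. It is not the speaker's algorithm: it uses Langevin dynamics as a stand-in for the reverse diffusion process, a toy data distribution N(2, 1) whose score is known in closed form, and a hypothetical reward r(x) = -(x - 3)^2 whose gradient is added to the score, in the spirit of classifier guidance.

```python
import math
import random

def score(x):
    # Score (gradient of log-density) of the toy data distribution N(2, 1).
    return -(x - 2.0)

def reward_grad(x):
    # Gradient of the hypothetical reward r(x) = -(x - 3)^2,
    # which prefers samples near x = 3.
    return -2.0 * (x - 3.0)

def sample(guidance=0.0, steps=500, step_size=0.01, seed=0):
    # Langevin sampling: follow the score (plus a guidance term)
    # with injected Gaussian noise at each step.
    rng = random.Random(seed)
    x = rng.gauss(0.0, 3.0)  # start from a broad prior
    for _ in range(steps):
        drift = score(x) + guidance * reward_grad(x)
        x += step_size * drift + math.sqrt(2.0 * step_size) * rng.gauss(0.0, 1.0)
    return x

# Unguided samples concentrate near the data mean 2;
# guided samples shift toward the high-reward region near 3.
xs_plain = [sample(guidance=0.0, seed=s) for s in range(200)]
xs_guided = [sample(guidance=1.0, seed=s) for s in range(200)]
mean_plain = sum(xs_plain) / len(xs_plain)
mean_guided = sum(xs_guided) / len(xs_guided)
print(mean_plain, mean_guided)
```

The guidance weight trades off reward improvement against fidelity to the data distribution; with weight 0 the sampler targets the data distribution alone.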
Bio: Minshuo Chen is a postdoctoral researcher in the Department of Electrical and Computer Engineering at Princeton University. He completed his Ph.D. at the School of Industrial and Systems Engineering at Georgia Tech, majoring in Machine Learning. Prior to that, he was a master's student at UCLA and an undergraduate student at Zhejiang University. His research focuses on developing principled methodologies and theoretical foundations for deep learning, with particular interests in i) approximation theory and statistical sample complexity, ii) diffusion models, and iii) reinforcement learning.