Title: Revisiting neural network approximation theory in the age of generative AI

Abstract: Textbooks on deep learning theory primarily treat neural networks as universal function approximators. While this classical viewpoint is fundamental, it inadequately explains the impressive capabilities of modern generative AI models such as language models and diffusion models. This talk puts forth a refined perspective: neural networks often serve as algorithm approximators, going beyond mere function approximation. I will explain how this refined perspective offers deeper insight into the success of modern generative AI models.

Bio: Song Mei is an assistant professor of Statistics and EECS at UC Berkeley. He received his Ph.D. from Stanford in June 2020. Song's research lies at the intersection of statistics and machine learning. His recent research focuses on the theory of deep learning and foundation models.