I was invited by Yingzhen Li to give a talk at the Imperial College London CSML reading group.
What do we want from a generative model and how do we get it from a VAE?
Diffusion models and VAEs are both popular generative models, but with different strengths: diffusion models excel at generation, while VAEs excel at representation learning. In this talk, I discuss the features we want from generative models and explore how to combine the strengths of diffusion models and VAEs. Drawing on three recent works, we will cover: (1) how to optimize VAEs towards a desired capability by modifying the training objective; (2) how to exploit the strengths of diffusion models to enhance VAE performance; and (3) how the popular SVHN dataset can be problematic for training generative models despite being well suited for discriminative models.
See here for the slides.