Seminar: Andrew G. Wilson - New York University
How do we build models that learn and generalize?
Abstract
To answer scientific questions, and reason about data, we must build models and perform inference within those models. But how should we approach model construction and inference to make the most successful predictions? How do we represent uncertainty and prior knowledge? How flexible should our models be? Should we use a single model, or multiple different models? Should we follow a different procedure depending on how much data are available? How do we learn desirable constraints, such as rotation, translation, or reflection symmetries, when they don't improve standard training loss? In this talk I will present a philosophy for model construction, grounded in probability theory. I will exemplify this approach with methods that exploit loss surface geometry for scalable and practical Bayesian deep learning, and resolutions to seemingly mysterious generalization behaviour such as double descent. I will also consider prior specification, generalized Bayesian inference, and automatic symmetry learning.
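For readers unfamiliar with the framing, the central quantity in the main reference below (arxiv.org/abs/2002.08791) is the Bayesian model average, which marginalizes predictions over the posterior on parameters rather than committing to a single weight setting. The following is a minimal, illustrative sketch of its Monte Carlo approximation; the toy predictive function and the particular weight samples are invented for the example and are not from the talk.

```python
import numpy as np

# Bayesian model average:
#   p(y | x, D) = integral of p(y | x, w) p(w | D) dw,
# approximated by Monte Carlo with weight samples w_1, ..., w_S
# drawn from an approximation to the posterior p(w | D).

def predictive(x, w):
    """Toy two-class predictive distribution for weight sample w."""
    logits = np.array([w * x, -w * x])
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

# Stand-ins for approximate posterior samples, e.g. solutions from
# independent SGD runs or draws from a Gaussian fit to an SGD
# trajectory (these particular values are made up for illustration).
weight_samples = [0.8, 1.1, 0.95, 1.3]

x = 0.5
# Average the per-sample predictive distributions, not the weights:
# the model average marginalizes over w instead of picking one setting.
bma = np.mean([predictive(x, w) for w in weight_samples], axis=0)
print(bma)  # marginal class probabilities under the toy posterior
```

In the paper's analysis, this marginalization step, made scalable by exploiting loss surface geometry, is what connects Bayesian deep learning to generalization phenomena such as double descent.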
Notes
- The talk is primarily based on "Bayesian Deep Learning and a Probabilistic Perspective of Generalization" (arxiv.org/abs/2002.08791), and also touches on "Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data" (arxiv.org/abs/2002.12880) and "Learning Invariances in Neural Networks" (arxiv.org/abs/2010.11882).
- Andrew G. Wilson is an Assistant Professor at the Courant Institute of Mathematical Sciences and Center for Data Science, New York University.