Why interpolating neural nets generalize well: recent insights from the neural tangent model
Date: Thursday, 22 September 2022
Time: 10:00 - 11:00 am
A mystery of modern neural networks is their surprising generalization power in the overparametrized regime: they comprise so many parameters that they can interpolate the training set, even if the actual labels are replaced by purely random ones; despite this, they achieve good prediction error on unseen data. In this talk, we focus on the neural tangent (NT) model, a simplified model for two-layer neural networks. Assuming isotropic input data, we first show that the interpolation phase transition occurs around Nd ~ n, where Nd is the number of parameters and n is the sample size.
To demystify the generalization puzzle, we consider the min-norm interpolator and show that its test (generalization) error is largely determined by a smooth, low-degree component. Moreover, we find that the nonlinearity of the activation function has an implicit regularization effect. These results offer new insights into recent discoveries in overparametrized models, such as the double descent phenomenon.
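To make the setup concrete, here is a minimal sketch of the min-norm interpolator in an NT-style model. It is an illustration only, not the paper's construction: the ReLU activation, random Gaussian labels, and problem sizes are assumptions chosen for the demo. The feature map is the gradient of a two-layer network's output with respect to its first-layer weights, giving Nd features per sample; with Nd > n, the features interpolate the training labels, and `np.linalg.lstsq` returns the minimum-norm coefficient vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n samples, input dimension d, N hidden units, so Nd > n
# (overparametrized regime, past the interpolation threshold Nd ~ n).
n, d, N = 50, 10, 200
X = rng.standard_normal((n, d)) / np.sqrt(d)   # isotropic inputs
y = rng.standard_normal(n)                     # purely random labels still get fit

# NT feature map for a two-layer ReLU net: gradient of the output w.r.t. the
# (frozen, random) first-layer weights, flattened to an n x (N*d) matrix.
W = rng.standard_normal((N, d))
pre = X @ W.T                                   # (n, N) pre-activations
feat = (pre > 0)[:, :, None] * X[:, None, :]    # ReLU'(w_j . x_i) * x_i, shape (n, N, d)
Phi = feat.reshape(n, N * d) / np.sqrt(N)

# Min-norm interpolator: among all a with Phi @ a = y, lstsq returns the one
# with the smallest Euclidean norm (computed via the SVD).
a, *_ = np.linalg.lstsq(Phi, y, rcond=None)

print(np.max(np.abs(Phi @ a - y)))   # training residual: essentially zero
```

The talk's question is then why this exactly interpolating predictor can nevertheless have small test error on fresh inputs.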
Link to the ArXiv paper: https://arxiv.org/abs/2007.12826
Bio: Yiqiao Zhong is currently an assistant professor in the Department of Statistics at the University of Wisconsin-Madison. He was previously a postdoc at Stanford University, advised by Prof. Andrea Montanari and Prof. David Donoho. His research interests include deep learning theory, high-dimensional statistics, and optimization. Prior to this, Yiqiao Zhong obtained his Ph.D. in 2019 from Princeton University, where he was advised by Prof. Jianqing Fan.
Zoom Link: Please contact Yanrong Yang (yanrong.yang@anu.edu.au) to obtain the Zoom link for this seminar.