SGD implicitly regularizes generalization error

DA Roberts - arXiv preprint arXiv:2104.04874, 2021 - arxiv.org
We derive a simple and model-independent formula for the change in the generalization gap due to a gradient descent update. We then compare the change in the test error for stochastic gradient descent to the change in test error from an equivalent number of gradient descent updates and show explicitly that stochastic gradient descent acts to regularize generalization error by decorrelating nearby updates. These calculations depend on the details of the model only through the mean and covariance of the gradient distribution, which may be readily measured for particular models of interest. We discuss further improvements to these calculations and comment on possible implications for stochastic optimization.
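Since the abstract notes that the calculations depend on the model only through the mean and covariance of the gradient distribution, here is a minimal sketch of how one might measure those two quantities empirically. The linear-regression model, squared loss, and synthetic data below are illustrative assumptions, not taken from the paper.

```python
# Sketch: estimate the mean and covariance of the per-example gradient
# distribution at the current parameters. Model and data are hypothetical
# stand-ins (linear regression with squared loss).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n examples, d features, linear targets plus noise.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)  # current parameters

# Per-example gradient of the squared loss 0.5 * (x_i . w - y_i)^2 w.r.t. w:
#   g_i = (x_i . w - y_i) * x_i
residuals = X @ w - y                        # shape (n,)
per_example_grads = residuals[:, None] * X   # shape (n, d)

# Mean gradient: the full-batch gradient descent step direction.
grad_mean = per_example_grads.mean(axis=0)

# Covariance of the gradient distribution: the quantity that, per the
# abstract, distinguishes SGD's effect on the generalization gap from GD's.
grad_cov = np.cov(per_example_grads, rowvar=False)

print("mean gradient:", grad_mean)
print("gradient covariance shape:", grad_cov.shape)
```

For a deep network, the same two statistics could be estimated by collecting per-example gradients over a batch (e.g., with a functional/vmapped autodiff pass) and computing their empirical mean and covariance in the same way.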