Simple nearest neighbor classification fails to exploit the additional information available in image sets. We propose self-regularized nonnegative coding to define a between-set distance for robust face recognition. The distance between two sets is measured between their nearest points (samples), each approximated both from the set's orthogonal basis vectors and from its samples, under the respective constraints of self-regularization and nonnegativity. Self-regularization constrains the orthogonal basis vectors to be similar to the approximated nearest point, while the nonnegativity constraint ensures that each nearest point is a positive linear combination of the set samples. Both constraints are formulated as a single convex optimization problem, and the accelerated proximal gradient method with linear-time Euclidean projection is adapted to efficiently find the optimal nearest points between two image sets. Using the nearest points between a query set and all gallery sets, together with the active samples used to approximate them, we learn a more discriminative Mahalanobis distance for robust face recognition. The proposed algorithm is independent of the chosen features and has been tested on gray-level pixel values and local binary patterns. Experiments on three standard data sets show that the proposed method consistently outperforms existing state-of-the-art methods.
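
The core computation can be illustrated with a minimal sketch. The Python code below is a simplified illustration, not the paper's exact formulation: it measures a nearest-point distance between two image sets by restricting each nearest point to a nonnegative combination of the set's own samples (here additionally normalized to sum to one so the problem stays well posed) and solves the resulting convex problem with an accelerated proximal gradient (FISTA-style) loop and a Euclidean projection. The self-regularization term, the orthogonal basis vectors, the linear-time projection variant, and the subsequent Mahalanobis metric learning are omitted, and all names are illustrative.

```python
import numpy as np


def project_simplex(v):
    # Euclidean projection of v onto {w : w >= 0, sum(w) = 1}.
    # Sorting variant, O(n log n); a linear-time projection would
    # replace the sort with a randomized pivot step.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)


def set_to_set_distance(X, Y, n_iter=200):
    # Nearest-point distance between image sets X (d x m) and Y (d x n):
    # minimize ||X a - Y b|| over nonnegative, sum-to-one weights a, b,
    # via an accelerated proximal gradient (FISTA-style) loop.
    m, n = X.shape[1], Y.shape[1]
    Z = np.hstack([X, -Y])                  # stacked dictionary [X, -Y]
    L = np.linalg.norm(Z, 2) ** 2           # Lipschitz constant of the gradient
    w = np.concatenate([np.full(m, 1.0 / m), np.full(n, 1.0 / n)])
    v, t = w.copy(), 1.0                    # momentum variables
    for _ in range(n_iter):
        grad = Z.T @ (Z @ v)                # gradient of 0.5 * ||Z w||^2
        u = v - grad / L                    # gradient step
        w_next = np.concatenate([project_simplex(u[:m]),
                                 project_simplex(u[m:])])
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        v = w_next + ((t - 1.0) / t_next) * (w_next - w)
        w, t = w_next, t_next
    a, b = w[:m], w[m:]
    return np.linalg.norm(X @ a - Y @ b), a, b


# Illustrative usage with random features (e.g., vectorized gray values).
rng = np.random.default_rng(0)
gallery = rng.standard_normal((256, 30))    # 30 gallery samples, 256-dim features
query = gallery[:, :5] + 0.1 * rng.standard_normal((256, 5))
dist, a, b = set_to_set_distance(gallery, query)
print(round(dist, 3))
```

In this sketch the nonzero entries of a and b play the role of the "active samples" mentioned above: they identify which set members actually support the nearest points and could therefore be fed into a subsequent metric-learning stage.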