Methods of computational anatomy are typically based on a spatial transformation that maps a template to an individual anatomy and vice versa. However, important morphological characteristics are frequently not captured by this transformation, leading to lossy representations. We extend this formulation by incorporating residual anatomical information, i.e., information that is not captured by the shape transformation but is necessary to fully and exactly reconstruct the anatomy under measurement, thereby arriving at a lossless morphological representation. By virtue of being lossless, this representation allows the same anatomy to be represented by an infinite number of [transformation, residual] pairs, since different residuals correspond to different transformations. We treat these pairs as members of an anatomical equivalence class (AEC), which we approximate using principal component analysis. We show that projection onto the subspace orthogonal to the AEC produces measurements that better detect morphological abnormalities, by eliminating irrelevant variation in the data that confounds subtle underlying morphological characteristics. Finally, we show that higher classification rates between a group of normal brains and a group of brains with localized atrophy are obtained when we use nonmetric distances between AECs instead of conventional Euclidean distances between individual morphological measurements. These results confirm that the proposed representation can improve upon conventional analysis, but they also highlight limitations of the current approach and point to directions for further development of this general morphological analysis framework.
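
As a rough illustration of the projection step described above, and not the authors' implementation, the following Python sketch approximates an AEC by PCA over several flattened [transformation, residual] encodings of the same anatomy and then removes the AEC component from a new measurement, leaving the part orthogonal to the equivalence class. The names `orthogonal_residual`, `aec_samples`, `measurement`, and `n_components` are hypothetical placeholders.

```python
# Hypothetical sketch: approximating an anatomical equivalence class (AEC)
# with PCA and projecting a measurement onto the subspace orthogonal to it.
# All names below are illustrative; they do not come from the paper.
import numpy as np

def orthogonal_residual(aec_samples: np.ndarray,
                        measurement: np.ndarray,
                        n_components: int) -> np.ndarray:
    """Project `measurement` onto the complement of the AEC subspace.

    aec_samples : (n_samples, n_features) matrix; each row is one flattened
                  [transformation, residual] encoding of the same anatomy.
    measurement : (n_features,) vector to be analysed.
    n_components: number of principal components used to approximate the AEC.
    """
    # Centre the AEC samples and estimate the subspace they span via SVD (PCA).
    mean = aec_samples.mean(axis=0)
    _, _, vt = np.linalg.svd(aec_samples - mean, full_matrices=False)
    basis = vt[:n_components]                    # (n_components, n_features)

    # Remove the component of the measurement lying inside the AEC subspace;
    # what remains is the variation not explained by the equivalence class.
    centred = measurement - mean
    return centred - basis.T @ (basis @ centred)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    aec = rng.normal(size=(20, 50))              # toy stand-in for AEC samples
    x = rng.normal(size=50)                      # toy stand-in for a measurement
    print(orthogonal_residual(aec, x, n_components=5).shape)
```

In practice the residual vectors produced this way could then be fed to a classifier or compared across subjects, which is the role the orthogonal projection plays in the analysis summarized above.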