We present a novel methodology for quantitative analysis of changes in facial display as the intensity of an emotion evolves from neutral to peak expression. The face is modeled as a combination of regions and their boundaries. An expression change in a face is characterized and quantified through a combination of non-rigid (elastic) deformations, i.e., expansions and contractions of these facial regions. After elastic interpolation, this yields a geometry-based, high-dimensional 2D shape transformation, which is used to register regions defined on subject faces (i.e., faces with expression) to those defined on the reference template face (a neutral face). This shape transformation produces a vector-valued deformation field and is used to define a scalar-valued regional volumetric difference (RVD) function, which characterizes and quantifies the facial expression. The approach is applied to a standardized database consisting of single images of professional actors expressing emotions at predefined intensities. We perform a detailed analysis of the deformations generated and of the regional volumetric differences computed for each expression. We are able to quantify subtle changes in expression that distinguish the intended emotions. A model of the average expression for specific emotions is also constructed from the RVD maps. This method can be applied in basic and clinical investigations of facial affect and its neural substrates.
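To make the relationship between the deformation field and the RVD concrete, the sketch below illustrates one plausible way such a regional measure could be computed; it is not the paper's exact formulation. It assumes a dense 2D displacement field from an elastic registration of an expressive face to the neutral template, computes a local area-change map via the Jacobian determinant of the mapping, and averages that map over a labeled facial region. The displacement field, the region labels, and the helper functions `jacobian_determinant` and `regional_volumetric_difference` are hypothetical stand-ins introduced only for illustration.

```python
import numpy as np

def jacobian_determinant(disp_y, disp_x):
    """Local area change (Jacobian determinant) of the mapping
    phi(x) = x + u(x) for a dense 2D displacement field u = (disp_y, disp_x)."""
    # Partial derivatives of each displacement component (axis 0 = y, axis 1 = x).
    duy_dy, duy_dx = np.gradient(disp_y)
    dux_dy, dux_dx = np.gradient(disp_x)
    # det(I + du/dx): >1 means local expansion, <1 means local contraction.
    return (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx

def regional_volumetric_difference(jac, region_labels, region_id):
    """Mean area change inside one labeled facial region, expressed as a signed
    difference from 1 (0 = no change, >0 = expansion, <0 = contraction)."""
    mask = region_labels == region_id
    return float(jac[mask].mean() - 1.0)

if __name__ == "__main__":
    # Synthetic 128x128 displacement field standing in for the output of an
    # elastic registration from an expressive face to the neutral template
    # (hypothetical data, not from the actor database described above).
    h, w = 128, 128
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cy, cx, sigma = 90.0, 64.0, 12.0          # a gentle expansion around the mouth area
    r2 = (yy - cy) ** 2 + (xx - cx) ** 2
    bump = np.exp(-r2 / (2.0 * sigma ** 2))
    disp_y = 2.0 * bump * (yy - cy) / sigma
    disp_x = 2.0 * bump * (xx - cx) / sigma

    jac = jacobian_determinant(disp_y, disp_x)

    # Hypothetical region labels (0 = background, 1 = mouth region).
    labels = np.zeros((h, w), dtype=int)
    labels[r2 < 20 ** 2] = 1

    print("RVD (mouth region):", regional_volumetric_difference(jac, labels, 1))
```

In this toy setup a positive value indicates that the region expands relative to the neutral template and a negative value indicates contraction, which mirrors how a region-wise scalar summary of an elastic deformation can capture the expansions and contractions described above.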