Accurate and reliable segmentation of the prostate gland in magnetic resonance (MR) imaging is critically important for the diagnosis and treatment of prostate diseases, especially prostate cancer. Although many automated segmentation approaches, including those based on deep learning, have been proposed, segmentation performance still has room for improvement due to the large variability in image appearance, imaging interference, and anisotropic spatial resolution. In this paper, we propose the 3D adversarial pyramid anisotropic convolutional deep neural network (3D APA-Net) for prostate segmentation in MR images. The model is composed of a generator (i.e., 3D PA-Net), which performs image segmentation, and a discriminator (i.e., a six-layer convolutional neural network), which differentiates between a segmentation result and its corresponding ground truth. The 3D PA-Net has an encoder-decoder architecture consisting of a 3D ResNet encoder, an anisotropic convolutional decoder, and multi-level pyramid convolutional skip connections. The anisotropic convolutional blocks exploit the 3D context of MR images with anisotropic resolution, the pyramid convolutional blocks address both voxel classification and gland localization, and the adversarial training regularizes 3D PA-Net so that it generates spatially consistent and continuous segmentation results. We evaluated the proposed 3D APA-Net against several state-of-the-art deep learning-based segmentation approaches on two public databases and on their combination. Our results suggest that the proposed model outperforms the compared approaches on all three databases and could be used in a routine clinical workflow.
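
To make the two central ideas concrete, the sketch below illustrates (in PyTorch, which is an assumption; the paper does not state the framework) how an anisotropic convolutional block might split a 3D convolution into an in-plane kernel and a through-plane kernel for thick-slice MR volumes, and how a small 3D discriminator could score image–mask pairs for adversarial training. All module names, channel widths, layer counts, and kernel sizes are illustrative choices, not the authors' implementation (in particular, the discriminator here is not the six-layer network described in the paper).

```python
# Illustrative sketch only; hyper-parameters and module names are assumptions.
import torch
import torch.nn as nn


class AnisotropicConvBlock(nn.Module):
    """Decompose a 3D convolution into an in-plane (1x3x3) convolution followed
    by a through-plane (3x1x1) convolution, so anisotropic (thick-slice) MR
    volumes are filtered with kernels matched to their resolution."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):  # x: (N, C, D, H, W)
        return self.block(x)


class PatchDiscriminator(nn.Module):
    """A small 3D CNN that scores (image, mask) pairs; in adversarial training
    its output pushes the segmenter towards spatially consistent predictions."""

    def __init__(self, in_ch=2, width=16):
        super().__init__()
        layers, ch = [], in_ch
        for i in range(4):
            layers += [
                nn.Conv3d(ch, width * 2 ** i, kernel_size=3, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ]
            ch = width * 2 ** i
        layers += [nn.Conv3d(ch, 1, kernel_size=1)]  # per-patch real/fake score map
        self.net = nn.Sequential(*layers)

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 1, 16, 64, 64)  # one MR sub-volume: 16 thick slices of 64x64
    seg = torch.sigmoid(AnisotropicConvBlock(1, 1)(x))  # stand-in for the segmenter output
    score = PatchDiscriminator()(x, seg)
    print(seg.shape, score.shape)
```

In this sketch, the segmentation network would be trained with a voxel-wise loss plus an adversarial term from the discriminator's scores, while the discriminator is trained to separate predicted masks from ground-truth masks; the exact losses and architecture details follow the paper, not this example.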